Feb 27 16:06:42 crc systemd[1]: Starting Kubernetes Kubelet...
Feb 27 16:06:42 crc restorecon[4743]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 27 16:06:42 crc restorecon[4743]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 27 16:06:42 crc restorecon[4743]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:42 crc 
restorecon[4743]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 27 16:06:42 crc restorecon[4743]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 27 16:06:42 crc restorecon[4743]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 27 16:06:42 crc restorecon[4743]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 27 16:06:42 crc 
restorecon[4743]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 27 
16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 27 16:06:42 crc restorecon[4743]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c11 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 27 16:06:42 crc 
restorecon[4743]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 27 16:06:42 crc restorecon[4743]: 
/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 27 16:06:42 crc restorecon[4743]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 27 16:06:42 crc restorecon[4743]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c268,c620 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 27 16:06:42 crc 
restorecon[4743]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 27 16:06:42 crc restorecon[4743]: 
/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:42 crc restorecon[4743]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:42
crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:42 crc restorecon[4743]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:42 crc restorecon[4743]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:42 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 27 16:06:43 crc restorecon[4743]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 27 16:06:43 crc restorecon[4743]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 27 16:06:43 crc restorecon[4743]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 27 16:06:43 crc restorecon[4743]: 
/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 27 
16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 27 16:06:43 crc restorecon[4743]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 27 16:06:43 crc restorecon[4743]: 
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 27 16:06:43 crc 
restorecon[4743]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc 
restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc 
restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 27 16:06:43 crc restorecon[4743]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 27 16:06:43 crc restorecon[4743]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 27 16:06:43 crc restorecon[4743]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 27 16:06:43 crc restorecon[4743]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 27 16:06:43 crc restorecon[4743]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 27 16:06:43 crc restorecon[4743]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c17 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c23 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 27 16:06:43 crc restorecon[4743]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 
27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]:
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 
crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc 
restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc 
restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc 
restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 27 16:06:43 crc restorecon[4743]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 27 16:06:43 crc 
restorecon[4743]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 27 16:06:43 crc restorecon[4743]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 27 16:06:43 crc restorecon[4743]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 27 16:06:43 crc restorecon[4743]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Feb 27 16:06:44 crc kubenswrapper[4830]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 27 16:06:44 crc kubenswrapper[4830]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Feb 27 16:06:44 crc kubenswrapper[4830]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 27 16:06:44 crc kubenswrapper[4830]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 27 16:06:44 crc kubenswrapper[4830]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 27 16:06:44 crc kubenswrapper[4830]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.477076 4830 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.483170 4830 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.483195 4830 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.483204 4830 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.483211 4830 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.483217 4830 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.483222 4830 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.483229 4830 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.483237 4830 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.483243 4830 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.483248 4830 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.483253 4830 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.483259 4830 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.483263 4830 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.483268 4830 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.483290 4830 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.483296 4830 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.483301 4830 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.483306 4830 
feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.483322 4830 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.483327 4830 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.483332 4830 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.483337 4830 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.483342 4830 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.483348 4830 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.483353 4830 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.483358 4830 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.483363 4830 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.483368 4830 feature_gate.go:330] unrecognized feature gate: Example Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.483373 4830 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.483378 4830 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.483383 4830 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.483388 4830 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 27 16:06:44 crc 
kubenswrapper[4830]: W0227 16:06:44.483393 4830 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.483398 4830 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.483404 4830 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.483409 4830 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.483417 4830 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.483423 4830 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.483430 4830 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.483438 4830 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.483443 4830 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.483448 4830 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.483453 4830 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.483459 4830 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.483464 4830 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.483469 4830 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.483473 4830 
feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.483479 4830 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.483484 4830 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.483489 4830 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.483496 4830 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.483502 4830 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.483509 4830 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.483514 4830 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.483520 4830 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.483525 4830 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.483530 4830 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.483535 4830 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.483539 4830 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.483545 4830 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 
16:06:44.483550 4830 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.483555 4830 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.483560 4830 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.483565 4830 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.483570 4830 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.483575 4830 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.483582 4830 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.483589 4830 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.483595 4830 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.483604 4830 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.483610 4830 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.483706 4830 flags.go:64] FLAG: --address="0.0.0.0" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.483721 4830 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.483735 4830 flags.go:64] FLAG: --anonymous-auth="true" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.483757 4830 flags.go:64] FLAG: --application-metrics-count-limit="100" Feb 27 16:06:44 crc 
kubenswrapper[4830]: I0227 16:06:44.483768 4830 flags.go:64] FLAG: --authentication-token-webhook="false" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.483775 4830 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.483784 4830 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.483792 4830 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.483798 4830 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.483804 4830 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.483811 4830 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.483817 4830 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.483823 4830 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.483829 4830 flags.go:64] FLAG: --cgroup-root="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.483835 4830 flags.go:64] FLAG: --cgroups-per-qos="true" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.483842 4830 flags.go:64] FLAG: --client-ca-file="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.483848 4830 flags.go:64] FLAG: --cloud-config="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.483854 4830 flags.go:64] FLAG: --cloud-provider="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.483859 4830 flags.go:64] FLAG: --cluster-dns="[]" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.483869 4830 flags.go:64] FLAG: --cluster-domain="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.483875 4830 flags.go:64] FLAG: 
--config="/etc/kubernetes/kubelet.conf" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.483881 4830 flags.go:64] FLAG: --config-dir="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.483887 4830 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.483893 4830 flags.go:64] FLAG: --container-log-max-files="5" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.483901 4830 flags.go:64] FLAG: --container-log-max-size="10Mi" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.483907 4830 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.483914 4830 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.483920 4830 flags.go:64] FLAG: --containerd-namespace="k8s.io" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.483926 4830 flags.go:64] FLAG: --contention-profiling="false" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.483932 4830 flags.go:64] FLAG: --cpu-cfs-quota="true" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.483939 4830 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.483966 4830 flags.go:64] FLAG: --cpu-manager-policy="none" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.483972 4830 flags.go:64] FLAG: --cpu-manager-policy-options="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.483981 4830 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.483987 4830 flags.go:64] FLAG: --enable-controller-attach-detach="true" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.483993 4830 flags.go:64] FLAG: --enable-debugging-handlers="true" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.483999 4830 flags.go:64] FLAG: --enable-load-reader="false" Feb 27 
16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.484005 4830 flags.go:64] FLAG: --enable-server="true" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.484011 4830 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.484019 4830 flags.go:64] FLAG: --event-burst="100" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.484032 4830 flags.go:64] FLAG: --event-qps="50" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.484038 4830 flags.go:64] FLAG: --event-storage-age-limit="default=0" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.484044 4830 flags.go:64] FLAG: --event-storage-event-limit="default=0" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.484050 4830 flags.go:64] FLAG: --eviction-hard="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.484057 4830 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.484063 4830 flags.go:64] FLAG: --eviction-minimum-reclaim="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.484069 4830 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.484076 4830 flags.go:64] FLAG: --eviction-soft="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.484082 4830 flags.go:64] FLAG: --eviction-soft-grace-period="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.484087 4830 flags.go:64] FLAG: --exit-on-lock-contention="false" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.484093 4830 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.484098 4830 flags.go:64] FLAG: --experimental-mounter-path="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.484104 4830 flags.go:64] FLAG: --fail-cgroupv1="false" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.484110 4830 flags.go:64] FLAG: --fail-swap-on="true" Feb 27 
16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.484116 4830 flags.go:64] FLAG: --feature-gates="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.484124 4830 flags.go:64] FLAG: --file-check-frequency="20s" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.484130 4830 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.484136 4830 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.484142 4830 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.484148 4830 flags.go:64] FLAG: --healthz-port="10248" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.484154 4830 flags.go:64] FLAG: --help="false" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.484160 4830 flags.go:64] FLAG: --hostname-override="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.484168 4830 flags.go:64] FLAG: --housekeeping-interval="10s" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.484174 4830 flags.go:64] FLAG: --http-check-frequency="20s" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.484182 4830 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.484188 4830 flags.go:64] FLAG: --image-credential-provider-config="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.484193 4830 flags.go:64] FLAG: --image-gc-high-threshold="85" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.484199 4830 flags.go:64] FLAG: --image-gc-low-threshold="80" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.484206 4830 flags.go:64] FLAG: --image-service-endpoint="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.484211 4830 flags.go:64] FLAG: --kernel-memcg-notification="false" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.484217 4830 flags.go:64] FLAG: --kube-api-burst="100" Feb 27 16:06:44 crc 
kubenswrapper[4830]: I0227 16:06:44.484223 4830 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.484229 4830 flags.go:64] FLAG: --kube-api-qps="50" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.484234 4830 flags.go:64] FLAG: --kube-reserved="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.484240 4830 flags.go:64] FLAG: --kube-reserved-cgroup="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.484246 4830 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.484254 4830 flags.go:64] FLAG: --kubelet-cgroups="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.484259 4830 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.484266 4830 flags.go:64] FLAG: --lock-file="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.484271 4830 flags.go:64] FLAG: --log-cadvisor-usage="false" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.484278 4830 flags.go:64] FLAG: --log-flush-frequency="5s" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.484284 4830 flags.go:64] FLAG: --log-json-info-buffer-size="0" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.484294 4830 flags.go:64] FLAG: --log-json-split-stream="false" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.484300 4830 flags.go:64] FLAG: --log-text-info-buffer-size="0" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.484306 4830 flags.go:64] FLAG: --log-text-split-stream="false" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.484312 4830 flags.go:64] FLAG: --logging-format="text" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.484318 4830 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.484325 4830 flags.go:64] FLAG: 
--make-iptables-util-chains="true" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.484331 4830 flags.go:64] FLAG: --manifest-url="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.484337 4830 flags.go:64] FLAG: --manifest-url-header="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.484345 4830 flags.go:64] FLAG: --max-housekeeping-interval="15s" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.484351 4830 flags.go:64] FLAG: --max-open-files="1000000" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.484359 4830 flags.go:64] FLAG: --max-pods="110" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.484365 4830 flags.go:64] FLAG: --maximum-dead-containers="-1" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.484371 4830 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.484377 4830 flags.go:64] FLAG: --memory-manager-policy="None" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.484384 4830 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.484390 4830 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.484396 4830 flags.go:64] FLAG: --node-ip="192.168.126.11" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.484402 4830 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.484416 4830 flags.go:64] FLAG: --node-status-max-images="50" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.484423 4830 flags.go:64] FLAG: --node-status-update-frequency="10s" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.484429 4830 flags.go:64] FLAG: --oom-score-adj="-999" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.484441 4830 flags.go:64] FLAG: --pod-cidr="" Feb 27 16:06:44 crc 
kubenswrapper[4830]: I0227 16:06:44.484447 4830 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.484457 4830 flags.go:64] FLAG: --pod-manifest-path="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.484463 4830 flags.go:64] FLAG: --pod-max-pids="-1" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.484469 4830 flags.go:64] FLAG: --pods-per-core="0" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.484475 4830 flags.go:64] FLAG: --port="10250" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.484481 4830 flags.go:64] FLAG: --protect-kernel-defaults="false" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.484487 4830 flags.go:64] FLAG: --provider-id="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.484493 4830 flags.go:64] FLAG: --qos-reserved="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.484501 4830 flags.go:64] FLAG: --read-only-port="10255" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.484507 4830 flags.go:64] FLAG: --register-node="true" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.484513 4830 flags.go:64] FLAG: --register-schedulable="true" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.484519 4830 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.484529 4830 flags.go:64] FLAG: --registry-burst="10" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.484535 4830 flags.go:64] FLAG: --registry-qps="5" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.484540 4830 flags.go:64] FLAG: --reserved-cpus="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.484546 4830 flags.go:64] FLAG: --reserved-memory="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.484552 4830 flags.go:64] FLAG: 
--resolv-conf="/etc/resolv.conf" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.484557 4830 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.484563 4830 flags.go:64] FLAG: --rotate-certificates="false" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.484568 4830 flags.go:64] FLAG: --rotate-server-certificates="false" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.484574 4830 flags.go:64] FLAG: --runonce="false" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.484579 4830 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.484586 4830 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.484592 4830 flags.go:64] FLAG: --seccomp-default="false" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.484597 4830 flags.go:64] FLAG: --serialize-image-pulls="true" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.484604 4830 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.484610 4830 flags.go:64] FLAG: --storage-driver-db="cadvisor" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.484616 4830 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.484622 4830 flags.go:64] FLAG: --storage-driver-password="root" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.484628 4830 flags.go:64] FLAG: --storage-driver-secure="false" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.484634 4830 flags.go:64] FLAG: --storage-driver-table="stats" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.484641 4830 flags.go:64] FLAG: --storage-driver-user="root" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.484647 4830 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Feb 27 16:06:44 crc kubenswrapper[4830]: 
I0227 16:06:44.484653 4830 flags.go:64] FLAG: --sync-frequency="1m0s" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.484659 4830 flags.go:64] FLAG: --system-cgroups="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.484664 4830 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.484675 4830 flags.go:64] FLAG: --system-reserved-cgroup="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.484681 4830 flags.go:64] FLAG: --tls-cert-file="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.484687 4830 flags.go:64] FLAG: --tls-cipher-suites="[]" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.484696 4830 flags.go:64] FLAG: --tls-min-version="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.484702 4830 flags.go:64] FLAG: --tls-private-key-file="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.484708 4830 flags.go:64] FLAG: --topology-manager-policy="none" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.484714 4830 flags.go:64] FLAG: --topology-manager-policy-options="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.484719 4830 flags.go:64] FLAG: --topology-manager-scope="container" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.484750 4830 flags.go:64] FLAG: --v="2" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.484760 4830 flags.go:64] FLAG: --version="false" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.484768 4830 flags.go:64] FLAG: --vmodule="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.484775 4830 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.484781 4830 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.484961 4830 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 27 16:06:44 crc 
kubenswrapper[4830]: W0227 16:06:44.484971 4830 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.484979 4830 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.484986 4830 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.484993 4830 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.485000 4830 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.485006 4830 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.485012 4830 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.485018 4830 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.485023 4830 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.485038 4830 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.485045 4830 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.485051 4830 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.485057 4830 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.485062 4830 feature_gate.go:330] unrecognized feature gate: 
SignatureStores Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.485067 4830 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.485073 4830 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.485078 4830 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.485083 4830 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.485087 4830 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.485093 4830 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.485097 4830 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.485102 4830 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.485107 4830 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.485112 4830 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.485117 4830 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.485123 4830 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.485128 4830 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.485133 4830 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 
27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.485139 4830 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.485144 4830 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.485155 4830 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.485161 4830 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.485165 4830 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.485171 4830 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.485177 4830 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.485182 4830 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.485188 4830 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.485193 4830 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.485199 4830 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.485205 4830 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.485212 4830 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.485219 4830 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.485224 4830 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.485230 4830 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.485235 4830 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.485239 4830 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.485244 4830 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.485249 4830 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.485254 4830 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.485260 4830 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.485265 4830 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.485270 4830 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.485275 4830 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.485280 4830 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.485286 4830 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.485291 4830 feature_gate.go:330] 
unrecognized feature gate: Example Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.485296 4830 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.485301 4830 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.485306 4830 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.485311 4830 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.485316 4830 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.485321 4830 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.485326 4830 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.485330 4830 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.485336 4830 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.485340 4830 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.485347 4830 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.485352 4830 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.485359 4830 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.485365 4830 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.486368 4830 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.498419 4830 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.498468 4830 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.498590 4830 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.498605 4830 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.498613 4830 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.498622 4830 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.498631 4830 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.498640 4830 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.498648 4830 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.498656 4830 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.498664 4830 feature_gate.go:330] unrecognized feature gate: Example
Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.498672 4830 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.498680 4830 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.498687 4830 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.498695 4830 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.498704 4830 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.498712 4830 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.498721 4830 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.498729 4830 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.498737 4830 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.498745 4830 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.498753 4830 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.498761 4830 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.498796 4830 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.498805 4830 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.498812 4830 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.498820 4830 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.498828 4830 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.498839 4830 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.498846 4830 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.498857 4830 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.498870 4830 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.498879 4830 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.498889 4830 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.498899 4830 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.498910 4830 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.498918 4830 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.498927 4830 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.498936 4830 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.498968 4830 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.498977 4830 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.498985 4830 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.498993 4830 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.499001 4830 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.499008 4830 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.499016 4830 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.499024 4830 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.499032 4830 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.499040 4830 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.499048 4830 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.499057 4830 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.499064 4830 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.499074 4830 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.499083 4830 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.499093 4830 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.499103 4830 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.499113 4830 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.499122 4830 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.499130 4830 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.499138 4830 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.499146 4830 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.499154 4830 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.499162 4830 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.499170 4830 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.499178 4830 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.499187 4830 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.499195 4830 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.499203 4830 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.499211 4830 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.499218 4830 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.499226 4830 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.499234 4830 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.499242 4830 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.499254 4830 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.499513 4830 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.499529 4830 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.499539 4830 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.499548 4830 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.499556 4830 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.499564 4830 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.499574 4830 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.499582 4830 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.499591 4830 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.499599 4830 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.499608 4830 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.499616 4830 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.499627 4830 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.499636 4830 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.499645 4830 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.499653 4830 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.499661 4830 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.499669 4830 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.499677 4830 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.499685 4830 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.499692 4830 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.499700 4830 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.499707 4830 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.499716 4830 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.499723 4830 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.499731 4830 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.499739 4830 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.499747 4830 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.499755 4830 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.499762 4830 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.499770 4830 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.499778 4830 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.499785 4830 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.499793 4830 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.499801 4830 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.499809 4830 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.499816 4830 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.499823 4830 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.499831 4830 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.499839 4830 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.499847 4830 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.499855 4830 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.499862 4830 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.499870 4830 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.499878 4830 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.499888 4830 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.499897 4830 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.499905 4830 feature_gate.go:330] unrecognized feature gate: Example
Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.499912 4830 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.499920 4830 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.499928 4830 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.499936 4830 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.499967 4830 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.499975 4830 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.499983 4830 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.499991 4830 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.499999 4830 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.500006 4830 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.500014 4830 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.500022 4830 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.500032 4830 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.500042 4830 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.500052 4830 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.500060 4830 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.500068 4830 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.500078 4830 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.500087 4830 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.500096 4830 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.500103 4830 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.500112 4830 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.500120 4830 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.500132 4830 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.501205 4830 server.go:940] "Client rotation is on, will bootstrap in background"
Feb 27 16:06:44 crc kubenswrapper[4830]: E0227 16:06:44.505705 4830 bootstrap.go:266] "Unhandled Error" err="part of the existing bootstrap client certificate in /var/lib/kubelet/kubeconfig is expired: 2026-02-24 05:52:08 +0000 UTC" logger="UnhandledError"
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.512326 4830 bootstrap.go:101] "Use the bootstrap credentials to request a cert, and set kubeconfig to point to the certificate dir"
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.512478 4830 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.514449 4830 server.go:997] "Starting client certificate rotation"
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.514493 4830 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.533201 4830 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.550538 4830 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.558541 4830 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 27 16:06:44 crc kubenswrapper[4830]: E0227 16:06:44.559408 4830 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.129.56.36:6443: connect: connection refused" logger="UnhandledError"
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.586806 4830 log.go:25] "Validated CRI v1 runtime API"
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.629272 4830 log.go:25] "Validated CRI v1 image API"
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.631730 4830 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.636832 4830 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-02-27-15-58-50-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.636880 4830 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}]
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.670158 4830 manager.go:217] Machine: {Timestamp:2026-02-27 16:06:44.666386364 +0000 UTC m=+0.755658907 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:1d4a94de-760c-40e1-8054-66d250f336ee BootID:058e4d33-3c10-460a-8f66-1f2272cb9956 Filesystems:[{Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:72:85:16 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:72:85:16 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:4e:9b:bc Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:b3:ce:8e Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:47:a3:55 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:79:f4:9b Speed:-1 Mtu:1496} {Name:ens7.23 MacAddress:52:54:00:b8:60:a4 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:e2:a6:e9:59:b6:5d Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:a2:ce:42:fe:ca:f4 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.670557 4830 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.670805 4830 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.671275 4830 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.671631 4830 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.671701 4830 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.672070 4830 topology_manager.go:138] "Creating topology manager with none policy"
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.672091 4830 container_manager_linux.go:303] "Creating device plugin manager"
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.672810 4830 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.672867 4830 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.673690 4830 state_mem.go:36] "Initialized new in-memory state store"
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.674233 4830 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.678401 4830 kubelet.go:418] "Attempting to sync node with API server"
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.678439 4830 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.678463 4830 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.678484 4830 kubelet.go:324] "Adding apiserver pod source"
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.678501 4830 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.682904 4830 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.687729 4830 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.129.56.36:6443: connect: connection refused
Feb 27 16:06:44 crc kubenswrapper[4830]: E0227 16:06:44.687941 4830 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.36:6443: connect: connection refused" logger="UnhandledError"
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.688201 4830 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.687885 4830 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.129.56.36:6443: connect: connection refused Feb 27 16:06:44 crc kubenswrapper[4830]: E0227 16:06:44.688485 4830 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.36:6443: connect: connection refused" logger="UnhandledError" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.692072 4830 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.694022 4830 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.694097 4830 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.694129 4830 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.694159 4830 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.694190 4830 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.694208 4830 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.694225 4830 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.694255 4830 plugins.go:603] 
"Loaded volume plugin" pluginName="kubernetes.io/downward-api" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.694278 4830 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.694298 4830 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.694323 4830 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.694343 4830 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.695533 4830 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.696343 4830 server.go:1280] "Started kubelet" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.696567 4830 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.696809 4830 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.697934 4830 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.698213 4830 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.36:6443: connect: connection refused Feb 27 16:06:44 crc systemd[1]: Started Kubernetes Kubelet. 
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.699794 4830 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.699837 4830 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 27 16:06:44 crc kubenswrapper[4830]: E0227 16:06:44.700392 4830 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.700487 4830 volume_manager.go:287] "The desired_state_of_world populator starts" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.700521 4830 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.700697 4830 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.701483 4830 factory.go:55] Registering systemd factory Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.701515 4830 factory.go:221] Registration of the systemd container factory successfully Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.701473 4830 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.129.56.36:6443: connect: connection refused Feb 27 16:06:44 crc kubenswrapper[4830]: E0227 16:06:44.701579 4830 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.36:6443: connect: connection refused" logger="UnhandledError" Feb 27 16:06:44 crc kubenswrapper[4830]: E0227 16:06:44.701798 4830 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.36:6443: connect: connection refused" interval="200ms" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.701976 4830 factory.go:153] Registering CRI-O factory Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.702009 4830 factory.go:221] Registration of the crio container factory successfully Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.702118 4830 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.702171 4830 factory.go:103] Registering Raw factory Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.702202 4830 manager.go:1196] Started watching for new ooms in manager Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.702211 4830 server.go:460] "Adding debug handlers to kubelet server" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.703400 4830 manager.go:319] Starting recovery of all containers Feb 27 16:06:44 crc kubenswrapper[4830]: E0227 16:06:44.702407 4830 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.129.56.36:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.189826278de25a96 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:06:44.69629199 +0000 UTC m=+0.785564483,LastTimestamp:2026-02-27 16:06:44.69629199 +0000 UTC m=+0.785564483,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 
+0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.708568 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.708619 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.708638 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.708653 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.708668 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.708681 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" 
volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.708695 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.708710 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.708728 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.708742 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.708756 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.708769 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" 
volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.708782 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.708798 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.708839 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.708853 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.708868 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.708908 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" 
seLinuxMountContext="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.708923 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.708936 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.708970 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.708985 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.709000 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.709013 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 
16:06:44.709026 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.709040 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.709058 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.709075 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.709089 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.709103 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.709142 4830 reconstruct.go:130] "Volume is 
marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.709157 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.709171 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.709185 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.709198 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.709212 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.709227 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.709241 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.709262 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.709276 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.709289 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.709304 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.709323 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" 
volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.709337 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.709353 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.709915 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.710035 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.710088 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.710155 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Feb 
27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.710230 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.710260 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.710305 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.710359 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.710392 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.710428 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.710466 4830 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.710490 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.710524 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.710549 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.710570 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.710600 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.710622 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.710653 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.710674 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.710694 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.710727 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.710748 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.710768 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.710798 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.710821 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.710850 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.710871 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.710893 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.710925 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" 
seLinuxMountContext="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.710974 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.711006 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.711028 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.711050 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.711077 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.711097 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.711125 4830 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.711172 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.711194 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.711224 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.711246 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.711277 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.711299 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.711323 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.711351 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.711372 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.711402 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.711426 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.711446 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" 
volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.711472 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.711492 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.711521 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.711541 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.711562 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.711589 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" 
volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.711609 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.711636 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.711657 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.711677 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.711705 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.711735 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" 
volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.711768 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.711801 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.711837 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.711861 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.711894 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.711924 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" 
volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.711972 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.712003 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.712032 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.712064 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.712085 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.712112 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" 
seLinuxMountContext="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.712134 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.712154 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.712181 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.712201 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.712228 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.712248 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.712267 4830 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.712292 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.712311 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.712337 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.712357 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.712378 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.712408 4830 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.712430 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.716362 4830 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.716405 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.716428 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.716467 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 
16:06:44.716484 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.717215 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.717395 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.717481 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.717598 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.717694 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.717858 4830 reconstruct.go:130] "Volume is marked as uncertain and added 
into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.717891 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.718090 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.718206 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.718325 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.718372 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.718403 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.719506 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.721031 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.721212 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.721275 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.721298 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.721318 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" 
volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.721370 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.721391 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.721497 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.721520 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.721571 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.721592 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" 
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.721613 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext=""
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.721664 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext=""
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.721686 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext=""
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.721870 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext=""
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.721890 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext=""
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.721936 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext=""
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.722057 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext=""
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.722076 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext=""
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.722129 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext=""
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.722149 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext=""
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.722167 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext=""
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.722188 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext=""
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.722237 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext=""
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.722257 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext=""
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.722275 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext=""
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.722327 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext=""
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.722348 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext=""
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.722400 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext=""
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.722455 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext=""
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.722506 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext=""
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.722527 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext=""
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.722546 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext=""
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.722596 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext=""
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.722619 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext=""
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.722637 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext=""
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.722721 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext=""
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.722772 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext=""
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.722792 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext=""
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.722810 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext=""
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.722859 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext=""
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.722880 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext=""
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.722898 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext=""
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.722959 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext=""
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.723003 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext=""
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.723054 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext=""
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.723073 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext=""
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.723137 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext=""
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.723177 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext=""
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.723228 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext=""
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.723248 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext=""
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.723269 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext=""
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.723344 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext=""
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.723366 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext=""
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.723386 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext=""
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.723448 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext=""
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.723467 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext=""
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.723524 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext=""
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.723542 4830 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext=""
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.723560 4830 reconstruct.go:97] "Volume reconstruction finished"
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.723575 4830 reconciler.go:26] "Reconciler: start to sync state"
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.740420 4830 manager.go:324] Recovery completed
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.757996 4830 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.759094 4830 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.760709 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.760752 4830 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.760764 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.760793 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.760795 4830 status_manager.go:217] "Starting to sync pod status with apiserver"
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.761063 4830 kubelet.go:2335] "Starting kubelet main sync loop"
Feb 27 16:06:44 crc kubenswrapper[4830]: E0227 16:06:44.761119 4830 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Feb 27 16:06:44 crc kubenswrapper[4830]: W0227 16:06:44.762233 4830 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.129.56.36:6443: connect: connection refused
Feb 27 16:06:44 crc kubenswrapper[4830]: E0227 16:06:44.762297 4830 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.36:6443: connect: connection refused" logger="UnhandledError"
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.763763 4830 cpu_manager.go:225] "Starting CPU manager" policy="none"
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.763786 4830 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.763806 4830 state_mem.go:36] "Initialized new in-memory state store"
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.777467 4830 policy_none.go:49] "None policy: Start"
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.778268 4830 memory_manager.go:170] "Starting memorymanager" policy="None"
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.778310 4830 state_mem.go:35] "Initializing new in-memory state store"
Feb 27 16:06:44 crc kubenswrapper[4830]: E0227 16:06:44.801310 4830 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.836203 4830 manager.go:334] "Starting Device Plugin manager"
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.836281 4830 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.836294 4830 server.go:79] "Starting device plugin registration server"
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.836802 4830 eviction_manager.go:189] "Eviction manager: starting control loop"
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.836875 4830 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.837116 4830 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry"
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.837246 4830 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts"
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.837257 4830 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 27 16:06:44 crc kubenswrapper[4830]: E0227 16:06:44.849884 4830 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.861231 4830 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc"]
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.861335 4830 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.864319 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.864402 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.864419 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.864985 4830 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.865371 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.865462 4830 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.867364 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.867425 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.867447 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.867697 4830 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.868392 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.868422 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.868464 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.869003 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.869053 4830 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.869770 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.871791 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.871850 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.872203 4830 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.870229 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.872302 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.872315 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.872588 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.872621 4830 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.873814 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.873867 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.873918 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.873882 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.873939 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.873981 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.874188 4830 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.874381 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.874455 4830 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.875220 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.875257 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.875270 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.875512 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.875556 4830 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.875709 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.875729 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.875738 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.877993 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.878023 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.878050 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 16:06:44 crc kubenswrapper[4830]: E0227 16:06:44.902433 4830 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.36:6443: connect: connection refused" interval="400ms"
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.925760 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.925812 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.925843 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.925867 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.925890 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.925910 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.925936 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.926019 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.926063 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.926121 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.926164 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.926195 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.926290 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.926347 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.926392 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.937783 4830 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.938693 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.938741 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.938756 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 16:06:44 crc kubenswrapper[4830]: I0227 16:06:44.938790 4830 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 27 16:06:44 crc kubenswrapper[4830]: E0227 16:06:44.939217 4830 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.36:6443: connect: connection refused" node="crc"
Feb 27 16:06:45 crc kubenswrapper[4830]: I0227 16:06:45.027547 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 27 16:06:45 crc kubenswrapper[4830]: I0227 16:06:45.027579 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 27 16:06:45 crc kubenswrapper[4830]: I0227 16:06:45.027596 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 27 16:06:45 crc kubenswrapper[4830]: I0227 16:06:45.027614 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 27 16:06:45 crc kubenswrapper[4830]: I0227 16:06:45.027632 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 27 16:06:45 crc kubenswrapper[4830]: I0227 16:06:45.027646 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 27 16:06:45 crc kubenswrapper[4830]: I0227 16:06:45.027661 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 27 16:06:45 crc kubenswrapper[4830]: I0227 16:06:45.027675 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 27 16:06:45 crc kubenswrapper[4830]: I0227 16:06:45.027692 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 27 16:06:45 crc kubenswrapper[4830]: I0227 16:06:45.027706 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 27 16:06:45 crc kubenswrapper[4830]: I0227 16:06:45.027722 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 27 16:06:45 crc kubenswrapper[4830]: I0227 16:06:45.027739 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 27 16:06:45 crc kubenswrapper[4830]: I0227 16:06:45.027754 4830
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 27 16:06:45 crc kubenswrapper[4830]: I0227 16:06:45.027769 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 27 16:06:45 crc kubenswrapper[4830]: I0227 16:06:45.027785 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 16:06:45 crc kubenswrapper[4830]: I0227 16:06:45.027860 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 27 16:06:45 crc kubenswrapper[4830]: I0227 16:06:45.027909 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 27 16:06:45 crc kubenswrapper[4830]: I0227 16:06:45.027921 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " 
pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 16:06:45 crc kubenswrapper[4830]: I0227 16:06:45.027992 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 27 16:06:45 crc kubenswrapper[4830]: I0227 16:06:45.027995 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 27 16:06:45 crc kubenswrapper[4830]: I0227 16:06:45.028024 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 27 16:06:45 crc kubenswrapper[4830]: I0227 16:06:45.028046 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 27 16:06:45 crc kubenswrapper[4830]: I0227 16:06:45.028061 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 27 16:06:45 crc kubenswrapper[4830]: I0227 16:06:45.028047 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 27 16:06:45 crc kubenswrapper[4830]: I0227 16:06:45.028051 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 27 16:06:45 crc kubenswrapper[4830]: I0227 16:06:45.028086 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 16:06:45 crc kubenswrapper[4830]: I0227 16:06:45.028099 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 27 16:06:45 crc kubenswrapper[4830]: I0227 16:06:45.028101 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 27 16:06:45 crc kubenswrapper[4830]: I0227 16:06:45.028150 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " 
pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 16:06:45 crc kubenswrapper[4830]: I0227 16:06:45.028156 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 27 16:06:45 crc kubenswrapper[4830]: I0227 16:06:45.139573 4830 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 16:06:45 crc kubenswrapper[4830]: I0227 16:06:45.140804 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:06:45 crc kubenswrapper[4830]: I0227 16:06:45.140834 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:06:45 crc kubenswrapper[4830]: I0227 16:06:45.140843 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:06:45 crc kubenswrapper[4830]: I0227 16:06:45.140866 4830 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 27 16:06:45 crc kubenswrapper[4830]: E0227 16:06:45.141217 4830 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.36:6443: connect: connection refused" node="crc" Feb 27 16:06:45 crc kubenswrapper[4830]: I0227 16:06:45.210204 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 27 16:06:45 crc kubenswrapper[4830]: I0227 16:06:45.236059 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 27 16:06:45 crc kubenswrapper[4830]: W0227 16:06:45.254624 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-08756273a34810531b49742b827bb3b3b2f4795e0c3b82e87f38d7bfae79d49d WatchSource:0}: Error finding container 08756273a34810531b49742b827bb3b3b2f4795e0c3b82e87f38d7bfae79d49d: Status 404 returned error can't find the container with id 08756273a34810531b49742b827bb3b3b2f4795e0c3b82e87f38d7bfae79d49d Feb 27 16:06:45 crc kubenswrapper[4830]: I0227 16:06:45.257079 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 27 16:06:45 crc kubenswrapper[4830]: I0227 16:06:45.266221 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 16:06:45 crc kubenswrapper[4830]: I0227 16:06:45.270379 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 27 16:06:45 crc kubenswrapper[4830]: W0227 16:06:45.270657 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-14f643abfaca70ba9479b632577a85caafd48efe6af0cadc0dc3cad3de5acea8 WatchSource:0}: Error finding container 14f643abfaca70ba9479b632577a85caafd48efe6af0cadc0dc3cad3de5acea8: Status 404 returned error can't find the container with id 14f643abfaca70ba9479b632577a85caafd48efe6af0cadc0dc3cad3de5acea8 Feb 27 16:06:45 crc kubenswrapper[4830]: W0227 16:06:45.277190 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-7ab2ba9b22ad24b0df040bdcae7c959e938c637af2d6744982a9e494bec84c1b WatchSource:0}: Error finding container 7ab2ba9b22ad24b0df040bdcae7c959e938c637af2d6744982a9e494bec84c1b: Status 404 returned error can't find the container with id 7ab2ba9b22ad24b0df040bdcae7c959e938c637af2d6744982a9e494bec84c1b Feb 27 16:06:45 crc kubenswrapper[4830]: W0227 16:06:45.283824 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-9101b60758c8210719a0b602c81e27e6a5a47a54ffe8437ee7b7f82a938d45da WatchSource:0}: Error finding container 9101b60758c8210719a0b602c81e27e6a5a47a54ffe8437ee7b7f82a938d45da: Status 404 returned error can't find the container with id 9101b60758c8210719a0b602c81e27e6a5a47a54ffe8437ee7b7f82a938d45da Feb 27 16:06:45 crc kubenswrapper[4830]: W0227 16:06:45.292059 4830 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-19bc2c2bd153864c8a6456b772f4a0fa8c460d9fbae2db12ccfe1b13f7eda85f WatchSource:0}: Error finding container 19bc2c2bd153864c8a6456b772f4a0fa8c460d9fbae2db12ccfe1b13f7eda85f: Status 404 returned error can't find the container with id 19bc2c2bd153864c8a6456b772f4a0fa8c460d9fbae2db12ccfe1b13f7eda85f Feb 27 16:06:45 crc kubenswrapper[4830]: E0227 16:06:45.303429 4830 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.36:6443: connect: connection refused" interval="800ms" Feb 27 16:06:45 crc kubenswrapper[4830]: I0227 16:06:45.542281 4830 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 16:06:45 crc kubenswrapper[4830]: I0227 16:06:45.544703 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:06:45 crc kubenswrapper[4830]: I0227 16:06:45.544778 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:06:45 crc kubenswrapper[4830]: I0227 16:06:45.544797 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:06:45 crc kubenswrapper[4830]: I0227 16:06:45.544845 4830 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 27 16:06:45 crc kubenswrapper[4830]: E0227 16:06:45.545780 4830 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.36:6443: connect: connection refused" node="crc" Feb 27 16:06:45 crc kubenswrapper[4830]: W0227 16:06:45.585717 4830 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.129.56.36:6443: connect: connection refused Feb 27 16:06:45 crc kubenswrapper[4830]: E0227 16:06:45.585828 4830 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.36:6443: connect: connection refused" logger="UnhandledError" Feb 27 16:06:45 crc kubenswrapper[4830]: W0227 16:06:45.604522 4830 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.129.56.36:6443: connect: connection refused Feb 27 16:06:45 crc kubenswrapper[4830]: E0227 16:06:45.604603 4830 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.36:6443: connect: connection refused" logger="UnhandledError" Feb 27 16:06:45 crc kubenswrapper[4830]: W0227 16:06:45.624437 4830 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.129.56.36:6443: connect: connection refused Feb 27 16:06:45 crc kubenswrapper[4830]: E0227 16:06:45.624522 4830 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.36:6443: connect: connection refused" 
logger="UnhandledError" Feb 27 16:06:45 crc kubenswrapper[4830]: I0227 16:06:45.700124 4830 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.36:6443: connect: connection refused Feb 27 16:06:45 crc kubenswrapper[4830]: I0227 16:06:45.767931 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"19bc2c2bd153864c8a6456b772f4a0fa8c460d9fbae2db12ccfe1b13f7eda85f"} Feb 27 16:06:45 crc kubenswrapper[4830]: I0227 16:06:45.769160 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"9101b60758c8210719a0b602c81e27e6a5a47a54ffe8437ee7b7f82a938d45da"} Feb 27 16:06:45 crc kubenswrapper[4830]: I0227 16:06:45.770830 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"7ab2ba9b22ad24b0df040bdcae7c959e938c637af2d6744982a9e494bec84c1b"} Feb 27 16:06:45 crc kubenswrapper[4830]: I0227 16:06:45.772199 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"14f643abfaca70ba9479b632577a85caafd48efe6af0cadc0dc3cad3de5acea8"} Feb 27 16:06:45 crc kubenswrapper[4830]: I0227 16:06:45.773186 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"08756273a34810531b49742b827bb3b3b2f4795e0c3b82e87f38d7bfae79d49d"} Feb 27 16:06:45 crc kubenswrapper[4830]: W0227 16:06:45.779220 4830 
reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.129.56.36:6443: connect: connection refused Feb 27 16:06:45 crc kubenswrapper[4830]: E0227 16:06:45.779323 4830 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.36:6443: connect: connection refused" logger="UnhandledError" Feb 27 16:06:46 crc kubenswrapper[4830]: E0227 16:06:46.104186 4830 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.36:6443: connect: connection refused" interval="1.6s" Feb 27 16:06:46 crc kubenswrapper[4830]: I0227 16:06:46.346407 4830 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 16:06:46 crc kubenswrapper[4830]: I0227 16:06:46.347824 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:06:46 crc kubenswrapper[4830]: I0227 16:06:46.347886 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:06:46 crc kubenswrapper[4830]: I0227 16:06:46.347905 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:06:46 crc kubenswrapper[4830]: I0227 16:06:46.347973 4830 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 27 16:06:46 crc kubenswrapper[4830]: E0227 16:06:46.348729 4830 kubelet_node_status.go:99] "Unable to register node with API server" err="Post 
\"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.36:6443: connect: connection refused" node="crc" Feb 27 16:06:46 crc kubenswrapper[4830]: I0227 16:06:46.643717 4830 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 27 16:06:46 crc kubenswrapper[4830]: E0227 16:06:46.645794 4830 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.129.56.36:6443: connect: connection refused" logger="UnhandledError" Feb 27 16:06:46 crc kubenswrapper[4830]: I0227 16:06:46.699414 4830 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.36:6443: connect: connection refused Feb 27 16:06:46 crc kubenswrapper[4830]: I0227 16:06:46.779441 4830 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="97f85a479701374168505eed2e5a64ea144d1c48dce2022d60cb6ebaebfe159f" exitCode=0 Feb 27 16:06:46 crc kubenswrapper[4830]: I0227 16:06:46.779556 4830 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 16:06:46 crc kubenswrapper[4830]: I0227 16:06:46.779556 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"97f85a479701374168505eed2e5a64ea144d1c48dce2022d60cb6ebaebfe159f"} Feb 27 16:06:46 crc kubenswrapper[4830]: I0227 16:06:46.780785 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:06:46 crc kubenswrapper[4830]: 
I0227 16:06:46.780824 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:06:46 crc kubenswrapper[4830]: I0227 16:06:46.780840 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:06:46 crc kubenswrapper[4830]: I0227 16:06:46.781688 4830 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="24c01ec20922a3d1028544b23795fc085535970d40bb2b7199d1f726be21f36d" exitCode=0 Feb 27 16:06:46 crc kubenswrapper[4830]: I0227 16:06:46.781778 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"24c01ec20922a3d1028544b23795fc085535970d40bb2b7199d1f726be21f36d"} Feb 27 16:06:46 crc kubenswrapper[4830]: I0227 16:06:46.781896 4830 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 16:06:46 crc kubenswrapper[4830]: I0227 16:06:46.783418 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:06:46 crc kubenswrapper[4830]: I0227 16:06:46.783554 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:06:46 crc kubenswrapper[4830]: I0227 16:06:46.783583 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:06:46 crc kubenswrapper[4830]: I0227 16:06:46.784582 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"c414ab235e0a7321e6455f54b680cefce4120d0114676e10e9f79bf275964052"} Feb 27 16:06:46 crc kubenswrapper[4830]: I0227 16:06:46.784642 4830 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"a576bccb73520a452eea30512283d7bca1141a05d0c697885d106fb132c3df7a"} Feb 27 16:06:46 crc kubenswrapper[4830]: I0227 16:06:46.788360 4830 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="e1fc7975111dde3841671f046742eea3d0eefcb040eee8aadb99a7ca8eba8bd2" exitCode=0 Feb 27 16:06:46 crc kubenswrapper[4830]: I0227 16:06:46.788469 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"e1fc7975111dde3841671f046742eea3d0eefcb040eee8aadb99a7ca8eba8bd2"} Feb 27 16:06:46 crc kubenswrapper[4830]: I0227 16:06:46.788531 4830 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 16:06:46 crc kubenswrapper[4830]: I0227 16:06:46.790158 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:06:46 crc kubenswrapper[4830]: I0227 16:06:46.790205 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:06:46 crc kubenswrapper[4830]: I0227 16:06:46.790221 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:06:46 crc kubenswrapper[4830]: I0227 16:06:46.791022 4830 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="aa08243b506aaad33dbf7184de0ee3b14f2dc444ddfd00ffef3d141ddea8a795" exitCode=0 Feb 27 16:06:46 crc kubenswrapper[4830]: I0227 16:06:46.791060 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"aa08243b506aaad33dbf7184de0ee3b14f2dc444ddfd00ffef3d141ddea8a795"} Feb 27 16:06:46 crc kubenswrapper[4830]: I0227 16:06:46.791255 4830 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 16:06:46 crc kubenswrapper[4830]: I0227 16:06:46.792548 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:06:46 crc kubenswrapper[4830]: I0227 16:06:46.792582 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:06:46 crc kubenswrapper[4830]: I0227 16:06:46.792598 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:06:46 crc kubenswrapper[4830]: I0227 16:06:46.793002 4830 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 16:06:46 crc kubenswrapper[4830]: I0227 16:06:46.794082 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:06:46 crc kubenswrapper[4830]: I0227 16:06:46.794124 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:06:46 crc kubenswrapper[4830]: I0227 16:06:46.794140 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:06:47 crc kubenswrapper[4830]: W0227 16:06:47.245596 4830 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.129.56.36:6443: connect: connection refused Feb 27 16:06:47 crc kubenswrapper[4830]: E0227 16:06:47.245684 4830 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch 
*v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.36:6443: connect: connection refused" logger="UnhandledError" Feb 27 16:06:47 crc kubenswrapper[4830]: W0227 16:06:47.268502 4830 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.129.56.36:6443: connect: connection refused Feb 27 16:06:47 crc kubenswrapper[4830]: E0227 16:06:47.268800 4830 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.36:6443: connect: connection refused" logger="UnhandledError" Feb 27 16:06:47 crc kubenswrapper[4830]: W0227 16:06:47.448840 4830 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.129.56.36:6443: connect: connection refused Feb 27 16:06:47 crc kubenswrapper[4830]: E0227 16:06:47.448998 4830 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.36:6443: connect: connection refused" logger="UnhandledError" Feb 27 16:06:47 crc kubenswrapper[4830]: I0227 16:06:47.699895 4830 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.36:6443: connect: connection 
refused Feb 27 16:06:47 crc kubenswrapper[4830]: E0227 16:06:47.705629 4830 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.36:6443: connect: connection refused" interval="3.2s" Feb 27 16:06:47 crc kubenswrapper[4830]: I0227 16:06:47.797560 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"2e96e9ad075a2fbed1c691bdd79b49300f3b485834a95834aabe1ca32f099fe1"} Feb 27 16:06:47 crc kubenswrapper[4830]: I0227 16:06:47.797607 4830 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 16:06:47 crc kubenswrapper[4830]: I0227 16:06:47.797641 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"a682f07e321a3dc0cbf11fc0b683893d4527f80d5b41ee627e645f3996cc3ae9"} Feb 27 16:06:47 crc kubenswrapper[4830]: I0227 16:06:47.798755 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:06:47 crc kubenswrapper[4830]: I0227 16:06:47.798788 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:06:47 crc kubenswrapper[4830]: I0227 16:06:47.798798 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:06:47 crc kubenswrapper[4830]: I0227 16:06:47.800901 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"ec37d31a82aedaacdf47fa540d508e6ca53626d082c45933c11a72f41a819a8a"} Feb 27 16:06:47 crc kubenswrapper[4830]: I0227 16:06:47.800940 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"b6cb8e7a94967cd24e883272ba2f923a3adae43cc6105ea7506c1fba144de241"} Feb 27 16:06:47 crc kubenswrapper[4830]: I0227 16:06:47.800980 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"1a3f71ef2ccab58baa94377c8f63ded83ca5f73a4d7a24d754719670685c55a5"} Feb 27 16:06:47 crc kubenswrapper[4830]: I0227 16:06:47.804450 4830 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="12436bc0b6c1591075f81684cc8726b8ade83e7d217f7c6e963ad1423db6b9ec" exitCode=0 Feb 27 16:06:47 crc kubenswrapper[4830]: I0227 16:06:47.804548 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"12436bc0b6c1591075f81684cc8726b8ade83e7d217f7c6e963ad1423db6b9ec"} Feb 27 16:06:47 crc kubenswrapper[4830]: I0227 16:06:47.804655 4830 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 16:06:47 crc kubenswrapper[4830]: I0227 16:06:47.805576 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:06:47 crc kubenswrapper[4830]: I0227 16:06:47.805628 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:06:47 crc kubenswrapper[4830]: I0227 16:06:47.805647 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Feb 27 16:06:47 crc kubenswrapper[4830]: I0227 16:06:47.806807 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"5046080bbc4855769fb50909ff06b8c2cfd2438a0b65749c89302a4e97342980"} Feb 27 16:06:47 crc kubenswrapper[4830]: I0227 16:06:47.806886 4830 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 16:06:47 crc kubenswrapper[4830]: I0227 16:06:47.808192 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:06:47 crc kubenswrapper[4830]: I0227 16:06:47.808230 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:06:47 crc kubenswrapper[4830]: I0227 16:06:47.808253 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:06:47 crc kubenswrapper[4830]: I0227 16:06:47.809935 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"ba7ee266c946dbec6c4506d41bded5a187162e3838fe2b96e7e0957087ee4c2e"} Feb 27 16:06:47 crc kubenswrapper[4830]: I0227 16:06:47.809995 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"f1ae97d933dc306c9d1ccee8c5c2d0e35a6a90ba747243526d096dd8fafa125a"} Feb 27 16:06:47 crc kubenswrapper[4830]: I0227 16:06:47.810006 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"bbdb2afa0c0d81de9fc59fba4383c882283506d2312d78a1ed7cd0288bf6e670"} Feb 27 16:06:47 crc kubenswrapper[4830]: I0227 16:06:47.810162 4830 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 16:06:47 crc kubenswrapper[4830]: I0227 16:06:47.811294 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:06:47 crc kubenswrapper[4830]: I0227 16:06:47.811342 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:06:47 crc kubenswrapper[4830]: I0227 16:06:47.811363 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:06:47 crc kubenswrapper[4830]: I0227 16:06:47.949237 4830 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 16:06:47 crc kubenswrapper[4830]: I0227 16:06:47.950907 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:06:47 crc kubenswrapper[4830]: I0227 16:06:47.951000 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:06:47 crc kubenswrapper[4830]: I0227 16:06:47.951022 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:06:47 crc kubenswrapper[4830]: I0227 16:06:47.951072 4830 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 27 16:06:47 crc kubenswrapper[4830]: E0227 16:06:47.951710 4830 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.36:6443: connect: connection refused" node="crc" Feb 27 16:06:48 crc kubenswrapper[4830]: W0227 16:06:48.283982 4830 reflector.go:561] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.129.56.36:6443: connect: connection refused Feb 27 16:06:48 crc kubenswrapper[4830]: E0227 16:06:48.284429 4830 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.36:6443: connect: connection refused" logger="UnhandledError" Feb 27 16:06:48 crc kubenswrapper[4830]: I0227 16:06:48.699155 4830 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.36:6443: connect: connection refused Feb 27 16:06:48 crc kubenswrapper[4830]: I0227 16:06:48.817544 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"fd2a80cee21c12fed9ed12847698d68e3a1314e3bc50d00a5600ec94eea618a8"} Feb 27 16:06:48 crc kubenswrapper[4830]: I0227 16:06:48.817603 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"97ffad51412be8330198e944d4f687380d5066b777b54e362e8f8b772b808c89"} Feb 27 16:06:48 crc kubenswrapper[4830]: I0227 16:06:48.817651 4830 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 16:06:48 crc kubenswrapper[4830]: I0227 16:06:48.819554 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:06:48 crc kubenswrapper[4830]: I0227 16:06:48.819617 4830 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:06:48 crc kubenswrapper[4830]: I0227 16:06:48.819635 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:06:48 crc kubenswrapper[4830]: I0227 16:06:48.821207 4830 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="f63879efa3aa43d9b25f67e5181594aa22d31e9ce7fd480a5e9a1acd89103538" exitCode=0 Feb 27 16:06:48 crc kubenswrapper[4830]: I0227 16:06:48.821340 4830 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 16:06:48 crc kubenswrapper[4830]: I0227 16:06:48.821382 4830 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 16:06:48 crc kubenswrapper[4830]: I0227 16:06:48.821414 4830 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 16:06:48 crc kubenswrapper[4830]: I0227 16:06:48.821371 4830 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 16:06:48 crc kubenswrapper[4830]: I0227 16:06:48.821630 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"f63879efa3aa43d9b25f67e5181594aa22d31e9ce7fd480a5e9a1acd89103538"} Feb 27 16:06:48 crc kubenswrapper[4830]: I0227 16:06:48.821700 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 27 16:06:48 crc kubenswrapper[4830]: I0227 16:06:48.825636 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:06:48 crc kubenswrapper[4830]: I0227 16:06:48.825728 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 27 16:06:48 crc kubenswrapper[4830]: I0227 16:06:48.825749 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:06:48 crc kubenswrapper[4830]: I0227 16:06:48.825756 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:06:48 crc kubenswrapper[4830]: I0227 16:06:48.825793 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:06:48 crc kubenswrapper[4830]: I0227 16:06:48.825810 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:06:48 crc kubenswrapper[4830]: I0227 16:06:48.825672 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:06:48 crc kubenswrapper[4830]: I0227 16:06:48.825898 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:06:48 crc kubenswrapper[4830]: I0227 16:06:48.825922 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:06:48 crc kubenswrapper[4830]: I0227 16:06:48.825636 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:06:48 crc kubenswrapper[4830]: I0227 16:06:48.826502 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:06:48 crc kubenswrapper[4830]: I0227 16:06:48.826634 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:06:48 crc kubenswrapper[4830]: I0227 16:06:48.874360 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 16:06:48 crc kubenswrapper[4830]: I0227 
16:06:48.874777 4830 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="Get \"https://192.168.126.11:6443/livez\": dial tcp 192.168.126.11:6443: connect: connection refused" start-of-body= Feb 27 16:06:48 crc kubenswrapper[4830]: I0227 16:06:48.875012 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/livez\": dial tcp 192.168.126.11:6443: connect: connection refused" Feb 27 16:06:49 crc kubenswrapper[4830]: I0227 16:06:49.116128 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 27 16:06:49 crc kubenswrapper[4830]: I0227 16:06:49.829643 4830 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 16:06:49 crc kubenswrapper[4830]: I0227 16:06:49.829672 4830 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 27 16:06:49 crc kubenswrapper[4830]: I0227 16:06:49.829720 4830 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 16:06:49 crc kubenswrapper[4830]: I0227 16:06:49.829808 4830 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 16:06:49 crc kubenswrapper[4830]: I0227 16:06:49.830729 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"8e44c5ee059ace66f0a159049433d1bf023f1a9024d7f6b8202424022b808889"} Feb 27 16:06:49 crc kubenswrapper[4830]: I0227 16:06:49.830772 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"5d696722e1b43f10155be828026a025360961994508157507a965f2fe04a0770"} Feb 27 16:06:49 crc kubenswrapper[4830]: I0227 16:06:49.830791 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"5c172ecdd34951e753faf3ec60d36500c2822650b74bb825ef9eeda6bb8d0356"} Feb 27 16:06:49 crc kubenswrapper[4830]: I0227 16:06:49.831317 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:06:49 crc kubenswrapper[4830]: I0227 16:06:49.831346 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:06:49 crc kubenswrapper[4830]: I0227 16:06:49.831362 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:06:49 crc kubenswrapper[4830]: I0227 16:06:49.831453 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:06:49 crc kubenswrapper[4830]: I0227 16:06:49.831492 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:06:49 crc kubenswrapper[4830]: I0227 16:06:49.831513 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:06:49 crc kubenswrapper[4830]: I0227 16:06:49.832079 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:06:49 crc kubenswrapper[4830]: I0227 16:06:49.832116 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:06:49 crc kubenswrapper[4830]: I0227 16:06:49.832137 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 
27 16:06:50 crc kubenswrapper[4830]: I0227 16:06:50.710536 4830 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 27 16:06:50 crc kubenswrapper[4830]: I0227 16:06:50.839321 4830 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 27 16:06:50 crc kubenswrapper[4830]: I0227 16:06:50.839332 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"a5de0725eb122d0444c4c7bfb3b03c479dfb681cac98e8a4d52ca0eaa3cdd3aa"} Feb 27 16:06:50 crc kubenswrapper[4830]: I0227 16:06:50.839382 4830 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 16:06:50 crc kubenswrapper[4830]: I0227 16:06:50.839395 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"d5382b6063de637c6b85d3a34c9fc6963e653f4bb9f30ca7af478a89814f23c7"} Feb 27 16:06:50 crc kubenswrapper[4830]: I0227 16:06:50.839398 4830 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 16:06:50 crc kubenswrapper[4830]: I0227 16:06:50.840990 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:06:50 crc kubenswrapper[4830]: I0227 16:06:50.841047 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:06:50 crc kubenswrapper[4830]: I0227 16:06:50.841044 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:06:50 crc kubenswrapper[4830]: I0227 16:06:50.841095 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:06:50 crc kubenswrapper[4830]: I0227 16:06:50.841111 4830 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:06:50 crc kubenswrapper[4830]: I0227 16:06:50.841066 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:06:50 crc kubenswrapper[4830]: I0227 16:06:50.869728 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 27 16:06:50 crc kubenswrapper[4830]: I0227 16:06:50.869872 4830 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 16:06:50 crc kubenswrapper[4830]: I0227 16:06:50.871119 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:06:50 crc kubenswrapper[4830]: I0227 16:06:50.871178 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:06:50 crc kubenswrapper[4830]: I0227 16:06:50.871195 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:06:50 crc kubenswrapper[4830]: I0227 16:06:50.880326 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 27 16:06:51 crc kubenswrapper[4830]: I0227 16:06:51.151916 4830 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 16:06:51 crc kubenswrapper[4830]: I0227 16:06:51.153638 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:06:51 crc kubenswrapper[4830]: I0227 16:06:51.153706 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:06:51 crc kubenswrapper[4830]: I0227 16:06:51.153731 4830 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:06:51 crc kubenswrapper[4830]: I0227 16:06:51.153778 4830 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 27 16:06:51 crc kubenswrapper[4830]: I0227 16:06:51.845470 4830 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 16:06:51 crc kubenswrapper[4830]: I0227 16:06:51.846040 4830 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 16:06:51 crc kubenswrapper[4830]: I0227 16:06:51.846045 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 27 16:06:51 crc kubenswrapper[4830]: I0227 16:06:51.847997 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:06:51 crc kubenswrapper[4830]: I0227 16:06:51.848050 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:06:51 crc kubenswrapper[4830]: I0227 16:06:51.848069 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:06:51 crc kubenswrapper[4830]: I0227 16:06:51.848130 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:06:51 crc kubenswrapper[4830]: I0227 16:06:51.848166 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:06:51 crc kubenswrapper[4830]: I0227 16:06:51.848184 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:06:52 crc kubenswrapper[4830]: I0227 16:06:52.100723 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Feb 27 16:06:52 crc kubenswrapper[4830]: I0227 16:06:52.343074 4830 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 16:06:52 crc kubenswrapper[4830]: I0227 16:06:52.343369 4830 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 16:06:52 crc kubenswrapper[4830]: I0227 16:06:52.345072 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:06:52 crc kubenswrapper[4830]: I0227 16:06:52.345129 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:06:52 crc kubenswrapper[4830]: I0227 16:06:52.345148 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:06:52 crc kubenswrapper[4830]: I0227 16:06:52.845799 4830 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 16:06:52 crc kubenswrapper[4830]: I0227 16:06:52.845799 4830 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 16:06:52 crc kubenswrapper[4830]: I0227 16:06:52.847203 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:06:52 crc kubenswrapper[4830]: I0227 16:06:52.847263 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:06:52 crc kubenswrapper[4830]: I0227 16:06:52.847289 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:06:52 crc kubenswrapper[4830]: I0227 16:06:52.847299 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:06:52 crc kubenswrapper[4830]: I0227 16:06:52.847333 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:06:52 crc 
kubenswrapper[4830]: I0227 16:06:52.847356 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:06:53 crc kubenswrapper[4830]: I0227 16:06:53.756008 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 16:06:53 crc kubenswrapper[4830]: I0227 16:06:53.756208 4830 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 16:06:53 crc kubenswrapper[4830]: I0227 16:06:53.757325 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:06:53 crc kubenswrapper[4830]: I0227 16:06:53.757352 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:06:53 crc kubenswrapper[4830]: I0227 16:06:53.757361 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:06:54 crc kubenswrapper[4830]: I0227 16:06:54.142091 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Feb 27 16:06:54 crc kubenswrapper[4830]: I0227 16:06:54.142423 4830 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 16:06:54 crc kubenswrapper[4830]: I0227 16:06:54.144082 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:06:54 crc kubenswrapper[4830]: I0227 16:06:54.144143 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:06:54 crc kubenswrapper[4830]: I0227 16:06:54.144163 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:06:54 crc kubenswrapper[4830]: E0227 16:06:54.850181 4830 eviction_manager.go:285] "Eviction manager: failed to get summary 
stats" err="failed to get node info: node \"crc\" not found" Feb 27 16:06:56 crc kubenswrapper[4830]: I0227 16:06:56.656547 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 27 16:06:56 crc kubenswrapper[4830]: I0227 16:06:56.656693 4830 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 16:06:56 crc kubenswrapper[4830]: I0227 16:06:56.658342 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:06:56 crc kubenswrapper[4830]: I0227 16:06:56.658378 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:06:56 crc kubenswrapper[4830]: I0227 16:06:56.658389 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:06:56 crc kubenswrapper[4830]: I0227 16:06:56.877779 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 27 16:06:56 crc kubenswrapper[4830]: I0227 16:06:56.877930 4830 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 16:06:56 crc kubenswrapper[4830]: I0227 16:06:56.879100 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:06:56 crc kubenswrapper[4830]: I0227 16:06:56.879147 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:06:56 crc kubenswrapper[4830]: I0227 16:06:56.879164 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:06:59 crc kubenswrapper[4830]: E0227 16:06:59.480236 4830 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: 
Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:06:59Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 27 16:06:59 crc kubenswrapper[4830]: E0227 16:06:59.480781 4830 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:06:59Z is after 2026-02-23T05:33:13Z" interval="6.4s" Feb 27 16:06:59 crc kubenswrapper[4830]: I0227 16:06:59.484817 4830 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:06:59Z is after 2026-02-23T05:33:13Z Feb 27 16:06:59 crc kubenswrapper[4830]: I0227 16:06:59.485079 4830 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 27 16:06:59 crc kubenswrapper[4830]: I0227 16:06:59.485185 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Feb 27 16:06:59 
crc kubenswrapper[4830]: E0227 16:06:59.486315 4830 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:06:59Z is after 2026-02-23T05:33:13Z" node="crc" Feb 27 16:06:59 crc kubenswrapper[4830]: W0227 16:06:59.490568 4830 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:06:59Z is after 2026-02-23T05:33:13Z Feb 27 16:06:59 crc kubenswrapper[4830]: E0227 16:06:59.490827 4830 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:06:59Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 27 16:06:59 crc kubenswrapper[4830]: W0227 16:06:59.492876 4830 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:06:59Z is after 2026-02-23T05:33:13Z Feb 27 16:06:59 crc kubenswrapper[4830]: E0227 16:06:59.493011 4830 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
\"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:06:59Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 27 16:06:59 crc kubenswrapper[4830]: E0227 16:06:59.495165 4830 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:06:59Z is after 2026-02-23T05:33:13Z" event="&Event{ObjectMeta:{crc.189826278de25a96 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:06:44.69629199 +0000 UTC m=+0.785564483,LastTimestamp:2026-02-27 16:06:44.69629199 +0000 UTC m=+0.785564483,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:06:59 crc kubenswrapper[4830]: I0227 16:06:59.497462 4830 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 27 16:06:59 crc kubenswrapper[4830]: I0227 16:06:59.497807 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with 
statuscode: 403" Feb 27 16:06:59 crc kubenswrapper[4830]: W0227 16:06:59.498643 4830 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:06:59Z is after 2026-02-23T05:33:13Z Feb 27 16:06:59 crc kubenswrapper[4830]: E0227 16:06:59.498715 4830 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:06:59Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 27 16:06:59 crc kubenswrapper[4830]: W0227 16:06:59.500882 4830 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:06:59Z is after 2026-02-23T05:33:13Z Feb 27 16:06:59 crc kubenswrapper[4830]: E0227 16:06:59.500948 4830 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:06:59Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 27 16:06:59 crc kubenswrapper[4830]: I0227 16:06:59.713250 4830 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: 
Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:06:59Z is after 2026-02-23T05:33:13Z Feb 27 16:06:59 crc kubenswrapper[4830]: I0227 16:06:59.865603 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 27 16:06:59 crc kubenswrapper[4830]: I0227 16:06:59.867422 4830 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="fd2a80cee21c12fed9ed12847698d68e3a1314e3bc50d00a5600ec94eea618a8" exitCode=255 Feb 27 16:06:59 crc kubenswrapper[4830]: I0227 16:06:59.867460 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"fd2a80cee21c12fed9ed12847698d68e3a1314e3bc50d00a5600ec94eea618a8"} Feb 27 16:06:59 crc kubenswrapper[4830]: I0227 16:06:59.867587 4830 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 16:06:59 crc kubenswrapper[4830]: I0227 16:06:59.868324 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:06:59 crc kubenswrapper[4830]: I0227 16:06:59.868347 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:06:59 crc kubenswrapper[4830]: I0227 16:06:59.868356 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:06:59 crc kubenswrapper[4830]: I0227 16:06:59.868809 4830 scope.go:117] "RemoveContainer" containerID="fd2a80cee21c12fed9ed12847698d68e3a1314e3bc50d00a5600ec94eea618a8" Feb 27 16:06:59 crc kubenswrapper[4830]: I0227 16:06:59.878586 4830 
patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 27 16:06:59 crc kubenswrapper[4830]: I0227 16:06:59.878680 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 27 16:07:00 crc kubenswrapper[4830]: I0227 16:07:00.704074 4830 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:07:00Z is after 2026-02-23T05:33:13Z Feb 27 16:07:00 crc kubenswrapper[4830]: I0227 16:07:00.872639 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 27 16:07:00 crc kubenswrapper[4830]: I0227 16:07:00.876147 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"1f41ecab548812bc9f4597322b82892ec30222b1c8b69896759115fc09465c3b"} Feb 27 16:07:00 crc kubenswrapper[4830]: I0227 16:07:00.876362 4830 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 16:07:00 crc kubenswrapper[4830]: I0227 16:07:00.877610 4830 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:07:00 crc kubenswrapper[4830]: I0227 16:07:00.877796 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:07:00 crc kubenswrapper[4830]: I0227 16:07:00.877988 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:07:01 crc kubenswrapper[4830]: I0227 16:07:01.703878 4830 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:07:01Z is after 2026-02-23T05:33:13Z Feb 27 16:07:01 crc kubenswrapper[4830]: I0227 16:07:01.881688 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 27 16:07:01 crc kubenswrapper[4830]: I0227 16:07:01.882704 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 27 16:07:01 crc kubenswrapper[4830]: I0227 16:07:01.885388 4830 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="1f41ecab548812bc9f4597322b82892ec30222b1c8b69896759115fc09465c3b" exitCode=255 Feb 27 16:07:01 crc kubenswrapper[4830]: I0227 16:07:01.885447 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"1f41ecab548812bc9f4597322b82892ec30222b1c8b69896759115fc09465c3b"} Feb 27 16:07:01 crc kubenswrapper[4830]: I0227 16:07:01.885550 4830 scope.go:117] "RemoveContainer" 
containerID="fd2a80cee21c12fed9ed12847698d68e3a1314e3bc50d00a5600ec94eea618a8" Feb 27 16:07:01 crc kubenswrapper[4830]: I0227 16:07:01.885703 4830 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 16:07:01 crc kubenswrapper[4830]: I0227 16:07:01.887006 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:07:01 crc kubenswrapper[4830]: I0227 16:07:01.887069 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:07:01 crc kubenswrapper[4830]: I0227 16:07:01.887087 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:07:01 crc kubenswrapper[4830]: I0227 16:07:01.887916 4830 scope.go:117] "RemoveContainer" containerID="1f41ecab548812bc9f4597322b82892ec30222b1c8b69896759115fc09465c3b" Feb 27 16:07:01 crc kubenswrapper[4830]: E0227 16:07:01.888240 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 27 16:07:02 crc kubenswrapper[4830]: I0227 16:07:02.343475 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 16:07:02 crc kubenswrapper[4830]: I0227 16:07:02.704127 4830 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:07:02Z is after 2026-02-23T05:33:13Z Feb 27 16:07:02 
crc kubenswrapper[4830]: I0227 16:07:02.890576 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 27 16:07:02 crc kubenswrapper[4830]: I0227 16:07:02.893901 4830 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 16:07:02 crc kubenswrapper[4830]: I0227 16:07:02.894890 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:07:02 crc kubenswrapper[4830]: I0227 16:07:02.894925 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:07:02 crc kubenswrapper[4830]: I0227 16:07:02.894935 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:07:02 crc kubenswrapper[4830]: I0227 16:07:02.895431 4830 scope.go:117] "RemoveContainer" containerID="1f41ecab548812bc9f4597322b82892ec30222b1c8b69896759115fc09465c3b" Feb 27 16:07:02 crc kubenswrapper[4830]: E0227 16:07:02.895578 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 27 16:07:03 crc kubenswrapper[4830]: I0227 16:07:03.704999 4830 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:07:03Z is after 2026-02-23T05:33:13Z Feb 27 16:07:03 crc kubenswrapper[4830]: I0227 
16:07:03.730405 4830 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 16:07:03 crc kubenswrapper[4830]: I0227 16:07:03.883324 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 16:07:03 crc kubenswrapper[4830]: I0227 16:07:03.899527 4830 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 16:07:03 crc kubenswrapper[4830]: I0227 16:07:03.901275 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:07:03 crc kubenswrapper[4830]: I0227 16:07:03.901337 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:07:03 crc kubenswrapper[4830]: I0227 16:07:03.901357 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:07:03 crc kubenswrapper[4830]: I0227 16:07:03.902513 4830 scope.go:117] "RemoveContainer" containerID="1f41ecab548812bc9f4597322b82892ec30222b1c8b69896759115fc09465c3b" Feb 27 16:07:03 crc kubenswrapper[4830]: E0227 16:07:03.902838 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 27 16:07:03 crc kubenswrapper[4830]: I0227 16:07:03.905608 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 16:07:04 crc kubenswrapper[4830]: I0227 16:07:04.181365 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-etcd/etcd-crc" Feb 27 16:07:04 crc kubenswrapper[4830]: I0227 16:07:04.182398 4830 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 16:07:04 crc kubenswrapper[4830]: I0227 16:07:04.186746 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:07:04 crc kubenswrapper[4830]: I0227 16:07:04.186819 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:07:04 crc kubenswrapper[4830]: I0227 16:07:04.186843 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:07:04 crc kubenswrapper[4830]: I0227 16:07:04.203480 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Feb 27 16:07:04 crc kubenswrapper[4830]: I0227 16:07:04.703933 4830 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:07:04Z is after 2026-02-23T05:33:13Z Feb 27 16:07:04 crc kubenswrapper[4830]: E0227 16:07:04.851344 4830 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 27 16:07:04 crc kubenswrapper[4830]: I0227 16:07:04.902147 4830 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 16:07:04 crc kubenswrapper[4830]: I0227 16:07:04.902175 4830 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 16:07:04 crc kubenswrapper[4830]: I0227 16:07:04.903516 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:07:04 crc 
kubenswrapper[4830]: I0227 16:07:04.903552 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:07:04 crc kubenswrapper[4830]: I0227 16:07:04.903567 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:07:04 crc kubenswrapper[4830]: I0227 16:07:04.903805 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:07:04 crc kubenswrapper[4830]: I0227 16:07:04.903877 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:07:04 crc kubenswrapper[4830]: I0227 16:07:04.903906 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:07:04 crc kubenswrapper[4830]: I0227 16:07:04.904198 4830 scope.go:117] "RemoveContainer" containerID="1f41ecab548812bc9f4597322b82892ec30222b1c8b69896759115fc09465c3b" Feb 27 16:07:04 crc kubenswrapper[4830]: E0227 16:07:04.904394 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 27 16:07:05 crc kubenswrapper[4830]: I0227 16:07:05.703705 4830 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:07:05Z is after 2026-02-23T05:33:13Z Feb 27 16:07:05 crc kubenswrapper[4830]: I0227 16:07:05.886487 4830 kubelet_node_status.go:401] "Setting node 
annotation to enable volume controller attach/detach" Feb 27 16:07:05 crc kubenswrapper[4830]: E0227 16:07:05.887602 4830 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:07:05Z is after 2026-02-23T05:33:13Z" interval="7s" Feb 27 16:07:05 crc kubenswrapper[4830]: I0227 16:07:05.888196 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:07:05 crc kubenswrapper[4830]: I0227 16:07:05.888253 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:07:05 crc kubenswrapper[4830]: I0227 16:07:05.888271 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:07:05 crc kubenswrapper[4830]: I0227 16:07:05.888311 4830 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 27 16:07:05 crc kubenswrapper[4830]: E0227 16:07:05.892872 4830 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:07:05Z is after 2026-02-23T05:33:13Z" node="crc" Feb 27 16:07:05 crc kubenswrapper[4830]: I0227 16:07:05.904077 4830 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 16:07:05 crc kubenswrapper[4830]: I0227 16:07:05.905123 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:07:05 crc kubenswrapper[4830]: I0227 16:07:05.905156 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" 
Feb 27 16:07:05 crc kubenswrapper[4830]: I0227 16:07:05.905167 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:07:05 crc kubenswrapper[4830]: I0227 16:07:05.905716 4830 scope.go:117] "RemoveContainer" containerID="1f41ecab548812bc9f4597322b82892ec30222b1c8b69896759115fc09465c3b" Feb 27 16:07:05 crc kubenswrapper[4830]: E0227 16:07:05.905894 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 27 16:07:06 crc kubenswrapper[4830]: W0227 16:07:06.578394 4830 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 27 16:07:06 crc kubenswrapper[4830]: E0227 16:07:06.578456 4830 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" Feb 27 16:07:06 crc kubenswrapper[4830]: I0227 16:07:06.704273 4830 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 27 16:07:07 crc kubenswrapper[4830]: I0227 16:07:07.706354 4830 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io 
"crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 27 16:07:07 crc kubenswrapper[4830]: W0227 16:07:07.832538 4830 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 27 16:07:07 crc kubenswrapper[4830]: E0227 16:07:07.832942 4830 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" Feb 27 16:07:08 crc kubenswrapper[4830]: I0227 16:07:08.001795 4830 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 27 16:07:08 crc kubenswrapper[4830]: I0227 16:07:08.029262 4830 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 27 16:07:08 crc kubenswrapper[4830]: I0227 16:07:08.726874 4830 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 27 16:07:09 crc kubenswrapper[4830]: E0227 16:07:09.503360 4830 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189826278de25a96 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:06:44.69629199 +0000 UTC m=+0.785564483,LastTimestamp:2026-02-27 16:06:44.69629199 +0000 UTC m=+0.785564483,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:07:09 crc kubenswrapper[4830]: E0227 16:07:09.508137 4830 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1898262791b9d28d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:06:44.760744589 +0000 UTC m=+0.850017082,LastTimestamp:2026-02-27 16:06:44.760744589 +0000 UTC m=+0.850017082,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:07:09 crc kubenswrapper[4830]: E0227 16:07:09.514513 4830 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1898262791ba78ec default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:06:44.76078718 +0000 UTC m=+0.850059673,LastTimestamp:2026-02-27 16:06:44.76078718 +0000 UTC 
m=+0.850059673,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:07:09 crc kubenswrapper[4830]: E0227 16:07:09.520704 4830 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1898262791bab96a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:06:44.76080369 +0000 UTC m=+0.850076183,LastTimestamp:2026-02-27 16:06:44.76080369 +0000 UTC m=+0.850076183,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:07:09 crc kubenswrapper[4830]: E0227 16:07:09.525458 4830 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1898262797774e22 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:06:44.85704861 +0000 UTC m=+0.946321103,LastTimestamp:2026-02-27 16:06:44.85704861 +0000 UTC m=+0.946321103,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:07:09 crc kubenswrapper[4830]: E0227 16:07:09.531717 
4830 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1898262791b9d28d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1898262791b9d28d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:06:44.760744589 +0000 UTC m=+0.850017082,LastTimestamp:2026-02-27 16:06:44.864381831 +0000 UTC m=+0.953654324,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:07:09 crc kubenswrapper[4830]: E0227 16:07:09.536318 4830 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1898262791ba78ec\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1898262791ba78ec default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:06:44.76078718 +0000 UTC m=+0.850059673,LastTimestamp:2026-02-27 16:06:44.864412772 +0000 UTC m=+0.953685255,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:07:09 crc kubenswrapper[4830]: E0227 16:07:09.542890 4830 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1898262791bab96a\" is forbidden: User \"system:anonymous\" cannot patch resource 
\"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1898262791bab96a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:06:44.76080369 +0000 UTC m=+0.850076183,LastTimestamp:2026-02-27 16:06:44.864427632 +0000 UTC m=+0.953700115,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:07:09 crc kubenswrapper[4830]: E0227 16:07:09.549029 4830 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1898262791b9d28d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1898262791b9d28d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:06:44.760744589 +0000 UTC m=+0.850017082,LastTimestamp:2026-02-27 16:06:44.867409969 +0000 UTC m=+0.956682472,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:07:09 crc kubenswrapper[4830]: E0227 16:07:09.554812 4830 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1898262791ba78ec\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1898262791ba78ec default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:06:44.76078718 +0000 UTC m=+0.850059673,LastTimestamp:2026-02-27 16:06:44.86743843 +0000 UTC m=+0.956710923,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:07:09 crc kubenswrapper[4830]: E0227 16:07:09.561250 4830 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1898262791bab96a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1898262791bab96a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:06:44.76080369 +0000 UTC m=+0.850076183,LastTimestamp:2026-02-27 16:06:44.86745861 +0000 UTC m=+0.956731103,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:07:09 crc kubenswrapper[4830]: E0227 16:07:09.567858 4830 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1898262791b9d28d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1898262791b9d28d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is 
now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:06:44.760744589 +0000 UTC m=+0.850017082,LastTimestamp:2026-02-27 16:06:44.868415899 +0000 UTC m=+0.957688362,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:07:09 crc kubenswrapper[4830]: E0227 16:07:09.574366 4830 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1898262791ba78ec\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1898262791ba78ec default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:06:44.76078718 +0000 UTC m=+0.850059673,LastTimestamp:2026-02-27 16:06:44.868456209 +0000 UTC m=+0.957732082,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:07:09 crc kubenswrapper[4830]: E0227 16:07:09.581137 4830 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1898262791bab96a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1898262791bab96a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:06:44.76080369 +0000 UTC m=+0.850076183,LastTimestamp:2026-02-27 
16:06:44.86847104 +0000 UTC m=+0.957743503,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:07:09 crc kubenswrapper[4830]: E0227 16:07:09.587694 4830 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1898262791b9d28d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1898262791b9d28d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:06:44.760744589 +0000 UTC m=+0.850017082,LastTimestamp:2026-02-27 16:06:44.871744492 +0000 UTC m=+0.961016995,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:07:09 crc kubenswrapper[4830]: E0227 16:07:09.594513 4830 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1898262791ba78ec\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1898262791ba78ec default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:06:44.76078718 +0000 UTC m=+0.850059673,LastTimestamp:2026-02-27 16:06:44.871832384 +0000 UTC m=+0.961104887,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:07:09 crc kubenswrapper[4830]: E0227 16:07:09.602321 4830 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1898262791bab96a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1898262791bab96a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:06:44.76080369 +0000 UTC m=+0.850076183,LastTimestamp:2026-02-27 16:06:44.871864144 +0000 UTC m=+0.961136647,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:07:09 crc kubenswrapper[4830]: E0227 16:07:09.609156 4830 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1898262791b9d28d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1898262791b9d28d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:06:44.760744589 +0000 UTC m=+0.850017082,LastTimestamp:2026-02-27 16:06:44.872291292 +0000 UTC m=+0.961563795,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:07:09 crc kubenswrapper[4830]: E0227 16:07:09.615807 4830 
event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1898262791ba78ec\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1898262791ba78ec default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:06:44.76078718 +0000 UTC m=+0.850059673,LastTimestamp:2026-02-27 16:06:44.872310873 +0000 UTC m=+0.961583346,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:07:09 crc kubenswrapper[4830]: E0227 16:07:09.622513 4830 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1898262791bab96a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1898262791bab96a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:06:44.76080369 +0000 UTC m=+0.850076183,LastTimestamp:2026-02-27 16:06:44.872320783 +0000 UTC m=+0.961593246,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:07:09 crc kubenswrapper[4830]: E0227 16:07:09.627416 4830 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1898262791b9d28d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in 
API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1898262791b9d28d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:06:44.760744589 +0000 UTC m=+0.850017082,LastTimestamp:2026-02-27 16:06:44.873863304 +0000 UTC m=+0.963135767,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:07:09 crc kubenswrapper[4830]: E0227 16:07:09.634093 4830 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1898262791b9d28d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1898262791b9d28d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:06:44.760744589 +0000 UTC m=+0.850017082,LastTimestamp:2026-02-27 16:06:44.873905434 +0000 UTC m=+0.963177917,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:07:09 crc kubenswrapper[4830]: E0227 16:07:09.640786 4830 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1898262791ba78ec\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1898262791ba78ec default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:06:44.76078718 +0000 UTC m=+0.850059673,LastTimestamp:2026-02-27 16:06:44.873928735 +0000 UTC m=+0.963201208,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:07:09 crc kubenswrapper[4830]: E0227 16:07:09.649205 4830 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1898262791ba78ec\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1898262791ba78ec default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:06:44.76078718 +0000 UTC m=+0.850059673,LastTimestamp:2026-02-27 16:06:44.873972776 +0000 UTC m=+0.963245249,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:07:09 crc kubenswrapper[4830]: E0227 16:07:09.655792 4830 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1898262791bab96a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1898262791bab96a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is 
now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:06:44.76080369 +0000 UTC m=+0.850076183,LastTimestamp:2026-02-27 16:06:44.874006816 +0000 UTC m=+0.963279279,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:07:09 crc kubenswrapper[4830]: E0227 16:07:09.663442 4830 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18982627b00957e0 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:06:45.269272544 +0000 UTC m=+1.358545007,LastTimestamp:2026-02-27 16:06:45.269272544 +0000 UTC m=+1.358545007,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:07:09 crc kubenswrapper[4830]: E0227 16:07:09.669418 4830 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.18982627b0499a81 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:06:45.273483905 +0000 UTC m=+1.362756408,LastTimestamp:2026-02-27 16:06:45.273483905 +0000 UTC m=+1.362756408,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:07:09 crc kubenswrapper[4830]: E0227 16:07:09.675918 4830 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18982627b0be6548 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:06:45.281137992 +0000 UTC m=+1.370410485,LastTimestamp:2026-02-27 16:06:45.281137992 +0000 UTC m=+1.370410485,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:07:09 crc kubenswrapper[4830]: E0227 16:07:09.682609 4830 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User 
\"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18982627b14fa4a7 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:06:45.290656935 +0000 UTC m=+1.379929438,LastTimestamp:2026-02-27 16:06:45.290656935 +0000 UTC m=+1.379929438,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:07:09 crc kubenswrapper[4830]: E0227 16:07:09.692191 4830 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18982627b19754c4 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:06:45.295355076 +0000 UTC m=+1.384627539,LastTimestamp:2026-02-27 
16:06:45.295355076 +0000 UTC m=+1.384627539,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:07:09 crc kubenswrapper[4830]: I0227 16:07:09.699395 4830 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 27 16:07:09 crc kubenswrapper[4830]: E0227 16:07:09.699758 4830 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18982627e6ff912a openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Created,Message:Created container wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:06:46.19137873 +0000 UTC m=+2.280651233,LastTimestamp:2026-02-27 16:06:46.19137873 +0000 UTC m=+2.280651233,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:07:09 crc kubenswrapper[4830]: E0227 16:07:09.701549 4830 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18982627e70a104c openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:06:46.192066636 +0000 UTC m=+2.281339109,LastTimestamp:2026-02-27 16:06:46.192066636 +0000 UTC m=+2.281339109,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:07:09 crc kubenswrapper[4830]: E0227 16:07:09.706112 4830 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18982627e7b3b6c8 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:06:46.20318484 +0000 UTC m=+2.292457303,LastTimestamp:2026-02-27 16:06:46.20318484 +0000 UTC m=+2.292457303,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:07:09 crc kubenswrapper[4830]: E0227 16:07:09.710719 4830 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.18982627e7b72f01 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 
UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:06:46.203412225 +0000 UTC m=+2.292684718,LastTimestamp:2026-02-27 16:06:46.203412225 +0000 UTC m=+2.292684718,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:07:09 crc kubenswrapper[4830]: E0227 16:07:09.716552 4830 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18982627e7bbcb4d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:06:46.203714381 +0000 UTC m=+2.292986854,LastTimestamp:2026-02-27 16:06:46.203714381 +0000 UTC m=+2.292986854,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:07:09 crc kubenswrapper[4830]: E0227 16:07:09.720879 4830 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18982627e7c254fa 
openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:06:46.204142842 +0000 UTC m=+2.293415295,LastTimestamp:2026-02-27 16:06:46.204142842 +0000 UTC m=+2.293415295,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:07:09 crc kubenswrapper[4830]: E0227 16:07:09.726646 4830 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18982627e7c6d9cc openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Started,Message:Started container wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:06:46.204438988 +0000 UTC m=+2.293711491,LastTimestamp:2026-02-27 16:06:46.204438988 +0000 UTC m=+2.293711491,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:07:09 crc kubenswrapper[4830]: E0227 16:07:09.733815 4830 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in 
the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18982627e7e1573e openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:06:46.206175038 +0000 UTC m=+2.295447511,LastTimestamp:2026-02-27 16:06:46.206175038 +0000 UTC m=+2.295447511,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:07:09 crc kubenswrapper[4830]: E0227 16:07:09.738543 4830 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18982627e8d34a64 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:06:46.22203146 +0000 UTC m=+2.311303933,LastTimestamp:2026-02-27 16:06:46.22203146 +0000 UTC m=+2.311303933,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:07:09 crc 
kubenswrapper[4830]: E0227 16:07:09.745388 4830 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18982627e8f2dc88 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:06:46.224100488 +0000 UTC m=+2.313372961,LastTimestamp:2026-02-27 16:06:46.224100488 +0000 UTC m=+2.313372961,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:07:09 crc kubenswrapper[4830]: E0227 16:07:09.752788 4830 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.18982627e921346f openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:06:46.227137647 +0000 UTC m=+2.316410140,LastTimestamp:2026-02-27 16:06:46.227137647 +0000 UTC m=+2.316410140,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:07:09 crc 
kubenswrapper[4830]: E0227 16:07:09.759677 4830 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18982627ffc9364a openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:06:46.607246922 +0000 UTC m=+2.696519415,LastTimestamp:2026-02-27 16:06:46.607246922 +0000 UTC m=+2.696519415,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:07:09 crc kubenswrapper[4830]: E0227 16:07:09.766126 4830 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18982628008acda0 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:06:46.619934112 +0000 UTC m=+2.709206605,LastTimestamp:2026-02-27 16:06:46.619934112 +0000 UTC 
m=+2.709206605,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:07:09 crc kubenswrapper[4830]: E0227 16:07:09.772764 4830 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1898262800a163e8 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:06:46.621414376 +0000 UTC m=+2.710686869,LastTimestamp:2026-02-27 16:06:46.621414376 +0000 UTC m=+2.710686869,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:07:09 crc kubenswrapper[4830]: E0227 16:07:09.783592 4830 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189826280a452eb1 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:06:46.783143601 +0000 UTC m=+2.872416094,LastTimestamp:2026-02-27 16:06:46.783143601 +0000 UTC m=+2.872416094,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:07:09 crc kubenswrapper[4830]: E0227 16:07:09.790110 4830 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189826280a679633 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:06:46.785398323 +0000 UTC m=+2.874670816,LastTimestamp:2026-02-27 16:06:46.785398323 +0000 UTC m=+2.874670816,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:07:09 crc kubenswrapper[4830]: E0227 16:07:09.797652 4830 
event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189826280ad5b397 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:06:46.792614807 +0000 UTC m=+2.881887270,LastTimestamp:2026-02-27 16:06:46.792614807 +0000 UTC m=+2.881887270,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:07:09 crc kubenswrapper[4830]: E0227 16:07:09.804097 4830 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189826280af90f8f openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:06:46.794932111 +0000 UTC m=+2.884204604,LastTimestamp:2026-02-27 
16:06:46.794932111 +0000 UTC m=+2.884204604,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:07:09 crc kubenswrapper[4830]: E0227 16:07:09.807587 4830 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1898262811c51174 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Created,Message:Created container kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:06:46.908965236 +0000 UTC m=+2.998237699,LastTimestamp:2026-02-27 16:06:46.908965236 +0000 UTC m=+2.998237699,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:07:09 crc kubenswrapper[4830]: E0227 16:07:09.811829 4830 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1898262812d04048 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Started,Message:Started container kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:06:46.926475336 +0000 UTC m=+3.015747809,LastTimestamp:2026-02-27 16:06:46.926475336 +0000 UTC m=+3.015747809,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:07:09 crc kubenswrapper[4830]: E0227 16:07:09.817176 4830 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1898262812f53425 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:06:46.928897061 +0000 UTC m=+3.018169524,LastTimestamp:2026-02-27 16:06:46.928897061 +0000 UTC m=+3.018169524,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:07:09 crc kubenswrapper[4830]: E0227 16:07:09.823157 4830 event.go:359] "Server 
rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189826281897806d openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:06:47.023419501 +0000 UTC m=+3.112691974,LastTimestamp:2026-02-27 16:06:47.023419501 +0000 UTC m=+3.112691974,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:07:09 crc kubenswrapper[4830]: E0227 16:07:09.829207 4830 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1898262818ba0d5a openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Created,Message:Created container kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:06:47.025683802 +0000 UTC m=+3.114956265,LastTimestamp:2026-02-27 16:06:47.025683802 +0000 UTC m=+3.114956265,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:07:09 crc kubenswrapper[4830]: E0227 16:07:09.836099 4830 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1898262818bbaebf openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Created,Message:Created container etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:06:47.025790655 +0000 UTC m=+3.115063118,LastTimestamp:2026-02-27 16:06:47.025790655 +0000 UTC m=+3.115063118,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:07:09 crc kubenswrapper[4830]: E0227 16:07:09.842791 4830 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189826281900ad07 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Created,Message:Created container kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:06:47.030312199 +0000 UTC m=+3.119584672,LastTimestamp:2026-02-27 16:06:47.030312199 +0000 UTC m=+3.119584672,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:07:09 crc kubenswrapper[4830]: E0227 16:07:09.849067 4830 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189826281951e31d openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:06:47.035634461 +0000 UTC m=+3.124906924,LastTimestamp:2026-02-27 16:06:47.035634461 +0000 UTC m=+3.124906924,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:07:09 crc kubenswrapper[4830]: E0227 16:07:09.853819 4830 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18982628197e430f openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Started,Message:Started container kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:06:47.038542607 +0000 UTC m=+3.127815080,LastTimestamp:2026-02-27 
16:06:47.038542607 +0000 UTC m=+3.127815080,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:07:09 crc kubenswrapper[4830]: E0227 16:07:09.858636 4830 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18982628198a0912 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Started,Message:Started container etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:06:47.039314194 +0000 UTC m=+3.128586657,LastTimestamp:2026-02-27 16:06:47.039314194 +0000 UTC m=+3.128586657,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:07:09 crc kubenswrapper[4830]: E0227 16:07:09.863291 4830 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1898262819a1114c openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\" already 
present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:06:47.040823628 +0000 UTC m=+3.130096091,LastTimestamp:2026-02-27 16:06:47.040823628 +0000 UTC m=+3.130096091,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:07:09 crc kubenswrapper[4830]: E0227 16:07:09.868151 4830 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1898262819f22eee openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Started,Message:Started container kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:06:47.04613963 +0000 UTC m=+3.135412093,LastTimestamp:2026-02-27 16:06:47.04613963 +0000 UTC m=+3.135412093,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:07:09 crc kubenswrapper[4830]: E0227 16:07:09.872783 4830 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189826281a927bb1 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:06:47.056645041 +0000 UTC m=+3.145917524,LastTimestamp:2026-02-27 16:06:47.056645041 +0000 UTC m=+3.145917524,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:07:09 crc kubenswrapper[4830]: E0227 16:07:09.877456 4830 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1898262820f1be5b openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Created,Message:Created container kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:06:47.163551323 +0000 UTC m=+3.252823796,LastTimestamp:2026-02-27 16:06:47.163551323 +0000 UTC m=+3.252823796,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:07:09 crc kubenswrapper[4830]: I0227 16:07:09.877940 4830 patch_prober.go:28] interesting 
pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 27 16:07:09 crc kubenswrapper[4830]: I0227 16:07:09.878039 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 27 16:07:09 crc kubenswrapper[4830]: E0227 16:07:09.881665 4830 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1898262822074cde openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Started,Message:Started container kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:06:47.181741278 +0000 UTC m=+3.271013741,LastTimestamp:2026-02-27 16:06:47.181741278 +0000 UTC m=+3.271013741,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:07:09 crc kubenswrapper[4830]: E0227 16:07:09.886401 4830 event.go:359] "Server rejected event (will 
not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18982628256420ef openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Created,Message:Created container kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:06:47.238156527 +0000 UTC m=+3.327428980,LastTimestamp:2026-02-27 16:06:47.238156527 +0000 UTC m=+3.327428980,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:07:09 crc kubenswrapper[4830]: E0227 16:07:09.891112 4830 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189826282601962d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Created,Message:Created container kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:06:47.248475693 +0000 UTC m=+3.337748166,LastTimestamp:2026-02-27 16:06:47.248475693 +0000 UTC m=+3.337748166,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 
16:07:09 crc kubenswrapper[4830]: E0227 16:07:09.896195 4830 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18982628265d5bc3 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Started,Message:Started container kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:06:47.254490051 +0000 UTC m=+3.343762514,LastTimestamp:2026-02-27 16:06:47.254490051 +0000 UTC m=+3.343762514,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:07:09 crc kubenswrapper[4830]: E0227 16:07:09.902198 4830 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18982628266c992e openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:06:47.255488814 
+0000 UTC m=+3.344761277,LastTimestamp:2026-02-27 16:06:47.255488814 +0000 UTC m=+3.344761277,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:07:09 crc kubenswrapper[4830]: E0227 16:07:09.908901 4830 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189826282716b2b4 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Started,Message:Started container kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:06:47.266636468 +0000 UTC m=+3.355908931,LastTimestamp:2026-02-27 16:06:47.266636468 +0000 UTC m=+3.355908931,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:07:09 crc kubenswrapper[4830]: E0227 16:07:09.913614 4830 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189826282728c933 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Pulled,Message:Container image 
\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:06:47.267821875 +0000 UTC m=+3.357094338,LastTimestamp:2026-02-27 16:06:47.267821875 +0000 UTC m=+3.357094338,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:07:09 crc kubenswrapper[4830]: E0227 16:07:09.919585 4830 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18982628358a38d4 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Created,Message:Created container kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:06:47.509088468 +0000 UTC m=+3.598360931,LastTimestamp:2026-02-27 16:06:47.509088468 +0000 UTC m=+3.598360931,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:07:09 crc kubenswrapper[4830]: E0227 16:07:09.924801 4830 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1898262835b7485d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Created,Message:Created container kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:06:47.512041565 +0000 UTC m=+3.601314058,LastTimestamp:2026-02-27 16:06:47.512041565 +0000 UTC m=+3.601314058,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:07:09 crc kubenswrapper[4830]: E0227 16:07:09.929566 4830 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1898262836d383dd openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Started,Message:Started container kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:06:47.530669021 +0000 UTC m=+3.619941524,LastTimestamp:2026-02-27 16:06:47.530669021 +0000 UTC m=+3.619941524,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:07:09 crc kubenswrapper[4830]: E0227 16:07:09.937302 4830 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace 
\"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1898262836e4be33 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:06:47.531798067 +0000 UTC m=+3.621070570,LastTimestamp:2026-02-27 16:06:47.531798067 +0000 UTC m=+3.621070570,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:07:09 crc kubenswrapper[4830]: E0227 16:07:09.941601 4830 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1898262837ed3bb6 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Started,Message:Started container kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:06:47.549131702 +0000 UTC m=+3.638404205,LastTimestamp:2026-02-27 16:06:47.549131702 +0000 UTC m=+3.638404205,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:07:09 crc kubenswrapper[4830]: E0227 16:07:09.946424 4830 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18982628475330a0 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:06:47.807471776 +0000 UTC m=+3.896744239,LastTimestamp:2026-02-27 16:06:47.807471776 +0000 UTC m=+3.896744239,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:07:09 crc kubenswrapper[4830]: E0227 16:07:09.951907 4830 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189826284b7f6a7b openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Created,Message:Created container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 
16:06:47.877479035 +0000 UTC m=+3.966751508,LastTimestamp:2026-02-27 16:06:47.877479035 +0000 UTC m=+3.966751508,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:07:09 crc kubenswrapper[4830]: E0227 16:07:09.958900 4830 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189826284f2ffd5b openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Started,Message:Started container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:06:47.939382619 +0000 UTC m=+4.028655112,LastTimestamp:2026-02-27 16:06:47.939382619 +0000 UTC m=+4.028655112,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:07:09 crc kubenswrapper[4830]: E0227 16:07:09.963583 4830 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189826284f4e9343 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image 
\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:06:47.941387075 +0000 UTC m=+4.030659548,LastTimestamp:2026-02-27 16:06:47.941387075 +0000 UTC m=+4.030659548,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:07:09 crc kubenswrapper[4830]: E0227 16:07:09.968796 4830 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189826285ac0f634 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Created,Message:Created container etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:06:48.133432884 +0000 UTC m=+4.222705377,LastTimestamp:2026-02-27 16:06:48.133432884 +0000 UTC m=+4.222705377,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:07:09 crc kubenswrapper[4830]: E0227 16:07:09.973520 4830 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189826285e34254a openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Started,Message:Started container etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:06:48.191313226 +0000 UTC m=+4.280585719,LastTimestamp:2026-02-27 16:06:48.191313226 +0000 UTC m=+4.280585719,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:07:09 crc kubenswrapper[4830]: E0227 16:07:09.978163 4830 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189826285ee741b3 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:06:48.203051443 +0000 UTC m=+4.292323946,LastTimestamp:2026-02-27 16:06:48.203051443 +0000 UTC m=+4.292323946,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:07:09 crc kubenswrapper[4830]: E0227 16:07:09.983003 4830 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189826285fe80af2 
openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:06:48.219880178 +0000 UTC m=+4.309152671,LastTimestamp:2026-02-27 16:06:48.219880178 +0000 UTC m=+4.309152671,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:07:09 crc kubenswrapper[4830]: E0227 16:07:09.988564 4830 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1898262884421d75 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:06:48.829762933 +0000 UTC m=+4.919035426,LastTimestamp:2026-02-27 16:06:48.829762933 +0000 UTC m=+4.919035426,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:07:09 crc kubenswrapper[4830]: E0227 16:07:09.993290 4830 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource 
\"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Feb 27 16:07:09 crc kubenswrapper[4830]: &Event{ObjectMeta:{kube-apiserver-crc.1898262886f421ba openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:6443/livez": dial tcp 192.168.126.11:6443: connect: connection refused Feb 27 16:07:09 crc kubenswrapper[4830]: body: Feb 27 16:07:09 crc kubenswrapper[4830]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:06:48.874983866 +0000 UTC m=+4.964256359,LastTimestamp:2026-02-27 16:06:48.874983866 +0000 UTC m=+4.964256359,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 27 16:07:09 crc kubenswrapper[4830]: > Feb 27 16:07:09 crc kubenswrapper[4830]: E0227 16:07:09.998789 4830 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1898262886f6fbe6 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:6443/livez\": dial tcp 192.168.126.11:6443: connect: connection refused,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:06:48.87517079 +0000 UTC m=+4.964443293,LastTimestamp:2026-02-27 
16:06:48.87517079 +0000 UTC m=+4.964443293,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:07:10 crc kubenswrapper[4830]: E0227 16:07:10.004330 4830 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1898262893ffa624 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Created,Message:Created container etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:06:49.093842468 +0000 UTC m=+5.183114941,LastTimestamp:2026-02-27 16:06:49.093842468 +0000 UTC m=+5.183114941,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:07:10 crc kubenswrapper[4830]: E0227 16:07:10.008326 4830 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1898262895fe3a1f openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Started,Message:Started container etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:06:49.127303711 +0000 UTC m=+5.216576174,LastTimestamp:2026-02-27 16:06:49.127303711 +0000 UTC m=+5.216576174,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:07:10 crc kubenswrapper[4830]: E0227 16:07:10.014360 4830 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1898262896156cb3 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:06:49.128823987 +0000 UTC m=+5.218096450,LastTimestamp:2026-02-27 16:06:49.128823987 +0000 UTC m=+5.218096450,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:07:10 crc kubenswrapper[4830]: E0227 16:07:10.019719 4830 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18982628a7570837 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Created,Message:Created container etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:06:49.418336311 +0000 UTC m=+5.507608784,LastTimestamp:2026-02-27 16:06:49.418336311 +0000 UTC 
m=+5.507608784,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:07:10 crc kubenswrapper[4830]: E0227 16:07:10.028000 4830 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18982628a882a2b0 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Started,Message:Started container etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:06:49.43797112 +0000 UTC m=+5.527243593,LastTimestamp:2026-02-27 16:06:49.43797112 +0000 UTC m=+5.527243593,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:07:10 crc kubenswrapper[4830]: E0227 16:07:10.032859 4830 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18982628a899b6ac openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:06:49.439483564 +0000 UTC 
m=+5.528756037,LastTimestamp:2026-02-27 16:06:49.439483564 +0000 UTC m=+5.528756037,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:07:10 crc kubenswrapper[4830]: E0227 16:07:10.037937 4830 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18982628b946ac47 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Created,Message:Created container etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:06:49.719254087 +0000 UTC m=+5.808526590,LastTimestamp:2026-02-27 16:06:49.719254087 +0000 UTC m=+5.808526590,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:07:10 crc kubenswrapper[4830]: E0227 16:07:10.045289 4830 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18982628baeeb6c0 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Started,Message:Started container etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:06:49.747044032 +0000 UTC m=+5.836316535,LastTimestamp:2026-02-27 16:06:49.747044032 +0000 UTC 
m=+5.836316535,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:07:10 crc kubenswrapper[4830]: E0227 16:07:10.052516 4830 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18982628bb0b6bfb openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:06:49.748925435 +0000 UTC m=+5.838197938,LastTimestamp:2026-02-27 16:06:49.748925435 +0000 UTC m=+5.838197938,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:07:10 crc kubenswrapper[4830]: E0227 16:07:10.059467 4830 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18982628cca45fc6 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Created,Message:Created container etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:06:50.04416199 +0000 UTC 
m=+6.133434483,LastTimestamp:2026-02-27 16:06:50.04416199 +0000 UTC m=+6.133434483,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:07:10 crc kubenswrapper[4830]: E0227 16:07:10.063824 4830 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18982628cd9e226d openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Started,Message:Started container etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:06:50.060530285 +0000 UTC m=+6.149802778,LastTimestamp:2026-02-27 16:06:50.060530285 +0000 UTC m=+6.149802778,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:07:10 crc kubenswrapper[4830]: E0227 16:07:10.067363 4830 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18982628cdb6a307 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\" already present on 
machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:06:50.062136071 +0000 UTC m=+6.151408564,LastTimestamp:2026-02-27 16:06:50.062136071 +0000 UTC m=+6.151408564,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:07:10 crc kubenswrapper[4830]: E0227 16:07:10.072061 4830 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18982628dd6ac521 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Created,Message:Created container etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:06:50.325599521 +0000 UTC m=+6.414872014,LastTimestamp:2026-02-27 16:06:50.325599521 +0000 UTC m=+6.414872014,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:07:10 crc kubenswrapper[4830]: E0227 16:07:10.079051 4830 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18982628de062a50 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Started,Message:Started container etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 
16:06:50.335783504 +0000 UTC m=+6.425056007,LastTimestamp:2026-02-27 16:06:50.335783504 +0000 UTC m=+6.425056007,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:07:10 crc kubenswrapper[4830]: E0227 16:07:10.088318 4830 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Feb 27 16:07:10 crc kubenswrapper[4830]: &Event{ObjectMeta:{kube-apiserver-crc.1898262aff5e79bc openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403 Feb 27 16:07:10 crc kubenswrapper[4830]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 27 16:07:10 crc kubenswrapper[4830]: Feb 27 16:07:10 crc kubenswrapper[4830]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:06:59.485153724 +0000 UTC m=+15.574426217,LastTimestamp:2026-02-27 16:06:59.485153724 +0000 UTC m=+15.574426217,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 27 16:07:10 crc kubenswrapper[4830]: > Feb 27 16:07:10 crc kubenswrapper[4830]: E0227 16:07:10.095106 4830 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" 
event="&Event{ObjectMeta:{kube-apiserver-crc.1898262aff5f6e6d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:06:59.485216365 +0000 UTC m=+15.574488858,LastTimestamp:2026-02-27 16:06:59.485216365 +0000 UTC m=+15.574488858,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:07:10 crc kubenswrapper[4830]: E0227 16:07:10.102688 4830 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.1898262aff5e79bc\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Feb 27 16:07:10 crc kubenswrapper[4830]: &Event{ObjectMeta:{kube-apiserver-crc.1898262aff5e79bc openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403 Feb 27 16:07:10 crc kubenswrapper[4830]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 27 16:07:10 crc kubenswrapper[4830]: Feb 27 16:07:10 crc kubenswrapper[4830]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:06:59.485153724 +0000 UTC 
m=+15.574426217,LastTimestamp:2026-02-27 16:06:59.497774652 +0000 UTC m=+15.587047145,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 27 16:07:10 crc kubenswrapper[4830]: > Feb 27 16:07:10 crc kubenswrapper[4830]: E0227 16:07:10.108213 4830 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.1898262aff5f6e6d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1898262aff5f6e6d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:06:59.485216365 +0000 UTC m=+15.574488858,LastTimestamp:2026-02-27 16:06:59.497984807 +0000 UTC m=+15.587257310,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:07:10 crc kubenswrapper[4830]: E0227 16:07:10.114135 4830 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.189826284f4e9343\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189826284f4e9343 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:06:47.941387075 +0000 UTC m=+4.030659548,LastTimestamp:2026-02-27 16:06:59.86966186 +0000 UTC m=+15.958934323,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:07:10 crc kubenswrapper[4830]: E0227 16:07:10.122470 4830 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Feb 27 16:07:10 crc kubenswrapper[4830]: &Event{ObjectMeta:{kube-controller-manager-crc.1898262b16d2d9f9 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Feb 27 16:07:10 crc kubenswrapper[4830]: body: Feb 27 16:07:10 crc kubenswrapper[4830]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:06:59.878656505 +0000 UTC m=+15.967929008,LastTimestamp:2026-02-27 16:06:59.878656505 +0000 UTC 
m=+15.967929008,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 27 16:07:10 crc kubenswrapper[4830]: > Feb 27 16:07:10 crc kubenswrapper[4830]: E0227 16:07:10.123809 4830 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1898262b16d3bc8a openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:06:59.878714506 +0000 UTC m=+15.967987009,LastTimestamp:2026-02-27 16:06:59.878714506 +0000 UTC m=+15.967987009,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:07:10 crc kubenswrapper[4830]: E0227 16:07:10.129038 4830 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.1898262b16d2d9f9\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Feb 27 16:07:10 crc kubenswrapper[4830]: &Event{ObjectMeta:{kube-controller-manager-crc.1898262b16d2d9f9 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Feb 27 16:07:10 crc kubenswrapper[4830]: body: Feb 27 16:07:10 crc kubenswrapper[4830]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:06:59.878656505 +0000 UTC m=+15.967929008,LastTimestamp:2026-02-27 16:07:09.878024767 +0000 UTC m=+25.967297240,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 27 16:07:10 crc kubenswrapper[4830]: > Feb 27 16:07:10 crc kubenswrapper[4830]: E0227 16:07:10.134325 4830 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.1898262b16d3bc8a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1898262b16d3bc8a openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:06:59.878714506 +0000 UTC m=+15.967987009,LastTimestamp:2026-02-27 16:07:09.878091348 
+0000 UTC m=+25.967363821,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:07:10 crc kubenswrapper[4830]: W0227 16:07:10.427283 4830 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "crc" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 27 16:07:10 crc kubenswrapper[4830]: E0227 16:07:10.427332 4830 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" Feb 27 16:07:10 crc kubenswrapper[4830]: I0227 16:07:10.704540 4830 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 27 16:07:11 crc kubenswrapper[4830]: I0227 16:07:11.706075 4830 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 27 16:07:11 crc kubenswrapper[4830]: W0227 16:07:11.720788 4830 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 27 16:07:11 crc kubenswrapper[4830]: E0227 16:07:11.721062 4830 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" 
cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" Feb 27 16:07:12 crc kubenswrapper[4830]: I0227 16:07:12.707647 4830 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 27 16:07:12 crc kubenswrapper[4830]: I0227 16:07:12.893332 4830 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 16:07:12 crc kubenswrapper[4830]: I0227 16:07:12.895424 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:07:12 crc kubenswrapper[4830]: I0227 16:07:12.895510 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:07:12 crc kubenswrapper[4830]: I0227 16:07:12.895534 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:07:12 crc kubenswrapper[4830]: I0227 16:07:12.895585 4830 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 27 16:07:12 crc kubenswrapper[4830]: E0227 16:07:12.896592 4830 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Feb 27 16:07:12 crc kubenswrapper[4830]: E0227 16:07:12.897112 4830 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Feb 27 16:07:13 crc kubenswrapper[4830]: I0227 16:07:13.705168 4830 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: 
csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 27 16:07:14 crc kubenswrapper[4830]: I0227 16:07:14.705074 4830 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 27 16:07:14 crc kubenswrapper[4830]: E0227 16:07:14.851622 4830 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 27 16:07:15 crc kubenswrapper[4830]: I0227 16:07:15.706315 4830 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 27 16:07:16 crc kubenswrapper[4830]: I0227 16:07:16.706488 4830 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 27 16:07:17 crc kubenswrapper[4830]: I0227 16:07:17.607535 4830 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": read tcp 192.168.126.11:43404->192.168.126.11:10357: read: connection reset by peer" start-of-body= Feb 27 16:07:17 crc kubenswrapper[4830]: I0227 16:07:17.607617 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": 
read tcp 192.168.126.11:43404->192.168.126.11:10357: read: connection reset by peer" Feb 27 16:07:17 crc kubenswrapper[4830]: I0227 16:07:17.607685 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 27 16:07:17 crc kubenswrapper[4830]: I0227 16:07:17.607870 4830 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 16:07:17 crc kubenswrapper[4830]: I0227 16:07:17.609841 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:07:17 crc kubenswrapper[4830]: I0227 16:07:17.609904 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:07:17 crc kubenswrapper[4830]: I0227 16:07:17.609924 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:07:17 crc kubenswrapper[4830]: I0227 16:07:17.610613 4830 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"c414ab235e0a7321e6455f54b680cefce4120d0114676e10e9f79bf275964052"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted" Feb 27 16:07:17 crc kubenswrapper[4830]: I0227 16:07:17.610882 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" containerID="cri-o://c414ab235e0a7321e6455f54b680cefce4120d0114676e10e9f79bf275964052" gracePeriod=30 Feb 27 16:07:17 crc kubenswrapper[4830]: E0227 16:07:17.616881 4830 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create 
resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Feb 27 16:07:17 crc kubenswrapper[4830]: &Event{ObjectMeta:{kube-controller-manager-crc.1898262f378cfcbe openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": read tcp 192.168.126.11:43404->192.168.126.11:10357: read: connection reset by peer Feb 27 16:07:17 crc kubenswrapper[4830]: body: Feb 27 16:07:17 crc kubenswrapper[4830]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:07:17.607595198 +0000 UTC m=+33.696867691,LastTimestamp:2026-02-27 16:07:17.607595198 +0000 UTC m=+33.696867691,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 27 16:07:17 crc kubenswrapper[4830]: > Feb 27 16:07:17 crc kubenswrapper[4830]: E0227 16:07:17.623853 4830 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1898262f378dd60e openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": read tcp 
192.168.126.11:43404->192.168.126.11:10357: read: connection reset by peer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:07:17.60765083 +0000 UTC m=+33.696923323,LastTimestamp:2026-02-27 16:07:17.60765083 +0000 UTC m=+33.696923323,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:07:17 crc kubenswrapper[4830]: E0227 16:07:17.630986 4830 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1898262f37bec7a3 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Killing,Message:Container cluster-policy-controller failed startup probe, will be restarted,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:07:17.610858403 +0000 UTC m=+33.700130896,LastTimestamp:2026-02-27 16:07:17.610858403 +0000 UTC m=+33.700130896,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:07:17 crc kubenswrapper[4830]: I0227 16:07:17.706189 4830 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 27 16:07:17 crc kubenswrapper[4830]: I0227 16:07:17.945313 4830 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log" Feb 27 16:07:17 crc kubenswrapper[4830]: I0227 16:07:17.945860 4830 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="c414ab235e0a7321e6455f54b680cefce4120d0114676e10e9f79bf275964052" exitCode=255 Feb 27 16:07:17 crc kubenswrapper[4830]: I0227 16:07:17.945918 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"c414ab235e0a7321e6455f54b680cefce4120d0114676e10e9f79bf275964052"} Feb 27 16:07:18 crc kubenswrapper[4830]: E0227 16:07:18.143622 4830 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.18982627e7e1573e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18982627e7e1573e openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:06:46.206175038 +0000 UTC m=+2.295447511,LastTimestamp:2026-02-27 16:07:18.135835389 +0000 UTC m=+34.225107892,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:07:18 
crc kubenswrapper[4830]: E0227 16:07:18.369305 4830 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.18982627ffc9364a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18982627ffc9364a openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:06:46.607246922 +0000 UTC m=+2.696519415,LastTimestamp:2026-02-27 16:07:18.362087328 +0000 UTC m=+34.451359821,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:07:18 crc kubenswrapper[4830]: E0227 16:07:18.384804 4830 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.18982628008acda0\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18982628008acda0 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 
16:06:46.619934112 +0000 UTC m=+2.709206605,LastTimestamp:2026-02-27 16:07:18.376386375 +0000 UTC m=+34.465658878,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:07:18 crc kubenswrapper[4830]: I0227 16:07:18.704663 4830 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 27 16:07:18 crc kubenswrapper[4830]: I0227 16:07:18.761986 4830 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 16:07:18 crc kubenswrapper[4830]: I0227 16:07:18.764063 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:07:18 crc kubenswrapper[4830]: I0227 16:07:18.764119 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:07:18 crc kubenswrapper[4830]: I0227 16:07:18.764136 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:07:18 crc kubenswrapper[4830]: I0227 16:07:18.765034 4830 scope.go:117] "RemoveContainer" containerID="1f41ecab548812bc9f4597322b82892ec30222b1c8b69896759115fc09465c3b" Feb 27 16:07:18 crc kubenswrapper[4830]: I0227 16:07:18.953987 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log" Feb 27 16:07:18 crc kubenswrapper[4830]: I0227 16:07:18.954654 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"635c013219126bc71bc7c3f7b7f27339ea0a53eace870778212c42ed22a682ac"} Feb 27 16:07:18 crc kubenswrapper[4830]: I0227 16:07:18.954813 4830 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 16:07:18 crc kubenswrapper[4830]: I0227 16:07:18.956049 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:07:18 crc kubenswrapper[4830]: I0227 16:07:18.956097 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:07:18 crc kubenswrapper[4830]: I0227 16:07:18.956117 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:07:19 crc kubenswrapper[4830]: I0227 16:07:19.117068 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 27 16:07:19 crc kubenswrapper[4830]: I0227 16:07:19.705569 4830 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 27 16:07:19 crc kubenswrapper[4830]: I0227 16:07:19.897327 4830 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 16:07:19 crc kubenswrapper[4830]: I0227 16:07:19.898625 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:07:19 crc kubenswrapper[4830]: I0227 16:07:19.898659 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:07:19 crc kubenswrapper[4830]: I0227 16:07:19.898670 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Feb 27 16:07:19 crc kubenswrapper[4830]: I0227 16:07:19.898702 4830 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 27 16:07:19 crc kubenswrapper[4830]: E0227 16:07:19.908440 4830 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Feb 27 16:07:19 crc kubenswrapper[4830]: E0227 16:07:19.908721 4830 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Feb 27 16:07:19 crc kubenswrapper[4830]: I0227 16:07:19.959881 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Feb 27 16:07:19 crc kubenswrapper[4830]: I0227 16:07:19.961270 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 27 16:07:19 crc kubenswrapper[4830]: I0227 16:07:19.964070 4830 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="8bc38f000e0be3cf67228db1cec44ce2f72e41ffc064b5385a3746c32c42207c" exitCode=255 Feb 27 16:07:19 crc kubenswrapper[4830]: I0227 16:07:19.964230 4830 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 16:07:19 crc kubenswrapper[4830]: I0227 16:07:19.964359 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"8bc38f000e0be3cf67228db1cec44ce2f72e41ffc064b5385a3746c32c42207c"} Feb 27 16:07:19 
crc kubenswrapper[4830]: I0227 16:07:19.964555 4830 scope.go:117] "RemoveContainer" containerID="1f41ecab548812bc9f4597322b82892ec30222b1c8b69896759115fc09465c3b" Feb 27 16:07:19 crc kubenswrapper[4830]: I0227 16:07:19.964907 4830 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 16:07:19 crc kubenswrapper[4830]: I0227 16:07:19.965833 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:07:19 crc kubenswrapper[4830]: I0227 16:07:19.965878 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:07:19 crc kubenswrapper[4830]: I0227 16:07:19.965890 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:07:19 crc kubenswrapper[4830]: I0227 16:07:19.965992 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:07:19 crc kubenswrapper[4830]: I0227 16:07:19.966017 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:07:19 crc kubenswrapper[4830]: I0227 16:07:19.966029 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:07:19 crc kubenswrapper[4830]: I0227 16:07:19.966643 4830 scope.go:117] "RemoveContainer" containerID="8bc38f000e0be3cf67228db1cec44ce2f72e41ffc064b5385a3746c32c42207c" Feb 27 16:07:19 crc kubenswrapper[4830]: E0227 16:07:19.966837 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 27 16:07:20 crc kubenswrapper[4830]: I0227 16:07:20.702522 4830 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 27 16:07:20 crc kubenswrapper[4830]: I0227 16:07:20.968603 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Feb 27 16:07:20 crc kubenswrapper[4830]: I0227 16:07:20.970500 4830 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 16:07:20 crc kubenswrapper[4830]: I0227 16:07:20.971261 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:07:20 crc kubenswrapper[4830]: I0227 16:07:20.971311 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:07:20 crc kubenswrapper[4830]: I0227 16:07:20.971329 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:07:21 crc kubenswrapper[4830]: I0227 16:07:21.704398 4830 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 27 16:07:22 crc kubenswrapper[4830]: W0227 16:07:22.146486 4830 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 27 16:07:22 crc kubenswrapper[4830]: E0227 16:07:22.146546 4830 reflector.go:158] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
Feb 27 16:07:22 crc kubenswrapper[4830]: I0227 16:07:22.343460 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 27 16:07:22 crc kubenswrapper[4830]: I0227 16:07:22.343722 4830 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 16:07:22 crc kubenswrapper[4830]: I0227 16:07:22.345239 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 16:07:22 crc kubenswrapper[4830]: I0227 16:07:22.345309 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 16:07:22 crc kubenswrapper[4830]: I0227 16:07:22.345329 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 16:07:22 crc kubenswrapper[4830]: I0227 16:07:22.346188 4830 scope.go:117] "RemoveContainer" containerID="8bc38f000e0be3cf67228db1cec44ce2f72e41ffc064b5385a3746c32c42207c"
Feb 27 16:07:22 crc kubenswrapper[4830]: E0227 16:07:22.346495 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792"
Feb 27 16:07:22 crc kubenswrapper[4830]: I0227 16:07:22.710540 4830 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 27 16:07:23 crc kubenswrapper[4830]: I0227 16:07:23.705278 4830 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 27 16:07:23 crc kubenswrapper[4830]: I0227 16:07:23.730694 4830 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 27 16:07:23 crc kubenswrapper[4830]: I0227 16:07:23.730923 4830 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 16:07:23 crc kubenswrapper[4830]: I0227 16:07:23.732343 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 16:07:23 crc kubenswrapper[4830]: I0227 16:07:23.732376 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 16:07:23 crc kubenswrapper[4830]: I0227 16:07:23.732387 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 16:07:23 crc kubenswrapper[4830]: I0227 16:07:23.732988 4830 scope.go:117] "RemoveContainer" containerID="8bc38f000e0be3cf67228db1cec44ce2f72e41ffc064b5385a3746c32c42207c"
Feb 27 16:07:23 crc kubenswrapper[4830]: E0227 16:07:23.733170 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792"
Feb 27 16:07:24 crc kubenswrapper[4830]: I0227 16:07:24.705789 4830 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 27 16:07:24 crc kubenswrapper[4830]: E0227 16:07:24.851898 4830 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Feb 27 16:07:25 crc kubenswrapper[4830]: I0227 16:07:25.703784 4830 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 27 16:07:26 crc kubenswrapper[4830]: I0227 16:07:26.706174 4830 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 27 16:07:26 crc kubenswrapper[4830]: I0227 16:07:26.877276 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 27 16:07:26 crc kubenswrapper[4830]: I0227 16:07:26.877545 4830 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 16:07:26 crc kubenswrapper[4830]: I0227 16:07:26.879094 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 16:07:26 crc kubenswrapper[4830]: I0227 16:07:26.879156 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 16:07:26 crc kubenswrapper[4830]: I0227 16:07:26.879178 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 16:07:26 crc kubenswrapper[4830]: I0227 16:07:26.883504 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 27 16:07:26 crc kubenswrapper[4830]: I0227 16:07:26.908972 4830 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 16:07:26 crc kubenswrapper[4830]: I0227 16:07:26.910811 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 16:07:26 crc kubenswrapper[4830]: I0227 16:07:26.911042 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 16:07:26 crc kubenswrapper[4830]: I0227 16:07:26.911191 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 16:07:26 crc kubenswrapper[4830]: I0227 16:07:26.911354 4830 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 27 16:07:26 crc kubenswrapper[4830]: E0227 16:07:26.915383 4830 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc"
Feb 27 16:07:26 crc kubenswrapper[4830]: E0227 16:07:26.916516 4830 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Feb 27 16:07:26 crc kubenswrapper[4830]: I0227 16:07:26.986660 4830 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 16:07:26 crc kubenswrapper[4830]: I0227 16:07:26.988282 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 16:07:26 crc kubenswrapper[4830]: I0227 16:07:26.988355 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 16:07:26 crc kubenswrapper[4830]: I0227 16:07:26.988373 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 16:07:27 crc kubenswrapper[4830]: I0227 16:07:27.706302 4830 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 27 16:07:28 crc kubenswrapper[4830]: I0227 16:07:28.707157 4830 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 27 16:07:29 crc kubenswrapper[4830]: I0227 16:07:29.123181 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 27 16:07:29 crc kubenswrapper[4830]: I0227 16:07:29.123648 4830 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 16:07:29 crc kubenswrapper[4830]: I0227 16:07:29.125407 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 16:07:29 crc kubenswrapper[4830]: I0227 16:07:29.125486 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 16:07:29 crc kubenswrapper[4830]: I0227 16:07:29.125505 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 16:07:29 crc kubenswrapper[4830]: I0227 16:07:29.704131 4830 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 27 16:07:30 crc kubenswrapper[4830]: I0227 16:07:30.705494 4830 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 27 16:07:31 crc kubenswrapper[4830]: W0227 16:07:31.678529 4830 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "crc" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Feb 27 16:07:31 crc kubenswrapper[4830]: E0227 16:07:31.678790 4830 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
Feb 27 16:07:31 crc kubenswrapper[4830]: I0227 16:07:31.704045 4830 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 27 16:07:31 crc kubenswrapper[4830]: W0227 16:07:31.757889 4830 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Feb 27 16:07:31 crc kubenswrapper[4830]: E0227 16:07:31.758221 4830 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
Feb 27 16:07:31 crc kubenswrapper[4830]: W0227 16:07:31.848396 4830 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
Feb 27 16:07:31 crc kubenswrapper[4830]: E0227 16:07:31.848463 4830 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError"
Feb 27 16:07:32 crc kubenswrapper[4830]: I0227 16:07:32.701773 4830 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 27 16:07:33 crc kubenswrapper[4830]: I0227 16:07:33.705067 4830 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 27 16:07:33 crc kubenswrapper[4830]: I0227 16:07:33.916290 4830 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 16:07:33 crc kubenswrapper[4830]: I0227 16:07:33.917815 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 16:07:33 crc kubenswrapper[4830]: I0227 16:07:33.917863 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 16:07:33 crc kubenswrapper[4830]: I0227 16:07:33.917875 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 16:07:33 crc kubenswrapper[4830]: I0227 16:07:33.917907 4830 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 27 16:07:33 crc kubenswrapper[4830]: E0227 16:07:33.924480 4830 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc"
Feb 27 16:07:33 crc kubenswrapper[4830]: E0227 16:07:33.924478 4830 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Feb 27 16:07:34 crc kubenswrapper[4830]: I0227 16:07:34.706130 4830 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 27 16:07:34 crc kubenswrapper[4830]: E0227 16:07:34.852372 4830 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Feb 27 16:07:35 crc kubenswrapper[4830]: I0227 16:07:35.703480 4830 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 27 16:07:35 crc kubenswrapper[4830]: I0227 16:07:35.719091 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 27 16:07:35 crc kubenswrapper[4830]: I0227 16:07:35.719295 4830 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 16:07:35 crc kubenswrapper[4830]: I0227 16:07:35.720769 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 16:07:35 crc kubenswrapper[4830]: I0227 16:07:35.720837 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 16:07:35 crc kubenswrapper[4830]: I0227 16:07:35.720860 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 16:07:36 crc kubenswrapper[4830]: I0227 16:07:36.705359 4830 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 27 16:07:37 crc kubenswrapper[4830]: I0227 16:07:37.708409 4830 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 27 16:07:37 crc kubenswrapper[4830]: I0227 16:07:37.761907 4830 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 16:07:37 crc kubenswrapper[4830]: I0227 16:07:37.763686 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 16:07:37 crc kubenswrapper[4830]: I0227 16:07:37.763743 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 16:07:37 crc kubenswrapper[4830]: I0227 16:07:37.763760 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 16:07:37 crc kubenswrapper[4830]: I0227 16:07:37.764724 4830 scope.go:117] "RemoveContainer" containerID="8bc38f000e0be3cf67228db1cec44ce2f72e41ffc064b5385a3746c32c42207c"
Feb 27 16:07:37 crc kubenswrapper[4830]: E0227 16:07:37.765074 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792"
Feb 27 16:07:38 crc kubenswrapper[4830]: I0227 16:07:38.706729 4830 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 27 16:07:39 crc kubenswrapper[4830]: I0227 16:07:39.706606 4830 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 27 16:07:40 crc kubenswrapper[4830]: I0227 16:07:40.706301 4830 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 27 16:07:40 crc kubenswrapper[4830]: I0227 16:07:40.925479 4830 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 16:07:40 crc kubenswrapper[4830]: I0227 16:07:40.927078 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 16:07:40 crc kubenswrapper[4830]: I0227 16:07:40.927129 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 16:07:40 crc kubenswrapper[4830]: I0227 16:07:40.927148 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 16:07:40 crc kubenswrapper[4830]: I0227 16:07:40.927185 4830 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 27 16:07:40 crc kubenswrapper[4830]: E0227 16:07:40.936686 4830 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Feb 27 16:07:40 crc kubenswrapper[4830]: E0227 16:07:40.936731 4830 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc"
Feb 27 16:07:41 crc kubenswrapper[4830]: I0227 16:07:41.705831 4830 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 27 16:07:42 crc kubenswrapper[4830]: I0227 16:07:42.706194 4830 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 27 16:07:43 crc kubenswrapper[4830]: I0227 16:07:43.705512 4830 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 27 16:07:44 crc kubenswrapper[4830]: I0227 16:07:44.705233 4830 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 27 16:07:44 crc kubenswrapper[4830]: E0227 16:07:44.852496 4830 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Feb 27 16:07:45 crc kubenswrapper[4830]: I0227 16:07:45.705825 4830 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 27 16:07:46 crc kubenswrapper[4830]: I0227 16:07:46.706252 4830 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 27 16:07:47 crc kubenswrapper[4830]: I0227 16:07:47.704712 4830 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 27 16:07:47 crc kubenswrapper[4830]: I0227 16:07:47.937931 4830 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 16:07:47 crc kubenswrapper[4830]: I0227 16:07:47.939718 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 16:07:47 crc kubenswrapper[4830]: I0227 16:07:47.939783 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 16:07:47 crc kubenswrapper[4830]: I0227 16:07:47.939795 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 16:07:47 crc kubenswrapper[4830]: I0227 16:07:47.939823 4830 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 27 16:07:47 crc kubenswrapper[4830]: E0227 16:07:47.945265 4830 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc"
Feb 27 16:07:47 crc kubenswrapper[4830]: E0227 16:07:47.946061 4830 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Feb 27 16:07:48 crc kubenswrapper[4830]: I0227 16:07:48.704261 4830 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 27 16:07:49 crc kubenswrapper[4830]: I0227 16:07:49.705738 4830 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 27 16:07:49 crc kubenswrapper[4830]: I0227 16:07:49.895778 4830 csr.go:261] certificate signing request csr-6g9mz is approved, waiting to be issued
Feb 27 16:07:49 crc kubenswrapper[4830]: I0227 16:07:49.902124 4830 csr.go:257] certificate signing request csr-6g9mz is issued
Feb 27 16:07:49 crc kubenswrapper[4830]: I0227 16:07:49.997924 4830 reconstruct.go:205] "DevicePaths of reconstructed volumes updated"
Feb 27 16:07:50 crc kubenswrapper[4830]: I0227 16:07:50.514754 4830 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Feb 27 16:07:50 crc kubenswrapper[4830]: I0227 16:07:50.904472 4830 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-02-24 05:54:36 +0000 UTC, rotation deadline is 2026-12-06 05:50:53.29621717 +0000 UTC
Feb 27 16:07:50 crc kubenswrapper[4830]: I0227 16:07:50.904524 4830 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6757h43m2.391697201s for next certificate rotation
Feb 27 16:07:51 crc kubenswrapper[4830]: I0227 16:07:51.762221 4830 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 16:07:51 crc kubenswrapper[4830]: I0227 16:07:51.763929 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 16:07:51 crc kubenswrapper[4830]: I0227 16:07:51.764009 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 16:07:51 crc kubenswrapper[4830]: I0227 16:07:51.764030 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 16:07:51 crc kubenswrapper[4830]: I0227 16:07:51.764866 4830 scope.go:117] "RemoveContainer" containerID="8bc38f000e0be3cf67228db1cec44ce2f72e41ffc064b5385a3746c32c42207c"
Feb 27 16:07:52 crc kubenswrapper[4830]: I0227 16:07:52.063214 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log"
Feb 27 16:07:52 crc kubenswrapper[4830]: I0227 16:07:52.065002 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"acf80f37f1e38e051b10c7a1c0eeef8f0c3db7fc0e5653c54d82da7956a8eed2"}
Feb 27 16:07:53 crc kubenswrapper[4830]: I0227 16:07:53.070095 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log"
Feb 27 16:07:53 crc kubenswrapper[4830]: I0227 16:07:53.070801 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log"
Feb 27 16:07:53 crc kubenswrapper[4830]: I0227 16:07:53.072693 4830 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="acf80f37f1e38e051b10c7a1c0eeef8f0c3db7fc0e5653c54d82da7956a8eed2" exitCode=255
Feb 27 16:07:53 crc kubenswrapper[4830]: I0227 16:07:53.072746 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"acf80f37f1e38e051b10c7a1c0eeef8f0c3db7fc0e5653c54d82da7956a8eed2"}
Feb 27 16:07:53 crc kubenswrapper[4830]: I0227 16:07:53.072785 4830 scope.go:117] "RemoveContainer" containerID="8bc38f000e0be3cf67228db1cec44ce2f72e41ffc064b5385a3746c32c42207c"
Feb 27 16:07:53 crc kubenswrapper[4830]: I0227 16:07:53.072830 4830 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 16:07:53 crc kubenswrapper[4830]: I0227 16:07:53.073644 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 16:07:53 crc kubenswrapper[4830]: I0227 16:07:53.073672 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 16:07:53 crc kubenswrapper[4830]: I0227 16:07:53.073682 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 16:07:53 crc kubenswrapper[4830]: I0227 16:07:53.074842 4830 scope.go:117] "RemoveContainer" containerID="acf80f37f1e38e051b10c7a1c0eeef8f0c3db7fc0e5653c54d82da7956a8eed2"
Feb 27 16:07:53 crc kubenswrapper[4830]: E0227 16:07:53.075036 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792"
Feb 27 16:07:53 crc kubenswrapper[4830]: I0227 16:07:53.730643 4830 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 27 16:07:54 crc kubenswrapper[4830]: I0227 16:07:54.076134 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log"
Feb 27 16:07:54 crc kubenswrapper[4830]: I0227 16:07:54.077601 4830 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 16:07:54 crc kubenswrapper[4830]: I0227 16:07:54.078412 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 16:07:54 crc kubenswrapper[4830]: I0227 16:07:54.078457 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 16:07:54 crc kubenswrapper[4830]: I0227 16:07:54.078470 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 16:07:54 crc kubenswrapper[4830]: I0227 16:07:54.079158 4830 scope.go:117] "RemoveContainer" containerID="acf80f37f1e38e051b10c7a1c0eeef8f0c3db7fc0e5653c54d82da7956a8eed2"
Feb 27 16:07:54 crc kubenswrapper[4830]: E0227 16:07:54.079361 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792"
Feb 27 16:07:54 crc kubenswrapper[4830]: E0227 16:07:54.852626 4830 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Feb 27 16:07:54 crc kubenswrapper[4830]: I0227 16:07:54.945360 4830 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 16:07:54 crc kubenswrapper[4830]: I0227 16:07:54.946824 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 16:07:54 crc kubenswrapper[4830]: I0227 16:07:54.946863 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 16:07:54 crc kubenswrapper[4830]: I0227 16:07:54.946881 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 16:07:54 crc kubenswrapper[4830]: I0227 16:07:54.947095 4830 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 27 16:07:54 crc kubenswrapper[4830]: I0227 16:07:54.957211 4830 kubelet_node_status.go:115] "Node was previously registered" node="crc"
Feb 27 16:07:54 crc kubenswrapper[4830]: I0227 16:07:54.957532 4830 kubelet_node_status.go:79] "Successfully registered node" node="crc"
Feb 27 16:07:54 crc kubenswrapper[4830]: E0227 16:07:54.957563 4830 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found"
Feb 27 16:07:54 crc kubenswrapper[4830]: I0227 16:07:54.968068 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 16:07:54 crc kubenswrapper[4830]: I0227 16:07:54.968120 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 16:07:54 crc kubenswrapper[4830]: I0227 16:07:54.968138 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 16:07:54 crc kubenswrapper[4830]: I0227 16:07:54.968161 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 16:07:54 crc kubenswrapper[4830]: I0227 16:07:54.968179 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:07:54Z","lastTransitionTime":"2026-02-27T16:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 27 16:07:54 crc kubenswrapper[4830]: E0227 16:07:54.990046 4830 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:07:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:07:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:07:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:07:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:07:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:07:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:07:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:07:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"058e4d33-3c10-460a-8f66-1f2272cb9956\\\",\\\"systemUUID\\\":\\\"1d4a94de-760c-40e1-8054-66d250f336ee\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 16:07:54 crc kubenswrapper[4830]: I0227 16:07:54.999445 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:07:54 crc kubenswrapper[4830]: I0227 16:07:54.999498 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:07:54 crc kubenswrapper[4830]: I0227 16:07:54.999897 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:07:55 crc kubenswrapper[4830]: I0227 16:07:54.999973 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:07:55 crc kubenswrapper[4830]: I0227 16:07:55.000013 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:07:54Z","lastTransitionTime":"2026-02-27T16:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:07:55 crc kubenswrapper[4830]: E0227 16:07:55.020413 4830 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:07:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:07:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:07:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:07:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:07:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:07:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:07:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:07:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"058e4d33-3c10-460a-8f66-1f2272cb9956\\\",\\\"systemUUID\\\":\\\"1d4a94de-760c-40e1-8054-66d250f336ee\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 16:07:55 crc kubenswrapper[4830]: I0227 16:07:55.031225 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:07:55 crc kubenswrapper[4830]: I0227 16:07:55.031274 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:07:55 crc kubenswrapper[4830]: I0227 16:07:55.031292 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:07:55 crc kubenswrapper[4830]: I0227 16:07:55.031315 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:07:55 crc kubenswrapper[4830]: I0227 16:07:55.031332 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:07:55Z","lastTransitionTime":"2026-02-27T16:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:07:55 crc kubenswrapper[4830]: E0227 16:07:55.041616 4830 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:07:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:07:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:07:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:07:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:07:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:07:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:07:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:07:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"058e4d33-3c10-460a-8f66-1f2272cb9956\\\",\\\"systemUUID\\\":\\\"1d4a94de-760c-40e1-8054-66d250f336ee\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 16:07:55 crc kubenswrapper[4830]: I0227 16:07:55.051754 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:07:55 crc kubenswrapper[4830]: I0227 16:07:55.051820 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:07:55 crc kubenswrapper[4830]: I0227 16:07:55.051839 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:07:55 crc kubenswrapper[4830]: I0227 16:07:55.051866 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:07:55 crc kubenswrapper[4830]: I0227 16:07:55.051886 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:07:55Z","lastTransitionTime":"2026-02-27T16:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:07:55 crc kubenswrapper[4830]: E0227 16:07:55.067988 4830 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:07:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:07:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:07:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:07:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:07:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:07:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:07:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:07:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"058e4d33-3c10-460a-8f66-1f2272cb9956\\\",\\\"systemUUID\\\":\\\"1d4a94de-760c-40e1-8054-66d250f336ee\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 16:07:55 crc kubenswrapper[4830]: E0227 16:07:55.068096 4830 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 27 16:07:55 crc kubenswrapper[4830]: E0227 16:07:55.068117 4830 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:07:55 crc kubenswrapper[4830]: I0227 16:07:55.108166 4830 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 16:07:55 crc kubenswrapper[4830]: I0227 16:07:55.109516 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:07:55 crc kubenswrapper[4830]: I0227 16:07:55.109577 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:07:55 crc kubenswrapper[4830]: I0227 16:07:55.109601 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:07:55 crc kubenswrapper[4830]: I0227 16:07:55.110571 4830 scope.go:117] "RemoveContainer" containerID="acf80f37f1e38e051b10c7a1c0eeef8f0c3db7fc0e5653c54d82da7956a8eed2" Feb 27 16:07:55 crc kubenswrapper[4830]: E0227 16:07:55.110851 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 27 16:07:55 crc kubenswrapper[4830]: E0227 16:07:55.169249 4830 kubelet_node_status.go:503] 
"Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:07:55 crc kubenswrapper[4830]: E0227 16:07:55.270082 4830 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:07:55 crc kubenswrapper[4830]: E0227 16:07:55.370403 4830 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:07:55 crc kubenswrapper[4830]: E0227 16:07:55.471522 4830 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:07:55 crc kubenswrapper[4830]: E0227 16:07:55.572686 4830 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:07:55 crc kubenswrapper[4830]: E0227 16:07:55.673108 4830 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:07:55 crc kubenswrapper[4830]: E0227 16:07:55.773423 4830 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:07:55 crc kubenswrapper[4830]: E0227 16:07:55.873754 4830 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:07:55 crc kubenswrapper[4830]: E0227 16:07:55.974622 4830 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:07:56 crc kubenswrapper[4830]: E0227 16:07:56.075221 4830 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:07:56 crc kubenswrapper[4830]: E0227 16:07:56.176584 4830 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:07:56 crc kubenswrapper[4830]: E0227 16:07:56.277498 4830 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:07:56 crc kubenswrapper[4830]: E0227 
16:07:56.378397 4830 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:07:56 crc kubenswrapper[4830]: E0227 16:07:56.479280 4830 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:07:56 crc kubenswrapper[4830]: E0227 16:07:56.580392 4830 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:07:56 crc kubenswrapper[4830]: E0227 16:07:56.681291 4830 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:07:56 crc kubenswrapper[4830]: E0227 16:07:56.782282 4830 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:07:56 crc kubenswrapper[4830]: E0227 16:07:56.882834 4830 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:07:56 crc kubenswrapper[4830]: E0227 16:07:56.984259 4830 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:07:57 crc kubenswrapper[4830]: E0227 16:07:57.084713 4830 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:07:57 crc kubenswrapper[4830]: E0227 16:07:57.185315 4830 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:07:57 crc kubenswrapper[4830]: E0227 16:07:57.286354 4830 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:07:57 crc kubenswrapper[4830]: E0227 16:07:57.387423 4830 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:07:57 crc kubenswrapper[4830]: E0227 16:07:57.488620 4830 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 
16:07:57 crc kubenswrapper[4830]: E0227 16:07:57.588990 4830 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:07:57 crc kubenswrapper[4830]: E0227 16:07:57.689860 4830 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:07:57 crc kubenswrapper[4830]: E0227 16:07:57.790217 4830 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:07:57 crc kubenswrapper[4830]: E0227 16:07:57.891034 4830 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:07:57 crc kubenswrapper[4830]: E0227 16:07:57.991202 4830 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:07:58 crc kubenswrapper[4830]: E0227 16:07:58.091852 4830 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:07:58 crc kubenswrapper[4830]: E0227 16:07:58.192439 4830 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:07:58 crc kubenswrapper[4830]: E0227 16:07:58.293004 4830 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:07:58 crc kubenswrapper[4830]: E0227 16:07:58.393614 4830 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:07:58 crc kubenswrapper[4830]: E0227 16:07:58.494556 4830 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:07:58 crc kubenswrapper[4830]: E0227 16:07:58.595390 4830 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:07:58 crc kubenswrapper[4830]: E0227 16:07:58.695910 4830 kubelet_node_status.go:503] "Error getting the current node from 
lister" err="node \"crc\" not found" Feb 27 16:07:58 crc kubenswrapper[4830]: E0227 16:07:58.796760 4830 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:07:58 crc kubenswrapper[4830]: E0227 16:07:58.897184 4830 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:07:58 crc kubenswrapper[4830]: E0227 16:07:58.997994 4830 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:07:59 crc kubenswrapper[4830]: I0227 16:07:59.033879 4830 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 27 16:07:59 crc kubenswrapper[4830]: E0227 16:07:59.099162 4830 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:07:59 crc kubenswrapper[4830]: E0227 16:07:59.200025 4830 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:07:59 crc kubenswrapper[4830]: E0227 16:07:59.301227 4830 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:07:59 crc kubenswrapper[4830]: I0227 16:07:59.305598 4830 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 27 16:07:59 crc kubenswrapper[4830]: E0227 16:07:59.401661 4830 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:07:59 crc kubenswrapper[4830]: E0227 16:07:59.501856 4830 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:07:59 crc kubenswrapper[4830]: E0227 16:07:59.602401 4830 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:07:59 crc kubenswrapper[4830]: E0227 16:07:59.702796 4830 kubelet_node_status.go:503] 
"Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:07:59 crc kubenswrapper[4830]: E0227 16:07:59.803445 4830 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:07:59 crc kubenswrapper[4830]: E0227 16:07:59.904183 4830 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:08:00 crc kubenswrapper[4830]: E0227 16:08:00.005028 4830 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:08:00 crc kubenswrapper[4830]: E0227 16:08:00.105583 4830 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:08:00 crc kubenswrapper[4830]: E0227 16:08:00.206727 4830 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:08:00 crc kubenswrapper[4830]: E0227 16:08:00.307854 4830 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:08:00 crc kubenswrapper[4830]: E0227 16:08:00.409068 4830 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:08:00 crc kubenswrapper[4830]: E0227 16:08:00.510221 4830 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:08:00 crc kubenswrapper[4830]: E0227 16:08:00.610819 4830 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:08:00 crc kubenswrapper[4830]: E0227 16:08:00.711593 4830 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:08:00 crc kubenswrapper[4830]: E0227 16:08:00.812023 4830 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:08:00 crc kubenswrapper[4830]: E0227 
16:08:00.913133 4830 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:08:01 crc kubenswrapper[4830]: E0227 16:08:01.014162 4830 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:08:01 crc kubenswrapper[4830]: E0227 16:08:01.115237 4830 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:08:01 crc kubenswrapper[4830]: E0227 16:08:01.215906 4830 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:08:01 crc kubenswrapper[4830]: E0227 16:08:01.317132 4830 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:08:01 crc kubenswrapper[4830]: E0227 16:08:01.418146 4830 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:08:01 crc kubenswrapper[4830]: E0227 16:08:01.519251 4830 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:08:01 crc kubenswrapper[4830]: E0227 16:08:01.620429 4830 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:08:01 crc kubenswrapper[4830]: E0227 16:08:01.721392 4830 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:08:01 crc kubenswrapper[4830]: E0227 16:08:01.822389 4830 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:08:01 crc kubenswrapper[4830]: E0227 16:08:01.923196 4830 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:08:02 crc kubenswrapper[4830]: E0227 16:08:02.023616 4830 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 
16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.080056 4830 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.126072 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.126124 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.126142 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.126164 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.126181 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:02Z","lastTransitionTime":"2026-02-27T16:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.229725 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.229786 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.229806 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.229834 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.229854 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:02Z","lastTransitionTime":"2026-02-27T16:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.333352 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.333418 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.333435 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.333459 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.333478 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:02Z","lastTransitionTime":"2026-02-27T16:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.343618 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.392732 4830 scope.go:117] "RemoveContainer" containerID="acf80f37f1e38e051b10c7a1c0eeef8f0c3db7fc0e5653c54d82da7956a8eed2" Feb 27 16:08:02 crc kubenswrapper[4830]: E0227 16:08:02.393092 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.407413 4830 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.436292 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.436338 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.436355 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.436378 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.436395 4830 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:02Z","lastTransitionTime":"2026-02-27T16:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.538613 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.538653 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.538668 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.538691 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.538710 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:02Z","lastTransitionTime":"2026-02-27T16:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.641350 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.641416 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.641434 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.641459 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.641475 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:02Z","lastTransitionTime":"2026-02-27T16:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.718047 4830 apiserver.go:52] "Watching apiserver" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.726483 4830 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.728491 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-fsrq9","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gqgb6","openshift-ovn-kubernetes/ovnkube-node-bf9lh","openshift-kube-apiserver/kube-apiserver-crc","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-image-registry/node-ca-p7298","openshift-machine-config-operator/machine-config-daemon-2tv5v","openshift-multus/multus-additional-cni-plugins-rgv8f","openshift-dns/node-resolver-fcddf","openshift-multus/network-metrics-daemon-kgdlg"] Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.729028 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.729209 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 16:08:02 crc kubenswrapper[4830]: E0227 16:08:02.729338 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.729841 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 16:08:02 crc kubenswrapper[4830]: E0227 16:08:02.729908 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.730238 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.730272 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.730899 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 16:08:02 crc kubenswrapper[4830]: E0227 16:08:02.730994 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.731040 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-bf9lh" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.731198 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-fsrq9" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.731831 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-p7298" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.732045 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kgdlg" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.732081 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-rgv8f" Feb 27 16:08:02 crc kubenswrapper[4830]: E0227 16:08:02.732105 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kgdlg" podUID="6ba2fe32-66e0-4bcd-a646-9d07c9a21c54" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.732187 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-fcddf" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.732449 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.732690 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gqgb6" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.737233 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.737469 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.737548 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.737688 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.737980 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.738022 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.738120 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.738171 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.737491 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 
16:08:02.738296 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.737480 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.737490 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.738670 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.738682 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.738862 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.739011 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.739074 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.739082 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.739147 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.739356 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 27 16:08:02 crc 
kubenswrapper[4830]: I0227 16:08:02.739452 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.739749 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.739777 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.739878 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.740058 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.740078 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.740095 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.740282 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.740514 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.740625 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.740715 4830 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.740820 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.740914 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.740980 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.741088 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.741144 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.741330 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.745462 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.745500 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.745516 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.745537 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.745555 4830 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:02Z","lastTransitionTime":"2026-02-27T16:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.766017 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.781538 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.798543 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d20f886-cfdb-48c7-9754-6b7255b1124f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a3f71ef2ccab58baa94377c8f63ded83ca5f73a4d7a24d754719670685c55a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec37d31a82aedaacdf47fa540d508e6ca53626d082c45933c11a72f41a819a8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6cb8e7a94967cd24e883272ba2f923a3adae43cc6105ea7506c1fba144de241\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://acf80f37f1e38e051b10c7a1c0eeef8f0c3db7fc0e5653c54d82da7956a8eed2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acf80f37f1e38e051b10c7a1c0eeef8f0c3db7fc0e5653c54d82da7956a8eed2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T16:07:52Z\\\",\\\"message\\\":\\\"g file observer\\\\nW0227 16:07:52.525113 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 16:07:52.525413 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 16:07:52.526578 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-70351561/tls.crt::/tmp/serving-cert-70351561/tls.key\\\\\\\"\\\\nI0227 16:07:52.790383 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 16:07:52.794547 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 16:07:52.794587 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 16:07:52.794627 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 16:07:52.794638 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 16:07:52.801197 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0227 16:07:52.801263 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 16:07:52.801293 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0227 16:07:52.801246 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0227 16:07:52.801325 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 
16:07:52.801373 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 16:07:52.801389 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 16:07:52.801396 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0227 16:07:52.804476 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T16:07:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97ffad51412be8330198e944d4f687380d5066b777b54e362e8f8b772b808c89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1fc7975111dde3841671f046742eea3d0eefcb040eee8aadb99a7ca8eba8bd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1fc7975111dde3841671f046742eea3d0eefcb040eee8aadb99a7ca8eba8bd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:06:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.803060 4830 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.813804 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.824508 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p7298" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"616ebd42-6bbe-4536-ba35-f8b07f2f11b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with 
unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x9g9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p7298\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.835637 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kgdlg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ba2fe32-66e0-4bcd-a646-9d07c9a21c54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9l8vg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9l8vg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kgdlg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.849265 4830 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.849310 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.849326 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.849350 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.849367 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:02Z","lastTransitionTime":"2026-02-27T16:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.850031 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.862402 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.886849 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fsrq9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb72b0f7-1d22-4d13-9653-b1607aa2235d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwzp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fsrq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.887390 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.887448 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.887474 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.887504 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod 
\"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.887531 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.887552 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.887574 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.887596 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.887617 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.887639 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.887663 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.887687 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.887708 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.887732 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.887756 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: 
\"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.887779 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.887802 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.887829 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.887852 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.887872 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.887893 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.887914 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.887938 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.887986 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.888012 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.888036 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " 
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.888057 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.888078 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.888100 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.888125 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.888146 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.888169 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.888191 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.888219 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.888394 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.888421 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") "
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.888444 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.888468 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.888490 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.888492 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.888530 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.888513 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.888632 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.888672 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.888706 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.888744 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.888778 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") "
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.888913 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.888935 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.888998 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.889034 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.889067 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.889102 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.889134 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.889169 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") "
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.889200 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.889232 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.889263 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.889295 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.889326 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.889361 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.889398 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.889438 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.889470 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.889502 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.889535 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") "
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.889599 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") "
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.889632 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.889666 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.889829 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.889866 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.889900 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.889931 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") "
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.890008 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") "
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.890045 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.890080 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.892917 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.892981 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.893326 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.893366 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") "
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.893676 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.893709 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.893963 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.894003 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") "
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.894247 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.894287 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.894458 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.894492 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.894521 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.894705 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.894735 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.894917 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.894970 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.895154 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.895184 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.895328 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.895364 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") "
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.895403 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") "
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.895535 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.895580 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.895707 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.895744 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.895914 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.895979 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.896158 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.896218 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.896343 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.896395 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.896415 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.896558 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.896616 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.896740 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.896809 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.896877 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.896978 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.897030 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.897295 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.897331 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.897411 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.897438 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.897461 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") "
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.897532 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.897555 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.897611 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.897632 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.897705 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") "
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.897727 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") "
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.897749 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.897856 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.897879 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.898095 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.898119 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.898215 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") "
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.898238 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.898258 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.898446 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.898471 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.898553 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.898572 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.898713 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Feb 27 16:08:02
crc kubenswrapper[4830]: I0227 16:08:02.898977 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.899011 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.889030 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.889497 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.889853 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.890329 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.890457 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.890598 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.890606 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.891853 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.892161 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.892253 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.892284 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.892413 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.892730 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.892845 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.892809 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.892910 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.893199 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.893248 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.893296 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.893492 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.893662 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.893831 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.893934 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.894077 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.899582 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.894079 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.894463 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.895013 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.894992 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.895129 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.895195 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.895509 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.895523 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.895815 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.896110 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.896222 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.899742 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.896503 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.896686 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.896678 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.896763 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.897141 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.897271 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.897512 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.897794 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.900100 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.897899 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.897998 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.898137 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.898404 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.898428 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.899019 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.899063 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.897154 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.899285 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.900812 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.902918 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.903250 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.903547 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.903790 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.904359 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.904689 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.904963 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.905277 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.906103 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.906229 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.906433 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.906401 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.906501 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.906727 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.906983 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.907035 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.907044 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.907325 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.908396 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.907532 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.908153 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.908213 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.908245 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.908897 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.908930 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.908854 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.909485 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.909786 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.909825 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.910151 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.910648 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.910747 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.910762 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.910819 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.910844 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.910989 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.911304 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.911317 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.911760 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.911770 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.911906 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.912123 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.912587 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.912726 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.912805 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.912754 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.912613 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.913370 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.913588 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.914777 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.913627 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.900636 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.915817 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.915904 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.916011 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.916142 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.916595 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.916669 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.916735 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.916789 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 27 
16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.916829 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.916872 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.917057 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.917104 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.917146 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.917187 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: 
\"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.917243 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.917270 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.917279 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.917311 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.917527 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.917848 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.918371 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.918372 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.918485 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.918534 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.918570 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.918657 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.918695 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.918724 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.918774 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") "
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.918801 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") "
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.918827 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.918853 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.918879 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.918906 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.918934 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.918925 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.918979 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") "
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.919007 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.919018 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.919037 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.919111 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.919156 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") "
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.919195 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.919233 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.919269 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.919309 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.919349 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.919385 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.919424 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.919461 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.919496 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.919532 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.919568 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.919603 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.919641 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.919677 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.919713 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.919750 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.919786 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.919826 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") "
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.919866 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.919903 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.919941 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.920024 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.920063 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") "
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.920100 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.920135 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.920170 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.920207 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.920312 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.920360 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904-host-kubelet\") pod \"ovnkube-node-bf9lh\" (UID: \"2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904\") " pod="openshift-ovn-kubernetes/ovnkube-node-bf9lh"
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.920393 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904-host-slash\") pod \"ovnkube-node-bf9lh\" (UID: \"2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904\") " pod="openshift-ovn-kubernetes/ovnkube-node-bf9lh"
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.920428 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/616ebd42-6bbe-4536-ba35-f8b07f2f11b1-host\") pod \"node-ca-p7298\" (UID: \"616ebd42-6bbe-4536-ba35-f8b07f2f11b1\") " pod="openshift-image-registry/node-ca-p7298"
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.920462 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/bb72b0f7-1d22-4d13-9653-b1607aa2235d-multus-cni-dir\") pod \"multus-fsrq9\" (UID: \"bb72b0f7-1d22-4d13-9653-b1607aa2235d\") " pod="openshift-multus/multus-fsrq9"
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.920496 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/bb72b0f7-1d22-4d13-9653-b1607aa2235d-host-run-multus-certs\") pod \"multus-fsrq9\" (UID: \"bb72b0f7-1d22-4d13-9653-b1607aa2235d\") " pod="openshift-multus/multus-fsrq9"
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.920532 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cd5a4c5b-2008-4354-b26e-8763a631e55c-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-gqgb6\" (UID: \"cd5a4c5b-2008-4354-b26e-8763a631e55c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gqgb6"
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.920571 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.920607 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/672682a0-e75f-4d6c-b4f2-542944327497-cni-binary-copy\") pod \"multus-additional-cni-plugins-rgv8f\" (UID: \"672682a0-e75f-4d6c-b4f2-542944327497\") " pod="openshift-multus/multus-additional-cni-plugins-rgv8f"
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.920645 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/672682a0-e75f-4d6c-b4f2-542944327497-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-rgv8f\" (UID: \"672682a0-e75f-4d6c-b4f2-542944327497\") " pod="openshift-multus/multus-additional-cni-plugins-rgv8f"
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.920681 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/bb72b0f7-1d22-4d13-9653-b1607aa2235d-multus-daemon-config\") pod \"multus-fsrq9\" (UID: \"bb72b0f7-1d22-4d13-9653-b1607aa2235d\") " pod="openshift-multus/multus-fsrq9"
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.920715 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.920753 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.920790 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904-var-lib-openvswitch\") pod \"ovnkube-node-bf9lh\" (UID: \"2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904\") " pod="openshift-ovn-kubernetes/ovnkube-node-bf9lh"
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.920829 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904-etc-openvswitch\") pod \"ovnkube-node-bf9lh\" (UID: \"2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904\") " pod="openshift-ovn-kubernetes/ovnkube-node-bf9lh"
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.920867 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/616ebd42-6bbe-4536-ba35-f8b07f2f11b1-serviceca\") pod \"node-ca-p7298\" (UID: \"616ebd42-6bbe-4536-ba35-f8b07f2f11b1\") " pod="openshift-image-registry/node-ca-p7298"
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.920902 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/bb72b0f7-1d22-4d13-9653-b1607aa2235d-host-var-lib-cni-multus\") pod \"multus-fsrq9\" (UID: \"bb72b0f7-1d22-4d13-9653-b1607aa2235d\") " pod="openshift-multus/multus-fsrq9"
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.920937 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.920999 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cd5a4c5b-2008-4354-b26e-8763a631e55c-env-overrides\") pod \"ovnkube-control-plane-749d76644c-gqgb6\" (UID: \"cd5a4c5b-2008-4354-b26e-8763a631e55c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gqgb6"
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.921033 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zrsv\" (UniqueName: \"kubernetes.io/projected/6adbc0c4-e467-41f1-9190-d0dd3693eba6-kube-api-access-8zrsv\") pod \"node-resolver-fcddf\" (UID: \"6adbc0c4-e467-41f1-9190-d0dd3693eba6\") " pod="openshift-dns/node-resolver-fcddf"
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.921066 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904-run-ovn\") pod \"ovnkube-node-bf9lh\" (UID: \"2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904\") " pod="openshift-ovn-kubernetes/ovnkube-node-bf9lh"
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.921099 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904-ovnkube-config\") pod \"ovnkube-node-bf9lh\" (UID: \"2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904\") " pod="openshift-ovn-kubernetes/ovnkube-node-bf9lh"
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.921134 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tf9wc\" (UniqueName: \"kubernetes.io/projected/2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904-kube-api-access-tf9wc\") pod \"ovnkube-node-bf9lh\" (UID: \"2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904\") " pod="openshift-ovn-kubernetes/ovnkube-node-bf9lh"
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.921245 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904-env-overrides\") pod \"ovnkube-node-bf9lh\" (UID: \"2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904\") " pod="openshift-ovn-kubernetes/ovnkube-node-bf9lh"
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.921282 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/bb72b0f7-1d22-4d13-9653-b1607aa2235d-host-run-k8s-cni-cncf-io\") pod \"multus-fsrq9\" (UID: \"bb72b0f7-1d22-4d13-9653-b1607aa2235d\") " pod="openshift-multus/multus-fsrq9"
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.921325 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/bb72b0f7-1d22-4d13-9653-b1607aa2235d-multus-conf-dir\") pod \"multus-fsrq9\" (UID: \"bb72b0f7-1d22-4d13-9653-b1607aa2235d\") " pod="openshift-multus/multus-fsrq9"
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.921362 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/6adbc0c4-e467-41f1-9190-d0dd3693eba6-hosts-file\") pod \"node-resolver-fcddf\" (UID: \"6adbc0c4-e467-41f1-9190-d0dd3693eba6\") " pod="openshift-dns/node-resolver-fcddf"
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.921403 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.921440 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904-host-cni-bin\") pod \"ovnkube-node-bf9lh\" (UID: \"2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904\") " pod="openshift-ovn-kubernetes/ovnkube-node-bf9lh"
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.921475 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/bb72b0f7-1d22-4d13-9653-b1607aa2235d-host-var-lib-cni-bin\") pod \"multus-fsrq9\" (UID: \"bb72b0f7-1d22-4d13-9653-b1607aa2235d\") " pod="openshift-multus/multus-fsrq9"
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.921509 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bb72b0f7-1d22-4d13-9653-b1607aa2235d-etc-kubernetes\") pod \"multus-fsrq9\" (UID: \"bb72b0f7-1d22-4d13-9653-b1607aa2235d\") " pod="openshift-multus/multus-fsrq9"
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.921578 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.921617 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.921655 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904-systemd-units\") pod \"ovnkube-node-bf9lh\" (UID: \"2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904\") " pod="openshift-ovn-kubernetes/ovnkube-node-bf9lh"
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.921692 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-bf9lh\" (UID: \"2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904\") " pod="openshift-ovn-kubernetes/ovnkube-node-bf9lh"
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.921727 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/672682a0-e75f-4d6c-b4f2-542944327497-os-release\") pod \"multus-additional-cni-plugins-rgv8f\" (UID: \"672682a0-e75f-4d6c-b4f2-542944327497\") " pod="openshift-multus/multus-additional-cni-plugins-rgv8f"
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.921761 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/bb72b0f7-1d22-4d13-9653-b1607aa2235d-hostroot\") pod \"multus-fsrq9\" (UID: \"bb72b0f7-1d22-4d13-9653-b1607aa2235d\") " pod="openshift-multus/multus-fsrq9"
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.921794 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/00d6b7ce-4757-4275-8345-60c1b546ce25-proxy-tls\") pod \"machine-config-daemon-2tv5v\" (UID: \"00d6b7ce-4757-4275-8345-60c1b546ce25\") " pod="openshift-machine-config-operator/machine-config-daemon-2tv5v"
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.921830 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/00d6b7ce-4757-4275-8345-60c1b546ce25-mcd-auth-proxy-config\") pod \"machine-config-daemon-2tv5v\" (UID: \"00d6b7ce-4757-4275-8345-60c1b546ce25\") " pod="openshift-machine-config-operator/machine-config-daemon-2tv5v"
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.921864 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904-host-run-ovn-kubernetes\") pod \"ovnkube-node-bf9lh\" (UID: \"2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904\") " pod="openshift-ovn-kubernetes/ovnkube-node-bf9lh"
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.921901 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwzp4\" (UniqueName: \"kubernetes.io/projected/bb72b0f7-1d22-4d13-9653-b1607aa2235d-kube-api-access-xwzp4\") pod \"multus-fsrq9\" (UID: \"bb72b0f7-1d22-4d13-9653-b1607aa2235d\") " pod="openshift-multus/multus-fsrq9"
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.921936 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904-node-log\") pod \"ovnkube-node-bf9lh\" (UID: \"2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904\") " pod="openshift-ovn-kubernetes/ovnkube-node-bf9lh"
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.921996 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904-log-socket\") pod \"ovnkube-node-bf9lh\" (UID: \"2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904\") " pod="openshift-ovn-kubernetes/ovnkube-node-bf9lh"
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.922030 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/672682a0-e75f-4d6c-b4f2-542944327497-system-cni-dir\") pod \"multus-additional-cni-plugins-rgv8f\" (UID: \"672682a0-e75f-4d6c-b4f2-542944327497\") " pod="openshift-multus/multus-additional-cni-plugins-rgv8f"
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.922064 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/672682a0-e75f-4d6c-b4f2-542944327497-tuning-conf-dir\") pod \"multus-additional-cni-plugins-rgv8f\" (UID: \"672682a0-e75f-4d6c-b4f2-542944327497\") " pod="openshift-multus/multus-additional-cni-plugins-rgv8f"
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.922097 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/bb72b0f7-1d22-4d13-9653-b1607aa2235d-system-cni-dir\") pod \"multus-fsrq9\" (UID: \"bb72b0f7-1d22-4d13-9653-b1607aa2235d\") " pod="openshift-multus/multus-fsrq9"
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.922132 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9l8vg\" (UniqueName: \"kubernetes.io/projected/6ba2fe32-66e0-4bcd-a646-9d07c9a21c54-kube-api-access-9l8vg\") pod \"network-metrics-daemon-kgdlg\" (UID: \"6ba2fe32-66e0-4bcd-a646-9d07c9a21c54\") " pod="openshift-multus/network-metrics-daemon-kgdlg"
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.922168 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87p77\" (UniqueName: \"kubernetes.io/projected/cd5a4c5b-2008-4354-b26e-8763a631e55c-kube-api-access-87p77\") pod \"ovnkube-control-plane-749d76644c-gqgb6\" (UID: \"cd5a4c5b-2008-4354-b26e-8763a631e55c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gqgb6"
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.922202 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904-ovn-node-metrics-cert\") pod \"ovnkube-node-bf9lh\" (UID: \"2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904\") " pod="openshift-ovn-kubernetes/ovnkube-node-bf9lh"
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.922240 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/bb72b0f7-1d22-4d13-9653-b1607aa2235d-cnibin\") pod \"multus-fsrq9\" (UID: \"bb72b0f7-1d22-4d13-9653-b1607aa2235d\") " pod="openshift-multus/multus-fsrq9"
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.922274 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/bb72b0f7-1d22-4d13-9653-b1607aa2235d-host-var-lib-kubelet\") pod \"multus-fsrq9\" (UID: \"bb72b0f7-1d22-4d13-9653-b1607aa2235d\") " pod="openshift-multus/multus-fsrq9"
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.922313 4830 reconciler_common.go:218]
"operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.922348 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/00d6b7ce-4757-4275-8345-60c1b546ce25-rootfs\") pod \"machine-config-daemon-2tv5v\" (UID: \"00d6b7ce-4757-4275-8345-60c1b546ce25\") " pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.922383 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/672682a0-e75f-4d6c-b4f2-542944327497-cnibin\") pod \"multus-additional-cni-plugins-rgv8f\" (UID: \"672682a0-e75f-4d6c-b4f2-542944327497\") " pod="openshift-multus/multus-additional-cni-plugins-rgv8f" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.922424 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.922464 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" 
Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.922500 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904-ovnkube-script-lib\") pod \"ovnkube-node-bf9lh\" (UID: \"2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904\") " pod="openshift-ovn-kubernetes/ovnkube-node-bf9lh" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.922535 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/bb72b0f7-1d22-4d13-9653-b1607aa2235d-os-release\") pod \"multus-fsrq9\" (UID: \"bb72b0f7-1d22-4d13-9653-b1607aa2235d\") " pod="openshift-multus/multus-fsrq9" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.922571 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqztt\" (UniqueName: \"kubernetes.io/projected/00d6b7ce-4757-4275-8345-60c1b546ce25-kube-api-access-sqztt\") pod \"machine-config-daemon-2tv5v\" (UID: \"00d6b7ce-4757-4275-8345-60c1b546ce25\") " pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.922611 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.922648 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904-host-run-netns\") pod \"ovnkube-node-bf9lh\" (UID: \"2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904\") 
" pod="openshift-ovn-kubernetes/ovnkube-node-bf9lh" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.922684 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904-host-cni-netd\") pod \"ovnkube-node-bf9lh\" (UID: \"2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904\") " pod="openshift-ovn-kubernetes/ovnkube-node-bf9lh" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.922722 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.922756 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904-run-systemd\") pod \"ovnkube-node-bf9lh\" (UID: \"2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904\") " pod="openshift-ovn-kubernetes/ovnkube-node-bf9lh" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.922794 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9g9d\" (UniqueName: \"kubernetes.io/projected/616ebd42-6bbe-4536-ba35-f8b07f2f11b1-kube-api-access-x9g9d\") pod \"node-ca-p7298\" (UID: \"616ebd42-6bbe-4536-ba35-f8b07f2f11b1\") " pod="openshift-image-registry/node-ca-p7298" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.922835 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6ba2fe32-66e0-4bcd-a646-9d07c9a21c54-metrics-certs\") pod \"network-metrics-daemon-kgdlg\" (UID: 
\"6ba2fe32-66e0-4bcd-a646-9d07c9a21c54\") " pod="openshift-multus/network-metrics-daemon-kgdlg" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.922868 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/bb72b0f7-1d22-4d13-9653-b1607aa2235d-cni-binary-copy\") pod \"multus-fsrq9\" (UID: \"bb72b0f7-1d22-4d13-9653-b1607aa2235d\") " pod="openshift-multus/multus-fsrq9" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.922902 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/bb72b0f7-1d22-4d13-9653-b1607aa2235d-multus-socket-dir-parent\") pod \"multus-fsrq9\" (UID: \"bb72b0f7-1d22-4d13-9653-b1607aa2235d\") " pod="openshift-multus/multus-fsrq9" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.922937 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/bb72b0f7-1d22-4d13-9653-b1607aa2235d-host-run-netns\") pod \"multus-fsrq9\" (UID: \"bb72b0f7-1d22-4d13-9653-b1607aa2235d\") " pod="openshift-multus/multus-fsrq9" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.923005 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.923045 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/cd5a4c5b-2008-4354-b26e-8763a631e55c-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-gqgb6\" (UID: \"cd5a4c5b-2008-4354-b26e-8763a631e55c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gqgb6" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.919050 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.923093 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: E0227 16:08:02.923694 4830 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.923940 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: E0227 16:08:02.924102 4830 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.924390 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.924901 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.924716 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.925370 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). 
InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.925475 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.925480 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.925509 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.925843 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.925907 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.919345 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.919852 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.920014 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.920029 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.926003 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.921269 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.921851 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.922235 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.922505 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.922613 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.926144 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.923000 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.923014 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.923018 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.926536 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.926699 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.926810 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.927040 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.927208 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.927262 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.927529 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.927920 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.928163 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.928429 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.928547 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.928571 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rgv8f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"672682a0-e75f-4d6c-b4f2-542944327497\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rgv8f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.929084 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.929231 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.929271 4830 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.923082 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904-run-openvswitch\") pod \"ovnkube-node-bf9lh\" (UID: \"2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904\") " pod="openshift-ovn-kubernetes/ovnkube-node-bf9lh" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.919330 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.929894 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.929900 4830 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.930008 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.930219 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.930248 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.930450 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.930551 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.930732 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.930782 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.931078 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.931217 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.931560 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.932099 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.935150 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpb8z\" (UniqueName: \"kubernetes.io/projected/672682a0-e75f-4d6c-b4f2-542944327497-kube-api-access-dpb8z\") pod \"multus-additional-cni-plugins-rgv8f\" (UID: \"672682a0-e75f-4d6c-b4f2-542944327497\") " pod="openshift-multus/multus-additional-cni-plugins-rgv8f" Feb 27 16:08:02 crc kubenswrapper[4830]: E0227 16:08:02.936628 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 16:08:03.436579268 +0000 UTC m=+79.525851771 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:08:02 crc kubenswrapper[4830]: E0227 16:08:02.936775 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-27 16:08:03.436731751 +0000 UTC m=+79.526004464 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 27 16:08:02 crc kubenswrapper[4830]: E0227 16:08:02.936836 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-27 16:08:03.436814763 +0000 UTC m=+79.526087486 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.938432 4830 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.938469 4830 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.938489 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.938511 4830 reconciler_common.go:293] "Volume detached for 
volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.938531 4830 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.938550 4830 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.938568 4830 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.938586 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.938603 4830 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.938620 4830 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.938637 4830 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: 
\"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.938654 4830 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.938672 4830 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.938690 4830 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.938708 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.938727 4830 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.938744 4830 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.938760 4830 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Feb 27 
16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.938777 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.938794 4830 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.938811 4830 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.938830 4830 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.938849 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.938866 4830 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.938882 4830 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.938900 4830 reconciler_common.go:293] "Volume 
detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.938916 4830 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.938936 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.939004 4830 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.939025 4830 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.939042 4830 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.939060 4830 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.939078 4830 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") 
on node \"crc\" DevicePath \"\"" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.939095 4830 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.939114 4830 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.939131 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.939152 4830 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.939172 4830 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.939188 4830 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.939205 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.939223 4830 
reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.939240 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.939256 4830 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.939273 4830 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.939291 4830 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.939310 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.939428 4830 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.939445 4830 reconciler_common.go:293] "Volume detached for volume 
\"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.939475 4830 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.939492 4830 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.939509 4830 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.939527 4830 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.939543 4830 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.939560 4830 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.939577 4830 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Feb 27 
16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.939594 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.939612 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.939632 4830 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.939649 4830 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.939666 4830 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.939684 4830 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.939701 4830 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.939718 4830 reconciler_common.go:293] "Volume detached for volume 
\"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.939735 4830 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.939894 4830 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.939914 4830 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.939935 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.939990 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.940013 4830 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.940031 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on 
node \"crc\" DevicePath \"\"" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.940048 4830 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.940067 4830 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.940129 4830 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.940149 4830 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.940362 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.941248 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). 
InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.942621 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.942811 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.943393 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.943522 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.944523 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.945168 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.946409 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.946498 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.946968 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.951012 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.955217 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.955530 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.955595 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.955621 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.955654 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:02 crc kubenswrapper[4830]: 
I0227 16:08:02.955680 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:02Z","lastTransitionTime":"2026-02-27T16:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.958842 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.960352 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.960404 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.962249 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.962350 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fcddf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6adbc0c4-e467-41f1-9190-d0dd3693eba6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zrsv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fcddf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 16:08:02 crc kubenswrapper[4830]: E0227 16:08:02.962739 4830 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 27 16:08:02 crc kubenswrapper[4830]: E0227 16:08:02.962763 4830 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 27 16:08:02 crc kubenswrapper[4830]: E0227 16:08:02.962777 4830 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod 
openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.962773 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.962984 4830 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.963018 4830 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.963039 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.963056 4830 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.963067 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: 
\"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.963080 4830 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.963095 4830 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.963107 4830 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.963120 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.963133 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.963145 4830 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.963157 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node 
\"crc\" DevicePath \"\"" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.963167 4830 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.963182 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.963193 4830 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.963205 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.963219 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.963231 4830 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.963246 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:02 crc 
kubenswrapper[4830]: E0227 16:08:02.963268 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-27 16:08:03.463252469 +0000 UTC m=+79.552524932 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.963281 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.963292 4830 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.963303 4830 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.963312 4830 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.963322 4830 reconciler_common.go:293] "Volume detached for volume 
\"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.963332 4830 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.963341 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.963365 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.963374 4830 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.963383 4830 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.963393 4830 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.963402 4830 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.963410 4830 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.963419 4830 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.963430 4830 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.963439 4830 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.963449 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.963459 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.963467 4830 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:02 crc 
kubenswrapper[4830]: I0227 16:08:02.963477 4830 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.963488 4830 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.963496 4830 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.963505 4830 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.963513 4830 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.963522 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.963531 4830 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.963539 4830 reconciler_common.go:293] "Volume detached for 
volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.963548 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.963557 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.963566 4830 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.963575 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.963584 4830 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.963593 4830 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.963602 4830 reconciler_common.go:293] "Volume detached for volume 
\"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.963612 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.963623 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.963633 4830 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.963641 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.963650 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.963658 4830 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.963667 4830 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.963648 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.963675 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.963711 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.963726 4830 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.963738 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.963751 4830 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 
16:08:02.963762 4830 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.963774 4830 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.963785 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.963797 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.963809 4830 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.963822 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.963834 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.963847 4830 reconciler_common.go:293] "Volume detached for volume 
\"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.963860 4830 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.963871 4830 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.963882 4830 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.963893 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.963905 4830 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.963916 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.963929 4830 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.963968 4830 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.963980 4830 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.963991 4830 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.964002 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.964013 4830 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.964024 4830 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.964034 4830 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" 
DevicePath \"\"" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.964045 4830 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:02 crc kubenswrapper[4830]: E0227 16:08:02.965504 4830 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 27 16:08:02 crc kubenswrapper[4830]: E0227 16:08:02.965573 4830 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 27 16:08:02 crc kubenswrapper[4830]: E0227 16:08:02.965603 4830 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 16:08:02 crc kubenswrapper[4830]: E0227 16:08:02.965690 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-27 16:08:03.465660218 +0000 UTC m=+79.554932721 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.967028 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.967074 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.967774 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.968849 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.975757 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.980705 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.986017 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.987231 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 16:08:02 crc kubenswrapper[4830]: I0227 16:08:02.995513 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.016418 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bf9lh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fals
e,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/v
ar/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvs
witch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bf9lh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.027548 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00d6b7ce-4757-4275-8345-60c1b546ce25\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqztt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqztt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2tv5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.037107 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gqgb6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd5a4c5b-2008-4354-b26e-8763a631e55c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87p77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87p77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gqgb6\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.059123 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.059180 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.059202 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.059231 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.059255 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:03Z","lastTransitionTime":"2026-02-27T16:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.065317 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904-run-openvswitch\") pod \"ovnkube-node-bf9lh\" (UID: \"2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904\") " pod="openshift-ovn-kubernetes/ovnkube-node-bf9lh" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.065393 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dpb8z\" (UniqueName: \"kubernetes.io/projected/672682a0-e75f-4d6c-b4f2-542944327497-kube-api-access-dpb8z\") pod \"multus-additional-cni-plugins-rgv8f\" (UID: \"672682a0-e75f-4d6c-b4f2-542944327497\") " pod="openshift-multus/multus-additional-cni-plugins-rgv8f" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.065436 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904-host-slash\") pod \"ovnkube-node-bf9lh\" (UID: \"2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904\") " pod="openshift-ovn-kubernetes/ovnkube-node-bf9lh" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.065471 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/616ebd42-6bbe-4536-ba35-f8b07f2f11b1-host\") pod \"node-ca-p7298\" (UID: \"616ebd42-6bbe-4536-ba35-f8b07f2f11b1\") " pod="openshift-image-registry/node-ca-p7298" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.065526 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/bb72b0f7-1d22-4d13-9653-b1607aa2235d-multus-cni-dir\") pod \"multus-fsrq9\" (UID: \"bb72b0f7-1d22-4d13-9653-b1607aa2235d\") " pod="openshift-multus/multus-fsrq9" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 
16:08:03.065559 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/bb72b0f7-1d22-4d13-9653-b1607aa2235d-host-run-multus-certs\") pod \"multus-fsrq9\" (UID: \"bb72b0f7-1d22-4d13-9653-b1607aa2235d\") " pod="openshift-multus/multus-fsrq9" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.065401 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.065691 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/bb72b0f7-1d22-4d13-9653-b1607aa2235d-multus-cni-dir\") pod \"multus-fsrq9\" (UID: \"bb72b0f7-1d22-4d13-9653-b1607aa2235d\") " pod="openshift-multus/multus-fsrq9" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.065735 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/bb72b0f7-1d22-4d13-9653-b1607aa2235d-host-run-multus-certs\") pod \"multus-fsrq9\" (UID: \"bb72b0f7-1d22-4d13-9653-b1607aa2235d\") " pod="openshift-multus/multus-fsrq9" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.065607 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904-host-slash\") pod \"ovnkube-node-bf9lh\" (UID: \"2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904\") " pod="openshift-ovn-kubernetes/ovnkube-node-bf9lh" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.065767 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/616ebd42-6bbe-4536-ba35-f8b07f2f11b1-host\") pod \"node-ca-p7298\" (UID: \"616ebd42-6bbe-4536-ba35-f8b07f2f11b1\") " pod="openshift-image-registry/node-ca-p7298" Feb 27 16:08:03 crc 
kubenswrapper[4830]: I0227 16:08:03.065596 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cd5a4c5b-2008-4354-b26e-8763a631e55c-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-gqgb6\" (UID: \"cd5a4c5b-2008-4354-b26e-8763a631e55c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gqgb6" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.065924 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904-host-kubelet\") pod \"ovnkube-node-bf9lh\" (UID: \"2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904\") " pod="openshift-ovn-kubernetes/ovnkube-node-bf9lh" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.066027 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/672682a0-e75f-4d6c-b4f2-542944327497-cni-binary-copy\") pod \"multus-additional-cni-plugins-rgv8f\" (UID: \"672682a0-e75f-4d6c-b4f2-542944327497\") " pod="openshift-multus/multus-additional-cni-plugins-rgv8f" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.066042 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904-host-kubelet\") pod \"ovnkube-node-bf9lh\" (UID: \"2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904\") " pod="openshift-ovn-kubernetes/ovnkube-node-bf9lh" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.066081 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/672682a0-e75f-4d6c-b4f2-542944327497-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-rgv8f\" (UID: \"672682a0-e75f-4d6c-b4f2-542944327497\") " pod="openshift-multus/multus-additional-cni-plugins-rgv8f" Feb 27 
16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.066064 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904-run-openvswitch\") pod \"ovnkube-node-bf9lh\" (UID: \"2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904\") " pod="openshift-ovn-kubernetes/ovnkube-node-bf9lh" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.066283 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/bb72b0f7-1d22-4d13-9653-b1607aa2235d-multus-daemon-config\") pod \"multus-fsrq9\" (UID: \"bb72b0f7-1d22-4d13-9653-b1607aa2235d\") " pod="openshift-multus/multus-fsrq9" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.066422 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.066489 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904-etc-openvswitch\") pod \"ovnkube-node-bf9lh\" (UID: \"2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904\") " pod="openshift-ovn-kubernetes/ovnkube-node-bf9lh" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.066545 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/616ebd42-6bbe-4536-ba35-f8b07f2f11b1-serviceca\") pod \"node-ca-p7298\" (UID: \"616ebd42-6bbe-4536-ba35-f8b07f2f11b1\") " pod="openshift-image-registry/node-ca-p7298" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.066601 4830 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/bb72b0f7-1d22-4d13-9653-b1607aa2235d-host-var-lib-cni-multus\") pod \"multus-fsrq9\" (UID: \"bb72b0f7-1d22-4d13-9653-b1607aa2235d\") " pod="openshift-multus/multus-fsrq9" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.066660 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.066720 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cd5a4c5b-2008-4354-b26e-8763a631e55c-env-overrides\") pod \"ovnkube-control-plane-749d76644c-gqgb6\" (UID: \"cd5a4c5b-2008-4354-b26e-8763a631e55c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gqgb6" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.066774 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904-var-lib-openvswitch\") pod \"ovnkube-node-bf9lh\" (UID: \"2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904\") " pod="openshift-ovn-kubernetes/ovnkube-node-bf9lh" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.066782 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.066819 4830 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8zrsv\" (UniqueName: \"kubernetes.io/projected/6adbc0c4-e467-41f1-9190-d0dd3693eba6-kube-api-access-8zrsv\") pod \"node-resolver-fcddf\" (UID: \"6adbc0c4-e467-41f1-9190-d0dd3693eba6\") " pod="openshift-dns/node-resolver-fcddf" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.066867 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904-run-ovn\") pod \"ovnkube-node-bf9lh\" (UID: \"2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904\") " pod="openshift-ovn-kubernetes/ovnkube-node-bf9lh" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.066910 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904-ovnkube-config\") pod \"ovnkube-node-bf9lh\" (UID: \"2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904\") " pod="openshift-ovn-kubernetes/ovnkube-node-bf9lh" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.066991 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tf9wc\" (UniqueName: \"kubernetes.io/projected/2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904-kube-api-access-tf9wc\") pod \"ovnkube-node-bf9lh\" (UID: \"2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904\") " pod="openshift-ovn-kubernetes/ovnkube-node-bf9lh" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.067044 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/bb72b0f7-1d22-4d13-9653-b1607aa2235d-host-run-k8s-cni-cncf-io\") pod \"multus-fsrq9\" (UID: \"bb72b0f7-1d22-4d13-9653-b1607aa2235d\") " pod="openshift-multus/multus-fsrq9" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.067091 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/bb72b0f7-1d22-4d13-9653-b1607aa2235d-multus-conf-dir\") pod \"multus-fsrq9\" (UID: \"bb72b0f7-1d22-4d13-9653-b1607aa2235d\") " pod="openshift-multus/multus-fsrq9" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.067137 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/6adbc0c4-e467-41f1-9190-d0dd3693eba6-hosts-file\") pod \"node-resolver-fcddf\" (UID: \"6adbc0c4-e467-41f1-9190-d0dd3693eba6\") " pod="openshift-dns/node-resolver-fcddf" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.067182 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904-host-cni-bin\") pod \"ovnkube-node-bf9lh\" (UID: \"2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904\") " pod="openshift-ovn-kubernetes/ovnkube-node-bf9lh" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.067225 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904-env-overrides\") pod \"ovnkube-node-bf9lh\" (UID: \"2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904\") " pod="openshift-ovn-kubernetes/ovnkube-node-bf9lh" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.067240 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/672682a0-e75f-4d6c-b4f2-542944327497-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-rgv8f\" (UID: \"672682a0-e75f-4d6c-b4f2-542944327497\") " pod="openshift-multus/multus-additional-cni-plugins-rgv8f" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.067313 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/bb72b0f7-1d22-4d13-9653-b1607aa2235d-host-var-lib-cni-bin\") pod \"multus-fsrq9\" (UID: \"bb72b0f7-1d22-4d13-9653-b1607aa2235d\") " pod="openshift-multus/multus-fsrq9" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.067360 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bb72b0f7-1d22-4d13-9653-b1607aa2235d-etc-kubernetes\") pod \"multus-fsrq9\" (UID: \"bb72b0f7-1d22-4d13-9653-b1607aa2235d\") " pod="openshift-multus/multus-fsrq9" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.067406 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/672682a0-e75f-4d6c-b4f2-542944327497-os-release\") pod \"multus-additional-cni-plugins-rgv8f\" (UID: \"672682a0-e75f-4d6c-b4f2-542944327497\") " pod="openshift-multus/multus-additional-cni-plugins-rgv8f" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.067468 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/bb72b0f7-1d22-4d13-9653-b1607aa2235d-hostroot\") pod \"multus-fsrq9\" (UID: \"bb72b0f7-1d22-4d13-9653-b1607aa2235d\") " pod="openshift-multus/multus-fsrq9" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.067512 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/00d6b7ce-4757-4275-8345-60c1b546ce25-proxy-tls\") pod \"machine-config-daemon-2tv5v\" (UID: \"00d6b7ce-4757-4275-8345-60c1b546ce25\") " pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.067567 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/00d6b7ce-4757-4275-8345-60c1b546ce25-mcd-auth-proxy-config\") pod 
\"machine-config-daemon-2tv5v\" (UID: \"00d6b7ce-4757-4275-8345-60c1b546ce25\") " pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.067611 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904-systemd-units\") pod \"ovnkube-node-bf9lh\" (UID: \"2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904\") " pod="openshift-ovn-kubernetes/ovnkube-node-bf9lh" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.067640 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/bb72b0f7-1d22-4d13-9653-b1607aa2235d-multus-daemon-config\") pod \"multus-fsrq9\" (UID: \"bb72b0f7-1d22-4d13-9653-b1607aa2235d\") " pod="openshift-multus/multus-fsrq9" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.067664 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-bf9lh\" (UID: \"2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904\") " pod="openshift-ovn-kubernetes/ovnkube-node-bf9lh" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.067737 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xwzp4\" (UniqueName: \"kubernetes.io/projected/bb72b0f7-1d22-4d13-9653-b1607aa2235d-kube-api-access-xwzp4\") pod \"multus-fsrq9\" (UID: \"bb72b0f7-1d22-4d13-9653-b1607aa2235d\") " pod="openshift-multus/multus-fsrq9" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.067793 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904-node-log\") pod \"ovnkube-node-bf9lh\" (UID: 
\"2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904\") " pod="openshift-ovn-kubernetes/ovnkube-node-bf9lh" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.067846 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904-log-socket\") pod \"ovnkube-node-bf9lh\" (UID: \"2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904\") " pod="openshift-ovn-kubernetes/ovnkube-node-bf9lh" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.067898 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904-host-run-ovn-kubernetes\") pod \"ovnkube-node-bf9lh\" (UID: \"2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904\") " pod="openshift-ovn-kubernetes/ovnkube-node-bf9lh" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.067986 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/bb72b0f7-1d22-4d13-9653-b1607aa2235d-system-cni-dir\") pod \"multus-fsrq9\" (UID: \"bb72b0f7-1d22-4d13-9653-b1607aa2235d\") " pod="openshift-multus/multus-fsrq9" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.068010 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/bb72b0f7-1d22-4d13-9653-b1607aa2235d-multus-conf-dir\") pod \"multus-fsrq9\" (UID: \"bb72b0f7-1d22-4d13-9653-b1607aa2235d\") " pod="openshift-multus/multus-fsrq9" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.068042 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9l8vg\" (UniqueName: \"kubernetes.io/projected/6ba2fe32-66e0-4bcd-a646-9d07c9a21c54-kube-api-access-9l8vg\") pod \"network-metrics-daemon-kgdlg\" (UID: \"6ba2fe32-66e0-4bcd-a646-9d07c9a21c54\") " 
pod="openshift-multus/network-metrics-daemon-kgdlg" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.068098 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-87p77\" (UniqueName: \"kubernetes.io/projected/cd5a4c5b-2008-4354-b26e-8763a631e55c-kube-api-access-87p77\") pod \"ovnkube-control-plane-749d76644c-gqgb6\" (UID: \"cd5a4c5b-2008-4354-b26e-8763a631e55c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gqgb6" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.068149 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904-ovn-node-metrics-cert\") pod \"ovnkube-node-bf9lh\" (UID: \"2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904\") " pod="openshift-ovn-kubernetes/ovnkube-node-bf9lh" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.068198 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/672682a0-e75f-4d6c-b4f2-542944327497-system-cni-dir\") pod \"multus-additional-cni-plugins-rgv8f\" (UID: \"672682a0-e75f-4d6c-b4f2-542944327497\") " pod="openshift-multus/multus-additional-cni-plugins-rgv8f" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.068215 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cd5a4c5b-2008-4354-b26e-8763a631e55c-env-overrides\") pod \"ovnkube-control-plane-749d76644c-gqgb6\" (UID: \"cd5a4c5b-2008-4354-b26e-8763a631e55c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gqgb6" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.068245 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/672682a0-e75f-4d6c-b4f2-542944327497-tuning-conf-dir\") pod 
\"multus-additional-cni-plugins-rgv8f\" (UID: \"672682a0-e75f-4d6c-b4f2-542944327497\") " pod="openshift-multus/multus-additional-cni-plugins-rgv8f" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.068293 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/bb72b0f7-1d22-4d13-9653-b1607aa2235d-cnibin\") pod \"multus-fsrq9\" (UID: \"bb72b0f7-1d22-4d13-9653-b1607aa2235d\") " pod="openshift-multus/multus-fsrq9" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.068308 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/bb72b0f7-1d22-4d13-9653-b1607aa2235d-host-run-k8s-cni-cncf-io\") pod \"multus-fsrq9\" (UID: \"bb72b0f7-1d22-4d13-9653-b1607aa2235d\") " pod="openshift-multus/multus-fsrq9" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.068339 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/bb72b0f7-1d22-4d13-9653-b1607aa2235d-host-var-lib-kubelet\") pod \"multus-fsrq9\" (UID: \"bb72b0f7-1d22-4d13-9653-b1607aa2235d\") " pod="openshift-multus/multus-fsrq9" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.068364 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/bb72b0f7-1d22-4d13-9653-b1607aa2235d-host-var-lib-cni-multus\") pod \"multus-fsrq9\" (UID: \"bb72b0f7-1d22-4d13-9653-b1607aa2235d\") " pod="openshift-multus/multus-fsrq9" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.067985 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/672682a0-e75f-4d6c-b4f2-542944327497-os-release\") pod \"multus-additional-cni-plugins-rgv8f\" (UID: \"672682a0-e75f-4d6c-b4f2-542944327497\") " 
pod="openshift-multus/multus-additional-cni-plugins-rgv8f" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.068390 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/00d6b7ce-4757-4275-8345-60c1b546ce25-rootfs\") pod \"machine-config-daemon-2tv5v\" (UID: \"00d6b7ce-4757-4275-8345-60c1b546ce25\") " pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.068446 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904-ovnkube-script-lib\") pod \"ovnkube-node-bf9lh\" (UID: \"2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904\") " pod="openshift-ovn-kubernetes/ovnkube-node-bf9lh" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.068493 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/672682a0-e75f-4d6c-b4f2-542944327497-cnibin\") pod \"multus-additional-cni-plugins-rgv8f\" (UID: \"672682a0-e75f-4d6c-b4f2-542944327497\") " pod="openshift-multus/multus-additional-cni-plugins-rgv8f" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.068549 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/bb72b0f7-1d22-4d13-9653-b1607aa2235d-os-release\") pod \"multus-fsrq9\" (UID: \"bb72b0f7-1d22-4d13-9653-b1607aa2235d\") " pod="openshift-multus/multus-fsrq9" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.068582 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 27 16:08:03 crc 
kubenswrapper[4830]: I0227 16:08:03.068594 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sqztt\" (UniqueName: \"kubernetes.io/projected/00d6b7ce-4757-4275-8345-60c1b546ce25-kube-api-access-sqztt\") pod \"machine-config-daemon-2tv5v\" (UID: \"00d6b7ce-4757-4275-8345-60c1b546ce25\") " pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.068661 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904-host-run-netns\") pod \"ovnkube-node-bf9lh\" (UID: \"2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904\") " pod="openshift-ovn-kubernetes/ovnkube-node-bf9lh" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.068680 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904-systemd-units\") pod \"ovnkube-node-bf9lh\" (UID: \"2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904\") " pod="openshift-ovn-kubernetes/ovnkube-node-bf9lh" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.068708 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904-host-cni-netd\") pod \"ovnkube-node-bf9lh\" (UID: \"2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904\") " pod="openshift-ovn-kubernetes/ovnkube-node-bf9lh" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.068753 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x9g9d\" (UniqueName: \"kubernetes.io/projected/616ebd42-6bbe-4536-ba35-f8b07f2f11b1-kube-api-access-x9g9d\") pod \"node-ca-p7298\" (UID: \"616ebd42-6bbe-4536-ba35-f8b07f2f11b1\") " pod="openshift-image-registry/node-ca-p7298" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.068784 
4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/bb72b0f7-1d22-4d13-9653-b1607aa2235d-cnibin\") pod \"multus-fsrq9\" (UID: \"bb72b0f7-1d22-4d13-9653-b1607aa2235d\") " pod="openshift-multus/multus-fsrq9" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.068825 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/00d6b7ce-4757-4275-8345-60c1b546ce25-mcd-auth-proxy-config\") pod \"machine-config-daemon-2tv5v\" (UID: \"00d6b7ce-4757-4275-8345-60c1b546ce25\") " pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.066724 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904-etc-openvswitch\") pod \"ovnkube-node-bf9lh\" (UID: \"2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904\") " pod="openshift-ovn-kubernetes/ovnkube-node-bf9lh" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.068889 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6ba2fe32-66e0-4bcd-a646-9d07c9a21c54-metrics-certs\") pod \"network-metrics-daemon-kgdlg\" (UID: \"6ba2fe32-66e0-4bcd-a646-9d07c9a21c54\") " pod="openshift-multus/network-metrics-daemon-kgdlg" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.067741 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-bf9lh\" (UID: \"2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904\") " pod="openshift-ovn-kubernetes/ovnkube-node-bf9lh" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.068925 4830 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904-run-systemd\") pod \"ovnkube-node-bf9lh\" (UID: \"2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904\") " pod="openshift-ovn-kubernetes/ovnkube-node-bf9lh" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.069000 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/6adbc0c4-e467-41f1-9190-d0dd3693eba6-hosts-file\") pod \"node-resolver-fcddf\" (UID: \"6adbc0c4-e467-41f1-9190-d0dd3693eba6\") " pod="openshift-dns/node-resolver-fcddf" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.069016 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904-run-systemd\") pod \"ovnkube-node-bf9lh\" (UID: \"2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904\") " pod="openshift-ovn-kubernetes/ovnkube-node-bf9lh" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.069071 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904-host-cni-bin\") pod \"ovnkube-node-bf9lh\" (UID: \"2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904\") " pod="openshift-ovn-kubernetes/ovnkube-node-bf9lh" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.069074 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904-node-log\") pod \"ovnkube-node-bf9lh\" (UID: \"2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904\") " pod="openshift-ovn-kubernetes/ovnkube-node-bf9lh" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.067796 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/bb72b0f7-1d22-4d13-9653-b1607aa2235d-host-var-lib-cni-bin\") pod \"multus-fsrq9\" (UID: \"bb72b0f7-1d22-4d13-9653-b1607aa2235d\") " pod="openshift-multus/multus-fsrq9" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.069286 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904-log-socket\") pod \"ovnkube-node-bf9lh\" (UID: \"2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904\") " pod="openshift-ovn-kubernetes/ovnkube-node-bf9lh" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.067854 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bb72b0f7-1d22-4d13-9653-b1607aa2235d-etc-kubernetes\") pod \"multus-fsrq9\" (UID: \"bb72b0f7-1d22-4d13-9653-b1607aa2235d\") " pod="openshift-multus/multus-fsrq9" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.069321 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/bb72b0f7-1d22-4d13-9653-b1607aa2235d-host-var-lib-kubelet\") pod \"multus-fsrq9\" (UID: \"bb72b0f7-1d22-4d13-9653-b1607aa2235d\") " pod="openshift-multus/multus-fsrq9" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.069356 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904-host-run-ovn-kubernetes\") pod \"ovnkube-node-bf9lh\" (UID: \"2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904\") " pod="openshift-ovn-kubernetes/ovnkube-node-bf9lh" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.069401 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/672682a0-e75f-4d6c-b4f2-542944327497-cnibin\") pod \"multus-additional-cni-plugins-rgv8f\" (UID: 
\"672682a0-e75f-4d6c-b4f2-542944327497\") " pod="openshift-multus/multus-additional-cni-plugins-rgv8f" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.069449 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/bb72b0f7-1d22-4d13-9653-b1607aa2235d-os-release\") pod \"multus-fsrq9\" (UID: \"bb72b0f7-1d22-4d13-9653-b1607aa2235d\") " pod="openshift-multus/multus-fsrq9" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.069493 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904-run-ovn\") pod \"ovnkube-node-bf9lh\" (UID: \"2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904\") " pod="openshift-ovn-kubernetes/ovnkube-node-bf9lh" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.068049 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/bb72b0f7-1d22-4d13-9653-b1607aa2235d-hostroot\") pod \"multus-fsrq9\" (UID: \"bb72b0f7-1d22-4d13-9653-b1607aa2235d\") " pod="openshift-multus/multus-fsrq9" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.069651 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904-host-run-netns\") pod \"ovnkube-node-bf9lh\" (UID: \"2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904\") " pod="openshift-ovn-kubernetes/ovnkube-node-bf9lh" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.069676 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904-host-cni-netd\") pod \"ovnkube-node-bf9lh\" (UID: \"2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904\") " pod="openshift-ovn-kubernetes/ovnkube-node-bf9lh" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.068101 4830 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904-ovnkube-config\") pod \"ovnkube-node-bf9lh\" (UID: \"2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904\") " pod="openshift-ovn-kubernetes/ovnkube-node-bf9lh" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.070039 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/672682a0-e75f-4d6c-b4f2-542944327497-system-cni-dir\") pod \"multus-additional-cni-plugins-rgv8f\" (UID: \"672682a0-e75f-4d6c-b4f2-542944327497\") " pod="openshift-multus/multus-additional-cni-plugins-rgv8f" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.069020 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/bb72b0f7-1d22-4d13-9653-b1607aa2235d-cni-binary-copy\") pod \"multus-fsrq9\" (UID: \"bb72b0f7-1d22-4d13-9653-b1607aa2235d\") " pod="openshift-multus/multus-fsrq9" Feb 27 16:08:03 crc kubenswrapper[4830]: E0227 16:08:03.070141 4830 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.070159 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/bb72b0f7-1d22-4d13-9653-b1607aa2235d-multus-socket-dir-parent\") pod \"multus-fsrq9\" (UID: \"bb72b0f7-1d22-4d13-9653-b1607aa2235d\") " pod="openshift-multus/multus-fsrq9" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.070218 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/bb72b0f7-1d22-4d13-9653-b1607aa2235d-system-cni-dir\") pod \"multus-fsrq9\" (UID: \"bb72b0f7-1d22-4d13-9653-b1607aa2235d\") " 
pod="openshift-multus/multus-fsrq9" Feb 27 16:08:03 crc kubenswrapper[4830]: E0227 16:08:03.070250 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6ba2fe32-66e0-4bcd-a646-9d07c9a21c54-metrics-certs podName:6ba2fe32-66e0-4bcd-a646-9d07c9a21c54 nodeName:}" failed. No retries permitted until 2026-02-27 16:08:03.570217251 +0000 UTC m=+79.659489944 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6ba2fe32-66e0-4bcd-a646-9d07c9a21c54-metrics-certs") pod "network-metrics-daemon-kgdlg" (UID: "6ba2fe32-66e0-4bcd-a646-9d07c9a21c54") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.067890 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/672682a0-e75f-4d6c-b4f2-542944327497-cni-binary-copy\") pod \"multus-additional-cni-plugins-rgv8f\" (UID: \"672682a0-e75f-4d6c-b4f2-542944327497\") " pod="openshift-multus/multus-additional-cni-plugins-rgv8f" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.069364 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/00d6b7ce-4757-4275-8345-60c1b546ce25-rootfs\") pod \"machine-config-daemon-2tv5v\" (UID: \"00d6b7ce-4757-4275-8345-60c1b546ce25\") " pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.070316 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/bb72b0f7-1d22-4d13-9653-b1607aa2235d-multus-socket-dir-parent\") pod \"multus-fsrq9\" (UID: \"bb72b0f7-1d22-4d13-9653-b1607aa2235d\") " pod="openshift-multus/multus-fsrq9" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.070289 4830 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/bb72b0f7-1d22-4d13-9653-b1607aa2235d-host-run-netns\") pod \"multus-fsrq9\" (UID: \"bb72b0f7-1d22-4d13-9653-b1607aa2235d\") " pod="openshift-multus/multus-fsrq9" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.070345 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/bb72b0f7-1d22-4d13-9653-b1607aa2235d-host-run-netns\") pod \"multus-fsrq9\" (UID: \"bb72b0f7-1d22-4d13-9653-b1607aa2235d\") " pod="openshift-multus/multus-fsrq9" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.070424 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cd5a4c5b-2008-4354-b26e-8763a631e55c-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-gqgb6\" (UID: \"cd5a4c5b-2008-4354-b26e-8763a631e55c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gqgb6" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.070532 4830 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.070587 4830 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.070605 4830 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.070624 4830 reconciler_common.go:293] "Volume detached for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.070674 4830 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.070694 4830 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.070711 4830 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.070758 4830 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.070776 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.070794 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.070812 4830 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.070861 4830 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.070880 4830 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.070897 4830 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.070941 4830 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.071005 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904-ovnkube-script-lib\") pod \"ovnkube-node-bf9lh\" (UID: \"2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904\") " pod="openshift-ovn-kubernetes/ovnkube-node-bf9lh" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.071020 4830 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.071116 4830 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.071134 4830 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.071196 4830 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.071216 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.071234 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.071285 4830 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.071307 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.071325 4830 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.071372 4830 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.071390 4830 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.071407 4830 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.071426 4830 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.071473 4830 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.071492 4830 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.071513 4830 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" 
DevicePath \"\"" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.071573 4830 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.071822 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/bb72b0f7-1d22-4d13-9653-b1607aa2235d-cni-binary-copy\") pod \"multus-fsrq9\" (UID: \"bb72b0f7-1d22-4d13-9653-b1607aa2235d\") " pod="openshift-multus/multus-fsrq9" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.076587 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/672682a0-e75f-4d6c-b4f2-542944327497-tuning-conf-dir\") pod \"multus-additional-cni-plugins-rgv8f\" (UID: \"672682a0-e75f-4d6c-b4f2-542944327497\") " pod="openshift-multus/multus-additional-cni-plugins-rgv8f" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.076645 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.078058 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904-var-lib-openvswitch\") pod \"ovnkube-node-bf9lh\" (UID: \"2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904\") " pod="openshift-ovn-kubernetes/ovnkube-node-bf9lh" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.078356 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904-env-overrides\") pod \"ovnkube-node-bf9lh\" (UID: \"2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904\") " pod="openshift-ovn-kubernetes/ovnkube-node-bf9lh" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.083202 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/616ebd42-6bbe-4536-ba35-f8b07f2f11b1-serviceca\") pod \"node-ca-p7298\" (UID: \"616ebd42-6bbe-4536-ba35-f8b07f2f11b1\") " pod="openshift-image-registry/node-ca-p7298" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.083786 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cd5a4c5b-2008-4354-b26e-8763a631e55c-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-gqgb6\" (UID: \"cd5a4c5b-2008-4354-b26e-8763a631e55c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gqgb6" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.090542 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/00d6b7ce-4757-4275-8345-60c1b546ce25-proxy-tls\") pod \"machine-config-daemon-2tv5v\" (UID: \"00d6b7ce-4757-4275-8345-60c1b546ce25\") " 
pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.090745 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8zrsv\" (UniqueName: \"kubernetes.io/projected/6adbc0c4-e467-41f1-9190-d0dd3693eba6-kube-api-access-8zrsv\") pod \"node-resolver-fcddf\" (UID: \"6adbc0c4-e467-41f1-9190-d0dd3693eba6\") " pod="openshift-dns/node-resolver-fcddf" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.093792 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cd5a4c5b-2008-4354-b26e-8763a631e55c-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-gqgb6\" (UID: \"cd5a4c5b-2008-4354-b26e-8763a631e55c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gqgb6" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.095775 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sqztt\" (UniqueName: \"kubernetes.io/projected/00d6b7ce-4757-4275-8345-60c1b546ce25-kube-api-access-sqztt\") pod \"machine-config-daemon-2tv5v\" (UID: \"00d6b7ce-4757-4275-8345-60c1b546ce25\") " pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.096079 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904-ovn-node-metrics-cert\") pod \"ovnkube-node-bf9lh\" (UID: \"2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904\") " pod="openshift-ovn-kubernetes/ovnkube-node-bf9lh" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.097969 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tf9wc\" (UniqueName: \"kubernetes.io/projected/2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904-kube-api-access-tf9wc\") pod \"ovnkube-node-bf9lh\" (UID: 
\"2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904\") " pod="openshift-ovn-kubernetes/ovnkube-node-bf9lh" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.098070 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-87p77\" (UniqueName: \"kubernetes.io/projected/cd5a4c5b-2008-4354-b26e-8763a631e55c-kube-api-access-87p77\") pod \"ovnkube-control-plane-749d76644c-gqgb6\" (UID: \"cd5a4c5b-2008-4354-b26e-8763a631e55c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gqgb6" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.098885 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9l8vg\" (UniqueName: \"kubernetes.io/projected/6ba2fe32-66e0-4bcd-a646-9d07c9a21c54-kube-api-access-9l8vg\") pod \"network-metrics-daemon-kgdlg\" (UID: \"6ba2fe32-66e0-4bcd-a646-9d07c9a21c54\") " pod="openshift-multus/network-metrics-daemon-kgdlg" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.099721 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwzp4\" (UniqueName: \"kubernetes.io/projected/bb72b0f7-1d22-4d13-9653-b1607aa2235d-kube-api-access-xwzp4\") pod \"multus-fsrq9\" (UID: \"bb72b0f7-1d22-4d13-9653-b1607aa2235d\") " pod="openshift-multus/multus-fsrq9" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.100638 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-fsrq9" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.105013 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dpb8z\" (UniqueName: \"kubernetes.io/projected/672682a0-e75f-4d6c-b4f2-542944327497-kube-api-access-dpb8z\") pod \"multus-additional-cni-plugins-rgv8f\" (UID: \"672682a0-e75f-4d6c-b4f2-542944327497\") " pod="openshift-multus/multus-additional-cni-plugins-rgv8f" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.109764 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x9g9d\" (UniqueName: \"kubernetes.io/projected/616ebd42-6bbe-4536-ba35-f8b07f2f11b1-kube-api-access-x9g9d\") pod \"node-ca-p7298\" (UID: \"616ebd42-6bbe-4536-ba35-f8b07f2f11b1\") " pod="openshift-image-registry/node-ca-p7298" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.110003 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-p7298" Feb 27 16:08:03 crc kubenswrapper[4830]: W0227 16:08:03.124131 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbb72b0f7_1d22_4d13_9653_b1607aa2235d.slice/crio-d8f937e835d8fd6b636ae9dad242124f38c5852c84d5a068ecd3460e644ae69f WatchSource:0}: Error finding container d8f937e835d8fd6b636ae9dad242124f38c5852c84d5a068ecd3460e644ae69f: Status 404 returned error can't find the container with id d8f937e835d8fd6b636ae9dad242124f38c5852c84d5a068ecd3460e644ae69f Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.125659 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-rgv8f" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.133445 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"a6e56732d65839ef1d0a3b55fe0428abde30258badc3fa671d80fc7f10cccf34"} Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.134627 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-fsrq9" event={"ID":"bb72b0f7-1d22-4d13-9653-b1607aa2235d","Type":"ContainerStarted","Data":"d8f937e835d8fd6b636ae9dad242124f38c5852c84d5a068ecd3460e644ae69f"} Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.136212 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"d8ece4809cc33885084d050b247d9342dd5b6c6e3984768da105154f7362d81b"} Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.136665 4830 scope.go:117] "RemoveContainer" containerID="acf80f37f1e38e051b10c7a1c0eeef8f0c3db7fc0e5653c54d82da7956a8eed2" Feb 27 16:08:03 crc kubenswrapper[4830]: E0227 16:08:03.136800 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.137413 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-fcddf" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.149678 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gqgb6" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.159135 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.161570 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.161744 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.161895 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.162093 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.162254 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:03Z","lastTransitionTime":"2026-02-27T16:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:03 crc kubenswrapper[4830]: W0227 16:08:03.203253 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod672682a0_e75f_4d6c_b4f2_542944327497.slice/crio-06c2feb2a7ef7eca2b76f45e2eaeff40d2bb37013b23c244ac65d40525d3fe65 WatchSource:0}: Error finding container 06c2feb2a7ef7eca2b76f45e2eaeff40d2bb37013b23c244ac65d40525d3fe65: Status 404 returned error can't find the container with id 06c2feb2a7ef7eca2b76f45e2eaeff40d2bb37013b23c244ac65d40525d3fe65 Feb 27 16:08:03 crc kubenswrapper[4830]: W0227 16:08:03.215100 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6adbc0c4_e467_41f1_9190_d0dd3693eba6.slice/crio-cc22b92d0b434342ca8e27f396af75967848b35fcee234e3fdbe1c06b7e2095d WatchSource:0}: Error finding container cc22b92d0b434342ca8e27f396af75967848b35fcee234e3fdbe1c06b7e2095d: Status 404 returned error can't find the container with id cc22b92d0b434342ca8e27f396af75967848b35fcee234e3fdbe1c06b7e2095d Feb 27 16:08:03 crc kubenswrapper[4830]: W0227 16:08:03.220866 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcd5a4c5b_2008_4354_b26e_8763a631e55c.slice/crio-325e0c62d7120a1b4d63d047a527171bee56cd78fe86315f2eab944896ce6296 WatchSource:0}: Error finding container 325e0c62d7120a1b4d63d047a527171bee56cd78fe86315f2eab944896ce6296: Status 404 returned error can't find the container with id 325e0c62d7120a1b4d63d047a527171bee56cd78fe86315f2eab944896ce6296 Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.265411 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.265450 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" 
Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.265467 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.265492 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.265507 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:03Z","lastTransitionTime":"2026-02-27T16:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.356123 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.368223 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.368285 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.368303 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.368329 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.368345 4830 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:03Z","lastTransitionTime":"2026-02-27T16:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.387789 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-bf9lh" Feb 27 16:08:03 crc kubenswrapper[4830]: W0227 16:08:03.452929 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2fcf8ee6_7d12_4dd9_aa0e_8e2c1d0e6904.slice/crio-6be9cfa3d02e0e72c62c85546d0d78fbfbe835257b1e639aa5a10fea773570ff WatchSource:0}: Error finding container 6be9cfa3d02e0e72c62c85546d0d78fbfbe835257b1e639aa5a10fea773570ff: Status 404 returned error can't find the container with id 6be9cfa3d02e0e72c62c85546d0d78fbfbe835257b1e639aa5a10fea773570ff Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.475006 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.475052 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.475062 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.475079 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.475089 4830 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:03Z","lastTransitionTime":"2026-02-27T16:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.475234 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.475343 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 16:08:03 crc kubenswrapper[4830]: E0227 16:08:03.475380 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 16:08:04.475362021 +0000 UTC m=+80.564634494 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.475422 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.475449 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.475487 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 16:08:03 crc kubenswrapper[4830]: E0227 16:08:03.475490 4830 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 27 
16:08:03 crc kubenswrapper[4830]: E0227 16:08:03.475523 4830 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 27 16:08:03 crc kubenswrapper[4830]: E0227 16:08:03.475528 4830 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 27 16:08:03 crc kubenswrapper[4830]: E0227 16:08:03.475534 4830 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 16:08:03 crc kubenswrapper[4830]: E0227 16:08:03.475566 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-27 16:08:04.475556895 +0000 UTC m=+80.564829358 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 16:08:03 crc kubenswrapper[4830]: E0227 16:08:03.475581 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-02-27 16:08:04.475573776 +0000 UTC m=+80.564846239 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 27 16:08:03 crc kubenswrapper[4830]: E0227 16:08:03.475618 4830 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 27 16:08:03 crc kubenswrapper[4830]: E0227 16:08:03.475633 4830 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 27 16:08:03 crc kubenswrapper[4830]: E0227 16:08:03.475642 4830 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 16:08:03 crc kubenswrapper[4830]: E0227 16:08:03.475673 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-27 16:08:04.475663928 +0000 UTC m=+80.564936401 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 16:08:03 crc kubenswrapper[4830]: E0227 16:08:03.475717 4830 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 27 16:08:03 crc kubenswrapper[4830]: E0227 16:08:03.475820 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-27 16:08:04.475794791 +0000 UTC m=+80.565067264 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.576804 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6ba2fe32-66e0-4bcd-a646-9d07c9a21c54-metrics-certs\") pod \"network-metrics-daemon-kgdlg\" (UID: \"6ba2fe32-66e0-4bcd-a646-9d07c9a21c54\") " pod="openshift-multus/network-metrics-daemon-kgdlg" Feb 27 16:08:03 crc kubenswrapper[4830]: E0227 16:08:03.576994 4830 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 27 16:08:03 crc kubenswrapper[4830]: E0227 16:08:03.577052 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6ba2fe32-66e0-4bcd-a646-9d07c9a21c54-metrics-certs podName:6ba2fe32-66e0-4bcd-a646-9d07c9a21c54 nodeName:}" failed. No retries permitted until 2026-02-27 16:08:04.577034023 +0000 UTC m=+80.666306486 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6ba2fe32-66e0-4bcd-a646-9d07c9a21c54-metrics-certs") pod "network-metrics-daemon-kgdlg" (UID: "6ba2fe32-66e0-4bcd-a646-9d07c9a21c54") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.577777 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.577800 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.577809 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.577823 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.577834 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:03Z","lastTransitionTime":"2026-02-27T16:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.687811 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.688252 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.688264 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.688298 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.688309 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:03Z","lastTransitionTime":"2026-02-27T16:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.791086 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.791117 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.791126 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.791139 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.791149 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:03Z","lastTransitionTime":"2026-02-27T16:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.893963 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.894009 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.894024 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.894042 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.894055 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:03Z","lastTransitionTime":"2026-02-27T16:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.997099 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.997156 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.997169 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.997187 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:03 crc kubenswrapper[4830]: I0227 16:08:03.997200 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:03Z","lastTransitionTime":"2026-02-27T16:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.099848 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.099907 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.099923 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.099999 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.100018 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:04Z","lastTransitionTime":"2026-02-27T16:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.142970 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-fcddf" event={"ID":"6adbc0c4-e467-41f1-9190-d0dd3693eba6","Type":"ContainerStarted","Data":"f2a66caf75365a8658dd7422b2f715e7f095b5bd7d96c8a0a96ce1a80cb99849"} Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.143020 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-fcddf" event={"ID":"6adbc0c4-e467-41f1-9190-d0dd3693eba6","Type":"ContainerStarted","Data":"cc22b92d0b434342ca8e27f396af75967848b35fcee234e3fdbe1c06b7e2095d"} Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.146772 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gqgb6" event={"ID":"cd5a4c5b-2008-4354-b26e-8763a631e55c","Type":"ContainerStarted","Data":"14a90f11c3291e6bde2a008f2e8e617c9c8db36ee65c3bbf26112bf8707d7cd2"} Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.146811 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gqgb6" event={"ID":"cd5a4c5b-2008-4354-b26e-8763a631e55c","Type":"ContainerStarted","Data":"32024ba3e19bc33cc8197b4dd6fdf165f6de8fd7c3b1e85b2f4768c75c50736a"} Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.146823 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gqgb6" event={"ID":"cd5a4c5b-2008-4354-b26e-8763a631e55c","Type":"ContainerStarted","Data":"325e0c62d7120a1b4d63d047a527171bee56cd78fe86315f2eab944896ce6296"} Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.148560 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rgv8f" 
event={"ID":"672682a0-e75f-4d6c-b4f2-542944327497","Type":"ContainerStarted","Data":"b56418e015eaf4499e3e5e8ade43a2e4fb4803d5621b0db3fb365dd185d375f4"} Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.148635 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rgv8f" event={"ID":"672682a0-e75f-4d6c-b4f2-542944327497","Type":"ContainerStarted","Data":"06c2feb2a7ef7eca2b76f45e2eaeff40d2bb37013b23c244ac65d40525d3fe65"} Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.153633 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"4995a66c4bc6d195947f67ef564aa18c7fe145e12067c2fca8e0de6b76b72f19"} Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.153686 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"df5245f048370e7455a63a6ca01bffb8e514e22a123e45bc3a064ce4fbe27f45"} Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.158458 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" event={"ID":"00d6b7ce-4757-4275-8345-60c1b546ce25","Type":"ContainerStarted","Data":"58864ba639422794b99f6190c7ab9a81537913f51574220acb3838f97b4f9421"} Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.158549 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" event={"ID":"00d6b7ce-4757-4275-8345-60c1b546ce25","Type":"ContainerStarted","Data":"5d403fb60beeb11f3cb72e7ca134b83a9c519375fbb6727d2070118a6b924516"} Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.158577 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" event={"ID":"00d6b7ce-4757-4275-8345-60c1b546ce25","Type":"ContainerStarted","Data":"bf19dca96fa99ddafe54249d5f615e6611bfe666cbbc27a90be767f037e75ced"} Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.160286 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-p7298" event={"ID":"616ebd42-6bbe-4536-ba35-f8b07f2f11b1","Type":"ContainerStarted","Data":"5a08121a78952c85326d07974c700397678777995d73cce9f24ace8932b3e9b1"} Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.160344 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-p7298" event={"ID":"616ebd42-6bbe-4536-ba35-f8b07f2f11b1","Type":"ContainerStarted","Data":"4e3b60cfefd74e7e76fc1d51af1fb823eebf5c47eaac92f4d3b2313d31ee401c"} Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.163208 4830 generic.go:334] "Generic (PLEG): container finished" podID="2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904" containerID="45e519054da5e98ba4be7deaac7911617f787b44d308bda8d4ee6fb7573336a2" exitCode=0 Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.163248 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bf9lh" event={"ID":"2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904","Type":"ContainerDied","Data":"45e519054da5e98ba4be7deaac7911617f787b44d308bda8d4ee6fb7573336a2"} Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.163302 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bf9lh" event={"ID":"2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904","Type":"ContainerStarted","Data":"6be9cfa3d02e0e72c62c85546d0d78fbfbe835257b1e639aa5a10fea773570ff"} Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.165743 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-fsrq9" 
event={"ID":"bb72b0f7-1d22-4d13-9653-b1607aa2235d","Type":"ContainerStarted","Data":"4b684822980dfea25ae15f46337d198cbbb8656b55c38874d917c2fae68431aa"} Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.168018 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.168787 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"eebea1ec0fd24a7664f474db4e4cdb08e53c5c742c392900599dc3e7adcbee2f"} Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.168836 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"cb692fece6d03f1d4d7fd248c19f4fc5082a6aa1fd0ad4bc1e458b2e1382a87d"} Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.186045 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.200535 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.203783 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.203826 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.203842 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.203861 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.203874 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:04Z","lastTransitionTime":"2026-02-27T16:08:04Z","reason":"KubeletNotReady","message":"container runtime network 
not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.229488 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bf9lh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bf9lh\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.245027 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00d6b7ce-4757-4275-8345-60c1b546ce25\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqztt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqztt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2tv5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.256508 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gqgb6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd5a4c5b-2008-4354-b26e-8763a631e55c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87p77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87p77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gqgb6\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.268140 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d20f886-cfdb-48c7-9754-6b7255b1124f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a3f71ef2ccab58baa94377c8f63ded83ca5f73a4d7a24d754719670685c55a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec37d31a82aedaacdf47fa540d508e6ca53626d082c45933c11a72f41a819a8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b6cb8e7a94967cd24e883272ba2f923a3adae43cc6105ea7506c1fba144de241\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://acf80f37f1e38e051b10c7a1c0eeef8f0c3db7fc0e5653c54d82da7956a8eed2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acf80f37f1e38e051b10c7a1c0eeef8f0c3db7fc0e5653c54d82da7956a8eed2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T16:07:52Z\\\",\\\"message\\\":\\\"g file observer\\\\nW0227 16:07:52.525113 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 16:07:52.525413 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 16:07:52.526578 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-70351561/tls.crt::/tmp/serving-cert-70351561/tls.key\\\\\\\"\\\\nI0227 16:07:52.790383 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 16:07:52.794547 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 16:07:52.794587 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 16:07:52.794627 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 16:07:52.794638 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 16:07:52.801197 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0227 16:07:52.801263 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 16:07:52.801293 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0227 16:07:52.801246 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0227 16:07:52.801325 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 16:07:52.801373 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 16:07:52.801389 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 16:07:52.801396 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0227 16:07:52.804476 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T16:07:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97ffad51412be8330198e944d4f687380d5066b777b54e362e8f8b772b808c89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1fc7975111dde3841671f046742eea3d0eefcb040eee8aadb99a7ca8eba8bd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1fc7975111dde3841671f046742eea3d0eefcb040eee8aadb99a7ca8eba8bd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:06:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.281123 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.291020 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p7298" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"616ebd42-6bbe-4536-ba35-f8b07f2f11b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with 
unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x9g9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p7298\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.300241 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kgdlg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ba2fe32-66e0-4bcd-a646-9d07c9a21c54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9l8vg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9l8vg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kgdlg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.306265 4830 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.306296 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.306306 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.306322 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.306333 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:04Z","lastTransitionTime":"2026-02-27T16:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.310766 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.323501 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fsrq9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb72b0f7-1d22-4d13-9653-b1607aa2235d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwzp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fsrq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.336566 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rgv8f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"672682a0-e75f-4d6c-b4f2-542944327497\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rgv8f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.348799 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fcddf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6adbc0c4-e467-41f1-9190-d0dd3693eba6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2a66caf75365a8658dd7422b2f715e7f095b5bd7d96c8a0a96ce1a80cb99849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zrsv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fcddf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.372056 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.387432 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.404562 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fsrq9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb72b0f7-1d22-4d13-9653-b1607aa2235d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b684822980dfea25ae15f46337d198cbbb8656b55c38874d917c2fae68431aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwzp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fsrq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.409764 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.409817 4830 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.409828 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.409847 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.409859 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:04Z","lastTransitionTime":"2026-02-27T16:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.426196 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rgv8f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"672682a0-e75f-4d6c-b4f2-542944327497\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b56418e015eaf4499e3e5e8ade43a2e4fb4803d5621b0db3fb365dd185d375f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\
":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\"
:\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPa
th\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rgv8f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.436267 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fcddf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6adbc0c4-e467-41f1-9190-d0dd3693eba6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2a66caf75365a8658dd7422b2f715e7f095b5bd7d96c8a0a96ce1a80cb99849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zrsv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fcddf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.450380 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.462988 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.475451 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4995a66c4bc6d195947f67ef564aa18c7fe145e12067c2fca8e0de6b76b72f19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df5245f048370e7455a63a6ca01bffb8e514e22a123e45bc3a064ce4fbe27f45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.487780 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.491093 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.491240 4830 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 16:08:04 crc kubenswrapper[4830]: E0227 16:08:04.491350 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 16:08:06.491316462 +0000 UTC m=+82.580588935 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:08:04 crc kubenswrapper[4830]: E0227 16:08:04.491410 4830 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 27 16:08:04 crc kubenswrapper[4830]: E0227 16:08:04.491438 4830 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.491441 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: 
\"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.491511 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 16:08:04 crc kubenswrapper[4830]: E0227 16:08:04.491455 4830 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 16:08:04 crc kubenswrapper[4830]: E0227 16:08:04.491601 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-27 16:08:06.491578669 +0000 UTC m=+82.580851142 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.491541 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 16:08:04 crc kubenswrapper[4830]: E0227 16:08:04.491638 4830 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 27 16:08:04 crc kubenswrapper[4830]: E0227 16:08:04.491527 4830 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 27 16:08:04 crc kubenswrapper[4830]: E0227 16:08:04.491704 4830 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 27 16:08:04 crc kubenswrapper[4830]: E0227 16:08:04.491718 4830 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 16:08:04 crc 
kubenswrapper[4830]: E0227 16:08:04.491731 4830 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 27 16:08:04 crc kubenswrapper[4830]: E0227 16:08:04.491688 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-27 16:08:06.491679381 +0000 UTC m=+82.580951854 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 27 16:08:04 crc kubenswrapper[4830]: E0227 16:08:04.491783 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-27 16:08:06.491766133 +0000 UTC m=+82.581038606 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 16:08:04 crc kubenswrapper[4830]: E0227 16:08:04.491796 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-27 16:08:06.491789664 +0000 UTC m=+82.581062137 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.512440 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.512757 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.512902 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.513054 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.513199 4830 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:04Z","lastTransitionTime":"2026-02-27T16:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.515279 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bf9lh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45e519054da5e98ba4be7deaac7911617f787b44d308bda8d4ee6fb7573336a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45e519054da5e98ba4be7deaac7911617f787b44d308bda8d4ee6fb7573336a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bf9lh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.526478 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00d6b7ce-4757-4275-8345-60c1b546ce25\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58864ba639422794b99f6190c7ab9a81537913f51574220acb3838f97b4f9421\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f1296
2a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqztt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d403fb60beeb11f3cb72e7ca134b83a9c519375fbb6727d2070118a6b924516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqztt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2tv5v\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.538808 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gqgb6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd5a4c5b-2008-4354-b26e-8763a631e55c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32024ba3e19bc33cc8197b4dd6fdf165f6de8fd7c3b1e85b2f4768c75c50736a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"
/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87p77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14a90f11c3291e6bde2a008f2e8e617c9c8db36ee65c3bbf26112bf8707d7cd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87p77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gqgb6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.554887 4830 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d20f886-cfdb-48c7-9754-6b7255b1124f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a3f71ef2ccab58baa94377c8f63ded83ca5f73a4d7a24d754719670685c55a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"
name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec37d31a82aedaacdf47fa540d508e6ca53626d082c45933c11a72f41a819a8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6cb8e7a94967cd24e883272ba2f923a3adae43cc6105ea7506c1fba144de241\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://acf80f37f1e38e051b10c7a1c0eeef8f0c3db7fc0e5653c54d82da7956a8eed2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"qua
y.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acf80f37f1e38e051b10c7a1c0eeef8f0c3db7fc0e5653c54d82da7956a8eed2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T16:07:52Z\\\",\\\"message\\\":\\\"g file observer\\\\nW0227 16:07:52.525113 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 16:07:52.525413 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 16:07:52.526578 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-70351561/tls.crt::/tmp/serving-cert-70351561/tls.key\\\\\\\"\\\\nI0227 16:07:52.790383 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 16:07:52.794547 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 16:07:52.794587 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 16:07:52.794627 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 16:07:52.794638 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 16:07:52.801197 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0227 16:07:52.801263 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 16:07:52.801293 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0227 16:07:52.801246 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0227 16:07:52.801325 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 16:07:52.801373 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 16:07:52.801389 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 16:07:52.801396 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0227 16:07:52.804476 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T16:07:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97ffad51412be8330198e944d4f687380d5066b777b54e362e8f8b772b808c89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1fc7975111dde3841671f046742eea3d0eefcb040eee8
aadb99a7ca8eba8bd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1fc7975111dde3841671f046742eea3d0eefcb040eee8aadb99a7ca8eba8bd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:06:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.569374 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eebea1ec0fd24a7664f474db4e4cdb08e53c5c742c392900599dc3e7adcbee2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: 
connection refused" Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.582604 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p7298" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"616ebd42-6bbe-4536-ba35-f8b07f2f11b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a08121a78952c85326d07974c700397678777995d73cce9f24ace8932b3e9b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-x9g9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p7298\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.592825 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6ba2fe32-66e0-4bcd-a646-9d07c9a21c54-metrics-certs\") pod \"network-metrics-daemon-kgdlg\" (UID: \"6ba2fe32-66e0-4bcd-a646-9d07c9a21c54\") " pod="openshift-multus/network-metrics-daemon-kgdlg" Feb 27 16:08:04 crc kubenswrapper[4830]: E0227 16:08:04.593318 4830 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 27 16:08:04 crc kubenswrapper[4830]: E0227 16:08:04.593433 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6ba2fe32-66e0-4bcd-a646-9d07c9a21c54-metrics-certs podName:6ba2fe32-66e0-4bcd-a646-9d07c9a21c54 nodeName:}" failed. No retries permitted until 2026-02-27 16:08:06.593404884 +0000 UTC m=+82.682677387 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6ba2fe32-66e0-4bcd-a646-9d07c9a21c54-metrics-certs") pod "network-metrics-daemon-kgdlg" (UID: "6ba2fe32-66e0-4bcd-a646-9d07c9a21c54") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.597119 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kgdlg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ba2fe32-66e0-4bcd-a646-9d07c9a21c54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9l8vg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9l8vg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kgdlg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.616675 4830 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.616773 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.616799 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.616830 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.616857 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:04Z","lastTransitionTime":"2026-02-27T16:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.719845 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.720242 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.720376 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.720543 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.720670 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:04Z","lastTransitionTime":"2026-02-27T16:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.761587 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.761587 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kgdlg" Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.761853 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 16:08:04 crc kubenswrapper[4830]: E0227 16:08:04.762142 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.764174 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 16:08:04 crc kubenswrapper[4830]: E0227 16:08:04.764268 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kgdlg" podUID="6ba2fe32-66e0-4bcd-a646-9d07c9a21c54" Feb 27 16:08:04 crc kubenswrapper[4830]: E0227 16:08:04.764404 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 16:08:04 crc kubenswrapper[4830]: E0227 16:08:04.764558 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.768152 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.769484 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.771335 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.773157 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.775472 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.776984 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.777031 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.778553 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.781065 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.783555 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.785787 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.787211 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.789638 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.790491 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.791222 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.791920 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.792070 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.792640 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.793766 4830 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.794713 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.795933 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.797294 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.800591 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.802110 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.803043 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.805695 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.807042 4830 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.807347 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fsrq9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb72b0f7-1d22-4d13-9653-b1607aa2235d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b684822980dfea25ae15f46337d198cbbb8656b55c38874d917c2fae68431aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cn
i-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwzp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fsrq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.809708 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.811243 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.812261 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.814417 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.815495 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.817445 4830 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.817659 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.821713 4830 kubelet_volumes.go:163] "Cleaned 
up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.822102 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rgv8f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"672682a0-e75f-4d6c-b4f2-542944327497\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b56418e015eaf4499e3e5e8ade43a2e4fb4803d5621b0db3fb365dd185d375f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"ima
ge\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69
b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volum
eMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rgv8f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.823981 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.824398 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.824463 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.824491 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.824523 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.824548 4830 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:04Z","lastTransitionTime":"2026-02-27T16:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.825284 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.829321 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.831704 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.833024 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.834468 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fcddf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6adbc0c4-e467-41f1-9190-d0dd3693eba6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2a66caf75365a8658dd7422b2f715e7f095b5bd7d96c8a0a96ce1a80cb99849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zrsv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fcddf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.835206 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.836622 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.837796 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.840682 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.843562 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.845718 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.847666 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.848547 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.849890 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.850774 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00d6b7ce-4757-4275-8345-60c1b546ce25\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58864ba639422794b99f6190c7ab9a81537913f51574220acb3838f97b4f9421\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4e
f318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqztt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d403fb60beeb11f3cb72e7ca134b83a9c519375fbb6727d2070118a6b924516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqztt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2tv5v\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.850878 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.852082 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.852788 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.853813 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.855371 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.856451 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.857871 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 
16:08:04.865317 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gqgb6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd5a4c5b-2008-4354-b26e-8763a631e55c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32024ba3e19bc33cc8197b4dd6fdf165f6de8fd7c3b1e85b2f4768c75c50736a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-87p77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14a90f11c3291e6bde2a008f2e8e617c9c8db36ee65c3bbf26112bf8707d7cd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87p77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gqgb6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.882733 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.898897 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4995a66c4bc6d195947f67ef564aa18c7fe145e12067c2fca8e0de6b76b72f19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df5245f048370e7455a63a6ca01bffb8e514e22a123e45bc3a064ce4fbe27f45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.915182 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.930900 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.930935 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.930969 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.930988 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.931002 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:04Z","lastTransitionTime":"2026-02-27T16:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.933329 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bf9lh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45e519054da5e98ba4be7deaac7911617f787b44d308bda8d4ee6fb7573336a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://45e519054da5e98ba4be7deaac7911617f787b44d308bda8d4ee6fb7573336a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bf9lh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.946589 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d20f886-cfdb-48c7-9754-6b7255b1124f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a3f71ef2ccab58baa94377c8f63ded83ca5f73a4d7a24d754719670685c55a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec37d31a82aedaacdf47fa540d508e6ca53626d082c45933c11a72f41a819a8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b6cb8e7a94967cd24e883272ba2f923a3adae43cc6105ea7506c1fba144de241\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://acf80f37f1e38e051b10c7a1c0eeef8f0c3db7fc0e5653c54d82da7956a8eed2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acf80f37f1e38e051b10c7a1c0eeef8f0c3db7fc0e5653c54d82da7956a8eed2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T16:07:52Z\\\",\\\"message\\\":\\\"g file observer\\\\nW0227 16:07:52.525113 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 16:07:52.525413 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 16:07:52.526578 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-70351561/tls.crt::/tmp/serving-cert-70351561/tls.key\\\\\\\"\\\\nI0227 16:07:52.790383 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 16:07:52.794547 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 16:07:52.794587 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 16:07:52.794627 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 16:07:52.794638 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 16:07:52.801197 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0227 16:07:52.801263 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 16:07:52.801293 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0227 16:07:52.801246 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0227 16:07:52.801325 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 16:07:52.801373 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 16:07:52.801389 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 16:07:52.801396 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0227 16:07:52.804476 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T16:07:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97ffad51412be8330198e944d4f687380d5066b777b54e362e8f8b772b808c89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1fc7975111dde3841671f046742eea3d0eefcb040eee8aadb99a7ca8eba8bd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1fc7975111dde3841671f046742eea3d0eefcb040eee8aadb99a7ca8eba8bd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:06:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.962132 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eebea1ec0fd24a7664f474db4e4cdb08e53c5c742c392900599dc3e7adcbee2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"m
ountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.972564 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p7298" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"616ebd42-6bbe-4536-ba35-f8b07f2f11b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a08121a78952c85326d07974c700397678777995d73cce9f24ace8932b3e9b1\\\",\\\"image\\\":\\\"qu
ay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x9g9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p7298\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 16:08:04 crc kubenswrapper[4830]: I0227 16:08:04.980113 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kgdlg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ba2fe32-66e0-4bcd-a646-9d07c9a21c54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9l8vg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9l8vg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kgdlg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 16:08:05 crc kubenswrapper[4830]: I0227 16:08:05.034730 4830 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:05 crc kubenswrapper[4830]: I0227 16:08:05.035009 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:05 crc kubenswrapper[4830]: I0227 16:08:05.035207 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:05 crc kubenswrapper[4830]: I0227 16:08:05.035360 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:05 crc kubenswrapper[4830]: I0227 16:08:05.035501 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:05Z","lastTransitionTime":"2026-02-27T16:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:05 crc kubenswrapper[4830]: I0227 16:08:05.140207 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:05 crc kubenswrapper[4830]: I0227 16:08:05.140282 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:05 crc kubenswrapper[4830]: I0227 16:08:05.140343 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:05 crc kubenswrapper[4830]: I0227 16:08:05.140373 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:05 crc kubenswrapper[4830]: I0227 16:08:05.140395 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:05Z","lastTransitionTime":"2026-02-27T16:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:05 crc kubenswrapper[4830]: I0227 16:08:05.178127 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bf9lh" event={"ID":"2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904","Type":"ContainerStarted","Data":"273a7927f1c00c51d8c6157c6050f697430275ea8b1cac13ebe621d6556b4fef"} Feb 27 16:08:05 crc kubenswrapper[4830]: I0227 16:08:05.178219 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bf9lh" event={"ID":"2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904","Type":"ContainerStarted","Data":"f3371981f3cdb5468ef6e9038108b3f9d884cec2f2db2f2754a3b5bf908d0372"} Feb 27 16:08:05 crc kubenswrapper[4830]: I0227 16:08:05.178250 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bf9lh" event={"ID":"2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904","Type":"ContainerStarted","Data":"05614397aaaa68f352f4263160c3463ce65513156517137bf09a3b5deb505c05"} Feb 27 16:08:05 crc kubenswrapper[4830]: I0227 16:08:05.178278 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bf9lh" event={"ID":"2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904","Type":"ContainerStarted","Data":"41cda3a7a261313b65434f3e8b885a9628d71f9c9a9c55f3e559226341c8aa41"} Feb 27 16:08:05 crc kubenswrapper[4830]: I0227 16:08:05.181172 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rgv8f" event={"ID":"672682a0-e75f-4d6c-b4f2-542944327497","Type":"ContainerDied","Data":"b56418e015eaf4499e3e5e8ade43a2e4fb4803d5621b0db3fb365dd185d375f4"} Feb 27 16:08:05 crc kubenswrapper[4830]: I0227 16:08:05.181061 4830 generic.go:334] "Generic (PLEG): container finished" podID="672682a0-e75f-4d6c-b4f2-542944327497" containerID="b56418e015eaf4499e3e5e8ade43a2e4fb4803d5621b0db3fb365dd185d375f4" exitCode=0 Feb 27 16:08:05 crc kubenswrapper[4830]: I0227 16:08:05.221031 4830 status_manager.go:875] "Failed 
to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d20f886-cfdb-48c7-9754-6b7255b1124f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a3f71ef2ccab58baa94377c8f63ded83ca5f73a4d7a24d754719670685c55a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes
/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec37d31a82aedaacdf47fa540d508e6ca53626d082c45933c11a72f41a819a8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6cb8e7a94967cd24e883272ba2f923a3adae43cc6105ea7506c1fba144de241\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://acf80f37f1e38e051b10c7a1c0eeef8f0c3db7fc0e5653c54d82da7956a8eed2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\
\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acf80f37f1e38e051b10c7a1c0eeef8f0c3db7fc0e5653c54d82da7956a8eed2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T16:07:52Z\\\",\\\"message\\\":\\\"g file observer\\\\nW0227 16:07:52.525113 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 16:07:52.525413 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 16:07:52.526578 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-70351561/tls.crt::/tmp/serving-cert-70351561/tls.key\\\\\\\"\\\\nI0227 16:07:52.790383 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 16:07:52.794547 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 16:07:52.794587 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 16:07:52.794627 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 16:07:52.794638 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 16:07:52.801197 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0227 16:07:52.801263 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 16:07:52.801293 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0227 16:07:52.801246 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0227 16:07:52.801325 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 16:07:52.801373 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 16:07:52.801389 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 16:07:52.801396 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0227 16:07:52.804476 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T16:07:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97ffad51412be8330198e944d4f687380d5066b777b54e362e8f8b772b808c89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1fc7975111dde384
1671f046742eea3d0eefcb040eee8aadb99a7ca8eba8bd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1fc7975111dde3841671f046742eea3d0eefcb040eee8aadb99a7ca8eba8bd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:06:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:05Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:05 crc kubenswrapper[4830]: I0227 16:08:05.244263 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:05 crc kubenswrapper[4830]: I0227 16:08:05.244325 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:05 crc kubenswrapper[4830]: I0227 16:08:05.244343 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:05 crc kubenswrapper[4830]: I0227 16:08:05.244368 4830 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:05 crc kubenswrapper[4830]: I0227 16:08:05.244387 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:05Z","lastTransitionTime":"2026-02-27T16:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:08:05 crc kubenswrapper[4830]: I0227 16:08:05.248994 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eebea1ec0fd24a7664f474db4e4cdb08e53c5c742c392900599dc3e7adcbee2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:05Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:05 crc kubenswrapper[4830]: I0227 16:08:05.262983 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p7298" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"616ebd42-6bbe-4536-ba35-f8b07f2f11b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a08121a78952c85326d07974c700397678777995d73cce9f24ace8932b3e9b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x9g9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p7298\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:05Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:05 crc kubenswrapper[4830]: I0227 16:08:05.282157 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kgdlg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ba2fe32-66e0-4bcd-a646-9d07c9a21c54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9l8vg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9l8vg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kgdlg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:05Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:05 crc 
kubenswrapper[4830]: I0227 16:08:05.300162 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:05Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:05 crc kubenswrapper[4830]: I0227 16:08:05.307407 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:05 crc kubenswrapper[4830]: I0227 16:08:05.307454 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:05 crc kubenswrapper[4830]: I0227 16:08:05.307466 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:05 crc kubenswrapper[4830]: I0227 16:08:05.307486 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:05 crc kubenswrapper[4830]: I0227 16:08:05.307500 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:05Z","lastTransitionTime":"2026-02-27T16:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:08:05 crc kubenswrapper[4830]: I0227 16:08:05.320349 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:05Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:05 crc kubenswrapper[4830]: E0227 16:08:05.325160 4830 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:08:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:08:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:08:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:08:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"058e4d33-3c10-460a-8f66-1f2272cb9956\\\",\\\"systemUUID\\\":\\\"1d4a94de-760c-40e1-8054-66d250f336ee\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:05Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:05 crc kubenswrapper[4830]: I0227 16:08:05.337834 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fsrq9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb72b0f7-1d22-4d13-9653-b1607aa2235d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b684822980dfea25ae15f46337d198cbbb8656b55c38874d917c2fae68431aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\
\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwzp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-fsrq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:05Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:05 crc kubenswrapper[4830]: I0227 16:08:05.338036 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:05 crc kubenswrapper[4830]: I0227 16:08:05.338088 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:05 crc kubenswrapper[4830]: I0227 16:08:05.338109 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:05 crc kubenswrapper[4830]: I0227 16:08:05.338144 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:05 crc kubenswrapper[4830]: I0227 16:08:05.338166 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:05Z","lastTransitionTime":"2026-02-27T16:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:05 crc kubenswrapper[4830]: E0227 16:08:05.363760 4830 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:08:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:08:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:08:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:08:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"058e4d33-3c10-460a-8f66-1f2272cb9956\\\",\\\"systemUUID\\\":\\\"1d4a94de-760c-40e1-8054-66d250f336ee\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:05Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:05 crc kubenswrapper[4830]: I0227 16:08:05.370713 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:05 crc kubenswrapper[4830]: I0227 16:08:05.370782 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:05 crc kubenswrapper[4830]: I0227 16:08:05.370807 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:05 crc kubenswrapper[4830]: I0227 16:08:05.370836 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:05 crc kubenswrapper[4830]: I0227 16:08:05.370858 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:05Z","lastTransitionTime":"2026-02-27T16:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:05 crc kubenswrapper[4830]: I0227 16:08:05.378858 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rgv8f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"672682a0-e75f-4d6c-b4f2-542944327497\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b56418e015eaf4499e3e5e8ade43a2e4fb4803d5621b0db3fb365dd185d375f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b56418e015eaf4499e3e5e8ade43a2e4fb4803d5621b0db3fb365dd185d375f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rgv8f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:05Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:05 crc kubenswrapper[4830]: E0227 16:08:05.389217 4830 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:08:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:08:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:08:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:08:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"058e4d33-3c10-460a-8f66-1f2272cb9956\\\",\\\"systemUUID\\\":\\\"1d4a94de-760c-40e1-8054-66d250f336ee\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:05Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:05 crc kubenswrapper[4830]: I0227 16:08:05.395065 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fcddf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6adbc0c4-e467-41f1-9190-d0dd3693eba6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2a66caf75365a8658dd7422b2f715e7f095b5bd7d96c8a0a96ce1a80cb99849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"st
ate\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zrsv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fcddf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:05Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:05 crc kubenswrapper[4830]: I0227 16:08:05.396264 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:05 crc kubenswrapper[4830]: I0227 16:08:05.396321 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:05 crc kubenswrapper[4830]: I0227 16:08:05.396339 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:05 crc kubenswrapper[4830]: I0227 16:08:05.396400 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:05 crc kubenswrapper[4830]: I0227 16:08:05.396422 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:05Z","lastTransitionTime":"2026-02-27T16:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:08:05 crc kubenswrapper[4830]: I0227 16:08:05.414314 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00d6b7ce-4757-4275-8345-60c1b546ce25\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58864ba639422794b99f6190c7ab9a81537913f51574220acb3838f97b4f9421\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqztt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d403fb60beeb11f3cb72e7ca134b83a9c519375fbb6727d2070118a6b924516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqztt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2tv5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:05Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:05 crc kubenswrapper[4830]: E0227 16:08:05.417629 4830 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed 
to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:08:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:08:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:08:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:08:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"058e4d33-3c10-460a-8f66-1f2272cb9956\\\",\\\"systemUUID\\\":\\\"1d4a94de-760c-40e1-8054-66d250f336ee\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:05Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:05 crc kubenswrapper[4830]: I0227 16:08:05.424393 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:05 crc kubenswrapper[4830]: I0227 16:08:05.424461 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:05 crc kubenswrapper[4830]: I0227 16:08:05.424480 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:05 crc kubenswrapper[4830]: I0227 16:08:05.424523 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:05 crc kubenswrapper[4830]: I0227 16:08:05.424544 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:05Z","lastTransitionTime":"2026-02-27T16:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:05 crc kubenswrapper[4830]: I0227 16:08:05.427879 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gqgb6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd5a4c5b-2008-4354-b26e-8763a631e55c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32024ba3e19bc33cc8197b4dd6fdf165f6de8fd7c3b1e85b2f4768c75c50736a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87p77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14a90f11c3291e6bde2a008f2e8e617c9c8db36ee65c3bbf26112bf8707d7cd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87p77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gqgb6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:05Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:05 crc kubenswrapper[4830]: E0227 16:08:05.443002 4830 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:08:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:08:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:08:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:08:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"058e4d33-3c10-460a-8f66-1f2272cb9956\\\",\\\"systemUUID\\\":\\\"1d4a94de-760c-40e1-8054-66d250f336ee\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:05Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:05 crc kubenswrapper[4830]: E0227 16:08:05.443254 4830 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 27 16:08:05 crc kubenswrapper[4830]: I0227 16:08:05.445774 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:05 crc kubenswrapper[4830]: I0227 16:08:05.445820 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:05 crc kubenswrapper[4830]: I0227 16:08:05.445838 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:05 crc kubenswrapper[4830]: I0227 16:08:05.445863 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:05 crc kubenswrapper[4830]: I0227 16:08:05.445881 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:05Z","lastTransitionTime":"2026-02-27T16:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:05 crc kubenswrapper[4830]: I0227 16:08:05.453095 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:05Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:05 crc kubenswrapper[4830]: I0227 16:08:05.476001 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4995a66c4bc6d195947f67ef564aa18c7fe145e12067c2fca8e0de6b76b72f19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df5245f048370e7455a63a6ca01bffb8e514e22a123e45bc3a064ce4fbe27f45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:05Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:05 crc kubenswrapper[4830]: I0227 16:08:05.495512 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with 
unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:05Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:05 crc kubenswrapper[4830]: I0227 16:08:05.526440 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bf9lh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45e519054da5e98ba4be7deaac7911617f787b44d308bda8d4ee6fb7573336a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45e519054da5e98ba4be7deaac7911617f787b44d308bda8d4ee6fb7573336a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bf9lh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:05Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:05 crc kubenswrapper[4830]: I0227 16:08:05.547972 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:05 crc kubenswrapper[4830]: I0227 16:08:05.548019 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:05 crc kubenswrapper[4830]: I0227 16:08:05.548033 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:05 crc kubenswrapper[4830]: I0227 16:08:05.548053 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:05 crc kubenswrapper[4830]: I0227 16:08:05.548066 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:05Z","lastTransitionTime":"2026-02-27T16:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:05 crc kubenswrapper[4830]: I0227 16:08:05.651140 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:05 crc kubenswrapper[4830]: I0227 16:08:05.651204 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:05 crc kubenswrapper[4830]: I0227 16:08:05.651219 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:05 crc kubenswrapper[4830]: I0227 16:08:05.651244 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:05 crc kubenswrapper[4830]: I0227 16:08:05.651263 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:05Z","lastTransitionTime":"2026-02-27T16:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:05 crc kubenswrapper[4830]: I0227 16:08:05.754378 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:05 crc kubenswrapper[4830]: I0227 16:08:05.754456 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:05 crc kubenswrapper[4830]: I0227 16:08:05.754475 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:05 crc kubenswrapper[4830]: I0227 16:08:05.754504 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:05 crc kubenswrapper[4830]: I0227 16:08:05.754522 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:05Z","lastTransitionTime":"2026-02-27T16:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:05 crc kubenswrapper[4830]: I0227 16:08:05.858333 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:05 crc kubenswrapper[4830]: I0227 16:08:05.858405 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:05 crc kubenswrapper[4830]: I0227 16:08:05.858423 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:05 crc kubenswrapper[4830]: I0227 16:08:05.858453 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:05 crc kubenswrapper[4830]: I0227 16:08:05.858473 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:05Z","lastTransitionTime":"2026-02-27T16:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:05 crc kubenswrapper[4830]: I0227 16:08:05.961583 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:05 crc kubenswrapper[4830]: I0227 16:08:05.961653 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:05 crc kubenswrapper[4830]: I0227 16:08:05.961670 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:05 crc kubenswrapper[4830]: I0227 16:08:05.961713 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:05 crc kubenswrapper[4830]: I0227 16:08:05.961732 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:05Z","lastTransitionTime":"2026-02-27T16:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:06 crc kubenswrapper[4830]: I0227 16:08:06.064991 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:06 crc kubenswrapper[4830]: I0227 16:08:06.065056 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:06 crc kubenswrapper[4830]: I0227 16:08:06.065075 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:06 crc kubenswrapper[4830]: I0227 16:08:06.065103 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:06 crc kubenswrapper[4830]: I0227 16:08:06.065120 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:06Z","lastTransitionTime":"2026-02-27T16:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:06 crc kubenswrapper[4830]: I0227 16:08:06.167889 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:06 crc kubenswrapper[4830]: I0227 16:08:06.167977 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:06 crc kubenswrapper[4830]: I0227 16:08:06.167997 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:06 crc kubenswrapper[4830]: I0227 16:08:06.168050 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:06 crc kubenswrapper[4830]: I0227 16:08:06.168068 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:06Z","lastTransitionTime":"2026-02-27T16:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:06 crc kubenswrapper[4830]: I0227 16:08:06.186723 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rgv8f" event={"ID":"672682a0-e75f-4d6c-b4f2-542944327497","Type":"ContainerStarted","Data":"441562bcb0ff6663b1affd84ae53348e082907782a89293730d0d076e9e84d9d"} Feb 27 16:08:06 crc kubenswrapper[4830]: I0227 16:08:06.193334 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bf9lh" event={"ID":"2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904","Type":"ContainerStarted","Data":"a05759873cd3a2fb58756795e02d8247af4d8204388bf669ddc7cb1a8fbd7baa"} Feb 27 16:08:06 crc kubenswrapper[4830]: I0227 16:08:06.193412 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bf9lh" event={"ID":"2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904","Type":"ContainerStarted","Data":"dc95e89196ccb52580164d57b311b57b2bcc7e666c3801f53378c59446c0e73f"} Feb 27 16:08:06 crc kubenswrapper[4830]: I0227 16:08:06.223618 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bf9lh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-ce
rt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":
\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-open
vswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45e519054da5e98ba4be7deaac7911617f787b44d308bda8d4ee6fb7573336a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45e519054da5e98ba4be7deaac7911617f787b44d308bda8d4ee6fb7573336a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bf9lh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:06Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:06 crc kubenswrapper[4830]: I0227 16:08:06.242062 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00d6b7ce-4757-4275-8345-60c1b546ce25\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58864ba639422794b99f6190c7ab9a81537913f51574220acb3838f97b4f9421\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqztt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d403fb60beeb11f3cb72e7ca134b83a9c519375
fbb6727d2070118a6b924516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqztt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2tv5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:06Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:06 crc kubenswrapper[4830]: I0227 16:08:06.259926 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gqgb6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd5a4c5b-2008-4354-b26e-8763a631e55c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32024ba3e19bc33cc8197b4dd6fdf165f6de8fd7c3b1e85b2f4768c75c50736a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87p77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14a90f11c3291e6bde2a008f2e8e617c9c8db
36ee65c3bbf26112bf8707d7cd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87p77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gqgb6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:06Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:06 crc kubenswrapper[4830]: I0227 16:08:06.271832 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:06 crc kubenswrapper[4830]: I0227 16:08:06.271863 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:06 crc kubenswrapper[4830]: I0227 16:08:06.271872 4830 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:06 crc kubenswrapper[4830]: I0227 16:08:06.271885 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:06 crc kubenswrapper[4830]: I0227 16:08:06.271894 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:06Z","lastTransitionTime":"2026-02-27T16:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:08:06 crc kubenswrapper[4830]: I0227 16:08:06.283158 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:06Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:06 crc kubenswrapper[4830]: I0227 16:08:06.308447 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4995a66c4bc6d195947f67ef564aa18c7fe145e12067c2fca8e0de6b76b72f19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df5245f048370e7455a63a6ca01bffb8e514e22a123e45bc3a064ce4fbe27f45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:06Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:06 crc kubenswrapper[4830]: I0227 16:08:06.326059 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with 
unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:06Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:06 crc kubenswrapper[4830]: I0227 16:08:06.344694 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d20f886-cfdb-48c7-9754-6b7255b1124f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a3f71ef2ccab58baa94377c8f63ded83ca5f73a4d7a24d754719670685c55a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec37d31a82aedaacdf47fa540d508e6ca53626d082c45933c11a72f41a819a8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6cb8e7a94967cd24e883272ba2f923a3adae43cc6105ea7506c1fba144de241\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://acf80f37f1e38e051b10c7a1c0eeef8f0c3db7fc0e5653c54d82da7956a8eed2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acf80f37f1e38e051b10c7a1c0eeef8f0c3db7fc0e5653c54d82da7956a8eed2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T16:07:52Z\\\",\\\"message\\\":\\\"g file observer\\\\nW0227 16:07:52.525113 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 16:07:52.525413 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 16:07:52.526578 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-70351561/tls.crt::/tmp/serving-cert-70351561/tls.key\\\\\\\"\\\\nI0227 16:07:52.790383 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 16:07:52.794547 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 16:07:52.794587 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 16:07:52.794627 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 16:07:52.794638 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 16:07:52.801197 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0227 16:07:52.801263 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 16:07:52.801293 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0227 16:07:52.801246 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0227 16:07:52.801325 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 
16:07:52.801373 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 16:07:52.801389 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 16:07:52.801396 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0227 16:07:52.804476 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T16:07:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97ffad51412be8330198e944d4f687380d5066b777b54e362e8f8b772b808c89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1fc7975111dde3841671f046742eea3d0eefcb040eee8aadb99a7ca8eba8bd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1fc7975111dde3841671f046742eea3d0eefcb040eee8aadb99a7ca8eba8bd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:06:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:06Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:06 crc kubenswrapper[4830]: I0227 16:08:06.362292 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eebea1ec0fd24a7664f474db4e4cdb08e53c5c742c392900599dc3e7adcbee2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:06Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:06 crc kubenswrapper[4830]: I0227 16:08:06.374007 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:06 crc kubenswrapper[4830]: I0227 16:08:06.374039 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:06 crc kubenswrapper[4830]: I0227 16:08:06.374052 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:06 crc kubenswrapper[4830]: I0227 16:08:06.374071 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:06 crc kubenswrapper[4830]: I0227 16:08:06.374085 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:06Z","lastTransitionTime":"2026-02-27T16:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:06 crc kubenswrapper[4830]: I0227 16:08:06.378998 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p7298" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"616ebd42-6bbe-4536-ba35-f8b07f2f11b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a08121a78952c85326d07974c700397678777995d73cce9f24ace8932b3e9b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x9g9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p7298\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:06Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:06 crc kubenswrapper[4830]: I0227 16:08:06.397209 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kgdlg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ba2fe32-66e0-4bcd-a646-9d07c9a21c54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9l8vg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9l8vg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kgdlg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:06Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:06 crc kubenswrapper[4830]: I0227 16:08:06.414674 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:06Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:06 crc kubenswrapper[4830]: I0227 16:08:06.433773 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:06Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:06 crc kubenswrapper[4830]: I0227 16:08:06.455170 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fsrq9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb72b0f7-1d22-4d13-9653-b1607aa2235d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b684822980dfea25ae15f46337d198cbbb8656b55c38874d917c2fae68431aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwzp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fsrq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:06Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:06 crc kubenswrapper[4830]: I0227 16:08:06.478231 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rgv8f" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"672682a0-e75f-4d6c-b4f2-542944327497\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b56418e015eaf4499e3e5e8ade43a2e4fb4803d5621b0db3fb365dd185d375f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b56418e015eaf4499e3e5e8ade43a2e4fb4803d5621b0db3fb365dd185d375f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://441562bcb0ff6663b1affd84ae53348e082907782a89293730d0d076e9e84d9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath
\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOn
ly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rgv8f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:06Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:06 crc kubenswrapper[4830]: I0227 16:08:06.478631 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:06 crc kubenswrapper[4830]: I0227 16:08:06.478678 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:06 crc kubenswrapper[4830]: I0227 16:08:06.478695 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:06 crc kubenswrapper[4830]: I0227 16:08:06.478724 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 
16:08:06 crc kubenswrapper[4830]: I0227 16:08:06.478744 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:06Z","lastTransitionTime":"2026-02-27T16:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:08:06 crc kubenswrapper[4830]: I0227 16:08:06.500602 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fcddf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6adbc0c4-e467-41f1-9190-d0dd3693eba6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2a66caf75365a8658dd7422b2f715e7f095b5bd7d96c8a0a96ce1a80cb99849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f41
6f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zrsv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fcddf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:06Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:06 crc kubenswrapper[4830]: I0227 16:08:06.513426 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 16:08:06 crc kubenswrapper[4830]: I0227 16:08:06.513646 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 16:08:06 crc 
kubenswrapper[4830]: E0227 16:08:06.513742 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 16:08:10.513693483 +0000 UTC m=+86.602965986 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:08:06 crc kubenswrapper[4830]: I0227 16:08:06.513837 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 16:08:06 crc kubenswrapper[4830]: E0227 16:08:06.513864 4830 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 27 16:08:06 crc kubenswrapper[4830]: E0227 16:08:06.513900 4830 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 27 16:08:06 crc kubenswrapper[4830]: E0227 16:08:06.513918 4830 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 16:08:06 crc kubenswrapper[4830]: E0227 16:08:06.513993 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-27 16:08:10.51397343 +0000 UTC m=+86.603245893 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 16:08:06 crc kubenswrapper[4830]: I0227 16:08:06.514032 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 16:08:06 crc kubenswrapper[4830]: I0227 16:08:06.514090 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 16:08:06 crc kubenswrapper[4830]: E0227 16:08:06.514133 4830 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 27 16:08:06 crc kubenswrapper[4830]: E0227 16:08:06.514180 4830 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 27 16:08:06 crc kubenswrapper[4830]: E0227 16:08:06.514202 4830 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 16:08:06 crc kubenswrapper[4830]: E0227 16:08:06.514280 4830 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 27 16:08:06 crc kubenswrapper[4830]: E0227 16:08:06.514291 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-27 16:08:10.514265607 +0000 UTC m=+86.603538110 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 16:08:06 crc kubenswrapper[4830]: E0227 16:08:06.514335 4830 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 27 16:08:06 crc kubenswrapper[4830]: E0227 16:08:06.514351 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-27 16:08:10.514335229 +0000 UTC m=+86.603607732 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 27 16:08:06 crc kubenswrapper[4830]: E0227 16:08:06.514513 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-27 16:08:10.514484062 +0000 UTC m=+86.603756555 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 27 16:08:06 crc kubenswrapper[4830]: I0227 16:08:06.582066 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:06 crc kubenswrapper[4830]: I0227 16:08:06.582131 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:06 crc kubenswrapper[4830]: I0227 16:08:06.582150 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:06 crc kubenswrapper[4830]: I0227 16:08:06.582177 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:06 crc kubenswrapper[4830]: I0227 16:08:06.582196 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:06Z","lastTransitionTime":"2026-02-27T16:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:06 crc kubenswrapper[4830]: I0227 16:08:06.615462 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6ba2fe32-66e0-4bcd-a646-9d07c9a21c54-metrics-certs\") pod \"network-metrics-daemon-kgdlg\" (UID: \"6ba2fe32-66e0-4bcd-a646-9d07c9a21c54\") " pod="openshift-multus/network-metrics-daemon-kgdlg" Feb 27 16:08:06 crc kubenswrapper[4830]: E0227 16:08:06.615820 4830 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 27 16:08:06 crc kubenswrapper[4830]: E0227 16:08:06.615922 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6ba2fe32-66e0-4bcd-a646-9d07c9a21c54-metrics-certs podName:6ba2fe32-66e0-4bcd-a646-9d07c9a21c54 nodeName:}" failed. No retries permitted until 2026-02-27 16:08:10.615893598 +0000 UTC m=+86.705166091 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6ba2fe32-66e0-4bcd-a646-9d07c9a21c54-metrics-certs") pod "network-metrics-daemon-kgdlg" (UID: "6ba2fe32-66e0-4bcd-a646-9d07c9a21c54") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 27 16:08:06 crc kubenswrapper[4830]: I0227 16:08:06.686217 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:06 crc kubenswrapper[4830]: I0227 16:08:06.686293 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:06 crc kubenswrapper[4830]: I0227 16:08:06.686316 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:06 crc kubenswrapper[4830]: I0227 16:08:06.686347 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:06 crc kubenswrapper[4830]: I0227 16:08:06.686374 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:06Z","lastTransitionTime":"2026-02-27T16:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:08:06 crc kubenswrapper[4830]: I0227 16:08:06.762143 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 16:08:06 crc kubenswrapper[4830]: I0227 16:08:06.762239 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 16:08:06 crc kubenswrapper[4830]: I0227 16:08:06.762244 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kgdlg" Feb 27 16:08:06 crc kubenswrapper[4830]: I0227 16:08:06.762414 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 16:08:06 crc kubenswrapper[4830]: E0227 16:08:06.762406 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 16:08:06 crc kubenswrapper[4830]: E0227 16:08:06.762591 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kgdlg" podUID="6ba2fe32-66e0-4bcd-a646-9d07c9a21c54" Feb 27 16:08:06 crc kubenswrapper[4830]: E0227 16:08:06.762758 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 16:08:06 crc kubenswrapper[4830]: E0227 16:08:06.762869 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 16:08:06 crc kubenswrapper[4830]: I0227 16:08:06.789800 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:06 crc kubenswrapper[4830]: I0227 16:08:06.789847 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:06 crc kubenswrapper[4830]: I0227 16:08:06.789861 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:06 crc kubenswrapper[4830]: I0227 16:08:06.789884 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:06 crc kubenswrapper[4830]: I0227 16:08:06.789900 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:06Z","lastTransitionTime":"2026-02-27T16:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:06 crc kubenswrapper[4830]: I0227 16:08:06.893358 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:06 crc kubenswrapper[4830]: I0227 16:08:06.893406 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:06 crc kubenswrapper[4830]: I0227 16:08:06.893418 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:06 crc kubenswrapper[4830]: I0227 16:08:06.893437 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:06 crc kubenswrapper[4830]: I0227 16:08:06.893450 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:06Z","lastTransitionTime":"2026-02-27T16:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:06 crc kubenswrapper[4830]: I0227 16:08:06.996647 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:06 crc kubenswrapper[4830]: I0227 16:08:06.996982 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:06 crc kubenswrapper[4830]: I0227 16:08:06.996991 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:06 crc kubenswrapper[4830]: I0227 16:08:06.997006 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:06 crc kubenswrapper[4830]: I0227 16:08:06.997015 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:06Z","lastTransitionTime":"2026-02-27T16:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:07 crc kubenswrapper[4830]: I0227 16:08:07.099980 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:07 crc kubenswrapper[4830]: I0227 16:08:07.100052 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:07 crc kubenswrapper[4830]: I0227 16:08:07.100069 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:07 crc kubenswrapper[4830]: I0227 16:08:07.100094 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:07 crc kubenswrapper[4830]: I0227 16:08:07.100111 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:07Z","lastTransitionTime":"2026-02-27T16:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:07 crc kubenswrapper[4830]: I0227 16:08:07.198047 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"92b20f888e852fcbe89e6d5c685b4b0fad411956bb3dc3c5ba0e10d6b206e131"} Feb 27 16:08:07 crc kubenswrapper[4830]: I0227 16:08:07.200226 4830 generic.go:334] "Generic (PLEG): container finished" podID="672682a0-e75f-4d6c-b4f2-542944327497" containerID="441562bcb0ff6663b1affd84ae53348e082907782a89293730d0d076e9e84d9d" exitCode=0 Feb 27 16:08:07 crc kubenswrapper[4830]: I0227 16:08:07.200271 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rgv8f" event={"ID":"672682a0-e75f-4d6c-b4f2-542944327497","Type":"ContainerDied","Data":"441562bcb0ff6663b1affd84ae53348e082907782a89293730d0d076e9e84d9d"} Feb 27 16:08:07 crc kubenswrapper[4830]: I0227 16:08:07.202694 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:07 crc kubenswrapper[4830]: I0227 16:08:07.202743 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:07 crc kubenswrapper[4830]: I0227 16:08:07.202757 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:07 crc kubenswrapper[4830]: I0227 16:08:07.202776 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:07 crc kubenswrapper[4830]: I0227 16:08:07.202797 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:07Z","lastTransitionTime":"2026-02-27T16:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:08:07 crc kubenswrapper[4830]: I0227 16:08:07.216913 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kgdlg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ba2fe32-66e0-4bcd-a646-9d07c9a21c54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9l8vg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9l8vg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kgdlg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:07Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:07 crc 
kubenswrapper[4830]: I0227 16:08:07.237637 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eebea1ec0fd24a7664f474db4e4cdb08e53c5c742c392900599dc3e7adcbee2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:07Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:07 crc kubenswrapper[4830]: I0227 16:08:07.250314 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p7298" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"616ebd42-6bbe-4536-ba35-f8b07f2f11b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a08121a78952c85326d07974c700397678777995d73cce9f24ace8932b3e9b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"run
ning\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x9g9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p7298\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:07Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:07 crc kubenswrapper[4830]: I0227 16:08:07.273340 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fcddf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6adbc0c4-e467-41f1-9190-d0dd3693eba6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2a66caf75365a8658dd7422b2f715e7f095b5bd7d96c8a0a96ce1a80cb99849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zrsv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fcddf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:07Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:07 crc kubenswrapper[4830]: I0227 16:08:07.293272 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:07Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:07 crc kubenswrapper[4830]: I0227 16:08:07.305189 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:07 crc kubenswrapper[4830]: I0227 16:08:07.305230 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:07 crc kubenswrapper[4830]: I0227 16:08:07.305242 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:07 crc kubenswrapper[4830]: I0227 16:08:07.305262 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:07 crc kubenswrapper[4830]: I0227 16:08:07.305279 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:07Z","lastTransitionTime":"2026-02-27T16:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:08:07 crc kubenswrapper[4830]: I0227 16:08:07.311880 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:07Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:07 crc kubenswrapper[4830]: I0227 16:08:07.329483 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fsrq9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb72b0f7-1d22-4d13-9653-b1607aa2235d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b684822980dfea25ae15f46337d198cbbb8656b55c38874d917c2fae68431aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwzp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fsrq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:07Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:07 crc kubenswrapper[4830]: I0227 16:08:07.350289 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rgv8f" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"672682a0-e75f-4d6c-b4f2-542944327497\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b56418e015eaf4499e3e5e8ade43a2e4fb4803d5621b0db3fb365dd185d375f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b56418e015eaf4499e3e5e8ade43a2e4fb4803d5621b0db3fb365dd185d375f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://441562bcb0ff6663b1affd84ae53348e082907782a89293730d0d076e9e84d9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath
\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOn
ly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rgv8f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:07Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:07 crc kubenswrapper[4830]: I0227 16:08:07.366426 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92b20f888e852fcbe89e6d5c685b4b0fad411956bb3dc3c5ba0e10d6b206e131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-27T16:08:07Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:07 crc kubenswrapper[4830]: I0227 16:08:07.390251 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bf9lh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45e519054da5e98ba4be7deaac7911617f787b44d308bda8d4ee6fb7573336a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45e519054da5e98ba4be7deaac7911617f787b44d308bda8d4ee6fb7573336a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bf9lh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:07Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:07 crc kubenswrapper[4830]: I0227 16:08:07.405768 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00d6b7ce-4757-4275-8345-60c1b546ce25\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58864ba639422794b99f6190c7ab9a81537913f51574220acb3838f97b4f9421\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqztt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d403fb60beeb11f3cb72e7ca134b83a9c519375fbb6727d2070118a6b924516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqztt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-2tv5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:07Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:07 crc kubenswrapper[4830]: I0227 16:08:07.407826 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:07 crc kubenswrapper[4830]: I0227 16:08:07.407889 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:07 crc kubenswrapper[4830]: I0227 16:08:07.407907 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:07 crc kubenswrapper[4830]: I0227 16:08:07.407933 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:07 crc kubenswrapper[4830]: I0227 16:08:07.408002 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:07Z","lastTransitionTime":"2026-02-27T16:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:07 crc kubenswrapper[4830]: I0227 16:08:07.419504 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gqgb6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd5a4c5b-2008-4354-b26e-8763a631e55c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32024ba3e19bc33cc8197b4dd6fdf165f6de8fd7c3b1e85b2f4768c75c50736a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87p77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14a90f11c3291e6bde2a008f2e8e617c9c8db36ee65c3bbf26112bf8707d7cd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87p77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gqgb6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:07Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:07 crc kubenswrapper[4830]: I0227 16:08:07.441681 4830 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:07Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:07 crc kubenswrapper[4830]: I0227 16:08:07.459667 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4995a66c4bc6d195947f67ef564aa18c7fe145e12067c2fca8e0de6b76b72f19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df5245f048370e7455a63a6ca01bffb8e514e22a123e45bc3a064ce4fbe27f45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:07Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:07 crc kubenswrapper[4830]: I0227 16:08:07.480634 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d20f886-cfdb-48c7-9754-6b7255b1124f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a3f71ef2ccab58baa94377c8f63ded83ca5f73a4d7a24d754719670685c55a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec37d31a82aedaacdf47fa540d508e6ca53626d082c45933c11a72f41a819a8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6cb8e7a94967cd24e883272ba2f923a3adae43cc6105ea7506c1fba144de241\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://acf80f37f1e38e051b10c7a1c0eeef8f0c3db7fc0e5653c54d82da7956a8eed2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acf80f37f1e38e051b10c7a1c0eeef8f0c3db7fc0e5653c54d82da7956a8eed2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T16:07:52Z\\\",\\\"message\\\":\\\"g file observer\\\\nW0227 16:07:52.525113 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 16:07:52.525413 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 16:07:52.526578 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-70351561/tls.crt::/tmp/serving-cert-70351561/tls.key\\\\\\\"\\\\nI0227 16:07:52.790383 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 16:07:52.794547 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 16:07:52.794587 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 16:07:52.794627 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 16:07:52.794638 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 16:07:52.801197 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0227 16:07:52.801263 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 16:07:52.801293 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0227 16:07:52.801246 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0227 16:07:52.801325 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 16:07:52.801373 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 16:07:52.801389 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 16:07:52.801396 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0227 16:07:52.804476 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T16:07:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97ffad51412be8330198e944d4f687380d5066b777b54e362e8f8b772b808c89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1fc7975111dde3841671f046742eea3d0eefcb040eee8aadb99a7ca8eba8bd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1f
c7975111dde3841671f046742eea3d0eefcb040eee8aadb99a7ca8eba8bd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:06:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:07Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:07 crc kubenswrapper[4830]: I0227 16:08:07.501577 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:07Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:07 crc kubenswrapper[4830]: I0227 16:08:07.510766 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:07 crc kubenswrapper[4830]: I0227 16:08:07.510818 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:07 crc kubenswrapper[4830]: I0227 16:08:07.510835 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:07 crc 
kubenswrapper[4830]: I0227 16:08:07.510854 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:07 crc kubenswrapper[4830]: I0227 16:08:07.510866 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:07Z","lastTransitionTime":"2026-02-27T16:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:08:07 crc kubenswrapper[4830]: I0227 16:08:07.522431 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:07Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:07 crc kubenswrapper[4830]: I0227 16:08:07.542352 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fsrq9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb72b0f7-1d22-4d13-9653-b1607aa2235d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b684822980dfea25ae15f46337d198cbbb8656b55c38874d917c2fae68431aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwzp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fsrq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:07Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:07 crc kubenswrapper[4830]: I0227 16:08:07.565087 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rgv8f" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"672682a0-e75f-4d6c-b4f2-542944327497\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b56418e015eaf4499e3e5e8ade43a2e4fb4803d5621b0db3fb365dd185d375f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b56418e015eaf4499e3e5e8ade43a2e4fb4803d5621b0db3fb365dd185d375f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://441562bcb0ff6663b1affd84ae53348e082907782a89293730d0d076e9e84d9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://441562bcb0ff6663b1affd84ae53348e082907782a89293730d0d076e9e84d9d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rgv8f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:07Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:07 crc kubenswrapper[4830]: I0227 16:08:07.582236 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fcddf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6adbc0c4-e467-41f1-9190-d0dd3693eba6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2a66caf75365a8658dd7422b2f715e7f095b5bd7d96c8a0a96ce1a80cb99849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zrsv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fcddf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:07Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:07 crc kubenswrapper[4830]: I0227 16:08:07.599628 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gqgb6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd5a4c5b-2008-4354-b26e-8763a631e55c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32024ba3e19bc33cc8197b4dd6fdf165f6de8fd7c3b1e85b2f4768c75c50736a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d7
93426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87p77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14a90f11c3291e6bde2a008f2e8e617c9c8db36ee65c3bbf26112bf8707d7cd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87p77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gqgb6\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:07Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:07 crc kubenswrapper[4830]: I0227 16:08:07.613869 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:07 crc kubenswrapper[4830]: I0227 16:08:07.613930 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:07 crc kubenswrapper[4830]: I0227 16:08:07.613976 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:07 crc kubenswrapper[4830]: I0227 16:08:07.614005 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:07 crc kubenswrapper[4830]: I0227 16:08:07.614022 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:07Z","lastTransitionTime":"2026-02-27T16:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:07 crc kubenswrapper[4830]: I0227 16:08:07.620336 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:07Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:07 crc kubenswrapper[4830]: I0227 16:08:07.645874 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4995a66c4bc6d195947f67ef564aa18c7fe145e12067c2fca8e0de6b76b72f19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df5245f048370e7455a63a6ca01bffb8e514e22a123e45bc3a064ce4fbe27f45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:07Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:07 crc kubenswrapper[4830]: I0227 16:08:07.659553 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92b20f888e852fcbe89e6d5c685b4b0fad411956bb3dc3c5ba0e10d6b206e131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-27T16:08:07Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:07 crc kubenswrapper[4830]: I0227 16:08:07.684435 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bf9lh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45e519054da5e98ba4be7deaac7911617f787b44d308bda8d4ee6fb7573336a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45e519054da5e98ba4be7deaac7911617f787b44d308bda8d4ee6fb7573336a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bf9lh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:07Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:07 crc kubenswrapper[4830]: I0227 16:08:07.701820 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00d6b7ce-4757-4275-8345-60c1b546ce25\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58864ba639422794b99f6190c7ab9a81537913f51574220acb3838f97b4f9421\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqztt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d403fb60beeb11f3cb72e7ca134b83a9c519375fbb6727d2070118a6b924516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqztt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-2tv5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:07Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:07 crc kubenswrapper[4830]: I0227 16:08:07.717145 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:07 crc kubenswrapper[4830]: I0227 16:08:07.717192 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:07 crc kubenswrapper[4830]: I0227 16:08:07.717205 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:07 crc kubenswrapper[4830]: I0227 16:08:07.717223 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:07 crc kubenswrapper[4830]: I0227 16:08:07.717237 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:07Z","lastTransitionTime":"2026-02-27T16:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:07 crc kubenswrapper[4830]: I0227 16:08:07.719594 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d20f886-cfdb-48c7-9754-6b7255b1124f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a3f71ef2ccab58baa94377c8f63ded83ca5f73a4d7a24d754719670685c55a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec37d31a82aedaacdf47fa540d508e6ca53626d082c45933c11a72f41a819a8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b6cb8e7a94967cd24e883272ba2f923a3adae43cc6105ea7506c1fba144de241\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://acf80f37f1e38e051b10c7a1c0eeef8f0c3db7fc0e5653c54d82da7956a8eed2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acf80f37f1e38e051b10c7a1c0eeef8f0c3db7fc0e5653c54d82da7956a8eed2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T16:07:52Z\\\",\\\"message\\\":\\\"g file observer\\\\nW0227 16:07:52.525113 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 16:07:52.525413 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 16:07:52.526578 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-70351561/tls.crt::/tmp/serving-cert-70351561/tls.key\\\\\\\"\\\\nI0227 16:07:52.790383 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 16:07:52.794547 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 16:07:52.794587 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 16:07:52.794627 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 16:07:52.794638 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 16:07:52.801197 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0227 16:07:52.801263 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 16:07:52.801293 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0227 16:07:52.801246 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0227 16:07:52.801325 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 16:07:52.801373 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 16:07:52.801389 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 16:07:52.801396 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0227 16:07:52.804476 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T16:07:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97ffad51412be8330198e944d4f687380d5066b777b54e362e8f8b772b808c89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1fc7975111dde3841671f046742eea3d0eefcb040eee8aadb99a7ca8eba8bd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1fc7975111dde3841671f046742eea3d0eefcb040eee8aadb99a7ca8eba8bd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:06:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:07Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:07 crc kubenswrapper[4830]: I0227 16:08:07.738137 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eebea1ec0fd24a7664f474db4e4cdb08e53c5c742c392900599dc3e7adcbee2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\
":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:07Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:07 crc kubenswrapper[4830]: I0227 16:08:07.752807 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p7298" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"616ebd42-6bbe-4536-ba35-f8b07f2f11b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a08121a78952c85326d07974c700397678777995d73cce9f24ace8932b3e9b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x9g9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p7298\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:07Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:07 crc kubenswrapper[4830]: I0227 16:08:07.767434 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kgdlg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ba2fe32-66e0-4bcd-a646-9d07c9a21c54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9l8vg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9l8vg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kgdlg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:07Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:07 crc 
kubenswrapper[4830]: I0227 16:08:07.820055 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:07 crc kubenswrapper[4830]: I0227 16:08:07.820110 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:07 crc kubenswrapper[4830]: I0227 16:08:07.820124 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:07 crc kubenswrapper[4830]: I0227 16:08:07.820146 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:07 crc kubenswrapper[4830]: I0227 16:08:07.820160 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:07Z","lastTransitionTime":"2026-02-27T16:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:07 crc kubenswrapper[4830]: I0227 16:08:07.923374 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:07 crc kubenswrapper[4830]: I0227 16:08:07.923431 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:07 crc kubenswrapper[4830]: I0227 16:08:07.923449 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:07 crc kubenswrapper[4830]: I0227 16:08:07.923476 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:07 crc kubenswrapper[4830]: I0227 16:08:07.923494 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:07Z","lastTransitionTime":"2026-02-27T16:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:08 crc kubenswrapper[4830]: I0227 16:08:08.026127 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:08 crc kubenswrapper[4830]: I0227 16:08:08.026189 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:08 crc kubenswrapper[4830]: I0227 16:08:08.026206 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:08 crc kubenswrapper[4830]: I0227 16:08:08.026230 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:08 crc kubenswrapper[4830]: I0227 16:08:08.026247 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:08Z","lastTransitionTime":"2026-02-27T16:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:08 crc kubenswrapper[4830]: I0227 16:08:08.129158 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:08 crc kubenswrapper[4830]: I0227 16:08:08.129214 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:08 crc kubenswrapper[4830]: I0227 16:08:08.129230 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:08 crc kubenswrapper[4830]: I0227 16:08:08.129251 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:08 crc kubenswrapper[4830]: I0227 16:08:08.129269 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:08Z","lastTransitionTime":"2026-02-27T16:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:08 crc kubenswrapper[4830]: I0227 16:08:08.209830 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bf9lh" event={"ID":"2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904","Type":"ContainerStarted","Data":"4c9e139c0599bb81726ae3f42b0d9e6ee394e3d6641eb356cf5c63fe9260f4c0"} Feb 27 16:08:08 crc kubenswrapper[4830]: I0227 16:08:08.213494 4830 generic.go:334] "Generic (PLEG): container finished" podID="672682a0-e75f-4d6c-b4f2-542944327497" containerID="be1e4280634a32e892b1dbc9bb4034909146da92eb6b01aa69bb7208447a7394" exitCode=0 Feb 27 16:08:08 crc kubenswrapper[4830]: I0227 16:08:08.213566 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rgv8f" event={"ID":"672682a0-e75f-4d6c-b4f2-542944327497","Type":"ContainerDied","Data":"be1e4280634a32e892b1dbc9bb4034909146da92eb6b01aa69bb7208447a7394"} Feb 27 16:08:08 crc kubenswrapper[4830]: I0227 16:08:08.231065 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kgdlg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ba2fe32-66e0-4bcd-a646-9d07c9a21c54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9l8vg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9l8vg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kgdlg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:08Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:08 crc kubenswrapper[4830]: I0227 16:08:08.232011 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:08 crc kubenswrapper[4830]: I0227 16:08:08.232058 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:08 crc kubenswrapper[4830]: I0227 16:08:08.232076 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:08 crc kubenswrapper[4830]: I0227 16:08:08.232100 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:08 crc kubenswrapper[4830]: I0227 16:08:08.232118 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:08Z","lastTransitionTime":"2026-02-27T16:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:08 crc kubenswrapper[4830]: I0227 16:08:08.252388 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eebea1ec0fd24a7664f474db4e4cdb08e53c5c742c392900599dc3e7adcbee2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:08Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:08 crc kubenswrapper[4830]: I0227 16:08:08.269841 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p7298" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"616ebd42-6bbe-4536-ba35-f8b07f2f11b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a08121a78952c85326d07974c700397678777995d73cce9f24ace8932b3e9b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x9g9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p7298\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:08Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:08 crc kubenswrapper[4830]: I0227 16:08:08.282814 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fcddf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6adbc0c4-e467-41f1-9190-d0dd3693eba6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2a66caf75365a8658dd7422b2f715e7f095b5bd7d96c8a0a96ce1a80cb99849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zrsv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fcddf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:08Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:08 crc kubenswrapper[4830]: I0227 16:08:08.304875 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:08Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:08 crc kubenswrapper[4830]: I0227 16:08:08.327762 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:08Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:08 crc kubenswrapper[4830]: I0227 16:08:08.334749 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:08 crc kubenswrapper[4830]: I0227 16:08:08.334799 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:08 crc kubenswrapper[4830]: I0227 16:08:08.334814 4830 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:08 crc kubenswrapper[4830]: I0227 16:08:08.334834 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:08 crc kubenswrapper[4830]: I0227 16:08:08.334846 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:08Z","lastTransitionTime":"2026-02-27T16:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:08:08 crc kubenswrapper[4830]: I0227 16:08:08.349309 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fsrq9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb72b0f7-1d22-4d13-9653-b1607aa2235d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b684822980dfea25ae15f46337d198cbbb8656b55c
38874d917c2fae68431aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernet
es.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwzp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fsrq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:08Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:08 crc kubenswrapper[4830]: I0227 16:08:08.368876 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rgv8f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"672682a0-e75f-4d6c-b4f2-542944327497\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b56418e015eaf4499e3e5e8ade43a2e4fb4803d5621b0db3fb365dd185d375f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b56418e015eaf4499e3e5e8ade43a2e4fb4803d5621b0db3fb365dd185d375f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://441562bcb0ff6663b1affd84ae53348e082907782a89293730d0d076e9e84d9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://441562bcb0ff6663b1affd84ae53348e082907782a89293730d0d076e9e84d9d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be1e4280634a32e892b1dbc9bb4034909146d
a92eb6b01aa69bb7208447a7394\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be1e4280634a32e892b1dbc9bb4034909146da92eb6b01aa69bb7208447a7394\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"
kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rgv8f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:08Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:08 crc kubenswrapper[4830]: I0227 16:08:08.388817 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92b20f888e852fcbe89e6d5c685b4b0fad411956bb3dc3c5ba0e10d6b206e131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.i
o/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:08Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:08 crc kubenswrapper[4830]: I0227 16:08:08.416212 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bf9lh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45e519054da5e98ba4be7deaac7911617f787b44d308bda8d4ee6fb7573336a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45e519054da5e98ba4be7deaac7911617f787b44d308bda8d4ee6fb7573336a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bf9lh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:08Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:08 crc kubenswrapper[4830]: I0227 16:08:08.438086 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:08 crc kubenswrapper[4830]: I0227 16:08:08.438213 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:08 crc kubenswrapper[4830]: I0227 16:08:08.438233 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:08 crc kubenswrapper[4830]: I0227 16:08:08.438264 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:08 crc kubenswrapper[4830]: I0227 16:08:08.438286 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:08Z","lastTransitionTime":"2026-02-27T16:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:08 crc kubenswrapper[4830]: I0227 16:08:08.438908 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00d6b7ce-4757-4275-8345-60c1b546ce25\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58864ba639422794b99f6190c7ab9a81537913f51574220acb3838f97b4f9421\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqztt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d403fb60beeb11f3cb72e7ca134b83a9c519375fbb6727d2070118a6b924516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqztt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2tv5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:08Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:08 crc kubenswrapper[4830]: I0227 16:08:08.458807 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gqgb6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd5a4c5b-2008-4354-b26e-8763a631e55c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32024ba3e19bc33cc8197b4dd6fdf165f6de8fd7c3b1e85b2f4768c75c50736a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87p77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14a90f11c3291e6bde2a008f2e8e617c9c8db
36ee65c3bbf26112bf8707d7cd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87p77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gqgb6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:08Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:08 crc kubenswrapper[4830]: I0227 16:08:08.477529 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:08Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:08 crc kubenswrapper[4830]: I0227 16:08:08.494128 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4995a66c4bc6d195947f67ef564aa18c7fe145e12067c2fca8e0de6b76b72f19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df5245f048370e7455a63a6ca01bffb8e514e22a123e45bc3a064ce4fbe27f45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:08Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:08 crc kubenswrapper[4830]: I0227 16:08:08.511471 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d20f886-cfdb-48c7-9754-6b7255b1124f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a3f71ef2ccab58baa94377c8f63ded83ca5f73a4d7a24d754719670685c55a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec37d31a82aedaacdf47fa540d508e6ca53626d082c45933c11a72f41a819a8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6cb8e7a94967cd24e883272ba2f923a3adae43cc6105ea7506c1fba144de241\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://acf80f37f1e38e051b10c7a1c0eeef8f0c3db7fc0e5653c54d82da7956a8eed2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acf80f37f1e38e051b10c7a1c0eeef8f0c3db7fc0e5653c54d82da7956a8eed2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T16:07:52Z\\\",\\\"message\\\":\\\"g file observer\\\\nW0227 16:07:52.525113 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 16:07:52.525413 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 16:07:52.526578 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-70351561/tls.crt::/tmp/serving-cert-70351561/tls.key\\\\\\\"\\\\nI0227 16:07:52.790383 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 16:07:52.794547 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 16:07:52.794587 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 16:07:52.794627 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 16:07:52.794638 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 16:07:52.801197 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0227 16:07:52.801263 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 16:07:52.801293 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0227 16:07:52.801246 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0227 16:07:52.801325 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 16:07:52.801373 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 16:07:52.801389 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 16:07:52.801396 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0227 16:07:52.804476 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T16:07:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97ffad51412be8330198e944d4f687380d5066b777b54e362e8f8b772b808c89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1fc7975111dde3841671f046742eea3d0eefcb040eee8aadb99a7ca8eba8bd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1f
c7975111dde3841671f046742eea3d0eefcb040eee8aadb99a7ca8eba8bd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:06:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:08Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:08 crc kubenswrapper[4830]: I0227 16:08:08.541387 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:08 crc kubenswrapper[4830]: I0227 16:08:08.541442 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:08 crc kubenswrapper[4830]: I0227 16:08:08.541459 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:08 crc kubenswrapper[4830]: I0227 16:08:08.541484 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:08 crc kubenswrapper[4830]: I0227 16:08:08.541503 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:08Z","lastTransitionTime":"2026-02-27T16:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:08:08 crc kubenswrapper[4830]: I0227 16:08:08.645289 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:08 crc kubenswrapper[4830]: I0227 16:08:08.645347 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:08 crc kubenswrapper[4830]: I0227 16:08:08.645359 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:08 crc kubenswrapper[4830]: I0227 16:08:08.645378 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:08 crc kubenswrapper[4830]: I0227 16:08:08.645395 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:08Z","lastTransitionTime":"2026-02-27T16:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:08 crc kubenswrapper[4830]: I0227 16:08:08.749548 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:08 crc kubenswrapper[4830]: I0227 16:08:08.749584 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:08 crc kubenswrapper[4830]: I0227 16:08:08.749593 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:08 crc kubenswrapper[4830]: I0227 16:08:08.749606 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:08 crc kubenswrapper[4830]: I0227 16:08:08.749617 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:08Z","lastTransitionTime":"2026-02-27T16:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:08:08 crc kubenswrapper[4830]: I0227 16:08:08.761563 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 16:08:08 crc kubenswrapper[4830]: I0227 16:08:08.761667 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 16:08:08 crc kubenswrapper[4830]: E0227 16:08:08.761688 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 16:08:08 crc kubenswrapper[4830]: E0227 16:08:08.761917 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 16:08:08 crc kubenswrapper[4830]: I0227 16:08:08.762069 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 16:08:08 crc kubenswrapper[4830]: I0227 16:08:08.762068 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kgdlg" Feb 27 16:08:08 crc kubenswrapper[4830]: E0227 16:08:08.762379 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 16:08:08 crc kubenswrapper[4830]: E0227 16:08:08.762588 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kgdlg" podUID="6ba2fe32-66e0-4bcd-a646-9d07c9a21c54" Feb 27 16:08:08 crc kubenswrapper[4830]: I0227 16:08:08.852519 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:08 crc kubenswrapper[4830]: I0227 16:08:08.852599 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:08 crc kubenswrapper[4830]: I0227 16:08:08.852625 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:08 crc kubenswrapper[4830]: I0227 16:08:08.852658 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:08 crc kubenswrapper[4830]: I0227 16:08:08.852682 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:08Z","lastTransitionTime":"2026-02-27T16:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:08 crc kubenswrapper[4830]: I0227 16:08:08.956695 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:08 crc kubenswrapper[4830]: I0227 16:08:08.956761 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:08 crc kubenswrapper[4830]: I0227 16:08:08.956787 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:08 crc kubenswrapper[4830]: I0227 16:08:08.956814 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:08 crc kubenswrapper[4830]: I0227 16:08:08.956834 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:08Z","lastTransitionTime":"2026-02-27T16:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:09 crc kubenswrapper[4830]: I0227 16:08:09.060000 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:09 crc kubenswrapper[4830]: I0227 16:08:09.060065 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:09 crc kubenswrapper[4830]: I0227 16:08:09.060086 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:09 crc kubenswrapper[4830]: I0227 16:08:09.060119 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:09 crc kubenswrapper[4830]: I0227 16:08:09.060145 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:09Z","lastTransitionTime":"2026-02-27T16:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:09 crc kubenswrapper[4830]: I0227 16:08:09.164416 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:09 crc kubenswrapper[4830]: I0227 16:08:09.164483 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:09 crc kubenswrapper[4830]: I0227 16:08:09.164497 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:09 crc kubenswrapper[4830]: I0227 16:08:09.164521 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:09 crc kubenswrapper[4830]: I0227 16:08:09.164540 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:09Z","lastTransitionTime":"2026-02-27T16:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:09 crc kubenswrapper[4830]: I0227 16:08:09.223100 4830 generic.go:334] "Generic (PLEG): container finished" podID="672682a0-e75f-4d6c-b4f2-542944327497" containerID="d37bf60056d482e28c3252074c7ae82c3bd9884ba28af16b639c21795c9e3f3e" exitCode=0 Feb 27 16:08:09 crc kubenswrapper[4830]: I0227 16:08:09.223160 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rgv8f" event={"ID":"672682a0-e75f-4d6c-b4f2-542944327497","Type":"ContainerDied","Data":"d37bf60056d482e28c3252074c7ae82c3bd9884ba28af16b639c21795c9e3f3e"} Feb 27 16:08:09 crc kubenswrapper[4830]: I0227 16:08:09.247560 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:09Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:09 crc kubenswrapper[4830]: I0227 16:08:09.267238 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:09 crc kubenswrapper[4830]: I0227 16:08:09.267315 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:09 crc kubenswrapper[4830]: I0227 16:08:09.267368 4830 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:09 crc kubenswrapper[4830]: I0227 16:08:09.267397 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:09 crc kubenswrapper[4830]: I0227 16:08:09.267417 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:09Z","lastTransitionTime":"2026-02-27T16:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:08:09 crc kubenswrapper[4830]: I0227 16:08:09.270744 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fsrq9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb72b0f7-1d22-4d13-9653-b1607aa2235d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b684822980dfea25ae15f46337d198cbbb8656b55c
38874d917c2fae68431aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernet
es.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwzp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fsrq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:09Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:09 crc kubenswrapper[4830]: I0227 16:08:09.298650 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rgv8f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"672682a0-e75f-4d6c-b4f2-542944327497\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b56418e015eaf4499e3e5e8ade43a2e4fb4803d5621b0db3fb365dd185d375f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b56418e015eaf4499e3e5e8ade43a2e4fb4803d5621b0db3fb365dd185d375f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://441562bcb0ff6663b1affd84ae53348e082907782a89293730d0d076e9e84d9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://441562bcb0ff6663b1affd84ae53348e082907782a89293730d0d076e9e84d9d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be1e4280634a32e892b1dbc9bb4034909146d
a92eb6b01aa69bb7208447a7394\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be1e4280634a32e892b1dbc9bb4034909146da92eb6b01aa69bb7208447a7394\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d37bf60056d482e28c3252074c7ae82c3bd9884ba28af16b639c21795c9e3f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d37bf60056d482e28c3252074c7ae82c3bd9884ba28af16b639c21795c9e3f3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-0
2-27T16:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servi
ceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rgv8f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:09Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:09 crc kubenswrapper[4830]: I0227 16:08:09.317911 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fcddf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6adbc0c4-e467-41f1-9190-d0dd3693eba6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2a66caf75365a8658dd7422b2f715e7f095b5bd7d96c8a0a96ce1a80cb99849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed
0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zrsv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fcddf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:09Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:09 crc kubenswrapper[4830]: I0227 16:08:09.341552 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:09Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:09 crc kubenswrapper[4830]: I0227 16:08:09.366754 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:09Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:09 crc kubenswrapper[4830]: I0227 16:08:09.369663 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:09 crc kubenswrapper[4830]: I0227 16:08:09.369735 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:09 crc kubenswrapper[4830]: I0227 16:08:09.369758 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:09 crc kubenswrapper[4830]: I0227 
16:08:09.369790 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:09 crc kubenswrapper[4830]: I0227 16:08:09.369813 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:09Z","lastTransitionTime":"2026-02-27T16:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:08:09 crc kubenswrapper[4830]: I0227 16:08:09.386406 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4995a66c4bc6d195947f67ef564aa18c7fe145e12067c2fca8e0de6b76b72f19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,
\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df5245f048370e7455a63a6ca01bffb8e514e22a123e45bc3a064ce4fbe27f45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:09Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:09 crc kubenswrapper[4830]: I0227 16:08:09.401980 4830 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92b20f888e852fcbe89e6d5c685b4b0fad411956bb3dc3c5ba0e10d6b206e131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:09Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:09 crc kubenswrapper[4830]: I0227 16:08:09.432636 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bf9lh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45e519054da5e98ba4be7deaac7911617f787b44d308bda8d4ee6fb7573336a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45e519054da5e98ba4be7deaac7911617f787b44d308bda8d4ee6fb7573336a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bf9lh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:09Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:09 crc kubenswrapper[4830]: I0227 16:08:09.446907 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00d6b7ce-4757-4275-8345-60c1b546ce25\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58864ba639422794b99f6190c7ab9a81537913f51574220acb3838f97b4f9421\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqztt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d403fb60beeb11f3cb72e7ca134b83a9c519375fbb6727d2070118a6b924516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqztt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-2tv5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:09Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:09 crc kubenswrapper[4830]: I0227 16:08:09.465826 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gqgb6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd5a4c5b-2008-4354-b26e-8763a631e55c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32024ba3e19bc33cc8197b4dd6fdf165f6de8fd7c3b1e85b2f4768c75c50736a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"k
ube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87p77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14a90f11c3291e6bde2a008f2e8e617c9c8db36ee65c3bbf26112bf8707d7cd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87p77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gqgb6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:09Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:09 crc kubenswrapper[4830]: I0227 16:08:09.473167 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:09 crc kubenswrapper[4830]: I0227 16:08:09.473232 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:09 crc kubenswrapper[4830]: I0227 16:08:09.473245 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:09 crc kubenswrapper[4830]: I0227 16:08:09.473263 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:09 crc kubenswrapper[4830]: I0227 16:08:09.473275 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:09Z","lastTransitionTime":"2026-02-27T16:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:09 crc kubenswrapper[4830]: I0227 16:08:09.485013 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d20f886-cfdb-48c7-9754-6b7255b1124f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a3f71ef2ccab58baa94377c8f63ded83ca5f73a4d7a24d754719670685c55a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec37d31a82aedaacdf47fa540d508e6ca53626d082c45933c11a72f41a819a8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b6cb8e7a94967cd24e883272ba2f923a3adae43cc6105ea7506c1fba144de241\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://acf80f37f1e38e051b10c7a1c0eeef8f0c3db7fc0e5653c54d82da7956a8eed2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acf80f37f1e38e051b10c7a1c0eeef8f0c3db7fc0e5653c54d82da7956a8eed2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T16:07:52Z\\\",\\\"message\\\":\\\"g file observer\\\\nW0227 16:07:52.525113 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 16:07:52.525413 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 16:07:52.526578 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-70351561/tls.crt::/tmp/serving-cert-70351561/tls.key\\\\\\\"\\\\nI0227 16:07:52.790383 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 16:07:52.794547 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 16:07:52.794587 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 16:07:52.794627 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 16:07:52.794638 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 16:07:52.801197 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0227 16:07:52.801263 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 16:07:52.801293 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0227 16:07:52.801246 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0227 16:07:52.801325 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 16:07:52.801373 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 16:07:52.801389 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 16:07:52.801396 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0227 16:07:52.804476 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T16:07:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97ffad51412be8330198e944d4f687380d5066b777b54e362e8f8b772b808c89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1fc7975111dde3841671f046742eea3d0eefcb040eee8aadb99a7ca8eba8bd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1fc7975111dde3841671f046742eea3d0eefcb040eee8aadb99a7ca8eba8bd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:06:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:09Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:09 crc kubenswrapper[4830]: I0227 16:08:09.504352 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eebea1ec0fd24a7664f474db4e4cdb08e53c5c742c392900599dc3e7adcbee2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\
":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:09Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:09 crc kubenswrapper[4830]: I0227 16:08:09.519927 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p7298" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"616ebd42-6bbe-4536-ba35-f8b07f2f11b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a08121a78952c85326d07974c700397678777995d73cce9f24ace8932b3e9b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x9g9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p7298\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:09Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:09 crc kubenswrapper[4830]: I0227 16:08:09.532626 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kgdlg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ba2fe32-66e0-4bcd-a646-9d07c9a21c54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9l8vg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9l8vg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kgdlg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:09Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:09 crc 
kubenswrapper[4830]: I0227 16:08:09.577423 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:09 crc kubenswrapper[4830]: I0227 16:08:09.577475 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:09 crc kubenswrapper[4830]: I0227 16:08:09.577488 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:09 crc kubenswrapper[4830]: I0227 16:08:09.577511 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:09 crc kubenswrapper[4830]: I0227 16:08:09.577526 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:09Z","lastTransitionTime":"2026-02-27T16:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:09 crc kubenswrapper[4830]: I0227 16:08:09.681600 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:09 crc kubenswrapper[4830]: I0227 16:08:09.681648 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:09 crc kubenswrapper[4830]: I0227 16:08:09.681666 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:09 crc kubenswrapper[4830]: I0227 16:08:09.681689 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:09 crc kubenswrapper[4830]: I0227 16:08:09.681704 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:09Z","lastTransitionTime":"2026-02-27T16:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:09 crc kubenswrapper[4830]: I0227 16:08:09.779908 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Feb 27 16:08:09 crc kubenswrapper[4830]: I0227 16:08:09.785255 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:09 crc kubenswrapper[4830]: I0227 16:08:09.785311 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:09 crc kubenswrapper[4830]: I0227 16:08:09.785337 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:09 crc kubenswrapper[4830]: I0227 16:08:09.785363 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:09 crc kubenswrapper[4830]: I0227 16:08:09.785382 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:09Z","lastTransitionTime":"2026-02-27T16:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:09 crc kubenswrapper[4830]: I0227 16:08:09.888651 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:09 crc kubenswrapper[4830]: I0227 16:08:09.888714 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:09 crc kubenswrapper[4830]: I0227 16:08:09.888736 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:09 crc kubenswrapper[4830]: I0227 16:08:09.888767 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:09 crc kubenswrapper[4830]: I0227 16:08:09.888790 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:09Z","lastTransitionTime":"2026-02-27T16:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:09 crc kubenswrapper[4830]: I0227 16:08:09.992202 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:09 crc kubenswrapper[4830]: I0227 16:08:09.992275 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:09 crc kubenswrapper[4830]: I0227 16:08:09.992292 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:09 crc kubenswrapper[4830]: I0227 16:08:09.992323 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:09 crc kubenswrapper[4830]: I0227 16:08:09.992348 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:09Z","lastTransitionTime":"2026-02-27T16:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:10 crc kubenswrapper[4830]: I0227 16:08:10.095498 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:10 crc kubenswrapper[4830]: I0227 16:08:10.095558 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:10 crc kubenswrapper[4830]: I0227 16:08:10.095571 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:10 crc kubenswrapper[4830]: I0227 16:08:10.095595 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:10 crc kubenswrapper[4830]: I0227 16:08:10.095612 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:10Z","lastTransitionTime":"2026-02-27T16:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:10 crc kubenswrapper[4830]: I0227 16:08:10.199267 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:10 crc kubenswrapper[4830]: I0227 16:08:10.199391 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:10 crc kubenswrapper[4830]: I0227 16:08:10.199497 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:10 crc kubenswrapper[4830]: I0227 16:08:10.199588 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:10 crc kubenswrapper[4830]: I0227 16:08:10.199618 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:10Z","lastTransitionTime":"2026-02-27T16:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:10 crc kubenswrapper[4830]: I0227 16:08:10.243715 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rgv8f" event={"ID":"672682a0-e75f-4d6c-b4f2-542944327497","Type":"ContainerStarted","Data":"f343d2da5177c14f48359b1377e123308e3686e4adfba0407940fe1368592548"} Feb 27 16:08:10 crc kubenswrapper[4830]: I0227 16:08:10.254339 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bf9lh" event={"ID":"2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904","Type":"ContainerStarted","Data":"61c52728c46cb073f23486d3a125c94f84cd97c41e10035ef21076655896a057"} Feb 27 16:08:10 crc kubenswrapper[4830]: I0227 16:08:10.254969 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-bf9lh" Feb 27 16:08:10 crc kubenswrapper[4830]: I0227 16:08:10.255020 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-bf9lh" Feb 27 16:08:10 crc kubenswrapper[4830]: I0227 16:08:10.255104 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-bf9lh" Feb 27 16:08:10 crc kubenswrapper[4830]: I0227 16:08:10.273737 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d20f886-cfdb-48c7-9754-6b7255b1124f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a3f71ef2ccab58baa94377c8f63ded83ca5f73a4d7a24d754719670685c55a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec37d31a82aedaacdf47fa540d508e6ca53626d082c45933c11a72f41a819a8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6cb8e7a94967cd24e883272ba2f923a3adae43cc6105ea7506c1fba144de241\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://acf80f37f1e38e051b10c7a1c0eeef8f0c3db7fc0e5653c54d82da7956a8eed2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acf80f37f1e38e051b10c7a1c0eeef8f0c3db7fc0e5653c54d82da7956a8eed2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T16:07:52Z\\\",\\\"message\\\":\\\"g file observer\\\\nW0227 16:07:52.525113 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 16:07:52.525413 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 16:07:52.526578 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-70351561/tls.crt::/tmp/serving-cert-70351561/tls.key\\\\\\\"\\\\nI0227 16:07:52.790383 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 16:07:52.794547 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 16:07:52.794587 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 16:07:52.794627 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 16:07:52.794638 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 16:07:52.801197 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0227 16:07:52.801263 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 16:07:52.801293 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0227 16:07:52.801246 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0227 16:07:52.801325 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 16:07:52.801373 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 16:07:52.801389 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 16:07:52.801396 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0227 16:07:52.804476 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T16:07:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97ffad51412be8330198e944d4f687380d5066b777b54e362e8f8b772b808c89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1fc7975111dde3841671f046742eea3d0eefcb040eee8aadb99a7ca8eba8bd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1f
c7975111dde3841671f046742eea3d0eefcb040eee8aadb99a7ca8eba8bd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:06:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:10Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:10 crc kubenswrapper[4830]: I0227 16:08:10.296757 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eebea1ec0fd24a7664f474db4e4cdb08e53c5c742c392900599dc3e7adcbee2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:10Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:10 crc kubenswrapper[4830]: I0227 16:08:10.308728 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:10 crc kubenswrapper[4830]: I0227 16:08:10.308783 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:10 crc kubenswrapper[4830]: I0227 16:08:10.308800 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:10 crc kubenswrapper[4830]: I0227 16:08:10.308827 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:10 crc kubenswrapper[4830]: I0227 16:08:10.308848 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:10Z","lastTransitionTime":"2026-02-27T16:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:10 crc kubenswrapper[4830]: I0227 16:08:10.309543 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-bf9lh" Feb 27 16:08:10 crc kubenswrapper[4830]: I0227 16:08:10.309704 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-bf9lh" Feb 27 16:08:10 crc kubenswrapper[4830]: I0227 16:08:10.316609 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p7298" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"616ebd42-6bbe-4536-ba35-f8b07f2f11b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a08121a78952c85326d07974c700397678777995d73cce9f24ace8932b3e9b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"no
de-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x9g9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p7298\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:10Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:10 crc kubenswrapper[4830]: I0227 16:08:10.335514 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kgdlg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ba2fe32-66e0-4bcd-a646-9d07c9a21c54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9l8vg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9l8vg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kgdlg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:10Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:10 crc 
kubenswrapper[4830]: I0227 16:08:10.355882 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f62df11-40bd-4531-baa9-5b7ab5679a67\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5046080bbc4855769fb50909ff06b8c2cfd2438a0b65749c89302a4e97342980\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\
":\\\"cri-o://97f85a479701374168505eed2e5a64ea144d1c48dce2022d60cb6ebaebfe159f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97f85a479701374168505eed2e5a64ea144d1c48dce2022d60cb6ebaebfe159f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:06:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:10Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:10 crc kubenswrapper[4830]: I0227 16:08:10.379577 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:10Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:10 crc kubenswrapper[4830]: I0227 16:08:10.403327 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fsrq9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb72b0f7-1d22-4d13-9653-b1607aa2235d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b684822980dfea25ae15f46337d198cbbb8656b55c38874d917c2fae68431aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwzp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fsrq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:10Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:10 crc kubenswrapper[4830]: I0227 16:08:10.411578 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:10 crc 
kubenswrapper[4830]: I0227 16:08:10.411635 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:10 crc kubenswrapper[4830]: I0227 16:08:10.411655 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:10 crc kubenswrapper[4830]: I0227 16:08:10.411681 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:10 crc kubenswrapper[4830]: I0227 16:08:10.411697 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:10Z","lastTransitionTime":"2026-02-27T16:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:08:10 crc kubenswrapper[4830]: I0227 16:08:10.430611 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rgv8f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"672682a0-e75f-4d6c-b4f2-542944327497\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b56418e015eaf4499e3e5e8ade43a2e4fb4803d5621b0db3fb365dd185d375f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://b56418e015eaf4499e3e5e8ade43a2e4fb4803d5621b0db3fb365dd185d375f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://441562bcb0ff6663b1affd84ae53348e082907782a89293730d0d076e9e84d9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://441562bcb0ff6663b1affd84ae53348e082907782a89293730d0d076e9e84d9d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be1e4280634a32e892b1dbc9bb4034909146da92eb6b01aa69bb7208447a7394\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be1e4280634a32e892b1dbc9bb4034909146da92eb6b01aa69bb7208447a7394\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d37bf60056d482e28c3252074c7ae82c3bd9884ba28af16b639c21795c9e3f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d37bf60056d482e28c3252074c7ae82c3bd9884ba28af16b639c21795c9e3f3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f343d2da5177c14f48359b1377e123308e3686e4adfba0407940fe1368592548\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc
84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rgv8f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:10Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:10 crc kubenswrapper[4830]: I0227 16:08:10.453135 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fcddf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6adbc0c4-e467-41f1-9190-d0dd3693eba6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2a66caf75365a8658dd7422b2f715e7f095b5bd7d96c8a0a96ce1a80cb99849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zrsv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fcddf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:10Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:10 crc kubenswrapper[4830]: I0227 16:08:10.475886 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:10Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:10 crc kubenswrapper[4830]: I0227 16:08:10.499198 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:10Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:10 crc kubenswrapper[4830]: I0227 16:08:10.514726 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:10 crc kubenswrapper[4830]: I0227 16:08:10.514825 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:10 crc kubenswrapper[4830]: I0227 16:08:10.514844 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:10 crc kubenswrapper[4830]: I0227 
16:08:10.514979 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:10 crc kubenswrapper[4830]: I0227 16:08:10.515003 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:10Z","lastTransitionTime":"2026-02-27T16:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:08:10 crc kubenswrapper[4830]: I0227 16:08:10.522656 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4995a66c4bc6d195947f67ef564aa18c7fe145e12067c2fca8e0de6b76b72f19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,
\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df5245f048370e7455a63a6ca01bffb8e514e22a123e45bc3a064ce4fbe27f45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:10Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:10 crc kubenswrapper[4830]: I0227 16:08:10.543570 4830 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92b20f888e852fcbe89e6d5c685b4b0fad411956bb3dc3c5ba0e10d6b206e131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:10Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:10 crc kubenswrapper[4830]: I0227 16:08:10.565145 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 16:08:10 crc kubenswrapper[4830]: I0227 16:08:10.565321 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 16:08:10 crc kubenswrapper[4830]: E0227 16:08:10.565353 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 16:08:18.565318613 +0000 UTC m=+94.654591116 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:08:10 crc kubenswrapper[4830]: I0227 16:08:10.565388 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 16:08:10 crc kubenswrapper[4830]: I0227 16:08:10.565424 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 16:08:10 crc kubenswrapper[4830]: I0227 16:08:10.565475 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 16:08:10 crc kubenswrapper[4830]: E0227 16:08:10.565558 4830 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 27 
16:08:10 crc kubenswrapper[4830]: E0227 16:08:10.565590 4830 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 27 16:08:10 crc kubenswrapper[4830]: E0227 16:08:10.565617 4830 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 27 16:08:10 crc kubenswrapper[4830]: E0227 16:08:10.565639 4830 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 27 16:08:10 crc kubenswrapper[4830]: E0227 16:08:10.565650 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-27 16:08:18.565635691 +0000 UTC m=+94.654908184 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 27 16:08:10 crc kubenswrapper[4830]: E0227 16:08:10.565654 4830 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 27 16:08:10 crc kubenswrapper[4830]: E0227 16:08:10.565659 4830 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 16:08:10 crc kubenswrapper[4830]: E0227 16:08:10.565592 4830 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 27 16:08:10 crc kubenswrapper[4830]: E0227 16:08:10.565728 4830 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 16:08:10 crc kubenswrapper[4830]: E0227 16:08:10.565715 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-27 16:08:18.565697252 +0000 UTC m=+94.654969725 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 27 16:08:10 crc kubenswrapper[4830]: E0227 16:08:10.565754 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-27 16:08:18.565745493 +0000 UTC m=+94.655017966 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 16:08:10 crc kubenswrapper[4830]: E0227 16:08:10.565769 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-27 16:08:18.565761694 +0000 UTC m=+94.655034167 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 16:08:10 crc kubenswrapper[4830]: I0227 16:08:10.570836 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bf9lh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45e519054da5e98ba4be7deaac7911617f787b44d308bda8d4ee6fb7573336a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45e519054da5e98ba4be7deaac7911617f787b44d308bda8d4ee6fb7573336a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bf9lh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:10Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:10 crc kubenswrapper[4830]: I0227 16:08:10.592664 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00d6b7ce-4757-4275-8345-60c1b546ce25\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58864ba639422794b99f6190c7ab9a81537913f51574220acb3838f97b4f9421\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqztt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d403fb60beeb11f3cb72e7ca134b83a9c519375fbb6727d2070118a6b924516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqztt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-2tv5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:10Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:10 crc kubenswrapper[4830]: I0227 16:08:10.610137 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gqgb6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd5a4c5b-2008-4354-b26e-8763a631e55c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32024ba3e19bc33cc8197b4dd6fdf165f6de8fd7c3b1e85b2f4768c75c50736a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"k
ube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87p77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14a90f11c3291e6bde2a008f2e8e617c9c8db36ee65c3bbf26112bf8707d7cd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87p77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gqgb6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:10Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:10 crc kubenswrapper[4830]: I0227 16:08:10.618436 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:10 crc kubenswrapper[4830]: I0227 16:08:10.618474 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:10 crc kubenswrapper[4830]: I0227 16:08:10.618485 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:10 crc kubenswrapper[4830]: I0227 16:08:10.618501 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:10 crc kubenswrapper[4830]: I0227 16:08:10.618514 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:10Z","lastTransitionTime":"2026-02-27T16:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:10 crc kubenswrapper[4830]: I0227 16:08:10.642303 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rgv8f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"672682a0-e75f-4d6c-b4f2-542944327497\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b56418e015eaf4499e3e5e8ade43a2e4fb4803d5621b0db3fb365dd185d375f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b56418e015eaf4499e3e5e8ade43a2e4fb4803d5621b0db3fb365dd185d375f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://441562bcb0ff6663b1affd84ae53348e082907782a89293730d0d076e9e84d9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://441562bcb0ff6663b1affd84ae53348e082907782a89293730d0d076e9e84d9d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be1e4280634a32e892b1dbc9bb4034909146da92eb6b01aa69bb7208447a7394\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be1e4280634a32e892b1dbc9bb4034909146da92eb6b01aa69bb7208447a7394\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d37bf60056d482e28c3252074c7ae82c3bd9884ba28af16b639c21795c9e3f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d37bf60056d482e28c3252074c7ae82c3bd9884ba28af16b639c21795c9e3f3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f343d2da5177c14f48359b1377e123308e3686e4adfba0407940fe1368592548\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb
8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rgv8f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:10Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:10 crc kubenswrapper[4830]: I0227 16:08:10.657899 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fcddf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6adbc0c4-e467-41f1-9190-d0dd3693eba6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2a66caf75365a8658dd7422b2f715e7f095b5bd7d96c8a0a96ce1a80cb99849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\
\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zrsv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fcddf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:10Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:10 crc kubenswrapper[4830]: I0227 16:08:10.666721 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6ba2fe32-66e0-4bcd-a646-9d07c9a21c54-metrics-certs\") pod \"network-metrics-daemon-kgdlg\" (UID: \"6ba2fe32-66e0-4bcd-a646-9d07c9a21c54\") " pod="openshift-multus/network-metrics-daemon-kgdlg" Feb 27 16:08:10 crc kubenswrapper[4830]: E0227 16:08:10.666972 4830 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 27 16:08:10 crc kubenswrapper[4830]: E0227 16:08:10.667033 4830 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/6ba2fe32-66e0-4bcd-a646-9d07c9a21c54-metrics-certs podName:6ba2fe32-66e0-4bcd-a646-9d07c9a21c54 nodeName:}" failed. No retries permitted until 2026-02-27 16:08:18.667014865 +0000 UTC m=+94.756287338 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6ba2fe32-66e0-4bcd-a646-9d07c9a21c54-metrics-certs") pod "network-metrics-daemon-kgdlg" (UID: "6ba2fe32-66e0-4bcd-a646-9d07c9a21c54") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 27 16:08:10 crc kubenswrapper[4830]: I0227 16:08:10.673446 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when 
the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:10Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:10 crc kubenswrapper[4830]: I0227 16:08:10.696001 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:10Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:10 crc kubenswrapper[4830]: I0227 16:08:10.711717 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fsrq9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb72b0f7-1d22-4d13-9653-b1607aa2235d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b684822980dfea25ae15f46337d198cbbb8656b55c38874d917c2fae68431aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwzp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fsrq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:10Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:10 crc kubenswrapper[4830]: I0227 16:08:10.720424 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:10 crc 
kubenswrapper[4830]: I0227 16:08:10.720452 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:10 crc kubenswrapper[4830]: I0227 16:08:10.720463 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:10 crc kubenswrapper[4830]: I0227 16:08:10.720479 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:10 crc kubenswrapper[4830]: I0227 16:08:10.720491 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:10Z","lastTransitionTime":"2026-02-27T16:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:08:10 crc kubenswrapper[4830]: I0227 16:08:10.731383 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4995a66c4bc6d195947f67ef564aa18c7fe145e12067c2fca8e0de6b76b72f19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df5245f048370e7455a63a6ca01bffb8e514e22a123e45bc3a064ce4fbe27f45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:10Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:10 crc kubenswrapper[4830]: I0227 16:08:10.747928 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92b20f888e852fcbe89e6d5c685b4b0fad411956bb3dc3c5ba0e10d6b206e131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-27T16:08:10Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:10 crc kubenswrapper[4830]: I0227 16:08:10.761669 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 16:08:10 crc kubenswrapper[4830]: I0227 16:08:10.761782 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 16:08:10 crc kubenswrapper[4830]: I0227 16:08:10.761871 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 16:08:10 crc kubenswrapper[4830]: E0227 16:08:10.761888 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 16:08:10 crc kubenswrapper[4830]: I0227 16:08:10.761971 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kgdlg" Feb 27 16:08:10 crc kubenswrapper[4830]: E0227 16:08:10.762182 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 16:08:10 crc kubenswrapper[4830]: E0227 16:08:10.762240 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 16:08:10 crc kubenswrapper[4830]: E0227 16:08:10.762347 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kgdlg" podUID="6ba2fe32-66e0-4bcd-a646-9d07c9a21c54" Feb 27 16:08:10 crc kubenswrapper[4830]: I0227 16:08:10.787505 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bf9lh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3371981f3cdb5468ef6e9038108b3f9d884cec2f2db2f2754a3b5bf908d0372\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://273a7927f1c00c51d8c6157c6050f697430275ea8b1cac13ebe621d6556b4fef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a05759873cd3a2fb58756795e02d8247af4d8204388bf669ddc7cb1a8fbd7baa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc95e89196ccb52580164d57b311b57b2bcc7e666c3801f53378c59446c0e73f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:05Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05614397aaaa68f352f4263160c3463ce65513156517137bf09a3b5deb505c05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41cda3a7a261313b65434f3e8b885a9628d71f9c9a9c55f3e559226341c8aa41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61c52728c46cb073f23486d3a125c94f84cd97c41e10035ef21076655896a057\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c9e139c0599bb81726ae3f42b0d9e6ee394e3d6641eb356cf5c63fe9260f4c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45e519054da5e98ba4be7deaac7911617f787b44d308bda8d4ee6fb7573336a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45e519054da5e98ba4be7deaac7911617f787b44d308bda8d4ee6fb7573336a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bf9lh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:10Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:10 crc kubenswrapper[4830]: I0227 16:08:10.808545 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00d6b7ce-4757-4275-8345-60c1b546ce25\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58864ba639422794b99f6190c7ab9a81537913f51574220acb3838f97b4f9421\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-releas
e-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqztt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d403fb60beeb11f3cb72e7ca134b83a9c519375fbb6727d2070118a6b924516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqztt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2tv5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: 
Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:10Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:10 crc kubenswrapper[4830]: I0227 16:08:10.823560 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:10 crc kubenswrapper[4830]: I0227 16:08:10.823634 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:10 crc kubenswrapper[4830]: I0227 16:08:10.823652 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:10 crc kubenswrapper[4830]: I0227 16:08:10.823679 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:10 crc kubenswrapper[4830]: I0227 16:08:10.823698 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:10Z","lastTransitionTime":"2026-02-27T16:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:10 crc kubenswrapper[4830]: I0227 16:08:10.828747 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gqgb6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd5a4c5b-2008-4354-b26e-8763a631e55c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32024ba3e19bc33cc8197b4dd6fdf165f6de8fd7c3b1e85b2f4768c75c50736a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87p77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14a90f11c3291e6bde2a008f2e8e617c9c8db36ee65c3bbf26112bf8707d7cd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87p77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gqgb6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:10Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:10 crc kubenswrapper[4830]: I0227 16:08:10.844813 4830 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:10Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:10 crc kubenswrapper[4830]: I0227 16:08:10.860574 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d20f886-cfdb-48c7-9754-6b7255b1124f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a3f71ef2ccab58baa94377c8f63ded83ca5f73a4d7a24d754719670685c55a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec37d31a82aedaacdf47fa540d508e6ca53626d082c45933c11a72f41a819a8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b6cb8e7a94967cd24e883272ba2f923a3adae43cc6105ea7506c1fba144de241\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://acf80f37f1e38e051b10c7a1c0eeef8f0c3db7fc0e5653c54d82da7956a8eed2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acf80f37f1e38e051b10c7a1c0eeef8f0c3db7fc0e5653c54d82da7956a8eed2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T16:07:52Z\\\",\\\"message\\\":\\\"g file observer\\\\nW0227 16:07:52.525113 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 16:07:52.525413 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 16:07:52.526578 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-70351561/tls.crt::/tmp/serving-cert-70351561/tls.key\\\\\\\"\\\\nI0227 16:07:52.790383 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 16:07:52.794547 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 16:07:52.794587 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 16:07:52.794627 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 16:07:52.794638 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 16:07:52.801197 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0227 16:07:52.801263 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 16:07:52.801293 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0227 16:07:52.801246 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0227 16:07:52.801325 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 16:07:52.801373 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 16:07:52.801389 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 16:07:52.801396 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0227 16:07:52.804476 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T16:07:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97ffad51412be8330198e944d4f687380d5066b777b54e362e8f8b772b808c89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1fc7975111dde3841671f046742eea3d0eefcb040eee8aadb99a7ca8eba8bd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1fc7975111dde3841671f046742eea3d0eefcb040eee8aadb99a7ca8eba8bd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:06:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:10Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:10 crc kubenswrapper[4830]: I0227 16:08:10.874892 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p7298" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"616ebd42-6bbe-4536-ba35-f8b07f2f11b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a08121a78952c85326d07974c700397678777995d73cce9f24ace8932b3e9b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x9g9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p7298\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:10Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:10 crc kubenswrapper[4830]: I0227 16:08:10.891516 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kgdlg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ba2fe32-66e0-4bcd-a646-9d07c9a21c54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9l8vg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9l8vg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kgdlg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:10Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:10 crc 
kubenswrapper[4830]: I0227 16:08:10.907498 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f62df11-40bd-4531-baa9-5b7ab5679a67\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5046080bbc4855769fb50909ff06b8c2cfd2438a0b65749c89302a4e97342980\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\
":\\\"cri-o://97f85a479701374168505eed2e5a64ea144d1c48dce2022d60cb6ebaebfe159f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97f85a479701374168505eed2e5a64ea144d1c48dce2022d60cb6ebaebfe159f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:06:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:10Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:10 crc kubenswrapper[4830]: I0227 16:08:10.920003 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eebea1ec0fd24a7664f474db4e4cdb08e53c5c742c392900599dc3e7adcbee2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:10Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:10 crc kubenswrapper[4830]: I0227 16:08:10.926404 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:10 crc kubenswrapper[4830]: I0227 16:08:10.926481 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:10 crc kubenswrapper[4830]: I0227 16:08:10.926501 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:10 crc kubenswrapper[4830]: I0227 16:08:10.926529 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:10 crc kubenswrapper[4830]: I0227 16:08:10.926550 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:10Z","lastTransitionTime":"2026-02-27T16:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:11 crc kubenswrapper[4830]: I0227 16:08:11.030345 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:11 crc kubenswrapper[4830]: I0227 16:08:11.030403 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:11 crc kubenswrapper[4830]: I0227 16:08:11.030421 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:11 crc kubenswrapper[4830]: I0227 16:08:11.030447 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:11 crc kubenswrapper[4830]: I0227 16:08:11.030466 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:11Z","lastTransitionTime":"2026-02-27T16:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:11 crc kubenswrapper[4830]: I0227 16:08:11.134313 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:11 crc kubenswrapper[4830]: I0227 16:08:11.134673 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:11 crc kubenswrapper[4830]: I0227 16:08:11.134826 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:11 crc kubenswrapper[4830]: I0227 16:08:11.135003 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:11 crc kubenswrapper[4830]: I0227 16:08:11.135432 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:11Z","lastTransitionTime":"2026-02-27T16:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:11 crc kubenswrapper[4830]: I0227 16:08:11.238871 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:11 crc kubenswrapper[4830]: I0227 16:08:11.239296 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:11 crc kubenswrapper[4830]: I0227 16:08:11.239705 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:11 crc kubenswrapper[4830]: I0227 16:08:11.239878 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:11 crc kubenswrapper[4830]: I0227 16:08:11.240079 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:11Z","lastTransitionTime":"2026-02-27T16:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:11 crc kubenswrapper[4830]: I0227 16:08:11.296443 4830 generic.go:334] "Generic (PLEG): container finished" podID="672682a0-e75f-4d6c-b4f2-542944327497" containerID="f343d2da5177c14f48359b1377e123308e3686e4adfba0407940fe1368592548" exitCode=0 Feb 27 16:08:11 crc kubenswrapper[4830]: I0227 16:08:11.296574 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rgv8f" event={"ID":"672682a0-e75f-4d6c-b4f2-542944327497","Type":"ContainerDied","Data":"f343d2da5177c14f48359b1377e123308e3686e4adfba0407940fe1368592548"} Feb 27 16:08:11 crc kubenswrapper[4830]: I0227 16:08:11.323358 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p7298" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"616ebd42-6bbe-4536-ba35-f8b07f2f11b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a08121a78952c85326d07974c700397678777995d73cce9f24ace8932b3e9b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a4
5dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x9g9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p7298\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:11Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:11 crc kubenswrapper[4830]: I0227 16:08:11.336627 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kgdlg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ba2fe32-66e0-4bcd-a646-9d07c9a21c54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9l8vg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9l8vg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kgdlg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:11Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:11 crc 
kubenswrapper[4830]: I0227 16:08:11.343726 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:11 crc kubenswrapper[4830]: I0227 16:08:11.343842 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:11 crc kubenswrapper[4830]: I0227 16:08:11.343861 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:11 crc kubenswrapper[4830]: I0227 16:08:11.343886 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:11 crc kubenswrapper[4830]: I0227 16:08:11.343903 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:11Z","lastTransitionTime":"2026-02-27T16:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:11 crc kubenswrapper[4830]: I0227 16:08:11.352029 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f62df11-40bd-4531-baa9-5b7ab5679a67\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5046080bbc4855769fb50909ff06b8c2cfd2438a0b65749c89302a4e97342980\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11
\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97f85a479701374168505eed2e5a64ea144d1c48dce2022d60cb6ebaebfe159f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97f85a479701374168505eed2e5a64ea144d1c48dce2022d60cb6ebaebfe159f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:06:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:11Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:11 crc kubenswrapper[4830]: I0227 16:08:11.372447 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eebea1ec0fd24a7664f474db4e4cdb08e53c5c742c392900599dc3e7adcbee2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:11Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:11 crc kubenswrapper[4830]: I0227 16:08:11.398210 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rgv8f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"672682a0-e75f-4d6c-b4f2-542944327497\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b56418e015eaf4499e3e5e8ade43a2e4fb4803d5621b0db3fb365dd185d375f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b56418e015eaf4499e3e5e8ade43a2e4fb4803d5621b0db3fb365dd185d375f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://441562bcb0ff6663b1affd84ae53348e082907782a89293730d0d076e9e84d9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://441562bcb0ff6663b1affd84ae53348e082907782a89293730d0d076e9e84d9d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be1e4280634a32e892b1dbc9bb4034909146da92eb6b01aa69bb7208447a7394\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be1e4280634a32e892b1dbc9bb4034909146da92eb6b01aa69bb7208447a7394\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d37bf60056d482e28c3252074c7ae82c3bd9884ba28af16b639c21795c9e3f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d37bf60056d482e28c3252074c7ae82c3bd9884ba28af16b639c21795c9e3f3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f343d2da5177c14f48359b1377e123308e3686e4adfba0407940fe1368592548\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f343d2da5177c14f48359b1377e123308e3686e4adfba0407940fe1368592548\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rgv8f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:11Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:11 crc kubenswrapper[4830]: I0227 16:08:11.411885 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fcddf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6adbc0c4-e467-41f1-9190-d0dd3693eba6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2
a66caf75365a8658dd7422b2f715e7f095b5bd7d96c8a0a96ce1a80cb99849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zrsv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fcddf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:11Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:11 crc kubenswrapper[4830]: I0227 16:08:11.432045 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:11Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:11 crc kubenswrapper[4830]: I0227 16:08:11.447212 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:11 crc kubenswrapper[4830]: I0227 16:08:11.447246 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:11 crc kubenswrapper[4830]: I0227 16:08:11.447256 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:11 crc kubenswrapper[4830]: I0227 16:08:11.447273 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:11 crc kubenswrapper[4830]: I0227 16:08:11.447285 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:11Z","lastTransitionTime":"2026-02-27T16:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:08:11 crc kubenswrapper[4830]: I0227 16:08:11.451462 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:11Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:11 crc kubenswrapper[4830]: I0227 16:08:11.467436 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fsrq9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb72b0f7-1d22-4d13-9653-b1607aa2235d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b684822980dfea25ae15f46337d198cbbb8656b55c38874d917c2fae68431aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwzp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fsrq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:11Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:11 crc kubenswrapper[4830]: I0227 16:08:11.519858 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4995a66c4bc6d195947f67ef564aa18c7fe145e12067c2fca8e0de6b76b72f19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df5245f048370e7455a63a6ca01bffb8e514e22a123e45bc3a064ce4fbe27f45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1
d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:11Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:11 crc kubenswrapper[4830]: I0227 16:08:11.543067 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92b20f888e852fcbe89e6d5c685b4b0fad411956bb3dc3c5ba0e10d6b206e131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-27T16:08:11Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:11 crc kubenswrapper[4830]: I0227 16:08:11.550581 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:11 crc kubenswrapper[4830]: I0227 16:08:11.550608 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:11 crc kubenswrapper[4830]: I0227 16:08:11.550616 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:11 crc kubenswrapper[4830]: I0227 16:08:11.550631 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:11 crc kubenswrapper[4830]: I0227 16:08:11.550641 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:11Z","lastTransitionTime":"2026-02-27T16:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:11 crc kubenswrapper[4830]: I0227 16:08:11.614105 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bf9lh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3371981f3cdb5468ef6e9038108b3f9d884cec2f2db2f2754a3b5bf908d0372\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://273a7927f1c00c51d8c6157c6050f697430275ea8b1cac13ebe621d6556b4fef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a05759873cd3a2fb58756795e02d8247af4d8204388bf669ddc7cb1a8fbd7baa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc95e89196ccb52580164d57b311b57b2bcc7e666c3801f53378c59446c0e73f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:05Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05614397aaaa68f352f4263160c3463ce65513156517137bf09a3b5deb505c05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41cda3a7a261313b65434f3e8b885a9628d71f9c9a9c55f3e559226341c8aa41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61c52728c46cb073f23486d3a125c94f84cd97c41e10035ef21076655896a057\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c9e139c0599bb81726ae3f42b0d9e6ee394e3d6641eb356cf5c63fe9260f4c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45e519054da5e98ba4be7deaac7911617f787b44d308bda8d4ee6fb7573336a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45e519054da5e98ba4be7deaac7911617f787b44d308bda8d4ee6fb7573336a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bf9lh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:11Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:11 crc kubenswrapper[4830]: I0227 16:08:11.625091 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00d6b7ce-4757-4275-8345-60c1b546ce25\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58864ba639422794b99f6190c7ab9a81537913f51574220acb3838f97b4f9421\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-releas
e-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqztt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d403fb60beeb11f3cb72e7ca134b83a9c519375fbb6727d2070118a6b924516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqztt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2tv5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: 
Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:11Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:11 crc kubenswrapper[4830]: I0227 16:08:11.641743 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gqgb6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd5a4c5b-2008-4354-b26e-8763a631e55c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32024ba3e19bc33cc8197b4dd6fdf165f6de8fd7c3b1e85b2f4768c75c50736a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\
\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87p77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14a90f11c3291e6bde2a008f2e8e617c9c8db36ee65c3bbf26112bf8707d7cd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87p77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gqgb6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:11Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:11 crc 
kubenswrapper[4830]: I0227 16:08:11.655499 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:11Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:11 crc kubenswrapper[4830]: I0227 16:08:11.663154 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:11 crc kubenswrapper[4830]: I0227 16:08:11.663196 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:11 crc kubenswrapper[4830]: I0227 16:08:11.663212 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:11 crc kubenswrapper[4830]: I0227 16:08:11.663235 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:11 crc kubenswrapper[4830]: I0227 16:08:11.663251 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:11Z","lastTransitionTime":"2026-02-27T16:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:08:11 crc kubenswrapper[4830]: I0227 16:08:11.671170 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d20f886-cfdb-48c7-9754-6b7255b1124f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a3f71ef2ccab58baa94377c8f63ded83ca5f73a4d7a24d754719670685c55a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec37d31a82aedaacdf47fa540d508e6ca53626d082c45933c11a72f41a819a8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b6cb8e7a94967cd24e883272ba2f923a3adae43cc6105ea7506c1fba144de241\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://acf80f37f1e38e051b10c7a1c0eeef8f0c3db7fc0e5653c54d82da7956a8eed2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acf80f37f1e38e051b10c7a1c0eeef8f0c3db7fc0e5653c54d82da7956a8eed2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T16:07:52Z\\\",\\\"message\\\":\\\"g file observer\\\\nW0227 16:07:52.525113 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 16:07:52.525413 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 16:07:52.526578 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-70351561/tls.crt::/tmp/serving-cert-70351561/tls.key\\\\\\\"\\\\nI0227 16:07:52.790383 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 16:07:52.794547 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 16:07:52.794587 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 16:07:52.794627 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 16:07:52.794638 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 16:07:52.801197 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0227 16:07:52.801263 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 16:07:52.801293 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0227 16:07:52.801246 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0227 16:07:52.801325 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 16:07:52.801373 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 16:07:52.801389 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 16:07:52.801396 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0227 16:07:52.804476 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T16:07:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97ffad51412be8330198e944d4f687380d5066b777b54e362e8f8b772b808c89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1fc7975111dde3841671f046742eea3d0eefcb040eee8aadb99a7ca8eba8bd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1fc7975111dde3841671f046742eea3d0eefcb040eee8aadb99a7ca8eba8bd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:06:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:11Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:11 crc kubenswrapper[4830]: I0227 16:08:11.767411 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:11 crc kubenswrapper[4830]: I0227 16:08:11.771883 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:11 crc kubenswrapper[4830]: I0227 16:08:11.771905 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:11 crc kubenswrapper[4830]: I0227 16:08:11.771928 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:11 crc kubenswrapper[4830]: I0227 16:08:11.771991 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:11Z","lastTransitionTime":"2026-02-27T16:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:11 crc kubenswrapper[4830]: I0227 16:08:11.874907 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:11 crc kubenswrapper[4830]: I0227 16:08:11.875016 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:11 crc kubenswrapper[4830]: I0227 16:08:11.875036 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:11 crc kubenswrapper[4830]: I0227 16:08:11.875062 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:11 crc kubenswrapper[4830]: I0227 16:08:11.875082 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:11Z","lastTransitionTime":"2026-02-27T16:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:11 crc kubenswrapper[4830]: I0227 16:08:11.977742 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:11 crc kubenswrapper[4830]: I0227 16:08:11.977801 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:11 crc kubenswrapper[4830]: I0227 16:08:11.977818 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:11 crc kubenswrapper[4830]: I0227 16:08:11.977842 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:11 crc kubenswrapper[4830]: I0227 16:08:11.977858 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:11Z","lastTransitionTime":"2026-02-27T16:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:12 crc kubenswrapper[4830]: I0227 16:08:12.080025 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:12 crc kubenswrapper[4830]: I0227 16:08:12.080087 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:12 crc kubenswrapper[4830]: I0227 16:08:12.080104 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:12 crc kubenswrapper[4830]: I0227 16:08:12.080128 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:12 crc kubenswrapper[4830]: I0227 16:08:12.080148 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:12Z","lastTransitionTime":"2026-02-27T16:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:12 crc kubenswrapper[4830]: I0227 16:08:12.185810 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:12 crc kubenswrapper[4830]: I0227 16:08:12.185870 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:12 crc kubenswrapper[4830]: I0227 16:08:12.185892 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:12 crc kubenswrapper[4830]: I0227 16:08:12.185921 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:12 crc kubenswrapper[4830]: I0227 16:08:12.185976 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:12Z","lastTransitionTime":"2026-02-27T16:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:12 crc kubenswrapper[4830]: I0227 16:08:12.292846 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:12 crc kubenswrapper[4830]: I0227 16:08:12.292886 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:12 crc kubenswrapper[4830]: I0227 16:08:12.292905 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:12 crc kubenswrapper[4830]: I0227 16:08:12.292927 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:12 crc kubenswrapper[4830]: I0227 16:08:12.292967 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:12Z","lastTransitionTime":"2026-02-27T16:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:12 crc kubenswrapper[4830]: I0227 16:08:12.309038 4830 generic.go:334] "Generic (PLEG): container finished" podID="672682a0-e75f-4d6c-b4f2-542944327497" containerID="c27a1b90defc90e0a69236bb9c462d6e5fd36e6500f0f4c720ccd960b0d3ee42" exitCode=0 Feb 27 16:08:12 crc kubenswrapper[4830]: I0227 16:08:12.309123 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rgv8f" event={"ID":"672682a0-e75f-4d6c-b4f2-542944327497","Type":"ContainerDied","Data":"c27a1b90defc90e0a69236bb9c462d6e5fd36e6500f0f4c720ccd960b0d3ee42"} Feb 27 16:08:12 crc kubenswrapper[4830]: I0227 16:08:12.335255 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d20f886-cfdb-48c7-9754-6b7255b1124f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a3f71ef2ccab58baa94377c8f63ded83ca5f73a4d7a24d754719670685c55a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec37d31a82aedaacdf47fa540d508e6ca53626d082c45933c11a72f41a819a8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b6cb8e7a94967cd24e883272ba2f923a3adae43cc6105ea7506c1fba144de241\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://acf80f37f1e38e051b10c7a1c0eeef8f0c3db7fc0e5653c54d82da7956a8eed2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acf80f37f1e38e051b10c7a1c0eeef8f0c3db7fc0e5653c54d82da7956a8eed2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T16:07:52Z\\\",\\\"message\\\":\\\"g file observer\\\\nW0227 16:07:52.525113 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 16:07:52.525413 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 16:07:52.526578 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-70351561/tls.crt::/tmp/serving-cert-70351561/tls.key\\\\\\\"\\\\nI0227 16:07:52.790383 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 16:07:52.794547 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 16:07:52.794587 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 16:07:52.794627 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 16:07:52.794638 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 16:07:52.801197 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0227 16:07:52.801263 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 16:07:52.801293 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0227 16:07:52.801246 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0227 16:07:52.801325 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 16:07:52.801373 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 16:07:52.801389 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 16:07:52.801396 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0227 16:07:52.804476 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T16:07:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97ffad51412be8330198e944d4f687380d5066b777b54e362e8f8b772b808c89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1fc7975111dde3841671f046742eea3d0eefcb040eee8aadb99a7ca8eba8bd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1fc7975111dde3841671f046742eea3d0eefcb040eee8aadb99a7ca8eba8bd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:06:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:12Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:12 crc kubenswrapper[4830]: I0227 16:08:12.353370 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p7298" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"616ebd42-6bbe-4536-ba35-f8b07f2f11b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a08121a78952c85326d07974c700397678777995d73cce9f24ace8932b3e9b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x9g9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p7298\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:12Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:12 crc kubenswrapper[4830]: I0227 16:08:12.371476 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kgdlg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ba2fe32-66e0-4bcd-a646-9d07c9a21c54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9l8vg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9l8vg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kgdlg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:12Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:12 crc 
kubenswrapper[4830]: I0227 16:08:12.390622 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f62df11-40bd-4531-baa9-5b7ab5679a67\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5046080bbc4855769fb50909ff06b8c2cfd2438a0b65749c89302a4e97342980\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\
":\\\"cri-o://97f85a479701374168505eed2e5a64ea144d1c48dce2022d60cb6ebaebfe159f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97f85a479701374168505eed2e5a64ea144d1c48dce2022d60cb6ebaebfe159f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:06:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:12Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:12 crc kubenswrapper[4830]: I0227 16:08:12.395761 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:12 crc kubenswrapper[4830]: I0227 16:08:12.395804 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:12 crc kubenswrapper[4830]: I0227 16:08:12.395820 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:12 crc kubenswrapper[4830]: I0227 
16:08:12.395842 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:12 crc kubenswrapper[4830]: I0227 16:08:12.395859 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:12Z","lastTransitionTime":"2026-02-27T16:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:08:12 crc kubenswrapper[4830]: I0227 16:08:12.415694 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eebea1ec0fd24a7664f474db4e4cdb08e53c5c742c392900599dc3e7adcbee2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCou
nt\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:12Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:12 crc kubenswrapper[4830]: I0227 16:08:12.441289 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rgv8f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"672682a0-e75f-4d6c-b4f2-542944327497\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b56418e015eaf4499e3e5e8ade43a2e4fb4803d5621b0db3fb365dd185d375f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b56418e015eaf4499e3e5e8ade43a2e4fb4803d5621b0db3fb365dd185d375f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://441562bcb0ff6663b1affd84ae53348e082907782a89293730d0d076e9e84d9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://441562bcb0ff6663b1affd84ae53348e082907782a89293730d0d076e9e84d9d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be1e4280634a32e892b1dbc9bb4034909146d
a92eb6b01aa69bb7208447a7394\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be1e4280634a32e892b1dbc9bb4034909146da92eb6b01aa69bb7208447a7394\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d37bf60056d482e28c3252074c7ae82c3bd9884ba28af16b639c21795c9e3f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d37bf60056d482e28c3252074c7ae82c3bd9884ba28af16b639c21795c9e3f3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-0
2-27T16:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f343d2da5177c14f48359b1377e123308e3686e4adfba0407940fe1368592548\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f343d2da5177c14f48359b1377e123308e3686e4adfba0407940fe1368592548\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c27a1b90defc90e0a69236bb9c462d6e5fd36e6500f0f4c720ccd960b0d3ee42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c27a1b90defc90e0a69236bb9c462d6e5fd36e6500f0f4c720ccd960b0d3ee42\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rgv8f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:12Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:12 crc kubenswrapper[4830]: I0227 16:08:12.454784 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fcddf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6adbc0c4-e467-41f1-9190-d0dd3693eba6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2a66caf75365a8658dd7422b2f715e7f095b5bd7d96c8a0a96ce1a80cb99849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zrsv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fcddf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:12Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:12 crc kubenswrapper[4830]: I0227 16:08:12.470553 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:12Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:12 crc kubenswrapper[4830]: I0227 16:08:12.486409 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:12Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:12 crc kubenswrapper[4830]: I0227 16:08:12.503770 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fsrq9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb72b0f7-1d22-4d13-9653-b1607aa2235d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b684822980dfea25ae15f46337d198cbbb8656b55c38874d917c2fae68431aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwzp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fsrq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:12Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:12 crc kubenswrapper[4830]: I0227 16:08:12.504486 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:12 crc 
kubenswrapper[4830]: I0227 16:08:12.504542 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:12 crc kubenswrapper[4830]: I0227 16:08:12.504559 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:12 crc kubenswrapper[4830]: I0227 16:08:12.504586 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:12 crc kubenswrapper[4830]: I0227 16:08:12.504604 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:12Z","lastTransitionTime":"2026-02-27T16:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:08:12 crc kubenswrapper[4830]: I0227 16:08:12.520157 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4995a66c4bc6d195947f67ef564aa18c7fe145e12067c2fca8e0de6b76b72f19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df5245f048370e7455a63a6ca01bffb8e514e22a123e45bc3a064ce4fbe27f45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:12Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:12 crc kubenswrapper[4830]: I0227 16:08:12.533373 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92b20f888e852fcbe89e6d5c685b4b0fad411956bb3dc3c5ba0e10d6b206e131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-27T16:08:12Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:12 crc kubenswrapper[4830]: I0227 16:08:12.554741 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bf9lh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3371981f3cdb5468ef6e9038108b3f9d884cec2f2db2f2754a3b5bf908d0372\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://273a7927f1c00c51d8c6157c6050f697430275ea8b1cac13ebe621d6556b4fef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a05759873cd3a2fb58756795e02d8247af4d8204388bf669ddc7cb1a8fbd7baa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc95e89196ccb52580164d57b311b57b2bcc7e666c3801f53378c59446c0e73f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:05Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05614397aaaa68f352f4263160c3463ce65513156517137bf09a3b5deb505c05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41cda3a7a261313b65434f3e8b885a9628d71f9c9a9c55f3e559226341c8aa41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61c52728c46cb073f23486d3a125c94f84cd97c41e10035ef21076655896a057\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c9e139c0599bb81726ae3f42b0d9e6ee394e3d6641eb356cf5c63fe9260f4c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45e519054da5e98ba4be7deaac7911617f787b44d308bda8d4ee6fb7573336a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45e519054da5e98ba4be7deaac7911617f787b44d308bda8d4ee6fb7573336a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bf9lh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:12Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:12 crc kubenswrapper[4830]: I0227 16:08:12.573356 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00d6b7ce-4757-4275-8345-60c1b546ce25\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58864ba639422794b99f6190c7ab9a81537913f51574220acb3838f97b4f9421\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-releas
e-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqztt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d403fb60beeb11f3cb72e7ca134b83a9c519375fbb6727d2070118a6b924516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqztt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2tv5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: 
Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:12Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:12 crc kubenswrapper[4830]: I0227 16:08:12.590427 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gqgb6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd5a4c5b-2008-4354-b26e-8763a631e55c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32024ba3e19bc33cc8197b4dd6fdf165f6de8fd7c3b1e85b2f4768c75c50736a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\
\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87p77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14a90f11c3291e6bde2a008f2e8e617c9c8db36ee65c3bbf26112bf8707d7cd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87p77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gqgb6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:12Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:12 crc 
kubenswrapper[4830]: I0227 16:08:12.607441 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:12 crc kubenswrapper[4830]: I0227 16:08:12.607512 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:12 crc kubenswrapper[4830]: I0227 16:08:12.607534 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:12 crc kubenswrapper[4830]: I0227 16:08:12.607561 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:12 crc kubenswrapper[4830]: I0227 16:08:12.607588 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:12Z","lastTransitionTime":"2026-02-27T16:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:12 crc kubenswrapper[4830]: I0227 16:08:12.612181 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:12Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:12 crc kubenswrapper[4830]: I0227 16:08:12.711837 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:12 crc kubenswrapper[4830]: I0227 16:08:12.712107 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:12 crc kubenswrapper[4830]: I0227 16:08:12.712219 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:12 crc kubenswrapper[4830]: I0227 16:08:12.712309 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:12 crc kubenswrapper[4830]: I0227 16:08:12.712394 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:12Z","lastTransitionTime":"2026-02-27T16:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:08:12 crc kubenswrapper[4830]: I0227 16:08:12.762287 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 16:08:12 crc kubenswrapper[4830]: E0227 16:08:12.762455 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 16:08:12 crc kubenswrapper[4830]: I0227 16:08:12.763078 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kgdlg" Feb 27 16:08:12 crc kubenswrapper[4830]: E0227 16:08:12.763196 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kgdlg" podUID="6ba2fe32-66e0-4bcd-a646-9d07c9a21c54" Feb 27 16:08:12 crc kubenswrapper[4830]: I0227 16:08:12.763287 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 16:08:12 crc kubenswrapper[4830]: E0227 16:08:12.763395 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 16:08:12 crc kubenswrapper[4830]: I0227 16:08:12.763430 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 16:08:12 crc kubenswrapper[4830]: E0227 16:08:12.763509 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 16:08:12 crc kubenswrapper[4830]: I0227 16:08:12.815778 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:12 crc kubenswrapper[4830]: I0227 16:08:12.815843 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:12 crc kubenswrapper[4830]: I0227 16:08:12.815861 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:12 crc kubenswrapper[4830]: I0227 16:08:12.815888 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:12 crc kubenswrapper[4830]: I0227 16:08:12.815905 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:12Z","lastTransitionTime":"2026-02-27T16:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:12 crc kubenswrapper[4830]: I0227 16:08:12.923231 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:12 crc kubenswrapper[4830]: I0227 16:08:12.923322 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:12 crc kubenswrapper[4830]: I0227 16:08:12.923342 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:12 crc kubenswrapper[4830]: I0227 16:08:12.923374 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:12 crc kubenswrapper[4830]: I0227 16:08:12.923394 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:12Z","lastTransitionTime":"2026-02-27T16:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:13 crc kubenswrapper[4830]: I0227 16:08:13.025630 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:13 crc kubenswrapper[4830]: I0227 16:08:13.025672 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:13 crc kubenswrapper[4830]: I0227 16:08:13.025683 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:13 crc kubenswrapper[4830]: I0227 16:08:13.025703 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:13 crc kubenswrapper[4830]: I0227 16:08:13.025715 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:13Z","lastTransitionTime":"2026-02-27T16:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:13 crc kubenswrapper[4830]: I0227 16:08:13.128301 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:13 crc kubenswrapper[4830]: I0227 16:08:13.128356 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:13 crc kubenswrapper[4830]: I0227 16:08:13.128374 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:13 crc kubenswrapper[4830]: I0227 16:08:13.128401 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:13 crc kubenswrapper[4830]: I0227 16:08:13.128417 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:13Z","lastTransitionTime":"2026-02-27T16:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:13 crc kubenswrapper[4830]: I0227 16:08:13.232391 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:13 crc kubenswrapper[4830]: I0227 16:08:13.232443 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:13 crc kubenswrapper[4830]: I0227 16:08:13.232456 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:13 crc kubenswrapper[4830]: I0227 16:08:13.232473 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:13 crc kubenswrapper[4830]: I0227 16:08:13.232487 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:13Z","lastTransitionTime":"2026-02-27T16:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:13 crc kubenswrapper[4830]: I0227 16:08:13.316350 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rgv8f" event={"ID":"672682a0-e75f-4d6c-b4f2-542944327497","Type":"ContainerStarted","Data":"c5efac20f84ec097b144c4cb152a6eba826b5eac50e222e2618b75bf163cbe93"} Feb 27 16:08:13 crc kubenswrapper[4830]: I0227 16:08:13.334170 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d20f886-cfdb-48c7-9754-6b7255b1124f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a3f71ef2ccab58baa94377c8f63ded83ca5f73a4d7a24d754719670685c55a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec37d31a82aedaacdf47fa540d508e6ca53626d082c45933c11a72f41a819a8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b6cb8e7a94967cd24e883272ba2f923a3adae43cc6105ea7506c1fba144de241\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://acf80f37f1e38e051b10c7a1c0eeef8f0c3db7fc0e5653c54d82da7956a8eed2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acf80f37f1e38e051b10c7a1c0eeef8f0c3db7fc0e5653c54d82da7956a8eed2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T16:07:52Z\\\",\\\"message\\\":\\\"g file observer\\\\nW0227 16:07:52.525113 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 16:07:52.525413 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 16:07:52.526578 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-70351561/tls.crt::/tmp/serving-cert-70351561/tls.key\\\\\\\"\\\\nI0227 16:07:52.790383 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 16:07:52.794547 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 16:07:52.794587 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 16:07:52.794627 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 16:07:52.794638 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 16:07:52.801197 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0227 16:07:52.801263 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 16:07:52.801293 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0227 16:07:52.801246 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0227 16:07:52.801325 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 16:07:52.801373 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 16:07:52.801389 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 16:07:52.801396 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0227 16:07:52.804476 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T16:07:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97ffad51412be8330198e944d4f687380d5066b777b54e362e8f8b772b808c89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1fc7975111dde3841671f046742eea3d0eefcb040eee8aadb99a7ca8eba8bd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1fc7975111dde3841671f046742eea3d0eefcb040eee8aadb99a7ca8eba8bd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:06:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:13Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:13 crc kubenswrapper[4830]: I0227 16:08:13.334925 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:13 crc kubenswrapper[4830]: I0227 16:08:13.335037 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:13 crc kubenswrapper[4830]: I0227 16:08:13.335059 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:13 crc kubenswrapper[4830]: I0227 16:08:13.335087 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:13 crc kubenswrapper[4830]: I0227 16:08:13.335108 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:13Z","lastTransitionTime":"2026-02-27T16:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:13 crc kubenswrapper[4830]: I0227 16:08:13.348132 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f62df11-40bd-4531-baa9-5b7ab5679a67\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5046080bbc4855769fb50909ff06b8c2cfd2438a0b65749c89302a4e97342980\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11
\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97f85a479701374168505eed2e5a64ea144d1c48dce2022d60cb6ebaebfe159f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97f85a479701374168505eed2e5a64ea144d1c48dce2022d60cb6ebaebfe159f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:06:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:13Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:13 crc kubenswrapper[4830]: I0227 16:08:13.362564 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eebea1ec0fd24a7664f474db4e4cdb08e53c5c742c392900599dc3e7adcbee2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:13Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:13 crc kubenswrapper[4830]: I0227 16:08:13.373614 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p7298" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"616ebd42-6bbe-4536-ba35-f8b07f2f11b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a08121a78952c85326d07974c700397678777995d73cce9f24ace8932b3e9b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\
",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x9g9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p7298\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:13Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:13 crc kubenswrapper[4830]: I0227 16:08:13.384073 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kgdlg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ba2fe32-66e0-4bcd-a646-9d07c9a21c54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9l8vg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9l8vg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kgdlg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:13Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:13 crc kubenswrapper[4830]: I0227 16:08:13.398917 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:13Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:13 crc kubenswrapper[4830]: I0227 16:08:13.422059 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:13Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:13 crc kubenswrapper[4830]: I0227 16:08:13.437181 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:13 crc kubenswrapper[4830]: I0227 16:08:13.437209 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:13 crc kubenswrapper[4830]: I0227 16:08:13.437220 4830 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:13 crc kubenswrapper[4830]: I0227 16:08:13.437236 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:13 crc kubenswrapper[4830]: I0227 16:08:13.437246 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:13Z","lastTransitionTime":"2026-02-27T16:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:08:13 crc kubenswrapper[4830]: I0227 16:08:13.447884 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fsrq9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb72b0f7-1d22-4d13-9653-b1607aa2235d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b684822980dfea25ae15f46337d198cbbb8656b55c
38874d917c2fae68431aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernet
es.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwzp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fsrq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:13Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:13 crc kubenswrapper[4830]: I0227 16:08:13.472718 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rgv8f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"672682a0-e75f-4d6c-b4f2-542944327497\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5efac20f84ec097b144c4cb152
a6eba826b5eac50e222e2618b75bf163cbe93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b56418e015eaf4499e3e5e8ade43a2e4fb4803d5621b0db3fb365dd185d375f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b56418e015eaf4499e3e5e8ade43a2e4fb4803d5621b0db3fb365dd185d375f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{
\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://441562bcb0ff6663b1affd84ae53348e082907782a89293730d0d076e9e84d9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://441562bcb0ff6663b1affd84ae53348e082907782a89293730d0d076e9e84d9d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be1e4280634a32e892b1dbc9bb4034909146da92eb6b01aa69bb7208447a7394\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7f
a93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be1e4280634a32e892b1dbc9bb4034909146da92eb6b01aa69bb7208447a7394\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d37bf60056d482e28c3252074c7ae82c3bd9884ba28af16b639c21795c9e3f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d37bf60056d482e28c3252074c7ae82c3bd9884ba28af16b639c21795c9e3f3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\
":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f343d2da5177c14f48359b1377e123308e3686e4adfba0407940fe1368592548\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f343d2da5177c14f48359b1377e123308e3686e4adfba0407940fe1368592548\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c27a1b90defc90e0a69236bb9c462d6e5fd36e6500f0f4c720ccd960b0d3ee42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":tru
e,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c27a1b90defc90e0a69236bb9c462d6e5fd36e6500f0f4c720ccd960b0d3ee42\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rgv8f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:13Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:13 crc kubenswrapper[4830]: I0227 16:08:13.490187 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fcddf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6adbc0c4-e467-41f1-9190-d0dd3693eba6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2a66caf75365a8658dd7422b2f715e7f095b5bd7d96c8a0a96ce1a80cb99849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zrsv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fcddf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:13Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:13 crc kubenswrapper[4830]: I0227 16:08:13.504234 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gqgb6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd5a4c5b-2008-4354-b26e-8763a631e55c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32024ba3e19bc33cc8197b4dd6fdf165f6de8fd7c3b1e85b2f4768c75c50736a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d7
93426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87p77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14a90f11c3291e6bde2a008f2e8e617c9c8db36ee65c3bbf26112bf8707d7cd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87p77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gqgb6\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:13Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:13 crc kubenswrapper[4830]: I0227 16:08:13.526565 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:13Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:13 crc kubenswrapper[4830]: I0227 16:08:13.539140 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:13 crc kubenswrapper[4830]: I0227 16:08:13.539174 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:13 crc kubenswrapper[4830]: I0227 16:08:13.539185 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:13 crc kubenswrapper[4830]: I0227 16:08:13.539200 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:13 crc kubenswrapper[4830]: I0227 16:08:13.539212 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:13Z","lastTransitionTime":"2026-02-27T16:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:08:13 crc kubenswrapper[4830]: I0227 16:08:13.545783 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4995a66c4bc6d195947f67ef564aa18c7fe145e12067c2fca8e0de6b76b72f19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://df5245f048370e7455a63a6ca01bffb8e514e22a123e45bc3a064ce4fbe27f45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:13Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:13 crc kubenswrapper[4830]: I0227 16:08:13.563620 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92b20f888e852fcbe89e6d5c685b4b0fad411956bb3dc3c5ba0e10d6b206e131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-27T16:08:13Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:13 crc kubenswrapper[4830]: I0227 16:08:13.587167 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bf9lh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3371981f3cdb5468ef6e9038108b3f9d884cec2f2db2f2754a3b5bf908d0372\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://273a7927f1c00c51d8c6157c6050f697430275ea8b1cac13ebe621d6556b4fef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a05759873cd3a2fb58756795e02d8247af4d8204388bf669ddc7cb1a8fbd7baa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc95e89196ccb52580164d57b311b57b2bcc7e666c3801f53378c59446c0e73f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:05Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05614397aaaa68f352f4263160c3463ce65513156517137bf09a3b5deb505c05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41cda3a7a261313b65434f3e8b885a9628d71f9c9a9c55f3e559226341c8aa41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61c52728c46cb073f23486d3a125c94f84cd97c41e10035ef21076655896a057\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c9e139c0599bb81726ae3f42b0d9e6ee394e3d6641eb356cf5c63fe9260f4c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45e519054da5e98ba4be7deaac7911617f787b44d308bda8d4ee6fb7573336a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45e519054da5e98ba4be7deaac7911617f787b44d308bda8d4ee6fb7573336a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bf9lh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:13Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:13 crc kubenswrapper[4830]: I0227 16:08:13.602026 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00d6b7ce-4757-4275-8345-60c1b546ce25\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58864ba639422794b99f6190c7ab9a81537913f51574220acb3838f97b4f9421\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-releas
e-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqztt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d403fb60beeb11f3cb72e7ca134b83a9c519375fbb6727d2070118a6b924516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqztt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2tv5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: 
Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:13Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:13 crc kubenswrapper[4830]: I0227 16:08:13.641033 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:13 crc kubenswrapper[4830]: I0227 16:08:13.641066 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:13 crc kubenswrapper[4830]: I0227 16:08:13.641076 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:13 crc kubenswrapper[4830]: I0227 16:08:13.641092 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:13 crc kubenswrapper[4830]: I0227 16:08:13.641102 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:13Z","lastTransitionTime":"2026-02-27T16:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:13 crc kubenswrapper[4830]: I0227 16:08:13.743825 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:13 crc kubenswrapper[4830]: I0227 16:08:13.743859 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:13 crc kubenswrapper[4830]: I0227 16:08:13.743869 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:13 crc kubenswrapper[4830]: I0227 16:08:13.743884 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:13 crc kubenswrapper[4830]: I0227 16:08:13.743894 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:13Z","lastTransitionTime":"2026-02-27T16:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:13 crc kubenswrapper[4830]: I0227 16:08:13.846921 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:13 crc kubenswrapper[4830]: I0227 16:08:13.847022 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:13 crc kubenswrapper[4830]: I0227 16:08:13.847045 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:13 crc kubenswrapper[4830]: I0227 16:08:13.847078 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:13 crc kubenswrapper[4830]: I0227 16:08:13.847100 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:13Z","lastTransitionTime":"2026-02-27T16:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:13 crc kubenswrapper[4830]: I0227 16:08:13.950351 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:13 crc kubenswrapper[4830]: I0227 16:08:13.950448 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:13 crc kubenswrapper[4830]: I0227 16:08:13.950473 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:13 crc kubenswrapper[4830]: I0227 16:08:13.950558 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:13 crc kubenswrapper[4830]: I0227 16:08:13.950582 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:13Z","lastTransitionTime":"2026-02-27T16:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:14 crc kubenswrapper[4830]: I0227 16:08:14.053235 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:14 crc kubenswrapper[4830]: I0227 16:08:14.053297 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:14 crc kubenswrapper[4830]: I0227 16:08:14.053314 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:14 crc kubenswrapper[4830]: I0227 16:08:14.053341 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:14 crc kubenswrapper[4830]: I0227 16:08:14.053358 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:14Z","lastTransitionTime":"2026-02-27T16:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:14 crc kubenswrapper[4830]: I0227 16:08:14.157256 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:14 crc kubenswrapper[4830]: I0227 16:08:14.157300 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:14 crc kubenswrapper[4830]: I0227 16:08:14.157316 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:14 crc kubenswrapper[4830]: I0227 16:08:14.157340 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:14 crc kubenswrapper[4830]: I0227 16:08:14.157358 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:14Z","lastTransitionTime":"2026-02-27T16:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:14 crc kubenswrapper[4830]: I0227 16:08:14.260085 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:14 crc kubenswrapper[4830]: I0227 16:08:14.260142 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:14 crc kubenswrapper[4830]: I0227 16:08:14.260201 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:14 crc kubenswrapper[4830]: I0227 16:08:14.260233 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:14 crc kubenswrapper[4830]: I0227 16:08:14.260255 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:14Z","lastTransitionTime":"2026-02-27T16:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:14 crc kubenswrapper[4830]: I0227 16:08:14.326860 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bf9lh_2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904/ovnkube-controller/0.log" Feb 27 16:08:14 crc kubenswrapper[4830]: I0227 16:08:14.329986 4830 generic.go:334] "Generic (PLEG): container finished" podID="2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904" containerID="61c52728c46cb073f23486d3a125c94f84cd97c41e10035ef21076655896a057" exitCode=1 Feb 27 16:08:14 crc kubenswrapper[4830]: I0227 16:08:14.330027 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bf9lh" event={"ID":"2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904","Type":"ContainerDied","Data":"61c52728c46cb073f23486d3a125c94f84cd97c41e10035ef21076655896a057"} Feb 27 16:08:14 crc kubenswrapper[4830]: I0227 16:08:14.330624 4830 scope.go:117] "RemoveContainer" containerID="61c52728c46cb073f23486d3a125c94f84cd97c41e10035ef21076655896a057" Feb 27 16:08:14 crc kubenswrapper[4830]: I0227 16:08:14.354153 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d20f886-cfdb-48c7-9754-6b7255b1124f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a3f71ef2ccab58baa94377c8f63ded83ca5f73a4d7a24d754719670685c55a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec37d31a82aedaacdf47fa540d508e6ca53626d082c45933c11a72f41a819a8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6cb8e7a94967cd24e883272ba2f923a3adae43cc6105ea7506c1fba144de241\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://acf80f37f1e38e051b10c7a1c0eeef8f0c3db7fc0e5653c54d82da7956a8eed2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acf80f37f1e38e051b10c7a1c0eeef8f0c3db7fc0e5653c54d82da7956a8eed2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T16:07:52Z\\\",\\\"message\\\":\\\"g file observer\\\\nW0227 16:07:52.525113 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 16:07:52.525413 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 16:07:52.526578 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-70351561/tls.crt::/tmp/serving-cert-70351561/tls.key\\\\\\\"\\\\nI0227 16:07:52.790383 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 16:07:52.794547 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 16:07:52.794587 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 16:07:52.794627 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 16:07:52.794638 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 16:07:52.801197 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0227 16:07:52.801263 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 16:07:52.801293 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0227 16:07:52.801246 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0227 16:07:52.801325 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 16:07:52.801373 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 16:07:52.801389 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 16:07:52.801396 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0227 16:07:52.804476 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T16:07:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97ffad51412be8330198e944d4f687380d5066b777b54e362e8f8b772b808c89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1fc7975111dde3841671f046742eea3d0eefcb040eee8aadb99a7ca8eba8bd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1f
c7975111dde3841671f046742eea3d0eefcb040eee8aadb99a7ca8eba8bd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:06:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:14Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:14 crc kubenswrapper[4830]: I0227 16:08:14.363116 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:14 crc kubenswrapper[4830]: I0227 16:08:14.363156 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:14 crc kubenswrapper[4830]: I0227 16:08:14.363174 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:14 crc kubenswrapper[4830]: I0227 16:08:14.363195 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:14 crc kubenswrapper[4830]: I0227 16:08:14.363215 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:14Z","lastTransitionTime":"2026-02-27T16:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:08:14 crc kubenswrapper[4830]: I0227 16:08:14.372395 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p7298" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"616ebd42-6bbe-4536-ba35-f8b07f2f11b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a08121a78952c85326d07974c700397678777995d73cce9f24ace8932b3e9b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\
"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x9g9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p7298\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:14Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:14 crc kubenswrapper[4830]: I0227 16:08:14.388573 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kgdlg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ba2fe32-66e0-4bcd-a646-9d07c9a21c54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9l8vg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9l8vg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kgdlg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:14Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:14 crc kubenswrapper[4830]: I0227 16:08:14.405187 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f62df11-40bd-4531-baa9-5b7ab5679a67\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5046080bbc4855769fb50909ff06b8c2cfd2438a0b65749c89302a4e97342980\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPat
h\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97f85a479701374168505eed2e5a64ea144d1c48dce2022d60cb6ebaebfe159f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97f85a479701374168505eed2e5a64ea144d1c48dce2022d60cb6ebaebfe159f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:06:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:14Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:14 crc kubenswrapper[4830]: I0227 16:08:14.426353 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eebea1ec0fd24a7664f474db4e4cdb08e53c5c742c392900599dc3e7adcbee2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:14Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:14 crc kubenswrapper[4830]: I0227 16:08:14.450421 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rgv8f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"672682a0-e75f-4d6c-b4f2-542944327497\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5efac20f84ec097b144c4cb152a6eba826b5eac50e222e2618b75bf163cbe93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b56418e015eaf4499e3e5e8ade43a2e4fb4803d5621b0db3fb365dd185d375f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b56418e015eaf4499e3e5e8ade43a2e4fb4803d5621b0db3fb365dd185d375f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://441562bcb0ff6663b1affd84ae53348e082907782a89293730d0d076e9e84d9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://441562bcb0ff6663b1affd84ae53348e082907782a89293730d0d076e9e84d9d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be1e4280634a32e892b1dbc9bb4034909146da92eb6b01aa69bb7208447a7394\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be1e4280634a32e892b1dbc9bb4034909146da92eb6b01aa69bb7208447a7394\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-rele
ase\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d37bf60056d482e28c3252074c7ae82c3bd9884ba28af16b639c21795c9e3f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d37bf60056d482e28c3252074c7ae82c3bd9884ba28af16b639c21795c9e3f3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f343d2da5177c14f48359b1377e123308e3686e4adfba0407940fe1368592548\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"n
ame\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f343d2da5177c14f48359b1377e123308e3686e4adfba0407940fe1368592548\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c27a1b90defc90e0a69236bb9c462d6e5fd36e6500f0f4c720ccd960b0d3ee42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c27a1b90defc90e0a69236bb9c462d6e5fd36e6500f0f4c720ccd960b0d3ee42\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rgv8f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:14Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:14 crc kubenswrapper[4830]: I0227 16:08:14.465750 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fcddf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6adbc0c4-e467-41f1-9190-d0dd3693eba6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2a66caf75365a8658dd7422b2f715e7f095b5bd7d96c8a0a96ce1a80cb99849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb27670
3f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zrsv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fcddf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:14Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:14 crc kubenswrapper[4830]: I0227 16:08:14.466447 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:14 crc kubenswrapper[4830]: I0227 16:08:14.466497 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:14 crc kubenswrapper[4830]: I0227 16:08:14.466519 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:14 crc kubenswrapper[4830]: I0227 16:08:14.466549 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:14 crc kubenswrapper[4830]: I0227 
16:08:14.466570 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:14Z","lastTransitionTime":"2026-02-27T16:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:08:14 crc kubenswrapper[4830]: I0227 16:08:14.484180 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:14Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:14 crc kubenswrapper[4830]: I0227 16:08:14.502323 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:14Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:14 crc kubenswrapper[4830]: I0227 16:08:14.520976 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fsrq9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb72b0f7-1d22-4d13-9653-b1607aa2235d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b684822980dfea25ae15f46337d198cbbb8656b55c38874d917c2fae68431aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwzp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fsrq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:14Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:14 crc kubenswrapper[4830]: I0227 16:08:14.539436 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4995a66c4bc6d195947f67ef564aa18c7fe145e12067c2fca8e0de6b76b72f19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df5245f048370e7455a63a6ca01bffb8e514e22a123e45bc3a064ce4fbe27f45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1
d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:14Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:14 crc kubenswrapper[4830]: I0227 16:08:14.556339 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92b20f888e852fcbe89e6d5c685b4b0fad411956bb3dc3c5ba0e10d6b206e131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-27T16:08:14Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:14 crc kubenswrapper[4830]: I0227 16:08:14.569507 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:14 crc kubenswrapper[4830]: I0227 16:08:14.569562 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:14 crc kubenswrapper[4830]: I0227 16:08:14.569578 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:14 crc kubenswrapper[4830]: I0227 16:08:14.569603 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:14 crc kubenswrapper[4830]: I0227 16:08:14.569621 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:14Z","lastTransitionTime":"2026-02-27T16:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:14 crc kubenswrapper[4830]: I0227 16:08:14.584970 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bf9lh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3371981f3cdb5468ef6e9038108b3f9d884cec2f2db2f2754a3b5bf908d0372\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://273a7927f1c00c51d8c6157c6050f697430275ea8b1cac13ebe621d6556b4fef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a05759873cd3a2fb58756795e02d8247af4d8204388bf669ddc7cb1a8fbd7baa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc95e89196ccb52580164d57b311b57b2bcc7e666c3801f53378c59446c0e73f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:05Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05614397aaaa68f352f4263160c3463ce65513156517137bf09a3b5deb505c05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41cda3a7a261313b65434f3e8b885a9628d71f9c9a9c55f3e559226341c8aa41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61c52728c46cb073f23486d3a125c94f84cd97c41e10035ef21076655896a057\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61c52728c46cb073f23486d3a125c94f84cd97c41e10035ef21076655896a057\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-27T16:08:13Z\\\",\\\"message\\\":\\\"762 6344 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0227 16:08:13.738893 6344 reflector.go:311] 
Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0227 16:08:13.738931 6344 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0227 16:08:13.738770 6344 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0227 16:08:13.739140 6344 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0227 16:08:13.739162 6344 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0227 16:08:13.739168 6344 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0227 16:08:13.739194 6344 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0227 16:08:13.739226 6344 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0227 16:08:13.739239 6344 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0227 16:08:13.739249 6344 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0227 16:08:13.739259 6344 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0227 16:08:13.739479 6344 factory.go:656] Stopping 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kub
e-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c9e139c0599bb81726ae3f42b0d9e6ee394e3d6641eb356cf5c63fe9260f4c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45e519054da5e98ba4be7deaac7911617f787b44d308bda8d4ee6fb7573336a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45e519054da5e98ba4be7deaac7911617f787b44d308bda8d4ee6fb7573336a2\\\
",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bf9lh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:14Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:14 crc kubenswrapper[4830]: I0227 16:08:14.601103 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00d6b7ce-4757-4275-8345-60c1b546ce25\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58864ba639422794b99f6190c7ab9a81537913f51574220acb3838f97b4f9421\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqztt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d403fb60beeb11f3cb72e7ca134b83a9c519375
fbb6727d2070118a6b924516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqztt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2tv5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:14Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:14 crc kubenswrapper[4830]: I0227 16:08:14.619779 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gqgb6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd5a4c5b-2008-4354-b26e-8763a631e55c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32024ba3e19bc33cc8197b4dd6fdf165f6de8fd7c3b1e85b2f4768c75c50736a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87p77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14a90f11c3291e6bde2a008f2e8e617c9c8db
36ee65c3bbf26112bf8707d7cd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87p77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gqgb6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:14Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:14 crc kubenswrapper[4830]: I0227 16:08:14.637805 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:14Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:14 crc kubenswrapper[4830]: I0227 16:08:14.672478 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:14 crc kubenswrapper[4830]: I0227 16:08:14.672534 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:14 crc kubenswrapper[4830]: I0227 16:08:14.672550 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:14 crc kubenswrapper[4830]: I0227 16:08:14.672572 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:14 crc kubenswrapper[4830]: I0227 16:08:14.672588 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:14Z","lastTransitionTime":"2026-02-27T16:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:08:14 crc kubenswrapper[4830]: I0227 16:08:14.762117 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 16:08:14 crc kubenswrapper[4830]: E0227 16:08:14.762321 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 16:08:14 crc kubenswrapper[4830]: I0227 16:08:14.762451 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 16:08:14 crc kubenswrapper[4830]: E0227 16:08:14.762583 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 16:08:14 crc kubenswrapper[4830]: I0227 16:08:14.763206 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 16:08:14 crc kubenswrapper[4830]: E0227 16:08:14.763383 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 16:08:14 crc kubenswrapper[4830]: I0227 16:08:14.763604 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kgdlg" Feb 27 16:08:14 crc kubenswrapper[4830]: E0227 16:08:14.763760 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kgdlg" podUID="6ba2fe32-66e0-4bcd-a646-9d07c9a21c54" Feb 27 16:08:14 crc kubenswrapper[4830]: I0227 16:08:14.775892 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:14 crc kubenswrapper[4830]: I0227 16:08:14.776021 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:14 crc kubenswrapper[4830]: I0227 16:08:14.776044 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:14 crc kubenswrapper[4830]: I0227 16:08:14.776073 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:14 crc kubenswrapper[4830]: I0227 16:08:14.776094 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:14Z","lastTransitionTime":"2026-02-27T16:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:14 crc kubenswrapper[4830]: I0227 16:08:14.787043 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rgv8f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"672682a0-e75f-4d6c-b4f2-542944327497\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5efac20f84ec097b144c4cb152a6eba826b5eac50e222e2618b75bf163cbe93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b56418e015eaf4499e3e5e8ade43a2e4fb4803d5621b0db3fb365dd185d375f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b56418e015eaf4499e3e5e8ade43a2e4fb4803d5621b0db3fb365dd185d375f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://441562bcb0ff6663b1affd84ae53348e082907782a89293730d0d076e9e84d9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://441562bcb0ff6663b1affd84ae53348e082907782a89293730d0d076e9e84d9d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be1e4280634a32e892b1dbc9bb4034909146da92eb6b01aa69bb7208447a7394\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be1e4280634a32e892b1dbc9bb4034909146da92eb6b01aa69bb7208447a7394\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d37bf60056d482e28c3252074c7ae82c3bd9884ba28af16b639c21795c9e3f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d37bf60056d482e28c3252074c7ae82c3bd9884ba28af16b639c21795c9e3f3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f343d2da5177c14f48359b1377e123308e3686e4adfba0407940fe1368592548\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f343d2da5177c14f48359b1377e123308e3686e4adfba0407940fe1368592548\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c27a1b90defc90e0a69236bb9c462d6e5fd36e6500f0f4c720ccd960b0d3ee42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c27a1b90defc90e0a69236bb9c462d6e5fd36e6500f0f4c720ccd960b0d3ee42\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rgv8f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:14Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:14 crc kubenswrapper[4830]: I0227 16:08:14.802731 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fcddf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6adbc0c4-e467-41f1-9190-d0dd3693eba6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2a66caf75365a8658dd7422b2f715e7f095b5bd7d96c8a0a96ce1a80cb99849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zrsv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fcddf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:14Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:14 crc kubenswrapper[4830]: I0227 16:08:14.822742 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:14Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:14 crc kubenswrapper[4830]: I0227 16:08:14.843564 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:14Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:14 crc kubenswrapper[4830]: I0227 16:08:14.865146 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fsrq9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb72b0f7-1d22-4d13-9653-b1607aa2235d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b684822980dfea25ae15f46337d198cbbb8656b55c38874d917c2fae68431aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwzp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fsrq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:14Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:14 crc kubenswrapper[4830]: I0227 16:08:14.880346 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:14 crc 
kubenswrapper[4830]: I0227 16:08:14.880425 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:14 crc kubenswrapper[4830]: I0227 16:08:14.880450 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:14 crc kubenswrapper[4830]: I0227 16:08:14.880479 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:14 crc kubenswrapper[4830]: I0227 16:08:14.880502 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:14Z","lastTransitionTime":"2026-02-27T16:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:08:14 crc kubenswrapper[4830]: I0227 16:08:14.888351 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4995a66c4bc6d195947f67ef564aa18c7fe145e12067c2fca8e0de6b76b72f19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df5245f048370e7455a63a6ca01bffb8e514e22a123e45bc3a064ce4fbe27f45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:14Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:14 crc kubenswrapper[4830]: I0227 16:08:14.907200 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92b20f888e852fcbe89e6d5c685b4b0fad411956bb3dc3c5ba0e10d6b206e131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-27T16:08:14Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:14 crc kubenswrapper[4830]: I0227 16:08:14.941741 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bf9lh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3371981f3cdb5468ef6e9038108b3f9d884cec2f2db2f2754a3b5bf908d0372\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://273a7927f1c00c51d8c6157c6050f697430275ea8b1cac13ebe621d6556b4fef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a05759873cd3a2fb58756795e02d8247af4d8204388bf669ddc7cb1a8fbd7baa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc95e89196ccb52580164d57b311b57b2bcc7e666c3801f53378c59446c0e73f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:05Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05614397aaaa68f352f4263160c3463ce65513156517137bf09a3b5deb505c05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41cda3a7a261313b65434f3e8b885a9628d71f9c9a9c55f3e559226341c8aa41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61c52728c46cb073f23486d3a125c94f84cd97c41e10035ef21076655896a057\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61c52728c46cb073f23486d3a125c94f84cd97c41e10035ef21076655896a057\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-27T16:08:13Z\\\",\\\"message\\\":\\\"762 6344 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0227 16:08:13.738893 6344 reflector.go:311] 
Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0227 16:08:13.738931 6344 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0227 16:08:13.738770 6344 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0227 16:08:13.739140 6344 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0227 16:08:13.739162 6344 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0227 16:08:13.739168 6344 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0227 16:08:13.739194 6344 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0227 16:08:13.739226 6344 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0227 16:08:13.739239 6344 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0227 16:08:13.739249 6344 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0227 16:08:13.739259 6344 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0227 16:08:13.739479 6344 factory.go:656] Stopping 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kub
e-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c9e139c0599bb81726ae3f42b0d9e6ee394e3d6641eb356cf5c63fe9260f4c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45e519054da5e98ba4be7deaac7911617f787b44d308bda8d4ee6fb7573336a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45e519054da5e98ba4be7deaac7911617f787b44d308bda8d4ee6fb7573336a2\\\
",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bf9lh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:14Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:14 crc kubenswrapper[4830]: I0227 16:08:14.962523 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00d6b7ce-4757-4275-8345-60c1b546ce25\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58864ba639422794b99f6190c7ab9a81537913f51574220acb3838f97b4f9421\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqztt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d403fb60beeb11f3cb72e7ca134b83a9c519375
fbb6727d2070118a6b924516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqztt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2tv5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:14Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:14 crc kubenswrapper[4830]: I0227 16:08:14.980612 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gqgb6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd5a4c5b-2008-4354-b26e-8763a631e55c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32024ba3e19bc33cc8197b4dd6fdf165f6de8fd7c3b1e85b2f4768c75c50736a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87p77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14a90f11c3291e6bde2a008f2e8e617c9c8db
36ee65c3bbf26112bf8707d7cd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87p77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gqgb6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:14Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:14 crc kubenswrapper[4830]: I0227 16:08:14.983462 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:14 crc kubenswrapper[4830]: I0227 16:08:14.983504 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:14 crc kubenswrapper[4830]: I0227 16:08:14.983521 4830 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:14 crc kubenswrapper[4830]: I0227 16:08:14.983545 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:14 crc kubenswrapper[4830]: I0227 16:08:14.983562 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:14Z","lastTransitionTime":"2026-02-27T16:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:08:15 crc kubenswrapper[4830]: I0227 16:08:15.000457 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:14Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:15 crc kubenswrapper[4830]: I0227 16:08:15.021659 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d20f886-cfdb-48c7-9754-6b7255b1124f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a3f71ef2ccab58baa94377c8f63ded83ca5f73a4d7a24d754719670685c55a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec37d31a82aedaacdf47fa540d508e6ca53626d082c45933c11a72f41a819a8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6cb8e7a94967cd24e883272ba2f923a3adae43cc6105ea7506c1fba144de241\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://acf80f37f1e38e051b10c7a1c0eeef8f0c3db7fc0e5653c54d82da7956a8eed2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acf80f37f1e38e051b10c7a1c0eeef8f0c3db7fc0e5653c54d82da7956a8eed2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T16:07:52Z\\\",\\\"message\\\":\\\"g file observer\\\\nW0227 16:07:52.525113 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 16:07:52.525413 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 16:07:52.526578 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-70351561/tls.crt::/tmp/serving-cert-70351561/tls.key\\\\\\\"\\\\nI0227 16:07:52.790383 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 16:07:52.794547 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 16:07:52.794587 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 16:07:52.794627 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 16:07:52.794638 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 16:07:52.801197 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0227 16:07:52.801263 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 16:07:52.801293 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0227 16:07:52.801246 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0227 16:07:52.801325 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 
16:07:52.801373 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 16:07:52.801389 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 16:07:52.801396 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0227 16:07:52.804476 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T16:07:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97ffad51412be8330198e944d4f687380d5066b777b54e362e8f8b772b808c89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1fc7975111dde3841671f046742eea3d0eefcb040eee8aadb99a7ca8eba8bd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1fc7975111dde3841671f046742eea3d0eefcb040eee8aadb99a7ca8eba8bd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:06:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:15Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:15 crc kubenswrapper[4830]: I0227 16:08:15.037568 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p7298" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"616ebd42-6bbe-4536-ba35-f8b07f2f11b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a08121a78952c85326d07974c700397678777995d73cce9f24ace8932b3e9b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x9g9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p7298\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:15Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:15 crc kubenswrapper[4830]: I0227 16:08:15.051746 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kgdlg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ba2fe32-66e0-4bcd-a646-9d07c9a21c54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9l8vg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9l8vg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kgdlg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:15Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:15 crc 
kubenswrapper[4830]: I0227 16:08:15.065665 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f62df11-40bd-4531-baa9-5b7ab5679a67\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5046080bbc4855769fb50909ff06b8c2cfd2438a0b65749c89302a4e97342980\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\
":\\\"cri-o://97f85a479701374168505eed2e5a64ea144d1c48dce2022d60cb6ebaebfe159f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97f85a479701374168505eed2e5a64ea144d1c48dce2022d60cb6ebaebfe159f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:06:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:15Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:15 crc kubenswrapper[4830]: I0227 16:08:15.082729 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eebea1ec0fd24a7664f474db4e4cdb08e53c5c742c392900599dc3e7adcbee2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:15Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:15 crc kubenswrapper[4830]: I0227 16:08:15.085436 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:15 crc kubenswrapper[4830]: I0227 16:08:15.085486 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:15 crc kubenswrapper[4830]: I0227 16:08:15.085503 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:15 crc kubenswrapper[4830]: I0227 16:08:15.085527 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:15 crc kubenswrapper[4830]: I0227 16:08:15.085544 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:15Z","lastTransitionTime":"2026-02-27T16:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:15 crc kubenswrapper[4830]: I0227 16:08:15.187994 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:15 crc kubenswrapper[4830]: I0227 16:08:15.188055 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:15 crc kubenswrapper[4830]: I0227 16:08:15.188074 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:15 crc kubenswrapper[4830]: I0227 16:08:15.188101 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:15 crc kubenswrapper[4830]: I0227 16:08:15.188119 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:15Z","lastTransitionTime":"2026-02-27T16:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:15 crc kubenswrapper[4830]: I0227 16:08:15.290547 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:15 crc kubenswrapper[4830]: I0227 16:08:15.290579 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:15 crc kubenswrapper[4830]: I0227 16:08:15.290588 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:15 crc kubenswrapper[4830]: I0227 16:08:15.290601 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:15 crc kubenswrapper[4830]: I0227 16:08:15.290611 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:15Z","lastTransitionTime":"2026-02-27T16:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:15 crc kubenswrapper[4830]: I0227 16:08:15.336495 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bf9lh_2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904/ovnkube-controller/0.log" Feb 27 16:08:15 crc kubenswrapper[4830]: I0227 16:08:15.340540 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bf9lh" event={"ID":"2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904","Type":"ContainerStarted","Data":"699248f4512fa4743fb7f9da9ba7ab978d62198587e74a1e69ad4e60ebaa28f5"} Feb 27 16:08:15 crc kubenswrapper[4830]: I0227 16:08:15.393106 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:15 crc kubenswrapper[4830]: I0227 16:08:15.393149 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:15 crc kubenswrapper[4830]: I0227 16:08:15.393160 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:15 crc kubenswrapper[4830]: I0227 16:08:15.393178 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:15 crc kubenswrapper[4830]: I0227 16:08:15.393191 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:15Z","lastTransitionTime":"2026-02-27T16:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:15 crc kubenswrapper[4830]: I0227 16:08:15.496287 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:15 crc kubenswrapper[4830]: I0227 16:08:15.496339 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:15 crc kubenswrapper[4830]: I0227 16:08:15.496354 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:15 crc kubenswrapper[4830]: I0227 16:08:15.496377 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:15 crc kubenswrapper[4830]: I0227 16:08:15.496395 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:15Z","lastTransitionTime":"2026-02-27T16:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:15 crc kubenswrapper[4830]: I0227 16:08:15.599756 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:15 crc kubenswrapper[4830]: I0227 16:08:15.599841 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:15 crc kubenswrapper[4830]: I0227 16:08:15.599862 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:15 crc kubenswrapper[4830]: I0227 16:08:15.599910 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:15 crc kubenswrapper[4830]: I0227 16:08:15.599933 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:15Z","lastTransitionTime":"2026-02-27T16:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:15 crc kubenswrapper[4830]: I0227 16:08:15.686448 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:15 crc kubenswrapper[4830]: I0227 16:08:15.686503 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:15 crc kubenswrapper[4830]: I0227 16:08:15.686521 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:15 crc kubenswrapper[4830]: I0227 16:08:15.686545 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:15 crc kubenswrapper[4830]: I0227 16:08:15.686563 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:15Z","lastTransitionTime":"2026-02-27T16:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:15 crc kubenswrapper[4830]: E0227 16:08:15.700794 4830 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:08:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:08:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:08:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:08:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"058e4d33-3c10-460a-8f66-1f2272cb9956\\\",\\\"systemUUID\\\":\\\"1d4a94de-760c-40e1-8054-66d250f336ee\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:15Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:15 crc kubenswrapper[4830]: I0227 16:08:15.704853 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:15 crc kubenswrapper[4830]: I0227 16:08:15.704902 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:15 crc kubenswrapper[4830]: I0227 16:08:15.704918 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:15 crc kubenswrapper[4830]: I0227 16:08:15.704939 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:15 crc kubenswrapper[4830]: I0227 16:08:15.704979 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:15Z","lastTransitionTime":"2026-02-27T16:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:15 crc kubenswrapper[4830]: E0227 16:08:15.721061 4830 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:08:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:08:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:08:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:08:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"058e4d33-3c10-460a-8f66-1f2272cb9956\\\",\\\"systemUUID\\\":\\\"1d4a94de-760c-40e1-8054-66d250f336ee\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:15Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:15 crc kubenswrapper[4830]: I0227 16:08:15.725702 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:15 crc kubenswrapper[4830]: I0227 16:08:15.725747 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:15 crc kubenswrapper[4830]: I0227 16:08:15.725757 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:15 crc kubenswrapper[4830]: I0227 16:08:15.725773 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:15 crc kubenswrapper[4830]: I0227 16:08:15.725784 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:15Z","lastTransitionTime":"2026-02-27T16:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:15 crc kubenswrapper[4830]: E0227 16:08:15.737644 4830 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:08:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:08:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:08:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:08:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"058e4d33-3c10-460a-8f66-1f2272cb9956\\\",\\\"systemUUID\\\":\\\"1d4a94de-760c-40e1-8054-66d250f336ee\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:15Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:15 crc kubenswrapper[4830]: I0227 16:08:15.741711 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:15 crc kubenswrapper[4830]: I0227 16:08:15.741780 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:15 crc kubenswrapper[4830]: I0227 16:08:15.741801 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:15 crc kubenswrapper[4830]: I0227 16:08:15.741827 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:15 crc kubenswrapper[4830]: I0227 16:08:15.741845 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:15Z","lastTransitionTime":"2026-02-27T16:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:15 crc kubenswrapper[4830]: E0227 16:08:15.764896 4830 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:08:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:08:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:08:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:08:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"058e4d33-3c10-460a-8f66-1f2272cb9956\\\",\\\"systemUUID\\\":\\\"1d4a94de-760c-40e1-8054-66d250f336ee\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:15Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:15 crc kubenswrapper[4830]: I0227 16:08:15.770136 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:15 crc kubenswrapper[4830]: I0227 16:08:15.770201 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:15 crc kubenswrapper[4830]: I0227 16:08:15.770219 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:15 crc kubenswrapper[4830]: I0227 16:08:15.770245 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:15 crc kubenswrapper[4830]: I0227 16:08:15.770263 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:15Z","lastTransitionTime":"2026-02-27T16:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:15 crc kubenswrapper[4830]: E0227 16:08:15.787659 4830 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:08:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:08:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:08:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:08:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"058e4d33-3c10-460a-8f66-1f2272cb9956\\\",\\\"systemUUID\\\":\\\"1d4a94de-760c-40e1-8054-66d250f336ee\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:15Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:15 crc kubenswrapper[4830]: E0227 16:08:15.787891 4830 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 27 16:08:15 crc kubenswrapper[4830]: I0227 16:08:15.790397 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:15 crc kubenswrapper[4830]: I0227 16:08:15.790453 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:15 crc kubenswrapper[4830]: I0227 16:08:15.790468 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:15 crc kubenswrapper[4830]: I0227 16:08:15.790490 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:15 crc kubenswrapper[4830]: I0227 16:08:15.790504 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:15Z","lastTransitionTime":"2026-02-27T16:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:15 crc kubenswrapper[4830]: I0227 16:08:15.893281 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:15 crc kubenswrapper[4830]: I0227 16:08:15.893347 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:15 crc kubenswrapper[4830]: I0227 16:08:15.893369 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:15 crc kubenswrapper[4830]: I0227 16:08:15.893399 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:15 crc kubenswrapper[4830]: I0227 16:08:15.893421 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:15Z","lastTransitionTime":"2026-02-27T16:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:15 crc kubenswrapper[4830]: I0227 16:08:15.996171 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:15 crc kubenswrapper[4830]: I0227 16:08:15.996201 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:15 crc kubenswrapper[4830]: I0227 16:08:15.996209 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:15 crc kubenswrapper[4830]: I0227 16:08:15.996223 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:15 crc kubenswrapper[4830]: I0227 16:08:15.996231 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:15Z","lastTransitionTime":"2026-02-27T16:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:16 crc kubenswrapper[4830]: I0227 16:08:16.098326 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:16 crc kubenswrapper[4830]: I0227 16:08:16.098354 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:16 crc kubenswrapper[4830]: I0227 16:08:16.098362 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:16 crc kubenswrapper[4830]: I0227 16:08:16.098375 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:16 crc kubenswrapper[4830]: I0227 16:08:16.098384 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:16Z","lastTransitionTime":"2026-02-27T16:08:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:16 crc kubenswrapper[4830]: I0227 16:08:16.200347 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:16 crc kubenswrapper[4830]: I0227 16:08:16.200405 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:16 crc kubenswrapper[4830]: I0227 16:08:16.200421 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:16 crc kubenswrapper[4830]: I0227 16:08:16.200446 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:16 crc kubenswrapper[4830]: I0227 16:08:16.200466 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:16Z","lastTransitionTime":"2026-02-27T16:08:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:16 crc kubenswrapper[4830]: I0227 16:08:16.303162 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:16 crc kubenswrapper[4830]: I0227 16:08:16.303261 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:16 crc kubenswrapper[4830]: I0227 16:08:16.303285 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:16 crc kubenswrapper[4830]: I0227 16:08:16.303315 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:16 crc kubenswrapper[4830]: I0227 16:08:16.303339 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:16Z","lastTransitionTime":"2026-02-27T16:08:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:16 crc kubenswrapper[4830]: I0227 16:08:16.343802 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-bf9lh" Feb 27 16:08:16 crc kubenswrapper[4830]: I0227 16:08:16.367414 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d20f886-cfdb-48c7-9754-6b7255b1124f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a3f71ef2ccab58baa94377c8f63ded83ca5f73a4d7a24d754719670685c55a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec37d31a82aedaacdf47fa540d508e6ca53626d082c45933c11a72f41a819a8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b6cb8e7a94967cd24e883272ba2f923a3adae43cc6105ea7506c1fba144de241\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://acf80f37f1e38e051b10c7a1c0eeef8f0c3db7fc0e5653c54d82da7956a8eed2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acf80f37f1e38e051b10c7a1c0eeef8f0c3db7fc0e5653c54d82da7956a8eed2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T16:07:52Z\\\",\\\"message\\\":\\\"g file observer\\\\nW0227 16:07:52.525113 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 16:07:52.525413 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 16:07:52.526578 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-70351561/tls.crt::/tmp/serving-cert-70351561/tls.key\\\\\\\"\\\\nI0227 16:07:52.790383 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 16:07:52.794547 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 16:07:52.794587 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 16:07:52.794627 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 16:07:52.794638 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 16:07:52.801197 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0227 16:07:52.801263 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 16:07:52.801293 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0227 16:07:52.801246 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0227 16:07:52.801325 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 16:07:52.801373 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 16:07:52.801389 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 16:07:52.801396 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0227 16:07:52.804476 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T16:07:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97ffad51412be8330198e944d4f687380d5066b777b54e362e8f8b772b808c89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1fc7975111dde3841671f046742eea3d0eefcb040eee8aadb99a7ca8eba8bd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1fc7975111dde3841671f046742eea3d0eefcb040eee8aadb99a7ca8eba8bd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:06:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:16Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:16 crc kubenswrapper[4830]: I0227 16:08:16.381702 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f62df11-40bd-4531-baa9-5b7ab5679a67\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5046080bbc4855769fb50909ff06b8c2cfd2438a0b65749c89302a4e97342980\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2
42b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97f85a479701374168505eed2e5a64ea144d1c48dce2022d60cb6ebaebfe159f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97f85a479701374168505eed2e5a64ea144d1c48dce2022d60cb6ebaebfe159f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:06:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:16Z is after 
2025-08-24T17:21:41Z" Feb 27 16:08:16 crc kubenswrapper[4830]: I0227 16:08:16.399666 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eebea1ec0fd24a7664f474db4e4cdb08e53c5c742c392900599dc3e7adcbee2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:16Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:16 crc kubenswrapper[4830]: I0227 16:08:16.405407 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:16 crc kubenswrapper[4830]: I0227 16:08:16.405447 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:16 crc kubenswrapper[4830]: I0227 16:08:16.405459 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:16 crc kubenswrapper[4830]: I0227 16:08:16.405475 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:16 crc kubenswrapper[4830]: I0227 16:08:16.405486 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:16Z","lastTransitionTime":"2026-02-27T16:08:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:16 crc kubenswrapper[4830]: I0227 16:08:16.417313 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p7298" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"616ebd42-6bbe-4536-ba35-f8b07f2f11b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a08121a78952c85326d07974c700397678777995d73cce9f24ace8932b3e9b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x9g9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p7298\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:16Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:16 crc kubenswrapper[4830]: I0227 16:08:16.429939 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kgdlg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ba2fe32-66e0-4bcd-a646-9d07c9a21c54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9l8vg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9l8vg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kgdlg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:16Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:16 crc kubenswrapper[4830]: I0227 16:08:16.446529 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:16Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:16 crc kubenswrapper[4830]: I0227 16:08:16.467215 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:16Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:16 crc kubenswrapper[4830]: I0227 16:08:16.491586 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fsrq9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb72b0f7-1d22-4d13-9653-b1607aa2235d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b684822980dfea25ae15f46337d198cbbb8656b55c38874d917c2fae68431aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwzp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fsrq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:16Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:16 crc kubenswrapper[4830]: I0227 16:08:16.508058 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:16 crc 
kubenswrapper[4830]: I0227 16:08:16.508120 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:16 crc kubenswrapper[4830]: I0227 16:08:16.508141 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:16 crc kubenswrapper[4830]: I0227 16:08:16.508169 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:16 crc kubenswrapper[4830]: I0227 16:08:16.508188 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:16Z","lastTransitionTime":"2026-02-27T16:08:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:08:16 crc kubenswrapper[4830]: I0227 16:08:16.513663 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rgv8f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"672682a0-e75f-4d6c-b4f2-542944327497\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5efac20f84ec097b144c4cb152a6eba826b5eac50e222e2618b75bf163cbe93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b56418e015eaf4499e3e5e8ade43a2e4fb4803d5621b0db3fb365dd185d375f4\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b56418e015eaf4499e3e5e8ade43a2e4fb4803d5621b0db3fb365dd185d375f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://441562bcb0ff6663b1affd84ae53348e082907782a89293730d0d076e9e84d9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://441562bcb0ff6663b1affd84ae53348e082907782a89293730d0d076e9e84d9d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:05Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be1e4280634a32e892b1dbc9bb4034909146da92eb6b01aa69bb7208447a7394\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be1e4280634a32e892b1dbc9bb4034909146da92eb6b01aa69bb7208447a7394\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d37bf
60056d482e28c3252074c7ae82c3bd9884ba28af16b639c21795c9e3f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d37bf60056d482e28c3252074c7ae82c3bd9884ba28af16b639c21795c9e3f3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f343d2da5177c14f48359b1377e123308e3686e4adfba0407940fe1368592548\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f343d2da5177c14f48359b1377e123308e3686e4adfba0407940fe1368592548\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:10Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c27a1b90defc90e0a69236bb9c462d6e5fd36e6500f0f4c720ccd960b0d3ee42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c27a1b90defc90e0a69236bb9c462d6e5fd36e6500f0f4c720ccd960b0d3ee42\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rgv8f\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:16Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:16 crc kubenswrapper[4830]: I0227 16:08:16.530519 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fcddf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6adbc0c4-e467-41f1-9190-d0dd3693eba6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2a66caf75365a8658dd7422b2f715e7f095b5bd7d96c8a0a96ce1a80cb99849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\"
:\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zrsv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fcddf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:16Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:16 crc kubenswrapper[4830]: I0227 16:08:16.571666 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bf9lh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3371981f3cdb5468ef6e9038108b3f9d884cec2f2db2f2754a3b5bf908d0372\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://273a7927f1c00c51d8c6157c6050f697430275ea8b1cac13ebe621d6556b4fef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a05759873cd3a2fb58756795e02d8247af4d8204388bf669ddc7cb1a8fbd7baa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc95e89196ccb52580164d57b311b57b2bcc7e666c3801f53378c59446c0e73f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05614397aaaa68f352f4263160c3463ce65513156517137bf09a3b5deb505c05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41cda3a7a261313b65434f3e8b885a9628d71f9c9a9c55f3e559226341c8aa41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://699248f4512fa4743fb7f9da9ba7ab978d62198587e74a1e69ad4e60ebaa28f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61c52728c46cb073f23486d3a125c94f84cd97c41e10035ef21076655896a057\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-27T16:08:13Z\\\",\\\"message\\\":\\\"762 6344 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) 
from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0227 16:08:13.738893 6344 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0227 16:08:13.738931 6344 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0227 16:08:13.738770 6344 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0227 16:08:13.739140 6344 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0227 16:08:13.739162 6344 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0227 16:08:13.739168 6344 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0227 16:08:13.739194 6344 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0227 16:08:13.739226 6344 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0227 16:08:13.739239 6344 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0227 16:08:13.739249 6344 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0227 16:08:13.739259 6344 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0227 16:08:13.739479 6344 factory.go:656] Stopping 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":
\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c9e139c0599bb81726ae3f42b0d9e6ee394e3d6641eb356cf5c63fe9260f4c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45e519054da5e98ba4be7deaac7911617f787b44d308bda8d4ee6fb7573336a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\"
:true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45e519054da5e98ba4be7deaac7911617f787b44d308bda8d4ee6fb7573336a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bf9lh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:16Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:16 crc kubenswrapper[4830]: I0227 16:08:16.590928 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00d6b7ce-4757-4275-8345-60c1b546ce25\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58864ba639422794b99f6190c7ab9a81537913f51574220acb3838f97b4f9421\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqztt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d403fb60beeb11f3cb72e7ca134b83a9c519375
fbb6727d2070118a6b924516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqztt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2tv5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:16Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:16 crc kubenswrapper[4830]: I0227 16:08:16.608103 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gqgb6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd5a4c5b-2008-4354-b26e-8763a631e55c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32024ba3e19bc33cc8197b4dd6fdf165f6de8fd7c3b1e85b2f4768c75c50736a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87p77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14a90f11c3291e6bde2a008f2e8e617c9c8db
36ee65c3bbf26112bf8707d7cd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87p77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gqgb6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:16Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:16 crc kubenswrapper[4830]: I0227 16:08:16.610815 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:16 crc kubenswrapper[4830]: I0227 16:08:16.610885 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:16 crc kubenswrapper[4830]: I0227 16:08:16.610902 4830 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:16 crc kubenswrapper[4830]: I0227 16:08:16.610927 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:16 crc kubenswrapper[4830]: I0227 16:08:16.610970 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:16Z","lastTransitionTime":"2026-02-27T16:08:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:08:16 crc kubenswrapper[4830]: I0227 16:08:16.626434 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:16Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:16 crc kubenswrapper[4830]: I0227 16:08:16.645370 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4995a66c4bc6d195947f67ef564aa18c7fe145e12067c2fca8e0de6b76b72f19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df5245f048370e7455a63a6ca01bffb8e514e22a123e45bc3a064ce4fbe27f45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:16Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:16 crc kubenswrapper[4830]: I0227 16:08:16.661968 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92b20f888e852fcbe89e6d5c685b4b0fad411956bb3dc3c5ba0e10d6b206e131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-27T16:08:16Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:16 crc kubenswrapper[4830]: I0227 16:08:16.714484 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:16 crc kubenswrapper[4830]: I0227 16:08:16.714548 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:16 crc kubenswrapper[4830]: I0227 16:08:16.714567 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:16 crc kubenswrapper[4830]: I0227 16:08:16.714591 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:16 crc kubenswrapper[4830]: I0227 16:08:16.714610 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:16Z","lastTransitionTime":"2026-02-27T16:08:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:08:16 crc kubenswrapper[4830]: I0227 16:08:16.761872 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 16:08:16 crc kubenswrapper[4830]: I0227 16:08:16.761920 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 16:08:16 crc kubenswrapper[4830]: I0227 16:08:16.761905 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-kgdlg" Feb 27 16:08:16 crc kubenswrapper[4830]: E0227 16:08:16.762095 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 16:08:16 crc kubenswrapper[4830]: I0227 16:08:16.762158 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 16:08:16 crc kubenswrapper[4830]: E0227 16:08:16.762259 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kgdlg" podUID="6ba2fe32-66e0-4bcd-a646-9d07c9a21c54" Feb 27 16:08:16 crc kubenswrapper[4830]: E0227 16:08:16.762386 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 16:08:16 crc kubenswrapper[4830]: E0227 16:08:16.762513 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 16:08:16 crc kubenswrapper[4830]: I0227 16:08:16.817280 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:16 crc kubenswrapper[4830]: I0227 16:08:16.817346 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:16 crc kubenswrapper[4830]: I0227 16:08:16.817363 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:16 crc kubenswrapper[4830]: I0227 16:08:16.817387 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:16 crc kubenswrapper[4830]: I0227 16:08:16.817406 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:16Z","lastTransitionTime":"2026-02-27T16:08:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:16 crc kubenswrapper[4830]: I0227 16:08:16.920546 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:16 crc kubenswrapper[4830]: I0227 16:08:16.920607 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:16 crc kubenswrapper[4830]: I0227 16:08:16.920624 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:16 crc kubenswrapper[4830]: I0227 16:08:16.920647 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:16 crc kubenswrapper[4830]: I0227 16:08:16.920667 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:16Z","lastTransitionTime":"2026-02-27T16:08:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:17 crc kubenswrapper[4830]: I0227 16:08:17.024573 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:17 crc kubenswrapper[4830]: I0227 16:08:17.024645 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:17 crc kubenswrapper[4830]: I0227 16:08:17.024662 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:17 crc kubenswrapper[4830]: I0227 16:08:17.024687 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:17 crc kubenswrapper[4830]: I0227 16:08:17.024705 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:17Z","lastTransitionTime":"2026-02-27T16:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:17 crc kubenswrapper[4830]: I0227 16:08:17.127635 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:17 crc kubenswrapper[4830]: I0227 16:08:17.127819 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:17 crc kubenswrapper[4830]: I0227 16:08:17.127838 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:17 crc kubenswrapper[4830]: I0227 16:08:17.127862 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:17 crc kubenswrapper[4830]: I0227 16:08:17.127879 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:17Z","lastTransitionTime":"2026-02-27T16:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:17 crc kubenswrapper[4830]: I0227 16:08:17.231024 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:17 crc kubenswrapper[4830]: I0227 16:08:17.231064 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:17 crc kubenswrapper[4830]: I0227 16:08:17.231074 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:17 crc kubenswrapper[4830]: I0227 16:08:17.231088 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:17 crc kubenswrapper[4830]: I0227 16:08:17.231099 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:17Z","lastTransitionTime":"2026-02-27T16:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:17 crc kubenswrapper[4830]: I0227 16:08:17.333660 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:17 crc kubenswrapper[4830]: I0227 16:08:17.333690 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:17 crc kubenswrapper[4830]: I0227 16:08:17.333698 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:17 crc kubenswrapper[4830]: I0227 16:08:17.333709 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:17 crc kubenswrapper[4830]: I0227 16:08:17.333717 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:17Z","lastTransitionTime":"2026-02-27T16:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:17 crc kubenswrapper[4830]: I0227 16:08:17.348335 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bf9lh_2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904/ovnkube-controller/1.log" Feb 27 16:08:17 crc kubenswrapper[4830]: I0227 16:08:17.349145 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bf9lh_2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904/ovnkube-controller/0.log" Feb 27 16:08:17 crc kubenswrapper[4830]: I0227 16:08:17.351750 4830 generic.go:334] "Generic (PLEG): container finished" podID="2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904" containerID="699248f4512fa4743fb7f9da9ba7ab978d62198587e74a1e69ad4e60ebaa28f5" exitCode=1 Feb 27 16:08:17 crc kubenswrapper[4830]: I0227 16:08:17.351782 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bf9lh" event={"ID":"2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904","Type":"ContainerDied","Data":"699248f4512fa4743fb7f9da9ba7ab978d62198587e74a1e69ad4e60ebaa28f5"} Feb 27 16:08:17 crc kubenswrapper[4830]: I0227 16:08:17.351808 4830 scope.go:117] "RemoveContainer" containerID="61c52728c46cb073f23486d3a125c94f84cd97c41e10035ef21076655896a057" Feb 27 16:08:17 crc kubenswrapper[4830]: I0227 16:08:17.352764 4830 scope.go:117] "RemoveContainer" containerID="699248f4512fa4743fb7f9da9ba7ab978d62198587e74a1e69ad4e60ebaa28f5" Feb 27 16:08:17 crc kubenswrapper[4830]: E0227 16:08:17.353016 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-bf9lh_openshift-ovn-kubernetes(2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904)\"" pod="openshift-ovn-kubernetes/ovnkube-node-bf9lh" podUID="2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904" Feb 27 16:08:17 crc kubenswrapper[4830]: I0227 16:08:17.365088 4830 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d20f886-cfdb-48c7-9754-6b7255b1124f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a3f71ef2ccab58baa94377c8f63ded83ca5f73a4d7a24d754719670685c55a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-po
d-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec37d31a82aedaacdf47fa540d508e6ca53626d082c45933c11a72f41a819a8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6cb8e7a94967cd24e883272ba2f923a3adae43cc6105ea7506c1fba144de241\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://acf80f37f1e38e051b10c7a1c0eeef8f0c3db7fc0e5653c54d82da7956a8eed2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"ima
geID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acf80f37f1e38e051b10c7a1c0eeef8f0c3db7fc0e5653c54d82da7956a8eed2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T16:07:52Z\\\",\\\"message\\\":\\\"g file observer\\\\nW0227 16:07:52.525113 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 16:07:52.525413 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 16:07:52.526578 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-70351561/tls.crt::/tmp/serving-cert-70351561/tls.key\\\\\\\"\\\\nI0227 16:07:52.790383 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 16:07:52.794547 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 16:07:52.794587 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 16:07:52.794627 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 16:07:52.794638 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 16:07:52.801197 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0227 16:07:52.801263 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 16:07:52.801293 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0227 16:07:52.801246 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0227 16:07:52.801325 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 16:07:52.801373 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 16:07:52.801389 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 16:07:52.801396 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0227 16:07:52.804476 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T16:07:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97ffad51412be8330198e944d4f687380d5066b777b54e362e8f8b772b808c89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1fc7975111dde384
1671f046742eea3d0eefcb040eee8aadb99a7ca8eba8bd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1fc7975111dde3841671f046742eea3d0eefcb040eee8aadb99a7ca8eba8bd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:06:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:17Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:17 crc kubenswrapper[4830]: I0227 16:08:17.374689 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f62df11-40bd-4531-baa9-5b7ab5679a67\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5046080bbc4855769fb50909ff06b8c2cfd2438a0b65749c89302a4e97342980\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97f85a479701374168505eed2e5a64ea144d1c48dce2022d60cb6ebaebfe159f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97f85a479701374168505eed2e5a64ea144d1c48dce2022d60cb6ebaebfe159f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:06:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:17Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:17 crc kubenswrapper[4830]: I0227 16:08:17.390514 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eebea1ec0fd24a7664f474db4e4cdb08e53c5c742c392900599dc3e7adcbee2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:17Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:17 crc kubenswrapper[4830]: I0227 16:08:17.405596 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p7298" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"616ebd42-6bbe-4536-ba35-f8b07f2f11b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a08121a78952c85326d07974c700397678777995d73cce9f24ace8932b3e9b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\
",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x9g9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p7298\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:17Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:17 crc kubenswrapper[4830]: I0227 16:08:17.420864 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kgdlg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ba2fe32-66e0-4bcd-a646-9d07c9a21c54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9l8vg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9l8vg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kgdlg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:17Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:17 crc kubenswrapper[4830]: I0227 16:08:17.436972 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:17 crc kubenswrapper[4830]: I0227 16:08:17.437035 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:17 crc kubenswrapper[4830]: I0227 16:08:17.437053 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:17 crc kubenswrapper[4830]: I0227 16:08:17.437079 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:17 crc kubenswrapper[4830]: I0227 16:08:17.437097 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:17Z","lastTransitionTime":"2026-02-27T16:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:17 crc kubenswrapper[4830]: I0227 16:08:17.440183 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:17Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:17 crc kubenswrapper[4830]: I0227 16:08:17.458828 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:17Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:17 crc kubenswrapper[4830]: I0227 16:08:17.478733 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fsrq9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb72b0f7-1d22-4d13-9653-b1607aa2235d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b684822980dfea25ae15f46337d198cbbb8656b55c38874d917c2fae68431aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwzp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fsrq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:17Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:17 crc kubenswrapper[4830]: I0227 16:08:17.497703 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rgv8f" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"672682a0-e75f-4d6c-b4f2-542944327497\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5efac20f84ec097b144c4cb152a6eba826b5eac50e222e2618b75bf163cbe93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b56418e015eaf4499e3e5e8ade43a2e4fb4803d5621b0db3f
b365dd185d375f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b56418e015eaf4499e3e5e8ade43a2e4fb4803d5621b0db3fb365dd185d375f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://441562bcb0ff6663b1affd84ae53348e082907782a89293730d0d076e9e84d9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://441562bcb0ff6663b1affd84ae53348e082907782a89293730d0d076e9e84d9d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:
08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be1e4280634a32e892b1dbc9bb4034909146da92eb6b01aa69bb7208447a7394\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be1e4280634a32e892b1dbc9bb4034909146da92eb6b01aa69bb7208447a7394\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://d37bf60056d482e28c3252074c7ae82c3bd9884ba28af16b639c21795c9e3f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d37bf60056d482e28c3252074c7ae82c3bd9884ba28af16b639c21795c9e3f3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f343d2da5177c14f48359b1377e123308e3686e4adfba0407940fe1368592548\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f343d2da5177c14f48359b1377e123308e3686e4adfba0407940fe1368592548\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:10Z\\\",\\\"r
eason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c27a1b90defc90e0a69236bb9c462d6e5fd36e6500f0f4c720ccd960b0d3ee42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c27a1b90defc90e0a69236bb9c462d6e5fd36e6500f0f4c720ccd960b0d3ee42\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rgv8f\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:17Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:17 crc kubenswrapper[4830]: I0227 16:08:17.512399 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fcddf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6adbc0c4-e467-41f1-9190-d0dd3693eba6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2a66caf75365a8658dd7422b2f715e7f095b5bd7d96c8a0a96ce1a80cb99849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{
\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zrsv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fcddf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:17Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:17 crc kubenswrapper[4830]: I0227 16:08:17.528804 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00d6b7ce-4757-4275-8345-60c1b546ce25\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58864ba639422794b99f6190c7ab9a81537913f51574220acb3838f97b4f9421\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqztt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d403fb60beeb11f3cb72e7ca134b83a9c519375
fbb6727d2070118a6b924516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqztt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2tv5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:17Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:17 crc kubenswrapper[4830]: I0227 16:08:17.539427 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:17 crc kubenswrapper[4830]: I0227 16:08:17.539488 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:17 crc kubenswrapper[4830]: I0227 16:08:17.539511 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:17 crc 
kubenswrapper[4830]: I0227 16:08:17.539541 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:17 crc kubenswrapper[4830]: I0227 16:08:17.539565 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:17Z","lastTransitionTime":"2026-02-27T16:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:08:17 crc kubenswrapper[4830]: I0227 16:08:17.545390 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gqgb6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd5a4c5b-2008-4354-b26e-8763a631e55c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32024ba3e19bc33cc8197b4dd6fdf165f6de8fd7c3b1e85b2f4768c75c50736a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87p77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14a90f11c3291e6bde2a008f2e8e617c9c8db36ee65c3bbf26112bf8707d7cd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87p77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27
T16:08:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gqgb6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:17Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:17 crc kubenswrapper[4830]: I0227 16:08:17.566925 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:17Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:17 crc kubenswrapper[4830]: I0227 16:08:17.584699 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4995a66c4bc6d195947f67ef564aa18c7fe145e12067c2fca8e0de6b76b72f19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df5245f048370e7455a63a6ca01bffb8e514e22a123e45bc3a064ce4fbe27f45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:17Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:17 crc kubenswrapper[4830]: I0227 16:08:17.602699 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92b20f888e852fcbe89e6d5c685b4b0fad411956bb3dc3c5ba0e10d6b206e131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-27T16:08:17Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:17 crc kubenswrapper[4830]: I0227 16:08:17.633444 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bf9lh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3371981f3cdb5468ef6e9038108b3f9d884cec2f2db2f2754a3b5bf908d0372\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://273a7927f1c00c51d8c6157c6050f697430275ea8b1cac13ebe621d6556b4fef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a05759873cd3a2fb58756795e02d8247af4d8204388bf669ddc7cb1a8fbd7baa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc95e89196ccb52580164d57b311b57b2bcc7e666c3801f53378c59446c0e73f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:05Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05614397aaaa68f352f4263160c3463ce65513156517137bf09a3b5deb505c05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41cda3a7a261313b65434f3e8b885a9628d71f9c9a9c55f3e559226341c8aa41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://699248f4512fa4743fb7f9da9ba7ab978d62198587e74a1e69ad4e60ebaa28f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61c52728c46cb073f23486d3a125c94f84cd97c41e10035ef21076655896a057\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-27T16:08:13Z\\\",\\\"message\\\":\\\"762 6344 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0227 16:08:13.738893 6344 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0227 16:08:13.738931 6344 
reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0227 16:08:13.738770 6344 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0227 16:08:13.739140 6344 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0227 16:08:13.739162 6344 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0227 16:08:13.739168 6344 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0227 16:08:13.739194 6344 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0227 16:08:13.739226 6344 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0227 16:08:13.739239 6344 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0227 16:08:13.739249 6344 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0227 16:08:13.739259 6344 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0227 16:08:13.739479 6344 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://699248f4512fa4743fb7f9da9ba7ab978d62198587e74a1e69ad4e60ebaa28f5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-27T16:08:16Z\\\",\\\"message\\\":\\\"mns:[] Mutations:[{Column:policies Mutator:insert Value:{GoSet:[{GoUUID:a5a72d02-1a0f-4f7f-a8c5-6923a1c4274a}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f6d604c1-9711-4e25-be6c-79ec28bbad1b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0227 16:08:16.580502 6760 address_set.go:302] New(0d39bc5c-d5b9-432c-81be-2275bce5d7aa/default-network-controller:EgressIP:node-ips:v4:default/a712973235162149816) with []\\\\nI0227 16:08:16.580553 6760 address_set.go:302] 
New(aa6fc2dc-fab0-4812-b9da-809058e4dcf7/default-network-controller:EgressIP:egressip-served-pods:v4:default/a8519615025667110816) with []\\\\nI0227 16:08:16.580578 6760 address_set.go:302] New(bf133528-8652-4c84-85ff-881f0afe9837/default-network-controller:EgressService:egresssvc-served-pods:v4/a13607449821398607916) with []\\\\nI0227 16:08:16.580892 6760 factory.go:1336] Added *v1.Node event handler 7\\\\nI0227 16:08:16.581661 6760 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0227 16:08:16.582042 6760 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0227 16:08:16.582136 6760 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0227 16:08:16.582177 6760 ovnkube.go:599] Stopped ovnkube\\\\nI0227 16:08:16.582202 6760 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0227 16:08:16.582282 6760 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net
.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c9e139c0599bb81726ae3f42b0d9e6ee394e3d6641eb356cf5c63fe9260f4c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernet
es.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45e519054da5e98ba4be7deaac7911617f787b44d308bda8d4ee6fb7573336a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45e519054da5e98ba4be7deaac7911617f787b44d308bda8d4ee6fb7573336a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bf9lh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:17Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:17 crc kubenswrapper[4830]: I0227 16:08:17.642104 4830 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:17 crc kubenswrapper[4830]: I0227 16:08:17.642168 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:17 crc kubenswrapper[4830]: I0227 16:08:17.642190 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:17 crc kubenswrapper[4830]: I0227 16:08:17.642216 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:17 crc kubenswrapper[4830]: I0227 16:08:17.642234 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:17Z","lastTransitionTime":"2026-02-27T16:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:17 crc kubenswrapper[4830]: I0227 16:08:17.745228 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:17 crc kubenswrapper[4830]: I0227 16:08:17.745316 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:17 crc kubenswrapper[4830]: I0227 16:08:17.745336 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:17 crc kubenswrapper[4830]: I0227 16:08:17.745366 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:17 crc kubenswrapper[4830]: I0227 16:08:17.745385 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:17Z","lastTransitionTime":"2026-02-27T16:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:17 crc kubenswrapper[4830]: I0227 16:08:17.763000 4830 scope.go:117] "RemoveContainer" containerID="acf80f37f1e38e051b10c7a1c0eeef8f0c3db7fc0e5653c54d82da7956a8eed2" Feb 27 16:08:17 crc kubenswrapper[4830]: E0227 16:08:17.763269 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 27 16:08:17 crc kubenswrapper[4830]: I0227 16:08:17.848055 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:17 crc kubenswrapper[4830]: I0227 16:08:17.848099 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:17 crc kubenswrapper[4830]: I0227 16:08:17.848110 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:17 crc kubenswrapper[4830]: I0227 16:08:17.848126 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:17 crc kubenswrapper[4830]: I0227 16:08:17.848137 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:17Z","lastTransitionTime":"2026-02-27T16:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:17 crc kubenswrapper[4830]: I0227 16:08:17.950650 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:17 crc kubenswrapper[4830]: I0227 16:08:17.950690 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:17 crc kubenswrapper[4830]: I0227 16:08:17.950703 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:17 crc kubenswrapper[4830]: I0227 16:08:17.950719 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:17 crc kubenswrapper[4830]: I0227 16:08:17.950730 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:17Z","lastTransitionTime":"2026-02-27T16:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:18 crc kubenswrapper[4830]: I0227 16:08:18.054509 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:18 crc kubenswrapper[4830]: I0227 16:08:18.054541 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:18 crc kubenswrapper[4830]: I0227 16:08:18.054556 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:18 crc kubenswrapper[4830]: I0227 16:08:18.054571 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:18 crc kubenswrapper[4830]: I0227 16:08:18.054583 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:18Z","lastTransitionTime":"2026-02-27T16:08:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:18 crc kubenswrapper[4830]: I0227 16:08:18.157509 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:18 crc kubenswrapper[4830]: I0227 16:08:18.157575 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:18 crc kubenswrapper[4830]: I0227 16:08:18.157595 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:18 crc kubenswrapper[4830]: I0227 16:08:18.157621 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:18 crc kubenswrapper[4830]: I0227 16:08:18.157645 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:18Z","lastTransitionTime":"2026-02-27T16:08:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:18 crc kubenswrapper[4830]: I0227 16:08:18.266235 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:18 crc kubenswrapper[4830]: I0227 16:08:18.266327 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:18 crc kubenswrapper[4830]: I0227 16:08:18.266350 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:18 crc kubenswrapper[4830]: I0227 16:08:18.266428 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:18 crc kubenswrapper[4830]: I0227 16:08:18.266467 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:18Z","lastTransitionTime":"2026-02-27T16:08:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:18 crc kubenswrapper[4830]: I0227 16:08:18.356579 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bf9lh_2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904/ovnkube-controller/1.log" Feb 27 16:08:18 crc kubenswrapper[4830]: I0227 16:08:18.359904 4830 scope.go:117] "RemoveContainer" containerID="699248f4512fa4743fb7f9da9ba7ab978d62198587e74a1e69ad4e60ebaa28f5" Feb 27 16:08:18 crc kubenswrapper[4830]: E0227 16:08:18.360089 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-bf9lh_openshift-ovn-kubernetes(2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904)\"" pod="openshift-ovn-kubernetes/ovnkube-node-bf9lh" podUID="2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904" Feb 27 16:08:18 crc kubenswrapper[4830]: I0227 16:08:18.370036 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:18 crc kubenswrapper[4830]: I0227 16:08:18.370060 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:18 crc kubenswrapper[4830]: I0227 16:08:18.370071 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:18 crc kubenswrapper[4830]: I0227 16:08:18.370084 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:18 crc kubenswrapper[4830]: I0227 16:08:18.370096 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:18Z","lastTransitionTime":"2026-02-27T16:08:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:08:18 crc kubenswrapper[4830]: I0227 16:08:18.375864 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eebea1ec0fd24a7664f474db4e4cdb08e53c5c742c392900599dc3e7adcbee2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\
\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:18Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:18 crc kubenswrapper[4830]: I0227 16:08:18.385659 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p7298" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"616ebd42-6bbe-4536-ba35-f8b07f2f11b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a08121a78952c85326d07974c700397678777995d73cce9f24ace8932b3e9b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\"
:\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x9g9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p7298\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:18Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:18 crc kubenswrapper[4830]: I0227 16:08:18.398032 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kgdlg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ba2fe32-66e0-4bcd-a646-9d07c9a21c54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9l8vg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9l8vg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kgdlg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:18Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:18 crc 
kubenswrapper[4830]: I0227 16:08:18.411029 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f62df11-40bd-4531-baa9-5b7ab5679a67\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5046080bbc4855769fb50909ff06b8c2cfd2438a0b65749c89302a4e97342980\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\
":\\\"cri-o://97f85a479701374168505eed2e5a64ea144d1c48dce2022d60cb6ebaebfe159f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97f85a479701374168505eed2e5a64ea144d1c48dce2022d60cb6ebaebfe159f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:06:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:18Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:18 crc kubenswrapper[4830]: I0227 16:08:18.429492 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:18Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:18 crc kubenswrapper[4830]: I0227 16:08:18.452036 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fsrq9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb72b0f7-1d22-4d13-9653-b1607aa2235d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b684822980dfea25ae15f46337d198cbbb8656b55c38874d917c2fae68431aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwzp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fsrq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:18Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:18 crc kubenswrapper[4830]: I0227 16:08:18.473160 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:18 crc 
kubenswrapper[4830]: I0227 16:08:18.473214 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:18 crc kubenswrapper[4830]: I0227 16:08:18.473228 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:18 crc kubenswrapper[4830]: I0227 16:08:18.473248 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:18 crc kubenswrapper[4830]: I0227 16:08:18.473261 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:18Z","lastTransitionTime":"2026-02-27T16:08:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:08:18 crc kubenswrapper[4830]: I0227 16:08:18.475896 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rgv8f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"672682a0-e75f-4d6c-b4f2-542944327497\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5efac20f84ec097b144c4cb152a6eba826b5eac50e222e2618b75bf163cbe93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b56418e015eaf4499e3e5e8ade43a2e4fb4803d5621b0db3fb365dd185d375f4\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b56418e015eaf4499e3e5e8ade43a2e4fb4803d5621b0db3fb365dd185d375f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://441562bcb0ff6663b1affd84ae53348e082907782a89293730d0d076e9e84d9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://441562bcb0ff6663b1affd84ae53348e082907782a89293730d0d076e9e84d9d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:05Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be1e4280634a32e892b1dbc9bb4034909146da92eb6b01aa69bb7208447a7394\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be1e4280634a32e892b1dbc9bb4034909146da92eb6b01aa69bb7208447a7394\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d37bf
60056d482e28c3252074c7ae82c3bd9884ba28af16b639c21795c9e3f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d37bf60056d482e28c3252074c7ae82c3bd9884ba28af16b639c21795c9e3f3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f343d2da5177c14f48359b1377e123308e3686e4adfba0407940fe1368592548\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f343d2da5177c14f48359b1377e123308e3686e4adfba0407940fe1368592548\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:10Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c27a1b90defc90e0a69236bb9c462d6e5fd36e6500f0f4c720ccd960b0d3ee42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c27a1b90defc90e0a69236bb9c462d6e5fd36e6500f0f4c720ccd960b0d3ee42\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rgv8f\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:18Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:18 crc kubenswrapper[4830]: I0227 16:08:18.489531 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fcddf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6adbc0c4-e467-41f1-9190-d0dd3693eba6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2a66caf75365a8658dd7422b2f715e7f095b5bd7d96c8a0a96ce1a80cb99849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\"
:\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zrsv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fcddf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:18Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:18 crc kubenswrapper[4830]: I0227 16:08:18.510662 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:18Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:18 crc kubenswrapper[4830]: I0227 16:08:18.533138 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:18Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:18 crc kubenswrapper[4830]: I0227 16:08:18.552254 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4995a66c4bc6d195947f67ef564aa18c7fe145e12067c2fca8e0de6b76b72f19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df5245f048370e7455a63a6ca01bffb8e514e22a123e45bc3a064ce4fbe27f45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:18Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:18 crc kubenswrapper[4830]: I0227 16:08:18.571359 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92b20f888e852fcbe89e6d5c685b4b0fad411956bb3dc3c5ba0e10d6b206e131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-27T16:08:18Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:18 crc kubenswrapper[4830]: I0227 16:08:18.576315 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:18 crc kubenswrapper[4830]: I0227 16:08:18.576335 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:18 crc kubenswrapper[4830]: I0227 16:08:18.576344 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:18 crc kubenswrapper[4830]: I0227 16:08:18.576359 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:18 crc kubenswrapper[4830]: I0227 16:08:18.576370 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:18Z","lastTransitionTime":"2026-02-27T16:08:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:18 crc kubenswrapper[4830]: I0227 16:08:18.607370 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bf9lh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3371981f3cdb5468ef6e9038108b3f9d884cec2f2db2f2754a3b5bf908d0372\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://273a7927f1c00c51d8c6157c6050f697430275ea8b1cac13ebe621d6556b4fef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a05759873cd3a2fb58756795e02d8247af4d8204388bf669ddc7cb1a8fbd7baa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc95e89196ccb52580164d57b311b57b2bcc7e666c3801f53378c59446c0e73f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:05Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05614397aaaa68f352f4263160c3463ce65513156517137bf09a3b5deb505c05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41cda3a7a261313b65434f3e8b885a9628d71f9c9a9c55f3e559226341c8aa41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://699248f4512fa4743fb7f9da9ba7ab978d62198587e74a1e69ad4e60ebaa28f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://699248f4512fa4743fb7f9da9ba7ab978d62198587e74a1e69ad4e60ebaa28f5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-27T16:08:16Z\\\",\\\"message\\\":\\\"mns:[] Mutations:[{Column:policies Mutator:insert Value:{GoSet:[{GoUUID:a5a72d02-1a0f-4f7f-a8c5-6923a1c4274a}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f6d604c1-9711-4e25-be6c-79ec28bbad1b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0227 
16:08:16.580502 6760 address_set.go:302] New(0d39bc5c-d5b9-432c-81be-2275bce5d7aa/default-network-controller:EgressIP:node-ips:v4:default/a712973235162149816) with []\\\\nI0227 16:08:16.580553 6760 address_set.go:302] New(aa6fc2dc-fab0-4812-b9da-809058e4dcf7/default-network-controller:EgressIP:egressip-served-pods:v4:default/a8519615025667110816) with []\\\\nI0227 16:08:16.580578 6760 address_set.go:302] New(bf133528-8652-4c84-85ff-881f0afe9837/default-network-controller:EgressService:egresssvc-served-pods:v4/a13607449821398607916) with []\\\\nI0227 16:08:16.580892 6760 factory.go:1336] Added *v1.Node event handler 7\\\\nI0227 16:08:16.581661 6760 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0227 16:08:16.582042 6760 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0227 16:08:16.582136 6760 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0227 16:08:16.582177 6760 ovnkube.go:599] Stopped ovnkube\\\\nI0227 16:08:16.582202 6760 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0227 16:08:16.582282 6760 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:15Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bf9lh_openshift-ovn-kubernetes(2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c9e139c0599bb81726ae3f42b0d9e6ee394e3d6641eb356cf5c63fe9260f4c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45e519054da5e98ba4be7deaac7911617f787b44d308bda8d4ee6fb7573336a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45e519054da5e98ba4
be7deaac7911617f787b44d308bda8d4ee6fb7573336a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bf9lh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:18Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:18 crc kubenswrapper[4830]: I0227 16:08:18.628032 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00d6b7ce-4757-4275-8345-60c1b546ce25\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58864ba639422794b99f6190c7ab9a81537913f51574220acb3838f97b4f9421\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqztt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d403fb60beeb11f3cb72e7ca134b83a9c519375
fbb6727d2070118a6b924516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqztt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2tv5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:18Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:18 crc kubenswrapper[4830]: I0227 16:08:18.646708 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gqgb6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd5a4c5b-2008-4354-b26e-8763a631e55c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32024ba3e19bc33cc8197b4dd6fdf165f6de8fd7c3b1e85b2f4768c75c50736a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87p77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14a90f11c3291e6bde2a008f2e8e617c9c8db
36ee65c3bbf26112bf8707d7cd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87p77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gqgb6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:18Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:18 crc kubenswrapper[4830]: I0227 16:08:18.656704 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 16:08:18 crc kubenswrapper[4830]: E0227 
16:08:18.656903 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 16:08:34.656866937 +0000 UTC m=+110.746139460 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:08:18 crc kubenswrapper[4830]: I0227 16:08:18.657020 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 16:08:18 crc kubenswrapper[4830]: I0227 16:08:18.657097 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 16:08:18 crc kubenswrapper[4830]: E0227 16:08:18.657176 4830 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 27 16:08:18 crc kubenswrapper[4830]: E0227 16:08:18.657276 4830 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-27 16:08:34.657253286 +0000 UTC m=+110.746525789 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 27 16:08:18 crc kubenswrapper[4830]: E0227 16:08:18.657302 4830 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 27 16:08:18 crc kubenswrapper[4830]: E0227 16:08:18.657341 4830 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 27 16:08:18 crc kubenswrapper[4830]: E0227 16:08:18.657379 4830 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 27 16:08:18 crc kubenswrapper[4830]: E0227 16:08:18.657405 4830 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 16:08:18 crc kubenswrapper[4830]: I0227 16:08:18.657191 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod 
\"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 16:08:18 crc kubenswrapper[4830]: E0227 16:08:18.657407 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-27 16:08:34.657379889 +0000 UTC m=+110.746652392 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 27 16:08:18 crc kubenswrapper[4830]: E0227 16:08:18.657495 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-27 16:08:34.657473171 +0000 UTC m=+110.746745674 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 16:08:18 crc kubenswrapper[4830]: I0227 16:08:18.657578 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 16:08:18 crc kubenswrapper[4830]: E0227 16:08:18.657729 4830 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 27 16:08:18 crc kubenswrapper[4830]: E0227 16:08:18.657780 4830 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 27 16:08:18 crc kubenswrapper[4830]: E0227 16:08:18.657808 4830 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 16:08:18 crc kubenswrapper[4830]: E0227 16:08:18.657875 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. 
No retries permitted until 2026-02-27 16:08:34.65785291 +0000 UTC m=+110.747125403 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 16:08:18 crc kubenswrapper[4830]: I0227 16:08:18.669254 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d20f886-cfdb-48c7-9754-6b7255b1124f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a3f71ef2ccab58baa94377c8f63ded83ca5f73a4d7a24d754719670685c55a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec37d31a82aedaacdf47fa540d508e6ca53626d082c45933c11a72f41a819a8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b6cb8e7a94967cd24e883272ba2f923a3adae43cc6105ea7506c1fba144de241\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://acf80f37f1e38e051b10c7a1c0eeef8f0c3db7fc0e5653c54d82da7956a8eed2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acf80f37f1e38e051b10c7a1c0eeef8f0c3db7fc0e5653c54d82da7956a8eed2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T16:07:52Z\\\",\\\"message\\\":\\\"g file observer\\\\nW0227 16:07:52.525113 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 16:07:52.525413 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 16:07:52.526578 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-70351561/tls.crt::/tmp/serving-cert-70351561/tls.key\\\\\\\"\\\\nI0227 16:07:52.790383 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 16:07:52.794547 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 16:07:52.794587 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 16:07:52.794627 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 16:07:52.794638 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 16:07:52.801197 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0227 16:07:52.801263 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 16:07:52.801293 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0227 16:07:52.801246 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0227 16:07:52.801325 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 16:07:52.801373 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 16:07:52.801389 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 16:07:52.801396 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0227 16:07:52.804476 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T16:07:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97ffad51412be8330198e944d4f687380d5066b777b54e362e8f8b772b808c89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1fc7975111dde3841671f046742eea3d0eefcb040eee8aadb99a7ca8eba8bd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1fc7975111dde3841671f046742eea3d0eefcb040eee8aadb99a7ca8eba8bd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:06:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:18Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:18 crc kubenswrapper[4830]: I0227 16:08:18.678595 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:18 crc kubenswrapper[4830]: I0227 16:08:18.678620 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:18 crc kubenswrapper[4830]: I0227 16:08:18.678632 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:18 crc kubenswrapper[4830]: I0227 16:08:18.678650 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:18 crc kubenswrapper[4830]: I0227 16:08:18.678662 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:18Z","lastTransitionTime":"2026-02-27T16:08:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:18 crc kubenswrapper[4830]: I0227 16:08:18.759099 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6ba2fe32-66e0-4bcd-a646-9d07c9a21c54-metrics-certs\") pod \"network-metrics-daemon-kgdlg\" (UID: \"6ba2fe32-66e0-4bcd-a646-9d07c9a21c54\") " pod="openshift-multus/network-metrics-daemon-kgdlg" Feb 27 16:08:18 crc kubenswrapper[4830]: E0227 16:08:18.759303 4830 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 27 16:08:18 crc kubenswrapper[4830]: E0227 16:08:18.759388 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6ba2fe32-66e0-4bcd-a646-9d07c9a21c54-metrics-certs podName:6ba2fe32-66e0-4bcd-a646-9d07c9a21c54 nodeName:}" failed. No retries permitted until 2026-02-27 16:08:34.759365769 +0000 UTC m=+110.848638272 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6ba2fe32-66e0-4bcd-a646-9d07c9a21c54-metrics-certs") pod "network-metrics-daemon-kgdlg" (UID: "6ba2fe32-66e0-4bcd-a646-9d07c9a21c54") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 27 16:08:18 crc kubenswrapper[4830]: I0227 16:08:18.761366 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 16:08:18 crc kubenswrapper[4830]: I0227 16:08:18.761401 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 16:08:18 crc kubenswrapper[4830]: I0227 16:08:18.761383 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-kgdlg" Feb 27 16:08:18 crc kubenswrapper[4830]: I0227 16:08:18.761368 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 16:08:18 crc kubenswrapper[4830]: E0227 16:08:18.761522 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 16:08:18 crc kubenswrapper[4830]: E0227 16:08:18.761603 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 16:08:18 crc kubenswrapper[4830]: E0227 16:08:18.761763 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 16:08:18 crc kubenswrapper[4830]: E0227 16:08:18.761903 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kgdlg" podUID="6ba2fe32-66e0-4bcd-a646-9d07c9a21c54" Feb 27 16:08:18 crc kubenswrapper[4830]: I0227 16:08:18.781441 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:18 crc kubenswrapper[4830]: I0227 16:08:18.781461 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:18 crc kubenswrapper[4830]: I0227 16:08:18.781470 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:18 crc kubenswrapper[4830]: I0227 16:08:18.781484 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:18 crc kubenswrapper[4830]: I0227 16:08:18.781493 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:18Z","lastTransitionTime":"2026-02-27T16:08:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:18 crc kubenswrapper[4830]: I0227 16:08:18.884309 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:18 crc kubenswrapper[4830]: I0227 16:08:18.884345 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:18 crc kubenswrapper[4830]: I0227 16:08:18.884354 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:18 crc kubenswrapper[4830]: I0227 16:08:18.884368 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:18 crc kubenswrapper[4830]: I0227 16:08:18.884379 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:18Z","lastTransitionTime":"2026-02-27T16:08:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:18 crc kubenswrapper[4830]: I0227 16:08:18.986666 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:18 crc kubenswrapper[4830]: I0227 16:08:18.986698 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:18 crc kubenswrapper[4830]: I0227 16:08:18.986708 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:18 crc kubenswrapper[4830]: I0227 16:08:18.986719 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:18 crc kubenswrapper[4830]: I0227 16:08:18.986731 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:18Z","lastTransitionTime":"2026-02-27T16:08:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:19 crc kubenswrapper[4830]: I0227 16:08:19.089835 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:19 crc kubenswrapper[4830]: I0227 16:08:19.089902 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:19 crc kubenswrapper[4830]: I0227 16:08:19.089914 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:19 crc kubenswrapper[4830]: I0227 16:08:19.089933 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:19 crc kubenswrapper[4830]: I0227 16:08:19.089975 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:19Z","lastTransitionTime":"2026-02-27T16:08:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:19 crc kubenswrapper[4830]: I0227 16:08:19.192865 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:19 crc kubenswrapper[4830]: I0227 16:08:19.192923 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:19 crc kubenswrapper[4830]: I0227 16:08:19.192939 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:19 crc kubenswrapper[4830]: I0227 16:08:19.192991 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:19 crc kubenswrapper[4830]: I0227 16:08:19.193008 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:19Z","lastTransitionTime":"2026-02-27T16:08:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:19 crc kubenswrapper[4830]: I0227 16:08:19.296056 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:19 crc kubenswrapper[4830]: I0227 16:08:19.296102 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:19 crc kubenswrapper[4830]: I0227 16:08:19.296113 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:19 crc kubenswrapper[4830]: I0227 16:08:19.296132 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:19 crc kubenswrapper[4830]: I0227 16:08:19.296144 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:19Z","lastTransitionTime":"2026-02-27T16:08:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:19 crc kubenswrapper[4830]: I0227 16:08:19.398287 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:19 crc kubenswrapper[4830]: I0227 16:08:19.398356 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:19 crc kubenswrapper[4830]: I0227 16:08:19.398373 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:19 crc kubenswrapper[4830]: I0227 16:08:19.398398 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:19 crc kubenswrapper[4830]: I0227 16:08:19.398429 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:19Z","lastTransitionTime":"2026-02-27T16:08:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:19 crc kubenswrapper[4830]: I0227 16:08:19.501346 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:19 crc kubenswrapper[4830]: I0227 16:08:19.501399 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:19 crc kubenswrapper[4830]: I0227 16:08:19.501415 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:19 crc kubenswrapper[4830]: I0227 16:08:19.501441 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:19 crc kubenswrapper[4830]: I0227 16:08:19.501471 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:19Z","lastTransitionTime":"2026-02-27T16:08:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:19 crc kubenswrapper[4830]: I0227 16:08:19.604146 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:19 crc kubenswrapper[4830]: I0227 16:08:19.604179 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:19 crc kubenswrapper[4830]: I0227 16:08:19.604188 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:19 crc kubenswrapper[4830]: I0227 16:08:19.604204 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:19 crc kubenswrapper[4830]: I0227 16:08:19.604215 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:19Z","lastTransitionTime":"2026-02-27T16:08:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:19 crc kubenswrapper[4830]: I0227 16:08:19.707254 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:19 crc kubenswrapper[4830]: I0227 16:08:19.707322 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:19 crc kubenswrapper[4830]: I0227 16:08:19.707342 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:19 crc kubenswrapper[4830]: I0227 16:08:19.707365 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:19 crc kubenswrapper[4830]: I0227 16:08:19.707385 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:19Z","lastTransitionTime":"2026-02-27T16:08:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:19 crc kubenswrapper[4830]: I0227 16:08:19.811160 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:19 crc kubenswrapper[4830]: I0227 16:08:19.811216 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:19 crc kubenswrapper[4830]: I0227 16:08:19.811234 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:19 crc kubenswrapper[4830]: I0227 16:08:19.811257 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:19 crc kubenswrapper[4830]: I0227 16:08:19.811276 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:19Z","lastTransitionTime":"2026-02-27T16:08:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:19 crc kubenswrapper[4830]: I0227 16:08:19.914648 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:19 crc kubenswrapper[4830]: I0227 16:08:19.914702 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:19 crc kubenswrapper[4830]: I0227 16:08:19.914720 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:19 crc kubenswrapper[4830]: I0227 16:08:19.914744 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:19 crc kubenswrapper[4830]: I0227 16:08:19.914762 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:19Z","lastTransitionTime":"2026-02-27T16:08:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:20 crc kubenswrapper[4830]: I0227 16:08:20.021260 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:20 crc kubenswrapper[4830]: I0227 16:08:20.021323 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:20 crc kubenswrapper[4830]: I0227 16:08:20.021340 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:20 crc kubenswrapper[4830]: I0227 16:08:20.021366 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:20 crc kubenswrapper[4830]: I0227 16:08:20.021384 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:20Z","lastTransitionTime":"2026-02-27T16:08:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:20 crc kubenswrapper[4830]: I0227 16:08:20.124218 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:20 crc kubenswrapper[4830]: I0227 16:08:20.124267 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:20 crc kubenswrapper[4830]: I0227 16:08:20.124284 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:20 crc kubenswrapper[4830]: I0227 16:08:20.124306 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:20 crc kubenswrapper[4830]: I0227 16:08:20.124322 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:20Z","lastTransitionTime":"2026-02-27T16:08:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:20 crc kubenswrapper[4830]: I0227 16:08:20.227883 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:20 crc kubenswrapper[4830]: I0227 16:08:20.228243 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:20 crc kubenswrapper[4830]: I0227 16:08:20.228465 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:20 crc kubenswrapper[4830]: I0227 16:08:20.228663 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:20 crc kubenswrapper[4830]: I0227 16:08:20.228853 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:20Z","lastTransitionTime":"2026-02-27T16:08:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:20 crc kubenswrapper[4830]: I0227 16:08:20.332374 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:20 crc kubenswrapper[4830]: I0227 16:08:20.332627 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:20 crc kubenswrapper[4830]: I0227 16:08:20.332757 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:20 crc kubenswrapper[4830]: I0227 16:08:20.332845 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:20 crc kubenswrapper[4830]: I0227 16:08:20.332931 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:20Z","lastTransitionTime":"2026-02-27T16:08:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:20 crc kubenswrapper[4830]: I0227 16:08:20.435746 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:20 crc kubenswrapper[4830]: I0227 16:08:20.436084 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:20 crc kubenswrapper[4830]: I0227 16:08:20.436166 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:20 crc kubenswrapper[4830]: I0227 16:08:20.436268 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:20 crc kubenswrapper[4830]: I0227 16:08:20.436348 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:20Z","lastTransitionTime":"2026-02-27T16:08:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:20 crc kubenswrapper[4830]: I0227 16:08:20.538750 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:20 crc kubenswrapper[4830]: I0227 16:08:20.539160 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:20 crc kubenswrapper[4830]: I0227 16:08:20.539172 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:20 crc kubenswrapper[4830]: I0227 16:08:20.539191 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:20 crc kubenswrapper[4830]: I0227 16:08:20.539203 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:20Z","lastTransitionTime":"2026-02-27T16:08:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:20 crc kubenswrapper[4830]: I0227 16:08:20.641797 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:20 crc kubenswrapper[4830]: I0227 16:08:20.642572 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:20 crc kubenswrapper[4830]: I0227 16:08:20.642717 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:20 crc kubenswrapper[4830]: I0227 16:08:20.642782 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:20 crc kubenswrapper[4830]: I0227 16:08:20.642835 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:20Z","lastTransitionTime":"2026-02-27T16:08:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:20 crc kubenswrapper[4830]: I0227 16:08:20.745610 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:20 crc kubenswrapper[4830]: I0227 16:08:20.745683 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:20 crc kubenswrapper[4830]: I0227 16:08:20.745702 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:20 crc kubenswrapper[4830]: I0227 16:08:20.745731 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:20 crc kubenswrapper[4830]: I0227 16:08:20.745749 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:20Z","lastTransitionTime":"2026-02-27T16:08:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:08:20 crc kubenswrapper[4830]: I0227 16:08:20.761606 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 16:08:20 crc kubenswrapper[4830]: I0227 16:08:20.761657 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 16:08:20 crc kubenswrapper[4830]: E0227 16:08:20.761755 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 16:08:20 crc kubenswrapper[4830]: I0227 16:08:20.761765 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 16:08:20 crc kubenswrapper[4830]: E0227 16:08:20.761934 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 16:08:20 crc kubenswrapper[4830]: I0227 16:08:20.762014 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kgdlg" Feb 27 16:08:20 crc kubenswrapper[4830]: E0227 16:08:20.762097 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 16:08:20 crc kubenswrapper[4830]: E0227 16:08:20.762177 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kgdlg" podUID="6ba2fe32-66e0-4bcd-a646-9d07c9a21c54" Feb 27 16:08:20 crc kubenswrapper[4830]: I0227 16:08:20.848086 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:20 crc kubenswrapper[4830]: I0227 16:08:20.848129 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:20 crc kubenswrapper[4830]: I0227 16:08:20.848138 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:20 crc kubenswrapper[4830]: I0227 16:08:20.848152 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:20 crc kubenswrapper[4830]: I0227 16:08:20.848163 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:20Z","lastTransitionTime":"2026-02-27T16:08:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:20 crc kubenswrapper[4830]: I0227 16:08:20.950560 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:20 crc kubenswrapper[4830]: I0227 16:08:20.950626 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:20 crc kubenswrapper[4830]: I0227 16:08:20.950645 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:20 crc kubenswrapper[4830]: I0227 16:08:20.950672 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:20 crc kubenswrapper[4830]: I0227 16:08:20.950692 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:20Z","lastTransitionTime":"2026-02-27T16:08:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:21 crc kubenswrapper[4830]: I0227 16:08:21.054037 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:21 crc kubenswrapper[4830]: I0227 16:08:21.054111 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:21 crc kubenswrapper[4830]: I0227 16:08:21.054132 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:21 crc kubenswrapper[4830]: I0227 16:08:21.054164 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:21 crc kubenswrapper[4830]: I0227 16:08:21.054182 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:21Z","lastTransitionTime":"2026-02-27T16:08:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:21 crc kubenswrapper[4830]: I0227 16:08:21.156839 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:21 crc kubenswrapper[4830]: I0227 16:08:21.156929 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:21 crc kubenswrapper[4830]: I0227 16:08:21.156982 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:21 crc kubenswrapper[4830]: I0227 16:08:21.157008 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:21 crc kubenswrapper[4830]: I0227 16:08:21.157025 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:21Z","lastTransitionTime":"2026-02-27T16:08:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:21 crc kubenswrapper[4830]: I0227 16:08:21.260334 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:21 crc kubenswrapper[4830]: I0227 16:08:21.260662 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:21 crc kubenswrapper[4830]: I0227 16:08:21.260797 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:21 crc kubenswrapper[4830]: I0227 16:08:21.260986 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:21 crc kubenswrapper[4830]: I0227 16:08:21.261109 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:21Z","lastTransitionTime":"2026-02-27T16:08:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:21 crc kubenswrapper[4830]: I0227 16:08:21.364618 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:21 crc kubenswrapper[4830]: I0227 16:08:21.364682 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:21 crc kubenswrapper[4830]: I0227 16:08:21.364700 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:21 crc kubenswrapper[4830]: I0227 16:08:21.364725 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:21 crc kubenswrapper[4830]: I0227 16:08:21.364797 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:21Z","lastTransitionTime":"2026-02-27T16:08:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:21 crc kubenswrapper[4830]: I0227 16:08:21.467103 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:21 crc kubenswrapper[4830]: I0227 16:08:21.467164 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:21 crc kubenswrapper[4830]: I0227 16:08:21.467182 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:21 crc kubenswrapper[4830]: I0227 16:08:21.467207 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:21 crc kubenswrapper[4830]: I0227 16:08:21.467224 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:21Z","lastTransitionTime":"2026-02-27T16:08:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:21 crc kubenswrapper[4830]: I0227 16:08:21.570003 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:21 crc kubenswrapper[4830]: I0227 16:08:21.570058 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:21 crc kubenswrapper[4830]: I0227 16:08:21.570075 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:21 crc kubenswrapper[4830]: I0227 16:08:21.570101 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:21 crc kubenswrapper[4830]: I0227 16:08:21.570118 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:21Z","lastTransitionTime":"2026-02-27T16:08:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:21 crc kubenswrapper[4830]: I0227 16:08:21.674783 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:21 crc kubenswrapper[4830]: I0227 16:08:21.674862 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:21 crc kubenswrapper[4830]: I0227 16:08:21.674884 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:21 crc kubenswrapper[4830]: I0227 16:08:21.674912 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:21 crc kubenswrapper[4830]: I0227 16:08:21.674933 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:21Z","lastTransitionTime":"2026-02-27T16:08:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:21 crc kubenswrapper[4830]: I0227 16:08:21.777441 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:21 crc kubenswrapper[4830]: I0227 16:08:21.777506 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:21 crc kubenswrapper[4830]: I0227 16:08:21.777525 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:21 crc kubenswrapper[4830]: I0227 16:08:21.777555 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:21 crc kubenswrapper[4830]: I0227 16:08:21.777572 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:21Z","lastTransitionTime":"2026-02-27T16:08:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:21 crc kubenswrapper[4830]: I0227 16:08:21.880106 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:21 crc kubenswrapper[4830]: I0227 16:08:21.880175 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:21 crc kubenswrapper[4830]: I0227 16:08:21.880192 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:21 crc kubenswrapper[4830]: I0227 16:08:21.880221 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:21 crc kubenswrapper[4830]: I0227 16:08:21.880239 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:21Z","lastTransitionTime":"2026-02-27T16:08:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:21 crc kubenswrapper[4830]: I0227 16:08:21.982998 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:21 crc kubenswrapper[4830]: I0227 16:08:21.983036 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:21 crc kubenswrapper[4830]: I0227 16:08:21.983045 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:21 crc kubenswrapper[4830]: I0227 16:08:21.983059 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:21 crc kubenswrapper[4830]: I0227 16:08:21.983070 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:21Z","lastTransitionTime":"2026-02-27T16:08:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:22 crc kubenswrapper[4830]: I0227 16:08:22.085899 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:22 crc kubenswrapper[4830]: I0227 16:08:22.086008 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:22 crc kubenswrapper[4830]: I0227 16:08:22.086033 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:22 crc kubenswrapper[4830]: I0227 16:08:22.086063 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:22 crc kubenswrapper[4830]: I0227 16:08:22.086087 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:22Z","lastTransitionTime":"2026-02-27T16:08:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:22 crc kubenswrapper[4830]: I0227 16:08:22.188466 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:22 crc kubenswrapper[4830]: I0227 16:08:22.188542 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:22 crc kubenswrapper[4830]: I0227 16:08:22.188566 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:22 crc kubenswrapper[4830]: I0227 16:08:22.188596 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:22 crc kubenswrapper[4830]: I0227 16:08:22.188619 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:22Z","lastTransitionTime":"2026-02-27T16:08:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:22 crc kubenswrapper[4830]: I0227 16:08:22.291688 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:22 crc kubenswrapper[4830]: I0227 16:08:22.291749 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:22 crc kubenswrapper[4830]: I0227 16:08:22.291765 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:22 crc kubenswrapper[4830]: I0227 16:08:22.291789 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:22 crc kubenswrapper[4830]: I0227 16:08:22.291806 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:22Z","lastTransitionTime":"2026-02-27T16:08:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:22 crc kubenswrapper[4830]: I0227 16:08:22.394668 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:22 crc kubenswrapper[4830]: I0227 16:08:22.394732 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:22 crc kubenswrapper[4830]: I0227 16:08:22.394754 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:22 crc kubenswrapper[4830]: I0227 16:08:22.394786 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:22 crc kubenswrapper[4830]: I0227 16:08:22.394807 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:22Z","lastTransitionTime":"2026-02-27T16:08:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:22 crc kubenswrapper[4830]: I0227 16:08:22.497026 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:22 crc kubenswrapper[4830]: I0227 16:08:22.497085 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:22 crc kubenswrapper[4830]: I0227 16:08:22.497106 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:22 crc kubenswrapper[4830]: I0227 16:08:22.497134 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:22 crc kubenswrapper[4830]: I0227 16:08:22.497153 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:22Z","lastTransitionTime":"2026-02-27T16:08:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:22 crc kubenswrapper[4830]: I0227 16:08:22.598897 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:22 crc kubenswrapper[4830]: I0227 16:08:22.598966 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:22 crc kubenswrapper[4830]: I0227 16:08:22.598984 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:22 crc kubenswrapper[4830]: I0227 16:08:22.599005 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:22 crc kubenswrapper[4830]: I0227 16:08:22.599020 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:22Z","lastTransitionTime":"2026-02-27T16:08:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:22 crc kubenswrapper[4830]: I0227 16:08:22.700543 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:22 crc kubenswrapper[4830]: I0227 16:08:22.700574 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:22 crc kubenswrapper[4830]: I0227 16:08:22.700585 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:22 crc kubenswrapper[4830]: I0227 16:08:22.700597 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:22 crc kubenswrapper[4830]: I0227 16:08:22.700607 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:22Z","lastTransitionTime":"2026-02-27T16:08:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:08:22 crc kubenswrapper[4830]: I0227 16:08:22.762363 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 16:08:22 crc kubenswrapper[4830]: I0227 16:08:22.762412 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kgdlg" Feb 27 16:08:22 crc kubenswrapper[4830]: E0227 16:08:22.762575 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 16:08:22 crc kubenswrapper[4830]: I0227 16:08:22.762613 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 16:08:22 crc kubenswrapper[4830]: I0227 16:08:22.762675 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 16:08:22 crc kubenswrapper[4830]: E0227 16:08:22.762737 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kgdlg" podUID="6ba2fe32-66e0-4bcd-a646-9d07c9a21c54" Feb 27 16:08:22 crc kubenswrapper[4830]: E0227 16:08:22.762890 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 16:08:22 crc kubenswrapper[4830]: E0227 16:08:22.763051 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 16:08:22 crc kubenswrapper[4830]: I0227 16:08:22.803350 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:22 crc kubenswrapper[4830]: I0227 16:08:22.803403 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:22 crc kubenswrapper[4830]: I0227 16:08:22.803420 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:22 crc kubenswrapper[4830]: I0227 16:08:22.803443 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:22 crc kubenswrapper[4830]: I0227 16:08:22.803459 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:22Z","lastTransitionTime":"2026-02-27T16:08:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:22 crc kubenswrapper[4830]: I0227 16:08:22.906048 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:22 crc kubenswrapper[4830]: I0227 16:08:22.906096 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:22 crc kubenswrapper[4830]: I0227 16:08:22.906112 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:22 crc kubenswrapper[4830]: I0227 16:08:22.906132 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:22 crc kubenswrapper[4830]: I0227 16:08:22.906150 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:22Z","lastTransitionTime":"2026-02-27T16:08:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:23 crc kubenswrapper[4830]: I0227 16:08:23.008147 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:23 crc kubenswrapper[4830]: I0227 16:08:23.008175 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:23 crc kubenswrapper[4830]: I0227 16:08:23.008182 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:23 crc kubenswrapper[4830]: I0227 16:08:23.008194 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:23 crc kubenswrapper[4830]: I0227 16:08:23.008201 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:23Z","lastTransitionTime":"2026-02-27T16:08:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:23 crc kubenswrapper[4830]: I0227 16:08:23.112729 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:23 crc kubenswrapper[4830]: I0227 16:08:23.112808 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:23 crc kubenswrapper[4830]: I0227 16:08:23.112834 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:23 crc kubenswrapper[4830]: I0227 16:08:23.112864 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:23 crc kubenswrapper[4830]: I0227 16:08:23.112889 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:23Z","lastTransitionTime":"2026-02-27T16:08:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:23 crc kubenswrapper[4830]: I0227 16:08:23.215873 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:23 crc kubenswrapper[4830]: I0227 16:08:23.215967 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:23 crc kubenswrapper[4830]: I0227 16:08:23.215978 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:23 crc kubenswrapper[4830]: I0227 16:08:23.216756 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:23 crc kubenswrapper[4830]: I0227 16:08:23.216803 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:23Z","lastTransitionTime":"2026-02-27T16:08:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:23 crc kubenswrapper[4830]: I0227 16:08:23.319421 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:23 crc kubenswrapper[4830]: I0227 16:08:23.319472 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:23 crc kubenswrapper[4830]: I0227 16:08:23.319494 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:23 crc kubenswrapper[4830]: I0227 16:08:23.319523 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:23 crc kubenswrapper[4830]: I0227 16:08:23.319546 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:23Z","lastTransitionTime":"2026-02-27T16:08:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:23 crc kubenswrapper[4830]: I0227 16:08:23.422342 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:23 crc kubenswrapper[4830]: I0227 16:08:23.422417 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:23 crc kubenswrapper[4830]: I0227 16:08:23.422436 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:23 crc kubenswrapper[4830]: I0227 16:08:23.422875 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:23 crc kubenswrapper[4830]: I0227 16:08:23.422922 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:23Z","lastTransitionTime":"2026-02-27T16:08:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:23 crc kubenswrapper[4830]: I0227 16:08:23.526294 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:23 crc kubenswrapper[4830]: I0227 16:08:23.526343 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:23 crc kubenswrapper[4830]: I0227 16:08:23.526359 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:23 crc kubenswrapper[4830]: I0227 16:08:23.526383 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:23 crc kubenswrapper[4830]: I0227 16:08:23.526400 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:23Z","lastTransitionTime":"2026-02-27T16:08:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:23 crc kubenswrapper[4830]: I0227 16:08:23.629503 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:23 crc kubenswrapper[4830]: I0227 16:08:23.629596 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:23 crc kubenswrapper[4830]: I0227 16:08:23.629624 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:23 crc kubenswrapper[4830]: I0227 16:08:23.629658 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:23 crc kubenswrapper[4830]: I0227 16:08:23.629681 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:23Z","lastTransitionTime":"2026-02-27T16:08:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:23 crc kubenswrapper[4830]: I0227 16:08:23.733419 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:23 crc kubenswrapper[4830]: I0227 16:08:23.733479 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:23 crc kubenswrapper[4830]: I0227 16:08:23.733496 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:23 crc kubenswrapper[4830]: I0227 16:08:23.733523 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:23 crc kubenswrapper[4830]: I0227 16:08:23.733541 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:23Z","lastTransitionTime":"2026-02-27T16:08:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:23 crc kubenswrapper[4830]: I0227 16:08:23.837400 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:23 crc kubenswrapper[4830]: I0227 16:08:23.837456 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:23 crc kubenswrapper[4830]: I0227 16:08:23.837473 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:23 crc kubenswrapper[4830]: I0227 16:08:23.837499 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:23 crc kubenswrapper[4830]: I0227 16:08:23.837517 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:23Z","lastTransitionTime":"2026-02-27T16:08:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:23 crc kubenswrapper[4830]: I0227 16:08:23.940741 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:23 crc kubenswrapper[4830]: I0227 16:08:23.940815 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:23 crc kubenswrapper[4830]: I0227 16:08:23.940833 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:23 crc kubenswrapper[4830]: I0227 16:08:23.940853 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:23 crc kubenswrapper[4830]: I0227 16:08:23.940872 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:23Z","lastTransitionTime":"2026-02-27T16:08:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:24 crc kubenswrapper[4830]: I0227 16:08:24.044550 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:24 crc kubenswrapper[4830]: I0227 16:08:24.044599 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:24 crc kubenswrapper[4830]: I0227 16:08:24.044617 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:24 crc kubenswrapper[4830]: I0227 16:08:24.044644 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:24 crc kubenswrapper[4830]: I0227 16:08:24.044660 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:24Z","lastTransitionTime":"2026-02-27T16:08:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:24 crc kubenswrapper[4830]: I0227 16:08:24.148045 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:24 crc kubenswrapper[4830]: I0227 16:08:24.148092 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:24 crc kubenswrapper[4830]: I0227 16:08:24.148107 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:24 crc kubenswrapper[4830]: I0227 16:08:24.148131 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:24 crc kubenswrapper[4830]: I0227 16:08:24.148149 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:24Z","lastTransitionTime":"2026-02-27T16:08:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:24 crc kubenswrapper[4830]: I0227 16:08:24.251501 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:24 crc kubenswrapper[4830]: I0227 16:08:24.251560 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:24 crc kubenswrapper[4830]: I0227 16:08:24.251577 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:24 crc kubenswrapper[4830]: I0227 16:08:24.251598 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:24 crc kubenswrapper[4830]: I0227 16:08:24.251616 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:24Z","lastTransitionTime":"2026-02-27T16:08:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:24 crc kubenswrapper[4830]: I0227 16:08:24.354621 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:24 crc kubenswrapper[4830]: I0227 16:08:24.354699 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:24 crc kubenswrapper[4830]: I0227 16:08:24.354722 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:24 crc kubenswrapper[4830]: I0227 16:08:24.354751 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:24 crc kubenswrapper[4830]: I0227 16:08:24.354771 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:24Z","lastTransitionTime":"2026-02-27T16:08:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:24 crc kubenswrapper[4830]: I0227 16:08:24.457762 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:24 crc kubenswrapper[4830]: I0227 16:08:24.457826 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:24 crc kubenswrapper[4830]: I0227 16:08:24.457843 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:24 crc kubenswrapper[4830]: I0227 16:08:24.457870 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:24 crc kubenswrapper[4830]: I0227 16:08:24.457888 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:24Z","lastTransitionTime":"2026-02-27T16:08:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:24 crc kubenswrapper[4830]: I0227 16:08:24.560858 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:24 crc kubenswrapper[4830]: I0227 16:08:24.560920 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:24 crc kubenswrapper[4830]: I0227 16:08:24.560937 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:24 crc kubenswrapper[4830]: I0227 16:08:24.560996 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:24 crc kubenswrapper[4830]: I0227 16:08:24.561021 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:24Z","lastTransitionTime":"2026-02-27T16:08:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:24 crc kubenswrapper[4830]: I0227 16:08:24.664059 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:24 crc kubenswrapper[4830]: I0227 16:08:24.664156 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:24 crc kubenswrapper[4830]: I0227 16:08:24.664179 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:24 crc kubenswrapper[4830]: I0227 16:08:24.664207 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:24 crc kubenswrapper[4830]: I0227 16:08:24.664227 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:24Z","lastTransitionTime":"2026-02-27T16:08:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:08:24 crc kubenswrapper[4830]: I0227 16:08:24.761473 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kgdlg" Feb 27 16:08:24 crc kubenswrapper[4830]: I0227 16:08:24.761586 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 16:08:24 crc kubenswrapper[4830]: I0227 16:08:24.761663 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 16:08:24 crc kubenswrapper[4830]: I0227 16:08:24.761689 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 16:08:24 crc kubenswrapper[4830]: E0227 16:08:24.761820 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 16:08:24 crc kubenswrapper[4830]: E0227 16:08:24.762000 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 16:08:24 crc kubenswrapper[4830]: E0227 16:08:24.762139 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kgdlg" podUID="6ba2fe32-66e0-4bcd-a646-9d07c9a21c54" Feb 27 16:08:24 crc kubenswrapper[4830]: E0227 16:08:24.762289 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 16:08:24 crc kubenswrapper[4830]: I0227 16:08:24.766301 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:24 crc kubenswrapper[4830]: I0227 16:08:24.766349 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:24 crc kubenswrapper[4830]: I0227 16:08:24.766362 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:24 crc kubenswrapper[4830]: I0227 16:08:24.766382 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:24 crc kubenswrapper[4830]: I0227 16:08:24.766399 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:24Z","lastTransitionTime":"2026-02-27T16:08:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:24 crc kubenswrapper[4830]: I0227 16:08:24.783472 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gqgb6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd5a4c5b-2008-4354-b26e-8763a631e55c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32024ba3e19bc33cc8197b4dd6fdf165f6de8fd7c3b1e85b2f4768c75c50736a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87p77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14a90f11c3291e6bde2a008f2e8e617c9c8db36ee65c3bbf26112bf8707d7cd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87p77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gqgb6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:24Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:24 crc kubenswrapper[4830]: I0227 16:08:24.802305 4830 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:24Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:24 crc kubenswrapper[4830]: I0227 16:08:24.820006 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4995a66c4bc6d195947f67ef564aa18c7fe145e12067c2fca8e0de6b76b72f19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df5245f048370e7455a63a6ca01bffb8e514e22a123e45bc3a064ce4fbe27f45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:24Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:24 crc kubenswrapper[4830]: I0227 16:08:24.841585 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92b20f888e852fcbe89e6d5c685b4b0fad411956bb3dc3c5ba0e10d6b206e131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-27T16:08:24Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:24 crc kubenswrapper[4830]: I0227 16:08:24.868922 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:24 crc kubenswrapper[4830]: I0227 16:08:24.868994 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:24 crc kubenswrapper[4830]: I0227 16:08:24.869010 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:24 crc kubenswrapper[4830]: I0227 16:08:24.869034 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:24 crc kubenswrapper[4830]: I0227 16:08:24.869052 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:24Z","lastTransitionTime":"2026-02-27T16:08:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:24 crc kubenswrapper[4830]: I0227 16:08:24.873545 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bf9lh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3371981f3cdb5468ef6e9038108b3f9d884cec2f2db2f2754a3b5bf908d0372\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://273a7927f1c00c51d8c6157c6050f697430275ea8b1cac13ebe621d6556b4fef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a05759873cd3a2fb58756795e02d8247af4d8204388bf669ddc7cb1a8fbd7baa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc95e89196ccb52580164d57b311b57b2bcc7e666c3801f53378c59446c0e73f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:05Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05614397aaaa68f352f4263160c3463ce65513156517137bf09a3b5deb505c05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41cda3a7a261313b65434f3e8b885a9628d71f9c9a9c55f3e559226341c8aa41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://699248f4512fa4743fb7f9da9ba7ab978d62198587e74a1e69ad4e60ebaa28f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://699248f4512fa4743fb7f9da9ba7ab978d62198587e74a1e69ad4e60ebaa28f5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-27T16:08:16Z\\\",\\\"message\\\":\\\"mns:[] Mutations:[{Column:policies Mutator:insert Value:{GoSet:[{GoUUID:a5a72d02-1a0f-4f7f-a8c5-6923a1c4274a}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f6d604c1-9711-4e25-be6c-79ec28bbad1b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0227 
16:08:16.580502 6760 address_set.go:302] New(0d39bc5c-d5b9-432c-81be-2275bce5d7aa/default-network-controller:EgressIP:node-ips:v4:default/a712973235162149816) with []\\\\nI0227 16:08:16.580553 6760 address_set.go:302] New(aa6fc2dc-fab0-4812-b9da-809058e4dcf7/default-network-controller:EgressIP:egressip-served-pods:v4:default/a8519615025667110816) with []\\\\nI0227 16:08:16.580578 6760 address_set.go:302] New(bf133528-8652-4c84-85ff-881f0afe9837/default-network-controller:EgressService:egresssvc-served-pods:v4/a13607449821398607916) with []\\\\nI0227 16:08:16.580892 6760 factory.go:1336] Added *v1.Node event handler 7\\\\nI0227 16:08:16.581661 6760 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0227 16:08:16.582042 6760 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0227 16:08:16.582136 6760 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0227 16:08:16.582177 6760 ovnkube.go:599] Stopped ovnkube\\\\nI0227 16:08:16.582202 6760 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0227 16:08:16.582282 6760 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:15Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bf9lh_openshift-ovn-kubernetes(2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c9e139c0599bb81726ae3f42b0d9e6ee394e3d6641eb356cf5c63fe9260f4c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45e519054da5e98ba4be7deaac7911617f787b44d308bda8d4ee6fb7573336a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45e519054da5e98ba4
be7deaac7911617f787b44d308bda8d4ee6fb7573336a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bf9lh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:24Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:24 crc kubenswrapper[4830]: I0227 16:08:24.895236 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00d6b7ce-4757-4275-8345-60c1b546ce25\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58864ba639422794b99f6190c7ab9a81537913f51574220acb3838f97b4f9421\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqztt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d403fb60beeb11f3cb72e7ca134b83a9c519375
fbb6727d2070118a6b924516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqztt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2tv5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:24Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:24 crc kubenswrapper[4830]: I0227 16:08:24.916639 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d20f886-cfdb-48c7-9754-6b7255b1124f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a3f71ef2ccab58baa94377c8f63ded83ca5f73a4d7a24d754719670685c55a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec37d31a82aedaacdf47fa540d508e6ca53626d082c45933c11a72f41a819a8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6cb8e7a94967cd24e883272ba2f923a3adae43cc6105ea7506c1fba144de241\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://acf80f37f1e38e051b10c7a1c0eeef8f0c3db7fc0e5653c54d82da7956a8eed2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acf80f37f1e38e051b10c7a1c0eeef8f0c3db7fc0e5653c54d82da7956a8eed2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T16:07:52Z\\\",\\\"message\\\":\\\"g file observer\\\\nW0227 16:07:52.525113 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 16:07:52.525413 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 16:07:52.526578 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-70351561/tls.crt::/tmp/serving-cert-70351561/tls.key\\\\\\\"\\\\nI0227 16:07:52.790383 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 16:07:52.794547 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 16:07:52.794587 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 16:07:52.794627 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 16:07:52.794638 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 16:07:52.801197 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0227 16:07:52.801263 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 16:07:52.801293 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0227 16:07:52.801246 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0227 16:07:52.801325 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 
16:07:52.801373 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 16:07:52.801389 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 16:07:52.801396 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0227 16:07:52.804476 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T16:07:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97ffad51412be8330198e944d4f687380d5066b777b54e362e8f8b772b808c89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1fc7975111dde3841671f046742eea3d0eefcb040eee8aadb99a7ca8eba8bd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1fc7975111dde3841671f046742eea3d0eefcb040eee8aadb99a7ca8eba8bd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:06:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:24Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:24 crc kubenswrapper[4830]: I0227 16:08:24.933687 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f62df11-40bd-4531-baa9-5b7ab5679a67\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5046080bbc4855769fb50909ff06b8c2cfd2438a0b65749c89302a4e97342980\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97f85a479701374168505eed2e5a64ea144d1c48dce2022d60cb6ebaebfe159f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97f85a479701374168505eed2e5a64ea144d1c48dce2022d60cb6ebaebfe159f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:06:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:24Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:24 crc kubenswrapper[4830]: I0227 16:08:24.954379 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eebea1ec0fd24a7664f474db4e4cdb08e53c5c742c392900599dc3e7adcbee2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:24Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:24 crc kubenswrapper[4830]: I0227 16:08:24.969677 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p7298" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"616ebd42-6bbe-4536-ba35-f8b07f2f11b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a08121a78952c85326d07974c700397678777995d73cce9f24ace8932b3e9b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\
",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x9g9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p7298\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:24Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:24 crc kubenswrapper[4830]: I0227 16:08:24.972226 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:24 crc kubenswrapper[4830]: I0227 16:08:24.972299 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:24 crc kubenswrapper[4830]: I0227 16:08:24.972317 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:24 crc kubenswrapper[4830]: I0227 16:08:24.972342 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:24 crc kubenswrapper[4830]: I0227 16:08:24.972360 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:24Z","lastTransitionTime":"2026-02-27T16:08:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:24 crc kubenswrapper[4830]: I0227 16:08:24.989090 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kgdlg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ba2fe32-66e0-4bcd-a646-9d07c9a21c54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9l8vg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9l8vg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kgdlg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:24Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:25 crc 
kubenswrapper[4830]: I0227 16:08:25.012564 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:25Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:25 crc kubenswrapper[4830]: I0227 16:08:25.033075 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:25Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:25 crc kubenswrapper[4830]: I0227 16:08:25.052475 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fsrq9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb72b0f7-1d22-4d13-9653-b1607aa2235d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b684822980dfea25ae15f46337d198cbbb8656b55c38874d917c2fae68431aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwzp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fsrq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:25Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:25 crc kubenswrapper[4830]: I0227 16:08:25.075163 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rgv8f" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"672682a0-e75f-4d6c-b4f2-542944327497\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5efac20f84ec097b144c4cb152a6eba826b5eac50e222e2618b75bf163cbe93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b56418e015eaf4499e3e5e8ade43a2e4fb4803d5621b0db3f
b365dd185d375f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b56418e015eaf4499e3e5e8ade43a2e4fb4803d5621b0db3fb365dd185d375f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://441562bcb0ff6663b1affd84ae53348e082907782a89293730d0d076e9e84d9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://441562bcb0ff6663b1affd84ae53348e082907782a89293730d0d076e9e84d9d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:
08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be1e4280634a32e892b1dbc9bb4034909146da92eb6b01aa69bb7208447a7394\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be1e4280634a32e892b1dbc9bb4034909146da92eb6b01aa69bb7208447a7394\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://d37bf60056d482e28c3252074c7ae82c3bd9884ba28af16b639c21795c9e3f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d37bf60056d482e28c3252074c7ae82c3bd9884ba28af16b639c21795c9e3f3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f343d2da5177c14f48359b1377e123308e3686e4adfba0407940fe1368592548\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f343d2da5177c14f48359b1377e123308e3686e4adfba0407940fe1368592548\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:10Z\\\",\\\"r
eason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c27a1b90defc90e0a69236bb9c462d6e5fd36e6500f0f4c720ccd960b0d3ee42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c27a1b90defc90e0a69236bb9c462d6e5fd36e6500f0f4c720ccd960b0d3ee42\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rgv8f\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:25Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:25 crc kubenswrapper[4830]: I0227 16:08:25.075632 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:25 crc kubenswrapper[4830]: I0227 16:08:25.075658 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:25 crc kubenswrapper[4830]: I0227 16:08:25.075674 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:25 crc kubenswrapper[4830]: I0227 16:08:25.075695 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:25 crc kubenswrapper[4830]: I0227 16:08:25.075711 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:25Z","lastTransitionTime":"2026-02-27T16:08:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:25 crc kubenswrapper[4830]: I0227 16:08:25.089280 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fcddf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6adbc0c4-e467-41f1-9190-d0dd3693eba6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2a66caf75365a8658dd7422b2f715e7f095b5bd7d96c8a0a96ce1a80cb99849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zrsv\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fcddf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:25Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:25 crc kubenswrapper[4830]: I0227 16:08:25.179147 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:25 crc kubenswrapper[4830]: I0227 16:08:25.179186 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:25 crc kubenswrapper[4830]: I0227 16:08:25.179197 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:25 crc kubenswrapper[4830]: I0227 16:08:25.179219 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:25 crc kubenswrapper[4830]: I0227 16:08:25.179231 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:25Z","lastTransitionTime":"2026-02-27T16:08:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:25 crc kubenswrapper[4830]: I0227 16:08:25.282216 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:25 crc kubenswrapper[4830]: I0227 16:08:25.282280 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:25 crc kubenswrapper[4830]: I0227 16:08:25.282297 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:25 crc kubenswrapper[4830]: I0227 16:08:25.282322 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:25 crc kubenswrapper[4830]: I0227 16:08:25.282339 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:25Z","lastTransitionTime":"2026-02-27T16:08:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:25 crc kubenswrapper[4830]: I0227 16:08:25.391163 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:25 crc kubenswrapper[4830]: I0227 16:08:25.391217 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:25 crc kubenswrapper[4830]: I0227 16:08:25.391247 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:25 crc kubenswrapper[4830]: I0227 16:08:25.391272 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:25 crc kubenswrapper[4830]: I0227 16:08:25.391291 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:25Z","lastTransitionTime":"2026-02-27T16:08:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:25 crc kubenswrapper[4830]: I0227 16:08:25.494656 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:25 crc kubenswrapper[4830]: I0227 16:08:25.494700 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:25 crc kubenswrapper[4830]: I0227 16:08:25.494719 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:25 crc kubenswrapper[4830]: I0227 16:08:25.494741 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:25 crc kubenswrapper[4830]: I0227 16:08:25.494758 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:25Z","lastTransitionTime":"2026-02-27T16:08:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:25 crc kubenswrapper[4830]: I0227 16:08:25.598091 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:25 crc kubenswrapper[4830]: I0227 16:08:25.598149 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:25 crc kubenswrapper[4830]: I0227 16:08:25.598169 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:25 crc kubenswrapper[4830]: I0227 16:08:25.598193 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:25 crc kubenswrapper[4830]: I0227 16:08:25.598210 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:25Z","lastTransitionTime":"2026-02-27T16:08:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:25 crc kubenswrapper[4830]: I0227 16:08:25.701293 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:25 crc kubenswrapper[4830]: I0227 16:08:25.701352 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:25 crc kubenswrapper[4830]: I0227 16:08:25.701368 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:25 crc kubenswrapper[4830]: I0227 16:08:25.701396 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:25 crc kubenswrapper[4830]: I0227 16:08:25.701415 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:25Z","lastTransitionTime":"2026-02-27T16:08:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:25 crc kubenswrapper[4830]: I0227 16:08:25.804195 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:25 crc kubenswrapper[4830]: I0227 16:08:25.804248 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:25 crc kubenswrapper[4830]: I0227 16:08:25.804259 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:25 crc kubenswrapper[4830]: I0227 16:08:25.804276 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:25 crc kubenswrapper[4830]: I0227 16:08:25.804289 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:25Z","lastTransitionTime":"2026-02-27T16:08:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:25 crc kubenswrapper[4830]: I0227 16:08:25.833023 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:25 crc kubenswrapper[4830]: I0227 16:08:25.833081 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:25 crc kubenswrapper[4830]: I0227 16:08:25.833099 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:25 crc kubenswrapper[4830]: I0227 16:08:25.833122 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:25 crc kubenswrapper[4830]: I0227 16:08:25.833139 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:25Z","lastTransitionTime":"2026-02-27T16:08:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:25 crc kubenswrapper[4830]: E0227 16:08:25.853500 4830 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:08:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:08:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:08:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:08:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"058e4d33-3c10-460a-8f66-1f2272cb9956\\\",\\\"systemUUID\\\":\\\"1d4a94de-760c-40e1-8054-66d250f336ee\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:25Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:25 crc kubenswrapper[4830]: I0227 16:08:25.858393 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:25 crc kubenswrapper[4830]: I0227 16:08:25.858447 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:25 crc kubenswrapper[4830]: I0227 16:08:25.858464 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:25 crc kubenswrapper[4830]: I0227 16:08:25.858495 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:25 crc kubenswrapper[4830]: I0227 16:08:25.858511 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:25Z","lastTransitionTime":"2026-02-27T16:08:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:25 crc kubenswrapper[4830]: E0227 16:08:25.877997 4830 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status [node status patch payload omitted: byte-identical to the previous retry] for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:25Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:25 crc kubenswrapper[4830]: I0227 16:08:25.883380 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:25 crc kubenswrapper[4830]: I0227 16:08:25.883440 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:25 crc kubenswrapper[4830]: I0227 16:08:25.883459 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:25 crc kubenswrapper[4830]: I0227 16:08:25.883484 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:25 crc kubenswrapper[4830]: I0227 16:08:25.883502 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:25Z","lastTransitionTime":"2026-02-27T16:08:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:25 crc kubenswrapper[4830]: E0227 16:08:25.902639 4830 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status [node status patch payload omitted: byte-identical to the previous retry] for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:25Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:25 crc kubenswrapper[4830]: I0227 16:08:25.907391 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:25 crc kubenswrapper[4830]: I0227 16:08:25.907441 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:25 crc kubenswrapper[4830]: I0227 16:08:25.907457 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:25 crc kubenswrapper[4830]: I0227 16:08:25.907480 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:25 crc kubenswrapper[4830]: I0227 16:08:25.907498 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:25Z","lastTransitionTime":"2026-02-27T16:08:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:25 crc kubenswrapper[4830]: E0227 16:08:25.926494 4830 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:08:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:08:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:08:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:08:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"058e4d33-3c10-460a-8f66-1f2272cb9956\\\",\\\"systemUUID\\\":\\\"1d4a94de-760c-40e1-8054-66d250f336ee\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:25Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:25 crc kubenswrapper[4830]: I0227 16:08:25.933113 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:25 crc kubenswrapper[4830]: I0227 16:08:25.933169 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:25 crc kubenswrapper[4830]: I0227 16:08:25.933185 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:25 crc kubenswrapper[4830]: I0227 16:08:25.933208 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:25 crc kubenswrapper[4830]: I0227 16:08:25.933225 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:25Z","lastTransitionTime":"2026-02-27T16:08:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:25 crc kubenswrapper[4830]: E0227 16:08:25.953184 4830 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:08:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:08:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:08:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:08:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"058e4d33-3c10-460a-8f66-1f2272cb9956\\\",\\\"systemUUID\\\":\\\"1d4a94de-760c-40e1-8054-66d250f336ee\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:25Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:25 crc kubenswrapper[4830]: E0227 16:08:25.953368 4830 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 27 16:08:25 crc kubenswrapper[4830]: I0227 16:08:25.955547 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:25 crc kubenswrapper[4830]: I0227 16:08:25.955620 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:25 crc kubenswrapper[4830]: I0227 16:08:25.955637 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:25 crc kubenswrapper[4830]: I0227 16:08:25.955655 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:25 crc kubenswrapper[4830]: I0227 16:08:25.955667 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:25Z","lastTransitionTime":"2026-02-27T16:08:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:26 crc kubenswrapper[4830]: I0227 16:08:26.058226 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:26 crc kubenswrapper[4830]: I0227 16:08:26.058288 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:26 crc kubenswrapper[4830]: I0227 16:08:26.058305 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:26 crc kubenswrapper[4830]: I0227 16:08:26.058330 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:26 crc kubenswrapper[4830]: I0227 16:08:26.058351 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:26Z","lastTransitionTime":"2026-02-27T16:08:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:26 crc kubenswrapper[4830]: I0227 16:08:26.162008 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:26 crc kubenswrapper[4830]: I0227 16:08:26.162062 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:26 crc kubenswrapper[4830]: I0227 16:08:26.162078 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:26 crc kubenswrapper[4830]: I0227 16:08:26.162101 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:26 crc kubenswrapper[4830]: I0227 16:08:26.162118 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:26Z","lastTransitionTime":"2026-02-27T16:08:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:26 crc kubenswrapper[4830]: I0227 16:08:26.265072 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:26 crc kubenswrapper[4830]: I0227 16:08:26.265131 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:26 crc kubenswrapper[4830]: I0227 16:08:26.265147 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:26 crc kubenswrapper[4830]: I0227 16:08:26.265171 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:26 crc kubenswrapper[4830]: I0227 16:08:26.265189 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:26Z","lastTransitionTime":"2026-02-27T16:08:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:26 crc kubenswrapper[4830]: I0227 16:08:26.368621 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:26 crc kubenswrapper[4830]: I0227 16:08:26.368683 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:26 crc kubenswrapper[4830]: I0227 16:08:26.368706 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:26 crc kubenswrapper[4830]: I0227 16:08:26.368737 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:26 crc kubenswrapper[4830]: I0227 16:08:26.368761 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:26Z","lastTransitionTime":"2026-02-27T16:08:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:26 crc kubenswrapper[4830]: I0227 16:08:26.471987 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:26 crc kubenswrapper[4830]: I0227 16:08:26.472042 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:26 crc kubenswrapper[4830]: I0227 16:08:26.472064 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:26 crc kubenswrapper[4830]: I0227 16:08:26.472091 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:26 crc kubenswrapper[4830]: I0227 16:08:26.472113 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:26Z","lastTransitionTime":"2026-02-27T16:08:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:26 crc kubenswrapper[4830]: I0227 16:08:26.575552 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:26 crc kubenswrapper[4830]: I0227 16:08:26.575639 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:26 crc kubenswrapper[4830]: I0227 16:08:26.575655 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:26 crc kubenswrapper[4830]: I0227 16:08:26.575675 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:26 crc kubenswrapper[4830]: I0227 16:08:26.575688 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:26Z","lastTransitionTime":"2026-02-27T16:08:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:26 crc kubenswrapper[4830]: I0227 16:08:26.679014 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:26 crc kubenswrapper[4830]: I0227 16:08:26.679077 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:26 crc kubenswrapper[4830]: I0227 16:08:26.679097 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:26 crc kubenswrapper[4830]: I0227 16:08:26.679125 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:26 crc kubenswrapper[4830]: I0227 16:08:26.679145 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:26Z","lastTransitionTime":"2026-02-27T16:08:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:08:26 crc kubenswrapper[4830]: I0227 16:08:26.761657 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 16:08:26 crc kubenswrapper[4830]: I0227 16:08:26.761796 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 16:08:26 crc kubenswrapper[4830]: I0227 16:08:26.762103 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 16:08:26 crc kubenswrapper[4830]: E0227 16:08:26.762084 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 16:08:26 crc kubenswrapper[4830]: I0227 16:08:26.762173 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kgdlg" Feb 27 16:08:26 crc kubenswrapper[4830]: E0227 16:08:26.762370 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 16:08:26 crc kubenswrapper[4830]: E0227 16:08:26.762586 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kgdlg" podUID="6ba2fe32-66e0-4bcd-a646-9d07c9a21c54" Feb 27 16:08:26 crc kubenswrapper[4830]: E0227 16:08:26.762790 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 16:08:26 crc kubenswrapper[4830]: I0227 16:08:26.781849 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:26 crc kubenswrapper[4830]: I0227 16:08:26.781895 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:26 crc kubenswrapper[4830]: I0227 16:08:26.781905 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:26 crc kubenswrapper[4830]: I0227 16:08:26.781920 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:26 crc kubenswrapper[4830]: I0227 16:08:26.781931 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:26Z","lastTransitionTime":"2026-02-27T16:08:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:26 crc kubenswrapper[4830]: I0227 16:08:26.884810 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:26 crc kubenswrapper[4830]: I0227 16:08:26.884865 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:26 crc kubenswrapper[4830]: I0227 16:08:26.884881 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:26 crc kubenswrapper[4830]: I0227 16:08:26.884904 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:26 crc kubenswrapper[4830]: I0227 16:08:26.884921 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:26Z","lastTransitionTime":"2026-02-27T16:08:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:26 crc kubenswrapper[4830]: I0227 16:08:26.988426 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:26 crc kubenswrapper[4830]: I0227 16:08:26.988542 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:26 crc kubenswrapper[4830]: I0227 16:08:26.988563 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:26 crc kubenswrapper[4830]: I0227 16:08:26.988589 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:26 crc kubenswrapper[4830]: I0227 16:08:26.988620 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:26Z","lastTransitionTime":"2026-02-27T16:08:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:27 crc kubenswrapper[4830]: I0227 16:08:27.091342 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:27 crc kubenswrapper[4830]: I0227 16:08:27.091402 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:27 crc kubenswrapper[4830]: I0227 16:08:27.091419 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:27 crc kubenswrapper[4830]: I0227 16:08:27.091442 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:27 crc kubenswrapper[4830]: I0227 16:08:27.091459 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:27Z","lastTransitionTime":"2026-02-27T16:08:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:27 crc kubenswrapper[4830]: I0227 16:08:27.194169 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:27 crc kubenswrapper[4830]: I0227 16:08:27.194228 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:27 crc kubenswrapper[4830]: I0227 16:08:27.194244 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:27 crc kubenswrapper[4830]: I0227 16:08:27.194266 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:27 crc kubenswrapper[4830]: I0227 16:08:27.194283 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:27Z","lastTransitionTime":"2026-02-27T16:08:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:27 crc kubenswrapper[4830]: I0227 16:08:27.297296 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:27 crc kubenswrapper[4830]: I0227 16:08:27.297408 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:27 crc kubenswrapper[4830]: I0227 16:08:27.297467 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:27 crc kubenswrapper[4830]: I0227 16:08:27.297490 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:27 crc kubenswrapper[4830]: I0227 16:08:27.297506 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:27Z","lastTransitionTime":"2026-02-27T16:08:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:27 crc kubenswrapper[4830]: I0227 16:08:27.399516 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:27 crc kubenswrapper[4830]: I0227 16:08:27.399586 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:27 crc kubenswrapper[4830]: I0227 16:08:27.399608 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:27 crc kubenswrapper[4830]: I0227 16:08:27.399633 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:27 crc kubenswrapper[4830]: I0227 16:08:27.399656 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:27Z","lastTransitionTime":"2026-02-27T16:08:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:27 crc kubenswrapper[4830]: I0227 16:08:27.502832 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:27 crc kubenswrapper[4830]: I0227 16:08:27.502882 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:27 crc kubenswrapper[4830]: I0227 16:08:27.502896 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:27 crc kubenswrapper[4830]: I0227 16:08:27.502912 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:27 crc kubenswrapper[4830]: I0227 16:08:27.502925 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:27Z","lastTransitionTime":"2026-02-27T16:08:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:27 crc kubenswrapper[4830]: I0227 16:08:27.606209 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:27 crc kubenswrapper[4830]: I0227 16:08:27.606271 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:27 crc kubenswrapper[4830]: I0227 16:08:27.606287 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:27 crc kubenswrapper[4830]: I0227 16:08:27.606312 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:27 crc kubenswrapper[4830]: I0227 16:08:27.606333 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:27Z","lastTransitionTime":"2026-02-27T16:08:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:27 crc kubenswrapper[4830]: I0227 16:08:27.709609 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:27 crc kubenswrapper[4830]: I0227 16:08:27.709675 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:27 crc kubenswrapper[4830]: I0227 16:08:27.709697 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:27 crc kubenswrapper[4830]: I0227 16:08:27.709726 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:27 crc kubenswrapper[4830]: I0227 16:08:27.709782 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:27Z","lastTransitionTime":"2026-02-27T16:08:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:27 crc kubenswrapper[4830]: I0227 16:08:27.812577 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:27 crc kubenswrapper[4830]: I0227 16:08:27.812644 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:27 crc kubenswrapper[4830]: I0227 16:08:27.812666 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:27 crc kubenswrapper[4830]: I0227 16:08:27.812690 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:27 crc kubenswrapper[4830]: I0227 16:08:27.812713 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:27Z","lastTransitionTime":"2026-02-27T16:08:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:27 crc kubenswrapper[4830]: I0227 16:08:27.915544 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:27 crc kubenswrapper[4830]: I0227 16:08:27.915609 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:27 crc kubenswrapper[4830]: I0227 16:08:27.915626 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:27 crc kubenswrapper[4830]: I0227 16:08:27.915650 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:27 crc kubenswrapper[4830]: I0227 16:08:27.915667 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:27Z","lastTransitionTime":"2026-02-27T16:08:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:28 crc kubenswrapper[4830]: I0227 16:08:28.018038 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:28 crc kubenswrapper[4830]: I0227 16:08:28.018079 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:28 crc kubenswrapper[4830]: I0227 16:08:28.018092 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:28 crc kubenswrapper[4830]: I0227 16:08:28.018109 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:28 crc kubenswrapper[4830]: I0227 16:08:28.018121 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:28Z","lastTransitionTime":"2026-02-27T16:08:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:28 crc kubenswrapper[4830]: I0227 16:08:28.121625 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:28 crc kubenswrapper[4830]: I0227 16:08:28.121698 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:28 crc kubenswrapper[4830]: I0227 16:08:28.121715 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:28 crc kubenswrapper[4830]: I0227 16:08:28.121737 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:28 crc kubenswrapper[4830]: I0227 16:08:28.121754 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:28Z","lastTransitionTime":"2026-02-27T16:08:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:28 crc kubenswrapper[4830]: I0227 16:08:28.224139 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:28 crc kubenswrapper[4830]: I0227 16:08:28.224203 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:28 crc kubenswrapper[4830]: I0227 16:08:28.224221 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:28 crc kubenswrapper[4830]: I0227 16:08:28.224244 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:28 crc kubenswrapper[4830]: I0227 16:08:28.224260 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:28Z","lastTransitionTime":"2026-02-27T16:08:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:28 crc kubenswrapper[4830]: I0227 16:08:28.327298 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:28 crc kubenswrapper[4830]: I0227 16:08:28.327354 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:28 crc kubenswrapper[4830]: I0227 16:08:28.327370 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:28 crc kubenswrapper[4830]: I0227 16:08:28.327393 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:28 crc kubenswrapper[4830]: I0227 16:08:28.327411 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:28Z","lastTransitionTime":"2026-02-27T16:08:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:28 crc kubenswrapper[4830]: I0227 16:08:28.430263 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:28 crc kubenswrapper[4830]: I0227 16:08:28.430319 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:28 crc kubenswrapper[4830]: I0227 16:08:28.430331 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:28 crc kubenswrapper[4830]: I0227 16:08:28.430349 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:28 crc kubenswrapper[4830]: I0227 16:08:28.430364 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:28Z","lastTransitionTime":"2026-02-27T16:08:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:28 crc kubenswrapper[4830]: I0227 16:08:28.533010 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:28 crc kubenswrapper[4830]: I0227 16:08:28.533093 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:28 crc kubenswrapper[4830]: I0227 16:08:28.533111 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:28 crc kubenswrapper[4830]: I0227 16:08:28.533134 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:28 crc kubenswrapper[4830]: I0227 16:08:28.533152 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:28Z","lastTransitionTime":"2026-02-27T16:08:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:28 crc kubenswrapper[4830]: I0227 16:08:28.635006 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:28 crc kubenswrapper[4830]: I0227 16:08:28.635060 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:28 crc kubenswrapper[4830]: I0227 16:08:28.635077 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:28 crc kubenswrapper[4830]: I0227 16:08:28.635099 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:28 crc kubenswrapper[4830]: I0227 16:08:28.635117 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:28Z","lastTransitionTime":"2026-02-27T16:08:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:28 crc kubenswrapper[4830]: I0227 16:08:28.737704 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:28 crc kubenswrapper[4830]: I0227 16:08:28.737777 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:28 crc kubenswrapper[4830]: I0227 16:08:28.737794 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:28 crc kubenswrapper[4830]: I0227 16:08:28.737817 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:28 crc kubenswrapper[4830]: I0227 16:08:28.737835 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:28Z","lastTransitionTime":"2026-02-27T16:08:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:08:28 crc kubenswrapper[4830]: I0227 16:08:28.761420 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 16:08:28 crc kubenswrapper[4830]: E0227 16:08:28.761729 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 16:08:28 crc kubenswrapper[4830]: I0227 16:08:28.761804 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 16:08:28 crc kubenswrapper[4830]: I0227 16:08:28.761867 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 16:08:28 crc kubenswrapper[4830]: I0227 16:08:28.761796 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kgdlg" Feb 27 16:08:28 crc kubenswrapper[4830]: E0227 16:08:28.761919 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 16:08:28 crc kubenswrapper[4830]: E0227 16:08:28.762070 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 16:08:28 crc kubenswrapper[4830]: E0227 16:08:28.762218 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kgdlg" podUID="6ba2fe32-66e0-4bcd-a646-9d07c9a21c54" Feb 27 16:08:28 crc kubenswrapper[4830]: I0227 16:08:28.841886 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:28 crc kubenswrapper[4830]: I0227 16:08:28.841979 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:28 crc kubenswrapper[4830]: I0227 16:08:28.842004 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:28 crc kubenswrapper[4830]: I0227 16:08:28.842029 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:28 crc kubenswrapper[4830]: I0227 16:08:28.842049 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:28Z","lastTransitionTime":"2026-02-27T16:08:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:28 crc kubenswrapper[4830]: I0227 16:08:28.944997 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:28 crc kubenswrapper[4830]: I0227 16:08:28.945037 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:28 crc kubenswrapper[4830]: I0227 16:08:28.945050 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:28 crc kubenswrapper[4830]: I0227 16:08:28.945066 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:28 crc kubenswrapper[4830]: I0227 16:08:28.945077 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:28Z","lastTransitionTime":"2026-02-27T16:08:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:29 crc kubenswrapper[4830]: I0227 16:08:29.048020 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:29 crc kubenswrapper[4830]: I0227 16:08:29.048082 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:29 crc kubenswrapper[4830]: I0227 16:08:29.048103 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:29 crc kubenswrapper[4830]: I0227 16:08:29.048134 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:29 crc kubenswrapper[4830]: I0227 16:08:29.048155 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:29Z","lastTransitionTime":"2026-02-27T16:08:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:29 crc kubenswrapper[4830]: I0227 16:08:29.150795 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:29 crc kubenswrapper[4830]: I0227 16:08:29.150854 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:29 crc kubenswrapper[4830]: I0227 16:08:29.150870 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:29 crc kubenswrapper[4830]: I0227 16:08:29.150891 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:29 crc kubenswrapper[4830]: I0227 16:08:29.150908 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:29Z","lastTransitionTime":"2026-02-27T16:08:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:29 crc kubenswrapper[4830]: I0227 16:08:29.253885 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:29 crc kubenswrapper[4830]: I0227 16:08:29.253974 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:29 crc kubenswrapper[4830]: I0227 16:08:29.253992 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:29 crc kubenswrapper[4830]: I0227 16:08:29.254016 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:29 crc kubenswrapper[4830]: I0227 16:08:29.254033 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:29Z","lastTransitionTime":"2026-02-27T16:08:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:29 crc kubenswrapper[4830]: I0227 16:08:29.357273 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:29 crc kubenswrapper[4830]: I0227 16:08:29.357335 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:29 crc kubenswrapper[4830]: I0227 16:08:29.357352 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:29 crc kubenswrapper[4830]: I0227 16:08:29.357376 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:29 crc kubenswrapper[4830]: I0227 16:08:29.357392 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:29Z","lastTransitionTime":"2026-02-27T16:08:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:29 crc kubenswrapper[4830]: I0227 16:08:29.460134 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:29 crc kubenswrapper[4830]: I0227 16:08:29.460177 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:29 crc kubenswrapper[4830]: I0227 16:08:29.460195 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:29 crc kubenswrapper[4830]: I0227 16:08:29.460216 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:29 crc kubenswrapper[4830]: I0227 16:08:29.460232 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:29Z","lastTransitionTime":"2026-02-27T16:08:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:29 crc kubenswrapper[4830]: I0227 16:08:29.562889 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:29 crc kubenswrapper[4830]: I0227 16:08:29.562985 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:29 crc kubenswrapper[4830]: I0227 16:08:29.563009 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:29 crc kubenswrapper[4830]: I0227 16:08:29.563039 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:29 crc kubenswrapper[4830]: I0227 16:08:29.563059 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:29Z","lastTransitionTime":"2026-02-27T16:08:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:29 crc kubenswrapper[4830]: I0227 16:08:29.665878 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:29 crc kubenswrapper[4830]: I0227 16:08:29.665923 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:29 crc kubenswrapper[4830]: I0227 16:08:29.665934 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:29 crc kubenswrapper[4830]: I0227 16:08:29.665974 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:29 crc kubenswrapper[4830]: I0227 16:08:29.665989 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:29Z","lastTransitionTime":"2026-02-27T16:08:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:29 crc kubenswrapper[4830]: I0227 16:08:29.762746 4830 scope.go:117] "RemoveContainer" containerID="699248f4512fa4743fb7f9da9ba7ab978d62198587e74a1e69ad4e60ebaa28f5" Feb 27 16:08:29 crc kubenswrapper[4830]: I0227 16:08:29.774850 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:29 crc kubenswrapper[4830]: I0227 16:08:29.774923 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:29 crc kubenswrapper[4830]: I0227 16:08:29.775053 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:29 crc kubenswrapper[4830]: I0227 16:08:29.775196 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:29 crc kubenswrapper[4830]: I0227 16:08:29.775218 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:29Z","lastTransitionTime":"2026-02-27T16:08:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:29 crc kubenswrapper[4830]: I0227 16:08:29.879877 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:29 crc kubenswrapper[4830]: I0227 16:08:29.879923 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:29 crc kubenswrapper[4830]: I0227 16:08:29.879939 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:29 crc kubenswrapper[4830]: I0227 16:08:29.879990 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:29 crc kubenswrapper[4830]: I0227 16:08:29.880008 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:29Z","lastTransitionTime":"2026-02-27T16:08:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:29 crc kubenswrapper[4830]: I0227 16:08:29.983003 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:29 crc kubenswrapper[4830]: I0227 16:08:29.983059 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:29 crc kubenswrapper[4830]: I0227 16:08:29.983076 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:29 crc kubenswrapper[4830]: I0227 16:08:29.983099 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:29 crc kubenswrapper[4830]: I0227 16:08:29.983116 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:29Z","lastTransitionTime":"2026-02-27T16:08:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:30 crc kubenswrapper[4830]: I0227 16:08:30.085385 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:30 crc kubenswrapper[4830]: I0227 16:08:30.085417 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:30 crc kubenswrapper[4830]: I0227 16:08:30.085430 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:30 crc kubenswrapper[4830]: I0227 16:08:30.085446 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:30 crc kubenswrapper[4830]: I0227 16:08:30.085458 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:30Z","lastTransitionTime":"2026-02-27T16:08:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:30 crc kubenswrapper[4830]: I0227 16:08:30.189007 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:30 crc kubenswrapper[4830]: I0227 16:08:30.189328 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:30 crc kubenswrapper[4830]: I0227 16:08:30.189348 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:30 crc kubenswrapper[4830]: I0227 16:08:30.189371 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:30 crc kubenswrapper[4830]: I0227 16:08:30.189389 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:30Z","lastTransitionTime":"2026-02-27T16:08:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:30 crc kubenswrapper[4830]: I0227 16:08:30.293325 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:30 crc kubenswrapper[4830]: I0227 16:08:30.293368 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:30 crc kubenswrapper[4830]: I0227 16:08:30.293389 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:30 crc kubenswrapper[4830]: I0227 16:08:30.293413 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:30 crc kubenswrapper[4830]: I0227 16:08:30.293429 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:30Z","lastTransitionTime":"2026-02-27T16:08:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:30 crc kubenswrapper[4830]: I0227 16:08:30.396018 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:30 crc kubenswrapper[4830]: I0227 16:08:30.396072 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:30 crc kubenswrapper[4830]: I0227 16:08:30.396086 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:30 crc kubenswrapper[4830]: I0227 16:08:30.396103 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:30 crc kubenswrapper[4830]: I0227 16:08:30.396115 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:30Z","lastTransitionTime":"2026-02-27T16:08:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:30 crc kubenswrapper[4830]: I0227 16:08:30.416044 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bf9lh_2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904/ovnkube-controller/1.log" Feb 27 16:08:30 crc kubenswrapper[4830]: I0227 16:08:30.418536 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bf9lh" event={"ID":"2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904","Type":"ContainerStarted","Data":"279aa904d7438057d878006234143ea48d8c40383ab648353776c01f12c147b6"} Feb 27 16:08:30 crc kubenswrapper[4830]: I0227 16:08:30.418992 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-bf9lh" Feb 27 16:08:30 crc kubenswrapper[4830]: I0227 16:08:30.434549 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:30Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:30 crc kubenswrapper[4830]: I0227 16:08:30.454517 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4995a66c4bc6d195947f67ef564aa18c7fe145e12067c2fca8e0de6b76b72f19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df5245f048370e7455a63a6ca01bffb8e514e22a123e45bc3a064ce4fbe27f45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:30Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:30 crc kubenswrapper[4830]: I0227 16:08:30.474217 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92b20f888e852fcbe89e6d5c685b4b0fad411956bb3dc3c5ba0e10d6b206e131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-27T16:08:30Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:30 crc kubenswrapper[4830]: I0227 16:08:30.499370 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:30 crc kubenswrapper[4830]: I0227 16:08:30.499424 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:30 crc kubenswrapper[4830]: I0227 16:08:30.499441 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:30 crc kubenswrapper[4830]: I0227 16:08:30.499464 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:30 crc kubenswrapper[4830]: I0227 16:08:30.499480 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:30Z","lastTransitionTime":"2026-02-27T16:08:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:30 crc kubenswrapper[4830]: I0227 16:08:30.507166 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bf9lh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3371981f3cdb5468ef6e9038108b3f9d884cec2f2db2f2754a3b5bf908d0372\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://273a7927f1c00c51d8c6157c6050f697430275ea8b1cac13ebe621d6556b4fef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a05759873cd3a2fb58756795e02d8247af4d8204388bf669ddc7cb1a8fbd7baa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc95e89196ccb52580164d57b311b57b2bcc7e666c3801f53378c59446c0e73f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:05Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05614397aaaa68f352f4263160c3463ce65513156517137bf09a3b5deb505c05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41cda3a7a261313b65434f3e8b885a9628d71f9c9a9c55f3e559226341c8aa41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://279aa904d7438057d878006234143ea48d8c40383ab648353776c01f12c147b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://699248f4512fa4743fb7f9da9ba7ab978d62198587e74a1e69ad4e60ebaa28f5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-27T16:08:16Z\\\",\\\"message\\\":\\\"mns:[] Mutations:[{Column:policies Mutator:insert Value:{GoSet:[{GoUUID:a5a72d02-1a0f-4f7f-a8c5-6923a1c4274a}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f6d604c1-9711-4e25-be6c-79ec28bbad1b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0227 
16:08:16.580502 6760 address_set.go:302] New(0d39bc5c-d5b9-432c-81be-2275bce5d7aa/default-network-controller:EgressIP:node-ips:v4:default/a712973235162149816) with []\\\\nI0227 16:08:16.580553 6760 address_set.go:302] New(aa6fc2dc-fab0-4812-b9da-809058e4dcf7/default-network-controller:EgressIP:egressip-served-pods:v4:default/a8519615025667110816) with []\\\\nI0227 16:08:16.580578 6760 address_set.go:302] New(bf133528-8652-4c84-85ff-881f0afe9837/default-network-controller:EgressService:egresssvc-served-pods:v4/a13607449821398607916) with []\\\\nI0227 16:08:16.580892 6760 factory.go:1336] Added *v1.Node event handler 7\\\\nI0227 16:08:16.581661 6760 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0227 16:08:16.582042 6760 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0227 16:08:16.582136 6760 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0227 16:08:16.582177 6760 ovnkube.go:599] Stopped ovnkube\\\\nI0227 16:08:16.582202 6760 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0227 16:08:16.582282 6760 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:15Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\
\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c9e139c0599bb81726ae3f42b0d9e6ee394e3d6641eb356cf5c63fe9260f4c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45e519054da5e98ba4be7deaac7911617f787b44d308bda8d4ee6fb7573336a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\
\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45e519054da5e98ba4be7deaac7911617f787b44d308bda8d4ee6fb7573336a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bf9lh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:30Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:30 crc kubenswrapper[4830]: I0227 16:08:30.524392 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00d6b7ce-4757-4275-8345-60c1b546ce25\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58864ba639422794b99f6190c7ab9a81537913f51574220acb3838f97b4f9421\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqztt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d403fb60beeb11f3cb72e7ca134b83a9c519375
fbb6727d2070118a6b924516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqztt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2tv5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:30Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:30 crc kubenswrapper[4830]: I0227 16:08:30.539491 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gqgb6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd5a4c5b-2008-4354-b26e-8763a631e55c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32024ba3e19bc33cc8197b4dd6fdf165f6de8fd7c3b1e85b2f4768c75c50736a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87p77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14a90f11c3291e6bde2a008f2e8e617c9c8db
36ee65c3bbf26112bf8707d7cd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87p77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gqgb6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:30Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:30 crc kubenswrapper[4830]: I0227 16:08:30.560282 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d20f886-cfdb-48c7-9754-6b7255b1124f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a3f71ef2ccab58baa94377c8f63ded83ca5f73a4d7a24d754719670685c55a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec37d31a82aedaacdf47fa540d508e6ca53626d082c45933c11a72f41a819a8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6cb8e7a94967cd24e883272ba2f923a3adae43cc6105ea7506c1fba144de241\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://acf80f37f1e38e051b10c7a1c0eeef8f0c3db7fc0e5653c54d82da7956a8eed2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acf80f37f1e38e051b10c7a1c0eeef8f0c3db7fc0e5653c54d82da7956a8eed2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T16:07:52Z\\\",\\\"message\\\":\\\"g file observer\\\\nW0227 16:07:52.525113 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 16:07:52.525413 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 16:07:52.526578 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-70351561/tls.crt::/tmp/serving-cert-70351561/tls.key\\\\\\\"\\\\nI0227 16:07:52.790383 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 16:07:52.794547 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 16:07:52.794587 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 16:07:52.794627 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 16:07:52.794638 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 16:07:52.801197 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0227 16:07:52.801263 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 16:07:52.801293 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0227 16:07:52.801246 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0227 16:07:52.801325 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 
16:07:52.801373 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 16:07:52.801389 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 16:07:52.801396 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0227 16:07:52.804476 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T16:07:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97ffad51412be8330198e944d4f687380d5066b777b54e362e8f8b772b808c89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1fc7975111dde3841671f046742eea3d0eefcb040eee8aadb99a7ca8eba8bd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1fc7975111dde3841671f046742eea3d0eefcb040eee8aadb99a7ca8eba8bd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:06:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:30Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:30 crc kubenswrapper[4830]: I0227 16:08:30.573299 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eebea1ec0fd24a7664f474db4e4cdb08e53c5c742c392900599dc3e7adcbee2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:30Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:30 crc kubenswrapper[4830]: I0227 16:08:30.587022 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p7298" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"616ebd42-6bbe-4536-ba35-f8b07f2f11b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a08121a78952c85326d07974c700397678777995d73cce9f24ace8932b3e9b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\
",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x9g9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p7298\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:30Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:30 crc kubenswrapper[4830]: I0227 16:08:30.602579 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:30 crc kubenswrapper[4830]: I0227 16:08:30.602642 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:30 crc kubenswrapper[4830]: I0227 16:08:30.602659 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:30 crc kubenswrapper[4830]: I0227 16:08:30.602687 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:30 crc kubenswrapper[4830]: I0227 16:08:30.602705 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:30Z","lastTransitionTime":"2026-02-27T16:08:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:30 crc kubenswrapper[4830]: I0227 16:08:30.605894 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kgdlg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ba2fe32-66e0-4bcd-a646-9d07c9a21c54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9l8vg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9l8vg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kgdlg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:30Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:30 crc 
kubenswrapper[4830]: I0227 16:08:30.621793 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f62df11-40bd-4531-baa9-5b7ab5679a67\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5046080bbc4855769fb50909ff06b8c2cfd2438a0b65749c89302a4e97342980\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\
":\\\"cri-o://97f85a479701374168505eed2e5a64ea144d1c48dce2022d60cb6ebaebfe159f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97f85a479701374168505eed2e5a64ea144d1c48dce2022d60cb6ebaebfe159f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:06:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:30Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:30 crc kubenswrapper[4830]: I0227 16:08:30.640271 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fsrq9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb72b0f7-1d22-4d13-9653-b1607aa2235d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b684822980dfea25ae15f46337d198cbbb8656b55c38874d917c2fae68431aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwzp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fsrq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:30Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:30 crc kubenswrapper[4830]: I0227 16:08:30.655489 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rgv8f" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"672682a0-e75f-4d6c-b4f2-542944327497\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5efac20f84ec097b144c4cb152a6eba826b5eac50e222e2618b75bf163cbe93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b56418e015eaf4499e3e5e8ade43a2e4fb4803d5621b0db3f
b365dd185d375f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b56418e015eaf4499e3e5e8ade43a2e4fb4803d5621b0db3fb365dd185d375f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://441562bcb0ff6663b1affd84ae53348e082907782a89293730d0d076e9e84d9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://441562bcb0ff6663b1affd84ae53348e082907782a89293730d0d076e9e84d9d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:
08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be1e4280634a32e892b1dbc9bb4034909146da92eb6b01aa69bb7208447a7394\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be1e4280634a32e892b1dbc9bb4034909146da92eb6b01aa69bb7208447a7394\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://d37bf60056d482e28c3252074c7ae82c3bd9884ba28af16b639c21795c9e3f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d37bf60056d482e28c3252074c7ae82c3bd9884ba28af16b639c21795c9e3f3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f343d2da5177c14f48359b1377e123308e3686e4adfba0407940fe1368592548\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f343d2da5177c14f48359b1377e123308e3686e4adfba0407940fe1368592548\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:10Z\\\",\\\"r
eason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c27a1b90defc90e0a69236bb9c462d6e5fd36e6500f0f4c720ccd960b0d3ee42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c27a1b90defc90e0a69236bb9c462d6e5fd36e6500f0f4c720ccd960b0d3ee42\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rgv8f\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:30Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:30 crc kubenswrapper[4830]: I0227 16:08:30.670688 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fcddf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6adbc0c4-e467-41f1-9190-d0dd3693eba6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2a66caf75365a8658dd7422b2f715e7f095b5bd7d96c8a0a96ce1a80cb99849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{
\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zrsv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fcddf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:30Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:30 crc kubenswrapper[4830]: I0227 16:08:30.690149 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:30Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:30 crc kubenswrapper[4830]: I0227 16:08:30.706115 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:30 crc kubenswrapper[4830]: I0227 16:08:30.706196 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:30 crc kubenswrapper[4830]: I0227 16:08:30.706221 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:30 crc 
kubenswrapper[4830]: I0227 16:08:30.706253 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:30 crc kubenswrapper[4830]: I0227 16:08:30.706278 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:30Z","lastTransitionTime":"2026-02-27T16:08:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:08:30 crc kubenswrapper[4830]: I0227 16:08:30.710686 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:30Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:30 crc kubenswrapper[4830]: I0227 16:08:30.761986 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 16:08:30 crc kubenswrapper[4830]: I0227 16:08:30.762072 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 16:08:30 crc kubenswrapper[4830]: E0227 16:08:30.762128 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 16:08:30 crc kubenswrapper[4830]: I0227 16:08:30.762161 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 16:08:30 crc kubenswrapper[4830]: I0227 16:08:30.762188 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kgdlg" Feb 27 16:08:30 crc kubenswrapper[4830]: I0227 16:08:30.762633 4830 scope.go:117] "RemoveContainer" containerID="acf80f37f1e38e051b10c7a1c0eeef8f0c3db7fc0e5653c54d82da7956a8eed2" Feb 27 16:08:30 crc kubenswrapper[4830]: E0227 16:08:30.763363 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 27 16:08:30 crc kubenswrapper[4830]: E0227 16:08:30.763568 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 16:08:30 crc kubenswrapper[4830]: E0227 16:08:30.763755 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 16:08:30 crc kubenswrapper[4830]: E0227 16:08:30.764008 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kgdlg" podUID="6ba2fe32-66e0-4bcd-a646-9d07c9a21c54" Feb 27 16:08:30 crc kubenswrapper[4830]: I0227 16:08:30.809416 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:30 crc kubenswrapper[4830]: I0227 16:08:30.809507 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:30 crc kubenswrapper[4830]: I0227 16:08:30.809527 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:30 crc kubenswrapper[4830]: I0227 16:08:30.809553 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:30 crc kubenswrapper[4830]: I0227 16:08:30.809574 4830 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:30Z","lastTransitionTime":"2026-02-27T16:08:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:08:30 crc kubenswrapper[4830]: I0227 16:08:30.912292 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:30 crc kubenswrapper[4830]: I0227 16:08:30.912356 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:30 crc kubenswrapper[4830]: I0227 16:08:30.912380 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:30 crc kubenswrapper[4830]: I0227 16:08:30.912412 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:30 crc kubenswrapper[4830]: I0227 16:08:30.912510 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:30Z","lastTransitionTime":"2026-02-27T16:08:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:31 crc kubenswrapper[4830]: I0227 16:08:31.020283 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:31 crc kubenswrapper[4830]: I0227 16:08:31.020354 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:31 crc kubenswrapper[4830]: I0227 16:08:31.020370 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:31 crc kubenswrapper[4830]: I0227 16:08:31.020395 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:31 crc kubenswrapper[4830]: I0227 16:08:31.020414 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:31Z","lastTransitionTime":"2026-02-27T16:08:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:31 crc kubenswrapper[4830]: I0227 16:08:31.123606 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:31 crc kubenswrapper[4830]: I0227 16:08:31.123674 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:31 crc kubenswrapper[4830]: I0227 16:08:31.123695 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:31 crc kubenswrapper[4830]: I0227 16:08:31.123725 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:31 crc kubenswrapper[4830]: I0227 16:08:31.123744 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:31Z","lastTransitionTime":"2026-02-27T16:08:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:31 crc kubenswrapper[4830]: I0227 16:08:31.226623 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:31 crc kubenswrapper[4830]: I0227 16:08:31.226666 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:31 crc kubenswrapper[4830]: I0227 16:08:31.226683 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:31 crc kubenswrapper[4830]: I0227 16:08:31.226705 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:31 crc kubenswrapper[4830]: I0227 16:08:31.226722 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:31Z","lastTransitionTime":"2026-02-27T16:08:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:31 crc kubenswrapper[4830]: I0227 16:08:31.330144 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:31 crc kubenswrapper[4830]: I0227 16:08:31.330211 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:31 crc kubenswrapper[4830]: I0227 16:08:31.330229 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:31 crc kubenswrapper[4830]: I0227 16:08:31.330253 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:31 crc kubenswrapper[4830]: I0227 16:08:31.330271 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:31Z","lastTransitionTime":"2026-02-27T16:08:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:31 crc kubenswrapper[4830]: I0227 16:08:31.424104 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bf9lh_2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904/ovnkube-controller/2.log" Feb 27 16:08:31 crc kubenswrapper[4830]: I0227 16:08:31.424963 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bf9lh_2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904/ovnkube-controller/1.log" Feb 27 16:08:31 crc kubenswrapper[4830]: I0227 16:08:31.428823 4830 generic.go:334] "Generic (PLEG): container finished" podID="2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904" containerID="279aa904d7438057d878006234143ea48d8c40383ab648353776c01f12c147b6" exitCode=1 Feb 27 16:08:31 crc kubenswrapper[4830]: I0227 16:08:31.428897 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bf9lh" event={"ID":"2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904","Type":"ContainerDied","Data":"279aa904d7438057d878006234143ea48d8c40383ab648353776c01f12c147b6"} Feb 27 16:08:31 crc kubenswrapper[4830]: I0227 16:08:31.428971 4830 scope.go:117] "RemoveContainer" containerID="699248f4512fa4743fb7f9da9ba7ab978d62198587e74a1e69ad4e60ebaa28f5" Feb 27 16:08:31 crc kubenswrapper[4830]: I0227 16:08:31.430011 4830 scope.go:117] "RemoveContainer" containerID="279aa904d7438057d878006234143ea48d8c40383ab648353776c01f12c147b6" Feb 27 16:08:31 crc kubenswrapper[4830]: E0227 16:08:31.430205 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-bf9lh_openshift-ovn-kubernetes(2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904)\"" pod="openshift-ovn-kubernetes/ovnkube-node-bf9lh" podUID="2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904" Feb 27 16:08:31 crc kubenswrapper[4830]: I0227 16:08:31.435924 4830 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:31 crc kubenswrapper[4830]: I0227 16:08:31.436017 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:31 crc kubenswrapper[4830]: I0227 16:08:31.436036 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:31 crc kubenswrapper[4830]: I0227 16:08:31.436061 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:31 crc kubenswrapper[4830]: I0227 16:08:31.436079 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:31Z","lastTransitionTime":"2026-02-27T16:08:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:31 crc kubenswrapper[4830]: I0227 16:08:31.445628 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d20f886-cfdb-48c7-9754-6b7255b1124f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a3f71ef2ccab58baa94377c8f63ded83ca5f73a4d7a24d754719670685c55a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec37d31a82aedaacdf47fa540d508e6ca53626d082c45933c11a72f41a819a8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b6cb8e7a94967cd24e883272ba2f923a3adae43cc6105ea7506c1fba144de241\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://acf80f37f1e38e051b10c7a1c0eeef8f0c3db7fc0e5653c54d82da7956a8eed2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acf80f37f1e38e051b10c7a1c0eeef8f0c3db7fc0e5653c54d82da7956a8eed2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T16:07:52Z\\\",\\\"message\\\":\\\"g file observer\\\\nW0227 16:07:52.525113 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 16:07:52.525413 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 16:07:52.526578 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-70351561/tls.crt::/tmp/serving-cert-70351561/tls.key\\\\\\\"\\\\nI0227 16:07:52.790383 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 16:07:52.794547 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 16:07:52.794587 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 16:07:52.794627 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 16:07:52.794638 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 16:07:52.801197 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0227 16:07:52.801263 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 16:07:52.801293 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0227 16:07:52.801246 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0227 16:07:52.801325 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 16:07:52.801373 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 16:07:52.801389 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 16:07:52.801396 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0227 16:07:52.804476 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T16:07:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97ffad51412be8330198e944d4f687380d5066b777b54e362e8f8b772b808c89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1fc7975111dde3841671f046742eea3d0eefcb040eee8aadb99a7ca8eba8bd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1fc7975111dde3841671f046742eea3d0eefcb040eee8aadb99a7ca8eba8bd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:06:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:31Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:31 crc kubenswrapper[4830]: I0227 16:08:31.458968 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p7298" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"616ebd42-6bbe-4536-ba35-f8b07f2f11b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a08121a78952c85326d07974c700397678777995d73cce9f24ace8932b3e9b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x9g9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p7298\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:31Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:31 crc kubenswrapper[4830]: I0227 16:08:31.476150 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kgdlg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ba2fe32-66e0-4bcd-a646-9d07c9a21c54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9l8vg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9l8vg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kgdlg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:31Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:31 crc 
kubenswrapper[4830]: I0227 16:08:31.490691 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f62df11-40bd-4531-baa9-5b7ab5679a67\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5046080bbc4855769fb50909ff06b8c2cfd2438a0b65749c89302a4e97342980\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\
":\\\"cri-o://97f85a479701374168505eed2e5a64ea144d1c48dce2022d60cb6ebaebfe159f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97f85a479701374168505eed2e5a64ea144d1c48dce2022d60cb6ebaebfe159f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:06:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:31Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:31 crc kubenswrapper[4830]: I0227 16:08:31.507267 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eebea1ec0fd24a7664f474db4e4cdb08e53c5c742c392900599dc3e7adcbee2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:31Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:31 crc kubenswrapper[4830]: I0227 16:08:31.526670 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rgv8f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"672682a0-e75f-4d6c-b4f2-542944327497\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5efac20f84ec097b144c4cb152a6eba826b5eac50e222e2618b75bf163cbe93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b56418e015eaf4499e3e5e8ade43a2e4fb4803d5621b0db3fb365dd185d375f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b56418e015eaf4499e3e5e8ade43a2e4fb4803d5621b0db3fb365dd185d375f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://441562bcb0ff6663b1affd84ae53348e082907782a89293730d0d076e9e84d9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://441562bcb0ff6663b1affd84ae53348e082907782a89293730d0d076e9e84d9d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be1e4280634a32e892b1dbc9bb4034909146da92eb6b01aa69bb7208447a7394\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be1e4280634a32e892b1dbc9bb4034909146da92eb6b01aa69bb7208447a7394\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-rele
ase\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d37bf60056d482e28c3252074c7ae82c3bd9884ba28af16b639c21795c9e3f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d37bf60056d482e28c3252074c7ae82c3bd9884ba28af16b639c21795c9e3f3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f343d2da5177c14f48359b1377e123308e3686e4adfba0407940fe1368592548\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"n
ame\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f343d2da5177c14f48359b1377e123308e3686e4adfba0407940fe1368592548\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c27a1b90defc90e0a69236bb9c462d6e5fd36e6500f0f4c720ccd960b0d3ee42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c27a1b90defc90e0a69236bb9c462d6e5fd36e6500f0f4c720ccd960b0d3ee42\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rgv8f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:31Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:31 crc kubenswrapper[4830]: I0227 16:08:31.539075 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:31 crc kubenswrapper[4830]: I0227 16:08:31.539142 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:31 crc kubenswrapper[4830]: I0227 16:08:31.539160 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:31 crc kubenswrapper[4830]: I0227 16:08:31.539186 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:31 crc kubenswrapper[4830]: I0227 16:08:31.539206 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:31Z","lastTransitionTime":"2026-02-27T16:08:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:31 crc kubenswrapper[4830]: I0227 16:08:31.540868 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fcddf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6adbc0c4-e467-41f1-9190-d0dd3693eba6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2a66caf75365a8658dd7422b2f715e7f095b5bd7d96c8a0a96ce1a80cb99849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zrsv\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fcddf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:31Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:31 crc kubenswrapper[4830]: I0227 16:08:31.560147 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:31Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:31 crc kubenswrapper[4830]: I0227 16:08:31.579247 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:31Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:31 crc kubenswrapper[4830]: I0227 16:08:31.599414 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fsrq9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb72b0f7-1d22-4d13-9653-b1607aa2235d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b684822980dfea25ae15f46337d198cbbb8656b55c38874d917c2fae68431aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwzp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fsrq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:31Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:31 crc kubenswrapper[4830]: I0227 16:08:31.619555 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4995a66c4bc6d195947f67ef564aa18c7fe145e12067c2fca8e0de6b76b72f19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df5245f048370e7455a63a6ca01bffb8e514e22a123e45bc3a064ce4fbe27f45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1
d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:31Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:31 crc kubenswrapper[4830]: I0227 16:08:31.636708 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92b20f888e852fcbe89e6d5c685b4b0fad411956bb3dc3c5ba0e10d6b206e131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-27T16:08:31Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:31 crc kubenswrapper[4830]: I0227 16:08:31.641877 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:31 crc kubenswrapper[4830]: I0227 16:08:31.641903 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:31 crc kubenswrapper[4830]: I0227 16:08:31.641911 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:31 crc kubenswrapper[4830]: I0227 16:08:31.641925 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:31 crc kubenswrapper[4830]: I0227 16:08:31.641934 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:31Z","lastTransitionTime":"2026-02-27T16:08:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:31 crc kubenswrapper[4830]: I0227 16:08:31.666428 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bf9lh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3371981f3cdb5468ef6e9038108b3f9d884cec2f2db2f2754a3b5bf908d0372\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://273a7927f1c00c51d8c6157c6050f697430275ea8b1cac13ebe621d6556b4fef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a05759873cd3a2fb58756795e02d8247af4d8204388bf669ddc7cb1a8fbd7baa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc95e89196ccb52580164d57b311b57b2bcc7e666c3801f53378c59446c0e73f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:05Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05614397aaaa68f352f4263160c3463ce65513156517137bf09a3b5deb505c05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41cda3a7a261313b65434f3e8b885a9628d71f9c9a9c55f3e559226341c8aa41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://279aa904d7438057d878006234143ea48d8c40383ab648353776c01f12c147b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://699248f4512fa4743fb7f9da9ba7ab978d62198587e74a1e69ad4e60ebaa28f5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-27T16:08:16Z\\\",\\\"message\\\":\\\"mns:[] Mutations:[{Column:policies Mutator:insert Value:{GoSet:[{GoUUID:a5a72d02-1a0f-4f7f-a8c5-6923a1c4274a}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f6d604c1-9711-4e25-be6c-79ec28bbad1b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0227 
16:08:16.580502 6760 address_set.go:302] New(0d39bc5c-d5b9-432c-81be-2275bce5d7aa/default-network-controller:EgressIP:node-ips:v4:default/a712973235162149816) with []\\\\nI0227 16:08:16.580553 6760 address_set.go:302] New(aa6fc2dc-fab0-4812-b9da-809058e4dcf7/default-network-controller:EgressIP:egressip-served-pods:v4:default/a8519615025667110816) with []\\\\nI0227 16:08:16.580578 6760 address_set.go:302] New(bf133528-8652-4c84-85ff-881f0afe9837/default-network-controller:EgressService:egresssvc-served-pods:v4/a13607449821398607916) with []\\\\nI0227 16:08:16.580892 6760 factory.go:1336] Added *v1.Node event handler 7\\\\nI0227 16:08:16.581661 6760 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0227 16:08:16.582042 6760 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0227 16:08:16.582136 6760 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0227 16:08:16.582177 6760 ovnkube.go:599] Stopped ovnkube\\\\nI0227 16:08:16.582202 6760 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0227 16:08:16.582282 6760 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:15Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://279aa904d7438057d878006234143ea48d8c40383ab648353776c01f12c147b6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-27T16:08:30Z\\\",\\\"message\\\":\\\"Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-console/console]} name:Service_openshift-console/console_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.194:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid 
== {d7d7b270-1480-47f8-bdf9-690dbab310cb}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0227 16:08:30.816934 6938 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"
/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c9e139c0599bb81726ae3f42b0d9e6ee394e3d6641eb356cf5c63fe9260f4c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\
\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45e519054da5e98ba4be7deaac7911617f787b44d308bda8d4ee6fb7573336a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45e519054da5e98ba4be7deaac7911617f787b44d308bda8d4ee6fb7573336a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bf9lh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:31Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:31 crc 
kubenswrapper[4830]: I0227 16:08:31.681316 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00d6b7ce-4757-4275-8345-60c1b546ce25\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58864ba639422794b99f6190c7ab9a81537913f51574220acb3838f97b4f9421\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqztt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d403fb60beeb11f3cb72e7ca134b83a9c519375fbb6727d2070118a6b924516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqztt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2tv5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:31Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:31 crc kubenswrapper[4830]: I0227 16:08:31.695707 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gqgb6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd5a4c5b-2008-4354-b26e-8763a631e55c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32024ba3e19bc33cc8197b4dd6fdf165f6de8fd7c3b1e85b2f4768c75c50736a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87p77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14a90f11c3291e6bde2a008f2e8e617c9c8db
36ee65c3bbf26112bf8707d7cd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87p77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gqgb6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:31Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:31 crc kubenswrapper[4830]: I0227 16:08:31.713817 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:31Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:31 crc kubenswrapper[4830]: I0227 16:08:31.745220 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:31 crc kubenswrapper[4830]: I0227 16:08:31.745263 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:31 crc kubenswrapper[4830]: I0227 16:08:31.745279 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:31 crc kubenswrapper[4830]: I0227 16:08:31.745302 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:31 crc kubenswrapper[4830]: I0227 16:08:31.745317 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:31Z","lastTransitionTime":"2026-02-27T16:08:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:08:31 crc kubenswrapper[4830]: I0227 16:08:31.847476 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:31 crc kubenswrapper[4830]: I0227 16:08:31.847542 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:31 crc kubenswrapper[4830]: I0227 16:08:31.847559 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:31 crc kubenswrapper[4830]: I0227 16:08:31.847584 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:31 crc kubenswrapper[4830]: I0227 16:08:31.847601 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:31Z","lastTransitionTime":"2026-02-27T16:08:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:31 crc kubenswrapper[4830]: I0227 16:08:31.950556 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:31 crc kubenswrapper[4830]: I0227 16:08:31.950633 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:31 crc kubenswrapper[4830]: I0227 16:08:31.950654 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:31 crc kubenswrapper[4830]: I0227 16:08:31.950678 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:31 crc kubenswrapper[4830]: I0227 16:08:31.950695 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:31Z","lastTransitionTime":"2026-02-27T16:08:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:32 crc kubenswrapper[4830]: I0227 16:08:32.053836 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:32 crc kubenswrapper[4830]: I0227 16:08:32.053888 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:32 crc kubenswrapper[4830]: I0227 16:08:32.053904 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:32 crc kubenswrapper[4830]: I0227 16:08:32.053926 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:32 crc kubenswrapper[4830]: I0227 16:08:32.053972 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:32Z","lastTransitionTime":"2026-02-27T16:08:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:32 crc kubenswrapper[4830]: I0227 16:08:32.156099 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:32 crc kubenswrapper[4830]: I0227 16:08:32.156151 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:32 crc kubenswrapper[4830]: I0227 16:08:32.156169 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:32 crc kubenswrapper[4830]: I0227 16:08:32.156192 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:32 crc kubenswrapper[4830]: I0227 16:08:32.156208 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:32Z","lastTransitionTime":"2026-02-27T16:08:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:32 crc kubenswrapper[4830]: I0227 16:08:32.259381 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:32 crc kubenswrapper[4830]: I0227 16:08:32.259447 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:32 crc kubenswrapper[4830]: I0227 16:08:32.259481 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:32 crc kubenswrapper[4830]: I0227 16:08:32.259510 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:32 crc kubenswrapper[4830]: I0227 16:08:32.259533 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:32Z","lastTransitionTime":"2026-02-27T16:08:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:32 crc kubenswrapper[4830]: I0227 16:08:32.362612 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:32 crc kubenswrapper[4830]: I0227 16:08:32.362688 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:32 crc kubenswrapper[4830]: I0227 16:08:32.362707 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:32 crc kubenswrapper[4830]: I0227 16:08:32.362734 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:32 crc kubenswrapper[4830]: I0227 16:08:32.362761 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:32Z","lastTransitionTime":"2026-02-27T16:08:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:32 crc kubenswrapper[4830]: I0227 16:08:32.435039 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bf9lh_2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904/ovnkube-controller/2.log" Feb 27 16:08:32 crc kubenswrapper[4830]: I0227 16:08:32.443863 4830 scope.go:117] "RemoveContainer" containerID="279aa904d7438057d878006234143ea48d8c40383ab648353776c01f12c147b6" Feb 27 16:08:32 crc kubenswrapper[4830]: E0227 16:08:32.444301 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-bf9lh_openshift-ovn-kubernetes(2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904)\"" pod="openshift-ovn-kubernetes/ovnkube-node-bf9lh" podUID="2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904" Feb 27 16:08:32 crc kubenswrapper[4830]: I0227 16:08:32.464357 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:32Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:32 crc kubenswrapper[4830]: I0227 16:08:32.466536 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:32 crc kubenswrapper[4830]: I0227 16:08:32.466610 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:32 crc kubenswrapper[4830]: I0227 16:08:32.466635 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:32 crc 
kubenswrapper[4830]: I0227 16:08:32.466664 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:32 crc kubenswrapper[4830]: I0227 16:08:32.466687 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:32Z","lastTransitionTime":"2026-02-27T16:08:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:08:32 crc kubenswrapper[4830]: I0227 16:08:32.485017 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:32Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:32 crc kubenswrapper[4830]: I0227 16:08:32.503928 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fsrq9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb72b0f7-1d22-4d13-9653-b1607aa2235d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b684822980dfea25ae15f46337d198cbbb8656b55c38874d917c2fae68431aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwzp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fsrq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:32Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:32 crc kubenswrapper[4830]: I0227 16:08:32.526801 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rgv8f" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"672682a0-e75f-4d6c-b4f2-542944327497\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5efac20f84ec097b144c4cb152a6eba826b5eac50e222e2618b75bf163cbe93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b56418e015eaf4499e3e5e8ade43a2e4fb4803d5621b0db3f
b365dd185d375f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b56418e015eaf4499e3e5e8ade43a2e4fb4803d5621b0db3fb365dd185d375f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://441562bcb0ff6663b1affd84ae53348e082907782a89293730d0d076e9e84d9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://441562bcb0ff6663b1affd84ae53348e082907782a89293730d0d076e9e84d9d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:
08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be1e4280634a32e892b1dbc9bb4034909146da92eb6b01aa69bb7208447a7394\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be1e4280634a32e892b1dbc9bb4034909146da92eb6b01aa69bb7208447a7394\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://d37bf60056d482e28c3252074c7ae82c3bd9884ba28af16b639c21795c9e3f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d37bf60056d482e28c3252074c7ae82c3bd9884ba28af16b639c21795c9e3f3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f343d2da5177c14f48359b1377e123308e3686e4adfba0407940fe1368592548\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f343d2da5177c14f48359b1377e123308e3686e4adfba0407940fe1368592548\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:10Z\\\",\\\"r
eason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c27a1b90defc90e0a69236bb9c462d6e5fd36e6500f0f4c720ccd960b0d3ee42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c27a1b90defc90e0a69236bb9c462d6e5fd36e6500f0f4c720ccd960b0d3ee42\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rgv8f\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:32Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:32 crc kubenswrapper[4830]: I0227 16:08:32.542258 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fcddf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6adbc0c4-e467-41f1-9190-d0dd3693eba6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2a66caf75365a8658dd7422b2f715e7f095b5bd7d96c8a0a96ce1a80cb99849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{
\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zrsv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fcddf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:32Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:32 crc kubenswrapper[4830]: I0227 16:08:32.562827 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:32Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:32 crc kubenswrapper[4830]: I0227 16:08:32.569408 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:32 crc kubenswrapper[4830]: I0227 16:08:32.569473 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:32 crc kubenswrapper[4830]: I0227 16:08:32.569491 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:32 crc kubenswrapper[4830]: I0227 
16:08:32.569517 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:32 crc kubenswrapper[4830]: I0227 16:08:32.569535 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:32Z","lastTransitionTime":"2026-02-27T16:08:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:08:32 crc kubenswrapper[4830]: I0227 16:08:32.583310 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4995a66c4bc6d195947f67ef564aa18c7fe145e12067c2fca8e0de6b76b72f19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,
\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df5245f048370e7455a63a6ca01bffb8e514e22a123e45bc3a064ce4fbe27f45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:32Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:32 crc kubenswrapper[4830]: I0227 16:08:32.600406 4830 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92b20f888e852fcbe89e6d5c685b4b0fad411956bb3dc3c5ba0e10d6b206e131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:32Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:32 crc kubenswrapper[4830]: I0227 16:08:32.630107 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bf9lh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3371981f3cdb5468ef6e9038108b3f9d884cec2f2db2f2754a3b5bf908d0372\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://273a7927f1c00c51d8c6157c6050f697430275ea8b1cac13ebe621d6556b4fef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a05759873cd3a2fb58756795e02d8247af4d8204388bf669ddc7cb1a8fbd7baa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc95e89196ccb52580164d57b311b57b2bcc7e666c3801f53378c59446c0e73f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:05Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05614397aaaa68f352f4263160c3463ce65513156517137bf09a3b5deb505c05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41cda3a7a261313b65434f3e8b885a9628d71f9c9a9c55f3e559226341c8aa41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://279aa904d7438057d878006234143ea48d8c40383ab648353776c01f12c147b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://279aa904d7438057d878006234143ea48d8c40383ab648353776c01f12c147b6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-27T16:08:30Z\\\",\\\"message\\\":\\\"Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-console/console]} name:Service_openshift-console/console_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} 
vips:{GoMap:map[10.217.5.194:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d7d7b270-1480-47f8-bdf9-690dbab310cb}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0227 16:08:30.816934 6938 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:29Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bf9lh_openshift-ovn-kubernetes(2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c9e139c0599bb81726ae3f42b0d9e6ee394e3d6641eb356cf5c63fe9260f4c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45e519054da5e98ba4be7deaac7911617f787b44d308bda8d4ee6fb7573336a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45e519054da5e98ba4
be7deaac7911617f787b44d308bda8d4ee6fb7573336a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bf9lh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:32Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:32 crc kubenswrapper[4830]: I0227 16:08:32.647533 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00d6b7ce-4757-4275-8345-60c1b546ce25\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58864ba639422794b99f6190c7ab9a81537913f51574220acb3838f97b4f9421\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqztt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d403fb60beeb11f3cb72e7ca134b83a9c519375
fbb6727d2070118a6b924516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqztt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2tv5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:32Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:32 crc kubenswrapper[4830]: I0227 16:08:32.664095 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gqgb6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd5a4c5b-2008-4354-b26e-8763a631e55c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32024ba3e19bc33cc8197b4dd6fdf165f6de8fd7c3b1e85b2f4768c75c50736a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87p77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14a90f11c3291e6bde2a008f2e8e617c9c8db
36ee65c3bbf26112bf8707d7cd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87p77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gqgb6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:32Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:32 crc kubenswrapper[4830]: I0227 16:08:32.672503 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:32 crc kubenswrapper[4830]: I0227 16:08:32.672564 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:32 crc kubenswrapper[4830]: I0227 16:08:32.672582 4830 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:32 crc kubenswrapper[4830]: I0227 16:08:32.672607 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:32 crc kubenswrapper[4830]: I0227 16:08:32.672625 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:32Z","lastTransitionTime":"2026-02-27T16:08:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:08:32 crc kubenswrapper[4830]: I0227 16:08:32.684184 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d20f886-cfdb-48c7-9754-6b7255b1124f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a3f71ef2ccab58baa94377c8f63ded83ca5f73a4d7a24d754719670685c55a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec37d31a82aedaacdf47fa540d508e6ca53626d082c45933c11a72f41a819a8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b6cb8e7a94967cd24e883272ba2f923a3adae43cc6105ea7506c1fba144de241\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://acf80f37f1e38e051b10c7a1c0eeef8f0c3db7fc0e5653c54d82da7956a8eed2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acf80f37f1e38e051b10c7a1c0eeef8f0c3db7fc0e5653c54d82da7956a8eed2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T16:07:52Z\\\",\\\"message\\\":\\\"g file observer\\\\nW0227 16:07:52.525113 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 16:07:52.525413 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 16:07:52.526578 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-70351561/tls.crt::/tmp/serving-cert-70351561/tls.key\\\\\\\"\\\\nI0227 16:07:52.790383 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 16:07:52.794547 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 16:07:52.794587 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 16:07:52.794627 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 16:07:52.794638 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 16:07:52.801197 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0227 16:07:52.801263 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 16:07:52.801293 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0227 16:07:52.801246 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0227 16:07:52.801325 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 16:07:52.801373 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 16:07:52.801389 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 16:07:52.801396 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0227 16:07:52.804476 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T16:07:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97ffad51412be8330198e944d4f687380d5066b777b54e362e8f8b772b808c89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1fc7975111dde3841671f046742eea3d0eefcb040eee8aadb99a7ca8eba8bd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1fc7975111dde3841671f046742eea3d0eefcb040eee8aadb99a7ca8eba8bd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:06:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:32Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:32 crc kubenswrapper[4830]: I0227 16:08:32.700423 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f62df11-40bd-4531-baa9-5b7ab5679a67\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5046080bbc4855769fb50909ff06b8c2cfd2438a0b65749c89302a4e97342980\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2
42b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97f85a479701374168505eed2e5a64ea144d1c48dce2022d60cb6ebaebfe159f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97f85a479701374168505eed2e5a64ea144d1c48dce2022d60cb6ebaebfe159f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:06:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:32Z is after 
2025-08-24T17:21:41Z" Feb 27 16:08:32 crc kubenswrapper[4830]: I0227 16:08:32.720459 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eebea1ec0fd24a7664f474db4e4cdb08e53c5c742c392900599dc3e7adcbee2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:32Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:32 crc kubenswrapper[4830]: I0227 16:08:32.735180 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p7298" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"616ebd42-6bbe-4536-ba35-f8b07f2f11b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a08121a78952c85326d07974c700397678777995d73cce9f24ace8932b3e9b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x9g9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p7298\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:32Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:32 crc kubenswrapper[4830]: I0227 16:08:32.750172 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kgdlg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ba2fe32-66e0-4bcd-a646-9d07c9a21c54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9l8vg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9l8vg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kgdlg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:32Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:32 crc 
kubenswrapper[4830]: I0227 16:08:32.761718 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kgdlg" Feb 27 16:08:32 crc kubenswrapper[4830]: I0227 16:08:32.761849 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 16:08:32 crc kubenswrapper[4830]: E0227 16:08:32.762053 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kgdlg" podUID="6ba2fe32-66e0-4bcd-a646-9d07c9a21c54" Feb 27 16:08:32 crc kubenswrapper[4830]: I0227 16:08:32.762127 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 16:08:32 crc kubenswrapper[4830]: I0227 16:08:32.762186 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 16:08:32 crc kubenswrapper[4830]: E0227 16:08:32.762333 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 16:08:32 crc kubenswrapper[4830]: E0227 16:08:32.762532 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 16:08:32 crc kubenswrapper[4830]: E0227 16:08:32.762653 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 16:08:32 crc kubenswrapper[4830]: I0227 16:08:32.775690 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:32 crc kubenswrapper[4830]: I0227 16:08:32.775744 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:32 crc kubenswrapper[4830]: I0227 16:08:32.775761 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:32 crc kubenswrapper[4830]: I0227 16:08:32.775786 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:32 crc kubenswrapper[4830]: I0227 16:08:32.775805 4830 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:32Z","lastTransitionTime":"2026-02-27T16:08:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:08:32 crc kubenswrapper[4830]: I0227 16:08:32.878487 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:32 crc kubenswrapper[4830]: I0227 16:08:32.878541 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:32 crc kubenswrapper[4830]: I0227 16:08:32.878560 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:32 crc kubenswrapper[4830]: I0227 16:08:32.878589 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:32 crc kubenswrapper[4830]: I0227 16:08:32.878611 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:32Z","lastTransitionTime":"2026-02-27T16:08:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:32 crc kubenswrapper[4830]: I0227 16:08:32.981524 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:32 crc kubenswrapper[4830]: I0227 16:08:32.981565 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:32 crc kubenswrapper[4830]: I0227 16:08:32.981574 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:32 crc kubenswrapper[4830]: I0227 16:08:32.981587 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:32 crc kubenswrapper[4830]: I0227 16:08:32.981599 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:32Z","lastTransitionTime":"2026-02-27T16:08:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:33 crc kubenswrapper[4830]: I0227 16:08:33.084066 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:33 crc kubenswrapper[4830]: I0227 16:08:33.084126 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:33 crc kubenswrapper[4830]: I0227 16:08:33.084144 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:33 crc kubenswrapper[4830]: I0227 16:08:33.084170 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:33 crc kubenswrapper[4830]: I0227 16:08:33.084191 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:33Z","lastTransitionTime":"2026-02-27T16:08:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:33 crc kubenswrapper[4830]: I0227 16:08:33.187249 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:33 crc kubenswrapper[4830]: I0227 16:08:33.187311 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:33 crc kubenswrapper[4830]: I0227 16:08:33.187333 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:33 crc kubenswrapper[4830]: I0227 16:08:33.187362 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:33 crc kubenswrapper[4830]: I0227 16:08:33.187386 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:33Z","lastTransitionTime":"2026-02-27T16:08:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:33 crc kubenswrapper[4830]: I0227 16:08:33.290438 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:33 crc kubenswrapper[4830]: I0227 16:08:33.290471 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:33 crc kubenswrapper[4830]: I0227 16:08:33.290480 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:33 crc kubenswrapper[4830]: I0227 16:08:33.290495 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:33 crc kubenswrapper[4830]: I0227 16:08:33.290504 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:33Z","lastTransitionTime":"2026-02-27T16:08:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:33 crc kubenswrapper[4830]: I0227 16:08:33.392733 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:33 crc kubenswrapper[4830]: I0227 16:08:33.392794 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:33 crc kubenswrapper[4830]: I0227 16:08:33.392811 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:33 crc kubenswrapper[4830]: I0227 16:08:33.392834 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:33 crc kubenswrapper[4830]: I0227 16:08:33.392854 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:33Z","lastTransitionTime":"2026-02-27T16:08:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:33 crc kubenswrapper[4830]: I0227 16:08:33.496064 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:33 crc kubenswrapper[4830]: I0227 16:08:33.496140 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:33 crc kubenswrapper[4830]: I0227 16:08:33.496158 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:33 crc kubenswrapper[4830]: I0227 16:08:33.496184 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:33 crc kubenswrapper[4830]: I0227 16:08:33.496204 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:33Z","lastTransitionTime":"2026-02-27T16:08:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:33 crc kubenswrapper[4830]: I0227 16:08:33.599232 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:33 crc kubenswrapper[4830]: I0227 16:08:33.599294 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:33 crc kubenswrapper[4830]: I0227 16:08:33.599315 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:33 crc kubenswrapper[4830]: I0227 16:08:33.599340 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:33 crc kubenswrapper[4830]: I0227 16:08:33.599359 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:33Z","lastTransitionTime":"2026-02-27T16:08:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:33 crc kubenswrapper[4830]: I0227 16:08:33.702144 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:33 crc kubenswrapper[4830]: I0227 16:08:33.702201 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:33 crc kubenswrapper[4830]: I0227 16:08:33.702217 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:33 crc kubenswrapper[4830]: I0227 16:08:33.702241 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:33 crc kubenswrapper[4830]: I0227 16:08:33.702259 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:33Z","lastTransitionTime":"2026-02-27T16:08:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:33 crc kubenswrapper[4830]: I0227 16:08:33.804889 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:33 crc kubenswrapper[4830]: I0227 16:08:33.805052 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:33 crc kubenswrapper[4830]: I0227 16:08:33.805076 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:33 crc kubenswrapper[4830]: I0227 16:08:33.805098 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:33 crc kubenswrapper[4830]: I0227 16:08:33.805113 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:33Z","lastTransitionTime":"2026-02-27T16:08:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:33 crc kubenswrapper[4830]: I0227 16:08:33.907903 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:33 crc kubenswrapper[4830]: I0227 16:08:33.908024 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:33 crc kubenswrapper[4830]: I0227 16:08:33.908050 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:33 crc kubenswrapper[4830]: I0227 16:08:33.908077 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:33 crc kubenswrapper[4830]: I0227 16:08:33.908094 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:33Z","lastTransitionTime":"2026-02-27T16:08:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:34 crc kubenswrapper[4830]: I0227 16:08:34.010425 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:34 crc kubenswrapper[4830]: I0227 16:08:34.010480 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:34 crc kubenswrapper[4830]: I0227 16:08:34.010498 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:34 crc kubenswrapper[4830]: I0227 16:08:34.010523 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:34 crc kubenswrapper[4830]: I0227 16:08:34.010540 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:34Z","lastTransitionTime":"2026-02-27T16:08:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:34 crc kubenswrapper[4830]: I0227 16:08:34.112789 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:34 crc kubenswrapper[4830]: I0227 16:08:34.112846 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:34 crc kubenswrapper[4830]: I0227 16:08:34.112865 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:34 crc kubenswrapper[4830]: I0227 16:08:34.112889 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:34 crc kubenswrapper[4830]: I0227 16:08:34.112908 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:34Z","lastTransitionTime":"2026-02-27T16:08:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:34 crc kubenswrapper[4830]: I0227 16:08:34.218505 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:34 crc kubenswrapper[4830]: I0227 16:08:34.218593 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:34 crc kubenswrapper[4830]: I0227 16:08:34.218631 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:34 crc kubenswrapper[4830]: I0227 16:08:34.218664 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:34 crc kubenswrapper[4830]: I0227 16:08:34.218689 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:34Z","lastTransitionTime":"2026-02-27T16:08:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:34 crc kubenswrapper[4830]: I0227 16:08:34.322442 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:34 crc kubenswrapper[4830]: I0227 16:08:34.322506 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:34 crc kubenswrapper[4830]: I0227 16:08:34.322523 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:34 crc kubenswrapper[4830]: I0227 16:08:34.322552 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:34 crc kubenswrapper[4830]: I0227 16:08:34.322571 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:34Z","lastTransitionTime":"2026-02-27T16:08:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:34 crc kubenswrapper[4830]: I0227 16:08:34.425130 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:34 crc kubenswrapper[4830]: I0227 16:08:34.425187 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:34 crc kubenswrapper[4830]: I0227 16:08:34.425204 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:34 crc kubenswrapper[4830]: I0227 16:08:34.425255 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:34 crc kubenswrapper[4830]: I0227 16:08:34.425272 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:34Z","lastTransitionTime":"2026-02-27T16:08:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:34 crc kubenswrapper[4830]: I0227 16:08:34.528603 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:34 crc kubenswrapper[4830]: I0227 16:08:34.528670 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:34 crc kubenswrapper[4830]: I0227 16:08:34.528688 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:34 crc kubenswrapper[4830]: I0227 16:08:34.528713 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:34 crc kubenswrapper[4830]: I0227 16:08:34.528732 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:34Z","lastTransitionTime":"2026-02-27T16:08:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:34 crc kubenswrapper[4830]: I0227 16:08:34.631325 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:34 crc kubenswrapper[4830]: I0227 16:08:34.631389 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:34 crc kubenswrapper[4830]: I0227 16:08:34.631409 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:34 crc kubenswrapper[4830]: I0227 16:08:34.631433 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:34 crc kubenswrapper[4830]: I0227 16:08:34.631451 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:34Z","lastTransitionTime":"2026-02-27T16:08:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:34 crc kubenswrapper[4830]: I0227 16:08:34.734172 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:34 crc kubenswrapper[4830]: I0227 16:08:34.734240 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:34 crc kubenswrapper[4830]: I0227 16:08:34.734260 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:34 crc kubenswrapper[4830]: I0227 16:08:34.734288 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:34 crc kubenswrapper[4830]: I0227 16:08:34.734307 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:34Z","lastTransitionTime":"2026-02-27T16:08:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:34 crc kubenswrapper[4830]: I0227 16:08:34.743376 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 16:08:34 crc kubenswrapper[4830]: I0227 16:08:34.743594 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 16:08:34 crc kubenswrapper[4830]: E0227 16:08:34.743743 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 16:09:06.743676602 +0000 UTC m=+142.832949095 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:08:34 crc kubenswrapper[4830]: E0227 16:08:34.743810 4830 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 27 16:08:34 crc kubenswrapper[4830]: E0227 16:08:34.743852 4830 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 27 16:08:34 crc kubenswrapper[4830]: E0227 16:08:34.743878 4830 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 16:08:34 crc kubenswrapper[4830]: I0227 16:08:34.743977 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 16:08:34 crc kubenswrapper[4830]: E0227 16:08:34.744007 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 
nodeName:}" failed. No retries permitted until 2026-02-27 16:09:06.743941379 +0000 UTC m=+142.833213882 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 16:08:34 crc kubenswrapper[4830]: I0227 16:08:34.744137 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 16:08:34 crc kubenswrapper[4830]: I0227 16:08:34.744182 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 16:08:34 crc kubenswrapper[4830]: E0227 16:08:34.744229 4830 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 27 16:08:34 crc kubenswrapper[4830]: E0227 16:08:34.744267 4830 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 27 16:08:34 crc kubenswrapper[4830]: E0227 16:08:34.744290 4830 projected.go:194] Error preparing 
data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 16:08:34 crc kubenswrapper[4830]: E0227 16:08:34.744370 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-27 16:09:06.744345249 +0000 UTC m=+142.833617802 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 16:08:34 crc kubenswrapper[4830]: E0227 16:08:34.744418 4830 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 27 16:08:34 crc kubenswrapper[4830]: E0227 16:08:34.744530 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-27 16:09:06.744495573 +0000 UTC m=+142.833768076 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 27 16:08:34 crc kubenswrapper[4830]: E0227 16:08:34.744417 4830 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 27 16:08:34 crc kubenswrapper[4830]: E0227 16:08:34.744657 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-27 16:09:06.744630806 +0000 UTC m=+142.833903349 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 27 16:08:34 crc kubenswrapper[4830]: I0227 16:08:34.761809 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 16:08:34 crc kubenswrapper[4830]: I0227 16:08:34.762001 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-kgdlg" Feb 27 16:08:34 crc kubenswrapper[4830]: E0227 16:08:34.762083 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 16:08:34 crc kubenswrapper[4830]: I0227 16:08:34.762159 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 16:08:34 crc kubenswrapper[4830]: I0227 16:08:34.762009 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 16:08:34 crc kubenswrapper[4830]: E0227 16:08:34.762244 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 16:08:34 crc kubenswrapper[4830]: E0227 16:08:34.762404 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 16:08:34 crc kubenswrapper[4830]: E0227 16:08:34.763434 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kgdlg" podUID="6ba2fe32-66e0-4bcd-a646-9d07c9a21c54" Feb 27 16:08:34 crc kubenswrapper[4830]: I0227 16:08:34.783146 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Feb 27 16:08:34 crc kubenswrapper[4830]: I0227 16:08:34.787543 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d20f886-cfdb-48c7-9754-6b7255b1124f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a3f71ef2ccab58baa94377c8f63ded83ca5f73a4d7a24d754719670685c55a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec37d31a82aedaacdf47fa540d508e6ca53626d082c45933c11a72f41a819a8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b6cb8e7a94967cd24e883272ba2f923a3adae43cc6105ea7506c1fba144de241\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://acf80f37f1e38e051b10c7a1c0eeef8f0c3db7fc0e5653c54d82da7956a8eed2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acf80f37f1e38e051b10c7a1c0eeef8f0c3db7fc0e5653c54d82da7956a8eed2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T16:07:52Z\\\",\\\"message\\\":\\\"g file observer\\\\nW0227 16:07:52.525113 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 16:07:52.525413 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 16:07:52.526578 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-70351561/tls.crt::/tmp/serving-cert-70351561/tls.key\\\\\\\"\\\\nI0227 16:07:52.790383 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 16:07:52.794547 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 16:07:52.794587 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 16:07:52.794627 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 16:07:52.794638 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 16:07:52.801197 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0227 16:07:52.801263 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 16:07:52.801293 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0227 16:07:52.801246 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0227 16:07:52.801325 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 16:07:52.801373 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 16:07:52.801389 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 16:07:52.801396 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0227 16:07:52.804476 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T16:07:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97ffad51412be8330198e944d4f687380d5066b777b54e362e8f8b772b808c89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1fc7975111dde3841671f046742eea3d0eefcb040eee8aadb99a7ca8eba8bd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1fc7975111dde3841671f046742eea3d0eefcb040eee8aadb99a7ca8eba8bd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:06:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:34Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:34 crc kubenswrapper[4830]: I0227 16:08:34.811086 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eebea1ec0fd24a7664f474db4e4cdb08e53c5c742c392900599dc3e7adcbee2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\
":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:34Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:34 crc kubenswrapper[4830]: I0227 16:08:34.827724 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p7298" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"616ebd42-6bbe-4536-ba35-f8b07f2f11b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a08121a78952c85326d07974c700397678777995d73cce9f24ace8932b3e9b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x9g9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p7298\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:34Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:34 crc kubenswrapper[4830]: I0227 16:08:34.837695 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:34 crc kubenswrapper[4830]: I0227 16:08:34.837746 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:34 crc kubenswrapper[4830]: I0227 16:08:34.837766 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:34 crc kubenswrapper[4830]: I0227 16:08:34.837793 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:34 crc kubenswrapper[4830]: I0227 16:08:34.837814 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:34Z","lastTransitionTime":"2026-02-27T16:08:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:34 crc kubenswrapper[4830]: I0227 16:08:34.844780 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6ba2fe32-66e0-4bcd-a646-9d07c9a21c54-metrics-certs\") pod \"network-metrics-daemon-kgdlg\" (UID: \"6ba2fe32-66e0-4bcd-a646-9d07c9a21c54\") " pod="openshift-multus/network-metrics-daemon-kgdlg" Feb 27 16:08:34 crc kubenswrapper[4830]: E0227 16:08:34.846105 4830 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 27 16:08:34 crc kubenswrapper[4830]: E0227 16:08:34.846221 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6ba2fe32-66e0-4bcd-a646-9d07c9a21c54-metrics-certs podName:6ba2fe32-66e0-4bcd-a646-9d07c9a21c54 nodeName:}" failed. No retries permitted until 2026-02-27 16:09:06.846191205 +0000 UTC m=+142.935463708 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6ba2fe32-66e0-4bcd-a646-9d07c9a21c54-metrics-certs") pod "network-metrics-daemon-kgdlg" (UID: "6ba2fe32-66e0-4bcd-a646-9d07c9a21c54") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 27 16:08:34 crc kubenswrapper[4830]: I0227 16:08:34.847108 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kgdlg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ba2fe32-66e0-4bcd-a646-9d07c9a21c54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9l8vg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9l8vg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kgdlg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:34Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:34 crc 
kubenswrapper[4830]: I0227 16:08:34.864837 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f62df11-40bd-4531-baa9-5b7ab5679a67\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5046080bbc4855769fb50909ff06b8c2cfd2438a0b65749c89302a4e97342980\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\
":\\\"cri-o://97f85a479701374168505eed2e5a64ea144d1c48dce2022d60cb6ebaebfe159f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97f85a479701374168505eed2e5a64ea144d1c48dce2022d60cb6ebaebfe159f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:06:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:34Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:34 crc kubenswrapper[4830]: I0227 16:08:34.889301 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:34Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:34 crc kubenswrapper[4830]: I0227 16:08:34.909524 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fsrq9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb72b0f7-1d22-4d13-9653-b1607aa2235d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b684822980dfea25ae15f46337d198cbbb8656b55c38874d917c2fae68431aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwzp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fsrq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:34Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:34 crc kubenswrapper[4830]: I0227 16:08:34.932564 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rgv8f" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"672682a0-e75f-4d6c-b4f2-542944327497\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5efac20f84ec097b144c4cb152a6eba826b5eac50e222e2618b75bf163cbe93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b56418e015eaf4499e3e5e8ade43a2e4fb4803d5621b0db3f
b365dd185d375f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b56418e015eaf4499e3e5e8ade43a2e4fb4803d5621b0db3fb365dd185d375f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://441562bcb0ff6663b1affd84ae53348e082907782a89293730d0d076e9e84d9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://441562bcb0ff6663b1affd84ae53348e082907782a89293730d0d076e9e84d9d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:
08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be1e4280634a32e892b1dbc9bb4034909146da92eb6b01aa69bb7208447a7394\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be1e4280634a32e892b1dbc9bb4034909146da92eb6b01aa69bb7208447a7394\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://d37bf60056d482e28c3252074c7ae82c3bd9884ba28af16b639c21795c9e3f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d37bf60056d482e28c3252074c7ae82c3bd9884ba28af16b639c21795c9e3f3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f343d2da5177c14f48359b1377e123308e3686e4adfba0407940fe1368592548\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f343d2da5177c14f48359b1377e123308e3686e4adfba0407940fe1368592548\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:10Z\\\",\\\"r
eason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c27a1b90defc90e0a69236bb9c462d6e5fd36e6500f0f4c720ccd960b0d3ee42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c27a1b90defc90e0a69236bb9c462d6e5fd36e6500f0f4c720ccd960b0d3ee42\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rgv8f\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:34Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:34 crc kubenswrapper[4830]: I0227 16:08:34.940385 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:34 crc kubenswrapper[4830]: I0227 16:08:34.940439 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:34 crc kubenswrapper[4830]: I0227 16:08:34.940456 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:34 crc kubenswrapper[4830]: I0227 16:08:34.940482 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:34 crc kubenswrapper[4830]: I0227 16:08:34.940513 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:34Z","lastTransitionTime":"2026-02-27T16:08:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:34 crc kubenswrapper[4830]: I0227 16:08:34.949428 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fcddf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6adbc0c4-e467-41f1-9190-d0dd3693eba6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2a66caf75365a8658dd7422b2f715e7f095b5bd7d96c8a0a96ce1a80cb99849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zrsv\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fcddf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:34Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:34 crc kubenswrapper[4830]: I0227 16:08:34.968520 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:34Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:34 crc kubenswrapper[4830]: I0227 16:08:34.990691 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:34Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:35 crc kubenswrapper[4830]: I0227 16:08:35.010437 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4995a66c4bc6d195947f67ef564aa18c7fe145e12067c2fca8e0de6b76b72f19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df5245f048370e7455a63a6ca01bffb8e514e22a123e45bc3a064ce4fbe27f45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:35Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:35 crc kubenswrapper[4830]: I0227 16:08:35.028622 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92b20f888e852fcbe89e6d5c685b4b0fad411956bb3dc3c5ba0e10d6b206e131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-27T16:08:35Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:35 crc kubenswrapper[4830]: I0227 16:08:35.044194 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:35 crc kubenswrapper[4830]: I0227 16:08:35.044241 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:35 crc kubenswrapper[4830]: I0227 16:08:35.044260 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:35 crc kubenswrapper[4830]: I0227 16:08:35.044286 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:35 crc kubenswrapper[4830]: I0227 16:08:35.044304 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:35Z","lastTransitionTime":"2026-02-27T16:08:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:35 crc kubenswrapper[4830]: I0227 16:08:35.059015 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bf9lh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3371981f3cdb5468ef6e9038108b3f9d884cec2f2db2f2754a3b5bf908d0372\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://273a7927f1c00c51d8c6157c6050f697430275ea8b1cac13ebe621d6556b4fef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a05759873cd3a2fb58756795e02d8247af4d8204388bf669ddc7cb1a8fbd7baa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc95e89196ccb52580164d57b311b57b2bcc7e666c3801f53378c59446c0e73f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:05Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05614397aaaa68f352f4263160c3463ce65513156517137bf09a3b5deb505c05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41cda3a7a261313b65434f3e8b885a9628d71f9c9a9c55f3e559226341c8aa41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://279aa904d7438057d878006234143ea48d8c40383ab648353776c01f12c147b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://279aa904d7438057d878006234143ea48d8c40383ab648353776c01f12c147b6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-27T16:08:30Z\\\",\\\"message\\\":\\\"Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-console/console]} name:Service_openshift-console/console_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} 
vips:{GoMap:map[10.217.5.194:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d7d7b270-1480-47f8-bdf9-690dbab310cb}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0227 16:08:30.816934 6938 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:29Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bf9lh_openshift-ovn-kubernetes(2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c9e139c0599bb81726ae3f42b0d9e6ee394e3d6641eb356cf5c63fe9260f4c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45e519054da5e98ba4be7deaac7911617f787b44d308bda8d4ee6fb7573336a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45e519054da5e98ba4
be7deaac7911617f787b44d308bda8d4ee6fb7573336a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bf9lh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:35Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:35 crc kubenswrapper[4830]: I0227 16:08:35.078705 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00d6b7ce-4757-4275-8345-60c1b546ce25\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58864ba639422794b99f6190c7ab9a81537913f51574220acb3838f97b4f9421\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqztt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d403fb60beeb11f3cb72e7ca134b83a9c519375
fbb6727d2070118a6b924516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqztt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2tv5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:35Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:35 crc kubenswrapper[4830]: I0227 16:08:35.099738 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gqgb6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd5a4c5b-2008-4354-b26e-8763a631e55c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32024ba3e19bc33cc8197b4dd6fdf165f6de8fd7c3b1e85b2f4768c75c50736a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87p77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14a90f11c3291e6bde2a008f2e8e617c9c8db
36ee65c3bbf26112bf8707d7cd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87p77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gqgb6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:35Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:35 crc kubenswrapper[4830]: I0227 16:08:35.146721 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:35 crc kubenswrapper[4830]: I0227 16:08:35.147135 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:35 crc kubenswrapper[4830]: I0227 16:08:35.147333 4830 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:35 crc kubenswrapper[4830]: I0227 16:08:35.147485 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:35 crc kubenswrapper[4830]: I0227 16:08:35.147617 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:35Z","lastTransitionTime":"2026-02-27T16:08:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:08:35 crc kubenswrapper[4830]: I0227 16:08:35.250427 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:35 crc kubenswrapper[4830]: I0227 16:08:35.250494 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:35 crc kubenswrapper[4830]: I0227 16:08:35.250515 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:35 crc kubenswrapper[4830]: I0227 16:08:35.250546 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:35 crc kubenswrapper[4830]: I0227 16:08:35.250568 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:35Z","lastTransitionTime":"2026-02-27T16:08:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:35 crc kubenswrapper[4830]: I0227 16:08:35.352797 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:35 crc kubenswrapper[4830]: I0227 16:08:35.352853 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:35 crc kubenswrapper[4830]: I0227 16:08:35.352869 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:35 crc kubenswrapper[4830]: I0227 16:08:35.352889 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:35 crc kubenswrapper[4830]: I0227 16:08:35.352906 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:35Z","lastTransitionTime":"2026-02-27T16:08:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:35 crc kubenswrapper[4830]: I0227 16:08:35.455522 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:35 crc kubenswrapper[4830]: I0227 16:08:35.455577 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:35 crc kubenswrapper[4830]: I0227 16:08:35.455594 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:35 crc kubenswrapper[4830]: I0227 16:08:35.455618 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:35 crc kubenswrapper[4830]: I0227 16:08:35.455636 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:35Z","lastTransitionTime":"2026-02-27T16:08:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:35 crc kubenswrapper[4830]: I0227 16:08:35.559150 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:35 crc kubenswrapper[4830]: I0227 16:08:35.559207 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:35 crc kubenswrapper[4830]: I0227 16:08:35.559223 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:35 crc kubenswrapper[4830]: I0227 16:08:35.559247 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:35 crc kubenswrapper[4830]: I0227 16:08:35.559264 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:35Z","lastTransitionTime":"2026-02-27T16:08:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:35 crc kubenswrapper[4830]: I0227 16:08:35.662230 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:35 crc kubenswrapper[4830]: I0227 16:08:35.662285 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:35 crc kubenswrapper[4830]: I0227 16:08:35.662304 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:35 crc kubenswrapper[4830]: I0227 16:08:35.662325 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:35 crc kubenswrapper[4830]: I0227 16:08:35.662343 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:35Z","lastTransitionTime":"2026-02-27T16:08:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:35 crc kubenswrapper[4830]: I0227 16:08:35.765645 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:35 crc kubenswrapper[4830]: I0227 16:08:35.765682 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:35 crc kubenswrapper[4830]: I0227 16:08:35.765698 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:35 crc kubenswrapper[4830]: I0227 16:08:35.765718 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:35 crc kubenswrapper[4830]: I0227 16:08:35.765735 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:35Z","lastTransitionTime":"2026-02-27T16:08:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:35 crc kubenswrapper[4830]: I0227 16:08:35.869074 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:35 crc kubenswrapper[4830]: I0227 16:08:35.869178 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:35 crc kubenswrapper[4830]: I0227 16:08:35.869197 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:35 crc kubenswrapper[4830]: I0227 16:08:35.869222 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:35 crc kubenswrapper[4830]: I0227 16:08:35.869240 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:35Z","lastTransitionTime":"2026-02-27T16:08:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:35 crc kubenswrapper[4830]: I0227 16:08:35.973343 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:35 crc kubenswrapper[4830]: I0227 16:08:35.973402 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:35 crc kubenswrapper[4830]: I0227 16:08:35.973418 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:35 crc kubenswrapper[4830]: I0227 16:08:35.973443 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:35 crc kubenswrapper[4830]: I0227 16:08:35.973461 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:35Z","lastTransitionTime":"2026-02-27T16:08:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:36 crc kubenswrapper[4830]: I0227 16:08:36.077193 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:36 crc kubenswrapper[4830]: I0227 16:08:36.077997 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:36 crc kubenswrapper[4830]: I0227 16:08:36.078168 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:36 crc kubenswrapper[4830]: I0227 16:08:36.078352 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:36 crc kubenswrapper[4830]: I0227 16:08:36.078520 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:36Z","lastTransitionTime":"2026-02-27T16:08:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:36 crc kubenswrapper[4830]: I0227 16:08:36.082245 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:36 crc kubenswrapper[4830]: I0227 16:08:36.082470 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:36 crc kubenswrapper[4830]: I0227 16:08:36.082618 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:36 crc kubenswrapper[4830]: I0227 16:08:36.082764 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:36 crc kubenswrapper[4830]: I0227 16:08:36.082920 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:36Z","lastTransitionTime":"2026-02-27T16:08:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:36 crc kubenswrapper[4830]: E0227 16:08:36.105420 4830 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:08:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:08:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:08:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:08:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"058e4d33-3c10-460a-8f66-1f2272cb9956\\\",\\\"systemUUID\\\":\\\"1d4a94de-760c-40e1-8054-66d250f336ee\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:36Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:36 crc kubenswrapper[4830]: I0227 16:08:36.110235 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:36 crc kubenswrapper[4830]: I0227 16:08:36.110463 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:36 crc kubenswrapper[4830]: I0227 16:08:36.110650 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:36 crc kubenswrapper[4830]: I0227 16:08:36.110834 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:36 crc kubenswrapper[4830]: I0227 16:08:36.111018 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:36Z","lastTransitionTime":"2026-02-27T16:08:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:36 crc kubenswrapper[4830]: E0227 16:08:36.131585 4830 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status [status payload identical to the 16:08:36.105420 entry above, elided] for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:36Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:36 crc kubenswrapper[4830]: I0227 16:08:36.136920 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:36 crc kubenswrapper[4830]: I0227 16:08:36.137017 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:36 crc kubenswrapper[4830]: I0227 16:08:36.137041 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:36 crc kubenswrapper[4830]: I0227 16:08:36.137067 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:36 crc kubenswrapper[4830]: I0227 16:08:36.137088 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:36Z","lastTransitionTime":"2026-02-27T16:08:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:36 crc kubenswrapper[4830]: E0227 16:08:36.157455 4830 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status [status payload identical to the 16:08:36.105420 entry above, elided]
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"058e4d33-3c10-460a-8f66-1f2272cb9956\\\",\\\"systemUUID\\\":\\\"1d4a94de-760c-40e1-8054-66d250f336ee\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:36Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:36 crc kubenswrapper[4830]: I0227 16:08:36.161699 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:36 crc kubenswrapper[4830]: I0227 16:08:36.161765 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:36 crc kubenswrapper[4830]: I0227 16:08:36.161804 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:36 crc kubenswrapper[4830]: I0227 16:08:36.161825 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:36 crc kubenswrapper[4830]: I0227 16:08:36.161843 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:36Z","lastTransitionTime":"2026-02-27T16:08:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:36 crc kubenswrapper[4830]: E0227 16:08:36.181396 4830 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:08:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:08:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:08:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:08:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"058e4d33-3c10-460a-8f66-1f2272cb9956\\\",\\\"systemUUID\\\":\\\"1d4a94de-760c-40e1-8054-66d250f336ee\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:36Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:36 crc kubenswrapper[4830]: I0227 16:08:36.186388 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:36 crc kubenswrapper[4830]: I0227 16:08:36.186456 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:36 crc kubenswrapper[4830]: I0227 16:08:36.186478 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:36 crc kubenswrapper[4830]: I0227 16:08:36.186524 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:36 crc kubenswrapper[4830]: I0227 16:08:36.186544 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:36Z","lastTransitionTime":"2026-02-27T16:08:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:36 crc kubenswrapper[4830]: E0227 16:08:36.209848 4830 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:08:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:08:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:08:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:08:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"058e4d33-3c10-460a-8f66-1f2272cb9956\\\",\\\"systemUUID\\\":\\\"1d4a94de-760c-40e1-8054-66d250f336ee\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:36Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:36 crc kubenswrapper[4830]: E0227 16:08:36.210130 4830 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 27 16:08:36 crc kubenswrapper[4830]: I0227 16:08:36.212764 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:36 crc kubenswrapper[4830]: I0227 16:08:36.212817 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:36 crc kubenswrapper[4830]: I0227 16:08:36.212835 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:36 crc kubenswrapper[4830]: I0227 16:08:36.212861 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:36 crc kubenswrapper[4830]: I0227 16:08:36.212878 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:36Z","lastTransitionTime":"2026-02-27T16:08:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:36 crc kubenswrapper[4830]: I0227 16:08:36.316679 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:36 crc kubenswrapper[4830]: I0227 16:08:36.316744 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:36 crc kubenswrapper[4830]: I0227 16:08:36.316762 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:36 crc kubenswrapper[4830]: I0227 16:08:36.316793 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:36 crc kubenswrapper[4830]: I0227 16:08:36.316811 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:36Z","lastTransitionTime":"2026-02-27T16:08:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:36 crc kubenswrapper[4830]: I0227 16:08:36.423253 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:36 crc kubenswrapper[4830]: I0227 16:08:36.423324 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:36 crc kubenswrapper[4830]: I0227 16:08:36.423348 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:36 crc kubenswrapper[4830]: I0227 16:08:36.423384 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:36 crc kubenswrapper[4830]: I0227 16:08:36.423409 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:36Z","lastTransitionTime":"2026-02-27T16:08:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:36 crc kubenswrapper[4830]: I0227 16:08:36.527317 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:36 crc kubenswrapper[4830]: I0227 16:08:36.527387 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:36 crc kubenswrapper[4830]: I0227 16:08:36.527405 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:36 crc kubenswrapper[4830]: I0227 16:08:36.527429 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:36 crc kubenswrapper[4830]: I0227 16:08:36.527449 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:36Z","lastTransitionTime":"2026-02-27T16:08:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:36 crc kubenswrapper[4830]: I0227 16:08:36.631164 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:36 crc kubenswrapper[4830]: I0227 16:08:36.631233 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:36 crc kubenswrapper[4830]: I0227 16:08:36.631255 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:36 crc kubenswrapper[4830]: I0227 16:08:36.631281 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:36 crc kubenswrapper[4830]: I0227 16:08:36.631302 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:36Z","lastTransitionTime":"2026-02-27T16:08:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:36 crc kubenswrapper[4830]: I0227 16:08:36.734783 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:36 crc kubenswrapper[4830]: I0227 16:08:36.734831 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:36 crc kubenswrapper[4830]: I0227 16:08:36.734844 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:36 crc kubenswrapper[4830]: I0227 16:08:36.734863 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:36 crc kubenswrapper[4830]: I0227 16:08:36.734877 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:36Z","lastTransitionTime":"2026-02-27T16:08:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:08:36 crc kubenswrapper[4830]: I0227 16:08:36.761446 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 16:08:36 crc kubenswrapper[4830]: I0227 16:08:36.761507 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kgdlg" Feb 27 16:08:36 crc kubenswrapper[4830]: I0227 16:08:36.761533 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 16:08:36 crc kubenswrapper[4830]: I0227 16:08:36.761457 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 16:08:36 crc kubenswrapper[4830]: E0227 16:08:36.761650 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 16:08:36 crc kubenswrapper[4830]: E0227 16:08:36.761787 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kgdlg" podUID="6ba2fe32-66e0-4bcd-a646-9d07c9a21c54" Feb 27 16:08:36 crc kubenswrapper[4830]: E0227 16:08:36.762000 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 16:08:36 crc kubenswrapper[4830]: E0227 16:08:36.762201 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 16:08:36 crc kubenswrapper[4830]: I0227 16:08:36.838883 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:36 crc kubenswrapper[4830]: I0227 16:08:36.839037 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:36 crc kubenswrapper[4830]: I0227 16:08:36.839058 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:36 crc kubenswrapper[4830]: I0227 16:08:36.839129 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:36 crc kubenswrapper[4830]: I0227 16:08:36.839159 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:36Z","lastTransitionTime":"2026-02-27T16:08:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:36 crc kubenswrapper[4830]: I0227 16:08:36.943169 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:36 crc kubenswrapper[4830]: I0227 16:08:36.943232 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:36 crc kubenswrapper[4830]: I0227 16:08:36.943249 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:36 crc kubenswrapper[4830]: I0227 16:08:36.943278 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:36 crc kubenswrapper[4830]: I0227 16:08:36.943302 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:36Z","lastTransitionTime":"2026-02-27T16:08:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:37 crc kubenswrapper[4830]: I0227 16:08:37.047692 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:37 crc kubenswrapper[4830]: I0227 16:08:37.047791 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:37 crc kubenswrapper[4830]: I0227 16:08:37.047841 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:37 crc kubenswrapper[4830]: I0227 16:08:37.047872 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:37 crc kubenswrapper[4830]: I0227 16:08:37.047891 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:37Z","lastTransitionTime":"2026-02-27T16:08:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:37 crc kubenswrapper[4830]: I0227 16:08:37.151128 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:37 crc kubenswrapper[4830]: I0227 16:08:37.151191 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:37 crc kubenswrapper[4830]: I0227 16:08:37.151209 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:37 crc kubenswrapper[4830]: I0227 16:08:37.151233 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:37 crc kubenswrapper[4830]: I0227 16:08:37.151252 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:37Z","lastTransitionTime":"2026-02-27T16:08:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:37 crc kubenswrapper[4830]: I0227 16:08:37.255465 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:37 crc kubenswrapper[4830]: I0227 16:08:37.255546 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:37 crc kubenswrapper[4830]: I0227 16:08:37.255571 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:37 crc kubenswrapper[4830]: I0227 16:08:37.255605 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:37 crc kubenswrapper[4830]: I0227 16:08:37.255627 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:37Z","lastTransitionTime":"2026-02-27T16:08:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:37 crc kubenswrapper[4830]: I0227 16:08:37.359137 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:37 crc kubenswrapper[4830]: I0227 16:08:37.359200 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:37 crc kubenswrapper[4830]: I0227 16:08:37.359218 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:37 crc kubenswrapper[4830]: I0227 16:08:37.359245 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:37 crc kubenswrapper[4830]: I0227 16:08:37.359263 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:37Z","lastTransitionTime":"2026-02-27T16:08:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:37 crc kubenswrapper[4830]: I0227 16:08:37.462553 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:37 crc kubenswrapper[4830]: I0227 16:08:37.462605 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:37 crc kubenswrapper[4830]: I0227 16:08:37.462622 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:37 crc kubenswrapper[4830]: I0227 16:08:37.462644 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:37 crc kubenswrapper[4830]: I0227 16:08:37.462661 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:37Z","lastTransitionTime":"2026-02-27T16:08:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:37 crc kubenswrapper[4830]: I0227 16:08:37.566752 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:37 crc kubenswrapper[4830]: I0227 16:08:37.566821 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:37 crc kubenswrapper[4830]: I0227 16:08:37.566842 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:37 crc kubenswrapper[4830]: I0227 16:08:37.566870 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:37 crc kubenswrapper[4830]: I0227 16:08:37.566889 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:37Z","lastTransitionTime":"2026-02-27T16:08:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:37 crc kubenswrapper[4830]: I0227 16:08:37.670177 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:37 crc kubenswrapper[4830]: I0227 16:08:37.670241 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:37 crc kubenswrapper[4830]: I0227 16:08:37.670260 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:37 crc kubenswrapper[4830]: I0227 16:08:37.670287 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:37 crc kubenswrapper[4830]: I0227 16:08:37.670328 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:37Z","lastTransitionTime":"2026-02-27T16:08:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:37 crc kubenswrapper[4830]: I0227 16:08:37.773413 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:37 crc kubenswrapper[4830]: I0227 16:08:37.773487 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:37 crc kubenswrapper[4830]: I0227 16:08:37.773508 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:37 crc kubenswrapper[4830]: I0227 16:08:37.773533 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:37 crc kubenswrapper[4830]: I0227 16:08:37.773551 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:37Z","lastTransitionTime":"2026-02-27T16:08:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:37 crc kubenswrapper[4830]: I0227 16:08:37.876704 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:37 crc kubenswrapper[4830]: I0227 16:08:37.876771 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:37 crc kubenswrapper[4830]: I0227 16:08:37.876789 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:37 crc kubenswrapper[4830]: I0227 16:08:37.876816 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:37 crc kubenswrapper[4830]: I0227 16:08:37.876839 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:37Z","lastTransitionTime":"2026-02-27T16:08:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:37 crc kubenswrapper[4830]: I0227 16:08:37.980167 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:37 crc kubenswrapper[4830]: I0227 16:08:37.980220 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:37 crc kubenswrapper[4830]: I0227 16:08:37.980236 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:37 crc kubenswrapper[4830]: I0227 16:08:37.980257 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:37 crc kubenswrapper[4830]: I0227 16:08:37.980301 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:37Z","lastTransitionTime":"2026-02-27T16:08:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:38 crc kubenswrapper[4830]: I0227 16:08:38.083694 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:38 crc kubenswrapper[4830]: I0227 16:08:38.083754 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:38 crc kubenswrapper[4830]: I0227 16:08:38.083770 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:38 crc kubenswrapper[4830]: I0227 16:08:38.083795 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:38 crc kubenswrapper[4830]: I0227 16:08:38.083814 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:38Z","lastTransitionTime":"2026-02-27T16:08:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 27 16:08:38 crc kubenswrapper[4830]: I0227 16:08:38.186676 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 16:08:38 crc kubenswrapper[4830]: I0227 16:08:38.186747 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 16:08:38 crc kubenswrapper[4830]: I0227 16:08:38.186766 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 16:08:38 crc kubenswrapper[4830]: I0227 16:08:38.186793 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 16:08:38 crc kubenswrapper[4830]: I0227 16:08:38.186815 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:38Z","lastTransitionTime":"2026-02-27T16:08:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 27 16:08:38 crc kubenswrapper[4830]: I0227 16:08:38.289294 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 16:08:38 crc kubenswrapper[4830]: I0227 16:08:38.289388 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 16:08:38 crc kubenswrapper[4830]: I0227 16:08:38.289408 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 16:08:38 crc kubenswrapper[4830]: I0227 16:08:38.289434 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 16:08:38 crc kubenswrapper[4830]: I0227 16:08:38.289452 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:38Z","lastTransitionTime":"2026-02-27T16:08:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 27 16:08:38 crc kubenswrapper[4830]: I0227 16:08:38.392749 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 16:08:38 crc kubenswrapper[4830]: I0227 16:08:38.392821 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 16:08:38 crc kubenswrapper[4830]: I0227 16:08:38.392838 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 16:08:38 crc kubenswrapper[4830]: I0227 16:08:38.392867 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 16:08:38 crc kubenswrapper[4830]: I0227 16:08:38.392885 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:38Z","lastTransitionTime":"2026-02-27T16:08:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 27 16:08:38 crc kubenswrapper[4830]: I0227 16:08:38.495514 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 16:08:38 crc kubenswrapper[4830]: I0227 16:08:38.495579 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 16:08:38 crc kubenswrapper[4830]: I0227 16:08:38.495605 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 16:08:38 crc kubenswrapper[4830]: I0227 16:08:38.495632 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 16:08:38 crc kubenswrapper[4830]: I0227 16:08:38.495655 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:38Z","lastTransitionTime":"2026-02-27T16:08:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 27 16:08:38 crc kubenswrapper[4830]: I0227 16:08:38.598239 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 16:08:38 crc kubenswrapper[4830]: I0227 16:08:38.598315 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 16:08:38 crc kubenswrapper[4830]: I0227 16:08:38.598334 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 16:08:38 crc kubenswrapper[4830]: I0227 16:08:38.598360 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 16:08:38 crc kubenswrapper[4830]: I0227 16:08:38.598380 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:38Z","lastTransitionTime":"2026-02-27T16:08:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 27 16:08:38 crc kubenswrapper[4830]: I0227 16:08:38.701674 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 16:08:38 crc kubenswrapper[4830]: I0227 16:08:38.701744 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 16:08:38 crc kubenswrapper[4830]: I0227 16:08:38.701770 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 16:08:38 crc kubenswrapper[4830]: I0227 16:08:38.701802 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 16:08:38 crc kubenswrapper[4830]: I0227 16:08:38.701828 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:38Z","lastTransitionTime":"2026-02-27T16:08:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 27 16:08:38 crc kubenswrapper[4830]: I0227 16:08:38.762374 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 27 16:08:38 crc kubenswrapper[4830]: E0227 16:08:38.762533 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 27 16:08:38 crc kubenswrapper[4830]: I0227 16:08:38.762791 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 27 16:08:38 crc kubenswrapper[4830]: E0227 16:08:38.762900 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 27 16:08:38 crc kubenswrapper[4830]: I0227 16:08:38.763156 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kgdlg"
Feb 27 16:08:38 crc kubenswrapper[4830]: E0227 16:08:38.763278 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kgdlg" podUID="6ba2fe32-66e0-4bcd-a646-9d07c9a21c54"
Feb 27 16:08:38 crc kubenswrapper[4830]: I0227 16:08:38.763380 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 27 16:08:38 crc kubenswrapper[4830]: E0227 16:08:38.763563 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 27 16:08:38 crc kubenswrapper[4830]: I0227 16:08:38.804116 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 16:08:38 crc kubenswrapper[4830]: I0227 16:08:38.804178 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 16:08:38 crc kubenswrapper[4830]: I0227 16:08:38.804200 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 16:08:38 crc kubenswrapper[4830]: I0227 16:08:38.804227 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 16:08:38 crc kubenswrapper[4830]: I0227 16:08:38.804249 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:38Z","lastTransitionTime":"2026-02-27T16:08:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 27 16:08:38 crc kubenswrapper[4830]: I0227 16:08:38.907362 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 16:08:38 crc kubenswrapper[4830]: I0227 16:08:38.907482 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 16:08:38 crc kubenswrapper[4830]: I0227 16:08:38.907502 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 16:08:38 crc kubenswrapper[4830]: I0227 16:08:38.907532 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 16:08:38 crc kubenswrapper[4830]: I0227 16:08:38.907554 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:38Z","lastTransitionTime":"2026-02-27T16:08:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 27 16:08:39 crc kubenswrapper[4830]: I0227 16:08:39.011100 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 16:08:39 crc kubenswrapper[4830]: I0227 16:08:39.011156 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 16:08:39 crc kubenswrapper[4830]: I0227 16:08:39.011172 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 16:08:39 crc kubenswrapper[4830]: I0227 16:08:39.011195 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 16:08:39 crc kubenswrapper[4830]: I0227 16:08:39.011211 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:39Z","lastTransitionTime":"2026-02-27T16:08:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 27 16:08:39 crc kubenswrapper[4830]: I0227 16:08:39.114398 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 16:08:39 crc kubenswrapper[4830]: I0227 16:08:39.114452 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 16:08:39 crc kubenswrapper[4830]: I0227 16:08:39.114468 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 16:08:39 crc kubenswrapper[4830]: I0227 16:08:39.114490 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 16:08:39 crc kubenswrapper[4830]: I0227 16:08:39.114508 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:39Z","lastTransitionTime":"2026-02-27T16:08:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 27 16:08:39 crc kubenswrapper[4830]: I0227 16:08:39.216839 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 16:08:39 crc kubenswrapper[4830]: I0227 16:08:39.216894 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 16:08:39 crc kubenswrapper[4830]: I0227 16:08:39.216911 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 16:08:39 crc kubenswrapper[4830]: I0227 16:08:39.216931 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 16:08:39 crc kubenswrapper[4830]: I0227 16:08:39.216985 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:39Z","lastTransitionTime":"2026-02-27T16:08:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 27 16:08:39 crc kubenswrapper[4830]: I0227 16:08:39.319595 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 16:08:39 crc kubenswrapper[4830]: I0227 16:08:39.319646 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 16:08:39 crc kubenswrapper[4830]: I0227 16:08:39.319662 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 16:08:39 crc kubenswrapper[4830]: I0227 16:08:39.319683 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 16:08:39 crc kubenswrapper[4830]: I0227 16:08:39.319699 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:39Z","lastTransitionTime":"2026-02-27T16:08:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 27 16:08:39 crc kubenswrapper[4830]: I0227 16:08:39.422184 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 16:08:39 crc kubenswrapper[4830]: I0227 16:08:39.422233 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 16:08:39 crc kubenswrapper[4830]: I0227 16:08:39.422252 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 16:08:39 crc kubenswrapper[4830]: I0227 16:08:39.422273 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 16:08:39 crc kubenswrapper[4830]: I0227 16:08:39.422289 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:39Z","lastTransitionTime":"2026-02-27T16:08:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 27 16:08:39 crc kubenswrapper[4830]: I0227 16:08:39.525159 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 16:08:39 crc kubenswrapper[4830]: I0227 16:08:39.525262 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 16:08:39 crc kubenswrapper[4830]: I0227 16:08:39.525315 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 16:08:39 crc kubenswrapper[4830]: I0227 16:08:39.525341 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 16:08:39 crc kubenswrapper[4830]: I0227 16:08:39.525365 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:39Z","lastTransitionTime":"2026-02-27T16:08:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 27 16:08:39 crc kubenswrapper[4830]: I0227 16:08:39.628132 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 16:08:39 crc kubenswrapper[4830]: I0227 16:08:39.628183 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 16:08:39 crc kubenswrapper[4830]: I0227 16:08:39.628199 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 16:08:39 crc kubenswrapper[4830]: I0227 16:08:39.628221 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 16:08:39 crc kubenswrapper[4830]: I0227 16:08:39.628238 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:39Z","lastTransitionTime":"2026-02-27T16:08:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 27 16:08:39 crc kubenswrapper[4830]: I0227 16:08:39.730571 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 16:08:39 crc kubenswrapper[4830]: I0227 16:08:39.730635 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 16:08:39 crc kubenswrapper[4830]: I0227 16:08:39.730652 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 16:08:39 crc kubenswrapper[4830]: I0227 16:08:39.730681 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 16:08:39 crc kubenswrapper[4830]: I0227 16:08:39.730702 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:39Z","lastTransitionTime":"2026-02-27T16:08:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 27 16:08:39 crc kubenswrapper[4830]: I0227 16:08:39.833361 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 16:08:39 crc kubenswrapper[4830]: I0227 16:08:39.833744 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 16:08:39 crc kubenswrapper[4830]: I0227 16:08:39.833760 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 16:08:39 crc kubenswrapper[4830]: I0227 16:08:39.833781 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 16:08:39 crc kubenswrapper[4830]: I0227 16:08:39.833798 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:39Z","lastTransitionTime":"2026-02-27T16:08:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 27 16:08:39 crc kubenswrapper[4830]: I0227 16:08:39.936403 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 16:08:39 crc kubenswrapper[4830]: I0227 16:08:39.936471 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 16:08:39 crc kubenswrapper[4830]: I0227 16:08:39.936487 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 16:08:39 crc kubenswrapper[4830]: I0227 16:08:39.936510 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 16:08:39 crc kubenswrapper[4830]: I0227 16:08:39.936528 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:39Z","lastTransitionTime":"2026-02-27T16:08:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 27 16:08:40 crc kubenswrapper[4830]: I0227 16:08:40.039516 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 16:08:40 crc kubenswrapper[4830]: I0227 16:08:40.039561 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 16:08:40 crc kubenswrapper[4830]: I0227 16:08:40.039576 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 16:08:40 crc kubenswrapper[4830]: I0227 16:08:40.039598 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 16:08:40 crc kubenswrapper[4830]: I0227 16:08:40.039614 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:40Z","lastTransitionTime":"2026-02-27T16:08:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 27 16:08:40 crc kubenswrapper[4830]: I0227 16:08:40.143070 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 16:08:40 crc kubenswrapper[4830]: I0227 16:08:40.143126 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 16:08:40 crc kubenswrapper[4830]: I0227 16:08:40.143143 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 16:08:40 crc kubenswrapper[4830]: I0227 16:08:40.143185 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 16:08:40 crc kubenswrapper[4830]: I0227 16:08:40.143225 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:40Z","lastTransitionTime":"2026-02-27T16:08:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 27 16:08:40 crc kubenswrapper[4830]: I0227 16:08:40.245631 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 16:08:40 crc kubenswrapper[4830]: I0227 16:08:40.245689 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 16:08:40 crc kubenswrapper[4830]: I0227 16:08:40.245701 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 16:08:40 crc kubenswrapper[4830]: I0227 16:08:40.245719 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 16:08:40 crc kubenswrapper[4830]: I0227 16:08:40.245738 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:40Z","lastTransitionTime":"2026-02-27T16:08:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 27 16:08:40 crc kubenswrapper[4830]: I0227 16:08:40.349550 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 16:08:40 crc kubenswrapper[4830]: I0227 16:08:40.354487 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 16:08:40 crc kubenswrapper[4830]: I0227 16:08:40.354567 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 16:08:40 crc kubenswrapper[4830]: I0227 16:08:40.354627 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 16:08:40 crc kubenswrapper[4830]: I0227 16:08:40.354669 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:40Z","lastTransitionTime":"2026-02-27T16:08:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 27 16:08:40 crc kubenswrapper[4830]: I0227 16:08:40.457459 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 16:08:40 crc kubenswrapper[4830]: I0227 16:08:40.457581 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 16:08:40 crc kubenswrapper[4830]: I0227 16:08:40.457607 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 16:08:40 crc kubenswrapper[4830]: I0227 16:08:40.457633 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 16:08:40 crc kubenswrapper[4830]: I0227 16:08:40.457653 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:40Z","lastTransitionTime":"2026-02-27T16:08:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 27 16:08:40 crc kubenswrapper[4830]: I0227 16:08:40.560624 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 16:08:40 crc kubenswrapper[4830]: I0227 16:08:40.560683 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 16:08:40 crc kubenswrapper[4830]: I0227 16:08:40.560701 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 16:08:40 crc kubenswrapper[4830]: I0227 16:08:40.560724 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 16:08:40 crc kubenswrapper[4830]: I0227 16:08:40.560742 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:40Z","lastTransitionTime":"2026-02-27T16:08:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 27 16:08:40 crc kubenswrapper[4830]: I0227 16:08:40.663580 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 16:08:40 crc kubenswrapper[4830]: I0227 16:08:40.663641 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 16:08:40 crc kubenswrapper[4830]: I0227 16:08:40.663657 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 16:08:40 crc kubenswrapper[4830]: I0227 16:08:40.663681 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 16:08:40 crc kubenswrapper[4830]: I0227 16:08:40.663698 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:40Z","lastTransitionTime":"2026-02-27T16:08:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 27 16:08:40 crc kubenswrapper[4830]: I0227 16:08:40.762315 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 27 16:08:40 crc kubenswrapper[4830]: I0227 16:08:40.762369 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kgdlg"
Feb 27 16:08:40 crc kubenswrapper[4830]: I0227 16:08:40.762402 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 27 16:08:40 crc kubenswrapper[4830]: E0227 16:08:40.762527 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 27 16:08:40 crc kubenswrapper[4830]: I0227 16:08:40.762545 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 27 16:08:40 crc kubenswrapper[4830]: E0227 16:08:40.762667 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kgdlg" podUID="6ba2fe32-66e0-4bcd-a646-9d07c9a21c54"
Feb 27 16:08:40 crc kubenswrapper[4830]: E0227 16:08:40.762810 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 27 16:08:40 crc kubenswrapper[4830]: E0227 16:08:40.762857 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 27 16:08:40 crc kubenswrapper[4830]: I0227 16:08:40.765978 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 16:08:40 crc kubenswrapper[4830]: I0227 16:08:40.766035 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 16:08:40 crc kubenswrapper[4830]: I0227 16:08:40.766055 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 16:08:40 crc kubenswrapper[4830]: I0227 16:08:40.766077 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 16:08:40 crc kubenswrapper[4830]: I0227 16:08:40.766094 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:40Z","lastTransitionTime":"2026-02-27T16:08:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 27 16:08:40 crc kubenswrapper[4830]: I0227 16:08:40.868440 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 16:08:40 crc kubenswrapper[4830]: I0227 16:08:40.868500 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 16:08:40 crc kubenswrapper[4830]: I0227 16:08:40.868517 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 16:08:40 crc kubenswrapper[4830]: I0227 16:08:40.868543 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 16:08:40 crc kubenswrapper[4830]: I0227 16:08:40.868561 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:40Z","lastTransitionTime":"2026-02-27T16:08:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:40 crc kubenswrapper[4830]: I0227 16:08:40.971107 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:40 crc kubenswrapper[4830]: I0227 16:08:40.971164 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:40 crc kubenswrapper[4830]: I0227 16:08:40.971183 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:40 crc kubenswrapper[4830]: I0227 16:08:40.971206 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:40 crc kubenswrapper[4830]: I0227 16:08:40.971223 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:40Z","lastTransitionTime":"2026-02-27T16:08:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:41 crc kubenswrapper[4830]: I0227 16:08:41.074803 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:41 crc kubenswrapper[4830]: I0227 16:08:41.074901 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:41 crc kubenswrapper[4830]: I0227 16:08:41.074920 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:41 crc kubenswrapper[4830]: I0227 16:08:41.074971 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:41 crc kubenswrapper[4830]: I0227 16:08:41.074989 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:41Z","lastTransitionTime":"2026-02-27T16:08:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:41 crc kubenswrapper[4830]: I0227 16:08:41.177668 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:41 crc kubenswrapper[4830]: I0227 16:08:41.177721 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:41 crc kubenswrapper[4830]: I0227 16:08:41.177739 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:41 crc kubenswrapper[4830]: I0227 16:08:41.177762 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:41 crc kubenswrapper[4830]: I0227 16:08:41.177779 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:41Z","lastTransitionTime":"2026-02-27T16:08:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:41 crc kubenswrapper[4830]: I0227 16:08:41.280832 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:41 crc kubenswrapper[4830]: I0227 16:08:41.280875 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:41 crc kubenswrapper[4830]: I0227 16:08:41.280892 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:41 crc kubenswrapper[4830]: I0227 16:08:41.280913 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:41 crc kubenswrapper[4830]: I0227 16:08:41.280929 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:41Z","lastTransitionTime":"2026-02-27T16:08:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:41 crc kubenswrapper[4830]: I0227 16:08:41.384015 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:41 crc kubenswrapper[4830]: I0227 16:08:41.384085 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:41 crc kubenswrapper[4830]: I0227 16:08:41.384110 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:41 crc kubenswrapper[4830]: I0227 16:08:41.384141 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:41 crc kubenswrapper[4830]: I0227 16:08:41.384165 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:41Z","lastTransitionTime":"2026-02-27T16:08:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:41 crc kubenswrapper[4830]: I0227 16:08:41.486817 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:41 crc kubenswrapper[4830]: I0227 16:08:41.486873 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:41 crc kubenswrapper[4830]: I0227 16:08:41.486889 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:41 crc kubenswrapper[4830]: I0227 16:08:41.486910 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:41 crc kubenswrapper[4830]: I0227 16:08:41.486927 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:41Z","lastTransitionTime":"2026-02-27T16:08:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:41 crc kubenswrapper[4830]: I0227 16:08:41.589991 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:41 crc kubenswrapper[4830]: I0227 16:08:41.590051 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:41 crc kubenswrapper[4830]: I0227 16:08:41.590068 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:41 crc kubenswrapper[4830]: I0227 16:08:41.590096 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:41 crc kubenswrapper[4830]: I0227 16:08:41.590113 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:41Z","lastTransitionTime":"2026-02-27T16:08:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:41 crc kubenswrapper[4830]: I0227 16:08:41.693158 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:41 crc kubenswrapper[4830]: I0227 16:08:41.693193 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:41 crc kubenswrapper[4830]: I0227 16:08:41.693203 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:41 crc kubenswrapper[4830]: I0227 16:08:41.693217 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:41 crc kubenswrapper[4830]: I0227 16:08:41.693236 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:41Z","lastTransitionTime":"2026-02-27T16:08:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:41 crc kubenswrapper[4830]: I0227 16:08:41.795514 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:41 crc kubenswrapper[4830]: I0227 16:08:41.795571 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:41 crc kubenswrapper[4830]: I0227 16:08:41.795591 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:41 crc kubenswrapper[4830]: I0227 16:08:41.795614 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:41 crc kubenswrapper[4830]: I0227 16:08:41.795632 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:41Z","lastTransitionTime":"2026-02-27T16:08:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:41 crc kubenswrapper[4830]: I0227 16:08:41.898468 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:41 crc kubenswrapper[4830]: I0227 16:08:41.898506 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:41 crc kubenswrapper[4830]: I0227 16:08:41.898514 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:41 crc kubenswrapper[4830]: I0227 16:08:41.898527 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:41 crc kubenswrapper[4830]: I0227 16:08:41.898536 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:41Z","lastTransitionTime":"2026-02-27T16:08:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:42 crc kubenswrapper[4830]: I0227 16:08:42.001938 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:42 crc kubenswrapper[4830]: I0227 16:08:42.002030 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:42 crc kubenswrapper[4830]: I0227 16:08:42.002047 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:42 crc kubenswrapper[4830]: I0227 16:08:42.002070 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:42 crc kubenswrapper[4830]: I0227 16:08:42.002089 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:42Z","lastTransitionTime":"2026-02-27T16:08:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:42 crc kubenswrapper[4830]: I0227 16:08:42.104869 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:42 crc kubenswrapper[4830]: I0227 16:08:42.104991 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:42 crc kubenswrapper[4830]: I0227 16:08:42.105014 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:42 crc kubenswrapper[4830]: I0227 16:08:42.105044 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:42 crc kubenswrapper[4830]: I0227 16:08:42.105065 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:42Z","lastTransitionTime":"2026-02-27T16:08:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:42 crc kubenswrapper[4830]: I0227 16:08:42.208411 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:42 crc kubenswrapper[4830]: I0227 16:08:42.208465 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:42 crc kubenswrapper[4830]: I0227 16:08:42.208478 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:42 crc kubenswrapper[4830]: I0227 16:08:42.208496 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:42 crc kubenswrapper[4830]: I0227 16:08:42.208508 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:42Z","lastTransitionTime":"2026-02-27T16:08:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:42 crc kubenswrapper[4830]: I0227 16:08:42.311933 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:42 crc kubenswrapper[4830]: I0227 16:08:42.312018 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:42 crc kubenswrapper[4830]: I0227 16:08:42.312036 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:42 crc kubenswrapper[4830]: I0227 16:08:42.312064 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:42 crc kubenswrapper[4830]: I0227 16:08:42.312084 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:42Z","lastTransitionTime":"2026-02-27T16:08:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:42 crc kubenswrapper[4830]: I0227 16:08:42.415392 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:42 crc kubenswrapper[4830]: I0227 16:08:42.415446 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:42 crc kubenswrapper[4830]: I0227 16:08:42.415462 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:42 crc kubenswrapper[4830]: I0227 16:08:42.415486 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:42 crc kubenswrapper[4830]: I0227 16:08:42.415506 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:42Z","lastTransitionTime":"2026-02-27T16:08:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:42 crc kubenswrapper[4830]: I0227 16:08:42.518121 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:42 crc kubenswrapper[4830]: I0227 16:08:42.518254 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:42 crc kubenswrapper[4830]: I0227 16:08:42.518273 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:42 crc kubenswrapper[4830]: I0227 16:08:42.518295 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:42 crc kubenswrapper[4830]: I0227 16:08:42.518312 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:42Z","lastTransitionTime":"2026-02-27T16:08:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:42 crc kubenswrapper[4830]: I0227 16:08:42.621322 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:42 crc kubenswrapper[4830]: I0227 16:08:42.621350 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:42 crc kubenswrapper[4830]: I0227 16:08:42.621358 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:42 crc kubenswrapper[4830]: I0227 16:08:42.621372 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:42 crc kubenswrapper[4830]: I0227 16:08:42.621382 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:42Z","lastTransitionTime":"2026-02-27T16:08:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:42 crc kubenswrapper[4830]: I0227 16:08:42.723390 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:42 crc kubenswrapper[4830]: I0227 16:08:42.723435 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:42 crc kubenswrapper[4830]: I0227 16:08:42.723451 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:42 crc kubenswrapper[4830]: I0227 16:08:42.723471 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:42 crc kubenswrapper[4830]: I0227 16:08:42.723487 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:42Z","lastTransitionTime":"2026-02-27T16:08:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:08:42 crc kubenswrapper[4830]: I0227 16:08:42.762382 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 16:08:42 crc kubenswrapper[4830]: E0227 16:08:42.762490 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 16:08:42 crc kubenswrapper[4830]: I0227 16:08:42.762378 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 16:08:42 crc kubenswrapper[4830]: I0227 16:08:42.762583 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kgdlg" Feb 27 16:08:42 crc kubenswrapper[4830]: I0227 16:08:42.762587 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 16:08:42 crc kubenswrapper[4830]: E0227 16:08:42.762763 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 16:08:42 crc kubenswrapper[4830]: E0227 16:08:42.763052 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kgdlg" podUID="6ba2fe32-66e0-4bcd-a646-9d07c9a21c54" Feb 27 16:08:42 crc kubenswrapper[4830]: E0227 16:08:42.763128 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 16:08:42 crc kubenswrapper[4830]: I0227 16:08:42.826080 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:42 crc kubenswrapper[4830]: I0227 16:08:42.826141 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:42 crc kubenswrapper[4830]: I0227 16:08:42.826153 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:42 crc kubenswrapper[4830]: I0227 16:08:42.826175 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:42 crc kubenswrapper[4830]: I0227 16:08:42.826189 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:42Z","lastTransitionTime":"2026-02-27T16:08:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:42 crc kubenswrapper[4830]: I0227 16:08:42.929362 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:42 crc kubenswrapper[4830]: I0227 16:08:42.929438 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:42 crc kubenswrapper[4830]: I0227 16:08:42.929453 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:42 crc kubenswrapper[4830]: I0227 16:08:42.929478 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:42 crc kubenswrapper[4830]: I0227 16:08:42.929494 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:42Z","lastTransitionTime":"2026-02-27T16:08:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:43 crc kubenswrapper[4830]: I0227 16:08:43.032845 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:43 crc kubenswrapper[4830]: I0227 16:08:43.032936 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:43 crc kubenswrapper[4830]: I0227 16:08:43.032990 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:43 crc kubenswrapper[4830]: I0227 16:08:43.033024 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:43 crc kubenswrapper[4830]: I0227 16:08:43.033055 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:43Z","lastTransitionTime":"2026-02-27T16:08:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:43 crc kubenswrapper[4830]: I0227 16:08:43.136337 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:43 crc kubenswrapper[4830]: I0227 16:08:43.136380 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:43 crc kubenswrapper[4830]: I0227 16:08:43.136390 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:43 crc kubenswrapper[4830]: I0227 16:08:43.136405 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:43 crc kubenswrapper[4830]: I0227 16:08:43.136418 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:43Z","lastTransitionTime":"2026-02-27T16:08:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:43 crc kubenswrapper[4830]: I0227 16:08:43.238978 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:43 crc kubenswrapper[4830]: I0227 16:08:43.239041 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:43 crc kubenswrapper[4830]: I0227 16:08:43.239058 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:43 crc kubenswrapper[4830]: I0227 16:08:43.239080 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:43 crc kubenswrapper[4830]: I0227 16:08:43.239103 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:43Z","lastTransitionTime":"2026-02-27T16:08:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:43 crc kubenswrapper[4830]: I0227 16:08:43.343407 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:43 crc kubenswrapper[4830]: I0227 16:08:43.343477 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:43 crc kubenswrapper[4830]: I0227 16:08:43.343495 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:43 crc kubenswrapper[4830]: I0227 16:08:43.343523 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:43 crc kubenswrapper[4830]: I0227 16:08:43.343544 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:43Z","lastTransitionTime":"2026-02-27T16:08:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:43 crc kubenswrapper[4830]: I0227 16:08:43.447278 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:43 crc kubenswrapper[4830]: I0227 16:08:43.447351 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:43 crc kubenswrapper[4830]: I0227 16:08:43.447369 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:43 crc kubenswrapper[4830]: I0227 16:08:43.447402 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:43 crc kubenswrapper[4830]: I0227 16:08:43.447421 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:43Z","lastTransitionTime":"2026-02-27T16:08:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:43 crc kubenswrapper[4830]: I0227 16:08:43.550733 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:43 crc kubenswrapper[4830]: I0227 16:08:43.550826 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:43 crc kubenswrapper[4830]: I0227 16:08:43.550844 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:43 crc kubenswrapper[4830]: I0227 16:08:43.550869 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:43 crc kubenswrapper[4830]: I0227 16:08:43.550885 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:43Z","lastTransitionTime":"2026-02-27T16:08:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:43 crc kubenswrapper[4830]: I0227 16:08:43.654423 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:43 crc kubenswrapper[4830]: I0227 16:08:43.654518 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:43 crc kubenswrapper[4830]: I0227 16:08:43.654546 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:43 crc kubenswrapper[4830]: I0227 16:08:43.654587 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:43 crc kubenswrapper[4830]: I0227 16:08:43.654616 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:43Z","lastTransitionTime":"2026-02-27T16:08:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:43 crc kubenswrapper[4830]: I0227 16:08:43.757810 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:43 crc kubenswrapper[4830]: I0227 16:08:43.757875 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:43 crc kubenswrapper[4830]: I0227 16:08:43.757889 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:43 crc kubenswrapper[4830]: I0227 16:08:43.757911 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:43 crc kubenswrapper[4830]: I0227 16:08:43.757927 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:43Z","lastTransitionTime":"2026-02-27T16:08:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:43 crc kubenswrapper[4830]: I0227 16:08:43.778867 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Feb 27 16:08:43 crc kubenswrapper[4830]: I0227 16:08:43.861303 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:43 crc kubenswrapper[4830]: I0227 16:08:43.861386 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:43 crc kubenswrapper[4830]: I0227 16:08:43.861405 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:43 crc kubenswrapper[4830]: I0227 16:08:43.861434 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:43 crc kubenswrapper[4830]: I0227 16:08:43.861457 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:43Z","lastTransitionTime":"2026-02-27T16:08:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:43 crc kubenswrapper[4830]: I0227 16:08:43.964626 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:43 crc kubenswrapper[4830]: I0227 16:08:43.964686 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:43 crc kubenswrapper[4830]: I0227 16:08:43.964703 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:43 crc kubenswrapper[4830]: I0227 16:08:43.964729 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:43 crc kubenswrapper[4830]: I0227 16:08:43.964767 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:43Z","lastTransitionTime":"2026-02-27T16:08:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:44 crc kubenswrapper[4830]: I0227 16:08:44.068471 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:44 crc kubenswrapper[4830]: I0227 16:08:44.068518 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:44 crc kubenswrapper[4830]: I0227 16:08:44.068528 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:44 crc kubenswrapper[4830]: I0227 16:08:44.068544 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:44 crc kubenswrapper[4830]: I0227 16:08:44.068555 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:44Z","lastTransitionTime":"2026-02-27T16:08:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:44 crc kubenswrapper[4830]: I0227 16:08:44.172324 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:44 crc kubenswrapper[4830]: I0227 16:08:44.172376 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:44 crc kubenswrapper[4830]: I0227 16:08:44.172395 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:44 crc kubenswrapper[4830]: I0227 16:08:44.172418 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:44 crc kubenswrapper[4830]: I0227 16:08:44.172441 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:44Z","lastTransitionTime":"2026-02-27T16:08:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:44 crc kubenswrapper[4830]: I0227 16:08:44.274390 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:44 crc kubenswrapper[4830]: I0227 16:08:44.274445 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:44 crc kubenswrapper[4830]: I0227 16:08:44.274456 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:44 crc kubenswrapper[4830]: I0227 16:08:44.274469 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:44 crc kubenswrapper[4830]: I0227 16:08:44.274505 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:44Z","lastTransitionTime":"2026-02-27T16:08:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:44 crc kubenswrapper[4830]: I0227 16:08:44.377579 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:44 crc kubenswrapper[4830]: I0227 16:08:44.377615 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:44 crc kubenswrapper[4830]: I0227 16:08:44.377627 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:44 crc kubenswrapper[4830]: I0227 16:08:44.377641 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:44 crc kubenswrapper[4830]: I0227 16:08:44.377653 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:44Z","lastTransitionTime":"2026-02-27T16:08:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:44 crc kubenswrapper[4830]: I0227 16:08:44.480549 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:44 crc kubenswrapper[4830]: I0227 16:08:44.480596 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:44 crc kubenswrapper[4830]: I0227 16:08:44.480613 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:44 crc kubenswrapper[4830]: I0227 16:08:44.480635 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:44 crc kubenswrapper[4830]: I0227 16:08:44.480652 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:44Z","lastTransitionTime":"2026-02-27T16:08:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:44 crc kubenswrapper[4830]: I0227 16:08:44.584441 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:44 crc kubenswrapper[4830]: I0227 16:08:44.584525 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:44 crc kubenswrapper[4830]: I0227 16:08:44.584543 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:44 crc kubenswrapper[4830]: I0227 16:08:44.584567 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:44 crc kubenswrapper[4830]: I0227 16:08:44.584584 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:44Z","lastTransitionTime":"2026-02-27T16:08:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:44 crc kubenswrapper[4830]: I0227 16:08:44.688339 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:44 crc kubenswrapper[4830]: I0227 16:08:44.688416 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:44 crc kubenswrapper[4830]: I0227 16:08:44.688439 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:44 crc kubenswrapper[4830]: I0227 16:08:44.688467 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:44 crc kubenswrapper[4830]: I0227 16:08:44.688485 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:44Z","lastTransitionTime":"2026-02-27T16:08:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:08:44 crc kubenswrapper[4830]: I0227 16:08:44.761338 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 16:08:44 crc kubenswrapper[4830]: I0227 16:08:44.761502 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 16:08:44 crc kubenswrapper[4830]: I0227 16:08:44.761616 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 16:08:44 crc kubenswrapper[4830]: E0227 16:08:44.761611 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 16:08:44 crc kubenswrapper[4830]: I0227 16:08:44.761662 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kgdlg" Feb 27 16:08:44 crc kubenswrapper[4830]: E0227 16:08:44.761993 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 16:08:44 crc kubenswrapper[4830]: E0227 16:08:44.762187 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 16:08:44 crc kubenswrapper[4830]: E0227 16:08:44.762252 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kgdlg" podUID="6ba2fe32-66e0-4bcd-a646-9d07c9a21c54" Feb 27 16:08:44 crc kubenswrapper[4830]: I0227 16:08:44.762600 4830 scope.go:117] "RemoveContainer" containerID="acf80f37f1e38e051b10c7a1c0eeef8f0c3db7fc0e5653c54d82da7956a8eed2" Feb 27 16:08:44 crc kubenswrapper[4830]: E0227 16:08:44.789599 4830 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Feb 27 16:08:44 crc kubenswrapper[4830]: I0227 16:08:44.796650 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bf9lh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3371981f3cdb5468ef6e9038108b3f9d884cec2f2db2f2754a3b5bf908d0372\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://273a7927f1c00c51d8c6157c6050f697430275ea8b1cac13ebe621d6556b4fef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a05759873cd3a2fb58756795e02d8247af4d8204388bf669ddc7cb1a8fbd7baa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc95e89196ccb52580164d57b311b57b2bcc7e666c3801f53378c59446c0e73f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05614397aaaa68f352f4263160c3463ce65513156517137bf09a3b5deb505c05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41cda3a7a261313b65434f3e8b885a9628d71f9c9a9c55f3e559226341c8aa41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://279aa904d7438057d878006234143ea48d8c40383ab648353776c01f12c147b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://279aa904d7438057d878006234143ea48d8c40383ab648353776c01f12c147b6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-27T16:08:30Z\\\",\\\"message\\\":\\\"Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service 
k8s.ovn.org/owner:openshift-console/console]} name:Service_openshift-console/console_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.194:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d7d7b270-1480-47f8-bdf9-690dbab310cb}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0227 16:08:30.816934 6938 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:29Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bf9lh_openshift-ovn-kubernetes(2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c9e139c0599bb81726ae3f42b0d9e6ee394e3d6641eb356cf5c63fe9260f4c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45e519054da5e98ba4be7deaac7911617f787b44d308bda8d4ee6fb7573336a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45e519054da5e98ba4
be7deaac7911617f787b44d308bda8d4ee6fb7573336a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bf9lh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:44Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:44 crc kubenswrapper[4830]: I0227 16:08:44.816113 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00d6b7ce-4757-4275-8345-60c1b546ce25\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58864ba639422794b99f6190c7ab9a81537913f51574220acb3838f97b4f9421\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqztt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d403fb60beeb11f3cb72e7ca134b83a9c519375
fbb6727d2070118a6b924516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqztt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2tv5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:44Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:44 crc kubenswrapper[4830]: I0227 16:08:44.835062 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gqgb6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd5a4c5b-2008-4354-b26e-8763a631e55c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32024ba3e19bc33cc8197b4dd6fdf165f6de8fd7c3b1e85b2f4768c75c50736a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87p77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14a90f11c3291e6bde2a008f2e8e617c9c8db
36ee65c3bbf26112bf8707d7cd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87p77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gqgb6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:44Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:44 crc kubenswrapper[4830]: I0227 16:08:44.856879 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"478eecce-80f0-4502-b435-b1cddaf017e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://635c013219126bc71bc7c3f7b7f27339ea0a53eace870778212c42ed22a682ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c414ab235e0a7321e6455f54b680cefce4120d0114676e10e9f79bf275964052\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T16:07:17Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0227 16:06:47.208783 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0227 16:06:47.215570 1 observer_polling.go:159] Starting file observer\\\\nI0227 16:06:47.322367 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0227 16:06:47.347440 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0227 16:07:17.594375 1 cmd.go:179] failed checking apiserver connectivity: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T16:06:46Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a576bccb73520a452eea30512283d7bca1141a05d0c697885d106fb132c3df7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a682f07e321a3dc0cbf11fc0b683893d4527f80d5b41ee627e645f3996cc3ae9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e96e9ad075a2fbed1c691bdd79b49300f3b485834a95834aabe1ca32f099fe1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"1
92.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:06:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:44Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:44 crc kubenswrapper[4830]: I0227 16:08:44.887293 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be 
located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:44Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:44 crc kubenswrapper[4830]: E0227 16:08:44.894894 4830 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 27 16:08:44 crc kubenswrapper[4830]: I0227 16:08:44.920733 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4995a66c4bc6d195947f67ef564aa18c7fe145e12067c2fca8e0de6b76b72f19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df5245f048370e7455a63a6ca01bffb8e514e22a123e45bc3a064ce4fbe27f45\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:44Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:44 crc kubenswrapper[4830]: I0227 16:08:44.951393 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92b20f888e852fcbe89e6d5c685b4b0fad411956bb3dc3c5ba0e10d6b206e131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-27T16:08:44Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:44 crc kubenswrapper[4830]: I0227 16:08:44.981147 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d20f886-cfdb-48c7-9754-6b7255b1124f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a3f71ef2ccab58baa94377c8f63ded83ca5f73a4d7a24d754719670685c55a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec37d31a82aedaacdf47fa540d508e6ca53626d082c45933c11a72f41a819a8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b6cb8e7a94967cd24e883272ba2f923a3adae43cc6105ea7506c1fba144de241\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://acf80f37f1e38e051b10c7a1c0eeef8f0c3db7fc0e5653c54d82da7956a8eed2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acf80f37f1e38e051b10c7a1c0eeef8f0c3db7fc0e5653c54d82da7956a8eed2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T16:07:52Z\\\",\\\"message\\\":\\\"g file observer\\\\nW0227 16:07:52.525113 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 16:07:52.525413 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 16:07:52.526578 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-70351561/tls.crt::/tmp/serving-cert-70351561/tls.key\\\\\\\"\\\\nI0227 16:07:52.790383 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 16:07:52.794547 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 16:07:52.794587 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 16:07:52.794627 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 16:07:52.794638 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 16:07:52.801197 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0227 16:07:52.801263 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 16:07:52.801293 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0227 16:07:52.801246 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0227 16:07:52.801325 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 16:07:52.801373 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 16:07:52.801389 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 16:07:52.801396 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0227 16:07:52.804476 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T16:07:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97ffad51412be8330198e944d4f687380d5066b777b54e362e8f8b772b808c89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1fc7975111dde3841671f046742eea3d0eefcb040eee8aadb99a7ca8eba8bd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1fc7975111dde3841671f046742eea3d0eefcb040eee8aadb99a7ca8eba8bd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:06:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:44Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:44 crc kubenswrapper[4830]: I0227 16:08:44.995988 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f62df11-40bd-4531-baa9-5b7ab5679a67\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5046080bbc4855769fb50909ff06b8c2cfd2438a0b65749c89302a4e97342980\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2
42b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97f85a479701374168505eed2e5a64ea144d1c48dce2022d60cb6ebaebfe159f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97f85a479701374168505eed2e5a64ea144d1c48dce2022d60cb6ebaebfe159f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:06:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:44Z is after 
2025-08-24T17:21:41Z" Feb 27 16:08:45 crc kubenswrapper[4830]: I0227 16:08:45.014542 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00cd7e51-371e-4b0a-bd9f-2f517b32dcc2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d696722e1b43f10155be828026a025360961994508157507a965f2fe04a0770\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/li
b/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e44c5ee059ace66f0a159049433d1bf023f1a9024d7f6b8202424022b808889\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5382b6063de637c6b85d3a34c9fc6963e653f4bb9f30ca7af478a89814f23c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5de0725eb122d0444c4c7bfb3b03c479dfb681cac98e8a4d52ca0eaa3cdd3aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c172ecdd34951e753faf3ec60d36500c2822650b74bb825ef9eeda6bb8d0356\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa08243b506aaad33dbf7184de0ee3b14f2dc444ddfd00ffef3d141ddea8a795\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restar
tCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa08243b506aaad33dbf7184de0ee3b14f2dc444ddfd00ffef3d141ddea8a795\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12436bc0b6c1591075f81684cc8726b8ade83e7d217f7c6e963ad1423db6b9ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12436bc0b6c1591075f81684cc8726b8ade83e7d217f7c6e963ad1423db6b9ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:06:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f63879efa3aa43d9b25f67e5181594aa22d31e9ce7fd480a5e9a1acd89103538\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f63879efa3aa43d9b25f67e5181594aa22d31e9ce7fd480a5e9a1acd89103538\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:06:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:06
:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:06:44Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:45Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:45 crc kubenswrapper[4830]: I0227 16:08:45.027652 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eebea1ec0fd24a7664f474db4e4cdb08e53c5c742c392900599dc3e7adcbee2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:45Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:45 crc kubenswrapper[4830]: I0227 16:08:45.036393 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p7298" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"616ebd42-6bbe-4536-ba35-f8b07f2f11b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a08121a78952c85326d07974c700397678777995d73cce9f24ace8932b3e9b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\
",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x9g9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p7298\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:45Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:45 crc kubenswrapper[4830]: I0227 16:08:45.048178 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kgdlg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ba2fe32-66e0-4bcd-a646-9d07c9a21c54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9l8vg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9l8vg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kgdlg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:45Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:45 crc kubenswrapper[4830]: I0227 16:08:45.061350 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:45Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:45 crc kubenswrapper[4830]: I0227 16:08:45.085055 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:45Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:45 crc kubenswrapper[4830]: I0227 16:08:45.098695 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fsrq9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb72b0f7-1d22-4d13-9653-b1607aa2235d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b684822980dfea25ae15f46337d198cbbb8656b55c38874d917c2fae68431aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwzp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fsrq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:45Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:45 crc kubenswrapper[4830]: I0227 16:08:45.115826 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rgv8f" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"672682a0-e75f-4d6c-b4f2-542944327497\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5efac20f84ec097b144c4cb152a6eba826b5eac50e222e2618b75bf163cbe93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b56418e015eaf4499e3e5e8ade43a2e4fb4803d5621b0db3f
b365dd185d375f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b56418e015eaf4499e3e5e8ade43a2e4fb4803d5621b0db3fb365dd185d375f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://441562bcb0ff6663b1affd84ae53348e082907782a89293730d0d076e9e84d9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://441562bcb0ff6663b1affd84ae53348e082907782a89293730d0d076e9e84d9d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:
08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be1e4280634a32e892b1dbc9bb4034909146da92eb6b01aa69bb7208447a7394\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be1e4280634a32e892b1dbc9bb4034909146da92eb6b01aa69bb7208447a7394\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://d37bf60056d482e28c3252074c7ae82c3bd9884ba28af16b639c21795c9e3f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d37bf60056d482e28c3252074c7ae82c3bd9884ba28af16b639c21795c9e3f3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f343d2da5177c14f48359b1377e123308e3686e4adfba0407940fe1368592548\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f343d2da5177c14f48359b1377e123308e3686e4adfba0407940fe1368592548\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:10Z\\\",\\\"r
eason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c27a1b90defc90e0a69236bb9c462d6e5fd36e6500f0f4c720ccd960b0d3ee42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c27a1b90defc90e0a69236bb9c462d6e5fd36e6500f0f4c720ccd960b0d3ee42\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rgv8f\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:45Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:45 crc kubenswrapper[4830]: I0227 16:08:45.129425 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fcddf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6adbc0c4-e467-41f1-9190-d0dd3693eba6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2a66caf75365a8658dd7422b2f715e7f095b5bd7d96c8a0a96ce1a80cb99849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{
\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zrsv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fcddf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:45Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:45 crc kubenswrapper[4830]: I0227 16:08:45.504074 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Feb 27 16:08:45 crc kubenswrapper[4830]: I0227 16:08:45.506078 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"2fd9cca8c6af573373bdd5d9c9bea88412fc8611274c4856a98f11a59599fcb6"} Feb 27 16:08:45 crc kubenswrapper[4830]: I0227 16:08:45.506661 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 16:08:45 crc kubenswrapper[4830]: I0227 16:08:45.520689 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f62df11-40bd-4531-baa9-5b7ab5679a67\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5046080bbc4855769fb50909ff06b8c2cfd2438a0b65749c89302a4e97342980\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97f85a479701374168505eed2e5a64ea144d1c48dce2022d60cb6ebaebfe159f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97f85a479701374168505eed2e5a64ea144d1c48dce2022d60cb6ebaebfe159f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:06:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:45Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:45 crc kubenswrapper[4830]: I0227 16:08:45.559215 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00cd7e51-371e-4b0a-bd9f-2f517b32dcc2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d696722e1b43f10155be828026a025360961994508157507a965f2fe04a0770\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e44c5ee059ace66f0a159049433d1bf023f1a9024d7f6b8202424022b808889\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5382b6063de637c6b85d3a34c9fc6963e653f4bb9f30ca7af478a89814f23c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5de0725eb122d0444c4c7bfb3b03c479dfb681cac98e8a4d52ca0eaa3cdd3aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c172ecdd34951e753faf3ec60d36500c2822650b74bb825ef9eeda6bb8d0356\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa08243b506aaad33dbf7184de0ee3b14f2dc444ddfd00ffef3d141ddea8a795\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa08243b506aaad33dbf7184de0ee3b14f2dc444ddfd00ffef3d141ddea8a795\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-27T16:06:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12436bc0b6c1591075f81684cc8726b8ade83e7d217f7c6e963ad1423db6b9ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12436bc0b6c1591075f81684cc8726b8ade83e7d217f7c6e963ad1423db6b9ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:06:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f63879efa3aa43d9b25f67e5181594aa22d31e9ce7fd480a5e9a1acd89103538\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f63879efa3aa43d9b25f67e5181594aa22d31e9ce7fd480a5e9a1acd89103538\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:06:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:06:44Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:45Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:45 crc kubenswrapper[4830]: I0227 16:08:45.572298 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eebea1ec0fd24a7664f474db4e4cdb08e53c5c742c392900599dc3e7adcbee2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:45Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:45 crc kubenswrapper[4830]: I0227 16:08:45.587876 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p7298" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"616ebd42-6bbe-4536-ba35-f8b07f2f11b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a08121a78952c85326d07974c700397678777995d73cce9f24ace8932b3e9b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x9g9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p7298\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:45Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:45 crc kubenswrapper[4830]: I0227 16:08:45.602203 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kgdlg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ba2fe32-66e0-4bcd-a646-9d07c9a21c54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9l8vg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9l8vg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kgdlg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:45Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:45 crc 
kubenswrapper[4830]: I0227 16:08:45.617510 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:45Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:45 crc kubenswrapper[4830]: I0227 16:08:45.638012 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:45Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:45 crc kubenswrapper[4830]: I0227 16:08:45.653306 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fsrq9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb72b0f7-1d22-4d13-9653-b1607aa2235d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b684822980dfea25ae15f46337d198cbbb8656b55c38874d917c2fae68431aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwzp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fsrq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:45Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:45 crc kubenswrapper[4830]: I0227 16:08:45.675191 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rgv8f" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"672682a0-e75f-4d6c-b4f2-542944327497\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5efac20f84ec097b144c4cb152a6eba826b5eac50e222e2618b75bf163cbe93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b56418e015eaf4499e3e5e8ade43a2e4fb4803d5621b0db3f
b365dd185d375f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b56418e015eaf4499e3e5e8ade43a2e4fb4803d5621b0db3fb365dd185d375f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://441562bcb0ff6663b1affd84ae53348e082907782a89293730d0d076e9e84d9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://441562bcb0ff6663b1affd84ae53348e082907782a89293730d0d076e9e84d9d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:
08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be1e4280634a32e892b1dbc9bb4034909146da92eb6b01aa69bb7208447a7394\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be1e4280634a32e892b1dbc9bb4034909146da92eb6b01aa69bb7208447a7394\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://d37bf60056d482e28c3252074c7ae82c3bd9884ba28af16b639c21795c9e3f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d37bf60056d482e28c3252074c7ae82c3bd9884ba28af16b639c21795c9e3f3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f343d2da5177c14f48359b1377e123308e3686e4adfba0407940fe1368592548\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f343d2da5177c14f48359b1377e123308e3686e4adfba0407940fe1368592548\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:10Z\\\",\\\"r
eason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c27a1b90defc90e0a69236bb9c462d6e5fd36e6500f0f4c720ccd960b0d3ee42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c27a1b90defc90e0a69236bb9c462d6e5fd36e6500f0f4c720ccd960b0d3ee42\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rgv8f\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:45Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:45 crc kubenswrapper[4830]: I0227 16:08:45.691619 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fcddf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6adbc0c4-e467-41f1-9190-d0dd3693eba6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2a66caf75365a8658dd7422b2f715e7f095b5bd7d96c8a0a96ce1a80cb99849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{
\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zrsv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fcddf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:45Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:45 crc kubenswrapper[4830]: I0227 16:08:45.709320 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"478eecce-80f0-4502-b435-b1cddaf017e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://635c013219126bc71bc7c3f7b7f27339ea0a53eace870778212c42ed22a682ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c414ab235e0a7321e6455f54b680cefce4120d0114676e10e9f79bf275964052\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T16:07:17Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0227 16:06:47.208783 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0227 16:06:47.215570 1 observer_polling.go:159] Starting file observer\\\\nI0227 16:06:47.322367 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0227 16:06:47.347440 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0227 16:07:17.594375 1 cmd.go:179] failed checking apiserver connectivity: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T16:06:46Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a576bccb73520a452eea30512283d7bca1141a05d0c697885d106fb132c3df7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a682f07e321a3dc0cbf11fc0b683893d4527f80d5b41ee627e645f3996cc3ae9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e96e9ad075a2fbed1c691bdd79b49300f3b485834a95834aabe1ca32f099fe1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"1
92.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:06:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:45Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:45 crc kubenswrapper[4830]: I0227 16:08:45.725008 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be 
located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:45Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:45 crc kubenswrapper[4830]: I0227 16:08:45.742147 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4995a66c4bc6d195947f67ef564aa18c7fe145e12067c2fca8e0de6b76b72f19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df5245f048370e7455a63a6ca01bffb8e514e22a123e45bc3a064ce4fbe27f45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:45Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:45 crc kubenswrapper[4830]: I0227 16:08:45.758186 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92b20f888e852fcbe89e6d5c685b4b0fad411956bb3dc3c5ba0e10d6b206e131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-27T16:08:45Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:45 crc kubenswrapper[4830]: I0227 16:08:45.791129 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bf9lh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3371981f3cdb5468ef6e9038108b3f9d884cec2f2db2f2754a3b5bf908d0372\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://273a7927f1c00c51d8c6157c6050f697430275ea8b1cac13ebe621d6556b4fef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a05759873cd3a2fb58756795e02d8247af4d8204388bf669ddc7cb1a8fbd7baa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc95e89196ccb52580164d57b311b57b2bcc7e666c3801f53378c59446c0e73f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:05Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05614397aaaa68f352f4263160c3463ce65513156517137bf09a3b5deb505c05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41cda3a7a261313b65434f3e8b885a9628d71f9c9a9c55f3e559226341c8aa41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://279aa904d7438057d878006234143ea48d8c40383ab648353776c01f12c147b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://279aa904d7438057d878006234143ea48d8c40383ab648353776c01f12c147b6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-27T16:08:30Z\\\",\\\"message\\\":\\\"Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-console/console]} name:Service_openshift-console/console_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} 
vips:{GoMap:map[10.217.5.194:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d7d7b270-1480-47f8-bdf9-690dbab310cb}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0227 16:08:30.816934 6938 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:29Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bf9lh_openshift-ovn-kubernetes(2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c9e139c0599bb81726ae3f42b0d9e6ee394e3d6641eb356cf5c63fe9260f4c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45e519054da5e98ba4be7deaac7911617f787b44d308bda8d4ee6fb7573336a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45e519054da5e98ba4
be7deaac7911617f787b44d308bda8d4ee6fb7573336a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bf9lh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:45Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:45 crc kubenswrapper[4830]: I0227 16:08:45.806412 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00d6b7ce-4757-4275-8345-60c1b546ce25\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58864ba639422794b99f6190c7ab9a81537913f51574220acb3838f97b4f9421\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqztt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d403fb60beeb11f3cb72e7ca134b83a9c519375
fbb6727d2070118a6b924516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqztt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2tv5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:45Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:45 crc kubenswrapper[4830]: I0227 16:08:45.820988 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gqgb6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd5a4c5b-2008-4354-b26e-8763a631e55c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32024ba3e19bc33cc8197b4dd6fdf165f6de8fd7c3b1e85b2f4768c75c50736a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87p77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14a90f11c3291e6bde2a008f2e8e617c9c8db
36ee65c3bbf26112bf8707d7cd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87p77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gqgb6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:45Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:45 crc kubenswrapper[4830]: I0227 16:08:45.842580 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d20f886-cfdb-48c7-9754-6b7255b1124f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a3f71ef2ccab58baa94377c8f63ded83ca5f73a4d7a24d754719670685c55a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec37d31a82aedaacdf47fa540d508e6ca53626d082c45933c11a72f41a819a8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6cb8e7a94967cd24e883272ba2f923a3adae43cc6105ea7506c1fba144de241\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fd9cca8c6af573373bdd5d9c9bea88412fc8611274c4856a98f11a59599fcb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acf80f37f1e38e051b10c7a1c0eeef8f0c3db7fc0e5653c54d82da7956a8eed2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T16:07:52Z\\\",\\\"message\\\":\\\"g file observer\\\\nW0227 16:07:52.525113 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 16:07:52.525413 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 16:07:52.526578 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-70351561/tls.crt::/tmp/serving-cert-70351561/tls.key\\\\\\\"\\\\nI0227 16:07:52.790383 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 16:07:52.794547 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 16:07:52.794587 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 16:07:52.794627 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 16:07:52.794638 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 16:07:52.801197 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0227 16:07:52.801263 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 16:07:52.801293 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0227 16:07:52.801246 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0227 16:07:52.801325 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 
16:07:52.801373 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 16:07:52.801389 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 16:07:52.801396 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0227 16:07:52.804476 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T16:07:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97ffad51412be8330198e944d4f687380d5066b777b54e362e8f8b772b808c89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1fc7975111dde3841671f046742eea3d0eefcb040eee8aadb99a7ca8eba8bd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1fc7975111dde3841671f046742eea3d0eefcb040eee8aadb99a7ca8eba8bd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:06:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:45Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:46 crc kubenswrapper[4830]: I0227 16:08:46.600203 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:46 crc kubenswrapper[4830]: I0227 16:08:46.600270 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:46 crc kubenswrapper[4830]: I0227 16:08:46.600290 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:46 crc kubenswrapper[4830]: I0227 16:08:46.600318 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:46 crc kubenswrapper[4830]: I0227 16:08:46.600336 4830 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:46Z","lastTransitionTime":"2026-02-27T16:08:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:08:46 crc kubenswrapper[4830]: E0227 16:08:46.622022 4830 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:08:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:08:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:08:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:08:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"058e4d33-3c10-460a-8f66-1f2272cb9956\\\",\\\"systemUUID\\\":\\\"1d4a94de-760c-40e1-8054-66d250f336ee\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:46Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:46 crc kubenswrapper[4830]: I0227 16:08:46.627340 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:46 crc kubenswrapper[4830]: I0227 16:08:46.627382 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:46 crc kubenswrapper[4830]: I0227 16:08:46.627399 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:46 crc kubenswrapper[4830]: I0227 16:08:46.627423 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:46 crc kubenswrapper[4830]: I0227 16:08:46.627440 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:46Z","lastTransitionTime":"2026-02-27T16:08:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:46 crc kubenswrapper[4830]: E0227 16:08:46.645318 4830 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:08:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:08:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:08:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:08:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"058e4d33-3c10-460a-8f66-1f2272cb9956\\\",\\\"systemUUID\\\":\\\"1d4a94de-760c-40e1-8054-66d250f336ee\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:46Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:46 crc kubenswrapper[4830]: I0227 16:08:46.649693 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:46 crc kubenswrapper[4830]: I0227 16:08:46.649731 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:46 crc kubenswrapper[4830]: I0227 16:08:46.649748 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:46 crc kubenswrapper[4830]: I0227 16:08:46.649768 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:46 crc kubenswrapper[4830]: I0227 16:08:46.649786 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:46Z","lastTransitionTime":"2026-02-27T16:08:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:46 crc kubenswrapper[4830]: E0227 16:08:46.670547 4830 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:08:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:08:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:08:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:08:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"058e4d33-3c10-460a-8f66-1f2272cb9956\\\",\\\"systemUUID\\\":\\\"1d4a94de-760c-40e1-8054-66d250f336ee\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:46Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:46 crc kubenswrapper[4830]: I0227 16:08:46.674705 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:46 crc kubenswrapper[4830]: I0227 16:08:46.674766 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:46 crc kubenswrapper[4830]: I0227 16:08:46.674790 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:46 crc kubenswrapper[4830]: I0227 16:08:46.674818 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:46 crc kubenswrapper[4830]: I0227 16:08:46.674842 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:46Z","lastTransitionTime":"2026-02-27T16:08:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:46 crc kubenswrapper[4830]: E0227 16:08:46.688726 4830 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:08:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:08:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:08:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:08:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"058e4d33-3c10-460a-8f66-1f2272cb9956\\\",\\\"systemUUID\\\":\\\"1d4a94de-760c-40e1-8054-66d250f336ee\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:46Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:46 crc kubenswrapper[4830]: I0227 16:08:46.692660 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:46 crc kubenswrapper[4830]: I0227 16:08:46.692729 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:46 crc kubenswrapper[4830]: I0227 16:08:46.692759 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:46 crc kubenswrapper[4830]: I0227 16:08:46.692785 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:46 crc kubenswrapper[4830]: I0227 16:08:46.692807 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:46Z","lastTransitionTime":"2026-02-27T16:08:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:46 crc kubenswrapper[4830]: E0227 16:08:46.717165 4830 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:08:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:08:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:08:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:08:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"058e4d33-3c10-460a-8f66-1f2272cb9956\\\",\\\"systemUUID\\\":\\\"1d4a94de-760c-40e1-8054-66d250f336ee\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:46Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:46 crc kubenswrapper[4830]: E0227 16:08:46.717384 4830 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 27 16:08:46 crc kubenswrapper[4830]: I0227 16:08:46.761629 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 16:08:46 crc kubenswrapper[4830]: I0227 16:08:46.761708 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kgdlg" Feb 27 16:08:46 crc kubenswrapper[4830]: E0227 16:08:46.761789 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 16:08:46 crc kubenswrapper[4830]: I0227 16:08:46.761719 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 16:08:46 crc kubenswrapper[4830]: E0227 16:08:46.761867 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kgdlg" podUID="6ba2fe32-66e0-4bcd-a646-9d07c9a21c54" Feb 27 16:08:46 crc kubenswrapper[4830]: I0227 16:08:46.761984 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 16:08:46 crc kubenswrapper[4830]: E0227 16:08:46.762049 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 16:08:46 crc kubenswrapper[4830]: E0227 16:08:46.762068 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 16:08:47 crc kubenswrapper[4830]: I0227 16:08:47.762694 4830 scope.go:117] "RemoveContainer" containerID="279aa904d7438057d878006234143ea48d8c40383ab648353776c01f12c147b6" Feb 27 16:08:47 crc kubenswrapper[4830]: E0227 16:08:47.762981 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-bf9lh_openshift-ovn-kubernetes(2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904)\"" pod="openshift-ovn-kubernetes/ovnkube-node-bf9lh" podUID="2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904" Feb 27 16:08:48 crc kubenswrapper[4830]: I0227 16:08:48.761549 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 16:08:48 crc kubenswrapper[4830]: E0227 16:08:48.762326 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 16:08:48 crc kubenswrapper[4830]: I0227 16:08:48.762382 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 16:08:48 crc kubenswrapper[4830]: I0227 16:08:48.762485 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kgdlg" Feb 27 16:08:48 crc kubenswrapper[4830]: I0227 16:08:48.762549 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 16:08:48 crc kubenswrapper[4830]: E0227 16:08:48.762866 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kgdlg" podUID="6ba2fe32-66e0-4bcd-a646-9d07c9a21c54" Feb 27 16:08:48 crc kubenswrapper[4830]: E0227 16:08:48.763027 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 16:08:48 crc kubenswrapper[4830]: E0227 16:08:48.762733 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 16:08:49 crc kubenswrapper[4830]: E0227 16:08:49.896434 4830 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 27 16:08:50 crc kubenswrapper[4830]: I0227 16:08:50.762495 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 16:08:50 crc kubenswrapper[4830]: I0227 16:08:50.762605 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 16:08:50 crc kubenswrapper[4830]: I0227 16:08:50.762511 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 16:08:50 crc kubenswrapper[4830]: I0227 16:08:50.762504 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-kgdlg" Feb 27 16:08:50 crc kubenswrapper[4830]: E0227 16:08:50.762803 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 16:08:50 crc kubenswrapper[4830]: E0227 16:08:50.762990 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kgdlg" podUID="6ba2fe32-66e0-4bcd-a646-9d07c9a21c54" Feb 27 16:08:50 crc kubenswrapper[4830]: E0227 16:08:50.763175 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 16:08:50 crc kubenswrapper[4830]: E0227 16:08:50.763350 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 16:08:52 crc kubenswrapper[4830]: I0227 16:08:52.543707 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fsrq9_bb72b0f7-1d22-4d13-9653-b1607aa2235d/kube-multus/0.log" Feb 27 16:08:52 crc kubenswrapper[4830]: I0227 16:08:52.543853 4830 generic.go:334] "Generic (PLEG): container finished" podID="bb72b0f7-1d22-4d13-9653-b1607aa2235d" containerID="4b684822980dfea25ae15f46337d198cbbb8656b55c38874d917c2fae68431aa" exitCode=1 Feb 27 16:08:52 crc kubenswrapper[4830]: I0227 16:08:52.544001 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-fsrq9" event={"ID":"bb72b0f7-1d22-4d13-9653-b1607aa2235d","Type":"ContainerDied","Data":"4b684822980dfea25ae15f46337d198cbbb8656b55c38874d917c2fae68431aa"} Feb 27 16:08:52 crc kubenswrapper[4830]: I0227 16:08:52.544642 4830 scope.go:117] "RemoveContainer" containerID="4b684822980dfea25ae15f46337d198cbbb8656b55c38874d917c2fae68431aa" Feb 27 16:08:52 crc kubenswrapper[4830]: I0227 16:08:52.568810 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d20f886-cfdb-48c7-9754-6b7255b1124f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a3f71ef2ccab58baa94377c8f63ded83ca5f73a4d7a24d754719670685c55a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec37d31a82aedaacdf47fa540d508e6ca53626d082c45933c11a72f41a819a8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6cb8e7a94967cd24e883272ba2f923a3adae43cc6105ea7506c1fba144de241\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fd9cca8c6af573373bdd5d9c9bea88412fc8611274c4856a98f11a59599fcb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acf80f37f1e38e051b10c7a1c0eeef8f0c3db7fc0e5653c54d82da7956a8eed2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T16:07:52Z\\\",\\\"message\\\":\\\"g file observer\\\\nW0227 16:07:52.525113 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 16:07:52.525413 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 16:07:52.526578 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-70351561/tls.crt::/tmp/serving-cert-70351561/tls.key\\\\\\\"\\\\nI0227 16:07:52.790383 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 16:07:52.794547 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 16:07:52.794587 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 16:07:52.794627 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 16:07:52.794638 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 16:07:52.801197 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0227 16:07:52.801263 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 16:07:52.801293 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0227 16:07:52.801246 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0227 16:07:52.801325 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 16:07:52.801373 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 16:07:52.801389 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 16:07:52.801396 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0227 16:07:52.804476 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T16:07:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97ffad51412be8330198e944d4f687380d5066b777b54e362e8f8b772b808c89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1fc7975111dde3841671f046742eea3d0eefcb040eee8aadb99a7ca8eba8bd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1fc7975111dde3841671f046742eea3d0eefcb040eee8aadb99a7ca8eba8bd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-27T16:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:06:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:52Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:52 crc kubenswrapper[4830]: I0227 16:08:52.584693 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kgdlg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ba2fe32-66e0-4bcd-a646-9d07c9a21c54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9l8vg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9l8vg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kgdlg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:52Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:52 crc 
kubenswrapper[4830]: I0227 16:08:52.599312 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f62df11-40bd-4531-baa9-5b7ab5679a67\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5046080bbc4855769fb50909ff06b8c2cfd2438a0b65749c89302a4e97342980\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\
":\\\"cri-o://97f85a479701374168505eed2e5a64ea144d1c48dce2022d60cb6ebaebfe159f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97f85a479701374168505eed2e5a64ea144d1c48dce2022d60cb6ebaebfe159f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:06:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:52Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:52 crc kubenswrapper[4830]: I0227 16:08:52.630386 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00cd7e51-371e-4b0a-bd9f-2f517b32dcc2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d696722e1b43f10155be828026a025360961994508157507a965f2fe04a0770\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e44c5ee059ace66f0a159049433d1bf023f1a9024d7f6b8202424022b808889\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5382b6063de637c6b85d3a34c9fc6963e653f4bb9f30ca7af478a89814f23c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5de0725eb122d0444c4c7bfb3b03c479dfb681cac98e8a4d52ca0eaa3cdd3aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c172ecdd34951e753faf3ec60d36500c2822650b74bb825ef9eeda6bb8d0356\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa08243b506aaad33dbf7184de0ee3b14f2dc444ddfd00ffef3d141ddea8a795\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa08243b506aaad33dbf7184de0ee3b14f2dc444ddfd00ffef3d141ddea8a795\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-27T16:06:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12436bc0b6c1591075f81684cc8726b8ade83e7d217f7c6e963ad1423db6b9ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12436bc0b6c1591075f81684cc8726b8ade83e7d217f7c6e963ad1423db6b9ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:06:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f63879efa3aa43d9b25f67e5181594aa22d31e9ce7fd480a5e9a1acd89103538\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f63879efa3aa43d9b25f67e5181594aa22d31e9ce7fd480a5e9a1acd89103538\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:06:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:06:44Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:52Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:52 crc kubenswrapper[4830]: I0227 16:08:52.653266 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eebea1ec0fd24a7664f474db4e4cdb08e53c5c742c392900599dc3e7adcbee2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:52Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:52 crc kubenswrapper[4830]: I0227 16:08:52.670302 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p7298" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"616ebd42-6bbe-4536-ba35-f8b07f2f11b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a08121a78952c85326d07974c700397678777995d73cce9f24ace8932b3e9b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x9g9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p7298\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:52Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:52 crc kubenswrapper[4830]: I0227 16:08:52.690433 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fcddf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6adbc0c4-e467-41f1-9190-d0dd3693eba6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2a66caf75365a8658dd7422b2f715e7f095b5bd7d96c8a0a96ce1a80cb99849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-d
ev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zrsv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fcddf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:52Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:52 crc kubenswrapper[4830]: I0227 16:08:52.711515 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:52Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:52 crc kubenswrapper[4830]: I0227 16:08:52.735014 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:52Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:52 crc kubenswrapper[4830]: I0227 16:08:52.759929 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fsrq9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb72b0f7-1d22-4d13-9653-b1607aa2235d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b684822980dfea25ae15f46337d198cbbb8656b55c38874d917c2fae68431aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b684822980dfea25ae15f46337d198cbbb8656b55c38874d917c2fae68431aa\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-27T16:08:52Z\\\",\\\"message\\\":\\\"2026-02-27T16:08:06+00:00 [cnibincopy] Successfully copied files in 
/usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_e9ebe1ab-38fe-4467-b8c9-4c8d8f85bd30\\\\n2026-02-27T16:08:06+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_e9ebe1ab-38fe-4467-b8c9-4c8d8f85bd30 to /host/opt/cni/bin/\\\\n2026-02-27T16:08:07Z [verbose] multus-daemon started\\\\n2026-02-27T16:08:07Z [verbose] Readiness Indicator file check\\\\n2026-02-27T16:08:52Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/c
ni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwzp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fsrq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:52Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:52 crc kubenswrapper[4830]: I0227 16:08:52.762367 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 16:08:52 crc kubenswrapper[4830]: I0227 16:08:52.762406 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kgdlg" Feb 27 16:08:52 crc kubenswrapper[4830]: I0227 16:08:52.762420 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 16:08:52 crc kubenswrapper[4830]: E0227 16:08:52.762572 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 16:08:52 crc kubenswrapper[4830]: I0227 16:08:52.762831 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 16:08:52 crc kubenswrapper[4830]: E0227 16:08:52.762922 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kgdlg" podUID="6ba2fe32-66e0-4bcd-a646-9d07c9a21c54" Feb 27 16:08:52 crc kubenswrapper[4830]: E0227 16:08:52.763091 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 16:08:52 crc kubenswrapper[4830]: E0227 16:08:52.763141 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 16:08:52 crc kubenswrapper[4830]: I0227 16:08:52.785670 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rgv8f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"672682a0-e75f-4d6c-b4f2-542944327497\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5efac20f84ec097b144c4cb152a6eba826b5eac50e222e2618b75bf163cbe93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b56418e015eaf4499e3e5e8ade43a2e4fb4803d5621b0db3fb365dd185d375f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b56418e015eaf4499e3e5e8ade43a2e4fb4803d5621b0db3fb365dd185d375f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://441562bcb0ff6663b1affd84ae53348e082907782a89293730d0d076e9e84d9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://441562bcb0ff6663b1affd84ae53348e082907782a89293730d0d076e9e84d9d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be1e4280634a32e892b1dbc9bb4034909146da92eb6b01aa69bb7208447a7394\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be1e4280634a32e892b1dbc9bb4034909146da92eb6b01aa69bb7208447a7394\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\"
,\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d37bf60056d482e28c3252074c7ae82c3bd9884ba28af16b639c21795c9e3f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d37bf60056d482e28c3252074c7ae82c3bd9884ba28af16b639c21795c9e3f3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f343d2da5177c14f48359b1377e123308e3686e4adfba0407940fe1368592548\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\"
:\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f343d2da5177c14f48359b1377e123308e3686e4adfba0407940fe1368592548\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c27a1b90defc90e0a69236bb9c462d6e5fd36e6500f0f4c720ccd960b0d3ee42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c27a1b90defc90e0a69236bb9c462d6e5fd36e6500f0f4c720ccd960b0d3ee42\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rgv8f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:52Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:52 crc kubenswrapper[4830]: I0227 16:08:52.801596 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92b20f888e852fcbe89e6d5c685b4b0fad411956bb3dc3c5ba0e10d6b206e131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\
\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:52Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:52 crc kubenswrapper[4830]: I0227 16:08:52.831399 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bf9lh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3371981f3cdb5468ef6e9038108b3f9d884cec2f2db2f2754a3b5bf908d0372\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://273a7927f1c00c51d8c6157c6050f697430275ea8b1cac13ebe621d6556b4fef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a05759873cd3a2fb58756795e02d8247af4d8204388bf669ddc7cb1a8fbd7baa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc95e89196ccb52580164d57b311b57b2bcc7e666c3801f53378c59446c0e73f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05614397aaaa68f352f4263160c3463ce65513156517137bf09a3b5deb505c05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41cda3a7a261313b65434f3e8b885a9628d71f9c9a9c55f3e559226341c8aa41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://279aa904d7438057d878006234143ea48d8c40383ab648353776c01f12c147b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://279aa904d7438057d878006234143ea48d8c40383ab648353776c01f12c147b6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-27T16:08:30Z\\\",\\\"message\\\":\\\"Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service 
k8s.ovn.org/owner:openshift-console/console]} name:Service_openshift-console/console_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.194:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d7d7b270-1480-47f8-bdf9-690dbab310cb}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0227 16:08:30.816934 6938 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:29Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bf9lh_openshift-ovn-kubernetes(2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c9e139c0599bb81726ae3f42b0d9e6ee394e3d6641eb356cf5c63fe9260f4c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45e519054da5e98ba4be7deaac7911617f787b44d308bda8d4ee6fb7573336a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45e519054da5e98ba4
be7deaac7911617f787b44d308bda8d4ee6fb7573336a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bf9lh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:52Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:52 crc kubenswrapper[4830]: I0227 16:08:52.851214 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00d6b7ce-4757-4275-8345-60c1b546ce25\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58864ba639422794b99f6190c7ab9a81537913f51574220acb3838f97b4f9421\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqztt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d403fb60beeb11f3cb72e7ca134b83a9c519375
fbb6727d2070118a6b924516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqztt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2tv5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:52Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:52 crc kubenswrapper[4830]: I0227 16:08:52.867897 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gqgb6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd5a4c5b-2008-4354-b26e-8763a631e55c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32024ba3e19bc33cc8197b4dd6fdf165f6de8fd7c3b1e85b2f4768c75c50736a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87p77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14a90f11c3291e6bde2a008f2e8e617c9c8db
36ee65c3bbf26112bf8707d7cd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87p77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gqgb6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:52Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:52 crc kubenswrapper[4830]: I0227 16:08:52.888289 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"478eecce-80f0-4502-b435-b1cddaf017e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://635c013219126bc71bc7c3f7b7f27339ea0a53eace870778212c42ed22a682ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c414ab235e0a7321e6455f54b680cefce4120d0114676e10e9f79bf275964052\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T16:07:17Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0227 16:06:47.208783 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0227 16:06:47.215570 1 observer_polling.go:159] Starting file observer\\\\nI0227 16:06:47.322367 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0227 16:06:47.347440 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0227 16:07:17.594375 1 cmd.go:179] failed checking apiserver connectivity: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T16:06:46Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a576bccb73520a452eea30512283d7bca1141a05d0c697885d106fb132c3df7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a682f07e321a3dc0cbf11fc0b683893d4527f80d5b41ee627e645f3996cc3ae9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e96e9ad075a2fbed1c691bdd79b49300f3b485834a95834aabe1ca32f099fe1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"1
92.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:06:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:52Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:52 crc kubenswrapper[4830]: I0227 16:08:52.908432 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be 
located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:52Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:52 crc kubenswrapper[4830]: I0227 16:08:52.926932 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4995a66c4bc6d195947f67ef564aa18c7fe145e12067c2fca8e0de6b76b72f19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df5245f048370e7455a63a6ca01bffb8e514e22a123e45bc3a064ce4fbe27f45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:52Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:53 crc kubenswrapper[4830]: I0227 16:08:53.551145 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fsrq9_bb72b0f7-1d22-4d13-9653-b1607aa2235d/kube-multus/0.log" Feb 27 16:08:53 crc kubenswrapper[4830]: I0227 16:08:53.551243 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-fsrq9" event={"ID":"bb72b0f7-1d22-4d13-9653-b1607aa2235d","Type":"ContainerStarted","Data":"787ecf758c99d969efde354b66bb37b5f9a8c2d79cccd8bab200423af61d4109"} Feb 27 16:08:53 crc kubenswrapper[4830]: I0227 16:08:53.577425 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fsrq9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb72b0f7-1d22-4d13-9653-b1607aa2235d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://787ecf758c99d969efde354b66bb37b5f9a8c2d79cccd8bab200423af61d4109\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b684822980dfea25ae15f46337d198cbbb8656b55c38874d917c2fae68431aa\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-27T16:08:52Z\\\",\\\"message\\\":\\\"2026-02-27T16:08:06+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_e9ebe1ab-38fe-4467-b8c9-4c8d8f85bd30\\\\n2026-02-27T16:08:06+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_e9ebe1ab-38fe-4467-b8c9-4c8d8f85bd30 to /host/opt/cni/bin/\\\\n2026-02-27T16:08:07Z [verbose] multus-daemon started\\\\n2026-02-27T16:08:07Z [verbose] 
Readiness Indicator file check\\\\n2026-02-27T16:08:52Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwzp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fsrq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:53Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:53 crc kubenswrapper[4830]: I0227 16:08:53.602580 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rgv8f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"672682a0-e75f-4d6c-b4f2-542944327497\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5e
fac20f84ec097b144c4cb152a6eba826b5eac50e222e2618b75bf163cbe93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b56418e015eaf4499e3e5e8ade43a2e4fb4803d5621b0db3fb365dd185d375f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b56418e015eaf4499e3e5e8ade43a2e4fb4803d5621b0db3fb365dd185d375f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly
\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://441562bcb0ff6663b1affd84ae53348e082907782a89293730d0d076e9e84d9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://441562bcb0ff6663b1affd84ae53348e082907782a89293730d0d076e9e84d9d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be1e4280634a32e892b1dbc9bb4034909146da92eb6b01aa69bb7208447a7394\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203b
b2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be1e4280634a32e892b1dbc9bb4034909146da92eb6b01aa69bb7208447a7394\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d37bf60056d482e28c3252074c7ae82c3bd9884ba28af16b639c21795c9e3f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d37bf60056d482e28c3252074c7ae82c3bd9884ba28af16b639c21795c9e3f3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-rel
ease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f343d2da5177c14f48359b1377e123308e3686e4adfba0407940fe1368592548\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f343d2da5177c14f48359b1377e123308e3686e4adfba0407940fe1368592548\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c27a1b90defc90e0a69236bb9c462d6e5fd36e6500f0f4c720ccd960b0d3ee42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-c
ni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c27a1b90defc90e0a69236bb9c462d6e5fd36e6500f0f4c720ccd960b0d3ee42\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rgv8f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:53Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:53 crc kubenswrapper[4830]: I0227 16:08:53.618246 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fcddf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6adbc0c4-e467-41f1-9190-d0dd3693eba6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2a66caf75365a8658dd7422b2f715e7f095b5bd7d96c8a0a96ce1a80cb99849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zrsv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fcddf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:53Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:53 crc kubenswrapper[4830]: I0227 16:08:53.637039 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:53Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:53 crc kubenswrapper[4830]: I0227 16:08:53.663423 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:53Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:53 crc kubenswrapper[4830]: I0227 16:08:53.684122 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:53Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:53 crc kubenswrapper[4830]: I0227 16:08:53.702783 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4995a66c4bc6d195947f67ef564aa18c7fe145e12067c2fca8e0de6b76b72f19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df5245f048370e7455a63a6ca01bffb8e514e22a123e45bc3a064ce4fbe27f45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:53Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:53 crc kubenswrapper[4830]: I0227 16:08:53.721724 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92b20f888e852fcbe89e6d5c685b4b0fad411956bb3dc3c5ba0e10d6b206e131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-27T16:08:53Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:53 crc kubenswrapper[4830]: I0227 16:08:53.752773 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bf9lh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3371981f3cdb5468ef6e9038108b3f9d884cec2f2db2f2754a3b5bf908d0372\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://273a7927f1c00c51d8c6157c6050f697430275ea8b1cac13ebe621d6556b4fef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a05759873cd3a2fb58756795e02d8247af4d8204388bf669ddc7cb1a8fbd7baa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc95e89196ccb52580164d57b311b57b2bcc7e666c3801f53378c59446c0e73f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:05Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05614397aaaa68f352f4263160c3463ce65513156517137bf09a3b5deb505c05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41cda3a7a261313b65434f3e8b885a9628d71f9c9a9c55f3e559226341c8aa41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://279aa904d7438057d878006234143ea48d8c40383ab648353776c01f12c147b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://279aa904d7438057d878006234143ea48d8c40383ab648353776c01f12c147b6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-27T16:08:30Z\\\",\\\"message\\\":\\\"Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-console/console]} name:Service_openshift-console/console_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} 
vips:{GoMap:map[10.217.5.194:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d7d7b270-1480-47f8-bdf9-690dbab310cb}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0227 16:08:30.816934 6938 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:29Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bf9lh_openshift-ovn-kubernetes(2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c9e139c0599bb81726ae3f42b0d9e6ee394e3d6641eb356cf5c63fe9260f4c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45e519054da5e98ba4be7deaac7911617f787b44d308bda8d4ee6fb7573336a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45e519054da5e98ba4
be7deaac7911617f787b44d308bda8d4ee6fb7573336a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bf9lh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:53Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:53 crc kubenswrapper[4830]: I0227 16:08:53.770900 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00d6b7ce-4757-4275-8345-60c1b546ce25\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58864ba639422794b99f6190c7ab9a81537913f51574220acb3838f97b4f9421\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqztt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d403fb60beeb11f3cb72e7ca134b83a9c519375
fbb6727d2070118a6b924516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqztt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2tv5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:53Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:53 crc kubenswrapper[4830]: I0227 16:08:53.788254 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gqgb6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd5a4c5b-2008-4354-b26e-8763a631e55c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32024ba3e19bc33cc8197b4dd6fdf165f6de8fd7c3b1e85b2f4768c75c50736a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87p77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14a90f11c3291e6bde2a008f2e8e617c9c8db
36ee65c3bbf26112bf8707d7cd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87p77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gqgb6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:53Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:53 crc kubenswrapper[4830]: I0227 16:08:53.809500 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"478eecce-80f0-4502-b435-b1cddaf017e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://635c013219126bc71bc7c3f7b7f27339ea0a53eace870778212c42ed22a682ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c414ab235e0a7321e6455f54b680cefce4120d0114676e10e9f79bf275964052\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T16:07:17Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0227 16:06:47.208783 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0227 16:06:47.215570 1 observer_polling.go:159] Starting file observer\\\\nI0227 16:06:47.322367 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0227 16:06:47.347440 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0227 16:07:17.594375 1 cmd.go:179] failed checking apiserver connectivity: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T16:06:46Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a576bccb73520a452eea30512283d7bca1141a05d0c697885d106fb132c3df7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a682f07e321a3dc0cbf11fc0b683893d4527f80d5b41ee627e645f3996cc3ae9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e96e9ad075a2fbed1c691bdd79b49300f3b485834a95834aabe1ca32f099fe1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"1
92.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:06:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:53Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:53 crc kubenswrapper[4830]: I0227 16:08:53.830619 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d20f886-cfdb-48c7-9754-6b7255b1124f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a3f71ef2ccab58baa94377c8f63ded83ca5f73a4d7a24d754719670685c55a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec37d31a82aedaacdf47fa540d508e6ca53626d082c45933c11a72f41a819a8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b6cb8e7a94967cd24e883272ba2f923a3adae43cc6105ea7506c1fba144de241\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fd9cca8c6af573373bdd5d9c9bea88412fc8611274c4856a98f11a59599fcb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acf80f37f1e38e051b10c7a1c0eeef8f0c3db7fc0e5653c54d82da7956a8eed2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T16:07:52Z\\\",\\\"message\\\":\\\"g file observer\\\\nW0227 16:07:52.525113 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 16:07:52.525413 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 16:07:52.526578 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-70351561/tls.crt::/tmp/serving-cert-70351561/tls.key\\\\\\\"\\\\nI0227 16:07:52.790383 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 16:07:52.794547 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 16:07:52.794587 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 16:07:52.794627 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 16:07:52.794638 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 16:07:52.801197 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0227 16:07:52.801263 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 16:07:52.801293 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0227 16:07:52.801246 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0227 16:07:52.801325 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 16:07:52.801373 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 16:07:52.801389 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 16:07:52.801396 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0227 16:07:52.804476 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T16:07:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97ffad51412be8330198e944d4f687380d5066b777b54e362e8f8b772b808c89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1fc7975111dde3841671f046742eea3d0eefcb040eee8aadb99a7ca8eba8bd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1fc7975111dde3841671f046742eea3d0eefcb040eee8aadb99a7ca8eba8bd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-27T16:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:06:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:53Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:53 crc kubenswrapper[4830]: I0227 16:08:53.851360 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eebea1ec0fd24a7664f474db4e4cdb08e53c5c742c392900599dc3e7adcbee2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:53Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:53 crc kubenswrapper[4830]: I0227 16:08:53.866884 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p7298" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"616ebd42-6bbe-4536-ba35-f8b07f2f11b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a08121a78952c85326d07974c700397678777995d73cce9f24ace8932b3e9b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x9g9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p7298\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:53Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:53 crc kubenswrapper[4830]: I0227 16:08:53.882850 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kgdlg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ba2fe32-66e0-4bcd-a646-9d07c9a21c54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9l8vg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9l8vg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kgdlg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:53Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:53 crc 
kubenswrapper[4830]: I0227 16:08:53.900647 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f62df11-40bd-4531-baa9-5b7ab5679a67\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5046080bbc4855769fb50909ff06b8c2cfd2438a0b65749c89302a4e97342980\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\
":\\\"cri-o://97f85a479701374168505eed2e5a64ea144d1c48dce2022d60cb6ebaebfe159f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97f85a479701374168505eed2e5a64ea144d1c48dce2022d60cb6ebaebfe159f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:06:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:53Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:53 crc kubenswrapper[4830]: I0227 16:08:53.933346 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00cd7e51-371e-4b0a-bd9f-2f517b32dcc2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d696722e1b43f10155be828026a025360961994508157507a965f2fe04a0770\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e44c5ee059ace66f0a159049433d1bf023f1a9024d7f6b8202424022b808889\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5382b6063de637c6b85d3a34c9fc6963e653f4bb9f30ca7af478a89814f23c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5de0725eb122d0444c4c7bfb3b03c479dfb681cac98e8a4d52ca0eaa3cdd3aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c172ecdd34951e753faf3ec60d36500c2822650b74bb825ef9eeda6bb8d0356\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa08243b506aaad33dbf7184de0ee3b14f2dc444ddfd00ffef3d141ddea8a795\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa08243b506aaad33dbf7184de0ee3b14f2dc444ddfd00ffef3d141ddea8a795\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-27T16:06:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12436bc0b6c1591075f81684cc8726b8ade83e7d217f7c6e963ad1423db6b9ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12436bc0b6c1591075f81684cc8726b8ade83e7d217f7c6e963ad1423db6b9ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:06:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f63879efa3aa43d9b25f67e5181594aa22d31e9ce7fd480a5e9a1acd89103538\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f63879efa3aa43d9b25f67e5181594aa22d31e9ce7fd480a5e9a1acd89103538\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:06:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:06:44Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:53Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:54 crc kubenswrapper[4830]: I0227 16:08:54.761826 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kgdlg" Feb 27 16:08:54 crc kubenswrapper[4830]: I0227 16:08:54.761858 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 16:08:54 crc kubenswrapper[4830]: I0227 16:08:54.761874 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 16:08:54 crc kubenswrapper[4830]: E0227 16:08:54.762033 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kgdlg" podUID="6ba2fe32-66e0-4bcd-a646-9d07c9a21c54" Feb 27 16:08:54 crc kubenswrapper[4830]: I0227 16:08:54.762060 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 16:08:54 crc kubenswrapper[4830]: E0227 16:08:54.762194 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 16:08:54 crc kubenswrapper[4830]: E0227 16:08:54.762337 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 16:08:54 crc kubenswrapper[4830]: E0227 16:08:54.762432 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 16:08:54 crc kubenswrapper[4830]: I0227 16:08:54.783340 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d20f886-cfdb-48c7-9754-6b7255b1124f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a3f71ef2ccab58baa94377c8f63ded83ca5f73a4d7a24d754719670685c55a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec37d31a82aedaacdf47fa540d508e6ca53626d082c45933c11a72f41a819a8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b6cb8e7a94967cd24e883272ba2f923a3adae43cc6105ea7506c1fba144de241\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fd9cca8c6af573373bdd5d9c9bea88412fc8611274c4856a98f11a59599fcb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acf80f37f1e38e051b10c7a1c0eeef8f0c3db7fc0e5653c54d82da7956a8eed2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T16:07:52Z\\\",\\\"message\\\":\\\"g file observer\\\\nW0227 16:07:52.525113 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 16:07:52.525413 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 16:07:52.526578 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-70351561/tls.crt::/tmp/serving-cert-70351561/tls.key\\\\\\\"\\\\nI0227 16:07:52.790383 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 16:07:52.794547 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 16:07:52.794587 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 16:07:52.794627 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 16:07:52.794638 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 16:07:52.801197 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0227 16:07:52.801263 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 16:07:52.801293 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0227 16:07:52.801246 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0227 16:07:52.801325 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 16:07:52.801373 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 16:07:52.801389 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 16:07:52.801396 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0227 16:07:52.804476 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T16:07:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97ffad51412be8330198e944d4f687380d5066b777b54e362e8f8b772b808c89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1fc7975111dde3841671f046742eea3d0eefcb040eee8aadb99a7ca8eba8bd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1fc7975111dde3841671f046742eea3d0eefcb040eee8aadb99a7ca8eba8bd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-27T16:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:06:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:54Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:54 crc kubenswrapper[4830]: I0227 16:08:54.796275 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f62df11-40bd-4531-baa9-5b7ab5679a67\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5046080bbc4855769fb50909ff06b8c2cfd2438a0b65749c89302a4e97342980\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef31
8bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97f85a479701374168505eed2e5a64ea144d1c48dce2022d60cb6ebaebfe159f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97f85a479701374168505eed2e5a64ea144d1c48dce2022d60cb6ebaebfe159f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:06:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:54Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:54 crc kubenswrapper[4830]: I0227 16:08:54.827382 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00cd7e51-371e-4b0a-bd9f-2f517b32dcc2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d696722e1b43f10155be828026a025360961994508157507a965f2fe04a0770\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\
"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e44c5ee059ace66f0a159049433d1bf023f1a9024d7f6b8202424022b808889\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5382b6063de637c6b85d3a34c9fc6963e653f4bb9f30ca7af478a89814f23c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5de0725eb122d0444c4c7bfb3b03c479dfb681cac98e8a4d52ca0eaa3cdd3aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019
bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c172ecdd34951e753faf3ec60d36500c2822650b74bb825ef9eeda6bb8d0356\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa08243b506aaad33dbf7184de0ee3b14f2dc444ddfd00ffef3d141ddea8a795\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646f
b68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa08243b506aaad33dbf7184de0ee3b14f2dc444ddfd00ffef3d141ddea8a795\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12436bc0b6c1591075f81684cc8726b8ade83e7d217f7c6e963ad1423db6b9ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12436bc0b6c1591075f81684cc8726b8ade83e7d217f7c6e963ad1423db6b9ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:06:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f63879efa3aa43d9b25f67e5181594aa22d31e9ce7fd480a5e9a1acd89103538\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f63879efa3aa43d9b25f67e5181594aa22d31e9ce7fd480a5e9a1acd89103538\\\",\\\"exitCode\\
\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:06:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:06:44Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:54Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:54 crc kubenswrapper[4830]: I0227 16:08:54.848585 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eebea1ec0fd24a7664f474db4e4cdb08e53c5c742c392900599dc3e7adcbee2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:54Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:54 crc kubenswrapper[4830]: I0227 16:08:54.864539 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p7298" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"616ebd42-6bbe-4536-ba35-f8b07f2f11b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a08121a78952c85326d07974c700397678777995d73cce9f24ace8932b3e9b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\
",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x9g9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p7298\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:54Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:54 crc kubenswrapper[4830]: I0227 16:08:54.889639 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kgdlg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ba2fe32-66e0-4bcd-a646-9d07c9a21c54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9l8vg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9l8vg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kgdlg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:54Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:54 crc kubenswrapper[4830]: E0227 16:08:54.897765 4830 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 27 16:08:54 crc kubenswrapper[4830]: I0227 16:08:54.910836 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:54Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:54 crc kubenswrapper[4830]: I0227 16:08:54.930211 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:54Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:54 crc kubenswrapper[4830]: I0227 16:08:54.951900 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fsrq9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb72b0f7-1d22-4d13-9653-b1607aa2235d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://787ecf758c99d969efde354b66bb37b5f9a8c2d79cccd8bab200423af61d4109\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b684822980dfea25ae15f46337d198cbbb8656b55c38874d917c2fae68431aa\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-27T16:08:52Z\\\",\\\"message\\\":\\\"2026-02-27T16:08:06+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_e9ebe1ab-38fe-4467-b8c9-4c8d8f85bd30\\\\n2026-02-27T16:08:06+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_e9ebe1ab-38fe-4467-b8c9-4c8d8f85bd30 to /host/opt/cni/bin/\\\\n2026-02-27T16:08:07Z [verbose] multus-daemon started\\\\n2026-02-27T16:08:07Z [verbose] 
Readiness Indicator file check\\\\n2026-02-27T16:08:52Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwzp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fsrq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:54Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:54 crc kubenswrapper[4830]: I0227 16:08:54.976072 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rgv8f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"672682a0-e75f-4d6c-b4f2-542944327497\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5e
fac20f84ec097b144c4cb152a6eba826b5eac50e222e2618b75bf163cbe93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b56418e015eaf4499e3e5e8ade43a2e4fb4803d5621b0db3fb365dd185d375f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b56418e015eaf4499e3e5e8ade43a2e4fb4803d5621b0db3fb365dd185d375f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly
\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://441562bcb0ff6663b1affd84ae53348e082907782a89293730d0d076e9e84d9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://441562bcb0ff6663b1affd84ae53348e082907782a89293730d0d076e9e84d9d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be1e4280634a32e892b1dbc9bb4034909146da92eb6b01aa69bb7208447a7394\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203b
b2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be1e4280634a32e892b1dbc9bb4034909146da92eb6b01aa69bb7208447a7394\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d37bf60056d482e28c3252074c7ae82c3bd9884ba28af16b639c21795c9e3f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d37bf60056d482e28c3252074c7ae82c3bd9884ba28af16b639c21795c9e3f3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-rel
ease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f343d2da5177c14f48359b1377e123308e3686e4adfba0407940fe1368592548\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f343d2da5177c14f48359b1377e123308e3686e4adfba0407940fe1368592548\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c27a1b90defc90e0a69236bb9c462d6e5fd36e6500f0f4c720ccd960b0d3ee42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-c
ni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c27a1b90defc90e0a69236bb9c462d6e5fd36e6500f0f4c720ccd960b0d3ee42\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rgv8f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:54Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:54 crc kubenswrapper[4830]: I0227 16:08:54.992372 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fcddf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6adbc0c4-e467-41f1-9190-d0dd3693eba6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2a66caf75365a8658dd7422b2f715e7f095b5bd7d96c8a0a96ce1a80cb99849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zrsv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fcddf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:54Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:55 crc kubenswrapper[4830]: I0227 16:08:55.012810 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"478eecce-80f0-4502-b435-b1cddaf017e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://635c013219126bc71bc7c3f7b7f27339ea0a53eace870778212c42ed22a682ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"c
ontainerID\\\":\\\"cri-o://c414ab235e0a7321e6455f54b680cefce4120d0114676e10e9f79bf275964052\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T16:07:17Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0227 16:06:47.208783 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0227 16:06:47.215570 1 observer_polling.go:159] Starting file observer\\\\nI0227 16:06:47.322367 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0227 16:06:47.347440 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0227 16:07:17.594375 1 cmd.go:179] failed checking apiserver connectivity: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T16:06:46Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a576bccb73520a452eea30512283d7bca1141a05d0c697885d106fb132c3df7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a682f07e321a3dc0cbf11fc0b683893d4527f80d5b41ee627e645f3996cc3ae9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e96e9ad075a2fbed1c691bdd79b49300f3b485834a95834aabe1ca32f099fe1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:06:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:55Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:55 crc kubenswrapper[4830]: I0227 16:08:55.034204 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:55Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:55 crc kubenswrapper[4830]: I0227 16:08:55.051914 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4995a66c4bc6d195947f67ef564aa18c7fe145e12067c2fca8e0de6b76b72f19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df5245f048370e7455a63a6ca01bffb8e514e22a123e45bc3a064ce4fbe27f45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:55Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:55 crc kubenswrapper[4830]: I0227 16:08:55.070685 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92b20f888e852fcbe89e6d5c685b4b0fad411956bb3dc3c5ba0e10d6b206e131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-27T16:08:55Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:55 crc kubenswrapper[4830]: I0227 16:08:55.102082 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bf9lh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3371981f3cdb5468ef6e9038108b3f9d884cec2f2db2f2754a3b5bf908d0372\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://273a7927f1c00c51d8c6157c6050f697430275ea8b1cac13ebe621d6556b4fef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a05759873cd3a2fb58756795e02d8247af4d8204388bf669ddc7cb1a8fbd7baa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc95e89196ccb52580164d57b311b57b2bcc7e666c3801f53378c59446c0e73f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:05Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05614397aaaa68f352f4263160c3463ce65513156517137bf09a3b5deb505c05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41cda3a7a261313b65434f3e8b885a9628d71f9c9a9c55f3e559226341c8aa41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://279aa904d7438057d878006234143ea48d8c40383ab648353776c01f12c147b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://279aa904d7438057d878006234143ea48d8c40383ab648353776c01f12c147b6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-27T16:08:30Z\\\",\\\"message\\\":\\\"Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-console/console]} name:Service_openshift-console/console_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} 
vips:{GoMap:map[10.217.5.194:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d7d7b270-1480-47f8-bdf9-690dbab310cb}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0227 16:08:30.816934 6938 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:29Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bf9lh_openshift-ovn-kubernetes(2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c9e139c0599bb81726ae3f42b0d9e6ee394e3d6641eb356cf5c63fe9260f4c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45e519054da5e98ba4be7deaac7911617f787b44d308bda8d4ee6fb7573336a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45e519054da5e98ba4
be7deaac7911617f787b44d308bda8d4ee6fb7573336a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bf9lh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:55Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:55 crc kubenswrapper[4830]: I0227 16:08:55.121051 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00d6b7ce-4757-4275-8345-60c1b546ce25\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58864ba639422794b99f6190c7ab9a81537913f51574220acb3838f97b4f9421\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqztt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d403fb60beeb11f3cb72e7ca134b83a9c519375
fbb6727d2070118a6b924516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqztt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2tv5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:55Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:55 crc kubenswrapper[4830]: I0227 16:08:55.139223 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gqgb6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd5a4c5b-2008-4354-b26e-8763a631e55c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32024ba3e19bc33cc8197b4dd6fdf165f6de8fd7c3b1e85b2f4768c75c50736a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87p77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14a90f11c3291e6bde2a008f2e8e617c9c8db
36ee65c3bbf26112bf8707d7cd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87p77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gqgb6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:55Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:56 crc kubenswrapper[4830]: I0227 16:08:56.751498 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:56 crc kubenswrapper[4830]: I0227 16:08:56.751558 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:56 crc kubenswrapper[4830]: I0227 16:08:56.751574 4830 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:56 crc kubenswrapper[4830]: I0227 16:08:56.751596 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:56 crc kubenswrapper[4830]: I0227 16:08:56.751612 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:56Z","lastTransitionTime":"2026-02-27T16:08:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:08:56 crc kubenswrapper[4830]: I0227 16:08:56.762498 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 16:08:56 crc kubenswrapper[4830]: I0227 16:08:56.762548 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 16:08:56 crc kubenswrapper[4830]: I0227 16:08:56.762522 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kgdlg" Feb 27 16:08:56 crc kubenswrapper[4830]: I0227 16:08:56.762524 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 16:08:56 crc kubenswrapper[4830]: E0227 16:08:56.762706 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 16:08:56 crc kubenswrapper[4830]: E0227 16:08:56.762774 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 16:08:56 crc kubenswrapper[4830]: E0227 16:08:56.762893 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 16:08:56 crc kubenswrapper[4830]: E0227 16:08:56.763099 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kgdlg" podUID="6ba2fe32-66e0-4bcd-a646-9d07c9a21c54" Feb 27 16:08:56 crc kubenswrapper[4830]: E0227 16:08:56.770395 4830 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:08:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:08:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:08:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:08:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"058e4d33-3c10-460a-8f66-1f2272cb9956\\\",\\\"systemUUID\\\":\\\"1d4a94de-760c-40e1-8054-66d250f336ee\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:56Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:56 crc kubenswrapper[4830]: I0227 16:08:56.775791 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:56 crc kubenswrapper[4830]: I0227 16:08:56.775844 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:56 crc kubenswrapper[4830]: I0227 16:08:56.775860 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:56 crc kubenswrapper[4830]: I0227 16:08:56.775883 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:56 crc kubenswrapper[4830]: I0227 16:08:56.775902 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:56Z","lastTransitionTime":"2026-02-27T16:08:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:56 crc kubenswrapper[4830]: E0227 16:08:56.796396 4830 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:08:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:08:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:08:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:08:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"058e4d33-3c10-460a-8f66-1f2272cb9956\\\",\\\"systemUUID\\\":\\\"1d4a94de-760c-40e1-8054-66d250f336ee\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:56Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:56 crc kubenswrapper[4830]: I0227 16:08:56.801811 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:56 crc kubenswrapper[4830]: I0227 16:08:56.801868 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:56 crc kubenswrapper[4830]: I0227 16:08:56.801885 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:56 crc kubenswrapper[4830]: I0227 16:08:56.801907 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:56 crc kubenswrapper[4830]: I0227 16:08:56.801933 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:56Z","lastTransitionTime":"2026-02-27T16:08:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:56 crc kubenswrapper[4830]: E0227 16:08:56.823006 4830 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:08:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:08:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:08:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:08:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"058e4d33-3c10-460a-8f66-1f2272cb9956\\\",\\\"systemUUID\\\":\\\"1d4a94de-760c-40e1-8054-66d250f336ee\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:56Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:56 crc kubenswrapper[4830]: I0227 16:08:56.828134 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:56 crc kubenswrapper[4830]: I0227 16:08:56.828193 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:56 crc kubenswrapper[4830]: I0227 16:08:56.828211 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:56 crc kubenswrapper[4830]: I0227 16:08:56.828235 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:56 crc kubenswrapper[4830]: I0227 16:08:56.828252 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:56Z","lastTransitionTime":"2026-02-27T16:08:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:56 crc kubenswrapper[4830]: E0227 16:08:56.846852 4830 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:08:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:08:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:08:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:08:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"058e4d33-3c10-460a-8f66-1f2272cb9956\\\",\\\"systemUUID\\\":\\\"1d4a94de-760c-40e1-8054-66d250f336ee\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:56Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:56 crc kubenswrapper[4830]: I0227 16:08:56.851910 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:08:56 crc kubenswrapper[4830]: I0227 16:08:56.851993 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:08:56 crc kubenswrapper[4830]: I0227 16:08:56.852012 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:08:56 crc kubenswrapper[4830]: I0227 16:08:56.852036 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:08:56 crc kubenswrapper[4830]: I0227 16:08:56.852052 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:08:56Z","lastTransitionTime":"2026-02-27T16:08:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:08:56 crc kubenswrapper[4830]: E0227 16:08:56.870536 4830 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:08:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:08:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:08:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:08:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"058e4d33-3c10-460a-8f66-1f2272cb9956\\\",\\\"systemUUID\\\":\\\"1d4a94de-760c-40e1-8054-66d250f336ee\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:08:56Z is after 2025-08-24T17:21:41Z" Feb 27 16:08:56 crc kubenswrapper[4830]: E0227 16:08:56.870691 4830 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 27 16:08:58 crc kubenswrapper[4830]: I0227 16:08:58.761350 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 16:08:58 crc kubenswrapper[4830]: I0227 16:08:58.761513 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 16:08:58 crc kubenswrapper[4830]: E0227 16:08:58.761659 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 16:08:58 crc kubenswrapper[4830]: I0227 16:08:58.761982 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 16:08:58 crc kubenswrapper[4830]: I0227 16:08:58.762038 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kgdlg" Feb 27 16:08:58 crc kubenswrapper[4830]: E0227 16:08:58.762129 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 16:08:58 crc kubenswrapper[4830]: E0227 16:08:58.762362 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 16:08:58 crc kubenswrapper[4830]: E0227 16:08:58.762608 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kgdlg" podUID="6ba2fe32-66e0-4bcd-a646-9d07c9a21c54" Feb 27 16:08:59 crc kubenswrapper[4830]: E0227 16:08:59.899493 4830 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" Feb 27 16:09:00 crc kubenswrapper[4830]: I0227 16:09:00.762069 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kgdlg" Feb 27 16:09:00 crc kubenswrapper[4830]: I0227 16:09:00.762179 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 16:09:00 crc kubenswrapper[4830]: I0227 16:09:00.762187 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 16:09:00 crc kubenswrapper[4830]: I0227 16:09:00.762328 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 16:09:00 crc kubenswrapper[4830]: E0227 16:09:00.763654 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 16:09:00 crc kubenswrapper[4830]: E0227 16:09:00.763531 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kgdlg" podUID="6ba2fe32-66e0-4bcd-a646-9d07c9a21c54" Feb 27 16:09:00 crc kubenswrapper[4830]: E0227 16:09:00.763937 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 16:09:00 crc kubenswrapper[4830]: E0227 16:09:00.764092 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 16:09:01 crc kubenswrapper[4830]: I0227 16:09:01.763151 4830 scope.go:117] "RemoveContainer" containerID="279aa904d7438057d878006234143ea48d8c40383ab648353776c01f12c147b6" Feb 27 16:09:02 crc kubenswrapper[4830]: I0227 16:09:02.350613 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 16:09:02 crc kubenswrapper[4830]: I0227 16:09:02.383444 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00cd7e51-371e-4b0a-bd9f-2f517b32dcc2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d696722e1b43f10155be828026a025360961994508157507a965f2fe04a0770\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e44c5ee059ace66f0a159049433d1bf023f1a9024d7f6b8202424022b808889\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5382b6063de637c6b85d3a34c9fc6963e653f4bb9f30ca7af478a89814f23c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5de0725eb122d0444c4c7bfb3b03c479dfb681cac98e8a4d52ca0eaa3cdd3aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c172ecdd34951e753faf3ec60d36500c2822650b74bb825ef9eeda6bb8d0356\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa08243b506aaad33dbf7184de0ee3b14f2dc444ddfd00ffef3d141ddea8a795\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa08243b506aaad33dbf7184de0ee3b14f2dc444ddfd00ffef3d141ddea8a795\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-27T16:06:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12436bc0b6c1591075f81684cc8726b8ade83e7d217f7c6e963ad1423db6b9ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12436bc0b6c1591075f81684cc8726b8ade83e7d217f7c6e963ad1423db6b9ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:06:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f63879efa3aa43d9b25f67e5181594aa22d31e9ce7fd480a5e9a1acd89103538\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f63879efa3aa43d9b25f67e5181594aa22d31e9ce7fd480a5e9a1acd89103538\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:06:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:06:44Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:09:02Z is after 2025-08-24T17:21:41Z" Feb 27 16:09:02 crc kubenswrapper[4830]: I0227 16:09:02.401055 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eebea1ec0fd24a7664f474db4e4cdb08e53c5c742c392900599dc3e7adcbee2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:09:02Z is after 2025-08-24T17:21:41Z" Feb 27 16:09:02 crc kubenswrapper[4830]: I0227 16:09:02.416437 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p7298" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"616ebd42-6bbe-4536-ba35-f8b07f2f11b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a08121a78952c85326d07974c700397678777995d73cce9f24ace8932b3e9b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x9g9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p7298\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:09:02Z is after 2025-08-24T17:21:41Z" Feb 27 16:09:02 crc kubenswrapper[4830]: I0227 16:09:02.430369 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kgdlg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ba2fe32-66e0-4bcd-a646-9d07c9a21c54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9l8vg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9l8vg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kgdlg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:09:02Z is after 2025-08-24T17:21:41Z" Feb 27 16:09:02 crc 
kubenswrapper[4830]: I0227 16:09:02.443874 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f62df11-40bd-4531-baa9-5b7ab5679a67\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5046080bbc4855769fb50909ff06b8c2cfd2438a0b65749c89302a4e97342980\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\
":\\\"cri-o://97f85a479701374168505eed2e5a64ea144d1c48dce2022d60cb6ebaebfe159f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97f85a479701374168505eed2e5a64ea144d1c48dce2022d60cb6ebaebfe159f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:06:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:09:02Z is after 2025-08-24T17:21:41Z" Feb 27 16:09:02 crc kubenswrapper[4830]: I0227 16:09:02.462878 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:09:02Z is after 2025-08-24T17:21:41Z" Feb 27 16:09:02 crc kubenswrapper[4830]: I0227 16:09:02.486748 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fsrq9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb72b0f7-1d22-4d13-9653-b1607aa2235d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://787ecf758c99d969efde354b66bb37b5f9a8c2d79cccd8bab200423af61d4109\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b684822980dfea25ae15f46337d198cbbb8656b55c38874d917c2fae68431aa\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-27T16:08:52Z\\\",\\\"message\\\":\\\"2026-02-27T16:08:06+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_e9ebe1ab-38fe-4467-b8c9-4c8d8f85bd30\\\\n2026-02-27T16:08:06+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_e9ebe1ab-38fe-4467-b8c9-4c8d8f85bd30 to /host/opt/cni/bin/\\\\n2026-02-27T16:08:07Z [verbose] multus-daemon started\\\\n2026-02-27T16:08:07Z [verbose] 
Readiness Indicator file check\\\\n2026-02-27T16:08:52Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwzp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fsrq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:09:02Z is after 2025-08-24T17:21:41Z" Feb 27 16:09:02 crc kubenswrapper[4830]: I0227 16:09:02.505939 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rgv8f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"672682a0-e75f-4d6c-b4f2-542944327497\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5e
fac20f84ec097b144c4cb152a6eba826b5eac50e222e2618b75bf163cbe93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b56418e015eaf4499e3e5e8ade43a2e4fb4803d5621b0db3fb365dd185d375f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b56418e015eaf4499e3e5e8ade43a2e4fb4803d5621b0db3fb365dd185d375f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly
\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://441562bcb0ff6663b1affd84ae53348e082907782a89293730d0d076e9e84d9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://441562bcb0ff6663b1affd84ae53348e082907782a89293730d0d076e9e84d9d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be1e4280634a32e892b1dbc9bb4034909146da92eb6b01aa69bb7208447a7394\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203b
b2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be1e4280634a32e892b1dbc9bb4034909146da92eb6b01aa69bb7208447a7394\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d37bf60056d482e28c3252074c7ae82c3bd9884ba28af16b639c21795c9e3f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d37bf60056d482e28c3252074c7ae82c3bd9884ba28af16b639c21795c9e3f3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-rel
ease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f343d2da5177c14f48359b1377e123308e3686e4adfba0407940fe1368592548\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f343d2da5177c14f48359b1377e123308e3686e4adfba0407940fe1368592548\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c27a1b90defc90e0a69236bb9c462d6e5fd36e6500f0f4c720ccd960b0d3ee42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-c
ni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c27a1b90defc90e0a69236bb9c462d6e5fd36e6500f0f4c720ccd960b0d3ee42\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rgv8f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:09:02Z is after 2025-08-24T17:21:41Z" Feb 27 16:09:02 crc kubenswrapper[4830]: I0227 16:09:02.522623 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fcddf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6adbc0c4-e467-41f1-9190-d0dd3693eba6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2a66caf75365a8658dd7422b2f715e7f095b5bd7d96c8a0a96ce1a80cb99849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zrsv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fcddf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:09:02Z is after 2025-08-24T17:21:41Z" Feb 27 16:09:02 crc kubenswrapper[4830]: I0227 16:09:02.540275 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:09:02Z is after 2025-08-24T17:21:41Z" Feb 27 16:09:02 crc kubenswrapper[4830]: I0227 16:09:02.558874 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"478eecce-80f0-4502-b435-b1cddaf017e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://635c013219126bc71bc7c3f7b7f27339ea0a53eace870778212c42ed22a682ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c414ab235e0a7321e6455f54b680cefce4120d0114676e10e9f79bf275964052\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T16:07:17Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0227 16:06:47.208783 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0227 16:06:47.215570 1 observer_polling.go:159] Starting file observer\\\\nI0227 16:06:47.322367 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0227 16:06:47.347440 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0227 16:07:17.594375 1 cmd.go:179] failed checking apiserver connectivity: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T16:06:46Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a576bccb73520a452eea30512283d7bca1141a05d0c697885d106fb132c3df7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a682f07e321a3dc0cbf11fc0b683893d4527f80d5b41ee627e645f3996cc3ae9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e96e9ad075a2fbed1c691bdd79b49300f3b485834a95834aabe1ca32f099fe1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"1
92.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:06:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:09:02Z is after 2025-08-24T17:21:41Z" Feb 27 16:09:02 crc kubenswrapper[4830]: I0227 16:09:02.577781 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be 
located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:09:02Z is after 2025-08-24T17:21:41Z" Feb 27 16:09:02 crc kubenswrapper[4830]: I0227 16:09:02.589510 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bf9lh_2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904/ovnkube-controller/2.log" Feb 27 16:09:02 crc kubenswrapper[4830]: I0227 16:09:02.593192 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bf9lh" event={"ID":"2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904","Type":"ContainerStarted","Data":"e570bdefb1555e447bc4d5c56d6ea5e2639dbdc50b7a78dd40e3026e0a5ca140"} Feb 27 16:09:02 crc kubenswrapper[4830]: I0227 16:09:02.593675 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-bf9lh" Feb 27 16:09:02 crc kubenswrapper[4830]: I0227 16:09:02.603332 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4995a66c4bc6d195947f67ef564aa18c7fe145e12067c2fca8e0de6b76b72f19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df5245f048370e7455a63a6ca01bffb8e514e22a123e45bc3a064ce4fbe27f45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:09:02Z is after 2025-08-24T17:21:41Z" Feb 27 16:09:02 crc kubenswrapper[4830]: I0227 16:09:02.620288 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92b20f888e852fcbe89e6d5c685b4b0fad411956bb3dc3c5ba0e10d6b206e131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-27T16:09:02Z is after 2025-08-24T17:21:41Z" Feb 27 16:09:02 crc kubenswrapper[4830]: I0227 16:09:02.645233 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bf9lh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3371981f3cdb5468ef6e9038108b3f9d884cec2f2db2f2754a3b5bf908d0372\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://273a7927f1c00c51d8c6157c6050f697430275ea8b1cac13ebe621d6556b4fef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a05759873cd3a2fb58756795e02d8247af4d8204388bf669ddc7cb1a8fbd7baa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc95e89196ccb52580164d57b311b57b2bcc7e666c3801f53378c59446c0e73f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:05Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05614397aaaa68f352f4263160c3463ce65513156517137bf09a3b5deb505c05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41cda3a7a261313b65434f3e8b885a9628d71f9c9a9c55f3e559226341c8aa41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://279aa904d7438057d878006234143ea48d8c40383ab648353776c01f12c147b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://279aa904d7438057d878006234143ea48d8c40383ab648353776c01f12c147b6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-27T16:08:30Z\\\",\\\"message\\\":\\\"Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-console/console]} name:Service_openshift-console/console_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} 
vips:{GoMap:map[10.217.5.194:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d7d7b270-1480-47f8-bdf9-690dbab310cb}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0227 16:08:30.816934 6938 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:29Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bf9lh_openshift-ovn-kubernetes(2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c9e139c0599bb81726ae3f42b0d9e6ee394e3d6641eb356cf5c63fe9260f4c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45e519054da5e98ba4be7deaac7911617f787b44d308bda8d4ee6fb7573336a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45e519054da5e98ba4
be7deaac7911617f787b44d308bda8d4ee6fb7573336a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bf9lh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:09:02Z is after 2025-08-24T17:21:41Z" Feb 27 16:09:02 crc kubenswrapper[4830]: I0227 16:09:02.661458 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00d6b7ce-4757-4275-8345-60c1b546ce25\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58864ba639422794b99f6190c7ab9a81537913f51574220acb3838f97b4f9421\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqztt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d403fb60beeb11f3cb72e7ca134b83a9c519375
fbb6727d2070118a6b924516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqztt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2tv5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:09:02Z is after 2025-08-24T17:21:41Z" Feb 27 16:09:02 crc kubenswrapper[4830]: I0227 16:09:02.679602 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gqgb6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd5a4c5b-2008-4354-b26e-8763a631e55c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32024ba3e19bc33cc8197b4dd6fdf165f6de8fd7c3b1e85b2f4768c75c50736a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87p77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14a90f11c3291e6bde2a008f2e8e617c9c8db
36ee65c3bbf26112bf8707d7cd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87p77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gqgb6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:09:02Z is after 2025-08-24T17:21:41Z" Feb 27 16:09:02 crc kubenswrapper[4830]: I0227 16:09:02.702729 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d20f886-cfdb-48c7-9754-6b7255b1124f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:09:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:09:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a3f71ef2ccab58baa94377c8f63ded83ca5f73a4d7a24d754719670685c55a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec37d31a82aedaacdf47fa540d508e6ca53626d082c45933c11a72f41a819a8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6cb8e7a94967cd24e883272ba2f923a3adae43cc6105ea7506c1fba144de241\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fd9cca8c6af573373bdd5d9c9bea88412fc8611274c4856a98f11a59599fcb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acf80f37f1e38e051b10c7a1c0eeef8f0c3db7fc0e5653c54d82da7956a8eed2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T16:07:52Z\\\"
,\\\"message\\\":\\\"g file observer\\\\nW0227 16:07:52.525113 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 16:07:52.525413 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 16:07:52.526578 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-70351561/tls.crt::/tmp/serving-cert-70351561/tls.key\\\\\\\"\\\\nI0227 16:07:52.790383 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 16:07:52.794547 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 16:07:52.794587 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 16:07:52.794627 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 16:07:52.794638 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 16:07:52.801197 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0227 16:07:52.801263 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 16:07:52.801293 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0227 16:07:52.801246 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0227 16:07:52.801325 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 16:07:52.801373 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 16:07:52.801389 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 16:07:52.801396 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0227 16:07:52.804476 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T16:07:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97ffad51412be8330198e944d4f687380d5066b777b54e362e8f8b772b808c89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1fc7975111dde3841671f046742eea3d0eefcb040eee8aadb99a7ca8eba8bd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1fc7975111dde3841671f046742eea3d0e
efcb040eee8aadb99a7ca8eba8bd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:06:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:09:02Z is after 2025-08-24T17:21:41Z" Feb 27 16:09:02 crc kubenswrapper[4830]: I0227 16:09:02.721291 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fcddf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6adbc0c4-e467-41f1-9190-d0dd3693eba6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2a66caf75365a8658dd7422b2f715e7f095b5bd7d96c8a0a96ce1a80cb99849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zrsv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fcddf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:09:02Z is after 2025-08-24T17:21:41Z" Feb 27 16:09:02 crc kubenswrapper[4830]: I0227 16:09:02.737457 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:09:02Z is after 2025-08-24T17:21:41Z" Feb 27 16:09:02 crc kubenswrapper[4830]: I0227 16:09:02.753756 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:09:02Z is after 2025-08-24T17:21:41Z" Feb 27 16:09:02 crc kubenswrapper[4830]: I0227 16:09:02.761748 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 16:09:02 crc kubenswrapper[4830]: I0227 16:09:02.761842 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-kgdlg" Feb 27 16:09:02 crc kubenswrapper[4830]: I0227 16:09:02.761880 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 16:09:02 crc kubenswrapper[4830]: E0227 16:09:02.761933 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 16:09:02 crc kubenswrapper[4830]: I0227 16:09:02.762082 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 16:09:02 crc kubenswrapper[4830]: E0227 16:09:02.762209 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kgdlg" podUID="6ba2fe32-66e0-4bcd-a646-9d07c9a21c54" Feb 27 16:09:02 crc kubenswrapper[4830]: E0227 16:09:02.762439 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 16:09:02 crc kubenswrapper[4830]: E0227 16:09:02.762521 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 16:09:02 crc kubenswrapper[4830]: I0227 16:09:02.768930 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fsrq9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb72b0f7-1d22-4d13-9653-b1607aa2235d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://787ecf758c99d969efde354b66bb37b5f9a8c2d79cccd8bab200423af61d4109\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\
\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b684822980dfea25ae15f46337d198cbbb8656b55c38874d917c2fae68431aa\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-27T16:08:52Z\\\",\\\"message\\\":\\\"2026-02-27T16:08:06+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_e9ebe1ab-38fe-4467-b8c9-4c8d8f85bd30\\\\n2026-02-27T16:08:06+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_e9ebe1ab-38fe-4467-b8c9-4c8d8f85bd30 to /host/opt/cni/bin/\\\\n2026-02-27T16:08:07Z [verbose] multus-daemon started\\\\n2026-02-27T16:08:07Z [verbose] Readiness Indicator file check\\\\n2026-02-27T16:08:52Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the 
condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwzp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"p
hase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fsrq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:09:02Z is after 2025-08-24T17:21:41Z" Feb 27 16:09:02 crc kubenswrapper[4830]: I0227 16:09:02.785566 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rgv8f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"672682a0-e75f-4d6c-b4f2-542944327497\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5efac20f84ec097b144c4cb152a6eba826b5eac50e222e2618b75bf163cbe93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/o
cp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b56418e015eaf4499e3e5e8ade43a2e4fb4803d5621b0db3fb365dd185d375f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b56418e015eaf4499e3e5e8ade43a2e4fb4803d5621b0db3fb365dd185d375f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://441562bcb0ff6663b1
affd84ae53348e082907782a89293730d0d076e9e84d9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://441562bcb0ff6663b1affd84ae53348e082907782a89293730d0d076e9e84d9d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be1e4280634a32e892b1dbc9bb4034909146da92eb6b01aa69bb7208447a7394\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be1e4280634a32e892
b1dbc9bb4034909146da92eb6b01aa69bb7208447a7394\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d37bf60056d482e28c3252074c7ae82c3bd9884ba28af16b639c21795c9e3f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d37bf60056d482e28c3252074c7ae82c3bd9884ba28af16b639c21795c9e3f3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}
,{\\\"containerID\\\":\\\"cri-o://f343d2da5177c14f48359b1377e123308e3686e4adfba0407940fe1368592548\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f343d2da5177c14f48359b1377e123308e3686e4adfba0407940fe1368592548\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c27a1b90defc90e0a69236bb9c462d6e5fd36e6500f0f4c720ccd960b0d3ee42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c27a1b90defc90e0a69236bb9c462d6e5fd36e6500f0f4c720ccd960b0d3ee42\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-2
7T16:08:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rgv8f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:09:02Z is after 2025-08-24T17:21:41Z" Feb 27 16:09:02 crc kubenswrapper[4830]: I0227 16:09:02.799986 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92b20f888e852fcbe89e6d5c685b4b0fad411956bb3dc3c5ba0e10d6b206e131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-27T16:09:02Z is after 2025-08-24T17:21:41Z" Feb 27 16:09:02 crc kubenswrapper[4830]: I0227 16:09:02.822352 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bf9lh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3371981f3cdb5468ef6e9038108b3f9d884cec2f2db2f2754a3b5bf908d0372\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://273a7927f1c00c51d8c6157c6050f697430275ea8b1cac13ebe621d6556b4fef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a05759873cd3a2fb58756795e02d8247af4d8204388bf669ddc7cb1a8fbd7baa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc95e89196ccb52580164d57b311b57b2bcc7e666c3801f53378c59446c0e73f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:05Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05614397aaaa68f352f4263160c3463ce65513156517137bf09a3b5deb505c05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41cda3a7a261313b65434f3e8b885a9628d71f9c9a9c55f3e559226341c8aa41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e570bdefb1555e447bc4d5c56d6ea5e2639dbdc50b7a78dd40e3026e0a5ca140\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://279aa904d7438057d878006234143ea48d8c40383ab648353776c01f12c147b6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-27T16:08:30Z\\\",\\\"message\\\":\\\"Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-console/console]} name:Service_openshift-console/console_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} 
vips:{GoMap:map[10.217.5.194:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d7d7b270-1480-47f8-bdf9-690dbab310cb}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0227 16:08:30.816934 6938 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:29Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:09:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"nam
e\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c9e139c0599bb81726ae3f42b0d9e6ee394e3d6641eb356cf5c63fe9260f4c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45e519054da5e98ba4be7deaac7911617f787b44d308bda8d4ee6fb7573336a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45e519054da5e98ba4be7deaac7911617f787b44d308bda8d4ee6fb7573336a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bf9lh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:09:02Z is after 2025-08-24T17:21:41Z" Feb 27 16:09:02 crc kubenswrapper[4830]: I0227 16:09:02.837391 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00d6b7ce-4757-4275-8345-60c1b546ce25\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58864ba639422794b99f6190c7ab9a81537913f51574220acb3838f97b4f9421\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqztt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d403fb60beeb11f3cb72e7ca134b83a9c519375
fbb6727d2070118a6b924516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqztt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2tv5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:09:02Z is after 2025-08-24T17:21:41Z" Feb 27 16:09:02 crc kubenswrapper[4830]: I0227 16:09:02.853918 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gqgb6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd5a4c5b-2008-4354-b26e-8763a631e55c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32024ba3e19bc33cc8197b4dd6fdf165f6de8fd7c3b1e85b2f4768c75c50736a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87p77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14a90f11c3291e6bde2a008f2e8e617c9c8db
36ee65c3bbf26112bf8707d7cd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87p77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gqgb6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:09:02Z is after 2025-08-24T17:21:41Z" Feb 27 16:09:02 crc kubenswrapper[4830]: I0227 16:09:02.871731 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"478eecce-80f0-4502-b435-b1cddaf017e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://635c013219126bc71bc7c3f7b7f27339ea0a53eace870778212c42ed22a682ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c414ab235e0a7321e6455f54b680cefce4120d0114676e10e9f79bf275964052\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T16:07:17Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0227 16:06:47.208783 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0227 16:06:47.215570 1 observer_polling.go:159] Starting file observer\\\\nI0227 16:06:47.322367 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0227 16:06:47.347440 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0227 16:07:17.594375 1 cmd.go:179] failed checking apiserver connectivity: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T16:06:46Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a576bccb73520a452eea30512283d7bca1141a05d0c697885d106fb132c3df7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a682f07e321a3dc0cbf11fc0b683893d4527f80d5b41ee627e645f3996cc3ae9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e96e9ad075a2fbed1c691bdd79b49300f3b485834a95834aabe1ca32f099fe1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"1
92.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:06:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:09:02Z is after 2025-08-24T17:21:41Z" Feb 27 16:09:02 crc kubenswrapper[4830]: I0227 16:09:02.890889 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be 
located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:09:02Z is after 2025-08-24T17:21:41Z" Feb 27 16:09:02 crc kubenswrapper[4830]: I0227 16:09:02.910730 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4995a66c4bc6d195947f67ef564aa18c7fe145e12067c2fca8e0de6b76b72f19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df5245f048370e7455a63a6ca01bffb8e514e22a123e45bc3a064ce4fbe27f45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:09:02Z is after 2025-08-24T17:21:41Z" Feb 27 16:09:02 crc kubenswrapper[4830]: I0227 16:09:02.930176 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d20f886-cfdb-48c7-9754-6b7255b1124f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:09:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:09:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a3f71ef2ccab58baa94377c8f63ded83ca5f73a4d7a24d754719670685c55a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec37d31a82aedaacdf47fa540d508e6ca53626d082c45933c11a72f41a819a8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6cb8e7a94967cd24e883272ba2f923a3adae43cc6105ea7506c1fba144de241\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fd9cca8c6af573373bdd5d9c9bea88412fc8611274c4856a98f11a59599fcb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acf80f37f1e38e051b10c7a1c0eeef8f0c3db7fc0e5653c54d82da7956a8eed2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T16:07:52Z\\\"
,\\\"message\\\":\\\"g file observer\\\\nW0227 16:07:52.525113 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 16:07:52.525413 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 16:07:52.526578 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-70351561/tls.crt::/tmp/serving-cert-70351561/tls.key\\\\\\\"\\\\nI0227 16:07:52.790383 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 16:07:52.794547 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 16:07:52.794587 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 16:07:52.794627 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 16:07:52.794638 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 16:07:52.801197 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0227 16:07:52.801263 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 16:07:52.801293 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0227 16:07:52.801246 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0227 16:07:52.801325 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 16:07:52.801373 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 16:07:52.801389 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 16:07:52.801396 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0227 16:07:52.804476 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T16:07:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97ffad51412be8330198e944d4f687380d5066b777b54e362e8f8b772b808c89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1fc7975111dde3841671f046742eea3d0eefcb040eee8aadb99a7ca8eba8bd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1fc7975111dde3841671f046742eea3d0e
efcb040eee8aadb99a7ca8eba8bd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:06:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:09:02Z is after 2025-08-24T17:21:41Z" Feb 27 16:09:02 crc kubenswrapper[4830]: I0227 16:09:02.943409 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kgdlg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ba2fe32-66e0-4bcd-a646-9d07c9a21c54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9l8vg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9l8vg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kgdlg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:09:02Z is after 2025-08-24T17:21:41Z" Feb 27 16:09:02 crc kubenswrapper[4830]: I0227 16:09:02.958197 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f62df11-40bd-4531-baa9-5b7ab5679a67\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5046080bbc4855769fb50909ff06b8c2cfd2438a0b65749c89302a4e97342980\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPat
h\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97f85a479701374168505eed2e5a64ea144d1c48dce2022d60cb6ebaebfe159f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97f85a479701374168505eed2e5a64ea144d1c48dce2022d60cb6ebaebfe159f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:06:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:09:02Z is after 2025-08-24T17:21:41Z" Feb 27 16:09:02 crc kubenswrapper[4830]: I0227 16:09:02.989354 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00cd7e51-371e-4b0a-bd9f-2f517b32dcc2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d696722e1b43f10155be828026a025360961994508157507a965f2fe04a0770\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e44c5ee059ace66f0a159049433d1bf023f1a9024d7f6b8202424022b808889\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5382b6063de637c6b85d3a34c9fc6963e653f4bb9f30ca7af478a89814f23c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5de0725eb122d0444c4c7bfb3b03c479dfb681cac98e8a4d52ca0eaa3cdd3aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c172ecdd34951e753faf3ec60d36500c2822650b74bb825ef9eeda6bb8d0356\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa08243b506aaad33dbf7184de0ee3b14f2dc444ddfd00ffef3d141ddea8a795\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa08243b506aaad33dbf7184de0ee3b14f2dc444ddfd00ffef3d141ddea8a795\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-27T16:06:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12436bc0b6c1591075f81684cc8726b8ade83e7d217f7c6e963ad1423db6b9ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12436bc0b6c1591075f81684cc8726b8ade83e7d217f7c6e963ad1423db6b9ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:06:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f63879efa3aa43d9b25f67e5181594aa22d31e9ce7fd480a5e9a1acd89103538\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f63879efa3aa43d9b25f67e5181594aa22d31e9ce7fd480a5e9a1acd89103538\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:06:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:06:44Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:09:02Z is after 2025-08-24T17:21:41Z" Feb 27 16:09:03 crc kubenswrapper[4830]: I0227 16:09:03.009022 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eebea1ec0fd24a7664f474db4e4cdb08e53c5c742c392900599dc3e7adcbee2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:09:03Z is after 2025-08-24T17:21:41Z" Feb 27 16:09:03 crc kubenswrapper[4830]: I0227 16:09:03.024684 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p7298" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"616ebd42-6bbe-4536-ba35-f8b07f2f11b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a08121a78952c85326d07974c700397678777995d73cce9f24ace8932b3e9b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x9g9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p7298\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:09:03Z is after 2025-08-24T17:21:41Z" Feb 27 16:09:03 crc kubenswrapper[4830]: I0227 16:09:03.600754 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bf9lh_2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904/ovnkube-controller/3.log" Feb 27 16:09:03 crc kubenswrapper[4830]: I0227 16:09:03.602039 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bf9lh_2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904/ovnkube-controller/2.log" Feb 27 16:09:03 crc kubenswrapper[4830]: I0227 16:09:03.606082 4830 generic.go:334] "Generic (PLEG): container finished" podID="2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904" containerID="e570bdefb1555e447bc4d5c56d6ea5e2639dbdc50b7a78dd40e3026e0a5ca140" exitCode=1 Feb 27 16:09:03 crc kubenswrapper[4830]: I0227 16:09:03.606135 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bf9lh" event={"ID":"2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904","Type":"ContainerDied","Data":"e570bdefb1555e447bc4d5c56d6ea5e2639dbdc50b7a78dd40e3026e0a5ca140"} Feb 27 16:09:03 crc kubenswrapper[4830]: I0227 16:09:03.606223 4830 scope.go:117] "RemoveContainer" containerID="279aa904d7438057d878006234143ea48d8c40383ab648353776c01f12c147b6" Feb 27 16:09:03 crc kubenswrapper[4830]: I0227 16:09:03.607365 4830 scope.go:117] "RemoveContainer" containerID="e570bdefb1555e447bc4d5c56d6ea5e2639dbdc50b7a78dd40e3026e0a5ca140" Feb 27 16:09:03 crc kubenswrapper[4830]: E0227 16:09:03.607647 
4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-bf9lh_openshift-ovn-kubernetes(2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904)\"" pod="openshift-ovn-kubernetes/ovnkube-node-bf9lh" podUID="2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904" Feb 27 16:09:03 crc kubenswrapper[4830]: I0227 16:09:03.631577 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d20f886-cfdb-48c7-9754-6b7255b1124f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:09:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:09:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a3f71ef2ccab58baa94377c8f63ded83ca5f73a4d7a24d754719670685c55a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\
"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec37d31a82aedaacdf47fa540d508e6ca53626d082c45933c11a72f41a819a8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6cb8e7a94967cd24e883272ba2f923a3adae43cc6105ea7506c1fba144de241\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o:/
/2fd9cca8c6af573373bdd5d9c9bea88412fc8611274c4856a98f11a59599fcb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acf80f37f1e38e051b10c7a1c0eeef8f0c3db7fc0e5653c54d82da7956a8eed2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T16:07:52Z\\\",\\\"message\\\":\\\"g file observer\\\\nW0227 16:07:52.525113 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 16:07:52.525413 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 16:07:52.526578 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-70351561/tls.crt::/tmp/serving-cert-70351561/tls.key\\\\\\\"\\\\nI0227 16:07:52.790383 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 16:07:52.794547 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 16:07:52.794587 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 16:07:52.794627 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 16:07:52.794638 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 16:07:52.801197 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0227 16:07:52.801263 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 16:07:52.801293 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0227 16:07:52.801246 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0227 16:07:52.801325 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 16:07:52.801373 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 16:07:52.801389 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 16:07:52.801396 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0227 16:07:52.804476 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T16:07:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97ffad51412be8330198e944d4f687380d5066b777b54e362e8f8b772b808c89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{
\\\"containerID\\\":\\\"cri-o://e1fc7975111dde3841671f046742eea3d0eefcb040eee8aadb99a7ca8eba8bd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1fc7975111dde3841671f046742eea3d0eefcb040eee8aadb99a7ca8eba8bd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:06:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:09:03Z is after 2025-08-24T17:21:41Z" Feb 27 16:09:03 crc kubenswrapper[4830]: I0227 16:09:03.647468 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p7298" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"616ebd42-6bbe-4536-ba35-f8b07f2f11b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a08121a78952c85326d07974c700397678777995d73cce9f24ace8932b3e9b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x9g9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p7298\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:09:03Z is after 2025-08-24T17:21:41Z" Feb 27 16:09:03 crc kubenswrapper[4830]: I0227 16:09:03.664029 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kgdlg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ba2fe32-66e0-4bcd-a646-9d07c9a21c54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9l8vg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9l8vg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kgdlg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:09:03Z is after 2025-08-24T17:21:41Z" Feb 27 16:09:03 crc 
kubenswrapper[4830]: I0227 16:09:03.676144 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f62df11-40bd-4531-baa9-5b7ab5679a67\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5046080bbc4855769fb50909ff06b8c2cfd2438a0b65749c89302a4e97342980\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\
":\\\"cri-o://97f85a479701374168505eed2e5a64ea144d1c48dce2022d60cb6ebaebfe159f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97f85a479701374168505eed2e5a64ea144d1c48dce2022d60cb6ebaebfe159f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:06:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:09:03Z is after 2025-08-24T17:21:41Z" Feb 27 16:09:03 crc kubenswrapper[4830]: I0227 16:09:03.713144 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00cd7e51-371e-4b0a-bd9f-2f517b32dcc2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d696722e1b43f10155be828026a025360961994508157507a965f2fe04a0770\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e44c5ee059ace66f0a159049433d1bf023f1a9024d7f6b8202424022b808889\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5382b6063de637c6b85d3a34c9fc6963e653f4bb9f30ca7af478a89814f23c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5de0725eb122d0444c4c7bfb3b03c479dfb681cac98e8a4d52ca0eaa3cdd3aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c172ecdd34951e753faf3ec60d36500c2822650b74bb825ef9eeda6bb8d0356\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa08243b506aaad33dbf7184de0ee3b14f2dc444ddfd00ffef3d141ddea8a795\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa08243b506aaad33dbf7184de0ee3b14f2dc444ddfd00ffef3d141ddea8a795\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-27T16:06:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12436bc0b6c1591075f81684cc8726b8ade83e7d217f7c6e963ad1423db6b9ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12436bc0b6c1591075f81684cc8726b8ade83e7d217f7c6e963ad1423db6b9ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:06:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f63879efa3aa43d9b25f67e5181594aa22d31e9ce7fd480a5e9a1acd89103538\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f63879efa3aa43d9b25f67e5181594aa22d31e9ce7fd480a5e9a1acd89103538\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:06:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:06:44Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:09:03Z is after 2025-08-24T17:21:41Z" Feb 27 16:09:03 crc kubenswrapper[4830]: I0227 16:09:03.747657 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eebea1ec0fd24a7664f474db4e4cdb08e53c5c742c392900599dc3e7adcbee2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:09:03Z is after 2025-08-24T17:21:41Z" Feb 27 16:09:03 crc kubenswrapper[4830]: I0227 16:09:03.773152 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rgv8f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"672682a0-e75f-4d6c-b4f2-542944327497\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5efac20f84ec097b144c4cb152a6eba826b5eac50e222e2618b75bf163cbe93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b56418e015eaf4499e3e5e8ade43a2e4fb4803d5621b0db3fb365dd185d375f4\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b56418e015eaf4499e3e5e8ade43a2e4fb4803d5621b0db3fb365dd185d375f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://441562bcb0ff6663b1affd84ae53348e082907782a89293730d0d076e9e84d9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://441562bcb0ff6663b1affd84ae53348e082907782a89293730d0d076e9e84d9d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:05Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be1e4280634a32e892b1dbc9bb4034909146da92eb6b01aa69bb7208447a7394\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be1e4280634a32e892b1dbc9bb4034909146da92eb6b01aa69bb7208447a7394\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d37bf
60056d482e28c3252074c7ae82c3bd9884ba28af16b639c21795c9e3f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d37bf60056d482e28c3252074c7ae82c3bd9884ba28af16b639c21795c9e3f3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f343d2da5177c14f48359b1377e123308e3686e4adfba0407940fe1368592548\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f343d2da5177c14f48359b1377e123308e3686e4adfba0407940fe1368592548\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:10Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c27a1b90defc90e0a69236bb9c462d6e5fd36e6500f0f4c720ccd960b0d3ee42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c27a1b90defc90e0a69236bb9c462d6e5fd36e6500f0f4c720ccd960b0d3ee42\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rgv8f\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:09:03Z is after 2025-08-24T17:21:41Z" Feb 27 16:09:03 crc kubenswrapper[4830]: I0227 16:09:03.773446 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 27 16:09:03 crc kubenswrapper[4830]: I0227 16:09:03.787693 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fcddf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6adbc0c4-e467-41f1-9190-d0dd3693eba6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2a66caf75365a8658dd7422b2f715e7f095b5bd7d96c8a0a96ce1a80cb99849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\
\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zrsv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fcddf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:09:03Z is after 2025-08-24T17:21:41Z" Feb 27 16:09:03 crc kubenswrapper[4830]: I0227 16:09:03.809225 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:09:03Z is after 2025-08-24T17:21:41Z" Feb 27 16:09:03 crc kubenswrapper[4830]: I0227 16:09:03.827099 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:09:03Z is after 2025-08-24T17:21:41Z" Feb 27 16:09:03 crc kubenswrapper[4830]: I0227 16:09:03.844716 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fsrq9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb72b0f7-1d22-4d13-9653-b1607aa2235d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://787ecf758c99d969efde354b66bb37b5f9a8c2d79cccd8bab200423af61d4109\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b684822980dfea25ae15f46337d198cbbb8656b55c38874d917c2fae68431aa\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-27T16:08:52Z\\\",\\\"message\\\":\\\"2026-02-27T16:08:06+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_e9ebe1ab-38fe-4467-b8c9-4c8d8f85bd30\\\\n2026-02-27T16:08:06+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_e9ebe1ab-38fe-4467-b8c9-4c8d8f85bd30 to /host/opt/cni/bin/\\\\n2026-02-27T16:08:07Z [verbose] multus-daemon started\\\\n2026-02-27T16:08:07Z [verbose] 
Readiness Indicator file check\\\\n2026-02-27T16:08:52Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwzp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fsrq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:09:03Z is after 2025-08-24T17:21:41Z" Feb 27 16:09:03 crc kubenswrapper[4830]: I0227 16:09:03.866323 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4995a66c4bc6d195947f67ef564aa18c7fe145e12067c2fca8e0de6b76b72f19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df5245f048370e7455a63a6ca01bffb8e514e22a123e45bc3a064ce4fbe27f45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:09:03Z is after 2025-08-24T17:21:41Z" Feb 27 16:09:03 crc kubenswrapper[4830]: I0227 16:09:03.881911 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92b20f888e852fcbe89e6d5c685b4b0fad411956bb3dc3c5ba0e10d6b206e131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-27T16:09:03Z is after 2025-08-24T17:21:41Z" Feb 27 16:09:03 crc kubenswrapper[4830]: I0227 16:09:03.910074 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bf9lh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3371981f3cdb5468ef6e9038108b3f9d884cec2f2db2f2754a3b5bf908d0372\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://273a7927f1c00c51d8c6157c6050f697430275ea8b1cac13ebe621d6556b4fef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a05759873cd3a2fb58756795e02d8247af4d8204388bf669ddc7cb1a8fbd7baa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc95e89196ccb52580164d57b311b57b2bcc7e666c3801f53378c59446c0e73f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:05Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05614397aaaa68f352f4263160c3463ce65513156517137bf09a3b5deb505c05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41cda3a7a261313b65434f3e8b885a9628d71f9c9a9c55f3e559226341c8aa41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e570bdefb1555e447bc4d5c56d6ea5e2639dbdc50b7a78dd40e3026e0a5ca140\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://279aa904d7438057d878006234143ea48d8c40383ab648353776c01f12c147b6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-27T16:08:30Z\\\",\\\"message\\\":\\\"Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-console/console]} name:Service_openshift-console/console_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} 
vips:{GoMap:map[10.217.5.194:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d7d7b270-1480-47f8-bdf9-690dbab310cb}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0227 16:08:30.816934 6938 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:29Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e570bdefb1555e447bc4d5c56d6ea5e2639dbdc50b7a78dd40e3026e0a5ca140\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-27T16:09:02Z\\\",\\\"message\\\":\\\"_controller.go:776] Recording success event on pod openshift-ovn-kubernetes/ovnkube-node-bf9lh\\\\nI0227 16:09:02.708486 7289 ovnkube.go:599] Stopped ovnkube\\\\nI0227 16:09:02.708837 7289 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0227 16:09:02.709090 7289 base_network_controller_pods.go:916] Annotation values: ip=[10.217.0.3/23] ; mac=0a:58:0a:d9:00:03 ; gw=[10.217.0.1]\\\\nI0227 16:09:02.709106 
7289 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0227 16:09:02.709119 7289 ovn.go:134] Ensuring zone local for Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g in node crc\\\\nI0227 16:09:02.709170 7289 base_network_controller_pods.go:477] [default/openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI0227 16:09:02.708848 7289 default_network_controller.go:776] Recording success event on pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0227 16:09:02.709226 7289 kube.go:317] Updating pod openshift-multus/network-metrics-daemon-kgdlg\\\\nF0227 16:09:02.709254 7289 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T16:09:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/
etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c9e139c0599bb81726ae3f42b0d9e6ee394e3d6641eb356cf5c63fe9260f4c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secre
ts/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45e519054da5e98ba4be7deaac7911617f787b44d308bda8d4ee6fb7573336a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45e519054da5e98ba4be7deaac7911617f787b44d308bda8d4ee6fb7573336a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bf9lh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:09:03Z is after 2025-08-24T17:21:41Z" Feb 27 16:09:03 crc kubenswrapper[4830]: I0227 16:09:03.926055 4830 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00d6b7ce-4757-4275-8345-60c1b546ce25\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58864ba639422794b99f6190c7ab9a81537913f51574220acb3838f97b4f9421\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqztt\\\",\\\"readOnly\\\":true,
\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d403fb60beeb11f3cb72e7ca134b83a9c519375fbb6727d2070118a6b924516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqztt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2tv5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:09:03Z is after 2025-08-24T17:21:41Z" Feb 27 16:09:03 crc kubenswrapper[4830]: I0227 16:09:03.945851 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gqgb6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd5a4c5b-2008-4354-b26e-8763a631e55c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32024ba3e19bc33cc8197b4dd6fdf165f6de8fd7c3b1e85b2f4768c75c50736a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87p77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14a90f11c3291e6bde2a008f2e8e617c9c8db
36ee65c3bbf26112bf8707d7cd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87p77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gqgb6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:09:03Z is after 2025-08-24T17:21:41Z" Feb 27 16:09:03 crc kubenswrapper[4830]: I0227 16:09:03.965464 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"478eecce-80f0-4502-b435-b1cddaf017e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://635c013219126bc71bc7c3f7b7f27339ea0a53eace870778212c42ed22a682ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c414ab235e0a7321e6455f54b680cefce4120d0114676e10e9f79bf275964052\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T16:07:17Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0227 16:06:47.208783 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0227 16:06:47.215570 1 observer_polling.go:159] Starting file observer\\\\nI0227 16:06:47.322367 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0227 16:06:47.347440 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0227 16:07:17.594375 1 cmd.go:179] failed checking apiserver connectivity: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T16:06:46Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a576bccb73520a452eea30512283d7bca1141a05d0c697885d106fb132c3df7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a682f07e321a3dc0cbf11fc0b683893d4527f80d5b41ee627e645f3996cc3ae9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e96e9ad075a2fbed1c691bdd79b49300f3b485834a95834aabe1ca32f099fe1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"1
92.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:06:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:09:03Z is after 2025-08-24T17:21:41Z" Feb 27 16:09:03 crc kubenswrapper[4830]: I0227 16:09:03.979452 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be 
located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:09:03Z is after 2025-08-24T17:21:41Z" Feb 27 16:09:04 crc kubenswrapper[4830]: I0227 16:09:04.612704 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bf9lh_2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904/ovnkube-controller/3.log" Feb 27 16:09:04 crc kubenswrapper[4830]: I0227 16:09:04.618729 4830 scope.go:117] "RemoveContainer" containerID="e570bdefb1555e447bc4d5c56d6ea5e2639dbdc50b7a78dd40e3026e0a5ca140" Feb 27 16:09:04 crc kubenswrapper[4830]: E0227 16:09:04.618996 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-bf9lh_openshift-ovn-kubernetes(2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904)\"" pod="openshift-ovn-kubernetes/ovnkube-node-bf9lh" podUID="2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904" Feb 27 16:09:04 crc kubenswrapper[4830]: I0227 16:09:04.641872 4830 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d20f886-cfdb-48c7-9754-6b7255b1124f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:09:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:09:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a3f71ef2ccab58baa94377c8f63ded83ca5f73a4d7a24d754719670685c55a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec37d31a82aedaacdf47fa540d508e6ca53626d082c45933c11a72f41a819a8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-clu
ster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6cb8e7a94967cd24e883272ba2f923a3adae43cc6105ea7506c1fba144de241\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fd9cca8c6af573373bdd5d9c9bea88412fc8611274c4856a98f11a59599fcb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acf80f37f1e38e051b10c7a1c0eeef8f0c3db7fc0e5653c54d82da7956a
8eed2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T16:07:52Z\\\",\\\"message\\\":\\\"g file observer\\\\nW0227 16:07:52.525113 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 16:07:52.525413 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 16:07:52.526578 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-70351561/tls.crt::/tmp/serving-cert-70351561/tls.key\\\\\\\"\\\\nI0227 16:07:52.790383 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 16:07:52.794547 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 16:07:52.794587 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 16:07:52.794627 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 16:07:52.794638 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 16:07:52.801197 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0227 16:07:52.801263 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 16:07:52.801293 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0227 16:07:52.801246 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0227 16:07:52.801325 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 16:07:52.801373 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 16:07:52.801389 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 16:07:52.801396 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0227 16:07:52.804476 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T16:07:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97ffad51412be8330198e944d4f687380d5066b777b54e362e8f8b772b808c89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1fc7975111dde3841671f046742eea3d0eefcb040eee8aadb99a7ca8eba8bd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"
state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1fc7975111dde3841671f046742eea3d0eefcb040eee8aadb99a7ca8eba8bd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:06:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:09:04Z is after 2025-08-24T17:21:41Z" Feb 27 16:09:04 crc kubenswrapper[4830]: I0227 16:09:04.673416 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00cd7e51-371e-4b0a-bd9f-2f517b32dcc2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d696722e1b43f10155be828026a025360961994508157507a965f2fe04a0770\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e44c5ee059ace66f0a159049433d1bf023f1a9024d7f6b8202424022b808889\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5382b6063de637c6b85d3a34c9fc6963e653f4bb9f30ca7af478a89814f23c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5de0725eb122d0444c4c7bfb3b03c479dfb681cac98e8a4d52ca0eaa3cdd3aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c172ecdd34951e753faf3ec60d36500c2822650b74bb825ef9eeda6bb8d0356\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa08243b506aaad33dbf7184de0ee3b14f2dc444ddfd00ffef3d141ddea8a795\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa08243b506aaad33dbf7184de0ee3b14f2dc444ddfd00ffef3d141ddea8a795\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-27T16:06:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12436bc0b6c1591075f81684cc8726b8ade83e7d217f7c6e963ad1423db6b9ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12436bc0b6c1591075f81684cc8726b8ade83e7d217f7c6e963ad1423db6b9ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:06:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f63879efa3aa43d9b25f67e5181594aa22d31e9ce7fd480a5e9a1acd89103538\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f63879efa3aa43d9b25f67e5181594aa22d31e9ce7fd480a5e9a1acd89103538\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:06:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:06:44Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:09:04Z is after 2025-08-24T17:21:41Z" Feb 27 16:09:04 crc kubenswrapper[4830]: I0227 16:09:04.699163 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eebea1ec0fd24a7664f474db4e4cdb08e53c5c742c392900599dc3e7adcbee2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:09:04Z is after 2025-08-24T17:21:41Z" Feb 27 16:09:04 crc kubenswrapper[4830]: I0227 16:09:04.715135 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p7298" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"616ebd42-6bbe-4536-ba35-f8b07f2f11b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a08121a78952c85326d07974c700397678777995d73cce9f24ace8932b3e9b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x9g9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p7298\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:09:04Z is after 2025-08-24T17:21:41Z" Feb 27 16:09:04 crc kubenswrapper[4830]: I0227 16:09:04.733281 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kgdlg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ba2fe32-66e0-4bcd-a646-9d07c9a21c54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9l8vg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9l8vg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kgdlg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:09:04Z is after 2025-08-24T17:21:41Z" Feb 27 16:09:04 crc 
kubenswrapper[4830]: I0227 16:09:04.748381 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f62df11-40bd-4531-baa9-5b7ab5679a67\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5046080bbc4855769fb50909ff06b8c2cfd2438a0b65749c89302a4e97342980\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\
":\\\"cri-o://97f85a479701374168505eed2e5a64ea144d1c48dce2022d60cb6ebaebfe159f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97f85a479701374168505eed2e5a64ea144d1c48dce2022d60cb6ebaebfe159f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:06:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:09:04Z is after 2025-08-24T17:21:41Z" Feb 27 16:09:04 crc kubenswrapper[4830]: I0227 16:09:04.763563 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 16:09:04 crc kubenswrapper[4830]: I0227 16:09:04.763729 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 16:09:04 crc kubenswrapper[4830]: E0227 16:09:04.763828 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 16:09:04 crc kubenswrapper[4830]: I0227 16:09:04.763887 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kgdlg" Feb 27 16:09:04 crc kubenswrapper[4830]: I0227 16:09:04.763846 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 16:09:04 crc kubenswrapper[4830]: E0227 16:09:04.764150 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kgdlg" podUID="6ba2fe32-66e0-4bcd-a646-9d07c9a21c54" Feb 27 16:09:04 crc kubenswrapper[4830]: E0227 16:09:04.764037 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 16:09:04 crc kubenswrapper[4830]: E0227 16:09:04.764274 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 16:09:04 crc kubenswrapper[4830]: I0227 16:09:04.772282 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:09:04Z is after 2025-08-24T17:21:41Z" Feb 27 16:09:04 crc kubenswrapper[4830]: I0227 16:09:04.794507 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fsrq9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb72b0f7-1d22-4d13-9653-b1607aa2235d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://787ecf758c99d969efde354b66bb37b5f9a8c2d79cccd8bab200423af61d4109\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b684822980dfea25ae15f46337d198cbbb8656b55c38874d917c2fae68431aa\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-27T16:08:52Z\\\",\\\"message\\\":\\\"2026-02-27T16:08:06+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_e9ebe1ab-38fe-4467-b8c9-4c8d8f85bd30\\\\n2026-02-27T16:08:06+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_e9ebe1ab-38fe-4467-b8c9-4c8d8f85bd30 to /host/opt/cni/bin/\\\\n2026-02-27T16:08:07Z [verbose] multus-daemon started\\\\n2026-02-27T16:08:07Z [verbose] 
Readiness Indicator file check\\\\n2026-02-27T16:08:52Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwzp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fsrq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:09:04Z is after 2025-08-24T17:21:41Z" Feb 27 16:09:04 crc kubenswrapper[4830]: I0227 16:09:04.817050 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rgv8f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"672682a0-e75f-4d6c-b4f2-542944327497\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5e
fac20f84ec097b144c4cb152a6eba826b5eac50e222e2618b75bf163cbe93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b56418e015eaf4499e3e5e8ade43a2e4fb4803d5621b0db3fb365dd185d375f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b56418e015eaf4499e3e5e8ade43a2e4fb4803d5621b0db3fb365dd185d375f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly
\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://441562bcb0ff6663b1affd84ae53348e082907782a89293730d0d076e9e84d9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://441562bcb0ff6663b1affd84ae53348e082907782a89293730d0d076e9e84d9d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be1e4280634a32e892b1dbc9bb4034909146da92eb6b01aa69bb7208447a7394\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203b
b2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be1e4280634a32e892b1dbc9bb4034909146da92eb6b01aa69bb7208447a7394\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d37bf60056d482e28c3252074c7ae82c3bd9884ba28af16b639c21795c9e3f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d37bf60056d482e28c3252074c7ae82c3bd9884ba28af16b639c21795c9e3f3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-rel
ease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f343d2da5177c14f48359b1377e123308e3686e4adfba0407940fe1368592548\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f343d2da5177c14f48359b1377e123308e3686e4adfba0407940fe1368592548\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c27a1b90defc90e0a69236bb9c462d6e5fd36e6500f0f4c720ccd960b0d3ee42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-c
ni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c27a1b90defc90e0a69236bb9c462d6e5fd36e6500f0f4c720ccd960b0d3ee42\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rgv8f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:09:04Z is after 2025-08-24T17:21:41Z" Feb 27 16:09:04 crc kubenswrapper[4830]: I0227 16:09:04.831453 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fcddf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6adbc0c4-e467-41f1-9190-d0dd3693eba6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2a66caf75365a8658dd7422b2f715e7f095b5bd7d96c8a0a96ce1a80cb99849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zrsv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fcddf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:09:04Z is after 2025-08-24T17:21:41Z" Feb 27 16:09:04 crc kubenswrapper[4830]: I0227 16:09:04.851142 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:09:04Z is after 2025-08-24T17:21:41Z" Feb 27 16:09:04 crc kubenswrapper[4830]: I0227 16:09:04.869459 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"478eecce-80f0-4502-b435-b1cddaf017e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://635c013219126bc71bc7c3f7b7f27339ea0a53eace870778212c42ed22a682ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c414ab235e0a7321e6455f54b680cefce4120d0114676e10e9f79bf275964052\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T16:07:17Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0227 16:06:47.208783 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0227 16:06:47.215570 1 observer_polling.go:159] Starting file observer\\\\nI0227 16:06:47.322367 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0227 16:06:47.347440 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0227 16:07:17.594375 1 cmd.go:179] failed checking apiserver connectivity: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T16:06:46Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a576bccb73520a452eea30512283d7bca1141a05d0c697885d106fb132c3df7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a682f07e321a3dc0cbf11fc0b683893d4527f80d5b41ee627e645f3996cc3ae9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e96e9ad075a2fbed1c691bdd79b49300f3b485834a95834aabe1ca32f099fe1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"1
92.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:06:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:09:04Z is after 2025-08-24T17:21:41Z" Feb 27 16:09:04 crc kubenswrapper[4830]: I0227 16:09:04.892846 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be 
located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:09:04Z is after 2025-08-24T17:21:41Z" Feb 27 16:09:04 crc kubenswrapper[4830]: E0227 16:09:04.900806 4830 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 27 16:09:04 crc kubenswrapper[4830]: I0227 16:09:04.912357 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4995a66c4bc6d195947f67ef564aa18c7fe145e12067c2fca8e0de6b76b72f19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df5245f048370e7455a63a6ca01bffb8e514e22a123e45bc3a064ce4fbe27f45\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:09:04Z is after 2025-08-24T17:21:41Z" Feb 27 16:09:04 crc kubenswrapper[4830]: I0227 16:09:04.928520 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92b20f888e852fcbe89e6d5c685b4b0fad411956bb3dc3c5ba0e10d6b206e131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-27T16:09:04Z is after 2025-08-24T17:21:41Z" Feb 27 16:09:04 crc kubenswrapper[4830]: I0227 16:09:04.947616 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bf9lh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3371981f3cdb5468ef6e9038108b3f9d884cec2f2db2f2754a3b5bf908d0372\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://273a7927f1c00c51d8c6157c6050f697430275ea8b1cac13ebe621d6556b4fef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a05759873cd3a2fb58756795e02d8247af4d8204388bf669ddc7cb1a8fbd7baa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc95e89196ccb52580164d57b311b57b2bcc7e666c3801f53378c59446c0e73f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:05Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05614397aaaa68f352f4263160c3463ce65513156517137bf09a3b5deb505c05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41cda3a7a261313b65434f3e8b885a9628d71f9c9a9c55f3e559226341c8aa41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e570bdefb1555e447bc4d5c56d6ea5e2639dbdc50b7a78dd40e3026e0a5ca140\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e570bdefb1555e447bc4d5c56d6ea5e2639dbdc50b7a78dd40e3026e0a5ca140\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-27T16:09:02Z\\\",\\\"message\\\":\\\"_controller.go:776] Recording success event on pod openshift-ovn-kubernetes/ovnkube-node-bf9lh\\\\nI0227 16:09:02.708486 7289 ovnkube.go:599] Stopped ovnkube\\\\nI0227 16:09:02.708837 7289 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0227 16:09:02.709090 7289 
base_network_controller_pods.go:916] Annotation values: ip=[10.217.0.3/23] ; mac=0a:58:0a:d9:00:03 ; gw=[10.217.0.1]\\\\nI0227 16:09:02.709106 7289 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0227 16:09:02.709119 7289 ovn.go:134] Ensuring zone local for Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g in node crc\\\\nI0227 16:09:02.709170 7289 base_network_controller_pods.go:477] [default/openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI0227 16:09:02.708848 7289 default_network_controller.go:776] Recording success event on pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0227 16:09:02.709226 7289 kube.go:317] Updating pod openshift-multus/network-metrics-daemon-kgdlg\\\\nF0227 16:09:02.709254 7289 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T16:09:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bf9lh_openshift-ovn-kubernetes(2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c9e139c0599bb81726ae3f42b0d9e6ee394e3d6641eb356cf5c63fe9260f4c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45e519054da5e98ba4be7deaac7911617f787b44d308bda8d4ee6fb7573336a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45e519054da5e98ba4
be7deaac7911617f787b44d308bda8d4ee6fb7573336a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bf9lh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:09:04Z is after 2025-08-24T17:21:41Z" Feb 27 16:09:04 crc kubenswrapper[4830]: I0227 16:09:04.963428 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00d6b7ce-4757-4275-8345-60c1b546ce25\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58864ba639422794b99f6190c7ab9a81537913f51574220acb3838f97b4f9421\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqztt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d403fb60beeb11f3cb72e7ca134b83a9c519375
fbb6727d2070118a6b924516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqztt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2tv5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:09:04Z is after 2025-08-24T17:21:41Z" Feb 27 16:09:04 crc kubenswrapper[4830]: I0227 16:09:04.981550 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gqgb6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd5a4c5b-2008-4354-b26e-8763a631e55c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32024ba3e19bc33cc8197b4dd6fdf165f6de8fd7c3b1e85b2f4768c75c50736a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87p77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14a90f11c3291e6bde2a008f2e8e617c9c8db
36ee65c3bbf26112bf8707d7cd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87p77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gqgb6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:09:04Z is after 2025-08-24T17:21:41Z" Feb 27 16:09:05 crc kubenswrapper[4830]: I0227 16:09:05.000541 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d9b6de8-29b6-48f3-9b7f-595e7722ec93\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbdb2afa0c0d81de9fc59fba4383c882283506d2312d78a1ed7cd0288bf6e670\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1ae97d933dc306c9d1ccee8c5c2d0e35a6a90ba747243526d096dd8fafa125a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba7ee266c946dbec6c4506d41bded5a187162e3838fe2b96e7e0957087ee4c2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24c01ec20922a3d1028544b23795fc085535970d40bb2b7199d1f726be21f36d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://24c01ec20922a3d1028544b23795fc085535970d40bb2b7199d1f726be21f36d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:06:46Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:06:44Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:09:04Z is after 2025-08-24T17:21:41Z" Feb 27 16:09:05 crc kubenswrapper[4830]: I0227 16:09:05.018629 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:09:05Z is after 2025-08-24T17:21:41Z" Feb 27 16:09:05 crc kubenswrapper[4830]: I0227 16:09:05.039214 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fsrq9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb72b0f7-1d22-4d13-9653-b1607aa2235d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://787ecf758c99d969efde354b66bb37b5f9a8c2d79cccd8bab200423af61d4109\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b684822980dfea25ae15f46337d198cbbb8656b55c38874d917c2fae68431aa\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-27T16:08:52Z\\\",\\\"message\\\":\\\"2026-02-27T16:08:06+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_e9ebe1ab-38fe-4467-b8c9-4c8d8f85bd30\\\\n2026-02-27T16:08:06+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_e9ebe1ab-38fe-4467-b8c9-4c8d8f85bd30 to /host/opt/cni/bin/\\\\n2026-02-27T16:08:07Z [verbose] multus-daemon started\\\\n2026-02-27T16:08:07Z [verbose] 
Readiness Indicator file check\\\\n2026-02-27T16:08:52Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwzp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fsrq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:09:05Z is after 2025-08-24T17:21:41Z" Feb 27 16:09:05 crc kubenswrapper[4830]: I0227 16:09:05.063222 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rgv8f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"672682a0-e75f-4d6c-b4f2-542944327497\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5e
fac20f84ec097b144c4cb152a6eba826b5eac50e222e2618b75bf163cbe93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b56418e015eaf4499e3e5e8ade43a2e4fb4803d5621b0db3fb365dd185d375f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b56418e015eaf4499e3e5e8ade43a2e4fb4803d5621b0db3fb365dd185d375f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly
\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://441562bcb0ff6663b1affd84ae53348e082907782a89293730d0d076e9e84d9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://441562bcb0ff6663b1affd84ae53348e082907782a89293730d0d076e9e84d9d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be1e4280634a32e892b1dbc9bb4034909146da92eb6b01aa69bb7208447a7394\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203b
b2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be1e4280634a32e892b1dbc9bb4034909146da92eb6b01aa69bb7208447a7394\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d37bf60056d482e28c3252074c7ae82c3bd9884ba28af16b639c21795c9e3f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d37bf60056d482e28c3252074c7ae82c3bd9884ba28af16b639c21795c9e3f3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-rel
ease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f343d2da5177c14f48359b1377e123308e3686e4adfba0407940fe1368592548\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f343d2da5177c14f48359b1377e123308e3686e4adfba0407940fe1368592548\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c27a1b90defc90e0a69236bb9c462d6e5fd36e6500f0f4c720ccd960b0d3ee42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-c
ni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c27a1b90defc90e0a69236bb9c462d6e5fd36e6500f0f4c720ccd960b0d3ee42\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rgv8f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:09:05Z is after 2025-08-24T17:21:41Z" Feb 27 16:09:05 crc kubenswrapper[4830]: I0227 16:09:05.079900 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fcddf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6adbc0c4-e467-41f1-9190-d0dd3693eba6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2a66caf75365a8658dd7422b2f715e7f095b5bd7d96c8a0a96ce1a80cb99849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zrsv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fcddf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:09:05Z is after 2025-08-24T17:21:41Z" Feb 27 16:09:05 crc kubenswrapper[4830]: I0227 16:09:05.100644 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:09:05Z is after 2025-08-24T17:21:41Z" Feb 27 16:09:05 crc kubenswrapper[4830]: I0227 16:09:05.119038 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"478eecce-80f0-4502-b435-b1cddaf017e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://635c013219126bc71bc7c3f7b7f27339ea0a53eace870778212c42ed22a682ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c414ab235e0a7321e6455f54b680cefce4120d0114676e10e9f79bf275964052\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T16:07:17Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0227 16:06:47.208783 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0227 16:06:47.215570 1 observer_polling.go:159] Starting file observer\\\\nI0227 16:06:47.322367 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0227 16:06:47.347440 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0227 16:07:17.594375 1 cmd.go:179] failed checking apiserver connectivity: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T16:06:46Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a576bccb73520a452eea30512283d7bca1141a05d0c697885d106fb132c3df7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a682f07e321a3dc0cbf11fc0b683893d4527f80d5b41ee627e645f3996cc3ae9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e96e9ad075a2fbed1c691bdd79b49300f3b485834a95834aabe1ca32f099fe1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"1
92.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:06:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:09:05Z is after 2025-08-24T17:21:41Z" Feb 27 16:09:05 crc kubenswrapper[4830]: I0227 16:09:05.136677 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be 
located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:09:05Z is after 2025-08-24T17:21:41Z" Feb 27 16:09:05 crc kubenswrapper[4830]: I0227 16:09:05.155669 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4995a66c4bc6d195947f67ef564aa18c7fe145e12067c2fca8e0de6b76b72f19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df5245f048370e7455a63a6ca01bffb8e514e22a123e45bc3a064ce4fbe27f45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:09:05Z is after 2025-08-24T17:21:41Z" Feb 27 16:09:05 crc kubenswrapper[4830]: I0227 16:09:05.175860 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92b20f888e852fcbe89e6d5c685b4b0fad411956bb3dc3c5ba0e10d6b206e131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-27T16:09:05Z is after 2025-08-24T17:21:41Z" Feb 27 16:09:05 crc kubenswrapper[4830]: I0227 16:09:05.206295 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bf9lh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3371981f3cdb5468ef6e9038108b3f9d884cec2f2db2f2754a3b5bf908d0372\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://273a7927f1c00c51d8c6157c6050f697430275ea8b1cac13ebe621d6556b4fef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a05759873cd3a2fb58756795e02d8247af4d8204388bf669ddc7cb1a8fbd7baa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc95e89196ccb52580164d57b311b57b2bcc7e666c3801f53378c59446c0e73f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:05Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05614397aaaa68f352f4263160c3463ce65513156517137bf09a3b5deb505c05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41cda3a7a261313b65434f3e8b885a9628d71f9c9a9c55f3e559226341c8aa41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e570bdefb1555e447bc4d5c56d6ea5e2639dbdc50b7a78dd40e3026e0a5ca140\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e570bdefb1555e447bc4d5c56d6ea5e2639dbdc50b7a78dd40e3026e0a5ca140\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-27T16:09:02Z\\\",\\\"message\\\":\\\"_controller.go:776] Recording success event on pod openshift-ovn-kubernetes/ovnkube-node-bf9lh\\\\nI0227 16:09:02.708486 7289 ovnkube.go:599] Stopped ovnkube\\\\nI0227 16:09:02.708837 7289 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0227 16:09:02.709090 7289 
base_network_controller_pods.go:916] Annotation values: ip=[10.217.0.3/23] ; mac=0a:58:0a:d9:00:03 ; gw=[10.217.0.1]\\\\nI0227 16:09:02.709106 7289 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0227 16:09:02.709119 7289 ovn.go:134] Ensuring zone local for Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g in node crc\\\\nI0227 16:09:02.709170 7289 base_network_controller_pods.go:477] [default/openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI0227 16:09:02.708848 7289 default_network_controller.go:776] Recording success event on pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0227 16:09:02.709226 7289 kube.go:317] Updating pod openshift-multus/network-metrics-daemon-kgdlg\\\\nF0227 16:09:02.709254 7289 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T16:09:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bf9lh_openshift-ovn-kubernetes(2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c9e139c0599bb81726ae3f42b0d9e6ee394e3d6641eb356cf5c63fe9260f4c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45e519054da5e98ba4be7deaac7911617f787b44d308bda8d4ee6fb7573336a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45e519054da5e98ba4
be7deaac7911617f787b44d308bda8d4ee6fb7573336a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bf9lh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:09:05Z is after 2025-08-24T17:21:41Z" Feb 27 16:09:05 crc kubenswrapper[4830]: I0227 16:09:05.221310 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00d6b7ce-4757-4275-8345-60c1b546ce25\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58864ba639422794b99f6190c7ab9a81537913f51574220acb3838f97b4f9421\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqztt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d403fb60beeb11f3cb72e7ca134b83a9c519375
fbb6727d2070118a6b924516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqztt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2tv5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:09:05Z is after 2025-08-24T17:21:41Z" Feb 27 16:09:05 crc kubenswrapper[4830]: I0227 16:09:05.235746 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gqgb6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd5a4c5b-2008-4354-b26e-8763a631e55c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32024ba3e19bc33cc8197b4dd6fdf165f6de8fd7c3b1e85b2f4768c75c50736a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87p77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14a90f11c3291e6bde2a008f2e8e617c9c8db
36ee65c3bbf26112bf8707d7cd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87p77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gqgb6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:09:05Z is after 2025-08-24T17:21:41Z" Feb 27 16:09:05 crc kubenswrapper[4830]: I0227 16:09:05.259001 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d9b6de8-29b6-48f3-9b7f-595e7722ec93\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbdb2afa0c0d81de9fc59fba4383c882283506d2312d78a1ed7cd0288bf6e670\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1ae97d933dc306c9d1ccee8c5c2d0e35a6a90ba747243526d096dd8fafa125a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba7ee266c946dbec6c4506d41bded5a187162e3838fe2b96e7e0957087ee4c2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24c01ec20922a3d1028544b23795fc085535970d40bb2b7199d1f726be21f36d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://24c01ec20922a3d1028544b23795fc085535970d40bb2b7199d1f726be21f36d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:06:46Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:06:44Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:09:05Z is after 2025-08-24T17:21:41Z" Feb 27 16:09:05 crc kubenswrapper[4830]: I0227 16:09:05.279015 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d20f886-cfdb-48c7-9754-6b7255b1124f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:09:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:09:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a3f71ef2ccab58baa94377c8f63ded83ca5f73a4d7a24d754719670685c55a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec37d31a82aedaacdf47fa540d508e6ca53626d082c45933c11a72f41a819a8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6cb8e7a94967cd24e883272ba2f923a3adae43cc6105ea7506c1fba144de241\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fd9cca8c6af573373bdd5d9c9bea88412fc8611274c4856a98f11a59599fcb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acf80f37f1e38e051b10c7a1c0eeef8f0c3db7fc0e5653c54d82da7956a8eed2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T16:07:52Z\\\"
,\\\"message\\\":\\\"g file observer\\\\nW0227 16:07:52.525113 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 16:07:52.525413 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 16:07:52.526578 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-70351561/tls.crt::/tmp/serving-cert-70351561/tls.key\\\\\\\"\\\\nI0227 16:07:52.790383 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 16:07:52.794547 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 16:07:52.794587 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 16:07:52.794627 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 16:07:52.794638 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 16:07:52.801197 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0227 16:07:52.801263 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 16:07:52.801293 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0227 16:07:52.801246 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0227 16:07:52.801325 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 16:07:52.801373 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 16:07:52.801389 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 16:07:52.801396 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0227 16:07:52.804476 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T16:07:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97ffad51412be8330198e944d4f687380d5066b777b54e362e8f8b772b808c89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1fc7975111dde3841671f046742eea3d0eefcb040eee8aadb99a7ca8eba8bd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1fc7975111dde3841671f046742eea3d0e
efcb040eee8aadb99a7ca8eba8bd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:06:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:09:05Z is after 2025-08-24T17:21:41Z" Feb 27 16:09:05 crc kubenswrapper[4830]: I0227 16:09:05.312056 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00cd7e51-371e-4b0a-bd9f-2f517b32dcc2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d696722e1b43f10155be828026a025360961994508157507a965f2fe04a0770\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e44c5ee059ace66f0a159049433d1bf023f1a9024d7f6b8202424022b808889\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5382b6063de637c6b85d3a34c9fc6963e653f4bb9f30ca7af478a89814f23c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5de0725eb122d0444c4c7bfb3b03c479dfb681cac98e8a4d52ca0eaa3cdd3aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c172ecdd34951e753faf3ec60d36500c2822650b74bb825ef9eeda6bb8d0356\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa08243b506aaad33dbf7184de0ee3b14f2dc444ddfd00ffef3d141ddea8a795\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa08243b506aaad33dbf7184de0ee3b14f2dc444ddfd00ffef3d141ddea8a795\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-27T16:06:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12436bc0b6c1591075f81684cc8726b8ade83e7d217f7c6e963ad1423db6b9ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12436bc0b6c1591075f81684cc8726b8ade83e7d217f7c6e963ad1423db6b9ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:06:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f63879efa3aa43d9b25f67e5181594aa22d31e9ce7fd480a5e9a1acd89103538\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f63879efa3aa43d9b25f67e5181594aa22d31e9ce7fd480a5e9a1acd89103538\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:06:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:06:44Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:09:05Z is after 2025-08-24T17:21:41Z" Feb 27 16:09:05 crc kubenswrapper[4830]: I0227 16:09:05.333099 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eebea1ec0fd24a7664f474db4e4cdb08e53c5c742c392900599dc3e7adcbee2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:09:05Z is after 2025-08-24T17:21:41Z" Feb 27 16:09:05 crc kubenswrapper[4830]: I0227 16:09:05.349547 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p7298" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"616ebd42-6bbe-4536-ba35-f8b07f2f11b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a08121a78952c85326d07974c700397678777995d73cce9f24ace8932b3e9b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x9g9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p7298\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:09:05Z is after 2025-08-24T17:21:41Z" Feb 27 16:09:05 crc kubenswrapper[4830]: I0227 16:09:05.365683 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kgdlg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ba2fe32-66e0-4bcd-a646-9d07c9a21c54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9l8vg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9l8vg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kgdlg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:09:05Z is after 2025-08-24T17:21:41Z" Feb 27 16:09:05 crc 
kubenswrapper[4830]: I0227 16:09:05.382469 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f62df11-40bd-4531-baa9-5b7ab5679a67\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5046080bbc4855769fb50909ff06b8c2cfd2438a0b65749c89302a4e97342980\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\
":\\\"cri-o://97f85a479701374168505eed2e5a64ea144d1c48dce2022d60cb6ebaebfe159f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97f85a479701374168505eed2e5a64ea144d1c48dce2022d60cb6ebaebfe159f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:06:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:09:05Z is after 2025-08-24T17:21:41Z" Feb 27 16:09:06 crc kubenswrapper[4830]: I0227 16:09:06.762111 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 16:09:06 crc kubenswrapper[4830]: I0227 16:09:06.762186 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 16:09:06 crc kubenswrapper[4830]: I0227 16:09:06.762115 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-kgdlg" Feb 27 16:09:06 crc kubenswrapper[4830]: I0227 16:09:06.762118 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 16:09:06 crc kubenswrapper[4830]: E0227 16:09:06.762263 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 16:09:06 crc kubenswrapper[4830]: E0227 16:09:06.762344 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kgdlg" podUID="6ba2fe32-66e0-4bcd-a646-9d07c9a21c54" Feb 27 16:09:06 crc kubenswrapper[4830]: E0227 16:09:06.762501 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 16:09:06 crc kubenswrapper[4830]: E0227 16:09:06.762621 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 16:09:06 crc kubenswrapper[4830]: I0227 16:09:06.822698 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 16:09:06 crc kubenswrapper[4830]: I0227 16:09:06.822868 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 16:09:06 crc kubenswrapper[4830]: E0227 16:09:06.823006 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 16:10:10.822923332 +0000 UTC m=+206.912195825 (durationBeforeRetry 1m4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:09:06 crc kubenswrapper[4830]: E0227 16:09:06.823050 4830 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 27 16:09:06 crc kubenswrapper[4830]: E0227 16:09:06.823077 4830 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 27 16:09:06 crc kubenswrapper[4830]: E0227 16:09:06.823100 4830 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 16:09:06 crc kubenswrapper[4830]: E0227 16:09:06.823161 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-27 16:10:10.823139958 +0000 UTC m=+206.912412461 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 16:09:06 crc kubenswrapper[4830]: I0227 16:09:06.823194 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 16:09:06 crc kubenswrapper[4830]: I0227 16:09:06.823233 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 16:09:06 crc kubenswrapper[4830]: I0227 16:09:06.823318 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 16:09:06 crc kubenswrapper[4830]: E0227 16:09:06.823428 4830 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 27 16:09:06 crc kubenswrapper[4830]: E0227 
16:09:06.823447 4830 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 27 16:09:06 crc kubenswrapper[4830]: E0227 16:09:06.823462 4830 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 16:09:06 crc kubenswrapper[4830]: E0227 16:09:06.823504 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-27 16:10:10.823489817 +0000 UTC m=+206.912762320 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 16:09:06 crc kubenswrapper[4830]: E0227 16:09:06.823557 4830 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 27 16:09:06 crc kubenswrapper[4830]: E0227 16:09:06.823625 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-27 16:10:10.82360838 +0000 UTC m=+206.912880883 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 27 16:09:06 crc kubenswrapper[4830]: E0227 16:09:06.823728 4830 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 27 16:09:06 crc kubenswrapper[4830]: E0227 16:09:06.823771 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-27 16:10:10.823757714 +0000 UTC m=+206.913030217 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 27 16:09:06 crc kubenswrapper[4830]: I0227 16:09:06.924940 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6ba2fe32-66e0-4bcd-a646-9d07c9a21c54-metrics-certs\") pod \"network-metrics-daemon-kgdlg\" (UID: \"6ba2fe32-66e0-4bcd-a646-9d07c9a21c54\") " pod="openshift-multus/network-metrics-daemon-kgdlg" Feb 27 16:09:06 crc kubenswrapper[4830]: E0227 16:09:06.925058 4830 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 27 16:09:06 crc kubenswrapper[4830]: E0227 16:09:06.925130 4830 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6ba2fe32-66e0-4bcd-a646-9d07c9a21c54-metrics-certs podName:6ba2fe32-66e0-4bcd-a646-9d07c9a21c54 nodeName:}" failed. No retries permitted until 2026-02-27 16:10:10.925112586 +0000 UTC m=+207.014385059 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6ba2fe32-66e0-4bcd-a646-9d07c9a21c54-metrics-certs") pod "network-metrics-daemon-kgdlg" (UID: "6ba2fe32-66e0-4bcd-a646-9d07c9a21c54") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 27 16:09:07 crc kubenswrapper[4830]: I0227 16:09:07.010572 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:09:07 crc kubenswrapper[4830]: I0227 16:09:07.010631 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:09:07 crc kubenswrapper[4830]: I0227 16:09:07.010649 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:09:07 crc kubenswrapper[4830]: I0227 16:09:07.010673 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:09:07 crc kubenswrapper[4830]: I0227 16:09:07.010691 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:09:07Z","lastTransitionTime":"2026-02-27T16:09:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:09:07 crc kubenswrapper[4830]: E0227 16:09:07.031645 4830 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:09:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:09:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:09:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:09:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:09:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:09:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:09:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:09:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"058e4d33-3c10-460a-8f66-1f2272cb9956\\\",\\\"systemUUID\\\":\\\"1d4a94de-760c-40e1-8054-66d250f336ee\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:09:07Z is after 2025-08-24T17:21:41Z" Feb 27 16:09:07 crc kubenswrapper[4830]: I0227 16:09:07.036331 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:09:07 crc kubenswrapper[4830]: I0227 16:09:07.036393 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:09:07 crc kubenswrapper[4830]: I0227 16:09:07.036412 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:09:07 crc kubenswrapper[4830]: I0227 16:09:07.036438 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:09:07 crc kubenswrapper[4830]: I0227 16:09:07.036456 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:09:07Z","lastTransitionTime":"2026-02-27T16:09:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:09:07 crc kubenswrapper[4830]: E0227 16:09:07.055888 4830 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:09:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:09:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:09:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:09:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:09:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:09:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:09:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:09:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"058e4d33-3c10-460a-8f66-1f2272cb9956\\\",\\\"systemUUID\\\":\\\"1d4a94de-760c-40e1-8054-66d250f336ee\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:09:07Z is after 2025-08-24T17:21:41Z" Feb 27 16:09:07 crc kubenswrapper[4830]: I0227 16:09:07.060645 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:09:07 crc kubenswrapper[4830]: I0227 16:09:07.060700 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:09:07 crc kubenswrapper[4830]: I0227 16:09:07.060718 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:09:07 crc kubenswrapper[4830]: I0227 16:09:07.060742 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:09:07 crc kubenswrapper[4830]: I0227 16:09:07.060759 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:09:07Z","lastTransitionTime":"2026-02-27T16:09:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:09:07 crc kubenswrapper[4830]: E0227 16:09:07.080378 4830 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:09:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:09:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:09:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:09:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:09:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:09:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:09:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:09:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"058e4d33-3c10-460a-8f66-1f2272cb9956\\\",\\\"systemUUID\\\":\\\"1d4a94de-760c-40e1-8054-66d250f336ee\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:09:07Z is after 2025-08-24T17:21:41Z" Feb 27 16:09:07 crc kubenswrapper[4830]: I0227 16:09:07.084870 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:09:07 crc kubenswrapper[4830]: I0227 16:09:07.084924 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:09:07 crc kubenswrapper[4830]: I0227 16:09:07.084941 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:09:07 crc kubenswrapper[4830]: I0227 16:09:07.084998 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:09:07 crc kubenswrapper[4830]: I0227 16:09:07.085023 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:09:07Z","lastTransitionTime":"2026-02-27T16:09:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} 
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"058e4d33-3c10-460a-8f66-1f2272cb9956\\\",\\\"systemUUID\\\":\\\"1d4a94de-760c-40e1-8054-66d250f336ee\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:09:07Z is after 2025-08-24T17:21:41Z" Feb 27 16:09:07 crc kubenswrapper[4830]: E0227 16:09:07.134762 4830 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 27 16:09:08 crc kubenswrapper[4830]: I0227 16:09:08.761765 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 16:09:08 crc kubenswrapper[4830]: I0227 16:09:08.761829 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 16:09:08 crc kubenswrapper[4830]: I0227 16:09:08.761934 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kgdlg" Feb 27 16:09:08 crc kubenswrapper[4830]: E0227 16:09:08.762034 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 16:09:08 crc kubenswrapper[4830]: I0227 16:09:08.762164 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 16:09:08 crc kubenswrapper[4830]: E0227 16:09:08.762170 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 16:09:08 crc kubenswrapper[4830]: E0227 16:09:08.762287 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 16:09:08 crc kubenswrapper[4830]: E0227 16:09:08.762377 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kgdlg" podUID="6ba2fe32-66e0-4bcd-a646-9d07c9a21c54" Feb 27 16:09:09 crc kubenswrapper[4830]: E0227 16:09:09.902215 4830 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 27 16:09:10 crc kubenswrapper[4830]: I0227 16:09:10.761752 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 16:09:10 crc kubenswrapper[4830]: I0227 16:09:10.761903 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 16:09:10 crc kubenswrapper[4830]: I0227 16:09:10.761980 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kgdlg" Feb 27 16:09:10 crc kubenswrapper[4830]: I0227 16:09:10.761903 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 16:09:10 crc kubenswrapper[4830]: E0227 16:09:10.762131 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 16:09:10 crc kubenswrapper[4830]: E0227 16:09:10.762345 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kgdlg" podUID="6ba2fe32-66e0-4bcd-a646-9d07c9a21c54" Feb 27 16:09:10 crc kubenswrapper[4830]: E0227 16:09:10.762498 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 16:09:10 crc kubenswrapper[4830]: E0227 16:09:10.762675 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 16:09:12 crc kubenswrapper[4830]: I0227 16:09:12.762266 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 16:09:12 crc kubenswrapper[4830]: I0227 16:09:12.762412 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 16:09:12 crc kubenswrapper[4830]: E0227 16:09:12.763369 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 16:09:12 crc kubenswrapper[4830]: I0227 16:09:12.762897 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kgdlg" Feb 27 16:09:12 crc kubenswrapper[4830]: I0227 16:09:12.762428 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 16:09:12 crc kubenswrapper[4830]: E0227 16:09:12.763520 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 16:09:12 crc kubenswrapper[4830]: E0227 16:09:12.763612 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 16:09:12 crc kubenswrapper[4830]: E0227 16:09:12.763720 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kgdlg" podUID="6ba2fe32-66e0-4bcd-a646-9d07c9a21c54" Feb 27 16:09:14 crc kubenswrapper[4830]: I0227 16:09:14.762215 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kgdlg" Feb 27 16:09:14 crc kubenswrapper[4830]: I0227 16:09:14.762380 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 16:09:14 crc kubenswrapper[4830]: E0227 16:09:14.762589 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kgdlg" podUID="6ba2fe32-66e0-4bcd-a646-9d07c9a21c54" Feb 27 16:09:14 crc kubenswrapper[4830]: I0227 16:09:14.762675 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 16:09:14 crc kubenswrapper[4830]: I0227 16:09:14.762772 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 16:09:14 crc kubenswrapper[4830]: E0227 16:09:14.763019 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 16:09:14 crc kubenswrapper[4830]: E0227 16:09:14.763156 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 16:09:14 crc kubenswrapper[4830]: E0227 16:09:14.763314 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 16:09:14 crc kubenswrapper[4830]: I0227 16:09:14.785183 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:09:14Z is after 2025-08-24T17:21:41Z" Feb 27 16:09:14 crc kubenswrapper[4830]: I0227 16:09:14.806400 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:09:14Z is after 2025-08-24T17:21:41Z" Feb 27 16:09:14 crc kubenswrapper[4830]: I0227 16:09:14.823442 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fsrq9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb72b0f7-1d22-4d13-9653-b1607aa2235d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://787ecf758c99d969efde354b66bb37b5f9a8c2d79cccd8bab200423af61d4109\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b684822980dfea25ae15f46337d198cbbb8656b55c38874d917c2fae68431aa\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-27T16:08:52Z\\\",\\\"message\\\":\\\"2026-02-27T16:08:06+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_e9ebe1ab-38fe-4467-b8c9-4c8d8f85bd30\\\\n2026-02-27T16:08:06+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_e9ebe1ab-38fe-4467-b8c9-4c8d8f85bd30 to /host/opt/cni/bin/\\\\n2026-02-27T16:08:07Z [verbose] multus-daemon started\\\\n2026-02-27T16:08:07Z [verbose] 
Readiness Indicator file check\\\\n2026-02-27T16:08:52Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xwzp4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fsrq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:09:14Z is after 2025-08-24T17:21:41Z" Feb 27 16:09:14 crc kubenswrapper[4830]: I0227 16:09:14.847172 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rgv8f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"672682a0-e75f-4d6c-b4f2-542944327497\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5e
fac20f84ec097b144c4cb152a6eba826b5eac50e222e2618b75bf163cbe93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b56418e015eaf4499e3e5e8ade43a2e4fb4803d5621b0db3fb365dd185d375f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b56418e015eaf4499e3e5e8ade43a2e4fb4803d5621b0db3fb365dd185d375f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly
\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://441562bcb0ff6663b1affd84ae53348e082907782a89293730d0d076e9e84d9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://441562bcb0ff6663b1affd84ae53348e082907782a89293730d0d076e9e84d9d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be1e4280634a32e892b1dbc9bb4034909146da92eb6b01aa69bb7208447a7394\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203b
b2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be1e4280634a32e892b1dbc9bb4034909146da92eb6b01aa69bb7208447a7394\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d37bf60056d482e28c3252074c7ae82c3bd9884ba28af16b639c21795c9e3f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d37bf60056d482e28c3252074c7ae82c3bd9884ba28af16b639c21795c9e3f3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-rel
ease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f343d2da5177c14f48359b1377e123308e3686e4adfba0407940fe1368592548\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f343d2da5177c14f48359b1377e123308e3686e4adfba0407940fe1368592548\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c27a1b90defc90e0a69236bb9c462d6e5fd36e6500f0f4c720ccd960b0d3ee42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-c
ni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c27a1b90defc90e0a69236bb9c462d6e5fd36e6500f0f4c720ccd960b0d3ee42\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dpb8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rgv8f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:09:14Z is after 2025-08-24T17:21:41Z" Feb 27 16:09:14 crc kubenswrapper[4830]: I0227 16:09:14.862441 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-fcddf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6adbc0c4-e467-41f1-9190-d0dd3693eba6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2a66caf75365a8658dd7422b2f715e7f095b5bd7d96c8a0a96ce1a80cb99849\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8zrsv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fcddf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:09:14Z is after 2025-08-24T17:21:41Z" Feb 27 16:09:14 crc kubenswrapper[4830]: I0227 16:09:14.880407 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d9b6de8-29b6-48f3-9b7f-595e7722ec93\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbdb2afa0c0d81de9fc59fba4383c882283506d2312d78a1ed7cd0288bf6e670\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler
\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1ae97d933dc306c9d1ccee8c5c2d0e35a6a90ba747243526d096dd8fafa125a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba7ee266c946dbec6c4506d41bded5a187162e3838fe2b96e7e0957087ee4c2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\
\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24c01ec20922a3d1028544b23795fc085535970d40bb2b7199d1f726be21f36d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24c01ec20922a3d1028544b23795fc085535970d40bb2b7199d1f726be21f36d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:06:46Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:06:44Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:09:14Z is after 2025-08-24T17:21:41Z" Feb 27 16:09:14 crc kubenswrapper[4830]: I0227 16:09:14.899764 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"478eecce-80f0-4502-b435-b1cddaf017e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://635c013219126bc71bc7c3f7b7f27339ea0a53eace870778212c42ed22a682ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c414ab235e0a7321e6455f54b680cefce4120d0114676e10e9f79bf275964052\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T16:07:17Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0227 16:06:47.208783 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0227 16:06:47.215570 1 observer_polling.go:159] Starting file observer\\\\nI0227 16:06:47.322367 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0227 16:06:47.347440 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0227 16:07:17.594375 1 cmd.go:179] failed checking apiserver connectivity: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T16:06:46Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a576bccb73520a452eea30512283d7bca1141a05d0c697885d106fb132c3df7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a682f07e321a3dc0cbf11fc0b683893d4527f80d5b41ee627e645f3996cc3ae9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e96e9ad075a2fbed1c691bdd79b49300f3b485834a95834aabe1ca32f099fe1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"1
92.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:06:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:09:14Z is after 2025-08-24T17:21:41Z" Feb 27 16:09:14 crc kubenswrapper[4830]: E0227 16:09:14.904491 4830 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 27 16:09:14 crc kubenswrapper[4830]: I0227 16:09:14.921061 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:09:14Z is after 2025-08-24T17:21:41Z" Feb 27 16:09:14 crc kubenswrapper[4830]: I0227 16:09:14.940647 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4995a66c4bc6d195947f67ef564aa18c7fe145e12067c2fca8e0de6b76b72f19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df5245f048370e7455a63a6ca01bffb8e514e22a123e45bc3a064ce4fbe27f45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:09:14Z is after 2025-08-24T17:21:41Z" Feb 27 16:09:14 crc kubenswrapper[4830]: I0227 16:09:14.958385 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:07Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:07Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92b20f888e852fcbe89e6d5c685b4b0fad411956bb3dc3c5ba0e10d6b206e131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-27T16:09:14Z is after 2025-08-24T17:21:41Z" Feb 27 16:09:14 crc kubenswrapper[4830]: I0227 16:09:14.991449 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bf9lh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3371981f3cdb5468ef6e9038108b3f9d884cec2f2db2f2754a3b5bf908d0372\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://273a7927f1c00c51d8c6157c6050f697430275ea8b1cac13ebe621d6556b4fef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a05759873cd3a2fb58756795e02d8247af4d8204388bf669ddc7cb1a8fbd7baa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc95e89196ccb52580164d57b311b57b2bcc7e666c3801f53378c59446c0e73f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:05Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://05614397aaaa68f352f4263160c3463ce65513156517137bf09a3b5deb505c05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41cda3a7a261313b65434f3e8b885a9628d71f9c9a9c55f3e559226341c8aa41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e570bdefb1555e447bc4d5c56d6ea5e2639dbdc50b7a78dd40e3026e0a5ca140\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e570bdefb1555e447bc4d5c56d6ea5e2639dbdc50b7a78dd40e3026e0a5ca140\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-27T16:09:02Z\\\",\\\"message\\\":\\\"_controller.go:776] Recording success event on pod openshift-ovn-kubernetes/ovnkube-node-bf9lh\\\\nI0227 16:09:02.708486 7289 ovnkube.go:599] Stopped ovnkube\\\\nI0227 16:09:02.708837 7289 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0227 16:09:02.709090 7289 
base_network_controller_pods.go:916] Annotation values: ip=[10.217.0.3/23] ; mac=0a:58:0a:d9:00:03 ; gw=[10.217.0.1]\\\\nI0227 16:09:02.709106 7289 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0227 16:09:02.709119 7289 ovn.go:134] Ensuring zone local for Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g in node crc\\\\nI0227 16:09:02.709170 7289 base_network_controller_pods.go:477] [default/openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI0227 16:09:02.708848 7289 default_network_controller.go:776] Recording success event on pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0227 16:09:02.709226 7289 kube.go:317] Updating pod openshift-multus/network-metrics-daemon-kgdlg\\\\nF0227 16:09:02.709254 7289 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T16:09:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bf9lh_openshift-ovn-kubernetes(2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c9e139c0599bb81726ae3f42b0d9e6ee394e3d6641eb356cf5c63fe9260f4c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45e519054da5e98ba4be7deaac7911617f787b44d308bda8d4ee6fb7573336a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45e519054da5e98ba4
be7deaac7911617f787b44d308bda8d4ee6fb7573336a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:08:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tf9wc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bf9lh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:09:14Z is after 2025-08-24T17:21:41Z" Feb 27 16:09:15 crc kubenswrapper[4830]: I0227 16:09:15.010508 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00d6b7ce-4757-4275-8345-60c1b546ce25\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58864ba639422794b99f6190c7ab9a81537913f51574220acb3838f97b4f9421\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqztt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d403fb60beeb11f3cb72e7ca134b83a9c519375
fbb6727d2070118a6b924516\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqztt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2tv5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:09:15Z is after 2025-08-24T17:21:41Z" Feb 27 16:09:15 crc kubenswrapper[4830]: I0227 16:09:15.028429 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gqgb6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd5a4c5b-2008-4354-b26e-8763a631e55c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32024ba3e19bc33cc8197b4dd6fdf165f6de8fd7c3b1e85b2f4768c75c50736a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87p77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14a90f11c3291e6bde2a008f2e8e617c9c8db
36ee65c3bbf26112bf8707d7cd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-87p77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gqgb6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:09:15Z is after 2025-08-24T17:21:41Z" Feb 27 16:09:15 crc kubenswrapper[4830]: I0227 16:09:15.051264 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d20f886-cfdb-48c7-9754-6b7255b1124f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:09:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:09:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a3f71ef2ccab58baa94377c8f63ded83ca5f73a4d7a24d754719670685c55a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec37d31a82aedaacdf47fa540d508e6ca53626d082c45933c11a72f41a819a8a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6cb8e7a94967cd24e883272ba2f923a3adae43cc6105ea7506c1fba144de241\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fd9cca8c6af573373bdd5d9c9bea88412fc8611274c4856a98f11a59599fcb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acf80f37f1e38e051b10c7a1c0eeef8f0c3db7fc0e5653c54d82da7956a8eed2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T16:07:52Z\\\"
,\\\"message\\\":\\\"g file observer\\\\nW0227 16:07:52.525113 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 16:07:52.525413 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 16:07:52.526578 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-70351561/tls.crt::/tmp/serving-cert-70351561/tls.key\\\\\\\"\\\\nI0227 16:07:52.790383 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 16:07:52.794547 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 16:07:52.794587 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 16:07:52.794627 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 16:07:52.794638 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 16:07:52.801197 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0227 16:07:52.801263 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 16:07:52.801293 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0227 16:07:52.801246 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0227 16:07:52.801325 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 16:07:52.801373 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 16:07:52.801389 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 16:07:52.801396 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0227 16:07:52.804476 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T16:07:52Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97ffad51412be8330198e944d4f687380d5066b777b54e362e8f8b772b808c89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1fc7975111dde3841671f046742eea3d0eefcb040eee8aadb99a7ca8eba8bd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1fc7975111dde3841671f046742eea3d0e
efcb040eee8aadb99a7ca8eba8bd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:06:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:09:15Z is after 2025-08-24T17:21:41Z" Feb 27 16:09:15 crc kubenswrapper[4830]: I0227 16:09:15.064791 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f62df11-40bd-4531-baa9-5b7ab5679a67\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5046080bbc4855769fb50909ff06b8c2cfd2438a0b65749c89302a4e97342980\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97f85a479701374168505eed2e5a64ea144d1c48dce2022d60cb6ebaebfe159f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97f85a479701374168505eed2e5a64ea144d1c48dce2022d60cb6ebaebfe159f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:06:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:09:15Z is after 2025-08-24T17:21:41Z" Feb 27 16:09:15 crc kubenswrapper[4830]: I0227 16:09:15.097350 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00cd7e51-371e-4b0a-bd9f-2f517b32dcc2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d696722e1b43f10155be828026a025360961994508157507a965f2fe04a0770\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e44c5ee059ace66f0a159049433d1bf023f1a9024d7f6b8202424022b808889\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5382b6063de637c6b85d3a34c9fc6963e653f4bb9f30ca7af478a89814f23c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5de0725eb122d0444c4c7bfb3b03c479dfb681cac98e8a4d52ca0eaa3cdd3aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c172ecdd34951e753faf3ec60d36500c2822650b74bb825ef9eeda6bb8d0356\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa08243b506aaad33dbf7184de0ee3b14f2dc444ddfd00ffef3d141ddea8a795\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa08243b506aaad33dbf7184de0ee3b14f2dc444ddfd00ffef3d141ddea8a795\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-27T16:06:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:06:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12436bc0b6c1591075f81684cc8726b8ade83e7d217f7c6e963ad1423db6b9ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12436bc0b6c1591075f81684cc8726b8ade83e7d217f7c6e963ad1423db6b9ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:06:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:06:47Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f63879efa3aa43d9b25f67e5181594aa22d31e9ce7fd480a5e9a1acd89103538\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f63879efa3aa43d9b25f67e5181594aa22d31e9ce7fd480a5e9a1acd89103538\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:06:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:06:44Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:09:15Z is after 2025-08-24T17:21:41Z" Feb 27 16:09:15 crc kubenswrapper[4830]: I0227 16:09:15.115530 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eebea1ec0fd24a7664f474db4e4cdb08e53c5c742c392900599dc3e7adcbee2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:09:15Z is after 2025-08-24T17:21:41Z" Feb 27 16:09:15 crc kubenswrapper[4830]: I0227 16:09:15.131319 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-p7298" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"616ebd42-6bbe-4536-ba35-f8b07f2f11b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a08121a78952c85326d07974c700397678777995d73cce9f24ace8932b3e9b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:08:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x9g9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-p7298\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:09:15Z is after 2025-08-24T17:21:41Z" Feb 27 16:09:15 crc kubenswrapper[4830]: I0227 16:09:15.147089 4830 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kgdlg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ba2fe32-66e0-4bcd-a646-9d07c9a21c54\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:08:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9l8vg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9l8vg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:08:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kgdlg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:09:15Z is after 2025-08-24T17:21:41Z" Feb 27 16:09:15 crc 
kubenswrapper[4830]: I0227 16:09:15.763136 4830 scope.go:117] "RemoveContainer" containerID="e570bdefb1555e447bc4d5c56d6ea5e2639dbdc50b7a78dd40e3026e0a5ca140" Feb 27 16:09:15 crc kubenswrapper[4830]: E0227 16:09:15.763522 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-bf9lh_openshift-ovn-kubernetes(2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904)\"" pod="openshift-ovn-kubernetes/ovnkube-node-bf9lh" podUID="2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904" Feb 27 16:09:16 crc kubenswrapper[4830]: I0227 16:09:16.762142 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 16:09:16 crc kubenswrapper[4830]: I0227 16:09:16.762185 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kgdlg" Feb 27 16:09:16 crc kubenswrapper[4830]: I0227 16:09:16.762220 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 16:09:16 crc kubenswrapper[4830]: I0227 16:09:16.762308 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 16:09:16 crc kubenswrapper[4830]: E0227 16:09:16.762307 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 16:09:16 crc kubenswrapper[4830]: E0227 16:09:16.762420 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kgdlg" podUID="6ba2fe32-66e0-4bcd-a646-9d07c9a21c54" Feb 27 16:09:16 crc kubenswrapper[4830]: E0227 16:09:16.762514 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 16:09:16 crc kubenswrapper[4830]: E0227 16:09:16.762729 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 16:09:17 crc kubenswrapper[4830]: I0227 16:09:17.316852 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:09:17 crc kubenswrapper[4830]: I0227 16:09:17.316911 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:09:17 crc kubenswrapper[4830]: I0227 16:09:17.316927 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:09:17 crc kubenswrapper[4830]: I0227 16:09:17.316982 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:09:17 crc kubenswrapper[4830]: I0227 16:09:17.317023 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:09:17Z","lastTransitionTime":"2026-02-27T16:09:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:09:17 crc kubenswrapper[4830]: E0227 16:09:17.336317 4830 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:09:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:09:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:09:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:09:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:09:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:09:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:09:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:09:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"058e4d33-3c10-460a-8f66-1f2272cb9956\\\",\\\"systemUUID\\\":\\\"1d4a94de-760c-40e1-8054-66d250f336ee\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:09:17Z is after 2025-08-24T17:21:41Z" Feb 27 16:09:17 crc kubenswrapper[4830]: I0227 16:09:17.341012 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:09:17 crc kubenswrapper[4830]: I0227 16:09:17.341125 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:09:17 crc kubenswrapper[4830]: I0227 16:09:17.341150 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:09:17 crc kubenswrapper[4830]: I0227 16:09:17.341184 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:09:17 crc kubenswrapper[4830]: I0227 16:09:17.341208 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:09:17Z","lastTransitionTime":"2026-02-27T16:09:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:09:17 crc kubenswrapper[4830]: E0227 16:09:17.361286 4830 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:09:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:09:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:09:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:09:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:09:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:09:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:09:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:09:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"058e4d33-3c10-460a-8f66-1f2272cb9956\\\",\\\"systemUUID\\\":\\\"1d4a94de-760c-40e1-8054-66d250f336ee\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:09:17Z is after 2025-08-24T17:21:41Z" Feb 27 16:09:17 crc kubenswrapper[4830]: I0227 16:09:17.366000 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:09:17 crc kubenswrapper[4830]: I0227 16:09:17.366043 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:09:17 crc kubenswrapper[4830]: I0227 16:09:17.366052 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:09:17 crc kubenswrapper[4830]: I0227 16:09:17.366068 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:09:17 crc kubenswrapper[4830]: I0227 16:09:17.366077 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:09:17Z","lastTransitionTime":"2026-02-27T16:09:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:09:17 crc kubenswrapper[4830]: E0227 16:09:17.380713 4830 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:09:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:09:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:09:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:09:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:09:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:09:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:09:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:09:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"058e4d33-3c10-460a-8f66-1f2272cb9956\\\",\\\"systemUUID\\\":\\\"1d4a94de-760c-40e1-8054-66d250f336ee\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:09:17Z is after 2025-08-24T17:21:41Z" Feb 27 16:09:17 crc kubenswrapper[4830]: I0227 16:09:17.384888 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:09:17 crc kubenswrapper[4830]: I0227 16:09:17.384926 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:09:17 crc kubenswrapper[4830]: I0227 16:09:17.384937 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:09:17 crc kubenswrapper[4830]: I0227 16:09:17.384967 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:09:17 crc kubenswrapper[4830]: I0227 16:09:17.384986 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:09:17Z","lastTransitionTime":"2026-02-27T16:09:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:09:17 crc kubenswrapper[4830]: E0227 16:09:17.398433 4830 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:09:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:09:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:09:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:09:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:09:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:09:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:09:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:09:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"058e4d33-3c10-460a-8f66-1f2272cb9956\\\",\\\"systemUUID\\\":\\\"1d4a94de-760c-40e1-8054-66d250f336ee\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:09:17Z is after 2025-08-24T17:21:41Z" Feb 27 16:09:17 crc kubenswrapper[4830]: I0227 16:09:17.402671 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:09:17 crc kubenswrapper[4830]: I0227 16:09:17.402721 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:09:17 crc kubenswrapper[4830]: I0227 16:09:17.402733 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:09:17 crc kubenswrapper[4830]: I0227 16:09:17.402751 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:09:17 crc kubenswrapper[4830]: I0227 16:09:17.402764 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:09:17Z","lastTransitionTime":"2026-02-27T16:09:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:09:17 crc kubenswrapper[4830]: E0227 16:09:17.420404 4830 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:09:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:09:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:09:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:09:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:09:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:09:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:09:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:09:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"058e4d33-3c10-460a-8f66-1f2272cb9956\\\",\\\"systemUUID\\\":\\\"1d4a94de-760c-40e1-8054-66d250f336ee\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:09:17Z is after 2025-08-24T17:21:41Z" Feb 27 16:09:17 crc kubenswrapper[4830]: E0227 16:09:17.420616 4830 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 27 16:09:18 crc kubenswrapper[4830]: I0227 16:09:18.761961 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 16:09:18 crc kubenswrapper[4830]: I0227 16:09:18.762020 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 16:09:18 crc kubenswrapper[4830]: I0227 16:09:18.762066 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kgdlg" Feb 27 16:09:18 crc kubenswrapper[4830]: E0227 16:09:18.762536 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 16:09:18 crc kubenswrapper[4830]: E0227 16:09:18.762375 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 16:09:18 crc kubenswrapper[4830]: I0227 16:09:18.762082 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 16:09:18 crc kubenswrapper[4830]: E0227 16:09:18.762635 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kgdlg" podUID="6ba2fe32-66e0-4bcd-a646-9d07c9a21c54" Feb 27 16:09:18 crc kubenswrapper[4830]: E0227 16:09:18.762726 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 16:09:19 crc kubenswrapper[4830]: E0227 16:09:19.906100 4830 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 27 16:09:20 crc kubenswrapper[4830]: I0227 16:09:20.761601 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 16:09:20 crc kubenswrapper[4830]: I0227 16:09:20.761733 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-kgdlg" Feb 27 16:09:20 crc kubenswrapper[4830]: E0227 16:09:20.761795 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 16:09:20 crc kubenswrapper[4830]: I0227 16:09:20.761827 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 16:09:20 crc kubenswrapper[4830]: I0227 16:09:20.761838 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 16:09:20 crc kubenswrapper[4830]: E0227 16:09:20.762023 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kgdlg" podUID="6ba2fe32-66e0-4bcd-a646-9d07c9a21c54" Feb 27 16:09:20 crc kubenswrapper[4830]: E0227 16:09:20.762240 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 16:09:20 crc kubenswrapper[4830]: E0227 16:09:20.762364 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 16:09:22 crc kubenswrapper[4830]: I0227 16:09:22.762082 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 16:09:22 crc kubenswrapper[4830]: I0227 16:09:22.762121 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kgdlg" Feb 27 16:09:22 crc kubenswrapper[4830]: I0227 16:09:22.762179 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 16:09:22 crc kubenswrapper[4830]: E0227 16:09:22.762315 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 16:09:22 crc kubenswrapper[4830]: I0227 16:09:22.762350 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 16:09:22 crc kubenswrapper[4830]: E0227 16:09:22.762502 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kgdlg" podUID="6ba2fe32-66e0-4bcd-a646-9d07c9a21c54" Feb 27 16:09:22 crc kubenswrapper[4830]: E0227 16:09:22.762715 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 16:09:22 crc kubenswrapper[4830]: E0227 16:09:22.762876 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 16:09:24 crc kubenswrapper[4830]: I0227 16:09:24.761656 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 16:09:24 crc kubenswrapper[4830]: I0227 16:09:24.761724 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 16:09:24 crc kubenswrapper[4830]: I0227 16:09:24.761776 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kgdlg" Feb 27 16:09:24 crc kubenswrapper[4830]: I0227 16:09:24.761912 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 16:09:24 crc kubenswrapper[4830]: E0227 16:09:24.762171 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 16:09:24 crc kubenswrapper[4830]: E0227 16:09:24.762229 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 16:09:24 crc kubenswrapper[4830]: E0227 16:09:24.762386 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kgdlg" podUID="6ba2fe32-66e0-4bcd-a646-9d07c9a21c54" Feb 27 16:09:24 crc kubenswrapper[4830]: E0227 16:09:24.762572 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 16:09:24 crc kubenswrapper[4830]: I0227 16:09:24.846927 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podStartSLOduration=124.846902012 podStartE2EDuration="2m4.846902012s" podCreationTimestamp="2026-02-27 16:07:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:09:24.846570093 +0000 UTC m=+160.935842606" watchObservedRunningTime="2026-02-27 16:09:24.846902012 +0000 UTC m=+160.936174515" Feb 27 16:09:24 crc kubenswrapper[4830]: I0227 16:09:24.865603 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gqgb6" podStartSLOduration=123.865587618 podStartE2EDuration="2m3.865587618s" podCreationTimestamp="2026-02-27 16:07:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:09:24.865450255 +0000 UTC m=+160.954722718" watchObservedRunningTime="2026-02-27 16:09:24.865587618 +0000 UTC m=+160.954860081" Feb 27 16:09:24 crc kubenswrapper[4830]: I0227 16:09:24.884100 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=21.884055449 
podStartE2EDuration="21.884055449s" podCreationTimestamp="2026-02-27 16:09:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:09:24.883028101 +0000 UTC m=+160.972300604" watchObservedRunningTime="2026-02-27 16:09:24.884055449 +0000 UTC m=+160.973327962" Feb 27 16:09:24 crc kubenswrapper[4830]: I0227 16:09:24.903403 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=41.903377582 podStartE2EDuration="41.903377582s" podCreationTimestamp="2026-02-27 16:08:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:09:24.90256741 +0000 UTC m=+160.991839903" watchObservedRunningTime="2026-02-27 16:09:24.903377582 +0000 UTC m=+160.992650075" Feb 27 16:09:24 crc kubenswrapper[4830]: E0227 16:09:24.908053 4830 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 27 16:09:24 crc kubenswrapper[4830]: I0227 16:09:24.986630 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=82.986601542 podStartE2EDuration="1m22.986601542s" podCreationTimestamp="2026-02-27 16:08:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:09:24.986564461 +0000 UTC m=+161.075836964" watchObservedRunningTime="2026-02-27 16:09:24.986601542 +0000 UTC m=+161.075874035" Feb 27 16:09:25 crc kubenswrapper[4830]: I0227 16:09:25.002473 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=76.002448873 podStartE2EDuration="1m16.002448873s" podCreationTimestamp="2026-02-27 16:08:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:09:25.002048762 +0000 UTC m=+161.091321265" watchObservedRunningTime="2026-02-27 16:09:25.002448873 +0000 UTC m=+161.091721376" Feb 27 16:09:25 crc kubenswrapper[4830]: I0227 16:09:25.061890 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=51.061863171 podStartE2EDuration="51.061863171s" podCreationTimestamp="2026-02-27 16:08:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:09:25.038284794 +0000 UTC m=+161.127557267" watchObservedRunningTime="2026-02-27 16:09:25.061863171 +0000 UTC m=+161.151135644" Feb 27 16:09:25 crc kubenswrapper[4830]: I0227 16:09:25.091984 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-p7298" podStartSLOduration=125.09193948 podStartE2EDuration="2m5.09193948s" 
podCreationTimestamp="2026-02-27 16:07:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:09:25.07687612 +0000 UTC m=+161.166148603" watchObservedRunningTime="2026-02-27 16:09:25.09193948 +0000 UTC m=+161.181211953" Feb 27 16:09:25 crc kubenswrapper[4830]: I0227 16:09:25.164109 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-fsrq9" podStartSLOduration=125.164087176 podStartE2EDuration="2m5.164087176s" podCreationTimestamp="2026-02-27 16:07:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:09:25.140938491 +0000 UTC m=+161.230210964" watchObservedRunningTime="2026-02-27 16:09:25.164087176 +0000 UTC m=+161.253359659" Feb 27 16:09:25 crc kubenswrapper[4830]: I0227 16:09:25.164965 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-rgv8f" podStartSLOduration=125.164938178 podStartE2EDuration="2m5.164938178s" podCreationTimestamp="2026-02-27 16:07:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:09:25.164855416 +0000 UTC m=+161.254127879" watchObservedRunningTime="2026-02-27 16:09:25.164938178 +0000 UTC m=+161.254210651" Feb 27 16:09:25 crc kubenswrapper[4830]: I0227 16:09:25.180457 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-fcddf" podStartSLOduration=125.18042378 podStartE2EDuration="2m5.18042378s" podCreationTimestamp="2026-02-27 16:07:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:09:25.179839664 +0000 UTC m=+161.269112147" watchObservedRunningTime="2026-02-27 
16:09:25.18042378 +0000 UTC m=+161.269696283" Feb 27 16:09:26 crc kubenswrapper[4830]: I0227 16:09:26.762030 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 16:09:26 crc kubenswrapper[4830]: I0227 16:09:26.762052 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 16:09:26 crc kubenswrapper[4830]: I0227 16:09:26.762120 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 16:09:26 crc kubenswrapper[4830]: E0227 16:09:26.762575 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 16:09:26 crc kubenswrapper[4830]: E0227 16:09:26.762647 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 16:09:26 crc kubenswrapper[4830]: E0227 16:09:26.762486 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 16:09:26 crc kubenswrapper[4830]: I0227 16:09:26.762802 4830 scope.go:117] "RemoveContainer" containerID="e570bdefb1555e447bc4d5c56d6ea5e2639dbdc50b7a78dd40e3026e0a5ca140" Feb 27 16:09:26 crc kubenswrapper[4830]: E0227 16:09:26.763005 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-bf9lh_openshift-ovn-kubernetes(2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904)\"" pod="openshift-ovn-kubernetes/ovnkube-node-bf9lh" podUID="2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904" Feb 27 16:09:26 crc kubenswrapper[4830]: I0227 16:09:26.763226 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kgdlg" Feb 27 16:09:26 crc kubenswrapper[4830]: E0227 16:09:26.763320 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kgdlg" podUID="6ba2fe32-66e0-4bcd-a646-9d07c9a21c54" Feb 27 16:09:27 crc kubenswrapper[4830]: I0227 16:09:27.705724 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:09:27 crc kubenswrapper[4830]: I0227 16:09:27.705763 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:09:27 crc kubenswrapper[4830]: I0227 16:09:27.705775 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:09:27 crc kubenswrapper[4830]: I0227 16:09:27.705793 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:09:27 crc kubenswrapper[4830]: I0227 16:09:27.705805 4830 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:09:27Z","lastTransitionTime":"2026-02-27T16:09:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:09:27 crc kubenswrapper[4830]: I0227 16:09:27.760740 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-ltqlj"] Feb 27 16:09:27 crc kubenswrapper[4830]: I0227 16:09:27.761284 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ltqlj" Feb 27 16:09:27 crc kubenswrapper[4830]: I0227 16:09:27.763642 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 27 16:09:27 crc kubenswrapper[4830]: I0227 16:09:27.764044 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 27 16:09:27 crc kubenswrapper[4830]: I0227 16:09:27.764793 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 27 16:09:27 crc kubenswrapper[4830]: I0227 16:09:27.764935 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 27 16:09:27 crc kubenswrapper[4830]: I0227 16:09:27.811080 4830 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Feb 27 16:09:27 crc kubenswrapper[4830]: I0227 16:09:27.821396 4830 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 27 16:09:27 crc kubenswrapper[4830]: I0227 16:09:27.889558 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/3ea219e4-95f0-4957-b4be-a4a394993f73-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-ltqlj\" (UID: \"3ea219e4-95f0-4957-b4be-a4a394993f73\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ltqlj" Feb 27 16:09:27 crc kubenswrapper[4830]: I0227 16:09:27.889674 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3ea219e4-95f0-4957-b4be-a4a394993f73-service-ca\") pod \"cluster-version-operator-5c965bbfc6-ltqlj\" (UID: 
\"3ea219e4-95f0-4957-b4be-a4a394993f73\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ltqlj" Feb 27 16:09:27 crc kubenswrapper[4830]: I0227 16:09:27.889735 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/3ea219e4-95f0-4957-b4be-a4a394993f73-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-ltqlj\" (UID: \"3ea219e4-95f0-4957-b4be-a4a394993f73\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ltqlj" Feb 27 16:09:27 crc kubenswrapper[4830]: I0227 16:09:27.889785 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3ea219e4-95f0-4957-b4be-a4a394993f73-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-ltqlj\" (UID: \"3ea219e4-95f0-4957-b4be-a4a394993f73\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ltqlj" Feb 27 16:09:27 crc kubenswrapper[4830]: I0227 16:09:27.889828 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3ea219e4-95f0-4957-b4be-a4a394993f73-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-ltqlj\" (UID: \"3ea219e4-95f0-4957-b4be-a4a394993f73\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ltqlj" Feb 27 16:09:27 crc kubenswrapper[4830]: I0227 16:09:27.990555 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/3ea219e4-95f0-4957-b4be-a4a394993f73-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-ltqlj\" (UID: \"3ea219e4-95f0-4957-b4be-a4a394993f73\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ltqlj" Feb 27 16:09:27 crc kubenswrapper[4830]: I0227 16:09:27.990629 4830 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3ea219e4-95f0-4957-b4be-a4a394993f73-service-ca\") pod \"cluster-version-operator-5c965bbfc6-ltqlj\" (UID: \"3ea219e4-95f0-4957-b4be-a4a394993f73\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ltqlj" Feb 27 16:09:27 crc kubenswrapper[4830]: I0227 16:09:27.990669 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/3ea219e4-95f0-4957-b4be-a4a394993f73-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-ltqlj\" (UID: \"3ea219e4-95f0-4957-b4be-a4a394993f73\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ltqlj" Feb 27 16:09:27 crc kubenswrapper[4830]: I0227 16:09:27.990756 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3ea219e4-95f0-4957-b4be-a4a394993f73-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-ltqlj\" (UID: \"3ea219e4-95f0-4957-b4be-a4a394993f73\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ltqlj" Feb 27 16:09:27 crc kubenswrapper[4830]: I0227 16:09:27.990805 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/3ea219e4-95f0-4957-b4be-a4a394993f73-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-ltqlj\" (UID: \"3ea219e4-95f0-4957-b4be-a4a394993f73\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ltqlj" Feb 27 16:09:27 crc kubenswrapper[4830]: I0227 16:09:27.990820 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3ea219e4-95f0-4957-b4be-a4a394993f73-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-ltqlj\" (UID: 
\"3ea219e4-95f0-4957-b4be-a4a394993f73\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ltqlj" Feb 27 16:09:27 crc kubenswrapper[4830]: I0227 16:09:27.991014 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/3ea219e4-95f0-4957-b4be-a4a394993f73-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-ltqlj\" (UID: \"3ea219e4-95f0-4957-b4be-a4a394993f73\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ltqlj" Feb 27 16:09:27 crc kubenswrapper[4830]: I0227 16:09:27.992548 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3ea219e4-95f0-4957-b4be-a4a394993f73-service-ca\") pod \"cluster-version-operator-5c965bbfc6-ltqlj\" (UID: \"3ea219e4-95f0-4957-b4be-a4a394993f73\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ltqlj" Feb 27 16:09:28 crc kubenswrapper[4830]: I0227 16:09:28.010640 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3ea219e4-95f0-4957-b4be-a4a394993f73-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-ltqlj\" (UID: \"3ea219e4-95f0-4957-b4be-a4a394993f73\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ltqlj" Feb 27 16:09:28 crc kubenswrapper[4830]: I0227 16:09:28.020446 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3ea219e4-95f0-4957-b4be-a4a394993f73-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-ltqlj\" (UID: \"3ea219e4-95f0-4957-b4be-a4a394993f73\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ltqlj" Feb 27 16:09:28 crc kubenswrapper[4830]: I0227 16:09:28.089568 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ltqlj" Feb 27 16:09:28 crc kubenswrapper[4830]: I0227 16:09:28.702520 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ltqlj" event={"ID":"3ea219e4-95f0-4957-b4be-a4a394993f73","Type":"ContainerStarted","Data":"0372c1bd99987d8b95827d6d0d0961c8c39d754f47a04b3fbe6ce2e21c9d7dd6"} Feb 27 16:09:28 crc kubenswrapper[4830]: I0227 16:09:28.702597 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ltqlj" event={"ID":"3ea219e4-95f0-4957-b4be-a4a394993f73","Type":"ContainerStarted","Data":"15a051bd70cb189ef5189b2e2da0f656f3532c9b0bc7602f9570d4eb24f8ea64"} Feb 27 16:09:28 crc kubenswrapper[4830]: I0227 16:09:28.728273 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ltqlj" podStartSLOduration=128.728234769 podStartE2EDuration="2m8.728234769s" podCreationTimestamp="2026-02-27 16:07:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:09:28.727799297 +0000 UTC m=+164.817071790" watchObservedRunningTime="2026-02-27 16:09:28.728234769 +0000 UTC m=+164.817507312" Feb 27 16:09:28 crc kubenswrapper[4830]: I0227 16:09:28.761660 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 16:09:28 crc kubenswrapper[4830]: I0227 16:09:28.761717 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 16:09:28 crc kubenswrapper[4830]: I0227 16:09:28.761660 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-kgdlg" Feb 27 16:09:28 crc kubenswrapper[4830]: E0227 16:09:28.761851 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 16:09:28 crc kubenswrapper[4830]: I0227 16:09:28.761978 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 16:09:28 crc kubenswrapper[4830]: E0227 16:09:28.762083 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 16:09:28 crc kubenswrapper[4830]: E0227 16:09:28.762298 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 16:09:28 crc kubenswrapper[4830]: E0227 16:09:28.762493 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kgdlg" podUID="6ba2fe32-66e0-4bcd-a646-9d07c9a21c54" Feb 27 16:09:29 crc kubenswrapper[4830]: E0227 16:09:29.909899 4830 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 27 16:09:30 crc kubenswrapper[4830]: I0227 16:09:30.761601 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 16:09:30 crc kubenswrapper[4830]: I0227 16:09:30.761683 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 16:09:30 crc kubenswrapper[4830]: E0227 16:09:30.761765 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 16:09:30 crc kubenswrapper[4830]: E0227 16:09:30.761856 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 16:09:30 crc kubenswrapper[4830]: I0227 16:09:30.762001 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kgdlg" Feb 27 16:09:30 crc kubenswrapper[4830]: E0227 16:09:30.762091 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kgdlg" podUID="6ba2fe32-66e0-4bcd-a646-9d07c9a21c54" Feb 27 16:09:30 crc kubenswrapper[4830]: I0227 16:09:30.762266 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 16:09:30 crc kubenswrapper[4830]: E0227 16:09:30.762347 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 16:09:32 crc kubenswrapper[4830]: I0227 16:09:32.761528 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 16:09:32 crc kubenswrapper[4830]: I0227 16:09:32.761573 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kgdlg" Feb 27 16:09:32 crc kubenswrapper[4830]: I0227 16:09:32.761471 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 16:09:32 crc kubenswrapper[4830]: E0227 16:09:32.761712 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 16:09:32 crc kubenswrapper[4830]: I0227 16:09:32.761856 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 16:09:32 crc kubenswrapper[4830]: E0227 16:09:32.762103 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 16:09:32 crc kubenswrapper[4830]: E0227 16:09:32.762292 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 16:09:32 crc kubenswrapper[4830]: E0227 16:09:32.762369 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kgdlg" podUID="6ba2fe32-66e0-4bcd-a646-9d07c9a21c54" Feb 27 16:09:34 crc kubenswrapper[4830]: I0227 16:09:34.762271 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kgdlg" Feb 27 16:09:34 crc kubenswrapper[4830]: I0227 16:09:34.762326 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 16:09:34 crc kubenswrapper[4830]: I0227 16:09:34.762378 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 16:09:34 crc kubenswrapper[4830]: I0227 16:09:34.762485 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 16:09:34 crc kubenswrapper[4830]: E0227 16:09:34.764404 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kgdlg" podUID="6ba2fe32-66e0-4bcd-a646-9d07c9a21c54" Feb 27 16:09:34 crc kubenswrapper[4830]: E0227 16:09:34.764605 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 16:09:34 crc kubenswrapper[4830]: E0227 16:09:34.764740 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 16:09:34 crc kubenswrapper[4830]: E0227 16:09:34.764839 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 16:09:34 crc kubenswrapper[4830]: E0227 16:09:34.911206 4830 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 27 16:09:36 crc kubenswrapper[4830]: I0227 16:09:36.764162 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 16:09:36 crc kubenswrapper[4830]: I0227 16:09:36.764174 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kgdlg" Feb 27 16:09:36 crc kubenswrapper[4830]: E0227 16:09:36.765141 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 16:09:36 crc kubenswrapper[4830]: I0227 16:09:36.764293 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 16:09:36 crc kubenswrapper[4830]: E0227 16:09:36.765521 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kgdlg" podUID="6ba2fe32-66e0-4bcd-a646-9d07c9a21c54" Feb 27 16:09:36 crc kubenswrapper[4830]: I0227 16:09:36.764251 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 16:09:36 crc kubenswrapper[4830]: E0227 16:09:36.765907 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 16:09:36 crc kubenswrapper[4830]: E0227 16:09:36.766051 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 16:09:38 crc kubenswrapper[4830]: I0227 16:09:38.741892 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fsrq9_bb72b0f7-1d22-4d13-9653-b1607aa2235d/kube-multus/1.log" Feb 27 16:09:38 crc kubenswrapper[4830]: I0227 16:09:38.743073 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fsrq9_bb72b0f7-1d22-4d13-9653-b1607aa2235d/kube-multus/0.log" Feb 27 16:09:38 crc kubenswrapper[4830]: I0227 16:09:38.743132 4830 generic.go:334] "Generic (PLEG): container finished" podID="bb72b0f7-1d22-4d13-9653-b1607aa2235d" containerID="787ecf758c99d969efde354b66bb37b5f9a8c2d79cccd8bab200423af61d4109" exitCode=1 Feb 27 16:09:38 crc kubenswrapper[4830]: I0227 16:09:38.743170 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-fsrq9" event={"ID":"bb72b0f7-1d22-4d13-9653-b1607aa2235d","Type":"ContainerDied","Data":"787ecf758c99d969efde354b66bb37b5f9a8c2d79cccd8bab200423af61d4109"} Feb 27 16:09:38 crc kubenswrapper[4830]: I0227 16:09:38.743211 4830 scope.go:117] "RemoveContainer" containerID="4b684822980dfea25ae15f46337d198cbbb8656b55c38874d917c2fae68431aa" Feb 27 16:09:38 crc kubenswrapper[4830]: I0227 16:09:38.743887 4830 scope.go:117] "RemoveContainer" containerID="787ecf758c99d969efde354b66bb37b5f9a8c2d79cccd8bab200423af61d4109" Feb 27 16:09:38 crc kubenswrapper[4830]: E0227 16:09:38.744284 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-fsrq9_openshift-multus(bb72b0f7-1d22-4d13-9653-b1607aa2235d)\"" pod="openshift-multus/multus-fsrq9" podUID="bb72b0f7-1d22-4d13-9653-b1607aa2235d" Feb 27 16:09:38 crc kubenswrapper[4830]: I0227 16:09:38.762273 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-kgdlg" Feb 27 16:09:38 crc kubenswrapper[4830]: I0227 16:09:38.762326 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 16:09:38 crc kubenswrapper[4830]: I0227 16:09:38.762290 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 16:09:38 crc kubenswrapper[4830]: I0227 16:09:38.762290 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 16:09:38 crc kubenswrapper[4830]: E0227 16:09:38.762522 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kgdlg" podUID="6ba2fe32-66e0-4bcd-a646-9d07c9a21c54" Feb 27 16:09:38 crc kubenswrapper[4830]: E0227 16:09:38.762650 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 16:09:38 crc kubenswrapper[4830]: E0227 16:09:38.762776 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 16:09:38 crc kubenswrapper[4830]: E0227 16:09:38.763090 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 16:09:39 crc kubenswrapper[4830]: I0227 16:09:39.749752 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fsrq9_bb72b0f7-1d22-4d13-9653-b1607aa2235d/kube-multus/1.log" Feb 27 16:09:39 crc kubenswrapper[4830]: E0227 16:09:39.912793 4830 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 27 16:09:40 crc kubenswrapper[4830]: I0227 16:09:40.761269 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kgdlg" Feb 27 16:09:40 crc kubenswrapper[4830]: I0227 16:09:40.761321 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 16:09:40 crc kubenswrapper[4830]: I0227 16:09:40.761341 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 16:09:40 crc kubenswrapper[4830]: I0227 16:09:40.761285 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 16:09:40 crc kubenswrapper[4830]: E0227 16:09:40.761449 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kgdlg" podUID="6ba2fe32-66e0-4bcd-a646-9d07c9a21c54" Feb 27 16:09:40 crc kubenswrapper[4830]: E0227 16:09:40.761550 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 16:09:40 crc kubenswrapper[4830]: E0227 16:09:40.761635 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 16:09:40 crc kubenswrapper[4830]: E0227 16:09:40.761791 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 16:09:41 crc kubenswrapper[4830]: I0227 16:09:41.762684 4830 scope.go:117] "RemoveContainer" containerID="e570bdefb1555e447bc4d5c56d6ea5e2639dbdc50b7a78dd40e3026e0a5ca140" Feb 27 16:09:41 crc kubenswrapper[4830]: E0227 16:09:41.762875 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-bf9lh_openshift-ovn-kubernetes(2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904)\"" pod="openshift-ovn-kubernetes/ovnkube-node-bf9lh" podUID="2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904" Feb 27 16:09:42 crc kubenswrapper[4830]: I0227 16:09:42.761464 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 16:09:42 crc kubenswrapper[4830]: I0227 16:09:42.761474 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 16:09:42 crc kubenswrapper[4830]: I0227 16:09:42.761614 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kgdlg" Feb 27 16:09:42 crc kubenswrapper[4830]: E0227 16:09:42.761755 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 16:09:42 crc kubenswrapper[4830]: I0227 16:09:42.761829 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 16:09:42 crc kubenswrapper[4830]: E0227 16:09:42.762063 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 16:09:42 crc kubenswrapper[4830]: E0227 16:09:42.762207 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kgdlg" podUID="6ba2fe32-66e0-4bcd-a646-9d07c9a21c54" Feb 27 16:09:42 crc kubenswrapper[4830]: E0227 16:09:42.762371 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 16:09:44 crc kubenswrapper[4830]: I0227 16:09:44.761665 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 16:09:44 crc kubenswrapper[4830]: I0227 16:09:44.761731 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-kgdlg" Feb 27 16:09:44 crc kubenswrapper[4830]: I0227 16:09:44.761679 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 16:09:44 crc kubenswrapper[4830]: E0227 16:09:44.763668 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 16:09:44 crc kubenswrapper[4830]: I0227 16:09:44.763746 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 16:09:44 crc kubenswrapper[4830]: E0227 16:09:44.763901 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kgdlg" podUID="6ba2fe32-66e0-4bcd-a646-9d07c9a21c54" Feb 27 16:09:44 crc kubenswrapper[4830]: E0227 16:09:44.764139 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 16:09:44 crc kubenswrapper[4830]: E0227 16:09:44.764344 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 16:09:44 crc kubenswrapper[4830]: E0227 16:09:44.914500 4830 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 27 16:09:46 crc kubenswrapper[4830]: I0227 16:09:46.761742 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kgdlg" Feb 27 16:09:46 crc kubenswrapper[4830]: I0227 16:09:46.761921 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 16:09:46 crc kubenswrapper[4830]: I0227 16:09:46.762022 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 16:09:46 crc kubenswrapper[4830]: E0227 16:09:46.762070 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kgdlg" podUID="6ba2fe32-66e0-4bcd-a646-9d07c9a21c54" Feb 27 16:09:46 crc kubenswrapper[4830]: E0227 16:09:46.762192 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 16:09:46 crc kubenswrapper[4830]: E0227 16:09:46.762359 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 16:09:46 crc kubenswrapper[4830]: I0227 16:09:46.762793 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 16:09:46 crc kubenswrapper[4830]: E0227 16:09:46.763014 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 16:09:48 crc kubenswrapper[4830]: I0227 16:09:48.764143 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-kgdlg" Feb 27 16:09:48 crc kubenswrapper[4830]: I0227 16:09:48.764193 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 16:09:48 crc kubenswrapper[4830]: E0227 16:09:48.764273 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kgdlg" podUID="6ba2fe32-66e0-4bcd-a646-9d07c9a21c54" Feb 27 16:09:48 crc kubenswrapper[4830]: I0227 16:09:48.764319 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 16:09:48 crc kubenswrapper[4830]: E0227 16:09:48.764509 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 16:09:48 crc kubenswrapper[4830]: E0227 16:09:48.764473 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 16:09:48 crc kubenswrapper[4830]: I0227 16:09:48.765698 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 16:09:48 crc kubenswrapper[4830]: E0227 16:09:48.766074 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 16:09:49 crc kubenswrapper[4830]: E0227 16:09:49.915844 4830 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 27 16:09:50 crc kubenswrapper[4830]: I0227 16:09:50.762207 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kgdlg" Feb 27 16:09:50 crc kubenswrapper[4830]: I0227 16:09:50.762342 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 16:09:50 crc kubenswrapper[4830]: I0227 16:09:50.762451 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 16:09:50 crc kubenswrapper[4830]: E0227 16:09:50.762440 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kgdlg" podUID="6ba2fe32-66e0-4bcd-a646-9d07c9a21c54" Feb 27 16:09:50 crc kubenswrapper[4830]: I0227 16:09:50.762496 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 16:09:50 crc kubenswrapper[4830]: E0227 16:09:50.762704 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 16:09:50 crc kubenswrapper[4830]: E0227 16:09:50.762816 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 16:09:50 crc kubenswrapper[4830]: E0227 16:09:50.763044 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 16:09:52 crc kubenswrapper[4830]: I0227 16:09:52.762058 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kgdlg" Feb 27 16:09:52 crc kubenswrapper[4830]: E0227 16:09:52.763061 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kgdlg" podUID="6ba2fe32-66e0-4bcd-a646-9d07c9a21c54" Feb 27 16:09:52 crc kubenswrapper[4830]: I0227 16:09:52.762151 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 16:09:52 crc kubenswrapper[4830]: E0227 16:09:52.763490 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 16:09:52 crc kubenswrapper[4830]: I0227 16:09:52.762107 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 16:09:52 crc kubenswrapper[4830]: I0227 16:09:52.762160 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 16:09:52 crc kubenswrapper[4830]: E0227 16:09:52.764069 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 16:09:52 crc kubenswrapper[4830]: E0227 16:09:52.763856 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 16:09:53 crc kubenswrapper[4830]: I0227 16:09:53.761877 4830 scope.go:117] "RemoveContainer" containerID="787ecf758c99d969efde354b66bb37b5f9a8c2d79cccd8bab200423af61d4109" Feb 27 16:09:53 crc kubenswrapper[4830]: I0227 16:09:53.765359 4830 scope.go:117] "RemoveContainer" containerID="e570bdefb1555e447bc4d5c56d6ea5e2639dbdc50b7a78dd40e3026e0a5ca140" Feb 27 16:09:54 crc kubenswrapper[4830]: I0227 16:09:54.761559 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-kgdlg" Feb 27 16:09:54 crc kubenswrapper[4830]: I0227 16:09:54.761639 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 16:09:54 crc kubenswrapper[4830]: I0227 16:09:54.761693 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 16:09:54 crc kubenswrapper[4830]: E0227 16:09:54.762859 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kgdlg" podUID="6ba2fe32-66e0-4bcd-a646-9d07c9a21c54" Feb 27 16:09:54 crc kubenswrapper[4830]: I0227 16:09:54.762886 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 16:09:54 crc kubenswrapper[4830]: E0227 16:09:54.763054 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 16:09:54 crc kubenswrapper[4830]: E0227 16:09:54.763187 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 16:09:54 crc kubenswrapper[4830]: E0227 16:09:54.763274 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 16:09:54 crc kubenswrapper[4830]: I0227 16:09:54.802545 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-kgdlg"] Feb 27 16:09:54 crc kubenswrapper[4830]: I0227 16:09:54.809729 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bf9lh_2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904/ovnkube-controller/3.log" Feb 27 16:09:54 crc kubenswrapper[4830]: I0227 16:09:54.816770 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bf9lh" event={"ID":"2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904","Type":"ContainerStarted","Data":"32af4b45910f57f6a3fa4635f90216703a4ed17597dbac7320a131b2e5119ac5"} Feb 27 16:09:54 crc kubenswrapper[4830]: I0227 16:09:54.818712 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-bf9lh" Feb 27 16:09:54 crc kubenswrapper[4830]: I0227 16:09:54.821726 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fsrq9_bb72b0f7-1d22-4d13-9653-b1607aa2235d/kube-multus/1.log" Feb 27 16:09:54 crc kubenswrapper[4830]: I0227 16:09:54.821850 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-kgdlg" Feb 27 16:09:54 crc kubenswrapper[4830]: E0227 16:09:54.822027 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kgdlg" podUID="6ba2fe32-66e0-4bcd-a646-9d07c9a21c54" Feb 27 16:09:54 crc kubenswrapper[4830]: I0227 16:09:54.822318 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-fsrq9" event={"ID":"bb72b0f7-1d22-4d13-9653-b1607aa2235d","Type":"ContainerStarted","Data":"ae5ebcddc959e70697cd3baeda6440556cbec5ca5056d85333946284a2e0f292"} Feb 27 16:09:54 crc kubenswrapper[4830]: I0227 16:09:54.865646 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-bf9lh" podStartSLOduration=154.865626799 podStartE2EDuration="2m34.865626799s" podCreationTimestamp="2026-02-27 16:07:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:09:54.864777057 +0000 UTC m=+190.954049520" watchObservedRunningTime="2026-02-27 16:09:54.865626799 +0000 UTC m=+190.954899272" Feb 27 16:09:54 crc kubenswrapper[4830]: E0227 16:09:54.917002 4830 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 27 16:09:56 crc kubenswrapper[4830]: I0227 16:09:56.761676 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-kgdlg" Feb 27 16:09:56 crc kubenswrapper[4830]: I0227 16:09:56.761681 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 16:09:56 crc kubenswrapper[4830]: E0227 16:09:56.762405 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kgdlg" podUID="6ba2fe32-66e0-4bcd-a646-9d07c9a21c54" Feb 27 16:09:56 crc kubenswrapper[4830]: I0227 16:09:56.761746 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 16:09:56 crc kubenswrapper[4830]: I0227 16:09:56.761749 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 16:09:56 crc kubenswrapper[4830]: E0227 16:09:56.762539 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 16:09:56 crc kubenswrapper[4830]: E0227 16:09:56.762732 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 16:09:56 crc kubenswrapper[4830]: E0227 16:09:56.762840 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 16:09:58 crc kubenswrapper[4830]: I0227 16:09:58.761642 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 16:09:58 crc kubenswrapper[4830]: I0227 16:09:58.761659 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kgdlg" Feb 27 16:09:58 crc kubenswrapper[4830]: I0227 16:09:58.761907 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 16:09:58 crc kubenswrapper[4830]: E0227 16:09:58.762101 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 16:09:58 crc kubenswrapper[4830]: I0227 16:09:58.762133 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 16:09:58 crc kubenswrapper[4830]: E0227 16:09:58.762224 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kgdlg" podUID="6ba2fe32-66e0-4bcd-a646-9d07c9a21c54" Feb 27 16:09:58 crc kubenswrapper[4830]: E0227 16:09:58.762270 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 16:09:58 crc kubenswrapper[4830]: E0227 16:09:58.762309 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 16:10:00 crc kubenswrapper[4830]: I0227 16:10:00.762400 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kgdlg" Feb 27 16:10:00 crc kubenswrapper[4830]: I0227 16:10:00.762485 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 16:10:00 crc kubenswrapper[4830]: I0227 16:10:00.762593 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 16:10:00 crc kubenswrapper[4830]: I0227 16:10:00.763432 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 16:10:00 crc kubenswrapper[4830]: I0227 16:10:00.765409 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 27 16:10:00 crc kubenswrapper[4830]: I0227 16:10:00.766395 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 27 16:10:00 crc kubenswrapper[4830]: I0227 16:10:00.768421 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 27 16:10:00 crc kubenswrapper[4830]: I0227 16:10:00.768485 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 27 16:10:00 crc kubenswrapper[4830]: I0227 16:10:00.768443 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 27 16:10:00 crc kubenswrapper[4830]: I0227 16:10:00.769319 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 27 16:10:03 crc kubenswrapper[4830]: I0227 16:10:03.160441 4830 patch_prober.go:28] interesting pod/machine-config-daemon-2tv5v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 16:10:03 
crc kubenswrapper[4830]: I0227 16:10:03.160541 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 16:10:03 crc kubenswrapper[4830]: I0227 16:10:03.411823 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-bf9lh" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.435713 4830 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.480562 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-9c4wb"] Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.481414 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-9c4wb" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.486633 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.490471 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-v78pc"] Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.491287 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-v78pc" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.492412 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-khmn9"] Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.492903 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-khmn9" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.494106 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-hgw6n"] Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.494871 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hgw6n" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.495635 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-2rrvm"] Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.496229 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2rrvm" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.497682 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5pcpc"] Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.498219 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5pcpc" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.499727 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-kjfn6"] Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.500387 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-kjfn6" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.502400 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-ngpn7"] Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.502760 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-ngpn7" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.504226 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-vs8sq"] Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.504519 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-vs8sq" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.506020 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-5jfm7"] Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.525258 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5jfm7" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.525401 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.525731 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.525933 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.526538 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.526741 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.528296 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 
16:10:08.530123 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.532650 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.532850 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rvd8g"] Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.534187 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-4dhxq"] Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.535009 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-4dhxq" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.535085 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rvd8g" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.541163 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2x2lh"] Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.541241 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.541651 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.542040 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.542166 4830 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-machine-api"/"kube-rbac-proxy" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.542367 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.542642 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.542671 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.543337 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.543364 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.544193 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.547716 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.548889 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-n6xx6"] Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.548908 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2x2lh" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.548410 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.549067 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.549100 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.549117 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.549679 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.549708 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.551460 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-nfkdb"] Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.549724 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.549752 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.549797 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 
16:10:08.549820 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.550349 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.550407 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.550439 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.551518 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-n6xx6" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.552207 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-q7c6j"] Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.552572 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nfkdb" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.552672 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-q7c6j" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.553184 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.554208 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.555225 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.555368 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.555430 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.555521 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.555624 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.555661 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.555711 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.555781 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.555808 4830 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.555995 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.556045 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.556201 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.556212 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.556247 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.556273 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.556334 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.555671 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.556465 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.556483 4830 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.556516 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.555630 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.556596 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.555378 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.555815 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.556740 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.556341 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.556850 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.556969 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.557124 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 27 16:10:08 crc 
kubenswrapper[4830]: I0227 16:10:08.557180 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.557454 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.557597 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.557483 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.557502 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.558212 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.561611 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.565473 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.566660 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-45mg7"] Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.567296 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-45mg7" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.566101 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.566396 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/17018e1c-72bf-40ba-9240-5d6684ec855a-encryption-config\") pod \"apiserver-7bbb656c7d-hgw6n\" (UID: \"17018e1c-72bf-40ba-9240-5d6684ec855a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hgw6n" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.567637 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f18ef53a-23d0-4f48-b7a4-96f2716e137f-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-vs8sq\" (UID: \"f18ef53a-23d0-4f48-b7a4-96f2716e137f\") " pod="openshift-authentication/oauth-openshift-558db77b4-vs8sq" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.567664 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ce35469-d725-409b-8e24-2c74769d7b77-config\") pod \"route-controller-manager-6576b87f9c-2rrvm\" (UID: \"4ce35469-d725-409b-8e24-2c74769d7b77\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2rrvm" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.567692 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/278df35c-de00-443d-a6f7-e0cc526a487c-client-ca\") pod \"controller-manager-879f6c89f-v78pc\" (UID: \"278df35c-de00-443d-a6f7-e0cc526a487c\") " 
pod="openshift-controller-manager/controller-manager-879f6c89f-v78pc" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.567712 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/17018e1c-72bf-40ba-9240-5d6684ec855a-etcd-client\") pod \"apiserver-7bbb656c7d-hgw6n\" (UID: \"17018e1c-72bf-40ba-9240-5d6684ec855a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hgw6n" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.567734 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4ntq\" (UniqueName: \"kubernetes.io/projected/11fbaa05-cf66-40dd-be15-c6474a011768-kube-api-access-z4ntq\") pod \"console-f9d7485db-kjfn6\" (UID: \"11fbaa05-cf66-40dd-be15-c6474a011768\") " pod="openshift-console/console-f9d7485db-kjfn6" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.567767 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kcv5t\" (UniqueName: \"kubernetes.io/projected/1843207f-14a3-4f21-a253-dbd843d2d8bf-kube-api-access-kcv5t\") pod \"machine-api-operator-5694c8668f-khmn9\" (UID: \"1843207f-14a3-4f21-a253-dbd843d2d8bf\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-khmn9" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.567788 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/278df35c-de00-443d-a6f7-e0cc526a487c-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-v78pc\" (UID: \"278df35c-de00-443d-a6f7-e0cc526a487c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-v78pc" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.567811 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/17018e1c-72bf-40ba-9240-5d6684ec855a-serving-cert\") pod \"apiserver-7bbb656c7d-hgw6n\" (UID: \"17018e1c-72bf-40ba-9240-5d6684ec855a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hgw6n" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.567834 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1843207f-14a3-4f21-a253-dbd843d2d8bf-config\") pod \"machine-api-operator-5694c8668f-khmn9\" (UID: \"1843207f-14a3-4f21-a253-dbd843d2d8bf\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-khmn9" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.567854 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f18ef53a-23d0-4f48-b7a4-96f2716e137f-audit-dir\") pod \"oauth-openshift-558db77b4-vs8sq\" (UID: \"f18ef53a-23d0-4f48-b7a4-96f2716e137f\") " pod="openshift-authentication/oauth-openshift-558db77b4-vs8sq" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.567876 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4ce35469-d725-409b-8e24-2c74769d7b77-client-ca\") pod \"route-controller-manager-6576b87f9c-2rrvm\" (UID: \"4ce35469-d725-409b-8e24-2c74769d7b77\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2rrvm" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.567900 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/11fbaa05-cf66-40dd-be15-c6474a011768-console-config\") pod \"console-f9d7485db-kjfn6\" (UID: \"11fbaa05-cf66-40dd-be15-c6474a011768\") " pod="openshift-console/console-f9d7485db-kjfn6" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.567931 4830 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/17018e1c-72bf-40ba-9240-5d6684ec855a-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-hgw6n\" (UID: \"17018e1c-72bf-40ba-9240-5d6684ec855a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hgw6n" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.567975 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/11fbaa05-cf66-40dd-be15-c6474a011768-trusted-ca-bundle\") pod \"console-f9d7485db-kjfn6\" (UID: \"11fbaa05-cf66-40dd-be15-c6474a011768\") " pod="openshift-console/console-f9d7485db-kjfn6" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.568008 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/e9c70786-d73e-4e48-a552-bdeb53daba49-image-import-ca\") pod \"apiserver-76f77b778f-9c4wb\" (UID: \"e9c70786-d73e-4e48-a552-bdeb53daba49\") " pod="openshift-apiserver/apiserver-76f77b778f-9c4wb" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.568037 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/17018e1c-72bf-40ba-9240-5d6684ec855a-audit-policies\") pod \"apiserver-7bbb656c7d-hgw6n\" (UID: \"17018e1c-72bf-40ba-9240-5d6684ec855a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hgw6n" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.568063 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.568064 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/cbddb809-9950-48a7-945a-ef66c2e1c1f9-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-5pcpc\" (UID: \"cbddb809-9950-48a7-945a-ef66c2e1c1f9\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5pcpc" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.568186 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f18ef53a-23d0-4f48-b7a4-96f2716e137f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-vs8sq\" (UID: \"f18ef53a-23d0-4f48-b7a4-96f2716e137f\") " pod="openshift-authentication/oauth-openshift-558db77b4-vs8sq" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.568205 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e02559f6-da6b-44d6-b0d3-16a5b400edda-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-ngpn7\" (UID: \"e02559f6-da6b-44d6-b0d3-16a5b400edda\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ngpn7" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.568224 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/278df35c-de00-443d-a6f7-e0cc526a487c-serving-cert\") pod \"controller-manager-879f6c89f-v78pc\" (UID: \"278df35c-de00-443d-a6f7-e0cc526a487c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-v78pc" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.568239 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9c70786-d73e-4e48-a552-bdeb53daba49-config\") pod \"apiserver-76f77b778f-9c4wb\" (UID: \"e9c70786-d73e-4e48-a552-bdeb53daba49\") " 
pod="openshift-apiserver/apiserver-76f77b778f-9c4wb" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.568254 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/11fbaa05-cf66-40dd-be15-c6474a011768-console-oauth-config\") pod \"console-f9d7485db-kjfn6\" (UID: \"11fbaa05-cf66-40dd-be15-c6474a011768\") " pod="openshift-console/console-f9d7485db-kjfn6" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.568272 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8t4q\" (UniqueName: \"kubernetes.io/projected/17018e1c-72bf-40ba-9240-5d6684ec855a-kube-api-access-g8t4q\") pod \"apiserver-7bbb656c7d-hgw6n\" (UID: \"17018e1c-72bf-40ba-9240-5d6684ec855a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hgw6n" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.568291 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e02559f6-da6b-44d6-b0d3-16a5b400edda-service-ca-bundle\") pod \"authentication-operator-69f744f599-ngpn7\" (UID: \"e02559f6-da6b-44d6-b0d3-16a5b400edda\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ngpn7" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.568306 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/17018e1c-72bf-40ba-9240-5d6684ec855a-audit-dir\") pod \"apiserver-7bbb656c7d-hgw6n\" (UID: \"17018e1c-72bf-40ba-9240-5d6684ec855a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hgw6n" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.568328 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/e02559f6-da6b-44d6-b0d3-16a5b400edda-config\") pod \"authentication-operator-69f744f599-ngpn7\" (UID: \"e02559f6-da6b-44d6-b0d3-16a5b400edda\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ngpn7" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.568344 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/11fbaa05-cf66-40dd-be15-c6474a011768-console-serving-cert\") pod \"console-f9d7485db-kjfn6\" (UID: \"11fbaa05-cf66-40dd-be15-c6474a011768\") " pod="openshift-console/console-f9d7485db-kjfn6" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.568365 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-st29l\" (UniqueName: \"kubernetes.io/projected/e02559f6-da6b-44d6-b0d3-16a5b400edda-kube-api-access-st29l\") pod \"authentication-operator-69f744f599-ngpn7\" (UID: \"e02559f6-da6b-44d6-b0d3-16a5b400edda\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ngpn7" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.568382 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sthg5\" (UniqueName: \"kubernetes.io/projected/4ce35469-d725-409b-8e24-2c74769d7b77-kube-api-access-sthg5\") pod \"route-controller-manager-6576b87f9c-2rrvm\" (UID: \"4ce35469-d725-409b-8e24-2c74769d7b77\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2rrvm" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.568398 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f18ef53a-23d0-4f48-b7a4-96f2716e137f-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-vs8sq\" (UID: 
\"f18ef53a-23d0-4f48-b7a4-96f2716e137f\") " pod="openshift-authentication/oauth-openshift-558db77b4-vs8sq" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.568416 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/e9c70786-d73e-4e48-a552-bdeb53daba49-audit\") pod \"apiserver-76f77b778f-9c4wb\" (UID: \"e9c70786-d73e-4e48-a552-bdeb53daba49\") " pod="openshift-apiserver/apiserver-76f77b778f-9c4wb" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.568430 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwrqm\" (UniqueName: \"kubernetes.io/projected/cbddb809-9950-48a7-945a-ef66c2e1c1f9-kube-api-access-kwrqm\") pod \"openshift-apiserver-operator-796bbdcf4f-5pcpc\" (UID: \"cbddb809-9950-48a7-945a-ef66c2e1c1f9\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5pcpc" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.568475 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f18ef53a-23d0-4f48-b7a4-96f2716e137f-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-vs8sq\" (UID: \"f18ef53a-23d0-4f48-b7a4-96f2716e137f\") " pod="openshift-authentication/oauth-openshift-558db77b4-vs8sq" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.568492 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f18ef53a-23d0-4f48-b7a4-96f2716e137f-audit-policies\") pod \"oauth-openshift-558db77b4-vs8sq\" (UID: \"f18ef53a-23d0-4f48-b7a4-96f2716e137f\") " pod="openshift-authentication/oauth-openshift-558db77b4-vs8sq" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.568508 4830 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e9c70786-d73e-4e48-a552-bdeb53daba49-trusted-ca-bundle\") pod \"apiserver-76f77b778f-9c4wb\" (UID: \"e9c70786-d73e-4e48-a552-bdeb53daba49\") " pod="openshift-apiserver/apiserver-76f77b778f-9c4wb" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.568523 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e9c70786-d73e-4e48-a552-bdeb53daba49-audit-dir\") pod \"apiserver-76f77b778f-9c4wb\" (UID: \"e9c70786-d73e-4e48-a552-bdeb53daba49\") " pod="openshift-apiserver/apiserver-76f77b778f-9c4wb" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.568537 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f18ef53a-23d0-4f48-b7a4-96f2716e137f-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-vs8sq\" (UID: \"f18ef53a-23d0-4f48-b7a4-96f2716e137f\") " pod="openshift-authentication/oauth-openshift-558db77b4-vs8sq" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.568552 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4ce35469-d725-409b-8e24-2c74769d7b77-serving-cert\") pod \"route-controller-manager-6576b87f9c-2rrvm\" (UID: \"4ce35469-d725-409b-8e24-2c74769d7b77\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2rrvm" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.568571 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/e9c70786-d73e-4e48-a552-bdeb53daba49-node-pullsecrets\") pod \"apiserver-76f77b778f-9c4wb\" (UID: 
\"e9c70786-d73e-4e48-a552-bdeb53daba49\") " pod="openshift-apiserver/apiserver-76f77b778f-9c4wb" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.568587 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/17018e1c-72bf-40ba-9240-5d6684ec855a-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-hgw6n\" (UID: \"17018e1c-72bf-40ba-9240-5d6684ec855a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hgw6n" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.568604 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/1843207f-14a3-4f21-a253-dbd843d2d8bf-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-khmn9\" (UID: \"1843207f-14a3-4f21-a253-dbd843d2d8bf\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-khmn9" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.568618 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/e9c70786-d73e-4e48-a552-bdeb53daba49-etcd-serving-ca\") pod \"apiserver-76f77b778f-9c4wb\" (UID: \"e9c70786-d73e-4e48-a552-bdeb53daba49\") " pod="openshift-apiserver/apiserver-76f77b778f-9c4wb" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.568632 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9c70786-d73e-4e48-a552-bdeb53daba49-serving-cert\") pod \"apiserver-76f77b778f-9c4wb\" (UID: \"e9c70786-d73e-4e48-a552-bdeb53daba49\") " pod="openshift-apiserver/apiserver-76f77b778f-9c4wb" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.568647 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" 
(UniqueName: \"kubernetes.io/secret/f18ef53a-23d0-4f48-b7a4-96f2716e137f-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-vs8sq\" (UID: \"f18ef53a-23d0-4f48-b7a4-96f2716e137f\") " pod="openshift-authentication/oauth-openshift-558db77b4-vs8sq" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.568664 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1843207f-14a3-4f21-a253-dbd843d2d8bf-images\") pod \"machine-api-operator-5694c8668f-khmn9\" (UID: \"1843207f-14a3-4f21-a253-dbd843d2d8bf\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-khmn9" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.568678 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/278df35c-de00-443d-a6f7-e0cc526a487c-config\") pod \"controller-manager-879f6c89f-v78pc\" (UID: \"278df35c-de00-443d-a6f7-e0cc526a487c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-v78pc" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.568708 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f18ef53a-23d0-4f48-b7a4-96f2716e137f-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-vs8sq\" (UID: \"f18ef53a-23d0-4f48-b7a4-96f2716e137f\") " pod="openshift-authentication/oauth-openshift-558db77b4-vs8sq" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.568732 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/e9c70786-d73e-4e48-a552-bdeb53daba49-encryption-config\") pod \"apiserver-76f77b778f-9c4wb\" (UID: \"e9c70786-d73e-4e48-a552-bdeb53daba49\") " pod="openshift-apiserver/apiserver-76f77b778f-9c4wb" Feb 27 16:10:08 
crc kubenswrapper[4830]: I0227 16:10:08.568746 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/11fbaa05-cf66-40dd-be15-c6474a011768-service-ca\") pod \"console-f9d7485db-kjfn6\" (UID: \"11fbaa05-cf66-40dd-be15-c6474a011768\") " pod="openshift-console/console-f9d7485db-kjfn6" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.568760 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5c8f\" (UniqueName: \"kubernetes.io/projected/278df35c-de00-443d-a6f7-e0cc526a487c-kube-api-access-t5c8f\") pod \"controller-manager-879f6c89f-v78pc\" (UID: \"278df35c-de00-443d-a6f7-e0cc526a487c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-v78pc" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.568849 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e02559f6-da6b-44d6-b0d3-16a5b400edda-serving-cert\") pod \"authentication-operator-69f744f599-ngpn7\" (UID: \"e02559f6-da6b-44d6-b0d3-16a5b400edda\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ngpn7" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.568926 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f18ef53a-23d0-4f48-b7a4-96f2716e137f-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-vs8sq\" (UID: \"f18ef53a-23d0-4f48-b7a4-96f2716e137f\") " pod="openshift-authentication/oauth-openshift-558db77b4-vs8sq" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.568990 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/11fbaa05-cf66-40dd-be15-c6474a011768-oauth-serving-cert\") pod \"console-f9d7485db-kjfn6\" (UID: \"11fbaa05-cf66-40dd-be15-c6474a011768\") " pod="openshift-console/console-f9d7485db-kjfn6" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.569025 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4s7mz\" (UniqueName: \"kubernetes.io/projected/e9c70786-d73e-4e48-a552-bdeb53daba49-kube-api-access-4s7mz\") pod \"apiserver-76f77b778f-9c4wb\" (UID: \"e9c70786-d73e-4e48-a552-bdeb53daba49\") " pod="openshift-apiserver/apiserver-76f77b778f-9c4wb" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.569060 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f18ef53a-23d0-4f48-b7a4-96f2716e137f-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-vs8sq\" (UID: \"f18ef53a-23d0-4f48-b7a4-96f2716e137f\") " pod="openshift-authentication/oauth-openshift-558db77b4-vs8sq" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.569094 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cbddb809-9950-48a7-945a-ef66c2e1c1f9-config\") pod \"openshift-apiserver-operator-796bbdcf4f-5pcpc\" (UID: \"cbddb809-9950-48a7-945a-ef66c2e1c1f9\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5pcpc" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.569125 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f18ef53a-23d0-4f48-b7a4-96f2716e137f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-vs8sq\" (UID: \"f18ef53a-23d0-4f48-b7a4-96f2716e137f\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-vs8sq" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.569158 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f18ef53a-23d0-4f48-b7a4-96f2716e137f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-vs8sq\" (UID: \"f18ef53a-23d0-4f48-b7a4-96f2716e137f\") " pod="openshift-authentication/oauth-openshift-558db77b4-vs8sq" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.569193 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94mnb\" (UniqueName: \"kubernetes.io/projected/f18ef53a-23d0-4f48-b7a4-96f2716e137f-kube-api-access-94mnb\") pod \"oauth-openshift-558db77b4-vs8sq\" (UID: \"f18ef53a-23d0-4f48-b7a4-96f2716e137f\") " pod="openshift-authentication/oauth-openshift-558db77b4-vs8sq" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.569263 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e9c70786-d73e-4e48-a552-bdeb53daba49-etcd-client\") pod \"apiserver-76f77b778f-9c4wb\" (UID: \"e9c70786-d73e-4e48-a552-bdeb53daba49\") " pod="openshift-apiserver/apiserver-76f77b778f-9c4wb" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.570518 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.571263 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.573025 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 27 16:10:08 crc 
kubenswrapper[4830]: I0227 16:10:08.582292 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.583740 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.585046 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-n77c6"] Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.585449 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.585496 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-z5n7d"] Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.585731 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-gb4pt"] Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.585896 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-n77c6" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.588114 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-z5n7d" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.598574 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-gb4pt" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.601937 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.602235 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.602941 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.603822 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.603872 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.603874 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.604102 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-5mm8b"] Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.604604 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.605294 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7stx8"] Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.605583 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-5mm8b" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.606178 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7stx8" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.640167 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-9gfr4"] Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.640997 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-9gfr4" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.641436 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.641902 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.642884 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.643034 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.643053 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.643176 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.643205 4830 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.643349 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.643475 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.643905 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.644750 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.644867 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.645019 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.645052 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.645911 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.647163 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.648612 4830 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-pz6hl"] Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.649146 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.649495 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-5shdf"] Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.649640 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pz6hl" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.651464 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5shdf" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.652465 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.659643 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-c75pf"] Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.660355 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-c75pf" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.660452 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-b89fm"] Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.665163 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-9c4wb"] Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.665300 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-b89fm" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.666961 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.668074 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-wh6nt"] Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.669051 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8fhp6"] Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.669501 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8fhp6" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.670025 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-wh6nt" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.671052 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/e9c70786-d73e-4e48-a552-bdeb53daba49-encryption-config\") pod \"apiserver-76f77b778f-9c4wb\" (UID: \"e9c70786-d73e-4e48-a552-bdeb53daba49\") " pod="openshift-apiserver/apiserver-76f77b778f-9c4wb" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.671082 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f18ef53a-23d0-4f48-b7a4-96f2716e137f-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-vs8sq\" (UID: \"f18ef53a-23d0-4f48-b7a4-96f2716e137f\") " pod="openshift-authentication/oauth-openshift-558db77b4-vs8sq" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.671105 4830 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/11fbaa05-cf66-40dd-be15-c6474a011768-service-ca\") pod \"console-f9d7485db-kjfn6\" (UID: \"11fbaa05-cf66-40dd-be15-c6474a011768\") " pod="openshift-console/console-f9d7485db-kjfn6" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.671123 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t5c8f\" (UniqueName: \"kubernetes.io/projected/278df35c-de00-443d-a6f7-e0cc526a487c-kube-api-access-t5c8f\") pod \"controller-manager-879f6c89f-v78pc\" (UID: \"278df35c-de00-443d-a6f7-e0cc526a487c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-v78pc" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.671141 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e02559f6-da6b-44d6-b0d3-16a5b400edda-serving-cert\") pod \"authentication-operator-69f744f599-ngpn7\" (UID: \"e02559f6-da6b-44d6-b0d3-16a5b400edda\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ngpn7" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.671165 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ntc8\" (UniqueName: \"kubernetes.io/projected/48571590-5f3e-4b3f-9cd5-451eeb22a435-kube-api-access-7ntc8\") pod \"kube-storage-version-migrator-operator-b67b599dd-n77c6\" (UID: \"48571590-5f3e-4b3f-9cd5-451eeb22a435\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-n77c6" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.671187 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/11fbaa05-cf66-40dd-be15-c6474a011768-oauth-serving-cert\") pod \"console-f9d7485db-kjfn6\" (UID: 
\"11fbaa05-cf66-40dd-be15-c6474a011768\") " pod="openshift-console/console-f9d7485db-kjfn6" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.671205 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4s7mz\" (UniqueName: \"kubernetes.io/projected/e9c70786-d73e-4e48-a552-bdeb53daba49-kube-api-access-4s7mz\") pod \"apiserver-76f77b778f-9c4wb\" (UID: \"e9c70786-d73e-4e48-a552-bdeb53daba49\") " pod="openshift-apiserver/apiserver-76f77b778f-9c4wb" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.671222 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f18ef53a-23d0-4f48-b7a4-96f2716e137f-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-vs8sq\" (UID: \"f18ef53a-23d0-4f48-b7a4-96f2716e137f\") " pod="openshift-authentication/oauth-openshift-558db77b4-vs8sq" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.671238 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f18ef53a-23d0-4f48-b7a4-96f2716e137f-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-vs8sq\" (UID: \"f18ef53a-23d0-4f48-b7a4-96f2716e137f\") " pod="openshift-authentication/oauth-openshift-558db77b4-vs8sq" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.671258 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z74nc\" (UniqueName: \"kubernetes.io/projected/af0d26af-5990-456b-a3bc-4ea4a14bbc25-kube-api-access-z74nc\") pod \"cluster-samples-operator-665b6dd947-rvd8g\" (UID: \"af0d26af-5990-456b-a3bc-4ea4a14bbc25\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rvd8g" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.671276 4830 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e9c70786-d73e-4e48-a552-bdeb53daba49-etcd-client\") pod \"apiserver-76f77b778f-9c4wb\" (UID: \"e9c70786-d73e-4e48-a552-bdeb53daba49\") " pod="openshift-apiserver/apiserver-76f77b778f-9c4wb" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.671294 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cbddb809-9950-48a7-945a-ef66c2e1c1f9-config\") pod \"openshift-apiserver-operator-796bbdcf4f-5pcpc\" (UID: \"cbddb809-9950-48a7-945a-ef66c2e1c1f9\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5pcpc" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.671315 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f18ef53a-23d0-4f48-b7a4-96f2716e137f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-vs8sq\" (UID: \"f18ef53a-23d0-4f48-b7a4-96f2716e137f\") " pod="openshift-authentication/oauth-openshift-558db77b4-vs8sq" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.671332 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f18ef53a-23d0-4f48-b7a4-96f2716e137f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-vs8sq\" (UID: \"f18ef53a-23d0-4f48-b7a4-96f2716e137f\") " pod="openshift-authentication/oauth-openshift-558db77b4-vs8sq" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.671348 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-94mnb\" (UniqueName: \"kubernetes.io/projected/f18ef53a-23d0-4f48-b7a4-96f2716e137f-kube-api-access-94mnb\") pod \"oauth-openshift-558db77b4-vs8sq\" (UID: \"f18ef53a-23d0-4f48-b7a4-96f2716e137f\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-vs8sq" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.671369 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxqt7\" (UniqueName: \"kubernetes.io/projected/b0274d4b-eb80-4321-a4c1-6848c65bc32e-kube-api-access-lxqt7\") pod \"cluster-image-registry-operator-dc59b4c8b-q7c6j\" (UID: \"b0274d4b-eb80-4321-a4c1-6848c65bc32e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-q7c6j" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.671386 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/17018e1c-72bf-40ba-9240-5d6684ec855a-encryption-config\") pod \"apiserver-7bbb656c7d-hgw6n\" (UID: \"17018e1c-72bf-40ba-9240-5d6684ec855a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hgw6n" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.671404 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f18ef53a-23d0-4f48-b7a4-96f2716e137f-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-vs8sq\" (UID: \"f18ef53a-23d0-4f48-b7a4-96f2716e137f\") " pod="openshift-authentication/oauth-openshift-558db77b4-vs8sq" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.671426 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48571590-5f3e-4b3f-9cd5-451eeb22a435-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-n77c6\" (UID: \"48571590-5f3e-4b3f-9cd5-451eeb22a435\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-n77c6" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.671446 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"client-ca\" (UniqueName: \"kubernetes.io/configmap/278df35c-de00-443d-a6f7-e0cc526a487c-client-ca\") pod \"controller-manager-879f6c89f-v78pc\" (UID: \"278df35c-de00-443d-a6f7-e0cc526a487c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-v78pc" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.671468 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ce35469-d725-409b-8e24-2c74769d7b77-config\") pod \"route-controller-manager-6576b87f9c-2rrvm\" (UID: \"4ce35469-d725-409b-8e24-2c74769d7b77\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2rrvm" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.671486 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqhdk\" (UniqueName: \"kubernetes.io/projected/e7d85019-9a72-439e-a548-496027dd3d2c-kube-api-access-jqhdk\") pod \"openshift-config-operator-7777fb866f-5jfm7\" (UID: \"e7d85019-9a72-439e-a548-496027dd3d2c\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-5jfm7" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.671505 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/17018e1c-72bf-40ba-9240-5d6684ec855a-etcd-client\") pod \"apiserver-7bbb656c7d-hgw6n\" (UID: \"17018e1c-72bf-40ba-9240-5d6684ec855a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hgw6n" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.671522 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z4ntq\" (UniqueName: \"kubernetes.io/projected/11fbaa05-cf66-40dd-be15-c6474a011768-kube-api-access-z4ntq\") pod \"console-f9d7485db-kjfn6\" (UID: \"11fbaa05-cf66-40dd-be15-c6474a011768\") " pod="openshift-console/console-f9d7485db-kjfn6" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 
16:10:08.671543 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/17018e1c-72bf-40ba-9240-5d6684ec855a-serving-cert\") pod \"apiserver-7bbb656c7d-hgw6n\" (UID: \"17018e1c-72bf-40ba-9240-5d6684ec855a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hgw6n" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.671562 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kcv5t\" (UniqueName: \"kubernetes.io/projected/1843207f-14a3-4f21-a253-dbd843d2d8bf-kube-api-access-kcv5t\") pod \"machine-api-operator-5694c8668f-khmn9\" (UID: \"1843207f-14a3-4f21-a253-dbd843d2d8bf\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-khmn9" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.671581 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/278df35c-de00-443d-a6f7-e0cc526a487c-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-v78pc\" (UID: \"278df35c-de00-443d-a6f7-e0cc526a487c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-v78pc" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.671600 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bed81cec-625c-4239-92b4-39428a13becc-serving-cert\") pod \"console-operator-58897d9998-gb4pt\" (UID: \"bed81cec-625c-4239-92b4-39428a13becc\") " pod="openshift-console-operator/console-operator-58897d9998-gb4pt" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.671621 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1843207f-14a3-4f21-a253-dbd843d2d8bf-config\") pod \"machine-api-operator-5694c8668f-khmn9\" (UID: \"1843207f-14a3-4f21-a253-dbd843d2d8bf\") " 
pod="openshift-machine-api/machine-api-operator-5694c8668f-khmn9" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.671640 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f18ef53a-23d0-4f48-b7a4-96f2716e137f-audit-dir\") pod \"oauth-openshift-558db77b4-vs8sq\" (UID: \"f18ef53a-23d0-4f48-b7a4-96f2716e137f\") " pod="openshift-authentication/oauth-openshift-558db77b4-vs8sq" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.671660 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/11fbaa05-cf66-40dd-be15-c6474a011768-console-config\") pod \"console-f9d7485db-kjfn6\" (UID: \"11fbaa05-cf66-40dd-be15-c6474a011768\") " pod="openshift-console/console-f9d7485db-kjfn6" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.671678 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4ce35469-d725-409b-8e24-2c74769d7b77-client-ca\") pod \"route-controller-manager-6576b87f9c-2rrvm\" (UID: \"4ce35469-d725-409b-8e24-2c74769d7b77\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2rrvm" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.671698 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tt9hc\" (UniqueName: \"kubernetes.io/projected/36eaeabc-508b-4a11-9dc5-45ff8b42e0a8-kube-api-access-tt9hc\") pod \"migrator-59844c95c7-5mm8b\" (UID: \"36eaeabc-508b-4a11-9dc5-45ff8b42e0a8\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-5mm8b" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.671719 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7d85019-9a72-439e-a548-496027dd3d2c-serving-cert\") 
pod \"openshift-config-operator-7777fb866f-5jfm7\" (UID: \"e7d85019-9a72-439e-a548-496027dd3d2c\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-5jfm7" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.671738 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8t85v\" (UniqueName: \"kubernetes.io/projected/1f30f03f-511a-4a29-beae-e3d6971a8c9e-kube-api-access-8t85v\") pod \"downloads-7954f5f757-4dhxq\" (UID: \"1f30f03f-511a-4a29-beae-e3d6971a8c9e\") " pod="openshift-console/downloads-7954f5f757-4dhxq" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.671760 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/17018e1c-72bf-40ba-9240-5d6684ec855a-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-hgw6n\" (UID: \"17018e1c-72bf-40ba-9240-5d6684ec855a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hgw6n" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.671783 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/11fbaa05-cf66-40dd-be15-c6474a011768-trusted-ca-bundle\") pod \"console-f9d7485db-kjfn6\" (UID: \"11fbaa05-cf66-40dd-be15-c6474a011768\") " pod="openshift-console/console-f9d7485db-kjfn6" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.671800 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/e9c70786-d73e-4e48-a552-bdeb53daba49-image-import-ca\") pod \"apiserver-76f77b778f-9c4wb\" (UID: \"e9c70786-d73e-4e48-a552-bdeb53daba49\") " pod="openshift-apiserver/apiserver-76f77b778f-9c4wb" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.671817 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzv5r\" (UniqueName: 
\"kubernetes.io/projected/bed81cec-625c-4239-92b4-39428a13becc-kube-api-access-pzv5r\") pod \"console-operator-58897d9998-gb4pt\" (UID: \"bed81cec-625c-4239-92b4-39428a13becc\") " pod="openshift-console-operator/console-operator-58897d9998-gb4pt" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.671836 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8pf5f\" (UniqueName: \"kubernetes.io/projected/32e984aa-8399-4cf1-8a4a-b36525c67e35-kube-api-access-8pf5f\") pod \"marketplace-operator-79b997595-45mg7\" (UID: \"32e984aa-8399-4cf1-8a4a-b36525c67e35\") " pod="openshift-marketplace/marketplace-operator-79b997595-45mg7" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.671852 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/17018e1c-72bf-40ba-9240-5d6684ec855a-audit-policies\") pod \"apiserver-7bbb656c7d-hgw6n\" (UID: \"17018e1c-72bf-40ba-9240-5d6684ec855a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hgw6n" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.671868 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/32e984aa-8399-4cf1-8a4a-b36525c67e35-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-45mg7\" (UID: \"32e984aa-8399-4cf1-8a4a-b36525c67e35\") " pod="openshift-marketplace/marketplace-operator-79b997595-45mg7" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.671886 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e02559f6-da6b-44d6-b0d3-16a5b400edda-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-ngpn7\" (UID: \"e02559f6-da6b-44d6-b0d3-16a5b400edda\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ngpn7" Feb 27 
16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.671903 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/278df35c-de00-443d-a6f7-e0cc526a487c-serving-cert\") pod \"controller-manager-879f6c89f-v78pc\" (UID: \"278df35c-de00-443d-a6f7-e0cc526a487c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-v78pc" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.671923 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9c70786-d73e-4e48-a552-bdeb53daba49-config\") pod \"apiserver-76f77b778f-9c4wb\" (UID: \"e9c70786-d73e-4e48-a552-bdeb53daba49\") " pod="openshift-apiserver/apiserver-76f77b778f-9c4wb" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.671953 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cbddb809-9950-48a7-945a-ef66c2e1c1f9-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-5pcpc\" (UID: \"cbddb809-9950-48a7-945a-ef66c2e1c1f9\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5pcpc" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.671971 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f18ef53a-23d0-4f48-b7a4-96f2716e137f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-vs8sq\" (UID: \"f18ef53a-23d0-4f48-b7a4-96f2716e137f\") " pod="openshift-authentication/oauth-openshift-558db77b4-vs8sq" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.671990 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b0274d4b-eb80-4321-a4c1-6848c65bc32e-trusted-ca\") pod 
\"cluster-image-registry-operator-dc59b4c8b-q7c6j\" (UID: \"b0274d4b-eb80-4321-a4c1-6848c65bc32e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-q7c6j" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.672008 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g8t4q\" (UniqueName: \"kubernetes.io/projected/17018e1c-72bf-40ba-9240-5d6684ec855a-kube-api-access-g8t4q\") pod \"apiserver-7bbb656c7d-hgw6n\" (UID: \"17018e1c-72bf-40ba-9240-5d6684ec855a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hgw6n" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.672024 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/11fbaa05-cf66-40dd-be15-c6474a011768-console-oauth-config\") pod \"console-f9d7485db-kjfn6\" (UID: \"11fbaa05-cf66-40dd-be15-c6474a011768\") " pod="openshift-console/console-f9d7485db-kjfn6" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.672043 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bed81cec-625c-4239-92b4-39428a13becc-trusted-ca\") pod \"console-operator-58897d9998-gb4pt\" (UID: \"bed81cec-625c-4239-92b4-39428a13becc\") " pod="openshift-console-operator/console-operator-58897d9998-gb4pt" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.672062 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bed81cec-625c-4239-92b4-39428a13becc-config\") pod \"console-operator-58897d9998-gb4pt\" (UID: \"bed81cec-625c-4239-92b4-39428a13becc\") " pod="openshift-console-operator/console-operator-58897d9998-gb4pt" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.672079 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/af0d26af-5990-456b-a3bc-4ea4a14bbc25-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-rvd8g\" (UID: \"af0d26af-5990-456b-a3bc-4ea4a14bbc25\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rvd8g" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.672096 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/e7d85019-9a72-439e-a548-496027dd3d2c-available-featuregates\") pod \"openshift-config-operator-7777fb866f-5jfm7\" (UID: \"e7d85019-9a72-439e-a548-496027dd3d2c\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-5jfm7" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.672116 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e02559f6-da6b-44d6-b0d3-16a5b400edda-service-ca-bundle\") pod \"authentication-operator-69f744f599-ngpn7\" (UID: \"e02559f6-da6b-44d6-b0d3-16a5b400edda\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ngpn7" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.672134 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/17018e1c-72bf-40ba-9240-5d6684ec855a-audit-dir\") pod \"apiserver-7bbb656c7d-hgw6n\" (UID: \"17018e1c-72bf-40ba-9240-5d6684ec855a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hgw6n" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.672149 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b0274d4b-eb80-4321-a4c1-6848c65bc32e-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-q7c6j\" (UID: \"b0274d4b-eb80-4321-a4c1-6848c65bc32e\") " 
pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-q7c6j" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.672167 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e02559f6-da6b-44d6-b0d3-16a5b400edda-config\") pod \"authentication-operator-69f744f599-ngpn7\" (UID: \"e02559f6-da6b-44d6-b0d3-16a5b400edda\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ngpn7" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.672186 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/11fbaa05-cf66-40dd-be15-c6474a011768-console-serving-cert\") pod \"console-f9d7485db-kjfn6\" (UID: \"11fbaa05-cf66-40dd-be15-c6474a011768\") " pod="openshift-console/console-f9d7485db-kjfn6" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.672204 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-st29l\" (UniqueName: \"kubernetes.io/projected/e02559f6-da6b-44d6-b0d3-16a5b400edda-kube-api-access-st29l\") pod \"authentication-operator-69f744f599-ngpn7\" (UID: \"e02559f6-da6b-44d6-b0d3-16a5b400edda\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ngpn7" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.672226 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sthg5\" (UniqueName: \"kubernetes.io/projected/4ce35469-d725-409b-8e24-2c74769d7b77-kube-api-access-sthg5\") pod \"route-controller-manager-6576b87f9c-2rrvm\" (UID: \"4ce35469-d725-409b-8e24-2c74769d7b77\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2rrvm" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.672246 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/f18ef53a-23d0-4f48-b7a4-96f2716e137f-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-vs8sq\" (UID: \"f18ef53a-23d0-4f48-b7a4-96f2716e137f\") " pod="openshift-authentication/oauth-openshift-558db77b4-vs8sq" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.672263 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b0274d4b-eb80-4321-a4c1-6848c65bc32e-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-q7c6j\" (UID: \"b0274d4b-eb80-4321-a4c1-6848c65bc32e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-q7c6j" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.672281 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/32e984aa-8399-4cf1-8a4a-b36525c67e35-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-45mg7\" (UID: \"32e984aa-8399-4cf1-8a4a-b36525c67e35\") " pod="openshift-marketplace/marketplace-operator-79b997595-45mg7" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.672308 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/e9c70786-d73e-4e48-a552-bdeb53daba49-audit\") pod \"apiserver-76f77b778f-9c4wb\" (UID: \"e9c70786-d73e-4e48-a552-bdeb53daba49\") " pod="openshift-apiserver/apiserver-76f77b778f-9c4wb" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.672325 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwrqm\" (UniqueName: \"kubernetes.io/projected/cbddb809-9950-48a7-945a-ef66c2e1c1f9-kube-api-access-kwrqm\") pod \"openshift-apiserver-operator-796bbdcf4f-5pcpc\" (UID: \"cbddb809-9950-48a7-945a-ef66c2e1c1f9\") " 
pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5pcpc" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.672354 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f18ef53a-23d0-4f48-b7a4-96f2716e137f-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-vs8sq\" (UID: \"f18ef53a-23d0-4f48-b7a4-96f2716e137f\") " pod="openshift-authentication/oauth-openshift-558db77b4-vs8sq" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.672374 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e9c70786-d73e-4e48-a552-bdeb53daba49-trusted-ca-bundle\") pod \"apiserver-76f77b778f-9c4wb\" (UID: \"e9c70786-d73e-4e48-a552-bdeb53daba49\") " pod="openshift-apiserver/apiserver-76f77b778f-9c4wb" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.672393 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f18ef53a-23d0-4f48-b7a4-96f2716e137f-audit-policies\") pod \"oauth-openshift-558db77b4-vs8sq\" (UID: \"f18ef53a-23d0-4f48-b7a4-96f2716e137f\") " pod="openshift-authentication/oauth-openshift-558db77b4-vs8sq" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.672411 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e9c70786-d73e-4e48-a552-bdeb53daba49-audit-dir\") pod \"apiserver-76f77b778f-9c4wb\" (UID: \"e9c70786-d73e-4e48-a552-bdeb53daba49\") " pod="openshift-apiserver/apiserver-76f77b778f-9c4wb" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.672429 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/f18ef53a-23d0-4f48-b7a4-96f2716e137f-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-vs8sq\" (UID: \"f18ef53a-23d0-4f48-b7a4-96f2716e137f\") " pod="openshift-authentication/oauth-openshift-558db77b4-vs8sq" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.672447 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4ce35469-d725-409b-8e24-2c74769d7b77-serving-cert\") pod \"route-controller-manager-6576b87f9c-2rrvm\" (UID: \"4ce35469-d725-409b-8e24-2c74769d7b77\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2rrvm" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.672468 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/17018e1c-72bf-40ba-9240-5d6684ec855a-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-hgw6n\" (UID: \"17018e1c-72bf-40ba-9240-5d6684ec855a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hgw6n" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.672486 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/1843207f-14a3-4f21-a253-dbd843d2d8bf-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-khmn9\" (UID: \"1843207f-14a3-4f21-a253-dbd843d2d8bf\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-khmn9" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.672506 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/e9c70786-d73e-4e48-a552-bdeb53daba49-node-pullsecrets\") pod \"apiserver-76f77b778f-9c4wb\" (UID: \"e9c70786-d73e-4e48-a552-bdeb53daba49\") " pod="openshift-apiserver/apiserver-76f77b778f-9c4wb" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.672526 4830 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/48571590-5f3e-4b3f-9cd5-451eeb22a435-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-n77c6\" (UID: \"48571590-5f3e-4b3f-9cd5-451eeb22a435\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-n77c6"
Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.672547 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1843207f-14a3-4f21-a253-dbd843d2d8bf-images\") pod \"machine-api-operator-5694c8668f-khmn9\" (UID: \"1843207f-14a3-4f21-a253-dbd843d2d8bf\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-khmn9"
Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.672565 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/278df35c-de00-443d-a6f7-e0cc526a487c-config\") pod \"controller-manager-879f6c89f-v78pc\" (UID: \"278df35c-de00-443d-a6f7-e0cc526a487c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-v78pc"
Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.672582 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/e9c70786-d73e-4e48-a552-bdeb53daba49-etcd-serving-ca\") pod \"apiserver-76f77b778f-9c4wb\" (UID: \"e9c70786-d73e-4e48-a552-bdeb53daba49\") " pod="openshift-apiserver/apiserver-76f77b778f-9c4wb"
Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.672599 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9c70786-d73e-4e48-a552-bdeb53daba49-serving-cert\") pod \"apiserver-76f77b778f-9c4wb\" (UID: \"e9c70786-d73e-4e48-a552-bdeb53daba49\") " pod="openshift-apiserver/apiserver-76f77b778f-9c4wb"
Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.672616 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f18ef53a-23d0-4f48-b7a4-96f2716e137f-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-vs8sq\" (UID: \"f18ef53a-23d0-4f48-b7a4-96f2716e137f\") " pod="openshift-authentication/oauth-openshift-558db77b4-vs8sq"
Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.672623 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-hdgkf"]
Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.673386 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-wpzdz"]
Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.674071 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-xk2qk"]
Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.674728 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/17018e1c-72bf-40ba-9240-5d6684ec855a-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-hgw6n\" (UID: \"17018e1c-72bf-40ba-9240-5d6684ec855a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hgw6n"
Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.675496 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/11fbaa05-cf66-40dd-be15-c6474a011768-trusted-ca-bundle\") pod \"console-f9d7485db-kjfn6\" (UID: \"11fbaa05-cf66-40dd-be15-c6474a011768\") " pod="openshift-console/console-f9d7485db-kjfn6"
Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.675934 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f18ef53a-23d0-4f48-b7a4-96f2716e137f-audit-policies\") pod \"oauth-openshift-558db77b4-vs8sq\" (UID: \"f18ef53a-23d0-4f48-b7a4-96f2716e137f\") " pod="openshift-authentication/oauth-openshift-558db77b4-vs8sq"
Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.676376 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-xk2qk"
Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.676577 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/e9c70786-d73e-4e48-a552-bdeb53daba49-image-import-ca\") pod \"apiserver-76f77b778f-9c4wb\" (UID: \"e9c70786-d73e-4e48-a552-bdeb53daba49\") " pod="openshift-apiserver/apiserver-76f77b778f-9c4wb"
Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.677074 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e02559f6-da6b-44d6-b0d3-16a5b400edda-config\") pod \"authentication-operator-69f744f599-ngpn7\" (UID: \"e02559f6-da6b-44d6-b0d3-16a5b400edda\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ngpn7"
Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.677091 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/17018e1c-72bf-40ba-9240-5d6684ec855a-audit-policies\") pod \"apiserver-7bbb656c7d-hgw6n\" (UID: \"17018e1c-72bf-40ba-9240-5d6684ec855a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hgw6n"
Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.677862 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-hdgkf"
Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.677922 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wpzdz"
Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.678815 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8lclt"]
Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.679468 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9c70786-d73e-4e48-a552-bdeb53daba49-config\") pod \"apiserver-76f77b778f-9c4wb\" (UID: \"e9c70786-d73e-4e48-a552-bdeb53daba49\") " pod="openshift-apiserver/apiserver-76f77b778f-9c4wb"
Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.679688 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e02559f6-da6b-44d6-b0d3-16a5b400edda-service-ca-bundle\") pod \"authentication-operator-69f744f599-ngpn7\" (UID: \"e02559f6-da6b-44d6-b0d3-16a5b400edda\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ngpn7"
Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.679765 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/17018e1c-72bf-40ba-9240-5d6684ec855a-audit-dir\") pod \"apiserver-7bbb656c7d-hgw6n\" (UID: \"17018e1c-72bf-40ba-9240-5d6684ec855a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hgw6n"
Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.680054 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/278df35c-de00-443d-a6f7-e0cc526a487c-client-ca\") pod \"controller-manager-879f6c89f-v78pc\" (UID: \"278df35c-de00-443d-a6f7-e0cc526a487c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-v78pc"
Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.680824 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/11fbaa05-cf66-40dd-be15-c6474a011768-service-ca\") pod \"console-f9d7485db-kjfn6\" (UID: \"11fbaa05-cf66-40dd-be15-c6474a011768\") " pod="openshift-console/console-f9d7485db-kjfn6"
Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.681093 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f18ef53a-23d0-4f48-b7a4-96f2716e137f-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-vs8sq\" (UID: \"f18ef53a-23d0-4f48-b7a4-96f2716e137f\") " pod="openshift-authentication/oauth-openshift-558db77b4-vs8sq"
Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.681220 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f18ef53a-23d0-4f48-b7a4-96f2716e137f-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-vs8sq\" (UID: \"f18ef53a-23d0-4f48-b7a4-96f2716e137f\") " pod="openshift-authentication/oauth-openshift-558db77b4-vs8sq"
Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.681108 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e02559f6-da6b-44d6-b0d3-16a5b400edda-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-ngpn7\" (UID: \"e02559f6-da6b-44d6-b0d3-16a5b400edda\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ngpn7"
Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.681678 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/e9c70786-d73e-4e48-a552-bdeb53daba49-audit\") pod \"apiserver-76f77b778f-9c4wb\" (UID: \"e9c70786-d73e-4e48-a552-bdeb53daba49\") " pod="openshift-apiserver/apiserver-76f77b778f-9c4wb"
Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.681721 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f18ef53a-23d0-4f48-b7a4-96f2716e137f-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-vs8sq\" (UID: \"f18ef53a-23d0-4f48-b7a4-96f2716e137f\") " pod="openshift-authentication/oauth-openshift-558db77b4-vs8sq"
Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.682154 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f18ef53a-23d0-4f48-b7a4-96f2716e137f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-vs8sq\" (UID: \"f18ef53a-23d0-4f48-b7a4-96f2716e137f\") " pod="openshift-authentication/oauth-openshift-558db77b4-vs8sq"
Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.682177 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f18ef53a-23d0-4f48-b7a4-96f2716e137f-audit-dir\") pod \"oauth-openshift-558db77b4-vs8sq\" (UID: \"f18ef53a-23d0-4f48-b7a4-96f2716e137f\") " pod="openshift-authentication/oauth-openshift-558db77b4-vs8sq"
Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.682329 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e9c70786-d73e-4e48-a552-bdeb53daba49-audit-dir\") pod \"apiserver-76f77b778f-9c4wb\" (UID: \"e9c70786-d73e-4e48-a552-bdeb53daba49\") " pod="openshift-apiserver/apiserver-76f77b778f-9c4wb"
Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.682418 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-bfz6f"]
Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.683755 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/11fbaa05-cf66-40dd-be15-c6474a011768-console-config\") pod \"console-f9d7485db-kjfn6\" (UID: \"11fbaa05-cf66-40dd-be15-c6474a011768\") " pod="openshift-console/console-f9d7485db-kjfn6"
Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.682726 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.683814 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-n7thk"]
Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.683984 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e9c70786-d73e-4e48-a552-bdeb53daba49-trusted-ca-bundle\") pod \"apiserver-76f77b778f-9c4wb\" (UID: \"e9c70786-d73e-4e48-a552-bdeb53daba49\") " pod="openshift-apiserver/apiserver-76f77b778f-9c4wb"
Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.684198 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-n7thk"
Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.684224 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/11fbaa05-cf66-40dd-be15-c6474a011768-oauth-serving-cert\") pod \"console-f9d7485db-kjfn6\" (UID: \"11fbaa05-cf66-40dd-be15-c6474a011768\") " pod="openshift-console/console-f9d7485db-kjfn6"
Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.684769 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/e9c70786-d73e-4e48-a552-bdeb53daba49-node-pullsecrets\") pod \"apiserver-76f77b778f-9c4wb\" (UID: \"e9c70786-d73e-4e48-a552-bdeb53daba49\") " pod="openshift-apiserver/apiserver-76f77b778f-9c4wb"
Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.684803 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4ce35469-d725-409b-8e24-2c74769d7b77-client-ca\") pod \"route-controller-manager-6576b87f9c-2rrvm\" (UID: \"4ce35469-d725-409b-8e24-2c74769d7b77\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2rrvm"
Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.684882 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/17018e1c-72bf-40ba-9240-5d6684ec855a-encryption-config\") pod \"apiserver-7bbb656c7d-hgw6n\" (UID: \"17018e1c-72bf-40ba-9240-5d6684ec855a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hgw6n"
Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.685152 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f18ef53a-23d0-4f48-b7a4-96f2716e137f-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-vs8sq\" (UID: \"f18ef53a-23d0-4f48-b7a4-96f2716e137f\") " pod="openshift-authentication/oauth-openshift-558db77b4-vs8sq"
Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.685383 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.685411 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/278df35c-de00-443d-a6f7-e0cc526a487c-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-v78pc\" (UID: \"278df35c-de00-443d-a6f7-e0cc526a487c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-v78pc"
Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.685453 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f18ef53a-23d0-4f48-b7a4-96f2716e137f-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-vs8sq\" (UID: \"f18ef53a-23d0-4f48-b7a4-96f2716e137f\") " pod="openshift-authentication/oauth-openshift-558db77b4-vs8sq"
Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.682496 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8lclt"
Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.685920 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-bfz6f"
Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.686885 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cbddb809-9950-48a7-945a-ef66c2e1c1f9-config\") pod \"openshift-apiserver-operator-796bbdcf4f-5pcpc\" (UID: \"cbddb809-9950-48a7-945a-ef66c2e1c1f9\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5pcpc"
Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.687323 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/e9c70786-d73e-4e48-a552-bdeb53daba49-etcd-serving-ca\") pod \"apiserver-76f77b778f-9c4wb\" (UID: \"e9c70786-d73e-4e48-a552-bdeb53daba49\") " pod="openshift-apiserver/apiserver-76f77b778f-9c4wb"
Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.688044 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1843207f-14a3-4f21-a253-dbd843d2d8bf-config\") pod \"machine-api-operator-5694c8668f-khmn9\" (UID: \"1843207f-14a3-4f21-a253-dbd843d2d8bf\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-khmn9"
Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.688683 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1843207f-14a3-4f21-a253-dbd843d2d8bf-images\") pod \"machine-api-operator-5694c8668f-khmn9\" (UID: \"1843207f-14a3-4f21-a253-dbd843d2d8bf\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-khmn9"
Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.689782 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e9c70786-d73e-4e48-a552-bdeb53daba49-etcd-client\") pod \"apiserver-76f77b778f-9c4wb\" (UID: \"e9c70786-d73e-4e48-a552-bdeb53daba49\") " pod="openshift-apiserver/apiserver-76f77b778f-9c4wb"
Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.689820 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/11fbaa05-cf66-40dd-be15-c6474a011768-console-oauth-config\") pod \"console-f9d7485db-kjfn6\" (UID: \"11fbaa05-cf66-40dd-be15-c6474a011768\") " pod="openshift-console/console-f9d7485db-kjfn6"
Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.689865 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cbddb809-9950-48a7-945a-ef66c2e1c1f9-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-5pcpc\" (UID: \"cbddb809-9950-48a7-945a-ef66c2e1c1f9\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5pcpc"
Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.689981 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/11fbaa05-cf66-40dd-be15-c6474a011768-console-serving-cert\") pod \"console-f9d7485db-kjfn6\" (UID: \"11fbaa05-cf66-40dd-be15-c6474a011768\") " pod="openshift-console/console-f9d7485db-kjfn6"
Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.691394 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e02559f6-da6b-44d6-b0d3-16a5b400edda-serving-cert\") pod \"authentication-operator-69f744f599-ngpn7\" (UID: \"e02559f6-da6b-44d6-b0d3-16a5b400edda\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ngpn7"
Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.691472 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hlg9d"]
Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.692159 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ce35469-d725-409b-8e24-2c74769d7b77-config\") pod \"route-controller-manager-6576b87f9c-2rrvm\" (UID: \"4ce35469-d725-409b-8e24-2c74769d7b77\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2rrvm"
Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.692546 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hlg9d"
Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.694012 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536800-n8kg4"]
Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.694540 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29536800-n8kg4"
Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.694548 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/1843207f-14a3-4f21-a253-dbd843d2d8bf-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-khmn9\" (UID: \"1843207f-14a3-4f21-a253-dbd843d2d8bf\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-khmn9"
Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.694845 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4ce35469-d725-409b-8e24-2c74769d7b77-serving-cert\") pod \"route-controller-manager-6576b87f9c-2rrvm\" (UID: \"4ce35469-d725-409b-8e24-2c74769d7b77\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2rrvm"
Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.705806 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f18ef53a-23d0-4f48-b7a4-96f2716e137f-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-vs8sq\" (UID: \"f18ef53a-23d0-4f48-b7a4-96f2716e137f\") " pod="openshift-authentication/oauth-openshift-558db77b4-vs8sq"
Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.706108 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f18ef53a-23d0-4f48-b7a4-96f2716e137f-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-vs8sq\" (UID: \"f18ef53a-23d0-4f48-b7a4-96f2716e137f\") " pod="openshift-authentication/oauth-openshift-558db77b4-vs8sq"
Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.706589 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9c70786-d73e-4e48-a552-bdeb53daba49-serving-cert\") pod \"apiserver-76f77b778f-9c4wb\" (UID: \"e9c70786-d73e-4e48-a552-bdeb53daba49\") " pod="openshift-apiserver/apiserver-76f77b778f-9c4wb"
Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.707571 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/17018e1c-72bf-40ba-9240-5d6684ec855a-serving-cert\") pod \"apiserver-7bbb656c7d-hgw6n\" (UID: \"17018e1c-72bf-40ba-9240-5d6684ec855a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hgw6n"
Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.707804 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/17018e1c-72bf-40ba-9240-5d6684ec855a-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-hgw6n\" (UID: \"17018e1c-72bf-40ba-9240-5d6684ec855a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hgw6n"
Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.708026 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f18ef53a-23d0-4f48-b7a4-96f2716e137f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-vs8sq\" (UID: \"f18ef53a-23d0-4f48-b7a4-96f2716e137f\") " pod="openshift-authentication/oauth-openshift-558db77b4-vs8sq"
Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.708254 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f18ef53a-23d0-4f48-b7a4-96f2716e137f-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-vs8sq\" (UID: \"f18ef53a-23d0-4f48-b7a4-96f2716e137f\") " pod="openshift-authentication/oauth-openshift-558db77b4-vs8sq"
Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.708304 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/278df35c-de00-443d-a6f7-e0cc526a487c-config\") pod \"controller-manager-879f6c89f-v78pc\" (UID: \"278df35c-de00-443d-a6f7-e0cc526a487c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-v78pc"
Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.708558 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/17018e1c-72bf-40ba-9240-5d6684ec855a-etcd-client\") pod \"apiserver-7bbb656c7d-hgw6n\" (UID: \"17018e1c-72bf-40ba-9240-5d6684ec855a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hgw6n"
Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.710189 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.710458 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536810-bc446"]
Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.711877 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536810-bc446"
Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.713198 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/278df35c-de00-443d-a6f7-e0cc526a487c-serving-cert\") pod \"controller-manager-879f6c89f-v78pc\" (UID: \"278df35c-de00-443d-a6f7-e0cc526a487c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-v78pc"
Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.716086 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f18ef53a-23d0-4f48-b7a4-96f2716e137f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-vs8sq\" (UID: \"f18ef53a-23d0-4f48-b7a4-96f2716e137f\") " pod="openshift-authentication/oauth-openshift-558db77b4-vs8sq"
Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.717313 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-hgw6n"]
Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.722516 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-khmn9"]
Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.723029 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/e9c70786-d73e-4e48-a552-bdeb53daba49-encryption-config\") pod \"apiserver-76f77b778f-9c4wb\" (UID: \"e9c70786-d73e-4e48-a552-bdeb53daba49\") " pod="openshift-apiserver/apiserver-76f77b778f-9c4wb"
Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.723082 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.724058 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-ghjwl"]
Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.725065 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-ghjwl"
Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.728966 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-v78pc"]
Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.730386 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-gw4c8"]
Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.731329 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-gw4c8"
Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.733769 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5pcpc"]
Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.738967 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-4dhxq"]
Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.741356 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-2rrvm"]
Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.741540 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-5jfm7"]
Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.742855 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-kjfn6"]
Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.744414 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-vs8sq"]
Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.744349 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.747171 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-ngpn7"]
Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.750687 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-n6xx6"]
Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.751814 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rvd8g"]
Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.754427 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2x2lh"]
Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.757198 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-c75pf"]
Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.760122 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-z5n7d"]
Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.762121 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-b89fm"]
Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.766097 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.767465 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-gb4pt"]
Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.767494 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-45mg7"]
Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.767503 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-n7thk"]
Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.769812 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-5shdf"]
Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.772596 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-5mm8b"]
Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.773906 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-wpzdz"]
Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.774377 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7ntc8\" (UniqueName: \"kubernetes.io/projected/48571590-5f3e-4b3f-9cd5-451eeb22a435-kube-api-access-7ntc8\") pod \"kube-storage-version-migrator-operator-b67b599dd-n77c6\" (UID: \"48571590-5f3e-4b3f-9cd5-451eeb22a435\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-n77c6"
Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.774437 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z74nc\" (UniqueName: \"kubernetes.io/projected/af0d26af-5990-456b-a3bc-4ea4a14bbc25-kube-api-access-z74nc\") pod \"cluster-samples-operator-665b6dd947-rvd8g\" (UID: \"af0d26af-5990-456b-a3bc-4ea4a14bbc25\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rvd8g"
Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.774494 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxqt7\" (UniqueName: \"kubernetes.io/projected/b0274d4b-eb80-4321-a4c1-6848c65bc32e-kube-api-access-lxqt7\") pod \"cluster-image-registry-operator-dc59b4c8b-q7c6j\" (UID: \"b0274d4b-eb80-4321-a4c1-6848c65bc32e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-q7c6j"
Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.774522 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48571590-5f3e-4b3f-9cd5-451eeb22a435-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-n77c6\" (UID: \"48571590-5f3e-4b3f-9cd5-451eeb22a435\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-n77c6"
Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.774554 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jqhdk\" (UniqueName: \"kubernetes.io/projected/e7d85019-9a72-439e-a548-496027dd3d2c-kube-api-access-jqhdk\") pod \"openshift-config-operator-7777fb866f-5jfm7\" (UID: \"e7d85019-9a72-439e-a548-496027dd3d2c\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-5jfm7"
Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.774608 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bed81cec-625c-4239-92b4-39428a13becc-serving-cert\") pod \"console-operator-58897d9998-gb4pt\" (UID: \"bed81cec-625c-4239-92b4-39428a13becc\") " pod="openshift-console-operator/console-operator-58897d9998-gb4pt"
Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.774644 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tt9hc\" (UniqueName: \"kubernetes.io/projected/36eaeabc-508b-4a11-9dc5-45ff8b42e0a8-kube-api-access-tt9hc\") pod \"migrator-59844c95c7-5mm8b\" (UID: \"36eaeabc-508b-4a11-9dc5-45ff8b42e0a8\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-5mm8b"
Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.774669 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7d85019-9a72-439e-a548-496027dd3d2c-serving-cert\") pod \"openshift-config-operator-7777fb866f-5jfm7\" (UID: \"e7d85019-9a72-439e-a548-496027dd3d2c\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-5jfm7"
Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.774697 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8t85v\" (UniqueName: \"kubernetes.io/projected/1f30f03f-511a-4a29-beae-e3d6971a8c9e-kube-api-access-8t85v\") pod \"downloads-7954f5f757-4dhxq\" (UID: \"1f30f03f-511a-4a29-beae-e3d6971a8c9e\") " pod="openshift-console/downloads-7954f5f757-4dhxq"
Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.774729 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pzv5r\" (UniqueName: \"kubernetes.io/projected/bed81cec-625c-4239-92b4-39428a13becc-kube-api-access-pzv5r\") pod \"console-operator-58897d9998-gb4pt\" (UID: \"bed81cec-625c-4239-92b4-39428a13becc\") " pod="openshift-console-operator/console-operator-58897d9998-gb4pt"
Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.774758 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8pf5f\" (UniqueName: \"kubernetes.io/projected/32e984aa-8399-4cf1-8a4a-b36525c67e35-kube-api-access-8pf5f\") pod \"marketplace-operator-79b997595-45mg7\" (UID: \"32e984aa-8399-4cf1-8a4a-b36525c67e35\") " pod="openshift-marketplace/marketplace-operator-79b997595-45mg7"
Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.774789 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/32e984aa-8399-4cf1-8a4a-b36525c67e35-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-45mg7\" (UID: \"32e984aa-8399-4cf1-8a4a-b36525c67e35\") " pod="openshift-marketplace/marketplace-operator-79b997595-45mg7"
Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.774820 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b0274d4b-eb80-4321-a4c1-6848c65bc32e-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-q7c6j\" (UID: \"b0274d4b-eb80-4321-a4c1-6848c65bc32e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-q7c6j"
Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.774857 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bed81cec-625c-4239-92b4-39428a13becc-trusted-ca\") pod \"console-operator-58897d9998-gb4pt\" (UID: \"bed81cec-625c-4239-92b4-39428a13becc\") " pod="openshift-console-operator/console-operator-58897d9998-gb4pt"
Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.774887 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bed81cec-625c-4239-92b4-39428a13becc-config\") pod \"console-operator-58897d9998-gb4pt\" (UID: \"bed81cec-625c-4239-92b4-39428a13becc\") " pod="openshift-console-operator/console-operator-58897d9998-gb4pt"
Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.774914 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/af0d26af-5990-456b-a3bc-4ea4a14bbc25-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-rvd8g\" (UID: \"af0d26af-5990-456b-a3bc-4ea4a14bbc25\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rvd8g"
Feb 27 16:10:08 crc kubenswrapper[4830]: I0227
16:10:08.774963 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/e7d85019-9a72-439e-a548-496027dd3d2c-available-featuregates\") pod \"openshift-config-operator-7777fb866f-5jfm7\" (UID: \"e7d85019-9a72-439e-a548-496027dd3d2c\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-5jfm7" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.774991 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b0274d4b-eb80-4321-a4c1-6848c65bc32e-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-q7c6j\" (UID: \"b0274d4b-eb80-4321-a4c1-6848c65bc32e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-q7c6j" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.775044 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b0274d4b-eb80-4321-a4c1-6848c65bc32e-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-q7c6j\" (UID: \"b0274d4b-eb80-4321-a4c1-6848c65bc32e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-q7c6j" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.775079 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/32e984aa-8399-4cf1-8a4a-b36525c67e35-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-45mg7\" (UID: \"32e984aa-8399-4cf1-8a4a-b36525c67e35\") " pod="openshift-marketplace/marketplace-operator-79b997595-45mg7" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.775153 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/48571590-5f3e-4b3f-9cd5-451eeb22a435-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-n77c6\" (UID: \"48571590-5f3e-4b3f-9cd5-451eeb22a435\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-n77c6" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.775319 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48571590-5f3e-4b3f-9cd5-451eeb22a435-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-n77c6\" (UID: \"48571590-5f3e-4b3f-9cd5-451eeb22a435\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-n77c6" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.775993 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-9gfr4"] Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.776200 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/e7d85019-9a72-439e-a548-496027dd3d2c-available-featuregates\") pod \"openshift-config-operator-7777fb866f-5jfm7\" (UID: \"e7d85019-9a72-439e-a548-496027dd3d2c\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-5jfm7" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.776310 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b0274d4b-eb80-4321-a4c1-6848c65bc32e-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-q7c6j\" (UID: \"b0274d4b-eb80-4321-a4c1-6848c65bc32e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-q7c6j" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.777491 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/32e984aa-8399-4cf1-8a4a-b36525c67e35-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-45mg7\" (UID: \"32e984aa-8399-4cf1-8a4a-b36525c67e35\") " pod="openshift-marketplace/marketplace-operator-79b997595-45mg7" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.777559 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8fhp6"] Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.779069 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-n77c6"] Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.779443 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7d85019-9a72-439e-a548-496027dd3d2c-serving-cert\") pod \"openshift-config-operator-7777fb866f-5jfm7\" (UID: \"e7d85019-9a72-439e-a548-496027dd3d2c\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-5jfm7" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.779821 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b0274d4b-eb80-4321-a4c1-6848c65bc32e-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-q7c6j\" (UID: \"b0274d4b-eb80-4321-a4c1-6848c65bc32e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-q7c6j" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.780080 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-q7c6j"] Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.781068 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hlg9d"] Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 
16:10:08.781491 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/32e984aa-8399-4cf1-8a4a-b36525c67e35-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-45mg7\" (UID: \"32e984aa-8399-4cf1-8a4a-b36525c67e35\") " pod="openshift-marketplace/marketplace-operator-79b997595-45mg7" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.782063 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-bfz6f"] Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.783023 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-xk2qk"] Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.783133 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/af0d26af-5990-456b-a3bc-4ea4a14bbc25-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-rvd8g\" (UID: \"af0d26af-5990-456b-a3bc-4ea4a14bbc25\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rvd8g" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.784053 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.784366 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-hdgkf"] Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.785461 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8lclt"] Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.786705 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7stx8"] Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.788329 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-srljc"] Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.789380 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536800-n8kg4"] Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.789518 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-srljc" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.790163 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-dwgx7"] Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.790819 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-dwgx7" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.791061 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-pz6hl"] Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.792064 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536810-bc446"] Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.793169 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-ghjwl"] Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.794263 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-srljc"] Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.795274 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-gw4c8"] Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.798220 4830 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-ingress-canary/ingress-canary-rfmxx"] Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.804732 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.807459 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-rfmxx"] Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.807633 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-rfmxx" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.823792 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.851771 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.857110 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bed81cec-625c-4239-92b4-39428a13becc-trusted-ca\") pod \"console-operator-58897d9998-gb4pt\" (UID: \"bed81cec-625c-4239-92b4-39428a13becc\") " pod="openshift-console-operator/console-operator-58897d9998-gb4pt" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.863891 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.883808 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.890506 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/bed81cec-625c-4239-92b4-39428a13becc-serving-cert\") pod \"console-operator-58897d9998-gb4pt\" (UID: \"bed81cec-625c-4239-92b4-39428a13becc\") " pod="openshift-console-operator/console-operator-58897d9998-gb4pt" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.904913 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.924482 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.943604 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.946731 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bed81cec-625c-4239-92b4-39428a13becc-config\") pod \"console-operator-58897d9998-gb4pt\" (UID: \"bed81cec-625c-4239-92b4-39428a13becc\") " pod="openshift-console-operator/console-operator-58897d9998-gb4pt" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.964473 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.968396 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/48571590-5f3e-4b3f-9cd5-451eeb22a435-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-n77c6\" (UID: \"48571590-5f3e-4b3f-9cd5-451eeb22a435\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-n77c6" Feb 27 16:10:08 crc kubenswrapper[4830]: I0227 16:10:08.984566 4830 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 27 16:10:09 crc kubenswrapper[4830]: I0227 16:10:09.004497 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 27 16:10:09 crc kubenswrapper[4830]: I0227 16:10:09.024748 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 27 16:10:09 crc kubenswrapper[4830]: I0227 16:10:09.045115 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 27 16:10:09 crc kubenswrapper[4830]: I0227 16:10:09.065162 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 27 16:10:09 crc kubenswrapper[4830]: I0227 16:10:09.085858 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 27 16:10:09 crc kubenswrapper[4830]: I0227 16:10:09.105497 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 27 16:10:09 crc kubenswrapper[4830]: I0227 16:10:09.124473 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 27 16:10:09 crc kubenswrapper[4830]: I0227 16:10:09.145415 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 27 16:10:09 crc kubenswrapper[4830]: I0227 16:10:09.164783 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 27 16:10:09 crc kubenswrapper[4830]: I0227 16:10:09.204534 4830 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 27 16:10:09 crc kubenswrapper[4830]: I0227 16:10:09.224581 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 27 16:10:09 crc kubenswrapper[4830]: I0227 16:10:09.244669 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 27 16:10:09 crc kubenswrapper[4830]: I0227 16:10:09.265883 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 27 16:10:09 crc kubenswrapper[4830]: I0227 16:10:09.285391 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 27 16:10:09 crc kubenswrapper[4830]: I0227 16:10:09.324769 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 27 16:10:09 crc kubenswrapper[4830]: I0227 16:10:09.344394 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 27 16:10:09 crc kubenswrapper[4830]: I0227 16:10:09.364557 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 27 16:10:09 crc kubenswrapper[4830]: I0227 16:10:09.386062 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 27 16:10:09 crc kubenswrapper[4830]: I0227 16:10:09.406564 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 27 16:10:09 crc kubenswrapper[4830]: I0227 16:10:09.425411 4830 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 27 16:10:09 crc kubenswrapper[4830]: I0227 16:10:09.445119 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 27 16:10:09 crc kubenswrapper[4830]: I0227 16:10:09.465011 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 27 16:10:09 crc kubenswrapper[4830]: I0227 16:10:09.484658 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 27 16:10:09 crc kubenswrapper[4830]: I0227 16:10:09.504561 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 27 16:10:09 crc kubenswrapper[4830]: I0227 16:10:09.524268 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 27 16:10:09 crc kubenswrapper[4830]: I0227 16:10:09.546491 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 27 16:10:09 crc kubenswrapper[4830]: I0227 16:10:09.563847 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 27 16:10:09 crc kubenswrapper[4830]: I0227 16:10:09.583841 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 27 16:10:09 crc kubenswrapper[4830]: I0227 16:10:09.605196 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 27 16:10:09 crc kubenswrapper[4830]: I0227 16:10:09.624175 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 27 16:10:09 crc kubenswrapper[4830]: I0227 16:10:09.645659 4830 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 27 16:10:09 crc kubenswrapper[4830]: I0227 16:10:09.686119 4830 request.go:700] Waited for 1.008236395s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0 Feb 27 16:10:09 crc kubenswrapper[4830]: I0227 16:10:09.693548 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 27 16:10:09 crc kubenswrapper[4830]: I0227 16:10:09.699283 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g8t4q\" (UniqueName: \"kubernetes.io/projected/17018e1c-72bf-40ba-9240-5d6684ec855a-kube-api-access-g8t4q\") pod \"apiserver-7bbb656c7d-hgw6n\" (UID: \"17018e1c-72bf-40ba-9240-5d6684ec855a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hgw6n" Feb 27 16:10:09 crc kubenswrapper[4830]: I0227 16:10:09.705066 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 27 16:10:09 crc kubenswrapper[4830]: I0227 16:10:09.726153 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 27 16:10:09 crc kubenswrapper[4830]: I0227 16:10:09.744996 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 27 16:10:09 crc kubenswrapper[4830]: I0227 16:10:09.765854 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 27 16:10:09 crc kubenswrapper[4830]: I0227 16:10:09.767996 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hgw6n" Feb 27 16:10:09 crc kubenswrapper[4830]: I0227 16:10:09.786339 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 27 16:10:09 crc kubenswrapper[4830]: I0227 16:10:09.805327 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 27 16:10:09 crc kubenswrapper[4830]: I0227 16:10:09.825525 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 27 16:10:09 crc kubenswrapper[4830]: I0227 16:10:09.845944 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 27 16:10:09 crc kubenswrapper[4830]: I0227 16:10:09.865866 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 27 16:10:09 crc kubenswrapper[4830]: I0227 16:10:09.884918 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 27 16:10:09 crc kubenswrapper[4830]: I0227 16:10:09.931119 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-st29l\" (UniqueName: \"kubernetes.io/projected/e02559f6-da6b-44d6-b0d3-16a5b400edda-kube-api-access-st29l\") pod \"authentication-operator-69f744f599-ngpn7\" (UID: \"e02559f6-da6b-44d6-b0d3-16a5b400edda\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ngpn7" Feb 27 16:10:09 crc kubenswrapper[4830]: I0227 16:10:09.961550 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t5c8f\" (UniqueName: \"kubernetes.io/projected/278df35c-de00-443d-a6f7-e0cc526a487c-kube-api-access-t5c8f\") pod \"controller-manager-879f6c89f-v78pc\" (UID: \"278df35c-de00-443d-a6f7-e0cc526a487c\") " 
pod="openshift-controller-manager/controller-manager-879f6c89f-v78pc" Feb 27 16:10:09 crc kubenswrapper[4830]: I0227 16:10:09.970735 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sthg5\" (UniqueName: \"kubernetes.io/projected/4ce35469-d725-409b-8e24-2c74769d7b77-kube-api-access-sthg5\") pod \"route-controller-manager-6576b87f9c-2rrvm\" (UID: \"4ce35469-d725-409b-8e24-2c74769d7b77\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2rrvm" Feb 27 16:10:09 crc kubenswrapper[4830]: I0227 16:10:09.983041 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-94mnb\" (UniqueName: \"kubernetes.io/projected/f18ef53a-23d0-4f48-b7a4-96f2716e137f-kube-api-access-94mnb\") pod \"oauth-openshift-558db77b4-vs8sq\" (UID: \"f18ef53a-23d0-4f48-b7a4-96f2716e137f\") " pod="openshift-authentication/oauth-openshift-558db77b4-vs8sq" Feb 27 16:10:09 crc kubenswrapper[4830]: I0227 16:10:09.984119 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 27 16:10:10 crc kubenswrapper[4830]: I0227 16:10:10.012250 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 27 16:10:10 crc kubenswrapper[4830]: I0227 16:10:10.024853 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 27 16:10:10 crc kubenswrapper[4830]: I0227 16:10:10.027572 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-v78pc" Feb 27 16:10:10 crc kubenswrapper[4830]: I0227 16:10:10.069034 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 27 16:10:10 crc kubenswrapper[4830]: I0227 16:10:10.086176 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2rrvm" Feb 27 16:10:10 crc kubenswrapper[4830]: I0227 16:10:10.093869 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4s7mz\" (UniqueName: \"kubernetes.io/projected/e9c70786-d73e-4e48-a552-bdeb53daba49-kube-api-access-4s7mz\") pod \"apiserver-76f77b778f-9c4wb\" (UID: \"e9c70786-d73e-4e48-a552-bdeb53daba49\") " pod="openshift-apiserver/apiserver-76f77b778f-9c4wb" Feb 27 16:10:10 crc kubenswrapper[4830]: I0227 16:10:10.108087 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwrqm\" (UniqueName: \"kubernetes.io/projected/cbddb809-9950-48a7-945a-ef66c2e1c1f9-kube-api-access-kwrqm\") pod \"openshift-apiserver-operator-796bbdcf4f-5pcpc\" (UID: \"cbddb809-9950-48a7-945a-ef66c2e1c1f9\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5pcpc" Feb 27 16:10:10 crc kubenswrapper[4830]: I0227 16:10:10.114579 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-hgw6n"] Feb 27 16:10:10 crc kubenswrapper[4830]: I0227 16:10:10.128607 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 27 16:10:10 crc kubenswrapper[4830]: I0227 16:10:10.140508 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4ntq\" (UniqueName: \"kubernetes.io/projected/11fbaa05-cf66-40dd-be15-c6474a011768-kube-api-access-z4ntq\") pod \"console-f9d7485db-kjfn6\" (UID: \"11fbaa05-cf66-40dd-be15-c6474a011768\") " pod="openshift-console/console-f9d7485db-kjfn6" Feb 27 16:10:10 crc kubenswrapper[4830]: I0227 16:10:10.145219 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-kjfn6" Feb 27 16:10:10 crc kubenswrapper[4830]: I0227 16:10:10.160761 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-ngpn7" Feb 27 16:10:10 crc kubenswrapper[4830]: I0227 16:10:10.165198 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 27 16:10:10 crc kubenswrapper[4830]: I0227 16:10:10.166735 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-vs8sq" Feb 27 16:10:10 crc kubenswrapper[4830]: I0227 16:10:10.167381 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kcv5t\" (UniqueName: \"kubernetes.io/projected/1843207f-14a3-4f21-a253-dbd843d2d8bf-kube-api-access-kcv5t\") pod \"machine-api-operator-5694c8668f-khmn9\" (UID: \"1843207f-14a3-4f21-a253-dbd843d2d8bf\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-khmn9" Feb 27 16:10:10 crc kubenswrapper[4830]: I0227 16:10:10.185121 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 27 16:10:10 crc kubenswrapper[4830]: I0227 16:10:10.206517 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 27 16:10:10 crc kubenswrapper[4830]: I0227 16:10:10.224601 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 27 16:10:10 crc kubenswrapper[4830]: I0227 16:10:10.245592 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 27 16:10:10 crc kubenswrapper[4830]: I0227 16:10:10.257654 4830 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-v78pc"] Feb 27 16:10:10 crc kubenswrapper[4830]: I0227 16:10:10.265208 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 27 16:10:10 crc kubenswrapper[4830]: W0227 16:10:10.273169 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod278df35c_de00_443d_a6f7_e0cc526a487c.slice/crio-ca4725098cf27779bb591d014851950c8a7cb5a21be7b98c3fe6686d135f2de0 WatchSource:0}: Error finding container ca4725098cf27779bb591d014851950c8a7cb5a21be7b98c3fe6686d135f2de0: Status 404 returned error can't find the container with id ca4725098cf27779bb591d014851950c8a7cb5a21be7b98c3fe6686d135f2de0 Feb 27 16:10:10 crc kubenswrapper[4830]: I0227 16:10:10.283470 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 27 16:10:10 crc kubenswrapper[4830]: I0227 16:10:10.302745 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-9c4wb" Feb 27 16:10:10 crc kubenswrapper[4830]: I0227 16:10:10.303950 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-2rrvm"] Feb 27 16:10:10 crc kubenswrapper[4830]: I0227 16:10:10.305009 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 16:10:10 crc kubenswrapper[4830]: I0227 16:10:10.325405 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 16:10:10 crc kubenswrapper[4830]: I0227 16:10:10.340762 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-khmn9" Feb 27 16:10:10 crc kubenswrapper[4830]: I0227 16:10:10.343664 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 27 16:10:10 crc kubenswrapper[4830]: I0227 16:10:10.358999 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-kjfn6"] Feb 27 16:10:10 crc kubenswrapper[4830]: I0227 16:10:10.363568 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 27 16:10:10 crc kubenswrapper[4830]: W0227 16:10:10.376465 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod11fbaa05_cf66_40dd_be15_c6474a011768.slice/crio-ed5a8190bcc1ccd763f39d2a6d76a6f0e916da530bc60d35fc51ab3831ea9848 WatchSource:0}: Error finding container ed5a8190bcc1ccd763f39d2a6d76a6f0e916da530bc60d35fc51ab3831ea9848: Status 404 returned error can't find the container with id ed5a8190bcc1ccd763f39d2a6d76a6f0e916da530bc60d35fc51ab3831ea9848 Feb 27 16:10:10 crc kubenswrapper[4830]: I0227 16:10:10.385600 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 27 16:10:10 crc kubenswrapper[4830]: I0227 16:10:10.404934 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 27 16:10:10 crc kubenswrapper[4830]: I0227 16:10:10.407067 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5pcpc" Feb 27 16:10:10 crc kubenswrapper[4830]: I0227 16:10:10.419134 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-ngpn7"] Feb 27 16:10:10 crc kubenswrapper[4830]: I0227 16:10:10.424149 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 27 16:10:10 crc kubenswrapper[4830]: I0227 16:10:10.450449 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 27 16:10:10 crc kubenswrapper[4830]: I0227 16:10:10.452558 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-vs8sq"] Feb 27 16:10:10 crc kubenswrapper[4830]: I0227 16:10:10.464898 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 27 16:10:10 crc kubenswrapper[4830]: I0227 16:10:10.483745 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 27 16:10:10 crc kubenswrapper[4830]: I0227 16:10:10.488421 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-9c4wb"] Feb 27 16:10:10 crc kubenswrapper[4830]: I0227 16:10:10.504287 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 27 16:10:10 crc kubenswrapper[4830]: I0227 16:10:10.523797 4830 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 27 16:10:10 crc kubenswrapper[4830]: I0227 16:10:10.539954 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-khmn9"] Feb 27 16:10:10 crc kubenswrapper[4830]: I0227 
16:10:10.544952 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 27 16:10:10 crc kubenswrapper[4830]: I0227 16:10:10.579625 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ntc8\" (UniqueName: \"kubernetes.io/projected/48571590-5f3e-4b3f-9cd5-451eeb22a435-kube-api-access-7ntc8\") pod \"kube-storage-version-migrator-operator-b67b599dd-n77c6\" (UID: \"48571590-5f3e-4b3f-9cd5-451eeb22a435\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-n77c6" Feb 27 16:10:10 crc kubenswrapper[4830]: I0227 16:10:10.589123 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-n77c6" Feb 27 16:10:10 crc kubenswrapper[4830]: I0227 16:10:10.601138 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5pcpc"] Feb 27 16:10:10 crc kubenswrapper[4830]: I0227 16:10:10.602089 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jqhdk\" (UniqueName: \"kubernetes.io/projected/e7d85019-9a72-439e-a548-496027dd3d2c-kube-api-access-jqhdk\") pod \"openshift-config-operator-7777fb866f-5jfm7\" (UID: \"e7d85019-9a72-439e-a548-496027dd3d2c\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-5jfm7" Feb 27 16:10:10 crc kubenswrapper[4830]: W0227 16:10:10.622812 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcbddb809_9950_48a7_945a_ef66c2e1c1f9.slice/crio-f69f2c98874810172d56539c344c91fc1cf61d2d1b58a0980c84e05376bbf919 WatchSource:0}: Error finding container f69f2c98874810172d56539c344c91fc1cf61d2d1b58a0980c84e05376bbf919: Status 404 returned error can't find the container with id 
f69f2c98874810172d56539c344c91fc1cf61d2d1b58a0980c84e05376bbf919 Feb 27 16:10:10 crc kubenswrapper[4830]: I0227 16:10:10.628311 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z74nc\" (UniqueName: \"kubernetes.io/projected/af0d26af-5990-456b-a3bc-4ea4a14bbc25-kube-api-access-z74nc\") pod \"cluster-samples-operator-665b6dd947-rvd8g\" (UID: \"af0d26af-5990-456b-a3bc-4ea4a14bbc25\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rvd8g" Feb 27 16:10:10 crc kubenswrapper[4830]: I0227 16:10:10.637456 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxqt7\" (UniqueName: \"kubernetes.io/projected/b0274d4b-eb80-4321-a4c1-6848c65bc32e-kube-api-access-lxqt7\") pod \"cluster-image-registry-operator-dc59b4c8b-q7c6j\" (UID: \"b0274d4b-eb80-4321-a4c1-6848c65bc32e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-q7c6j" Feb 27 16:10:10 crc kubenswrapper[4830]: I0227 16:10:10.671003 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b0274d4b-eb80-4321-a4c1-6848c65bc32e-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-q7c6j\" (UID: \"b0274d4b-eb80-4321-a4c1-6848c65bc32e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-q7c6j" Feb 27 16:10:10 crc kubenswrapper[4830]: I0227 16:10:10.684098 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tt9hc\" (UniqueName: \"kubernetes.io/projected/36eaeabc-508b-4a11-9dc5-45ff8b42e0a8-kube-api-access-tt9hc\") pod \"migrator-59844c95c7-5mm8b\" (UID: \"36eaeabc-508b-4a11-9dc5-45ff8b42e0a8\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-5mm8b" Feb 27 16:10:10 crc kubenswrapper[4830]: I0227 16:10:10.702661 4830 request.go:700] Waited for 1.926220938s due to client-side throttling, not priority and fairness, request: 
POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/serviceaccounts/console-operator/token Feb 27 16:10:10 crc kubenswrapper[4830]: I0227 16:10:10.703960 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8pf5f\" (UniqueName: \"kubernetes.io/projected/32e984aa-8399-4cf1-8a4a-b36525c67e35-kube-api-access-8pf5f\") pod \"marketplace-operator-79b997595-45mg7\" (UID: \"32e984aa-8399-4cf1-8a4a-b36525c67e35\") " pod="openshift-marketplace/marketplace-operator-79b997595-45mg7" Feb 27 16:10:10 crc kubenswrapper[4830]: I0227 16:10:10.733271 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pzv5r\" (UniqueName: \"kubernetes.io/projected/bed81cec-625c-4239-92b4-39428a13becc-kube-api-access-pzv5r\") pod \"console-operator-58897d9998-gb4pt\" (UID: \"bed81cec-625c-4239-92b4-39428a13becc\") " pod="openshift-console-operator/console-operator-58897d9998-gb4pt" Feb 27 16:10:10 crc kubenswrapper[4830]: I0227 16:10:10.743950 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8t85v\" (UniqueName: \"kubernetes.io/projected/1f30f03f-511a-4a29-beae-e3d6971a8c9e-kube-api-access-8t85v\") pod \"downloads-7954f5f757-4dhxq\" (UID: \"1f30f03f-511a-4a29-beae-e3d6971a8c9e\") " pod="openshift-console/downloads-7954f5f757-4dhxq" Feb 27 16:10:10 crc kubenswrapper[4830]: I0227 16:10:10.744232 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 27 16:10:10 crc kubenswrapper[4830]: I0227 16:10:10.763796 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 27 16:10:10 crc kubenswrapper[4830]: I0227 16:10:10.777985 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5jfm7" Feb 27 16:10:10 crc kubenswrapper[4830]: I0227 16:10:10.783630 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 27 16:10:10 crc kubenswrapper[4830]: I0227 16:10:10.784815 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-n77c6"] Feb 27 16:10:10 crc kubenswrapper[4830]: I0227 16:10:10.803140 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 27 16:10:10 crc kubenswrapper[4830]: W0227 16:10:10.804524 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod48571590_5f3e_4b3f_9cd5_451eeb22a435.slice/crio-bc8fc3b9de59f7ea9ab45f5469455ac8eb00b5de9ca684d6c6758c9b7bfca92f WatchSource:0}: Error finding container bc8fc3b9de59f7ea9ab45f5469455ac8eb00b5de9ca684d6c6758c9b7bfca92f: Status 404 returned error can't find the container with id bc8fc3b9de59f7ea9ab45f5469455ac8eb00b5de9ca684d6c6758c9b7bfca92f Feb 27 16:10:10 crc kubenswrapper[4830]: I0227 16:10:10.822453 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-4dhxq" Feb 27 16:10:10 crc kubenswrapper[4830]: I0227 16:10:10.823382 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 27 16:10:10 crc kubenswrapper[4830]: I0227 16:10:10.837866 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rvd8g" Feb 27 16:10:10 crc kubenswrapper[4830]: I0227 16:10:10.844532 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 27 16:10:10 crc kubenswrapper[4830]: I0227 16:10:10.865386 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 27 16:10:10 crc kubenswrapper[4830]: I0227 16:10:10.868918 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-q7c6j" Feb 27 16:10:10 crc kubenswrapper[4830]: I0227 16:10:10.874387 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-45mg7" Feb 27 16:10:10 crc kubenswrapper[4830]: I0227 16:10:10.884832 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 27 16:10:10 crc kubenswrapper[4830]: I0227 16:10:10.901765 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 16:10:10 crc kubenswrapper[4830]: I0227 16:10:10.901902 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 16:10:10 crc kubenswrapper[4830]: I0227 16:10:10.901933 4830 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 16:10:10 crc kubenswrapper[4830]: I0227 16:10:10.901976 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 16:10:10 crc kubenswrapper[4830]: I0227 16:10:10.902004 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 16:10:10 crc kubenswrapper[4830]: E0227 16:10:10.902260 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 16:12:12.902233896 +0000 UTC m=+328.991506359 (durationBeforeRetry 2m2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:10:10 crc kubenswrapper[4830]: I0227 16:10:10.904005 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-gb4pt" Feb 27 16:10:10 crc kubenswrapper[4830]: I0227 16:10:10.904915 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5pcpc" event={"ID":"cbddb809-9950-48a7-945a-ef66c2e1c1f9","Type":"ContainerStarted","Data":"f69f2c98874810172d56539c344c91fc1cf61d2d1b58a0980c84e05376bbf919"} Feb 27 16:10:10 crc kubenswrapper[4830]: I0227 16:10:10.905003 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 27 16:10:10 crc kubenswrapper[4830]: I0227 16:10:10.907177 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 16:10:10 crc kubenswrapper[4830]: I0227 16:10:10.907858 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " 
pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 16:10:10 crc kubenswrapper[4830]: I0227 16:10:10.913170 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-vs8sq" event={"ID":"f18ef53a-23d0-4f48-b7a4-96f2716e137f","Type":"ContainerStarted","Data":"6b51f4484e7a3a8e1a60b7c39c00240728b6a4fa179b2594fb0271caab50eb68"} Feb 27 16:10:10 crc kubenswrapper[4830]: I0227 16:10:10.914151 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 16:10:10 crc kubenswrapper[4830]: I0227 16:10:10.914489 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 16:10:10 crc kubenswrapper[4830]: I0227 16:10:10.917455 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-5mm8b" Feb 27 16:10:10 crc kubenswrapper[4830]: I0227 16:10:10.924509 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 27 16:10:10 crc kubenswrapper[4830]: I0227 16:10:10.926065 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-kjfn6" event={"ID":"11fbaa05-cf66-40dd-be15-c6474a011768","Type":"ContainerStarted","Data":"30e1c78398b7de2fa04252f58c541cb457aea0dcce0e056e3d2ee80cbd726291"} Feb 27 16:10:10 crc kubenswrapper[4830]: I0227 16:10:10.926106 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-kjfn6" event={"ID":"11fbaa05-cf66-40dd-be15-c6474a011768","Type":"ContainerStarted","Data":"ed5a8190bcc1ccd763f39d2a6d76a6f0e916da530bc60d35fc51ab3831ea9848"} Feb 27 16:10:10 crc kubenswrapper[4830]: I0227 16:10:10.931615 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-v78pc" event={"ID":"278df35c-de00-443d-a6f7-e0cc526a487c","Type":"ContainerStarted","Data":"1a9018ca3884a4806bc37e707c445a76d4c94f79575773e2c15aa66edafbcebc"} Feb 27 16:10:10 crc kubenswrapper[4830]: I0227 16:10:10.931658 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-v78pc" event={"ID":"278df35c-de00-443d-a6f7-e0cc526a487c","Type":"ContainerStarted","Data":"ca4725098cf27779bb591d014851950c8a7cb5a21be7b98c3fe6686d135f2de0"} Feb 27 16:10:10 crc kubenswrapper[4830]: I0227 16:10:10.932052 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-v78pc" Feb 27 16:10:10 crc kubenswrapper[4830]: I0227 16:10:10.939166 4830 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-v78pc container/controller-manager 
namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Feb 27 16:10:10 crc kubenswrapper[4830]: I0227 16:10:10.939219 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-v78pc" podUID="278df35c-de00-443d-a6f7-e0cc526a487c" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" Feb 27 16:10:10 crc kubenswrapper[4830]: I0227 16:10:10.940493 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-ngpn7" event={"ID":"e02559f6-da6b-44d6-b0d3-16a5b400edda","Type":"ContainerStarted","Data":"8b94cfd2c72680e07a3ad1f051768cb3b5164fafd8a35a4362c95a323195e396"} Feb 27 16:10:10 crc kubenswrapper[4830]: I0227 16:10:10.943036 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-khmn9" event={"ID":"1843207f-14a3-4f21-a253-dbd843d2d8bf","Type":"ContainerStarted","Data":"42a4057c587c59fde4cbeb8cbb30b298d87ed09c8462f73a40fbc282c2ff3e93"} Feb 27 16:10:10 crc kubenswrapper[4830]: I0227 16:10:10.943135 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-khmn9" event={"ID":"1843207f-14a3-4f21-a253-dbd843d2d8bf","Type":"ContainerStarted","Data":"8d64e6eec3c8bdcf34095a9a3fc488b8f034f0b5f5cf683eaa9bc42c29f54347"} Feb 27 16:10:10 crc kubenswrapper[4830]: I0227 16:10:10.944746 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-9c4wb" event={"ID":"e9c70786-d73e-4e48-a552-bdeb53daba49","Type":"ContainerStarted","Data":"4a464b6bd14520173b2c609e16f953b8a9303c97054c4b027149c96dca4cf261"} Feb 27 16:10:10 crc kubenswrapper[4830]: I0227 16:10:10.952279 
4830 generic.go:334] "Generic (PLEG): container finished" podID="17018e1c-72bf-40ba-9240-5d6684ec855a" containerID="e7971d3c99a910fad7dc2f681f8596a06e1d8da1bd96b70733b0424f6660a193" exitCode=0 Feb 27 16:10:10 crc kubenswrapper[4830]: I0227 16:10:10.952448 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hgw6n" event={"ID":"17018e1c-72bf-40ba-9240-5d6684ec855a","Type":"ContainerDied","Data":"e7971d3c99a910fad7dc2f681f8596a06e1d8da1bd96b70733b0424f6660a193"} Feb 27 16:10:10 crc kubenswrapper[4830]: I0227 16:10:10.952504 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hgw6n" event={"ID":"17018e1c-72bf-40ba-9240-5d6684ec855a","Type":"ContainerStarted","Data":"7d5cfbad87a299e43979ab689665ef1d570a7412a99cb4c77814300a4420a27a"} Feb 27 16:10:10 crc kubenswrapper[4830]: I0227 16:10:10.954455 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2rrvm" event={"ID":"4ce35469-d725-409b-8e24-2c74769d7b77","Type":"ContainerStarted","Data":"a1a24021a2ad8f73298f5fb9a4df12c89c987963d2a1004acd7c63aa5d7857da"} Feb 27 16:10:10 crc kubenswrapper[4830]: I0227 16:10:10.954479 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2rrvm" event={"ID":"4ce35469-d725-409b-8e24-2c74769d7b77","Type":"ContainerStarted","Data":"18d9f694add02cd56a75816776b4fd3da281532b184f459f03e3d79219db7c75"} Feb 27 16:10:10 crc kubenswrapper[4830]: I0227 16:10:10.955041 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2rrvm" Feb 27 16:10:10 crc kubenswrapper[4830]: I0227 16:10:10.956749 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-n77c6" 
event={"ID":"48571590-5f3e-4b3f-9cd5-451eeb22a435","Type":"ContainerStarted","Data":"bc8fc3b9de59f7ea9ab45f5469455ac8eb00b5de9ca684d6c6758c9b7bfca92f"} Feb 27 16:10:10 crc kubenswrapper[4830]: I0227 16:10:10.963746 4830 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-2rrvm container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Feb 27 16:10:10 crc kubenswrapper[4830]: I0227 16:10:10.963812 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2rrvm" podUID="4ce35469-d725-409b-8e24-2c74769d7b77" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" Feb 27 16:10:10 crc kubenswrapper[4830]: I0227 16:10:10.972917 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-5jfm7"] Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.017652 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.017694 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6ba2fe32-66e0-4bcd-a646-9d07c9a21c54-metrics-certs\") pod \"network-metrics-daemon-kgdlg\" (UID: \"6ba2fe32-66e0-4bcd-a646-9d07c9a21c54\") " pod="openshift-multus/network-metrics-daemon-kgdlg" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.017739 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/e98d0941-0faf-4719-88a1-ff04ca46eece-registry-tls\") pod \"image-registry-697d97f7c8-9gfr4\" (UID: \"e98d0941-0faf-4719-88a1-ff04ca46eece\") " pod="openshift-image-registry/image-registry-697d97f7c8-9gfr4" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.017793 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9gfr4\" (UID: \"e98d0941-0faf-4719-88a1-ff04ca46eece\") " pod="openshift-image-registry/image-registry-697d97f7c8-9gfr4" Feb 27 16:10:11 crc kubenswrapper[4830]: E0227 16:10:11.018169 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 16:10:11.518156512 +0000 UTC m=+207.607428975 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9gfr4" (UID: "e98d0941-0faf-4719-88a1-ff04ca46eece") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.026233 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6ba2fe32-66e0-4bcd-a646-9d07c9a21c54-metrics-certs\") pod \"network-metrics-daemon-kgdlg\" (UID: \"6ba2fe32-66e0-4bcd-a646-9d07c9a21c54\") " pod="openshift-multus/network-metrics-daemon-kgdlg" Feb 27 16:10:11 crc kubenswrapper[4830]: W0227 16:10:11.026406 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode7d85019_9a72_439e_a548_496027dd3d2c.slice/crio-c36ba2cf4b2d1e915d58d6fb5b76f48cb2dc9810c0c8d11785821a40e0b54a7d WatchSource:0}: Error finding container c36ba2cf4b2d1e915d58d6fb5b76f48cb2dc9810c0c8d11785821a40e0b54a7d: Status 404 returned error can't find the container with id c36ba2cf4b2d1e915d58d6fb5b76f48cb2dc9810c0c8d11785821a40e0b54a7d Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.027559 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.040623 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.042285 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-4dhxq"] Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.119686 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.120404 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbwz7\" (UniqueName: \"kubernetes.io/projected/e00ea89f-b3e4-44ed-9348-5cd609b9c563-kube-api-access-xbwz7\") pod \"service-ca-operator-777779d784-xk2qk\" (UID: \"e00ea89f-b3e4-44ed-9348-5cd609b9c563\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-xk2qk" Feb 27 16:10:11 crc kubenswrapper[4830]: E0227 16:10:11.121609 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 16:10:11.621593514 +0000 UTC m=+207.710865977 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.121776 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zctbn\" (UniqueName: \"kubernetes.io/projected/5c04971d-7bad-44c6-bd80-e27f65c8637f-kube-api-access-zctbn\") pod \"machine-approver-56656f9798-nfkdb\" (UID: \"5c04971d-7bad-44c6-bd80-e27f65c8637f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nfkdb" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.121803 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c235c0e5-a6f8-45d8-83e1-91be0d32ac19-metrics-tls\") pod \"dns-operator-744455d44c-n6xx6\" (UID: \"c235c0e5-a6f8-45d8-83e1-91be0d32ac19\") " pod="openshift-dns-operator/dns-operator-744455d44c-n6xx6" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.121949 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ft756\" (UniqueName: \"kubernetes.io/projected/cb320ef7-4518-4b01-b1bd-13a60749cac4-kube-api-access-ft756\") pod \"openshift-controller-manager-operator-756b6f6bc6-2x2lh\" (UID: \"cb320ef7-4518-4b01-b1bd-13a60749cac4\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2x2lh" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.124215 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/5c04971d-7bad-44c6-bd80-e27f65c8637f-config\") pod \"machine-approver-56656f9798-nfkdb\" (UID: \"5c04971d-7bad-44c6-bd80-e27f65c8637f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nfkdb" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.124248 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9gfr4\" (UID: \"e98d0941-0faf-4719-88a1-ff04ca46eece\") " pod="openshift-image-registry/image-registry-697d97f7c8-9gfr4" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.124269 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7932070f-5985-4e34-84ff-0af75e044581-config\") pod \"kube-controller-manager-operator-78b949d7b-7stx8\" (UID: \"7932070f-5985-4e34-84ff-0af75e044581\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7stx8" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.124287 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a2a2ed5-abaa-4df6-b762-56bb964fbbca-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-z5n7d\" (UID: \"0a2a2ed5-abaa-4df6-b762-56bb964fbbca\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-z5n7d" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.124324 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0a2a2ed5-abaa-4df6-b762-56bb964fbbca-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-z5n7d\" (UID: 
\"0a2a2ed5-abaa-4df6-b762-56bb964fbbca\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-z5n7d" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.124339 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/449e4d4d-4a51-4625-801a-980a93398439-certs\") pod \"machine-config-server-dwgx7\" (UID: \"449e4d4d-4a51-4625-801a-980a93398439\") " pod="openshift-machine-config-operator/machine-config-server-dwgx7" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.124401 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb320ef7-4518-4b01-b1bd-13a60749cac4-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-2x2lh\" (UID: \"cb320ef7-4518-4b01-b1bd-13a60749cac4\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2x2lh" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.124416 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/5c04971d-7bad-44c6-bd80-e27f65c8637f-machine-approver-tls\") pod \"machine-approver-56656f9798-nfkdb\" (UID: \"5c04971d-7bad-44c6-bd80-e27f65c8637f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nfkdb" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.124442 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cb320ef7-4518-4b01-b1bd-13a60749cac4-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-2x2lh\" (UID: \"cb320ef7-4518-4b01-b1bd-13a60749cac4\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2x2lh" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 
16:10:11.124518 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e00ea89f-b3e4-44ed-9348-5cd609b9c563-serving-cert\") pod \"service-ca-operator-777779d784-xk2qk\" (UID: \"e00ea89f-b3e4-44ed-9348-5cd609b9c563\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-xk2qk" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.124551 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lph79\" (UniqueName: \"kubernetes.io/projected/c235c0e5-a6f8-45d8-83e1-91be0d32ac19-kube-api-access-lph79\") pod \"dns-operator-744455d44c-n6xx6\" (UID: \"c235c0e5-a6f8-45d8-83e1-91be0d32ac19\") " pod="openshift-dns-operator/dns-operator-744455d44c-n6xx6" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.124566 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0a2a2ed5-abaa-4df6-b762-56bb964fbbca-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-z5n7d\" (UID: \"0a2a2ed5-abaa-4df6-b762-56bb964fbbca\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-z5n7d" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.124594 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/e98d0941-0faf-4719-88a1-ff04ca46eece-installation-pull-secrets\") pod \"image-registry-697d97f7c8-9gfr4\" (UID: \"e98d0941-0faf-4719-88a1-ff04ca46eece\") " pod="openshift-image-registry/image-registry-697d97f7c8-9gfr4" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.124608 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/e98d0941-0faf-4719-88a1-ff04ca46eece-bound-sa-token\") pod \"image-registry-697d97f7c8-9gfr4\" (UID: \"e98d0941-0faf-4719-88a1-ff04ca46eece\") " pod="openshift-image-registry/image-registry-697d97f7c8-9gfr4" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.124640 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/e98d0941-0faf-4719-88a1-ff04ca46eece-registry-tls\") pod \"image-registry-697d97f7c8-9gfr4\" (UID: \"e98d0941-0faf-4719-88a1-ff04ca46eece\") " pod="openshift-image-registry/image-registry-697d97f7c8-9gfr4" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.124708 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/e98d0941-0faf-4719-88a1-ff04ca46eece-registry-certificates\") pod \"image-registry-697d97f7c8-9gfr4\" (UID: \"e98d0941-0faf-4719-88a1-ff04ca46eece\") " pod="openshift-image-registry/image-registry-697d97f7c8-9gfr4" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.124725 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7932070f-5985-4e34-84ff-0af75e044581-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-7stx8\" (UID: \"7932070f-5985-4e34-84ff-0af75e044581\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7stx8" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.124740 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wv9vr\" (UniqueName: \"kubernetes.io/projected/449e4d4d-4a51-4625-801a-980a93398439-kube-api-access-wv9vr\") pod \"machine-config-server-dwgx7\" (UID: \"449e4d4d-4a51-4625-801a-980a93398439\") " 
pod="openshift-machine-config-operator/machine-config-server-dwgx7" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.124791 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/e98d0941-0faf-4719-88a1-ff04ca46eece-ca-trust-extracted\") pod \"image-registry-697d97f7c8-9gfr4\" (UID: \"e98d0941-0faf-4719-88a1-ff04ca46eece\") " pod="openshift-image-registry/image-registry-697d97f7c8-9gfr4" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.124804 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e98d0941-0faf-4719-88a1-ff04ca46eece-trusted-ca\") pod \"image-registry-697d97f7c8-9gfr4\" (UID: \"e98d0941-0faf-4719-88a1-ff04ca46eece\") " pod="openshift-image-registry/image-registry-697d97f7c8-9gfr4" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.124818 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/449e4d4d-4a51-4625-801a-980a93398439-node-bootstrap-token\") pod \"machine-config-server-dwgx7\" (UID: \"449e4d4d-4a51-4625-801a-980a93398439\") " pod="openshift-machine-config-operator/machine-config-server-dwgx7" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.124852 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e00ea89f-b3e4-44ed-9348-5cd609b9c563-config\") pod \"service-ca-operator-777779d784-xk2qk\" (UID: \"e00ea89f-b3e4-44ed-9348-5cd609b9c563\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-xk2qk" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.124894 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4whrx\" (UniqueName: 
\"kubernetes.io/projected/e98d0941-0faf-4719-88a1-ff04ca46eece-kube-api-access-4whrx\") pod \"image-registry-697d97f7c8-9gfr4\" (UID: \"e98d0941-0faf-4719-88a1-ff04ca46eece\") " pod="openshift-image-registry/image-registry-697d97f7c8-9gfr4" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.124909 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7932070f-5985-4e34-84ff-0af75e044581-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-7stx8\" (UID: \"7932070f-5985-4e34-84ff-0af75e044581\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7stx8" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.124923 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5c04971d-7bad-44c6-bd80-e27f65c8637f-auth-proxy-config\") pod \"machine-approver-56656f9798-nfkdb\" (UID: \"5c04971d-7bad-44c6-bd80-e27f65c8637f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nfkdb" Feb 27 16:10:11 crc kubenswrapper[4830]: E0227 16:10:11.125410 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 16:10:11.625399043 +0000 UTC m=+207.714671496 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9gfr4" (UID: "e98d0941-0faf-4719-88a1-ff04ca46eece") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.131902 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rvd8g"] Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.132113 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/e98d0941-0faf-4719-88a1-ff04ca46eece-registry-tls\") pod \"image-registry-697d97f7c8-9gfr4\" (UID: \"e98d0941-0faf-4719-88a1-ff04ca46eece\") " pod="openshift-image-registry/image-registry-697d97f7c8-9gfr4" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.181996 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-45mg7"] Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.225300 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.226409 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-q7c6j"] Feb 27 16:10:11 crc kubenswrapper[4830]: E0227 16:10:11.226493 4830 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 16:10:11.726478215 +0000 UTC m=+207.815750678 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.226548 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/e98d0941-0faf-4719-88a1-ff04ca46eece-installation-pull-secrets\") pod \"image-registry-697d97f7c8-9gfr4\" (UID: \"e98d0941-0faf-4719-88a1-ff04ca46eece\") " pod="openshift-image-registry/image-registry-697d97f7c8-9gfr4" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.226567 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e98d0941-0faf-4719-88a1-ff04ca46eece-bound-sa-token\") pod \"image-registry-697d97f7c8-9gfr4\" (UID: \"e98d0941-0faf-4719-88a1-ff04ca46eece\") " pod="openshift-image-registry/image-registry-697d97f7c8-9gfr4" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.227024 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttpw6\" (UniqueName: \"kubernetes.io/projected/d473053a-d4df-40b8-a876-5582e1d8a702-kube-api-access-ttpw6\") pod \"router-default-5444994796-wh6nt\" (UID: \"d473053a-d4df-40b8-a876-5582e1d8a702\") " 
pod="openshift-ingress/router-default-5444994796-wh6nt" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.227084 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c873a2ac-f7d3-4bea-ad09-b16891a1edf6-profile-collector-cert\") pod \"catalog-operator-68c6474976-bfz6f\" (UID: \"c873a2ac-f7d3-4bea-ad09-b16891a1edf6\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-bfz6f" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.227116 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/e98d0941-0faf-4719-88a1-ff04ca46eece-ca-trust-extracted\") pod \"image-registry-697d97f7c8-9gfr4\" (UID: \"e98d0941-0faf-4719-88a1-ff04ca46eece\") " pod="openshift-image-registry/image-registry-697d97f7c8-9gfr4" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.227132 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e98d0941-0faf-4719-88a1-ff04ca46eece-trusted-ca\") pod \"image-registry-697d97f7c8-9gfr4\" (UID: \"e98d0941-0faf-4719-88a1-ff04ca46eece\") " pod="openshift-image-registry/image-registry-697d97f7c8-9gfr4" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.227147 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4be12707-b1d1-4a30-bb2c-1af9e3d34d09-apiservice-cert\") pod \"packageserver-d55dfcdfc-8fhp6\" (UID: \"4be12707-b1d1-4a30-bb2c-1af9e3d34d09\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8fhp6" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.227168 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/e00ea89f-b3e4-44ed-9348-5cd609b9c563-config\") pod \"service-ca-operator-777779d784-xk2qk\" (UID: \"e00ea89f-b3e4-44ed-9348-5cd609b9c563\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-xk2qk" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.227187 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4whrx\" (UniqueName: \"kubernetes.io/projected/e98d0941-0faf-4719-88a1-ff04ca46eece-kube-api-access-4whrx\") pod \"image-registry-697d97f7c8-9gfr4\" (UID: \"e98d0941-0faf-4719-88a1-ff04ca46eece\") " pod="openshift-image-registry/image-registry-697d97f7c8-9gfr4" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.227203 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5c04971d-7bad-44c6-bd80-e27f65c8637f-auth-proxy-config\") pod \"machine-approver-56656f9798-nfkdb\" (UID: \"5c04971d-7bad-44c6-bd80-e27f65c8637f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nfkdb" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.227221 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/da82094a-cfed-404a-8fb9-2958b13ce78b-signing-cabundle\") pod \"service-ca-9c57cc56f-hdgkf\" (UID: \"da82094a-cfed-404a-8fb9-2958b13ce78b\") " pod="openshift-service-ca/service-ca-9c57cc56f-hdgkf" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.227264 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/3311d92d-90da-42f5-acf3-3ec723c5edad-csi-data-dir\") pod \"csi-hostpathplugin-gw4c8\" (UID: \"3311d92d-90da-42f5-acf3-3ec723c5edad\") " pod="hostpath-provisioner/csi-hostpathplugin-gw4c8" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.227312 4830 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xbwz7\" (UniqueName: \"kubernetes.io/projected/e00ea89f-b3e4-44ed-9348-5cd609b9c563-kube-api-access-xbwz7\") pod \"service-ca-operator-777779d784-xk2qk\" (UID: \"e00ea89f-b3e4-44ed-9348-5cd609b9c563\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-xk2qk" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.227334 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2m99n\" (UniqueName: \"kubernetes.io/projected/b883e3e8-e6d7-4402-816f-033a0668f6eb-kube-api-access-2m99n\") pod \"ingress-canary-rfmxx\" (UID: \"b883e3e8-e6d7-4402-816f-033a0668f6eb\") " pod="openshift-ingress-canary/ingress-canary-rfmxx" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.227353 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/88dc3209-64e3-47ef-b1f0-e2aeddfe8ece-metrics-tls\") pod \"dns-default-srljc\" (UID: \"88dc3209-64e3-47ef-b1f0-e2aeddfe8ece\") " pod="openshift-dns/dns-default-srljc" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.227371 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pg577\" (UniqueName: \"kubernetes.io/projected/910e8c41-1fdf-4f16-9902-532e21fe81ab-kube-api-access-pg577\") pod \"ingress-operator-5b745b69d9-wpzdz\" (UID: \"910e8c41-1fdf-4f16-9902-532e21fe81ab\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wpzdz" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.227401 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d473053a-d4df-40b8-a876-5582e1d8a702-service-ca-bundle\") pod \"router-default-5444994796-wh6nt\" (UID: \"d473053a-d4df-40b8-a876-5582e1d8a702\") " 
pod="openshift-ingress/router-default-5444994796-wh6nt" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.227437 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a235e26d-f41e-406d-992e-3dfb44246bdd-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-5shdf\" (UID: \"a235e26d-f41e-406d-992e-3dfb44246bdd\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5shdf" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.227471 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6de44150-41d6-426a-92f4-d29fb3ee1afe-config\") pod \"etcd-operator-b45778765-ghjwl\" (UID: \"6de44150-41d6-426a-92f4-d29fb3ee1afe\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ghjwl" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.227502 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c04971d-7bad-44c6-bd80-e27f65c8637f-config\") pod \"machine-approver-56656f9798-nfkdb\" (UID: \"5c04971d-7bad-44c6-bd80-e27f65c8637f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nfkdb" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.227518 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/6de44150-41d6-426a-92f4-d29fb3ee1afe-etcd-service-ca\") pod \"etcd-operator-b45778765-ghjwl\" (UID: \"6de44150-41d6-426a-92f4-d29fb3ee1afe\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ghjwl" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.227555 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: 
\"kubernetes.io/empty-dir/e98d0941-0faf-4719-88a1-ff04ca46eece-ca-trust-extracted\") pod \"image-registry-697d97f7c8-9gfr4\" (UID: \"e98d0941-0faf-4719-88a1-ff04ca46eece\") " pod="openshift-image-registry/image-registry-697d97f7c8-9gfr4" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.227587 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c873a2ac-f7d3-4bea-ad09-b16891a1edf6-srv-cert\") pod \"catalog-operator-68c6474976-bfz6f\" (UID: \"c873a2ac-f7d3-4bea-ad09-b16891a1edf6\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-bfz6f" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.227656 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9gfr4\" (UID: \"e98d0941-0faf-4719-88a1-ff04ca46eece\") " pod="openshift-image-registry/image-registry-697d97f7c8-9gfr4" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.227682 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a2a2ed5-abaa-4df6-b762-56bb964fbbca-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-z5n7d\" (UID: \"0a2a2ed5-abaa-4df6-b762-56bb964fbbca\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-z5n7d" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.227699 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8kft\" (UniqueName: \"kubernetes.io/projected/4be12707-b1d1-4a30-bb2c-1af9e3d34d09-kube-api-access-w8kft\") pod \"packageserver-d55dfcdfc-8fhp6\" (UID: \"4be12707-b1d1-4a30-bb2c-1af9e3d34d09\") " 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8fhp6" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.227715 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4sfz\" (UniqueName: \"kubernetes.io/projected/3311d92d-90da-42f5-acf3-3ec723c5edad-kube-api-access-l4sfz\") pod \"csi-hostpathplugin-gw4c8\" (UID: \"3311d92d-90da-42f5-acf3-3ec723c5edad\") " pod="hostpath-provisioner/csi-hostpathplugin-gw4c8" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.227842 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hj4s6\" (UniqueName: \"kubernetes.io/projected/6de44150-41d6-426a-92f4-d29fb3ee1afe-kube-api-access-hj4s6\") pod \"etcd-operator-b45778765-ghjwl\" (UID: \"6de44150-41d6-426a-92f4-d29fb3ee1afe\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ghjwl" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.227864 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/449e4d4d-4a51-4625-801a-980a93398439-certs\") pod \"machine-config-server-dwgx7\" (UID: \"449e4d4d-4a51-4625-801a-980a93398439\") " pod="openshift-machine-config-operator/machine-config-server-dwgx7" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.227882 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/25e29ca7-7ff4-4263-8f4e-5a35a6c8118a-profile-collector-cert\") pod \"olm-operator-6b444d44fb-n7thk\" (UID: \"25e29ca7-7ff4-4263-8f4e-5a35a6c8118a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-n7thk" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.227901 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: 
\"kubernetes.io/secret/d473053a-d4df-40b8-a876-5582e1d8a702-stats-auth\") pod \"router-default-5444994796-wh6nt\" (UID: \"d473053a-d4df-40b8-a876-5582e1d8a702\") " pod="openshift-ingress/router-default-5444994796-wh6nt" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.227918 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8rfn\" (UniqueName: \"kubernetes.io/projected/c873a2ac-f7d3-4bea-ad09-b16891a1edf6-kube-api-access-h8rfn\") pod \"catalog-operator-68c6474976-bfz6f\" (UID: \"c873a2ac-f7d3-4bea-ad09-b16891a1edf6\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-bfz6f" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.227938 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/5c04971d-7bad-44c6-bd80-e27f65c8637f-machine-approver-tls\") pod \"machine-approver-56656f9798-nfkdb\" (UID: \"5c04971d-7bad-44c6-bd80-e27f65c8637f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nfkdb" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.227994 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/584006df-9736-4ed2-aeba-118587f909d7-auth-proxy-config\") pod \"machine-config-operator-74547568cd-pz6hl\" (UID: \"584006df-9736-4ed2-aeba-118587f909d7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pz6hl" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.228018 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e00ea89f-b3e4-44ed-9348-5cd609b9c563-serving-cert\") pod \"service-ca-operator-777779d784-xk2qk\" (UID: \"e00ea89f-b3e4-44ed-9348-5cd609b9c563\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-xk2qk" Feb 27 
16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.228039 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0a2a2ed5-abaa-4df6-b762-56bb964fbbca-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-z5n7d\" (UID: \"0a2a2ed5-abaa-4df6-b762-56bb964fbbca\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-z5n7d" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.228055 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfhxd\" (UniqueName: \"kubernetes.io/projected/627c853d-8a30-4a46-a190-dd490a39aa35-kube-api-access-qfhxd\") pod \"multus-admission-controller-857f4d67dd-c75pf\" (UID: \"627c853d-8a30-4a46-a190-dd490a39aa35\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-c75pf" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.228077 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/6de44150-41d6-426a-92f4-d29fb3ee1afe-etcd-ca\") pod \"etcd-operator-b45778765-ghjwl\" (UID: \"6de44150-41d6-426a-92f4-d29fb3ee1afe\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ghjwl" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.228102 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffjz5\" (UniqueName: \"kubernetes.io/projected/25e29ca7-7ff4-4263-8f4e-5a35a6c8118a-kube-api-access-ffjz5\") pod \"olm-operator-6b444d44fb-n7thk\" (UID: \"25e29ca7-7ff4-4263-8f4e-5a35a6c8118a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-n7thk" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.228116 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/88dc3209-64e3-47ef-b1f0-e2aeddfe8ece-config-volume\") pod \"dns-default-srljc\" (UID: \"88dc3209-64e3-47ef-b1f0-e2aeddfe8ece\") " pod="openshift-dns/dns-default-srljc" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.228135 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/61f22a16-1565-425a-914d-ec0d5a5c1902-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-8lclt\" (UID: \"61f22a16-1565-425a-914d-ec0d5a5c1902\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8lclt" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.228153 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/584006df-9736-4ed2-aeba-118587f909d7-proxy-tls\") pod \"machine-config-operator-74547568cd-pz6hl\" (UID: \"584006df-9736-4ed2-aeba-118587f909d7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pz6hl" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.228173 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df487ddf-86fd-4433-a32d-6d41ffeed9bc-config\") pod \"kube-apiserver-operator-766d6c64bb-b89fm\" (UID: \"df487ddf-86fd-4433-a32d-6d41ffeed9bc\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-b89fm" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.228189 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d48qd\" (UniqueName: \"kubernetes.io/projected/584006df-9736-4ed2-aeba-118587f909d7-kube-api-access-d48qd\") pod \"machine-config-operator-74547568cd-pz6hl\" (UID: \"584006df-9736-4ed2-aeba-118587f909d7\") " 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pz6hl" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.228214 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/e98d0941-0faf-4719-88a1-ff04ca46eece-registry-certificates\") pod \"image-registry-697d97f7c8-9gfr4\" (UID: \"e98d0941-0faf-4719-88a1-ff04ca46eece\") " pod="openshift-image-registry/image-registry-697d97f7c8-9gfr4" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.228231 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7932070f-5985-4e34-84ff-0af75e044581-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-7stx8\" (UID: \"7932070f-5985-4e34-84ff-0af75e044581\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7stx8" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.228248 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wv9vr\" (UniqueName: \"kubernetes.io/projected/449e4d4d-4a51-4625-801a-980a93398439-kube-api-access-wv9vr\") pod \"machine-config-server-dwgx7\" (UID: \"449e4d4d-4a51-4625-801a-980a93398439\") " pod="openshift-machine-config-operator/machine-config-server-dwgx7" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.228265 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phjjp\" (UniqueName: \"kubernetes.io/projected/61f22a16-1565-425a-914d-ec0d5a5c1902-kube-api-access-phjjp\") pod \"control-plane-machine-set-operator-78cbb6b69f-8lclt\" (UID: \"61f22a16-1565-425a-914d-ec0d5a5c1902\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8lclt" Feb 27 16:10:11 crc kubenswrapper[4830]: E0227 16:10:11.228280 4830 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 16:10:11.728265452 +0000 UTC m=+207.817537905 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9gfr4" (UID: "e98d0941-0faf-4719-88a1-ff04ca46eece") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.228336 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/449e4d4d-4a51-4625-801a-980a93398439-node-bootstrap-token\") pod \"machine-config-server-dwgx7\" (UID: \"449e4d4d-4a51-4625-801a-980a93398439\") " pod="openshift-machine-config-operator/machine-config-server-dwgx7" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.228761 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a2a2ed5-abaa-4df6-b762-56bb964fbbca-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-z5n7d\" (UID: \"0a2a2ed5-abaa-4df6-b762-56bb964fbbca\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-z5n7d" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.229164 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e00ea89f-b3e4-44ed-9348-5cd609b9c563-config\") pod \"service-ca-operator-777779d784-xk2qk\" (UID: \"e00ea89f-b3e4-44ed-9348-5cd609b9c563\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-xk2qk" Feb 27 16:10:11 crc kubenswrapper[4830]: 
I0227 16:10:11.229687 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/e98d0941-0faf-4719-88a1-ff04ca46eece-registry-certificates\") pod \"image-registry-697d97f7c8-9gfr4\" (UID: \"e98d0941-0faf-4719-88a1-ff04ca46eece\") " pod="openshift-image-registry/image-registry-697d97f7c8-9gfr4" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.230024 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7932070f-5985-4e34-84ff-0af75e044581-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-7stx8\" (UID: \"7932070f-5985-4e34-84ff-0af75e044581\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7stx8" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.230049 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/584006df-9736-4ed2-aeba-118587f909d7-images\") pod \"machine-config-operator-74547568cd-pz6hl\" (UID: \"584006df-9736-4ed2-aeba-118587f909d7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pz6hl" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.230114 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/df487ddf-86fd-4433-a32d-6d41ffeed9bc-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-b89fm\" (UID: \"df487ddf-86fd-4433-a32d-6d41ffeed9bc\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-b89fm" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.230133 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/da82094a-cfed-404a-8fb9-2958b13ce78b-signing-key\") pod 
\"service-ca-9c57cc56f-hdgkf\" (UID: \"da82094a-cfed-404a-8fb9-2958b13ce78b\") " pod="openshift-service-ca/service-ca-9c57cc56f-hdgkf" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.230153 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/3311d92d-90da-42f5-acf3-3ec723c5edad-registration-dir\") pod \"csi-hostpathplugin-gw4c8\" (UID: \"3311d92d-90da-42f5-acf3-3ec723c5edad\") " pod="hostpath-provisioner/csi-hostpathplugin-gw4c8" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.230169 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/d473053a-d4df-40b8-a876-5582e1d8a702-default-certificate\") pod \"router-default-5444994796-wh6nt\" (UID: \"d473053a-d4df-40b8-a876-5582e1d8a702\") " pod="openshift-ingress/router-default-5444994796-wh6nt" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.230184 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4be12707-b1d1-4a30-bb2c-1af9e3d34d09-webhook-cert\") pod \"packageserver-d55dfcdfc-8fhp6\" (UID: \"4be12707-b1d1-4a30-bb2c-1af9e3d34d09\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8fhp6" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.230367 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/041ea905-9e91-41e3-9db6-820256d951aa-config-volume\") pod \"collect-profiles-29536800-n8kg4\" (UID: \"041ea905-9e91-41e3-9db6-820256d951aa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536800-n8kg4" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.230388 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/3311d92d-90da-42f5-acf3-3ec723c5edad-socket-dir\") pod \"csi-hostpathplugin-gw4c8\" (UID: \"3311d92d-90da-42f5-acf3-3ec723c5edad\") " pod="hostpath-provisioner/csi-hostpathplugin-gw4c8" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.230404 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b883e3e8-e6d7-4402-816f-033a0668f6eb-cert\") pod \"ingress-canary-rfmxx\" (UID: \"b883e3e8-e6d7-4402-816f-033a0668f6eb\") " pod="openshift-ingress-canary/ingress-canary-rfmxx" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.230420 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9kthl\" (UniqueName: \"kubernetes.io/projected/88dc3209-64e3-47ef-b1f0-e2aeddfe8ece-kube-api-access-9kthl\") pod \"dns-default-srljc\" (UID: \"88dc3209-64e3-47ef-b1f0-e2aeddfe8ece\") " pod="openshift-dns/dns-default-srljc" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.230463 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ft756\" (UniqueName: \"kubernetes.io/projected/cb320ef7-4518-4b01-b1bd-13a60749cac4-kube-api-access-ft756\") pod \"openshift-controller-manager-operator-756b6f6bc6-2x2lh\" (UID: \"cb320ef7-4518-4b01-b1bd-13a60749cac4\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2x2lh" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.230479 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zctbn\" (UniqueName: \"kubernetes.io/projected/5c04971d-7bad-44c6-bd80-e27f65c8637f-kube-api-access-zctbn\") pod \"machine-approver-56656f9798-nfkdb\" (UID: \"5c04971d-7bad-44c6-bd80-e27f65c8637f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nfkdb" Feb 27 16:10:11 crc 
kubenswrapper[4830]: I0227 16:10:11.230494 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c235c0e5-a6f8-45d8-83e1-91be0d32ac19-metrics-tls\") pod \"dns-operator-744455d44c-n6xx6\" (UID: \"c235c0e5-a6f8-45d8-83e1-91be0d32ac19\") " pod="openshift-dns-operator/dns-operator-744455d44c-n6xx6" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.231269 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/df487ddf-86fd-4433-a32d-6d41ffeed9bc-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-b89fm\" (UID: \"df487ddf-86fd-4433-a32d-6d41ffeed9bc\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-b89fm" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.231299 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jpxx8\" (UniqueName: \"kubernetes.io/projected/a235e26d-f41e-406d-992e-3dfb44246bdd-kube-api-access-jpxx8\") pod \"machine-config-controller-84d6567774-5shdf\" (UID: \"a235e26d-f41e-406d-992e-3dfb44246bdd\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5shdf" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.231324 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/3311d92d-90da-42f5-acf3-3ec723c5edad-mountpoint-dir\") pod \"csi-hostpathplugin-gw4c8\" (UID: \"3311d92d-90da-42f5-acf3-3ec723c5edad\") " pod="hostpath-provisioner/csi-hostpathplugin-gw4c8" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.231343 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/25e29ca7-7ff4-4263-8f4e-5a35a6c8118a-srv-cert\") pod 
\"olm-operator-6b444d44fb-n7thk\" (UID: \"25e29ca7-7ff4-4263-8f4e-5a35a6c8118a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-n7thk" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.232276 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57czv\" (UniqueName: \"kubernetes.io/projected/da82094a-cfed-404a-8fb9-2958b13ce78b-kube-api-access-57czv\") pod \"service-ca-9c57cc56f-hdgkf\" (UID: \"da82094a-cfed-404a-8fb9-2958b13ce78b\") " pod="openshift-service-ca/service-ca-9c57cc56f-hdgkf" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.232301 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6de44150-41d6-426a-92f4-d29fb3ee1afe-etcd-client\") pod \"etcd-operator-b45778765-ghjwl\" (UID: \"6de44150-41d6-426a-92f4-d29fb3ee1afe\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ghjwl" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.232340 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7932070f-5985-4e34-84ff-0af75e044581-config\") pod \"kube-controller-manager-operator-78b949d7b-7stx8\" (UID: \"7932070f-5985-4e34-84ff-0af75e044581\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7stx8" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.232358 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/41bce4eb-4367-4dee-9c26-df8e0a1e4ea8-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-hlg9d\" (UID: \"41bce4eb-4367-4dee-9c26-df8e0a1e4ea8\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hlg9d" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 
16:10:11.232376 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/910e8c41-1fdf-4f16-9902-532e21fe81ab-metrics-tls\") pod \"ingress-operator-5b745b69d9-wpzdz\" (UID: \"910e8c41-1fdf-4f16-9902-532e21fe81ab\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wpzdz" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.232395 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0a2a2ed5-abaa-4df6-b762-56bb964fbbca-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-z5n7d\" (UID: \"0a2a2ed5-abaa-4df6-b762-56bb964fbbca\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-z5n7d" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.232411 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/910e8c41-1fdf-4f16-9902-532e21fe81ab-trusted-ca\") pod \"ingress-operator-5b745b69d9-wpzdz\" (UID: \"910e8c41-1fdf-4f16-9902-532e21fe81ab\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wpzdz" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.232475 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb320ef7-4518-4b01-b1bd-13a60749cac4-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-2x2lh\" (UID: \"cb320ef7-4518-4b01-b1bd-13a60749cac4\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2x2lh" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.233505 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c04971d-7bad-44c6-bd80-e27f65c8637f-config\") pod \"machine-approver-56656f9798-nfkdb\" (UID: 
\"5c04971d-7bad-44c6-bd80-e27f65c8637f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nfkdb" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.233857 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7932070f-5985-4e34-84ff-0af75e044581-config\") pod \"kube-controller-manager-operator-78b949d7b-7stx8\" (UID: \"7932070f-5985-4e34-84ff-0af75e044581\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7stx8" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.234122 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7932070f-5985-4e34-84ff-0af75e044581-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-7stx8\" (UID: \"7932070f-5985-4e34-84ff-0af75e044581\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7stx8" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.234127 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cb320ef7-4518-4b01-b1bd-13a60749cac4-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-2x2lh\" (UID: \"cb320ef7-4518-4b01-b1bd-13a60749cac4\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2x2lh" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.234181 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/627c853d-8a30-4a46-a190-dd490a39aa35-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-c75pf\" (UID: \"627c853d-8a30-4a46-a190-dd490a39aa35\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-c75pf" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.234292 4830 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/449e4d4d-4a51-4625-801a-980a93398439-node-bootstrap-token\") pod \"machine-config-server-dwgx7\" (UID: \"449e4d4d-4a51-4625-801a-980a93398439\") " pod="openshift-machine-config-operator/machine-config-server-dwgx7" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.234867 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/4be12707-b1d1-4a30-bb2c-1af9e3d34d09-tmpfs\") pod \"packageserver-d55dfcdfc-8fhp6\" (UID: \"4be12707-b1d1-4a30-bb2c-1af9e3d34d09\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8fhp6" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.234944 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/910e8c41-1fdf-4f16-9902-532e21fe81ab-bound-sa-token\") pod \"ingress-operator-5b745b69d9-wpzdz\" (UID: \"910e8c41-1fdf-4f16-9902-532e21fe81ab\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wpzdz" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.235001 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/3311d92d-90da-42f5-acf3-3ec723c5edad-plugins-dir\") pod \"csi-hostpathplugin-gw4c8\" (UID: \"3311d92d-90da-42f5-acf3-3ec723c5edad\") " pod="hostpath-provisioner/csi-hostpathplugin-gw4c8" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.235084 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vt26r\" (UniqueName: \"kubernetes.io/projected/41bce4eb-4367-4dee-9c26-df8e0a1e4ea8-kube-api-access-vt26r\") pod \"package-server-manager-789f6589d5-hlg9d\" (UID: \"41bce4eb-4367-4dee-9c26-df8e0a1e4ea8\") " 
pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hlg9d" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.235142 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8dxs\" (UniqueName: \"kubernetes.io/projected/041ea905-9e91-41e3-9db6-820256d951aa-kube-api-access-k8dxs\") pod \"collect-profiles-29536800-n8kg4\" (UID: \"041ea905-9e91-41e3-9db6-820256d951aa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536800-n8kg4" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.235173 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6nrn\" (UniqueName: \"kubernetes.io/projected/1eb064bc-39af-405a-bdbf-665e31fa07c3-kube-api-access-v6nrn\") pod \"auto-csr-approver-29536810-bc446\" (UID: \"1eb064bc-39af-405a-bdbf-665e31fa07c3\") " pod="openshift-infra/auto-csr-approver-29536810-bc446" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.235216 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/041ea905-9e91-41e3-9db6-820256d951aa-secret-volume\") pod \"collect-profiles-29536800-n8kg4\" (UID: \"041ea905-9e91-41e3-9db6-820256d951aa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536800-n8kg4" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.235251 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a235e26d-f41e-406d-992e-3dfb44246bdd-proxy-tls\") pod \"machine-config-controller-84d6567774-5shdf\" (UID: \"a235e26d-f41e-406d-992e-3dfb44246bdd\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5shdf" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.235289 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-lph79\" (UniqueName: \"kubernetes.io/projected/c235c0e5-a6f8-45d8-83e1-91be0d32ac19-kube-api-access-lph79\") pod \"dns-operator-744455d44c-n6xx6\" (UID: \"c235c0e5-a6f8-45d8-83e1-91be0d32ac19\") " pod="openshift-dns-operator/dns-operator-744455d44c-n6xx6" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.235319 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d473053a-d4df-40b8-a876-5582e1d8a702-metrics-certs\") pod \"router-default-5444994796-wh6nt\" (UID: \"d473053a-d4df-40b8-a876-5582e1d8a702\") " pod="openshift-ingress/router-default-5444994796-wh6nt" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.235335 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6de44150-41d6-426a-92f4-d29fb3ee1afe-serving-cert\") pod \"etcd-operator-b45778765-ghjwl\" (UID: \"6de44150-41d6-426a-92f4-d29fb3ee1afe\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ghjwl" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.236136 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5c04971d-7bad-44c6-bd80-e27f65c8637f-auth-proxy-config\") pod \"machine-approver-56656f9798-nfkdb\" (UID: \"5c04971d-7bad-44c6-bd80-e27f65c8637f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nfkdb" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.236397 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/5c04971d-7bad-44c6-bd80-e27f65c8637f-machine-approver-tls\") pod \"machine-approver-56656f9798-nfkdb\" (UID: \"5c04971d-7bad-44c6-bd80-e27f65c8637f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nfkdb" Feb 27 16:10:11 crc 
kubenswrapper[4830]: I0227 16:10:11.236718 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e00ea89f-b3e4-44ed-9348-5cd609b9c563-serving-cert\") pod \"service-ca-operator-777779d784-xk2qk\" (UID: \"e00ea89f-b3e4-44ed-9348-5cd609b9c563\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-xk2qk" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.236815 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e98d0941-0faf-4719-88a1-ff04ca46eece-trusted-ca\") pod \"image-registry-697d97f7c8-9gfr4\" (UID: \"e98d0941-0faf-4719-88a1-ff04ca46eece\") " pod="openshift-image-registry/image-registry-697d97f7c8-9gfr4" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.237630 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb320ef7-4518-4b01-b1bd-13a60749cac4-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-2x2lh\" (UID: \"cb320ef7-4518-4b01-b1bd-13a60749cac4\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2x2lh" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.244514 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cb320ef7-4518-4b01-b1bd-13a60749cac4-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-2x2lh\" (UID: \"cb320ef7-4518-4b01-b1bd-13a60749cac4\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2x2lh" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.244673 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0a2a2ed5-abaa-4df6-b762-56bb964fbbca-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-z5n7d\" (UID: 
\"0a2a2ed5-abaa-4df6-b762-56bb964fbbca\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-z5n7d" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.248502 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/449e4d4d-4a51-4625-801a-980a93398439-certs\") pod \"machine-config-server-dwgx7\" (UID: \"449e4d4d-4a51-4625-801a-980a93398439\") " pod="openshift-machine-config-operator/machine-config-server-dwgx7" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.248823 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c235c0e5-a6f8-45d8-83e1-91be0d32ac19-metrics-tls\") pod \"dns-operator-744455d44c-n6xx6\" (UID: \"c235c0e5-a6f8-45d8-83e1-91be0d32ac19\") " pod="openshift-dns-operator/dns-operator-744455d44c-n6xx6" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.251860 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/e98d0941-0faf-4719-88a1-ff04ca46eece-installation-pull-secrets\") pod \"image-registry-697d97f7c8-9gfr4\" (UID: \"e98d0941-0faf-4719-88a1-ff04ca46eece\") " pod="openshift-image-registry/image-registry-697d97f7c8-9gfr4" Feb 27 16:10:11 crc kubenswrapper[4830]: W0227 16:10:11.260759 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb0274d4b_eb80_4321_a4c1_6848c65bc32e.slice/crio-2831a98beea885a1209f1e3f51c915114b3408a6c80adb42d073b8b4123e08d3 WatchSource:0}: Error finding container 2831a98beea885a1209f1e3f51c915114b3408a6c80adb42d073b8b4123e08d3: Status 404 returned error can't find the container with id 2831a98beea885a1209f1e3f51c915114b3408a6c80adb42d073b8b4123e08d3 Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.269797 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-5mm8b"] Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.285240 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e98d0941-0faf-4719-88a1-ff04ca46eece-bound-sa-token\") pod \"image-registry-697d97f7c8-9gfr4\" (UID: \"e98d0941-0faf-4719-88a1-ff04ca46eece\") " pod="openshift-image-registry/image-registry-697d97f7c8-9gfr4" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.296308 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kgdlg" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.301020 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4whrx\" (UniqueName: \"kubernetes.io/projected/e98d0941-0faf-4719-88a1-ff04ca46eece-kube-api-access-4whrx\") pod \"image-registry-697d97f7c8-9gfr4\" (UID: \"e98d0941-0faf-4719-88a1-ff04ca46eece\") " pod="openshift-image-registry/image-registry-697d97f7c8-9gfr4" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.318826 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xbwz7\" (UniqueName: \"kubernetes.io/projected/e00ea89f-b3e4-44ed-9348-5cd609b9c563-kube-api-access-xbwz7\") pod \"service-ca-operator-777779d784-xk2qk\" (UID: \"e00ea89f-b3e4-44ed-9348-5cd609b9c563\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-xk2qk" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.336771 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.336986 4830 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/584006df-9736-4ed2-aeba-118587f909d7-auth-proxy-config\") pod \"machine-config-operator-74547568cd-pz6hl\" (UID: \"584006df-9736-4ed2-aeba-118587f909d7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pz6hl" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.337014 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qfhxd\" (UniqueName: \"kubernetes.io/projected/627c853d-8a30-4a46-a190-dd490a39aa35-kube-api-access-qfhxd\") pod \"multus-admission-controller-857f4d67dd-c75pf\" (UID: \"627c853d-8a30-4a46-a190-dd490a39aa35\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-c75pf" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.337032 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/6de44150-41d6-426a-92f4-d29fb3ee1afe-etcd-ca\") pod \"etcd-operator-b45778765-ghjwl\" (UID: \"6de44150-41d6-426a-92f4-d29fb3ee1afe\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ghjwl" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.337049 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ffjz5\" (UniqueName: \"kubernetes.io/projected/25e29ca7-7ff4-4263-8f4e-5a35a6c8118a-kube-api-access-ffjz5\") pod \"olm-operator-6b444d44fb-n7thk\" (UID: \"25e29ca7-7ff4-4263-8f4e-5a35a6c8118a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-n7thk" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.337064 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/88dc3209-64e3-47ef-b1f0-e2aeddfe8ece-config-volume\") pod \"dns-default-srljc\" (UID: \"88dc3209-64e3-47ef-b1f0-e2aeddfe8ece\") " 
pod="openshift-dns/dns-default-srljc" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.337084 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/61f22a16-1565-425a-914d-ec0d5a5c1902-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-8lclt\" (UID: \"61f22a16-1565-425a-914d-ec0d5a5c1902\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8lclt" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.337117 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/584006df-9736-4ed2-aeba-118587f909d7-proxy-tls\") pod \"machine-config-operator-74547568cd-pz6hl\" (UID: \"584006df-9736-4ed2-aeba-118587f909d7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pz6hl" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.337132 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df487ddf-86fd-4433-a32d-6d41ffeed9bc-config\") pod \"kube-apiserver-operator-766d6c64bb-b89fm\" (UID: \"df487ddf-86fd-4433-a32d-6d41ffeed9bc\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-b89fm" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.337149 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d48qd\" (UniqueName: \"kubernetes.io/projected/584006df-9736-4ed2-aeba-118587f909d7-kube-api-access-d48qd\") pod \"machine-config-operator-74547568cd-pz6hl\" (UID: \"584006df-9736-4ed2-aeba-118587f909d7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pz6hl" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.337172 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-phjjp\" 
(UniqueName: \"kubernetes.io/projected/61f22a16-1565-425a-914d-ec0d5a5c1902-kube-api-access-phjjp\") pod \"control-plane-machine-set-operator-78cbb6b69f-8lclt\" (UID: \"61f22a16-1565-425a-914d-ec0d5a5c1902\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8lclt" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.337214 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/584006df-9736-4ed2-aeba-118587f909d7-images\") pod \"machine-config-operator-74547568cd-pz6hl\" (UID: \"584006df-9736-4ed2-aeba-118587f909d7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pz6hl" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.337229 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/df487ddf-86fd-4433-a32d-6d41ffeed9bc-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-b89fm\" (UID: \"df487ddf-86fd-4433-a32d-6d41ffeed9bc\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-b89fm" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.337249 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/da82094a-cfed-404a-8fb9-2958b13ce78b-signing-key\") pod \"service-ca-9c57cc56f-hdgkf\" (UID: \"da82094a-cfed-404a-8fb9-2958b13ce78b\") " pod="openshift-service-ca/service-ca-9c57cc56f-hdgkf" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.337264 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/3311d92d-90da-42f5-acf3-3ec723c5edad-registration-dir\") pod \"csi-hostpathplugin-gw4c8\" (UID: \"3311d92d-90da-42f5-acf3-3ec723c5edad\") " pod="hostpath-provisioner/csi-hostpathplugin-gw4c8" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.337280 4830 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/d473053a-d4df-40b8-a876-5582e1d8a702-default-certificate\") pod \"router-default-5444994796-wh6nt\" (UID: \"d473053a-d4df-40b8-a876-5582e1d8a702\") " pod="openshift-ingress/router-default-5444994796-wh6nt" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.337296 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4be12707-b1d1-4a30-bb2c-1af9e3d34d09-webhook-cert\") pod \"packageserver-d55dfcdfc-8fhp6\" (UID: \"4be12707-b1d1-4a30-bb2c-1af9e3d34d09\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8fhp6" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.337310 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/041ea905-9e91-41e3-9db6-820256d951aa-config-volume\") pod \"collect-profiles-29536800-n8kg4\" (UID: \"041ea905-9e91-41e3-9db6-820256d951aa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536800-n8kg4" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.337327 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/3311d92d-90da-42f5-acf3-3ec723c5edad-socket-dir\") pod \"csi-hostpathplugin-gw4c8\" (UID: \"3311d92d-90da-42f5-acf3-3ec723c5edad\") " pod="hostpath-provisioner/csi-hostpathplugin-gw4c8" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.337344 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b883e3e8-e6d7-4402-816f-033a0668f6eb-cert\") pod \"ingress-canary-rfmxx\" (UID: \"b883e3e8-e6d7-4402-816f-033a0668f6eb\") " pod="openshift-ingress-canary/ingress-canary-rfmxx" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.337359 4830 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9kthl\" (UniqueName: \"kubernetes.io/projected/88dc3209-64e3-47ef-b1f0-e2aeddfe8ece-kube-api-access-9kthl\") pod \"dns-default-srljc\" (UID: \"88dc3209-64e3-47ef-b1f0-e2aeddfe8ece\") " pod="openshift-dns/dns-default-srljc" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.337391 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jpxx8\" (UniqueName: \"kubernetes.io/projected/a235e26d-f41e-406d-992e-3dfb44246bdd-kube-api-access-jpxx8\") pod \"machine-config-controller-84d6567774-5shdf\" (UID: \"a235e26d-f41e-406d-992e-3dfb44246bdd\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5shdf" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.337409 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/3311d92d-90da-42f5-acf3-3ec723c5edad-mountpoint-dir\") pod \"csi-hostpathplugin-gw4c8\" (UID: \"3311d92d-90da-42f5-acf3-3ec723c5edad\") " pod="hostpath-provisioner/csi-hostpathplugin-gw4c8" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.337428 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/df487ddf-86fd-4433-a32d-6d41ffeed9bc-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-b89fm\" (UID: \"df487ddf-86fd-4433-a32d-6d41ffeed9bc\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-b89fm" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.337452 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/25e29ca7-7ff4-4263-8f4e-5a35a6c8118a-srv-cert\") pod \"olm-operator-6b444d44fb-n7thk\" (UID: \"25e29ca7-7ff4-4263-8f4e-5a35a6c8118a\") " 
pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-n7thk" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.337476 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57czv\" (UniqueName: \"kubernetes.io/projected/da82094a-cfed-404a-8fb9-2958b13ce78b-kube-api-access-57czv\") pod \"service-ca-9c57cc56f-hdgkf\" (UID: \"da82094a-cfed-404a-8fb9-2958b13ce78b\") " pod="openshift-service-ca/service-ca-9c57cc56f-hdgkf" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.337497 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6de44150-41d6-426a-92f4-d29fb3ee1afe-etcd-client\") pod \"etcd-operator-b45778765-ghjwl\" (UID: \"6de44150-41d6-426a-92f4-d29fb3ee1afe\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ghjwl" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.337518 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/910e8c41-1fdf-4f16-9902-532e21fe81ab-metrics-tls\") pod \"ingress-operator-5b745b69d9-wpzdz\" (UID: \"910e8c41-1fdf-4f16-9902-532e21fe81ab\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wpzdz" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.337540 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/41bce4eb-4367-4dee-9c26-df8e0a1e4ea8-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-hlg9d\" (UID: \"41bce4eb-4367-4dee-9c26-df8e0a1e4ea8\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hlg9d" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.337564 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/910e8c41-1fdf-4f16-9902-532e21fe81ab-trusted-ca\") pod \"ingress-operator-5b745b69d9-wpzdz\" (UID: \"910e8c41-1fdf-4f16-9902-532e21fe81ab\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wpzdz" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.337584 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/627c853d-8a30-4a46-a190-dd490a39aa35-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-c75pf\" (UID: \"627c853d-8a30-4a46-a190-dd490a39aa35\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-c75pf" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.337599 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/4be12707-b1d1-4a30-bb2c-1af9e3d34d09-tmpfs\") pod \"packageserver-d55dfcdfc-8fhp6\" (UID: \"4be12707-b1d1-4a30-bb2c-1af9e3d34d09\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8fhp6" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.337614 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/910e8c41-1fdf-4f16-9902-532e21fe81ab-bound-sa-token\") pod \"ingress-operator-5b745b69d9-wpzdz\" (UID: \"910e8c41-1fdf-4f16-9902-532e21fe81ab\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wpzdz" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.337628 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/3311d92d-90da-42f5-acf3-3ec723c5edad-plugins-dir\") pod \"csi-hostpathplugin-gw4c8\" (UID: \"3311d92d-90da-42f5-acf3-3ec723c5edad\") " pod="hostpath-provisioner/csi-hostpathplugin-gw4c8" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.337644 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-vt26r\" (UniqueName: \"kubernetes.io/projected/41bce4eb-4367-4dee-9c26-df8e0a1e4ea8-kube-api-access-vt26r\") pod \"package-server-manager-789f6589d5-hlg9d\" (UID: \"41bce4eb-4367-4dee-9c26-df8e0a1e4ea8\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hlg9d" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.337661 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k8dxs\" (UniqueName: \"kubernetes.io/projected/041ea905-9e91-41e3-9db6-820256d951aa-kube-api-access-k8dxs\") pod \"collect-profiles-29536800-n8kg4\" (UID: \"041ea905-9e91-41e3-9db6-820256d951aa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536800-n8kg4" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.337678 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v6nrn\" (UniqueName: \"kubernetes.io/projected/1eb064bc-39af-405a-bdbf-665e31fa07c3-kube-api-access-v6nrn\") pod \"auto-csr-approver-29536810-bc446\" (UID: \"1eb064bc-39af-405a-bdbf-665e31fa07c3\") " pod="openshift-infra/auto-csr-approver-29536810-bc446" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.337692 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/041ea905-9e91-41e3-9db6-820256d951aa-secret-volume\") pod \"collect-profiles-29536800-n8kg4\" (UID: \"041ea905-9e91-41e3-9db6-820256d951aa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536800-n8kg4" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.337708 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a235e26d-f41e-406d-992e-3dfb44246bdd-proxy-tls\") pod \"machine-config-controller-84d6567774-5shdf\" (UID: \"a235e26d-f41e-406d-992e-3dfb44246bdd\") " 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5shdf" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.337711 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wv9vr\" (UniqueName: \"kubernetes.io/projected/449e4d4d-4a51-4625-801a-980a93398439-kube-api-access-wv9vr\") pod \"machine-config-server-dwgx7\" (UID: \"449e4d4d-4a51-4625-801a-980a93398439\") " pod="openshift-machine-config-operator/machine-config-server-dwgx7" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.337726 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d473053a-d4df-40b8-a876-5582e1d8a702-metrics-certs\") pod \"router-default-5444994796-wh6nt\" (UID: \"d473053a-d4df-40b8-a876-5582e1d8a702\") " pod="openshift-ingress/router-default-5444994796-wh6nt" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.337742 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6de44150-41d6-426a-92f4-d29fb3ee1afe-serving-cert\") pod \"etcd-operator-b45778765-ghjwl\" (UID: \"6de44150-41d6-426a-92f4-d29fb3ee1afe\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ghjwl" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.337760 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ttpw6\" (UniqueName: \"kubernetes.io/projected/d473053a-d4df-40b8-a876-5582e1d8a702-kube-api-access-ttpw6\") pod \"router-default-5444994796-wh6nt\" (UID: \"d473053a-d4df-40b8-a876-5582e1d8a702\") " pod="openshift-ingress/router-default-5444994796-wh6nt" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.337777 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c873a2ac-f7d3-4bea-ad09-b16891a1edf6-profile-collector-cert\") pod 
\"catalog-operator-68c6474976-bfz6f\" (UID: \"c873a2ac-f7d3-4bea-ad09-b16891a1edf6\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-bfz6f" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.337795 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4be12707-b1d1-4a30-bb2c-1af9e3d34d09-apiservice-cert\") pod \"packageserver-d55dfcdfc-8fhp6\" (UID: \"4be12707-b1d1-4a30-bb2c-1af9e3d34d09\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8fhp6" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.337812 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/da82094a-cfed-404a-8fb9-2958b13ce78b-signing-cabundle\") pod \"service-ca-9c57cc56f-hdgkf\" (UID: \"da82094a-cfed-404a-8fb9-2958b13ce78b\") " pod="openshift-service-ca/service-ca-9c57cc56f-hdgkf" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.337827 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/3311d92d-90da-42f5-acf3-3ec723c5edad-csi-data-dir\") pod \"csi-hostpathplugin-gw4c8\" (UID: \"3311d92d-90da-42f5-acf3-3ec723c5edad\") " pod="hostpath-provisioner/csi-hostpathplugin-gw4c8" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.337844 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2m99n\" (UniqueName: \"kubernetes.io/projected/b883e3e8-e6d7-4402-816f-033a0668f6eb-kube-api-access-2m99n\") pod \"ingress-canary-rfmxx\" (UID: \"b883e3e8-e6d7-4402-816f-033a0668f6eb\") " pod="openshift-ingress-canary/ingress-canary-rfmxx" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.337859 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/88dc3209-64e3-47ef-b1f0-e2aeddfe8ece-metrics-tls\") pod \"dns-default-srljc\" (UID: \"88dc3209-64e3-47ef-b1f0-e2aeddfe8ece\") " pod="openshift-dns/dns-default-srljc" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.337874 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pg577\" (UniqueName: \"kubernetes.io/projected/910e8c41-1fdf-4f16-9902-532e21fe81ab-kube-api-access-pg577\") pod \"ingress-operator-5b745b69d9-wpzdz\" (UID: \"910e8c41-1fdf-4f16-9902-532e21fe81ab\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wpzdz" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.337888 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d473053a-d4df-40b8-a876-5582e1d8a702-service-ca-bundle\") pod \"router-default-5444994796-wh6nt\" (UID: \"d473053a-d4df-40b8-a876-5582e1d8a702\") " pod="openshift-ingress/router-default-5444994796-wh6nt" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.337904 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a235e26d-f41e-406d-992e-3dfb44246bdd-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-5shdf\" (UID: \"a235e26d-f41e-406d-992e-3dfb44246bdd\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5shdf" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.337921 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6de44150-41d6-426a-92f4-d29fb3ee1afe-config\") pod \"etcd-operator-b45778765-ghjwl\" (UID: \"6de44150-41d6-426a-92f4-d29fb3ee1afe\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ghjwl" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.337936 4830 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/6de44150-41d6-426a-92f4-d29fb3ee1afe-etcd-service-ca\") pod \"etcd-operator-b45778765-ghjwl\" (UID: \"6de44150-41d6-426a-92f4-d29fb3ee1afe\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ghjwl" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.337980 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w8kft\" (UniqueName: \"kubernetes.io/projected/4be12707-b1d1-4a30-bb2c-1af9e3d34d09-kube-api-access-w8kft\") pod \"packageserver-d55dfcdfc-8fhp6\" (UID: \"4be12707-b1d1-4a30-bb2c-1af9e3d34d09\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8fhp6" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.337997 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l4sfz\" (UniqueName: \"kubernetes.io/projected/3311d92d-90da-42f5-acf3-3ec723c5edad-kube-api-access-l4sfz\") pod \"csi-hostpathplugin-gw4c8\" (UID: \"3311d92d-90da-42f5-acf3-3ec723c5edad\") " pod="hostpath-provisioner/csi-hostpathplugin-gw4c8" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.338011 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c873a2ac-f7d3-4bea-ad09-b16891a1edf6-srv-cert\") pod \"catalog-operator-68c6474976-bfz6f\" (UID: \"c873a2ac-f7d3-4bea-ad09-b16891a1edf6\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-bfz6f" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.338028 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hj4s6\" (UniqueName: \"kubernetes.io/projected/6de44150-41d6-426a-92f4-d29fb3ee1afe-kube-api-access-hj4s6\") pod \"etcd-operator-b45778765-ghjwl\" (UID: \"6de44150-41d6-426a-92f4-d29fb3ee1afe\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ghjwl" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 
16:10:11.338043 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/d473053a-d4df-40b8-a876-5582e1d8a702-stats-auth\") pod \"router-default-5444994796-wh6nt\" (UID: \"d473053a-d4df-40b8-a876-5582e1d8a702\") " pod="openshift-ingress/router-default-5444994796-wh6nt" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.338059 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h8rfn\" (UniqueName: \"kubernetes.io/projected/c873a2ac-f7d3-4bea-ad09-b16891a1edf6-kube-api-access-h8rfn\") pod \"catalog-operator-68c6474976-bfz6f\" (UID: \"c873a2ac-f7d3-4bea-ad09-b16891a1edf6\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-bfz6f" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.338074 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/25e29ca7-7ff4-4263-8f4e-5a35a6c8118a-profile-collector-cert\") pod \"olm-operator-6b444d44fb-n7thk\" (UID: \"25e29ca7-7ff4-4263-8f4e-5a35a6c8118a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-n7thk" Feb 27 16:10:11 crc kubenswrapper[4830]: E0227 16:10:11.338886 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 16:10:11.83886973 +0000 UTC m=+207.928142193 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.339222 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d473053a-d4df-40b8-a876-5582e1d8a702-service-ca-bundle\") pod \"router-default-5444994796-wh6nt\" (UID: \"d473053a-d4df-40b8-a876-5582e1d8a702\") " pod="openshift-ingress/router-default-5444994796-wh6nt" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.339425 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/584006df-9736-4ed2-aeba-118587f909d7-auth-proxy-config\") pod \"machine-config-operator-74547568cd-pz6hl\" (UID: \"584006df-9736-4ed2-aeba-118587f909d7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pz6hl" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.340563 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/25e29ca7-7ff4-4263-8f4e-5a35a6c8118a-profile-collector-cert\") pod \"olm-operator-6b444d44fb-n7thk\" (UID: \"25e29ca7-7ff4-4263-8f4e-5a35a6c8118a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-n7thk" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.341459 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6de44150-41d6-426a-92f4-d29fb3ee1afe-config\") pod \"etcd-operator-b45778765-ghjwl\" (UID: 
\"6de44150-41d6-426a-92f4-d29fb3ee1afe\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ghjwl" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.341802 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/6de44150-41d6-426a-92f4-d29fb3ee1afe-etcd-ca\") pod \"etcd-operator-b45778765-ghjwl\" (UID: \"6de44150-41d6-426a-92f4-d29fb3ee1afe\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ghjwl" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.341845 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/6de44150-41d6-426a-92f4-d29fb3ee1afe-etcd-service-ca\") pod \"etcd-operator-b45778765-ghjwl\" (UID: \"6de44150-41d6-426a-92f4-d29fb3ee1afe\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ghjwl" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.342630 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/3311d92d-90da-42f5-acf3-3ec723c5edad-registration-dir\") pod \"csi-hostpathplugin-gw4c8\" (UID: \"3311d92d-90da-42f5-acf3-3ec723c5edad\") " pod="hostpath-provisioner/csi-hostpathplugin-gw4c8" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.343271 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6de44150-41d6-426a-92f4-d29fb3ee1afe-serving-cert\") pod \"etcd-operator-b45778765-ghjwl\" (UID: \"6de44150-41d6-426a-92f4-d29fb3ee1afe\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ghjwl" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.343361 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/3311d92d-90da-42f5-acf3-3ec723c5edad-csi-data-dir\") pod \"csi-hostpathplugin-gw4c8\" (UID: \"3311d92d-90da-42f5-acf3-3ec723c5edad\") " 
pod="hostpath-provisioner/csi-hostpathplugin-gw4c8" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.343359 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/584006df-9736-4ed2-aeba-118587f909d7-images\") pod \"machine-config-operator-74547568cd-pz6hl\" (UID: \"584006df-9736-4ed2-aeba-118587f909d7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pz6hl" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.343457 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a235e26d-f41e-406d-992e-3dfb44246bdd-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-5shdf\" (UID: \"a235e26d-f41e-406d-992e-3dfb44246bdd\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5shdf" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.344987 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/88dc3209-64e3-47ef-b1f0-e2aeddfe8ece-config-volume\") pod \"dns-default-srljc\" (UID: \"88dc3209-64e3-47ef-b1f0-e2aeddfe8ece\") " pod="openshift-dns/dns-default-srljc" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.345125 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/da82094a-cfed-404a-8fb9-2958b13ce78b-signing-cabundle\") pod \"service-ca-9c57cc56f-hdgkf\" (UID: \"da82094a-cfed-404a-8fb9-2958b13ce78b\") " pod="openshift-service-ca/service-ca-9c57cc56f-hdgkf" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.345998 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df487ddf-86fd-4433-a32d-6d41ffeed9bc-config\") pod \"kube-apiserver-operator-766d6c64bb-b89fm\" (UID: \"df487ddf-86fd-4433-a32d-6d41ffeed9bc\") " 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-b89fm" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.346777 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b883e3e8-e6d7-4402-816f-033a0668f6eb-cert\") pod \"ingress-canary-rfmxx\" (UID: \"b883e3e8-e6d7-4402-816f-033a0668f6eb\") " pod="openshift-ingress-canary/ingress-canary-rfmxx" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.347936 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/df487ddf-86fd-4433-a32d-6d41ffeed9bc-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-b89fm\" (UID: \"df487ddf-86fd-4433-a32d-6d41ffeed9bc\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-b89fm" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.348114 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/da82094a-cfed-404a-8fb9-2958b13ce78b-signing-key\") pod \"service-ca-9c57cc56f-hdgkf\" (UID: \"da82094a-cfed-404a-8fb9-2958b13ce78b\") " pod="openshift-service-ca/service-ca-9c57cc56f-hdgkf" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.349268 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/910e8c41-1fdf-4f16-9902-532e21fe81ab-trusted-ca\") pod \"ingress-operator-5b745b69d9-wpzdz\" (UID: \"910e8c41-1fdf-4f16-9902-532e21fe81ab\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wpzdz" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.349312 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/d473053a-d4df-40b8-a876-5582e1d8a702-default-certificate\") pod \"router-default-5444994796-wh6nt\" (UID: \"d473053a-d4df-40b8-a876-5582e1d8a702\") " 
pod="openshift-ingress/router-default-5444994796-wh6nt" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.350166 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/3311d92d-90da-42f5-acf3-3ec723c5edad-mountpoint-dir\") pod \"csi-hostpathplugin-gw4c8\" (UID: \"3311d92d-90da-42f5-acf3-3ec723c5edad\") " pod="hostpath-provisioner/csi-hostpathplugin-gw4c8" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.350278 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/041ea905-9e91-41e3-9db6-820256d951aa-config-volume\") pod \"collect-profiles-29536800-n8kg4\" (UID: \"041ea905-9e91-41e3-9db6-820256d951aa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536800-n8kg4" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.350616 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/3311d92d-90da-42f5-acf3-3ec723c5edad-socket-dir\") pod \"csi-hostpathplugin-gw4c8\" (UID: \"3311d92d-90da-42f5-acf3-3ec723c5edad\") " pod="hostpath-provisioner/csi-hostpathplugin-gw4c8" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.350825 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/d473053a-d4df-40b8-a876-5582e1d8a702-stats-auth\") pod \"router-default-5444994796-wh6nt\" (UID: \"d473053a-d4df-40b8-a876-5582e1d8a702\") " pod="openshift-ingress/router-default-5444994796-wh6nt" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.351203 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/584006df-9736-4ed2-aeba-118587f909d7-proxy-tls\") pod \"machine-config-operator-74547568cd-pz6hl\" (UID: \"584006df-9736-4ed2-aeba-118587f909d7\") " 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pz6hl" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.351637 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/4be12707-b1d1-4a30-bb2c-1af9e3d34d09-tmpfs\") pod \"packageserver-d55dfcdfc-8fhp6\" (UID: \"4be12707-b1d1-4a30-bb2c-1af9e3d34d09\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8fhp6" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.352149 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/627c853d-8a30-4a46-a190-dd490a39aa35-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-c75pf\" (UID: \"627c853d-8a30-4a46-a190-dd490a39aa35\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-c75pf" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.352622 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/88dc3209-64e3-47ef-b1f0-e2aeddfe8ece-metrics-tls\") pod \"dns-default-srljc\" (UID: \"88dc3209-64e3-47ef-b1f0-e2aeddfe8ece\") " pod="openshift-dns/dns-default-srljc" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.352880 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d473053a-d4df-40b8-a876-5582e1d8a702-metrics-certs\") pod \"router-default-5444994796-wh6nt\" (UID: \"d473053a-d4df-40b8-a876-5582e1d8a702\") " pod="openshift-ingress/router-default-5444994796-wh6nt" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.354076 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6de44150-41d6-426a-92f4-d29fb3ee1afe-etcd-client\") pod \"etcd-operator-b45778765-ghjwl\" (UID: \"6de44150-41d6-426a-92f4-d29fb3ee1afe\") " 
pod="openshift-etcd-operator/etcd-operator-b45778765-ghjwl" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.354358 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/3311d92d-90da-42f5-acf3-3ec723c5edad-plugins-dir\") pod \"csi-hostpathplugin-gw4c8\" (UID: \"3311d92d-90da-42f5-acf3-3ec723c5edad\") " pod="hostpath-provisioner/csi-hostpathplugin-gw4c8" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.354793 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/25e29ca7-7ff4-4263-8f4e-5a35a6c8118a-srv-cert\") pod \"olm-operator-6b444d44fb-n7thk\" (UID: \"25e29ca7-7ff4-4263-8f4e-5a35a6c8118a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-n7thk" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.356679 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4be12707-b1d1-4a30-bb2c-1af9e3d34d09-webhook-cert\") pod \"packageserver-d55dfcdfc-8fhp6\" (UID: \"4be12707-b1d1-4a30-bb2c-1af9e3d34d09\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8fhp6" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.358237 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/61f22a16-1565-425a-914d-ec0d5a5c1902-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-8lclt\" (UID: \"61f22a16-1565-425a-914d-ec0d5a5c1902\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8lclt" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.358258 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4be12707-b1d1-4a30-bb2c-1af9e3d34d09-apiservice-cert\") pod 
\"packageserver-d55dfcdfc-8fhp6\" (UID: \"4be12707-b1d1-4a30-bb2c-1af9e3d34d09\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8fhp6" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.358464 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7932070f-5985-4e34-84ff-0af75e044581-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-7stx8\" (UID: \"7932070f-5985-4e34-84ff-0af75e044581\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7stx8" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.358772 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/041ea905-9e91-41e3-9db6-820256d951aa-secret-volume\") pod \"collect-profiles-29536800-n8kg4\" (UID: \"041ea905-9e91-41e3-9db6-820256d951aa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536800-n8kg4" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.359560 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/41bce4eb-4367-4dee-9c26-df8e0a1e4ea8-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-hlg9d\" (UID: \"41bce4eb-4367-4dee-9c26-df8e0a1e4ea8\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hlg9d" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.362014 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a235e26d-f41e-406d-992e-3dfb44246bdd-proxy-tls\") pod \"machine-config-controller-84d6567774-5shdf\" (UID: \"a235e26d-f41e-406d-992e-3dfb44246bdd\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5shdf" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.362060 4830 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/910e8c41-1fdf-4f16-9902-532e21fe81ab-metrics-tls\") pod \"ingress-operator-5b745b69d9-wpzdz\" (UID: \"910e8c41-1fdf-4f16-9902-532e21fe81ab\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wpzdz" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.364070 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c873a2ac-f7d3-4bea-ad09-b16891a1edf6-profile-collector-cert\") pod \"catalog-operator-68c6474976-bfz6f\" (UID: \"c873a2ac-f7d3-4bea-ad09-b16891a1edf6\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-bfz6f" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.366019 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c873a2ac-f7d3-4bea-ad09-b16891a1edf6-srv-cert\") pod \"catalog-operator-68c6474976-bfz6f\" (UID: \"c873a2ac-f7d3-4bea-ad09-b16891a1edf6\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-bfz6f" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.368203 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-gb4pt"] Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.379455 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ft756\" (UniqueName: \"kubernetes.io/projected/cb320ef7-4518-4b01-b1bd-13a60749cac4-kube-api-access-ft756\") pod \"openshift-controller-manager-operator-756b6f6bc6-2x2lh\" (UID: \"cb320ef7-4518-4b01-b1bd-13a60749cac4\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2x2lh" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.397407 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zctbn\" 
(UniqueName: \"kubernetes.io/projected/5c04971d-7bad-44c6-bd80-e27f65c8637f-kube-api-access-zctbn\") pod \"machine-approver-56656f9798-nfkdb\" (UID: \"5c04971d-7bad-44c6-bd80-e27f65c8637f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nfkdb" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.415743 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-dwgx7" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.417182 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0a2a2ed5-abaa-4df6-b762-56bb964fbbca-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-z5n7d\" (UID: \"0a2a2ed5-abaa-4df6-b762-56bb964fbbca\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-z5n7d" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.440023 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9gfr4\" (UID: \"e98d0941-0faf-4719-88a1-ff04ca46eece\") " pod="openshift-image-registry/image-registry-697d97f7c8-9gfr4" Feb 27 16:10:11 crc kubenswrapper[4830]: E0227 16:10:11.440325 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 16:10:11.940312681 +0000 UTC m=+208.029585144 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9gfr4" (UID: "e98d0941-0faf-4719-88a1-ff04ca46eece") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.445371 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lph79\" (UniqueName: \"kubernetes.io/projected/c235c0e5-a6f8-45d8-83e1-91be0d32ac19-kube-api-access-lph79\") pod \"dns-operator-744455d44c-n6xx6\" (UID: \"c235c0e5-a6f8-45d8-83e1-91be0d32ac19\") " pod="openshift-dns-operator/dns-operator-744455d44c-n6xx6" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.451024 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-n6xx6" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.455475 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2x2lh" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.461346 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nfkdb" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.479452 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d48qd\" (UniqueName: \"kubernetes.io/projected/584006df-9736-4ed2-aeba-118587f909d7-kube-api-access-d48qd\") pod \"machine-config-operator-74547568cd-pz6hl\" (UID: \"584006df-9736-4ed2-aeba-118587f909d7\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pz6hl" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.500036 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-z5n7d" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.508934 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qfhxd\" (UniqueName: \"kubernetes.io/projected/627c853d-8a30-4a46-a190-dd490a39aa35-kube-api-access-qfhxd\") pod \"multus-admission-controller-857f4d67dd-c75pf\" (UID: \"627c853d-8a30-4a46-a190-dd490a39aa35\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-c75pf" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.523791 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7stx8" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.524808 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l4sfz\" (UniqueName: \"kubernetes.io/projected/3311d92d-90da-42f5-acf3-3ec723c5edad-kube-api-access-l4sfz\") pod \"csi-hostpathplugin-gw4c8\" (UID: \"3311d92d-90da-42f5-acf3-3ec723c5edad\") " pod="hostpath-provisioner/csi-hostpathplugin-gw4c8" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.541502 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pz6hl" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.543315 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 16:10:11 crc kubenswrapper[4830]: E0227 16:10:11.543683 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 16:10:12.043669662 +0000 UTC m=+208.132942125 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.551792 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-c75pf" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.560401 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ffjz5\" (UniqueName: \"kubernetes.io/projected/25e29ca7-7ff4-4263-8f4e-5a35a6c8118a-kube-api-access-ffjz5\") pod \"olm-operator-6b444d44fb-n7thk\" (UID: \"25e29ca7-7ff4-4263-8f4e-5a35a6c8118a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-n7thk" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.570735 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8kft\" (UniqueName: \"kubernetes.io/projected/4be12707-b1d1-4a30-bb2c-1af9e3d34d09-kube-api-access-w8kft\") pod \"packageserver-d55dfcdfc-8fhp6\" (UID: \"4be12707-b1d1-4a30-bb2c-1af9e3d34d09\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8fhp6" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.579861 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-xk2qk" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.601334 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-n7thk" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.604606 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-phjjp\" (UniqueName: \"kubernetes.io/projected/61f22a16-1565-425a-914d-ec0d5a5c1902-kube-api-access-phjjp\") pod \"control-plane-machine-set-operator-78cbb6b69f-8lclt\" (UID: \"61f22a16-1565-425a-914d-ec0d5a5c1902\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8lclt" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.607642 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8lclt" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.615420 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hj4s6\" (UniqueName: \"kubernetes.io/projected/6de44150-41d6-426a-92f4-d29fb3ee1afe-kube-api-access-hj4s6\") pod \"etcd-operator-b45778765-ghjwl\" (UID: \"6de44150-41d6-426a-92f4-d29fb3ee1afe\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ghjwl" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.622582 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2m99n\" (UniqueName: \"kubernetes.io/projected/b883e3e8-e6d7-4402-816f-033a0668f6eb-kube-api-access-2m99n\") pod \"ingress-canary-rfmxx\" (UID: \"b883e3e8-e6d7-4402-816f-033a0668f6eb\") " pod="openshift-ingress-canary/ingress-canary-rfmxx" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.644819 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9gfr4\" (UID: \"e98d0941-0faf-4719-88a1-ff04ca46eece\") " pod="openshift-image-registry/image-registry-697d97f7c8-9gfr4" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.645120 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8rfn\" (UniqueName: \"kubernetes.io/projected/c873a2ac-f7d3-4bea-ad09-b16891a1edf6-kube-api-access-h8rfn\") pod \"catalog-operator-68c6474976-bfz6f\" (UID: \"c873a2ac-f7d3-4bea-ad09-b16891a1edf6\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-bfz6f" Feb 27 16:10:11 crc kubenswrapper[4830]: E0227 16:10:11.645203 4830 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 16:10:12.145191745 +0000 UTC m=+208.234464208 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9gfr4" (UID: "e98d0941-0faf-4719-88a1-ff04ca46eece") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.647213 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-kgdlg"] Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.661026 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ttpw6\" (UniqueName: \"kubernetes.io/projected/d473053a-d4df-40b8-a876-5582e1d8a702-kube-api-access-ttpw6\") pod \"router-default-5444994796-wh6nt\" (UID: \"d473053a-d4df-40b8-a876-5582e1d8a702\") " pod="openshift-ingress/router-default-5444994796-wh6nt" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.674678 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-ghjwl" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.679749 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vt26r\" (UniqueName: \"kubernetes.io/projected/41bce4eb-4367-4dee-9c26-df8e0a1e4ea8-kube-api-access-vt26r\") pod \"package-server-manager-789f6589d5-hlg9d\" (UID: \"41bce4eb-4367-4dee-9c26-df8e0a1e4ea8\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hlg9d" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.683871 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-gw4c8" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.720037 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jpxx8\" (UniqueName: \"kubernetes.io/projected/a235e26d-f41e-406d-992e-3dfb44246bdd-kube-api-access-jpxx8\") pod \"machine-config-controller-84d6567774-5shdf\" (UID: \"a235e26d-f41e-406d-992e-3dfb44246bdd\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5shdf" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.722318 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-rfmxx" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.724195 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9kthl\" (UniqueName: \"kubernetes.io/projected/88dc3209-64e3-47ef-b1f0-e2aeddfe8ece-kube-api-access-9kthl\") pod \"dns-default-srljc\" (UID: \"88dc3209-64e3-47ef-b1f0-e2aeddfe8ece\") " pod="openshift-dns/dns-default-srljc" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.751588 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 16:10:11 crc kubenswrapper[4830]: E0227 16:10:11.752860 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 16:10:12.252841907 +0000 UTC m=+208.342114370 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.756069 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/df487ddf-86fd-4433-a32d-6d41ffeed9bc-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-b89fm\" (UID: \"df487ddf-86fd-4433-a32d-6d41ffeed9bc\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-b89fm" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.783617 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/910e8c41-1fdf-4f16-9902-532e21fe81ab-bound-sa-token\") pod \"ingress-operator-5b745b69d9-wpzdz\" (UID: \"910e8c41-1fdf-4f16-9902-532e21fe81ab\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wpzdz" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.788541 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57czv\" (UniqueName: \"kubernetes.io/projected/da82094a-cfed-404a-8fb9-2958b13ce78b-kube-api-access-57czv\") pod \"service-ca-9c57cc56f-hdgkf\" (UID: \"da82094a-cfed-404a-8fb9-2958b13ce78b\") " pod="openshift-service-ca/service-ca-9c57cc56f-hdgkf" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.809897 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v6nrn\" (UniqueName: \"kubernetes.io/projected/1eb064bc-39af-405a-bdbf-665e31fa07c3-kube-api-access-v6nrn\") pod \"auto-csr-approver-29536810-bc446\" (UID: 
\"1eb064bc-39af-405a-bdbf-665e31fa07c3\") " pod="openshift-infra/auto-csr-approver-29536810-bc446" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.813972 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-z5n7d"] Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.823927 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8dxs\" (UniqueName: \"kubernetes.io/projected/041ea905-9e91-41e3-9db6-820256d951aa-kube-api-access-k8dxs\") pod \"collect-profiles-29536800-n8kg4\" (UID: \"041ea905-9e91-41e3-9db6-820256d951aa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536800-n8kg4" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.838727 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pg577\" (UniqueName: \"kubernetes.io/projected/910e8c41-1fdf-4f16-9902-532e21fe81ab-kube-api-access-pg577\") pod \"ingress-operator-5b745b69d9-wpzdz\" (UID: \"910e8c41-1fdf-4f16-9902-532e21fe81ab\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wpzdz" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.843704 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5shdf" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.852589 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9gfr4\" (UID: \"e98d0941-0faf-4719-88a1-ff04ca46eece\") " pod="openshift-image-registry/image-registry-697d97f7c8-9gfr4" Feb 27 16:10:11 crc kubenswrapper[4830]: E0227 16:10:11.852920 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 16:10:12.352908274 +0000 UTC m=+208.442180737 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9gfr4" (UID: "e98d0941-0faf-4719-88a1-ff04ca46eece") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.859216 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-b89fm" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.865139 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8fhp6" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.873538 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-wh6nt" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.886281 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-hdgkf" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.894832 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wpzdz" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.900624 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-n6xx6"] Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.914997 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-bfz6f" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.922611 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2x2lh"] Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.923916 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hlg9d" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.930013 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29536800-n8kg4" Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.953616 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 16:10:11 crc kubenswrapper[4830]: E0227 16:10:11.954249 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 16:10:12.454221922 +0000 UTC m=+208.543494385 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.965208 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536810-bc446" Feb 27 16:10:11 crc kubenswrapper[4830]: W0227 16:10:11.973039 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcb320ef7_4518_4b01_b1bd_13a60749cac4.slice/crio-f03bb9a235dfa03807e25b70ce8a152ec064abf1b43119b3638e935b226e0053 WatchSource:0}: Error finding container f03bb9a235dfa03807e25b70ce8a152ec064abf1b43119b3638e935b226e0053: Status 404 returned error can't find the container with id f03bb9a235dfa03807e25b70ce8a152ec064abf1b43119b3638e935b226e0053 Feb 27 16:10:11 crc kubenswrapper[4830]: I0227 16:10:11.986059 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-xk2qk"] Feb 27 16:10:12 crc kubenswrapper[4830]: I0227 16:10:12.005682 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-srljc" Feb 27 16:10:12 crc kubenswrapper[4830]: I0227 16:10:12.008392 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-n7thk"] Feb 27 16:10:12 crc kubenswrapper[4830]: I0227 16:10:12.018780 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-q7c6j" event={"ID":"b0274d4b-eb80-4321-a4c1-6848c65bc32e","Type":"ContainerStarted","Data":"2831a98beea885a1209f1e3f51c915114b3408a6c80adb42d073b8b4123e08d3"} Feb 27 16:10:12 crc kubenswrapper[4830]: I0227 16:10:12.022742 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8lclt"] Feb 27 16:10:12 crc kubenswrapper[4830]: I0227 16:10:12.026222 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-dwgx7" 
event={"ID":"449e4d4d-4a51-4625-801a-980a93398439","Type":"ContainerStarted","Data":"8b0d756c2218f1b5eed40599ae0cfc8bc80c43bf83219c1c3eed5c1f7bcdc06a"} Feb 27 16:10:12 crc kubenswrapper[4830]: I0227 16:10:12.034014 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-kgdlg" event={"ID":"6ba2fe32-66e0-4bcd-a646-9d07c9a21c54","Type":"ContainerStarted","Data":"84a1b78d2e5c65edda02ed0388d31e2f4681d26995370d5089a20378d6d4e64a"} Feb 27 16:10:12 crc kubenswrapper[4830]: I0227 16:10:12.054010 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rvd8g" event={"ID":"af0d26af-5990-456b-a3bc-4ea4a14bbc25","Type":"ContainerStarted","Data":"4977fe55dd35186d4814d9bdf65689adee244080ab3461a31fa5dcd1a758596e"} Feb 27 16:10:12 crc kubenswrapper[4830]: I0227 16:10:12.055283 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-z5n7d" event={"ID":"0a2a2ed5-abaa-4df6-b762-56bb964fbbca","Type":"ContainerStarted","Data":"a263a8d8cd9afb3c7106d09de7276d60bd8eb77e794d2993045e178c15867a49"} Feb 27 16:10:12 crc kubenswrapper[4830]: I0227 16:10:12.055993 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9gfr4\" (UID: \"e98d0941-0faf-4719-88a1-ff04ca46eece\") " pod="openshift-image-registry/image-registry-697d97f7c8-9gfr4" Feb 27 16:10:12 crc kubenswrapper[4830]: E0227 16:10:12.056380 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-27 16:10:12.556363702 +0000 UTC m=+208.645636165 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9gfr4" (UID: "e98d0941-0faf-4719-88a1-ff04ca46eece") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:10:12 crc kubenswrapper[4830]: I0227 16:10:12.057288 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-45mg7" event={"ID":"32e984aa-8399-4cf1-8a4a-b36525c67e35","Type":"ContainerStarted","Data":"49054df53335758b3881b76c3a3c62d68b35f8674db1ebfbd73ee163d939df11"} Feb 27 16:10:12 crc kubenswrapper[4830]: I0227 16:10:12.071196 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-4dhxq" event={"ID":"1f30f03f-511a-4a29-beae-e3d6971a8c9e","Type":"ContainerStarted","Data":"b141c95318a5e67b3011273667892320219cb8b98bd670bbab711f837bcb857d"} Feb 27 16:10:12 crc kubenswrapper[4830]: I0227 16:10:12.071235 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-4dhxq" event={"ID":"1f30f03f-511a-4a29-beae-e3d6971a8c9e","Type":"ContainerStarted","Data":"f77d87c6ec33d2c2d1c33ef873aac5a42d317be4a1a0cdaf6b6c4bf52426ea82"} Feb 27 16:10:12 crc kubenswrapper[4830]: I0227 16:10:12.074688 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-ngpn7" event={"ID":"e02559f6-da6b-44d6-b0d3-16a5b400edda","Type":"ContainerStarted","Data":"dc9f10d690c53b0f373203670bd61ef0c3f8c536b44e6e648f2053d0f180478b"} Feb 27 16:10:12 crc kubenswrapper[4830]: I0227 16:10:12.077088 4830 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"71fc94e20f668d0a7b897bde72b23f13db4ab58b8b874e1956f888d3f82fb5c5"} Feb 27 16:10:12 crc kubenswrapper[4830]: I0227 16:10:12.078322 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-5mm8b" event={"ID":"36eaeabc-508b-4a11-9dc5-45ff8b42e0a8","Type":"ContainerStarted","Data":"25479aeb0930c444ec15919b05e94052114ec473523a730b3a5a2cf0c6db2166"} Feb 27 16:10:12 crc kubenswrapper[4830]: I0227 16:10:12.078918 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-gb4pt" event={"ID":"bed81cec-625c-4239-92b4-39428a13becc","Type":"ContainerStarted","Data":"d0e1d069440cf5b0962e3bfb366be5fe54d3571193c69a12beb23fde9a177bef"} Feb 27 16:10:12 crc kubenswrapper[4830]: I0227 16:10:12.081157 4830 generic.go:334] "Generic (PLEG): container finished" podID="e9c70786-d73e-4e48-a552-bdeb53daba49" containerID="3462fa9d661885565288e87bc025827861611bd2d3a3d33e234661f8dde61048" exitCode=0 Feb 27 16:10:12 crc kubenswrapper[4830]: I0227 16:10:12.081192 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-9c4wb" event={"ID":"e9c70786-d73e-4e48-a552-bdeb53daba49","Type":"ContainerDied","Data":"3462fa9d661885565288e87bc025827861611bd2d3a3d33e234661f8dde61048"} Feb 27 16:10:12 crc kubenswrapper[4830]: I0227 16:10:12.082852 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5jfm7" event={"ID":"e7d85019-9a72-439e-a548-496027dd3d2c","Type":"ContainerStarted","Data":"8a369a0d2c8aa6701369b24b2c281c69ac3e9effc7069dcf2f8b0151b99174b6"} Feb 27 16:10:12 crc kubenswrapper[4830]: I0227 16:10:12.082870 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-5jfm7" event={"ID":"e7d85019-9a72-439e-a548-496027dd3d2c","Type":"ContainerStarted","Data":"c36ba2cf4b2d1e915d58d6fb5b76f48cb2dc9810c0c8d11785821a40e0b54a7d"} Feb 27 16:10:12 crc kubenswrapper[4830]: I0227 16:10:12.084114 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-n77c6" event={"ID":"48571590-5f3e-4b3f-9cd5-451eeb22a435","Type":"ContainerStarted","Data":"34db8c5feb3ebe16c6a6e8f12d33919559de7278e102f874c571f8babb1ef7fa"} Feb 27 16:10:12 crc kubenswrapper[4830]: I0227 16:10:12.085102 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nfkdb" event={"ID":"5c04971d-7bad-44c6-bd80-e27f65c8637f","Type":"ContainerStarted","Data":"cef6d444e04f758cfd8df51cc62fc5fd8687783b48ebad1cd01a671978f316ea"} Feb 27 16:10:12 crc kubenswrapper[4830]: I0227 16:10:12.087701 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"643940e1f864035cdcc39a97fd34834c3d7332a8faac64b85d441953965798e8"} Feb 27 16:10:12 crc kubenswrapper[4830]: I0227 16:10:12.088671 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7stx8"] Feb 27 16:10:12 crc kubenswrapper[4830]: I0227 16:10:12.088989 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-n6xx6" event={"ID":"c235c0e5-a6f8-45d8-83e1-91be0d32ac19","Type":"ContainerStarted","Data":"d81fe527a29fd1e83de7d2beb07bec922514fecc1ad57fac89df876e5941c096"} Feb 27 16:10:12 crc kubenswrapper[4830]: I0227 16:10:12.090235 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5pcpc" event={"ID":"cbddb809-9950-48a7-945a-ef66c2e1c1f9","Type":"ContainerStarted","Data":"1b5db8d5989e72ea03b636ef6de200068f1e2250e4f1cc1a9462a11d7ba221d0"} Feb 27 16:10:12 crc kubenswrapper[4830]: I0227 16:10:12.091653 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-vs8sq" event={"ID":"f18ef53a-23d0-4f48-b7a4-96f2716e137f","Type":"ContainerStarted","Data":"2c6a7624dc201e01490accbe5f90f7436104c0a890bb255b4a85fd5b80e889da"} Feb 27 16:10:12 crc kubenswrapper[4830]: I0227 16:10:12.092292 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-vs8sq" Feb 27 16:10:12 crc kubenswrapper[4830]: W0227 16:10:12.093686 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod61f22a16_1565_425a_914d_ec0d5a5c1902.slice/crio-50e9088023a6d84766b8010198a363b90e8b7ae28f847b37c5ef28d6b5844645 WatchSource:0}: Error finding container 50e9088023a6d84766b8010198a363b90e8b7ae28f847b37c5ef28d6b5844645: Status 404 returned error can't find the container with id 50e9088023a6d84766b8010198a363b90e8b7ae28f847b37c5ef28d6b5844645 Feb 27 16:10:12 crc kubenswrapper[4830]: I0227 16:10:12.094236 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"812578cd9896242f31132abb9a5b235553ca12846c1c360a241f90cd8d1cda7b"} Feb 27 16:10:12 crc kubenswrapper[4830]: I0227 16:10:12.094333 4830 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-vs8sq container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.12:6443/healthz\": dial tcp 10.217.0.12:6443: connect: connection refused" start-of-body= Feb 
27 16:10:12 crc kubenswrapper[4830]: I0227 16:10:12.094361 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-vs8sq" podUID="f18ef53a-23d0-4f48-b7a4-96f2716e137f" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.12:6443/healthz\": dial tcp 10.217.0.12:6443: connect: connection refused" Feb 27 16:10:12 crc kubenswrapper[4830]: I0227 16:10:12.094715 4830 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-2rrvm container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Feb 27 16:10:12 crc kubenswrapper[4830]: I0227 16:10:12.094732 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2rrvm" podUID="4ce35469-d725-409b-8e24-2c74769d7b77" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" Feb 27 16:10:12 crc kubenswrapper[4830]: I0227 16:10:12.094808 4830 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-v78pc container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Feb 27 16:10:12 crc kubenswrapper[4830]: I0227 16:10:12.094858 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-v78pc" podUID="278df35c-de00-443d-a6f7-e0cc526a487c" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" Feb 27 16:10:12 crc kubenswrapper[4830]: I0227 16:10:12.136012 
4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-pz6hl"] Feb 27 16:10:12 crc kubenswrapper[4830]: I0227 16:10:12.154268 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-c75pf"] Feb 27 16:10:12 crc kubenswrapper[4830]: W0227 16:10:12.154587 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7932070f_5985_4e34_84ff_0af75e044581.slice/crio-e0b20b3c5ae60a2fac87c86bd647202d36256b29f348cd8e189a26569f25a147 WatchSource:0}: Error finding container e0b20b3c5ae60a2fac87c86bd647202d36256b29f348cd8e189a26569f25a147: Status 404 returned error can't find the container with id e0b20b3c5ae60a2fac87c86bd647202d36256b29f348cd8e189a26569f25a147 Feb 27 16:10:12 crc kubenswrapper[4830]: I0227 16:10:12.156405 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 16:10:12 crc kubenswrapper[4830]: E0227 16:10:12.156807 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 16:10:12.656786307 +0000 UTC m=+208.746058770 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:10:12 crc kubenswrapper[4830]: I0227 16:10:12.248025 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-rfmxx"] Feb 27 16:10:12 crc kubenswrapper[4830]: I0227 16:10:12.259781 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9gfr4\" (UID: \"e98d0941-0faf-4719-88a1-ff04ca46eece\") " pod="openshift-image-registry/image-registry-697d97f7c8-9gfr4" Feb 27 16:10:12 crc kubenswrapper[4830]: E0227 16:10:12.265857 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 16:10:12.765843425 +0000 UTC m=+208.855115888 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9gfr4" (UID: "e98d0941-0faf-4719-88a1-ff04ca46eece") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:10:12 crc kubenswrapper[4830]: I0227 16:10:12.297576 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-b89fm"] Feb 27 16:10:12 crc kubenswrapper[4830]: I0227 16:10:12.348641 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-hdgkf"] Feb 27 16:10:12 crc kubenswrapper[4830]: I0227 16:10:12.360624 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 16:10:12 crc kubenswrapper[4830]: E0227 16:10:12.360885 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 16:10:12.860840081 +0000 UTC m=+208.950112534 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:10:12 crc kubenswrapper[4830]: W0227 16:10:12.374252 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod627c853d_8a30_4a46_a190_dd490a39aa35.slice/crio-fb6af9df0ba8392a5da4615c9d2cde6c40517757e08d139075e70609e065cc62 WatchSource:0}: Error finding container fb6af9df0ba8392a5da4615c9d2cde6c40517757e08d139075e70609e065cc62: Status 404 returned error can't find the container with id fb6af9df0ba8392a5da4615c9d2cde6c40517757e08d139075e70609e065cc62 Feb 27 16:10:12 crc kubenswrapper[4830]: I0227 16:10:12.382192 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-gw4c8"] Feb 27 16:10:12 crc kubenswrapper[4830]: W0227 16:10:12.395690 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddf487ddf_86fd_4433_a32d_6d41ffeed9bc.slice/crio-a0f32369b8030b447094b9e026e7b24abc01d4eac34aa4359a105c948c40f48c WatchSource:0}: Error finding container a0f32369b8030b447094b9e026e7b24abc01d4eac34aa4359a105c948c40f48c: Status 404 returned error can't find the container with id a0f32369b8030b447094b9e026e7b24abc01d4eac34aa4359a105c948c40f48c Feb 27 16:10:12 crc kubenswrapper[4830]: W0227 16:10:12.397132 4830 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb883e3e8_e6d7_4402_816f_033a0668f6eb.slice/crio-de8b5b44f26acc444c693b6f18cc369dcb79a0b86e8051491362790f7ea953e0 WatchSource:0}: Error finding container de8b5b44f26acc444c693b6f18cc369dcb79a0b86e8051491362790f7ea953e0: Status 404 returned error can't find the container with id de8b5b44f26acc444c693b6f18cc369dcb79a0b86e8051491362790f7ea953e0 Feb 27 16:10:12 crc kubenswrapper[4830]: I0227 16:10:12.429611 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5pcpc" podStartSLOduration=172.429593907 podStartE2EDuration="2m52.429593907s" podCreationTimestamp="2026-02-27 16:07:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:10:12.427039781 +0000 UTC m=+208.516312244" watchObservedRunningTime="2026-02-27 16:10:12.429593907 +0000 UTC m=+208.518866370" Feb 27 16:10:12 crc kubenswrapper[4830]: I0227 16:10:12.433443 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-bfz6f"] Feb 27 16:10:12 crc kubenswrapper[4830]: I0227 16:10:12.453707 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-ghjwl"] Feb 27 16:10:12 crc kubenswrapper[4830]: I0227 16:10:12.461813 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9gfr4\" (UID: \"e98d0941-0faf-4719-88a1-ff04ca46eece\") " pod="openshift-image-registry/image-registry-697d97f7c8-9gfr4" Feb 27 16:10:12 crc kubenswrapper[4830]: E0227 16:10:12.462108 4830 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 16:10:12.962095747 +0000 UTC m=+209.051368210 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9gfr4" (UID: "e98d0941-0faf-4719-88a1-ff04ca46eece") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:10:12 crc kubenswrapper[4830]: I0227 16:10:12.494179 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-v78pc" podStartSLOduration=172.494162486 podStartE2EDuration="2m52.494162486s" podCreationTimestamp="2026-02-27 16:07:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:10:12.492455262 +0000 UTC m=+208.581727735" watchObservedRunningTime="2026-02-27 16:10:12.494162486 +0000 UTC m=+208.583434949" Feb 27 16:10:12 crc kubenswrapper[4830]: I0227 16:10:12.562467 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 16:10:12 crc kubenswrapper[4830]: E0227 16:10:12.562790 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-27 16:10:13.062775019 +0000 UTC m=+209.152047482 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:10:12 crc kubenswrapper[4830]: I0227 16:10:12.664322 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9gfr4\" (UID: \"e98d0941-0faf-4719-88a1-ff04ca46eece\") " pod="openshift-image-registry/image-registry-697d97f7c8-9gfr4" Feb 27 16:10:12 crc kubenswrapper[4830]: E0227 16:10:12.664699 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 16:10:13.164674993 +0000 UTC m=+209.253947456 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9gfr4" (UID: "e98d0941-0faf-4719-88a1-ff04ca46eece") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:10:12 crc kubenswrapper[4830]: I0227 16:10:12.735799 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2rrvm" podStartSLOduration=171.735782469 podStartE2EDuration="2m51.735782469s" podCreationTimestamp="2026-02-27 16:07:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:10:12.733712686 +0000 UTC m=+208.822985149" watchObservedRunningTime="2026-02-27 16:10:12.735782469 +0000 UTC m=+208.825054932" Feb 27 16:10:12 crc kubenswrapper[4830]: I0227 16:10:12.766530 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 16:10:12 crc kubenswrapper[4830]: E0227 16:10:12.766777 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 16:10:13.26676292 +0000 UTC m=+209.356035383 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:10:12 crc kubenswrapper[4830]: I0227 16:10:12.776643 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-n77c6" podStartSLOduration=172.776631236 podStartE2EDuration="2m52.776631236s" podCreationTimestamp="2026-02-27 16:07:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:10:12.775153107 +0000 UTC m=+208.864425590" watchObservedRunningTime="2026-02-27 16:10:12.776631236 +0000 UTC m=+208.865903689" Feb 27 16:10:12 crc kubenswrapper[4830]: I0227 16:10:12.780532 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hlg9d"] Feb 27 16:10:12 crc kubenswrapper[4830]: I0227 16:10:12.829684 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-5shdf"] Feb 27 16:10:12 crc kubenswrapper[4830]: I0227 16:10:12.845385 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536800-n8kg4"] Feb 27 16:10:12 crc kubenswrapper[4830]: I0227 16:10:12.860348 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-vs8sq" podStartSLOduration=172.860329988 podStartE2EDuration="2m52.860329988s" podCreationTimestamp="2026-02-27 16:07:20 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:10:12.858685706 +0000 UTC m=+208.947958169" watchObservedRunningTime="2026-02-27 16:10:12.860329988 +0000 UTC m=+208.949602441" Feb 27 16:10:12 crc kubenswrapper[4830]: I0227 16:10:12.868592 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9gfr4\" (UID: \"e98d0941-0faf-4719-88a1-ff04ca46eece\") " pod="openshift-image-registry/image-registry-697d97f7c8-9gfr4" Feb 27 16:10:12 crc kubenswrapper[4830]: E0227 16:10:12.868917 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 16:10:13.368902239 +0000 UTC m=+209.458174702 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9gfr4" (UID: "e98d0941-0faf-4719-88a1-ff04ca46eece") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 27 16:10:12 crc kubenswrapper[4830]: I0227 16:10:12.875479 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536810-bc446"]
Feb 27 16:10:12 crc kubenswrapper[4830]: I0227 16:10:12.878285 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-srljc"]
Feb 27 16:10:12 crc kubenswrapper[4830]: I0227 16:10:12.912637 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-wpzdz"]
Feb 27 16:10:12 crc kubenswrapper[4830]: I0227 16:10:12.925518 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8fhp6"]
Feb 27 16:10:12 crc kubenswrapper[4830]: I0227 16:10:12.938193 4830 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 27 16:10:12 crc kubenswrapper[4830]: W0227 16:10:12.939073 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda235e26d_f41e_406d_992e_3dfb44246bdd.slice/crio-a351ce515685e376c762588157d28baa0d989076c32398e6281b5fc9a676fbcd WatchSource:0}: Error finding container a351ce515685e376c762588157d28baa0d989076c32398e6281b5fc9a676fbcd: Status 404 returned error can't find the container with id a351ce515685e376c762588157d28baa0d989076c32398e6281b5fc9a676fbcd
Feb 27 16:10:12 crc kubenswrapper[4830]: W0227 16:10:12.951998 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod88dc3209_64e3_47ef_b1f0_e2aeddfe8ece.slice/crio-f6c6c4b80175378ea5cb9cc886d1677bf9d26205282e3ffa8aec7fb56c2b3460 WatchSource:0}: Error finding container f6c6c4b80175378ea5cb9cc886d1677bf9d26205282e3ffa8aec7fb56c2b3460: Status 404 returned error can't find the container with id f6c6c4b80175378ea5cb9cc886d1677bf9d26205282e3ffa8aec7fb56c2b3460
Feb 27 16:10:12 crc kubenswrapper[4830]: W0227 16:10:12.954610 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod910e8c41_1fdf_4f16_9902_532e21fe81ab.slice/crio-ffdfd2e78d8e5fb6539ac8e7fb7a527ff8e170bc1763911e7d0077875a7d444f WatchSource:0}: Error finding container ffdfd2e78d8e5fb6539ac8e7fb7a527ff8e170bc1763911e7d0077875a7d444f: Status 404 returned error can't find the container with id ffdfd2e78d8e5fb6539ac8e7fb7a527ff8e170bc1763911e7d0077875a7d444f
Feb 27 16:10:12 crc kubenswrapper[4830]: I0227 16:10:12.971602 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 27 16:10:12 crc kubenswrapper[4830]: E0227 16:10:12.971776 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 16:10:13.471736368 +0000 UTC m=+209.561008831 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 27 16:10:12 crc kubenswrapper[4830]: I0227 16:10:12.971839 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9gfr4\" (UID: \"e98d0941-0faf-4719-88a1-ff04ca46eece\") " pod="openshift-image-registry/image-registry-697d97f7c8-9gfr4"
Feb 27 16:10:12 crc kubenswrapper[4830]: E0227 16:10:12.973082 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 16:10:13.473065852 +0000 UTC m=+209.562338315 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9gfr4" (UID: "e98d0941-0faf-4719-88a1-ff04ca46eece") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 27 16:10:12 crc kubenswrapper[4830]: I0227 16:10:12.976231 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-kjfn6" podStartSLOduration=172.976211093 podStartE2EDuration="2m52.976211093s" podCreationTimestamp="2026-02-27 16:07:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:10:12.975988037 +0000 UTC m=+209.065260500" watchObservedRunningTime="2026-02-27 16:10:12.976211093 +0000 UTC m=+209.065483556"
Feb 27 16:10:13 crc kubenswrapper[4830]: I0227 16:10:13.072803 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 27 16:10:13 crc kubenswrapper[4830]: E0227 16:10:13.072998 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 16:10:13.572971693 +0000 UTC m=+209.662244146 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 27 16:10:13 crc kubenswrapper[4830]: I0227 16:10:13.073159 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9gfr4\" (UID: \"e98d0941-0faf-4719-88a1-ff04ca46eece\") " pod="openshift-image-registry/image-registry-697d97f7c8-9gfr4"
Feb 27 16:10:13 crc kubenswrapper[4830]: E0227 16:10:13.073502 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 16:10:13.573491618 +0000 UTC m=+209.662764091 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9gfr4" (UID: "e98d0941-0faf-4719-88a1-ff04ca46eece") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 27 16:10:13 crc kubenswrapper[4830]: I0227 16:10:13.109660 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-srljc" event={"ID":"88dc3209-64e3-47ef-b1f0-e2aeddfe8ece","Type":"ContainerStarted","Data":"f6c6c4b80175378ea5cb9cc886d1677bf9d26205282e3ffa8aec7fb56c2b3460"}
Feb 27 16:10:13 crc kubenswrapper[4830]: I0227 16:10:13.114018 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-q7c6j" event={"ID":"b0274d4b-eb80-4321-a4c1-6848c65bc32e","Type":"ContainerStarted","Data":"7ee02d6173ecc7e92dd77fa1089ada565351fc7a849658eae2dbfc0a88aa554b"}
Feb 27 16:10:13 crc kubenswrapper[4830]: I0227 16:10:13.114752 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-b89fm" event={"ID":"df487ddf-86fd-4433-a32d-6d41ffeed9bc","Type":"ContainerStarted","Data":"a0f32369b8030b447094b9e026e7b24abc01d4eac34aa4359a105c948c40f48c"}
Feb 27 16:10:13 crc kubenswrapper[4830]: I0227 16:10:13.115637 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-gw4c8" event={"ID":"3311d92d-90da-42f5-acf3-3ec723c5edad","Type":"ContainerStarted","Data":"bbdec3c192cc8802d6c4a382f856d555a5ed860e64c5c80e5285f851cb7d2d47"}
Feb 27 16:10:13 crc kubenswrapper[4830]: I0227 16:10:13.116359 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wpzdz" event={"ID":"910e8c41-1fdf-4f16-9902-532e21fe81ab","Type":"ContainerStarted","Data":"ffdfd2e78d8e5fb6539ac8e7fb7a527ff8e170bc1763911e7d0077875a7d444f"}
Feb 27 16:10:13 crc kubenswrapper[4830]: I0227 16:10:13.117159 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-rfmxx" event={"ID":"b883e3e8-e6d7-4402-816f-033a0668f6eb","Type":"ContainerStarted","Data":"de8b5b44f26acc444c693b6f18cc369dcb79a0b86e8051491362790f7ea953e0"}
Feb 27 16:10:13 crc kubenswrapper[4830]: I0227 16:10:13.118136 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2x2lh" event={"ID":"cb320ef7-4518-4b01-b1bd-13a60749cac4","Type":"ContainerStarted","Data":"61eb07abd53e17f8f42790dd221ef33c55d74f2968bd64ca4b68b0f4f2422cb0"}
Feb 27 16:10:13 crc kubenswrapper[4830]: I0227 16:10:13.118163 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2x2lh" event={"ID":"cb320ef7-4518-4b01-b1bd-13a60749cac4","Type":"ContainerStarted","Data":"f03bb9a235dfa03807e25b70ce8a152ec064abf1b43119b3638e935b226e0053"}
Feb 27 16:10:13 crc kubenswrapper[4830]: I0227 16:10:13.118880 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-hdgkf" event={"ID":"da82094a-cfed-404a-8fb9-2958b13ce78b","Type":"ContainerStarted","Data":"8757c88a3dae5f8949c05b885fee299df4d191d6c5999b7dc20d2cdbff4ac871"}
Feb 27 16:10:13 crc kubenswrapper[4830]: I0227 16:10:13.120204 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nfkdb" event={"ID":"5c04971d-7bad-44c6-bd80-e27f65c8637f","Type":"ContainerStarted","Data":"97d42a298d1af4238e7dde9db8b27803c07d91d2d5f199159448d9c6e7742da1"}
Feb 27 16:10:13 crc kubenswrapper[4830]: I0227 16:10:13.121242 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5shdf" event={"ID":"a235e26d-f41e-406d-992e-3dfb44246bdd","Type":"ContainerStarted","Data":"a351ce515685e376c762588157d28baa0d989076c32398e6281b5fc9a676fbcd"}
Feb 27 16:10:13 crc kubenswrapper[4830]: I0227 16:10:13.122503 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"0abc2d1086e2f760067f5ed558cef29c0fc8ecb654d77b684bea73b307ef60c6"}
Feb 27 16:10:13 crc kubenswrapper[4830]: I0227 16:10:13.124026 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7stx8" event={"ID":"7932070f-5985-4e34-84ff-0af75e044581","Type":"ContainerStarted","Data":"e0b20b3c5ae60a2fac87c86bd647202d36256b29f348cd8e189a26569f25a147"}
Feb 27 16:10:13 crc kubenswrapper[4830]: I0227 16:10:13.124917 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536810-bc446" event={"ID":"1eb064bc-39af-405a-bdbf-665e31fa07c3","Type":"ContainerStarted","Data":"79fe0767947c910e07f906c0180675a5a7751edd0dcacd4f0b6e5af87fcc945b"}
Feb 27 16:10:13 crc kubenswrapper[4830]: I0227 16:10:13.125995 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"9b287d506aef1a11f4328a78bf6dea34f5897d4425df002828ba24f995a9d4db"}
Feb 27 16:10:13 crc kubenswrapper[4830]: I0227 16:10:13.126994 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-ghjwl" event={"ID":"6de44150-41d6-426a-92f4-d29fb3ee1afe","Type":"ContainerStarted","Data":"5f15d656a9565bd235215ee765dd1feda4e387c469f78b7482ff7f3182d9cd7c"}
Feb 27 16:10:13 crc kubenswrapper[4830]: I0227 16:10:13.128160 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-45mg7" event={"ID":"32e984aa-8399-4cf1-8a4a-b36525c67e35","Type":"ContainerStarted","Data":"0204c4fa0fcb07e9b828264151181c41186fb5044b2e7a58f9938a37bc377462"}
Feb 27 16:10:13 crc kubenswrapper[4830]: I0227 16:10:13.128566 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-45mg7"
Feb 27 16:10:13 crc kubenswrapper[4830]: I0227 16:10:13.131374 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8lclt" event={"ID":"61f22a16-1565-425a-914d-ec0d5a5c1902","Type":"ContainerStarted","Data":"50e9088023a6d84766b8010198a363b90e8b7ae28f847b37c5ef28d6b5844645"}
Feb 27 16:10:13 crc kubenswrapper[4830]: I0227 16:10:13.132519 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-n7thk" event={"ID":"25e29ca7-7ff4-4263-8f4e-5a35a6c8118a","Type":"ContainerStarted","Data":"f3603afdb133f32273431a2dd8643c59388c286700d992bece0492bf3046ef2a"}
Feb 27 16:10:13 crc kubenswrapper[4830]: I0227 16:10:13.133952 4830 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-45mg7 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.19:8080/healthz\": dial tcp 10.217.0.19:8080: connect: connection refused" start-of-body=
Feb 27 16:10:13 crc kubenswrapper[4830]: I0227 16:10:13.134003 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-45mg7" podUID="32e984aa-8399-4cf1-8a4a-b36525c67e35" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.19:8080/healthz\": dial tcp 10.217.0.19:8080: connect: connection refused"
Feb 27 16:10:13 crc kubenswrapper[4830]: I0227 16:10:13.134479 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"065f78de7631981985099230eabdebb53a35ab0e244236fdabe7485c6bb4f9b6"}
Feb 27 16:10:13 crc kubenswrapper[4830]: I0227 16:10:13.140689 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-wh6nt" event={"ID":"d473053a-d4df-40b8-a876-5582e1d8a702","Type":"ContainerStarted","Data":"4b887a09b3a69ea498603bfd67981d926f5e2326468e714feca0161f1e63fb30"}
Feb 27 16:10:13 crc kubenswrapper[4830]: I0227 16:10:13.144016 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-bfz6f" event={"ID":"c873a2ac-f7d3-4bea-ad09-b16891a1edf6","Type":"ContainerStarted","Data":"a9be3a024b7a9ff4569265b2fd5671e61b39ae6c88503250c846d6a51ff90b61"}
Feb 27 16:10:13 crc kubenswrapper[4830]: I0227 16:10:13.145939 4830 generic.go:334] "Generic (PLEG): container finished" podID="e7d85019-9a72-439e-a548-496027dd3d2c" containerID="8a369a0d2c8aa6701369b24b2c281c69ac3e9effc7069dcf2f8b0151b99174b6" exitCode=0
Feb 27 16:10:13 crc kubenswrapper[4830]: I0227 16:10:13.146033 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5jfm7" event={"ID":"e7d85019-9a72-439e-a548-496027dd3d2c","Type":"ContainerDied","Data":"8a369a0d2c8aa6701369b24b2c281c69ac3e9effc7069dcf2f8b0151b99174b6"}
Feb 27 16:10:13 crc kubenswrapper[4830]: I0227 16:10:13.146855 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hlg9d" event={"ID":"41bce4eb-4367-4dee-9c26-df8e0a1e4ea8","Type":"ContainerStarted","Data":"2e79c2fdc3b1da9eefce35ae1b4b1499034064c14f086602ff5d41bb43b74887"}
Feb 27 16:10:13 crc kubenswrapper[4830]: I0227 16:10:13.147881 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-dwgx7" event={"ID":"449e4d4d-4a51-4625-801a-980a93398439","Type":"ContainerStarted","Data":"627921ba077e86d7f246422b82019d33d4c015d7f5e3e176cb7b263ab6bd92d6"}
Feb 27 16:10:13 crc kubenswrapper[4830]: W0227 16:10:13.150343 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4be12707_b1d1_4a30_bb2c_1af9e3d34d09.slice/crio-e9bc7daf53f454ed1804efcbdf3d5edd26b4eb6f0f8ea62090c3eae03059caa7 WatchSource:0}: Error finding container e9bc7daf53f454ed1804efcbdf3d5edd26b4eb6f0f8ea62090c3eae03059caa7: Status 404 returned error can't find the container with id e9bc7daf53f454ed1804efcbdf3d5edd26b4eb6f0f8ea62090c3eae03059caa7
Feb 27 16:10:13 crc kubenswrapper[4830]: I0227 16:10:13.151281 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-5mm8b" event={"ID":"36eaeabc-508b-4a11-9dc5-45ff8b42e0a8","Type":"ContainerStarted","Data":"0e27024585f02ba275b2f43edb98b9e2433754b30de63288545e88fccf0db8fd"}
Feb 27 16:10:13 crc kubenswrapper[4830]: I0227 16:10:13.160005 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-gb4pt" event={"ID":"bed81cec-625c-4239-92b4-39428a13becc","Type":"ContainerStarted","Data":"daf124606307c44f179cd53116c580b4779ad71e9154d695b88fac045d96be95"}
Feb 27 16:10:13 crc kubenswrapper[4830]: I0227 16:10:13.161284 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pz6hl" event={"ID":"584006df-9736-4ed2-aeba-118587f909d7","Type":"ContainerStarted","Data":"8a9a37403c126947033a84aa414269c9d2c79a1ce9f062cdf2d62e61452e221e"}
Feb 27 16:10:13 crc kubenswrapper[4830]: I0227 16:10:13.162384 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-kgdlg" event={"ID":"6ba2fe32-66e0-4bcd-a646-9d07c9a21c54","Type":"ContainerStarted","Data":"81e74652d0f0f6b7448437c769c331ed1a04a1230f2fe2a66451bc3053e832d5"}
Feb 27 16:10:13 crc kubenswrapper[4830]: I0227 16:10:13.163668 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29536800-n8kg4" event={"ID":"041ea905-9e91-41e3-9db6-820256d951aa","Type":"ContainerStarted","Data":"5c054053597c2cf37a8455db1b02e74206f2ebf4ee5f070e47bc8b3126da49e6"}
Feb 27 16:10:13 crc kubenswrapper[4830]: I0227 16:10:13.166762 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rvd8g" event={"ID":"af0d26af-5990-456b-a3bc-4ea4a14bbc25","Type":"ContainerStarted","Data":"6aa52bc2fbe5430c535a7bb35101c6915e8f905ec808ed26728085622c51c000"}
Feb 27 16:10:13 crc kubenswrapper[4830]: I0227 16:10:13.168528 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-xk2qk" event={"ID":"e00ea89f-b3e4-44ed-9348-5cd609b9c563","Type":"ContainerStarted","Data":"752c3df93b17ae6c880b3e0a358fc89417ea1513ea4565d190629c17b2ee8a7c"}
Feb 27 16:10:13 crc kubenswrapper[4830]: I0227 16:10:13.170410 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-khmn9" event={"ID":"1843207f-14a3-4f21-a253-dbd843d2d8bf","Type":"ContainerStarted","Data":"001c91c50fee35c8a5a7e9a9cdec2121112b329210b79f25c26372a2d4905eb8"}
Feb 27 16:10:13 crc kubenswrapper[4830]: I0227 16:10:13.172015 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-c75pf" event={"ID":"627c853d-8a30-4a46-a190-dd490a39aa35","Type":"ContainerStarted","Data":"fb6af9df0ba8392a5da4615c9d2cde6c40517757e08d139075e70609e065cc62"}
Feb 27 16:10:13 crc kubenswrapper[4830]: I0227 16:10:13.174421 4830 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-vs8sq container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.12:6443/healthz\": dial tcp 10.217.0.12:6443: connect: connection refused" start-of-body=
Feb 27 16:10:13 crc kubenswrapper[4830]: I0227 16:10:13.174470 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-vs8sq" podUID="f18ef53a-23d0-4f48-b7a4-96f2716e137f" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.12:6443/healthz\": dial tcp 10.217.0.12:6443: connect: connection refused"
Feb 27 16:10:13 crc kubenswrapper[4830]: I0227 16:10:13.175116 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 27 16:10:13 crc kubenswrapper[4830]: E0227 16:10:13.175599 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 16:10:13.675579035 +0000 UTC m=+209.764851518 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 27 16:10:13 crc kubenswrapper[4830]: I0227 16:10:13.278139 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9gfr4\" (UID: \"e98d0941-0faf-4719-88a1-ff04ca46eece\") " pod="openshift-image-registry/image-registry-697d97f7c8-9gfr4"
Feb 27 16:10:13 crc kubenswrapper[4830]: E0227 16:10:13.301095 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 16:10:13.801024527 +0000 UTC m=+209.890296990 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9gfr4" (UID: "e98d0941-0faf-4719-88a1-ff04ca46eece") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 27 16:10:13 crc kubenswrapper[4830]: I0227 16:10:13.336923 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-ngpn7" podStartSLOduration=173.33670545 podStartE2EDuration="2m53.33670545s" podCreationTimestamp="2026-02-27 16:07:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:10:13.287495547 +0000 UTC m=+209.376768010" watchObservedRunningTime="2026-02-27 16:10:13.33670545 +0000 UTC m=+209.425977913"
Feb 27 16:10:13 crc kubenswrapper[4830]: I0227 16:10:13.378807 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 27 16:10:13 crc kubenswrapper[4830]: E0227 16:10:13.379186 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 16:10:13.879169657 +0000 UTC m=+209.968442120 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 27 16:10:13 crc kubenswrapper[4830]: I0227 16:10:13.479805 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9gfr4\" (UID: \"e98d0941-0faf-4719-88a1-ff04ca46eece\") " pod="openshift-image-registry/image-registry-697d97f7c8-9gfr4"
Feb 27 16:10:13 crc kubenswrapper[4830]: E0227 16:10:13.480170 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 16:10:13.980154896 +0000 UTC m=+210.069427359 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9gfr4" (UID: "e98d0941-0faf-4719-88a1-ff04ca46eece") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 27 16:10:13 crc kubenswrapper[4830]: I0227 16:10:13.580829 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 27 16:10:13 crc kubenswrapper[4830]: E0227 16:10:13.581044 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 16:10:14.081012803 +0000 UTC m=+210.170285276 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 27 16:10:13 crc kubenswrapper[4830]: I0227 16:10:13.581337 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9gfr4\" (UID: \"e98d0941-0faf-4719-88a1-ff04ca46eece\") " pod="openshift-image-registry/image-registry-697d97f7c8-9gfr4"
Feb 27 16:10:13 crc kubenswrapper[4830]: E0227 16:10:13.581756 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 16:10:14.081743672 +0000 UTC m=+210.171016145 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9gfr4" (UID: "e98d0941-0faf-4719-88a1-ff04ca46eece") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 27 16:10:13 crc kubenswrapper[4830]: I0227 16:10:13.618302 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-4dhxq" podStartSLOduration=173.618276936 podStartE2EDuration="2m53.618276936s" podCreationTimestamp="2026-02-27 16:07:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:10:13.614404526 +0000 UTC m=+209.703677009" watchObservedRunningTime="2026-02-27 16:10:13.618276936 +0000 UTC m=+209.707549419"
Feb 27 16:10:13 crc kubenswrapper[4830]: I0227 16:10:13.664073 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-khmn9" podStartSLOduration=172.664053688 podStartE2EDuration="2m52.664053688s" podCreationTimestamp="2026-02-27 16:07:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:10:13.661607436 +0000 UTC m=+209.750879909" watchObservedRunningTime="2026-02-27 16:10:13.664053688 +0000 UTC m=+209.753326141"
Feb 27 16:10:13 crc kubenswrapper[4830]: I0227 16:10:13.687366 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 27 16:10:13 crc kubenswrapper[4830]: E0227 16:10:13.687624 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 16:10:14.187573537 +0000 UTC m=+210.276846000 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 27 16:10:13 crc kubenswrapper[4830]: I0227 16:10:13.687718 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9gfr4\" (UID: \"e98d0941-0faf-4719-88a1-ff04ca46eece\") " pod="openshift-image-registry/image-registry-697d97f7c8-9gfr4"
Feb 27 16:10:13 crc kubenswrapper[4830]: E0227 16:10:13.688383 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 16:10:14.188369628 +0000 UTC m=+210.277642091 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9gfr4" (UID: "e98d0941-0faf-4719-88a1-ff04ca46eece") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 27 16:10:13 crc kubenswrapper[4830]: I0227 16:10:13.697666 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-dwgx7" podStartSLOduration=5.697635596 podStartE2EDuration="5.697635596s" podCreationTimestamp="2026-02-27 16:10:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:10:13.695733308 +0000 UTC m=+209.785005771" watchObservedRunningTime="2026-02-27 16:10:13.697635596 +0000 UTC m=+209.786908099"
Feb 27 16:10:13 crc kubenswrapper[4830]: I0227 16:10:13.737171 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-45mg7" podStartSLOduration=172.737153168 podStartE2EDuration="2m52.737153168s" podCreationTimestamp="2026-02-27 16:07:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:10:13.735471724 +0000 UTC m=+209.824744187" watchObservedRunningTime="2026-02-27 16:10:13.737153168 +0000 UTC m=+209.826425641"
Feb 27 16:10:13 crc kubenswrapper[4830]: I0227 16:10:13.780562 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-q7c6j" podStartSLOduration=173.78053407 podStartE2EDuration="2m53.78053407s" podCreationTimestamp="2026-02-27 16:07:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:10:13.776406363 +0000 UTC m=+209.865678856" watchObservedRunningTime="2026-02-27 16:10:13.78053407 +0000 UTC m=+209.869806543"
Feb 27 16:10:13 crc kubenswrapper[4830]: I0227 16:10:13.789218 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 27 16:10:13 crc kubenswrapper[4830]: E0227 16:10:13.789416 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 16:10:14.289395018 +0000 UTC m=+210.378667511 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 27 16:10:13 crc kubenswrapper[4830]: I0227 16:10:13.789541 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9gfr4\" (UID: \"e98d0941-0faf-4719-88a1-ff04ca46eece\") " pod="openshift-image-registry/image-registry-697d97f7c8-9gfr4"
Feb 27 16:10:13 crc kubenswrapper[4830]: E0227 16:10:13.789910 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 16:10:14.289900441 +0000 UTC m=+210.379172934 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9gfr4" (UID: "e98d0941-0faf-4719-88a1-ff04ca46eece") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:10:13 crc kubenswrapper[4830]: I0227 16:10:13.890736 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 16:10:13 crc kubenswrapper[4830]: E0227 16:10:13.890996 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 16:10:14.390925622 +0000 UTC m=+210.480198125 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:10:13 crc kubenswrapper[4830]: I0227 16:10:13.891339 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9gfr4\" (UID: \"e98d0941-0faf-4719-88a1-ff04ca46eece\") " pod="openshift-image-registry/image-registry-697d97f7c8-9gfr4" Feb 27 16:10:13 crc kubenswrapper[4830]: E0227 16:10:13.891992 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 16:10:14.391872866 +0000 UTC m=+210.481145359 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9gfr4" (UID: "e98d0941-0faf-4719-88a1-ff04ca46eece") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:10:13 crc kubenswrapper[4830]: I0227 16:10:13.991914 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 16:10:13 crc kubenswrapper[4830]: E0227 16:10:13.992137 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 16:10:14.492100427 +0000 UTC m=+210.581372950 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:10:13 crc kubenswrapper[4830]: I0227 16:10:13.992942 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9gfr4\" (UID: \"e98d0941-0faf-4719-88a1-ff04ca46eece\") " pod="openshift-image-registry/image-registry-697d97f7c8-9gfr4" Feb 27 16:10:13 crc kubenswrapper[4830]: E0227 16:10:13.993629 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 16:10:14.493611266 +0000 UTC m=+210.582883769 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9gfr4" (UID: "e98d0941-0faf-4719-88a1-ff04ca46eece") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:10:14 crc kubenswrapper[4830]: I0227 16:10:14.093860 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 16:10:14 crc kubenswrapper[4830]: E0227 16:10:14.094111 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 16:10:14.594056571 +0000 UTC m=+210.683329074 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:10:14 crc kubenswrapper[4830]: I0227 16:10:14.094409 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9gfr4\" (UID: \"e98d0941-0faf-4719-88a1-ff04ca46eece\") " pod="openshift-image-registry/image-registry-697d97f7c8-9gfr4" Feb 27 16:10:14 crc kubenswrapper[4830]: E0227 16:10:14.095134 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 16:10:14.595117159 +0000 UTC m=+210.684389642 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9gfr4" (UID: "e98d0941-0faf-4719-88a1-ff04ca46eece") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:10:14 crc kubenswrapper[4830]: I0227 16:10:14.193007 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8lclt" event={"ID":"61f22a16-1565-425a-914d-ec0d5a5c1902","Type":"ContainerStarted","Data":"d36a8ee3bda095ea0ad1142e611ff74ca6fc8f91acf270aecf22357ed3f267ab"} Feb 27 16:10:14 crc kubenswrapper[4830]: I0227 16:10:14.198511 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 16:10:14 crc kubenswrapper[4830]: E0227 16:10:14.198655 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 16:10:14.698628634 +0000 UTC m=+210.787901097 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:10:14 crc kubenswrapper[4830]: I0227 16:10:14.198698 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9gfr4\" (UID: \"e98d0941-0faf-4719-88a1-ff04ca46eece\") " pod="openshift-image-registry/image-registry-697d97f7c8-9gfr4" Feb 27 16:10:14 crc kubenswrapper[4830]: E0227 16:10:14.199049 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 16:10:14.699037654 +0000 UTC m=+210.788310117 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9gfr4" (UID: "e98d0941-0faf-4719-88a1-ff04ca46eece") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:10:14 crc kubenswrapper[4830]: I0227 16:10:14.199452 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-n7thk" event={"ID":"25e29ca7-7ff4-4263-8f4e-5a35a6c8118a","Type":"ContainerStarted","Data":"c77951e73689d952825da0883a809c7ea7a83a33a2706a35cd0b3ed2d1e2aede"} Feb 27 16:10:14 crc kubenswrapper[4830]: I0227 16:10:14.202256 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-z5n7d" event={"ID":"0a2a2ed5-abaa-4df6-b762-56bb964fbbca","Type":"ContainerStarted","Data":"669eb38557cd827a0fdb929caf37d9b2d93348a74bd033ddd7838e027d9a7eb9"} Feb 27 16:10:14 crc kubenswrapper[4830]: I0227 16:10:14.204780 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8fhp6" event={"ID":"4be12707-b1d1-4a30-bb2c-1af9e3d34d09","Type":"ContainerStarted","Data":"e9bc7daf53f454ed1804efcbdf3d5edd26b4eb6f0f8ea62090c3eae03059caa7"} Feb 27 16:10:14 crc kubenswrapper[4830]: I0227 16:10:14.207937 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-xk2qk" event={"ID":"e00ea89f-b3e4-44ed-9348-5cd609b9c563","Type":"ContainerStarted","Data":"244ece4920736dec3cd1f509054a18df2dd65f79e128c40a91e62976dda4949e"} Feb 27 16:10:14 crc kubenswrapper[4830]: I0227 16:10:14.208254 4830 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-vs8sq 
container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.12:6443/healthz\": dial tcp 10.217.0.12:6443: connect: connection refused" start-of-body= Feb 27 16:10:14 crc kubenswrapper[4830]: I0227 16:10:14.208313 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-vs8sq" podUID="f18ef53a-23d0-4f48-b7a4-96f2716e137f" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.12:6443/healthz\": dial tcp 10.217.0.12:6443: connect: connection refused" Feb 27 16:10:14 crc kubenswrapper[4830]: I0227 16:10:14.209627 4830 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-45mg7 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.19:8080/healthz\": dial tcp 10.217.0.19:8080: connect: connection refused" start-of-body= Feb 27 16:10:14 crc kubenswrapper[4830]: I0227 16:10:14.209658 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-45mg7" podUID="32e984aa-8399-4cf1-8a4a-b36525c67e35" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.19:8080/healthz\": dial tcp 10.217.0.19:8080: connect: connection refused" Feb 27 16:10:14 crc kubenswrapper[4830]: I0227 16:10:14.257396 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-gb4pt" podStartSLOduration=174.256926651 podStartE2EDuration="2m54.256926651s" podCreationTimestamp="2026-02-27 16:07:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:10:14.237314774 +0000 UTC m=+210.326587377" watchObservedRunningTime="2026-02-27 16:10:14.256926651 +0000 UTC m=+210.346199114" Feb 27 16:10:14 crc kubenswrapper[4830]: I0227 
16:10:14.300530 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 16:10:14 crc kubenswrapper[4830]: E0227 16:10:14.300894 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 16:10:14.800878976 +0000 UTC m=+210.890151439 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:10:14 crc kubenswrapper[4830]: I0227 16:10:14.404753 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9gfr4\" (UID: \"e98d0941-0faf-4719-88a1-ff04ca46eece\") " pod="openshift-image-registry/image-registry-697d97f7c8-9gfr4" Feb 27 16:10:14 crc kubenswrapper[4830]: E0227 16:10:14.406894 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-27 16:10:14.906883766 +0000 UTC m=+210.996156229 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9gfr4" (UID: "e98d0941-0faf-4719-88a1-ff04ca46eece") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:10:14 crc kubenswrapper[4830]: I0227 16:10:14.506016 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 16:10:14 crc kubenswrapper[4830]: E0227 16:10:14.506233 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 16:10:15.006212463 +0000 UTC m=+211.095484916 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:10:14 crc kubenswrapper[4830]: I0227 16:10:14.506392 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9gfr4\" (UID: \"e98d0941-0faf-4719-88a1-ff04ca46eece\") " pod="openshift-image-registry/image-registry-697d97f7c8-9gfr4" Feb 27 16:10:14 crc kubenswrapper[4830]: E0227 16:10:14.506690 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 16:10:15.006683565 +0000 UTC m=+211.095956028 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9gfr4" (UID: "e98d0941-0faf-4719-88a1-ff04ca46eece") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:10:14 crc kubenswrapper[4830]: I0227 16:10:14.606821 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 16:10:14 crc kubenswrapper[4830]: E0227 16:10:14.607048 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 16:10:15.107033648 +0000 UTC m=+211.196306111 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:10:14 crc kubenswrapper[4830]: I0227 16:10:14.707622 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9gfr4\" (UID: \"e98d0941-0faf-4719-88a1-ff04ca46eece\") " pod="openshift-image-registry/image-registry-697d97f7c8-9gfr4" Feb 27 16:10:14 crc kubenswrapper[4830]: E0227 16:10:14.707895 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 16:10:15.207884744 +0000 UTC m=+211.297157207 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9gfr4" (UID: "e98d0941-0faf-4719-88a1-ff04ca46eece") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:10:14 crc kubenswrapper[4830]: I0227 16:10:14.808350 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 16:10:14 crc kubenswrapper[4830]: E0227 16:10:14.809004 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 16:10:15.308975617 +0000 UTC m=+211.398248080 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:10:14 crc kubenswrapper[4830]: I0227 16:10:14.814388 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9gfr4\" (UID: \"e98d0941-0faf-4719-88a1-ff04ca46eece\") " pod="openshift-image-registry/image-registry-697d97f7c8-9gfr4" Feb 27 16:10:14 crc kubenswrapper[4830]: E0227 16:10:14.814731 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 16:10:15.314715855 +0000 UTC m=+211.403988328 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9gfr4" (UID: "e98d0941-0faf-4719-88a1-ff04ca46eece") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:10:14 crc kubenswrapper[4830]: I0227 16:10:14.914710 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 16:10:14 crc kubenswrapper[4830]: E0227 16:10:14.914808 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 16:10:15.414785142 +0000 UTC m=+211.504057605 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:10:14 crc kubenswrapper[4830]: I0227 16:10:14.915298 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9gfr4\" (UID: \"e98d0941-0faf-4719-88a1-ff04ca46eece\") " pod="openshift-image-registry/image-registry-697d97f7c8-9gfr4" Feb 27 16:10:14 crc kubenswrapper[4830]: E0227 16:10:14.915610 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 16:10:15.415601352 +0000 UTC m=+211.504873815 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9gfr4" (UID: "e98d0941-0faf-4719-88a1-ff04ca46eece") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:10:15 crc kubenswrapper[4830]: I0227 16:10:15.015974 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 16:10:15 crc kubenswrapper[4830]: E0227 16:10:15.016365 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 16:10:15.516346786 +0000 UTC m=+211.605619249 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:10:15 crc kubenswrapper[4830]: I0227 16:10:15.117076 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9gfr4\" (UID: \"e98d0941-0faf-4719-88a1-ff04ca46eece\") " pod="openshift-image-registry/image-registry-697d97f7c8-9gfr4" Feb 27 16:10:15 crc kubenswrapper[4830]: E0227 16:10:15.117404 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 16:10:15.617389267 +0000 UTC m=+211.706661720 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9gfr4" (UID: "e98d0941-0faf-4719-88a1-ff04ca46eece") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:10:15 crc kubenswrapper[4830]: I0227 16:10:15.215931 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-rfmxx" event={"ID":"b883e3e8-e6d7-4402-816f-033a0668f6eb","Type":"ContainerStarted","Data":"31a4b0afbfbdbef24d49fbb766d7d2db985aebd0da17d31bfb09d020f775a575"} Feb 27 16:10:15 crc kubenswrapper[4830]: I0227 16:10:15.217541 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 16:10:15 crc kubenswrapper[4830]: E0227 16:10:15.218169 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 16:10:15.71813932 +0000 UTC m=+211.807411823 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:10:15 crc kubenswrapper[4830]: I0227 16:10:15.220285 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8fhp6" event={"ID":"4be12707-b1d1-4a30-bb2c-1af9e3d34d09","Type":"ContainerStarted","Data":"3df20908cd5bdece2c96848d7a5055860bc56f7395558b624af478a8f4fbef11"} Feb 27 16:10:15 crc kubenswrapper[4830]: I0227 16:10:15.220717 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8fhp6" Feb 27 16:10:15 crc kubenswrapper[4830]: I0227 16:10:15.222417 4830 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-8fhp6 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.31:5443/healthz\": dial tcp 10.217.0.31:5443: connect: connection refused" start-of-body= Feb 27 16:10:15 crc kubenswrapper[4830]: I0227 16:10:15.222584 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8fhp6" podUID="4be12707-b1d1-4a30-bb2c-1af9e3d34d09" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.31:5443/healthz\": dial tcp 10.217.0.31:5443: connect: connection refused" Feb 27 16:10:15 crc kubenswrapper[4830]: I0227 16:10:15.224218 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5jfm7" 
event={"ID":"e7d85019-9a72-439e-a548-496027dd3d2c","Type":"ContainerStarted","Data":"c25793e6f0535ea71cb50036beff8a9cdb3143caa5658f133799da1192d2e931"} Feb 27 16:10:15 crc kubenswrapper[4830]: I0227 16:10:15.224376 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5jfm7" Feb 27 16:10:15 crc kubenswrapper[4830]: I0227 16:10:15.226067 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hgw6n" event={"ID":"17018e1c-72bf-40ba-9240-5d6684ec855a","Type":"ContainerStarted","Data":"569b27c765b1d56bfc3853ceb6a09aaeacfc0871ceebfdd17727faa2d60b6e39"} Feb 27 16:10:15 crc kubenswrapper[4830]: I0227 16:10:15.227285 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-c75pf" event={"ID":"627c853d-8a30-4a46-a190-dd490a39aa35","Type":"ContainerStarted","Data":"c168d99b098fbc2f0c8a37cffcd468187010a98998ac83d6c0ceb0d882c5dd85"} Feb 27 16:10:15 crc kubenswrapper[4830]: I0227 16:10:15.229286 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-ghjwl" event={"ID":"6de44150-41d6-426a-92f4-d29fb3ee1afe","Type":"ContainerStarted","Data":"389a7bfe0626e5bdb6319a0c05ccf1031004c6e65da512fc3ed018c35b5031da"} Feb 27 16:10:15 crc kubenswrapper[4830]: I0227 16:10:15.230906 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pz6hl" event={"ID":"584006df-9736-4ed2-aeba-118587f909d7","Type":"ContainerStarted","Data":"e050f6eb8aee221d7c3bbc4e1ae20f306dede75f04c9f143a4bd781f586b088a"} Feb 27 16:10:15 crc kubenswrapper[4830]: I0227 16:10:15.234357 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-5mm8b" 
event={"ID":"36eaeabc-508b-4a11-9dc5-45ff8b42e0a8","Type":"ContainerStarted","Data":"e0a87fb6f63e491528b8943d717d20d6019ffa8981a457300238d27d4aa39439"} Feb 27 16:10:15 crc kubenswrapper[4830]: I0227 16:10:15.236804 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-9c4wb" event={"ID":"e9c70786-d73e-4e48-a552-bdeb53daba49","Type":"ContainerStarted","Data":"426e6eddfcdbe12fec977e906194daba187b107141a1b47877a7a2bd2fafe5ce"} Feb 27 16:10:15 crc kubenswrapper[4830]: I0227 16:10:15.240018 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-hdgkf" event={"ID":"da82094a-cfed-404a-8fb9-2958b13ce78b","Type":"ContainerStarted","Data":"135eee53b52d3697ed2ec66898142f64d9e4c28108bee4e1d1ba130c1d497454"} Feb 27 16:10:15 crc kubenswrapper[4830]: I0227 16:10:15.286973 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-rfmxx" podStartSLOduration=7.286915518 podStartE2EDuration="7.286915518s" podCreationTimestamp="2026-02-27 16:10:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:10:15.23592377 +0000 UTC m=+211.325196233" watchObservedRunningTime="2026-02-27 16:10:15.286915518 +0000 UTC m=+211.376187981" Feb 27 16:10:15 crc kubenswrapper[4830]: I0227 16:10:15.288257 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-kgdlg" event={"ID":"6ba2fe32-66e0-4bcd-a646-9d07c9a21c54","Type":"ContainerStarted","Data":"97ac90a18188b65e3cd3c4c83c4e98d17b435fdc7ea5074b7d180326f993db8c"} Feb 27 16:10:15 crc kubenswrapper[4830]: I0227 16:10:15.298890 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-srljc" 
event={"ID":"88dc3209-64e3-47ef-b1f0-e2aeddfe8ece","Type":"ContainerStarted","Data":"2e5694224fda2393bd11d4a3b06f16fd856ef4cecd5b09e9319be255e975821b"} Feb 27 16:10:15 crc kubenswrapper[4830]: I0227 16:10:15.316388 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-n6xx6" event={"ID":"c235c0e5-a6f8-45d8-83e1-91be0d32ac19","Type":"ContainerStarted","Data":"3467a9be8de5efe1a53748f1bc96a607b55759330b4b5eff7c018c7ce34d3589"} Feb 27 16:10:15 crc kubenswrapper[4830]: I0227 16:10:15.317855 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8fhp6" podStartSLOduration=174.317834457 podStartE2EDuration="2m54.317834457s" podCreationTimestamp="2026-02-27 16:07:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:10:15.315182809 +0000 UTC m=+211.404455272" watchObservedRunningTime="2026-02-27 16:10:15.317834457 +0000 UTC m=+211.407106920" Feb 27 16:10:15 crc kubenswrapper[4830]: I0227 16:10:15.318219 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hgw6n" podStartSLOduration=174.318214877 podStartE2EDuration="2m54.318214877s" podCreationTimestamp="2026-02-27 16:07:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:10:15.294259318 +0000 UTC m=+211.383531781" watchObservedRunningTime="2026-02-27 16:10:15.318214877 +0000 UTC m=+211.407487340" Feb 27 16:10:15 crc kubenswrapper[4830]: I0227 16:10:15.319122 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"image-registry-697d97f7c8-9gfr4\" (UID: \"e98d0941-0faf-4719-88a1-ff04ca46eece\") " pod="openshift-image-registry/image-registry-697d97f7c8-9gfr4" Feb 27 16:10:15 crc kubenswrapper[4830]: E0227 16:10:15.319900 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 16:10:15.81988205 +0000 UTC m=+211.909154513 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9gfr4" (UID: "e98d0941-0faf-4719-88a1-ff04ca46eece") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:10:15 crc kubenswrapper[4830]: I0227 16:10:15.322791 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nfkdb" event={"ID":"5c04971d-7bad-44c6-bd80-e27f65c8637f","Type":"ContainerStarted","Data":"8cc6d04a70f44a69418c54b3f2466644db90305f1ab4a97a6be602072fa732f8"} Feb 27 16:10:15 crc kubenswrapper[4830]: I0227 16:10:15.324914 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hlg9d" event={"ID":"41bce4eb-4367-4dee-9c26-df8e0a1e4ea8","Type":"ContainerStarted","Data":"e6674c1c695c5667c5fb13bd177ea0e87a7d8250c235c2bb9dc32731b1c51cd5"} Feb 27 16:10:15 crc kubenswrapper[4830]: I0227 16:10:15.328274 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wpzdz" event={"ID":"910e8c41-1fdf-4f16-9902-532e21fe81ab","Type":"ContainerStarted","Data":"cbae82245fda834613371e773a1c33453c6b7f9a306bb854fe2195ce6f8ccc83"} 
Feb 27 16:10:15 crc kubenswrapper[4830]: I0227 16:10:15.329579 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-5mm8b" podStartSLOduration=175.32956163 podStartE2EDuration="2m55.32956163s" podCreationTimestamp="2026-02-27 16:07:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:10:15.326518532 +0000 UTC m=+211.415790995" watchObservedRunningTime="2026-02-27 16:10:15.32956163 +0000 UTC m=+211.418834093" Feb 27 16:10:15 crc kubenswrapper[4830]: I0227 16:10:15.330112 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29536800-n8kg4" event={"ID":"041ea905-9e91-41e3-9db6-820256d951aa","Type":"ContainerStarted","Data":"1d1db6ba2d26bed55d01d495302f772f7446ef48d3dfc1ab5d8cdb0c74fec5ae"} Feb 27 16:10:15 crc kubenswrapper[4830]: I0227 16:10:15.332247 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rvd8g" event={"ID":"af0d26af-5990-456b-a3bc-4ea4a14bbc25","Type":"ContainerStarted","Data":"ae01020039e1b95ae9f9e71e55c6a02275f9555d98cc0acedce32f6b79e693bf"} Feb 27 16:10:15 crc kubenswrapper[4830]: I0227 16:10:15.334356 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5shdf" event={"ID":"a235e26d-f41e-406d-992e-3dfb44246bdd","Type":"ContainerStarted","Data":"85ee3a891fb6a8b5d1d4f4348df05aa17d8fd23a2a41208cc455d232ab5b3db5"} Feb 27 16:10:15 crc kubenswrapper[4830]: I0227 16:10:15.336230 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7stx8" 
event={"ID":"7932070f-5985-4e34-84ff-0af75e044581","Type":"ContainerStarted","Data":"b853f7061f896557aaaf3193e02fbab1827fce0b93cdb516ad041eacf83b3c55"} Feb 27 16:10:15 crc kubenswrapper[4830]: I0227 16:10:15.338035 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-bfz6f" event={"ID":"c873a2ac-f7d3-4bea-ad09-b16891a1edf6","Type":"ContainerStarted","Data":"4dd9f8bdaf779814168195bbae32cda3ce59ce184235b77c2db59c37c0f0e99c"} Feb 27 16:10:15 crc kubenswrapper[4830]: I0227 16:10:15.338252 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-bfz6f" Feb 27 16:10:15 crc kubenswrapper[4830]: I0227 16:10:15.339016 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-b89fm" event={"ID":"df487ddf-86fd-4433-a32d-6d41ffeed9bc","Type":"ContainerStarted","Data":"e3ee9274e7db112ac077004659b27acbfc2b90d24cb9e2eb140fae1ba3a4c729"} Feb 27 16:10:15 crc kubenswrapper[4830]: I0227 16:10:15.339342 4830 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-bfz6f container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.34:8443/healthz\": dial tcp 10.217.0.34:8443: connect: connection refused" start-of-body= Feb 27 16:10:15 crc kubenswrapper[4830]: I0227 16:10:15.339440 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-bfz6f" podUID="c873a2ac-f7d3-4bea-ad09-b16891a1edf6" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.34:8443/healthz\": dial tcp 10.217.0.34:8443: connect: connection refused" Feb 27 16:10:15 crc kubenswrapper[4830]: I0227 16:10:15.342384 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ingress/router-default-5444994796-wh6nt" event={"ID":"d473053a-d4df-40b8-a876-5582e1d8a702","Type":"ContainerStarted","Data":"a4c5b682559a00b4e5aad2a5e8345a658dacfce9719f418c2644b04e3a57c6d6"} Feb 27 16:10:15 crc kubenswrapper[4830]: I0227 16:10:15.364379 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5jfm7" podStartSLOduration=175.36436101 podStartE2EDuration="2m55.36436101s" podCreationTimestamp="2026-02-27 16:07:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:10:15.357211215 +0000 UTC m=+211.446483678" watchObservedRunningTime="2026-02-27 16:10:15.36436101 +0000 UTC m=+211.453633473" Feb 27 16:10:15 crc kubenswrapper[4830]: I0227 16:10:15.366422 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-ghjwl" podStartSLOduration=175.366416392 podStartE2EDuration="2m55.366416392s" podCreationTimestamp="2026-02-27 16:07:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:10:15.341759446 +0000 UTC m=+211.431031909" watchObservedRunningTime="2026-02-27 16:10:15.366416392 +0000 UTC m=+211.455688855" Feb 27 16:10:15 crc kubenswrapper[4830]: I0227 16:10:15.376003 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8lclt" podStartSLOduration=174.37598832 podStartE2EDuration="2m54.37598832s" podCreationTimestamp="2026-02-27 16:07:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:10:15.375268172 +0000 UTC m=+211.464540635" watchObservedRunningTime="2026-02-27 16:10:15.37598832 +0000 UTC 
m=+211.465260783" Feb 27 16:10:15 crc kubenswrapper[4830]: I0227 16:10:15.393424 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-n7thk" podStartSLOduration=174.39340669 podStartE2EDuration="2m54.39340669s" podCreationTimestamp="2026-02-27 16:07:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:10:15.392397664 +0000 UTC m=+211.481670127" watchObservedRunningTime="2026-02-27 16:10:15.39340669 +0000 UTC m=+211.482679153" Feb 27 16:10:15 crc kubenswrapper[4830]: I0227 16:10:15.415614 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29536800-n8kg4" podStartSLOduration=175.415597963 podStartE2EDuration="2m55.415597963s" podCreationTimestamp="2026-02-27 16:07:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:10:15.414007972 +0000 UTC m=+211.503280435" watchObservedRunningTime="2026-02-27 16:10:15.415597963 +0000 UTC m=+211.504870426" Feb 27 16:10:15 crc kubenswrapper[4830]: I0227 16:10:15.420569 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 16:10:15 crc kubenswrapper[4830]: E0227 16:10:15.420692 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-27 16:10:15.920671745 +0000 UTC m=+212.009944208 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:10:15 crc kubenswrapper[4830]: I0227 16:10:15.421023 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9gfr4\" (UID: \"e98d0941-0faf-4719-88a1-ff04ca46eece\") " pod="openshift-image-registry/image-registry-697d97f7c8-9gfr4" Feb 27 16:10:15 crc kubenswrapper[4830]: E0227 16:10:15.422136 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 16:10:15.922126382 +0000 UTC m=+212.011398845 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9gfr4" (UID: "e98d0941-0faf-4719-88a1-ff04ca46eece") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:10:15 crc kubenswrapper[4830]: I0227 16:10:15.454146 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-b89fm" podStartSLOduration=175.45412827 podStartE2EDuration="2m55.45412827s" podCreationTimestamp="2026-02-27 16:07:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:10:15.433748562 +0000 UTC m=+211.523021025" watchObservedRunningTime="2026-02-27 16:10:15.45412827 +0000 UTC m=+211.543400733" Feb 27 16:10:15 crc kubenswrapper[4830]: I0227 16:10:15.454461 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rvd8g" podStartSLOduration=175.454457468 podStartE2EDuration="2m55.454457468s" podCreationTimestamp="2026-02-27 16:07:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:10:15.452099447 +0000 UTC m=+211.541371910" watchObservedRunningTime="2026-02-27 16:10:15.454457468 +0000 UTC m=+211.543729931" Feb 27 16:10:15 crc kubenswrapper[4830]: I0227 16:10:15.467292 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-z5n7d" podStartSLOduration=175.467284939 podStartE2EDuration="2m55.467284939s" 
podCreationTimestamp="2026-02-27 16:07:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:10:15.46618452 +0000 UTC m=+211.555456983" watchObservedRunningTime="2026-02-27 16:10:15.467284939 +0000 UTC m=+211.556557402" Feb 27 16:10:15 crc kubenswrapper[4830]: I0227 16:10:15.517471 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-xk2qk" podStartSLOduration=174.517452446 podStartE2EDuration="2m54.517452446s" podCreationTimestamp="2026-02-27 16:07:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:10:15.487895192 +0000 UTC m=+211.577167655" watchObservedRunningTime="2026-02-27 16:10:15.517452446 +0000 UTC m=+211.606724909" Feb 27 16:10:15 crc kubenswrapper[4830]: I0227 16:10:15.518451 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-wh6nt" podStartSLOduration=175.518446212 podStartE2EDuration="2m55.518446212s" podCreationTimestamp="2026-02-27 16:07:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:10:15.516004998 +0000 UTC m=+211.605277461" watchObservedRunningTime="2026-02-27 16:10:15.518446212 +0000 UTC m=+211.607718675" Feb 27 16:10:15 crc kubenswrapper[4830]: I0227 16:10:15.522023 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 16:10:15 crc kubenswrapper[4830]: E0227 16:10:15.522236 4830 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 16:10:16.022210308 +0000 UTC m=+212.111482771 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:10:15 crc kubenswrapper[4830]: I0227 16:10:15.522586 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9gfr4\" (UID: \"e98d0941-0faf-4719-88a1-ff04ca46eece\") " pod="openshift-image-registry/image-registry-697d97f7c8-9gfr4" Feb 27 16:10:15 crc kubenswrapper[4830]: E0227 16:10:15.525024 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 16:10:16.025015981 +0000 UTC m=+212.114288444 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9gfr4" (UID: "e98d0941-0faf-4719-88a1-ff04ca46eece") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:10:15 crc kubenswrapper[4830]: I0227 16:10:15.536638 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nfkdb" podStartSLOduration=175.536624322 podStartE2EDuration="2m55.536624322s" podCreationTimestamp="2026-02-27 16:07:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:10:15.536030356 +0000 UTC m=+211.625302819" watchObservedRunningTime="2026-02-27 16:10:15.536624322 +0000 UTC m=+211.625896785" Feb 27 16:10:15 crc kubenswrapper[4830]: I0227 16:10:15.580083 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7stx8" podStartSLOduration=175.580066734 podStartE2EDuration="2m55.580066734s" podCreationTimestamp="2026-02-27 16:07:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:10:15.576610974 +0000 UTC m=+211.665883427" watchObservedRunningTime="2026-02-27 16:10:15.580066734 +0000 UTC m=+211.669339197" Feb 27 16:10:15 crc kubenswrapper[4830]: I0227 16:10:15.580468 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-hdgkf" podStartSLOduration=174.580463244 podStartE2EDuration="2m54.580463244s" podCreationTimestamp="2026-02-27 16:07:21 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:10:15.564135372 +0000 UTC m=+211.653407835" watchObservedRunningTime="2026-02-27 16:10:15.580463244 +0000 UTC m=+211.669735707" Feb 27 16:10:15 crc kubenswrapper[4830]: I0227 16:10:15.593384 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2x2lh" podStartSLOduration=175.593364067 podStartE2EDuration="2m55.593364067s" podCreationTimestamp="2026-02-27 16:07:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:10:15.58996462 +0000 UTC m=+211.679237103" watchObservedRunningTime="2026-02-27 16:10:15.593364067 +0000 UTC m=+211.682636530" Feb 27 16:10:15 crc kubenswrapper[4830]: I0227 16:10:15.623416 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 16:10:15 crc kubenswrapper[4830]: E0227 16:10:15.623735 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 16:10:16.123707982 +0000 UTC m=+212.212980445 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:10:15 crc kubenswrapper[4830]: I0227 16:10:15.623941 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9gfr4\" (UID: \"e98d0941-0faf-4719-88a1-ff04ca46eece\") " pod="openshift-image-registry/image-registry-697d97f7c8-9gfr4" Feb 27 16:10:15 crc kubenswrapper[4830]: E0227 16:10:15.624385 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 16:10:16.124373319 +0000 UTC m=+212.213645782 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9gfr4" (UID: "e98d0941-0faf-4719-88a1-ff04ca46eece") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:10:15 crc kubenswrapper[4830]: I0227 16:10:15.725535 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 16:10:15 crc kubenswrapper[4830]: E0227 16:10:15.726236 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 16:10:16.226210261 +0000 UTC m=+212.315482734 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:10:15 crc kubenswrapper[4830]: I0227 16:10:15.827626 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9gfr4\" (UID: \"e98d0941-0faf-4719-88a1-ff04ca46eece\") " pod="openshift-image-registry/image-registry-697d97f7c8-9gfr4" Feb 27 16:10:15 crc kubenswrapper[4830]: E0227 16:10:15.828023 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 16:10:16.328007782 +0000 UTC m=+212.417280245 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9gfr4" (UID: "e98d0941-0faf-4719-88a1-ff04ca46eece") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:10:15 crc kubenswrapper[4830]: I0227 16:10:15.874188 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-wh6nt" Feb 27 16:10:15 crc kubenswrapper[4830]: I0227 16:10:15.876607 4830 patch_prober.go:28] interesting pod/router-default-5444994796-wh6nt container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Feb 27 16:10:15 crc kubenswrapper[4830]: I0227 16:10:15.876664 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wh6nt" podUID="d473053a-d4df-40b8-a876-5582e1d8a702" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Feb 27 16:10:15 crc kubenswrapper[4830]: I0227 16:10:15.928822 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 16:10:15 crc kubenswrapper[4830]: E0227 16:10:15.929038 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 16:10:16.428923879 +0000 UTC m=+212.518196342 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:10:15 crc kubenswrapper[4830]: I0227 16:10:15.929357 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9gfr4\" (UID: \"e98d0941-0faf-4719-88a1-ff04ca46eece\") " pod="openshift-image-registry/image-registry-697d97f7c8-9gfr4" Feb 27 16:10:15 crc kubenswrapper[4830]: E0227 16:10:15.929725 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 16:10:16.42971623 +0000 UTC m=+212.518988693 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9gfr4" (UID: "e98d0941-0faf-4719-88a1-ff04ca46eece") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:10:16 crc kubenswrapper[4830]: I0227 16:10:16.030325 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 16:10:16 crc kubenswrapper[4830]: E0227 16:10:16.030578 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 16:10:16.530546676 +0000 UTC m=+212.619819139 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:10:16 crc kubenswrapper[4830]: I0227 16:10:16.030664 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9gfr4\" (UID: \"e98d0941-0faf-4719-88a1-ff04ca46eece\") " pod="openshift-image-registry/image-registry-697d97f7c8-9gfr4" Feb 27 16:10:16 crc kubenswrapper[4830]: E0227 16:10:16.031024 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 16:10:16.531016628 +0000 UTC m=+212.620289091 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9gfr4" (UID: "e98d0941-0faf-4719-88a1-ff04ca46eece") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:10:16 crc kubenswrapper[4830]: I0227 16:10:16.131714 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 16:10:16 crc kubenswrapper[4830]: E0227 16:10:16.132025 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 16:10:16.631936956 +0000 UTC m=+212.721209449 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:10:16 crc kubenswrapper[4830]: I0227 16:10:16.233270 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9gfr4\" (UID: \"e98d0941-0faf-4719-88a1-ff04ca46eece\") " pod="openshift-image-registry/image-registry-697d97f7c8-9gfr4" Feb 27 16:10:16 crc kubenswrapper[4830]: E0227 16:10:16.233652 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 16:10:16.733635974 +0000 UTC m=+212.822908437 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9gfr4" (UID: "e98d0941-0faf-4719-88a1-ff04ca46eece") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:10:16 crc kubenswrapper[4830]: I0227 16:10:16.333831 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 16:10:16 crc kubenswrapper[4830]: E0227 16:10:16.334007 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 16:10:16.833976887 +0000 UTC m=+212.923249350 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:10:16 crc kubenswrapper[4830]: I0227 16:10:16.334271 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9gfr4\" (UID: \"e98d0941-0faf-4719-88a1-ff04ca46eece\") " pod="openshift-image-registry/image-registry-697d97f7c8-9gfr4" Feb 27 16:10:16 crc kubenswrapper[4830]: E0227 16:10:16.334546 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 16:10:16.834534932 +0000 UTC m=+212.923807395 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9gfr4" (UID: "e98d0941-0faf-4719-88a1-ff04ca46eece") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:10:16 crc kubenswrapper[4830]: I0227 16:10:16.367251 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-srljc" event={"ID":"88dc3209-64e3-47ef-b1f0-e2aeddfe8ece","Type":"ContainerStarted","Data":"01bf8119082af97d426e791595f6f790f8ce945ad6fdd022d3961e7c1aff759e"} Feb 27 16:10:16 crc kubenswrapper[4830]: I0227 16:10:16.367483 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-srljc" Feb 27 16:10:16 crc kubenswrapper[4830]: I0227 16:10:16.372332 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5shdf" event={"ID":"a235e26d-f41e-406d-992e-3dfb44246bdd","Type":"ContainerStarted","Data":"3e6c8da18416332de9166540fd576ee63b46410febd911a210cd643c07e5b9e2"} Feb 27 16:10:16 crc kubenswrapper[4830]: I0227 16:10:16.378075 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-9c4wb" event={"ID":"e9c70786-d73e-4e48-a552-bdeb53daba49","Type":"ContainerStarted","Data":"584c8d3fb942c2f7e7d588ed369445bbe51a6fcd5d8f76ff9a0e4302bb489484"} Feb 27 16:10:16 crc kubenswrapper[4830]: I0227 16:10:16.381145 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-bfz6f" podStartSLOduration=175.381136196 podStartE2EDuration="2m55.381136196s" podCreationTimestamp="2026-02-27 16:07:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:10:15.612355899 +0000 UTC m=+211.701628362" watchObservedRunningTime="2026-02-27 16:10:16.381136196 +0000 UTC m=+212.470408659" Feb 27 16:10:16 crc kubenswrapper[4830]: I0227 16:10:16.383693 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-c75pf" event={"ID":"627c853d-8a30-4a46-a190-dd490a39aa35","Type":"ContainerStarted","Data":"cda442b1ba6a7171572df7416cbfc872ca52de8ffb1d79f2020ab550deded9b1"} Feb 27 16:10:16 crc kubenswrapper[4830]: I0227 16:10:16.395019 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hlg9d" event={"ID":"41bce4eb-4367-4dee-9c26-df8e0a1e4ea8","Type":"ContainerStarted","Data":"9e7db85d36b52bb2725a63ecb3c45ba7bab6f5d0397dd6e3783df2fc887e0055"} Feb 27 16:10:16 crc kubenswrapper[4830]: I0227 16:10:16.395247 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hlg9d" Feb 27 16:10:16 crc kubenswrapper[4830]: I0227 16:10:16.398638 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5shdf" podStartSLOduration=175.398624037 podStartE2EDuration="2m55.398624037s" podCreationTimestamp="2026-02-27 16:07:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:10:16.397817627 +0000 UTC m=+212.487090090" watchObservedRunningTime="2026-02-27 16:10:16.398624037 +0000 UTC m=+212.487896500" Feb 27 16:10:16 crc kubenswrapper[4830]: I0227 16:10:16.401074 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-srljc" podStartSLOduration=8.401064991 podStartE2EDuration="8.401064991s" 
podCreationTimestamp="2026-02-27 16:10:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:10:16.382731307 +0000 UTC m=+212.472003770" watchObservedRunningTime="2026-02-27 16:10:16.401064991 +0000 UTC m=+212.490337454" Feb 27 16:10:16 crc kubenswrapper[4830]: I0227 16:10:16.404890 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pz6hl" event={"ID":"584006df-9736-4ed2-aeba-118587f909d7","Type":"ContainerStarted","Data":"b07cc3bf44827b7e54b458f962c682431d058e707f4382899b0e08de375a47f6"} Feb 27 16:10:16 crc kubenswrapper[4830]: I0227 16:10:16.411732 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-n6xx6" event={"ID":"c235c0e5-a6f8-45d8-83e1-91be0d32ac19","Type":"ContainerStarted","Data":"dbdeae58f5eb4ef9a5303fa05cce5a9d87699b94ebf793ea628a4bc8e7a95af4"} Feb 27 16:10:16 crc kubenswrapper[4830]: I0227 16:10:16.416322 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wpzdz" event={"ID":"910e8c41-1fdf-4f16-9902-532e21fe81ab","Type":"ContainerStarted","Data":"307f9d9d3f24b9e2c045de4b2272bbc3da8b918fe2099d1f8204f05734854f29"} Feb 27 16:10:16 crc kubenswrapper[4830]: I0227 16:10:16.420661 4830 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-bfz6f container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.34:8443/healthz\": dial tcp 10.217.0.34:8443: connect: connection refused" start-of-body= Feb 27 16:10:16 crc kubenswrapper[4830]: I0227 16:10:16.420699 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-bfz6f" podUID="c873a2ac-f7d3-4bea-ad09-b16891a1edf6" containerName="catalog-operator" probeResult="failure" 
output="Get \"https://10.217.0.34:8443/healthz\": dial tcp 10.217.0.34:8443: connect: connection refused" Feb 27 16:10:16 crc kubenswrapper[4830]: I0227 16:10:16.420800 4830 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-8fhp6 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.31:5443/healthz\": dial tcp 10.217.0.31:5443: connect: connection refused" start-of-body= Feb 27 16:10:16 crc kubenswrapper[4830]: I0227 16:10:16.420882 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8fhp6" podUID="4be12707-b1d1-4a30-bb2c-1af9e3d34d09" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.31:5443/healthz\": dial tcp 10.217.0.31:5443: connect: connection refused" Feb 27 16:10:16 crc kubenswrapper[4830]: I0227 16:10:16.435267 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 16:10:16 crc kubenswrapper[4830]: E0227 16:10:16.436030 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 16:10:16.935972943 +0000 UTC m=+213.025245406 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:10:16 crc kubenswrapper[4830]: I0227 16:10:16.452678 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-9c4wb" podStartSLOduration=176.452649804 podStartE2EDuration="2m56.452649804s" podCreationTimestamp="2026-02-27 16:07:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:10:16.433961121 +0000 UTC m=+212.523233574" watchObservedRunningTime="2026-02-27 16:10:16.452649804 +0000 UTC m=+212.541922277" Feb 27 16:10:16 crc kubenswrapper[4830]: I0227 16:10:16.460059 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pz6hl" podStartSLOduration=175.460045265 podStartE2EDuration="2m55.460045265s" podCreationTimestamp="2026-02-27 16:07:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:10:16.454197614 +0000 UTC m=+212.543470077" watchObservedRunningTime="2026-02-27 16:10:16.460045265 +0000 UTC m=+212.549317738" Feb 27 16:10:16 crc kubenswrapper[4830]: I0227 16:10:16.475825 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hlg9d" podStartSLOduration=175.475800653 podStartE2EDuration="2m55.475800653s" podCreationTimestamp="2026-02-27 16:07:21 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:10:16.475071393 +0000 UTC m=+212.564343856" watchObservedRunningTime="2026-02-27 16:10:16.475800653 +0000 UTC m=+212.565073116" Feb 27 16:10:16 crc kubenswrapper[4830]: I0227 16:10:16.489262 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-kgdlg" podStartSLOduration=176.489241589 podStartE2EDuration="2m56.489241589s" podCreationTimestamp="2026-02-27 16:07:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:10:16.487585897 +0000 UTC m=+212.576858360" watchObservedRunningTime="2026-02-27 16:10:16.489241589 +0000 UTC m=+212.578514052" Feb 27 16:10:16 crc kubenswrapper[4830]: I0227 16:10:16.511374 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-c75pf" podStartSLOduration=175.511353471 podStartE2EDuration="2m55.511353471s" podCreationTimestamp="2026-02-27 16:07:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:10:16.502310407 +0000 UTC m=+212.591582870" watchObservedRunningTime="2026-02-27 16:10:16.511353471 +0000 UTC m=+212.600625934" Feb 27 16:10:16 crc kubenswrapper[4830]: I0227 16:10:16.540125 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9gfr4\" (UID: \"e98d0941-0faf-4719-88a1-ff04ca46eece\") " pod="openshift-image-registry/image-registry-697d97f7c8-9gfr4" Feb 27 16:10:16 crc kubenswrapper[4830]: E0227 16:10:16.540596 4830 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 16:10:17.040579316 +0000 UTC m=+213.129851779 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9gfr4" (UID: "e98d0941-0faf-4719-88a1-ff04ca46eece") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:10:16 crc kubenswrapper[4830]: I0227 16:10:16.585993 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wpzdz" podStartSLOduration=176.585940818 podStartE2EDuration="2m56.585940818s" podCreationTimestamp="2026-02-27 16:07:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:10:16.584615144 +0000 UTC m=+212.673887607" watchObservedRunningTime="2026-02-27 16:10:16.585940818 +0000 UTC m=+212.675213281" Feb 27 16:10:16 crc kubenswrapper[4830]: I0227 16:10:16.614084 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-n6xx6" podStartSLOduration=176.614067875 podStartE2EDuration="2m56.614067875s" podCreationTimestamp="2026-02-27 16:07:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:10:16.610422112 +0000 UTC m=+212.699694575" watchObservedRunningTime="2026-02-27 16:10:16.614067875 +0000 UTC m=+212.703340338" Feb 27 16:10:16 crc kubenswrapper[4830]: I0227 16:10:16.641845 4830 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 16:10:16 crc kubenswrapper[4830]: E0227 16:10:16.642651 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 16:10:17.142626814 +0000 UTC m=+213.231899267 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:10:16 crc kubenswrapper[4830]: I0227 16:10:16.743792 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9gfr4\" (UID: \"e98d0941-0faf-4719-88a1-ff04ca46eece\") " pod="openshift-image-registry/image-registry-697d97f7c8-9gfr4" Feb 27 16:10:16 crc kubenswrapper[4830]: E0227 16:10:16.744332 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 16:10:17.244308001 +0000 UTC m=+213.333580464 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9gfr4" (UID: "e98d0941-0faf-4719-88a1-ff04ca46eece") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:10:16 crc kubenswrapper[4830]: I0227 16:10:16.845402 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 16:10:16 crc kubenswrapper[4830]: E0227 16:10:16.845952 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 16:10:17.345923947 +0000 UTC m=+213.435196410 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:10:16 crc kubenswrapper[4830]: I0227 16:10:16.881240 4830 patch_prober.go:28] interesting pod/router-default-5444994796-wh6nt container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Feb 27 16:10:16 crc kubenswrapper[4830]: I0227 16:10:16.881303 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wh6nt" podUID="d473053a-d4df-40b8-a876-5582e1d8a702" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Feb 27 16:10:16 crc kubenswrapper[4830]: I0227 16:10:16.946848 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9gfr4\" (UID: \"e98d0941-0faf-4719-88a1-ff04ca46eece\") " pod="openshift-image-registry/image-registry-697d97f7c8-9gfr4" Feb 27 16:10:16 crc kubenswrapper[4830]: E0227 16:10:16.947619 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 16:10:17.447605394 +0000 UTC m=+213.536877857 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9gfr4" (UID: "e98d0941-0faf-4719-88a1-ff04ca46eece") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:10:17 crc kubenswrapper[4830]: I0227 16:10:17.048177 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 16:10:17 crc kubenswrapper[4830]: E0227 16:10:17.048397 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 16:10:17.548362628 +0000 UTC m=+213.637635091 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:10:17 crc kubenswrapper[4830]: I0227 16:10:17.048541 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9gfr4\" (UID: \"e98d0941-0faf-4719-88a1-ff04ca46eece\") " pod="openshift-image-registry/image-registry-697d97f7c8-9gfr4" Feb 27 16:10:17 crc kubenswrapper[4830]: E0227 16:10:17.049063 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 16:10:17.549054657 +0000 UTC m=+213.638327120 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9gfr4" (UID: "e98d0941-0faf-4719-88a1-ff04ca46eece") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:10:17 crc kubenswrapper[4830]: I0227 16:10:17.149696 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 16:10:17 crc kubenswrapper[4830]: E0227 16:10:17.150149 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 16:10:17.650115158 +0000 UTC m=+213.739387621 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:10:17 crc kubenswrapper[4830]: I0227 16:10:17.250910 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9gfr4\" (UID: \"e98d0941-0faf-4719-88a1-ff04ca46eece\") " pod="openshift-image-registry/image-registry-697d97f7c8-9gfr4" Feb 27 16:10:17 crc kubenswrapper[4830]: E0227 16:10:17.251194 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 16:10:17.75118347 +0000 UTC m=+213.840455933 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9gfr4" (UID: "e98d0941-0faf-4719-88a1-ff04ca46eece") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:10:17 crc kubenswrapper[4830]: I0227 16:10:17.352207 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 16:10:17 crc kubenswrapper[4830]: E0227 16:10:17.352494 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 16:10:17.852476888 +0000 UTC m=+213.941749351 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:10:17 crc kubenswrapper[4830]: I0227 16:10:17.453832 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9gfr4\" (UID: \"e98d0941-0faf-4719-88a1-ff04ca46eece\") " pod="openshift-image-registry/image-registry-697d97f7c8-9gfr4" Feb 27 16:10:17 crc kubenswrapper[4830]: E0227 16:10:17.454158 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 16:10:17.954146095 +0000 UTC m=+214.043418548 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9gfr4" (UID: "e98d0941-0faf-4719-88a1-ff04ca46eece") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:10:17 crc kubenswrapper[4830]: I0227 16:10:17.556791 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 16:10:17 crc kubenswrapper[4830]: E0227 16:10:17.557056 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 16:10:18.057019234 +0000 UTC m=+214.146291687 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:10:17 crc kubenswrapper[4830]: I0227 16:10:17.557248 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9gfr4\" (UID: \"e98d0941-0faf-4719-88a1-ff04ca46eece\") " pod="openshift-image-registry/image-registry-697d97f7c8-9gfr4" Feb 27 16:10:17 crc kubenswrapper[4830]: E0227 16:10:17.558460 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 16:10:18.058438571 +0000 UTC m=+214.147711034 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9gfr4" (UID: "e98d0941-0faf-4719-88a1-ff04ca46eece") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:10:17 crc kubenswrapper[4830]: I0227 16:10:17.658582 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 16:10:17 crc kubenswrapper[4830]: E0227 16:10:17.658790 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 16:10:18.158757473 +0000 UTC m=+214.248029936 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:10:17 crc kubenswrapper[4830]: I0227 16:10:17.659006 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9gfr4\" (UID: \"e98d0941-0faf-4719-88a1-ff04ca46eece\") " pod="openshift-image-registry/image-registry-697d97f7c8-9gfr4" Feb 27 16:10:17 crc kubenswrapper[4830]: E0227 16:10:17.659387 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 16:10:18.159379909 +0000 UTC m=+214.248652372 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9gfr4" (UID: "e98d0941-0faf-4719-88a1-ff04ca46eece") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:10:17 crc kubenswrapper[4830]: I0227 16:10:17.760072 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 16:10:17 crc kubenswrapper[4830]: E0227 16:10:17.760230 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 16:10:18.260207515 +0000 UTC m=+214.349479978 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:10:17 crc kubenswrapper[4830]: I0227 16:10:17.760318 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9gfr4\" (UID: \"e98d0941-0faf-4719-88a1-ff04ca46eece\") " pod="openshift-image-registry/image-registry-697d97f7c8-9gfr4" Feb 27 16:10:17 crc kubenswrapper[4830]: E0227 16:10:17.760604 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 16:10:18.260597525 +0000 UTC m=+214.349869988 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9gfr4" (UID: "e98d0941-0faf-4719-88a1-ff04ca46eece") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:10:17 crc kubenswrapper[4830]: I0227 16:10:17.861128 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 16:10:17 crc kubenswrapper[4830]: E0227 16:10:17.861310 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 16:10:18.361281687 +0000 UTC m=+214.450554150 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:10:17 crc kubenswrapper[4830]: I0227 16:10:17.861360 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9gfr4\" (UID: \"e98d0941-0faf-4719-88a1-ff04ca46eece\") " pod="openshift-image-registry/image-registry-697d97f7c8-9gfr4" Feb 27 16:10:17 crc kubenswrapper[4830]: E0227 16:10:17.861650 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 16:10:18.361638046 +0000 UTC m=+214.450910509 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9gfr4" (UID: "e98d0941-0faf-4719-88a1-ff04ca46eece") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:10:17 crc kubenswrapper[4830]: I0227 16:10:17.884872 4830 patch_prober.go:28] interesting pod/router-default-5444994796-wh6nt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 27 16:10:17 crc kubenswrapper[4830]: [-]has-synced failed: reason withheld Feb 27 16:10:17 crc kubenswrapper[4830]: [+]process-running ok Feb 27 16:10:17 crc kubenswrapper[4830]: healthz check failed Feb 27 16:10:17 crc kubenswrapper[4830]: I0227 16:10:17.885315 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wh6nt" podUID="d473053a-d4df-40b8-a876-5582e1d8a702" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 27 16:10:17 crc kubenswrapper[4830]: I0227 16:10:17.962865 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 16:10:17 crc kubenswrapper[4830]: E0227 16:10:17.963125 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-27 16:10:18.463085998 +0000 UTC m=+214.552358461 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:10:17 crc kubenswrapper[4830]: I0227 16:10:17.963187 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9gfr4\" (UID: \"e98d0941-0faf-4719-88a1-ff04ca46eece\") " pod="openshift-image-registry/image-registry-697d97f7c8-9gfr4" Feb 27 16:10:17 crc kubenswrapper[4830]: E0227 16:10:17.963492 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 16:10:18.463476988 +0000 UTC m=+214.552749451 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9gfr4" (UID: "e98d0941-0faf-4719-88a1-ff04ca46eece") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:10:18 crc kubenswrapper[4830]: I0227 16:10:18.064850 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 16:10:18 crc kubenswrapper[4830]: E0227 16:10:18.065044 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 16:10:18.565016012 +0000 UTC m=+214.654288475 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:10:18 crc kubenswrapper[4830]: I0227 16:10:18.065097 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9gfr4\" (UID: \"e98d0941-0faf-4719-88a1-ff04ca46eece\") " pod="openshift-image-registry/image-registry-697d97f7c8-9gfr4" Feb 27 16:10:18 crc kubenswrapper[4830]: E0227 16:10:18.065369 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 16:10:18.565357701 +0000 UTC m=+214.654630164 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9gfr4" (UID: "e98d0941-0faf-4719-88a1-ff04ca46eece") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:10:18 crc kubenswrapper[4830]: I0227 16:10:18.165940 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 16:10:18 crc kubenswrapper[4830]: E0227 16:10:18.166046 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 16:10:18.666027003 +0000 UTC m=+214.755299466 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:10:18 crc kubenswrapper[4830]: I0227 16:10:18.166158 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9gfr4\" (UID: \"e98d0941-0faf-4719-88a1-ff04ca46eece\") " pod="openshift-image-registry/image-registry-697d97f7c8-9gfr4" Feb 27 16:10:18 crc kubenswrapper[4830]: E0227 16:10:18.166457 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 16:10:18.666450313 +0000 UTC m=+214.755722776 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9gfr4" (UID: "e98d0941-0faf-4719-88a1-ff04ca46eece") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:10:18 crc kubenswrapper[4830]: I0227 16:10:18.267491 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 16:10:18 crc kubenswrapper[4830]: E0227 16:10:18.267660 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 16:10:18.767630658 +0000 UTC m=+214.856903121 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:10:18 crc kubenswrapper[4830]: I0227 16:10:18.267767 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9gfr4\" (UID: \"e98d0941-0faf-4719-88a1-ff04ca46eece\") " pod="openshift-image-registry/image-registry-697d97f7c8-9gfr4" Feb 27 16:10:18 crc kubenswrapper[4830]: E0227 16:10:18.268121 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 16:10:18.768112471 +0000 UTC m=+214.857384934 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9gfr4" (UID: "e98d0941-0faf-4719-88a1-ff04ca46eece") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:10:18 crc kubenswrapper[4830]: I0227 16:10:18.304539 4830 ???:1] "http: TLS handshake error from 192.168.126.11:56550: no serving certificate available for the kubelet" Feb 27 16:10:18 crc kubenswrapper[4830]: I0227 16:10:18.369036 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 16:10:18 crc kubenswrapper[4830]: E0227 16:10:18.369173 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 16:10:18.869155762 +0000 UTC m=+214.958428225 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:10:18 crc kubenswrapper[4830]: I0227 16:10:18.369254 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9gfr4\" (UID: \"e98d0941-0faf-4719-88a1-ff04ca46eece\") " pod="openshift-image-registry/image-registry-697d97f7c8-9gfr4" Feb 27 16:10:18 crc kubenswrapper[4830]: E0227 16:10:18.369495 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 16:10:18.86948602 +0000 UTC m=+214.958758483 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9gfr4" (UID: "e98d0941-0faf-4719-88a1-ff04ca46eece") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:10:18 crc kubenswrapper[4830]: I0227 16:10:18.402692 4830 ???:1] "http: TLS handshake error from 192.168.126.11:56564: no serving certificate available for the kubelet" Feb 27 16:10:18 crc kubenswrapper[4830]: I0227 16:10:18.442183 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-gw4c8" event={"ID":"3311d92d-90da-42f5-acf3-3ec723c5edad","Type":"ContainerStarted","Data":"1ebe23edaaff2ec2e5f1265d020c6aa79ea7757538663e9e427800afdfed765a"} Feb 27 16:10:18 crc kubenswrapper[4830]: I0227 16:10:18.470012 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 16:10:18 crc kubenswrapper[4830]: E0227 16:10:18.470341 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 16:10:18.970326816 +0000 UTC m=+215.059599269 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:10:18 crc kubenswrapper[4830]: I0227 16:10:18.476481 4830 ???:1] "http: TLS handshake error from 192.168.126.11:56572: no serving certificate available for the kubelet" Feb 27 16:10:18 crc kubenswrapper[4830]: I0227 16:10:18.571967 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9gfr4\" (UID: \"e98d0941-0faf-4719-88a1-ff04ca46eece\") " pod="openshift-image-registry/image-registry-697d97f7c8-9gfr4" Feb 27 16:10:18 crc kubenswrapper[4830]: E0227 16:10:18.572390 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 16:10:19.072369954 +0000 UTC m=+215.161642417 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9gfr4" (UID: "e98d0941-0faf-4719-88a1-ff04ca46eece") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:10:18 crc kubenswrapper[4830]: I0227 16:10:18.591676 4830 ???:1] "http: TLS handshake error from 192.168.126.11:56576: no serving certificate available for the kubelet" Feb 27 16:10:18 crc kubenswrapper[4830]: I0227 16:10:18.673346 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 16:10:18 crc kubenswrapper[4830]: E0227 16:10:18.673671 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 16:10:19.173657051 +0000 UTC m=+215.262929514 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:10:18 crc kubenswrapper[4830]: I0227 16:10:18.693129 4830 ???:1] "http: TLS handshake error from 192.168.126.11:56588: no serving certificate available for the kubelet" Feb 27 16:10:18 crc kubenswrapper[4830]: I0227 16:10:18.774956 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9gfr4\" (UID: \"e98d0941-0faf-4719-88a1-ff04ca46eece\") " pod="openshift-image-registry/image-registry-697d97f7c8-9gfr4" Feb 27 16:10:18 crc kubenswrapper[4830]: E0227 16:10:18.775247 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 16:10:19.275236196 +0000 UTC m=+215.364508659 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9gfr4" (UID: "e98d0941-0faf-4719-88a1-ff04ca46eece") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:10:18 crc kubenswrapper[4830]: I0227 16:10:18.787257 4830 ???:1] "http: TLS handshake error from 192.168.126.11:56598: no serving certificate available for the kubelet" Feb 27 16:10:18 crc kubenswrapper[4830]: I0227 16:10:18.876144 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 16:10:18 crc kubenswrapper[4830]: E0227 16:10:18.876326 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 16:10:19.376302578 +0000 UTC m=+215.465575041 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:10:18 crc kubenswrapper[4830]: I0227 16:10:18.876400 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9gfr4\" (UID: \"e98d0941-0faf-4719-88a1-ff04ca46eece\") " pod="openshift-image-registry/image-registry-697d97f7c8-9gfr4" Feb 27 16:10:18 crc kubenswrapper[4830]: E0227 16:10:18.876710 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 16:10:19.376697678 +0000 UTC m=+215.465970141 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9gfr4" (UID: "e98d0941-0faf-4719-88a1-ff04ca46eece") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:10:18 crc kubenswrapper[4830]: I0227 16:10:18.876930 4830 patch_prober.go:28] interesting pod/router-default-5444994796-wh6nt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 27 16:10:18 crc kubenswrapper[4830]: [-]has-synced failed: reason withheld Feb 27 16:10:18 crc kubenswrapper[4830]: [+]process-running ok Feb 27 16:10:18 crc kubenswrapper[4830]: healthz check failed Feb 27 16:10:18 crc kubenswrapper[4830]: I0227 16:10:18.876968 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wh6nt" podUID="d473053a-d4df-40b8-a876-5582e1d8a702" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 27 16:10:18 crc kubenswrapper[4830]: I0227 16:10:18.897072 4830 ???:1] "http: TLS handshake error from 192.168.126.11:56600: no serving certificate available for the kubelet" Feb 27 16:10:18 crc kubenswrapper[4830]: I0227 16:10:18.977027 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 16:10:18 crc kubenswrapper[4830]: E0227 16:10:18.977351 4830 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 16:10:19.477337579 +0000 UTC m=+215.566610042 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:10:19 crc kubenswrapper[4830]: I0227 16:10:19.078568 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9gfr4\" (UID: \"e98d0941-0faf-4719-88a1-ff04ca46eece\") " pod="openshift-image-registry/image-registry-697d97f7c8-9gfr4" Feb 27 16:10:19 crc kubenswrapper[4830]: E0227 16:10:19.078903 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 16:10:19.578892694 +0000 UTC m=+215.668165157 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9gfr4" (UID: "e98d0941-0faf-4719-88a1-ff04ca46eece") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:10:19 crc kubenswrapper[4830]: I0227 16:10:19.079291 4830 ???:1] "http: TLS handshake error from 192.168.126.11:56610: no serving certificate available for the kubelet" Feb 27 16:10:19 crc kubenswrapper[4830]: I0227 16:10:19.179529 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 16:10:19 crc kubenswrapper[4830]: E0227 16:10:19.179750 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 16:10:19.679724469 +0000 UTC m=+215.768996932 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:10:19 crc kubenswrapper[4830]: I0227 16:10:19.179823 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9gfr4\" (UID: \"e98d0941-0faf-4719-88a1-ff04ca46eece\") " pod="openshift-image-registry/image-registry-697d97f7c8-9gfr4" Feb 27 16:10:19 crc kubenswrapper[4830]: E0227 16:10:19.180093 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 16:10:19.680081918 +0000 UTC m=+215.769354381 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9gfr4" (UID: "e98d0941-0faf-4719-88a1-ff04ca46eece") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:10:19 crc kubenswrapper[4830]: I0227 16:10:19.281407 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 16:10:19 crc kubenswrapper[4830]: E0227 16:10:19.281723 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 16:10:19.781707055 +0000 UTC m=+215.870979518 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:10:19 crc kubenswrapper[4830]: I0227 16:10:19.382453 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9gfr4\" (UID: \"e98d0941-0faf-4719-88a1-ff04ca46eece\") " pod="openshift-image-registry/image-registry-697d97f7c8-9gfr4" Feb 27 16:10:19 crc kubenswrapper[4830]: E0227 16:10:19.382712 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 16:10:19.882698565 +0000 UTC m=+215.971971028 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9gfr4" (UID: "e98d0941-0faf-4719-88a1-ff04ca46eece") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:10:19 crc kubenswrapper[4830]: I0227 16:10:19.483342 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 16:10:19 crc kubenswrapper[4830]: E0227 16:10:19.487280 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 16:10:19.987239126 +0000 UTC m=+216.076511589 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:10:19 crc kubenswrapper[4830]: I0227 16:10:19.566700 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-v78pc"] Feb 27 16:10:19 crc kubenswrapper[4830]: I0227 16:10:19.567418 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-v78pc" podUID="278df35c-de00-443d-a6f7-e0cc526a487c" containerName="controller-manager" containerID="cri-o://1a9018ca3884a4806bc37e707c445a76d4c94f79575773e2c15aa66edafbcebc" gracePeriod=30 Feb 27 16:10:19 crc kubenswrapper[4830]: I0227 16:10:19.574248 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-v78pc" Feb 27 16:10:19 crc kubenswrapper[4830]: I0227 16:10:19.585075 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9gfr4\" (UID: \"e98d0941-0faf-4719-88a1-ff04ca46eece\") " pod="openshift-image-registry/image-registry-697d97f7c8-9gfr4" Feb 27 16:10:19 crc kubenswrapper[4830]: E0227 16:10:19.585399 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-27 16:10:20.085382952 +0000 UTC m=+216.174655415 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9gfr4" (UID: "e98d0941-0faf-4719-88a1-ff04ca46eece") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:10:19 crc kubenswrapper[4830]: I0227 16:10:19.592789 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-2rrvm"] Feb 27 16:10:19 crc kubenswrapper[4830]: I0227 16:10:19.593010 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2rrvm" podUID="4ce35469-d725-409b-8e24-2c74769d7b77" containerName="route-controller-manager" containerID="cri-o://a1a24021a2ad8f73298f5fb9a4df12c89c987963d2a1004acd7c63aa5d7857da" gracePeriod=30 Feb 27 16:10:19 crc kubenswrapper[4830]: I0227 16:10:19.599450 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2rrvm" Feb 27 16:10:19 crc kubenswrapper[4830]: I0227 16:10:19.618135 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-k7l8d"] Feb 27 16:10:19 crc kubenswrapper[4830]: I0227 16:10:19.621083 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-k7l8d" Feb 27 16:10:19 crc kubenswrapper[4830]: I0227 16:10:19.623333 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 27 16:10:19 crc kubenswrapper[4830]: I0227 16:10:19.645306 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-k7l8d"] Feb 27 16:10:19 crc kubenswrapper[4830]: I0227 16:10:19.688279 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 16:10:19 crc kubenswrapper[4830]: E0227 16:10:19.688881 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 16:10:20.188854536 +0000 UTC m=+216.278126999 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:10:19 crc kubenswrapper[4830]: I0227 16:10:19.765880 4830 ???:1] "http: TLS handshake error from 192.168.126.11:56616: no serving certificate available for the kubelet" Feb 27 16:10:19 crc kubenswrapper[4830]: I0227 16:10:19.769311 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hgw6n" Feb 27 16:10:19 crc kubenswrapper[4830]: I0227 16:10:19.769350 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hgw6n" Feb 27 16:10:19 crc kubenswrapper[4830]: I0227 16:10:19.791536 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hgw6n" Feb 27 16:10:19 crc kubenswrapper[4830]: I0227 16:10:19.792030 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9gfr4\" (UID: \"e98d0941-0faf-4719-88a1-ff04ca46eece\") " pod="openshift-image-registry/image-registry-697d97f7c8-9gfr4" Feb 27 16:10:19 crc kubenswrapper[4830]: I0227 16:10:19.792100 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2579681-6b81-4b58-9d2c-c26b123be8ec-catalog-content\") pod \"certified-operators-k7l8d\" (UID: 
\"f2579681-6b81-4b58-9d2c-c26b123be8ec\") " pod="openshift-marketplace/certified-operators-k7l8d" Feb 27 16:10:19 crc kubenswrapper[4830]: I0227 16:10:19.792135 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2579681-6b81-4b58-9d2c-c26b123be8ec-utilities\") pod \"certified-operators-k7l8d\" (UID: \"f2579681-6b81-4b58-9d2c-c26b123be8ec\") " pod="openshift-marketplace/certified-operators-k7l8d" Feb 27 16:10:19 crc kubenswrapper[4830]: I0227 16:10:19.792158 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pq52\" (UniqueName: \"kubernetes.io/projected/f2579681-6b81-4b58-9d2c-c26b123be8ec-kube-api-access-4pq52\") pod \"certified-operators-k7l8d\" (UID: \"f2579681-6b81-4b58-9d2c-c26b123be8ec\") " pod="openshift-marketplace/certified-operators-k7l8d" Feb 27 16:10:19 crc kubenswrapper[4830]: I0227 16:10:19.792620 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5jfm7" Feb 27 16:10:19 crc kubenswrapper[4830]: E0227 16:10:19.793221 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 16:10:20.293210643 +0000 UTC m=+216.382483106 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9gfr4" (UID: "e98d0941-0faf-4719-88a1-ff04ca46eece") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:10:19 crc kubenswrapper[4830]: I0227 16:10:19.794097 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-966h2"] Feb 27 16:10:19 crc kubenswrapper[4830]: I0227 16:10:19.795660 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-966h2" Feb 27 16:10:19 crc kubenswrapper[4830]: I0227 16:10:19.797725 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 27 16:10:19 crc kubenswrapper[4830]: I0227 16:10:19.842526 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-966h2"] Feb 27 16:10:19 crc kubenswrapper[4830]: I0227 16:10:19.877660 4830 patch_prober.go:28] interesting pod/router-default-5444994796-wh6nt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 27 16:10:19 crc kubenswrapper[4830]: [-]has-synced failed: reason withheld Feb 27 16:10:19 crc kubenswrapper[4830]: [+]process-running ok Feb 27 16:10:19 crc kubenswrapper[4830]: healthz check failed Feb 27 16:10:19 crc kubenswrapper[4830]: I0227 16:10:19.877702 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wh6nt" podUID="d473053a-d4df-40b8-a876-5582e1d8a702" containerName="router" probeResult="failure" output="HTTP probe failed with 
statuscode: 500" Feb 27 16:10:19 crc kubenswrapper[4830]: I0227 16:10:19.894179 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 16:10:19 crc kubenswrapper[4830]: I0227 16:10:19.894378 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x989f\" (UniqueName: \"kubernetes.io/projected/8b33138a-5b9d-4af8-b13d-4db4c2613983-kube-api-access-x989f\") pod \"community-operators-966h2\" (UID: \"8b33138a-5b9d-4af8-b13d-4db4c2613983\") " pod="openshift-marketplace/community-operators-966h2" Feb 27 16:10:19 crc kubenswrapper[4830]: I0227 16:10:19.894460 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2579681-6b81-4b58-9d2c-c26b123be8ec-catalog-content\") pod \"certified-operators-k7l8d\" (UID: \"f2579681-6b81-4b58-9d2c-c26b123be8ec\") " pod="openshift-marketplace/certified-operators-k7l8d" Feb 27 16:10:19 crc kubenswrapper[4830]: I0227 16:10:19.894523 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2579681-6b81-4b58-9d2c-c26b123be8ec-utilities\") pod \"certified-operators-k7l8d\" (UID: \"f2579681-6b81-4b58-9d2c-c26b123be8ec\") " pod="openshift-marketplace/certified-operators-k7l8d" Feb 27 16:10:19 crc kubenswrapper[4830]: I0227 16:10:19.894541 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b33138a-5b9d-4af8-b13d-4db4c2613983-catalog-content\") pod \"community-operators-966h2\" (UID: \"8b33138a-5b9d-4af8-b13d-4db4c2613983\") " 
pod="openshift-marketplace/community-operators-966h2" Feb 27 16:10:19 crc kubenswrapper[4830]: I0227 16:10:19.894558 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4pq52\" (UniqueName: \"kubernetes.io/projected/f2579681-6b81-4b58-9d2c-c26b123be8ec-kube-api-access-4pq52\") pod \"certified-operators-k7l8d\" (UID: \"f2579681-6b81-4b58-9d2c-c26b123be8ec\") " pod="openshift-marketplace/certified-operators-k7l8d" Feb 27 16:10:19 crc kubenswrapper[4830]: I0227 16:10:19.894573 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b33138a-5b9d-4af8-b13d-4db4c2613983-utilities\") pod \"community-operators-966h2\" (UID: \"8b33138a-5b9d-4af8-b13d-4db4c2613983\") " pod="openshift-marketplace/community-operators-966h2" Feb 27 16:10:19 crc kubenswrapper[4830]: E0227 16:10:19.895382 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 16:10:20.395368114 +0000 UTC m=+216.484640577 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:10:19 crc kubenswrapper[4830]: I0227 16:10:19.896442 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2579681-6b81-4b58-9d2c-c26b123be8ec-catalog-content\") pod \"certified-operators-k7l8d\" (UID: \"f2579681-6b81-4b58-9d2c-c26b123be8ec\") " pod="openshift-marketplace/certified-operators-k7l8d" Feb 27 16:10:19 crc kubenswrapper[4830]: I0227 16:10:19.896747 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2579681-6b81-4b58-9d2c-c26b123be8ec-utilities\") pod \"certified-operators-k7l8d\" (UID: \"f2579681-6b81-4b58-9d2c-c26b123be8ec\") " pod="openshift-marketplace/certified-operators-k7l8d" Feb 27 16:10:19 crc kubenswrapper[4830]: I0227 16:10:19.960917 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4pq52\" (UniqueName: \"kubernetes.io/projected/f2579681-6b81-4b58-9d2c-c26b123be8ec-kube-api-access-4pq52\") pod \"certified-operators-k7l8d\" (UID: \"f2579681-6b81-4b58-9d2c-c26b123be8ec\") " pod="openshift-marketplace/certified-operators-k7l8d" Feb 27 16:10:19 crc kubenswrapper[4830]: I0227 16:10:19.993512 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-dnpxp"] Feb 27 16:10:19 crc kubenswrapper[4830]: I0227 16:10:19.994543 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dnpxp" Feb 27 16:10:19 crc kubenswrapper[4830]: I0227 16:10:19.995356 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b33138a-5b9d-4af8-b13d-4db4c2613983-catalog-content\") pod \"community-operators-966h2\" (UID: \"8b33138a-5b9d-4af8-b13d-4db4c2613983\") " pod="openshift-marketplace/community-operators-966h2" Feb 27 16:10:19 crc kubenswrapper[4830]: I0227 16:10:19.995397 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b33138a-5b9d-4af8-b13d-4db4c2613983-utilities\") pod \"community-operators-966h2\" (UID: \"8b33138a-5b9d-4af8-b13d-4db4c2613983\") " pod="openshift-marketplace/community-operators-966h2" Feb 27 16:10:19 crc kubenswrapper[4830]: I0227 16:10:19.995436 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9gfr4\" (UID: \"e98d0941-0faf-4719-88a1-ff04ca46eece\") " pod="openshift-image-registry/image-registry-697d97f7c8-9gfr4" Feb 27 16:10:19 crc kubenswrapper[4830]: I0227 16:10:19.995459 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x989f\" (UniqueName: \"kubernetes.io/projected/8b33138a-5b9d-4af8-b13d-4db4c2613983-kube-api-access-x989f\") pod \"community-operators-966h2\" (UID: \"8b33138a-5b9d-4af8-b13d-4db4c2613983\") " pod="openshift-marketplace/community-operators-966h2" Feb 27 16:10:19 crc kubenswrapper[4830]: E0227 16:10:19.995985 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-27 16:10:20.495970444 +0000 UTC m=+216.585242907 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9gfr4" (UID: "e98d0941-0faf-4719-88a1-ff04ca46eece") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 27 16:10:19 crc kubenswrapper[4830]: I0227 16:10:19.996217 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b33138a-5b9d-4af8-b13d-4db4c2613983-utilities\") pod \"community-operators-966h2\" (UID: \"8b33138a-5b9d-4af8-b13d-4db4c2613983\") " pod="openshift-marketplace/community-operators-966h2"
Feb 27 16:10:19 crc kubenswrapper[4830]: I0227 16:10:19.996388 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b33138a-5b9d-4af8-b13d-4db4c2613983-catalog-content\") pod \"community-operators-966h2\" (UID: \"8b33138a-5b9d-4af8-b13d-4db4c2613983\") " pod="openshift-marketplace/community-operators-966h2"
Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.007529 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dnpxp"]
Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.020858 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x989f\" (UniqueName: \"kubernetes.io/projected/8b33138a-5b9d-4af8-b13d-4db4c2613983-kube-api-access-x989f\") pod \"community-operators-966h2\" (UID: \"8b33138a-5b9d-4af8-b13d-4db4c2613983\") " pod="openshift-marketplace/community-operators-966h2"
Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.037071 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-v78pc"
Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.087364 4830 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-2rrvm container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body=
Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.087420 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2rrvm" podUID="4ce35469-d725-409b-8e24-2c74769d7b77" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused"
Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.096083 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/278df35c-de00-443d-a6f7-e0cc526a487c-client-ca\") pod \"278df35c-de00-443d-a6f7-e0cc526a487c\" (UID: \"278df35c-de00-443d-a6f7-e0cc526a487c\") "
Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.096159 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/278df35c-de00-443d-a6f7-e0cc526a487c-config\") pod \"278df35c-de00-443d-a6f7-e0cc526a487c\" (UID: \"278df35c-de00-443d-a6f7-e0cc526a487c\") "
Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.096386 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.096447 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/278df35c-de00-443d-a6f7-e0cc526a487c-proxy-ca-bundles\") pod \"278df35c-de00-443d-a6f7-e0cc526a487c\" (UID: \"278df35c-de00-443d-a6f7-e0cc526a487c\") "
Feb 27 16:10:20 crc kubenswrapper[4830]: E0227 16:10:20.096547 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 16:10:20.596522641 +0000 UTC m=+216.685795094 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.096587 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/278df35c-de00-443d-a6f7-e0cc526a487c-serving-cert\") pod \"278df35c-de00-443d-a6f7-e0cc526a487c\" (UID: \"278df35c-de00-443d-a6f7-e0cc526a487c\") "
Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.096616 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t5c8f\" (UniqueName: \"kubernetes.io/projected/278df35c-de00-443d-a6f7-e0cc526a487c-kube-api-access-t5c8f\") pod \"278df35c-de00-443d-a6f7-e0cc526a487c\" (UID: \"278df35c-de00-443d-a6f7-e0cc526a487c\") "
Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.096790 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/789ee180-dd8e-4cb2-884e-beea08667c53-catalog-content\") pod \"certified-operators-dnpxp\" (UID: \"789ee180-dd8e-4cb2-884e-beea08667c53\") " pod="openshift-marketplace/certified-operators-dnpxp"
Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.096856 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzbpj\" (UniqueName: \"kubernetes.io/projected/789ee180-dd8e-4cb2-884e-beea08667c53-kube-api-access-lzbpj\") pod \"certified-operators-dnpxp\" (UID: \"789ee180-dd8e-4cb2-884e-beea08667c53\") " pod="openshift-marketplace/certified-operators-dnpxp"
Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.097086 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/789ee180-dd8e-4cb2-884e-beea08667c53-utilities\") pod \"certified-operators-dnpxp\" (UID: \"789ee180-dd8e-4cb2-884e-beea08667c53\") " pod="openshift-marketplace/certified-operators-dnpxp"
Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.097032 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/278df35c-de00-443d-a6f7-e0cc526a487c-client-ca" (OuterVolumeSpecName: "client-ca") pod "278df35c-de00-443d-a6f7-e0cc526a487c" (UID: "278df35c-de00-443d-a6f7-e0cc526a487c"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.097149 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9gfr4\" (UID: \"e98d0941-0faf-4719-88a1-ff04ca46eece\") " pod="openshift-image-registry/image-registry-697d97f7c8-9gfr4"
Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.097223 4830 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/278df35c-de00-443d-a6f7-e0cc526a487c-client-ca\") on node \"crc\" DevicePath \"\""
Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.097497 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/278df35c-de00-443d-a6f7-e0cc526a487c-config" (OuterVolumeSpecName: "config") pod "278df35c-de00-443d-a6f7-e0cc526a487c" (UID: "278df35c-de00-443d-a6f7-e0cc526a487c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.097562 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/278df35c-de00-443d-a6f7-e0cc526a487c-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "278df35c-de00-443d-a6f7-e0cc526a487c" (UID: "278df35c-de00-443d-a6f7-e0cc526a487c"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 16:10:20 crc kubenswrapper[4830]: E0227 16:10:20.097691 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 16:10:20.597678371 +0000 UTC m=+216.686950834 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9gfr4" (UID: "e98d0941-0faf-4719-88a1-ff04ca46eece") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.099816 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/278df35c-de00-443d-a6f7-e0cc526a487c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "278df35c-de00-443d-a6f7-e0cc526a487c" (UID: "278df35c-de00-443d-a6f7-e0cc526a487c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.099997 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/278df35c-de00-443d-a6f7-e0cc526a487c-kube-api-access-t5c8f" (OuterVolumeSpecName: "kube-api-access-t5c8f") pod "278df35c-de00-443d-a6f7-e0cc526a487c" (UID: "278df35c-de00-443d-a6f7-e0cc526a487c"). InnerVolumeSpecName "kube-api-access-t5c8f". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.146898 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-kjfn6"
Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.148533 4830 patch_prober.go:28] interesting pod/console-f9d7485db-kjfn6 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.11:8443/health\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body=
Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.148583 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-kjfn6" podUID="11fbaa05-cf66-40dd-be15-c6474a011768" containerName="console" probeResult="failure" output="Get \"https://10.217.0.11:8443/health\": dial tcp 10.217.0.11:8443: connect: connection refused"
Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.149065 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-kjfn6"
Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.162767 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-966h2"
Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.179039 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-vs8sq"
Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.194902 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-s4bpk"]
Feb 27 16:10:20 crc kubenswrapper[4830]: E0227 16:10:20.195221 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="278df35c-de00-443d-a6f7-e0cc526a487c" containerName="controller-manager"
Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.195233 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="278df35c-de00-443d-a6f7-e0cc526a487c" containerName="controller-manager"
Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.195313 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="278df35c-de00-443d-a6f7-e0cc526a487c" containerName="controller-manager"
Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.195893 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-s4bpk"
Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.197740 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 27 16:10:20 crc kubenswrapper[4830]: E0227 16:10:20.197908 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 16:10:20.697884551 +0000 UTC m=+216.787157014 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.197951 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/789ee180-dd8e-4cb2-884e-beea08667c53-catalog-content\") pod \"certified-operators-dnpxp\" (UID: \"789ee180-dd8e-4cb2-884e-beea08667c53\") " pod="openshift-marketplace/certified-operators-dnpxp"
Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.198024 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lzbpj\" (UniqueName: \"kubernetes.io/projected/789ee180-dd8e-4cb2-884e-beea08667c53-kube-api-access-lzbpj\") pod \"certified-operators-dnpxp\" (UID: \"789ee180-dd8e-4cb2-884e-beea08667c53\") " pod="openshift-marketplace/certified-operators-dnpxp"
Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.198195 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/789ee180-dd8e-4cb2-884e-beea08667c53-utilities\") pod \"certified-operators-dnpxp\" (UID: \"789ee180-dd8e-4cb2-884e-beea08667c53\") " pod="openshift-marketplace/certified-operators-dnpxp"
Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.198240 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9gfr4\" (UID: \"e98d0941-0faf-4719-88a1-ff04ca46eece\") " pod="openshift-image-registry/image-registry-697d97f7c8-9gfr4"
Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.198280 4830 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/278df35c-de00-443d-a6f7-e0cc526a487c-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.198293 4830 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/278df35c-de00-443d-a6f7-e0cc526a487c-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.198303 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t5c8f\" (UniqueName: \"kubernetes.io/projected/278df35c-de00-443d-a6f7-e0cc526a487c-kube-api-access-t5c8f\") on node \"crc\" DevicePath \"\""
Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.198314 4830 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/278df35c-de00-443d-a6f7-e0cc526a487c-config\") on node \"crc\" DevicePath \"\""
Feb 27 16:10:20 crc kubenswrapper[4830]: E0227 16:10:20.198533 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 16:10:20.698526218 +0000 UTC m=+216.787798681 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9gfr4" (UID: "e98d0941-0faf-4719-88a1-ff04ca46eece") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.198985 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/789ee180-dd8e-4cb2-884e-beea08667c53-catalog-content\") pod \"certified-operators-dnpxp\" (UID: \"789ee180-dd8e-4cb2-884e-beea08667c53\") " pod="openshift-marketplace/certified-operators-dnpxp"
Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.200279 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/789ee180-dd8e-4cb2-884e-beea08667c53-utilities\") pod \"certified-operators-dnpxp\" (UID: \"789ee180-dd8e-4cb2-884e-beea08667c53\") " pod="openshift-marketplace/certified-operators-dnpxp"
Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.211607 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-s4bpk"]
Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.225833 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lzbpj\" (UniqueName: \"kubernetes.io/projected/789ee180-dd8e-4cb2-884e-beea08667c53-kube-api-access-lzbpj\") pod \"certified-operators-dnpxp\" (UID: \"789ee180-dd8e-4cb2-884e-beea08667c53\") " pod="openshift-marketplace/certified-operators-dnpxp"
Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.239333 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-k7l8d"
Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.297461 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2rrvm"
Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.299342 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 27 16:10:20 crc kubenswrapper[4830]: E0227 16:10:20.299818 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 16:10:20.799804015 +0000 UTC m=+216.889076478 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.299877 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c5e2cae-7890-48fb-ab76-7e53c52fd6ac-catalog-content\") pod \"community-operators-s4bpk\" (UID: \"1c5e2cae-7890-48fb-ab76-7e53c52fd6ac\") " pod="openshift-marketplace/community-operators-s4bpk"
Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.299984 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c5e2cae-7890-48fb-ab76-7e53c52fd6ac-utilities\") pod \"community-operators-s4bpk\" (UID: \"1c5e2cae-7890-48fb-ab76-7e53c52fd6ac\") " pod="openshift-marketplace/community-operators-s4bpk"
Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.300011 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mngln\" (UniqueName: \"kubernetes.io/projected/1c5e2cae-7890-48fb-ab76-7e53c52fd6ac-kube-api-access-mngln\") pod \"community-operators-s4bpk\" (UID: \"1c5e2cae-7890-48fb-ab76-7e53c52fd6ac\") " pod="openshift-marketplace/community-operators-s4bpk"
Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.300041 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9gfr4\" (UID: \"e98d0941-0faf-4719-88a1-ff04ca46eece\") " pod="openshift-image-registry/image-registry-697d97f7c8-9gfr4"
Feb 27 16:10:20 crc kubenswrapper[4830]: E0227 16:10:20.300531 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 16:10:20.800515443 +0000 UTC m=+216.889787906 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9gfr4" (UID: "e98d0941-0faf-4719-88a1-ff04ca46eece") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.305495 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-9c4wb"
Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.305533 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-9c4wb"
Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.331235 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dnpxp"
Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.332832 4830 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock"
Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.364145 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Feb 27 16:10:20 crc kubenswrapper[4830]: E0227 16:10:20.365429 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ce35469-d725-409b-8e24-2c74769d7b77" containerName="route-controller-manager"
Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.366762 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ce35469-d725-409b-8e24-2c74769d7b77" containerName="route-controller-manager"
Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.369526 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ce35469-d725-409b-8e24-2c74769d7b77" containerName="route-controller-manager"
Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.370011 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.373395 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.374425 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n"
Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.374669 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt"
Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.400588 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sthg5\" (UniqueName: \"kubernetes.io/projected/4ce35469-d725-409b-8e24-2c74769d7b77-kube-api-access-sthg5\") pod \"4ce35469-d725-409b-8e24-2c74769d7b77\" (UID: \"4ce35469-d725-409b-8e24-2c74769d7b77\") "
Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.400760 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.400796 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4ce35469-d725-409b-8e24-2c74769d7b77-client-ca\") pod \"4ce35469-d725-409b-8e24-2c74769d7b77\" (UID: \"4ce35469-d725-409b-8e24-2c74769d7b77\") "
Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.400820 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4ce35469-d725-409b-8e24-2c74769d7b77-serving-cert\") pod \"4ce35469-d725-409b-8e24-2c74769d7b77\" (UID: \"4ce35469-d725-409b-8e24-2c74769d7b77\") "
Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.400840 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ce35469-d725-409b-8e24-2c74769d7b77-config\") pod \"4ce35469-d725-409b-8e24-2c74769d7b77\" (UID: \"4ce35469-d725-409b-8e24-2c74769d7b77\") "
Feb 27 16:10:20 crc kubenswrapper[4830]: E0227 16:10:20.401137 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 16:10:20.901093573 +0000 UTC m=+216.990366036 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.401226 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c5e2cae-7890-48fb-ab76-7e53c52fd6ac-catalog-content\") pod \"community-operators-s4bpk\" (UID: \"1c5e2cae-7890-48fb-ab76-7e53c52fd6ac\") " pod="openshift-marketplace/community-operators-s4bpk"
Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.401371 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c5e2cae-7890-48fb-ab76-7e53c52fd6ac-utilities\") pod \"community-operators-s4bpk\" (UID: \"1c5e2cae-7890-48fb-ab76-7e53c52fd6ac\") " pod="openshift-marketplace/community-operators-s4bpk"
Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.401394 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mngln\" (UniqueName: \"kubernetes.io/projected/1c5e2cae-7890-48fb-ab76-7e53c52fd6ac-kube-api-access-mngln\") pod \"community-operators-s4bpk\" (UID: \"1c5e2cae-7890-48fb-ab76-7e53c52fd6ac\") " pod="openshift-marketplace/community-operators-s4bpk"
Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.403412 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ce35469-d725-409b-8e24-2c74769d7b77-client-ca" (OuterVolumeSpecName: "client-ca") pod "4ce35469-d725-409b-8e24-2c74769d7b77" (UID: "4ce35469-d725-409b-8e24-2c74769d7b77"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.404212 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ce35469-d725-409b-8e24-2c74769d7b77-config" (OuterVolumeSpecName: "config") pod "4ce35469-d725-409b-8e24-2c74769d7b77" (UID: "4ce35469-d725-409b-8e24-2c74769d7b77"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.405289 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c5e2cae-7890-48fb-ab76-7e53c52fd6ac-catalog-content\") pod \"community-operators-s4bpk\" (UID: \"1c5e2cae-7890-48fb-ab76-7e53c52fd6ac\") " pod="openshift-marketplace/community-operators-s4bpk"
Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.405501 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c5e2cae-7890-48fb-ab76-7e53c52fd6ac-utilities\") pod \"community-operators-s4bpk\" (UID: \"1c5e2cae-7890-48fb-ab76-7e53c52fd6ac\") " pod="openshift-marketplace/community-operators-s4bpk"
Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.409934 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ce35469-d725-409b-8e24-2c74769d7b77-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "4ce35469-d725-409b-8e24-2c74769d7b77" (UID: "4ce35469-d725-409b-8e24-2c74769d7b77"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.414886 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ce35469-d725-409b-8e24-2c74769d7b77-kube-api-access-sthg5" (OuterVolumeSpecName: "kube-api-access-sthg5") pod "4ce35469-d725-409b-8e24-2c74769d7b77" (UID: "4ce35469-d725-409b-8e24-2c74769d7b77"). InnerVolumeSpecName "kube-api-access-sthg5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.424379 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mngln\" (UniqueName: \"kubernetes.io/projected/1c5e2cae-7890-48fb-ab76-7e53c52fd6ac-kube-api-access-mngln\") pod \"community-operators-s4bpk\" (UID: \"1c5e2cae-7890-48fb-ab76-7e53c52fd6ac\") " pod="openshift-marketplace/community-operators-s4bpk"
Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.475062 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-gw4c8" event={"ID":"3311d92d-90da-42f5-acf3-3ec723c5edad","Type":"ContainerStarted","Data":"6d293d314166181c5786b0c63d47eacca91969631871e7935ed5109cad5a0641"}
Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.475109 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-gw4c8" event={"ID":"3311d92d-90da-42f5-acf3-3ec723c5edad","Type":"ContainerStarted","Data":"9b641dbd5f6639ac00bd192d5cfdfe04e9c8b4333ca9909c6b38a1dc48f949de"}
Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.482289 4830 generic.go:334] "Generic (PLEG): container finished" podID="4ce35469-d725-409b-8e24-2c74769d7b77" containerID="a1a24021a2ad8f73298f5fb9a4df12c89c987963d2a1004acd7c63aa5d7857da" exitCode=0
Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.482355 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2rrvm" event={"ID":"4ce35469-d725-409b-8e24-2c74769d7b77","Type":"ContainerDied","Data":"a1a24021a2ad8f73298f5fb9a4df12c89c987963d2a1004acd7c63aa5d7857da"}
Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.482383 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2rrvm" event={"ID":"4ce35469-d725-409b-8e24-2c74769d7b77","Type":"ContainerDied","Data":"18d9f694add02cd56a75816776b4fd3da281532b184f459f03e3d79219db7c75"}
Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.482402 4830 scope.go:117] "RemoveContainer" containerID="a1a24021a2ad8f73298f5fb9a4df12c89c987963d2a1004acd7c63aa5d7857da"
Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.482667 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-2rrvm"
Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.489089 4830 generic.go:334] "Generic (PLEG): container finished" podID="278df35c-de00-443d-a6f7-e0cc526a487c" containerID="1a9018ca3884a4806bc37e707c445a76d4c94f79575773e2c15aa66edafbcebc" exitCode=0
Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.489725 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-v78pc"
Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.489738 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-v78pc" event={"ID":"278df35c-de00-443d-a6f7-e0cc526a487c","Type":"ContainerDied","Data":"1a9018ca3884a4806bc37e707c445a76d4c94f79575773e2c15aa66edafbcebc"}
Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.494268 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-v78pc" event={"ID":"278df35c-de00-443d-a6f7-e0cc526a487c","Type":"ContainerDied","Data":"ca4725098cf27779bb591d014851950c8a7cb5a21be7b98c3fe6686d135f2de0"}
Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.502223 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cb19456d-a4a1-42eb-af47-f987cd981816-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"cb19456d-a4a1-42eb-af47-f987cd981816\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.502246 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cb19456d-a4a1-42eb-af47-f987cd981816-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"cb19456d-a4a1-42eb-af47-f987cd981816\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.502300 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9gfr4\" (UID: \"e98d0941-0faf-4719-88a1-ff04ca46eece\") " pod="openshift-image-registry/image-registry-697d97f7c8-9gfr4"
Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.502382 4830 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4ce35469-d725-409b-8e24-2c74769d7b77-client-ca\") on node \"crc\" DevicePath \"\""
Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.502401 4830 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4ce35469-d725-409b-8e24-2c74769d7b77-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.502413 4830 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ce35469-d725-409b-8e24-2c74769d7b77-config\") on node \"crc\" DevicePath \"\""
Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.502425 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sthg5\" (UniqueName:
\"kubernetes.io/projected/4ce35469-d725-409b-8e24-2c74769d7b77-kube-api-access-sthg5\") on node \"crc\" DevicePath \"\"" Feb 27 16:10:20 crc kubenswrapper[4830]: E0227 16:10:20.502533 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 16:10:21.002522874 +0000 UTC m=+217.091795327 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9gfr4" (UID: "e98d0941-0faf-4719-88a1-ff04ca46eece") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.506916 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-hgw6n" Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.515205 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-966h2"] Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.517348 4830 scope.go:117] "RemoveContainer" containerID="a1a24021a2ad8f73298f5fb9a4df12c89c987963d2a1004acd7c63aa5d7857da" Feb 27 16:10:20 crc kubenswrapper[4830]: E0227 16:10:20.527380 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a1a24021a2ad8f73298f5fb9a4df12c89c987963d2a1004acd7c63aa5d7857da\": container with ID starting with a1a24021a2ad8f73298f5fb9a4df12c89c987963d2a1004acd7c63aa5d7857da not found: ID does not exist" containerID="a1a24021a2ad8f73298f5fb9a4df12c89c987963d2a1004acd7c63aa5d7857da" Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 
16:10:20.527433 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1a24021a2ad8f73298f5fb9a4df12c89c987963d2a1004acd7c63aa5d7857da"} err="failed to get container status \"a1a24021a2ad8f73298f5fb9a4df12c89c987963d2a1004acd7c63aa5d7857da\": rpc error: code = NotFound desc = could not find container \"a1a24021a2ad8f73298f5fb9a4df12c89c987963d2a1004acd7c63aa5d7857da\": container with ID starting with a1a24021a2ad8f73298f5fb9a4df12c89c987963d2a1004acd7c63aa5d7857da not found: ID does not exist" Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.527466 4830 scope.go:117] "RemoveContainer" containerID="1a9018ca3884a4806bc37e707c445a76d4c94f79575773e2c15aa66edafbcebc" Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.565880 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-2rrvm"] Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.566771 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-s4bpk" Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.567662 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-2rrvm"] Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.589826 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.590396 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.592460 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.592620 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.594836 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-v78pc"] Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.602169 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-v78pc"] Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.602698 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.603047 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cb19456d-a4a1-42eb-af47-f987cd981816-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"cb19456d-a4a1-42eb-af47-f987cd981816\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.603068 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cb19456d-a4a1-42eb-af47-f987cd981816-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"cb19456d-a4a1-42eb-af47-f987cd981816\") " 
pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.604000 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cb19456d-a4a1-42eb-af47-f987cd981816-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"cb19456d-a4a1-42eb-af47-f987cd981816\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.605137 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 27 16:10:20 crc kubenswrapper[4830]: E0227 16:10:20.605240 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 16:10:21.105225118 +0000 UTC m=+217.194497581 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.609036 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-k7l8d"] Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.615890 4830 patch_prober.go:28] interesting pod/apiserver-76f77b778f-9c4wb container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Feb 27 16:10:20 crc kubenswrapper[4830]: [+]log ok Feb 27 16:10:20 crc kubenswrapper[4830]: [+]etcd ok Feb 27 16:10:20 crc kubenswrapper[4830]: [+]poststarthook/start-apiserver-admission-initializer ok Feb 27 16:10:20 crc kubenswrapper[4830]: [+]poststarthook/generic-apiserver-start-informers ok Feb 27 16:10:20 crc kubenswrapper[4830]: [+]poststarthook/max-in-flight-filter ok Feb 27 16:10:20 crc kubenswrapper[4830]: [+]poststarthook/storage-object-count-tracker-hook ok Feb 27 16:10:20 crc kubenswrapper[4830]: [+]poststarthook/image.openshift.io-apiserver-caches ok Feb 27 16:10:20 crc kubenswrapper[4830]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Feb 27 16:10:20 crc kubenswrapper[4830]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Feb 27 16:10:20 crc kubenswrapper[4830]: [+]poststarthook/project.openshift.io-projectcache ok Feb 27 16:10:20 crc kubenswrapper[4830]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Feb 27 16:10:20 crc kubenswrapper[4830]: 
[+]poststarthook/openshift.io-startinformers ok Feb 27 16:10:20 crc kubenswrapper[4830]: [+]poststarthook/openshift.io-restmapperupdater ok Feb 27 16:10:20 crc kubenswrapper[4830]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Feb 27 16:10:20 crc kubenswrapper[4830]: livez check failed Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.615932 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-9c4wb" podUID="e9c70786-d73e-4e48-a552-bdeb53daba49" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.623664 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cb19456d-a4a1-42eb-af47-f987cd981816-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"cb19456d-a4a1-42eb-af47-f987cd981816\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.624901 4830 scope.go:117] "RemoveContainer" containerID="1a9018ca3884a4806bc37e707c445a76d4c94f79575773e2c15aa66edafbcebc" Feb 27 16:10:20 crc kubenswrapper[4830]: E0227 16:10:20.636053 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1a9018ca3884a4806bc37e707c445a76d4c94f79575773e2c15aa66edafbcebc\": container with ID starting with 1a9018ca3884a4806bc37e707c445a76d4c94f79575773e2c15aa66edafbcebc not found: ID does not exist" containerID="1a9018ca3884a4806bc37e707c445a76d4c94f79575773e2c15aa66edafbcebc" Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.636122 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a9018ca3884a4806bc37e707c445a76d4c94f79575773e2c15aa66edafbcebc"} err="failed to get container status \"1a9018ca3884a4806bc37e707c445a76d4c94f79575773e2c15aa66edafbcebc\": 
rpc error: code = NotFound desc = could not find container \"1a9018ca3884a4806bc37e707c445a76d4c94f79575773e2c15aa66edafbcebc\": container with ID starting with 1a9018ca3884a4806bc37e707c445a76d4c94f79575773e2c15aa66edafbcebc not found: ID does not exist" Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.685620 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dnpxp"] Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.704671 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cbf6fbf6-60fa-43c6-97aa-f8e8ac24fbce-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"cbf6fbf6-60fa-43c6-97aa-f8e8ac24fbce\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.704713 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9gfr4\" (UID: \"e98d0941-0faf-4719-88a1-ff04ca46eece\") " pod="openshift-image-registry/image-registry-697d97f7c8-9gfr4" Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.704761 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cbf6fbf6-60fa-43c6-97aa-f8e8ac24fbce-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"cbf6fbf6-60fa-43c6-97aa-f8e8ac24fbce\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 27 16:10:20 crc kubenswrapper[4830]: E0227 16:10:20.705065 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-27 16:10:21.205053778 +0000 UTC m=+217.294326241 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9gfr4" (UID: "e98d0941-0faf-4719-88a1-ff04ca46eece") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.735712 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.771672 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="278df35c-de00-443d-a6f7-e0cc526a487c" path="/var/lib/kubelet/pods/278df35c-de00-443d-a6f7-e0cc526a487c/volumes" Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.772530 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ce35469-d725-409b-8e24-2c74769d7b77" path="/var/lib/kubelet/pods/4ce35469-d725-409b-8e24-2c74769d7b77/volumes" Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.805533 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.805775 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cbf6fbf6-60fa-43c6-97aa-f8e8ac24fbce-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"cbf6fbf6-60fa-43c6-97aa-f8e8ac24fbce\") " 
pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.805833 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cbf6fbf6-60fa-43c6-97aa-f8e8ac24fbce-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"cbf6fbf6-60fa-43c6-97aa-f8e8ac24fbce\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 27 16:10:20 crc kubenswrapper[4830]: E0227 16:10:20.806644 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 16:10:21.306630003 +0000 UTC m=+217.395902466 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.806671 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cbf6fbf6-60fa-43c6-97aa-f8e8ac24fbce-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"cbf6fbf6-60fa-43c6-97aa-f8e8ac24fbce\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.824388 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-4dhxq" Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.827369 4830 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-controller-manager/controller-manager-86d57f9bf7-657rh"] Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.828100 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-86d57f9bf7-657rh" Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.837598 4830 patch_prober.go:28] interesting pod/downloads-7954f5f757-4dhxq container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.837644 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-4dhxq" podUID="1f30f03f-511a-4a29-beae-e3d6971a8c9e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.837737 4830 patch_prober.go:28] interesting pod/downloads-7954f5f757-4dhxq container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.837790 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-4dhxq" podUID="1f30f03f-511a-4a29-beae-e3d6971a8c9e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.838165 4830 patch_prober.go:28] interesting pod/downloads-7954f5f757-4dhxq container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" 
start-of-body= Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.838190 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-4dhxq" podUID="1f30f03f-511a-4a29-beae-e3d6971a8c9e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.838376 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.838400 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.838574 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.838744 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.838784 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.838754 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.841044 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cbf6fbf6-60fa-43c6-97aa-f8e8ac24fbce-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"cbf6fbf6-60fa-43c6-97aa-f8e8ac24fbce\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.844785 4830 reflector.go:368] Caches populated for *v1.ConfigMap 
from object-"openshift-controller-manager"/"openshift-global-ca" Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.845875 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-fbf66d869-t9dv7"] Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.846920 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-fbf66d869-t9dv7" Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.849497 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.849523 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.849498 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.849646 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.849741 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.850009 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.851925 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-86d57f9bf7-657rh"] Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.869421 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-fbf66d869-t9dv7"] Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.877930 4830 patch_prober.go:28] interesting pod/router-default-5444994796-wh6nt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 27 16:10:20 crc kubenswrapper[4830]: [-]has-synced failed: reason withheld Feb 27 16:10:20 crc kubenswrapper[4830]: [+]process-running ok Feb 27 16:10:20 crc kubenswrapper[4830]: healthz check failed Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.878067 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-45mg7" Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.878093 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wh6nt" podUID="d473053a-d4df-40b8-a876-5582e1d8a702" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.894201 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-s4bpk"] Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.906186 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-gb4pt" Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.917613 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7725e3c-523d-4de0-9764-213008ccd32c-config\") pod \"route-controller-manager-fbf66d869-t9dv7\" (UID: \"c7725e3c-523d-4de0-9764-213008ccd32c\") " pod="openshift-route-controller-manager/route-controller-manager-fbf66d869-t9dv7" Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.917659 4830 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/39a011ff-ad25-4470-84da-7a645ea582ce-proxy-ca-bundles\") pod \"controller-manager-86d57f9bf7-657rh\" (UID: \"39a011ff-ad25-4470-84da-7a645ea582ce\") " pod="openshift-controller-manager/controller-manager-86d57f9bf7-657rh" Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.917681 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39a011ff-ad25-4470-84da-7a645ea582ce-config\") pod \"controller-manager-86d57f9bf7-657rh\" (UID: \"39a011ff-ad25-4470-84da-7a645ea582ce\") " pod="openshift-controller-manager/controller-manager-86d57f9bf7-657rh" Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.917698 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.917722 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c7725e3c-523d-4de0-9764-213008ccd32c-serving-cert\") pod \"route-controller-manager-fbf66d869-t9dv7\" (UID: \"c7725e3c-523d-4de0-9764-213008ccd32c\") " pod="openshift-route-controller-manager/route-controller-manager-fbf66d869-t9dv7" Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.917752 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c7725e3c-523d-4de0-9764-213008ccd32c-client-ca\") pod \"route-controller-manager-fbf66d869-t9dv7\" (UID: \"c7725e3c-523d-4de0-9764-213008ccd32c\") " pod="openshift-route-controller-manager/route-controller-manager-fbf66d869-t9dv7" Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.917812 4830 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/39a011ff-ad25-4470-84da-7a645ea582ce-serving-cert\") pod \"controller-manager-86d57f9bf7-657rh\" (UID: \"39a011ff-ad25-4470-84da-7a645ea582ce\") " pod="openshift-controller-manager/controller-manager-86d57f9bf7-657rh" Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.917853 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9gfr4\" (UID: \"e98d0941-0faf-4719-88a1-ff04ca46eece\") " pod="openshift-image-registry/image-registry-697d97f7c8-9gfr4" Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.917916 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/39a011ff-ad25-4470-84da-7a645ea582ce-client-ca\") pod \"controller-manager-86d57f9bf7-657rh\" (UID: \"39a011ff-ad25-4470-84da-7a645ea582ce\") " pod="openshift-controller-manager/controller-manager-86d57f9bf7-657rh" Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.917983 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bf4d\" (UniqueName: \"kubernetes.io/projected/c7725e3c-523d-4de0-9764-213008ccd32c-kube-api-access-5bf4d\") pod \"route-controller-manager-fbf66d869-t9dv7\" (UID: \"c7725e3c-523d-4de0-9764-213008ccd32c\") " pod="openshift-route-controller-manager/route-controller-manager-fbf66d869-t9dv7" Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.918021 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dx5gh\" (UniqueName: \"kubernetes.io/projected/39a011ff-ad25-4470-84da-7a645ea582ce-kube-api-access-dx5gh\") pod 
\"controller-manager-86d57f9bf7-657rh\" (UID: \"39a011ff-ad25-4470-84da-7a645ea582ce\") " pod="openshift-controller-manager/controller-manager-86d57f9bf7-657rh" Feb 27 16:10:20 crc kubenswrapper[4830]: E0227 16:10:20.918827 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 16:10:21.418814132 +0000 UTC m=+217.508086605 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9gfr4" (UID: "e98d0941-0faf-4719-88a1-ff04ca46eece") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:10:20 crc kubenswrapper[4830]: I0227 16:10:20.960110 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-gb4pt" Feb 27 16:10:21 crc kubenswrapper[4830]: I0227 16:10:21.018868 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 16:10:21 crc kubenswrapper[4830]: I0227 16:10:21.019769 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 16:10:21 crc kubenswrapper[4830]: E0227 16:10:21.019884 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 16:10:21.519866364 +0000 UTC m=+217.609138827 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:10:21 crc kubenswrapper[4830]: I0227 16:10:21.019928 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7725e3c-523d-4de0-9764-213008ccd32c-config\") pod \"route-controller-manager-fbf66d869-t9dv7\" (UID: \"c7725e3c-523d-4de0-9764-213008ccd32c\") " pod="openshift-route-controller-manager/route-controller-manager-fbf66d869-t9dv7" Feb 27 16:10:21 crc kubenswrapper[4830]: I0227 16:10:21.019881 4830 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-02-27T16:10:20.33291217Z","Handler":null,"Name":""} Feb 27 16:10:21 crc kubenswrapper[4830]: I0227 16:10:21.019977 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/39a011ff-ad25-4470-84da-7a645ea582ce-proxy-ca-bundles\") pod \"controller-manager-86d57f9bf7-657rh\" (UID: \"39a011ff-ad25-4470-84da-7a645ea582ce\") " pod="openshift-controller-manager/controller-manager-86d57f9bf7-657rh" Feb 27 16:10:21 crc kubenswrapper[4830]: I0227 16:10:21.020001 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/39a011ff-ad25-4470-84da-7a645ea582ce-config\") pod \"controller-manager-86d57f9bf7-657rh\" (UID: \"39a011ff-ad25-4470-84da-7a645ea582ce\") " pod="openshift-controller-manager/controller-manager-86d57f9bf7-657rh" Feb 27 16:10:21 crc kubenswrapper[4830]: I0227 16:10:21.020043 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c7725e3c-523d-4de0-9764-213008ccd32c-serving-cert\") pod \"route-controller-manager-fbf66d869-t9dv7\" (UID: \"c7725e3c-523d-4de0-9764-213008ccd32c\") " pod="openshift-route-controller-manager/route-controller-manager-fbf66d869-t9dv7" Feb 27 16:10:21 crc kubenswrapper[4830]: I0227 16:10:21.020087 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c7725e3c-523d-4de0-9764-213008ccd32c-client-ca\") pod \"route-controller-manager-fbf66d869-t9dv7\" (UID: \"c7725e3c-523d-4de0-9764-213008ccd32c\") " pod="openshift-route-controller-manager/route-controller-manager-fbf66d869-t9dv7" Feb 27 16:10:21 crc kubenswrapper[4830]: I0227 16:10:21.021049 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c7725e3c-523d-4de0-9764-213008ccd32c-client-ca\") pod \"route-controller-manager-fbf66d869-t9dv7\" (UID: \"c7725e3c-523d-4de0-9764-213008ccd32c\") " pod="openshift-route-controller-manager/route-controller-manager-fbf66d869-t9dv7" Feb 27 16:10:21 crc kubenswrapper[4830]: I0227 16:10:21.021118 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7725e3c-523d-4de0-9764-213008ccd32c-config\") pod \"route-controller-manager-fbf66d869-t9dv7\" (UID: \"c7725e3c-523d-4de0-9764-213008ccd32c\") " pod="openshift-route-controller-manager/route-controller-manager-fbf66d869-t9dv7" Feb 27 16:10:21 crc kubenswrapper[4830]: I0227 16:10:21.021212 4830 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/39a011ff-ad25-4470-84da-7a645ea582ce-serving-cert\") pod \"controller-manager-86d57f9bf7-657rh\" (UID: \"39a011ff-ad25-4470-84da-7a645ea582ce\") " pod="openshift-controller-manager/controller-manager-86d57f9bf7-657rh" Feb 27 16:10:21 crc kubenswrapper[4830]: I0227 16:10:21.021347 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39a011ff-ad25-4470-84da-7a645ea582ce-config\") pod \"controller-manager-86d57f9bf7-657rh\" (UID: \"39a011ff-ad25-4470-84da-7a645ea582ce\") " pod="openshift-controller-manager/controller-manager-86d57f9bf7-657rh" Feb 27 16:10:21 crc kubenswrapper[4830]: I0227 16:10:21.021732 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9gfr4\" (UID: \"e98d0941-0faf-4719-88a1-ff04ca46eece\") " pod="openshift-image-registry/image-registry-697d97f7c8-9gfr4" Feb 27 16:10:21 crc kubenswrapper[4830]: I0227 16:10:21.021780 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/39a011ff-ad25-4470-84da-7a645ea582ce-client-ca\") pod \"controller-manager-86d57f9bf7-657rh\" (UID: \"39a011ff-ad25-4470-84da-7a645ea582ce\") " pod="openshift-controller-manager/controller-manager-86d57f9bf7-657rh" Feb 27 16:10:21 crc kubenswrapper[4830]: I0227 16:10:21.021847 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5bf4d\" (UniqueName: \"kubernetes.io/projected/c7725e3c-523d-4de0-9764-213008ccd32c-kube-api-access-5bf4d\") pod \"route-controller-manager-fbf66d869-t9dv7\" (UID: \"c7725e3c-523d-4de0-9764-213008ccd32c\") " 
pod="openshift-route-controller-manager/route-controller-manager-fbf66d869-t9dv7" Feb 27 16:10:21 crc kubenswrapper[4830]: I0227 16:10:21.021919 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dx5gh\" (UniqueName: \"kubernetes.io/projected/39a011ff-ad25-4470-84da-7a645ea582ce-kube-api-access-dx5gh\") pod \"controller-manager-86d57f9bf7-657rh\" (UID: \"39a011ff-ad25-4470-84da-7a645ea582ce\") " pod="openshift-controller-manager/controller-manager-86d57f9bf7-657rh" Feb 27 16:10:21 crc kubenswrapper[4830]: I0227 16:10:21.022562 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/39a011ff-ad25-4470-84da-7a645ea582ce-proxy-ca-bundles\") pod \"controller-manager-86d57f9bf7-657rh\" (UID: \"39a011ff-ad25-4470-84da-7a645ea582ce\") " pod="openshift-controller-manager/controller-manager-86d57f9bf7-657rh" Feb 27 16:10:21 crc kubenswrapper[4830]: I0227 16:10:21.022608 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/39a011ff-ad25-4470-84da-7a645ea582ce-client-ca\") pod \"controller-manager-86d57f9bf7-657rh\" (UID: \"39a011ff-ad25-4470-84da-7a645ea582ce\") " pod="openshift-controller-manager/controller-manager-86d57f9bf7-657rh" Feb 27 16:10:21 crc kubenswrapper[4830]: E0227 16:10:21.022901 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 16:10:21.522887222 +0000 UTC m=+217.612159685 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9gfr4" (UID: "e98d0941-0faf-4719-88a1-ff04ca46eece") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:10:21 crc kubenswrapper[4830]: I0227 16:10:21.023169 4830 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Feb 27 16:10:21 crc kubenswrapper[4830]: I0227 16:10:21.023192 4830 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Feb 27 16:10:21 crc kubenswrapper[4830]: I0227 16:10:21.028980 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/39a011ff-ad25-4470-84da-7a645ea582ce-serving-cert\") pod \"controller-manager-86d57f9bf7-657rh\" (UID: \"39a011ff-ad25-4470-84da-7a645ea582ce\") " pod="openshift-controller-manager/controller-manager-86d57f9bf7-657rh" Feb 27 16:10:21 crc kubenswrapper[4830]: I0227 16:10:21.031679 4830 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-v78pc container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 27 16:10:21 crc kubenswrapper[4830]: I0227 16:10:21.031723 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-v78pc" podUID="278df35c-de00-443d-a6f7-e0cc526a487c" 
containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 27 16:10:21 crc kubenswrapper[4830]: I0227 16:10:21.045423 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5bf4d\" (UniqueName: \"kubernetes.io/projected/c7725e3c-523d-4de0-9764-213008ccd32c-kube-api-access-5bf4d\") pod \"route-controller-manager-fbf66d869-t9dv7\" (UID: \"c7725e3c-523d-4de0-9764-213008ccd32c\") " pod="openshift-route-controller-manager/route-controller-manager-fbf66d869-t9dv7" Feb 27 16:10:21 crc kubenswrapper[4830]: I0227 16:10:21.049198 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c7725e3c-523d-4de0-9764-213008ccd32c-serving-cert\") pod \"route-controller-manager-fbf66d869-t9dv7\" (UID: \"c7725e3c-523d-4de0-9764-213008ccd32c\") " pod="openshift-route-controller-manager/route-controller-manager-fbf66d869-t9dv7" Feb 27 16:10:21 crc kubenswrapper[4830]: I0227 16:10:21.053771 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dx5gh\" (UniqueName: \"kubernetes.io/projected/39a011ff-ad25-4470-84da-7a645ea582ce-kube-api-access-dx5gh\") pod \"controller-manager-86d57f9bf7-657rh\" (UID: \"39a011ff-ad25-4470-84da-7a645ea582ce\") " pod="openshift-controller-manager/controller-manager-86d57f9bf7-657rh" Feb 27 16:10:21 crc kubenswrapper[4830]: I0227 16:10:21.056256 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 27 16:10:21 crc kubenswrapper[4830]: I0227 16:10:21.068204 4830 ???:1] "http: TLS handshake error from 192.168.126.11:56618: no serving certificate available for the kubelet" Feb 27 16:10:21 crc kubenswrapper[4830]: I0227 16:10:21.123107 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 16:10:21 crc kubenswrapper[4830]: I0227 16:10:21.132479 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 27 16:10:21 crc kubenswrapper[4830]: I0227 16:10:21.180977 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-86d57f9bf7-657rh" Feb 27 16:10:21 crc kubenswrapper[4830]: I0227 16:10:21.207743 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-fbf66d869-t9dv7"] Feb 27 16:10:21 crc kubenswrapper[4830]: I0227 16:10:21.208180 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-fbf66d869-t9dv7" Feb 27 16:10:21 crc kubenswrapper[4830]: I0227 16:10:21.210343 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-86d57f9bf7-657rh"] Feb 27 16:10:21 crc kubenswrapper[4830]: I0227 16:10:21.224536 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9gfr4\" (UID: \"e98d0941-0faf-4719-88a1-ff04ca46eece\") " pod="openshift-image-registry/image-registry-697d97f7c8-9gfr4" Feb 27 16:10:21 crc kubenswrapper[4830]: I0227 16:10:21.234555 4830 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 27 16:10:21 crc kubenswrapper[4830]: I0227 16:10:21.234592 4830 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9gfr4\" (UID: \"e98d0941-0faf-4719-88a1-ff04ca46eece\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-9gfr4" Feb 27 16:10:21 crc kubenswrapper[4830]: I0227 16:10:21.280873 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9gfr4\" (UID: \"e98d0941-0faf-4719-88a1-ff04ca46eece\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-9gfr4" Feb 27 16:10:21 crc kubenswrapper[4830]: I0227 16:10:21.345470 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 27 16:10:21 crc kubenswrapper[4830]: I0227 16:10:21.409997 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-86d57f9bf7-657rh"] Feb 27 16:10:21 crc kubenswrapper[4830]: W0227 16:10:21.415431 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod39a011ff_ad25_4470_84da_7a645ea582ce.slice/crio-0ef5bd5807952e3ea56f6d7db60b337c0e614358f38759e18d5af43eed92d535 WatchSource:0}: Error finding container 0ef5bd5807952e3ea56f6d7db60b337c0e614358f38759e18d5af43eed92d535: Status 404 returned error can't find the container with id 0ef5bd5807952e3ea56f6d7db60b337c0e614358f38759e18d5af43eed92d535 Feb 27 16:10:21 crc kubenswrapper[4830]: I0227 16:10:21.430807 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-9gfr4" Feb 27 16:10:21 crc kubenswrapper[4830]: I0227 16:10:21.446785 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-fbf66d869-t9dv7"] Feb 27 16:10:21 crc kubenswrapper[4830]: I0227 16:10:21.502270 4830 generic.go:334] "Generic (PLEG): container finished" podID="1c5e2cae-7890-48fb-ab76-7e53c52fd6ac" containerID="1b77db50d0d117fa49b266837e58b661e70cc8d35c82ff5810f8ea72d6daf765" exitCode=0 Feb 27 16:10:21 crc kubenswrapper[4830]: I0227 16:10:21.502337 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s4bpk" event={"ID":"1c5e2cae-7890-48fb-ab76-7e53c52fd6ac","Type":"ContainerDied","Data":"1b77db50d0d117fa49b266837e58b661e70cc8d35c82ff5810f8ea72d6daf765"} Feb 27 16:10:21 crc kubenswrapper[4830]: I0227 16:10:21.502359 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s4bpk" event={"ID":"1c5e2cae-7890-48fb-ab76-7e53c52fd6ac","Type":"ContainerStarted","Data":"51b8e035ea8513e95d44b35eaa5b469325b15d1a8ba826b39d84ee669e11e1aa"} Feb 27 16:10:21 crc kubenswrapper[4830]: I0227 16:10:21.513406 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"cb19456d-a4a1-42eb-af47-f987cd981816","Type":"ContainerStarted","Data":"c92fe710cfb7b6d3c8221cd4fe442dbcf7a9ad52982d5387e7257d8408922aee"} Feb 27 16:10:21 crc kubenswrapper[4830]: I0227 16:10:21.521018 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-gw4c8" event={"ID":"3311d92d-90da-42f5-acf3-3ec723c5edad","Type":"ContainerStarted","Data":"9591430c79c827ac26ec99c549ea351f20693c74b2ee9ca9278cf5df7dabb667"} Feb 27 16:10:21 crc kubenswrapper[4830]: I0227 16:10:21.528304 4830 generic.go:334] "Generic (PLEG): container finished" 
podID="041ea905-9e91-41e3-9db6-820256d951aa" containerID="1d1db6ba2d26bed55d01d495302f772f7446ef48d3dfc1ab5d8cdb0c74fec5ae" exitCode=0 Feb 27 16:10:21 crc kubenswrapper[4830]: I0227 16:10:21.528334 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29536800-n8kg4" event={"ID":"041ea905-9e91-41e3-9db6-820256d951aa","Type":"ContainerDied","Data":"1d1db6ba2d26bed55d01d495302f772f7446ef48d3dfc1ab5d8cdb0c74fec5ae"} Feb 27 16:10:21 crc kubenswrapper[4830]: I0227 16:10:21.530430 4830 generic.go:334] "Generic (PLEG): container finished" podID="8b33138a-5b9d-4af8-b13d-4db4c2613983" containerID="5c9aab3a73c629bd869eed924733be4ab0e2f3a57268750c95d6c1598dcd566c" exitCode=0 Feb 27 16:10:21 crc kubenswrapper[4830]: I0227 16:10:21.530491 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-966h2" event={"ID":"8b33138a-5b9d-4af8-b13d-4db4c2613983","Type":"ContainerDied","Data":"5c9aab3a73c629bd869eed924733be4ab0e2f3a57268750c95d6c1598dcd566c"} Feb 27 16:10:21 crc kubenswrapper[4830]: I0227 16:10:21.530510 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-966h2" event={"ID":"8b33138a-5b9d-4af8-b13d-4db4c2613983","Type":"ContainerStarted","Data":"23c33b55f6a16c12ef7ea8bc14ae6050c37f27dc6b7d943048c0535970ac6972"} Feb 27 16:10:21 crc kubenswrapper[4830]: I0227 16:10:21.531690 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-86d57f9bf7-657rh" event={"ID":"39a011ff-ad25-4470-84da-7a645ea582ce","Type":"ContainerStarted","Data":"0ef5bd5807952e3ea56f6d7db60b337c0e614358f38759e18d5af43eed92d535"} Feb 27 16:10:21 crc kubenswrapper[4830]: I0227 16:10:21.533820 4830 generic.go:334] "Generic (PLEG): container finished" podID="789ee180-dd8e-4cb2-884e-beea08667c53" containerID="a92b59e6489f19adce074bfbc81353c0bb9b1718b3c77f9d2eec7547c43b3655" exitCode=0 Feb 27 16:10:21 crc 
kubenswrapper[4830]: I0227 16:10:21.533884 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dnpxp" event={"ID":"789ee180-dd8e-4cb2-884e-beea08667c53","Type":"ContainerDied","Data":"a92b59e6489f19adce074bfbc81353c0bb9b1718b3c77f9d2eec7547c43b3655"} Feb 27 16:10:21 crc kubenswrapper[4830]: I0227 16:10:21.533905 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dnpxp" event={"ID":"789ee180-dd8e-4cb2-884e-beea08667c53","Type":"ContainerStarted","Data":"fbedc23b91b7d31b19327fd5a20984e57e48532a563c7a1024a77329cb7ac5b6"} Feb 27 16:10:21 crc kubenswrapper[4830]: I0227 16:10:21.545200 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-gw4c8" podStartSLOduration=13.545178109 podStartE2EDuration="13.545178109s" podCreationTimestamp="2026-02-27 16:10:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:10:21.535277813 +0000 UTC m=+217.624550276" watchObservedRunningTime="2026-02-27 16:10:21.545178109 +0000 UTC m=+217.634450572" Feb 27 16:10:21 crc kubenswrapper[4830]: I0227 16:10:21.545434 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-fbf66d869-t9dv7" event={"ID":"c7725e3c-523d-4de0-9764-213008ccd32c","Type":"ContainerStarted","Data":"1089de176d6c643a257453eca19134fef7b90f981bfb3be17860a69aa67a4475"} Feb 27 16:10:21 crc kubenswrapper[4830]: I0227 16:10:21.549543 4830 generic.go:334] "Generic (PLEG): container finished" podID="f2579681-6b81-4b58-9d2c-c26b123be8ec" containerID="580d57ccadc2b72e237a298049219e0b38ea38314a110fdcef6eddd7d98a3314" exitCode=0 Feb 27 16:10:21 crc kubenswrapper[4830]: I0227 16:10:21.549606 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k7l8d" 
event={"ID":"f2579681-6b81-4b58-9d2c-c26b123be8ec","Type":"ContainerDied","Data":"580d57ccadc2b72e237a298049219e0b38ea38314a110fdcef6eddd7d98a3314"} Feb 27 16:10:21 crc kubenswrapper[4830]: I0227 16:10:21.549631 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k7l8d" event={"ID":"f2579681-6b81-4b58-9d2c-c26b123be8ec","Type":"ContainerStarted","Data":"84793fd5fcaacd431824841d6e2b8422e958b94c7956d1b13be0f87bbce99d67"} Feb 27 16:10:21 crc kubenswrapper[4830]: I0227 16:10:21.570671 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"cbf6fbf6-60fa-43c6-97aa-f8e8ac24fbce","Type":"ContainerStarted","Data":"547bb5646c6d50fcf21ac15fc67a4052e23ee3d9052eb964d8eaee29c0a0745e"} Feb 27 16:10:21 crc kubenswrapper[4830]: I0227 16:10:21.603565 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-n7thk" Feb 27 16:10:21 crc kubenswrapper[4830]: I0227 16:10:21.630249 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-n7thk" Feb 27 16:10:21 crc kubenswrapper[4830]: I0227 16:10:21.710856 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-9gfr4"] Feb 27 16:10:21 crc kubenswrapper[4830]: I0227 16:10:21.795788 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-kkwcl"] Feb 27 16:10:21 crc kubenswrapper[4830]: I0227 16:10:21.798856 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kkwcl" Feb 27 16:10:21 crc kubenswrapper[4830]: I0227 16:10:21.802502 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 27 16:10:21 crc kubenswrapper[4830]: I0227 16:10:21.814246 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kkwcl"] Feb 27 16:10:21 crc kubenswrapper[4830]: I0227 16:10:21.833550 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7e1e0a3-a7d4-4508-b84e-6ba87fced6fc-catalog-content\") pod \"redhat-marketplace-kkwcl\" (UID: \"a7e1e0a3-a7d4-4508-b84e-6ba87fced6fc\") " pod="openshift-marketplace/redhat-marketplace-kkwcl" Feb 27 16:10:21 crc kubenswrapper[4830]: I0227 16:10:21.833583 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7e1e0a3-a7d4-4508-b84e-6ba87fced6fc-utilities\") pod \"redhat-marketplace-kkwcl\" (UID: \"a7e1e0a3-a7d4-4508-b84e-6ba87fced6fc\") " pod="openshift-marketplace/redhat-marketplace-kkwcl" Feb 27 16:10:21 crc kubenswrapper[4830]: I0227 16:10:21.833611 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46ljt\" (UniqueName: \"kubernetes.io/projected/a7e1e0a3-a7d4-4508-b84e-6ba87fced6fc-kube-api-access-46ljt\") pod \"redhat-marketplace-kkwcl\" (UID: \"a7e1e0a3-a7d4-4508-b84e-6ba87fced6fc\") " pod="openshift-marketplace/redhat-marketplace-kkwcl" Feb 27 16:10:21 crc kubenswrapper[4830]: I0227 16:10:21.874613 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-wh6nt" Feb 27 16:10:21 crc kubenswrapper[4830]: I0227 16:10:21.879332 4830 patch_prober.go:28] interesting 
pod/router-default-5444994796-wh6nt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 27 16:10:21 crc kubenswrapper[4830]: [-]has-synced failed: reason withheld Feb 27 16:10:21 crc kubenswrapper[4830]: [+]process-running ok Feb 27 16:10:21 crc kubenswrapper[4830]: healthz check failed Feb 27 16:10:21 crc kubenswrapper[4830]: I0227 16:10:21.879368 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wh6nt" podUID="d473053a-d4df-40b8-a876-5582e1d8a702" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 27 16:10:21 crc kubenswrapper[4830]: I0227 16:10:21.881900 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-8fhp6" Feb 27 16:10:21 crc kubenswrapper[4830]: I0227 16:10:21.929185 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-bfz6f" Feb 27 16:10:21 crc kubenswrapper[4830]: I0227 16:10:21.934662 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7e1e0a3-a7d4-4508-b84e-6ba87fced6fc-catalog-content\") pod \"redhat-marketplace-kkwcl\" (UID: \"a7e1e0a3-a7d4-4508-b84e-6ba87fced6fc\") " pod="openshift-marketplace/redhat-marketplace-kkwcl" Feb 27 16:10:21 crc kubenswrapper[4830]: I0227 16:10:21.934707 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7e1e0a3-a7d4-4508-b84e-6ba87fced6fc-utilities\") pod \"redhat-marketplace-kkwcl\" (UID: \"a7e1e0a3-a7d4-4508-b84e-6ba87fced6fc\") " pod="openshift-marketplace/redhat-marketplace-kkwcl" Feb 27 16:10:21 crc kubenswrapper[4830]: I0227 16:10:21.934748 4830 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-46ljt\" (UniqueName: \"kubernetes.io/projected/a7e1e0a3-a7d4-4508-b84e-6ba87fced6fc-kube-api-access-46ljt\") pod \"redhat-marketplace-kkwcl\" (UID: \"a7e1e0a3-a7d4-4508-b84e-6ba87fced6fc\") " pod="openshift-marketplace/redhat-marketplace-kkwcl" Feb 27 16:10:21 crc kubenswrapper[4830]: I0227 16:10:21.936865 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7e1e0a3-a7d4-4508-b84e-6ba87fced6fc-catalog-content\") pod \"redhat-marketplace-kkwcl\" (UID: \"a7e1e0a3-a7d4-4508-b84e-6ba87fced6fc\") " pod="openshift-marketplace/redhat-marketplace-kkwcl" Feb 27 16:10:21 crc kubenswrapper[4830]: I0227 16:10:21.937091 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7e1e0a3-a7d4-4508-b84e-6ba87fced6fc-utilities\") pod \"redhat-marketplace-kkwcl\" (UID: \"a7e1e0a3-a7d4-4508-b84e-6ba87fced6fc\") " pod="openshift-marketplace/redhat-marketplace-kkwcl" Feb 27 16:10:21 crc kubenswrapper[4830]: I0227 16:10:21.964098 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-46ljt\" (UniqueName: \"kubernetes.io/projected/a7e1e0a3-a7d4-4508-b84e-6ba87fced6fc-kube-api-access-46ljt\") pod \"redhat-marketplace-kkwcl\" (UID: \"a7e1e0a3-a7d4-4508-b84e-6ba87fced6fc\") " pod="openshift-marketplace/redhat-marketplace-kkwcl" Feb 27 16:10:22 crc kubenswrapper[4830]: I0227 16:10:22.119469 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kkwcl" Feb 27 16:10:22 crc kubenswrapper[4830]: I0227 16:10:22.186612 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-zwcdd"] Feb 27 16:10:22 crc kubenswrapper[4830]: I0227 16:10:22.187561 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zwcdd" Feb 27 16:10:22 crc kubenswrapper[4830]: I0227 16:10:22.258412 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/728cab24-3fc3-4249-b37e-183d5676c191-catalog-content\") pod \"redhat-marketplace-zwcdd\" (UID: \"728cab24-3fc3-4249-b37e-183d5676c191\") " pod="openshift-marketplace/redhat-marketplace-zwcdd" Feb 27 16:10:22 crc kubenswrapper[4830]: I0227 16:10:22.258715 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/728cab24-3fc3-4249-b37e-183d5676c191-utilities\") pod \"redhat-marketplace-zwcdd\" (UID: \"728cab24-3fc3-4249-b37e-183d5676c191\") " pod="openshift-marketplace/redhat-marketplace-zwcdd" Feb 27 16:10:22 crc kubenswrapper[4830]: I0227 16:10:22.258733 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4j5c\" (UniqueName: \"kubernetes.io/projected/728cab24-3fc3-4249-b37e-183d5676c191-kube-api-access-z4j5c\") pod \"redhat-marketplace-zwcdd\" (UID: \"728cab24-3fc3-4249-b37e-183d5676c191\") " pod="openshift-marketplace/redhat-marketplace-zwcdd" Feb 27 16:10:22 crc kubenswrapper[4830]: I0227 16:10:22.272385 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zwcdd"] Feb 27 16:10:22 crc kubenswrapper[4830]: I0227 16:10:22.372993 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/728cab24-3fc3-4249-b37e-183d5676c191-utilities\") pod \"redhat-marketplace-zwcdd\" (UID: \"728cab24-3fc3-4249-b37e-183d5676c191\") " pod="openshift-marketplace/redhat-marketplace-zwcdd" Feb 27 16:10:22 crc kubenswrapper[4830]: I0227 16:10:22.373040 4830 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-z4j5c\" (UniqueName: \"kubernetes.io/projected/728cab24-3fc3-4249-b37e-183d5676c191-kube-api-access-z4j5c\") pod \"redhat-marketplace-zwcdd\" (UID: \"728cab24-3fc3-4249-b37e-183d5676c191\") " pod="openshift-marketplace/redhat-marketplace-zwcdd" Feb 27 16:10:22 crc kubenswrapper[4830]: I0227 16:10:22.373083 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/728cab24-3fc3-4249-b37e-183d5676c191-catalog-content\") pod \"redhat-marketplace-zwcdd\" (UID: \"728cab24-3fc3-4249-b37e-183d5676c191\") " pod="openshift-marketplace/redhat-marketplace-zwcdd" Feb 27 16:10:22 crc kubenswrapper[4830]: I0227 16:10:22.373779 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/728cab24-3fc3-4249-b37e-183d5676c191-catalog-content\") pod \"redhat-marketplace-zwcdd\" (UID: \"728cab24-3fc3-4249-b37e-183d5676c191\") " pod="openshift-marketplace/redhat-marketplace-zwcdd" Feb 27 16:10:22 crc kubenswrapper[4830]: I0227 16:10:22.373789 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/728cab24-3fc3-4249-b37e-183d5676c191-utilities\") pod \"redhat-marketplace-zwcdd\" (UID: \"728cab24-3fc3-4249-b37e-183d5676c191\") " pod="openshift-marketplace/redhat-marketplace-zwcdd" Feb 27 16:10:22 crc kubenswrapper[4830]: I0227 16:10:22.398694 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4j5c\" (UniqueName: \"kubernetes.io/projected/728cab24-3fc3-4249-b37e-183d5676c191-kube-api-access-z4j5c\") pod \"redhat-marketplace-zwcdd\" (UID: \"728cab24-3fc3-4249-b37e-183d5676c191\") " pod="openshift-marketplace/redhat-marketplace-zwcdd" Feb 27 16:10:22 crc kubenswrapper[4830]: I0227 16:10:22.428119 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-kkwcl"] Feb 27 16:10:22 crc kubenswrapper[4830]: W0227 16:10:22.449571 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda7e1e0a3_a7d4_4508_b84e_6ba87fced6fc.slice/crio-a9663769ecc6d6c865b5af6cebc5d814f7292ed85c27ca2c0aa948e8dc7dfc90 WatchSource:0}: Error finding container a9663769ecc6d6c865b5af6cebc5d814f7292ed85c27ca2c0aa948e8dc7dfc90: Status 404 returned error can't find the container with id a9663769ecc6d6c865b5af6cebc5d814f7292ed85c27ca2c0aa948e8dc7dfc90 Feb 27 16:10:22 crc kubenswrapper[4830]: I0227 16:10:22.520190 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zwcdd" Feb 27 16:10:22 crc kubenswrapper[4830]: I0227 16:10:22.601319 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-fbf66d869-t9dv7" event={"ID":"c7725e3c-523d-4de0-9764-213008ccd32c","Type":"ContainerStarted","Data":"b97c4bd5179ab32af577a6d41865e231c39b16eb3545366b531410da14ad861f"} Feb 27 16:10:22 crc kubenswrapper[4830]: I0227 16:10:22.601607 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-fbf66d869-t9dv7" Feb 27 16:10:22 crc kubenswrapper[4830]: I0227 16:10:22.601761 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-fbf66d869-t9dv7" podUID="c7725e3c-523d-4de0-9764-213008ccd32c" containerName="route-controller-manager" containerID="cri-o://b97c4bd5179ab32af577a6d41865e231c39b16eb3545366b531410da14ad861f" gracePeriod=30 Feb 27 16:10:22 crc kubenswrapper[4830]: I0227 16:10:22.615538 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-fbf66d869-t9dv7" Feb 27 16:10:22 
crc kubenswrapper[4830]: I0227 16:10:22.617167 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-fbf66d869-t9dv7" podStartSLOduration=3.617151792 podStartE2EDuration="3.617151792s" podCreationTimestamp="2026-02-27 16:10:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:10:22.615972932 +0000 UTC m=+218.705245395" watchObservedRunningTime="2026-02-27 16:10:22.617151792 +0000 UTC m=+218.706424255" Feb 27 16:10:22 crc kubenswrapper[4830]: I0227 16:10:22.620972 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-9gfr4" event={"ID":"e98d0941-0faf-4719-88a1-ff04ca46eece","Type":"ContainerStarted","Data":"b86aa15a8c214c6b6148673776675862a26671cf68d29f3585b2fdffbe01f6a2"} Feb 27 16:10:22 crc kubenswrapper[4830]: I0227 16:10:22.621004 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-9gfr4" event={"ID":"e98d0941-0faf-4719-88a1-ff04ca46eece","Type":"ContainerStarted","Data":"00271d2f05b33a512bb343f4ac3027c7f04d956d7222bb1ce98e8ceb2b3ca9ac"} Feb 27 16:10:22 crc kubenswrapper[4830]: I0227 16:10:22.621545 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-9gfr4" Feb 27 16:10:22 crc kubenswrapper[4830]: I0227 16:10:22.634191 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kkwcl" event={"ID":"a7e1e0a3-a7d4-4508-b84e-6ba87fced6fc","Type":"ContainerStarted","Data":"a9663769ecc6d6c865b5af6cebc5d814f7292ed85c27ca2c0aa948e8dc7dfc90"} Feb 27 16:10:22 crc kubenswrapper[4830]: I0227 16:10:22.648507 4830 generic.go:334] "Generic (PLEG): container finished" podID="cbf6fbf6-60fa-43c6-97aa-f8e8ac24fbce" 
containerID="42c37a387419baf46a347ec0d6e986cb1ae2746c212eb916b137f890d0844d25" exitCode=0 Feb 27 16:10:22 crc kubenswrapper[4830]: I0227 16:10:22.648637 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"cbf6fbf6-60fa-43c6-97aa-f8e8ac24fbce","Type":"ContainerDied","Data":"42c37a387419baf46a347ec0d6e986cb1ae2746c212eb916b137f890d0844d25"} Feb 27 16:10:22 crc kubenswrapper[4830]: I0227 16:10:22.658645 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-9gfr4" podStartSLOduration=182.658630363 podStartE2EDuration="3m2.658630363s" podCreationTimestamp="2026-02-27 16:07:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:10:22.656359785 +0000 UTC m=+218.745632248" watchObservedRunningTime="2026-02-27 16:10:22.658630363 +0000 UTC m=+218.747902826" Feb 27 16:10:22 crc kubenswrapper[4830]: I0227 16:10:22.666088 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-86d57f9bf7-657rh" event={"ID":"39a011ff-ad25-4470-84da-7a645ea582ce","Type":"ContainerStarted","Data":"c6f573904fba95e9be1365a5179cce2f9178e089c1ce42c81c506f9840068361"} Feb 27 16:10:22 crc kubenswrapper[4830]: I0227 16:10:22.666265 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-86d57f9bf7-657rh" podUID="39a011ff-ad25-4470-84da-7a645ea582ce" containerName="controller-manager" containerID="cri-o://c6f573904fba95e9be1365a5179cce2f9178e089c1ce42c81c506f9840068361" gracePeriod=30 Feb 27 16:10:22 crc kubenswrapper[4830]: I0227 16:10:22.669686 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-86d57f9bf7-657rh" Feb 27 16:10:22 crc kubenswrapper[4830]: I0227 16:10:22.674405 
4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-86d57f9bf7-657rh" Feb 27 16:10:22 crc kubenswrapper[4830]: I0227 16:10:22.675998 4830 generic.go:334] "Generic (PLEG): container finished" podID="cb19456d-a4a1-42eb-af47-f987cd981816" containerID="51d0027703e10cfb6c259934dba5e05ab5b9f507a70c0cbe5991fc143f77d29b" exitCode=0 Feb 27 16:10:22 crc kubenswrapper[4830]: I0227 16:10:22.676245 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"cb19456d-a4a1-42eb-af47-f987cd981816","Type":"ContainerDied","Data":"51d0027703e10cfb6c259934dba5e05ab5b9f507a70c0cbe5991fc143f77d29b"} Feb 27 16:10:22 crc kubenswrapper[4830]: I0227 16:10:22.717310 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-86d57f9bf7-657rh" podStartSLOduration=3.71729638 podStartE2EDuration="3.71729638s" podCreationTimestamp="2026-02-27 16:10:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:10:22.701009619 +0000 UTC m=+218.790282082" watchObservedRunningTime="2026-02-27 16:10:22.71729638 +0000 UTC m=+218.806568843" Feb 27 16:10:22 crc kubenswrapper[4830]: I0227 16:10:22.770309 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Feb 27 16:10:22 crc kubenswrapper[4830]: I0227 16:10:22.781565 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-tr5cj"] Feb 27 16:10:22 crc kubenswrapper[4830]: I0227 16:10:22.785987 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-tr5cj" Feb 27 16:10:22 crc kubenswrapper[4830]: I0227 16:10:22.789605 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 27 16:10:22 crc kubenswrapper[4830]: I0227 16:10:22.794077 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tr5cj"] Feb 27 16:10:22 crc kubenswrapper[4830]: I0227 16:10:22.879025 4830 patch_prober.go:28] interesting pod/router-default-5444994796-wh6nt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 27 16:10:22 crc kubenswrapper[4830]: [-]has-synced failed: reason withheld Feb 27 16:10:22 crc kubenswrapper[4830]: [+]process-running ok Feb 27 16:10:22 crc kubenswrapper[4830]: healthz check failed Feb 27 16:10:22 crc kubenswrapper[4830]: I0227 16:10:22.879251 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wh6nt" podUID="d473053a-d4df-40b8-a876-5582e1d8a702" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 27 16:10:22 crc kubenswrapper[4830]: I0227 16:10:22.883973 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48011108-ee2c-4d3b-9f28-65cfc91b90ab-catalog-content\") pod \"redhat-operators-tr5cj\" (UID: \"48011108-ee2c-4d3b-9f28-65cfc91b90ab\") " pod="openshift-marketplace/redhat-operators-tr5cj" Feb 27 16:10:22 crc kubenswrapper[4830]: I0227 16:10:22.884019 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qk54l\" (UniqueName: \"kubernetes.io/projected/48011108-ee2c-4d3b-9f28-65cfc91b90ab-kube-api-access-qk54l\") pod \"redhat-operators-tr5cj\" 
(UID: \"48011108-ee2c-4d3b-9f28-65cfc91b90ab\") " pod="openshift-marketplace/redhat-operators-tr5cj" Feb 27 16:10:22 crc kubenswrapper[4830]: I0227 16:10:22.884060 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48011108-ee2c-4d3b-9f28-65cfc91b90ab-utilities\") pod \"redhat-operators-tr5cj\" (UID: \"48011108-ee2c-4d3b-9f28-65cfc91b90ab\") " pod="openshift-marketplace/redhat-operators-tr5cj" Feb 27 16:10:22 crc kubenswrapper[4830]: I0227 16:10:22.956171 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29536800-n8kg4" Feb 27 16:10:22 crc kubenswrapper[4830]: I0227 16:10:22.985484 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48011108-ee2c-4d3b-9f28-65cfc91b90ab-utilities\") pod \"redhat-operators-tr5cj\" (UID: \"48011108-ee2c-4d3b-9f28-65cfc91b90ab\") " pod="openshift-marketplace/redhat-operators-tr5cj" Feb 27 16:10:22 crc kubenswrapper[4830]: I0227 16:10:22.985557 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48011108-ee2c-4d3b-9f28-65cfc91b90ab-catalog-content\") pod \"redhat-operators-tr5cj\" (UID: \"48011108-ee2c-4d3b-9f28-65cfc91b90ab\") " pod="openshift-marketplace/redhat-operators-tr5cj" Feb 27 16:10:22 crc kubenswrapper[4830]: I0227 16:10:22.985587 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qk54l\" (UniqueName: \"kubernetes.io/projected/48011108-ee2c-4d3b-9f28-65cfc91b90ab-kube-api-access-qk54l\") pod \"redhat-operators-tr5cj\" (UID: \"48011108-ee2c-4d3b-9f28-65cfc91b90ab\") " pod="openshift-marketplace/redhat-operators-tr5cj" Feb 27 16:10:22 crc kubenswrapper[4830]: I0227 16:10:22.988307 4830 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48011108-ee2c-4d3b-9f28-65cfc91b90ab-utilities\") pod \"redhat-operators-tr5cj\" (UID: \"48011108-ee2c-4d3b-9f28-65cfc91b90ab\") " pod="openshift-marketplace/redhat-operators-tr5cj" Feb 27 16:10:22 crc kubenswrapper[4830]: I0227 16:10:22.989250 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48011108-ee2c-4d3b-9f28-65cfc91b90ab-catalog-content\") pod \"redhat-operators-tr5cj\" (UID: \"48011108-ee2c-4d3b-9f28-65cfc91b90ab\") " pod="openshift-marketplace/redhat-operators-tr5cj" Feb 27 16:10:23 crc kubenswrapper[4830]: I0227 16:10:23.017597 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qk54l\" (UniqueName: \"kubernetes.io/projected/48011108-ee2c-4d3b-9f28-65cfc91b90ab-kube-api-access-qk54l\") pod \"redhat-operators-tr5cj\" (UID: \"48011108-ee2c-4d3b-9f28-65cfc91b90ab\") " pod="openshift-marketplace/redhat-operators-tr5cj" Feb 27 16:10:23 crc kubenswrapper[4830]: I0227 16:10:23.022897 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zwcdd"] Feb 27 16:10:23 crc kubenswrapper[4830]: W0227 16:10:23.030824 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod728cab24_3fc3_4249_b37e_183d5676c191.slice/crio-11a15f4f81a0c8ba7bfcfb0caae3d29a2f928469187a063f087b8d427c5179e1 WatchSource:0}: Error finding container 11a15f4f81a0c8ba7bfcfb0caae3d29a2f928469187a063f087b8d427c5179e1: Status 404 returned error can't find the container with id 11a15f4f81a0c8ba7bfcfb0caae3d29a2f928469187a063f087b8d427c5179e1 Feb 27 16:10:23 crc kubenswrapper[4830]: I0227 16:10:23.035360 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-86d57f9bf7-657rh" Feb 27 16:10:23 crc kubenswrapper[4830]: I0227 16:10:23.086434 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/39a011ff-ad25-4470-84da-7a645ea582ce-proxy-ca-bundles\") pod \"39a011ff-ad25-4470-84da-7a645ea582ce\" (UID: \"39a011ff-ad25-4470-84da-7a645ea582ce\") " Feb 27 16:10:23 crc kubenswrapper[4830]: I0227 16:10:23.086492 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/39a011ff-ad25-4470-84da-7a645ea582ce-serving-cert\") pod \"39a011ff-ad25-4470-84da-7a645ea582ce\" (UID: \"39a011ff-ad25-4470-84da-7a645ea582ce\") " Feb 27 16:10:23 crc kubenswrapper[4830]: I0227 16:10:23.086518 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/041ea905-9e91-41e3-9db6-820256d951aa-config-volume\") pod \"041ea905-9e91-41e3-9db6-820256d951aa\" (UID: \"041ea905-9e91-41e3-9db6-820256d951aa\") " Feb 27 16:10:23 crc kubenswrapper[4830]: I0227 16:10:23.086544 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/041ea905-9e91-41e3-9db6-820256d951aa-secret-volume\") pod \"041ea905-9e91-41e3-9db6-820256d951aa\" (UID: \"041ea905-9e91-41e3-9db6-820256d951aa\") " Feb 27 16:10:23 crc kubenswrapper[4830]: I0227 16:10:23.086586 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k8dxs\" (UniqueName: \"kubernetes.io/projected/041ea905-9e91-41e3-9db6-820256d951aa-kube-api-access-k8dxs\") pod \"041ea905-9e91-41e3-9db6-820256d951aa\" (UID: \"041ea905-9e91-41e3-9db6-820256d951aa\") " Feb 27 16:10:23 crc kubenswrapper[4830]: I0227 16:10:23.086619 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/39a011ff-ad25-4470-84da-7a645ea582ce-client-ca\") pod \"39a011ff-ad25-4470-84da-7a645ea582ce\" (UID: \"39a011ff-ad25-4470-84da-7a645ea582ce\") " Feb 27 16:10:23 crc kubenswrapper[4830]: I0227 16:10:23.086638 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39a011ff-ad25-4470-84da-7a645ea582ce-config\") pod \"39a011ff-ad25-4470-84da-7a645ea582ce\" (UID: \"39a011ff-ad25-4470-84da-7a645ea582ce\") " Feb 27 16:10:23 crc kubenswrapper[4830]: I0227 16:10:23.086653 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dx5gh\" (UniqueName: \"kubernetes.io/projected/39a011ff-ad25-4470-84da-7a645ea582ce-kube-api-access-dx5gh\") pod \"39a011ff-ad25-4470-84da-7a645ea582ce\" (UID: \"39a011ff-ad25-4470-84da-7a645ea582ce\") " Feb 27 16:10:23 crc kubenswrapper[4830]: I0227 16:10:23.087803 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/39a011ff-ad25-4470-84da-7a645ea582ce-client-ca" (OuterVolumeSpecName: "client-ca") pod "39a011ff-ad25-4470-84da-7a645ea582ce" (UID: "39a011ff-ad25-4470-84da-7a645ea582ce"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:10:23 crc kubenswrapper[4830]: I0227 16:10:23.089161 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/39a011ff-ad25-4470-84da-7a645ea582ce-config" (OuterVolumeSpecName: "config") pod "39a011ff-ad25-4470-84da-7a645ea582ce" (UID: "39a011ff-ad25-4470-84da-7a645ea582ce"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:10:23 crc kubenswrapper[4830]: I0227 16:10:23.089617 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39a011ff-ad25-4470-84da-7a645ea582ce-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "39a011ff-ad25-4470-84da-7a645ea582ce" (UID: "39a011ff-ad25-4470-84da-7a645ea582ce"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:10:23 crc kubenswrapper[4830]: I0227 16:10:23.090119 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39a011ff-ad25-4470-84da-7a645ea582ce-kube-api-access-dx5gh" (OuterVolumeSpecName: "kube-api-access-dx5gh") pod "39a011ff-ad25-4470-84da-7a645ea582ce" (UID: "39a011ff-ad25-4470-84da-7a645ea582ce"). InnerVolumeSpecName "kube-api-access-dx5gh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:10:23 crc kubenswrapper[4830]: I0227 16:10:23.090415 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/041ea905-9e91-41e3-9db6-820256d951aa-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "041ea905-9e91-41e3-9db6-820256d951aa" (UID: "041ea905-9e91-41e3-9db6-820256d951aa"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:10:23 crc kubenswrapper[4830]: I0227 16:10:23.090933 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/041ea905-9e91-41e3-9db6-820256d951aa-config-volume" (OuterVolumeSpecName: "config-volume") pod "041ea905-9e91-41e3-9db6-820256d951aa" (UID: "041ea905-9e91-41e3-9db6-820256d951aa"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:10:23 crc kubenswrapper[4830]: I0227 16:10:23.091299 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/041ea905-9e91-41e3-9db6-820256d951aa-kube-api-access-k8dxs" (OuterVolumeSpecName: "kube-api-access-k8dxs") pod "041ea905-9e91-41e3-9db6-820256d951aa" (UID: "041ea905-9e91-41e3-9db6-820256d951aa"). InnerVolumeSpecName "kube-api-access-k8dxs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:10:23 crc kubenswrapper[4830]: I0227 16:10:23.091508 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/39a011ff-ad25-4470-84da-7a645ea582ce-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "39a011ff-ad25-4470-84da-7a645ea582ce" (UID: "39a011ff-ad25-4470-84da-7a645ea582ce"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:10:23 crc kubenswrapper[4830]: I0227 16:10:23.188030 4830 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/39a011ff-ad25-4470-84da-7a645ea582ce-client-ca\") on node \"crc\" DevicePath \"\"" Feb 27 16:10:23 crc kubenswrapper[4830]: I0227 16:10:23.188063 4830 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39a011ff-ad25-4470-84da-7a645ea582ce-config\") on node \"crc\" DevicePath \"\"" Feb 27 16:10:23 crc kubenswrapper[4830]: I0227 16:10:23.188075 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dx5gh\" (UniqueName: \"kubernetes.io/projected/39a011ff-ad25-4470-84da-7a645ea582ce-kube-api-access-dx5gh\") on node \"crc\" DevicePath \"\"" Feb 27 16:10:23 crc kubenswrapper[4830]: I0227 16:10:23.188086 4830 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/39a011ff-ad25-4470-84da-7a645ea582ce-proxy-ca-bundles\") 
on node \"crc\" DevicePath \"\"" Feb 27 16:10:23 crc kubenswrapper[4830]: I0227 16:10:23.188094 4830 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/39a011ff-ad25-4470-84da-7a645ea582ce-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 16:10:23 crc kubenswrapper[4830]: I0227 16:10:23.188102 4830 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/041ea905-9e91-41e3-9db6-820256d951aa-config-volume\") on node \"crc\" DevicePath \"\"" Feb 27 16:10:23 crc kubenswrapper[4830]: I0227 16:10:23.188110 4830 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/041ea905-9e91-41e3-9db6-820256d951aa-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 27 16:10:23 crc kubenswrapper[4830]: I0227 16:10:23.188118 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k8dxs\" (UniqueName: \"kubernetes.io/projected/041ea905-9e91-41e3-9db6-820256d951aa-kube-api-access-k8dxs\") on node \"crc\" DevicePath \"\"" Feb 27 16:10:23 crc kubenswrapper[4830]: I0227 16:10:23.198113 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-s5z2n"] Feb 27 16:10:23 crc kubenswrapper[4830]: E0227 16:10:23.198314 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39a011ff-ad25-4470-84da-7a645ea582ce" containerName="controller-manager" Feb 27 16:10:23 crc kubenswrapper[4830]: I0227 16:10:23.198326 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="39a011ff-ad25-4470-84da-7a645ea582ce" containerName="controller-manager" Feb 27 16:10:23 crc kubenswrapper[4830]: E0227 16:10:23.198337 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="041ea905-9e91-41e3-9db6-820256d951aa" containerName="collect-profiles" Feb 27 16:10:23 crc kubenswrapper[4830]: I0227 16:10:23.198343 4830 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="041ea905-9e91-41e3-9db6-820256d951aa" containerName="collect-profiles" Feb 27 16:10:23 crc kubenswrapper[4830]: I0227 16:10:23.198433 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="041ea905-9e91-41e3-9db6-820256d951aa" containerName="collect-profiles" Feb 27 16:10:23 crc kubenswrapper[4830]: I0227 16:10:23.198446 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="39a011ff-ad25-4470-84da-7a645ea582ce" containerName="controller-manager" Feb 27 16:10:23 crc kubenswrapper[4830]: I0227 16:10:23.199100 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-s5z2n" Feb 27 16:10:23 crc kubenswrapper[4830]: I0227 16:10:23.210570 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-s5z2n"] Feb 27 16:10:23 crc kubenswrapper[4830]: I0227 16:10:23.252549 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tr5cj" Feb 27 16:10:23 crc kubenswrapper[4830]: I0227 16:10:23.289430 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s82dh\" (UniqueName: \"kubernetes.io/projected/514ae4c6-322a-458e-a1e5-df6d6a47fc88-kube-api-access-s82dh\") pod \"redhat-operators-s5z2n\" (UID: \"514ae4c6-322a-458e-a1e5-df6d6a47fc88\") " pod="openshift-marketplace/redhat-operators-s5z2n" Feb 27 16:10:23 crc kubenswrapper[4830]: I0227 16:10:23.289499 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/514ae4c6-322a-458e-a1e5-df6d6a47fc88-utilities\") pod \"redhat-operators-s5z2n\" (UID: \"514ae4c6-322a-458e-a1e5-df6d6a47fc88\") " pod="openshift-marketplace/redhat-operators-s5z2n" Feb 27 16:10:23 crc kubenswrapper[4830]: I0227 16:10:23.289536 4830 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/514ae4c6-322a-458e-a1e5-df6d6a47fc88-catalog-content\") pod \"redhat-operators-s5z2n\" (UID: \"514ae4c6-322a-458e-a1e5-df6d6a47fc88\") " pod="openshift-marketplace/redhat-operators-s5z2n" Feb 27 16:10:23 crc kubenswrapper[4830]: I0227 16:10:23.390558 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/514ae4c6-322a-458e-a1e5-df6d6a47fc88-catalog-content\") pod \"redhat-operators-s5z2n\" (UID: \"514ae4c6-322a-458e-a1e5-df6d6a47fc88\") " pod="openshift-marketplace/redhat-operators-s5z2n" Feb 27 16:10:23 crc kubenswrapper[4830]: I0227 16:10:23.390627 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s82dh\" (UniqueName: \"kubernetes.io/projected/514ae4c6-322a-458e-a1e5-df6d6a47fc88-kube-api-access-s82dh\") pod \"redhat-operators-s5z2n\" (UID: \"514ae4c6-322a-458e-a1e5-df6d6a47fc88\") " pod="openshift-marketplace/redhat-operators-s5z2n" Feb 27 16:10:23 crc kubenswrapper[4830]: I0227 16:10:23.390667 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/514ae4c6-322a-458e-a1e5-df6d6a47fc88-utilities\") pod \"redhat-operators-s5z2n\" (UID: \"514ae4c6-322a-458e-a1e5-df6d6a47fc88\") " pod="openshift-marketplace/redhat-operators-s5z2n" Feb 27 16:10:23 crc kubenswrapper[4830]: I0227 16:10:23.391219 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/514ae4c6-322a-458e-a1e5-df6d6a47fc88-utilities\") pod \"redhat-operators-s5z2n\" (UID: \"514ae4c6-322a-458e-a1e5-df6d6a47fc88\") " pod="openshift-marketplace/redhat-operators-s5z2n" Feb 27 16:10:23 crc kubenswrapper[4830]: I0227 16:10:23.391535 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/514ae4c6-322a-458e-a1e5-df6d6a47fc88-catalog-content\") pod \"redhat-operators-s5z2n\" (UID: \"514ae4c6-322a-458e-a1e5-df6d6a47fc88\") " pod="openshift-marketplace/redhat-operators-s5z2n" Feb 27 16:10:23 crc kubenswrapper[4830]: I0227 16:10:23.411108 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s82dh\" (UniqueName: \"kubernetes.io/projected/514ae4c6-322a-458e-a1e5-df6d6a47fc88-kube-api-access-s82dh\") pod \"redhat-operators-s5z2n\" (UID: \"514ae4c6-322a-458e-a1e5-df6d6a47fc88\") " pod="openshift-marketplace/redhat-operators-s5z2n" Feb 27 16:10:23 crc kubenswrapper[4830]: I0227 16:10:23.518003 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-s5z2n" Feb 27 16:10:23 crc kubenswrapper[4830]: I0227 16:10:23.643866 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tr5cj"] Feb 27 16:10:23 crc kubenswrapper[4830]: I0227 16:10:23.651586 4830 ???:1] "http: TLS handshake error from 192.168.126.11:33510: no serving certificate available for the kubelet" Feb 27 16:10:23 crc kubenswrapper[4830]: I0227 16:10:23.687801 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-s5z2n"] Feb 27 16:10:23 crc kubenswrapper[4830]: I0227 16:10:23.689091 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zwcdd" event={"ID":"728cab24-3fc3-4249-b37e-183d5676c191","Type":"ContainerStarted","Data":"11a15f4f81a0c8ba7bfcfb0caae3d29a2f928469187a063f087b8d427c5179e1"} Feb 27 16:10:23 crc kubenswrapper[4830]: I0227 16:10:23.692571 4830 generic.go:334] "Generic (PLEG): container finished" podID="c7725e3c-523d-4de0-9764-213008ccd32c" containerID="b97c4bd5179ab32af577a6d41865e231c39b16eb3545366b531410da14ad861f" exitCode=0 Feb 27 16:10:23 crc kubenswrapper[4830]: I0227 16:10:23.692632 4830 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-fbf66d869-t9dv7" event={"ID":"c7725e3c-523d-4de0-9764-213008ccd32c","Type":"ContainerDied","Data":"b97c4bd5179ab32af577a6d41865e231c39b16eb3545366b531410da14ad861f"} Feb 27 16:10:23 crc kubenswrapper[4830]: I0227 16:10:23.697309 4830 generic.go:334] "Generic (PLEG): container finished" podID="a7e1e0a3-a7d4-4508-b84e-6ba87fced6fc" containerID="73f0f72216ba308c51c1f84db1ce043b27aa6c0dfd99bda506b1cb5082cee083" exitCode=0 Feb 27 16:10:23 crc kubenswrapper[4830]: I0227 16:10:23.697377 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kkwcl" event={"ID":"a7e1e0a3-a7d4-4508-b84e-6ba87fced6fc","Type":"ContainerDied","Data":"73f0f72216ba308c51c1f84db1ce043b27aa6c0dfd99bda506b1cb5082cee083"} Feb 27 16:10:23 crc kubenswrapper[4830]: I0227 16:10:23.699043 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tr5cj" event={"ID":"48011108-ee2c-4d3b-9f28-65cfc91b90ab","Type":"ContainerStarted","Data":"612e1dd9538265d782e323493367826df990f79faf4fd468a9fc7ab772bb8719"} Feb 27 16:10:23 crc kubenswrapper[4830]: I0227 16:10:23.700267 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29536800-n8kg4" event={"ID":"041ea905-9e91-41e3-9db6-820256d951aa","Type":"ContainerDied","Data":"5c054053597c2cf37a8455db1b02e74206f2ebf4ee5f070e47bc8b3126da49e6"} Feb 27 16:10:23 crc kubenswrapper[4830]: I0227 16:10:23.700295 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5c054053597c2cf37a8455db1b02e74206f2ebf4ee5f070e47bc8b3126da49e6" Feb 27 16:10:23 crc kubenswrapper[4830]: I0227 16:10:23.700345 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29536800-n8kg4" Feb 27 16:10:23 crc kubenswrapper[4830]: I0227 16:10:23.702129 4830 generic.go:334] "Generic (PLEG): container finished" podID="39a011ff-ad25-4470-84da-7a645ea582ce" containerID="c6f573904fba95e9be1365a5179cce2f9178e089c1ce42c81c506f9840068361" exitCode=0 Feb 27 16:10:23 crc kubenswrapper[4830]: I0227 16:10:23.702152 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-86d57f9bf7-657rh" Feb 27 16:10:23 crc kubenswrapper[4830]: I0227 16:10:23.702176 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-86d57f9bf7-657rh" event={"ID":"39a011ff-ad25-4470-84da-7a645ea582ce","Type":"ContainerDied","Data":"c6f573904fba95e9be1365a5179cce2f9178e089c1ce42c81c506f9840068361"} Feb 27 16:10:23 crc kubenswrapper[4830]: I0227 16:10:23.702195 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-86d57f9bf7-657rh" event={"ID":"39a011ff-ad25-4470-84da-7a645ea582ce","Type":"ContainerDied","Data":"0ef5bd5807952e3ea56f6d7db60b337c0e614358f38759e18d5af43eed92d535"} Feb 27 16:10:23 crc kubenswrapper[4830]: I0227 16:10:23.702210 4830 scope.go:117] "RemoveContainer" containerID="c6f573904fba95e9be1365a5179cce2f9178e089c1ce42c81c506f9840068361" Feb 27 16:10:23 crc kubenswrapper[4830]: W0227 16:10:23.736681 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod514ae4c6_322a_458e_a1e5_df6d6a47fc88.slice/crio-92a08b63a14ad202a1d5ae495c67d3ba3864adcf0339c9ae9f83cb24e4ba2c07 WatchSource:0}: Error finding container 92a08b63a14ad202a1d5ae495c67d3ba3864adcf0339c9ae9f83cb24e4ba2c07: Status 404 returned error can't find the container with id 92a08b63a14ad202a1d5ae495c67d3ba3864adcf0339c9ae9f83cb24e4ba2c07 Feb 27 16:10:23 crc 
kubenswrapper[4830]: I0227 16:10:23.773350 4830 scope.go:117] "RemoveContainer" containerID="c6f573904fba95e9be1365a5179cce2f9178e089c1ce42c81c506f9840068361" Feb 27 16:10:23 crc kubenswrapper[4830]: E0227 16:10:23.773800 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c6f573904fba95e9be1365a5179cce2f9178e089c1ce42c81c506f9840068361\": container with ID starting with c6f573904fba95e9be1365a5179cce2f9178e089c1ce42c81c506f9840068361 not found: ID does not exist" containerID="c6f573904fba95e9be1365a5179cce2f9178e089c1ce42c81c506f9840068361" Feb 27 16:10:23 crc kubenswrapper[4830]: I0227 16:10:23.773834 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6f573904fba95e9be1365a5179cce2f9178e089c1ce42c81c506f9840068361"} err="failed to get container status \"c6f573904fba95e9be1365a5179cce2f9178e089c1ce42c81c506f9840068361\": rpc error: code = NotFound desc = could not find container \"c6f573904fba95e9be1365a5179cce2f9178e089c1ce42c81c506f9840068361\": container with ID starting with c6f573904fba95e9be1365a5179cce2f9178e089c1ce42c81c506f9840068361 not found: ID does not exist" Feb 27 16:10:23 crc kubenswrapper[4830]: I0227 16:10:23.805225 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-86d57f9bf7-657rh"] Feb 27 16:10:23 crc kubenswrapper[4830]: I0227 16:10:23.806825 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-86d57f9bf7-657rh"] Feb 27 16:10:23 crc kubenswrapper[4830]: I0227 16:10:23.881673 4830 patch_prober.go:28] interesting pod/router-default-5444994796-wh6nt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 27 16:10:23 crc kubenswrapper[4830]: [-]has-synced failed: reason withheld Feb 27 
16:10:23 crc kubenswrapper[4830]: [+]process-running ok Feb 27 16:10:23 crc kubenswrapper[4830]: healthz check failed Feb 27 16:10:23 crc kubenswrapper[4830]: I0227 16:10:23.881777 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wh6nt" podUID="d473053a-d4df-40b8-a876-5582e1d8a702" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 27 16:10:24 crc kubenswrapper[4830]: I0227 16:10:24.013499 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-srljc" Feb 27 16:10:24 crc kubenswrapper[4830]: I0227 16:10:24.082025 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 27 16:10:24 crc kubenswrapper[4830]: I0227 16:10:24.097376 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 27 16:10:24 crc kubenswrapper[4830]: I0227 16:10:24.119471 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-fbf66d869-t9dv7" Feb 27 16:10:24 crc kubenswrapper[4830]: I0227 16:10:24.200181 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cbf6fbf6-60fa-43c6-97aa-f8e8ac24fbce-kube-api-access\") pod \"cbf6fbf6-60fa-43c6-97aa-f8e8ac24fbce\" (UID: \"cbf6fbf6-60fa-43c6-97aa-f8e8ac24fbce\") " Feb 27 16:10:24 crc kubenswrapper[4830]: I0227 16:10:24.200223 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cb19456d-a4a1-42eb-af47-f987cd981816-kubelet-dir\") pod \"cb19456d-a4a1-42eb-af47-f987cd981816\" (UID: \"cb19456d-a4a1-42eb-af47-f987cd981816\") " Feb 27 16:10:24 crc kubenswrapper[4830]: I0227 16:10:24.200255 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7725e3c-523d-4de0-9764-213008ccd32c-config\") pod \"c7725e3c-523d-4de0-9764-213008ccd32c\" (UID: \"c7725e3c-523d-4de0-9764-213008ccd32c\") " Feb 27 16:10:24 crc kubenswrapper[4830]: I0227 16:10:24.200274 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c7725e3c-523d-4de0-9764-213008ccd32c-serving-cert\") pod \"c7725e3c-523d-4de0-9764-213008ccd32c\" (UID: \"c7725e3c-523d-4de0-9764-213008ccd32c\") " Feb 27 16:10:24 crc kubenswrapper[4830]: I0227 16:10:24.200290 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c7725e3c-523d-4de0-9764-213008ccd32c-client-ca\") pod \"c7725e3c-523d-4de0-9764-213008ccd32c\" (UID: \"c7725e3c-523d-4de0-9764-213008ccd32c\") " Feb 27 16:10:24 crc kubenswrapper[4830]: I0227 16:10:24.200314 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cb19456d-a4a1-42eb-af47-f987cd981816-kube-api-access\") pod \"cb19456d-a4a1-42eb-af47-f987cd981816\" (UID: \"cb19456d-a4a1-42eb-af47-f987cd981816\") " Feb 27 16:10:24 crc kubenswrapper[4830]: I0227 16:10:24.200359 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cbf6fbf6-60fa-43c6-97aa-f8e8ac24fbce-kubelet-dir\") pod \"cbf6fbf6-60fa-43c6-97aa-f8e8ac24fbce\" (UID: \"cbf6fbf6-60fa-43c6-97aa-f8e8ac24fbce\") " Feb 27 16:10:24 crc kubenswrapper[4830]: I0227 16:10:24.200385 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5bf4d\" (UniqueName: \"kubernetes.io/projected/c7725e3c-523d-4de0-9764-213008ccd32c-kube-api-access-5bf4d\") pod \"c7725e3c-523d-4de0-9764-213008ccd32c\" (UID: \"c7725e3c-523d-4de0-9764-213008ccd32c\") " Feb 27 16:10:24 crc kubenswrapper[4830]: I0227 16:10:24.201582 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cb19456d-a4a1-42eb-af47-f987cd981816-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "cb19456d-a4a1-42eb-af47-f987cd981816" (UID: "cb19456d-a4a1-42eb-af47-f987cd981816"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 16:10:24 crc kubenswrapper[4830]: I0227 16:10:24.201810 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cbf6fbf6-60fa-43c6-97aa-f8e8ac24fbce-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "cbf6fbf6-60fa-43c6-97aa-f8e8ac24fbce" (UID: "cbf6fbf6-60fa-43c6-97aa-f8e8ac24fbce"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 16:10:24 crc kubenswrapper[4830]: I0227 16:10:24.202687 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7725e3c-523d-4de0-9764-213008ccd32c-config" (OuterVolumeSpecName: "config") pod "c7725e3c-523d-4de0-9764-213008ccd32c" (UID: "c7725e3c-523d-4de0-9764-213008ccd32c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:10:24 crc kubenswrapper[4830]: I0227 16:10:24.203031 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7725e3c-523d-4de0-9764-213008ccd32c-client-ca" (OuterVolumeSpecName: "client-ca") pod "c7725e3c-523d-4de0-9764-213008ccd32c" (UID: "c7725e3c-523d-4de0-9764-213008ccd32c"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:10:24 crc kubenswrapper[4830]: I0227 16:10:24.206196 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7725e3c-523d-4de0-9764-213008ccd32c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c7725e3c-523d-4de0-9764-213008ccd32c" (UID: "c7725e3c-523d-4de0-9764-213008ccd32c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:10:24 crc kubenswrapper[4830]: I0227 16:10:24.206280 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb19456d-a4a1-42eb-af47-f987cd981816-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "cb19456d-a4a1-42eb-af47-f987cd981816" (UID: "cb19456d-a4a1-42eb-af47-f987cd981816"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:10:24 crc kubenswrapper[4830]: I0227 16:10:24.206495 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7725e3c-523d-4de0-9764-213008ccd32c-kube-api-access-5bf4d" (OuterVolumeSpecName: "kube-api-access-5bf4d") pod "c7725e3c-523d-4de0-9764-213008ccd32c" (UID: "c7725e3c-523d-4de0-9764-213008ccd32c"). InnerVolumeSpecName "kube-api-access-5bf4d". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:10:24 crc kubenswrapper[4830]: I0227 16:10:24.206646 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cbf6fbf6-60fa-43c6-97aa-f8e8ac24fbce-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "cbf6fbf6-60fa-43c6-97aa-f8e8ac24fbce" (UID: "cbf6fbf6-60fa-43c6-97aa-f8e8ac24fbce"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:10:24 crc kubenswrapper[4830]: I0227 16:10:24.304493 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cbf6fbf6-60fa-43c6-97aa-f8e8ac24fbce-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 27 16:10:24 crc kubenswrapper[4830]: I0227 16:10:24.304536 4830 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cb19456d-a4a1-42eb-af47-f987cd981816-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 27 16:10:24 crc kubenswrapper[4830]: I0227 16:10:24.304551 4830 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7725e3c-523d-4de0-9764-213008ccd32c-config\") on node \"crc\" DevicePath \"\"" Feb 27 16:10:24 crc kubenswrapper[4830]: I0227 16:10:24.304563 4830 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c7725e3c-523d-4de0-9764-213008ccd32c-serving-cert\") on node \"crc\" 
DevicePath \"\"" Feb 27 16:10:24 crc kubenswrapper[4830]: I0227 16:10:24.304575 4830 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c7725e3c-523d-4de0-9764-213008ccd32c-client-ca\") on node \"crc\" DevicePath \"\"" Feb 27 16:10:24 crc kubenswrapper[4830]: I0227 16:10:24.304585 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cb19456d-a4a1-42eb-af47-f987cd981816-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 27 16:10:24 crc kubenswrapper[4830]: I0227 16:10:24.304594 4830 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cbf6fbf6-60fa-43c6-97aa-f8e8ac24fbce-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 27 16:10:24 crc kubenswrapper[4830]: I0227 16:10:24.304606 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5bf4d\" (UniqueName: \"kubernetes.io/projected/c7725e3c-523d-4de0-9764-213008ccd32c-kube-api-access-5bf4d\") on node \"crc\" DevicePath \"\"" Feb 27 16:10:24 crc kubenswrapper[4830]: I0227 16:10:24.712690 4830 generic.go:334] "Generic (PLEG): container finished" podID="728cab24-3fc3-4249-b37e-183d5676c191" containerID="56e9a05684abd121b608488eee870ec02035de2ff2ffe382701155153d851688" exitCode=0 Feb 27 16:10:24 crc kubenswrapper[4830]: I0227 16:10:24.712772 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zwcdd" event={"ID":"728cab24-3fc3-4249-b37e-183d5676c191","Type":"ContainerDied","Data":"56e9a05684abd121b608488eee870ec02035de2ff2ffe382701155153d851688"} Feb 27 16:10:24 crc kubenswrapper[4830]: I0227 16:10:24.719605 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-fbf66d869-t9dv7" 
event={"ID":"c7725e3c-523d-4de0-9764-213008ccd32c","Type":"ContainerDied","Data":"1089de176d6c643a257453eca19134fef7b90f981bfb3be17860a69aa67a4475"} Feb 27 16:10:24 crc kubenswrapper[4830]: I0227 16:10:24.719672 4830 scope.go:117] "RemoveContainer" containerID="b97c4bd5179ab32af577a6d41865e231c39b16eb3545366b531410da14ad861f" Feb 27 16:10:24 crc kubenswrapper[4830]: I0227 16:10:24.719661 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-fbf66d869-t9dv7" Feb 27 16:10:24 crc kubenswrapper[4830]: I0227 16:10:24.721867 4830 generic.go:334] "Generic (PLEG): container finished" podID="514ae4c6-322a-458e-a1e5-df6d6a47fc88" containerID="e451a4ea3c51636710a864c002dc901d70b11039337616deb3fc447374e38648" exitCode=0 Feb 27 16:10:24 crc kubenswrapper[4830]: I0227 16:10:24.721957 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s5z2n" event={"ID":"514ae4c6-322a-458e-a1e5-df6d6a47fc88","Type":"ContainerDied","Data":"e451a4ea3c51636710a864c002dc901d70b11039337616deb3fc447374e38648"} Feb 27 16:10:24 crc kubenswrapper[4830]: I0227 16:10:24.722008 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s5z2n" event={"ID":"514ae4c6-322a-458e-a1e5-df6d6a47fc88","Type":"ContainerStarted","Data":"92a08b63a14ad202a1d5ae495c67d3ba3864adcf0339c9ae9f83cb24e4ba2c07"} Feb 27 16:10:24 crc kubenswrapper[4830]: I0227 16:10:24.724548 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"cbf6fbf6-60fa-43c6-97aa-f8e8ac24fbce","Type":"ContainerDied","Data":"547bb5646c6d50fcf21ac15fc67a4052e23ee3d9052eb964d8eaee29c0a0745e"} Feb 27 16:10:24 crc kubenswrapper[4830]: I0227 16:10:24.724571 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="547bb5646c6d50fcf21ac15fc67a4052e23ee3d9052eb964d8eaee29c0a0745e" Feb 27 
16:10:24 crc kubenswrapper[4830]: I0227 16:10:24.724624 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 27 16:10:24 crc kubenswrapper[4830]: I0227 16:10:24.728156 4830 generic.go:334] "Generic (PLEG): container finished" podID="48011108-ee2c-4d3b-9f28-65cfc91b90ab" containerID="19152c7e45c1d0d863dc124c17373bb842b76968ed71362f85243c4e84f80696" exitCode=0 Feb 27 16:10:24 crc kubenswrapper[4830]: I0227 16:10:24.728231 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tr5cj" event={"ID":"48011108-ee2c-4d3b-9f28-65cfc91b90ab","Type":"ContainerDied","Data":"19152c7e45c1d0d863dc124c17373bb842b76968ed71362f85243c4e84f80696"} Feb 27 16:10:24 crc kubenswrapper[4830]: I0227 16:10:24.741333 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"cb19456d-a4a1-42eb-af47-f987cd981816","Type":"ContainerDied","Data":"c92fe710cfb7b6d3c8221cd4fe442dbcf7a9ad52982d5387e7257d8408922aee"} Feb 27 16:10:24 crc kubenswrapper[4830]: I0227 16:10:24.741402 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c92fe710cfb7b6d3c8221cd4fe442dbcf7a9ad52982d5387e7257d8408922aee" Feb 27 16:10:24 crc kubenswrapper[4830]: I0227 16:10:24.744865 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 27 16:10:24 crc kubenswrapper[4830]: I0227 16:10:24.782615 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39a011ff-ad25-4470-84da-7a645ea582ce" path="/var/lib/kubelet/pods/39a011ff-ad25-4470-84da-7a645ea582ce/volumes" Feb 27 16:10:24 crc kubenswrapper[4830]: I0227 16:10:24.784834 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-fbf66d869-t9dv7"] Feb 27 16:10:24 crc kubenswrapper[4830]: I0227 16:10:24.787164 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-fbf66d869-t9dv7"] Feb 27 16:10:24 crc kubenswrapper[4830]: I0227 16:10:24.879538 4830 patch_prober.go:28] interesting pod/router-default-5444994796-wh6nt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 27 16:10:24 crc kubenswrapper[4830]: [-]has-synced failed: reason withheld Feb 27 16:10:24 crc kubenswrapper[4830]: [+]process-running ok Feb 27 16:10:24 crc kubenswrapper[4830]: healthz check failed Feb 27 16:10:24 crc kubenswrapper[4830]: I0227 16:10:24.879920 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wh6nt" podUID="d473053a-d4df-40b8-a876-5582e1d8a702" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 27 16:10:25 crc kubenswrapper[4830]: I0227 16:10:25.309483 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-9c4wb" Feb 27 16:10:25 crc kubenswrapper[4830]: I0227 16:10:25.315988 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-9c4wb" Feb 27 16:10:25 crc kubenswrapper[4830]: I0227 
16:10:25.860301 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6c87b6b5b9-9pj4q"] Feb 27 16:10:25 crc kubenswrapper[4830]: E0227 16:10:25.862415 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbf6fbf6-60fa-43c6-97aa-f8e8ac24fbce" containerName="pruner" Feb 27 16:10:25 crc kubenswrapper[4830]: I0227 16:10:25.862470 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbf6fbf6-60fa-43c6-97aa-f8e8ac24fbce" containerName="pruner" Feb 27 16:10:25 crc kubenswrapper[4830]: E0227 16:10:25.862492 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7725e3c-523d-4de0-9764-213008ccd32c" containerName="route-controller-manager" Feb 27 16:10:25 crc kubenswrapper[4830]: I0227 16:10:25.862500 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7725e3c-523d-4de0-9764-213008ccd32c" containerName="route-controller-manager" Feb 27 16:10:25 crc kubenswrapper[4830]: E0227 16:10:25.862513 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb19456d-a4a1-42eb-af47-f987cd981816" containerName="pruner" Feb 27 16:10:25 crc kubenswrapper[4830]: I0227 16:10:25.862519 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb19456d-a4a1-42eb-af47-f987cd981816" containerName="pruner" Feb 27 16:10:25 crc kubenswrapper[4830]: I0227 16:10:25.862872 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7725e3c-523d-4de0-9764-213008ccd32c" containerName="route-controller-manager" Feb 27 16:10:25 crc kubenswrapper[4830]: I0227 16:10:25.862898 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="cbf6fbf6-60fa-43c6-97aa-f8e8ac24fbce" containerName="pruner" Feb 27 16:10:25 crc kubenswrapper[4830]: I0227 16:10:25.862909 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb19456d-a4a1-42eb-af47-f987cd981816" containerName="pruner" Feb 27 16:10:25 crc kubenswrapper[4830]: I0227 16:10:25.863621 4830 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6c87b6b5b9-9pj4q" Feb 27 16:10:25 crc kubenswrapper[4830]: I0227 16:10:25.866986 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 27 16:10:25 crc kubenswrapper[4830]: I0227 16:10:25.867116 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 27 16:10:25 crc kubenswrapper[4830]: I0227 16:10:25.870456 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-b9f669b87-b29pg"] Feb 27 16:10:25 crc kubenswrapper[4830]: I0227 16:10:25.871633 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 27 16:10:25 crc kubenswrapper[4830]: I0227 16:10:25.873054 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-b9f669b87-b29pg" Feb 27 16:10:25 crc kubenswrapper[4830]: I0227 16:10:25.873283 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 27 16:10:25 crc kubenswrapper[4830]: I0227 16:10:25.873566 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 27 16:10:25 crc kubenswrapper[4830]: I0227 16:10:25.873861 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 27 16:10:25 crc kubenswrapper[4830]: I0227 16:10:25.875431 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-b9f669b87-b29pg"] Feb 27 16:10:25 crc kubenswrapper[4830]: I0227 16:10:25.877435 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 27 16:10:25 crc kubenswrapper[4830]: I0227 16:10:25.878402 4830 patch_prober.go:28] interesting pod/router-default-5444994796-wh6nt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 27 16:10:25 crc kubenswrapper[4830]: [-]has-synced failed: reason withheld Feb 27 16:10:25 crc kubenswrapper[4830]: [+]process-running ok Feb 27 16:10:25 crc kubenswrapper[4830]: healthz check failed Feb 27 16:10:25 crc kubenswrapper[4830]: I0227 16:10:25.878456 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wh6nt" podUID="d473053a-d4df-40b8-a876-5582e1d8a702" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 27 16:10:25 crc kubenswrapper[4830]: I0227 16:10:25.878707 4830 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 27 16:10:25 crc kubenswrapper[4830]: I0227 16:10:25.878849 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 27 16:10:25 crc kubenswrapper[4830]: I0227 16:10:25.878880 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 27 16:10:25 crc kubenswrapper[4830]: I0227 16:10:25.879297 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 27 16:10:25 crc kubenswrapper[4830]: I0227 16:10:25.879389 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 27 16:10:25 crc kubenswrapper[4830]: I0227 16:10:25.884068 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 27 16:10:25 crc kubenswrapper[4830]: I0227 16:10:25.886041 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6c87b6b5b9-9pj4q"] Feb 27 16:10:25 crc kubenswrapper[4830]: I0227 16:10:25.935437 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/abe9afe9-de5f-4b67-a8f3-aeae379314bf-serving-cert\") pod \"route-controller-manager-6c87b6b5b9-9pj4q\" (UID: \"abe9afe9-de5f-4b67-a8f3-aeae379314bf\") " pod="openshift-route-controller-manager/route-controller-manager-6c87b6b5b9-9pj4q" Feb 27 16:10:25 crc kubenswrapper[4830]: I0227 16:10:25.935487 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16475e02-4dc7-4adf-954a-00721032f157-config\") pod \"controller-manager-b9f669b87-b29pg\" (UID: 
\"16475e02-4dc7-4adf-954a-00721032f157\") " pod="openshift-controller-manager/controller-manager-b9f669b87-b29pg" Feb 27 16:10:25 crc kubenswrapper[4830]: I0227 16:10:25.935518 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmmqg\" (UniqueName: \"kubernetes.io/projected/abe9afe9-de5f-4b67-a8f3-aeae379314bf-kube-api-access-lmmqg\") pod \"route-controller-manager-6c87b6b5b9-9pj4q\" (UID: \"abe9afe9-de5f-4b67-a8f3-aeae379314bf\") " pod="openshift-route-controller-manager/route-controller-manager-6c87b6b5b9-9pj4q" Feb 27 16:10:25 crc kubenswrapper[4830]: I0227 16:10:25.935593 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trz8x\" (UniqueName: \"kubernetes.io/projected/16475e02-4dc7-4adf-954a-00721032f157-kube-api-access-trz8x\") pod \"controller-manager-b9f669b87-b29pg\" (UID: \"16475e02-4dc7-4adf-954a-00721032f157\") " pod="openshift-controller-manager/controller-manager-b9f669b87-b29pg" Feb 27 16:10:25 crc kubenswrapper[4830]: I0227 16:10:25.935629 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/abe9afe9-de5f-4b67-a8f3-aeae379314bf-client-ca\") pod \"route-controller-manager-6c87b6b5b9-9pj4q\" (UID: \"abe9afe9-de5f-4b67-a8f3-aeae379314bf\") " pod="openshift-route-controller-manager/route-controller-manager-6c87b6b5b9-9pj4q" Feb 27 16:10:25 crc kubenswrapper[4830]: I0227 16:10:25.935649 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/abe9afe9-de5f-4b67-a8f3-aeae379314bf-config\") pod \"route-controller-manager-6c87b6b5b9-9pj4q\" (UID: \"abe9afe9-de5f-4b67-a8f3-aeae379314bf\") " pod="openshift-route-controller-manager/route-controller-manager-6c87b6b5b9-9pj4q" Feb 27 16:10:25 crc kubenswrapper[4830]: I0227 16:10:25.935665 4830 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16475e02-4dc7-4adf-954a-00721032f157-serving-cert\") pod \"controller-manager-b9f669b87-b29pg\" (UID: \"16475e02-4dc7-4adf-954a-00721032f157\") " pod="openshift-controller-manager/controller-manager-b9f669b87-b29pg" Feb 27 16:10:25 crc kubenswrapper[4830]: I0227 16:10:25.935683 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/16475e02-4dc7-4adf-954a-00721032f157-client-ca\") pod \"controller-manager-b9f669b87-b29pg\" (UID: \"16475e02-4dc7-4adf-954a-00721032f157\") " pod="openshift-controller-manager/controller-manager-b9f669b87-b29pg" Feb 27 16:10:25 crc kubenswrapper[4830]: I0227 16:10:25.935710 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/16475e02-4dc7-4adf-954a-00721032f157-proxy-ca-bundles\") pod \"controller-manager-b9f669b87-b29pg\" (UID: \"16475e02-4dc7-4adf-954a-00721032f157\") " pod="openshift-controller-manager/controller-manager-b9f669b87-b29pg" Feb 27 16:10:26 crc kubenswrapper[4830]: I0227 16:10:26.036408 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/abe9afe9-de5f-4b67-a8f3-aeae379314bf-serving-cert\") pod \"route-controller-manager-6c87b6b5b9-9pj4q\" (UID: \"abe9afe9-de5f-4b67-a8f3-aeae379314bf\") " pod="openshift-route-controller-manager/route-controller-manager-6c87b6b5b9-9pj4q" Feb 27 16:10:26 crc kubenswrapper[4830]: I0227 16:10:26.036461 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16475e02-4dc7-4adf-954a-00721032f157-config\") pod \"controller-manager-b9f669b87-b29pg\" (UID: 
\"16475e02-4dc7-4adf-954a-00721032f157\") " pod="openshift-controller-manager/controller-manager-b9f669b87-b29pg" Feb 27 16:10:26 crc kubenswrapper[4830]: I0227 16:10:26.036482 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lmmqg\" (UniqueName: \"kubernetes.io/projected/abe9afe9-de5f-4b67-a8f3-aeae379314bf-kube-api-access-lmmqg\") pod \"route-controller-manager-6c87b6b5b9-9pj4q\" (UID: \"abe9afe9-de5f-4b67-a8f3-aeae379314bf\") " pod="openshift-route-controller-manager/route-controller-manager-6c87b6b5b9-9pj4q" Feb 27 16:10:26 crc kubenswrapper[4830]: I0227 16:10:26.036517 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-trz8x\" (UniqueName: \"kubernetes.io/projected/16475e02-4dc7-4adf-954a-00721032f157-kube-api-access-trz8x\") pod \"controller-manager-b9f669b87-b29pg\" (UID: \"16475e02-4dc7-4adf-954a-00721032f157\") " pod="openshift-controller-manager/controller-manager-b9f669b87-b29pg" Feb 27 16:10:26 crc kubenswrapper[4830]: I0227 16:10:26.036560 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/abe9afe9-de5f-4b67-a8f3-aeae379314bf-client-ca\") pod \"route-controller-manager-6c87b6b5b9-9pj4q\" (UID: \"abe9afe9-de5f-4b67-a8f3-aeae379314bf\") " pod="openshift-route-controller-manager/route-controller-manager-6c87b6b5b9-9pj4q" Feb 27 16:10:26 crc kubenswrapper[4830]: I0227 16:10:26.036596 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/abe9afe9-de5f-4b67-a8f3-aeae379314bf-config\") pod \"route-controller-manager-6c87b6b5b9-9pj4q\" (UID: \"abe9afe9-de5f-4b67-a8f3-aeae379314bf\") " pod="openshift-route-controller-manager/route-controller-manager-6c87b6b5b9-9pj4q" Feb 27 16:10:26 crc kubenswrapper[4830]: I0227 16:10:26.036612 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/16475e02-4dc7-4adf-954a-00721032f157-serving-cert\") pod \"controller-manager-b9f669b87-b29pg\" (UID: \"16475e02-4dc7-4adf-954a-00721032f157\") " pod="openshift-controller-manager/controller-manager-b9f669b87-b29pg" Feb 27 16:10:26 crc kubenswrapper[4830]: I0227 16:10:26.036629 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/16475e02-4dc7-4adf-954a-00721032f157-client-ca\") pod \"controller-manager-b9f669b87-b29pg\" (UID: \"16475e02-4dc7-4adf-954a-00721032f157\") " pod="openshift-controller-manager/controller-manager-b9f669b87-b29pg" Feb 27 16:10:26 crc kubenswrapper[4830]: I0227 16:10:26.036656 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/16475e02-4dc7-4adf-954a-00721032f157-proxy-ca-bundles\") pod \"controller-manager-b9f669b87-b29pg\" (UID: \"16475e02-4dc7-4adf-954a-00721032f157\") " pod="openshift-controller-manager/controller-manager-b9f669b87-b29pg" Feb 27 16:10:26 crc kubenswrapper[4830]: I0227 16:10:26.037919 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16475e02-4dc7-4adf-954a-00721032f157-config\") pod \"controller-manager-b9f669b87-b29pg\" (UID: \"16475e02-4dc7-4adf-954a-00721032f157\") " pod="openshift-controller-manager/controller-manager-b9f669b87-b29pg" Feb 27 16:10:26 crc kubenswrapper[4830]: I0227 16:10:26.038493 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/16475e02-4dc7-4adf-954a-00721032f157-client-ca\") pod \"controller-manager-b9f669b87-b29pg\" (UID: \"16475e02-4dc7-4adf-954a-00721032f157\") " pod="openshift-controller-manager/controller-manager-b9f669b87-b29pg" Feb 27 16:10:26 crc kubenswrapper[4830]: I0227 16:10:26.038623 4830 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/abe9afe9-de5f-4b67-a8f3-aeae379314bf-client-ca\") pod \"route-controller-manager-6c87b6b5b9-9pj4q\" (UID: \"abe9afe9-de5f-4b67-a8f3-aeae379314bf\") " pod="openshift-route-controller-manager/route-controller-manager-6c87b6b5b9-9pj4q" Feb 27 16:10:26 crc kubenswrapper[4830]: I0227 16:10:26.038937 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/16475e02-4dc7-4adf-954a-00721032f157-proxy-ca-bundles\") pod \"controller-manager-b9f669b87-b29pg\" (UID: \"16475e02-4dc7-4adf-954a-00721032f157\") " pod="openshift-controller-manager/controller-manager-b9f669b87-b29pg" Feb 27 16:10:26 crc kubenswrapper[4830]: I0227 16:10:26.039012 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/abe9afe9-de5f-4b67-a8f3-aeae379314bf-config\") pod \"route-controller-manager-6c87b6b5b9-9pj4q\" (UID: \"abe9afe9-de5f-4b67-a8f3-aeae379314bf\") " pod="openshift-route-controller-manager/route-controller-manager-6c87b6b5b9-9pj4q" Feb 27 16:10:26 crc kubenswrapper[4830]: I0227 16:10:26.042572 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/abe9afe9-de5f-4b67-a8f3-aeae379314bf-serving-cert\") pod \"route-controller-manager-6c87b6b5b9-9pj4q\" (UID: \"abe9afe9-de5f-4b67-a8f3-aeae379314bf\") " pod="openshift-route-controller-manager/route-controller-manager-6c87b6b5b9-9pj4q" Feb 27 16:10:26 crc kubenswrapper[4830]: I0227 16:10:26.052051 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-trz8x\" (UniqueName: \"kubernetes.io/projected/16475e02-4dc7-4adf-954a-00721032f157-kube-api-access-trz8x\") pod \"controller-manager-b9f669b87-b29pg\" (UID: \"16475e02-4dc7-4adf-954a-00721032f157\") " 
pod="openshift-controller-manager/controller-manager-b9f669b87-b29pg" Feb 27 16:10:26 crc kubenswrapper[4830]: I0227 16:10:26.053241 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmmqg\" (UniqueName: \"kubernetes.io/projected/abe9afe9-de5f-4b67-a8f3-aeae379314bf-kube-api-access-lmmqg\") pod \"route-controller-manager-6c87b6b5b9-9pj4q\" (UID: \"abe9afe9-de5f-4b67-a8f3-aeae379314bf\") " pod="openshift-route-controller-manager/route-controller-manager-6c87b6b5b9-9pj4q" Feb 27 16:10:26 crc kubenswrapper[4830]: I0227 16:10:26.053735 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16475e02-4dc7-4adf-954a-00721032f157-serving-cert\") pod \"controller-manager-b9f669b87-b29pg\" (UID: \"16475e02-4dc7-4adf-954a-00721032f157\") " pod="openshift-controller-manager/controller-manager-b9f669b87-b29pg" Feb 27 16:10:26 crc kubenswrapper[4830]: I0227 16:10:26.188644 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6c87b6b5b9-9pj4q" Feb 27 16:10:26 crc kubenswrapper[4830]: I0227 16:10:26.198513 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-b9f669b87-b29pg" Feb 27 16:10:26 crc kubenswrapper[4830]: I0227 16:10:26.804521 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7725e3c-523d-4de0-9764-213008ccd32c" path="/var/lib/kubelet/pods/c7725e3c-523d-4de0-9764-213008ccd32c/volumes" Feb 27 16:10:26 crc kubenswrapper[4830]: I0227 16:10:26.841316 4830 ???:1] "http: TLS handshake error from 192.168.126.11:33512: no serving certificate available for the kubelet" Feb 27 16:10:26 crc kubenswrapper[4830]: I0227 16:10:26.878896 4830 patch_prober.go:28] interesting pod/router-default-5444994796-wh6nt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 27 16:10:26 crc kubenswrapper[4830]: [-]has-synced failed: reason withheld Feb 27 16:10:26 crc kubenswrapper[4830]: [+]process-running ok Feb 27 16:10:26 crc kubenswrapper[4830]: healthz check failed Feb 27 16:10:26 crc kubenswrapper[4830]: I0227 16:10:26.879350 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wh6nt" podUID="d473053a-d4df-40b8-a876-5582e1d8a702" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 27 16:10:27 crc kubenswrapper[4830]: I0227 16:10:27.877468 4830 patch_prober.go:28] interesting pod/router-default-5444994796-wh6nt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 27 16:10:27 crc kubenswrapper[4830]: [-]has-synced failed: reason withheld Feb 27 16:10:27 crc kubenswrapper[4830]: [+]process-running ok Feb 27 16:10:27 crc kubenswrapper[4830]: healthz check failed Feb 27 16:10:27 crc kubenswrapper[4830]: I0227 16:10:27.877531 4830 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-5444994796-wh6nt" podUID="d473053a-d4df-40b8-a876-5582e1d8a702" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 27 16:10:28 crc kubenswrapper[4830]: I0227 16:10:28.810596 4830 ???:1] "http: TLS handshake error from 192.168.126.11:33518: no serving certificate available for the kubelet" Feb 27 16:10:28 crc kubenswrapper[4830]: I0227 16:10:28.876166 4830 patch_prober.go:28] interesting pod/router-default-5444994796-wh6nt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 27 16:10:28 crc kubenswrapper[4830]: [-]has-synced failed: reason withheld Feb 27 16:10:28 crc kubenswrapper[4830]: [+]process-running ok Feb 27 16:10:28 crc kubenswrapper[4830]: healthz check failed Feb 27 16:10:28 crc kubenswrapper[4830]: I0227 16:10:28.876232 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wh6nt" podUID="d473053a-d4df-40b8-a876-5582e1d8a702" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 27 16:10:29 crc kubenswrapper[4830]: I0227 16:10:29.879095 4830 patch_prober.go:28] interesting pod/router-default-5444994796-wh6nt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 27 16:10:29 crc kubenswrapper[4830]: [-]has-synced failed: reason withheld Feb 27 16:10:29 crc kubenswrapper[4830]: [+]process-running ok Feb 27 16:10:29 crc kubenswrapper[4830]: healthz check failed Feb 27 16:10:29 crc kubenswrapper[4830]: I0227 16:10:29.879224 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wh6nt" podUID="d473053a-d4df-40b8-a876-5582e1d8a702" containerName="router" probeResult="failure" output="HTTP 
probe failed with statuscode: 500" Feb 27 16:10:30 crc kubenswrapper[4830]: I0227 16:10:30.146341 4830 patch_prober.go:28] interesting pod/console-f9d7485db-kjfn6 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.11:8443/health\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Feb 27 16:10:30 crc kubenswrapper[4830]: I0227 16:10:30.146421 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-kjfn6" podUID="11fbaa05-cf66-40dd-be15-c6474a011768" containerName="console" probeResult="failure" output="Get \"https://10.217.0.11:8443/health\": dial tcp 10.217.0.11:8443: connect: connection refused" Feb 27 16:10:30 crc kubenswrapper[4830]: I0227 16:10:30.823772 4830 patch_prober.go:28] interesting pod/downloads-7954f5f757-4dhxq container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Feb 27 16:10:30 crc kubenswrapper[4830]: I0227 16:10:30.824289 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-4dhxq" podUID="1f30f03f-511a-4a29-beae-e3d6971a8c9e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Feb 27 16:10:30 crc kubenswrapper[4830]: I0227 16:10:30.824082 4830 patch_prober.go:28] interesting pod/downloads-7954f5f757-4dhxq container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Feb 27 16:10:30 crc kubenswrapper[4830]: I0227 16:10:30.824381 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-4dhxq" podUID="1f30f03f-511a-4a29-beae-e3d6971a8c9e" containerName="download-server" 
probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Feb 27 16:10:30 crc kubenswrapper[4830]: I0227 16:10:30.879064 4830 patch_prober.go:28] interesting pod/router-default-5444994796-wh6nt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 27 16:10:30 crc kubenswrapper[4830]: [-]has-synced failed: reason withheld Feb 27 16:10:30 crc kubenswrapper[4830]: [+]process-running ok Feb 27 16:10:30 crc kubenswrapper[4830]: healthz check failed Feb 27 16:10:30 crc kubenswrapper[4830]: I0227 16:10:30.879334 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wh6nt" podUID="d473053a-d4df-40b8-a876-5582e1d8a702" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 27 16:10:31 crc kubenswrapper[4830]: I0227 16:10:31.877514 4830 patch_prober.go:28] interesting pod/router-default-5444994796-wh6nt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 27 16:10:31 crc kubenswrapper[4830]: [-]has-synced failed: reason withheld Feb 27 16:10:31 crc kubenswrapper[4830]: [+]process-running ok Feb 27 16:10:31 crc kubenswrapper[4830]: healthz check failed Feb 27 16:10:31 crc kubenswrapper[4830]: I0227 16:10:31.877594 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wh6nt" podUID="d473053a-d4df-40b8-a876-5582e1d8a702" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 27 16:10:32 crc kubenswrapper[4830]: I0227 16:10:32.878736 4830 patch_prober.go:28] interesting pod/router-default-5444994796-wh6nt container/router namespace/openshift-ingress: Startup probe status=failure 
output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 27 16:10:32 crc kubenswrapper[4830]: [-]has-synced failed: reason withheld Feb 27 16:10:32 crc kubenswrapper[4830]: [+]process-running ok Feb 27 16:10:32 crc kubenswrapper[4830]: healthz check failed Feb 27 16:10:32 crc kubenswrapper[4830]: I0227 16:10:32.878831 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wh6nt" podUID="d473053a-d4df-40b8-a876-5582e1d8a702" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 27 16:10:33 crc kubenswrapper[4830]: I0227 16:10:33.160460 4830 patch_prober.go:28] interesting pod/machine-config-daemon-2tv5v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 16:10:33 crc kubenswrapper[4830]: I0227 16:10:33.160530 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 16:10:33 crc kubenswrapper[4830]: I0227 16:10:33.878673 4830 patch_prober.go:28] interesting pod/router-default-5444994796-wh6nt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 27 16:10:33 crc kubenswrapper[4830]: [-]has-synced failed: reason withheld Feb 27 16:10:33 crc kubenswrapper[4830]: [+]process-running ok Feb 27 16:10:33 crc kubenswrapper[4830]: healthz check failed Feb 27 16:10:33 crc kubenswrapper[4830]: I0227 16:10:33.878973 4830 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-ingress/router-default-5444994796-wh6nt" podUID="d473053a-d4df-40b8-a876-5582e1d8a702" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 27 16:10:34 crc kubenswrapper[4830]: I0227 16:10:34.877163 4830 patch_prober.go:28] interesting pod/router-default-5444994796-wh6nt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 27 16:10:34 crc kubenswrapper[4830]: [-]has-synced failed: reason withheld Feb 27 16:10:34 crc kubenswrapper[4830]: [+]process-running ok Feb 27 16:10:34 crc kubenswrapper[4830]: healthz check failed Feb 27 16:10:34 crc kubenswrapper[4830]: I0227 16:10:34.877271 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wh6nt" podUID="d473053a-d4df-40b8-a876-5582e1d8a702" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 27 16:10:35 crc kubenswrapper[4830]: I0227 16:10:35.902193 4830 patch_prober.go:28] interesting pod/router-default-5444994796-wh6nt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 27 16:10:35 crc kubenswrapper[4830]: [-]has-synced failed: reason withheld Feb 27 16:10:35 crc kubenswrapper[4830]: [+]process-running ok Feb 27 16:10:35 crc kubenswrapper[4830]: healthz check failed Feb 27 16:10:35 crc kubenswrapper[4830]: I0227 16:10:35.902317 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wh6nt" podUID="d473053a-d4df-40b8-a876-5582e1d8a702" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 27 16:10:36 crc kubenswrapper[4830]: I0227 16:10:36.876772 4830 patch_prober.go:28] interesting 
pod/router-default-5444994796-wh6nt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 27 16:10:36 crc kubenswrapper[4830]: [-]has-synced failed: reason withheld Feb 27 16:10:36 crc kubenswrapper[4830]: [+]process-running ok Feb 27 16:10:36 crc kubenswrapper[4830]: healthz check failed Feb 27 16:10:36 crc kubenswrapper[4830]: I0227 16:10:36.877115 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wh6nt" podUID="d473053a-d4df-40b8-a876-5582e1d8a702" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 27 16:10:37 crc kubenswrapper[4830]: I0227 16:10:37.879823 4830 patch_prober.go:28] interesting pod/router-default-5444994796-wh6nt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 27 16:10:37 crc kubenswrapper[4830]: [+]has-synced ok Feb 27 16:10:37 crc kubenswrapper[4830]: [+]process-running ok Feb 27 16:10:37 crc kubenswrapper[4830]: healthz check failed Feb 27 16:10:37 crc kubenswrapper[4830]: I0227 16:10:37.879931 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wh6nt" podUID="d473053a-d4df-40b8-a876-5582e1d8a702" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 27 16:10:38 crc kubenswrapper[4830]: I0227 16:10:38.880118 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-wh6nt" Feb 27 16:10:38 crc kubenswrapper[4830]: I0227 16:10:38.887615 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-wh6nt" Feb 27 16:10:38 crc kubenswrapper[4830]: I0227 16:10:38.969059 4830 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-b9f669b87-b29pg"] Feb 27 16:10:39 crc kubenswrapper[4830]: I0227 16:10:39.020336 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6c87b6b5b9-9pj4q"] Feb 27 16:10:40 crc kubenswrapper[4830]: I0227 16:10:40.146700 4830 patch_prober.go:28] interesting pod/console-f9d7485db-kjfn6 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.11:8443/health\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Feb 27 16:10:40 crc kubenswrapper[4830]: I0227 16:10:40.147309 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-kjfn6" podUID="11fbaa05-cf66-40dd-be15-c6474a011768" containerName="console" probeResult="failure" output="Get \"https://10.217.0.11:8443/health\": dial tcp 10.217.0.11:8443: connect: connection refused" Feb 27 16:10:40 crc kubenswrapper[4830]: I0227 16:10:40.823763 4830 patch_prober.go:28] interesting pod/downloads-7954f5f757-4dhxq container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Feb 27 16:10:40 crc kubenswrapper[4830]: I0227 16:10:40.823844 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-4dhxq" podUID="1f30f03f-511a-4a29-beae-e3d6971a8c9e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Feb 27 16:10:40 crc kubenswrapper[4830]: I0227 16:10:40.823929 4830 patch_prober.go:28] interesting pod/downloads-7954f5f757-4dhxq container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: 
connect: connection refused" start-of-body= Feb 27 16:10:40 crc kubenswrapper[4830]: I0227 16:10:40.824061 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-4dhxq" podUID="1f30f03f-511a-4a29-beae-e3d6971a8c9e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Feb 27 16:10:40 crc kubenswrapper[4830]: I0227 16:10:40.824143 4830 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-7954f5f757-4dhxq" Feb 27 16:10:40 crc kubenswrapper[4830]: I0227 16:10:40.824750 4830 patch_prober.go:28] interesting pod/downloads-7954f5f757-4dhxq container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Feb 27 16:10:40 crc kubenswrapper[4830]: I0227 16:10:40.824836 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-4dhxq" podUID="1f30f03f-511a-4a29-beae-e3d6971a8c9e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Feb 27 16:10:40 crc kubenswrapper[4830]: I0227 16:10:40.825048 4830 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"b141c95318a5e67b3011273667892320219cb8b98bd670bbab711f837bcb857d"} pod="openshift-console/downloads-7954f5f757-4dhxq" containerMessage="Container download-server failed liveness probe, will be restarted" Feb 27 16:10:40 crc kubenswrapper[4830]: I0227 16:10:40.825115 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/downloads-7954f5f757-4dhxq" podUID="1f30f03f-511a-4a29-beae-e3d6971a8c9e" containerName="download-server" 
containerID="cri-o://b141c95318a5e67b3011273667892320219cb8b98bd670bbab711f837bcb857d" gracePeriod=2 Feb 27 16:10:41 crc kubenswrapper[4830]: I0227 16:10:41.440349 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-9gfr4" Feb 27 16:10:43 crc kubenswrapper[4830]: I0227 16:10:43.928632 4830 generic.go:334] "Generic (PLEG): container finished" podID="1f30f03f-511a-4a29-beae-e3d6971a8c9e" containerID="b141c95318a5e67b3011273667892320219cb8b98bd670bbab711f837bcb857d" exitCode=0 Feb 27 16:10:43 crc kubenswrapper[4830]: I0227 16:10:43.928710 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-4dhxq" event={"ID":"1f30f03f-511a-4a29-beae-e3d6971a8c9e","Type":"ContainerDied","Data":"b141c95318a5e67b3011273667892320219cb8b98bd670bbab711f837bcb857d"} Feb 27 16:10:49 crc kubenswrapper[4830]: I0227 16:10:49.318287 4830 ???:1] "http: TLS handshake error from 192.168.126.11:49974: no serving certificate available for the kubelet" Feb 27 16:10:50 crc kubenswrapper[4830]: I0227 16:10:50.179105 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-kjfn6" Feb 27 16:10:50 crc kubenswrapper[4830]: I0227 16:10:50.184360 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-kjfn6" Feb 27 16:10:50 crc kubenswrapper[4830]: I0227 16:10:50.826116 4830 patch_prober.go:28] interesting pod/downloads-7954f5f757-4dhxq container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Feb 27 16:10:50 crc kubenswrapper[4830]: I0227 16:10:50.826221 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-4dhxq" podUID="1f30f03f-511a-4a29-beae-e3d6971a8c9e" 
containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Feb 27 16:10:51 crc kubenswrapper[4830]: I0227 16:10:51.023759 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 16:10:51 crc kubenswrapper[4830]: I0227 16:10:51.930376 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hlg9d" Feb 27 16:10:54 crc kubenswrapper[4830]: I0227 16:10:54.364162 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 27 16:10:54 crc kubenswrapper[4830]: I0227 16:10:54.365103 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 27 16:10:54 crc kubenswrapper[4830]: I0227 16:10:54.368019 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 27 16:10:54 crc kubenswrapper[4830]: I0227 16:10:54.368311 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 27 16:10:54 crc kubenswrapper[4830]: I0227 16:10:54.381108 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 27 16:10:54 crc kubenswrapper[4830]: I0227 16:10:54.487752 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7b610c66-1e54-490d-bf39-27add37574a4-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"7b610c66-1e54-490d-bf39-27add37574a4\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 27 16:10:54 crc kubenswrapper[4830]: I0227 16:10:54.487805 4830 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7b610c66-1e54-490d-bf39-27add37574a4-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"7b610c66-1e54-490d-bf39-27add37574a4\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 27 16:10:54 crc kubenswrapper[4830]: I0227 16:10:54.589787 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7b610c66-1e54-490d-bf39-27add37574a4-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"7b610c66-1e54-490d-bf39-27add37574a4\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 27 16:10:54 crc kubenswrapper[4830]: I0227 16:10:54.589907 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7b610c66-1e54-490d-bf39-27add37574a4-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"7b610c66-1e54-490d-bf39-27add37574a4\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 27 16:10:54 crc kubenswrapper[4830]: I0227 16:10:54.589988 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7b610c66-1e54-490d-bf39-27add37574a4-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"7b610c66-1e54-490d-bf39-27add37574a4\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 27 16:10:54 crc kubenswrapper[4830]: I0227 16:10:54.608605 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7b610c66-1e54-490d-bf39-27add37574a4-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"7b610c66-1e54-490d-bf39-27add37574a4\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 27 16:10:54 crc kubenswrapper[4830]: I0227 16:10:54.703211 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 27 16:10:55 crc kubenswrapper[4830]: E0227 16:10:55.850271 4830 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 27 16:10:55 crc kubenswrapper[4830]: E0227 16:10:55.850840 4830 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 16:10:55 crc kubenswrapper[4830]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 27 16:10:55 crc kubenswrapper[4830]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-v6nrn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29536810-bc446_openshift-infra(1eb064bc-39af-405a-bdbf-665e31fa07c3): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled Feb 27 16:10:55 crc kubenswrapper[4830]: > logger="UnhandledError" Feb 27 16:10:55 crc kubenswrapper[4830]: E0227 16:10:55.853791 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with 
ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-infra/auto-csr-approver-29536810-bc446" podUID="1eb064bc-39af-405a-bdbf-665e31fa07c3" Feb 27 16:10:56 crc kubenswrapper[4830]: E0227 16:10:56.039455 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536810-bc446" podUID="1eb064bc-39af-405a-bdbf-665e31fa07c3" Feb 27 16:10:58 crc kubenswrapper[4830]: I0227 16:10:58.965507 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 27 16:10:58 crc kubenswrapper[4830]: I0227 16:10:58.966844 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 27 16:10:58 crc kubenswrapper[4830]: I0227 16:10:58.993079 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 27 16:10:59 crc kubenswrapper[4830]: I0227 16:10:59.052686 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d3e80191-de07-41aa-b0d7-69b826f5378b-kube-api-access\") pod \"installer-9-crc\" (UID: \"d3e80191-de07-41aa-b0d7-69b826f5378b\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 27 16:10:59 crc kubenswrapper[4830]: I0227 16:10:59.053103 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d3e80191-de07-41aa-b0d7-69b826f5378b-kubelet-dir\") pod \"installer-9-crc\" (UID: \"d3e80191-de07-41aa-b0d7-69b826f5378b\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 27 16:10:59 crc kubenswrapper[4830]: I0227 16:10:59.053157 4830 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d3e80191-de07-41aa-b0d7-69b826f5378b-var-lock\") pod \"installer-9-crc\" (UID: \"d3e80191-de07-41aa-b0d7-69b826f5378b\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 27 16:10:59 crc kubenswrapper[4830]: I0227 16:10:59.154419 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d3e80191-de07-41aa-b0d7-69b826f5378b-var-lock\") pod \"installer-9-crc\" (UID: \"d3e80191-de07-41aa-b0d7-69b826f5378b\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 27 16:10:59 crc kubenswrapper[4830]: I0227 16:10:59.154497 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d3e80191-de07-41aa-b0d7-69b826f5378b-kube-api-access\") pod \"installer-9-crc\" (UID: \"d3e80191-de07-41aa-b0d7-69b826f5378b\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 27 16:10:59 crc kubenswrapper[4830]: I0227 16:10:59.154537 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d3e80191-de07-41aa-b0d7-69b826f5378b-kubelet-dir\") pod \"installer-9-crc\" (UID: \"d3e80191-de07-41aa-b0d7-69b826f5378b\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 27 16:10:59 crc kubenswrapper[4830]: I0227 16:10:59.154543 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d3e80191-de07-41aa-b0d7-69b826f5378b-var-lock\") pod \"installer-9-crc\" (UID: \"d3e80191-de07-41aa-b0d7-69b826f5378b\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 27 16:10:59 crc kubenswrapper[4830]: I0227 16:10:59.154629 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/d3e80191-de07-41aa-b0d7-69b826f5378b-kubelet-dir\") pod \"installer-9-crc\" (UID: \"d3e80191-de07-41aa-b0d7-69b826f5378b\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 27 16:10:59 crc kubenswrapper[4830]: I0227 16:10:59.172964 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d3e80191-de07-41aa-b0d7-69b826f5378b-kube-api-access\") pod \"installer-9-crc\" (UID: \"d3e80191-de07-41aa-b0d7-69b826f5378b\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 27 16:10:59 crc kubenswrapper[4830]: I0227 16:10:59.283506 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 27 16:11:00 crc kubenswrapper[4830]: I0227 16:11:00.824216 4830 patch_prober.go:28] interesting pod/downloads-7954f5f757-4dhxq container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Feb 27 16:11:00 crc kubenswrapper[4830]: I0227 16:11:00.824312 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-4dhxq" podUID="1f30f03f-511a-4a29-beae-e3d6971a8c9e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Feb 27 16:11:03 crc kubenswrapper[4830]: I0227 16:11:03.160721 4830 patch_prober.go:28] interesting pod/machine-config-daemon-2tv5v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 16:11:03 crc kubenswrapper[4830]: I0227 16:11:03.160803 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" 
podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 16:11:03 crc kubenswrapper[4830]: I0227 16:11:03.160866 4830 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" Feb 27 16:11:03 crc kubenswrapper[4830]: I0227 16:11:03.161622 4830 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5d403fb60beeb11f3cb72e7ca134b83a9c519375fbb6727d2070118a6b924516"} pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 27 16:11:03 crc kubenswrapper[4830]: I0227 16:11:03.161683 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" containerName="machine-config-daemon" containerID="cri-o://5d403fb60beeb11f3cb72e7ca134b83a9c519375fbb6727d2070118a6b924516" gracePeriod=600 Feb 27 16:11:06 crc kubenswrapper[4830]: I0227 16:11:06.103887 4830 generic.go:334] "Generic (PLEG): container finished" podID="00d6b7ce-4757-4275-8345-60c1b546ce25" containerID="5d403fb60beeb11f3cb72e7ca134b83a9c519375fbb6727d2070118a6b924516" exitCode=0 Feb 27 16:11:06 crc kubenswrapper[4830]: I0227 16:11:06.104032 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" event={"ID":"00d6b7ce-4757-4275-8345-60c1b546ce25","Type":"ContainerDied","Data":"5d403fb60beeb11f3cb72e7ca134b83a9c519375fbb6727d2070118a6b924516"} Feb 27 16:11:10 crc kubenswrapper[4830]: I0227 16:11:10.823763 4830 patch_prober.go:28] interesting pod/downloads-7954f5f757-4dhxq container/download-server 
namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Feb 27 16:11:10 crc kubenswrapper[4830]: I0227 16:11:10.824174 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-4dhxq" podUID="1f30f03f-511a-4a29-beae-e3d6971a8c9e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Feb 27 16:11:17 crc kubenswrapper[4830]: E0227 16:11:17.299184 4830 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 27 16:11:17 crc kubenswrapper[4830]: E0227 16:11:17.300031 4830 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x989f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-966h2_openshift-marketplace(8b33138a-5b9d-4af8-b13d-4db4c2613983): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 27 16:11:17 crc kubenswrapper[4830]: E0227 16:11:17.301331 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-966h2" podUID="8b33138a-5b9d-4af8-b13d-4db4c2613983" Feb 27 16:11:17 crc 
kubenswrapper[4830]: I0227 16:11:17.745639 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6c87b6b5b9-9pj4q"] Feb 27 16:11:20 crc kubenswrapper[4830]: I0227 16:11:20.824646 4830 patch_prober.go:28] interesting pod/downloads-7954f5f757-4dhxq container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Feb 27 16:11:20 crc kubenswrapper[4830]: I0227 16:11:20.824728 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-4dhxq" podUID="1f30f03f-511a-4a29-beae-e3d6971a8c9e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Feb 27 16:11:20 crc kubenswrapper[4830]: E0227 16:11:20.986631 4830 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 27 16:11:20 crc kubenswrapper[4830]: E0227 16:11:20.986774 4830 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4pq52,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-k7l8d_openshift-marketplace(f2579681-6b81-4b58-9d2c-c26b123be8ec): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 27 16:11:20 crc kubenswrapper[4830]: E0227 16:11:20.988101 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-k7l8d" podUID="f2579681-6b81-4b58-9d2c-c26b123be8ec" Feb 27 16:11:21 crc 
kubenswrapper[4830]: E0227 16:11:21.182753 4830 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 27 16:11:21 crc kubenswrapper[4830]: E0227 16:11:21.183119 4830 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lzbpj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
certified-operators-dnpxp_openshift-marketplace(789ee180-dd8e-4cb2-884e-beea08667c53): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 27 16:11:21 crc kubenswrapper[4830]: E0227 16:11:21.184483 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-dnpxp" podUID="789ee180-dd8e-4cb2-884e-beea08667c53" Feb 27 16:11:21 crc kubenswrapper[4830]: E0227 16:11:21.400462 4830 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 27 16:11:21 crc kubenswrapper[4830]: E0227 16:11:21.400606 4830 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mngln,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-s4bpk_openshift-marketplace(1c5e2cae-7890-48fb-ab76-7e53c52fd6ac): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 27 16:11:21 crc kubenswrapper[4830]: E0227 16:11:21.401694 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-s4bpk" podUID="1c5e2cae-7890-48fb-ab76-7e53c52fd6ac" Feb 27 16:11:21 crc 
kubenswrapper[4830]: E0227 16:11:21.786301 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-966h2" podUID="8b33138a-5b9d-4af8-b13d-4db4c2613983" Feb 27 16:11:21 crc kubenswrapper[4830]: E0227 16:11:21.786833 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-dnpxp" podUID="789ee180-dd8e-4cb2-884e-beea08667c53" Feb 27 16:11:21 crc kubenswrapper[4830]: E0227 16:11:21.786856 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-k7l8d" podUID="f2579681-6b81-4b58-9d2c-c26b123be8ec" Feb 27 16:11:22 crc kubenswrapper[4830]: E0227 16:11:22.004302 4830 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 27 16:11:22 crc kubenswrapper[4830]: E0227 16:11:22.004726 4830 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-46ljt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-kkwcl_openshift-marketplace(a7e1e0a3-a7d4-4508-b84e-6ba87fced6fc): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 27 16:11:22 crc kubenswrapper[4830]: E0227 16:11:22.005905 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-kkwcl" podUID="a7e1e0a3-a7d4-4508-b84e-6ba87fced6fc" Feb 27 16:11:22 crc 
kubenswrapper[4830]: I0227 16:11:22.233573 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-b9f669b87-b29pg"] Feb 27 16:11:25 crc kubenswrapper[4830]: E0227 16:11:25.251828 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-s4bpk" podUID="1c5e2cae-7890-48fb-ab76-7e53c52fd6ac" Feb 27 16:11:25 crc kubenswrapper[4830]: E0227 16:11:25.252449 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-kkwcl" podUID="a7e1e0a3-a7d4-4508-b84e-6ba87fced6fc" Feb 27 16:11:25 crc kubenswrapper[4830]: W0227 16:11:25.254104 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podabe9afe9_de5f_4b67_a8f3_aeae379314bf.slice/crio-61f2d1e9789621ffb16b99218991602ca105d10e03e29b61232bc5d858de9472 WatchSource:0}: Error finding container 61f2d1e9789621ffb16b99218991602ca105d10e03e29b61232bc5d858de9472: Status 404 returned error can't find the container with id 61f2d1e9789621ffb16b99218991602ca105d10e03e29b61232bc5d858de9472 Feb 27 16:11:25 crc kubenswrapper[4830]: W0227 16:11:25.258308 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod16475e02_4dc7_4adf_954a_00721032f157.slice/crio-e24d9eecfb269356f01390a349aad8f73b3bcb92fbdfad86a3511b341f639688 WatchSource:0}: Error finding container e24d9eecfb269356f01390a349aad8f73b3bcb92fbdfad86a3511b341f639688: Status 404 returned error can't find the container with id 
e24d9eecfb269356f01390a349aad8f73b3bcb92fbdfad86a3511b341f639688 Feb 27 16:11:25 crc kubenswrapper[4830]: E0227 16:11:25.329607 4830 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 27 16:11:25 crc kubenswrapper[4830]: E0227 16:11:25.329828 4830 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s82dh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy
{},RestartPolicy:nil,} start failed in pod redhat-operators-s5z2n_openshift-marketplace(514ae4c6-322a-458e-a1e5-df6d6a47fc88): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 27 16:11:25 crc kubenswrapper[4830]: E0227 16:11:25.331193 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-s5z2n" podUID="514ae4c6-322a-458e-a1e5-df6d6a47fc88" Feb 27 16:11:25 crc kubenswrapper[4830]: E0227 16:11:25.354896 4830 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 27 16:11:25 crc kubenswrapper[4830]: E0227 16:11:25.355086 4830 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z4j5c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-zwcdd_openshift-marketplace(728cab24-3fc3-4249-b37e-183d5676c191): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 27 16:11:25 crc kubenswrapper[4830]: E0227 16:11:25.356267 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-zwcdd" podUID="728cab24-3fc3-4249-b37e-183d5676c191" Feb 27 16:11:25 crc 
kubenswrapper[4830]: E0227 16:11:25.419127 4830 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 27 16:11:25 crc kubenswrapper[4830]: E0227 16:11:25.419253 4830 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qk54l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
redhat-operators-tr5cj_openshift-marketplace(48011108-ee2c-4d3b-9f28-65cfc91b90ab): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 27 16:11:25 crc kubenswrapper[4830]: E0227 16:11:25.420459 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-tr5cj" podUID="48011108-ee2c-4d3b-9f28-65cfc91b90ab" Feb 27 16:11:25 crc kubenswrapper[4830]: I0227 16:11:25.771741 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 27 16:11:25 crc kubenswrapper[4830]: I0227 16:11:25.825012 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 27 16:11:26 crc kubenswrapper[4830]: I0227 16:11:26.241911 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6c87b6b5b9-9pj4q" event={"ID":"abe9afe9-de5f-4b67-a8f3-aeae379314bf","Type":"ContainerStarted","Data":"37c10a33cf4fa3a826072376f142dab22e5ac2dcb89037184defb8dafcc9756e"} Feb 27 16:11:26 crc kubenswrapper[4830]: I0227 16:11:26.242651 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6c87b6b5b9-9pj4q" event={"ID":"abe9afe9-de5f-4b67-a8f3-aeae379314bf","Type":"ContainerStarted","Data":"61f2d1e9789621ffb16b99218991602ca105d10e03e29b61232bc5d858de9472"} Feb 27 16:11:26 crc kubenswrapper[4830]: I0227 16:11:26.242770 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6c87b6b5b9-9pj4q" podUID="abe9afe9-de5f-4b67-a8f3-aeae379314bf" containerName="route-controller-manager" 
containerID="cri-o://37c10a33cf4fa3a826072376f142dab22e5ac2dcb89037184defb8dafcc9756e" gracePeriod=30 Feb 27 16:11:26 crc kubenswrapper[4830]: I0227 16:11:26.243716 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6c87b6b5b9-9pj4q" Feb 27 16:11:26 crc kubenswrapper[4830]: I0227 16:11:26.247656 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6c87b6b5b9-9pj4q" Feb 27 16:11:26 crc kubenswrapper[4830]: I0227 16:11:26.251526 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536810-bc446" event={"ID":"1eb064bc-39af-405a-bdbf-665e31fa07c3","Type":"ContainerStarted","Data":"2752d42115a6a9ee8f1db79008a40907b77e6730aee724c7ce880c7ef63ed522"} Feb 27 16:11:26 crc kubenswrapper[4830]: I0227 16:11:26.271829 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6c87b6b5b9-9pj4q" podStartSLOduration=65.271803736 podStartE2EDuration="1m5.271803736s" podCreationTimestamp="2026-02-27 16:10:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:11:26.26070504 +0000 UTC m=+282.349977513" watchObservedRunningTime="2026-02-27 16:11:26.271803736 +0000 UTC m=+282.361076199" Feb 27 16:11:26 crc kubenswrapper[4830]: I0227 16:11:26.275869 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-b9f669b87-b29pg" podUID="16475e02-4dc7-4adf-954a-00721032f157" containerName="controller-manager" containerID="cri-o://fb5b33e770c48586ce60381a606b79fedbfab466bb8c24fefcaa20b68ef91b23" gracePeriod=30 Feb 27 16:11:26 crc kubenswrapper[4830]: I0227 16:11:26.276064 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager/controller-manager-b9f669b87-b29pg" event={"ID":"16475e02-4dc7-4adf-954a-00721032f157","Type":"ContainerStarted","Data":"fb5b33e770c48586ce60381a606b79fedbfab466bb8c24fefcaa20b68ef91b23"} Feb 27 16:11:26 crc kubenswrapper[4830]: I0227 16:11:26.277405 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-b9f669b87-b29pg" Feb 27 16:11:26 crc kubenswrapper[4830]: I0227 16:11:26.277525 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-b9f669b87-b29pg" event={"ID":"16475e02-4dc7-4adf-954a-00721032f157","Type":"ContainerStarted","Data":"e24d9eecfb269356f01390a349aad8f73b3bcb92fbdfad86a3511b341f639688"} Feb 27 16:11:26 crc kubenswrapper[4830]: I0227 16:11:26.279567 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"7b610c66-1e54-490d-bf39-27add37574a4","Type":"ContainerStarted","Data":"da30552f22cffa62c82a0ce5a780f184fa7b869ce67db24f0b6f14743a879a81"} Feb 27 16:11:26 crc kubenswrapper[4830]: I0227 16:11:26.281394 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" event={"ID":"00d6b7ce-4757-4275-8345-60c1b546ce25","Type":"ContainerStarted","Data":"ad7b3479bfc7bc824e438e72666ce37c850e7de1824a4243534d5a7cc2b790bd"} Feb 27 16:11:26 crc kubenswrapper[4830]: I0227 16:11:26.284114 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"d3e80191-de07-41aa-b0d7-69b826f5378b","Type":"ContainerStarted","Data":"132b04e1399ac3f5e9b1da70094189822a66567c85a1c5bd88e2b2441d1440f4"} Feb 27 16:11:26 crc kubenswrapper[4830]: I0227 16:11:26.285440 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-b9f669b87-b29pg" Feb 27 16:11:26 crc kubenswrapper[4830]: I0227 
16:11:26.295077 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-4dhxq" event={"ID":"1f30f03f-511a-4a29-beae-e3d6971a8c9e","Type":"ContainerStarted","Data":"f2713c8e9d72c3489200595de40861602cdf6fa95effb70610720c01a0c58928"} Feb 27 16:11:26 crc kubenswrapper[4830]: I0227 16:11:26.295195 4830 patch_prober.go:28] interesting pod/downloads-7954f5f757-4dhxq container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Feb 27 16:11:26 crc kubenswrapper[4830]: I0227 16:11:26.295237 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-4dhxq" podUID="1f30f03f-511a-4a29-beae-e3d6971a8c9e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Feb 27 16:11:26 crc kubenswrapper[4830]: I0227 16:11:26.295210 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-4dhxq" Feb 27 16:11:26 crc kubenswrapper[4830]: E0227 16:11:26.295334 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-zwcdd" podUID="728cab24-3fc3-4249-b37e-183d5676c191" Feb 27 16:11:26 crc kubenswrapper[4830]: E0227 16:11:26.300531 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-tr5cj" podUID="48011108-ee2c-4d3b-9f28-65cfc91b90ab" Feb 27 16:11:26 crc kubenswrapper[4830]: E0227 16:11:26.300606 
4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-s5z2n" podUID="514ae4c6-322a-458e-a1e5-df6d6a47fc88" Feb 27 16:11:26 crc kubenswrapper[4830]: I0227 16:11:26.304498 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29536810-bc446" podStartSLOduration=13.679791375 podStartE2EDuration="1m26.304478521s" podCreationTimestamp="2026-02-27 16:10:00 +0000 UTC" firstStartedPulling="2026-02-27 16:10:12.93815386 +0000 UTC m=+209.027426323" lastFinishedPulling="2026-02-27 16:11:25.562840996 +0000 UTC m=+281.652113469" observedRunningTime="2026-02-27 16:11:26.303336192 +0000 UTC m=+282.392608655" watchObservedRunningTime="2026-02-27 16:11:26.304478521 +0000 UTC m=+282.393750994" Feb 27 16:11:26 crc kubenswrapper[4830]: I0227 16:11:26.378424 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-b9f669b87-b29pg" podStartSLOduration=65.378404065 podStartE2EDuration="1m5.378404065s" podCreationTimestamp="2026-02-27 16:10:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:11:26.375273157 +0000 UTC m=+282.464545620" watchObservedRunningTime="2026-02-27 16:11:26.378404065 +0000 UTC m=+282.467676528" Feb 27 16:11:26 crc kubenswrapper[4830]: I0227 16:11:26.458070 4830 csr.go:261] certificate signing request csr-l742t is approved, waiting to be issued Feb 27 16:11:26 crc kubenswrapper[4830]: I0227 16:11:26.463425 4830 csr.go:257] certificate signing request csr-l742t is issued Feb 27 16:11:26 crc kubenswrapper[4830]: I0227 16:11:26.646131 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6c87b6b5b9-9pj4q" Feb 27 16:11:26 crc kubenswrapper[4830]: I0227 16:11:26.650861 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-b9f669b87-b29pg" Feb 27 16:11:26 crc kubenswrapper[4830]: I0227 16:11:26.727897 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-598f9554f5-fh5l2"] Feb 27 16:11:26 crc kubenswrapper[4830]: I0227 16:11:26.744717 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/16475e02-4dc7-4adf-954a-00721032f157-client-ca\") pod \"16475e02-4dc7-4adf-954a-00721032f157\" (UID: \"16475e02-4dc7-4adf-954a-00721032f157\") " Feb 27 16:11:26 crc kubenswrapper[4830]: I0227 16:11:26.744768 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/abe9afe9-de5f-4b67-a8f3-aeae379314bf-client-ca\") pod \"abe9afe9-de5f-4b67-a8f3-aeae379314bf\" (UID: \"abe9afe9-de5f-4b67-a8f3-aeae379314bf\") " Feb 27 16:11:26 crc kubenswrapper[4830]: I0227 16:11:26.744797 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/abe9afe9-de5f-4b67-a8f3-aeae379314bf-config\") pod \"abe9afe9-de5f-4b67-a8f3-aeae379314bf\" (UID: \"abe9afe9-de5f-4b67-a8f3-aeae379314bf\") " Feb 27 16:11:26 crc kubenswrapper[4830]: I0227 16:11:26.745805 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16475e02-4dc7-4adf-954a-00721032f157-client-ca" (OuterVolumeSpecName: "client-ca") pod "16475e02-4dc7-4adf-954a-00721032f157" (UID: "16475e02-4dc7-4adf-954a-00721032f157"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:11:26 crc kubenswrapper[4830]: I0227 16:11:26.746364 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/abe9afe9-de5f-4b67-a8f3-aeae379314bf-config" (OuterVolumeSpecName: "config") pod "abe9afe9-de5f-4b67-a8f3-aeae379314bf" (UID: "abe9afe9-de5f-4b67-a8f3-aeae379314bf"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:11:26 crc kubenswrapper[4830]: I0227 16:11:26.747057 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/abe9afe9-de5f-4b67-a8f3-aeae379314bf-client-ca" (OuterVolumeSpecName: "client-ca") pod "abe9afe9-de5f-4b67-a8f3-aeae379314bf" (UID: "abe9afe9-de5f-4b67-a8f3-aeae379314bf"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:11:26 crc kubenswrapper[4830]: I0227 16:11:26.748609 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lmmqg\" (UniqueName: \"kubernetes.io/projected/abe9afe9-de5f-4b67-a8f3-aeae379314bf-kube-api-access-lmmqg\") pod \"abe9afe9-de5f-4b67-a8f3-aeae379314bf\" (UID: \"abe9afe9-de5f-4b67-a8f3-aeae379314bf\") " Feb 27 16:11:26 crc kubenswrapper[4830]: I0227 16:11:26.748651 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-trz8x\" (UniqueName: \"kubernetes.io/projected/16475e02-4dc7-4adf-954a-00721032f157-kube-api-access-trz8x\") pod \"16475e02-4dc7-4adf-954a-00721032f157\" (UID: \"16475e02-4dc7-4adf-954a-00721032f157\") " Feb 27 16:11:26 crc kubenswrapper[4830]: I0227 16:11:26.748692 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16475e02-4dc7-4adf-954a-00721032f157-serving-cert\") pod \"16475e02-4dc7-4adf-954a-00721032f157\" (UID: \"16475e02-4dc7-4adf-954a-00721032f157\") " Feb 27 16:11:26 
crc kubenswrapper[4830]: I0227 16:11:26.749607 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16475e02-4dc7-4adf-954a-00721032f157-config\") pod \"16475e02-4dc7-4adf-954a-00721032f157\" (UID: \"16475e02-4dc7-4adf-954a-00721032f157\") " Feb 27 16:11:26 crc kubenswrapper[4830]: I0227 16:11:26.749742 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/16475e02-4dc7-4adf-954a-00721032f157-proxy-ca-bundles\") pod \"16475e02-4dc7-4adf-954a-00721032f157\" (UID: \"16475e02-4dc7-4adf-954a-00721032f157\") " Feb 27 16:11:26 crc kubenswrapper[4830]: I0227 16:11:26.749841 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/abe9afe9-de5f-4b67-a8f3-aeae379314bf-serving-cert\") pod \"abe9afe9-de5f-4b67-a8f3-aeae379314bf\" (UID: \"abe9afe9-de5f-4b67-a8f3-aeae379314bf\") " Feb 27 16:11:26 crc kubenswrapper[4830]: E0227 16:11:26.756907 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abe9afe9-de5f-4b67-a8f3-aeae379314bf" containerName="route-controller-manager" Feb 27 16:11:26 crc kubenswrapper[4830]: I0227 16:11:26.756962 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="abe9afe9-de5f-4b67-a8f3-aeae379314bf" containerName="route-controller-manager" Feb 27 16:11:26 crc kubenswrapper[4830]: E0227 16:11:26.756973 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16475e02-4dc7-4adf-954a-00721032f157" containerName="controller-manager" Feb 27 16:11:26 crc kubenswrapper[4830]: I0227 16:11:26.756980 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="16475e02-4dc7-4adf-954a-00721032f157" containerName="controller-manager" Feb 27 16:11:26 crc kubenswrapper[4830]: I0227 16:11:26.757221 4830 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="abe9afe9-de5f-4b67-a8f3-aeae379314bf" containerName="route-controller-manager" Feb 27 16:11:26 crc kubenswrapper[4830]: I0227 16:11:26.757242 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="16475e02-4dc7-4adf-954a-00721032f157" containerName="controller-manager" Feb 27 16:11:26 crc kubenswrapper[4830]: I0227 16:11:26.757965 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-598f9554f5-fh5l2" Feb 27 16:11:26 crc kubenswrapper[4830]: I0227 16:11:26.758983 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16475e02-4dc7-4adf-954a-00721032f157-config" (OuterVolumeSpecName: "config") pod "16475e02-4dc7-4adf-954a-00721032f157" (UID: "16475e02-4dc7-4adf-954a-00721032f157"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:11:26 crc kubenswrapper[4830]: I0227 16:11:26.759184 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/abe9afe9-de5f-4b67-a8f3-aeae379314bf-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "abe9afe9-de5f-4b67-a8f3-aeae379314bf" (UID: "abe9afe9-de5f-4b67-a8f3-aeae379314bf"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:11:26 crc kubenswrapper[4830]: I0227 16:11:26.759212 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/abe9afe9-de5f-4b67-a8f3-aeae379314bf-kube-api-access-lmmqg" (OuterVolumeSpecName: "kube-api-access-lmmqg") pod "abe9afe9-de5f-4b67-a8f3-aeae379314bf" (UID: "abe9afe9-de5f-4b67-a8f3-aeae379314bf"). InnerVolumeSpecName "kube-api-access-lmmqg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:11:26 crc kubenswrapper[4830]: I0227 16:11:26.759232 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16475e02-4dc7-4adf-954a-00721032f157-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "16475e02-4dc7-4adf-954a-00721032f157" (UID: "16475e02-4dc7-4adf-954a-00721032f157"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:11:26 crc kubenswrapper[4830]: I0227 16:11:26.759336 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16475e02-4dc7-4adf-954a-00721032f157-kube-api-access-trz8x" (OuterVolumeSpecName: "kube-api-access-trz8x") pod "16475e02-4dc7-4adf-954a-00721032f157" (UID: "16475e02-4dc7-4adf-954a-00721032f157"). InnerVolumeSpecName "kube-api-access-trz8x". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:11:26 crc kubenswrapper[4830]: I0227 16:11:26.759353 4830 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/16475e02-4dc7-4adf-954a-00721032f157-client-ca\") on node \"crc\" DevicePath \"\"" Feb 27 16:11:26 crc kubenswrapper[4830]: I0227 16:11:26.759406 4830 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/abe9afe9-de5f-4b67-a8f3-aeae379314bf-client-ca\") on node \"crc\" DevicePath \"\"" Feb 27 16:11:26 crc kubenswrapper[4830]: I0227 16:11:26.759471 4830 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/abe9afe9-de5f-4b67-a8f3-aeae379314bf-config\") on node \"crc\" DevicePath \"\"" Feb 27 16:11:26 crc kubenswrapper[4830]: I0227 16:11:26.759719 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16475e02-4dc7-4adf-954a-00721032f157-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod 
"16475e02-4dc7-4adf-954a-00721032f157" (UID: "16475e02-4dc7-4adf-954a-00721032f157"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:11:26 crc kubenswrapper[4830]: I0227 16:11:26.785279 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-598f9554f5-fh5l2"] Feb 27 16:11:26 crc kubenswrapper[4830]: I0227 16:11:26.860767 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/14683c90-aea4-45e5-88ed-c6d9a95a18af-serving-cert\") pod \"route-controller-manager-598f9554f5-fh5l2\" (UID: \"14683c90-aea4-45e5-88ed-c6d9a95a18af\") " pod="openshift-route-controller-manager/route-controller-manager-598f9554f5-fh5l2" Feb 27 16:11:26 crc kubenswrapper[4830]: I0227 16:11:26.860869 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sf9qn\" (UniqueName: \"kubernetes.io/projected/14683c90-aea4-45e5-88ed-c6d9a95a18af-kube-api-access-sf9qn\") pod \"route-controller-manager-598f9554f5-fh5l2\" (UID: \"14683c90-aea4-45e5-88ed-c6d9a95a18af\") " pod="openshift-route-controller-manager/route-controller-manager-598f9554f5-fh5l2" Feb 27 16:11:26 crc kubenswrapper[4830]: I0227 16:11:26.861039 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14683c90-aea4-45e5-88ed-c6d9a95a18af-config\") pod \"route-controller-manager-598f9554f5-fh5l2\" (UID: \"14683c90-aea4-45e5-88ed-c6d9a95a18af\") " pod="openshift-route-controller-manager/route-controller-manager-598f9554f5-fh5l2" Feb 27 16:11:26 crc kubenswrapper[4830]: I0227 16:11:26.861089 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/14683c90-aea4-45e5-88ed-c6d9a95a18af-client-ca\") pod \"route-controller-manager-598f9554f5-fh5l2\" (UID: \"14683c90-aea4-45e5-88ed-c6d9a95a18af\") " pod="openshift-route-controller-manager/route-controller-manager-598f9554f5-fh5l2" Feb 27 16:11:26 crc kubenswrapper[4830]: I0227 16:11:26.861174 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lmmqg\" (UniqueName: \"kubernetes.io/projected/abe9afe9-de5f-4b67-a8f3-aeae379314bf-kube-api-access-lmmqg\") on node \"crc\" DevicePath \"\"" Feb 27 16:11:26 crc kubenswrapper[4830]: I0227 16:11:26.861206 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-trz8x\" (UniqueName: \"kubernetes.io/projected/16475e02-4dc7-4adf-954a-00721032f157-kube-api-access-trz8x\") on node \"crc\" DevicePath \"\"" Feb 27 16:11:26 crc kubenswrapper[4830]: I0227 16:11:26.861227 4830 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16475e02-4dc7-4adf-954a-00721032f157-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 16:11:26 crc kubenswrapper[4830]: I0227 16:11:26.861250 4830 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16475e02-4dc7-4adf-954a-00721032f157-config\") on node \"crc\" DevicePath \"\"" Feb 27 16:11:26 crc kubenswrapper[4830]: I0227 16:11:26.861270 4830 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/16475e02-4dc7-4adf-954a-00721032f157-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 27 16:11:26 crc kubenswrapper[4830]: I0227 16:11:26.861286 4830 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/abe9afe9-de5f-4b67-a8f3-aeae379314bf-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 16:11:26 crc kubenswrapper[4830]: I0227 16:11:26.962771 4830 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14683c90-aea4-45e5-88ed-c6d9a95a18af-config\") pod \"route-controller-manager-598f9554f5-fh5l2\" (UID: \"14683c90-aea4-45e5-88ed-c6d9a95a18af\") " pod="openshift-route-controller-manager/route-controller-manager-598f9554f5-fh5l2" Feb 27 16:11:26 crc kubenswrapper[4830]: I0227 16:11:26.962834 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/14683c90-aea4-45e5-88ed-c6d9a95a18af-client-ca\") pod \"route-controller-manager-598f9554f5-fh5l2\" (UID: \"14683c90-aea4-45e5-88ed-c6d9a95a18af\") " pod="openshift-route-controller-manager/route-controller-manager-598f9554f5-fh5l2" Feb 27 16:11:26 crc kubenswrapper[4830]: I0227 16:11:26.962895 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/14683c90-aea4-45e5-88ed-c6d9a95a18af-serving-cert\") pod \"route-controller-manager-598f9554f5-fh5l2\" (UID: \"14683c90-aea4-45e5-88ed-c6d9a95a18af\") " pod="openshift-route-controller-manager/route-controller-manager-598f9554f5-fh5l2" Feb 27 16:11:26 crc kubenswrapper[4830]: I0227 16:11:26.962932 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sf9qn\" (UniqueName: \"kubernetes.io/projected/14683c90-aea4-45e5-88ed-c6d9a95a18af-kube-api-access-sf9qn\") pod \"route-controller-manager-598f9554f5-fh5l2\" (UID: \"14683c90-aea4-45e5-88ed-c6d9a95a18af\") " pod="openshift-route-controller-manager/route-controller-manager-598f9554f5-fh5l2" Feb 27 16:11:26 crc kubenswrapper[4830]: I0227 16:11:26.964581 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/14683c90-aea4-45e5-88ed-c6d9a95a18af-client-ca\") pod \"route-controller-manager-598f9554f5-fh5l2\" (UID: \"14683c90-aea4-45e5-88ed-c6d9a95a18af\") " 
pod="openshift-route-controller-manager/route-controller-manager-598f9554f5-fh5l2" Feb 27 16:11:26 crc kubenswrapper[4830]: I0227 16:11:26.969683 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/14683c90-aea4-45e5-88ed-c6d9a95a18af-serving-cert\") pod \"route-controller-manager-598f9554f5-fh5l2\" (UID: \"14683c90-aea4-45e5-88ed-c6d9a95a18af\") " pod="openshift-route-controller-manager/route-controller-manager-598f9554f5-fh5l2" Feb 27 16:11:26 crc kubenswrapper[4830]: I0227 16:11:26.981881 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14683c90-aea4-45e5-88ed-c6d9a95a18af-config\") pod \"route-controller-manager-598f9554f5-fh5l2\" (UID: \"14683c90-aea4-45e5-88ed-c6d9a95a18af\") " pod="openshift-route-controller-manager/route-controller-manager-598f9554f5-fh5l2" Feb 27 16:11:27 crc kubenswrapper[4830]: I0227 16:11:27.001252 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sf9qn\" (UniqueName: \"kubernetes.io/projected/14683c90-aea4-45e5-88ed-c6d9a95a18af-kube-api-access-sf9qn\") pod \"route-controller-manager-598f9554f5-fh5l2\" (UID: \"14683c90-aea4-45e5-88ed-c6d9a95a18af\") " pod="openshift-route-controller-manager/route-controller-manager-598f9554f5-fh5l2" Feb 27 16:11:27 crc kubenswrapper[4830]: I0227 16:11:27.147005 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-598f9554f5-fh5l2" Feb 27 16:11:27 crc kubenswrapper[4830]: I0227 16:11:27.327737 4830 generic.go:334] "Generic (PLEG): container finished" podID="1eb064bc-39af-405a-bdbf-665e31fa07c3" containerID="2752d42115a6a9ee8f1db79008a40907b77e6730aee724c7ce880c7ef63ed522" exitCode=0 Feb 27 16:11:27 crc kubenswrapper[4830]: I0227 16:11:27.327974 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536810-bc446" event={"ID":"1eb064bc-39af-405a-bdbf-665e31fa07c3","Type":"ContainerDied","Data":"2752d42115a6a9ee8f1db79008a40907b77e6730aee724c7ce880c7ef63ed522"} Feb 27 16:11:27 crc kubenswrapper[4830]: I0227 16:11:27.330700 4830 generic.go:334] "Generic (PLEG): container finished" podID="16475e02-4dc7-4adf-954a-00721032f157" containerID="fb5b33e770c48586ce60381a606b79fedbfab466bb8c24fefcaa20b68ef91b23" exitCode=0 Feb 27 16:11:27 crc kubenswrapper[4830]: I0227 16:11:27.330767 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-b9f669b87-b29pg" Feb 27 16:11:27 crc kubenswrapper[4830]: I0227 16:11:27.330805 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-b9f669b87-b29pg" event={"ID":"16475e02-4dc7-4adf-954a-00721032f157","Type":"ContainerDied","Data":"fb5b33e770c48586ce60381a606b79fedbfab466bb8c24fefcaa20b68ef91b23"} Feb 27 16:11:27 crc kubenswrapper[4830]: I0227 16:11:27.330870 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-b9f669b87-b29pg" event={"ID":"16475e02-4dc7-4adf-954a-00721032f157","Type":"ContainerDied","Data":"e24d9eecfb269356f01390a349aad8f73b3bcb92fbdfad86a3511b341f639688"} Feb 27 16:11:27 crc kubenswrapper[4830]: I0227 16:11:27.330917 4830 scope.go:117] "RemoveContainer" containerID="fb5b33e770c48586ce60381a606b79fedbfab466bb8c24fefcaa20b68ef91b23" Feb 27 16:11:27 crc kubenswrapper[4830]: I0227 16:11:27.333699 4830 generic.go:334] "Generic (PLEG): container finished" podID="7b610c66-1e54-490d-bf39-27add37574a4" containerID="41c2e45eb40b121ab381606231f12f245dda230eee99489a07c12e7d4aa68224" exitCode=0 Feb 27 16:11:27 crc kubenswrapper[4830]: I0227 16:11:27.333824 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"7b610c66-1e54-490d-bf39-27add37574a4","Type":"ContainerDied","Data":"41c2e45eb40b121ab381606231f12f245dda230eee99489a07c12e7d4aa68224"} Feb 27 16:11:27 crc kubenswrapper[4830]: I0227 16:11:27.335698 4830 generic.go:334] "Generic (PLEG): container finished" podID="abe9afe9-de5f-4b67-a8f3-aeae379314bf" containerID="37c10a33cf4fa3a826072376f142dab22e5ac2dcb89037184defb8dafcc9756e" exitCode=0 Feb 27 16:11:27 crc kubenswrapper[4830]: I0227 16:11:27.335824 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6c87b6b5b9-9pj4q" Feb 27 16:11:27 crc kubenswrapper[4830]: I0227 16:11:27.336590 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6c87b6b5b9-9pj4q" event={"ID":"abe9afe9-de5f-4b67-a8f3-aeae379314bf","Type":"ContainerDied","Data":"37c10a33cf4fa3a826072376f142dab22e5ac2dcb89037184defb8dafcc9756e"} Feb 27 16:11:27 crc kubenswrapper[4830]: I0227 16:11:27.336627 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6c87b6b5b9-9pj4q" event={"ID":"abe9afe9-de5f-4b67-a8f3-aeae379314bf","Type":"ContainerDied","Data":"61f2d1e9789621ffb16b99218991602ca105d10e03e29b61232bc5d858de9472"} Feb 27 16:11:27 crc kubenswrapper[4830]: I0227 16:11:27.339425 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"d3e80191-de07-41aa-b0d7-69b826f5378b","Type":"ContainerStarted","Data":"f23a9800b7f520d81ac1102678715aa9664e7d9924714bc31ef74545233594f0"} Feb 27 16:11:27 crc kubenswrapper[4830]: I0227 16:11:27.343864 4830 patch_prober.go:28] interesting pod/downloads-7954f5f757-4dhxq container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Feb 27 16:11:27 crc kubenswrapper[4830]: I0227 16:11:27.343994 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-4dhxq" podUID="1f30f03f-511a-4a29-beae-e3d6971a8c9e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Feb 27 16:11:27 crc kubenswrapper[4830]: I0227 16:11:27.376693 4830 scope.go:117] "RemoveContainer" 
containerID="fb5b33e770c48586ce60381a606b79fedbfab466bb8c24fefcaa20b68ef91b23" Feb 27 16:11:27 crc kubenswrapper[4830]: E0227 16:11:27.377137 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fb5b33e770c48586ce60381a606b79fedbfab466bb8c24fefcaa20b68ef91b23\": container with ID starting with fb5b33e770c48586ce60381a606b79fedbfab466bb8c24fefcaa20b68ef91b23 not found: ID does not exist" containerID="fb5b33e770c48586ce60381a606b79fedbfab466bb8c24fefcaa20b68ef91b23" Feb 27 16:11:27 crc kubenswrapper[4830]: I0227 16:11:27.377181 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb5b33e770c48586ce60381a606b79fedbfab466bb8c24fefcaa20b68ef91b23"} err="failed to get container status \"fb5b33e770c48586ce60381a606b79fedbfab466bb8c24fefcaa20b68ef91b23\": rpc error: code = NotFound desc = could not find container \"fb5b33e770c48586ce60381a606b79fedbfab466bb8c24fefcaa20b68ef91b23\": container with ID starting with fb5b33e770c48586ce60381a606b79fedbfab466bb8c24fefcaa20b68ef91b23 not found: ID does not exist" Feb 27 16:11:27 crc kubenswrapper[4830]: I0227 16:11:27.377203 4830 scope.go:117] "RemoveContainer" containerID="37c10a33cf4fa3a826072376f142dab22e5ac2dcb89037184defb8dafcc9756e" Feb 27 16:11:27 crc kubenswrapper[4830]: I0227 16:11:27.383322 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6c87b6b5b9-9pj4q"] Feb 27 16:11:27 crc kubenswrapper[4830]: I0227 16:11:27.396136 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6c87b6b5b9-9pj4q"] Feb 27 16:11:27 crc kubenswrapper[4830]: I0227 16:11:27.403330 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=29.403309633 podStartE2EDuration="29.403309633s" 
podCreationTimestamp="2026-02-27 16:10:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:11:27.388392981 +0000 UTC m=+283.477665474" watchObservedRunningTime="2026-02-27 16:11:27.403309633 +0000 UTC m=+283.492582106" Feb 27 16:11:27 crc kubenswrapper[4830]: I0227 16:11:27.417165 4830 scope.go:117] "RemoveContainer" containerID="37c10a33cf4fa3a826072376f142dab22e5ac2dcb89037184defb8dafcc9756e" Feb 27 16:11:27 crc kubenswrapper[4830]: E0227 16:11:27.418869 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"37c10a33cf4fa3a826072376f142dab22e5ac2dcb89037184defb8dafcc9756e\": container with ID starting with 37c10a33cf4fa3a826072376f142dab22e5ac2dcb89037184defb8dafcc9756e not found: ID does not exist" containerID="37c10a33cf4fa3a826072376f142dab22e5ac2dcb89037184defb8dafcc9756e" Feb 27 16:11:27 crc kubenswrapper[4830]: I0227 16:11:27.418852 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-598f9554f5-fh5l2"] Feb 27 16:11:27 crc kubenswrapper[4830]: I0227 16:11:27.418912 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"37c10a33cf4fa3a826072376f142dab22e5ac2dcb89037184defb8dafcc9756e"} err="failed to get container status \"37c10a33cf4fa3a826072376f142dab22e5ac2dcb89037184defb8dafcc9756e\": rpc error: code = NotFound desc = could not find container \"37c10a33cf4fa3a826072376f142dab22e5ac2dcb89037184defb8dafcc9756e\": container with ID starting with 37c10a33cf4fa3a826072376f142dab22e5ac2dcb89037184defb8dafcc9756e not found: ID does not exist" Feb 27 16:11:27 crc kubenswrapper[4830]: I0227 16:11:27.422772 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-b9f669b87-b29pg"] Feb 27 16:11:27 crc kubenswrapper[4830]: I0227 
16:11:27.426708 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-b9f669b87-b29pg"] Feb 27 16:11:27 crc kubenswrapper[4830]: I0227 16:11:27.464958 4830 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-24 05:54:36 +0000 UTC, rotation deadline is 2026-11-24 23:52:11.965594183 +0000 UTC Feb 27 16:11:27 crc kubenswrapper[4830]: I0227 16:11:27.465002 4830 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6487h40m44.500596211s for next certificate rotation Feb 27 16:11:28 crc kubenswrapper[4830]: I0227 16:11:28.351800 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-598f9554f5-fh5l2" event={"ID":"14683c90-aea4-45e5-88ed-c6d9a95a18af","Type":"ContainerStarted","Data":"7dd62fcef1330cf70f2e5d61e4d6bc0bfcc0001b3e319b8a8fde736684601040"} Feb 27 16:11:28 crc kubenswrapper[4830]: I0227 16:11:28.352208 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-598f9554f5-fh5l2" event={"ID":"14683c90-aea4-45e5-88ed-c6d9a95a18af","Type":"ContainerStarted","Data":"da0121d3002c20c163ac44ec10bb683868afdeb616288dbbfb502a0017e01d85"} Feb 27 16:11:28 crc kubenswrapper[4830]: I0227 16:11:28.352229 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-598f9554f5-fh5l2" Feb 27 16:11:28 crc kubenswrapper[4830]: I0227 16:11:28.360327 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-598f9554f5-fh5l2" Feb 27 16:11:28 crc kubenswrapper[4830]: I0227 16:11:28.381623 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-598f9554f5-fh5l2" podStartSLOduration=29.38159643 
podStartE2EDuration="29.38159643s" podCreationTimestamp="2026-02-27 16:10:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:11:28.373332374 +0000 UTC m=+284.462604867" watchObservedRunningTime="2026-02-27 16:11:28.38159643 +0000 UTC m=+284.470868943" Feb 27 16:11:28 crc kubenswrapper[4830]: I0227 16:11:28.465832 4830 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-24 05:54:36 +0000 UTC, rotation deadline is 2027-01-13 13:04:59.586514754 +0000 UTC Feb 27 16:11:28 crc kubenswrapper[4830]: I0227 16:11:28.465879 4830 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 7676h53m31.120638903s for next certificate rotation Feb 27 16:11:28 crc kubenswrapper[4830]: I0227 16:11:28.639003 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536810-bc446" Feb 27 16:11:28 crc kubenswrapper[4830]: I0227 16:11:28.643046 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 27 16:11:28 crc kubenswrapper[4830]: I0227 16:11:28.769601 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16475e02-4dc7-4adf-954a-00721032f157" path="/var/lib/kubelet/pods/16475e02-4dc7-4adf-954a-00721032f157/volumes" Feb 27 16:11:28 crc kubenswrapper[4830]: I0227 16:11:28.770628 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="abe9afe9-de5f-4b67-a8f3-aeae379314bf" path="/var/lib/kubelet/pods/abe9afe9-de5f-4b67-a8f3-aeae379314bf/volumes" Feb 27 16:11:28 crc kubenswrapper[4830]: I0227 16:11:28.796794 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v6nrn\" (UniqueName: \"kubernetes.io/projected/1eb064bc-39af-405a-bdbf-665e31fa07c3-kube-api-access-v6nrn\") pod \"1eb064bc-39af-405a-bdbf-665e31fa07c3\" (UID: \"1eb064bc-39af-405a-bdbf-665e31fa07c3\") " Feb 27 16:11:28 crc kubenswrapper[4830]: I0227 16:11:28.797077 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7b610c66-1e54-490d-bf39-27add37574a4-kubelet-dir\") pod \"7b610c66-1e54-490d-bf39-27add37574a4\" (UID: \"7b610c66-1e54-490d-bf39-27add37574a4\") " Feb 27 16:11:28 crc kubenswrapper[4830]: I0227 16:11:28.797181 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7b610c66-1e54-490d-bf39-27add37574a4-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "7b610c66-1e54-490d-bf39-27add37574a4" (UID: "7b610c66-1e54-490d-bf39-27add37574a4"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 16:11:28 crc kubenswrapper[4830]: I0227 16:11:28.797509 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7b610c66-1e54-490d-bf39-27add37574a4-kube-api-access\") pod \"7b610c66-1e54-490d-bf39-27add37574a4\" (UID: \"7b610c66-1e54-490d-bf39-27add37574a4\") " Feb 27 16:11:28 crc kubenswrapper[4830]: I0227 16:11:28.798141 4830 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7b610c66-1e54-490d-bf39-27add37574a4-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 27 16:11:28 crc kubenswrapper[4830]: I0227 16:11:28.805017 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b610c66-1e54-490d-bf39-27add37574a4-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "7b610c66-1e54-490d-bf39-27add37574a4" (UID: "7b610c66-1e54-490d-bf39-27add37574a4"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:11:28 crc kubenswrapper[4830]: I0227 16:11:28.819306 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1eb064bc-39af-405a-bdbf-665e31fa07c3-kube-api-access-v6nrn" (OuterVolumeSpecName: "kube-api-access-v6nrn") pod "1eb064bc-39af-405a-bdbf-665e31fa07c3" (UID: "1eb064bc-39af-405a-bdbf-665e31fa07c3"). InnerVolumeSpecName "kube-api-access-v6nrn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:11:28 crc kubenswrapper[4830]: I0227 16:11:28.879592 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7b6c7769d6-k4cdj"] Feb 27 16:11:28 crc kubenswrapper[4830]: E0227 16:11:28.879853 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1eb064bc-39af-405a-bdbf-665e31fa07c3" containerName="oc" Feb 27 16:11:28 crc kubenswrapper[4830]: I0227 16:11:28.879869 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="1eb064bc-39af-405a-bdbf-665e31fa07c3" containerName="oc" Feb 27 16:11:28 crc kubenswrapper[4830]: E0227 16:11:28.879886 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b610c66-1e54-490d-bf39-27add37574a4" containerName="pruner" Feb 27 16:11:28 crc kubenswrapper[4830]: I0227 16:11:28.879894 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b610c66-1e54-490d-bf39-27add37574a4" containerName="pruner" Feb 27 16:11:28 crc kubenswrapper[4830]: I0227 16:11:28.880070 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b610c66-1e54-490d-bf39-27add37574a4" containerName="pruner" Feb 27 16:11:28 crc kubenswrapper[4830]: I0227 16:11:28.880083 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="1eb064bc-39af-405a-bdbf-665e31fa07c3" containerName="oc" Feb 27 16:11:28 crc kubenswrapper[4830]: I0227 16:11:28.880522 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7b6c7769d6-k4cdj" Feb 27 16:11:28 crc kubenswrapper[4830]: I0227 16:11:28.884698 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 27 16:11:28 crc kubenswrapper[4830]: I0227 16:11:28.885181 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 27 16:11:28 crc kubenswrapper[4830]: I0227 16:11:28.885433 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 27 16:11:28 crc kubenswrapper[4830]: I0227 16:11:28.885512 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 27 16:11:28 crc kubenswrapper[4830]: I0227 16:11:28.885727 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 27 16:11:28 crc kubenswrapper[4830]: I0227 16:11:28.886146 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 27 16:11:28 crc kubenswrapper[4830]: I0227 16:11:28.894300 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 27 16:11:28 crc kubenswrapper[4830]: I0227 16:11:28.899672 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5c8bb8b2-5d49-4500-910d-b8f48097bbcc-serving-cert\") pod \"controller-manager-7b6c7769d6-k4cdj\" (UID: \"5c8bb8b2-5d49-4500-910d-b8f48097bbcc\") " pod="openshift-controller-manager/controller-manager-7b6c7769d6-k4cdj" Feb 27 16:11:28 crc kubenswrapper[4830]: I0227 16:11:28.900062 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"client-ca\" (UniqueName: \"kubernetes.io/configmap/5c8bb8b2-5d49-4500-910d-b8f48097bbcc-client-ca\") pod \"controller-manager-7b6c7769d6-k4cdj\" (UID: \"5c8bb8b2-5d49-4500-910d-b8f48097bbcc\") " pod="openshift-controller-manager/controller-manager-7b6c7769d6-k4cdj" Feb 27 16:11:28 crc kubenswrapper[4830]: I0227 16:11:28.900414 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5c8bb8b2-5d49-4500-910d-b8f48097bbcc-proxy-ca-bundles\") pod \"controller-manager-7b6c7769d6-k4cdj\" (UID: \"5c8bb8b2-5d49-4500-910d-b8f48097bbcc\") " pod="openshift-controller-manager/controller-manager-7b6c7769d6-k4cdj" Feb 27 16:11:28 crc kubenswrapper[4830]: I0227 16:11:28.900591 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brwvv\" (UniqueName: \"kubernetes.io/projected/5c8bb8b2-5d49-4500-910d-b8f48097bbcc-kube-api-access-brwvv\") pod \"controller-manager-7b6c7769d6-k4cdj\" (UID: \"5c8bb8b2-5d49-4500-910d-b8f48097bbcc\") " pod="openshift-controller-manager/controller-manager-7b6c7769d6-k4cdj" Feb 27 16:11:28 crc kubenswrapper[4830]: I0227 16:11:28.900926 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c8bb8b2-5d49-4500-910d-b8f48097bbcc-config\") pod \"controller-manager-7b6c7769d6-k4cdj\" (UID: \"5c8bb8b2-5d49-4500-910d-b8f48097bbcc\") " pod="openshift-controller-manager/controller-manager-7b6c7769d6-k4cdj" Feb 27 16:11:28 crc kubenswrapper[4830]: I0227 16:11:28.901215 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7b610c66-1e54-490d-bf39-27add37574a4-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 27 16:11:28 crc kubenswrapper[4830]: I0227 16:11:28.901368 4830 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-v6nrn\" (UniqueName: \"kubernetes.io/projected/1eb064bc-39af-405a-bdbf-665e31fa07c3-kube-api-access-v6nrn\") on node \"crc\" DevicePath \"\"" Feb 27 16:11:28 crc kubenswrapper[4830]: I0227 16:11:28.901841 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7b6c7769d6-k4cdj"] Feb 27 16:11:29 crc kubenswrapper[4830]: I0227 16:11:29.002025 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5c8bb8b2-5d49-4500-910d-b8f48097bbcc-serving-cert\") pod \"controller-manager-7b6c7769d6-k4cdj\" (UID: \"5c8bb8b2-5d49-4500-910d-b8f48097bbcc\") " pod="openshift-controller-manager/controller-manager-7b6c7769d6-k4cdj" Feb 27 16:11:29 crc kubenswrapper[4830]: I0227 16:11:29.002333 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5c8bb8b2-5d49-4500-910d-b8f48097bbcc-client-ca\") pod \"controller-manager-7b6c7769d6-k4cdj\" (UID: \"5c8bb8b2-5d49-4500-910d-b8f48097bbcc\") " pod="openshift-controller-manager/controller-manager-7b6c7769d6-k4cdj" Feb 27 16:11:29 crc kubenswrapper[4830]: I0227 16:11:29.002538 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5c8bb8b2-5d49-4500-910d-b8f48097bbcc-proxy-ca-bundles\") pod \"controller-manager-7b6c7769d6-k4cdj\" (UID: \"5c8bb8b2-5d49-4500-910d-b8f48097bbcc\") " pod="openshift-controller-manager/controller-manager-7b6c7769d6-k4cdj" Feb 27 16:11:29 crc kubenswrapper[4830]: I0227 16:11:29.002843 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-brwvv\" (UniqueName: \"kubernetes.io/projected/5c8bb8b2-5d49-4500-910d-b8f48097bbcc-kube-api-access-brwvv\") pod \"controller-manager-7b6c7769d6-k4cdj\" (UID: \"5c8bb8b2-5d49-4500-910d-b8f48097bbcc\") " 
pod="openshift-controller-manager/controller-manager-7b6c7769d6-k4cdj" Feb 27 16:11:29 crc kubenswrapper[4830]: I0227 16:11:29.003105 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c8bb8b2-5d49-4500-910d-b8f48097bbcc-config\") pod \"controller-manager-7b6c7769d6-k4cdj\" (UID: \"5c8bb8b2-5d49-4500-910d-b8f48097bbcc\") " pod="openshift-controller-manager/controller-manager-7b6c7769d6-k4cdj" Feb 27 16:11:29 crc kubenswrapper[4830]: I0227 16:11:29.003552 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5c8bb8b2-5d49-4500-910d-b8f48097bbcc-client-ca\") pod \"controller-manager-7b6c7769d6-k4cdj\" (UID: \"5c8bb8b2-5d49-4500-910d-b8f48097bbcc\") " pod="openshift-controller-manager/controller-manager-7b6c7769d6-k4cdj" Feb 27 16:11:29 crc kubenswrapper[4830]: I0227 16:11:29.004610 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5c8bb8b2-5d49-4500-910d-b8f48097bbcc-proxy-ca-bundles\") pod \"controller-manager-7b6c7769d6-k4cdj\" (UID: \"5c8bb8b2-5d49-4500-910d-b8f48097bbcc\") " pod="openshift-controller-manager/controller-manager-7b6c7769d6-k4cdj" Feb 27 16:11:29 crc kubenswrapper[4830]: I0227 16:11:29.004833 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c8bb8b2-5d49-4500-910d-b8f48097bbcc-config\") pod \"controller-manager-7b6c7769d6-k4cdj\" (UID: \"5c8bb8b2-5d49-4500-910d-b8f48097bbcc\") " pod="openshift-controller-manager/controller-manager-7b6c7769d6-k4cdj" Feb 27 16:11:29 crc kubenswrapper[4830]: I0227 16:11:29.017969 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5c8bb8b2-5d49-4500-910d-b8f48097bbcc-serving-cert\") pod \"controller-manager-7b6c7769d6-k4cdj\" (UID: 
\"5c8bb8b2-5d49-4500-910d-b8f48097bbcc\") " pod="openshift-controller-manager/controller-manager-7b6c7769d6-k4cdj" Feb 27 16:11:29 crc kubenswrapper[4830]: I0227 16:11:29.032147 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-brwvv\" (UniqueName: \"kubernetes.io/projected/5c8bb8b2-5d49-4500-910d-b8f48097bbcc-kube-api-access-brwvv\") pod \"controller-manager-7b6c7769d6-k4cdj\" (UID: \"5c8bb8b2-5d49-4500-910d-b8f48097bbcc\") " pod="openshift-controller-manager/controller-manager-7b6c7769d6-k4cdj" Feb 27 16:11:29 crc kubenswrapper[4830]: I0227 16:11:29.208313 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7b6c7769d6-k4cdj" Feb 27 16:11:29 crc kubenswrapper[4830]: I0227 16:11:29.368524 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536810-bc446" event={"ID":"1eb064bc-39af-405a-bdbf-665e31fa07c3","Type":"ContainerDied","Data":"79fe0767947c910e07f906c0180675a5a7751edd0dcacd4f0b6e5af87fcc945b"} Feb 27 16:11:29 crc kubenswrapper[4830]: I0227 16:11:29.368579 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="79fe0767947c910e07f906c0180675a5a7751edd0dcacd4f0b6e5af87fcc945b" Feb 27 16:11:29 crc kubenswrapper[4830]: I0227 16:11:29.368605 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536810-bc446" Feb 27 16:11:29 crc kubenswrapper[4830]: I0227 16:11:29.370622 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"7b610c66-1e54-490d-bf39-27add37574a4","Type":"ContainerDied","Data":"da30552f22cffa62c82a0ce5a780f184fa7b869ce67db24f0b6f14743a879a81"} Feb 27 16:11:29 crc kubenswrapper[4830]: I0227 16:11:29.370674 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="da30552f22cffa62c82a0ce5a780f184fa7b869ce67db24f0b6f14743a879a81" Feb 27 16:11:29 crc kubenswrapper[4830]: I0227 16:11:29.370634 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 27 16:11:29 crc kubenswrapper[4830]: I0227 16:11:29.487881 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7b6c7769d6-k4cdj"] Feb 27 16:11:30 crc kubenswrapper[4830]: I0227 16:11:30.378789 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7b6c7769d6-k4cdj" event={"ID":"5c8bb8b2-5d49-4500-910d-b8f48097bbcc","Type":"ContainerStarted","Data":"415a686044dad06554046a42bbf873b6823df19e136e5a84e82ea0f3429bf340"} Feb 27 16:11:30 crc kubenswrapper[4830]: I0227 16:11:30.378832 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7b6c7769d6-k4cdj" event={"ID":"5c8bb8b2-5d49-4500-910d-b8f48097bbcc","Type":"ContainerStarted","Data":"eb5e834e9e83295676a10064665af9cf88902dbb9b6c7398488d1a4ae63bc22f"} Feb 27 16:11:30 crc kubenswrapper[4830]: I0227 16:11:30.379136 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7b6c7769d6-k4cdj" Feb 27 16:11:30 crc kubenswrapper[4830]: I0227 16:11:30.384295 4830 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7b6c7769d6-k4cdj" Feb 27 16:11:30 crc kubenswrapper[4830]: I0227 16:11:30.417835 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7b6c7769d6-k4cdj" podStartSLOduration=32.417818438 podStartE2EDuration="32.417818438s" podCreationTimestamp="2026-02-27 16:10:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:11:30.402156467 +0000 UTC m=+286.491428940" watchObservedRunningTime="2026-02-27 16:11:30.417818438 +0000 UTC m=+286.507090911" Feb 27 16:11:30 crc kubenswrapper[4830]: I0227 16:11:30.823291 4830 patch_prober.go:28] interesting pod/downloads-7954f5f757-4dhxq container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Feb 27 16:11:30 crc kubenswrapper[4830]: I0227 16:11:30.823843 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-4dhxq" podUID="1f30f03f-511a-4a29-beae-e3d6971a8c9e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Feb 27 16:11:30 crc kubenswrapper[4830]: I0227 16:11:30.823534 4830 patch_prober.go:28] interesting pod/downloads-7954f5f757-4dhxq container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Feb 27 16:11:30 crc kubenswrapper[4830]: I0227 16:11:30.823933 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-4dhxq" podUID="1f30f03f-511a-4a29-beae-e3d6971a8c9e" containerName="download-server" probeResult="failure" 
output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Feb 27 16:11:40 crc kubenswrapper[4830]: I0227 16:11:40.445736 4830 generic.go:334] "Generic (PLEG): container finished" podID="789ee180-dd8e-4cb2-884e-beea08667c53" containerID="e87c5c09c93b3a772e9b716d5f7da922b173132f3fc61fb11a463475894474b0" exitCode=0 Feb 27 16:11:40 crc kubenswrapper[4830]: I0227 16:11:40.446077 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dnpxp" event={"ID":"789ee180-dd8e-4cb2-884e-beea08667c53","Type":"ContainerDied","Data":"e87c5c09c93b3a772e9b716d5f7da922b173132f3fc61fb11a463475894474b0"} Feb 27 16:11:40 crc kubenswrapper[4830]: I0227 16:11:40.450986 4830 generic.go:334] "Generic (PLEG): container finished" podID="8b33138a-5b9d-4af8-b13d-4db4c2613983" containerID="3d0927005c6ee0d40ef4812464f5f371dda4630446091c790bf59a1173396d25" exitCode=0 Feb 27 16:11:40 crc kubenswrapper[4830]: I0227 16:11:40.451024 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-966h2" event={"ID":"8b33138a-5b9d-4af8-b13d-4db4c2613983","Type":"ContainerDied","Data":"3d0927005c6ee0d40ef4812464f5f371dda4630446091c790bf59a1173396d25"} Feb 27 16:11:40 crc kubenswrapper[4830]: I0227 16:11:40.843235 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-4dhxq" Feb 27 16:11:58 crc kubenswrapper[4830]: I0227 16:11:58.907742 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7b6c7769d6-k4cdj"] Feb 27 16:11:58 crc kubenswrapper[4830]: I0227 16:11:58.908225 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-7b6c7769d6-k4cdj" podUID="5c8bb8b2-5d49-4500-910d-b8f48097bbcc" containerName="controller-manager" 
containerID="cri-o://415a686044dad06554046a42bbf873b6823df19e136e5a84e82ea0f3429bf340" gracePeriod=30 Feb 27 16:11:59 crc kubenswrapper[4830]: I0227 16:11:59.003679 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-598f9554f5-fh5l2"] Feb 27 16:11:59 crc kubenswrapper[4830]: I0227 16:11:59.003874 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-598f9554f5-fh5l2" podUID="14683c90-aea4-45e5-88ed-c6d9a95a18af" containerName="route-controller-manager" containerID="cri-o://7dd62fcef1330cf70f2e5d61e4d6bc0bfcc0001b3e319b8a8fde736684601040" gracePeriod=30 Feb 27 16:11:59 crc kubenswrapper[4830]: I0227 16:11:59.209373 4830 patch_prober.go:28] interesting pod/controller-manager-7b6c7769d6-k4cdj container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.63:8443/healthz\": dial tcp 10.217.0.63:8443: connect: connection refused" start-of-body= Feb 27 16:11:59 crc kubenswrapper[4830]: I0227 16:11:59.209450 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-7b6c7769d6-k4cdj" podUID="5c8bb8b2-5d49-4500-910d-b8f48097bbcc" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.63:8443/healthz\": dial tcp 10.217.0.63:8443: connect: connection refused" Feb 27 16:11:59 crc kubenswrapper[4830]: I0227 16:11:59.591218 4830 generic.go:334] "Generic (PLEG): container finished" podID="f2579681-6b81-4b58-9d2c-c26b123be8ec" containerID="5006b9250f9894eb42bca91b07eebb8aab60e723730dfc9f81383c40b15104d1" exitCode=0 Feb 27 16:11:59 crc kubenswrapper[4830]: I0227 16:11:59.591320 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k7l8d" 
event={"ID":"f2579681-6b81-4b58-9d2c-c26b123be8ec","Type":"ContainerDied","Data":"5006b9250f9894eb42bca91b07eebb8aab60e723730dfc9f81383c40b15104d1"} Feb 27 16:11:59 crc kubenswrapper[4830]: I0227 16:11:59.597873 4830 generic.go:334] "Generic (PLEG): container finished" podID="1c5e2cae-7890-48fb-ab76-7e53c52fd6ac" containerID="e339fc82a2d616e77fbf1f1320e48ce61fd5bb06ddd415acedd19281d147df0f" exitCode=0 Feb 27 16:11:59 crc kubenswrapper[4830]: I0227 16:11:59.597989 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s4bpk" event={"ID":"1c5e2cae-7890-48fb-ab76-7e53c52fd6ac","Type":"ContainerDied","Data":"e339fc82a2d616e77fbf1f1320e48ce61fd5bb06ddd415acedd19281d147df0f"} Feb 27 16:11:59 crc kubenswrapper[4830]: I0227 16:11:59.600527 4830 generic.go:334] "Generic (PLEG): container finished" podID="a7e1e0a3-a7d4-4508-b84e-6ba87fced6fc" containerID="8600cbda840369d0b64909468a7d15d1b52aef5711388bc2d83b0df75cfd43dc" exitCode=0 Feb 27 16:11:59 crc kubenswrapper[4830]: I0227 16:11:59.600579 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kkwcl" event={"ID":"a7e1e0a3-a7d4-4508-b84e-6ba87fced6fc","Type":"ContainerDied","Data":"8600cbda840369d0b64909468a7d15d1b52aef5711388bc2d83b0df75cfd43dc"} Feb 27 16:11:59 crc kubenswrapper[4830]: I0227 16:11:59.614735 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-966h2" event={"ID":"8b33138a-5b9d-4af8-b13d-4db4c2613983","Type":"ContainerStarted","Data":"425f05b409c5b9847f770836cb23fa92d243640eae8fc7ca0ac2121b3fb5332b"} Feb 27 16:11:59 crc kubenswrapper[4830]: I0227 16:11:59.617171 4830 generic.go:334] "Generic (PLEG): container finished" podID="728cab24-3fc3-4249-b37e-183d5676c191" containerID="f04e60e18187ba1d3282128f864582da59bd93aa71c25dae624bdb15480fcfa0" exitCode=0 Feb 27 16:11:59 crc kubenswrapper[4830]: I0227 16:11:59.617224 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-zwcdd" event={"ID":"728cab24-3fc3-4249-b37e-183d5676c191","Type":"ContainerDied","Data":"f04e60e18187ba1d3282128f864582da59bd93aa71c25dae624bdb15480fcfa0"} Feb 27 16:11:59 crc kubenswrapper[4830]: I0227 16:11:59.621688 4830 generic.go:334] "Generic (PLEG): container finished" podID="14683c90-aea4-45e5-88ed-c6d9a95a18af" containerID="7dd62fcef1330cf70f2e5d61e4d6bc0bfcc0001b3e319b8a8fde736684601040" exitCode=0 Feb 27 16:11:59 crc kubenswrapper[4830]: I0227 16:11:59.621734 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-598f9554f5-fh5l2" event={"ID":"14683c90-aea4-45e5-88ed-c6d9a95a18af","Type":"ContainerDied","Data":"7dd62fcef1330cf70f2e5d61e4d6bc0bfcc0001b3e319b8a8fde736684601040"} Feb 27 16:11:59 crc kubenswrapper[4830]: I0227 16:11:59.625827 4830 generic.go:334] "Generic (PLEG): container finished" podID="514ae4c6-322a-458e-a1e5-df6d6a47fc88" containerID="1274ee2697a94c60c1ceef4cc65ab3e5bb7f2453521c41698fdacd8ff1e99dc5" exitCode=0 Feb 27 16:11:59 crc kubenswrapper[4830]: I0227 16:11:59.625862 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s5z2n" event={"ID":"514ae4c6-322a-458e-a1e5-df6d6a47fc88","Type":"ContainerDied","Data":"1274ee2697a94c60c1ceef4cc65ab3e5bb7f2453521c41698fdacd8ff1e99dc5"} Feb 27 16:11:59 crc kubenswrapper[4830]: I0227 16:11:59.629847 4830 generic.go:334] "Generic (PLEG): container finished" podID="48011108-ee2c-4d3b-9f28-65cfc91b90ab" containerID="d7e48c6aa9dd849482268d80a315d75cf18dcf794580cf30768ac6ce0a1c2753" exitCode=0 Feb 27 16:11:59 crc kubenswrapper[4830]: I0227 16:11:59.629908 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tr5cj" event={"ID":"48011108-ee2c-4d3b-9f28-65cfc91b90ab","Type":"ContainerDied","Data":"d7e48c6aa9dd849482268d80a315d75cf18dcf794580cf30768ac6ce0a1c2753"} Feb 27 16:11:59 crc kubenswrapper[4830]: 
I0227 16:11:59.632481 4830 generic.go:334] "Generic (PLEG): container finished" podID="5c8bb8b2-5d49-4500-910d-b8f48097bbcc" containerID="415a686044dad06554046a42bbf873b6823df19e136e5a84e82ea0f3429bf340" exitCode=0
Feb 27 16:11:59 crc kubenswrapper[4830]: I0227 16:11:59.632521 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7b6c7769d6-k4cdj" event={"ID":"5c8bb8b2-5d49-4500-910d-b8f48097bbcc","Type":"ContainerDied","Data":"415a686044dad06554046a42bbf873b6823df19e136e5a84e82ea0f3429bf340"}
Feb 27 16:11:59 crc kubenswrapper[4830]: I0227 16:11:59.635313 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dnpxp" event={"ID":"789ee180-dd8e-4cb2-884e-beea08667c53","Type":"ContainerStarted","Data":"921f9a872842fc060faffc6334f320e1f4a8ad67594a2cc96ae83a856abd0c29"}
Feb 27 16:11:59 crc kubenswrapper[4830]: I0227 16:11:59.685480 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-598f9554f5-fh5l2"
Feb 27 16:11:59 crc kubenswrapper[4830]: I0227 16:11:59.699329 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-dnpxp" podStartSLOduration=4.195552849 podStartE2EDuration="1m40.699317444s" podCreationTimestamp="2026-02-27 16:10:19 +0000 UTC" firstStartedPulling="2026-02-27 16:10:21.539767299 +0000 UTC m=+217.629039762" lastFinishedPulling="2026-02-27 16:11:58.043531864 +0000 UTC m=+314.132804357" observedRunningTime="2026-02-27 16:11:59.697751455 +0000 UTC m=+315.787023918" watchObservedRunningTime="2026-02-27 16:11:59.699317444 +0000 UTC m=+315.788589907"
Feb 27 16:11:59 crc kubenswrapper[4830]: I0227 16:11:59.717388 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-966h2" podStartSLOduration=4.134326938 podStartE2EDuration="1m40.717364515s" podCreationTimestamp="2026-02-27 16:10:19 +0000 UTC" firstStartedPulling="2026-02-27 16:10:21.533470447 +0000 UTC m=+217.622742910" lastFinishedPulling="2026-02-27 16:11:58.116507994 +0000 UTC m=+314.205780487" observedRunningTime="2026-02-27 16:11:59.714864562 +0000 UTC m=+315.804137025" watchObservedRunningTime="2026-02-27 16:11:59.717364515 +0000 UTC m=+315.806636988"
Feb 27 16:11:59 crc kubenswrapper[4830]: I0227 16:11:59.830530 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14683c90-aea4-45e5-88ed-c6d9a95a18af-config\") pod \"14683c90-aea4-45e5-88ed-c6d9a95a18af\" (UID: \"14683c90-aea4-45e5-88ed-c6d9a95a18af\") "
Feb 27 16:11:59 crc kubenswrapper[4830]: I0227 16:11:59.830575 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/14683c90-aea4-45e5-88ed-c6d9a95a18af-client-ca\") pod \"14683c90-aea4-45e5-88ed-c6d9a95a18af\" (UID: \"14683c90-aea4-45e5-88ed-c6d9a95a18af\") "
Feb 27 16:11:59 crc kubenswrapper[4830]: I0227 16:11:59.830601 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sf9qn\" (UniqueName: \"kubernetes.io/projected/14683c90-aea4-45e5-88ed-c6d9a95a18af-kube-api-access-sf9qn\") pod \"14683c90-aea4-45e5-88ed-c6d9a95a18af\" (UID: \"14683c90-aea4-45e5-88ed-c6d9a95a18af\") "
Feb 27 16:11:59 crc kubenswrapper[4830]: I0227 16:11:59.830696 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/14683c90-aea4-45e5-88ed-c6d9a95a18af-serving-cert\") pod \"14683c90-aea4-45e5-88ed-c6d9a95a18af\" (UID: \"14683c90-aea4-45e5-88ed-c6d9a95a18af\") "
Feb 27 16:11:59 crc kubenswrapper[4830]: I0227 16:11:59.831363 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14683c90-aea4-45e5-88ed-c6d9a95a18af-client-ca" (OuterVolumeSpecName: "client-ca") pod "14683c90-aea4-45e5-88ed-c6d9a95a18af" (UID: "14683c90-aea4-45e5-88ed-c6d9a95a18af"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 16:11:59 crc kubenswrapper[4830]: I0227 16:11:59.832163 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14683c90-aea4-45e5-88ed-c6d9a95a18af-config" (OuterVolumeSpecName: "config") pod "14683c90-aea4-45e5-88ed-c6d9a95a18af" (UID: "14683c90-aea4-45e5-88ed-c6d9a95a18af"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 16:11:59 crc kubenswrapper[4830]: I0227 16:11:59.848094 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14683c90-aea4-45e5-88ed-c6d9a95a18af-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "14683c90-aea4-45e5-88ed-c6d9a95a18af" (UID: "14683c90-aea4-45e5-88ed-c6d9a95a18af"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 16:11:59 crc kubenswrapper[4830]: I0227 16:11:59.854261 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14683c90-aea4-45e5-88ed-c6d9a95a18af-kube-api-access-sf9qn" (OuterVolumeSpecName: "kube-api-access-sf9qn") pod "14683c90-aea4-45e5-88ed-c6d9a95a18af" (UID: "14683c90-aea4-45e5-88ed-c6d9a95a18af"). InnerVolumeSpecName "kube-api-access-sf9qn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 16:11:59 crc kubenswrapper[4830]: I0227 16:11:59.932025 4830 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14683c90-aea4-45e5-88ed-c6d9a95a18af-config\") on node \"crc\" DevicePath \"\""
Feb 27 16:11:59 crc kubenswrapper[4830]: I0227 16:11:59.932352 4830 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/14683c90-aea4-45e5-88ed-c6d9a95a18af-client-ca\") on node \"crc\" DevicePath \"\""
Feb 27 16:11:59 crc kubenswrapper[4830]: I0227 16:11:59.932366 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sf9qn\" (UniqueName: \"kubernetes.io/projected/14683c90-aea4-45e5-88ed-c6d9a95a18af-kube-api-access-sf9qn\") on node \"crc\" DevicePath \"\""
Feb 27 16:11:59 crc kubenswrapper[4830]: I0227 16:11:59.932377 4830 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/14683c90-aea4-45e5-88ed-c6d9a95a18af-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 27 16:12:00 crc kubenswrapper[4830]: I0227 16:12:00.137188 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536812-7wkbt"]
Feb 27 16:12:00 crc kubenswrapper[4830]: E0227 16:12:00.137445 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14683c90-aea4-45e5-88ed-c6d9a95a18af" containerName="route-controller-manager"
Feb 27 16:12:00 crc kubenswrapper[4830]: I0227 16:12:00.137459 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="14683c90-aea4-45e5-88ed-c6d9a95a18af" containerName="route-controller-manager"
Feb 27 16:12:00 crc kubenswrapper[4830]: I0227 16:12:00.137590 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="14683c90-aea4-45e5-88ed-c6d9a95a18af" containerName="route-controller-manager"
Feb 27 16:12:00 crc kubenswrapper[4830]: I0227 16:12:00.138110 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536812-7wkbt"
Feb 27 16:12:00 crc kubenswrapper[4830]: I0227 16:12:00.140286 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lmbtm"
Feb 27 16:12:00 crc kubenswrapper[4830]: I0227 16:12:00.140418 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Feb 27 16:12:00 crc kubenswrapper[4830]: I0227 16:12:00.140533 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Feb 27 16:12:00 crc kubenswrapper[4830]: I0227 16:12:00.163404 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-966h2"
Feb 27 16:12:00 crc kubenswrapper[4830]: I0227 16:12:00.163442 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-966h2"
Feb 27 16:12:00 crc kubenswrapper[4830]: I0227 16:12:00.177634 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7b6c7769d6-k4cdj"
Feb 27 16:12:00 crc kubenswrapper[4830]: I0227 16:12:00.191423 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536812-7wkbt"]
Feb 27 16:12:00 crc kubenswrapper[4830]: I0227 16:12:00.236202 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5c8bb8b2-5d49-4500-910d-b8f48097bbcc-client-ca\") pod \"5c8bb8b2-5d49-4500-910d-b8f48097bbcc\" (UID: \"5c8bb8b2-5d49-4500-910d-b8f48097bbcc\") "
Feb 27 16:12:00 crc kubenswrapper[4830]: I0227 16:12:00.236269 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5c8bb8b2-5d49-4500-910d-b8f48097bbcc-serving-cert\") pod \"5c8bb8b2-5d49-4500-910d-b8f48097bbcc\" (UID: \"5c8bb8b2-5d49-4500-910d-b8f48097bbcc\") "
Feb 27 16:12:00 crc kubenswrapper[4830]: I0227 16:12:00.236359 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-brwvv\" (UniqueName: \"kubernetes.io/projected/5c8bb8b2-5d49-4500-910d-b8f48097bbcc-kube-api-access-brwvv\") pod \"5c8bb8b2-5d49-4500-910d-b8f48097bbcc\" (UID: \"5c8bb8b2-5d49-4500-910d-b8f48097bbcc\") "
Feb 27 16:12:00 crc kubenswrapper[4830]: I0227 16:12:00.236377 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5c8bb8b2-5d49-4500-910d-b8f48097bbcc-proxy-ca-bundles\") pod \"5c8bb8b2-5d49-4500-910d-b8f48097bbcc\" (UID: \"5c8bb8b2-5d49-4500-910d-b8f48097bbcc\") "
Feb 27 16:12:00 crc kubenswrapper[4830]: I0227 16:12:00.236402 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c8bb8b2-5d49-4500-910d-b8f48097bbcc-config\") pod \"5c8bb8b2-5d49-4500-910d-b8f48097bbcc\" (UID: \"5c8bb8b2-5d49-4500-910d-b8f48097bbcc\") "
Feb 27 16:12:00 crc kubenswrapper[4830]: I0227 16:12:00.236529 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6brj\" (UniqueName: \"kubernetes.io/projected/dce3358b-25c4-4fe9-a3fa-0a0be053e8f0-kube-api-access-b6brj\") pod \"auto-csr-approver-29536812-7wkbt\" (UID: \"dce3358b-25c4-4fe9-a3fa-0a0be053e8f0\") " pod="openshift-infra/auto-csr-approver-29536812-7wkbt"
Feb 27 16:12:00 crc kubenswrapper[4830]: I0227 16:12:00.237231 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c8bb8b2-5d49-4500-910d-b8f48097bbcc-client-ca" (OuterVolumeSpecName: "client-ca") pod "5c8bb8b2-5d49-4500-910d-b8f48097bbcc" (UID: "5c8bb8b2-5d49-4500-910d-b8f48097bbcc"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 16:12:00 crc kubenswrapper[4830]: I0227 16:12:00.237486 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c8bb8b2-5d49-4500-910d-b8f48097bbcc-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "5c8bb8b2-5d49-4500-910d-b8f48097bbcc" (UID: "5c8bb8b2-5d49-4500-910d-b8f48097bbcc"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 16:12:00 crc kubenswrapper[4830]: I0227 16:12:00.237832 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c8bb8b2-5d49-4500-910d-b8f48097bbcc-config" (OuterVolumeSpecName: "config") pod "5c8bb8b2-5d49-4500-910d-b8f48097bbcc" (UID: "5c8bb8b2-5d49-4500-910d-b8f48097bbcc"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 16:12:00 crc kubenswrapper[4830]: I0227 16:12:00.241085 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c8bb8b2-5d49-4500-910d-b8f48097bbcc-kube-api-access-brwvv" (OuterVolumeSpecName: "kube-api-access-brwvv") pod "5c8bb8b2-5d49-4500-910d-b8f48097bbcc" (UID: "5c8bb8b2-5d49-4500-910d-b8f48097bbcc"). InnerVolumeSpecName "kube-api-access-brwvv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 16:12:00 crc kubenswrapper[4830]: I0227 16:12:00.241184 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c8bb8b2-5d49-4500-910d-b8f48097bbcc-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5c8bb8b2-5d49-4500-910d-b8f48097bbcc" (UID: "5c8bb8b2-5d49-4500-910d-b8f48097bbcc"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 16:12:00 crc kubenswrapper[4830]: I0227 16:12:00.331593 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-dnpxp"
Feb 27 16:12:00 crc kubenswrapper[4830]: I0227 16:12:00.331646 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-dnpxp"
Feb 27 16:12:00 crc kubenswrapper[4830]: I0227 16:12:00.337826 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6brj\" (UniqueName: \"kubernetes.io/projected/dce3358b-25c4-4fe9-a3fa-0a0be053e8f0-kube-api-access-b6brj\") pod \"auto-csr-approver-29536812-7wkbt\" (UID: \"dce3358b-25c4-4fe9-a3fa-0a0be053e8f0\") " pod="openshift-infra/auto-csr-approver-29536812-7wkbt"
Feb 27 16:12:00 crc kubenswrapper[4830]: I0227 16:12:00.337907 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-brwvv\" (UniqueName: \"kubernetes.io/projected/5c8bb8b2-5d49-4500-910d-b8f48097bbcc-kube-api-access-brwvv\") on node \"crc\" DevicePath \"\""
Feb 27 16:12:00 crc kubenswrapper[4830]: I0227 16:12:00.337921 4830 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5c8bb8b2-5d49-4500-910d-b8f48097bbcc-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Feb 27 16:12:00 crc kubenswrapper[4830]: I0227 16:12:00.337929 4830 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c8bb8b2-5d49-4500-910d-b8f48097bbcc-config\") on node \"crc\" DevicePath \"\""
Feb 27 16:12:00 crc kubenswrapper[4830]: I0227 16:12:00.337938 4830 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5c8bb8b2-5d49-4500-910d-b8f48097bbcc-client-ca\") on node \"crc\" DevicePath \"\""
Feb 27 16:12:00 crc kubenswrapper[4830]: I0227 16:12:00.337968 4830 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5c8bb8b2-5d49-4500-910d-b8f48097bbcc-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 27 16:12:00 crc kubenswrapper[4830]: I0227 16:12:00.357501 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6brj\" (UniqueName: \"kubernetes.io/projected/dce3358b-25c4-4fe9-a3fa-0a0be053e8f0-kube-api-access-b6brj\") pod \"auto-csr-approver-29536812-7wkbt\" (UID: \"dce3358b-25c4-4fe9-a3fa-0a0be053e8f0\") " pod="openshift-infra/auto-csr-approver-29536812-7wkbt"
Feb 27 16:12:00 crc kubenswrapper[4830]: I0227 16:12:00.453361 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536812-7wkbt"
Feb 27 16:12:00 crc kubenswrapper[4830]: I0227 16:12:00.641649 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7b6c7769d6-k4cdj" event={"ID":"5c8bb8b2-5d49-4500-910d-b8f48097bbcc","Type":"ContainerDied","Data":"eb5e834e9e83295676a10064665af9cf88902dbb9b6c7398488d1a4ae63bc22f"}
Feb 27 16:12:00 crc kubenswrapper[4830]: I0227 16:12:00.641976 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7b6c7769d6-k4cdj"
Feb 27 16:12:00 crc kubenswrapper[4830]: I0227 16:12:00.641988 4830 scope.go:117] "RemoveContainer" containerID="415a686044dad06554046a42bbf873b6823df19e136e5a84e82ea0f3429bf340"
Feb 27 16:12:00 crc kubenswrapper[4830]: I0227 16:12:00.658591 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-598f9554f5-fh5l2" event={"ID":"14683c90-aea4-45e5-88ed-c6d9a95a18af","Type":"ContainerDied","Data":"da0121d3002c20c163ac44ec10bb683868afdeb616288dbbfb502a0017e01d85"}
Feb 27 16:12:00 crc kubenswrapper[4830]: I0227 16:12:00.658720 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-598f9554f5-fh5l2"
Feb 27 16:12:00 crc kubenswrapper[4830]: I0227 16:12:00.664849 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s5z2n" event={"ID":"514ae4c6-322a-458e-a1e5-df6d6a47fc88","Type":"ContainerStarted","Data":"e2341099010dc017a2b84fe2d9ea379fea8c45cce2448de08f9e21ea1ca4f046"}
Feb 27 16:12:00 crc kubenswrapper[4830]: I0227 16:12:00.691773 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k7l8d" event={"ID":"f2579681-6b81-4b58-9d2c-c26b123be8ec","Type":"ContainerStarted","Data":"bf8f7f00dabc83ed88321c54eb8ecc1093da98806c893dfc048a629d090d59ac"}
Feb 27 16:12:00 crc kubenswrapper[4830]: I0227 16:12:00.713866 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-s5z2n" podStartSLOduration=2.088239113 podStartE2EDuration="1m37.713849595s" podCreationTimestamp="2026-02-27 16:10:23 +0000 UTC" firstStartedPulling="2026-02-27 16:10:24.723496946 +0000 UTC m=+220.812769409" lastFinishedPulling="2026-02-27 16:12:00.349107428 +0000 UTC m=+316.438379891" observedRunningTime="2026-02-27 16:12:00.684283607 +0000 UTC m=+316.773556070" watchObservedRunningTime="2026-02-27 16:12:00.713849595 +0000 UTC m=+316.803122058"
Feb 27 16:12:00 crc kubenswrapper[4830]: I0227 16:12:00.716716 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7b6c7769d6-k4cdj"]
Feb 27 16:12:00 crc kubenswrapper[4830]: I0227 16:12:00.718821 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tr5cj" event={"ID":"48011108-ee2c-4d3b-9f28-65cfc91b90ab","Type":"ContainerStarted","Data":"98e33657c6848e8254bbea28a416aa7f0ca69f675bd06430a56e87cfef186663"}
Feb 27 16:12:00 crc kubenswrapper[4830]: I0227 16:12:00.720540 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7b6c7769d6-k4cdj"]
Feb 27 16:12:00 crc kubenswrapper[4830]: I0227 16:12:00.740715 4830 scope.go:117] "RemoveContainer" containerID="7dd62fcef1330cf70f2e5d61e4d6bc0bfcc0001b3e319b8a8fde736684601040"
Feb 27 16:12:00 crc kubenswrapper[4830]: I0227 16:12:00.744431 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-k7l8d" podStartSLOduration=3.151376322 podStartE2EDuration="1m41.744415927s" podCreationTimestamp="2026-02-27 16:10:19 +0000 UTC" firstStartedPulling="2026-02-27 16:10:21.550851306 +0000 UTC m=+217.640123769" lastFinishedPulling="2026-02-27 16:12:00.143890911 +0000 UTC m=+316.233163374" observedRunningTime="2026-02-27 16:12:00.741092894 +0000 UTC m=+316.830365377" watchObservedRunningTime="2026-02-27 16:12:00.744415927 +0000 UTC m=+316.833688400"
Feb 27 16:12:00 crc kubenswrapper[4830]: I0227 16:12:00.776805 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c8bb8b2-5d49-4500-910d-b8f48097bbcc" path="/var/lib/kubelet/pods/5c8bb8b2-5d49-4500-910d-b8f48097bbcc/volumes"
Feb 27 16:12:00 crc kubenswrapper[4830]: I0227 16:12:00.777197 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-598f9554f5-fh5l2"]
Feb 27 16:12:00 crc kubenswrapper[4830]: I0227 16:12:00.777223 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-598f9554f5-fh5l2"]
Feb 27 16:12:00 crc kubenswrapper[4830]: I0227 16:12:00.898171 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-tr5cj" podStartSLOduration=3.409443608 podStartE2EDuration="1m38.8981494s" podCreationTimestamp="2026-02-27 16:10:22 +0000 UTC" firstStartedPulling="2026-02-27 16:10:24.731294557 +0000 UTC m=+220.820567020" lastFinishedPulling="2026-02-27 16:12:00.220000349 +0000 UTC m=+316.309272812" observedRunningTime="2026-02-27 16:12:00.793218083 +0000 UTC m=+316.882490536" watchObservedRunningTime="2026-02-27 16:12:00.8981494 +0000 UTC m=+316.987421863"
Feb 27 16:12:00 crc kubenswrapper[4830]: I0227 16:12:00.900954 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-867c8bbbf4-q5zb8"]
Feb 27 16:12:00 crc kubenswrapper[4830]: E0227 16:12:00.901511 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c8bb8b2-5d49-4500-910d-b8f48097bbcc" containerName="controller-manager"
Feb 27 16:12:00 crc kubenswrapper[4830]: I0227 16:12:00.901528 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c8bb8b2-5d49-4500-910d-b8f48097bbcc" containerName="controller-manager"
Feb 27 16:12:00 crc kubenswrapper[4830]: I0227 16:12:00.901659 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c8bb8b2-5d49-4500-910d-b8f48097bbcc" containerName="controller-manager"
Feb 27 16:12:00 crc kubenswrapper[4830]: I0227 16:12:00.902141 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-867c8bbbf4-q5zb8"
Feb 27 16:12:00 crc kubenswrapper[4830]: I0227 16:12:00.905390 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Feb 27 16:12:00 crc kubenswrapper[4830]: I0227 16:12:00.905472 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Feb 27 16:12:00 crc kubenswrapper[4830]: I0227 16:12:00.905526 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Feb 27 16:12:00 crc kubenswrapper[4830]: I0227 16:12:00.905880 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Feb 27 16:12:00 crc kubenswrapper[4830]: I0227 16:12:00.905969 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Feb 27 16:12:00 crc kubenswrapper[4830]: I0227 16:12:00.906042 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Feb 27 16:12:00 crc kubenswrapper[4830]: I0227 16:12:00.907749 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-549499c84f-9qrdr"]
Feb 27 16:12:00 crc kubenswrapper[4830]: I0227 16:12:00.912521 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-549499c84f-9qrdr"
Feb 27 16:12:00 crc kubenswrapper[4830]: I0227 16:12:00.914111 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Feb 27 16:12:00 crc kubenswrapper[4830]: I0227 16:12:00.914258 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Feb 27 16:12:00 crc kubenswrapper[4830]: I0227 16:12:00.914822 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Feb 27 16:12:00 crc kubenswrapper[4830]: I0227 16:12:00.915028 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Feb 27 16:12:00 crc kubenswrapper[4830]: I0227 16:12:00.915180 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Feb 27 16:12:00 crc kubenswrapper[4830]: I0227 16:12:00.915637 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Feb 27 16:12:00 crc kubenswrapper[4830]: I0227 16:12:00.922830 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-867c8bbbf4-q5zb8"]
Feb 27 16:12:00 crc kubenswrapper[4830]: I0227 16:12:00.924786 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Feb 27 16:12:00 crc kubenswrapper[4830]: I0227 16:12:00.943260 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-549499c84f-9qrdr"]
Feb 27 16:12:00 crc kubenswrapper[4830]: I0227 16:12:00.973697 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536812-7wkbt"]
Feb 27 16:12:01 crc kubenswrapper[4830]: I0227 16:12:01.048995 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b67cded3-a953-4525-bdab-c6452dde691c-serving-cert\") pod \"controller-manager-549499c84f-9qrdr\" (UID: \"b67cded3-a953-4525-bdab-c6452dde691c\") " pod="openshift-controller-manager/controller-manager-549499c84f-9qrdr"
Feb 27 16:12:01 crc kubenswrapper[4830]: I0227 16:12:01.049035 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/794a70c6-624e-46f9-97ae-d1c5eadc84bb-serving-cert\") pod \"route-controller-manager-867c8bbbf4-q5zb8\" (UID: \"794a70c6-624e-46f9-97ae-d1c5eadc84bb\") " pod="openshift-route-controller-manager/route-controller-manager-867c8bbbf4-q5zb8"
Feb 27 16:12:01 crc kubenswrapper[4830]: I0227 16:12:01.049061 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/794a70c6-624e-46f9-97ae-d1c5eadc84bb-config\") pod \"route-controller-manager-867c8bbbf4-q5zb8\" (UID: \"794a70c6-624e-46f9-97ae-d1c5eadc84bb\") " pod="openshift-route-controller-manager/route-controller-manager-867c8bbbf4-q5zb8"
Feb 27 16:12:01 crc kubenswrapper[4830]: I0227 16:12:01.049103 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbv5j\" (UniqueName: \"kubernetes.io/projected/b67cded3-a953-4525-bdab-c6452dde691c-kube-api-access-zbv5j\") pod \"controller-manager-549499c84f-9qrdr\" (UID: \"b67cded3-a953-4525-bdab-c6452dde691c\") " pod="openshift-controller-manager/controller-manager-549499c84f-9qrdr"
Feb 27 16:12:01 crc kubenswrapper[4830]: I0227 16:12:01.049127 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b67cded3-a953-4525-bdab-c6452dde691c-config\") pod \"controller-manager-549499c84f-9qrdr\" (UID: \"b67cded3-a953-4525-bdab-c6452dde691c\") " pod="openshift-controller-manager/controller-manager-549499c84f-9qrdr"
Feb 27 16:12:01 crc kubenswrapper[4830]: I0227 16:12:01.049148 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b67cded3-a953-4525-bdab-c6452dde691c-client-ca\") pod \"controller-manager-549499c84f-9qrdr\" (UID: \"b67cded3-a953-4525-bdab-c6452dde691c\") " pod="openshift-controller-manager/controller-manager-549499c84f-9qrdr"
Feb 27 16:12:01 crc kubenswrapper[4830]: I0227 16:12:01.049169 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wn5nl\" (UniqueName: \"kubernetes.io/projected/794a70c6-624e-46f9-97ae-d1c5eadc84bb-kube-api-access-wn5nl\") pod \"route-controller-manager-867c8bbbf4-q5zb8\" (UID: \"794a70c6-624e-46f9-97ae-d1c5eadc84bb\") " pod="openshift-route-controller-manager/route-controller-manager-867c8bbbf4-q5zb8"
Feb 27 16:12:01 crc kubenswrapper[4830]: I0227 16:12:01.049200 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/794a70c6-624e-46f9-97ae-d1c5eadc84bb-client-ca\") pod \"route-controller-manager-867c8bbbf4-q5zb8\" (UID: \"794a70c6-624e-46f9-97ae-d1c5eadc84bb\") " pod="openshift-route-controller-manager/route-controller-manager-867c8bbbf4-q5zb8"
Feb 27 16:12:01 crc kubenswrapper[4830]: I0227 16:12:01.049220 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b67cded3-a953-4525-bdab-c6452dde691c-proxy-ca-bundles\") pod \"controller-manager-549499c84f-9qrdr\" (UID: \"b67cded3-a953-4525-bdab-c6452dde691c\") " pod="openshift-controller-manager/controller-manager-549499c84f-9qrdr"
Feb 27 16:12:01 crc kubenswrapper[4830]: I0227 16:12:01.150456 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zbv5j\" (UniqueName: \"kubernetes.io/projected/b67cded3-a953-4525-bdab-c6452dde691c-kube-api-access-zbv5j\") pod \"controller-manager-549499c84f-9qrdr\" (UID: \"b67cded3-a953-4525-bdab-c6452dde691c\") " pod="openshift-controller-manager/controller-manager-549499c84f-9qrdr"
Feb 27 16:12:01 crc kubenswrapper[4830]: I0227 16:12:01.150759 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b67cded3-a953-4525-bdab-c6452dde691c-config\") pod \"controller-manager-549499c84f-9qrdr\" (UID: \"b67cded3-a953-4525-bdab-c6452dde691c\") " pod="openshift-controller-manager/controller-manager-549499c84f-9qrdr"
Feb 27 16:12:01 crc kubenswrapper[4830]: I0227 16:12:01.150854 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b67cded3-a953-4525-bdab-c6452dde691c-client-ca\") pod \"controller-manager-549499c84f-9qrdr\" (UID: \"b67cded3-a953-4525-bdab-c6452dde691c\") " pod="openshift-controller-manager/controller-manager-549499c84f-9qrdr"
Feb 27 16:12:01 crc kubenswrapper[4830]: I0227 16:12:01.150982 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wn5nl\" (UniqueName: \"kubernetes.io/projected/794a70c6-624e-46f9-97ae-d1c5eadc84bb-kube-api-access-wn5nl\") pod \"route-controller-manager-867c8bbbf4-q5zb8\" (UID: \"794a70c6-624e-46f9-97ae-d1c5eadc84bb\") " pod="openshift-route-controller-manager/route-controller-manager-867c8bbbf4-q5zb8"
Feb 27 16:12:01 crc kubenswrapper[4830]: I0227 16:12:01.151074 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/794a70c6-624e-46f9-97ae-d1c5eadc84bb-client-ca\") pod \"route-controller-manager-867c8bbbf4-q5zb8\" (UID: \"794a70c6-624e-46f9-97ae-d1c5eadc84bb\") " pod="openshift-route-controller-manager/route-controller-manager-867c8bbbf4-q5zb8"
Feb 27 16:12:01 crc kubenswrapper[4830]: I0227 16:12:01.151189 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b67cded3-a953-4525-bdab-c6452dde691c-proxy-ca-bundles\") pod \"controller-manager-549499c84f-9qrdr\" (UID: \"b67cded3-a953-4525-bdab-c6452dde691c\") " pod="openshift-controller-manager/controller-manager-549499c84f-9qrdr"
Feb 27 16:12:01 crc kubenswrapper[4830]: I0227 16:12:01.151859 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b67cded3-a953-4525-bdab-c6452dde691c-client-ca\") pod \"controller-manager-549499c84f-9qrdr\" (UID: \"b67cded3-a953-4525-bdab-c6452dde691c\") " pod="openshift-controller-manager/controller-manager-549499c84f-9qrdr"
Feb 27 16:12:01 crc kubenswrapper[4830]: I0227 16:12:01.152107 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b67cded3-a953-4525-bdab-c6452dde691c-proxy-ca-bundles\") pod \"controller-manager-549499c84f-9qrdr\" (UID: \"b67cded3-a953-4525-bdab-c6452dde691c\") " pod="openshift-controller-manager/controller-manager-549499c84f-9qrdr"
Feb 27 16:12:01 crc kubenswrapper[4830]: I0227 16:12:01.152150 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/794a70c6-624e-46f9-97ae-d1c5eadc84bb-client-ca\") pod \"route-controller-manager-867c8bbbf4-q5zb8\" (UID: \"794a70c6-624e-46f9-97ae-d1c5eadc84bb\") " pod="openshift-route-controller-manager/route-controller-manager-867c8bbbf4-q5zb8"
Feb 27 16:12:01 crc kubenswrapper[4830]: I0227 16:12:01.152154 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b67cded3-a953-4525-bdab-c6452dde691c-config\") pod \"controller-manager-549499c84f-9qrdr\" (UID: \"b67cded3-a953-4525-bdab-c6452dde691c\") " pod="openshift-controller-manager/controller-manager-549499c84f-9qrdr"
Feb 27 16:12:01 crc kubenswrapper[4830]: I0227 16:12:01.152429 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b67cded3-a953-4525-bdab-c6452dde691c-serving-cert\") pod \"controller-manager-549499c84f-9qrdr\" (UID: \"b67cded3-a953-4525-bdab-c6452dde691c\") " pod="openshift-controller-manager/controller-manager-549499c84f-9qrdr"
Feb 27 16:12:01 crc kubenswrapper[4830]: I0227 16:12:01.152521 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/794a70c6-624e-46f9-97ae-d1c5eadc84bb-serving-cert\") pod \"route-controller-manager-867c8bbbf4-q5zb8\" (UID: \"794a70c6-624e-46f9-97ae-d1c5eadc84bb\") " pod="openshift-route-controller-manager/route-controller-manager-867c8bbbf4-q5zb8"
Feb 27 16:12:01 crc kubenswrapper[4830]: I0227 16:12:01.152603 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/794a70c6-624e-46f9-97ae-d1c5eadc84bb-config\") pod \"route-controller-manager-867c8bbbf4-q5zb8\" (UID: \"794a70c6-624e-46f9-97ae-d1c5eadc84bb\") " pod="openshift-route-controller-manager/route-controller-manager-867c8bbbf4-q5zb8"
Feb 27 16:12:01 crc kubenswrapper[4830]: I0227 16:12:01.153752 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/794a70c6-624e-46f9-97ae-d1c5eadc84bb-config\") pod \"route-controller-manager-867c8bbbf4-q5zb8\" (UID: \"794a70c6-624e-46f9-97ae-d1c5eadc84bb\") " pod="openshift-route-controller-manager/route-controller-manager-867c8bbbf4-q5zb8"
Feb 27 16:12:01 crc kubenswrapper[4830]: I0227 16:12:01.157552 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/794a70c6-624e-46f9-97ae-d1c5eadc84bb-serving-cert\") pod \"route-controller-manager-867c8bbbf4-q5zb8\" (UID: \"794a70c6-624e-46f9-97ae-d1c5eadc84bb\") " pod="openshift-route-controller-manager/route-controller-manager-867c8bbbf4-q5zb8"
Feb 27 16:12:01 crc kubenswrapper[4830]: I0227 16:12:01.157673 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b67cded3-a953-4525-bdab-c6452dde691c-serving-cert\") pod \"controller-manager-549499c84f-9qrdr\" (UID: \"b67cded3-a953-4525-bdab-c6452dde691c\") " pod="openshift-controller-manager/controller-manager-549499c84f-9qrdr"
Feb 27 16:12:01 crc kubenswrapper[4830]: I0227 16:12:01.166904 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zbv5j\" (UniqueName: \"kubernetes.io/projected/b67cded3-a953-4525-bdab-c6452dde691c-kube-api-access-zbv5j\") pod \"controller-manager-549499c84f-9qrdr\" (UID: \"b67cded3-a953-4525-bdab-c6452dde691c\") " pod="openshift-controller-manager/controller-manager-549499c84f-9qrdr"
Feb 27 16:12:01 crc kubenswrapper[4830]: I0227 16:12:01.170808 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wn5nl\" (UniqueName: \"kubernetes.io/projected/794a70c6-624e-46f9-97ae-d1c5eadc84bb-kube-api-access-wn5nl\") pod \"route-controller-manager-867c8bbbf4-q5zb8\" (UID: \"794a70c6-624e-46f9-97ae-d1c5eadc84bb\") " pod="openshift-route-controller-manager/route-controller-manager-867c8bbbf4-q5zb8"
Feb 27 16:12:01 crc kubenswrapper[4830]: I0227 16:12:01.215128 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-867c8bbbf4-q5zb8"
Feb 27 16:12:01 crc kubenswrapper[4830]: I0227 16:12:01.225515 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-549499c84f-9qrdr"
Feb 27 16:12:01 crc kubenswrapper[4830]: I0227 16:12:01.396007 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-vs8sq"]
Feb 27 16:12:01 crc kubenswrapper[4830]: I0227 16:12:01.439197 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-966h2" podUID="8b33138a-5b9d-4af8-b13d-4db4c2613983" containerName="registry-server" probeResult="failure" output=<
Feb 27 16:12:01 crc kubenswrapper[4830]: timeout: failed to connect service ":50051" within 1s
Feb 27 16:12:01 crc kubenswrapper[4830]: >
Feb 27 16:12:01 crc kubenswrapper[4830]: I0227 16:12:01.439453 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-dnpxp" podUID="789ee180-dd8e-4cb2-884e-beea08667c53" containerName="registry-server" probeResult="failure" output=<
Feb 27 16:12:01 crc kubenswrapper[4830]: timeout: failed to connect service ":50051" within 1s
Feb 27 16:12:01 crc kubenswrapper[4830]: >
Feb 27 16:12:01 crc kubenswrapper[4830]: I0227 16:12:01.536602 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-867c8bbbf4-q5zb8"]
Feb 27 16:12:01 crc kubenswrapper[4830]: I0227 16:12:01.598975 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-549499c84f-9qrdr"]
Feb 27 16:12:01 crc kubenswrapper[4830]: W0227 16:12:01.628718 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb67cded3_a953_4525_bdab_c6452dde691c.slice/crio-1a1735fdacb9ab45bf9642bc64268d19e5b9a569ae9ad01e897a1846014883ca WatchSource:0}: Error finding container 1a1735fdacb9ab45bf9642bc64268d19e5b9a569ae9ad01e897a1846014883ca: Status 404 returned error can't find the container with id 1a1735fdacb9ab45bf9642bc64268d19e5b9a569ae9ad01e897a1846014883ca
Feb 27 16:12:01 crc kubenswrapper[4830]: I0227 16:12:01.733047 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s4bpk" event={"ID":"1c5e2cae-7890-48fb-ab76-7e53c52fd6ac","Type":"ContainerStarted","Data":"f6d6173839dc489a5784d5306cc3a3b42d8583326f84a6455d829e1ac8c12462"}
Feb 27 16:12:01 crc kubenswrapper[4830]: I0227 16:12:01.739499 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536812-7wkbt" event={"ID":"dce3358b-25c4-4fe9-a3fa-0a0be053e8f0","Type":"ContainerStarted","Data":"08da63b546208f401c0a8ef19dc8f27e0b7fedaa80a5b7e24a46aef56cc1c31d"}
Feb 27 16:12:01 crc kubenswrapper[4830]: I0227 16:12:01.741714 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zwcdd" event={"ID":"728cab24-3fc3-4249-b37e-183d5676c191","Type":"ContainerStarted","Data":"1406def1089b142cf3b2b616ae9f6fffa9e0de03b2f398b478459ebdbb937a55"}
Feb 27 16:12:01 crc kubenswrapper[4830]: I0227 16:12:01.743574 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-867c8bbbf4-q5zb8" event={"ID":"794a70c6-624e-46f9-97ae-d1c5eadc84bb","Type":"ContainerStarted","Data":"e90162ea4e16e7c2fd74f52edf8cc31e08375aa24586d1f529c6ed3fb585cbca"}
Feb 27 16:12:01 crc kubenswrapper[4830]: I0227 16:12:01.743603 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-867c8bbbf4-q5zb8"
event={"ID":"794a70c6-624e-46f9-97ae-d1c5eadc84bb","Type":"ContainerStarted","Data":"87257e0c3f4da0f25581d7ee6822974a2e52a1351b91fe8383666f35f6a1e884"} Feb 27 16:12:01 crc kubenswrapper[4830]: I0227 16:12:01.748021 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kkwcl" event={"ID":"a7e1e0a3-a7d4-4508-b84e-6ba87fced6fc","Type":"ContainerStarted","Data":"1eba51cb0d496eae004fbfc8ec830b99d548b112ebca7690cb09e1819ee1e443"} Feb 27 16:12:01 crc kubenswrapper[4830]: I0227 16:12:01.749546 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-549499c84f-9qrdr" event={"ID":"b67cded3-a953-4525-bdab-c6452dde691c","Type":"ContainerStarted","Data":"1a1735fdacb9ab45bf9642bc64268d19e5b9a569ae9ad01e897a1846014883ca"} Feb 27 16:12:01 crc kubenswrapper[4830]: I0227 16:12:01.775598 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-s4bpk" podStartSLOduration=2.787422091 podStartE2EDuration="1m41.775580841s" podCreationTimestamp="2026-02-27 16:10:20 +0000 UTC" firstStartedPulling="2026-02-27 16:10:21.503984985 +0000 UTC m=+217.593257448" lastFinishedPulling="2026-02-27 16:12:00.492143735 +0000 UTC m=+316.581416198" observedRunningTime="2026-02-27 16:12:01.758879795 +0000 UTC m=+317.848152258" watchObservedRunningTime="2026-02-27 16:12:01.775580841 +0000 UTC m=+317.864853304" Feb 27 16:12:01 crc kubenswrapper[4830]: I0227 16:12:01.796023 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-zwcdd" podStartSLOduration=3.5816311880000002 podStartE2EDuration="1m39.796008931s" podCreationTimestamp="2026-02-27 16:10:22 +0000 UTC" firstStartedPulling="2026-02-27 16:10:24.715584851 +0000 UTC m=+220.804857314" lastFinishedPulling="2026-02-27 16:12:00.929962594 +0000 UTC m=+317.019235057" observedRunningTime="2026-02-27 16:12:01.795133199 +0000 UTC 
m=+317.884405662" watchObservedRunningTime="2026-02-27 16:12:01.796008931 +0000 UTC m=+317.885281394" Feb 27 16:12:01 crc kubenswrapper[4830]: I0227 16:12:01.799300 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-kkwcl" podStartSLOduration=3.934674389 podStartE2EDuration="1m40.799292413s" podCreationTimestamp="2026-02-27 16:10:21 +0000 UTC" firstStartedPulling="2026-02-27 16:10:23.736722874 +0000 UTC m=+219.825995327" lastFinishedPulling="2026-02-27 16:12:00.601340888 +0000 UTC m=+316.690613351" observedRunningTime="2026-02-27 16:12:01.77632003 +0000 UTC m=+317.865592493" watchObservedRunningTime="2026-02-27 16:12:01.799292413 +0000 UTC m=+317.888564876" Feb 27 16:12:02 crc kubenswrapper[4830]: I0227 16:12:02.120614 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-kkwcl" Feb 27 16:12:02 crc kubenswrapper[4830]: I0227 16:12:02.120664 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-kkwcl" Feb 27 16:12:02 crc kubenswrapper[4830]: I0227 16:12:02.521053 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-zwcdd" Feb 27 16:12:02 crc kubenswrapper[4830]: I0227 16:12:02.521343 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-zwcdd" Feb 27 16:12:02 crc kubenswrapper[4830]: I0227 16:12:02.756705 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536812-7wkbt" event={"ID":"dce3358b-25c4-4fe9-a3fa-0a0be053e8f0","Type":"ContainerStarted","Data":"7669a6f647f383b53f489bdf9bfd485dae7bcaf4da2d4c3f77794eda9777dccf"} Feb 27 16:12:02 crc kubenswrapper[4830]: I0227 16:12:02.758279 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager/controller-manager-549499c84f-9qrdr" event={"ID":"b67cded3-a953-4525-bdab-c6452dde691c","Type":"ContainerStarted","Data":"a7998612e3241e6233b1a6208f436d1bdf543202e2c8a56d19490f3976bc2d8c"} Feb 27 16:12:02 crc kubenswrapper[4830]: I0227 16:12:02.758677 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-867c8bbbf4-q5zb8" Feb 27 16:12:02 crc kubenswrapper[4830]: I0227 16:12:02.769582 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14683c90-aea4-45e5-88ed-c6d9a95a18af" path="/var/lib/kubelet/pods/14683c90-aea4-45e5-88ed-c6d9a95a18af/volumes" Feb 27 16:12:02 crc kubenswrapper[4830]: I0227 16:12:02.770331 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-867c8bbbf4-q5zb8" Feb 27 16:12:02 crc kubenswrapper[4830]: I0227 16:12:02.790705 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29536812-7wkbt" podStartSLOduration=1.852290454 podStartE2EDuration="2.790685685s" podCreationTimestamp="2026-02-27 16:12:00 +0000 UTC" firstStartedPulling="2026-02-27 16:12:00.988846502 +0000 UTC m=+317.078118965" lastFinishedPulling="2026-02-27 16:12:01.927241733 +0000 UTC m=+318.016514196" observedRunningTime="2026-02-27 16:12:02.773786674 +0000 UTC m=+318.863059137" watchObservedRunningTime="2026-02-27 16:12:02.790685685 +0000 UTC m=+318.879958158" Feb 27 16:12:02 crc kubenswrapper[4830]: I0227 16:12:02.791568 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-867c8bbbf4-q5zb8" podStartSLOduration=3.791562958 podStartE2EDuration="3.791562958s" podCreationTimestamp="2026-02-27 16:11:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-27 16:12:02.789128837 +0000 UTC m=+318.878401300" watchObservedRunningTime="2026-02-27 16:12:02.791562958 +0000 UTC m=+318.880835421" Feb 27 16:12:02 crc kubenswrapper[4830]: I0227 16:12:02.848469 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-549499c84f-9qrdr" podStartSLOduration=4.8484550859999995 podStartE2EDuration="4.848455086s" podCreationTimestamp="2026-02-27 16:11:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:12:02.846816965 +0000 UTC m=+318.936089428" watchObservedRunningTime="2026-02-27 16:12:02.848455086 +0000 UTC m=+318.937727549" Feb 27 16:12:03 crc kubenswrapper[4830]: I0227 16:12:03.156887 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-kkwcl" podUID="a7e1e0a3-a7d4-4508-b84e-6ba87fced6fc" containerName="registry-server" probeResult="failure" output=< Feb 27 16:12:03 crc kubenswrapper[4830]: timeout: failed to connect service ":50051" within 1s Feb 27 16:12:03 crc kubenswrapper[4830]: > Feb 27 16:12:03 crc kubenswrapper[4830]: I0227 16:12:03.252761 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-tr5cj" Feb 27 16:12:03 crc kubenswrapper[4830]: I0227 16:12:03.252818 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-tr5cj" Feb 27 16:12:03 crc kubenswrapper[4830]: I0227 16:12:03.518768 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-s5z2n" Feb 27 16:12:03 crc kubenswrapper[4830]: I0227 16:12:03.518899 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-s5z2n" Feb 27 16:12:03 crc kubenswrapper[4830]: I0227 16:12:03.570207 4830 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-zwcdd" podUID="728cab24-3fc3-4249-b37e-183d5676c191" containerName="registry-server" probeResult="failure" output=< Feb 27 16:12:03 crc kubenswrapper[4830]: timeout: failed to connect service ":50051" within 1s Feb 27 16:12:03 crc kubenswrapper[4830]: > Feb 27 16:12:03 crc kubenswrapper[4830]: I0227 16:12:03.759999 4830 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 27 16:12:03 crc kubenswrapper[4830]: I0227 16:12:03.760337 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://1a3f71ef2ccab58baa94377c8f63ded83ca5f73a4d7a24d754719670685c55a5" gracePeriod=15 Feb 27 16:12:03 crc kubenswrapper[4830]: I0227 16:12:03.760402 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://2fd9cca8c6af573373bdd5d9c9bea88412fc8611274c4856a98f11a59599fcb6" gracePeriod=15 Feb 27 16:12:03 crc kubenswrapper[4830]: I0227 16:12:03.760462 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://97ffad51412be8330198e944d4f687380d5066b777b54e362e8f8b772b808c89" gracePeriod=15 Feb 27 16:12:03 crc kubenswrapper[4830]: I0227 16:12:03.760449 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" 
containerID="cri-o://ec37d31a82aedaacdf47fa540d508e6ca53626d082c45933c11a72f41a819a8a" gracePeriod=15 Feb 27 16:12:03 crc kubenswrapper[4830]: I0227 16:12:03.760462 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://b6cb8e7a94967cd24e883272ba2f923a3adae43cc6105ea7506c1fba144de241" gracePeriod=15 Feb 27 16:12:03 crc kubenswrapper[4830]: I0227 16:12:03.761969 4830 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 27 16:12:03 crc kubenswrapper[4830]: E0227 16:12:03.762288 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 27 16:12:03 crc kubenswrapper[4830]: I0227 16:12:03.762306 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 27 16:12:03 crc kubenswrapper[4830]: E0227 16:12:03.762332 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 27 16:12:03 crc kubenswrapper[4830]: I0227 16:12:03.762349 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 27 16:12:03 crc kubenswrapper[4830]: E0227 16:12:03.762362 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 27 16:12:03 crc kubenswrapper[4830]: I0227 16:12:03.762377 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 27 16:12:03 crc kubenswrapper[4830]: E0227 16:12:03.762398 4830 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 27 16:12:03 crc kubenswrapper[4830]: I0227 16:12:03.762410 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 27 16:12:03 crc kubenswrapper[4830]: E0227 16:12:03.762426 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 27 16:12:03 crc kubenswrapper[4830]: I0227 16:12:03.762438 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 27 16:12:03 crc kubenswrapper[4830]: E0227 16:12:03.762453 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 27 16:12:03 crc kubenswrapper[4830]: I0227 16:12:03.762464 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 27 16:12:03 crc kubenswrapper[4830]: E0227 16:12:03.762479 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 27 16:12:03 crc kubenswrapper[4830]: I0227 16:12:03.762491 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 27 16:12:03 crc kubenswrapper[4830]: E0227 16:12:03.762509 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 27 16:12:03 crc kubenswrapper[4830]: I0227 16:12:03.762521 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 27 16:12:03 crc kubenswrapper[4830]: E0227 16:12:03.762536 4830 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 27 16:12:03 crc kubenswrapper[4830]: I0227 16:12:03.762548 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 27 16:12:03 crc kubenswrapper[4830]: I0227 16:12:03.762732 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 27 16:12:03 crc kubenswrapper[4830]: I0227 16:12:03.762750 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 27 16:12:03 crc kubenswrapper[4830]: I0227 16:12:03.762772 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 27 16:12:03 crc kubenswrapper[4830]: I0227 16:12:03.762791 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 27 16:12:03 crc kubenswrapper[4830]: I0227 16:12:03.762812 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 27 16:12:03 crc kubenswrapper[4830]: I0227 16:12:03.762829 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 27 16:12:03 crc kubenswrapper[4830]: I0227 16:12:03.762846 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 27 16:12:03 crc kubenswrapper[4830]: E0227 16:12:03.763053 4830 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 27 16:12:03 crc kubenswrapper[4830]: I0227 16:12:03.763068 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 27 16:12:03 crc kubenswrapper[4830]: I0227 16:12:03.763231 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 27 16:12:03 crc kubenswrapper[4830]: I0227 16:12:03.763248 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 27 16:12:03 crc kubenswrapper[4830]: I0227 16:12:03.765162 4830 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 27 16:12:03 crc kubenswrapper[4830]: I0227 16:12:03.765984 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 27 16:12:03 crc kubenswrapper[4830]: I0227 16:12:03.773848 4830 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Feb 27 16:12:03 crc kubenswrapper[4830]: I0227 16:12:03.774754 4830 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Feb 27 16:12:03 crc kubenswrapper[4830]: [+]log ok Feb 27 16:12:03 crc kubenswrapper[4830]: [+]api-openshift-apiserver-available ok Feb 27 16:12:03 crc kubenswrapper[4830]: [+]api-openshift-oauth-apiserver-available ok Feb 27 16:12:03 crc kubenswrapper[4830]: [+]informer-sync ok Feb 27 16:12:03 crc kubenswrapper[4830]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Feb 27 16:12:03 crc kubenswrapper[4830]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Feb 27 16:12:03 crc kubenswrapper[4830]: [+]poststarthook/start-apiserver-admission-initializer ok Feb 27 16:12:03 crc kubenswrapper[4830]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Feb 27 16:12:03 crc kubenswrapper[4830]: [+]poststarthook/openshift.io-api-request-count-filter ok Feb 27 16:12:03 crc kubenswrapper[4830]: [+]poststarthook/openshift.io-startkubeinformers ok Feb 27 16:12:03 crc kubenswrapper[4830]: [+]poststarthook/generic-apiserver-start-informers ok Feb 27 16:12:03 crc kubenswrapper[4830]: [+]poststarthook/priority-and-fairness-config-consumer ok Feb 27 16:12:03 crc kubenswrapper[4830]: [+]poststarthook/priority-and-fairness-filter ok Feb 27 16:12:03 crc kubenswrapper[4830]: [+]poststarthook/storage-object-count-tracker-hook ok Feb 27 16:12:03 crc kubenswrapper[4830]: 
[+]poststarthook/start-apiextensions-informers ok Feb 27 16:12:03 crc kubenswrapper[4830]: [+]poststarthook/start-apiextensions-controllers ok Feb 27 16:12:03 crc kubenswrapper[4830]: [+]poststarthook/crd-informer-synced ok Feb 27 16:12:03 crc kubenswrapper[4830]: [+]poststarthook/start-system-namespaces-controller ok Feb 27 16:12:03 crc kubenswrapper[4830]: [+]poststarthook/start-cluster-authentication-info-controller ok Feb 27 16:12:03 crc kubenswrapper[4830]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Feb 27 16:12:03 crc kubenswrapper[4830]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Feb 27 16:12:03 crc kubenswrapper[4830]: [+]poststarthook/start-legacy-token-tracking-controller ok Feb 27 16:12:03 crc kubenswrapper[4830]: [+]poststarthook/start-service-ip-repair-controllers ok Feb 27 16:12:03 crc kubenswrapper[4830]: [+]poststarthook/rbac/bootstrap-roles ok Feb 27 16:12:03 crc kubenswrapper[4830]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok Feb 27 16:12:03 crc kubenswrapper[4830]: [+]poststarthook/priority-and-fairness-config-producer ok Feb 27 16:12:03 crc kubenswrapper[4830]: [+]poststarthook/bootstrap-controller ok Feb 27 16:12:03 crc kubenswrapper[4830]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Feb 27 16:12:03 crc kubenswrapper[4830]: [+]poststarthook/start-kube-aggregator-informers ok Feb 27 16:12:03 crc kubenswrapper[4830]: [+]poststarthook/apiservice-status-local-available-controller ok Feb 27 16:12:03 crc kubenswrapper[4830]: [+]poststarthook/apiservice-status-remote-available-controller ok Feb 27 16:12:03 crc kubenswrapper[4830]: [+]poststarthook/apiservice-registration-controller ok Feb 27 16:12:03 crc kubenswrapper[4830]: [+]poststarthook/apiservice-wait-for-first-sync ok Feb 27 16:12:03 crc kubenswrapper[4830]: [+]poststarthook/apiservice-discovery-controller ok Feb 27 16:12:03 crc kubenswrapper[4830]: [+]poststarthook/kube-apiserver-autoregistration ok Feb 27 
16:12:03 crc kubenswrapper[4830]: [+]autoregister-completion ok Feb 27 16:12:03 crc kubenswrapper[4830]: [+]poststarthook/apiservice-openapi-controller ok Feb 27 16:12:03 crc kubenswrapper[4830]: [+]poststarthook/apiservice-openapiv3-controller ok Feb 27 16:12:03 crc kubenswrapper[4830]: [-]shutdown failed: reason withheld Feb 27 16:12:03 crc kubenswrapper[4830]: readyz check failed Feb 27 16:12:03 crc kubenswrapper[4830]: I0227 16:12:03.774816 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 27 16:12:03 crc kubenswrapper[4830]: I0227 16:12:03.775035 4830 generic.go:334] "Generic (PLEG): container finished" podID="dce3358b-25c4-4fe9-a3fa-0a0be053e8f0" containerID="7669a6f647f383b53f489bdf9bfd485dae7bcaf4da2d4c3f77794eda9777dccf" exitCode=0 Feb 27 16:12:03 crc kubenswrapper[4830]: I0227 16:12:03.775266 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536812-7wkbt" event={"ID":"dce3358b-25c4-4fe9-a3fa-0a0be053e8f0","Type":"ContainerDied","Data":"7669a6f647f383b53f489bdf9bfd485dae7bcaf4da2d4c3f77794eda9777dccf"} Feb 27 16:12:03 crc kubenswrapper[4830]: I0227 16:12:03.776223 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-549499c84f-9qrdr" Feb 27 16:12:03 crc kubenswrapper[4830]: I0227 16:12:03.792318 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-549499c84f-9qrdr" Feb 27 16:12:03 crc kubenswrapper[4830]: I0227 16:12:03.825176 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 27 16:12:03 crc kubenswrapper[4830]: I0227 16:12:03.885826 4830 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 16:12:03 crc kubenswrapper[4830]: I0227 16:12:03.886169 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 27 16:12:03 crc kubenswrapper[4830]: I0227 16:12:03.886320 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 16:12:03 crc kubenswrapper[4830]: I0227 16:12:03.886427 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 27 16:12:03 crc kubenswrapper[4830]: I0227 16:12:03.886546 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 27 16:12:03 crc kubenswrapper[4830]: I0227 16:12:03.886646 4830 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 27 16:12:03 crc kubenswrapper[4830]: I0227 16:12:03.886765 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 16:12:03 crc kubenswrapper[4830]: I0227 16:12:03.886915 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 27 16:12:03 crc kubenswrapper[4830]: I0227 16:12:03.988456 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 27 16:12:03 crc kubenswrapper[4830]: I0227 16:12:03.988508 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 16:12:03 crc kubenswrapper[4830]: I0227 16:12:03.988537 4830 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 27 16:12:03 crc kubenswrapper[4830]: I0227 16:12:03.988563 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 27 16:12:03 crc kubenswrapper[4830]: I0227 16:12:03.988567 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 27 16:12:03 crc kubenswrapper[4830]: I0227 16:12:03.988580 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 27 16:12:03 crc kubenswrapper[4830]: I0227 16:12:03.988609 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 27 16:12:03 crc kubenswrapper[4830]: I0227 16:12:03.988612 4830 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 16:12:03 crc kubenswrapper[4830]: I0227 16:12:03.988634 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 27 16:12:03 crc kubenswrapper[4830]: I0227 16:12:03.988645 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 16:12:03 crc kubenswrapper[4830]: I0227 16:12:03.988650 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 27 16:12:03 crc kubenswrapper[4830]: I0227 16:12:03.988671 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 27 16:12:03 crc kubenswrapper[4830]: I0227 16:12:03.988707 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 27 16:12:03 crc kubenswrapper[4830]: I0227 16:12:03.988752 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 16:12:03 crc kubenswrapper[4830]: I0227 16:12:03.988769 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 16:12:03 crc kubenswrapper[4830]: I0227 16:12:03.988823 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 16:12:04 crc kubenswrapper[4830]: I0227 16:12:04.116205 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 27 16:12:04 crc kubenswrapper[4830]: W0227 16:12:04.136490 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-1596a34e13fe7cd7341a2b9a068673b9160f385668c9d600f24fb93c9a89bb4e WatchSource:0}: Error finding container 1596a34e13fe7cd7341a2b9a068673b9160f385668c9d600f24fb93c9a89bb4e: Status 404 returned error can't find the container with id 1596a34e13fe7cd7341a2b9a068673b9160f385668c9d600f24fb93c9a89bb4e Feb 27 16:12:04 crc kubenswrapper[4830]: E0227 16:12:04.139425 4830 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.129.56.36:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.18982671ee21da69 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:12:04.138646121 +0000 UTC m=+320.227918584,LastTimestamp:2026-02-27 16:12:04.138646121 +0000 UTC m=+320.227918584,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:12:04 crc kubenswrapper[4830]: I0227 16:12:04.291740 4830 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-marketplace/redhat-operators-tr5cj" podUID="48011108-ee2c-4d3b-9f28-65cfc91b90ab" containerName="registry-server" probeResult="failure" output=< Feb 27 16:12:04 crc kubenswrapper[4830]: timeout: failed to connect service ":50051" within 1s Feb 27 16:12:04 crc kubenswrapper[4830]: > Feb 27 16:12:04 crc kubenswrapper[4830]: I0227 16:12:04.562777 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-s5z2n" podUID="514ae4c6-322a-458e-a1e5-df6d6a47fc88" containerName="registry-server" probeResult="failure" output=< Feb 27 16:12:04 crc kubenswrapper[4830]: timeout: failed to connect service ":50051" within 1s Feb 27 16:12:04 crc kubenswrapper[4830]: > Feb 27 16:12:04 crc kubenswrapper[4830]: I0227 16:12:04.766288 4830 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.36:6443: connect: connection refused" Feb 27 16:12:04 crc kubenswrapper[4830]: I0227 16:12:04.766696 4830 status_manager.go:851] "Failed to get status for pod" podUID="dce3358b-25c4-4fe9-a3fa-0a0be053e8f0" pod="openshift-infra/auto-csr-approver-29536812-7wkbt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29536812-7wkbt\": dial tcp 38.129.56.36:6443: connect: connection refused" Feb 27 16:12:04 crc kubenswrapper[4830]: I0227 16:12:04.792772 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Feb 27 16:12:04 crc kubenswrapper[4830]: I0227 16:12:04.794348 4830 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 27 16:12:04 crc kubenswrapper[4830]: I0227 16:12:04.796616 4830 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="2fd9cca8c6af573373bdd5d9c9bea88412fc8611274c4856a98f11a59599fcb6" exitCode=0 Feb 27 16:12:04 crc kubenswrapper[4830]: I0227 16:12:04.796655 4830 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="97ffad51412be8330198e944d4f687380d5066b777b54e362e8f8b772b808c89" exitCode=0 Feb 27 16:12:04 crc kubenswrapper[4830]: I0227 16:12:04.796668 4830 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="b6cb8e7a94967cd24e883272ba2f923a3adae43cc6105ea7506c1fba144de241" exitCode=2 Feb 27 16:12:04 crc kubenswrapper[4830]: I0227 16:12:04.796693 4830 scope.go:117] "RemoveContainer" containerID="acf80f37f1e38e051b10c7a1c0eeef8f0c3db7fc0e5653c54d82da7956a8eed2" Feb 27 16:12:04 crc kubenswrapper[4830]: I0227 16:12:04.798472 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"1596a34e13fe7cd7341a2b9a068673b9160f385668c9d600f24fb93c9a89bb4e"} Feb 27 16:12:05 crc kubenswrapper[4830]: I0227 16:12:05.181483 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536812-7wkbt" Feb 27 16:12:05 crc kubenswrapper[4830]: I0227 16:12:05.182641 4830 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.36:6443: connect: connection refused" Feb 27 16:12:05 crc kubenswrapper[4830]: I0227 16:12:05.183215 4830 status_manager.go:851] "Failed to get status for pod" podUID="dce3358b-25c4-4fe9-a3fa-0a0be053e8f0" pod="openshift-infra/auto-csr-approver-29536812-7wkbt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29536812-7wkbt\": dial tcp 38.129.56.36:6443: connect: connection refused" Feb 27 16:12:05 crc kubenswrapper[4830]: I0227 16:12:05.310415 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b6brj\" (UniqueName: \"kubernetes.io/projected/dce3358b-25c4-4fe9-a3fa-0a0be053e8f0-kube-api-access-b6brj\") pod \"dce3358b-25c4-4fe9-a3fa-0a0be053e8f0\" (UID: \"dce3358b-25c4-4fe9-a3fa-0a0be053e8f0\") " Feb 27 16:12:05 crc kubenswrapper[4830]: I0227 16:12:05.319869 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dce3358b-25c4-4fe9-a3fa-0a0be053e8f0-kube-api-access-b6brj" (OuterVolumeSpecName: "kube-api-access-b6brj") pod "dce3358b-25c4-4fe9-a3fa-0a0be053e8f0" (UID: "dce3358b-25c4-4fe9-a3fa-0a0be053e8f0"). InnerVolumeSpecName "kube-api-access-b6brj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:12:05 crc kubenswrapper[4830]: I0227 16:12:05.412377 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b6brj\" (UniqueName: \"kubernetes.io/projected/dce3358b-25c4-4fe9-a3fa-0a0be053e8f0-kube-api-access-b6brj\") on node \"crc\" DevicePath \"\"" Feb 27 16:12:05 crc kubenswrapper[4830]: I0227 16:12:05.808789 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 27 16:12:05 crc kubenswrapper[4830]: I0227 16:12:05.809635 4830 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="ec37d31a82aedaacdf47fa540d508e6ca53626d082c45933c11a72f41a819a8a" exitCode=0 Feb 27 16:12:05 crc kubenswrapper[4830]: I0227 16:12:05.811316 4830 generic.go:334] "Generic (PLEG): container finished" podID="d3e80191-de07-41aa-b0d7-69b826f5378b" containerID="f23a9800b7f520d81ac1102678715aa9664e7d9924714bc31ef74545233594f0" exitCode=0 Feb 27 16:12:05 crc kubenswrapper[4830]: I0227 16:12:05.811423 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"d3e80191-de07-41aa-b0d7-69b826f5378b","Type":"ContainerDied","Data":"f23a9800b7f520d81ac1102678715aa9664e7d9924714bc31ef74545233594f0"} Feb 27 16:12:05 crc kubenswrapper[4830]: I0227 16:12:05.812396 4830 status_manager.go:851] "Failed to get status for pod" podUID="d3e80191-de07-41aa-b0d7-69b826f5378b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.36:6443: connect: connection refused" Feb 27 16:12:05 crc kubenswrapper[4830]: I0227 16:12:05.812618 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536812-7wkbt" Feb 27 16:12:05 crc kubenswrapper[4830]: I0227 16:12:05.812648 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536812-7wkbt" event={"ID":"dce3358b-25c4-4fe9-a3fa-0a0be053e8f0","Type":"ContainerDied","Data":"08da63b546208f401c0a8ef19dc8f27e0b7fedaa80a5b7e24a46aef56cc1c31d"} Feb 27 16:12:05 crc kubenswrapper[4830]: I0227 16:12:05.812692 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="08da63b546208f401c0a8ef19dc8f27e0b7fedaa80a5b7e24a46aef56cc1c31d" Feb 27 16:12:05 crc kubenswrapper[4830]: I0227 16:12:05.813784 4830 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.36:6443: connect: connection refused" Feb 27 16:12:05 crc kubenswrapper[4830]: I0227 16:12:05.814287 4830 status_manager.go:851] "Failed to get status for pod" podUID="dce3358b-25c4-4fe9-a3fa-0a0be053e8f0" pod="openshift-infra/auto-csr-approver-29536812-7wkbt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29536812-7wkbt\": dial tcp 38.129.56.36:6443: connect: connection refused" Feb 27 16:12:05 crc kubenswrapper[4830]: I0227 16:12:05.848380 4830 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.36:6443: connect: connection refused" Feb 27 16:12:05 crc kubenswrapper[4830]: I0227 16:12:05.849251 4830 status_manager.go:851] "Failed to get status for pod" 
podUID="dce3358b-25c4-4fe9-a3fa-0a0be053e8f0" pod="openshift-infra/auto-csr-approver-29536812-7wkbt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29536812-7wkbt\": dial tcp 38.129.56.36:6443: connect: connection refused" Feb 27 16:12:05 crc kubenswrapper[4830]: I0227 16:12:05.849731 4830 status_manager.go:851] "Failed to get status for pod" podUID="d3e80191-de07-41aa-b0d7-69b826f5378b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.36:6443: connect: connection refused" Feb 27 16:12:05 crc kubenswrapper[4830]: E0227 16:12:05.965162 4830 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.129.56.36:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.18982671ee21da69 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:12:04.138646121 +0000 UTC m=+320.227918584,LastTimestamp:2026-02-27 16:12:04.138646121 +0000 UTC m=+320.227918584,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:12:06 crc kubenswrapper[4830]: I0227 16:12:06.820469 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"a82af140a1f416de6c34ca481cb88c11b78fa468ae226b7548fcbc9172c84ac8"} Feb 27 16:12:07 crc kubenswrapper[4830]: I0227 16:12:07.248413 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 27 16:12:07 crc kubenswrapper[4830]: I0227 16:12:07.249226 4830 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.36:6443: connect: connection refused" Feb 27 16:12:07 crc kubenswrapper[4830]: I0227 16:12:07.249449 4830 status_manager.go:851] "Failed to get status for pod" podUID="dce3358b-25c4-4fe9-a3fa-0a0be053e8f0" pod="openshift-infra/auto-csr-approver-29536812-7wkbt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29536812-7wkbt\": dial tcp 38.129.56.36:6443: connect: connection refused" Feb 27 16:12:07 crc kubenswrapper[4830]: I0227 16:12:07.249744 4830 status_manager.go:851] "Failed to get status for pod" podUID="d3e80191-de07-41aa-b0d7-69b826f5378b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.36:6443: connect: connection refused" Feb 27 16:12:07 crc kubenswrapper[4830]: I0227 16:12:07.350654 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d3e80191-de07-41aa-b0d7-69b826f5378b-kube-api-access\") pod \"d3e80191-de07-41aa-b0d7-69b826f5378b\" (UID: \"d3e80191-de07-41aa-b0d7-69b826f5378b\") " Feb 27 16:12:07 crc 
kubenswrapper[4830]: I0227 16:12:07.350728 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d3e80191-de07-41aa-b0d7-69b826f5378b-var-lock\") pod \"d3e80191-de07-41aa-b0d7-69b826f5378b\" (UID: \"d3e80191-de07-41aa-b0d7-69b826f5378b\") " Feb 27 16:12:07 crc kubenswrapper[4830]: I0227 16:12:07.350784 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d3e80191-de07-41aa-b0d7-69b826f5378b-kubelet-dir\") pod \"d3e80191-de07-41aa-b0d7-69b826f5378b\" (UID: \"d3e80191-de07-41aa-b0d7-69b826f5378b\") " Feb 27 16:12:07 crc kubenswrapper[4830]: I0227 16:12:07.351074 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d3e80191-de07-41aa-b0d7-69b826f5378b-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "d3e80191-de07-41aa-b0d7-69b826f5378b" (UID: "d3e80191-de07-41aa-b0d7-69b826f5378b"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 16:12:07 crc kubenswrapper[4830]: I0227 16:12:07.351303 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d3e80191-de07-41aa-b0d7-69b826f5378b-var-lock" (OuterVolumeSpecName: "var-lock") pod "d3e80191-de07-41aa-b0d7-69b826f5378b" (UID: "d3e80191-de07-41aa-b0d7-69b826f5378b"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 16:12:07 crc kubenswrapper[4830]: I0227 16:12:07.362362 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3e80191-de07-41aa-b0d7-69b826f5378b-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "d3e80191-de07-41aa-b0d7-69b826f5378b" (UID: "d3e80191-de07-41aa-b0d7-69b826f5378b"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:12:07 crc kubenswrapper[4830]: I0227 16:12:07.452237 4830 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d3e80191-de07-41aa-b0d7-69b826f5378b-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 27 16:12:07 crc kubenswrapper[4830]: I0227 16:12:07.452507 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d3e80191-de07-41aa-b0d7-69b826f5378b-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 27 16:12:07 crc kubenswrapper[4830]: I0227 16:12:07.452587 4830 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d3e80191-de07-41aa-b0d7-69b826f5378b-var-lock\") on node \"crc\" DevicePath \"\"" Feb 27 16:12:07 crc kubenswrapper[4830]: I0227 16:12:07.649542 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 27 16:12:07 crc kubenswrapper[4830]: I0227 16:12:07.650466 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 16:12:07 crc kubenswrapper[4830]: I0227 16:12:07.651168 4830 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.36:6443: connect: connection refused" Feb 27 16:12:07 crc kubenswrapper[4830]: I0227 16:12:07.651556 4830 status_manager.go:851] "Failed to get status for pod" podUID="dce3358b-25c4-4fe9-a3fa-0a0be053e8f0" pod="openshift-infra/auto-csr-approver-29536812-7wkbt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29536812-7wkbt\": dial tcp 38.129.56.36:6443: connect: connection refused" Feb 27 16:12:07 crc kubenswrapper[4830]: I0227 16:12:07.651922 4830 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.36:6443: connect: connection refused" Feb 27 16:12:07 crc kubenswrapper[4830]: I0227 16:12:07.652370 4830 status_manager.go:851] "Failed to get status for pod" podUID="d3e80191-de07-41aa-b0d7-69b826f5378b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.36:6443: connect: connection refused" Feb 27 16:12:07 crc kubenswrapper[4830]: I0227 16:12:07.755787 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 27 
16:12:07 crc kubenswrapper[4830]: I0227 16:12:07.755989 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 16:12:07 crc kubenswrapper[4830]: I0227 16:12:07.756005 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 27 16:12:07 crc kubenswrapper[4830]: I0227 16:12:07.756095 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 27 16:12:07 crc kubenswrapper[4830]: I0227 16:12:07.756210 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 16:12:07 crc kubenswrapper[4830]: I0227 16:12:07.756483 4830 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Feb 27 16:12:07 crc kubenswrapper[4830]: I0227 16:12:07.756516 4830 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 27 16:12:07 crc kubenswrapper[4830]: I0227 16:12:07.756592 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 16:12:07 crc kubenswrapper[4830]: I0227 16:12:07.829108 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 27 16:12:07 crc kubenswrapper[4830]: I0227 16:12:07.830051 4830 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="1a3f71ef2ccab58baa94377c8f63ded83ca5f73a4d7a24d754719670685c55a5" exitCode=0 Feb 27 16:12:07 crc kubenswrapper[4830]: I0227 16:12:07.830138 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 16:12:07 crc kubenswrapper[4830]: I0227 16:12:07.830155 4830 scope.go:117] "RemoveContainer" containerID="2fd9cca8c6af573373bdd5d9c9bea88412fc8611274c4856a98f11a59599fcb6" Feb 27 16:12:07 crc kubenswrapper[4830]: I0227 16:12:07.831803 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 27 16:12:07 crc kubenswrapper[4830]: I0227 16:12:07.831802 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"d3e80191-de07-41aa-b0d7-69b826f5378b","Type":"ContainerDied","Data":"132b04e1399ac3f5e9b1da70094189822a66567c85a1c5bd88e2b2441d1440f4"} Feb 27 16:12:07 crc kubenswrapper[4830]: I0227 16:12:07.832301 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="132b04e1399ac3f5e9b1da70094189822a66567c85a1c5bd88e2b2441d1440f4" Feb 27 16:12:07 crc kubenswrapper[4830]: I0227 16:12:07.832772 4830 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.36:6443: connect: connection refused" Feb 27 16:12:07 crc kubenswrapper[4830]: I0227 16:12:07.833281 4830 status_manager.go:851] "Failed to get status for pod" podUID="dce3358b-25c4-4fe9-a3fa-0a0be053e8f0" pod="openshift-infra/auto-csr-approver-29536812-7wkbt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29536812-7wkbt\": dial tcp 38.129.56.36:6443: connect: connection refused" Feb 27 16:12:07 crc kubenswrapper[4830]: I0227 16:12:07.833796 4830 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.36:6443: connect: connection refused" Feb 27 16:12:07 crc kubenswrapper[4830]: I0227 16:12:07.834187 4830 status_manager.go:851] "Failed to get status for pod" podUID="d3e80191-de07-41aa-b0d7-69b826f5378b" 
pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.36:6443: connect: connection refused" Feb 27 16:12:07 crc kubenswrapper[4830]: I0227 16:12:07.850135 4830 scope.go:117] "RemoveContainer" containerID="97ffad51412be8330198e944d4f687380d5066b777b54e362e8f8b772b808c89" Feb 27 16:12:07 crc kubenswrapper[4830]: I0227 16:12:07.850680 4830 status_manager.go:851] "Failed to get status for pod" podUID="d3e80191-de07-41aa-b0d7-69b826f5378b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.36:6443: connect: connection refused" Feb 27 16:12:07 crc kubenswrapper[4830]: I0227 16:12:07.851261 4830 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.36:6443: connect: connection refused" Feb 27 16:12:07 crc kubenswrapper[4830]: I0227 16:12:07.851651 4830 status_manager.go:851] "Failed to get status for pod" podUID="dce3358b-25c4-4fe9-a3fa-0a0be053e8f0" pod="openshift-infra/auto-csr-approver-29536812-7wkbt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29536812-7wkbt\": dial tcp 38.129.56.36:6443: connect: connection refused" Feb 27 16:12:07 crc kubenswrapper[4830]: I0227 16:12:07.852439 4830 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.36:6443: connect: connection refused" Feb 27 16:12:07 crc 
kubenswrapper[4830]: I0227 16:12:07.852986 4830 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.36:6443: connect: connection refused" Feb 27 16:12:07 crc kubenswrapper[4830]: I0227 16:12:07.853286 4830 status_manager.go:851] "Failed to get status for pod" podUID="d3e80191-de07-41aa-b0d7-69b826f5378b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.36:6443: connect: connection refused" Feb 27 16:12:07 crc kubenswrapper[4830]: I0227 16:12:07.853654 4830 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.36:6443: connect: connection refused" Feb 27 16:12:07 crc kubenswrapper[4830]: I0227 16:12:07.854044 4830 status_manager.go:851] "Failed to get status for pod" podUID="dce3358b-25c4-4fe9-a3fa-0a0be053e8f0" pod="openshift-infra/auto-csr-approver-29536812-7wkbt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29536812-7wkbt\": dial tcp 38.129.56.36:6443: connect: connection refused" Feb 27 16:12:07 crc kubenswrapper[4830]: I0227 16:12:07.857800 4830 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 27 16:12:07 crc kubenswrapper[4830]: I0227 16:12:07.873594 4830 scope.go:117] "RemoveContainer" 
containerID="ec37d31a82aedaacdf47fa540d508e6ca53626d082c45933c11a72f41a819a8a" Feb 27 16:12:07 crc kubenswrapper[4830]: I0227 16:12:07.894386 4830 scope.go:117] "RemoveContainer" containerID="b6cb8e7a94967cd24e883272ba2f923a3adae43cc6105ea7506c1fba144de241" Feb 27 16:12:07 crc kubenswrapper[4830]: I0227 16:12:07.912158 4830 scope.go:117] "RemoveContainer" containerID="1a3f71ef2ccab58baa94377c8f63ded83ca5f73a4d7a24d754719670685c55a5" Feb 27 16:12:07 crc kubenswrapper[4830]: I0227 16:12:07.940789 4830 scope.go:117] "RemoveContainer" containerID="e1fc7975111dde3841671f046742eea3d0eefcb040eee8aadb99a7ca8eba8bd2" Feb 27 16:12:07 crc kubenswrapper[4830]: I0227 16:12:07.968268 4830 scope.go:117] "RemoveContainer" containerID="2fd9cca8c6af573373bdd5d9c9bea88412fc8611274c4856a98f11a59599fcb6" Feb 27 16:12:07 crc kubenswrapper[4830]: E0227 16:12:07.968809 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2fd9cca8c6af573373bdd5d9c9bea88412fc8611274c4856a98f11a59599fcb6\": container with ID starting with 2fd9cca8c6af573373bdd5d9c9bea88412fc8611274c4856a98f11a59599fcb6 not found: ID does not exist" containerID="2fd9cca8c6af573373bdd5d9c9bea88412fc8611274c4856a98f11a59599fcb6" Feb 27 16:12:07 crc kubenswrapper[4830]: I0227 16:12:07.968876 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2fd9cca8c6af573373bdd5d9c9bea88412fc8611274c4856a98f11a59599fcb6"} err="failed to get container status \"2fd9cca8c6af573373bdd5d9c9bea88412fc8611274c4856a98f11a59599fcb6\": rpc error: code = NotFound desc = could not find container \"2fd9cca8c6af573373bdd5d9c9bea88412fc8611274c4856a98f11a59599fcb6\": container with ID starting with 2fd9cca8c6af573373bdd5d9c9bea88412fc8611274c4856a98f11a59599fcb6 not found: ID does not exist" Feb 27 16:12:07 crc kubenswrapper[4830]: I0227 16:12:07.968957 4830 scope.go:117] "RemoveContainer" 
containerID="97ffad51412be8330198e944d4f687380d5066b777b54e362e8f8b772b808c89" Feb 27 16:12:07 crc kubenswrapper[4830]: E0227 16:12:07.972430 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"97ffad51412be8330198e944d4f687380d5066b777b54e362e8f8b772b808c89\": container with ID starting with 97ffad51412be8330198e944d4f687380d5066b777b54e362e8f8b772b808c89 not found: ID does not exist" containerID="97ffad51412be8330198e944d4f687380d5066b777b54e362e8f8b772b808c89" Feb 27 16:12:07 crc kubenswrapper[4830]: I0227 16:12:07.972454 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97ffad51412be8330198e944d4f687380d5066b777b54e362e8f8b772b808c89"} err="failed to get container status \"97ffad51412be8330198e944d4f687380d5066b777b54e362e8f8b772b808c89\": rpc error: code = NotFound desc = could not find container \"97ffad51412be8330198e944d4f687380d5066b777b54e362e8f8b772b808c89\": container with ID starting with 97ffad51412be8330198e944d4f687380d5066b777b54e362e8f8b772b808c89 not found: ID does not exist" Feb 27 16:12:07 crc kubenswrapper[4830]: I0227 16:12:07.972503 4830 scope.go:117] "RemoveContainer" containerID="ec37d31a82aedaacdf47fa540d508e6ca53626d082c45933c11a72f41a819a8a" Feb 27 16:12:07 crc kubenswrapper[4830]: E0227 16:12:07.973544 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec37d31a82aedaacdf47fa540d508e6ca53626d082c45933c11a72f41a819a8a\": container with ID starting with ec37d31a82aedaacdf47fa540d508e6ca53626d082c45933c11a72f41a819a8a not found: ID does not exist" containerID="ec37d31a82aedaacdf47fa540d508e6ca53626d082c45933c11a72f41a819a8a" Feb 27 16:12:07 crc kubenswrapper[4830]: I0227 16:12:07.973755 4830 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"ec37d31a82aedaacdf47fa540d508e6ca53626d082c45933c11a72f41a819a8a"} err="failed to get container status \"ec37d31a82aedaacdf47fa540d508e6ca53626d082c45933c11a72f41a819a8a\": rpc error: code = NotFound desc = could not find container \"ec37d31a82aedaacdf47fa540d508e6ca53626d082c45933c11a72f41a819a8a\": container with ID starting with ec37d31a82aedaacdf47fa540d508e6ca53626d082c45933c11a72f41a819a8a not found: ID does not exist" Feb 27 16:12:07 crc kubenswrapper[4830]: I0227 16:12:07.973991 4830 scope.go:117] "RemoveContainer" containerID="b6cb8e7a94967cd24e883272ba2f923a3adae43cc6105ea7506c1fba144de241" Feb 27 16:12:07 crc kubenswrapper[4830]: E0227 16:12:07.974538 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b6cb8e7a94967cd24e883272ba2f923a3adae43cc6105ea7506c1fba144de241\": container with ID starting with b6cb8e7a94967cd24e883272ba2f923a3adae43cc6105ea7506c1fba144de241 not found: ID does not exist" containerID="b6cb8e7a94967cd24e883272ba2f923a3adae43cc6105ea7506c1fba144de241" Feb 27 16:12:07 crc kubenswrapper[4830]: I0227 16:12:07.974574 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b6cb8e7a94967cd24e883272ba2f923a3adae43cc6105ea7506c1fba144de241"} err="failed to get container status \"b6cb8e7a94967cd24e883272ba2f923a3adae43cc6105ea7506c1fba144de241\": rpc error: code = NotFound desc = could not find container \"b6cb8e7a94967cd24e883272ba2f923a3adae43cc6105ea7506c1fba144de241\": container with ID starting with b6cb8e7a94967cd24e883272ba2f923a3adae43cc6105ea7506c1fba144de241 not found: ID does not exist" Feb 27 16:12:07 crc kubenswrapper[4830]: I0227 16:12:07.974621 4830 scope.go:117] "RemoveContainer" containerID="1a3f71ef2ccab58baa94377c8f63ded83ca5f73a4d7a24d754719670685c55a5" Feb 27 16:12:07 crc kubenswrapper[4830]: E0227 16:12:07.975157 4830 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"1a3f71ef2ccab58baa94377c8f63ded83ca5f73a4d7a24d754719670685c55a5\": container with ID starting with 1a3f71ef2ccab58baa94377c8f63ded83ca5f73a4d7a24d754719670685c55a5 not found: ID does not exist" containerID="1a3f71ef2ccab58baa94377c8f63ded83ca5f73a4d7a24d754719670685c55a5" Feb 27 16:12:07 crc kubenswrapper[4830]: I0227 16:12:07.975198 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a3f71ef2ccab58baa94377c8f63ded83ca5f73a4d7a24d754719670685c55a5"} err="failed to get container status \"1a3f71ef2ccab58baa94377c8f63ded83ca5f73a4d7a24d754719670685c55a5\": rpc error: code = NotFound desc = could not find container \"1a3f71ef2ccab58baa94377c8f63ded83ca5f73a4d7a24d754719670685c55a5\": container with ID starting with 1a3f71ef2ccab58baa94377c8f63ded83ca5f73a4d7a24d754719670685c55a5 not found: ID does not exist" Feb 27 16:12:07 crc kubenswrapper[4830]: I0227 16:12:07.975221 4830 scope.go:117] "RemoveContainer" containerID="e1fc7975111dde3841671f046742eea3d0eefcb040eee8aadb99a7ca8eba8bd2" Feb 27 16:12:07 crc kubenswrapper[4830]: E0227 16:12:07.975862 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e1fc7975111dde3841671f046742eea3d0eefcb040eee8aadb99a7ca8eba8bd2\": container with ID starting with e1fc7975111dde3841671f046742eea3d0eefcb040eee8aadb99a7ca8eba8bd2 not found: ID does not exist" containerID="e1fc7975111dde3841671f046742eea3d0eefcb040eee8aadb99a7ca8eba8bd2" Feb 27 16:12:07 crc kubenswrapper[4830]: I0227 16:12:07.976086 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1fc7975111dde3841671f046742eea3d0eefcb040eee8aadb99a7ca8eba8bd2"} err="failed to get container status \"e1fc7975111dde3841671f046742eea3d0eefcb040eee8aadb99a7ca8eba8bd2\": rpc error: code = NotFound desc = could not find container 
\"e1fc7975111dde3841671f046742eea3d0eefcb040eee8aadb99a7ca8eba8bd2\": container with ID starting with e1fc7975111dde3841671f046742eea3d0eefcb040eee8aadb99a7ca8eba8bd2 not found: ID does not exist" Feb 27 16:12:08 crc kubenswrapper[4830]: I0227 16:12:08.773165 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Feb 27 16:12:10 crc kubenswrapper[4830]: I0227 16:12:10.240086 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-966h2" Feb 27 16:12:10 crc kubenswrapper[4830]: I0227 16:12:10.240370 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-k7l8d" Feb 27 16:12:10 crc kubenswrapper[4830]: I0227 16:12:10.240452 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-k7l8d" Feb 27 16:12:10 crc kubenswrapper[4830]: I0227 16:12:10.244562 4830 status_manager.go:851] "Failed to get status for pod" podUID="d3e80191-de07-41aa-b0d7-69b826f5378b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.36:6443: connect: connection refused" Feb 27 16:12:10 crc kubenswrapper[4830]: I0227 16:12:10.246186 4830 status_manager.go:851] "Failed to get status for pod" podUID="8b33138a-5b9d-4af8-b13d-4db4c2613983" pod="openshift-marketplace/community-operators-966h2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-966h2\": dial tcp 38.129.56.36:6443: connect: connection refused" Feb 27 16:12:10 crc kubenswrapper[4830]: I0227 16:12:10.246765 4830 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.36:6443: connect: connection refused" Feb 27 16:12:10 crc kubenswrapper[4830]: I0227 16:12:10.247727 4830 status_manager.go:851] "Failed to get status for pod" podUID="dce3358b-25c4-4fe9-a3fa-0a0be053e8f0" pod="openshift-infra/auto-csr-approver-29536812-7wkbt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29536812-7wkbt\": dial tcp 38.129.56.36:6443: connect: connection refused" Feb 27 16:12:10 crc kubenswrapper[4830]: I0227 16:12:10.299338 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-966h2" Feb 27 16:12:10 crc kubenswrapper[4830]: I0227 16:12:10.299686 4830 status_manager.go:851] "Failed to get status for pod" podUID="d3e80191-de07-41aa-b0d7-69b826f5378b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.36:6443: connect: connection refused" Feb 27 16:12:10 crc kubenswrapper[4830]: I0227 16:12:10.299892 4830 status_manager.go:851] "Failed to get status for pod" podUID="8b33138a-5b9d-4af8-b13d-4db4c2613983" pod="openshift-marketplace/community-operators-966h2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-966h2\": dial tcp 38.129.56.36:6443: connect: connection refused" Feb 27 16:12:10 crc kubenswrapper[4830]: I0227 16:12:10.300107 4830 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 
38.129.56.36:6443: connect: connection refused" Feb 27 16:12:10 crc kubenswrapper[4830]: I0227 16:12:10.300293 4830 status_manager.go:851] "Failed to get status for pod" podUID="dce3358b-25c4-4fe9-a3fa-0a0be053e8f0" pod="openshift-infra/auto-csr-approver-29536812-7wkbt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29536812-7wkbt\": dial tcp 38.129.56.36:6443: connect: connection refused" Feb 27 16:12:10 crc kubenswrapper[4830]: I0227 16:12:10.303365 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-k7l8d" Feb 27 16:12:10 crc kubenswrapper[4830]: I0227 16:12:10.304175 4830 status_manager.go:851] "Failed to get status for pod" podUID="8b33138a-5b9d-4af8-b13d-4db4c2613983" pod="openshift-marketplace/community-operators-966h2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-966h2\": dial tcp 38.129.56.36:6443: connect: connection refused" Feb 27 16:12:10 crc kubenswrapper[4830]: I0227 16:12:10.305197 4830 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.36:6443: connect: connection refused" Feb 27 16:12:10 crc kubenswrapper[4830]: I0227 16:12:10.305923 4830 status_manager.go:851] "Failed to get status for pod" podUID="dce3358b-25c4-4fe9-a3fa-0a0be053e8f0" pod="openshift-infra/auto-csr-approver-29536812-7wkbt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29536812-7wkbt\": dial tcp 38.129.56.36:6443: connect: connection refused" Feb 27 16:12:10 crc kubenswrapper[4830]: I0227 16:12:10.306503 4830 status_manager.go:851] "Failed to get status for pod" 
podUID="f2579681-6b81-4b58-9d2c-c26b123be8ec" pod="openshift-marketplace/certified-operators-k7l8d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-k7l8d\": dial tcp 38.129.56.36:6443: connect: connection refused" Feb 27 16:12:10 crc kubenswrapper[4830]: I0227 16:12:10.307028 4830 status_manager.go:851] "Failed to get status for pod" podUID="d3e80191-de07-41aa-b0d7-69b826f5378b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.36:6443: connect: connection refused" Feb 27 16:12:10 crc kubenswrapper[4830]: E0227 16:12:10.345938 4830 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.36:6443: connect: connection refused" Feb 27 16:12:10 crc kubenswrapper[4830]: E0227 16:12:10.347069 4830 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.36:6443: connect: connection refused" Feb 27 16:12:10 crc kubenswrapper[4830]: E0227 16:12:10.347511 4830 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.36:6443: connect: connection refused" Feb 27 16:12:10 crc kubenswrapper[4830]: E0227 16:12:10.347991 4830 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.36:6443: connect: connection refused" Feb 27 16:12:10 crc kubenswrapper[4830]: E0227 16:12:10.348495 4830 controller.go:195] "Failed to update lease" err="Put 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.36:6443: connect: connection refused" Feb 27 16:12:10 crc kubenswrapper[4830]: I0227 16:12:10.348540 4830 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Feb 27 16:12:10 crc kubenswrapper[4830]: E0227 16:12:10.348891 4830 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.36:6443: connect: connection refused" interval="200ms" Feb 27 16:12:10 crc kubenswrapper[4830]: I0227 16:12:10.395220 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-dnpxp" Feb 27 16:12:10 crc kubenswrapper[4830]: I0227 16:12:10.395852 4830 status_manager.go:851] "Failed to get status for pod" podUID="f2579681-6b81-4b58-9d2c-c26b123be8ec" pod="openshift-marketplace/certified-operators-k7l8d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-k7l8d\": dial tcp 38.129.56.36:6443: connect: connection refused" Feb 27 16:12:10 crc kubenswrapper[4830]: I0227 16:12:10.396341 4830 status_manager.go:851] "Failed to get status for pod" podUID="dce3358b-25c4-4fe9-a3fa-0a0be053e8f0" pod="openshift-infra/auto-csr-approver-29536812-7wkbt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29536812-7wkbt\": dial tcp 38.129.56.36:6443: connect: connection refused" Feb 27 16:12:10 crc kubenswrapper[4830]: I0227 16:12:10.396810 4830 status_manager.go:851] "Failed to get status for pod" podUID="789ee180-dd8e-4cb2-884e-beea08667c53" pod="openshift-marketplace/certified-operators-dnpxp" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-dnpxp\": dial tcp 38.129.56.36:6443: connect: connection refused" Feb 27 16:12:10 crc kubenswrapper[4830]: I0227 16:12:10.397315 4830 status_manager.go:851] "Failed to get status for pod" podUID="d3e80191-de07-41aa-b0d7-69b826f5378b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.36:6443: connect: connection refused" Feb 27 16:12:10 crc kubenswrapper[4830]: I0227 16:12:10.397859 4830 status_manager.go:851] "Failed to get status for pod" podUID="8b33138a-5b9d-4af8-b13d-4db4c2613983" pod="openshift-marketplace/community-operators-966h2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-966h2\": dial tcp 38.129.56.36:6443: connect: connection refused" Feb 27 16:12:10 crc kubenswrapper[4830]: I0227 16:12:10.398346 4830 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.36:6443: connect: connection refused" Feb 27 16:12:10 crc kubenswrapper[4830]: I0227 16:12:10.459475 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-dnpxp" Feb 27 16:12:10 crc kubenswrapper[4830]: I0227 16:12:10.460122 4830 status_manager.go:851] "Failed to get status for pod" podUID="f2579681-6b81-4b58-9d2c-c26b123be8ec" pod="openshift-marketplace/certified-operators-k7l8d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-k7l8d\": dial tcp 38.129.56.36:6443: connect: connection refused" Feb 27 16:12:10 crc kubenswrapper[4830]: I0227 
16:12:10.461338 4830 status_manager.go:851] "Failed to get status for pod" podUID="dce3358b-25c4-4fe9-a3fa-0a0be053e8f0" pod="openshift-infra/auto-csr-approver-29536812-7wkbt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29536812-7wkbt\": dial tcp 38.129.56.36:6443: connect: connection refused" Feb 27 16:12:10 crc kubenswrapper[4830]: I0227 16:12:10.462105 4830 status_manager.go:851] "Failed to get status for pod" podUID="789ee180-dd8e-4cb2-884e-beea08667c53" pod="openshift-marketplace/certified-operators-dnpxp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-dnpxp\": dial tcp 38.129.56.36:6443: connect: connection refused" Feb 27 16:12:10 crc kubenswrapper[4830]: I0227 16:12:10.463256 4830 status_manager.go:851] "Failed to get status for pod" podUID="d3e80191-de07-41aa-b0d7-69b826f5378b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.36:6443: connect: connection refused" Feb 27 16:12:10 crc kubenswrapper[4830]: I0227 16:12:10.463674 4830 status_manager.go:851] "Failed to get status for pod" podUID="8b33138a-5b9d-4af8-b13d-4db4c2613983" pod="openshift-marketplace/community-operators-966h2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-966h2\": dial tcp 38.129.56.36:6443: connect: connection refused" Feb 27 16:12:10 crc kubenswrapper[4830]: I0227 16:12:10.464192 4830 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.36:6443: connect: connection refused" Feb 27 16:12:10 crc kubenswrapper[4830]: E0227 
16:12:10.550257 4830 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.36:6443: connect: connection refused" interval="400ms" Feb 27 16:12:10 crc kubenswrapper[4830]: I0227 16:12:10.568132 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-s4bpk" Feb 27 16:12:10 crc kubenswrapper[4830]: I0227 16:12:10.568198 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-s4bpk" Feb 27 16:12:10 crc kubenswrapper[4830]: I0227 16:12:10.639290 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-s4bpk" Feb 27 16:12:10 crc kubenswrapper[4830]: I0227 16:12:10.639967 4830 status_manager.go:851] "Failed to get status for pod" podUID="8b33138a-5b9d-4af8-b13d-4db4c2613983" pod="openshift-marketplace/community-operators-966h2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-966h2\": dial tcp 38.129.56.36:6443: connect: connection refused" Feb 27 16:12:10 crc kubenswrapper[4830]: I0227 16:12:10.640641 4830 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.36:6443: connect: connection refused" Feb 27 16:12:10 crc kubenswrapper[4830]: I0227 16:12:10.641091 4830 status_manager.go:851] "Failed to get status for pod" podUID="f2579681-6b81-4b58-9d2c-c26b123be8ec" pod="openshift-marketplace/certified-operators-k7l8d" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-k7l8d\": dial tcp 38.129.56.36:6443: connect: connection refused" Feb 27 16:12:10 crc kubenswrapper[4830]: I0227 16:12:10.641588 4830 status_manager.go:851] "Failed to get status for pod" podUID="dce3358b-25c4-4fe9-a3fa-0a0be053e8f0" pod="openshift-infra/auto-csr-approver-29536812-7wkbt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29536812-7wkbt\": dial tcp 38.129.56.36:6443: connect: connection refused" Feb 27 16:12:10 crc kubenswrapper[4830]: I0227 16:12:10.642119 4830 status_manager.go:851] "Failed to get status for pod" podUID="789ee180-dd8e-4cb2-884e-beea08667c53" pod="openshift-marketplace/certified-operators-dnpxp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-dnpxp\": dial tcp 38.129.56.36:6443: connect: connection refused" Feb 27 16:12:10 crc kubenswrapper[4830]: I0227 16:12:10.642677 4830 status_manager.go:851] "Failed to get status for pod" podUID="1c5e2cae-7890-48fb-ab76-7e53c52fd6ac" pod="openshift-marketplace/community-operators-s4bpk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-s4bpk\": dial tcp 38.129.56.36:6443: connect: connection refused" Feb 27 16:12:10 crc kubenswrapper[4830]: I0227 16:12:10.643126 4830 status_manager.go:851] "Failed to get status for pod" podUID="d3e80191-de07-41aa-b0d7-69b826f5378b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.36:6443: connect: connection refused" Feb 27 16:12:10 crc kubenswrapper[4830]: I0227 16:12:10.927559 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-k7l8d" Feb 27 16:12:10 crc kubenswrapper[4830]: I0227 16:12:10.928432 
4830 status_manager.go:851] "Failed to get status for pod" podUID="8b33138a-5b9d-4af8-b13d-4db4c2613983" pod="openshift-marketplace/community-operators-966h2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-966h2\": dial tcp 38.129.56.36:6443: connect: connection refused" Feb 27 16:12:10 crc kubenswrapper[4830]: I0227 16:12:10.929026 4830 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.36:6443: connect: connection refused" Feb 27 16:12:10 crc kubenswrapper[4830]: I0227 16:12:10.929359 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-s4bpk" Feb 27 16:12:10 crc kubenswrapper[4830]: I0227 16:12:10.929610 4830 status_manager.go:851] "Failed to get status for pod" podUID="f2579681-6b81-4b58-9d2c-c26b123be8ec" pod="openshift-marketplace/certified-operators-k7l8d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-k7l8d\": dial tcp 38.129.56.36:6443: connect: connection refused" Feb 27 16:12:10 crc kubenswrapper[4830]: I0227 16:12:10.930154 4830 status_manager.go:851] "Failed to get status for pod" podUID="dce3358b-25c4-4fe9-a3fa-0a0be053e8f0" pod="openshift-infra/auto-csr-approver-29536812-7wkbt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29536812-7wkbt\": dial tcp 38.129.56.36:6443: connect: connection refused" Feb 27 16:12:10 crc kubenswrapper[4830]: I0227 16:12:10.930749 4830 status_manager.go:851] "Failed to get status for pod" podUID="789ee180-dd8e-4cb2-884e-beea08667c53" pod="openshift-marketplace/certified-operators-dnpxp" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-dnpxp\": dial tcp 38.129.56.36:6443: connect: connection refused" Feb 27 16:12:10 crc kubenswrapper[4830]: I0227 16:12:10.931238 4830 status_manager.go:851] "Failed to get status for pod" podUID="1c5e2cae-7890-48fb-ab76-7e53c52fd6ac" pod="openshift-marketplace/community-operators-s4bpk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-s4bpk\": dial tcp 38.129.56.36:6443: connect: connection refused" Feb 27 16:12:10 crc kubenswrapper[4830]: I0227 16:12:10.931677 4830 status_manager.go:851] "Failed to get status for pod" podUID="d3e80191-de07-41aa-b0d7-69b826f5378b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.36:6443: connect: connection refused" Feb 27 16:12:10 crc kubenswrapper[4830]: I0227 16:12:10.932287 4830 status_manager.go:851] "Failed to get status for pod" podUID="8b33138a-5b9d-4af8-b13d-4db4c2613983" pod="openshift-marketplace/community-operators-966h2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-966h2\": dial tcp 38.129.56.36:6443: connect: connection refused" Feb 27 16:12:10 crc kubenswrapper[4830]: I0227 16:12:10.932749 4830 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.36:6443: connect: connection refused" Feb 27 16:12:10 crc kubenswrapper[4830]: I0227 16:12:10.933092 4830 status_manager.go:851] "Failed to get status for pod" podUID="f2579681-6b81-4b58-9d2c-c26b123be8ec" pod="openshift-marketplace/certified-operators-k7l8d" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-k7l8d\": dial tcp 38.129.56.36:6443: connect: connection refused"
Feb 27 16:12:10 crc kubenswrapper[4830]: I0227 16:12:10.933351 4830 status_manager.go:851] "Failed to get status for pod" podUID="dce3358b-25c4-4fe9-a3fa-0a0be053e8f0" pod="openshift-infra/auto-csr-approver-29536812-7wkbt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29536812-7wkbt\": dial tcp 38.129.56.36:6443: connect: connection refused"
Feb 27 16:12:10 crc kubenswrapper[4830]: I0227 16:12:10.933605 4830 status_manager.go:851] "Failed to get status for pod" podUID="789ee180-dd8e-4cb2-884e-beea08667c53" pod="openshift-marketplace/certified-operators-dnpxp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-dnpxp\": dial tcp 38.129.56.36:6443: connect: connection refused"
Feb 27 16:12:10 crc kubenswrapper[4830]: I0227 16:12:10.934057 4830 status_manager.go:851] "Failed to get status for pod" podUID="1c5e2cae-7890-48fb-ab76-7e53c52fd6ac" pod="openshift-marketplace/community-operators-s4bpk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-s4bpk\": dial tcp 38.129.56.36:6443: connect: connection refused"
Feb 27 16:12:10 crc kubenswrapper[4830]: I0227 16:12:10.934578 4830 status_manager.go:851] "Failed to get status for pod" podUID="d3e80191-de07-41aa-b0d7-69b826f5378b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.36:6443: connect: connection refused"
Feb 27 16:12:10 crc kubenswrapper[4830]: E0227 16:12:10.951989 4830 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.36:6443: connect: connection refused" interval="800ms"
Feb 27 16:12:11 crc kubenswrapper[4830]: E0227 16:12:11.753088 4830 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.36:6443: connect: connection refused" interval="1.6s"
Feb 27 16:12:12 crc kubenswrapper[4830]: I0227 16:12:12.191805 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-kkwcl"
Feb 27 16:12:12 crc kubenswrapper[4830]: I0227 16:12:12.192583 4830 status_manager.go:851] "Failed to get status for pod" podUID="a7e1e0a3-a7d4-4508-b84e-6ba87fced6fc" pod="openshift-marketplace/redhat-marketplace-kkwcl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kkwcl\": dial tcp 38.129.56.36:6443: connect: connection refused"
Feb 27 16:12:12 crc kubenswrapper[4830]: I0227 16:12:12.193271 4830 status_manager.go:851] "Failed to get status for pod" podUID="8b33138a-5b9d-4af8-b13d-4db4c2613983" pod="openshift-marketplace/community-operators-966h2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-966h2\": dial tcp 38.129.56.36:6443: connect: connection refused"
Feb 27 16:12:12 crc kubenswrapper[4830]: I0227 16:12:12.193815 4830 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.36:6443: connect: connection refused"
Feb 27 16:12:12 crc kubenswrapper[4830]: I0227 16:12:12.194413 4830 status_manager.go:851] "Failed to get status for pod" podUID="f2579681-6b81-4b58-9d2c-c26b123be8ec" pod="openshift-marketplace/certified-operators-k7l8d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-k7l8d\": dial tcp 38.129.56.36:6443: connect: connection refused"
Feb 27 16:12:12 crc kubenswrapper[4830]: I0227 16:12:12.194848 4830 status_manager.go:851] "Failed to get status for pod" podUID="dce3358b-25c4-4fe9-a3fa-0a0be053e8f0" pod="openshift-infra/auto-csr-approver-29536812-7wkbt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29536812-7wkbt\": dial tcp 38.129.56.36:6443: connect: connection refused"
Feb 27 16:12:12 crc kubenswrapper[4830]: I0227 16:12:12.195268 4830 status_manager.go:851] "Failed to get status for pod" podUID="789ee180-dd8e-4cb2-884e-beea08667c53" pod="openshift-marketplace/certified-operators-dnpxp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-dnpxp\": dial tcp 38.129.56.36:6443: connect: connection refused"
Feb 27 16:12:12 crc kubenswrapper[4830]: I0227 16:12:12.195802 4830 status_manager.go:851] "Failed to get status for pod" podUID="1c5e2cae-7890-48fb-ab76-7e53c52fd6ac" pod="openshift-marketplace/community-operators-s4bpk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-s4bpk\": dial tcp 38.129.56.36:6443: connect: connection refused"
Feb 27 16:12:12 crc kubenswrapper[4830]: I0227 16:12:12.196607 4830 status_manager.go:851] "Failed to get status for pod" podUID="d3e80191-de07-41aa-b0d7-69b826f5378b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.36:6443: connect: connection refused"
Feb 27 16:12:12 crc kubenswrapper[4830]: I0227 16:12:12.258024 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-kkwcl"
Feb 27 16:12:12 crc kubenswrapper[4830]: I0227 16:12:12.258826 4830 status_manager.go:851] "Failed to get status for pod" podUID="789ee180-dd8e-4cb2-884e-beea08667c53" pod="openshift-marketplace/certified-operators-dnpxp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-dnpxp\": dial tcp 38.129.56.36:6443: connect: connection refused"
Feb 27 16:12:12 crc kubenswrapper[4830]: I0227 16:12:12.259417 4830 status_manager.go:851] "Failed to get status for pod" podUID="1c5e2cae-7890-48fb-ab76-7e53c52fd6ac" pod="openshift-marketplace/community-operators-s4bpk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-s4bpk\": dial tcp 38.129.56.36:6443: connect: connection refused"
Feb 27 16:12:12 crc kubenswrapper[4830]: I0227 16:12:12.259848 4830 status_manager.go:851] "Failed to get status for pod" podUID="d3e80191-de07-41aa-b0d7-69b826f5378b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.36:6443: connect: connection refused"
Feb 27 16:12:12 crc kubenswrapper[4830]: I0227 16:12:12.260381 4830 status_manager.go:851] "Failed to get status for pod" podUID="a7e1e0a3-a7d4-4508-b84e-6ba87fced6fc" pod="openshift-marketplace/redhat-marketplace-kkwcl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kkwcl\": dial tcp 38.129.56.36:6443: connect: connection refused"
Feb 27 16:12:12 crc kubenswrapper[4830]: I0227 16:12:12.260901 4830 status_manager.go:851] "Failed to get status for pod" podUID="8b33138a-5b9d-4af8-b13d-4db4c2613983" pod="openshift-marketplace/community-operators-966h2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-966h2\": dial tcp 38.129.56.36:6443: connect: connection refused"
Feb 27 16:12:12 crc kubenswrapper[4830]: I0227 16:12:12.261371 4830 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.36:6443: connect: connection refused"
Feb 27 16:12:12 crc kubenswrapper[4830]: I0227 16:12:12.261737 4830 status_manager.go:851] "Failed to get status for pod" podUID="f2579681-6b81-4b58-9d2c-c26b123be8ec" pod="openshift-marketplace/certified-operators-k7l8d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-k7l8d\": dial tcp 38.129.56.36:6443: connect: connection refused"
Feb 27 16:12:12 crc kubenswrapper[4830]: I0227 16:12:12.262231 4830 status_manager.go:851] "Failed to get status for pod" podUID="dce3358b-25c4-4fe9-a3fa-0a0be053e8f0" pod="openshift-infra/auto-csr-approver-29536812-7wkbt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29536812-7wkbt\": dial tcp 38.129.56.36:6443: connect: connection refused"
Feb 27 16:12:12 crc kubenswrapper[4830]: I0227 16:12:12.586698 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-zwcdd"
Feb 27 16:12:12 crc kubenswrapper[4830]: I0227 16:12:12.587652 4830 status_manager.go:851] "Failed to get status for pod" podUID="789ee180-dd8e-4cb2-884e-beea08667c53" pod="openshift-marketplace/certified-operators-dnpxp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-dnpxp\": dial tcp 38.129.56.36:6443: connect: connection refused"
Feb 27 16:12:12 crc kubenswrapper[4830]: I0227 16:12:12.588381 4830 status_manager.go:851] "Failed to get status for pod" podUID="1c5e2cae-7890-48fb-ab76-7e53c52fd6ac" pod="openshift-marketplace/community-operators-s4bpk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-s4bpk\": dial tcp 38.129.56.36:6443: connect: connection refused"
Feb 27 16:12:12 crc kubenswrapper[4830]: I0227 16:12:12.589279 4830 status_manager.go:851] "Failed to get status for pod" podUID="d3e80191-de07-41aa-b0d7-69b826f5378b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.36:6443: connect: connection refused"
Feb 27 16:12:12 crc kubenswrapper[4830]: I0227 16:12:12.590235 4830 status_manager.go:851] "Failed to get status for pod" podUID="a7e1e0a3-a7d4-4508-b84e-6ba87fced6fc" pod="openshift-marketplace/redhat-marketplace-kkwcl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kkwcl\": dial tcp 38.129.56.36:6443: connect: connection refused"
Feb 27 16:12:12 crc kubenswrapper[4830]: I0227 16:12:12.590679 4830 status_manager.go:851] "Failed to get status for pod" podUID="8b33138a-5b9d-4af8-b13d-4db4c2613983" pod="openshift-marketplace/community-operators-966h2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-966h2\": dial tcp 38.129.56.36:6443: connect: connection refused"
Feb 27 16:12:12 crc kubenswrapper[4830]: I0227 16:12:12.591369 4830 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.36:6443: connect: connection refused"
Feb 27 16:12:12 crc kubenswrapper[4830]: I0227 16:12:12.592294 4830 status_manager.go:851] "Failed to get status for pod" podUID="f2579681-6b81-4b58-9d2c-c26b123be8ec" pod="openshift-marketplace/certified-operators-k7l8d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-k7l8d\": dial tcp 38.129.56.36:6443: connect: connection refused"
Feb 27 16:12:12 crc kubenswrapper[4830]: I0227 16:12:12.592819 4830 status_manager.go:851] "Failed to get status for pod" podUID="dce3358b-25c4-4fe9-a3fa-0a0be053e8f0" pod="openshift-infra/auto-csr-approver-29536812-7wkbt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29536812-7wkbt\": dial tcp 38.129.56.36:6443: connect: connection refused"
Feb 27 16:12:12 crc kubenswrapper[4830]: I0227 16:12:12.593354 4830 status_manager.go:851] "Failed to get status for pod" podUID="728cab24-3fc3-4249-b37e-183d5676c191" pod="openshift-marketplace/redhat-marketplace-zwcdd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-zwcdd\": dial tcp 38.129.56.36:6443: connect: connection refused"
Feb 27 16:12:12 crc kubenswrapper[4830]: I0227 16:12:12.654303 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-zwcdd"
Feb 27 16:12:12 crc kubenswrapper[4830]: I0227 16:12:12.654929 4830 status_manager.go:851] "Failed to get status for pod" podUID="a7e1e0a3-a7d4-4508-b84e-6ba87fced6fc" pod="openshift-marketplace/redhat-marketplace-kkwcl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kkwcl\": dial tcp 38.129.56.36:6443: connect: connection refused"
Feb 27 16:12:12 crc kubenswrapper[4830]: I0227 16:12:12.655492 4830 status_manager.go:851] "Failed to get status for pod" podUID="8b33138a-5b9d-4af8-b13d-4db4c2613983" pod="openshift-marketplace/community-operators-966h2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-966h2\": dial tcp 38.129.56.36:6443: connect: connection refused"
Feb 27 16:12:12 crc kubenswrapper[4830]: I0227 16:12:12.655931 4830 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.36:6443: connect: connection refused"
Feb 27 16:12:12 crc kubenswrapper[4830]: I0227 16:12:12.656347 4830 status_manager.go:851] "Failed to get status for pod" podUID="f2579681-6b81-4b58-9d2c-c26b123be8ec" pod="openshift-marketplace/certified-operators-k7l8d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-k7l8d\": dial tcp 38.129.56.36:6443: connect: connection refused"
Feb 27 16:12:12 crc kubenswrapper[4830]: I0227 16:12:12.656835 4830 status_manager.go:851] "Failed to get status for pod" podUID="dce3358b-25c4-4fe9-a3fa-0a0be053e8f0" pod="openshift-infra/auto-csr-approver-29536812-7wkbt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29536812-7wkbt\": dial tcp 38.129.56.36:6443: connect: connection refused"
Feb 27 16:12:12 crc kubenswrapper[4830]: I0227 16:12:12.657346 4830 status_manager.go:851] "Failed to get status for pod" podUID="728cab24-3fc3-4249-b37e-183d5676c191" pod="openshift-marketplace/redhat-marketplace-zwcdd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-zwcdd\": dial tcp 38.129.56.36:6443: connect: connection refused"
Feb 27 16:12:12 crc kubenswrapper[4830]: I0227 16:12:12.657743 4830 status_manager.go:851] "Failed to get status for pod" podUID="789ee180-dd8e-4cb2-884e-beea08667c53" pod="openshift-marketplace/certified-operators-dnpxp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-dnpxp\": dial tcp 38.129.56.36:6443: connect: connection refused"
Feb 27 16:12:12 crc kubenswrapper[4830]: I0227 16:12:12.658211 4830 status_manager.go:851] "Failed to get status for pod" podUID="1c5e2cae-7890-48fb-ab76-7e53c52fd6ac" pod="openshift-marketplace/community-operators-s4bpk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-s4bpk\": dial tcp 38.129.56.36:6443: connect: connection refused"
Feb 27 16:12:12 crc kubenswrapper[4830]: I0227 16:12:12.658609 4830 status_manager.go:851] "Failed to get status for pod" podUID="d3e80191-de07-41aa-b0d7-69b826f5378b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.36:6443: connect: connection refused"
Feb 27 16:12:13 crc kubenswrapper[4830]: I0227 16:12:13.319246 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-tr5cj"
Feb 27 16:12:13 crc kubenswrapper[4830]: I0227 16:12:13.320021 4830 status_manager.go:851] "Failed to get status for pod" podUID="48011108-ee2c-4d3b-9f28-65cfc91b90ab" pod="openshift-marketplace/redhat-operators-tr5cj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tr5cj\": dial tcp 38.129.56.36:6443: connect: connection refused"
Feb 27 16:12:13 crc kubenswrapper[4830]: I0227 16:12:13.320558 4830 status_manager.go:851] "Failed to get status for pod" podUID="8b33138a-5b9d-4af8-b13d-4db4c2613983" pod="openshift-marketplace/community-operators-966h2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-966h2\": dial tcp 38.129.56.36:6443: connect: connection refused"
Feb 27 16:12:13 crc kubenswrapper[4830]: I0227 16:12:13.321206 4830 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.36:6443: connect: connection refused"
Feb 27 16:12:13 crc kubenswrapper[4830]: I0227 16:12:13.321792 4830 status_manager.go:851] "Failed to get status for pod" podUID="dce3358b-25c4-4fe9-a3fa-0a0be053e8f0" pod="openshift-infra/auto-csr-approver-29536812-7wkbt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29536812-7wkbt\": dial tcp 38.129.56.36:6443: connect: connection refused"
Feb 27 16:12:13 crc kubenswrapper[4830]: I0227 16:12:13.322364 4830 status_manager.go:851] "Failed to get status for pod" podUID="f2579681-6b81-4b58-9d2c-c26b123be8ec" pod="openshift-marketplace/certified-operators-k7l8d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-k7l8d\": dial tcp 38.129.56.36:6443: connect: connection refused"
Feb 27 16:12:13 crc kubenswrapper[4830]: I0227 16:12:13.322843 4830 status_manager.go:851] "Failed to get status for pod" podUID="728cab24-3fc3-4249-b37e-183d5676c191" pod="openshift-marketplace/redhat-marketplace-zwcdd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-zwcdd\": dial tcp 38.129.56.36:6443: connect: connection refused"
Feb 27 16:12:13 crc kubenswrapper[4830]: I0227 16:12:13.323334 4830 status_manager.go:851] "Failed to get status for pod" podUID="789ee180-dd8e-4cb2-884e-beea08667c53" pod="openshift-marketplace/certified-operators-dnpxp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-dnpxp\": dial tcp 38.129.56.36:6443: connect: connection refused"
Feb 27 16:12:13 crc kubenswrapper[4830]: I0227 16:12:13.323802 4830 status_manager.go:851] "Failed to get status for pod" podUID="1c5e2cae-7890-48fb-ab76-7e53c52fd6ac" pod="openshift-marketplace/community-operators-s4bpk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-s4bpk\": dial tcp 38.129.56.36:6443: connect: connection refused"
Feb 27 16:12:13 crc kubenswrapper[4830]: I0227 16:12:13.324328 4830 status_manager.go:851] "Failed to get status for pod" podUID="d3e80191-de07-41aa-b0d7-69b826f5378b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.36:6443: connect: connection refused"
Feb 27 16:12:13 crc kubenswrapper[4830]: I0227 16:12:13.324751 4830 status_manager.go:851] "Failed to get status for pod" podUID="a7e1e0a3-a7d4-4508-b84e-6ba87fced6fc" pod="openshift-marketplace/redhat-marketplace-kkwcl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kkwcl\": dial tcp 38.129.56.36:6443: connect: connection refused"
Feb 27 16:12:13 crc kubenswrapper[4830]: E0227 16:12:13.354735 4830 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.36:6443: connect: connection refused" interval="3.2s"
Feb 27 16:12:13 crc kubenswrapper[4830]: I0227 16:12:13.385559 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-tr5cj"
Feb 27 16:12:13 crc kubenswrapper[4830]: I0227 16:12:13.386180 4830 status_manager.go:851] "Failed to get status for pod" podUID="a7e1e0a3-a7d4-4508-b84e-6ba87fced6fc" pod="openshift-marketplace/redhat-marketplace-kkwcl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kkwcl\": dial tcp 38.129.56.36:6443: connect: connection refused"
Feb 27 16:12:13 crc kubenswrapper[4830]: I0227 16:12:13.386664 4830 status_manager.go:851] "Failed to get status for pod" podUID="48011108-ee2c-4d3b-9f28-65cfc91b90ab" pod="openshift-marketplace/redhat-operators-tr5cj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tr5cj\": dial tcp 38.129.56.36:6443: connect: connection refused"
Feb 27 16:12:13 crc kubenswrapper[4830]: I0227 16:12:13.387432 4830 status_manager.go:851] "Failed to get status for pod" podUID="8b33138a-5b9d-4af8-b13d-4db4c2613983" pod="openshift-marketplace/community-operators-966h2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-966h2\": dial tcp 38.129.56.36:6443: connect: connection refused"
Feb 27 16:12:13 crc kubenswrapper[4830]: I0227 16:12:13.387848 4830 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.36:6443: connect: connection refused"
Feb 27 16:12:13 crc kubenswrapper[4830]: I0227 16:12:13.388289 4830 status_manager.go:851] "Failed to get status for pod" podUID="dce3358b-25c4-4fe9-a3fa-0a0be053e8f0" pod="openshift-infra/auto-csr-approver-29536812-7wkbt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29536812-7wkbt\": dial tcp 38.129.56.36:6443: connect: connection refused"
Feb 27 16:12:13 crc kubenswrapper[4830]: I0227 16:12:13.388773 4830 status_manager.go:851] "Failed to get status for pod" podUID="f2579681-6b81-4b58-9d2c-c26b123be8ec" pod="openshift-marketplace/certified-operators-k7l8d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-k7l8d\": dial tcp 38.129.56.36:6443: connect: connection refused"
Feb 27 16:12:13 crc kubenswrapper[4830]: I0227 16:12:13.389332 4830 status_manager.go:851] "Failed to get status for pod" podUID="728cab24-3fc3-4249-b37e-183d5676c191" pod="openshift-marketplace/redhat-marketplace-zwcdd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-zwcdd\": dial tcp 38.129.56.36:6443: connect: connection refused"
Feb 27 16:12:13 crc kubenswrapper[4830]: I0227 16:12:13.389750 4830 status_manager.go:851] "Failed to get status for pod" podUID="789ee180-dd8e-4cb2-884e-beea08667c53" pod="openshift-marketplace/certified-operators-dnpxp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-dnpxp\": dial tcp 38.129.56.36:6443: connect: connection refused"
Feb 27 16:12:13 crc kubenswrapper[4830]: I0227 16:12:13.390256 4830 status_manager.go:851] "Failed to get status for pod" podUID="1c5e2cae-7890-48fb-ab76-7e53c52fd6ac" pod="openshift-marketplace/community-operators-s4bpk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-s4bpk\": dial tcp 38.129.56.36:6443: connect: connection refused"
Feb 27 16:12:13 crc kubenswrapper[4830]: I0227 16:12:13.390711 4830 status_manager.go:851] "Failed to get status for pod" podUID="d3e80191-de07-41aa-b0d7-69b826f5378b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.36:6443: connect: connection refused"
Feb 27 16:12:13 crc kubenswrapper[4830]: I0227 16:12:13.585647 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-s5z2n"
Feb 27 16:12:13 crc kubenswrapper[4830]: I0227 16:12:13.586844 4830 status_manager.go:851] "Failed to get status for pod" podUID="a7e1e0a3-a7d4-4508-b84e-6ba87fced6fc" pod="openshift-marketplace/redhat-marketplace-kkwcl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kkwcl\": dial tcp 38.129.56.36:6443: connect: connection refused"
Feb 27 16:12:13 crc kubenswrapper[4830]: I0227 16:12:13.587437 4830 status_manager.go:851] "Failed to get status for pod" podUID="8b33138a-5b9d-4af8-b13d-4db4c2613983" pod="openshift-marketplace/community-operators-966h2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-966h2\": dial tcp 38.129.56.36:6443: connect: connection refused"
Feb 27 16:12:13 crc kubenswrapper[4830]: I0227 16:12:13.588030 4830 status_manager.go:851] "Failed to get status for pod" podUID="48011108-ee2c-4d3b-9f28-65cfc91b90ab" pod="openshift-marketplace/redhat-operators-tr5cj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tr5cj\": dial tcp 38.129.56.36:6443: connect: connection refused"
Feb 27 16:12:13 crc kubenswrapper[4830]: I0227 16:12:13.588570 4830 status_manager.go:851] "Failed to get status for pod" podUID="514ae4c6-322a-458e-a1e5-df6d6a47fc88" pod="openshift-marketplace/redhat-operators-s5z2n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-s5z2n\": dial tcp 38.129.56.36:6443: connect: connection refused"
Feb 27 16:12:13 crc kubenswrapper[4830]: I0227 16:12:13.589108 4830 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.36:6443: connect: connection refused"
Feb 27 16:12:13 crc kubenswrapper[4830]: I0227 16:12:13.589605 4830 status_manager.go:851] "Failed to get status for pod" podUID="f2579681-6b81-4b58-9d2c-c26b123be8ec" pod="openshift-marketplace/certified-operators-k7l8d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-k7l8d\": dial tcp 38.129.56.36:6443: connect: connection refused"
Feb 27 16:12:13 crc kubenswrapper[4830]: I0227 16:12:13.590177 4830 status_manager.go:851] "Failed to get status for pod" podUID="dce3358b-25c4-4fe9-a3fa-0a0be053e8f0" pod="openshift-infra/auto-csr-approver-29536812-7wkbt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29536812-7wkbt\": dial tcp 38.129.56.36:6443: connect: connection refused"
Feb 27 16:12:13 crc kubenswrapper[4830]: I0227 16:12:13.590768 4830 status_manager.go:851] "Failed to get status for pod" podUID="728cab24-3fc3-4249-b37e-183d5676c191" pod="openshift-marketplace/redhat-marketplace-zwcdd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-zwcdd\": dial tcp 38.129.56.36:6443: connect: connection refused"
Feb 27 16:12:13 crc kubenswrapper[4830]: I0227 16:12:13.591294 4830 status_manager.go:851] "Failed to get status for pod" podUID="789ee180-dd8e-4cb2-884e-beea08667c53" pod="openshift-marketplace/certified-operators-dnpxp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-dnpxp\": dial tcp 38.129.56.36:6443: connect: connection refused"
Feb 27 16:12:13 crc kubenswrapper[4830]: I0227 16:12:13.591742 4830 status_manager.go:851] "Failed to get status for pod" podUID="1c5e2cae-7890-48fb-ab76-7e53c52fd6ac" pod="openshift-marketplace/community-operators-s4bpk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-s4bpk\": dial tcp 38.129.56.36:6443: connect: connection refused"
Feb 27 16:12:13 crc kubenswrapper[4830]: I0227 16:12:13.592252 4830 status_manager.go:851] "Failed to get status for pod" podUID="d3e80191-de07-41aa-b0d7-69b826f5378b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.36:6443: connect: connection refused"
Feb 27 16:12:13 crc kubenswrapper[4830]: I0227 16:12:13.657704 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-s5z2n"
Feb 27 16:12:13 crc kubenswrapper[4830]: I0227 16:12:13.658505 4830 status_manager.go:851] "Failed to get status for pod" podUID="a7e1e0a3-a7d4-4508-b84e-6ba87fced6fc" pod="openshift-marketplace/redhat-marketplace-kkwcl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kkwcl\": dial tcp 38.129.56.36:6443: connect: connection refused"
Feb 27 16:12:13 crc kubenswrapper[4830]: I0227 16:12:13.659367 4830 status_manager.go:851] "Failed to get status for pod" podUID="8b33138a-5b9d-4af8-b13d-4db4c2613983" pod="openshift-marketplace/community-operators-966h2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-966h2\": dial tcp 38.129.56.36:6443: connect: connection refused"
Feb 27 16:12:13 crc kubenswrapper[4830]: I0227 16:12:13.660011 4830 status_manager.go:851] "Failed to get status for pod" podUID="48011108-ee2c-4d3b-9f28-65cfc91b90ab" pod="openshift-marketplace/redhat-operators-tr5cj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tr5cj\": dial tcp 38.129.56.36:6443: connect: connection refused"
Feb 27 16:12:13 crc kubenswrapper[4830]: I0227 16:12:13.660835 4830 status_manager.go:851] "Failed to get status for pod" podUID="514ae4c6-322a-458e-a1e5-df6d6a47fc88" pod="openshift-marketplace/redhat-operators-s5z2n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-s5z2n\": dial tcp 38.129.56.36:6443: connect: connection refused"
Feb 27 16:12:13 crc kubenswrapper[4830]: I0227 16:12:13.661535 4830 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.36:6443: connect: connection refused"
Feb 27 16:12:13 crc kubenswrapper[4830]: I0227 16:12:13.662111 4830 status_manager.go:851] "Failed to get status for pod" podUID="f2579681-6b81-4b58-9d2c-c26b123be8ec" pod="openshift-marketplace/certified-operators-k7l8d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-k7l8d\": dial tcp 38.129.56.36:6443: connect: connection refused"
Feb 27 16:12:13 crc kubenswrapper[4830]: I0227 16:12:13.662639 4830 status_manager.go:851] "Failed to get status for pod" podUID="dce3358b-25c4-4fe9-a3fa-0a0be053e8f0" pod="openshift-infra/auto-csr-approver-29536812-7wkbt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29536812-7wkbt\": dial tcp 38.129.56.36:6443: connect: connection refused"
Feb 27 16:12:13 crc kubenswrapper[4830]: I0227 16:12:13.663339 4830 status_manager.go:851] "Failed to get status for pod" podUID="728cab24-3fc3-4249-b37e-183d5676c191" pod="openshift-marketplace/redhat-marketplace-zwcdd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-zwcdd\": dial tcp 38.129.56.36:6443: connect: connection refused"
Feb 27 16:12:13 crc kubenswrapper[4830]: I0227 16:12:13.663877 4830 status_manager.go:851] "Failed to get status for pod" podUID="789ee180-dd8e-4cb2-884e-beea08667c53" pod="openshift-marketplace/certified-operators-dnpxp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-dnpxp\": dial tcp 38.129.56.36:6443: connect: connection refused"
Feb 27 16:12:13 crc kubenswrapper[4830]: I0227 16:12:13.664368 4830 status_manager.go:851] "Failed to get status for pod" podUID="1c5e2cae-7890-48fb-ab76-7e53c52fd6ac" pod="openshift-marketplace/community-operators-s4bpk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-s4bpk\": dial tcp 38.129.56.36:6443: connect: connection refused"
Feb 27 16:12:13 crc kubenswrapper[4830]: I0227 16:12:13.664882 4830 status_manager.go:851] "Failed to get status for pod" podUID="d3e80191-de07-41aa-b0d7-69b826f5378b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.36:6443: connect: connection refused"
Feb 27 16:12:14 crc kubenswrapper[4830]: I0227 16:12:14.767809 4830 status_manager.go:851] "Failed to get status for pod" podUID="d3e80191-de07-41aa-b0d7-69b826f5378b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.36:6443: connect: connection refused"
Feb 27 16:12:14 crc kubenswrapper[4830]: I0227 16:12:14.769897 4830 status_manager.go:851] "Failed to get status for pod" podUID="a7e1e0a3-a7d4-4508-b84e-6ba87fced6fc" pod="openshift-marketplace/redhat-marketplace-kkwcl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kkwcl\": dial tcp 38.129.56.36:6443: connect: connection refused"
Feb 27 16:12:14 crc kubenswrapper[4830]: I0227 16:12:14.771590 4830 status_manager.go:851] "Failed to get status for pod" podUID="8b33138a-5b9d-4af8-b13d-4db4c2613983" pod="openshift-marketplace/community-operators-966h2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-966h2\": dial tcp 38.129.56.36:6443: connect: connection refused"
Feb 27 16:12:14 crc kubenswrapper[4830]: I0227 16:12:14.772726 4830 status_manager.go:851] "Failed to get status for pod" podUID="48011108-ee2c-4d3b-9f28-65cfc91b90ab" pod="openshift-marketplace/redhat-operators-tr5cj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tr5cj\": dial tcp 38.129.56.36:6443: connect: connection refused"
Feb 27 16:12:14 crc kubenswrapper[4830]: I0227 16:12:14.773385 4830 status_manager.go:851] "Failed to get status for pod" podUID="514ae4c6-322a-458e-a1e5-df6d6a47fc88" pod="openshift-marketplace/redhat-operators-s5z2n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-s5z2n\": dial tcp 38.129.56.36:6443: connect: connection refused"
Feb 27 16:12:14 crc kubenswrapper[4830]: I0227 16:12:14.773815 4830 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.36:6443: connect: connection refused"
Feb 27 16:12:14 crc kubenswrapper[4830]: I0227 16:12:14.774656 4830 status_manager.go:851] "Failed to get status for pod" podUID="f2579681-6b81-4b58-9d2c-c26b123be8ec" pod="openshift-marketplace/certified-operators-k7l8d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-k7l8d\": dial tcp 38.129.56.36:6443: connect: connection refused"
Feb 27 16:12:14 crc kubenswrapper[4830]: I0227 16:12:14.775308 4830 status_manager.go:851] "Failed to get status for pod" podUID="dce3358b-25c4-4fe9-a3fa-0a0be053e8f0" pod="openshift-infra/auto-csr-approver-29536812-7wkbt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29536812-7wkbt\": dial tcp 38.129.56.36:6443: connect: connection refused"
Feb 27 16:12:14 crc kubenswrapper[4830]: I0227 16:12:14.776472 4830 status_manager.go:851] "Failed to get status for pod" podUID="728cab24-3fc3-4249-b37e-183d5676c191" pod="openshift-marketplace/redhat-marketplace-zwcdd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-zwcdd\": dial tcp 38.129.56.36:6443: connect: connection refused"
Feb 27 16:12:14 crc kubenswrapper[4830]: I0227 16:12:14.777342 4830 status_manager.go:851] "Failed to get status for pod" podUID="789ee180-dd8e-4cb2-884e-beea08667c53" pod="openshift-marketplace/certified-operators-dnpxp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-dnpxp\": dial tcp 38.129.56.36:6443: connect: connection refused"
Feb 27 16:12:14 crc kubenswrapper[4830]: I0227 16:12:14.777701 4830 status_manager.go:851] "Failed to get status for pod" podUID="1c5e2cae-7890-48fb-ab76-7e53c52fd6ac" pod="openshift-marketplace/community-operators-s4bpk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-s4bpk\": dial tcp 38.129.56.36:6443: connect: connection refused"
Feb 27 16:12:15 crc kubenswrapper[4830]: I0227 16:12:15.761774 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 27 16:12:15 crc kubenswrapper[4830]: I0227 16:12:15.762888 4830 status_manager.go:851] "Failed to get status for pod" podUID="a7e1e0a3-a7d4-4508-b84e-6ba87fced6fc" pod="openshift-marketplace/redhat-marketplace-kkwcl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kkwcl\": dial tcp 38.129.56.36:6443: connect: connection refused"
Feb 27 16:12:15 crc kubenswrapper[4830]: I0227 16:12:15.763504 4830 status_manager.go:851] "Failed to get status for pod" podUID="8b33138a-5b9d-4af8-b13d-4db4c2613983" pod="openshift-marketplace/community-operators-966h2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-966h2\": dial tcp 38.129.56.36:6443: connect: connection refused"
Feb 27 16:12:15 crc kubenswrapper[4830]: I0227 16:12:15.765019 4830 status_manager.go:851] "Failed to get status for pod" podUID="48011108-ee2c-4d3b-9f28-65cfc91b90ab" pod="openshift-marketplace/redhat-operators-tr5cj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tr5cj\": dial tcp 38.129.56.36:6443: connect: connection refused"
Feb 27 16:12:15 crc kubenswrapper[4830]: I0227 16:12:15.765457 4830 status_manager.go:851] "Failed to get status for pod" podUID="514ae4c6-322a-458e-a1e5-df6d6a47fc88" pod="openshift-marketplace/redhat-operators-s5z2n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-s5z2n\": dial tcp 38.129.56.36:6443: connect: connection refused"
Feb 27 16:12:15 crc kubenswrapper[4830]: I0227 16:12:15.765886 4830 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.36:6443: connect: connection refused" Feb 27 16:12:15 crc kubenswrapper[4830]: I0227 16:12:15.766286 4830 status_manager.go:851] "Failed to get status for pod" podUID="f2579681-6b81-4b58-9d2c-c26b123be8ec" pod="openshift-marketplace/certified-operators-k7l8d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-k7l8d\": dial tcp 38.129.56.36:6443: connect: connection refused" Feb 27 16:12:15 crc kubenswrapper[4830]: I0227 16:12:15.766613 4830 status_manager.go:851] "Failed to get status for pod" podUID="dce3358b-25c4-4fe9-a3fa-0a0be053e8f0" pod="openshift-infra/auto-csr-approver-29536812-7wkbt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29536812-7wkbt\": dial tcp 38.129.56.36:6443: connect: connection refused" Feb 27 16:12:15 crc kubenswrapper[4830]: I0227 16:12:15.766925 4830 status_manager.go:851] "Failed to get status for pod" podUID="728cab24-3fc3-4249-b37e-183d5676c191" pod="openshift-marketplace/redhat-marketplace-zwcdd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-zwcdd\": dial tcp 38.129.56.36:6443: connect: connection refused" Feb 27 16:12:15 crc kubenswrapper[4830]: I0227 16:12:15.767589 4830 status_manager.go:851] "Failed to get status for pod" podUID="789ee180-dd8e-4cb2-884e-beea08667c53" pod="openshift-marketplace/certified-operators-dnpxp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-dnpxp\": dial tcp 38.129.56.36:6443: connect: connection refused" Feb 27 16:12:15 crc kubenswrapper[4830]: I0227 16:12:15.768103 4830 status_manager.go:851] "Failed to get status for pod" podUID="1c5e2cae-7890-48fb-ab76-7e53c52fd6ac" pod="openshift-marketplace/community-operators-s4bpk" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-s4bpk\": dial tcp 38.129.56.36:6443: connect: connection refused" Feb 27 16:12:15 crc kubenswrapper[4830]: I0227 16:12:15.768408 4830 status_manager.go:851] "Failed to get status for pod" podUID="d3e80191-de07-41aa-b0d7-69b826f5378b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.36:6443: connect: connection refused" Feb 27 16:12:15 crc kubenswrapper[4830]: I0227 16:12:15.789343 4830 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9d20f886-cfdb-48c7-9754-6b7255b1124f" Feb 27 16:12:15 crc kubenswrapper[4830]: I0227 16:12:15.789391 4830 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9d20f886-cfdb-48c7-9754-6b7255b1124f" Feb 27 16:12:15 crc kubenswrapper[4830]: E0227 16:12:15.789887 4830 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.36:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 16:12:15 crc kubenswrapper[4830]: I0227 16:12:15.790653 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 16:12:15 crc kubenswrapper[4830]: W0227 16:12:15.817639 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-c33ddbc59b868157a5b58be5e8ab5df1e015b9be55ed0c1cb671211e21c508e3 WatchSource:0}: Error finding container c33ddbc59b868157a5b58be5e8ab5df1e015b9be55ed0c1cb671211e21c508e3: Status 404 returned error can't find the container with id c33ddbc59b868157a5b58be5e8ab5df1e015b9be55ed0c1cb671211e21c508e3 Feb 27 16:12:15 crc kubenswrapper[4830]: I0227 16:12:15.895541 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"c33ddbc59b868157a5b58be5e8ab5df1e015b9be55ed0c1cb671211e21c508e3"} Feb 27 16:12:15 crc kubenswrapper[4830]: E0227 16:12:15.966297 4830 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.129.56.36:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.18982671ee21da69 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:12:04.138646121 +0000 UTC m=+320.227918584,LastTimestamp:2026-02-27 16:12:04.138646121 +0000 UTC 
m=+320.227918584,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:12:16 crc kubenswrapper[4830]: E0227 16:12:16.556177 4830 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.36:6443: connect: connection refused" interval="6.4s" Feb 27 16:12:17 crc kubenswrapper[4830]: E0227 16:12:17.815789 4830 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-a576bccb73520a452eea30512283d7bca1141a05d0c697885d106fb132c3df7a.scope\": RecentStats: unable to find data in memory cache]" Feb 27 16:12:17 crc kubenswrapper[4830]: I0227 16:12:17.916187 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"c2a16a0873f0a68d55dd0f33142ce480e76e3e1fe291f615d9b7a5991d5a599f"} Feb 27 16:12:18 crc kubenswrapper[4830]: I0227 16:12:18.961584 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log" Feb 27 16:12:18 crc kubenswrapper[4830]: I0227 16:12:18.962813 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 27 16:12:18 crc kubenswrapper[4830]: I0227 16:12:18.962899 4830 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="a576bccb73520a452eea30512283d7bca1141a05d0c697885d106fb132c3df7a" exitCode=1 Feb 27 16:12:18 crc 
kubenswrapper[4830]: I0227 16:12:18.963052 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"a576bccb73520a452eea30512283d7bca1141a05d0c697885d106fb132c3df7a"} Feb 27 16:12:18 crc kubenswrapper[4830]: I0227 16:12:18.963821 4830 scope.go:117] "RemoveContainer" containerID="a576bccb73520a452eea30512283d7bca1141a05d0c697885d106fb132c3df7a" Feb 27 16:12:18 crc kubenswrapper[4830]: I0227 16:12:18.964138 4830 status_manager.go:851] "Failed to get status for pod" podUID="a7e1e0a3-a7d4-4508-b84e-6ba87fced6fc" pod="openshift-marketplace/redhat-marketplace-kkwcl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kkwcl\": dial tcp 38.129.56.36:6443: connect: connection refused" Feb 27 16:12:18 crc kubenswrapper[4830]: I0227 16:12:18.964670 4830 status_manager.go:851] "Failed to get status for pod" podUID="8b33138a-5b9d-4af8-b13d-4db4c2613983" pod="openshift-marketplace/community-operators-966h2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-966h2\": dial tcp 38.129.56.36:6443: connect: connection refused" Feb 27 16:12:18 crc kubenswrapper[4830]: I0227 16:12:18.965279 4830 status_manager.go:851] "Failed to get status for pod" podUID="48011108-ee2c-4d3b-9f28-65cfc91b90ab" pod="openshift-marketplace/redhat-operators-tr5cj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tr5cj\": dial tcp 38.129.56.36:6443: connect: connection refused" Feb 27 16:12:18 crc kubenswrapper[4830]: I0227 16:12:18.965695 4830 status_manager.go:851] "Failed to get status for pod" podUID="514ae4c6-322a-458e-a1e5-df6d6a47fc88" pod="openshift-marketplace/redhat-operators-s5z2n" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-s5z2n\": dial tcp 38.129.56.36:6443: connect: connection refused" Feb 27 16:12:18 crc kubenswrapper[4830]: I0227 16:12:18.966219 4830 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.36:6443: connect: connection refused" Feb 27 16:12:18 crc kubenswrapper[4830]: I0227 16:12:18.966626 4830 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.36:6443: connect: connection refused" Feb 27 16:12:18 crc kubenswrapper[4830]: I0227 16:12:18.967517 4830 status_manager.go:851] "Failed to get status for pod" podUID="f2579681-6b81-4b58-9d2c-c26b123be8ec" pod="openshift-marketplace/certified-operators-k7l8d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-k7l8d\": dial tcp 38.129.56.36:6443: connect: connection refused" Feb 27 16:12:18 crc kubenswrapper[4830]: I0227 16:12:18.968108 4830 status_manager.go:851] "Failed to get status for pod" podUID="dce3358b-25c4-4fe9-a3fa-0a0be053e8f0" pod="openshift-infra/auto-csr-approver-29536812-7wkbt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29536812-7wkbt\": dial tcp 38.129.56.36:6443: connect: connection refused" Feb 27 16:12:18 crc kubenswrapper[4830]: I0227 16:12:18.968631 4830 status_manager.go:851] "Failed to get status for pod" podUID="728cab24-3fc3-4249-b37e-183d5676c191" 
pod="openshift-marketplace/redhat-marketplace-zwcdd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-zwcdd\": dial tcp 38.129.56.36:6443: connect: connection refused" Feb 27 16:12:18 crc kubenswrapper[4830]: I0227 16:12:18.969076 4830 status_manager.go:851] "Failed to get status for pod" podUID="789ee180-dd8e-4cb2-884e-beea08667c53" pod="openshift-marketplace/certified-operators-dnpxp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-dnpxp\": dial tcp 38.129.56.36:6443: connect: connection refused" Feb 27 16:12:18 crc kubenswrapper[4830]: I0227 16:12:18.969435 4830 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="c2a16a0873f0a68d55dd0f33142ce480e76e3e1fe291f615d9b7a5991d5a599f" exitCode=0 Feb 27 16:12:18 crc kubenswrapper[4830]: I0227 16:12:18.969488 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"c2a16a0873f0a68d55dd0f33142ce480e76e3e1fe291f615d9b7a5991d5a599f"} Feb 27 16:12:18 crc kubenswrapper[4830]: I0227 16:12:18.969556 4830 status_manager.go:851] "Failed to get status for pod" podUID="1c5e2cae-7890-48fb-ab76-7e53c52fd6ac" pod="openshift-marketplace/community-operators-s4bpk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-s4bpk\": dial tcp 38.129.56.36:6443: connect: connection refused" Feb 27 16:12:18 crc kubenswrapper[4830]: I0227 16:12:18.969780 4830 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9d20f886-cfdb-48c7-9754-6b7255b1124f" Feb 27 16:12:18 crc kubenswrapper[4830]: I0227 16:12:18.969818 4830 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9d20f886-cfdb-48c7-9754-6b7255b1124f" 
Feb 27 16:12:18 crc kubenswrapper[4830]: I0227 16:12:18.970067 4830 status_manager.go:851] "Failed to get status for pod" podUID="d3e80191-de07-41aa-b0d7-69b826f5378b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.36:6443: connect: connection refused" Feb 27 16:12:18 crc kubenswrapper[4830]: E0227 16:12:18.970379 4830 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.36:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 16:12:18 crc kubenswrapper[4830]: I0227 16:12:18.970624 4830 status_manager.go:851] "Failed to get status for pod" podUID="a7e1e0a3-a7d4-4508-b84e-6ba87fced6fc" pod="openshift-marketplace/redhat-marketplace-kkwcl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kkwcl\": dial tcp 38.129.56.36:6443: connect: connection refused" Feb 27 16:12:18 crc kubenswrapper[4830]: I0227 16:12:18.971152 4830 status_manager.go:851] "Failed to get status for pod" podUID="8b33138a-5b9d-4af8-b13d-4db4c2613983" pod="openshift-marketplace/community-operators-966h2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-966h2\": dial tcp 38.129.56.36:6443: connect: connection refused" Feb 27 16:12:18 crc kubenswrapper[4830]: I0227 16:12:18.971595 4830 status_manager.go:851] "Failed to get status for pod" podUID="48011108-ee2c-4d3b-9f28-65cfc91b90ab" pod="openshift-marketplace/redhat-operators-tr5cj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tr5cj\": dial tcp 38.129.56.36:6443: connect: connection refused" Feb 27 16:12:18 crc kubenswrapper[4830]: I0227 16:12:18.972045 4830 
status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.36:6443: connect: connection refused" Feb 27 16:12:18 crc kubenswrapper[4830]: I0227 16:12:18.972384 4830 status_manager.go:851] "Failed to get status for pod" podUID="514ae4c6-322a-458e-a1e5-df6d6a47fc88" pod="openshift-marketplace/redhat-operators-s5z2n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-s5z2n\": dial tcp 38.129.56.36:6443: connect: connection refused" Feb 27 16:12:18 crc kubenswrapper[4830]: I0227 16:12:18.972927 4830 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.36:6443: connect: connection refused" Feb 27 16:12:18 crc kubenswrapper[4830]: I0227 16:12:18.973393 4830 status_manager.go:851] "Failed to get status for pod" podUID="f2579681-6b81-4b58-9d2c-c26b123be8ec" pod="openshift-marketplace/certified-operators-k7l8d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-k7l8d\": dial tcp 38.129.56.36:6443: connect: connection refused" Feb 27 16:12:18 crc kubenswrapper[4830]: I0227 16:12:18.973740 4830 status_manager.go:851] "Failed to get status for pod" podUID="dce3358b-25c4-4fe9-a3fa-0a0be053e8f0" pod="openshift-infra/auto-csr-approver-29536812-7wkbt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-infra/pods/auto-csr-approver-29536812-7wkbt\": dial tcp 38.129.56.36:6443: connect: connection refused" Feb 27 16:12:18 crc kubenswrapper[4830]: 
I0227 16:12:18.974218 4830 status_manager.go:851] "Failed to get status for pod" podUID="728cab24-3fc3-4249-b37e-183d5676c191" pod="openshift-marketplace/redhat-marketplace-zwcdd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-zwcdd\": dial tcp 38.129.56.36:6443: connect: connection refused" Feb 27 16:12:18 crc kubenswrapper[4830]: I0227 16:12:18.974683 4830 status_manager.go:851] "Failed to get status for pod" podUID="789ee180-dd8e-4cb2-884e-beea08667c53" pod="openshift-marketplace/certified-operators-dnpxp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-dnpxp\": dial tcp 38.129.56.36:6443: connect: connection refused" Feb 27 16:12:18 crc kubenswrapper[4830]: I0227 16:12:18.975170 4830 status_manager.go:851] "Failed to get status for pod" podUID="1c5e2cae-7890-48fb-ab76-7e53c52fd6ac" pod="openshift-marketplace/community-operators-s4bpk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-s4bpk\": dial tcp 38.129.56.36:6443: connect: connection refused" Feb 27 16:12:18 crc kubenswrapper[4830]: I0227 16:12:18.975591 4830 status_manager.go:851] "Failed to get status for pod" podUID="d3e80191-de07-41aa-b0d7-69b826f5378b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.36:6443: connect: connection refused" Feb 27 16:12:19 crc kubenswrapper[4830]: I0227 16:12:19.987229 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"6b05d7f3c5abb14f757d04764a87026f7e35065654144b0c6b4ff4fcc88a78b5"} Feb 27 16:12:19 crc kubenswrapper[4830]: I0227 16:12:19.988100 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"a8f0b1c5222be9e6b49b8f2b86aa5b442395bb5fac26ded4a2961d4964d396ed"} Feb 27 16:12:19 crc kubenswrapper[4830]: I0227 16:12:19.995553 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log" Feb 27 16:12:19 crc kubenswrapper[4830]: I0227 16:12:19.996594 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 27 16:12:19 crc kubenswrapper[4830]: I0227 16:12:19.996645 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"96ed598b50959187b0d70255050cb0af0be001bd3a82b3958edceba665c96d1a"} Feb 27 16:12:20 crc kubenswrapper[4830]: I0227 16:12:20.870215 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 27 16:12:20 crc kubenswrapper[4830]: I0227 16:12:20.870476 4830 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Feb 27 16:12:20 crc kubenswrapper[4830]: I0227 16:12:20.870537 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: 
connection refused" Feb 27 16:12:21 crc kubenswrapper[4830]: I0227 16:12:21.006875 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"836256ea44a213eab952f36fdd4b08ed13c7168cf3546baeb36eecb39ecdb445"} Feb 27 16:12:21 crc kubenswrapper[4830]: I0227 16:12:21.007189 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"cda370ec44f6d82ba102a75362316c044d178d94af1f4a6234b5029ac5d03d53"} Feb 27 16:12:21 crc kubenswrapper[4830]: I0227 16:12:21.007199 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"e85e94b9e38499e96f1ffd9e776c3bf6853736c2e783fcc729ed4ea8f5a741b8"} Feb 27 16:12:21 crc kubenswrapper[4830]: I0227 16:12:21.007309 4830 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9d20f886-cfdb-48c7-9754-6b7255b1124f" Feb 27 16:12:21 crc kubenswrapper[4830]: I0227 16:12:21.007341 4830 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9d20f886-cfdb-48c7-9754-6b7255b1124f" Feb 27 16:12:25 crc kubenswrapper[4830]: I0227 16:12:25.790834 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 16:12:25 crc kubenswrapper[4830]: I0227 16:12:25.791194 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 16:12:25 crc kubenswrapper[4830]: I0227 16:12:25.791210 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 16:12:25 crc kubenswrapper[4830]: I0227 16:12:25.796363 
4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 16:12:26 crc kubenswrapper[4830]: I0227 16:12:26.019008 4830 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 16:12:26 crc kubenswrapper[4830]: I0227 16:12:26.036284 4830 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9d20f886-cfdb-48c7-9754-6b7255b1124f" Feb 27 16:12:26 crc kubenswrapper[4830]: I0227 16:12:26.036318 4830 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9d20f886-cfdb-48c7-9754-6b7255b1124f" Feb 27 16:12:26 crc kubenswrapper[4830]: I0227 16:12:26.039244 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 16:12:26 crc kubenswrapper[4830]: I0227 16:12:26.137643 4830 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="7f2d168a-0f63-45da-b435-14e1192dee9a" Feb 27 16:12:26 crc kubenswrapper[4830]: I0227 16:12:26.456921 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-vs8sq" podUID="f18ef53a-23d0-4f48-b7a4-96f2716e137f" containerName="oauth-openshift" containerID="cri-o://2c6a7624dc201e01490accbe5f90f7436104c0a890bb255b4a85fd5b80e889da" gracePeriod=15 Feb 27 16:12:26 crc kubenswrapper[4830]: I0227 16:12:26.652743 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 27 16:12:27 crc kubenswrapper[4830]: I0227 16:12:27.022637 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-vs8sq" Feb 27 16:12:27 crc kubenswrapper[4830]: I0227 16:12:27.045840 4830 generic.go:334] "Generic (PLEG): container finished" podID="f18ef53a-23d0-4f48-b7a4-96f2716e137f" containerID="2c6a7624dc201e01490accbe5f90f7436104c0a890bb255b4a85fd5b80e889da" exitCode=0 Feb 27 16:12:27 crc kubenswrapper[4830]: I0227 16:12:27.045905 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-vs8sq" event={"ID":"f18ef53a-23d0-4f48-b7a4-96f2716e137f","Type":"ContainerDied","Data":"2c6a7624dc201e01490accbe5f90f7436104c0a890bb255b4a85fd5b80e889da"} Feb 27 16:12:27 crc kubenswrapper[4830]: I0227 16:12:27.045935 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-vs8sq" Feb 27 16:12:27 crc kubenswrapper[4830]: I0227 16:12:27.045985 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-vs8sq" event={"ID":"f18ef53a-23d0-4f48-b7a4-96f2716e137f","Type":"ContainerDied","Data":"6b51f4484e7a3a8e1a60b7c39c00240728b6a4fa179b2594fb0271caab50eb68"} Feb 27 16:12:27 crc kubenswrapper[4830]: I0227 16:12:27.046018 4830 scope.go:117] "RemoveContainer" containerID="2c6a7624dc201e01490accbe5f90f7436104c0a890bb255b4a85fd5b80e889da" Feb 27 16:12:27 crc kubenswrapper[4830]: I0227 16:12:27.046416 4830 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9d20f886-cfdb-48c7-9754-6b7255b1124f" Feb 27 16:12:27 crc kubenswrapper[4830]: I0227 16:12:27.046439 4830 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9d20f886-cfdb-48c7-9754-6b7255b1124f" Feb 27 16:12:27 crc kubenswrapper[4830]: I0227 16:12:27.060887 4830 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" 
pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="7f2d168a-0f63-45da-b435-14e1192dee9a" Feb 27 16:12:27 crc kubenswrapper[4830]: I0227 16:12:27.070059 4830 scope.go:117] "RemoveContainer" containerID="2c6a7624dc201e01490accbe5f90f7436104c0a890bb255b4a85fd5b80e889da" Feb 27 16:12:27 crc kubenswrapper[4830]: E0227 16:12:27.070507 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c6a7624dc201e01490accbe5f90f7436104c0a890bb255b4a85fd5b80e889da\": container with ID starting with 2c6a7624dc201e01490accbe5f90f7436104c0a890bb255b4a85fd5b80e889da not found: ID does not exist" containerID="2c6a7624dc201e01490accbe5f90f7436104c0a890bb255b4a85fd5b80e889da" Feb 27 16:12:27 crc kubenswrapper[4830]: I0227 16:12:27.070554 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c6a7624dc201e01490accbe5f90f7436104c0a890bb255b4a85fd5b80e889da"} err="failed to get container status \"2c6a7624dc201e01490accbe5f90f7436104c0a890bb255b4a85fd5b80e889da\": rpc error: code = NotFound desc = could not find container \"2c6a7624dc201e01490accbe5f90f7436104c0a890bb255b4a85fd5b80e889da\": container with ID starting with 2c6a7624dc201e01490accbe5f90f7436104c0a890bb255b4a85fd5b80e889da not found: ID does not exist" Feb 27 16:12:27 crc kubenswrapper[4830]: I0227 16:12:27.131126 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f18ef53a-23d0-4f48-b7a4-96f2716e137f-v4-0-config-system-router-certs\") pod \"f18ef53a-23d0-4f48-b7a4-96f2716e137f\" (UID: \"f18ef53a-23d0-4f48-b7a4-96f2716e137f\") " Feb 27 16:12:27 crc kubenswrapper[4830]: I0227 16:12:27.131170 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/f18ef53a-23d0-4f48-b7a4-96f2716e137f-v4-0-config-system-serving-cert\") pod \"f18ef53a-23d0-4f48-b7a4-96f2716e137f\" (UID: \"f18ef53a-23d0-4f48-b7a4-96f2716e137f\") " Feb 27 16:12:27 crc kubenswrapper[4830]: I0227 16:12:27.131191 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f18ef53a-23d0-4f48-b7a4-96f2716e137f-v4-0-config-user-template-error\") pod \"f18ef53a-23d0-4f48-b7a4-96f2716e137f\" (UID: \"f18ef53a-23d0-4f48-b7a4-96f2716e137f\") " Feb 27 16:12:27 crc kubenswrapper[4830]: I0227 16:12:27.131211 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f18ef53a-23d0-4f48-b7a4-96f2716e137f-audit-dir\") pod \"f18ef53a-23d0-4f48-b7a4-96f2716e137f\" (UID: \"f18ef53a-23d0-4f48-b7a4-96f2716e137f\") " Feb 27 16:12:27 crc kubenswrapper[4830]: I0227 16:12:27.131242 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f18ef53a-23d0-4f48-b7a4-96f2716e137f-v4-0-config-system-service-ca\") pod \"f18ef53a-23d0-4f48-b7a4-96f2716e137f\" (UID: \"f18ef53a-23d0-4f48-b7a4-96f2716e137f\") " Feb 27 16:12:27 crc kubenswrapper[4830]: I0227 16:12:27.131260 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f18ef53a-23d0-4f48-b7a4-96f2716e137f-v4-0-config-user-idp-0-file-data\") pod \"f18ef53a-23d0-4f48-b7a4-96f2716e137f\" (UID: \"f18ef53a-23d0-4f48-b7a4-96f2716e137f\") " Feb 27 16:12:27 crc kubenswrapper[4830]: I0227 16:12:27.131279 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-94mnb\" (UniqueName: \"kubernetes.io/projected/f18ef53a-23d0-4f48-b7a4-96f2716e137f-kube-api-access-94mnb\") pod 
\"f18ef53a-23d0-4f48-b7a4-96f2716e137f\" (UID: \"f18ef53a-23d0-4f48-b7a4-96f2716e137f\") " Feb 27 16:12:27 crc kubenswrapper[4830]: I0227 16:12:27.131296 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f18ef53a-23d0-4f48-b7a4-96f2716e137f-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f18ef53a-23d0-4f48-b7a4-96f2716e137f" (UID: "f18ef53a-23d0-4f48-b7a4-96f2716e137f"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 16:12:27 crc kubenswrapper[4830]: I0227 16:12:27.131317 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f18ef53a-23d0-4f48-b7a4-96f2716e137f-v4-0-config-user-template-provider-selection\") pod \"f18ef53a-23d0-4f48-b7a4-96f2716e137f\" (UID: \"f18ef53a-23d0-4f48-b7a4-96f2716e137f\") " Feb 27 16:12:27 crc kubenswrapper[4830]: I0227 16:12:27.131426 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f18ef53a-23d0-4f48-b7a4-96f2716e137f-v4-0-config-system-trusted-ca-bundle\") pod \"f18ef53a-23d0-4f48-b7a4-96f2716e137f\" (UID: \"f18ef53a-23d0-4f48-b7a4-96f2716e137f\") " Feb 27 16:12:27 crc kubenswrapper[4830]: I0227 16:12:27.131458 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f18ef53a-23d0-4f48-b7a4-96f2716e137f-v4-0-config-system-session\") pod \"f18ef53a-23d0-4f48-b7a4-96f2716e137f\" (UID: \"f18ef53a-23d0-4f48-b7a4-96f2716e137f\") " Feb 27 16:12:27 crc kubenswrapper[4830]: I0227 16:12:27.131493 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f18ef53a-23d0-4f48-b7a4-96f2716e137f-v4-0-config-system-cliconfig\") pod 
\"f18ef53a-23d0-4f48-b7a4-96f2716e137f\" (UID: \"f18ef53a-23d0-4f48-b7a4-96f2716e137f\") " Feb 27 16:12:27 crc kubenswrapper[4830]: I0227 16:12:27.131511 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f18ef53a-23d0-4f48-b7a4-96f2716e137f-v4-0-config-user-template-login\") pod \"f18ef53a-23d0-4f48-b7a4-96f2716e137f\" (UID: \"f18ef53a-23d0-4f48-b7a4-96f2716e137f\") " Feb 27 16:12:27 crc kubenswrapper[4830]: I0227 16:12:27.131534 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f18ef53a-23d0-4f48-b7a4-96f2716e137f-audit-policies\") pod \"f18ef53a-23d0-4f48-b7a4-96f2716e137f\" (UID: \"f18ef53a-23d0-4f48-b7a4-96f2716e137f\") " Feb 27 16:12:27 crc kubenswrapper[4830]: I0227 16:12:27.131559 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f18ef53a-23d0-4f48-b7a4-96f2716e137f-v4-0-config-system-ocp-branding-template\") pod \"f18ef53a-23d0-4f48-b7a4-96f2716e137f\" (UID: \"f18ef53a-23d0-4f48-b7a4-96f2716e137f\") " Feb 27 16:12:27 crc kubenswrapper[4830]: I0227 16:12:27.131886 4830 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f18ef53a-23d0-4f48-b7a4-96f2716e137f-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 27 16:12:27 crc kubenswrapper[4830]: I0227 16:12:27.132265 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f18ef53a-23d0-4f48-b7a4-96f2716e137f-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "f18ef53a-23d0-4f48-b7a4-96f2716e137f" (UID: "f18ef53a-23d0-4f48-b7a4-96f2716e137f"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:12:27 crc kubenswrapper[4830]: I0227 16:12:27.132299 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f18ef53a-23d0-4f48-b7a4-96f2716e137f-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "f18ef53a-23d0-4f48-b7a4-96f2716e137f" (UID: "f18ef53a-23d0-4f48-b7a4-96f2716e137f"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:12:27 crc kubenswrapper[4830]: I0227 16:12:27.132295 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f18ef53a-23d0-4f48-b7a4-96f2716e137f-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "f18ef53a-23d0-4f48-b7a4-96f2716e137f" (UID: "f18ef53a-23d0-4f48-b7a4-96f2716e137f"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:12:27 crc kubenswrapper[4830]: I0227 16:12:27.132641 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f18ef53a-23d0-4f48-b7a4-96f2716e137f-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "f18ef53a-23d0-4f48-b7a4-96f2716e137f" (UID: "f18ef53a-23d0-4f48-b7a4-96f2716e137f"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:12:27 crc kubenswrapper[4830]: I0227 16:12:27.137320 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f18ef53a-23d0-4f48-b7a4-96f2716e137f-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "f18ef53a-23d0-4f48-b7a4-96f2716e137f" (UID: "f18ef53a-23d0-4f48-b7a4-96f2716e137f"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:12:27 crc kubenswrapper[4830]: I0227 16:12:27.138073 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f18ef53a-23d0-4f48-b7a4-96f2716e137f-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "f18ef53a-23d0-4f48-b7a4-96f2716e137f" (UID: "f18ef53a-23d0-4f48-b7a4-96f2716e137f"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:12:27 crc kubenswrapper[4830]: I0227 16:12:27.138338 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f18ef53a-23d0-4f48-b7a4-96f2716e137f-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "f18ef53a-23d0-4f48-b7a4-96f2716e137f" (UID: "f18ef53a-23d0-4f48-b7a4-96f2716e137f"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:12:27 crc kubenswrapper[4830]: I0227 16:12:27.138730 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f18ef53a-23d0-4f48-b7a4-96f2716e137f-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "f18ef53a-23d0-4f48-b7a4-96f2716e137f" (UID: "f18ef53a-23d0-4f48-b7a4-96f2716e137f"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:12:27 crc kubenswrapper[4830]: I0227 16:12:27.138772 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f18ef53a-23d0-4f48-b7a4-96f2716e137f-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "f18ef53a-23d0-4f48-b7a4-96f2716e137f" (UID: "f18ef53a-23d0-4f48-b7a4-96f2716e137f"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:12:27 crc kubenswrapper[4830]: I0227 16:12:27.139932 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f18ef53a-23d0-4f48-b7a4-96f2716e137f-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "f18ef53a-23d0-4f48-b7a4-96f2716e137f" (UID: "f18ef53a-23d0-4f48-b7a4-96f2716e137f"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:12:27 crc kubenswrapper[4830]: I0227 16:12:27.143812 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f18ef53a-23d0-4f48-b7a4-96f2716e137f-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "f18ef53a-23d0-4f48-b7a4-96f2716e137f" (UID: "f18ef53a-23d0-4f48-b7a4-96f2716e137f"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:12:27 crc kubenswrapper[4830]: I0227 16:12:27.146119 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f18ef53a-23d0-4f48-b7a4-96f2716e137f-kube-api-access-94mnb" (OuterVolumeSpecName: "kube-api-access-94mnb") pod "f18ef53a-23d0-4f48-b7a4-96f2716e137f" (UID: "f18ef53a-23d0-4f48-b7a4-96f2716e137f"). InnerVolumeSpecName "kube-api-access-94mnb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:12:27 crc kubenswrapper[4830]: I0227 16:12:27.146594 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f18ef53a-23d0-4f48-b7a4-96f2716e137f-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "f18ef53a-23d0-4f48-b7a4-96f2716e137f" (UID: "f18ef53a-23d0-4f48-b7a4-96f2716e137f"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:12:27 crc kubenswrapper[4830]: I0227 16:12:27.232792 4830 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f18ef53a-23d0-4f48-b7a4-96f2716e137f-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 27 16:12:27 crc kubenswrapper[4830]: I0227 16:12:27.232837 4830 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f18ef53a-23d0-4f48-b7a4-96f2716e137f-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 27 16:12:27 crc kubenswrapper[4830]: I0227 16:12:27.232853 4830 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f18ef53a-23d0-4f48-b7a4-96f2716e137f-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 27 16:12:27 crc kubenswrapper[4830]: I0227 16:12:27.232865 4830 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f18ef53a-23d0-4f48-b7a4-96f2716e137f-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 16:12:27 crc kubenswrapper[4830]: I0227 16:12:27.232889 4830 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f18ef53a-23d0-4f48-b7a4-96f2716e137f-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 27 16:12:27 crc kubenswrapper[4830]: I0227 16:12:27.232901 4830 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f18ef53a-23d0-4f48-b7a4-96f2716e137f-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 27 16:12:27 crc kubenswrapper[4830]: I0227 16:12:27.232915 4830 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" 
(UniqueName: \"kubernetes.io/secret/f18ef53a-23d0-4f48-b7a4-96f2716e137f-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 27 16:12:27 crc kubenswrapper[4830]: I0227 16:12:27.232929 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-94mnb\" (UniqueName: \"kubernetes.io/projected/f18ef53a-23d0-4f48-b7a4-96f2716e137f-kube-api-access-94mnb\") on node \"crc\" DevicePath \"\"" Feb 27 16:12:27 crc kubenswrapper[4830]: I0227 16:12:27.232960 4830 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f18ef53a-23d0-4f48-b7a4-96f2716e137f-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 27 16:12:27 crc kubenswrapper[4830]: I0227 16:12:27.232974 4830 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f18ef53a-23d0-4f48-b7a4-96f2716e137f-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 16:12:27 crc kubenswrapper[4830]: I0227 16:12:27.232987 4830 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f18ef53a-23d0-4f48-b7a4-96f2716e137f-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 27 16:12:27 crc kubenswrapper[4830]: I0227 16:12:27.232999 4830 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f18ef53a-23d0-4f48-b7a4-96f2716e137f-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 27 16:12:27 crc kubenswrapper[4830]: I0227 16:12:27.233010 4830 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f18ef53a-23d0-4f48-b7a4-96f2716e137f-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 27 16:12:30 crc kubenswrapper[4830]: I0227 
16:12:30.878771 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 27 16:12:30 crc kubenswrapper[4830]: I0227 16:12:30.888812 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 27 16:12:35 crc kubenswrapper[4830]: I0227 16:12:35.609036 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 27 16:12:35 crc kubenswrapper[4830]: I0227 16:12:35.790014 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 27 16:12:36 crc kubenswrapper[4830]: I0227 16:12:36.018581 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 27 16:12:36 crc kubenswrapper[4830]: I0227 16:12:36.177339 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 27 16:12:36 crc kubenswrapper[4830]: I0227 16:12:36.177438 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 27 16:12:36 crc kubenswrapper[4830]: I0227 16:12:36.353635 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 27 16:12:36 crc kubenswrapper[4830]: I0227 16:12:36.730218 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 27 16:12:36 crc kubenswrapper[4830]: I0227 16:12:36.878929 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 27 16:12:37 crc kubenswrapper[4830]: I0227 16:12:37.632564 4830 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 27 16:12:37 crc kubenswrapper[4830]: I0227 16:12:37.690896 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 27 16:12:37 crc kubenswrapper[4830]: I0227 16:12:37.723888 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 27 16:12:37 crc kubenswrapper[4830]: I0227 16:12:37.800874 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 27 16:12:37 crc kubenswrapper[4830]: I0227 16:12:37.892926 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 27 16:12:37 crc kubenswrapper[4830]: I0227 16:12:37.894752 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 27 16:12:38 crc kubenswrapper[4830]: I0227 16:12:38.029717 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 27 16:12:38 crc kubenswrapper[4830]: I0227 16:12:38.089339 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 27 16:12:38 crc kubenswrapper[4830]: I0227 16:12:38.167483 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 27 16:12:38 crc kubenswrapper[4830]: I0227 16:12:38.173793 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 27 16:12:38 crc kubenswrapper[4830]: I0227 16:12:38.208126 4830 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 27 16:12:38 crc kubenswrapper[4830]: I0227 16:12:38.233251 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 27 16:12:38 crc kubenswrapper[4830]: I0227 16:12:38.453306 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 27 16:12:38 crc kubenswrapper[4830]: I0227 16:12:38.642150 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 27 16:12:38 crc kubenswrapper[4830]: I0227 16:12:38.713327 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 27 16:12:38 crc kubenswrapper[4830]: I0227 16:12:38.727539 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 27 16:12:38 crc kubenswrapper[4830]: I0227 16:12:38.934539 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 27 16:12:39 crc kubenswrapper[4830]: I0227 16:12:39.167725 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 27 16:12:39 crc kubenswrapper[4830]: I0227 16:12:39.234470 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 27 16:12:39 crc kubenswrapper[4830]: I0227 16:12:39.386275 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 27 16:12:39 crc kubenswrapper[4830]: I0227 16:12:39.447053 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 27 16:12:39 crc kubenswrapper[4830]: I0227 16:12:39.488733 4830 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 27 16:12:39 crc kubenswrapper[4830]: I0227 16:12:39.549290 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 27 16:12:39 crc kubenswrapper[4830]: I0227 16:12:39.797360 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 27 16:12:39 crc kubenswrapper[4830]: I0227 16:12:39.798651 4830 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 27 16:12:39 crc kubenswrapper[4830]: I0227 16:12:39.802775 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=36.802749919 podStartE2EDuration="36.802749919s" podCreationTimestamp="2026-02-27 16:12:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:12:25.946478228 +0000 UTC m=+342.035750711" watchObservedRunningTime="2026-02-27 16:12:39.802749919 +0000 UTC m=+355.892022422" Feb 27 16:12:39 crc kubenswrapper[4830]: I0227 16:12:39.807342 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-vs8sq","openshift-kube-apiserver/kube-apiserver-crc"] Feb 27 16:12:39 crc kubenswrapper[4830]: I0227 16:12:39.807437 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 27 16:12:39 crc kubenswrapper[4830]: I0227 16:12:39.812520 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 16:12:39 crc kubenswrapper[4830]: I0227 16:12:39.837753 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podStartSLOduration=13.837732702 podStartE2EDuration="13.837732702s" podCreationTimestamp="2026-02-27 16:12:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:12:39.83376486 +0000 UTC m=+355.923037343" watchObservedRunningTime="2026-02-27 16:12:39.837732702 +0000 UTC m=+355.927005175" Feb 27 16:12:39 crc kubenswrapper[4830]: I0227 16:12:39.893005 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 27 16:12:39 crc kubenswrapper[4830]: I0227 16:12:39.956416 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 27 16:12:40 crc kubenswrapper[4830]: I0227 16:12:40.259084 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 27 16:12:40 crc kubenswrapper[4830]: I0227 16:12:40.289941 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 27 16:12:40 crc kubenswrapper[4830]: I0227 16:12:40.344250 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 27 16:12:40 crc kubenswrapper[4830]: I0227 16:12:40.348347 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 27 16:12:40 crc kubenswrapper[4830]: I0227 16:12:40.369842 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 27 16:12:40 crc kubenswrapper[4830]: I0227 16:12:40.389162 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 27 16:12:40 crc kubenswrapper[4830]: I0227 16:12:40.423608 4830 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 27 16:12:40 crc kubenswrapper[4830]: I0227 16:12:40.452972 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 27 16:12:40 crc kubenswrapper[4830]: I0227 16:12:40.522560 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 27 16:12:40 crc kubenswrapper[4830]: I0227 16:12:40.533039 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 27 16:12:40 crc kubenswrapper[4830]: I0227 16:12:40.553470 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 27 16:12:40 crc kubenswrapper[4830]: I0227 16:12:40.646709 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 27 16:12:40 crc kubenswrapper[4830]: I0227 16:12:40.666307 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 27 16:12:40 crc kubenswrapper[4830]: I0227 16:12:40.669524 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 27 16:12:40 crc kubenswrapper[4830]: I0227 16:12:40.698392 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 27 16:12:40 crc kubenswrapper[4830]: I0227 16:12:40.752825 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 27 16:12:40 crc kubenswrapper[4830]: I0227 16:12:40.774115 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f18ef53a-23d0-4f48-b7a4-96f2716e137f" path="/var/lib/kubelet/pods/f18ef53a-23d0-4f48-b7a4-96f2716e137f/volumes" Feb 27 16:12:40 crc kubenswrapper[4830]: I0227 
16:12:40.847396 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 27 16:12:40 crc kubenswrapper[4830]: I0227 16:12:40.858386 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 27 16:12:40 crc kubenswrapper[4830]: I0227 16:12:40.871392 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 27 16:12:40 crc kubenswrapper[4830]: I0227 16:12:40.966040 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 27 16:12:40 crc kubenswrapper[4830]: I0227 16:12:40.979845 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 27 16:12:40 crc kubenswrapper[4830]: I0227 16:12:40.998043 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 27 16:12:41 crc kubenswrapper[4830]: I0227 16:12:41.002569 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 27 16:12:41 crc kubenswrapper[4830]: I0227 16:12:41.021267 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 27 16:12:41 crc kubenswrapper[4830]: I0227 16:12:41.029380 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 27 16:12:41 crc kubenswrapper[4830]: I0227 16:12:41.029803 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 27 16:12:41 crc kubenswrapper[4830]: I0227 16:12:41.081520 4830 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 27 16:12:41 crc kubenswrapper[4830]: I0227 16:12:41.104596 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 27 16:12:41 crc kubenswrapper[4830]: I0227 16:12:41.122109 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 27 16:12:41 crc kubenswrapper[4830]: I0227 16:12:41.147583 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 27 16:12:41 crc kubenswrapper[4830]: I0227 16:12:41.189392 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 27 16:12:41 crc kubenswrapper[4830]: I0227 16:12:41.250160 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 27 16:12:41 crc kubenswrapper[4830]: I0227 16:12:41.271842 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 27 16:12:41 crc kubenswrapper[4830]: I0227 16:12:41.296937 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 27 16:12:41 crc kubenswrapper[4830]: I0227 16:12:41.364706 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 27 16:12:41 crc kubenswrapper[4830]: I0227 16:12:41.490050 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-5475f99f5f-pgn76"] Feb 27 16:12:41 crc kubenswrapper[4830]: E0227 16:12:41.490356 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f18ef53a-23d0-4f48-b7a4-96f2716e137f" containerName="oauth-openshift" Feb 27 
16:12:41 crc kubenswrapper[4830]: I0227 16:12:41.490385 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="f18ef53a-23d0-4f48-b7a4-96f2716e137f" containerName="oauth-openshift" Feb 27 16:12:41 crc kubenswrapper[4830]: E0227 16:12:41.490422 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dce3358b-25c4-4fe9-a3fa-0a0be053e8f0" containerName="oc" Feb 27 16:12:41 crc kubenswrapper[4830]: I0227 16:12:41.490435 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="dce3358b-25c4-4fe9-a3fa-0a0be053e8f0" containerName="oc" Feb 27 16:12:41 crc kubenswrapper[4830]: E0227 16:12:41.490451 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3e80191-de07-41aa-b0d7-69b826f5378b" containerName="installer" Feb 27 16:12:41 crc kubenswrapper[4830]: I0227 16:12:41.490465 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3e80191-de07-41aa-b0d7-69b826f5378b" containerName="installer" Feb 27 16:12:41 crc kubenswrapper[4830]: I0227 16:12:41.490657 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="dce3358b-25c4-4fe9-a3fa-0a0be053e8f0" containerName="oc" Feb 27 16:12:41 crc kubenswrapper[4830]: I0227 16:12:41.490692 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="f18ef53a-23d0-4f48-b7a4-96f2716e137f" containerName="oauth-openshift" Feb 27 16:12:41 crc kubenswrapper[4830]: I0227 16:12:41.490723 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3e80191-de07-41aa-b0d7-69b826f5378b" containerName="installer" Feb 27 16:12:41 crc kubenswrapper[4830]: I0227 16:12:41.491338 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-5475f99f5f-pgn76" Feb 27 16:12:41 crc kubenswrapper[4830]: I0227 16:12:41.495599 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 27 16:12:41 crc kubenswrapper[4830]: I0227 16:12:41.496117 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 27 16:12:41 crc kubenswrapper[4830]: I0227 16:12:41.498929 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 27 16:12:41 crc kubenswrapper[4830]: I0227 16:12:41.508493 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 27 16:12:41 crc kubenswrapper[4830]: I0227 16:12:41.509500 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 27 16:12:41 crc kubenswrapper[4830]: I0227 16:12:41.509621 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 27 16:12:41 crc kubenswrapper[4830]: I0227 16:12:41.509705 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 27 16:12:41 crc kubenswrapper[4830]: I0227 16:12:41.509532 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 27 16:12:41 crc kubenswrapper[4830]: I0227 16:12:41.510052 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 27 16:12:41 crc kubenswrapper[4830]: I0227 16:12:41.510099 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 27 16:12:41 
crc kubenswrapper[4830]: I0227 16:12:41.510897 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 27 16:12:41 crc kubenswrapper[4830]: I0227 16:12:41.512877 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 27 16:12:41 crc kubenswrapper[4830]: I0227 16:12:41.526735 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 27 16:12:41 crc kubenswrapper[4830]: I0227 16:12:41.528501 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 27 16:12:41 crc kubenswrapper[4830]: I0227 16:12:41.539458 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 27 16:12:41 crc kubenswrapper[4830]: I0227 16:12:41.587682 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 27 16:12:41 crc kubenswrapper[4830]: I0227 16:12:41.672896 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/487f5a8d-4f6a-4b62-be31-660ff353d84b-v4-0-config-system-service-ca\") pod \"oauth-openshift-5475f99f5f-pgn76\" (UID: \"487f5a8d-4f6a-4b62-be31-660ff353d84b\") " pod="openshift-authentication/oauth-openshift-5475f99f5f-pgn76" Feb 27 16:12:41 crc kubenswrapper[4830]: I0227 16:12:41.673285 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/487f5a8d-4f6a-4b62-be31-660ff353d84b-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5475f99f5f-pgn76\" (UID: 
\"487f5a8d-4f6a-4b62-be31-660ff353d84b\") " pod="openshift-authentication/oauth-openshift-5475f99f5f-pgn76" Feb 27 16:12:41 crc kubenswrapper[4830]: I0227 16:12:41.674197 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/487f5a8d-4f6a-4b62-be31-660ff353d84b-v4-0-config-system-session\") pod \"oauth-openshift-5475f99f5f-pgn76\" (UID: \"487f5a8d-4f6a-4b62-be31-660ff353d84b\") " pod="openshift-authentication/oauth-openshift-5475f99f5f-pgn76" Feb 27 16:12:41 crc kubenswrapper[4830]: I0227 16:12:41.674509 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/487f5a8d-4f6a-4b62-be31-660ff353d84b-v4-0-config-user-template-login\") pod \"oauth-openshift-5475f99f5f-pgn76\" (UID: \"487f5a8d-4f6a-4b62-be31-660ff353d84b\") " pod="openshift-authentication/oauth-openshift-5475f99f5f-pgn76" Feb 27 16:12:41 crc kubenswrapper[4830]: I0227 16:12:41.674738 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/487f5a8d-4f6a-4b62-be31-660ff353d84b-audit-policies\") pod \"oauth-openshift-5475f99f5f-pgn76\" (UID: \"487f5a8d-4f6a-4b62-be31-660ff353d84b\") " pod="openshift-authentication/oauth-openshift-5475f99f5f-pgn76" Feb 27 16:12:41 crc kubenswrapper[4830]: I0227 16:12:41.675050 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/487f5a8d-4f6a-4b62-be31-660ff353d84b-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5475f99f5f-pgn76\" (UID: \"487f5a8d-4f6a-4b62-be31-660ff353d84b\") " pod="openshift-authentication/oauth-openshift-5475f99f5f-pgn76" Feb 27 16:12:41 crc kubenswrapper[4830]: I0227 16:12:41.675246 4830 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/487f5a8d-4f6a-4b62-be31-660ff353d84b-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5475f99f5f-pgn76\" (UID: \"487f5a8d-4f6a-4b62-be31-660ff353d84b\") " pod="openshift-authentication/oauth-openshift-5475f99f5f-pgn76" Feb 27 16:12:41 crc kubenswrapper[4830]: I0227 16:12:41.675433 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/487f5a8d-4f6a-4b62-be31-660ff353d84b-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5475f99f5f-pgn76\" (UID: \"487f5a8d-4f6a-4b62-be31-660ff353d84b\") " pod="openshift-authentication/oauth-openshift-5475f99f5f-pgn76" Feb 27 16:12:41 crc kubenswrapper[4830]: I0227 16:12:41.675639 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/487f5a8d-4f6a-4b62-be31-660ff353d84b-v4-0-config-user-template-error\") pod \"oauth-openshift-5475f99f5f-pgn76\" (UID: \"487f5a8d-4f6a-4b62-be31-660ff353d84b\") " pod="openshift-authentication/oauth-openshift-5475f99f5f-pgn76" Feb 27 16:12:41 crc kubenswrapper[4830]: I0227 16:12:41.675827 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/487f5a8d-4f6a-4b62-be31-660ff353d84b-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5475f99f5f-pgn76\" (UID: \"487f5a8d-4f6a-4b62-be31-660ff353d84b\") " pod="openshift-authentication/oauth-openshift-5475f99f5f-pgn76" Feb 27 16:12:41 crc kubenswrapper[4830]: I0227 16:12:41.676050 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: 
\"kubernetes.io/secret/487f5a8d-4f6a-4b62-be31-660ff353d84b-v4-0-config-system-router-certs\") pod \"oauth-openshift-5475f99f5f-pgn76\" (UID: \"487f5a8d-4f6a-4b62-be31-660ff353d84b\") " pod="openshift-authentication/oauth-openshift-5475f99f5f-pgn76" Feb 27 16:12:41 crc kubenswrapper[4830]: I0227 16:12:41.676221 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pddng\" (UniqueName: \"kubernetes.io/projected/487f5a8d-4f6a-4b62-be31-660ff353d84b-kube-api-access-pddng\") pod \"oauth-openshift-5475f99f5f-pgn76\" (UID: \"487f5a8d-4f6a-4b62-be31-660ff353d84b\") " pod="openshift-authentication/oauth-openshift-5475f99f5f-pgn76" Feb 27 16:12:41 crc kubenswrapper[4830]: I0227 16:12:41.676388 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/487f5a8d-4f6a-4b62-be31-660ff353d84b-audit-dir\") pod \"oauth-openshift-5475f99f5f-pgn76\" (UID: \"487f5a8d-4f6a-4b62-be31-660ff353d84b\") " pod="openshift-authentication/oauth-openshift-5475f99f5f-pgn76" Feb 27 16:12:41 crc kubenswrapper[4830]: I0227 16:12:41.676593 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/487f5a8d-4f6a-4b62-be31-660ff353d84b-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5475f99f5f-pgn76\" (UID: \"487f5a8d-4f6a-4b62-be31-660ff353d84b\") " pod="openshift-authentication/oauth-openshift-5475f99f5f-pgn76" Feb 27 16:12:41 crc kubenswrapper[4830]: I0227 16:12:41.703591 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 27 16:12:41 crc kubenswrapper[4830]: I0227 16:12:41.717642 4830 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 27 16:12:41 crc kubenswrapper[4830]: I0227 16:12:41.777916 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/487f5a8d-4f6a-4b62-be31-660ff353d84b-v4-0-config-user-template-error\") pod \"oauth-openshift-5475f99f5f-pgn76\" (UID: \"487f5a8d-4f6a-4b62-be31-660ff353d84b\") " pod="openshift-authentication/oauth-openshift-5475f99f5f-pgn76" Feb 27 16:12:41 crc kubenswrapper[4830]: I0227 16:12:41.777996 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/487f5a8d-4f6a-4b62-be31-660ff353d84b-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5475f99f5f-pgn76\" (UID: \"487f5a8d-4f6a-4b62-be31-660ff353d84b\") " pod="openshift-authentication/oauth-openshift-5475f99f5f-pgn76" Feb 27 16:12:41 crc kubenswrapper[4830]: I0227 16:12:41.778032 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pddng\" (UniqueName: \"kubernetes.io/projected/487f5a8d-4f6a-4b62-be31-660ff353d84b-kube-api-access-pddng\") pod \"oauth-openshift-5475f99f5f-pgn76\" (UID: \"487f5a8d-4f6a-4b62-be31-660ff353d84b\") " pod="openshift-authentication/oauth-openshift-5475f99f5f-pgn76" Feb 27 16:12:41 crc kubenswrapper[4830]: I0227 16:12:41.778056 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/487f5a8d-4f6a-4b62-be31-660ff353d84b-v4-0-config-system-router-certs\") pod \"oauth-openshift-5475f99f5f-pgn76\" (UID: \"487f5a8d-4f6a-4b62-be31-660ff353d84b\") " pod="openshift-authentication/oauth-openshift-5475f99f5f-pgn76" Feb 27 16:12:41 crc kubenswrapper[4830]: I0227 16:12:41.778081 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/487f5a8d-4f6a-4b62-be31-660ff353d84b-audit-dir\") pod \"oauth-openshift-5475f99f5f-pgn76\" (UID: \"487f5a8d-4f6a-4b62-be31-660ff353d84b\") " pod="openshift-authentication/oauth-openshift-5475f99f5f-pgn76" Feb 27 16:12:41 crc kubenswrapper[4830]: I0227 16:12:41.778115 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/487f5a8d-4f6a-4b62-be31-660ff353d84b-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5475f99f5f-pgn76\" (UID: \"487f5a8d-4f6a-4b62-be31-660ff353d84b\") " pod="openshift-authentication/oauth-openshift-5475f99f5f-pgn76" Feb 27 16:12:41 crc kubenswrapper[4830]: I0227 16:12:41.778149 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/487f5a8d-4f6a-4b62-be31-660ff353d84b-v4-0-config-system-service-ca\") pod \"oauth-openshift-5475f99f5f-pgn76\" (UID: \"487f5a8d-4f6a-4b62-be31-660ff353d84b\") " pod="openshift-authentication/oauth-openshift-5475f99f5f-pgn76" Feb 27 16:12:41 crc kubenswrapper[4830]: I0227 16:12:41.778176 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/487f5a8d-4f6a-4b62-be31-660ff353d84b-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5475f99f5f-pgn76\" (UID: \"487f5a8d-4f6a-4b62-be31-660ff353d84b\") " pod="openshift-authentication/oauth-openshift-5475f99f5f-pgn76" Feb 27 16:12:41 crc kubenswrapper[4830]: I0227 16:12:41.778209 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/487f5a8d-4f6a-4b62-be31-660ff353d84b-v4-0-config-system-session\") pod \"oauth-openshift-5475f99f5f-pgn76\" (UID: 
\"487f5a8d-4f6a-4b62-be31-660ff353d84b\") " pod="openshift-authentication/oauth-openshift-5475f99f5f-pgn76" Feb 27 16:12:41 crc kubenswrapper[4830]: I0227 16:12:41.778239 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/487f5a8d-4f6a-4b62-be31-660ff353d84b-v4-0-config-user-template-login\") pod \"oauth-openshift-5475f99f5f-pgn76\" (UID: \"487f5a8d-4f6a-4b62-be31-660ff353d84b\") " pod="openshift-authentication/oauth-openshift-5475f99f5f-pgn76" Feb 27 16:12:41 crc kubenswrapper[4830]: I0227 16:12:41.778286 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/487f5a8d-4f6a-4b62-be31-660ff353d84b-audit-policies\") pod \"oauth-openshift-5475f99f5f-pgn76\" (UID: \"487f5a8d-4f6a-4b62-be31-660ff353d84b\") " pod="openshift-authentication/oauth-openshift-5475f99f5f-pgn76" Feb 27 16:12:41 crc kubenswrapper[4830]: I0227 16:12:41.778320 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/487f5a8d-4f6a-4b62-be31-660ff353d84b-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5475f99f5f-pgn76\" (UID: \"487f5a8d-4f6a-4b62-be31-660ff353d84b\") " pod="openshift-authentication/oauth-openshift-5475f99f5f-pgn76" Feb 27 16:12:41 crc kubenswrapper[4830]: I0227 16:12:41.778344 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/487f5a8d-4f6a-4b62-be31-660ff353d84b-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5475f99f5f-pgn76\" (UID: \"487f5a8d-4f6a-4b62-be31-660ff353d84b\") " pod="openshift-authentication/oauth-openshift-5475f99f5f-pgn76" Feb 27 16:12:41 crc kubenswrapper[4830]: I0227 16:12:41.778369 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/487f5a8d-4f6a-4b62-be31-660ff353d84b-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5475f99f5f-pgn76\" (UID: \"487f5a8d-4f6a-4b62-be31-660ff353d84b\") " pod="openshift-authentication/oauth-openshift-5475f99f5f-pgn76" Feb 27 16:12:41 crc kubenswrapper[4830]: I0227 16:12:41.779070 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/487f5a8d-4f6a-4b62-be31-660ff353d84b-audit-dir\") pod \"oauth-openshift-5475f99f5f-pgn76\" (UID: \"487f5a8d-4f6a-4b62-be31-660ff353d84b\") " pod="openshift-authentication/oauth-openshift-5475f99f5f-pgn76" Feb 27 16:12:41 crc kubenswrapper[4830]: I0227 16:12:41.779995 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/487f5a8d-4f6a-4b62-be31-660ff353d84b-audit-policies\") pod \"oauth-openshift-5475f99f5f-pgn76\" (UID: \"487f5a8d-4f6a-4b62-be31-660ff353d84b\") " pod="openshift-authentication/oauth-openshift-5475f99f5f-pgn76" Feb 27 16:12:41 crc kubenswrapper[4830]: I0227 16:12:41.780159 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/487f5a8d-4f6a-4b62-be31-660ff353d84b-v4-0-config-system-service-ca\") pod \"oauth-openshift-5475f99f5f-pgn76\" (UID: \"487f5a8d-4f6a-4b62-be31-660ff353d84b\") " pod="openshift-authentication/oauth-openshift-5475f99f5f-pgn76" Feb 27 16:12:41 crc kubenswrapper[4830]: I0227 16:12:41.780256 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/487f5a8d-4f6a-4b62-be31-660ff353d84b-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5475f99f5f-pgn76\" (UID: \"487f5a8d-4f6a-4b62-be31-660ff353d84b\") " pod="openshift-authentication/oauth-openshift-5475f99f5f-pgn76" Feb 27 16:12:41 crc 
kubenswrapper[4830]: I0227 16:12:41.780819 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/487f5a8d-4f6a-4b62-be31-660ff353d84b-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5475f99f5f-pgn76\" (UID: \"487f5a8d-4f6a-4b62-be31-660ff353d84b\") " pod="openshift-authentication/oauth-openshift-5475f99f5f-pgn76" Feb 27 16:12:41 crc kubenswrapper[4830]: I0227 16:12:41.788320 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/487f5a8d-4f6a-4b62-be31-660ff353d84b-v4-0-config-system-session\") pod \"oauth-openshift-5475f99f5f-pgn76\" (UID: \"487f5a8d-4f6a-4b62-be31-660ff353d84b\") " pod="openshift-authentication/oauth-openshift-5475f99f5f-pgn76" Feb 27 16:12:41 crc kubenswrapper[4830]: I0227 16:12:41.788344 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/487f5a8d-4f6a-4b62-be31-660ff353d84b-v4-0-config-user-template-error\") pod \"oauth-openshift-5475f99f5f-pgn76\" (UID: \"487f5a8d-4f6a-4b62-be31-660ff353d84b\") " pod="openshift-authentication/oauth-openshift-5475f99f5f-pgn76" Feb 27 16:12:41 crc kubenswrapper[4830]: I0227 16:12:41.788641 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/487f5a8d-4f6a-4b62-be31-660ff353d84b-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5475f99f5f-pgn76\" (UID: \"487f5a8d-4f6a-4b62-be31-660ff353d84b\") " pod="openshift-authentication/oauth-openshift-5475f99f5f-pgn76" Feb 27 16:12:41 crc kubenswrapper[4830]: I0227 16:12:41.788669 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/487f5a8d-4f6a-4b62-be31-660ff353d84b-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5475f99f5f-pgn76\" (UID: \"487f5a8d-4f6a-4b62-be31-660ff353d84b\") " pod="openshift-authentication/oauth-openshift-5475f99f5f-pgn76" Feb 27 16:12:41 crc kubenswrapper[4830]: I0227 16:12:41.788761 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/487f5a8d-4f6a-4b62-be31-660ff353d84b-v4-0-config-user-template-login\") pod \"oauth-openshift-5475f99f5f-pgn76\" (UID: \"487f5a8d-4f6a-4b62-be31-660ff353d84b\") " pod="openshift-authentication/oauth-openshift-5475f99f5f-pgn76" Feb 27 16:12:41 crc kubenswrapper[4830]: I0227 16:12:41.790182 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/487f5a8d-4f6a-4b62-be31-660ff353d84b-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5475f99f5f-pgn76\" (UID: \"487f5a8d-4f6a-4b62-be31-660ff353d84b\") " pod="openshift-authentication/oauth-openshift-5475f99f5f-pgn76" Feb 27 16:12:41 crc kubenswrapper[4830]: I0227 16:12:41.790774 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/487f5a8d-4f6a-4b62-be31-660ff353d84b-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5475f99f5f-pgn76\" (UID: \"487f5a8d-4f6a-4b62-be31-660ff353d84b\") " pod="openshift-authentication/oauth-openshift-5475f99f5f-pgn76" Feb 27 16:12:41 crc kubenswrapper[4830]: I0227 16:12:41.791136 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/487f5a8d-4f6a-4b62-be31-660ff353d84b-v4-0-config-system-router-certs\") pod \"oauth-openshift-5475f99f5f-pgn76\" (UID: \"487f5a8d-4f6a-4b62-be31-660ff353d84b\") " 
pod="openshift-authentication/oauth-openshift-5475f99f5f-pgn76" Feb 27 16:12:41 crc kubenswrapper[4830]: I0227 16:12:41.803836 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pddng\" (UniqueName: \"kubernetes.io/projected/487f5a8d-4f6a-4b62-be31-660ff353d84b-kube-api-access-pddng\") pod \"oauth-openshift-5475f99f5f-pgn76\" (UID: \"487f5a8d-4f6a-4b62-be31-660ff353d84b\") " pod="openshift-authentication/oauth-openshift-5475f99f5f-pgn76" Feb 27 16:12:41 crc kubenswrapper[4830]: I0227 16:12:41.867526 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 27 16:12:41 crc kubenswrapper[4830]: I0227 16:12:41.878212 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-5475f99f5f-pgn76" Feb 27 16:12:41 crc kubenswrapper[4830]: I0227 16:12:41.941105 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 27 16:12:42 crc kubenswrapper[4830]: I0227 16:12:42.018746 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 27 16:12:42 crc kubenswrapper[4830]: I0227 16:12:42.178696 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 27 16:12:42 crc kubenswrapper[4830]: I0227 16:12:42.187805 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 27 16:12:42 crc kubenswrapper[4830]: I0227 16:12:42.225003 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 27 16:12:42 crc kubenswrapper[4830]: I0227 16:12:42.244482 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 27 16:12:42 
crc kubenswrapper[4830]: I0227 16:12:42.267366 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 27 16:12:42 crc kubenswrapper[4830]: I0227 16:12:42.382254 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 27 16:12:42 crc kubenswrapper[4830]: I0227 16:12:42.390560 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 27 16:12:42 crc kubenswrapper[4830]: I0227 16:12:42.400418 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 27 16:12:42 crc kubenswrapper[4830]: I0227 16:12:42.449393 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 27 16:12:42 crc kubenswrapper[4830]: I0227 16:12:42.575581 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 27 16:12:42 crc kubenswrapper[4830]: I0227 16:12:42.597078 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 27 16:12:42 crc kubenswrapper[4830]: I0227 16:12:42.644684 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 27 16:12:42 crc kubenswrapper[4830]: I0227 16:12:42.764704 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 27 16:12:42 crc kubenswrapper[4830]: I0227 16:12:42.970482 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 27 16:12:43 crc kubenswrapper[4830]: I0227 16:12:43.004624 4830 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-image-registry"/"image-registry-operator-tls" Feb 27 16:12:43 crc kubenswrapper[4830]: I0227 16:12:43.126666 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 27 16:12:43 crc kubenswrapper[4830]: I0227 16:12:43.189799 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 27 16:12:43 crc kubenswrapper[4830]: I0227 16:12:43.195405 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 27 16:12:43 crc kubenswrapper[4830]: I0227 16:12:43.197218 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 27 16:12:43 crc kubenswrapper[4830]: I0227 16:12:43.211850 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 27 16:12:43 crc kubenswrapper[4830]: I0227 16:12:43.244518 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 27 16:12:43 crc kubenswrapper[4830]: I0227 16:12:43.298061 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 27 16:12:43 crc kubenswrapper[4830]: I0227 16:12:43.432485 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 27 16:12:43 crc kubenswrapper[4830]: I0227 16:12:43.472684 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 27 16:12:43 crc kubenswrapper[4830]: I0227 16:12:43.488427 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 27 16:12:43 crc kubenswrapper[4830]: I0227 
16:12:43.536859 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 27 16:12:43 crc kubenswrapper[4830]: I0227 16:12:43.690735 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 27 16:12:43 crc kubenswrapper[4830]: I0227 16:12:43.701666 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 27 16:12:43 crc kubenswrapper[4830]: I0227 16:12:43.773078 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 27 16:12:43 crc kubenswrapper[4830]: I0227 16:12:43.780119 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 27 16:12:43 crc kubenswrapper[4830]: I0227 16:12:43.790135 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 27 16:12:43 crc kubenswrapper[4830]: I0227 16:12:43.841260 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 27 16:12:43 crc kubenswrapper[4830]: I0227 16:12:43.850827 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 27 16:12:43 crc kubenswrapper[4830]: I0227 16:12:43.867928 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 27 16:12:43 crc kubenswrapper[4830]: I0227 16:12:43.880199 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 27 16:12:43 crc kubenswrapper[4830]: I0227 16:12:43.974938 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 27 16:12:44 crc 
kubenswrapper[4830]: I0227 16:12:44.014728 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 27 16:12:44 crc kubenswrapper[4830]: I0227 16:12:44.022775 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 27 16:12:44 crc kubenswrapper[4830]: I0227 16:12:44.074213 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 27 16:12:44 crc kubenswrapper[4830]: I0227 16:12:44.083715 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 27 16:12:44 crc kubenswrapper[4830]: I0227 16:12:44.095617 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 27 16:12:44 crc kubenswrapper[4830]: I0227 16:12:44.127419 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 27 16:12:44 crc kubenswrapper[4830]: I0227 16:12:44.130151 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 27 16:12:44 crc kubenswrapper[4830]: I0227 16:12:44.141516 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 27 16:12:44 crc kubenswrapper[4830]: I0227 16:12:44.154748 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 27 16:12:44 crc kubenswrapper[4830]: I0227 16:12:44.197222 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 27 16:12:44 crc kubenswrapper[4830]: I0227 16:12:44.253784 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-authentication/oauth-openshift-5475f99f5f-pgn76"] Feb 27 16:12:44 crc kubenswrapper[4830]: I0227 16:12:44.254839 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 27 16:12:44 crc kubenswrapper[4830]: I0227 16:12:44.299174 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 27 16:12:44 crc kubenswrapper[4830]: I0227 16:12:44.300354 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 27 16:12:44 crc kubenswrapper[4830]: I0227 16:12:44.307055 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 27 16:12:44 crc kubenswrapper[4830]: I0227 16:12:44.370847 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 27 16:12:44 crc kubenswrapper[4830]: I0227 16:12:44.429430 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 27 16:12:44 crc kubenswrapper[4830]: I0227 16:12:44.479440 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 27 16:12:44 crc kubenswrapper[4830]: I0227 16:12:44.521902 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 27 16:12:44 crc kubenswrapper[4830]: I0227 16:12:44.568473 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 27 16:12:44 crc kubenswrapper[4830]: I0227 16:12:44.588902 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 27 16:12:44 crc kubenswrapper[4830]: I0227 16:12:44.611584 4830 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 27 16:12:44 crc kubenswrapper[4830]: I0227 16:12:44.683411 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 27 16:12:44 crc kubenswrapper[4830]: I0227 16:12:44.738817 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 27 16:12:44 crc kubenswrapper[4830]: I0227 16:12:44.778909 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-5475f99f5f-pgn76"] Feb 27 16:12:44 crc kubenswrapper[4830]: I0227 16:12:44.858212 4830 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 27 16:12:44 crc kubenswrapper[4830]: I0227 16:12:44.864008 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 27 16:12:44 crc kubenswrapper[4830]: I0227 16:12:44.893042 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 27 16:12:44 crc kubenswrapper[4830]: I0227 16:12:44.922374 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 27 16:12:45 crc kubenswrapper[4830]: I0227 16:12:45.032474 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 27 16:12:45 crc kubenswrapper[4830]: I0227 16:12:45.179712 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5475f99f5f-pgn76" event={"ID":"487f5a8d-4f6a-4b62-be31-660ff353d84b","Type":"ContainerStarted","Data":"55a4f29dbfc14b9852770b9373485394d994fb8d5c676575a76d8488ef914493"} Feb 27 16:12:45 crc kubenswrapper[4830]: I0227 16:12:45.179773 4830 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5475f99f5f-pgn76" event={"ID":"487f5a8d-4f6a-4b62-be31-660ff353d84b","Type":"ContainerStarted","Data":"0cfee395c79c4c1d7182e3fde277d0399db3f526b63645ca206c648e3899f0e9"} Feb 27 16:12:45 crc kubenswrapper[4830]: I0227 16:12:45.180124 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-5475f99f5f-pgn76" Feb 27 16:12:45 crc kubenswrapper[4830]: I0227 16:12:45.181726 4830 patch_prober.go:28] interesting pod/oauth-openshift-5475f99f5f-pgn76 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.67:6443/healthz\": dial tcp 10.217.0.67:6443: connect: connection refused" start-of-body= Feb 27 16:12:45 crc kubenswrapper[4830]: I0227 16:12:45.181779 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-5475f99f5f-pgn76" podUID="487f5a8d-4f6a-4b62-be31-660ff353d84b" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.67:6443/healthz\": dial tcp 10.217.0.67:6443: connect: connection refused" Feb 27 16:12:45 crc kubenswrapper[4830]: I0227 16:12:45.213015 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-5475f99f5f-pgn76" podStartSLOduration=44.21298711 podStartE2EDuration="44.21298711s" podCreationTimestamp="2026-02-27 16:12:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:12:45.209431349 +0000 UTC m=+361.298703842" watchObservedRunningTime="2026-02-27 16:12:45.21298711 +0000 UTC m=+361.302259593" Feb 27 16:12:45 crc kubenswrapper[4830]: I0227 16:12:45.220347 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 27 16:12:45 crc kubenswrapper[4830]: 
I0227 16:12:45.281736 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 27 16:12:45 crc kubenswrapper[4830]: I0227 16:12:45.295233 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 27 16:12:45 crc kubenswrapper[4830]: I0227 16:12:45.328279 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 27 16:12:45 crc kubenswrapper[4830]: I0227 16:12:45.526799 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 27 16:12:45 crc kubenswrapper[4830]: I0227 16:12:45.537141 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 27 16:12:45 crc kubenswrapper[4830]: I0227 16:12:45.553068 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 27 16:12:45 crc kubenswrapper[4830]: I0227 16:12:45.589648 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 27 16:12:45 crc kubenswrapper[4830]: I0227 16:12:45.707019 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 27 16:12:45 crc kubenswrapper[4830]: I0227 16:12:45.764334 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 27 16:12:45 crc kubenswrapper[4830]: I0227 16:12:45.782491 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 27 16:12:45 crc kubenswrapper[4830]: I0227 16:12:45.849481 4830 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 27 16:12:45 crc kubenswrapper[4830]: I0227 16:12:45.902776 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 27 16:12:45 crc kubenswrapper[4830]: I0227 16:12:45.945184 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 27 16:12:46 crc kubenswrapper[4830]: I0227 16:12:46.009410 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 27 16:12:46 crc kubenswrapper[4830]: I0227 16:12:46.039443 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 27 16:12:46 crc kubenswrapper[4830]: I0227 16:12:46.191829 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-5475f99f5f-pgn76_487f5a8d-4f6a-4b62-be31-660ff353d84b/oauth-openshift/0.log" Feb 27 16:12:46 crc kubenswrapper[4830]: I0227 16:12:46.191937 4830 generic.go:334] "Generic (PLEG): container finished" podID="487f5a8d-4f6a-4b62-be31-660ff353d84b" containerID="55a4f29dbfc14b9852770b9373485394d994fb8d5c676575a76d8488ef914493" exitCode=255 Feb 27 16:12:46 crc kubenswrapper[4830]: I0227 16:12:46.192027 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5475f99f5f-pgn76" event={"ID":"487f5a8d-4f6a-4b62-be31-660ff353d84b","Type":"ContainerDied","Data":"55a4f29dbfc14b9852770b9373485394d994fb8d5c676575a76d8488ef914493"} Feb 27 16:12:46 crc kubenswrapper[4830]: I0227 16:12:46.192843 4830 scope.go:117] "RemoveContainer" containerID="55a4f29dbfc14b9852770b9373485394d994fb8d5c676575a76d8488ef914493" Feb 27 16:12:46 crc kubenswrapper[4830]: I0227 16:12:46.196570 4830 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 27 16:12:46 crc kubenswrapper[4830]: I0227 16:12:46.362223 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 27 16:12:46 crc kubenswrapper[4830]: I0227 16:12:46.370744 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 27 16:12:46 crc kubenswrapper[4830]: I0227 16:12:46.375749 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 27 16:12:46 crc kubenswrapper[4830]: I0227 16:12:46.445461 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 27 16:12:46 crc kubenswrapper[4830]: I0227 16:12:46.505421 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 27 16:12:46 crc kubenswrapper[4830]: I0227 16:12:46.551461 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 27 16:12:46 crc kubenswrapper[4830]: I0227 16:12:46.551647 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 27 16:12:46 crc kubenswrapper[4830]: I0227 16:12:46.554492 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 27 16:12:46 crc kubenswrapper[4830]: I0227 16:12:46.624547 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 27 16:12:46 crc kubenswrapper[4830]: I0227 16:12:46.638553 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 27 16:12:46 crc kubenswrapper[4830]: I0227 
16:12:46.696774 4830 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 27 16:12:46 crc kubenswrapper[4830]: I0227 16:12:46.755626 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 27 16:12:46 crc kubenswrapper[4830]: I0227 16:12:46.773065 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 27 16:12:46 crc kubenswrapper[4830]: I0227 16:12:46.779849 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 27 16:12:46 crc kubenswrapper[4830]: I0227 16:12:46.787569 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 27 16:12:46 crc kubenswrapper[4830]: I0227 16:12:46.809010 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 27 16:12:46 crc kubenswrapper[4830]: I0227 16:12:46.898012 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 27 16:12:46 crc kubenswrapper[4830]: I0227 16:12:46.904235 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 27 16:12:46 crc kubenswrapper[4830]: I0227 16:12:46.937325 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 27 16:12:46 crc kubenswrapper[4830]: I0227 16:12:46.948552 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 27 16:12:46 crc kubenswrapper[4830]: I0227 16:12:46.995598 4830 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 27 16:12:47 crc kubenswrapper[4830]: I0227 16:12:47.006384 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 27 16:12:47 crc kubenswrapper[4830]: I0227 16:12:47.064996 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 27 16:12:47 crc kubenswrapper[4830]: I0227 16:12:47.088107 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 27 16:12:47 crc kubenswrapper[4830]: I0227 16:12:47.088201 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 27 16:12:47 crc kubenswrapper[4830]: I0227 16:12:47.140908 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 27 16:12:47 crc kubenswrapper[4830]: I0227 16:12:47.141204 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 27 16:12:47 crc kubenswrapper[4830]: I0227 16:12:47.191486 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 27 16:12:47 crc kubenswrapper[4830]: I0227 16:12:47.205043 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-5475f99f5f-pgn76_487f5a8d-4f6a-4b62-be31-660ff353d84b/oauth-openshift/0.log" Feb 27 16:12:47 crc kubenswrapper[4830]: I0227 16:12:47.205107 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5475f99f5f-pgn76" event={"ID":"487f5a8d-4f6a-4b62-be31-660ff353d84b","Type":"ContainerStarted","Data":"fe9b0e85a3747ea3e46d3ac0d0f1ccf9faaf1d99e5bd6850865640f32db2df55"} Feb 27 16:12:47 crc 
kubenswrapper[4830]: I0227 16:12:47.205454 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-5475f99f5f-pgn76" Feb 27 16:12:47 crc kubenswrapper[4830]: I0227 16:12:47.209857 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-5475f99f5f-pgn76" Feb 27 16:12:47 crc kubenswrapper[4830]: I0227 16:12:47.312078 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 27 16:12:47 crc kubenswrapper[4830]: I0227 16:12:47.332219 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 27 16:12:47 crc kubenswrapper[4830]: I0227 16:12:47.355625 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 27 16:12:47 crc kubenswrapper[4830]: I0227 16:12:47.665477 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 27 16:12:47 crc kubenswrapper[4830]: I0227 16:12:47.710752 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 27 16:12:47 crc kubenswrapper[4830]: I0227 16:12:47.947511 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 27 16:12:48 crc kubenswrapper[4830]: I0227 16:12:48.028468 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 27 16:12:48 crc kubenswrapper[4830]: I0227 16:12:48.134219 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 27 16:12:48 crc kubenswrapper[4830]: I0227 16:12:48.192167 4830 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-etcd-operator"/"etcd-client" Feb 27 16:12:48 crc kubenswrapper[4830]: I0227 16:12:48.220811 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 27 16:12:48 crc kubenswrapper[4830]: I0227 16:12:48.256479 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 27 16:12:48 crc kubenswrapper[4830]: I0227 16:12:48.265811 4830 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 27 16:12:48 crc kubenswrapper[4830]: I0227 16:12:48.444263 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 27 16:12:48 crc kubenswrapper[4830]: I0227 16:12:48.467045 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 27 16:12:48 crc kubenswrapper[4830]: I0227 16:12:48.539408 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 27 16:12:48 crc kubenswrapper[4830]: I0227 16:12:48.545581 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 27 16:12:48 crc kubenswrapper[4830]: I0227 16:12:48.600817 4830 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 27 16:12:48 crc kubenswrapper[4830]: I0227 16:12:48.601149 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://a82af140a1f416de6c34ca481cb88c11b78fa468ae226b7548fcbc9172c84ac8" gracePeriod=5 Feb 27 16:12:48 crc kubenswrapper[4830]: I0227 16:12:48.745526 4830 reflector.go:368] Caches populated for *v1.ConfigMap 
from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 27 16:12:48 crc kubenswrapper[4830]: I0227 16:12:48.810565 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 27 16:12:48 crc kubenswrapper[4830]: I0227 16:12:48.817752 4830 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 27 16:12:48 crc kubenswrapper[4830]: I0227 16:12:48.950386 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 27 16:12:48 crc kubenswrapper[4830]: I0227 16:12:48.990120 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 27 16:12:49 crc kubenswrapper[4830]: I0227 16:12:49.079615 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 27 16:12:49 crc kubenswrapper[4830]: I0227 16:12:49.080518 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 27 16:12:49 crc kubenswrapper[4830]: I0227 16:12:49.081392 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 27 16:12:49 crc kubenswrapper[4830]: I0227 16:12:49.172614 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 27 16:12:49 crc kubenswrapper[4830]: I0227 16:12:49.219024 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 27 16:12:49 crc kubenswrapper[4830]: I0227 16:12:49.267545 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 27 16:12:49 crc kubenswrapper[4830]: I0227 16:12:49.313406 4830 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 27 16:12:49 crc kubenswrapper[4830]: I0227 16:12:49.430727 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 27 16:12:49 crc kubenswrapper[4830]: I0227 16:12:49.496255 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 27 16:12:49 crc kubenswrapper[4830]: I0227 16:12:49.564528 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 27 16:12:49 crc kubenswrapper[4830]: I0227 16:12:49.794200 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 27 16:12:49 crc kubenswrapper[4830]: I0227 16:12:49.814372 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 27 16:12:49 crc kubenswrapper[4830]: I0227 16:12:49.916882 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 27 16:12:49 crc kubenswrapper[4830]: I0227 16:12:49.957116 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 27 16:12:50 crc kubenswrapper[4830]: I0227 16:12:50.189216 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 27 16:12:50 crc kubenswrapper[4830]: I0227 16:12:50.289602 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 27 16:12:50 crc kubenswrapper[4830]: I0227 16:12:50.389713 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 27 16:12:50 crc kubenswrapper[4830]: 
I0227 16:12:50.452845 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 27 16:12:50 crc kubenswrapper[4830]: I0227 16:12:50.634566 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 27 16:12:50 crc kubenswrapper[4830]: I0227 16:12:50.672541 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 27 16:12:50 crc kubenswrapper[4830]: I0227 16:12:50.728661 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 27 16:12:50 crc kubenswrapper[4830]: I0227 16:12:50.898559 4830 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 27 16:12:51 crc kubenswrapper[4830]: I0227 16:12:51.079424 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 27 16:12:51 crc kubenswrapper[4830]: I0227 16:12:51.193607 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 27 16:12:51 crc kubenswrapper[4830]: I0227 16:12:51.446646 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 27 16:12:51 crc kubenswrapper[4830]: I0227 16:12:51.662398 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 27 16:12:51 crc kubenswrapper[4830]: I0227 16:12:51.808682 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 27 16:12:52 crc kubenswrapper[4830]: I0227 16:12:52.163592 4830 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 27 16:12:53 crc kubenswrapper[4830]: I0227 16:12:53.115672 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 27 16:12:54 crc kubenswrapper[4830]: I0227 16:12:54.177782 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 27 16:12:54 crc kubenswrapper[4830]: I0227 16:12:54.177892 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 27 16:12:54 crc kubenswrapper[4830]: I0227 16:12:54.262313 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 27 16:12:54 crc kubenswrapper[4830]: I0227 16:12:54.262415 4830 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="a82af140a1f416de6c34ca481cb88c11b78fa468ae226b7548fcbc9172c84ac8" exitCode=137 Feb 27 16:12:54 crc kubenswrapper[4830]: I0227 16:12:54.262506 4830 scope.go:117] "RemoveContainer" containerID="a82af140a1f416de6c34ca481cb88c11b78fa468ae226b7548fcbc9172c84ac8" Feb 27 16:12:54 crc kubenswrapper[4830]: I0227 16:12:54.262521 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 27 16:12:54 crc kubenswrapper[4830]: I0227 16:12:54.286248 4830 scope.go:117] "RemoveContainer" containerID="a82af140a1f416de6c34ca481cb88c11b78fa468ae226b7548fcbc9172c84ac8" Feb 27 16:12:54 crc kubenswrapper[4830]: E0227 16:12:54.286912 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a82af140a1f416de6c34ca481cb88c11b78fa468ae226b7548fcbc9172c84ac8\": container with ID starting with a82af140a1f416de6c34ca481cb88c11b78fa468ae226b7548fcbc9172c84ac8 not found: ID does not exist" containerID="a82af140a1f416de6c34ca481cb88c11b78fa468ae226b7548fcbc9172c84ac8" Feb 27 16:12:54 crc kubenswrapper[4830]: I0227 16:12:54.286993 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a82af140a1f416de6c34ca481cb88c11b78fa468ae226b7548fcbc9172c84ac8"} err="failed to get container status \"a82af140a1f416de6c34ca481cb88c11b78fa468ae226b7548fcbc9172c84ac8\": rpc error: code = NotFound desc = could not find container \"a82af140a1f416de6c34ca481cb88c11b78fa468ae226b7548fcbc9172c84ac8\": container with ID starting with a82af140a1f416de6c34ca481cb88c11b78fa468ae226b7548fcbc9172c84ac8 not found: ID does not exist" Feb 27 16:12:54 crc kubenswrapper[4830]: I0227 16:12:54.375777 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 27 16:12:54 crc kubenswrapper[4830]: I0227 16:12:54.375842 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: 
\"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 27 16:12:54 crc kubenswrapper[4830]: I0227 16:12:54.375898 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 27 16:12:54 crc kubenswrapper[4830]: I0227 16:12:54.376048 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 27 16:12:54 crc kubenswrapper[4830]: I0227 16:12:54.376086 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 27 16:12:54 crc kubenswrapper[4830]: I0227 16:12:54.376256 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 16:12:54 crc kubenswrapper[4830]: I0227 16:12:54.376359 4830 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Feb 27 16:12:54 crc kubenswrapper[4830]: I0227 16:12:54.376427 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 16:12:54 crc kubenswrapper[4830]: I0227 16:12:54.376457 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 16:12:54 crc kubenswrapper[4830]: I0227 16:12:54.376484 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 16:12:54 crc kubenswrapper[4830]: I0227 16:12:54.388421 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 16:12:54 crc kubenswrapper[4830]: I0227 16:12:54.477525 4830 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 27 16:12:54 crc kubenswrapper[4830]: I0227 16:12:54.477572 4830 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Feb 27 16:12:54 crc kubenswrapper[4830]: I0227 16:12:54.477592 4830 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 27 16:12:54 crc kubenswrapper[4830]: I0227 16:12:54.477612 4830 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Feb 27 16:12:54 crc kubenswrapper[4830]: I0227 16:12:54.775541 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Feb 27 16:12:54 crc kubenswrapper[4830]: I0227 16:12:54.776053 4830 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Feb 27 16:12:54 crc kubenswrapper[4830]: I0227 16:12:54.790871 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 27 16:12:54 crc kubenswrapper[4830]: I0227 16:12:54.790923 4830 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="2cbe738f-7482-4e87-80d0-1a1c7a3a72ae" Feb 27 16:12:54 crc kubenswrapper[4830]: 
I0227 16:12:54.798508 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 27 16:12:54 crc kubenswrapper[4830]: I0227 16:12:54.798566 4830 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="2cbe738f-7482-4e87-80d0-1a1c7a3a72ae" Feb 27 16:12:58 crc kubenswrapper[4830]: I0227 16:12:58.932269 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-549499c84f-9qrdr"] Feb 27 16:12:58 crc kubenswrapper[4830]: I0227 16:12:58.934712 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-549499c84f-9qrdr" podUID="b67cded3-a953-4525-bdab-c6452dde691c" containerName="controller-manager" containerID="cri-o://a7998612e3241e6233b1a6208f436d1bdf543202e2c8a56d19490f3976bc2d8c" gracePeriod=30 Feb 27 16:12:59 crc kubenswrapper[4830]: I0227 16:12:59.030200 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-867c8bbbf4-q5zb8"] Feb 27 16:12:59 crc kubenswrapper[4830]: I0227 16:12:59.030442 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-867c8bbbf4-q5zb8" podUID="794a70c6-624e-46f9-97ae-d1c5eadc84bb" containerName="route-controller-manager" containerID="cri-o://e90162ea4e16e7c2fd74f52edf8cc31e08375aa24586d1f529c6ed3fb585cbca" gracePeriod=30 Feb 27 16:12:59 crc kubenswrapper[4830]: I0227 16:12:59.296271 4830 generic.go:334] "Generic (PLEG): container finished" podID="794a70c6-624e-46f9-97ae-d1c5eadc84bb" containerID="e90162ea4e16e7c2fd74f52edf8cc31e08375aa24586d1f529c6ed3fb585cbca" exitCode=0 Feb 27 16:12:59 crc kubenswrapper[4830]: I0227 16:12:59.296336 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-route-controller-manager/route-controller-manager-867c8bbbf4-q5zb8" event={"ID":"794a70c6-624e-46f9-97ae-d1c5eadc84bb","Type":"ContainerDied","Data":"e90162ea4e16e7c2fd74f52edf8cc31e08375aa24586d1f529c6ed3fb585cbca"} Feb 27 16:12:59 crc kubenswrapper[4830]: I0227 16:12:59.297892 4830 generic.go:334] "Generic (PLEG): container finished" podID="b67cded3-a953-4525-bdab-c6452dde691c" containerID="a7998612e3241e6233b1a6208f436d1bdf543202e2c8a56d19490f3976bc2d8c" exitCode=0 Feb 27 16:12:59 crc kubenswrapper[4830]: I0227 16:12:59.297914 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-549499c84f-9qrdr" event={"ID":"b67cded3-a953-4525-bdab-c6452dde691c","Type":"ContainerDied","Data":"a7998612e3241e6233b1a6208f436d1bdf543202e2c8a56d19490f3976bc2d8c"} Feb 27 16:12:59 crc kubenswrapper[4830]: I0227 16:12:59.362760 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-549499c84f-9qrdr" Feb 27 16:12:59 crc kubenswrapper[4830]: I0227 16:12:59.441987 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-867c8bbbf4-q5zb8" Feb 27 16:12:59 crc kubenswrapper[4830]: I0227 16:12:59.556760 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/794a70c6-624e-46f9-97ae-d1c5eadc84bb-config\") pod \"794a70c6-624e-46f9-97ae-d1c5eadc84bb\" (UID: \"794a70c6-624e-46f9-97ae-d1c5eadc84bb\") " Feb 27 16:12:59 crc kubenswrapper[4830]: I0227 16:12:59.556823 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b67cded3-a953-4525-bdab-c6452dde691c-proxy-ca-bundles\") pod \"b67cded3-a953-4525-bdab-c6452dde691c\" (UID: \"b67cded3-a953-4525-bdab-c6452dde691c\") " Feb 27 16:12:59 crc kubenswrapper[4830]: I0227 16:12:59.556857 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zbv5j\" (UniqueName: \"kubernetes.io/projected/b67cded3-a953-4525-bdab-c6452dde691c-kube-api-access-zbv5j\") pod \"b67cded3-a953-4525-bdab-c6452dde691c\" (UID: \"b67cded3-a953-4525-bdab-c6452dde691c\") " Feb 27 16:12:59 crc kubenswrapper[4830]: I0227 16:12:59.556881 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/794a70c6-624e-46f9-97ae-d1c5eadc84bb-client-ca\") pod \"794a70c6-624e-46f9-97ae-d1c5eadc84bb\" (UID: \"794a70c6-624e-46f9-97ae-d1c5eadc84bb\") " Feb 27 16:12:59 crc kubenswrapper[4830]: I0227 16:12:59.556903 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wn5nl\" (UniqueName: \"kubernetes.io/projected/794a70c6-624e-46f9-97ae-d1c5eadc84bb-kube-api-access-wn5nl\") pod \"794a70c6-624e-46f9-97ae-d1c5eadc84bb\" (UID: \"794a70c6-624e-46f9-97ae-d1c5eadc84bb\") " Feb 27 16:12:59 crc kubenswrapper[4830]: I0227 16:12:59.556980 4830 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b67cded3-a953-4525-bdab-c6452dde691c-client-ca\") pod \"b67cded3-a953-4525-bdab-c6452dde691c\" (UID: \"b67cded3-a953-4525-bdab-c6452dde691c\") " Feb 27 16:12:59 crc kubenswrapper[4830]: I0227 16:12:59.558204 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b67cded3-a953-4525-bdab-c6452dde691c-serving-cert\") pod \"b67cded3-a953-4525-bdab-c6452dde691c\" (UID: \"b67cded3-a953-4525-bdab-c6452dde691c\") " Feb 27 16:12:59 crc kubenswrapper[4830]: I0227 16:12:59.558202 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b67cded3-a953-4525-bdab-c6452dde691c-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "b67cded3-a953-4525-bdab-c6452dde691c" (UID: "b67cded3-a953-4525-bdab-c6452dde691c"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:12:59 crc kubenswrapper[4830]: I0227 16:12:59.558212 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/794a70c6-624e-46f9-97ae-d1c5eadc84bb-config" (OuterVolumeSpecName: "config") pod "794a70c6-624e-46f9-97ae-d1c5eadc84bb" (UID: "794a70c6-624e-46f9-97ae-d1c5eadc84bb"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:12:59 crc kubenswrapper[4830]: I0227 16:12:59.558252 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/794a70c6-624e-46f9-97ae-d1c5eadc84bb-serving-cert\") pod \"794a70c6-624e-46f9-97ae-d1c5eadc84bb\" (UID: \"794a70c6-624e-46f9-97ae-d1c5eadc84bb\") " Feb 27 16:12:59 crc kubenswrapper[4830]: I0227 16:12:59.558432 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b67cded3-a953-4525-bdab-c6452dde691c-client-ca" (OuterVolumeSpecName: "client-ca") pod "b67cded3-a953-4525-bdab-c6452dde691c" (UID: "b67cded3-a953-4525-bdab-c6452dde691c"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:12:59 crc kubenswrapper[4830]: I0227 16:12:59.558449 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b67cded3-a953-4525-bdab-c6452dde691c-config\") pod \"b67cded3-a953-4525-bdab-c6452dde691c\" (UID: \"b67cded3-a953-4525-bdab-c6452dde691c\") " Feb 27 16:12:59 crc kubenswrapper[4830]: I0227 16:12:59.559032 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/794a70c6-624e-46f9-97ae-d1c5eadc84bb-client-ca" (OuterVolumeSpecName: "client-ca") pod "794a70c6-624e-46f9-97ae-d1c5eadc84bb" (UID: "794a70c6-624e-46f9-97ae-d1c5eadc84bb"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:12:59 crc kubenswrapper[4830]: I0227 16:12:59.559402 4830 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b67cded3-a953-4525-bdab-c6452dde691c-client-ca\") on node \"crc\" DevicePath \"\"" Feb 27 16:12:59 crc kubenswrapper[4830]: I0227 16:12:59.559456 4830 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/794a70c6-624e-46f9-97ae-d1c5eadc84bb-config\") on node \"crc\" DevicePath \"\"" Feb 27 16:12:59 crc kubenswrapper[4830]: I0227 16:12:59.559476 4830 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b67cded3-a953-4525-bdab-c6452dde691c-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 27 16:12:59 crc kubenswrapper[4830]: I0227 16:12:59.559498 4830 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/794a70c6-624e-46f9-97ae-d1c5eadc84bb-client-ca\") on node \"crc\" DevicePath \"\"" Feb 27 16:12:59 crc kubenswrapper[4830]: I0227 16:12:59.559606 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b67cded3-a953-4525-bdab-c6452dde691c-config" (OuterVolumeSpecName: "config") pod "b67cded3-a953-4525-bdab-c6452dde691c" (UID: "b67cded3-a953-4525-bdab-c6452dde691c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:12:59 crc kubenswrapper[4830]: I0227 16:12:59.563276 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b67cded3-a953-4525-bdab-c6452dde691c-kube-api-access-zbv5j" (OuterVolumeSpecName: "kube-api-access-zbv5j") pod "b67cded3-a953-4525-bdab-c6452dde691c" (UID: "b67cded3-a953-4525-bdab-c6452dde691c"). InnerVolumeSpecName "kube-api-access-zbv5j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:12:59 crc kubenswrapper[4830]: I0227 16:12:59.563575 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/794a70c6-624e-46f9-97ae-d1c5eadc84bb-kube-api-access-wn5nl" (OuterVolumeSpecName: "kube-api-access-wn5nl") pod "794a70c6-624e-46f9-97ae-d1c5eadc84bb" (UID: "794a70c6-624e-46f9-97ae-d1c5eadc84bb"). InnerVolumeSpecName "kube-api-access-wn5nl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:12:59 crc kubenswrapper[4830]: I0227 16:12:59.563663 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/794a70c6-624e-46f9-97ae-d1c5eadc84bb-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "794a70c6-624e-46f9-97ae-d1c5eadc84bb" (UID: "794a70c6-624e-46f9-97ae-d1c5eadc84bb"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:12:59 crc kubenswrapper[4830]: I0227 16:12:59.563853 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b67cded3-a953-4525-bdab-c6452dde691c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "b67cded3-a953-4525-bdab-c6452dde691c" (UID: "b67cded3-a953-4525-bdab-c6452dde691c"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:12:59 crc kubenswrapper[4830]: I0227 16:12:59.661120 4830 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b67cded3-a953-4525-bdab-c6452dde691c-config\") on node \"crc\" DevicePath \"\"" Feb 27 16:12:59 crc kubenswrapper[4830]: I0227 16:12:59.661158 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zbv5j\" (UniqueName: \"kubernetes.io/projected/b67cded3-a953-4525-bdab-c6452dde691c-kube-api-access-zbv5j\") on node \"crc\" DevicePath \"\"" Feb 27 16:12:59 crc kubenswrapper[4830]: I0227 16:12:59.661173 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wn5nl\" (UniqueName: \"kubernetes.io/projected/794a70c6-624e-46f9-97ae-d1c5eadc84bb-kube-api-access-wn5nl\") on node \"crc\" DevicePath \"\"" Feb 27 16:12:59 crc kubenswrapper[4830]: I0227 16:12:59.661185 4830 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b67cded3-a953-4525-bdab-c6452dde691c-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 16:12:59 crc kubenswrapper[4830]: I0227 16:12:59.661195 4830 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/794a70c6-624e-46f9-97ae-d1c5eadc84bb-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 16:13:00 crc kubenswrapper[4830]: I0227 16:13:00.312191 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-867c8bbbf4-q5zb8" event={"ID":"794a70c6-624e-46f9-97ae-d1c5eadc84bb","Type":"ContainerDied","Data":"87257e0c3f4da0f25581d7ee6822974a2e52a1351b91fe8383666f35f6a1e884"} Feb 27 16:13:00 crc kubenswrapper[4830]: I0227 16:13:00.312404 4830 scope.go:117] "RemoveContainer" containerID="e90162ea4e16e7c2fd74f52edf8cc31e08375aa24586d1f529c6ed3fb585cbca" Feb 27 16:13:00 crc kubenswrapper[4830]: I0227 16:13:00.312210 4830 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-867c8bbbf4-q5zb8" Feb 27 16:13:00 crc kubenswrapper[4830]: I0227 16:13:00.315438 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-549499c84f-9qrdr" event={"ID":"b67cded3-a953-4525-bdab-c6452dde691c","Type":"ContainerDied","Data":"1a1735fdacb9ab45bf9642bc64268d19e5b9a569ae9ad01e897a1846014883ca"} Feb 27 16:13:00 crc kubenswrapper[4830]: I0227 16:13:00.315550 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-549499c84f-9qrdr" Feb 27 16:13:00 crc kubenswrapper[4830]: I0227 16:13:00.336234 4830 scope.go:117] "RemoveContainer" containerID="a7998612e3241e6233b1a6208f436d1bdf543202e2c8a56d19490f3976bc2d8c" Feb 27 16:13:00 crc kubenswrapper[4830]: I0227 16:13:00.361111 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-549499c84f-9qrdr"] Feb 27 16:13:00 crc kubenswrapper[4830]: I0227 16:13:00.369336 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-549499c84f-9qrdr"] Feb 27 16:13:00 crc kubenswrapper[4830]: I0227 16:13:00.375357 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-867c8bbbf4-q5zb8"] Feb 27 16:13:00 crc kubenswrapper[4830]: I0227 16:13:00.385119 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-867c8bbbf4-q5zb8"] Feb 27 16:13:00 crc kubenswrapper[4830]: I0227 16:13:00.775591 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="794a70c6-624e-46f9-97ae-d1c5eadc84bb" path="/var/lib/kubelet/pods/794a70c6-624e-46f9-97ae-d1c5eadc84bb/volumes" Feb 27 16:13:00 crc kubenswrapper[4830]: I0227 16:13:00.776939 4830 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b67cded3-a953-4525-bdab-c6452dde691c" path="/var/lib/kubelet/pods/b67cded3-a953-4525-bdab-c6452dde691c/volumes" Feb 27 16:13:00 crc kubenswrapper[4830]: I0227 16:13:00.947207 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-849bff8645-6j7qr"] Feb 27 16:13:00 crc kubenswrapper[4830]: E0227 16:13:00.947561 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="794a70c6-624e-46f9-97ae-d1c5eadc84bb" containerName="route-controller-manager" Feb 27 16:13:00 crc kubenswrapper[4830]: I0227 16:13:00.947590 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="794a70c6-624e-46f9-97ae-d1c5eadc84bb" containerName="route-controller-manager" Feb 27 16:13:00 crc kubenswrapper[4830]: E0227 16:13:00.947612 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 27 16:13:00 crc kubenswrapper[4830]: I0227 16:13:00.947626 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 27 16:13:00 crc kubenswrapper[4830]: E0227 16:13:00.947654 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b67cded3-a953-4525-bdab-c6452dde691c" containerName="controller-manager" Feb 27 16:13:00 crc kubenswrapper[4830]: I0227 16:13:00.947666 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="b67cded3-a953-4525-bdab-c6452dde691c" containerName="controller-manager" Feb 27 16:13:00 crc kubenswrapper[4830]: I0227 16:13:00.947859 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="b67cded3-a953-4525-bdab-c6452dde691c" containerName="controller-manager" Feb 27 16:13:00 crc kubenswrapper[4830]: I0227 16:13:00.947901 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="794a70c6-624e-46f9-97ae-d1c5eadc84bb" 
containerName="route-controller-manager" Feb 27 16:13:00 crc kubenswrapper[4830]: I0227 16:13:00.947919 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 27 16:13:00 crc kubenswrapper[4830]: I0227 16:13:00.948515 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-849bff8645-6j7qr" Feb 27 16:13:00 crc kubenswrapper[4830]: I0227 16:13:00.951694 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 27 16:13:00 crc kubenswrapper[4830]: I0227 16:13:00.952046 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 27 16:13:00 crc kubenswrapper[4830]: I0227 16:13:00.952715 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 27 16:13:00 crc kubenswrapper[4830]: I0227 16:13:00.952911 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 27 16:13:00 crc kubenswrapper[4830]: I0227 16:13:00.953962 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-c48b6b8bc-xjhr4"] Feb 27 16:13:00 crc kubenswrapper[4830]: I0227 16:13:00.954967 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-c48b6b8bc-xjhr4" Feb 27 16:13:00 crc kubenswrapper[4830]: I0227 16:13:00.955624 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 27 16:13:00 crc kubenswrapper[4830]: I0227 16:13:00.955942 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 27 16:13:00 crc kubenswrapper[4830]: I0227 16:13:00.959884 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 27 16:13:00 crc kubenswrapper[4830]: I0227 16:13:00.960076 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 27 16:13:00 crc kubenswrapper[4830]: I0227 16:13:00.960111 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 27 16:13:00 crc kubenswrapper[4830]: I0227 16:13:00.960120 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 27 16:13:00 crc kubenswrapper[4830]: I0227 16:13:00.960149 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 27 16:13:00 crc kubenswrapper[4830]: I0227 16:13:00.960230 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 27 16:13:00 crc kubenswrapper[4830]: I0227 16:13:00.963206 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-849bff8645-6j7qr"] Feb 27 16:13:00 crc kubenswrapper[4830]: I0227 16:13:00.970372 4830 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager"/"openshift-global-ca" Feb 27 16:13:00 crc kubenswrapper[4830]: I0227 16:13:00.971915 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-c48b6b8bc-xjhr4"] Feb 27 16:13:01 crc kubenswrapper[4830]: I0227 16:13:01.081610 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/264690bd-7360-4111-8d54-258b2f960185-client-ca\") pod \"route-controller-manager-849bff8645-6j7qr\" (UID: \"264690bd-7360-4111-8d54-258b2f960185\") " pod="openshift-route-controller-manager/route-controller-manager-849bff8645-6j7qr" Feb 27 16:13:01 crc kubenswrapper[4830]: I0227 16:13:01.081693 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtd44\" (UniqueName: \"kubernetes.io/projected/264690bd-7360-4111-8d54-258b2f960185-kube-api-access-dtd44\") pod \"route-controller-manager-849bff8645-6j7qr\" (UID: \"264690bd-7360-4111-8d54-258b2f960185\") " pod="openshift-route-controller-manager/route-controller-manager-849bff8645-6j7qr" Feb 27 16:13:01 crc kubenswrapper[4830]: I0227 16:13:01.081728 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cp8t\" (UniqueName: \"kubernetes.io/projected/df408cab-89ac-4635-acac-90f7625c3e98-kube-api-access-5cp8t\") pod \"controller-manager-c48b6b8bc-xjhr4\" (UID: \"df408cab-89ac-4635-acac-90f7625c3e98\") " pod="openshift-controller-manager/controller-manager-c48b6b8bc-xjhr4" Feb 27 16:13:01 crc kubenswrapper[4830]: I0227 16:13:01.081761 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df408cab-89ac-4635-acac-90f7625c3e98-config\") pod \"controller-manager-c48b6b8bc-xjhr4\" (UID: \"df408cab-89ac-4635-acac-90f7625c3e98\") " 
pod="openshift-controller-manager/controller-manager-c48b6b8bc-xjhr4" Feb 27 16:13:01 crc kubenswrapper[4830]: I0227 16:13:01.081808 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/df408cab-89ac-4635-acac-90f7625c3e98-proxy-ca-bundles\") pod \"controller-manager-c48b6b8bc-xjhr4\" (UID: \"df408cab-89ac-4635-acac-90f7625c3e98\") " pod="openshift-controller-manager/controller-manager-c48b6b8bc-xjhr4" Feb 27 16:13:01 crc kubenswrapper[4830]: I0227 16:13:01.081978 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/df408cab-89ac-4635-acac-90f7625c3e98-client-ca\") pod \"controller-manager-c48b6b8bc-xjhr4\" (UID: \"df408cab-89ac-4635-acac-90f7625c3e98\") " pod="openshift-controller-manager/controller-manager-c48b6b8bc-xjhr4" Feb 27 16:13:01 crc kubenswrapper[4830]: I0227 16:13:01.082015 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/264690bd-7360-4111-8d54-258b2f960185-config\") pod \"route-controller-manager-849bff8645-6j7qr\" (UID: \"264690bd-7360-4111-8d54-258b2f960185\") " pod="openshift-route-controller-manager/route-controller-manager-849bff8645-6j7qr" Feb 27 16:13:01 crc kubenswrapper[4830]: I0227 16:13:01.082055 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/264690bd-7360-4111-8d54-258b2f960185-serving-cert\") pod \"route-controller-manager-849bff8645-6j7qr\" (UID: \"264690bd-7360-4111-8d54-258b2f960185\") " pod="openshift-route-controller-manager/route-controller-manager-849bff8645-6j7qr" Feb 27 16:13:01 crc kubenswrapper[4830]: I0227 16:13:01.082086 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/df408cab-89ac-4635-acac-90f7625c3e98-serving-cert\") pod \"controller-manager-c48b6b8bc-xjhr4\" (UID: \"df408cab-89ac-4635-acac-90f7625c3e98\") " pod="openshift-controller-manager/controller-manager-c48b6b8bc-xjhr4" Feb 27 16:13:01 crc kubenswrapper[4830]: I0227 16:13:01.182817 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df408cab-89ac-4635-acac-90f7625c3e98-config\") pod \"controller-manager-c48b6b8bc-xjhr4\" (UID: \"df408cab-89ac-4635-acac-90f7625c3e98\") " pod="openshift-controller-manager/controller-manager-c48b6b8bc-xjhr4" Feb 27 16:13:01 crc kubenswrapper[4830]: I0227 16:13:01.182902 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/df408cab-89ac-4635-acac-90f7625c3e98-proxy-ca-bundles\") pod \"controller-manager-c48b6b8bc-xjhr4\" (UID: \"df408cab-89ac-4635-acac-90f7625c3e98\") " pod="openshift-controller-manager/controller-manager-c48b6b8bc-xjhr4" Feb 27 16:13:01 crc kubenswrapper[4830]: I0227 16:13:01.183077 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/df408cab-89ac-4635-acac-90f7625c3e98-client-ca\") pod \"controller-manager-c48b6b8bc-xjhr4\" (UID: \"df408cab-89ac-4635-acac-90f7625c3e98\") " pod="openshift-controller-manager/controller-manager-c48b6b8bc-xjhr4" Feb 27 16:13:01 crc kubenswrapper[4830]: I0227 16:13:01.183133 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/264690bd-7360-4111-8d54-258b2f960185-config\") pod \"route-controller-manager-849bff8645-6j7qr\" (UID: \"264690bd-7360-4111-8d54-258b2f960185\") " pod="openshift-route-controller-manager/route-controller-manager-849bff8645-6j7qr" Feb 27 16:13:01 crc kubenswrapper[4830]: I0227 16:13:01.183211 4830 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/264690bd-7360-4111-8d54-258b2f960185-serving-cert\") pod \"route-controller-manager-849bff8645-6j7qr\" (UID: \"264690bd-7360-4111-8d54-258b2f960185\") " pod="openshift-route-controller-manager/route-controller-manager-849bff8645-6j7qr" Feb 27 16:13:01 crc kubenswrapper[4830]: I0227 16:13:01.183270 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/df408cab-89ac-4635-acac-90f7625c3e98-serving-cert\") pod \"controller-manager-c48b6b8bc-xjhr4\" (UID: \"df408cab-89ac-4635-acac-90f7625c3e98\") " pod="openshift-controller-manager/controller-manager-c48b6b8bc-xjhr4" Feb 27 16:13:01 crc kubenswrapper[4830]: I0227 16:13:01.183328 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/264690bd-7360-4111-8d54-258b2f960185-client-ca\") pod \"route-controller-manager-849bff8645-6j7qr\" (UID: \"264690bd-7360-4111-8d54-258b2f960185\") " pod="openshift-route-controller-manager/route-controller-manager-849bff8645-6j7qr" Feb 27 16:13:01 crc kubenswrapper[4830]: I0227 16:13:01.183387 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dtd44\" (UniqueName: \"kubernetes.io/projected/264690bd-7360-4111-8d54-258b2f960185-kube-api-access-dtd44\") pod \"route-controller-manager-849bff8645-6j7qr\" (UID: \"264690bd-7360-4111-8d54-258b2f960185\") " pod="openshift-route-controller-manager/route-controller-manager-849bff8645-6j7qr" Feb 27 16:13:01 crc kubenswrapper[4830]: I0227 16:13:01.183447 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5cp8t\" (UniqueName: \"kubernetes.io/projected/df408cab-89ac-4635-acac-90f7625c3e98-kube-api-access-5cp8t\") pod \"controller-manager-c48b6b8bc-xjhr4\" (UID: 
\"df408cab-89ac-4635-acac-90f7625c3e98\") " pod="openshift-controller-manager/controller-manager-c48b6b8bc-xjhr4" Feb 27 16:13:01 crc kubenswrapper[4830]: I0227 16:13:01.184247 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/df408cab-89ac-4635-acac-90f7625c3e98-client-ca\") pod \"controller-manager-c48b6b8bc-xjhr4\" (UID: \"df408cab-89ac-4635-acac-90f7625c3e98\") " pod="openshift-controller-manager/controller-manager-c48b6b8bc-xjhr4" Feb 27 16:13:01 crc kubenswrapper[4830]: I0227 16:13:01.184885 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/df408cab-89ac-4635-acac-90f7625c3e98-proxy-ca-bundles\") pod \"controller-manager-c48b6b8bc-xjhr4\" (UID: \"df408cab-89ac-4635-acac-90f7625c3e98\") " pod="openshift-controller-manager/controller-manager-c48b6b8bc-xjhr4" Feb 27 16:13:01 crc kubenswrapper[4830]: I0227 16:13:01.185113 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df408cab-89ac-4635-acac-90f7625c3e98-config\") pod \"controller-manager-c48b6b8bc-xjhr4\" (UID: \"df408cab-89ac-4635-acac-90f7625c3e98\") " pod="openshift-controller-manager/controller-manager-c48b6b8bc-xjhr4" Feb 27 16:13:01 crc kubenswrapper[4830]: I0227 16:13:01.185238 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/264690bd-7360-4111-8d54-258b2f960185-client-ca\") pod \"route-controller-manager-849bff8645-6j7qr\" (UID: \"264690bd-7360-4111-8d54-258b2f960185\") " pod="openshift-route-controller-manager/route-controller-manager-849bff8645-6j7qr" Feb 27 16:13:01 crc kubenswrapper[4830]: I0227 16:13:01.185372 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/264690bd-7360-4111-8d54-258b2f960185-config\") pod 
\"route-controller-manager-849bff8645-6j7qr\" (UID: \"264690bd-7360-4111-8d54-258b2f960185\") " pod="openshift-route-controller-manager/route-controller-manager-849bff8645-6j7qr" Feb 27 16:13:01 crc kubenswrapper[4830]: I0227 16:13:01.188161 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/264690bd-7360-4111-8d54-258b2f960185-serving-cert\") pod \"route-controller-manager-849bff8645-6j7qr\" (UID: \"264690bd-7360-4111-8d54-258b2f960185\") " pod="openshift-route-controller-manager/route-controller-manager-849bff8645-6j7qr" Feb 27 16:13:01 crc kubenswrapper[4830]: I0227 16:13:01.199993 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/df408cab-89ac-4635-acac-90f7625c3e98-serving-cert\") pod \"controller-manager-c48b6b8bc-xjhr4\" (UID: \"df408cab-89ac-4635-acac-90f7625c3e98\") " pod="openshift-controller-manager/controller-manager-c48b6b8bc-xjhr4" Feb 27 16:13:01 crc kubenswrapper[4830]: I0227 16:13:01.208746 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dtd44\" (UniqueName: \"kubernetes.io/projected/264690bd-7360-4111-8d54-258b2f960185-kube-api-access-dtd44\") pod \"route-controller-manager-849bff8645-6j7qr\" (UID: \"264690bd-7360-4111-8d54-258b2f960185\") " pod="openshift-route-controller-manager/route-controller-manager-849bff8645-6j7qr" Feb 27 16:13:01 crc kubenswrapper[4830]: I0227 16:13:01.210699 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5cp8t\" (UniqueName: \"kubernetes.io/projected/df408cab-89ac-4635-acac-90f7625c3e98-kube-api-access-5cp8t\") pod \"controller-manager-c48b6b8bc-xjhr4\" (UID: \"df408cab-89ac-4635-acac-90f7625c3e98\") " pod="openshift-controller-manager/controller-manager-c48b6b8bc-xjhr4" Feb 27 16:13:01 crc kubenswrapper[4830]: I0227 16:13:01.315967 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-849bff8645-6j7qr" Feb 27 16:13:01 crc kubenswrapper[4830]: I0227 16:13:01.324332 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-c48b6b8bc-xjhr4" Feb 27 16:13:01 crc kubenswrapper[4830]: I0227 16:13:01.612863 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-849bff8645-6j7qr"] Feb 27 16:13:01 crc kubenswrapper[4830]: W0227 16:13:01.653933 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddf408cab_89ac_4635_acac_90f7625c3e98.slice/crio-1d5a2ea338575ccd571ff7d14d8d67de8a0f7e0aa891f7066120d597184df545 WatchSource:0}: Error finding container 1d5a2ea338575ccd571ff7d14d8d67de8a0f7e0aa891f7066120d597184df545: Status 404 returned error can't find the container with id 1d5a2ea338575ccd571ff7d14d8d67de8a0f7e0aa891f7066120d597184df545 Feb 27 16:13:01 crc kubenswrapper[4830]: I0227 16:13:01.654153 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-c48b6b8bc-xjhr4"] Feb 27 16:13:02 crc kubenswrapper[4830]: I0227 16:13:02.333840 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-849bff8645-6j7qr" event={"ID":"264690bd-7360-4111-8d54-258b2f960185","Type":"ContainerStarted","Data":"b43e08b7ad49958644fd555ae929a651fe32455014c47fd901e7cfcc0eabd1fb"} Feb 27 16:13:02 crc kubenswrapper[4830]: I0227 16:13:02.333900 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-849bff8645-6j7qr" event={"ID":"264690bd-7360-4111-8d54-258b2f960185","Type":"ContainerStarted","Data":"6b3a537b3db926f6ec4d12caeb40545968c8383d0f8fc4ff70c69811957e9025"} Feb 27 16:13:02 crc kubenswrapper[4830]: I0227 
16:13:02.334072 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-849bff8645-6j7qr" Feb 27 16:13:02 crc kubenswrapper[4830]: I0227 16:13:02.336794 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-c48b6b8bc-xjhr4" event={"ID":"df408cab-89ac-4635-acac-90f7625c3e98","Type":"ContainerStarted","Data":"db9ae80987e234a43153a8157412766bcec753cb2fe398f0bdb97b20c91fc35c"} Feb 27 16:13:02 crc kubenswrapper[4830]: I0227 16:13:02.336822 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-c48b6b8bc-xjhr4" event={"ID":"df408cab-89ac-4635-acac-90f7625c3e98","Type":"ContainerStarted","Data":"1d5a2ea338575ccd571ff7d14d8d67de8a0f7e0aa891f7066120d597184df545"} Feb 27 16:13:02 crc kubenswrapper[4830]: I0227 16:13:02.337050 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-c48b6b8bc-xjhr4" Feb 27 16:13:02 crc kubenswrapper[4830]: I0227 16:13:02.339741 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-849bff8645-6j7qr" Feb 27 16:13:02 crc kubenswrapper[4830]: I0227 16:13:02.341847 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-c48b6b8bc-xjhr4" Feb 27 16:13:02 crc kubenswrapper[4830]: I0227 16:13:02.355843 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-849bff8645-6j7qr" podStartSLOduration=3.355827277 podStartE2EDuration="3.355827277s" podCreationTimestamp="2026-02-27 16:12:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:13:02.354296288 +0000 UTC m=+378.443568761" 
watchObservedRunningTime="2026-02-27 16:13:02.355827277 +0000 UTC m=+378.445099740" Feb 27 16:13:02 crc kubenswrapper[4830]: I0227 16:13:02.401601 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-c48b6b8bc-xjhr4" podStartSLOduration=4.401585995 podStartE2EDuration="4.401585995s" podCreationTimestamp="2026-02-27 16:12:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:13:02.400546398 +0000 UTC m=+378.489818871" watchObservedRunningTime="2026-02-27 16:13:02.401585995 +0000 UTC m=+378.490858468" Feb 27 16:13:15 crc kubenswrapper[4830]: I0227 16:13:15.337773 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 27 16:13:18 crc kubenswrapper[4830]: I0227 16:13:18.907036 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-c48b6b8bc-xjhr4"] Feb 27 16:13:18 crc kubenswrapper[4830]: I0227 16:13:18.907367 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-c48b6b8bc-xjhr4" podUID="df408cab-89ac-4635-acac-90f7625c3e98" containerName="controller-manager" containerID="cri-o://db9ae80987e234a43153a8157412766bcec753cb2fe398f0bdb97b20c91fc35c" gracePeriod=30 Feb 27 16:13:18 crc kubenswrapper[4830]: I0227 16:13:18.931601 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-849bff8645-6j7qr"] Feb 27 16:13:18 crc kubenswrapper[4830]: I0227 16:13:18.932076 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-849bff8645-6j7qr" podUID="264690bd-7360-4111-8d54-258b2f960185" containerName="route-controller-manager" 
containerID="cri-o://b43e08b7ad49958644fd555ae929a651fe32455014c47fd901e7cfcc0eabd1fb" gracePeriod=30 Feb 27 16:13:20 crc kubenswrapper[4830]: I0227 16:13:20.457716 4830 generic.go:334] "Generic (PLEG): container finished" podID="264690bd-7360-4111-8d54-258b2f960185" containerID="b43e08b7ad49958644fd555ae929a651fe32455014c47fd901e7cfcc0eabd1fb" exitCode=0 Feb 27 16:13:20 crc kubenswrapper[4830]: I0227 16:13:20.457825 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-849bff8645-6j7qr" event={"ID":"264690bd-7360-4111-8d54-258b2f960185","Type":"ContainerDied","Data":"b43e08b7ad49958644fd555ae929a651fe32455014c47fd901e7cfcc0eabd1fb"} Feb 27 16:13:20 crc kubenswrapper[4830]: I0227 16:13:20.460427 4830 generic.go:334] "Generic (PLEG): container finished" podID="df408cab-89ac-4635-acac-90f7625c3e98" containerID="db9ae80987e234a43153a8157412766bcec753cb2fe398f0bdb97b20c91fc35c" exitCode=0 Feb 27 16:13:20 crc kubenswrapper[4830]: I0227 16:13:20.460491 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-c48b6b8bc-xjhr4" event={"ID":"df408cab-89ac-4635-acac-90f7625c3e98","Type":"ContainerDied","Data":"db9ae80987e234a43153a8157412766bcec753cb2fe398f0bdb97b20c91fc35c"} Feb 27 16:13:20 crc kubenswrapper[4830]: I0227 16:13:20.502093 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 27 16:13:21 crc kubenswrapper[4830]: I0227 16:13:21.316560 4830 patch_prober.go:28] interesting pod/route-controller-manager-849bff8645-6j7qr container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.68:8443/healthz\": dial tcp 10.217.0.68:8443: connect: connection refused" start-of-body= Feb 27 16:13:21 crc kubenswrapper[4830]: I0227 16:13:21.316635 4830 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-route-controller-manager/route-controller-manager-849bff8645-6j7qr" podUID="264690bd-7360-4111-8d54-258b2f960185" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.68:8443/healthz\": dial tcp 10.217.0.68:8443: connect: connection refused" Feb 27 16:13:21 crc kubenswrapper[4830]: I0227 16:13:21.317762 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-c48b6b8bc-xjhr4" Feb 27 16:13:21 crc kubenswrapper[4830]: I0227 16:13:21.417178 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-58c9fbdd4b-qlgdj"] Feb 27 16:13:21 crc kubenswrapper[4830]: E0227 16:13:21.417842 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df408cab-89ac-4635-acac-90f7625c3e98" containerName="controller-manager" Feb 27 16:13:21 crc kubenswrapper[4830]: I0227 16:13:21.417873 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="df408cab-89ac-4635-acac-90f7625c3e98" containerName="controller-manager" Feb 27 16:13:21 crc kubenswrapper[4830]: I0227 16:13:21.418582 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="df408cab-89ac-4635-acac-90f7625c3e98" containerName="controller-manager" Feb 27 16:13:21 crc kubenswrapper[4830]: I0227 16:13:21.423597 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-58c9fbdd4b-qlgdj" Feb 27 16:13:21 crc kubenswrapper[4830]: I0227 16:13:21.433751 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-58c9fbdd4b-qlgdj"] Feb 27 16:13:21 crc kubenswrapper[4830]: I0227 16:13:21.469780 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df408cab-89ac-4635-acac-90f7625c3e98-config\") pod \"df408cab-89ac-4635-acac-90f7625c3e98\" (UID: \"df408cab-89ac-4635-acac-90f7625c3e98\") " Feb 27 16:13:21 crc kubenswrapper[4830]: I0227 16:13:21.469846 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/df408cab-89ac-4635-acac-90f7625c3e98-proxy-ca-bundles\") pod \"df408cab-89ac-4635-acac-90f7625c3e98\" (UID: \"df408cab-89ac-4635-acac-90f7625c3e98\") " Feb 27 16:13:21 crc kubenswrapper[4830]: I0227 16:13:21.469918 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5cp8t\" (UniqueName: \"kubernetes.io/projected/df408cab-89ac-4635-acac-90f7625c3e98-kube-api-access-5cp8t\") pod \"df408cab-89ac-4635-acac-90f7625c3e98\" (UID: \"df408cab-89ac-4635-acac-90f7625c3e98\") " Feb 27 16:13:21 crc kubenswrapper[4830]: I0227 16:13:21.470065 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/df408cab-89ac-4635-acac-90f7625c3e98-client-ca\") pod \"df408cab-89ac-4635-acac-90f7625c3e98\" (UID: \"df408cab-89ac-4635-acac-90f7625c3e98\") " Feb 27 16:13:21 crc kubenswrapper[4830]: I0227 16:13:21.470143 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/df408cab-89ac-4635-acac-90f7625c3e98-serving-cert\") pod \"df408cab-89ac-4635-acac-90f7625c3e98\" (UID: 
\"df408cab-89ac-4635-acac-90f7625c3e98\") " Feb 27 16:13:21 crc kubenswrapper[4830]: I0227 16:13:21.473429 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df408cab-89ac-4635-acac-90f7625c3e98-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "df408cab-89ac-4635-acac-90f7625c3e98" (UID: "df408cab-89ac-4635-acac-90f7625c3e98"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:13:21 crc kubenswrapper[4830]: I0227 16:13:21.478645 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df408cab-89ac-4635-acac-90f7625c3e98-config" (OuterVolumeSpecName: "config") pod "df408cab-89ac-4635-acac-90f7625c3e98" (UID: "df408cab-89ac-4635-acac-90f7625c3e98"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:13:21 crc kubenswrapper[4830]: I0227 16:13:21.478966 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df408cab-89ac-4635-acac-90f7625c3e98-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "df408cab-89ac-4635-acac-90f7625c3e98" (UID: "df408cab-89ac-4635-acac-90f7625c3e98"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:13:21 crc kubenswrapper[4830]: I0227 16:13:21.478874 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df408cab-89ac-4635-acac-90f7625c3e98-client-ca" (OuterVolumeSpecName: "client-ca") pod "df408cab-89ac-4635-acac-90f7625c3e98" (UID: "df408cab-89ac-4635-acac-90f7625c3e98"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:13:21 crc kubenswrapper[4830]: I0227 16:13:21.480334 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df408cab-89ac-4635-acac-90f7625c3e98-kube-api-access-5cp8t" (OuterVolumeSpecName: "kube-api-access-5cp8t") pod "df408cab-89ac-4635-acac-90f7625c3e98" (UID: "df408cab-89ac-4635-acac-90f7625c3e98"). InnerVolumeSpecName "kube-api-access-5cp8t". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:13:21 crc kubenswrapper[4830]: I0227 16:13:21.508338 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-c48b6b8bc-xjhr4" event={"ID":"df408cab-89ac-4635-acac-90f7625c3e98","Type":"ContainerDied","Data":"1d5a2ea338575ccd571ff7d14d8d67de8a0f7e0aa891f7066120d597184df545"} Feb 27 16:13:21 crc kubenswrapper[4830]: I0227 16:13:21.508394 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-c48b6b8bc-xjhr4" Feb 27 16:13:21 crc kubenswrapper[4830]: I0227 16:13:21.508404 4830 scope.go:117] "RemoveContainer" containerID="db9ae80987e234a43153a8157412766bcec753cb2fe398f0bdb97b20c91fc35c" Feb 27 16:13:21 crc kubenswrapper[4830]: I0227 16:13:21.537787 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-c48b6b8bc-xjhr4"] Feb 27 16:13:21 crc kubenswrapper[4830]: I0227 16:13:21.541378 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-c48b6b8bc-xjhr4"] Feb 27 16:13:21 crc kubenswrapper[4830]: I0227 16:13:21.571722 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5bb5794f-0af3-4d3d-aff1-73d8fb49b63f-client-ca\") pod \"controller-manager-58c9fbdd4b-qlgdj\" (UID: \"5bb5794f-0af3-4d3d-aff1-73d8fb49b63f\") " 
pod="openshift-controller-manager/controller-manager-58c9fbdd4b-qlgdj" Feb 27 16:13:21 crc kubenswrapper[4830]: I0227 16:13:21.571755 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mnh6\" (UniqueName: \"kubernetes.io/projected/5bb5794f-0af3-4d3d-aff1-73d8fb49b63f-kube-api-access-2mnh6\") pod \"controller-manager-58c9fbdd4b-qlgdj\" (UID: \"5bb5794f-0af3-4d3d-aff1-73d8fb49b63f\") " pod="openshift-controller-manager/controller-manager-58c9fbdd4b-qlgdj" Feb 27 16:13:21 crc kubenswrapper[4830]: I0227 16:13:21.571783 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5bb5794f-0af3-4d3d-aff1-73d8fb49b63f-serving-cert\") pod \"controller-manager-58c9fbdd4b-qlgdj\" (UID: \"5bb5794f-0af3-4d3d-aff1-73d8fb49b63f\") " pod="openshift-controller-manager/controller-manager-58c9fbdd4b-qlgdj" Feb 27 16:13:21 crc kubenswrapper[4830]: I0227 16:13:21.571799 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5bb5794f-0af3-4d3d-aff1-73d8fb49b63f-proxy-ca-bundles\") pod \"controller-manager-58c9fbdd4b-qlgdj\" (UID: \"5bb5794f-0af3-4d3d-aff1-73d8fb49b63f\") " pod="openshift-controller-manager/controller-manager-58c9fbdd4b-qlgdj" Feb 27 16:13:21 crc kubenswrapper[4830]: I0227 16:13:21.571828 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5bb5794f-0af3-4d3d-aff1-73d8fb49b63f-config\") pod \"controller-manager-58c9fbdd4b-qlgdj\" (UID: \"5bb5794f-0af3-4d3d-aff1-73d8fb49b63f\") " pod="openshift-controller-manager/controller-manager-58c9fbdd4b-qlgdj" Feb 27 16:13:21 crc kubenswrapper[4830]: I0227 16:13:21.571865 4830 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/df408cab-89ac-4635-acac-90f7625c3e98-client-ca\") on node \"crc\" DevicePath \"\"" Feb 27 16:13:21 crc kubenswrapper[4830]: I0227 16:13:21.571875 4830 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/df408cab-89ac-4635-acac-90f7625c3e98-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 16:13:21 crc kubenswrapper[4830]: I0227 16:13:21.571883 4830 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df408cab-89ac-4635-acac-90f7625c3e98-config\") on node \"crc\" DevicePath \"\"" Feb 27 16:13:21 crc kubenswrapper[4830]: I0227 16:13:21.571892 4830 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/df408cab-89ac-4635-acac-90f7625c3e98-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 27 16:13:21 crc kubenswrapper[4830]: I0227 16:13:21.571901 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5cp8t\" (UniqueName: \"kubernetes.io/projected/df408cab-89ac-4635-acac-90f7625c3e98-kube-api-access-5cp8t\") on node \"crc\" DevicePath \"\"" Feb 27 16:13:21 crc kubenswrapper[4830]: I0227 16:13:21.673190 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5bb5794f-0af3-4d3d-aff1-73d8fb49b63f-serving-cert\") pod \"controller-manager-58c9fbdd4b-qlgdj\" (UID: \"5bb5794f-0af3-4d3d-aff1-73d8fb49b63f\") " pod="openshift-controller-manager/controller-manager-58c9fbdd4b-qlgdj" Feb 27 16:13:21 crc kubenswrapper[4830]: I0227 16:13:21.673235 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5bb5794f-0af3-4d3d-aff1-73d8fb49b63f-proxy-ca-bundles\") pod \"controller-manager-58c9fbdd4b-qlgdj\" (UID: \"5bb5794f-0af3-4d3d-aff1-73d8fb49b63f\") " 
pod="openshift-controller-manager/controller-manager-58c9fbdd4b-qlgdj" Feb 27 16:13:21 crc kubenswrapper[4830]: I0227 16:13:21.673284 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5bb5794f-0af3-4d3d-aff1-73d8fb49b63f-config\") pod \"controller-manager-58c9fbdd4b-qlgdj\" (UID: \"5bb5794f-0af3-4d3d-aff1-73d8fb49b63f\") " pod="openshift-controller-manager/controller-manager-58c9fbdd4b-qlgdj" Feb 27 16:13:21 crc kubenswrapper[4830]: I0227 16:13:21.673398 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5bb5794f-0af3-4d3d-aff1-73d8fb49b63f-client-ca\") pod \"controller-manager-58c9fbdd4b-qlgdj\" (UID: \"5bb5794f-0af3-4d3d-aff1-73d8fb49b63f\") " pod="openshift-controller-manager/controller-manager-58c9fbdd4b-qlgdj" Feb 27 16:13:21 crc kubenswrapper[4830]: I0227 16:13:21.673432 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2mnh6\" (UniqueName: \"kubernetes.io/projected/5bb5794f-0af3-4d3d-aff1-73d8fb49b63f-kube-api-access-2mnh6\") pod \"controller-manager-58c9fbdd4b-qlgdj\" (UID: \"5bb5794f-0af3-4d3d-aff1-73d8fb49b63f\") " pod="openshift-controller-manager/controller-manager-58c9fbdd4b-qlgdj" Feb 27 16:13:21 crc kubenswrapper[4830]: I0227 16:13:21.674460 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5bb5794f-0af3-4d3d-aff1-73d8fb49b63f-client-ca\") pod \"controller-manager-58c9fbdd4b-qlgdj\" (UID: \"5bb5794f-0af3-4d3d-aff1-73d8fb49b63f\") " pod="openshift-controller-manager/controller-manager-58c9fbdd4b-qlgdj" Feb 27 16:13:21 crc kubenswrapper[4830]: I0227 16:13:21.674781 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5bb5794f-0af3-4d3d-aff1-73d8fb49b63f-config\") pod 
\"controller-manager-58c9fbdd4b-qlgdj\" (UID: \"5bb5794f-0af3-4d3d-aff1-73d8fb49b63f\") " pod="openshift-controller-manager/controller-manager-58c9fbdd4b-qlgdj" Feb 27 16:13:21 crc kubenswrapper[4830]: I0227 16:13:21.674972 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5bb5794f-0af3-4d3d-aff1-73d8fb49b63f-proxy-ca-bundles\") pod \"controller-manager-58c9fbdd4b-qlgdj\" (UID: \"5bb5794f-0af3-4d3d-aff1-73d8fb49b63f\") " pod="openshift-controller-manager/controller-manager-58c9fbdd4b-qlgdj" Feb 27 16:13:21 crc kubenswrapper[4830]: I0227 16:13:21.677562 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5bb5794f-0af3-4d3d-aff1-73d8fb49b63f-serving-cert\") pod \"controller-manager-58c9fbdd4b-qlgdj\" (UID: \"5bb5794f-0af3-4d3d-aff1-73d8fb49b63f\") " pod="openshift-controller-manager/controller-manager-58c9fbdd4b-qlgdj" Feb 27 16:13:21 crc kubenswrapper[4830]: I0227 16:13:21.931287 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2mnh6\" (UniqueName: \"kubernetes.io/projected/5bb5794f-0af3-4d3d-aff1-73d8fb49b63f-kube-api-access-2mnh6\") pod \"controller-manager-58c9fbdd4b-qlgdj\" (UID: \"5bb5794f-0af3-4d3d-aff1-73d8fb49b63f\") " pod="openshift-controller-manager/controller-manager-58c9fbdd4b-qlgdj" Feb 27 16:13:22 crc kubenswrapper[4830]: I0227 16:13:22.081975 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-58c9fbdd4b-qlgdj" Feb 27 16:13:22 crc kubenswrapper[4830]: I0227 16:13:22.914856 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df408cab-89ac-4635-acac-90f7625c3e98" path="/var/lib/kubelet/pods/df408cab-89ac-4635-acac-90f7625c3e98/volumes" Feb 27 16:13:22 crc kubenswrapper[4830]: I0227 16:13:22.986412 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-849bff8645-6j7qr" Feb 27 16:13:23 crc kubenswrapper[4830]: I0227 16:13:23.090726 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/264690bd-7360-4111-8d54-258b2f960185-client-ca\") pod \"264690bd-7360-4111-8d54-258b2f960185\" (UID: \"264690bd-7360-4111-8d54-258b2f960185\") " Feb 27 16:13:23 crc kubenswrapper[4830]: I0227 16:13:23.090827 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/264690bd-7360-4111-8d54-258b2f960185-config\") pod \"264690bd-7360-4111-8d54-258b2f960185\" (UID: \"264690bd-7360-4111-8d54-258b2f960185\") " Feb 27 16:13:23 crc kubenswrapper[4830]: I0227 16:13:23.090856 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dtd44\" (UniqueName: \"kubernetes.io/projected/264690bd-7360-4111-8d54-258b2f960185-kube-api-access-dtd44\") pod \"264690bd-7360-4111-8d54-258b2f960185\" (UID: \"264690bd-7360-4111-8d54-258b2f960185\") " Feb 27 16:13:23 crc kubenswrapper[4830]: I0227 16:13:23.090985 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/264690bd-7360-4111-8d54-258b2f960185-serving-cert\") pod \"264690bd-7360-4111-8d54-258b2f960185\" (UID: \"264690bd-7360-4111-8d54-258b2f960185\") " Feb 27 16:13:23 crc kubenswrapper[4830]: I0227 16:13:23.092722 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/264690bd-7360-4111-8d54-258b2f960185-client-ca" (OuterVolumeSpecName: "client-ca") pod "264690bd-7360-4111-8d54-258b2f960185" (UID: "264690bd-7360-4111-8d54-258b2f960185"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:13:23 crc kubenswrapper[4830]: I0227 16:13:23.093274 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/264690bd-7360-4111-8d54-258b2f960185-config" (OuterVolumeSpecName: "config") pod "264690bd-7360-4111-8d54-258b2f960185" (UID: "264690bd-7360-4111-8d54-258b2f960185"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:13:23 crc kubenswrapper[4830]: I0227 16:13:23.103396 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/264690bd-7360-4111-8d54-258b2f960185-kube-api-access-dtd44" (OuterVolumeSpecName: "kube-api-access-dtd44") pod "264690bd-7360-4111-8d54-258b2f960185" (UID: "264690bd-7360-4111-8d54-258b2f960185"). InnerVolumeSpecName "kube-api-access-dtd44". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:13:23 crc kubenswrapper[4830]: I0227 16:13:23.114257 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/264690bd-7360-4111-8d54-258b2f960185-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "264690bd-7360-4111-8d54-258b2f960185" (UID: "264690bd-7360-4111-8d54-258b2f960185"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:13:23 crc kubenswrapper[4830]: I0227 16:13:23.192084 4830 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/264690bd-7360-4111-8d54-258b2f960185-client-ca\") on node \"crc\" DevicePath \"\"" Feb 27 16:13:23 crc kubenswrapper[4830]: I0227 16:13:23.192109 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dtd44\" (UniqueName: \"kubernetes.io/projected/264690bd-7360-4111-8d54-258b2f960185-kube-api-access-dtd44\") on node \"crc\" DevicePath \"\"" Feb 27 16:13:23 crc kubenswrapper[4830]: I0227 16:13:23.192123 4830 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/264690bd-7360-4111-8d54-258b2f960185-config\") on node \"crc\" DevicePath \"\"" Feb 27 16:13:23 crc kubenswrapper[4830]: I0227 16:13:23.192133 4830 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/264690bd-7360-4111-8d54-258b2f960185-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 16:13:23 crc kubenswrapper[4830]: I0227 16:13:23.445952 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-58c9fbdd4b-qlgdj"] Feb 27 16:13:23 crc kubenswrapper[4830]: I0227 16:13:23.524938 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-849bff8645-6j7qr" event={"ID":"264690bd-7360-4111-8d54-258b2f960185","Type":"ContainerDied","Data":"6b3a537b3db926f6ec4d12caeb40545968c8383d0f8fc4ff70c69811957e9025"} Feb 27 16:13:23 crc kubenswrapper[4830]: I0227 16:13:23.525017 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-849bff8645-6j7qr" Feb 27 16:13:23 crc kubenswrapper[4830]: I0227 16:13:23.525027 4830 scope.go:117] "RemoveContainer" containerID="b43e08b7ad49958644fd555ae929a651fe32455014c47fd901e7cfcc0eabd1fb" Feb 27 16:13:23 crc kubenswrapper[4830]: I0227 16:13:23.527134 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-58c9fbdd4b-qlgdj" event={"ID":"5bb5794f-0af3-4d3d-aff1-73d8fb49b63f","Type":"ContainerStarted","Data":"237032bb66afbea1acde83ffb4edf6d8d66dc40083533dac1543cd4a4eaace0b"} Feb 27 16:13:23 crc kubenswrapper[4830]: I0227 16:13:23.581600 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-849bff8645-6j7qr"] Feb 27 16:13:23 crc kubenswrapper[4830]: I0227 16:13:23.587059 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-849bff8645-6j7qr"] Feb 27 16:13:23 crc kubenswrapper[4830]: I0227 16:13:23.964516 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7447957dcb-tg5k8"] Feb 27 16:13:23 crc kubenswrapper[4830]: E0227 16:13:23.965895 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="264690bd-7360-4111-8d54-258b2f960185" containerName="route-controller-manager" Feb 27 16:13:23 crc kubenswrapper[4830]: I0227 16:13:23.965919 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="264690bd-7360-4111-8d54-258b2f960185" containerName="route-controller-manager" Feb 27 16:13:23 crc kubenswrapper[4830]: I0227 16:13:23.966201 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="264690bd-7360-4111-8d54-258b2f960185" containerName="route-controller-manager" Feb 27 16:13:23 crc kubenswrapper[4830]: I0227 16:13:23.967059 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7447957dcb-tg5k8" Feb 27 16:13:23 crc kubenswrapper[4830]: I0227 16:13:23.970097 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 27 16:13:23 crc kubenswrapper[4830]: I0227 16:13:23.970670 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 27 16:13:23 crc kubenswrapper[4830]: I0227 16:13:23.971151 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 27 16:13:23 crc kubenswrapper[4830]: I0227 16:13:23.971471 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 27 16:13:23 crc kubenswrapper[4830]: I0227 16:13:23.971775 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 27 16:13:23 crc kubenswrapper[4830]: I0227 16:13:23.972257 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 27 16:13:23 crc kubenswrapper[4830]: I0227 16:13:23.983293 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7447957dcb-tg5k8"] Feb 27 16:13:24 crc kubenswrapper[4830]: I0227 16:13:24.106014 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd2f243e-4291-4cdb-989f-e285347ce7e7-config\") pod \"route-controller-manager-7447957dcb-tg5k8\" (UID: \"dd2f243e-4291-4cdb-989f-e285347ce7e7\") " pod="openshift-route-controller-manager/route-controller-manager-7447957dcb-tg5k8" Feb 27 16:13:24 crc kubenswrapper[4830]: I0227 16:13:24.106088 4830 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zwvd\" (UniqueName: \"kubernetes.io/projected/dd2f243e-4291-4cdb-989f-e285347ce7e7-kube-api-access-5zwvd\") pod \"route-controller-manager-7447957dcb-tg5k8\" (UID: \"dd2f243e-4291-4cdb-989f-e285347ce7e7\") " pod="openshift-route-controller-manager/route-controller-manager-7447957dcb-tg5k8" Feb 27 16:13:24 crc kubenswrapper[4830]: I0227 16:13:24.106243 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dd2f243e-4291-4cdb-989f-e285347ce7e7-client-ca\") pod \"route-controller-manager-7447957dcb-tg5k8\" (UID: \"dd2f243e-4291-4cdb-989f-e285347ce7e7\") " pod="openshift-route-controller-manager/route-controller-manager-7447957dcb-tg5k8" Feb 27 16:13:24 crc kubenswrapper[4830]: I0227 16:13:24.106338 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dd2f243e-4291-4cdb-989f-e285347ce7e7-serving-cert\") pod \"route-controller-manager-7447957dcb-tg5k8\" (UID: \"dd2f243e-4291-4cdb-989f-e285347ce7e7\") " pod="openshift-route-controller-manager/route-controller-manager-7447957dcb-tg5k8" Feb 27 16:13:24 crc kubenswrapper[4830]: I0227 16:13:24.208298 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd2f243e-4291-4cdb-989f-e285347ce7e7-config\") pod \"route-controller-manager-7447957dcb-tg5k8\" (UID: \"dd2f243e-4291-4cdb-989f-e285347ce7e7\") " pod="openshift-route-controller-manager/route-controller-manager-7447957dcb-tg5k8" Feb 27 16:13:24 crc kubenswrapper[4830]: I0227 16:13:24.208371 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5zwvd\" (UniqueName: \"kubernetes.io/projected/dd2f243e-4291-4cdb-989f-e285347ce7e7-kube-api-access-5zwvd\") pod 
\"route-controller-manager-7447957dcb-tg5k8\" (UID: \"dd2f243e-4291-4cdb-989f-e285347ce7e7\") " pod="openshift-route-controller-manager/route-controller-manager-7447957dcb-tg5k8" Feb 27 16:13:24 crc kubenswrapper[4830]: I0227 16:13:24.208430 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dd2f243e-4291-4cdb-989f-e285347ce7e7-client-ca\") pod \"route-controller-manager-7447957dcb-tg5k8\" (UID: \"dd2f243e-4291-4cdb-989f-e285347ce7e7\") " pod="openshift-route-controller-manager/route-controller-manager-7447957dcb-tg5k8" Feb 27 16:13:24 crc kubenswrapper[4830]: I0227 16:13:24.208514 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dd2f243e-4291-4cdb-989f-e285347ce7e7-serving-cert\") pod \"route-controller-manager-7447957dcb-tg5k8\" (UID: \"dd2f243e-4291-4cdb-989f-e285347ce7e7\") " pod="openshift-route-controller-manager/route-controller-manager-7447957dcb-tg5k8" Feb 27 16:13:24 crc kubenswrapper[4830]: I0227 16:13:24.210062 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dd2f243e-4291-4cdb-989f-e285347ce7e7-client-ca\") pod \"route-controller-manager-7447957dcb-tg5k8\" (UID: \"dd2f243e-4291-4cdb-989f-e285347ce7e7\") " pod="openshift-route-controller-manager/route-controller-manager-7447957dcb-tg5k8" Feb 27 16:13:24 crc kubenswrapper[4830]: I0227 16:13:24.210251 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd2f243e-4291-4cdb-989f-e285347ce7e7-config\") pod \"route-controller-manager-7447957dcb-tg5k8\" (UID: \"dd2f243e-4291-4cdb-989f-e285347ce7e7\") " pod="openshift-route-controller-manager/route-controller-manager-7447957dcb-tg5k8" Feb 27 16:13:24 crc kubenswrapper[4830]: I0227 16:13:24.218662 4830 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dd2f243e-4291-4cdb-989f-e285347ce7e7-serving-cert\") pod \"route-controller-manager-7447957dcb-tg5k8\" (UID: \"dd2f243e-4291-4cdb-989f-e285347ce7e7\") " pod="openshift-route-controller-manager/route-controller-manager-7447957dcb-tg5k8" Feb 27 16:13:24 crc kubenswrapper[4830]: I0227 16:13:24.230168 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5zwvd\" (UniqueName: \"kubernetes.io/projected/dd2f243e-4291-4cdb-989f-e285347ce7e7-kube-api-access-5zwvd\") pod \"route-controller-manager-7447957dcb-tg5k8\" (UID: \"dd2f243e-4291-4cdb-989f-e285347ce7e7\") " pod="openshift-route-controller-manager/route-controller-manager-7447957dcb-tg5k8" Feb 27 16:13:24 crc kubenswrapper[4830]: I0227 16:13:24.296115 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7447957dcb-tg5k8" Feb 27 16:13:28 crc kubenswrapper[4830]: I0227 16:13:24.584615 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7447957dcb-tg5k8"] Feb 27 16:13:28 crc kubenswrapper[4830]: I0227 16:13:24.775867 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="264690bd-7360-4111-8d54-258b2f960185" path="/var/lib/kubelet/pods/264690bd-7360-4111-8d54-258b2f960185/volumes" Feb 27 16:13:28 crc kubenswrapper[4830]: I0227 16:13:25.545473 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7447957dcb-tg5k8" event={"ID":"dd2f243e-4291-4cdb-989f-e285347ce7e7","Type":"ContainerStarted","Data":"57be6305866053d4099854dde21b7ac96c8c0926b3f622b8e91deff17e408ec3"} Feb 27 16:13:28 crc kubenswrapper[4830]: I0227 16:13:25.545812 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7447957dcb-tg5k8" 
event={"ID":"dd2f243e-4291-4cdb-989f-e285347ce7e7","Type":"ContainerStarted","Data":"221e0c87c87b0a46a2530e31015d643b49632e8ea8fb109eacba8861c2124be6"} Feb 27 16:13:28 crc kubenswrapper[4830]: I0227 16:13:25.545834 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7447957dcb-tg5k8" Feb 27 16:13:28 crc kubenswrapper[4830]: I0227 16:13:25.550446 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-58c9fbdd4b-qlgdj" event={"ID":"5bb5794f-0af3-4d3d-aff1-73d8fb49b63f","Type":"ContainerStarted","Data":"57a4f3fb2792af52b24b3b516384780ac987315fef50aee4f851a594b13085fd"} Feb 27 16:13:28 crc kubenswrapper[4830]: I0227 16:13:25.550767 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-58c9fbdd4b-qlgdj" Feb 27 16:13:28 crc kubenswrapper[4830]: I0227 16:13:25.557172 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-58c9fbdd4b-qlgdj" Feb 27 16:13:28 crc kubenswrapper[4830]: I0227 16:13:25.570362 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7447957dcb-tg5k8" podStartSLOduration=7.570337976 podStartE2EDuration="7.570337976s" podCreationTimestamp="2026-02-27 16:13:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:13:25.568802606 +0000 UTC m=+401.658075079" watchObservedRunningTime="2026-02-27 16:13:25.570337976 +0000 UTC m=+401.659610479" Feb 27 16:13:28 crc kubenswrapper[4830]: I0227 16:13:25.587913 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-58c9fbdd4b-qlgdj" podStartSLOduration=7.587897536 podStartE2EDuration="7.587897536s" 
podCreationTimestamp="2026-02-27 16:13:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:13:25.587209768 +0000 UTC m=+401.676482261" watchObservedRunningTime="2026-02-27 16:13:25.587897536 +0000 UTC m=+401.677169999" Feb 27 16:13:28 crc kubenswrapper[4830]: I0227 16:13:25.666478 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7447957dcb-tg5k8" Feb 27 16:13:33 crc kubenswrapper[4830]: I0227 16:13:33.160116 4830 patch_prober.go:28] interesting pod/machine-config-daemon-2tv5v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 16:13:33 crc kubenswrapper[4830]: I0227 16:13:33.160755 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 16:13:58 crc kubenswrapper[4830]: I0227 16:13:58.511065 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-67gxz"] Feb 27 16:13:58 crc kubenswrapper[4830]: I0227 16:13:58.512735 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-67gxz" Feb 27 16:13:58 crc kubenswrapper[4830]: I0227 16:13:58.527445 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-67gxz"] Feb 27 16:13:58 crc kubenswrapper[4830]: I0227 16:13:58.683285 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/df1d6f86-a21d-4487-98f4-f5eb7d3248f6-registry-tls\") pod \"image-registry-66df7c8f76-67gxz\" (UID: \"df1d6f86-a21d-4487-98f4-f5eb7d3248f6\") " pod="openshift-image-registry/image-registry-66df7c8f76-67gxz" Feb 27 16:13:58 crc kubenswrapper[4830]: I0227 16:13:58.683357 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/df1d6f86-a21d-4487-98f4-f5eb7d3248f6-bound-sa-token\") pod \"image-registry-66df7c8f76-67gxz\" (UID: \"df1d6f86-a21d-4487-98f4-f5eb7d3248f6\") " pod="openshift-image-registry/image-registry-66df7c8f76-67gxz" Feb 27 16:13:58 crc kubenswrapper[4830]: I0227 16:13:58.683485 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/df1d6f86-a21d-4487-98f4-f5eb7d3248f6-trusted-ca\") pod \"image-registry-66df7c8f76-67gxz\" (UID: \"df1d6f86-a21d-4487-98f4-f5eb7d3248f6\") " pod="openshift-image-registry/image-registry-66df7c8f76-67gxz" Feb 27 16:13:58 crc kubenswrapper[4830]: I0227 16:13:58.683581 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-67gxz\" (UID: \"df1d6f86-a21d-4487-98f4-f5eb7d3248f6\") " pod="openshift-image-registry/image-registry-66df7c8f76-67gxz" 
Feb 27 16:13:58 crc kubenswrapper[4830]: I0227 16:13:58.683652 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/df1d6f86-a21d-4487-98f4-f5eb7d3248f6-installation-pull-secrets\") pod \"image-registry-66df7c8f76-67gxz\" (UID: \"df1d6f86-a21d-4487-98f4-f5eb7d3248f6\") " pod="openshift-image-registry/image-registry-66df7c8f76-67gxz" Feb 27 16:13:58 crc kubenswrapper[4830]: I0227 16:13:58.683696 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/df1d6f86-a21d-4487-98f4-f5eb7d3248f6-registry-certificates\") pod \"image-registry-66df7c8f76-67gxz\" (UID: \"df1d6f86-a21d-4487-98f4-f5eb7d3248f6\") " pod="openshift-image-registry/image-registry-66df7c8f76-67gxz" Feb 27 16:13:58 crc kubenswrapper[4830]: I0227 16:13:58.683737 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbw7v\" (UniqueName: \"kubernetes.io/projected/df1d6f86-a21d-4487-98f4-f5eb7d3248f6-kube-api-access-qbw7v\") pod \"image-registry-66df7c8f76-67gxz\" (UID: \"df1d6f86-a21d-4487-98f4-f5eb7d3248f6\") " pod="openshift-image-registry/image-registry-66df7c8f76-67gxz" Feb 27 16:13:58 crc kubenswrapper[4830]: I0227 16:13:58.683814 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/df1d6f86-a21d-4487-98f4-f5eb7d3248f6-ca-trust-extracted\") pod \"image-registry-66df7c8f76-67gxz\" (UID: \"df1d6f86-a21d-4487-98f4-f5eb7d3248f6\") " pod="openshift-image-registry/image-registry-66df7c8f76-67gxz" Feb 27 16:13:58 crc kubenswrapper[4830]: I0227 16:13:58.713597 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-67gxz\" (UID: \"df1d6f86-a21d-4487-98f4-f5eb7d3248f6\") " pod="openshift-image-registry/image-registry-66df7c8f76-67gxz" Feb 27 16:13:58 crc kubenswrapper[4830]: I0227 16:13:58.784823 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/df1d6f86-a21d-4487-98f4-f5eb7d3248f6-trusted-ca\") pod \"image-registry-66df7c8f76-67gxz\" (UID: \"df1d6f86-a21d-4487-98f4-f5eb7d3248f6\") " pod="openshift-image-registry/image-registry-66df7c8f76-67gxz" Feb 27 16:13:58 crc kubenswrapper[4830]: I0227 16:13:58.785034 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/df1d6f86-a21d-4487-98f4-f5eb7d3248f6-installation-pull-secrets\") pod \"image-registry-66df7c8f76-67gxz\" (UID: \"df1d6f86-a21d-4487-98f4-f5eb7d3248f6\") " pod="openshift-image-registry/image-registry-66df7c8f76-67gxz" Feb 27 16:13:58 crc kubenswrapper[4830]: I0227 16:13:58.785097 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/df1d6f86-a21d-4487-98f4-f5eb7d3248f6-registry-certificates\") pod \"image-registry-66df7c8f76-67gxz\" (UID: \"df1d6f86-a21d-4487-98f4-f5eb7d3248f6\") " pod="openshift-image-registry/image-registry-66df7c8f76-67gxz" Feb 27 16:13:58 crc kubenswrapper[4830]: I0227 16:13:58.785158 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qbw7v\" (UniqueName: \"kubernetes.io/projected/df1d6f86-a21d-4487-98f4-f5eb7d3248f6-kube-api-access-qbw7v\") pod \"image-registry-66df7c8f76-67gxz\" (UID: \"df1d6f86-a21d-4487-98f4-f5eb7d3248f6\") " pod="openshift-image-registry/image-registry-66df7c8f76-67gxz" Feb 27 16:13:58 crc kubenswrapper[4830]: I0227 16:13:58.785245 4830 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/df1d6f86-a21d-4487-98f4-f5eb7d3248f6-ca-trust-extracted\") pod \"image-registry-66df7c8f76-67gxz\" (UID: \"df1d6f86-a21d-4487-98f4-f5eb7d3248f6\") " pod="openshift-image-registry/image-registry-66df7c8f76-67gxz" Feb 27 16:13:58 crc kubenswrapper[4830]: I0227 16:13:58.785369 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/df1d6f86-a21d-4487-98f4-f5eb7d3248f6-registry-tls\") pod \"image-registry-66df7c8f76-67gxz\" (UID: \"df1d6f86-a21d-4487-98f4-f5eb7d3248f6\") " pod="openshift-image-registry/image-registry-66df7c8f76-67gxz" Feb 27 16:13:58 crc kubenswrapper[4830]: I0227 16:13:58.785421 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/df1d6f86-a21d-4487-98f4-f5eb7d3248f6-bound-sa-token\") pod \"image-registry-66df7c8f76-67gxz\" (UID: \"df1d6f86-a21d-4487-98f4-f5eb7d3248f6\") " pod="openshift-image-registry/image-registry-66df7c8f76-67gxz" Feb 27 16:13:58 crc kubenswrapper[4830]: I0227 16:13:58.787325 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/df1d6f86-a21d-4487-98f4-f5eb7d3248f6-ca-trust-extracted\") pod \"image-registry-66df7c8f76-67gxz\" (UID: \"df1d6f86-a21d-4487-98f4-f5eb7d3248f6\") " pod="openshift-image-registry/image-registry-66df7c8f76-67gxz" Feb 27 16:13:58 crc kubenswrapper[4830]: I0227 16:13:58.788030 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/df1d6f86-a21d-4487-98f4-f5eb7d3248f6-trusted-ca\") pod \"image-registry-66df7c8f76-67gxz\" (UID: \"df1d6f86-a21d-4487-98f4-f5eb7d3248f6\") " pod="openshift-image-registry/image-registry-66df7c8f76-67gxz" Feb 27 16:13:58 crc 
kubenswrapper[4830]: I0227 16:13:58.788360 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/df1d6f86-a21d-4487-98f4-f5eb7d3248f6-registry-certificates\") pod \"image-registry-66df7c8f76-67gxz\" (UID: \"df1d6f86-a21d-4487-98f4-f5eb7d3248f6\") " pod="openshift-image-registry/image-registry-66df7c8f76-67gxz" Feb 27 16:13:58 crc kubenswrapper[4830]: I0227 16:13:58.794892 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/df1d6f86-a21d-4487-98f4-f5eb7d3248f6-installation-pull-secrets\") pod \"image-registry-66df7c8f76-67gxz\" (UID: \"df1d6f86-a21d-4487-98f4-f5eb7d3248f6\") " pod="openshift-image-registry/image-registry-66df7c8f76-67gxz" Feb 27 16:13:58 crc kubenswrapper[4830]: I0227 16:13:58.796377 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/df1d6f86-a21d-4487-98f4-f5eb7d3248f6-registry-tls\") pod \"image-registry-66df7c8f76-67gxz\" (UID: \"df1d6f86-a21d-4487-98f4-f5eb7d3248f6\") " pod="openshift-image-registry/image-registry-66df7c8f76-67gxz" Feb 27 16:13:58 crc kubenswrapper[4830]: I0227 16:13:58.817774 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/df1d6f86-a21d-4487-98f4-f5eb7d3248f6-bound-sa-token\") pod \"image-registry-66df7c8f76-67gxz\" (UID: \"df1d6f86-a21d-4487-98f4-f5eb7d3248f6\") " pod="openshift-image-registry/image-registry-66df7c8f76-67gxz" Feb 27 16:13:58 crc kubenswrapper[4830]: I0227 16:13:58.821210 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qbw7v\" (UniqueName: \"kubernetes.io/projected/df1d6f86-a21d-4487-98f4-f5eb7d3248f6-kube-api-access-qbw7v\") pod \"image-registry-66df7c8f76-67gxz\" (UID: \"df1d6f86-a21d-4487-98f4-f5eb7d3248f6\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-67gxz" Feb 27 16:13:58 crc kubenswrapper[4830]: I0227 16:13:58.827892 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-67gxz" Feb 27 16:13:59 crc kubenswrapper[4830]: I0227 16:13:59.330274 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-67gxz"] Feb 27 16:13:59 crc kubenswrapper[4830]: W0227 16:13:59.340610 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddf1d6f86_a21d_4487_98f4_f5eb7d3248f6.slice/crio-76e2fa6ca6e799c0effa28f7ab75043f830a9bf5673f4c18b7017c80e21600ad WatchSource:0}: Error finding container 76e2fa6ca6e799c0effa28f7ab75043f830a9bf5673f4c18b7017c80e21600ad: Status 404 returned error can't find the container with id 76e2fa6ca6e799c0effa28f7ab75043f830a9bf5673f4c18b7017c80e21600ad Feb 27 16:13:59 crc kubenswrapper[4830]: I0227 16:13:59.787354 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-67gxz" event={"ID":"df1d6f86-a21d-4487-98f4-f5eb7d3248f6","Type":"ContainerStarted","Data":"e75558f31f72bc40320956075bab537300bd9247aa55f2495a6dc0fa3bd952ee"} Feb 27 16:13:59 crc kubenswrapper[4830]: I0227 16:13:59.787797 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-67gxz" Feb 27 16:13:59 crc kubenswrapper[4830]: I0227 16:13:59.787815 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-67gxz" event={"ID":"df1d6f86-a21d-4487-98f4-f5eb7d3248f6","Type":"ContainerStarted","Data":"76e2fa6ca6e799c0effa28f7ab75043f830a9bf5673f4c18b7017c80e21600ad"} Feb 27 16:13:59 crc kubenswrapper[4830]: I0227 16:13:59.810051 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-image-registry/image-registry-66df7c8f76-67gxz" podStartSLOduration=1.810031473 podStartE2EDuration="1.810031473s" podCreationTimestamp="2026-02-27 16:13:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:13:59.80877633 +0000 UTC m=+435.898048783" watchObservedRunningTime="2026-02-27 16:13:59.810031473 +0000 UTC m=+435.899303936" Feb 27 16:14:00 crc kubenswrapper[4830]: I0227 16:14:00.132414 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536814-mtslw"] Feb 27 16:14:00 crc kubenswrapper[4830]: I0227 16:14:00.133550 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536814-mtslw" Feb 27 16:14:00 crc kubenswrapper[4830]: I0227 16:14:00.135754 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 16:14:00 crc kubenswrapper[4830]: I0227 16:14:00.136039 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lmbtm" Feb 27 16:14:00 crc kubenswrapper[4830]: I0227 16:14:00.136249 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 16:14:00 crc kubenswrapper[4830]: I0227 16:14:00.136715 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536814-mtslw"] Feb 27 16:14:00 crc kubenswrapper[4830]: I0227 16:14:00.334809 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggcsc\" (UniqueName: \"kubernetes.io/projected/7cc8e4cc-918f-47f8-8baf-b531cbeedc76-kube-api-access-ggcsc\") pod \"auto-csr-approver-29536814-mtslw\" (UID: \"7cc8e4cc-918f-47f8-8baf-b531cbeedc76\") " pod="openshift-infra/auto-csr-approver-29536814-mtslw" Feb 27 16:14:00 crc kubenswrapper[4830]: 
I0227 16:14:00.436148 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ggcsc\" (UniqueName: \"kubernetes.io/projected/7cc8e4cc-918f-47f8-8baf-b531cbeedc76-kube-api-access-ggcsc\") pod \"auto-csr-approver-29536814-mtslw\" (UID: \"7cc8e4cc-918f-47f8-8baf-b531cbeedc76\") " pod="openshift-infra/auto-csr-approver-29536814-mtslw" Feb 27 16:14:00 crc kubenswrapper[4830]: I0227 16:14:00.476835 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ggcsc\" (UniqueName: \"kubernetes.io/projected/7cc8e4cc-918f-47f8-8baf-b531cbeedc76-kube-api-access-ggcsc\") pod \"auto-csr-approver-29536814-mtslw\" (UID: \"7cc8e4cc-918f-47f8-8baf-b531cbeedc76\") " pod="openshift-infra/auto-csr-approver-29536814-mtslw" Feb 27 16:14:00 crc kubenswrapper[4830]: I0227 16:14:00.760700 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536814-mtslw" Feb 27 16:14:01 crc kubenswrapper[4830]: I0227 16:14:01.294848 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536814-mtslw"] Feb 27 16:14:01 crc kubenswrapper[4830]: W0227 16:14:01.302228 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7cc8e4cc_918f_47f8_8baf_b531cbeedc76.slice/crio-fc89f76864375c51470be0ada430e01a40be5afd08053cda70b1a432a678309a WatchSource:0}: Error finding container fc89f76864375c51470be0ada430e01a40be5afd08053cda70b1a432a678309a: Status 404 returned error can't find the container with id fc89f76864375c51470be0ada430e01a40be5afd08053cda70b1a432a678309a Feb 27 16:14:01 crc kubenswrapper[4830]: I0227 16:14:01.804323 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536814-mtslw" 
event={"ID":"7cc8e4cc-918f-47f8-8baf-b531cbeedc76","Type":"ContainerStarted","Data":"fc89f76864375c51470be0ada430e01a40be5afd08053cda70b1a432a678309a"} Feb 27 16:14:03 crc kubenswrapper[4830]: I0227 16:14:03.160309 4830 patch_prober.go:28] interesting pod/machine-config-daemon-2tv5v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 16:14:03 crc kubenswrapper[4830]: I0227 16:14:03.160805 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 16:14:03 crc kubenswrapper[4830]: I0227 16:14:03.822218 4830 generic.go:334] "Generic (PLEG): container finished" podID="7cc8e4cc-918f-47f8-8baf-b531cbeedc76" containerID="1f0a4854add6d99771670874866bcf97f5e49cc2c063cc2b8cf4261525405a9f" exitCode=0 Feb 27 16:14:03 crc kubenswrapper[4830]: I0227 16:14:03.822407 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536814-mtslw" event={"ID":"7cc8e4cc-918f-47f8-8baf-b531cbeedc76","Type":"ContainerDied","Data":"1f0a4854add6d99771670874866bcf97f5e49cc2c063cc2b8cf4261525405a9f"} Feb 27 16:14:05 crc kubenswrapper[4830]: I0227 16:14:05.215029 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536814-mtslw" Feb 27 16:14:05 crc kubenswrapper[4830]: I0227 16:14:05.312401 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ggcsc\" (UniqueName: \"kubernetes.io/projected/7cc8e4cc-918f-47f8-8baf-b531cbeedc76-kube-api-access-ggcsc\") pod \"7cc8e4cc-918f-47f8-8baf-b531cbeedc76\" (UID: \"7cc8e4cc-918f-47f8-8baf-b531cbeedc76\") " Feb 27 16:14:05 crc kubenswrapper[4830]: I0227 16:14:05.319215 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7cc8e4cc-918f-47f8-8baf-b531cbeedc76-kube-api-access-ggcsc" (OuterVolumeSpecName: "kube-api-access-ggcsc") pod "7cc8e4cc-918f-47f8-8baf-b531cbeedc76" (UID: "7cc8e4cc-918f-47f8-8baf-b531cbeedc76"). InnerVolumeSpecName "kube-api-access-ggcsc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:14:05 crc kubenswrapper[4830]: I0227 16:14:05.413559 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ggcsc\" (UniqueName: \"kubernetes.io/projected/7cc8e4cc-918f-47f8-8baf-b531cbeedc76-kube-api-access-ggcsc\") on node \"crc\" DevicePath \"\"" Feb 27 16:14:05 crc kubenswrapper[4830]: I0227 16:14:05.839537 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536814-mtslw" event={"ID":"7cc8e4cc-918f-47f8-8baf-b531cbeedc76","Type":"ContainerDied","Data":"fc89f76864375c51470be0ada430e01a40be5afd08053cda70b1a432a678309a"} Feb 27 16:14:05 crc kubenswrapper[4830]: I0227 16:14:05.839840 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fc89f76864375c51470be0ada430e01a40be5afd08053cda70b1a432a678309a" Feb 27 16:14:05 crc kubenswrapper[4830]: I0227 16:14:05.839568 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536814-mtslw" Feb 27 16:14:06 crc kubenswrapper[4830]: I0227 16:14:06.068908 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-s4bpk"] Feb 27 16:14:06 crc kubenswrapper[4830]: I0227 16:14:06.069241 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-s4bpk" podUID="1c5e2cae-7890-48fb-ab76-7e53c52fd6ac" containerName="registry-server" containerID="cri-o://f6d6173839dc489a5784d5306cc3a3b42d8583326f84a6455d829e1ac8c12462" gracePeriod=2 Feb 27 16:14:06 crc kubenswrapper[4830]: I0227 16:14:06.293433 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dnpxp"] Feb 27 16:14:06 crc kubenswrapper[4830]: I0227 16:14:06.295298 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-dnpxp" podUID="789ee180-dd8e-4cb2-884e-beea08667c53" containerName="registry-server" containerID="cri-o://921f9a872842fc060faffc6334f320e1f4a8ad67594a2cc96ae83a856abd0c29" gracePeriod=2 Feb 27 16:14:06 crc kubenswrapper[4830]: I0227 16:14:06.855670 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dnpxp" Feb 27 16:14:06 crc kubenswrapper[4830]: I0227 16:14:06.863790 4830 generic.go:334] "Generic (PLEG): container finished" podID="1c5e2cae-7890-48fb-ab76-7e53c52fd6ac" containerID="f6d6173839dc489a5784d5306cc3a3b42d8583326f84a6455d829e1ac8c12462" exitCode=0 Feb 27 16:14:06 crc kubenswrapper[4830]: I0227 16:14:06.863877 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s4bpk" event={"ID":"1c5e2cae-7890-48fb-ab76-7e53c52fd6ac","Type":"ContainerDied","Data":"f6d6173839dc489a5784d5306cc3a3b42d8583326f84a6455d829e1ac8c12462"} Feb 27 16:14:06 crc kubenswrapper[4830]: I0227 16:14:06.871040 4830 generic.go:334] "Generic (PLEG): container finished" podID="789ee180-dd8e-4cb2-884e-beea08667c53" containerID="921f9a872842fc060faffc6334f320e1f4a8ad67594a2cc96ae83a856abd0c29" exitCode=0 Feb 27 16:14:06 crc kubenswrapper[4830]: I0227 16:14:06.871087 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dnpxp" event={"ID":"789ee180-dd8e-4cb2-884e-beea08667c53","Type":"ContainerDied","Data":"921f9a872842fc060faffc6334f320e1f4a8ad67594a2cc96ae83a856abd0c29"} Feb 27 16:14:06 crc kubenswrapper[4830]: I0227 16:14:06.871123 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dnpxp" event={"ID":"789ee180-dd8e-4cb2-884e-beea08667c53","Type":"ContainerDied","Data":"fbedc23b91b7d31b19327fd5a20984e57e48532a563c7a1024a77329cb7ac5b6"} Feb 27 16:14:06 crc kubenswrapper[4830]: I0227 16:14:06.871143 4830 scope.go:117] "RemoveContainer" containerID="921f9a872842fc060faffc6334f320e1f4a8ad67594a2cc96ae83a856abd0c29" Feb 27 16:14:06 crc kubenswrapper[4830]: I0227 16:14:06.871287 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dnpxp" Feb 27 16:14:06 crc kubenswrapper[4830]: I0227 16:14:06.902230 4830 scope.go:117] "RemoveContainer" containerID="e87c5c09c93b3a772e9b716d5f7da922b173132f3fc61fb11a463475894474b0" Feb 27 16:14:06 crc kubenswrapper[4830]: I0227 16:14:06.919259 4830 scope.go:117] "RemoveContainer" containerID="a92b59e6489f19adce074bfbc81353c0bb9b1718b3c77f9d2eec7547c43b3655" Feb 27 16:14:06 crc kubenswrapper[4830]: I0227 16:14:06.952419 4830 scope.go:117] "RemoveContainer" containerID="921f9a872842fc060faffc6334f320e1f4a8ad67594a2cc96ae83a856abd0c29" Feb 27 16:14:06 crc kubenswrapper[4830]: E0227 16:14:06.953159 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"921f9a872842fc060faffc6334f320e1f4a8ad67594a2cc96ae83a856abd0c29\": container with ID starting with 921f9a872842fc060faffc6334f320e1f4a8ad67594a2cc96ae83a856abd0c29 not found: ID does not exist" containerID="921f9a872842fc060faffc6334f320e1f4a8ad67594a2cc96ae83a856abd0c29" Feb 27 16:14:06 crc kubenswrapper[4830]: I0227 16:14:06.953228 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"921f9a872842fc060faffc6334f320e1f4a8ad67594a2cc96ae83a856abd0c29"} err="failed to get container status \"921f9a872842fc060faffc6334f320e1f4a8ad67594a2cc96ae83a856abd0c29\": rpc error: code = NotFound desc = could not find container \"921f9a872842fc060faffc6334f320e1f4a8ad67594a2cc96ae83a856abd0c29\": container with ID starting with 921f9a872842fc060faffc6334f320e1f4a8ad67594a2cc96ae83a856abd0c29 not found: ID does not exist" Feb 27 16:14:06 crc kubenswrapper[4830]: I0227 16:14:06.953264 4830 scope.go:117] "RemoveContainer" containerID="e87c5c09c93b3a772e9b716d5f7da922b173132f3fc61fb11a463475894474b0" Feb 27 16:14:06 crc kubenswrapper[4830]: E0227 16:14:06.955157 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code 
= NotFound desc = could not find container \"e87c5c09c93b3a772e9b716d5f7da922b173132f3fc61fb11a463475894474b0\": container with ID starting with e87c5c09c93b3a772e9b716d5f7da922b173132f3fc61fb11a463475894474b0 not found: ID does not exist" containerID="e87c5c09c93b3a772e9b716d5f7da922b173132f3fc61fb11a463475894474b0" Feb 27 16:14:06 crc kubenswrapper[4830]: I0227 16:14:06.955222 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e87c5c09c93b3a772e9b716d5f7da922b173132f3fc61fb11a463475894474b0"} err="failed to get container status \"e87c5c09c93b3a772e9b716d5f7da922b173132f3fc61fb11a463475894474b0\": rpc error: code = NotFound desc = could not find container \"e87c5c09c93b3a772e9b716d5f7da922b173132f3fc61fb11a463475894474b0\": container with ID starting with e87c5c09c93b3a772e9b716d5f7da922b173132f3fc61fb11a463475894474b0 not found: ID does not exist" Feb 27 16:14:06 crc kubenswrapper[4830]: I0227 16:14:06.955257 4830 scope.go:117] "RemoveContainer" containerID="a92b59e6489f19adce074bfbc81353c0bb9b1718b3c77f9d2eec7547c43b3655" Feb 27 16:14:06 crc kubenswrapper[4830]: E0227 16:14:06.956006 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a92b59e6489f19adce074bfbc81353c0bb9b1718b3c77f9d2eec7547c43b3655\": container with ID starting with a92b59e6489f19adce074bfbc81353c0bb9b1718b3c77f9d2eec7547c43b3655 not found: ID does not exist" containerID="a92b59e6489f19adce074bfbc81353c0bb9b1718b3c77f9d2eec7547c43b3655" Feb 27 16:14:06 crc kubenswrapper[4830]: I0227 16:14:06.956064 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a92b59e6489f19adce074bfbc81353c0bb9b1718b3c77f9d2eec7547c43b3655"} err="failed to get container status \"a92b59e6489f19adce074bfbc81353c0bb9b1718b3c77f9d2eec7547c43b3655\": rpc error: code = NotFound desc = could not find container 
\"a92b59e6489f19adce074bfbc81353c0bb9b1718b3c77f9d2eec7547c43b3655\": container with ID starting with a92b59e6489f19adce074bfbc81353c0bb9b1718b3c77f9d2eec7547c43b3655 not found: ID does not exist" Feb 27 16:14:07 crc kubenswrapper[4830]: I0227 16:14:07.036478 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/789ee180-dd8e-4cb2-884e-beea08667c53-utilities\") pod \"789ee180-dd8e-4cb2-884e-beea08667c53\" (UID: \"789ee180-dd8e-4cb2-884e-beea08667c53\") " Feb 27 16:14:07 crc kubenswrapper[4830]: I0227 16:14:07.036545 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzbpj\" (UniqueName: \"kubernetes.io/projected/789ee180-dd8e-4cb2-884e-beea08667c53-kube-api-access-lzbpj\") pod \"789ee180-dd8e-4cb2-884e-beea08667c53\" (UID: \"789ee180-dd8e-4cb2-884e-beea08667c53\") " Feb 27 16:14:07 crc kubenswrapper[4830]: I0227 16:14:07.036588 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/789ee180-dd8e-4cb2-884e-beea08667c53-catalog-content\") pod \"789ee180-dd8e-4cb2-884e-beea08667c53\" (UID: \"789ee180-dd8e-4cb2-884e-beea08667c53\") " Feb 27 16:14:07 crc kubenswrapper[4830]: I0227 16:14:07.037893 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/789ee180-dd8e-4cb2-884e-beea08667c53-utilities" (OuterVolumeSpecName: "utilities") pod "789ee180-dd8e-4cb2-884e-beea08667c53" (UID: "789ee180-dd8e-4cb2-884e-beea08667c53"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 16:14:07 crc kubenswrapper[4830]: I0227 16:14:07.047095 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/789ee180-dd8e-4cb2-884e-beea08667c53-kube-api-access-lzbpj" (OuterVolumeSpecName: "kube-api-access-lzbpj") pod "789ee180-dd8e-4cb2-884e-beea08667c53" (UID: "789ee180-dd8e-4cb2-884e-beea08667c53"). InnerVolumeSpecName "kube-api-access-lzbpj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:14:07 crc kubenswrapper[4830]: I0227 16:14:07.052200 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-s4bpk" Feb 27 16:14:07 crc kubenswrapper[4830]: I0227 16:14:07.121673 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/789ee180-dd8e-4cb2-884e-beea08667c53-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "789ee180-dd8e-4cb2-884e-beea08667c53" (UID: "789ee180-dd8e-4cb2-884e-beea08667c53"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 16:14:07 crc kubenswrapper[4830]: I0227 16:14:07.137933 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c5e2cae-7890-48fb-ab76-7e53c52fd6ac-catalog-content\") pod \"1c5e2cae-7890-48fb-ab76-7e53c52fd6ac\" (UID: \"1c5e2cae-7890-48fb-ab76-7e53c52fd6ac\") " Feb 27 16:14:07 crc kubenswrapper[4830]: I0227 16:14:07.138035 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mngln\" (UniqueName: \"kubernetes.io/projected/1c5e2cae-7890-48fb-ab76-7e53c52fd6ac-kube-api-access-mngln\") pod \"1c5e2cae-7890-48fb-ab76-7e53c52fd6ac\" (UID: \"1c5e2cae-7890-48fb-ab76-7e53c52fd6ac\") " Feb 27 16:14:07 crc kubenswrapper[4830]: I0227 16:14:07.138076 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c5e2cae-7890-48fb-ab76-7e53c52fd6ac-utilities\") pod \"1c5e2cae-7890-48fb-ab76-7e53c52fd6ac\" (UID: \"1c5e2cae-7890-48fb-ab76-7e53c52fd6ac\") " Feb 27 16:14:07 crc kubenswrapper[4830]: I0227 16:14:07.138422 4830 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/789ee180-dd8e-4cb2-884e-beea08667c53-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 16:14:07 crc kubenswrapper[4830]: I0227 16:14:07.138449 4830 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/789ee180-dd8e-4cb2-884e-beea08667c53-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 16:14:07 crc kubenswrapper[4830]: I0227 16:14:07.138462 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzbpj\" (UniqueName: \"kubernetes.io/projected/789ee180-dd8e-4cb2-884e-beea08667c53-kube-api-access-lzbpj\") on node \"crc\" DevicePath \"\"" Feb 27 16:14:07 crc kubenswrapper[4830]: I0227 16:14:07.138860 
4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c5e2cae-7890-48fb-ab76-7e53c52fd6ac-utilities" (OuterVolumeSpecName: "utilities") pod "1c5e2cae-7890-48fb-ab76-7e53c52fd6ac" (UID: "1c5e2cae-7890-48fb-ab76-7e53c52fd6ac"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 16:14:07 crc kubenswrapper[4830]: I0227 16:14:07.142315 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c5e2cae-7890-48fb-ab76-7e53c52fd6ac-kube-api-access-mngln" (OuterVolumeSpecName: "kube-api-access-mngln") pod "1c5e2cae-7890-48fb-ab76-7e53c52fd6ac" (UID: "1c5e2cae-7890-48fb-ab76-7e53c52fd6ac"). InnerVolumeSpecName "kube-api-access-mngln". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:14:07 crc kubenswrapper[4830]: I0227 16:14:07.196674 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c5e2cae-7890-48fb-ab76-7e53c52fd6ac-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1c5e2cae-7890-48fb-ab76-7e53c52fd6ac" (UID: "1c5e2cae-7890-48fb-ab76-7e53c52fd6ac"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 16:14:07 crc kubenswrapper[4830]: I0227 16:14:07.233547 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dnpxp"] Feb 27 16:14:07 crc kubenswrapper[4830]: I0227 16:14:07.239923 4830 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c5e2cae-7890-48fb-ab76-7e53c52fd6ac-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 16:14:07 crc kubenswrapper[4830]: I0227 16:14:07.239980 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mngln\" (UniqueName: \"kubernetes.io/projected/1c5e2cae-7890-48fb-ab76-7e53c52fd6ac-kube-api-access-mngln\") on node \"crc\" DevicePath \"\"" Feb 27 16:14:07 crc kubenswrapper[4830]: I0227 16:14:07.239997 4830 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c5e2cae-7890-48fb-ab76-7e53c52fd6ac-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 16:14:07 crc kubenswrapper[4830]: I0227 16:14:07.242789 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-dnpxp"] Feb 27 16:14:07 crc kubenswrapper[4830]: I0227 16:14:07.882746 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s4bpk" event={"ID":"1c5e2cae-7890-48fb-ab76-7e53c52fd6ac","Type":"ContainerDied","Data":"51b8e035ea8513e95d44b35eaa5b469325b15d1a8ba826b39d84ee669e11e1aa"} Feb 27 16:14:07 crc kubenswrapper[4830]: I0227 16:14:07.883264 4830 scope.go:117] "RemoveContainer" containerID="f6d6173839dc489a5784d5306cc3a3b42d8583326f84a6455d829e1ac8c12462" Feb 27 16:14:07 crc kubenswrapper[4830]: I0227 16:14:07.883108 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-s4bpk" Feb 27 16:14:07 crc kubenswrapper[4830]: I0227 16:14:07.907508 4830 scope.go:117] "RemoveContainer" containerID="e339fc82a2d616e77fbf1f1320e48ce61fd5bb06ddd415acedd19281d147df0f" Feb 27 16:14:07 crc kubenswrapper[4830]: I0227 16:14:07.938322 4830 scope.go:117] "RemoveContainer" containerID="1b77db50d0d117fa49b266837e58b661e70cc8d35c82ff5810f8ea72d6daf765" Feb 27 16:14:07 crc kubenswrapper[4830]: I0227 16:14:07.941711 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-s4bpk"] Feb 27 16:14:07 crc kubenswrapper[4830]: I0227 16:14:07.948786 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-s4bpk"] Feb 27 16:14:08 crc kubenswrapper[4830]: I0227 16:14:08.474304 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zwcdd"] Feb 27 16:14:08 crc kubenswrapper[4830]: I0227 16:14:08.474727 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-zwcdd" podUID="728cab24-3fc3-4249-b37e-183d5676c191" containerName="registry-server" containerID="cri-o://1406def1089b142cf3b2b616ae9f6fffa9e0de03b2f398b478459ebdbb937a55" gracePeriod=2 Feb 27 16:14:08 crc kubenswrapper[4830]: I0227 16:14:08.670712 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-s5z2n"] Feb 27 16:14:08 crc kubenswrapper[4830]: I0227 16:14:08.671103 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-s5z2n" podUID="514ae4c6-322a-458e-a1e5-df6d6a47fc88" containerName="registry-server" containerID="cri-o://e2341099010dc017a2b84fe2d9ea379fea8c45cce2448de08f9e21ea1ca4f046" gracePeriod=2 Feb 27 16:14:08 crc kubenswrapper[4830]: I0227 16:14:08.771593 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="1c5e2cae-7890-48fb-ab76-7e53c52fd6ac" path="/var/lib/kubelet/pods/1c5e2cae-7890-48fb-ab76-7e53c52fd6ac/volumes" Feb 27 16:14:08 crc kubenswrapper[4830]: I0227 16:14:08.773011 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="789ee180-dd8e-4cb2-884e-beea08667c53" path="/var/lib/kubelet/pods/789ee180-dd8e-4cb2-884e-beea08667c53/volumes" Feb 27 16:14:09 crc kubenswrapper[4830]: I0227 16:14:09.509163 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zwcdd" Feb 27 16:14:09 crc kubenswrapper[4830]: I0227 16:14:09.679089 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/728cab24-3fc3-4249-b37e-183d5676c191-utilities\") pod \"728cab24-3fc3-4249-b37e-183d5676c191\" (UID: \"728cab24-3fc3-4249-b37e-183d5676c191\") " Feb 27 16:14:09 crc kubenswrapper[4830]: I0227 16:14:09.679165 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/728cab24-3fc3-4249-b37e-183d5676c191-catalog-content\") pod \"728cab24-3fc3-4249-b37e-183d5676c191\" (UID: \"728cab24-3fc3-4249-b37e-183d5676c191\") " Feb 27 16:14:09 crc kubenswrapper[4830]: I0227 16:14:09.679197 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z4j5c\" (UniqueName: \"kubernetes.io/projected/728cab24-3fc3-4249-b37e-183d5676c191-kube-api-access-z4j5c\") pod \"728cab24-3fc3-4249-b37e-183d5676c191\" (UID: \"728cab24-3fc3-4249-b37e-183d5676c191\") " Feb 27 16:14:09 crc kubenswrapper[4830]: I0227 16:14:09.681335 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/728cab24-3fc3-4249-b37e-183d5676c191-utilities" (OuterVolumeSpecName: "utilities") pod "728cab24-3fc3-4249-b37e-183d5676c191" (UID: "728cab24-3fc3-4249-b37e-183d5676c191"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 16:14:09 crc kubenswrapper[4830]: I0227 16:14:09.683872 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/728cab24-3fc3-4249-b37e-183d5676c191-kube-api-access-z4j5c" (OuterVolumeSpecName: "kube-api-access-z4j5c") pod "728cab24-3fc3-4249-b37e-183d5676c191" (UID: "728cab24-3fc3-4249-b37e-183d5676c191"). InnerVolumeSpecName "kube-api-access-z4j5c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:14:09 crc kubenswrapper[4830]: I0227 16:14:09.727246 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/728cab24-3fc3-4249-b37e-183d5676c191-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "728cab24-3fc3-4249-b37e-183d5676c191" (UID: "728cab24-3fc3-4249-b37e-183d5676c191"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 16:14:09 crc kubenswrapper[4830]: I0227 16:14:09.781188 4830 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/728cab24-3fc3-4249-b37e-183d5676c191-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 16:14:09 crc kubenswrapper[4830]: I0227 16:14:09.781234 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z4j5c\" (UniqueName: \"kubernetes.io/projected/728cab24-3fc3-4249-b37e-183d5676c191-kube-api-access-z4j5c\") on node \"crc\" DevicePath \"\"" Feb 27 16:14:09 crc kubenswrapper[4830]: I0227 16:14:09.781257 4830 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/728cab24-3fc3-4249-b37e-183d5676c191-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 16:14:09 crc kubenswrapper[4830]: I0227 16:14:09.824502 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-s5z2n" Feb 27 16:14:09 crc kubenswrapper[4830]: I0227 16:14:09.899205 4830 generic.go:334] "Generic (PLEG): container finished" podID="728cab24-3fc3-4249-b37e-183d5676c191" containerID="1406def1089b142cf3b2b616ae9f6fffa9e0de03b2f398b478459ebdbb937a55" exitCode=0 Feb 27 16:14:09 crc kubenswrapper[4830]: I0227 16:14:09.899264 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zwcdd" event={"ID":"728cab24-3fc3-4249-b37e-183d5676c191","Type":"ContainerDied","Data":"1406def1089b142cf3b2b616ae9f6fffa9e0de03b2f398b478459ebdbb937a55"} Feb 27 16:14:09 crc kubenswrapper[4830]: I0227 16:14:09.899287 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zwcdd" event={"ID":"728cab24-3fc3-4249-b37e-183d5676c191","Type":"ContainerDied","Data":"11a15f4f81a0c8ba7bfcfb0caae3d29a2f928469187a063f087b8d427c5179e1"} Feb 27 16:14:09 crc kubenswrapper[4830]: I0227 16:14:09.899306 4830 scope.go:117] "RemoveContainer" containerID="1406def1089b142cf3b2b616ae9f6fffa9e0de03b2f398b478459ebdbb937a55" Feb 27 16:14:09 crc kubenswrapper[4830]: I0227 16:14:09.899330 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zwcdd" Feb 27 16:14:09 crc kubenswrapper[4830]: I0227 16:14:09.901802 4830 generic.go:334] "Generic (PLEG): container finished" podID="514ae4c6-322a-458e-a1e5-df6d6a47fc88" containerID="e2341099010dc017a2b84fe2d9ea379fea8c45cce2448de08f9e21ea1ca4f046" exitCode=0 Feb 27 16:14:09 crc kubenswrapper[4830]: I0227 16:14:09.901843 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s5z2n" event={"ID":"514ae4c6-322a-458e-a1e5-df6d6a47fc88","Type":"ContainerDied","Data":"e2341099010dc017a2b84fe2d9ea379fea8c45cce2448de08f9e21ea1ca4f046"} Feb 27 16:14:09 crc kubenswrapper[4830]: I0227 16:14:09.901868 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s5z2n" event={"ID":"514ae4c6-322a-458e-a1e5-df6d6a47fc88","Type":"ContainerDied","Data":"92a08b63a14ad202a1d5ae495c67d3ba3864adcf0339c9ae9f83cb24e4ba2c07"} Feb 27 16:14:09 crc kubenswrapper[4830]: I0227 16:14:09.901941 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-s5z2n" Feb 27 16:14:09 crc kubenswrapper[4830]: I0227 16:14:09.921589 4830 scope.go:117] "RemoveContainer" containerID="f04e60e18187ba1d3282128f864582da59bd93aa71c25dae624bdb15480fcfa0" Feb 27 16:14:09 crc kubenswrapper[4830]: I0227 16:14:09.937185 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zwcdd"] Feb 27 16:14:09 crc kubenswrapper[4830]: I0227 16:14:09.942816 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-zwcdd"] Feb 27 16:14:09 crc kubenswrapper[4830]: I0227 16:14:09.945477 4830 scope.go:117] "RemoveContainer" containerID="56e9a05684abd121b608488eee870ec02035de2ff2ffe382701155153d851688" Feb 27 16:14:09 crc kubenswrapper[4830]: I0227 16:14:09.967132 4830 scope.go:117] "RemoveContainer" containerID="1406def1089b142cf3b2b616ae9f6fffa9e0de03b2f398b478459ebdbb937a55" Feb 27 16:14:09 crc kubenswrapper[4830]: E0227 16:14:09.967725 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1406def1089b142cf3b2b616ae9f6fffa9e0de03b2f398b478459ebdbb937a55\": container with ID starting with 1406def1089b142cf3b2b616ae9f6fffa9e0de03b2f398b478459ebdbb937a55 not found: ID does not exist" containerID="1406def1089b142cf3b2b616ae9f6fffa9e0de03b2f398b478459ebdbb937a55" Feb 27 16:14:09 crc kubenswrapper[4830]: I0227 16:14:09.967766 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1406def1089b142cf3b2b616ae9f6fffa9e0de03b2f398b478459ebdbb937a55"} err="failed to get container status \"1406def1089b142cf3b2b616ae9f6fffa9e0de03b2f398b478459ebdbb937a55\": rpc error: code = NotFound desc = could not find container \"1406def1089b142cf3b2b616ae9f6fffa9e0de03b2f398b478459ebdbb937a55\": container with ID starting with 1406def1089b142cf3b2b616ae9f6fffa9e0de03b2f398b478459ebdbb937a55 not found: ID 
does not exist" Feb 27 16:14:09 crc kubenswrapper[4830]: I0227 16:14:09.967796 4830 scope.go:117] "RemoveContainer" containerID="f04e60e18187ba1d3282128f864582da59bd93aa71c25dae624bdb15480fcfa0" Feb 27 16:14:09 crc kubenswrapper[4830]: E0227 16:14:09.968206 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f04e60e18187ba1d3282128f864582da59bd93aa71c25dae624bdb15480fcfa0\": container with ID starting with f04e60e18187ba1d3282128f864582da59bd93aa71c25dae624bdb15480fcfa0 not found: ID does not exist" containerID="f04e60e18187ba1d3282128f864582da59bd93aa71c25dae624bdb15480fcfa0" Feb 27 16:14:09 crc kubenswrapper[4830]: I0227 16:14:09.968248 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f04e60e18187ba1d3282128f864582da59bd93aa71c25dae624bdb15480fcfa0"} err="failed to get container status \"f04e60e18187ba1d3282128f864582da59bd93aa71c25dae624bdb15480fcfa0\": rpc error: code = NotFound desc = could not find container \"f04e60e18187ba1d3282128f864582da59bd93aa71c25dae624bdb15480fcfa0\": container with ID starting with f04e60e18187ba1d3282128f864582da59bd93aa71c25dae624bdb15480fcfa0 not found: ID does not exist" Feb 27 16:14:09 crc kubenswrapper[4830]: I0227 16:14:09.968271 4830 scope.go:117] "RemoveContainer" containerID="56e9a05684abd121b608488eee870ec02035de2ff2ffe382701155153d851688" Feb 27 16:14:09 crc kubenswrapper[4830]: E0227 16:14:09.968897 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"56e9a05684abd121b608488eee870ec02035de2ff2ffe382701155153d851688\": container with ID starting with 56e9a05684abd121b608488eee870ec02035de2ff2ffe382701155153d851688 not found: ID does not exist" containerID="56e9a05684abd121b608488eee870ec02035de2ff2ffe382701155153d851688" Feb 27 16:14:09 crc kubenswrapper[4830]: I0227 16:14:09.968930 4830 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56e9a05684abd121b608488eee870ec02035de2ff2ffe382701155153d851688"} err="failed to get container status \"56e9a05684abd121b608488eee870ec02035de2ff2ffe382701155153d851688\": rpc error: code = NotFound desc = could not find container \"56e9a05684abd121b608488eee870ec02035de2ff2ffe382701155153d851688\": container with ID starting with 56e9a05684abd121b608488eee870ec02035de2ff2ffe382701155153d851688 not found: ID does not exist" Feb 27 16:14:09 crc kubenswrapper[4830]: I0227 16:14:09.968966 4830 scope.go:117] "RemoveContainer" containerID="e2341099010dc017a2b84fe2d9ea379fea8c45cce2448de08f9e21ea1ca4f046" Feb 27 16:14:09 crc kubenswrapper[4830]: I0227 16:14:09.985352 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/514ae4c6-322a-458e-a1e5-df6d6a47fc88-utilities\") pod \"514ae4c6-322a-458e-a1e5-df6d6a47fc88\" (UID: \"514ae4c6-322a-458e-a1e5-df6d6a47fc88\") " Feb 27 16:14:09 crc kubenswrapper[4830]: I0227 16:14:09.985415 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s82dh\" (UniqueName: \"kubernetes.io/projected/514ae4c6-322a-458e-a1e5-df6d6a47fc88-kube-api-access-s82dh\") pod \"514ae4c6-322a-458e-a1e5-df6d6a47fc88\" (UID: \"514ae4c6-322a-458e-a1e5-df6d6a47fc88\") " Feb 27 16:14:09 crc kubenswrapper[4830]: I0227 16:14:09.985527 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/514ae4c6-322a-458e-a1e5-df6d6a47fc88-catalog-content\") pod \"514ae4c6-322a-458e-a1e5-df6d6a47fc88\" (UID: \"514ae4c6-322a-458e-a1e5-df6d6a47fc88\") " Feb 27 16:14:09 crc kubenswrapper[4830]: I0227 16:14:09.986374 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/514ae4c6-322a-458e-a1e5-df6d6a47fc88-utilities" (OuterVolumeSpecName: "utilities") pod 
"514ae4c6-322a-458e-a1e5-df6d6a47fc88" (UID: "514ae4c6-322a-458e-a1e5-df6d6a47fc88"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 16:14:09 crc kubenswrapper[4830]: I0227 16:14:09.989336 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/514ae4c6-322a-458e-a1e5-df6d6a47fc88-kube-api-access-s82dh" (OuterVolumeSpecName: "kube-api-access-s82dh") pod "514ae4c6-322a-458e-a1e5-df6d6a47fc88" (UID: "514ae4c6-322a-458e-a1e5-df6d6a47fc88"). InnerVolumeSpecName "kube-api-access-s82dh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:14:09 crc kubenswrapper[4830]: I0227 16:14:09.992530 4830 scope.go:117] "RemoveContainer" containerID="1274ee2697a94c60c1ceef4cc65ab3e5bb7f2453521c41698fdacd8ff1e99dc5" Feb 27 16:14:10 crc kubenswrapper[4830]: I0227 16:14:10.008778 4830 scope.go:117] "RemoveContainer" containerID="e451a4ea3c51636710a864c002dc901d70b11039337616deb3fc447374e38648" Feb 27 16:14:10 crc kubenswrapper[4830]: I0227 16:14:10.023368 4830 scope.go:117] "RemoveContainer" containerID="e2341099010dc017a2b84fe2d9ea379fea8c45cce2448de08f9e21ea1ca4f046" Feb 27 16:14:10 crc kubenswrapper[4830]: E0227 16:14:10.023864 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e2341099010dc017a2b84fe2d9ea379fea8c45cce2448de08f9e21ea1ca4f046\": container with ID starting with e2341099010dc017a2b84fe2d9ea379fea8c45cce2448de08f9e21ea1ca4f046 not found: ID does not exist" containerID="e2341099010dc017a2b84fe2d9ea379fea8c45cce2448de08f9e21ea1ca4f046" Feb 27 16:14:10 crc kubenswrapper[4830]: I0227 16:14:10.023890 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2341099010dc017a2b84fe2d9ea379fea8c45cce2448de08f9e21ea1ca4f046"} err="failed to get container status \"e2341099010dc017a2b84fe2d9ea379fea8c45cce2448de08f9e21ea1ca4f046\": rpc error: code = 
NotFound desc = could not find container \"e2341099010dc017a2b84fe2d9ea379fea8c45cce2448de08f9e21ea1ca4f046\": container with ID starting with e2341099010dc017a2b84fe2d9ea379fea8c45cce2448de08f9e21ea1ca4f046 not found: ID does not exist" Feb 27 16:14:10 crc kubenswrapper[4830]: I0227 16:14:10.023911 4830 scope.go:117] "RemoveContainer" containerID="1274ee2697a94c60c1ceef4cc65ab3e5bb7f2453521c41698fdacd8ff1e99dc5" Feb 27 16:14:10 crc kubenswrapper[4830]: E0227 16:14:10.024227 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1274ee2697a94c60c1ceef4cc65ab3e5bb7f2453521c41698fdacd8ff1e99dc5\": container with ID starting with 1274ee2697a94c60c1ceef4cc65ab3e5bb7f2453521c41698fdacd8ff1e99dc5 not found: ID does not exist" containerID="1274ee2697a94c60c1ceef4cc65ab3e5bb7f2453521c41698fdacd8ff1e99dc5" Feb 27 16:14:10 crc kubenswrapper[4830]: I0227 16:14:10.024244 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1274ee2697a94c60c1ceef4cc65ab3e5bb7f2453521c41698fdacd8ff1e99dc5"} err="failed to get container status \"1274ee2697a94c60c1ceef4cc65ab3e5bb7f2453521c41698fdacd8ff1e99dc5\": rpc error: code = NotFound desc = could not find container \"1274ee2697a94c60c1ceef4cc65ab3e5bb7f2453521c41698fdacd8ff1e99dc5\": container with ID starting with 1274ee2697a94c60c1ceef4cc65ab3e5bb7f2453521c41698fdacd8ff1e99dc5 not found: ID does not exist" Feb 27 16:14:10 crc kubenswrapper[4830]: I0227 16:14:10.024256 4830 scope.go:117] "RemoveContainer" containerID="e451a4ea3c51636710a864c002dc901d70b11039337616deb3fc447374e38648" Feb 27 16:14:10 crc kubenswrapper[4830]: E0227 16:14:10.024557 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e451a4ea3c51636710a864c002dc901d70b11039337616deb3fc447374e38648\": container with ID starting with 
e451a4ea3c51636710a864c002dc901d70b11039337616deb3fc447374e38648 not found: ID does not exist" containerID="e451a4ea3c51636710a864c002dc901d70b11039337616deb3fc447374e38648" Feb 27 16:14:10 crc kubenswrapper[4830]: I0227 16:14:10.024601 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e451a4ea3c51636710a864c002dc901d70b11039337616deb3fc447374e38648"} err="failed to get container status \"e451a4ea3c51636710a864c002dc901d70b11039337616deb3fc447374e38648\": rpc error: code = NotFound desc = could not find container \"e451a4ea3c51636710a864c002dc901d70b11039337616deb3fc447374e38648\": container with ID starting with e451a4ea3c51636710a864c002dc901d70b11039337616deb3fc447374e38648 not found: ID does not exist" Feb 27 16:14:10 crc kubenswrapper[4830]: I0227 16:14:10.086832 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s82dh\" (UniqueName: \"kubernetes.io/projected/514ae4c6-322a-458e-a1e5-df6d6a47fc88-kube-api-access-s82dh\") on node \"crc\" DevicePath \"\"" Feb 27 16:14:10 crc kubenswrapper[4830]: I0227 16:14:10.086873 4830 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/514ae4c6-322a-458e-a1e5-df6d6a47fc88-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 16:14:10 crc kubenswrapper[4830]: I0227 16:14:10.133996 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/514ae4c6-322a-458e-a1e5-df6d6a47fc88-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "514ae4c6-322a-458e-a1e5-df6d6a47fc88" (UID: "514ae4c6-322a-458e-a1e5-df6d6a47fc88"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 16:14:10 crc kubenswrapper[4830]: I0227 16:14:10.188588 4830 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/514ae4c6-322a-458e-a1e5-df6d6a47fc88-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 16:14:10 crc kubenswrapper[4830]: I0227 16:14:10.232698 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-s5z2n"] Feb 27 16:14:10 crc kubenswrapper[4830]: I0227 16:14:10.237519 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-s5z2n"] Feb 27 16:14:10 crc kubenswrapper[4830]: I0227 16:14:10.774062 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="514ae4c6-322a-458e-a1e5-df6d6a47fc88" path="/var/lib/kubelet/pods/514ae4c6-322a-458e-a1e5-df6d6a47fc88/volumes" Feb 27 16:14:10 crc kubenswrapper[4830]: I0227 16:14:10.775443 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="728cab24-3fc3-4249-b37e-183d5676c191" path="/var/lib/kubelet/pods/728cab24-3fc3-4249-b37e-183d5676c191/volumes" Feb 27 16:14:18 crc kubenswrapper[4830]: I0227 16:14:18.839452 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-67gxz" Feb 27 16:14:18 crc kubenswrapper[4830]: I0227 16:14:18.926151 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-9gfr4"] Feb 27 16:14:19 crc kubenswrapper[4830]: I0227 16:14:19.007436 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-58c9fbdd4b-qlgdj"] Feb 27 16:14:19 crc kubenswrapper[4830]: I0227 16:14:19.007889 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-58c9fbdd4b-qlgdj" podUID="5bb5794f-0af3-4d3d-aff1-73d8fb49b63f" 
containerName="controller-manager" containerID="cri-o://57a4f3fb2792af52b24b3b516384780ac987315fef50aee4f851a594b13085fd" gracePeriod=30 Feb 27 16:14:19 crc kubenswrapper[4830]: I0227 16:14:19.980779 4830 generic.go:334] "Generic (PLEG): container finished" podID="5bb5794f-0af3-4d3d-aff1-73d8fb49b63f" containerID="57a4f3fb2792af52b24b3b516384780ac987315fef50aee4f851a594b13085fd" exitCode=0 Feb 27 16:14:19 crc kubenswrapper[4830]: I0227 16:14:19.980901 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-58c9fbdd4b-qlgdj" event={"ID":"5bb5794f-0af3-4d3d-aff1-73d8fb49b63f","Type":"ContainerDied","Data":"57a4f3fb2792af52b24b3b516384780ac987315fef50aee4f851a594b13085fd"} Feb 27 16:14:20 crc kubenswrapper[4830]: I0227 16:14:20.076817 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-58c9fbdd4b-qlgdj" Feb 27 16:14:20 crc kubenswrapper[4830]: I0227 16:14:20.102036 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-c48b6b8bc-g7btc"] Feb 27 16:14:20 crc kubenswrapper[4830]: E0227 16:14:20.102232 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="789ee180-dd8e-4cb2-884e-beea08667c53" containerName="registry-server" Feb 27 16:14:20 crc kubenswrapper[4830]: I0227 16:14:20.102244 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="789ee180-dd8e-4cb2-884e-beea08667c53" containerName="registry-server" Feb 27 16:14:20 crc kubenswrapper[4830]: E0227 16:14:20.102254 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5bb5794f-0af3-4d3d-aff1-73d8fb49b63f" containerName="controller-manager" Feb 27 16:14:20 crc kubenswrapper[4830]: I0227 16:14:20.102260 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bb5794f-0af3-4d3d-aff1-73d8fb49b63f" containerName="controller-manager" Feb 27 16:14:20 crc kubenswrapper[4830]: E0227 16:14:20.102270 
4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cc8e4cc-918f-47f8-8baf-b531cbeedc76" containerName="oc" Feb 27 16:14:20 crc kubenswrapper[4830]: I0227 16:14:20.102276 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cc8e4cc-918f-47f8-8baf-b531cbeedc76" containerName="oc" Feb 27 16:14:20 crc kubenswrapper[4830]: E0227 16:14:20.102283 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="789ee180-dd8e-4cb2-884e-beea08667c53" containerName="extract-content" Feb 27 16:14:20 crc kubenswrapper[4830]: I0227 16:14:20.102288 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="789ee180-dd8e-4cb2-884e-beea08667c53" containerName="extract-content" Feb 27 16:14:20 crc kubenswrapper[4830]: E0227 16:14:20.102296 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="728cab24-3fc3-4249-b37e-183d5676c191" containerName="extract-content" Feb 27 16:14:20 crc kubenswrapper[4830]: I0227 16:14:20.102303 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="728cab24-3fc3-4249-b37e-183d5676c191" containerName="extract-content" Feb 27 16:14:20 crc kubenswrapper[4830]: E0227 16:14:20.102309 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="728cab24-3fc3-4249-b37e-183d5676c191" containerName="registry-server" Feb 27 16:14:20 crc kubenswrapper[4830]: I0227 16:14:20.102315 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="728cab24-3fc3-4249-b37e-183d5676c191" containerName="registry-server" Feb 27 16:14:20 crc kubenswrapper[4830]: E0227 16:14:20.102322 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="789ee180-dd8e-4cb2-884e-beea08667c53" containerName="extract-utilities" Feb 27 16:14:20 crc kubenswrapper[4830]: I0227 16:14:20.102328 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="789ee180-dd8e-4cb2-884e-beea08667c53" containerName="extract-utilities" Feb 27 16:14:20 crc kubenswrapper[4830]: E0227 16:14:20.102336 4830 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="1c5e2cae-7890-48fb-ab76-7e53c52fd6ac" containerName="extract-utilities" Feb 27 16:14:20 crc kubenswrapper[4830]: I0227 16:14:20.102343 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c5e2cae-7890-48fb-ab76-7e53c52fd6ac" containerName="extract-utilities" Feb 27 16:14:20 crc kubenswrapper[4830]: E0227 16:14:20.102351 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="514ae4c6-322a-458e-a1e5-df6d6a47fc88" containerName="extract-content" Feb 27 16:14:20 crc kubenswrapper[4830]: I0227 16:14:20.102357 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="514ae4c6-322a-458e-a1e5-df6d6a47fc88" containerName="extract-content" Feb 27 16:14:20 crc kubenswrapper[4830]: E0227 16:14:20.102364 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="514ae4c6-322a-458e-a1e5-df6d6a47fc88" containerName="extract-utilities" Feb 27 16:14:20 crc kubenswrapper[4830]: I0227 16:14:20.102369 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="514ae4c6-322a-458e-a1e5-df6d6a47fc88" containerName="extract-utilities" Feb 27 16:14:20 crc kubenswrapper[4830]: E0227 16:14:20.102379 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="514ae4c6-322a-458e-a1e5-df6d6a47fc88" containerName="registry-server" Feb 27 16:14:20 crc kubenswrapper[4830]: I0227 16:14:20.102385 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="514ae4c6-322a-458e-a1e5-df6d6a47fc88" containerName="registry-server" Feb 27 16:14:20 crc kubenswrapper[4830]: E0227 16:14:20.102395 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c5e2cae-7890-48fb-ab76-7e53c52fd6ac" containerName="extract-content" Feb 27 16:14:20 crc kubenswrapper[4830]: I0227 16:14:20.102401 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c5e2cae-7890-48fb-ab76-7e53c52fd6ac" containerName="extract-content" Feb 27 16:14:20 crc kubenswrapper[4830]: E0227 16:14:20.102407 4830 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="1c5e2cae-7890-48fb-ab76-7e53c52fd6ac" containerName="registry-server" Feb 27 16:14:20 crc kubenswrapper[4830]: I0227 16:14:20.102413 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c5e2cae-7890-48fb-ab76-7e53c52fd6ac" containerName="registry-server" Feb 27 16:14:20 crc kubenswrapper[4830]: E0227 16:14:20.102422 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="728cab24-3fc3-4249-b37e-183d5676c191" containerName="extract-utilities" Feb 27 16:14:20 crc kubenswrapper[4830]: I0227 16:14:20.102443 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="728cab24-3fc3-4249-b37e-183d5676c191" containerName="extract-utilities" Feb 27 16:14:20 crc kubenswrapper[4830]: I0227 16:14:20.102530 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="5bb5794f-0af3-4d3d-aff1-73d8fb49b63f" containerName="controller-manager" Feb 27 16:14:20 crc kubenswrapper[4830]: I0227 16:14:20.102539 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c5e2cae-7890-48fb-ab76-7e53c52fd6ac" containerName="registry-server" Feb 27 16:14:20 crc kubenswrapper[4830]: I0227 16:14:20.102548 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="728cab24-3fc3-4249-b37e-183d5676c191" containerName="registry-server" Feb 27 16:14:20 crc kubenswrapper[4830]: I0227 16:14:20.102556 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="514ae4c6-322a-458e-a1e5-df6d6a47fc88" containerName="registry-server" Feb 27 16:14:20 crc kubenswrapper[4830]: I0227 16:14:20.102565 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="789ee180-dd8e-4cb2-884e-beea08667c53" containerName="registry-server" Feb 27 16:14:20 crc kubenswrapper[4830]: I0227 16:14:20.102573 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="7cc8e4cc-918f-47f8-8baf-b531cbeedc76" containerName="oc" Feb 27 16:14:20 crc kubenswrapper[4830]: I0227 16:14:20.102898 4830 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-c48b6b8bc-g7btc" Feb 27 16:14:20 crc kubenswrapper[4830]: I0227 16:14:20.124359 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-c48b6b8bc-g7btc"] Feb 27 16:14:20 crc kubenswrapper[4830]: I0227 16:14:20.225955 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5bb5794f-0af3-4d3d-aff1-73d8fb49b63f-serving-cert\") pod \"5bb5794f-0af3-4d3d-aff1-73d8fb49b63f\" (UID: \"5bb5794f-0af3-4d3d-aff1-73d8fb49b63f\") " Feb 27 16:14:20 crc kubenswrapper[4830]: I0227 16:14:20.226022 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2mnh6\" (UniqueName: \"kubernetes.io/projected/5bb5794f-0af3-4d3d-aff1-73d8fb49b63f-kube-api-access-2mnh6\") pod \"5bb5794f-0af3-4d3d-aff1-73d8fb49b63f\" (UID: \"5bb5794f-0af3-4d3d-aff1-73d8fb49b63f\") " Feb 27 16:14:20 crc kubenswrapper[4830]: I0227 16:14:20.226051 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5bb5794f-0af3-4d3d-aff1-73d8fb49b63f-proxy-ca-bundles\") pod \"5bb5794f-0af3-4d3d-aff1-73d8fb49b63f\" (UID: \"5bb5794f-0af3-4d3d-aff1-73d8fb49b63f\") " Feb 27 16:14:20 crc kubenswrapper[4830]: I0227 16:14:20.226072 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5bb5794f-0af3-4d3d-aff1-73d8fb49b63f-config\") pod \"5bb5794f-0af3-4d3d-aff1-73d8fb49b63f\" (UID: \"5bb5794f-0af3-4d3d-aff1-73d8fb49b63f\") " Feb 27 16:14:20 crc kubenswrapper[4830]: I0227 16:14:20.226132 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5bb5794f-0af3-4d3d-aff1-73d8fb49b63f-client-ca\") pod 
\"5bb5794f-0af3-4d3d-aff1-73d8fb49b63f\" (UID: \"5bb5794f-0af3-4d3d-aff1-73d8fb49b63f\") " Feb 27 16:14:20 crc kubenswrapper[4830]: I0227 16:14:20.227105 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5bb5794f-0af3-4d3d-aff1-73d8fb49b63f-config" (OuterVolumeSpecName: "config") pod "5bb5794f-0af3-4d3d-aff1-73d8fb49b63f" (UID: "5bb5794f-0af3-4d3d-aff1-73d8fb49b63f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:14:20 crc kubenswrapper[4830]: I0227 16:14:20.227084 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5bb5794f-0af3-4d3d-aff1-73d8fb49b63f-client-ca" (OuterVolumeSpecName: "client-ca") pod "5bb5794f-0af3-4d3d-aff1-73d8fb49b63f" (UID: "5bb5794f-0af3-4d3d-aff1-73d8fb49b63f"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:14:20 crc kubenswrapper[4830]: I0227 16:14:20.227181 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5bb5794f-0af3-4d3d-aff1-73d8fb49b63f-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "5bb5794f-0af3-4d3d-aff1-73d8fb49b63f" (UID: "5bb5794f-0af3-4d3d-aff1-73d8fb49b63f"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:14:20 crc kubenswrapper[4830]: I0227 16:14:20.227275 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/796b8dfe-7288-4edc-bde6-176befffb3a3-config\") pod \"controller-manager-c48b6b8bc-g7btc\" (UID: \"796b8dfe-7288-4edc-bde6-176befffb3a3\") " pod="openshift-controller-manager/controller-manager-c48b6b8bc-g7btc" Feb 27 16:14:20 crc kubenswrapper[4830]: I0227 16:14:20.227321 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/796b8dfe-7288-4edc-bde6-176befffb3a3-serving-cert\") pod \"controller-manager-c48b6b8bc-g7btc\" (UID: \"796b8dfe-7288-4edc-bde6-176befffb3a3\") " pod="openshift-controller-manager/controller-manager-c48b6b8bc-g7btc" Feb 27 16:14:20 crc kubenswrapper[4830]: I0227 16:14:20.227377 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/796b8dfe-7288-4edc-bde6-176befffb3a3-proxy-ca-bundles\") pod \"controller-manager-c48b6b8bc-g7btc\" (UID: \"796b8dfe-7288-4edc-bde6-176befffb3a3\") " pod="openshift-controller-manager/controller-manager-c48b6b8bc-g7btc" Feb 27 16:14:20 crc kubenswrapper[4830]: I0227 16:14:20.227403 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/796b8dfe-7288-4edc-bde6-176befffb3a3-client-ca\") pod \"controller-manager-c48b6b8bc-g7btc\" (UID: \"796b8dfe-7288-4edc-bde6-176befffb3a3\") " pod="openshift-controller-manager/controller-manager-c48b6b8bc-g7btc" Feb 27 16:14:20 crc kubenswrapper[4830]: I0227 16:14:20.227437 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkkfv\" (UniqueName: 
\"kubernetes.io/projected/796b8dfe-7288-4edc-bde6-176befffb3a3-kube-api-access-xkkfv\") pod \"controller-manager-c48b6b8bc-g7btc\" (UID: \"796b8dfe-7288-4edc-bde6-176befffb3a3\") " pod="openshift-controller-manager/controller-manager-c48b6b8bc-g7btc" Feb 27 16:14:20 crc kubenswrapper[4830]: I0227 16:14:20.227472 4830 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5bb5794f-0af3-4d3d-aff1-73d8fb49b63f-client-ca\") on node \"crc\" DevicePath \"\"" Feb 27 16:14:20 crc kubenswrapper[4830]: I0227 16:14:20.227482 4830 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5bb5794f-0af3-4d3d-aff1-73d8fb49b63f-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 27 16:14:20 crc kubenswrapper[4830]: I0227 16:14:20.227491 4830 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5bb5794f-0af3-4d3d-aff1-73d8fb49b63f-config\") on node \"crc\" DevicePath \"\"" Feb 27 16:14:20 crc kubenswrapper[4830]: I0227 16:14:20.231241 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5bb5794f-0af3-4d3d-aff1-73d8fb49b63f-kube-api-access-2mnh6" (OuterVolumeSpecName: "kube-api-access-2mnh6") pod "5bb5794f-0af3-4d3d-aff1-73d8fb49b63f" (UID: "5bb5794f-0af3-4d3d-aff1-73d8fb49b63f"). InnerVolumeSpecName "kube-api-access-2mnh6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:14:20 crc kubenswrapper[4830]: I0227 16:14:20.232136 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5bb5794f-0af3-4d3d-aff1-73d8fb49b63f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5bb5794f-0af3-4d3d-aff1-73d8fb49b63f" (UID: "5bb5794f-0af3-4d3d-aff1-73d8fb49b63f"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:14:20 crc kubenswrapper[4830]: I0227 16:14:20.328118 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/796b8dfe-7288-4edc-bde6-176befffb3a3-proxy-ca-bundles\") pod \"controller-manager-c48b6b8bc-g7btc\" (UID: \"796b8dfe-7288-4edc-bde6-176befffb3a3\") " pod="openshift-controller-manager/controller-manager-c48b6b8bc-g7btc" Feb 27 16:14:20 crc kubenswrapper[4830]: I0227 16:14:20.328168 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/796b8dfe-7288-4edc-bde6-176befffb3a3-client-ca\") pod \"controller-manager-c48b6b8bc-g7btc\" (UID: \"796b8dfe-7288-4edc-bde6-176befffb3a3\") " pod="openshift-controller-manager/controller-manager-c48b6b8bc-g7btc" Feb 27 16:14:20 crc kubenswrapper[4830]: I0227 16:14:20.328200 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xkkfv\" (UniqueName: \"kubernetes.io/projected/796b8dfe-7288-4edc-bde6-176befffb3a3-kube-api-access-xkkfv\") pod \"controller-manager-c48b6b8bc-g7btc\" (UID: \"796b8dfe-7288-4edc-bde6-176befffb3a3\") " pod="openshift-controller-manager/controller-manager-c48b6b8bc-g7btc" Feb 27 16:14:20 crc kubenswrapper[4830]: I0227 16:14:20.328225 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/796b8dfe-7288-4edc-bde6-176befffb3a3-config\") pod \"controller-manager-c48b6b8bc-g7btc\" (UID: \"796b8dfe-7288-4edc-bde6-176befffb3a3\") " pod="openshift-controller-manager/controller-manager-c48b6b8bc-g7btc" Feb 27 16:14:20 crc kubenswrapper[4830]: I0227 16:14:20.328254 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/796b8dfe-7288-4edc-bde6-176befffb3a3-serving-cert\") pod 
\"controller-manager-c48b6b8bc-g7btc\" (UID: \"796b8dfe-7288-4edc-bde6-176befffb3a3\") " pod="openshift-controller-manager/controller-manager-c48b6b8bc-g7btc" Feb 27 16:14:20 crc kubenswrapper[4830]: I0227 16:14:20.328330 4830 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5bb5794f-0af3-4d3d-aff1-73d8fb49b63f-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 16:14:20 crc kubenswrapper[4830]: I0227 16:14:20.328342 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2mnh6\" (UniqueName: \"kubernetes.io/projected/5bb5794f-0af3-4d3d-aff1-73d8fb49b63f-kube-api-access-2mnh6\") on node \"crc\" DevicePath \"\"" Feb 27 16:14:20 crc kubenswrapper[4830]: I0227 16:14:20.329858 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/796b8dfe-7288-4edc-bde6-176befffb3a3-proxy-ca-bundles\") pod \"controller-manager-c48b6b8bc-g7btc\" (UID: \"796b8dfe-7288-4edc-bde6-176befffb3a3\") " pod="openshift-controller-manager/controller-manager-c48b6b8bc-g7btc" Feb 27 16:14:20 crc kubenswrapper[4830]: I0227 16:14:20.329982 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/796b8dfe-7288-4edc-bde6-176befffb3a3-client-ca\") pod \"controller-manager-c48b6b8bc-g7btc\" (UID: \"796b8dfe-7288-4edc-bde6-176befffb3a3\") " pod="openshift-controller-manager/controller-manager-c48b6b8bc-g7btc" Feb 27 16:14:20 crc kubenswrapper[4830]: I0227 16:14:20.330740 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/796b8dfe-7288-4edc-bde6-176befffb3a3-config\") pod \"controller-manager-c48b6b8bc-g7btc\" (UID: \"796b8dfe-7288-4edc-bde6-176befffb3a3\") " pod="openshift-controller-manager/controller-manager-c48b6b8bc-g7btc" Feb 27 16:14:20 crc kubenswrapper[4830]: I0227 16:14:20.333354 4830 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/796b8dfe-7288-4edc-bde6-176befffb3a3-serving-cert\") pod \"controller-manager-c48b6b8bc-g7btc\" (UID: \"796b8dfe-7288-4edc-bde6-176befffb3a3\") " pod="openshift-controller-manager/controller-manager-c48b6b8bc-g7btc" Feb 27 16:14:20 crc kubenswrapper[4830]: I0227 16:14:20.359589 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xkkfv\" (UniqueName: \"kubernetes.io/projected/796b8dfe-7288-4edc-bde6-176befffb3a3-kube-api-access-xkkfv\") pod \"controller-manager-c48b6b8bc-g7btc\" (UID: \"796b8dfe-7288-4edc-bde6-176befffb3a3\") " pod="openshift-controller-manager/controller-manager-c48b6b8bc-g7btc" Feb 27 16:14:20 crc kubenswrapper[4830]: I0227 16:14:20.433108 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-c48b6b8bc-g7btc" Feb 27 16:14:20 crc kubenswrapper[4830]: I0227 16:14:20.726759 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-c48b6b8bc-g7btc"] Feb 27 16:14:20 crc kubenswrapper[4830]: I0227 16:14:20.995726 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-58c9fbdd4b-qlgdj" event={"ID":"5bb5794f-0af3-4d3d-aff1-73d8fb49b63f","Type":"ContainerDied","Data":"237032bb66afbea1acde83ffb4edf6d8d66dc40083533dac1543cd4a4eaace0b"} Feb 27 16:14:20 crc kubenswrapper[4830]: I0227 16:14:20.995788 4830 scope.go:117] "RemoveContainer" containerID="57a4f3fb2792af52b24b3b516384780ac987315fef50aee4f851a594b13085fd" Feb 27 16:14:20 crc kubenswrapper[4830]: I0227 16:14:20.995814 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-58c9fbdd4b-qlgdj" Feb 27 16:14:21 crc kubenswrapper[4830]: I0227 16:14:21.003205 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-c48b6b8bc-g7btc" event={"ID":"796b8dfe-7288-4edc-bde6-176befffb3a3","Type":"ContainerStarted","Data":"6dabce0fab66fde881cc2f43cd743892a90bd945a222e81a750a0bcb3b7358e8"} Feb 27 16:14:21 crc kubenswrapper[4830]: I0227 16:14:21.018435 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-58c9fbdd4b-qlgdj"] Feb 27 16:14:21 crc kubenswrapper[4830]: I0227 16:14:21.022743 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-58c9fbdd4b-qlgdj"] Feb 27 16:14:22 crc kubenswrapper[4830]: I0227 16:14:22.011990 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-c48b6b8bc-g7btc" event={"ID":"796b8dfe-7288-4edc-bde6-176befffb3a3","Type":"ContainerStarted","Data":"52bef728be30c07576df47a5e59a1e1340172a765f66b96dabf2d3b581b549d1"} Feb 27 16:14:22 crc kubenswrapper[4830]: I0227 16:14:22.012294 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-c48b6b8bc-g7btc" Feb 27 16:14:22 crc kubenswrapper[4830]: I0227 16:14:22.016626 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-c48b6b8bc-g7btc" Feb 27 16:14:22 crc kubenswrapper[4830]: I0227 16:14:22.035478 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-c48b6b8bc-g7btc" podStartSLOduration=3.035460193 podStartE2EDuration="3.035460193s" podCreationTimestamp="2026-02-27 16:14:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-27 16:14:22.031971875 +0000 UTC m=+458.121244348" watchObservedRunningTime="2026-02-27 16:14:22.035460193 +0000 UTC m=+458.124732666" Feb 27 16:14:22 crc kubenswrapper[4830]: I0227 16:14:22.771883 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5bb5794f-0af3-4d3d-aff1-73d8fb49b63f" path="/var/lib/kubelet/pods/5bb5794f-0af3-4d3d-aff1-73d8fb49b63f/volumes" Feb 27 16:14:33 crc kubenswrapper[4830]: I0227 16:14:33.160016 4830 patch_prober.go:28] interesting pod/machine-config-daemon-2tv5v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 16:14:33 crc kubenswrapper[4830]: I0227 16:14:33.160717 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 16:14:33 crc kubenswrapper[4830]: I0227 16:14:33.160786 4830 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" Feb 27 16:14:33 crc kubenswrapper[4830]: I0227 16:14:33.161634 4830 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ad7b3479bfc7bc824e438e72666ce37c850e7de1824a4243534d5a7cc2b790bd"} pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 27 16:14:33 crc kubenswrapper[4830]: I0227 16:14:33.161730 4830 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" containerName="machine-config-daemon" containerID="cri-o://ad7b3479bfc7bc824e438e72666ce37c850e7de1824a4243534d5a7cc2b790bd" gracePeriod=600 Feb 27 16:14:34 crc kubenswrapper[4830]: I0227 16:14:34.110893 4830 generic.go:334] "Generic (PLEG): container finished" podID="00d6b7ce-4757-4275-8345-60c1b546ce25" containerID="ad7b3479bfc7bc824e438e72666ce37c850e7de1824a4243534d5a7cc2b790bd" exitCode=0 Feb 27 16:14:34 crc kubenswrapper[4830]: I0227 16:14:34.111019 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" event={"ID":"00d6b7ce-4757-4275-8345-60c1b546ce25","Type":"ContainerDied","Data":"ad7b3479bfc7bc824e438e72666ce37c850e7de1824a4243534d5a7cc2b790bd"} Feb 27 16:14:34 crc kubenswrapper[4830]: I0227 16:14:34.111681 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" event={"ID":"00d6b7ce-4757-4275-8345-60c1b546ce25","Type":"ContainerStarted","Data":"a6e439bde057753a649382c8178958e1e7d593adbfc771d6e3b530cc84fe06fb"} Feb 27 16:14:34 crc kubenswrapper[4830]: I0227 16:14:34.111733 4830 scope.go:117] "RemoveContainer" containerID="5d403fb60beeb11f3cb72e7ca134b83a9c519375fbb6727d2070118a6b924516" Feb 27 16:14:38 crc kubenswrapper[4830]: I0227 16:14:38.952115 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7447957dcb-tg5k8"] Feb 27 16:14:38 crc kubenswrapper[4830]: I0227 16:14:38.952700 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7447957dcb-tg5k8" podUID="dd2f243e-4291-4cdb-989f-e285347ce7e7" containerName="route-controller-manager" containerID="cri-o://57be6305866053d4099854dde21b7ac96c8c0926b3f622b8e91deff17e408ec3" gracePeriod=30 Feb 27 16:14:39 crc 
kubenswrapper[4830]: I0227 16:14:39.190191 4830 generic.go:334] "Generic (PLEG): container finished" podID="dd2f243e-4291-4cdb-989f-e285347ce7e7" containerID="57be6305866053d4099854dde21b7ac96c8c0926b3f622b8e91deff17e408ec3" exitCode=0 Feb 27 16:14:39 crc kubenswrapper[4830]: I0227 16:14:39.190326 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7447957dcb-tg5k8" event={"ID":"dd2f243e-4291-4cdb-989f-e285347ce7e7","Type":"ContainerDied","Data":"57be6305866053d4099854dde21b7ac96c8c0926b3f622b8e91deff17e408ec3"} Feb 27 16:14:40 crc kubenswrapper[4830]: I0227 16:14:40.162888 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7447957dcb-tg5k8" Feb 27 16:14:40 crc kubenswrapper[4830]: I0227 16:14:40.208557 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7447957dcb-tg5k8" Feb 27 16:14:40 crc kubenswrapper[4830]: I0227 16:14:40.211255 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-849bff8645-24cf7"] Feb 27 16:14:40 crc kubenswrapper[4830]: E0227 16:14:40.211514 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd2f243e-4291-4cdb-989f-e285347ce7e7" containerName="route-controller-manager" Feb 27 16:14:40 crc kubenswrapper[4830]: I0227 16:14:40.211525 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd2f243e-4291-4cdb-989f-e285347ce7e7" containerName="route-controller-manager" Feb 27 16:14:40 crc kubenswrapper[4830]: I0227 16:14:40.211617 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd2f243e-4291-4cdb-989f-e285347ce7e7" containerName="route-controller-manager" Feb 27 16:14:40 crc kubenswrapper[4830]: I0227 16:14:40.211936 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-route-controller-manager/route-controller-manager-7447957dcb-tg5k8" event={"ID":"dd2f243e-4291-4cdb-989f-e285347ce7e7","Type":"ContainerDied","Data":"221e0c87c87b0a46a2530e31015d643b49632e8ea8fb109eacba8861c2124be6"} Feb 27 16:14:40 crc kubenswrapper[4830]: I0227 16:14:40.211988 4830 scope.go:117] "RemoveContainer" containerID="57be6305866053d4099854dde21b7ac96c8c0926b3f622b8e91deff17e408ec3" Feb 27 16:14:40 crc kubenswrapper[4830]: I0227 16:14:40.212114 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-849bff8645-24cf7" Feb 27 16:14:40 crc kubenswrapper[4830]: I0227 16:14:40.213312 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-849bff8645-24cf7"] Feb 27 16:14:40 crc kubenswrapper[4830]: I0227 16:14:40.248313 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dd2f243e-4291-4cdb-989f-e285347ce7e7-client-ca\") pod \"dd2f243e-4291-4cdb-989f-e285347ce7e7\" (UID: \"dd2f243e-4291-4cdb-989f-e285347ce7e7\") " Feb 27 16:14:40 crc kubenswrapper[4830]: I0227 16:14:40.248370 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5zwvd\" (UniqueName: \"kubernetes.io/projected/dd2f243e-4291-4cdb-989f-e285347ce7e7-kube-api-access-5zwvd\") pod \"dd2f243e-4291-4cdb-989f-e285347ce7e7\" (UID: \"dd2f243e-4291-4cdb-989f-e285347ce7e7\") " Feb 27 16:14:40 crc kubenswrapper[4830]: I0227 16:14:40.248400 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dd2f243e-4291-4cdb-989f-e285347ce7e7-serving-cert\") pod \"dd2f243e-4291-4cdb-989f-e285347ce7e7\" (UID: \"dd2f243e-4291-4cdb-989f-e285347ce7e7\") " Feb 27 16:14:40 crc kubenswrapper[4830]: I0227 16:14:40.248431 4830 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd2f243e-4291-4cdb-989f-e285347ce7e7-config\") pod \"dd2f243e-4291-4cdb-989f-e285347ce7e7\" (UID: \"dd2f243e-4291-4cdb-989f-e285347ce7e7\") " Feb 27 16:14:40 crc kubenswrapper[4830]: I0227 16:14:40.248541 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e59f2c0-eefd-4554-8502-422f4ed6633c-config\") pod \"route-controller-manager-849bff8645-24cf7\" (UID: \"1e59f2c0-eefd-4554-8502-422f4ed6633c\") " pod="openshift-route-controller-manager/route-controller-manager-849bff8645-24cf7" Feb 27 16:14:40 crc kubenswrapper[4830]: I0227 16:14:40.248573 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1e59f2c0-eefd-4554-8502-422f4ed6633c-client-ca\") pod \"route-controller-manager-849bff8645-24cf7\" (UID: \"1e59f2c0-eefd-4554-8502-422f4ed6633c\") " pod="openshift-route-controller-manager/route-controller-manager-849bff8645-24cf7" Feb 27 16:14:40 crc kubenswrapper[4830]: I0227 16:14:40.248601 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwp7s\" (UniqueName: \"kubernetes.io/projected/1e59f2c0-eefd-4554-8502-422f4ed6633c-kube-api-access-fwp7s\") pod \"route-controller-manager-849bff8645-24cf7\" (UID: \"1e59f2c0-eefd-4554-8502-422f4ed6633c\") " pod="openshift-route-controller-manager/route-controller-manager-849bff8645-24cf7" Feb 27 16:14:40 crc kubenswrapper[4830]: I0227 16:14:40.248658 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1e59f2c0-eefd-4554-8502-422f4ed6633c-serving-cert\") pod \"route-controller-manager-849bff8645-24cf7\" (UID: \"1e59f2c0-eefd-4554-8502-422f4ed6633c\") " 
pod="openshift-route-controller-manager/route-controller-manager-849bff8645-24cf7" Feb 27 16:14:40 crc kubenswrapper[4830]: I0227 16:14:40.249473 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd2f243e-4291-4cdb-989f-e285347ce7e7-client-ca" (OuterVolumeSpecName: "client-ca") pod "dd2f243e-4291-4cdb-989f-e285347ce7e7" (UID: "dd2f243e-4291-4cdb-989f-e285347ce7e7"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:14:40 crc kubenswrapper[4830]: I0227 16:14:40.250649 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd2f243e-4291-4cdb-989f-e285347ce7e7-config" (OuterVolumeSpecName: "config") pod "dd2f243e-4291-4cdb-989f-e285347ce7e7" (UID: "dd2f243e-4291-4cdb-989f-e285347ce7e7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:14:40 crc kubenswrapper[4830]: I0227 16:14:40.255278 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd2f243e-4291-4cdb-989f-e285347ce7e7-kube-api-access-5zwvd" (OuterVolumeSpecName: "kube-api-access-5zwvd") pod "dd2f243e-4291-4cdb-989f-e285347ce7e7" (UID: "dd2f243e-4291-4cdb-989f-e285347ce7e7"). InnerVolumeSpecName "kube-api-access-5zwvd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:14:40 crc kubenswrapper[4830]: I0227 16:14:40.255333 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd2f243e-4291-4cdb-989f-e285347ce7e7-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "dd2f243e-4291-4cdb-989f-e285347ce7e7" (UID: "dd2f243e-4291-4cdb-989f-e285347ce7e7"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:14:40 crc kubenswrapper[4830]: I0227 16:14:40.349985 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1e59f2c0-eefd-4554-8502-422f4ed6633c-client-ca\") pod \"route-controller-manager-849bff8645-24cf7\" (UID: \"1e59f2c0-eefd-4554-8502-422f4ed6633c\") " pod="openshift-route-controller-manager/route-controller-manager-849bff8645-24cf7" Feb 27 16:14:40 crc kubenswrapper[4830]: I0227 16:14:40.350039 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwp7s\" (UniqueName: \"kubernetes.io/projected/1e59f2c0-eefd-4554-8502-422f4ed6633c-kube-api-access-fwp7s\") pod \"route-controller-manager-849bff8645-24cf7\" (UID: \"1e59f2c0-eefd-4554-8502-422f4ed6633c\") " pod="openshift-route-controller-manager/route-controller-manager-849bff8645-24cf7" Feb 27 16:14:40 crc kubenswrapper[4830]: I0227 16:14:40.350074 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1e59f2c0-eefd-4554-8502-422f4ed6633c-serving-cert\") pod \"route-controller-manager-849bff8645-24cf7\" (UID: \"1e59f2c0-eefd-4554-8502-422f4ed6633c\") " pod="openshift-route-controller-manager/route-controller-manager-849bff8645-24cf7" Feb 27 16:14:40 crc kubenswrapper[4830]: I0227 16:14:40.350117 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e59f2c0-eefd-4554-8502-422f4ed6633c-config\") pod \"route-controller-manager-849bff8645-24cf7\" (UID: \"1e59f2c0-eefd-4554-8502-422f4ed6633c\") " pod="openshift-route-controller-manager/route-controller-manager-849bff8645-24cf7" Feb 27 16:14:40 crc kubenswrapper[4830]: I0227 16:14:40.350153 4830 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/dd2f243e-4291-4cdb-989f-e285347ce7e7-config\") on node \"crc\" DevicePath \"\"" Feb 27 16:14:40 crc kubenswrapper[4830]: I0227 16:14:40.350163 4830 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dd2f243e-4291-4cdb-989f-e285347ce7e7-client-ca\") on node \"crc\" DevicePath \"\"" Feb 27 16:14:40 crc kubenswrapper[4830]: I0227 16:14:40.350173 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5zwvd\" (UniqueName: \"kubernetes.io/projected/dd2f243e-4291-4cdb-989f-e285347ce7e7-kube-api-access-5zwvd\") on node \"crc\" DevicePath \"\"" Feb 27 16:14:40 crc kubenswrapper[4830]: I0227 16:14:40.350182 4830 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dd2f243e-4291-4cdb-989f-e285347ce7e7-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 16:14:40 crc kubenswrapper[4830]: I0227 16:14:40.351198 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1e59f2c0-eefd-4554-8502-422f4ed6633c-client-ca\") pod \"route-controller-manager-849bff8645-24cf7\" (UID: \"1e59f2c0-eefd-4554-8502-422f4ed6633c\") " pod="openshift-route-controller-manager/route-controller-manager-849bff8645-24cf7" Feb 27 16:14:40 crc kubenswrapper[4830]: I0227 16:14:40.351591 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e59f2c0-eefd-4554-8502-422f4ed6633c-config\") pod \"route-controller-manager-849bff8645-24cf7\" (UID: \"1e59f2c0-eefd-4554-8502-422f4ed6633c\") " pod="openshift-route-controller-manager/route-controller-manager-849bff8645-24cf7" Feb 27 16:14:40 crc kubenswrapper[4830]: I0227 16:14:40.353698 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1e59f2c0-eefd-4554-8502-422f4ed6633c-serving-cert\") pod 
\"route-controller-manager-849bff8645-24cf7\" (UID: \"1e59f2c0-eefd-4554-8502-422f4ed6633c\") " pod="openshift-route-controller-manager/route-controller-manager-849bff8645-24cf7" Feb 27 16:14:40 crc kubenswrapper[4830]: I0227 16:14:40.373822 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwp7s\" (UniqueName: \"kubernetes.io/projected/1e59f2c0-eefd-4554-8502-422f4ed6633c-kube-api-access-fwp7s\") pod \"route-controller-manager-849bff8645-24cf7\" (UID: \"1e59f2c0-eefd-4554-8502-422f4ed6633c\") " pod="openshift-route-controller-manager/route-controller-manager-849bff8645-24cf7" Feb 27 16:14:40 crc kubenswrapper[4830]: I0227 16:14:40.531017 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-849bff8645-24cf7" Feb 27 16:14:40 crc kubenswrapper[4830]: I0227 16:14:40.550880 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7447957dcb-tg5k8"] Feb 27 16:14:40 crc kubenswrapper[4830]: I0227 16:14:40.560810 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7447957dcb-tg5k8"] Feb 27 16:14:40 crc kubenswrapper[4830]: I0227 16:14:40.776804 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd2f243e-4291-4cdb-989f-e285347ce7e7" path="/var/lib/kubelet/pods/dd2f243e-4291-4cdb-989f-e285347ce7e7/volumes" Feb 27 16:14:41 crc kubenswrapper[4830]: I0227 16:14:41.020654 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-849bff8645-24cf7"] Feb 27 16:14:41 crc kubenswrapper[4830]: I0227 16:14:41.233217 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-849bff8645-24cf7" 
event={"ID":"1e59f2c0-eefd-4554-8502-422f4ed6633c","Type":"ContainerStarted","Data":"f55cea231d5d59558781133c00470e8d4bae2947936b85cdf6a6b9335c5ebb05"} Feb 27 16:14:42 crc kubenswrapper[4830]: I0227 16:14:42.242221 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-849bff8645-24cf7" event={"ID":"1e59f2c0-eefd-4554-8502-422f4ed6633c","Type":"ContainerStarted","Data":"affba5eb857f32c8a3cff0b2d1488873edebe2ac4bd6850ae986ca51494e56b4"} Feb 27 16:14:42 crc kubenswrapper[4830]: I0227 16:14:42.242930 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-849bff8645-24cf7" Feb 27 16:14:42 crc kubenswrapper[4830]: I0227 16:14:42.247578 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-849bff8645-24cf7" Feb 27 16:14:42 crc kubenswrapper[4830]: I0227 16:14:42.265695 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-849bff8645-24cf7" podStartSLOduration=4.26567428 podStartE2EDuration="4.26567428s" podCreationTimestamp="2026-02-27 16:14:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:14:42.262190782 +0000 UTC m=+478.351463285" watchObservedRunningTime="2026-02-27 16:14:42.26567428 +0000 UTC m=+478.354946743" Feb 27 16:14:43 crc kubenswrapper[4830]: I0227 16:14:43.977401 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-9gfr4" podUID="e98d0941-0faf-4719-88a1-ff04ca46eece" containerName="registry" containerID="cri-o://b86aa15a8c214c6b6148673776675862a26671cf68d29f3585b2fdffbe01f6a2" gracePeriod=30 Feb 27 16:14:44 crc kubenswrapper[4830]: I0227 16:14:44.256686 4830 
generic.go:334] "Generic (PLEG): container finished" podID="e98d0941-0faf-4719-88a1-ff04ca46eece" containerID="b86aa15a8c214c6b6148673776675862a26671cf68d29f3585b2fdffbe01f6a2" exitCode=0 Feb 27 16:14:44 crc kubenswrapper[4830]: I0227 16:14:44.256819 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-9gfr4" event={"ID":"e98d0941-0faf-4719-88a1-ff04ca46eece","Type":"ContainerDied","Data":"b86aa15a8c214c6b6148673776675862a26671cf68d29f3585b2fdffbe01f6a2"} Feb 27 16:14:45 crc kubenswrapper[4830]: I0227 16:14:45.063989 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-9gfr4" Feb 27 16:14:45 crc kubenswrapper[4830]: I0227 16:14:45.224270 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e98d0941-0faf-4719-88a1-ff04ca46eece-bound-sa-token\") pod \"e98d0941-0faf-4719-88a1-ff04ca46eece\" (UID: \"e98d0941-0faf-4719-88a1-ff04ca46eece\") " Feb 27 16:14:45 crc kubenswrapper[4830]: I0227 16:14:45.224330 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/e98d0941-0faf-4719-88a1-ff04ca46eece-registry-certificates\") pod \"e98d0941-0faf-4719-88a1-ff04ca46eece\" (UID: \"e98d0941-0faf-4719-88a1-ff04ca46eece\") " Feb 27 16:14:45 crc kubenswrapper[4830]: I0227 16:14:45.224355 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4whrx\" (UniqueName: \"kubernetes.io/projected/e98d0941-0faf-4719-88a1-ff04ca46eece-kube-api-access-4whrx\") pod \"e98d0941-0faf-4719-88a1-ff04ca46eece\" (UID: \"e98d0941-0faf-4719-88a1-ff04ca46eece\") " Feb 27 16:14:45 crc kubenswrapper[4830]: I0227 16:14:45.224377 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/e98d0941-0faf-4719-88a1-ff04ca46eece-trusted-ca\") pod \"e98d0941-0faf-4719-88a1-ff04ca46eece\" (UID: \"e98d0941-0faf-4719-88a1-ff04ca46eece\") " Feb 27 16:14:45 crc kubenswrapper[4830]: I0227 16:14:45.224396 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/e98d0941-0faf-4719-88a1-ff04ca46eece-ca-trust-extracted\") pod \"e98d0941-0faf-4719-88a1-ff04ca46eece\" (UID: \"e98d0941-0faf-4719-88a1-ff04ca46eece\") " Feb 27 16:14:45 crc kubenswrapper[4830]: I0227 16:14:45.224441 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/e98d0941-0faf-4719-88a1-ff04ca46eece-installation-pull-secrets\") pod \"e98d0941-0faf-4719-88a1-ff04ca46eece\" (UID: \"e98d0941-0faf-4719-88a1-ff04ca46eece\") " Feb 27 16:14:45 crc kubenswrapper[4830]: I0227 16:14:45.224459 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/e98d0941-0faf-4719-88a1-ff04ca46eece-registry-tls\") pod \"e98d0941-0faf-4719-88a1-ff04ca46eece\" (UID: \"e98d0941-0faf-4719-88a1-ff04ca46eece\") " Feb 27 16:14:45 crc kubenswrapper[4830]: I0227 16:14:45.224622 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"e98d0941-0faf-4719-88a1-ff04ca46eece\" (UID: \"e98d0941-0faf-4719-88a1-ff04ca46eece\") " Feb 27 16:14:45 crc kubenswrapper[4830]: I0227 16:14:45.226360 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e98d0941-0faf-4719-88a1-ff04ca46eece-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "e98d0941-0faf-4719-88a1-ff04ca46eece" (UID: "e98d0941-0faf-4719-88a1-ff04ca46eece"). 
InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:14:45 crc kubenswrapper[4830]: I0227 16:14:45.227443 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e98d0941-0faf-4719-88a1-ff04ca46eece-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "e98d0941-0faf-4719-88a1-ff04ca46eece" (UID: "e98d0941-0faf-4719-88a1-ff04ca46eece"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:14:45 crc kubenswrapper[4830]: I0227 16:14:45.232908 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e98d0941-0faf-4719-88a1-ff04ca46eece-kube-api-access-4whrx" (OuterVolumeSpecName: "kube-api-access-4whrx") pod "e98d0941-0faf-4719-88a1-ff04ca46eece" (UID: "e98d0941-0faf-4719-88a1-ff04ca46eece"). InnerVolumeSpecName "kube-api-access-4whrx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:14:45 crc kubenswrapper[4830]: I0227 16:14:45.233763 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e98d0941-0faf-4719-88a1-ff04ca46eece-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "e98d0941-0faf-4719-88a1-ff04ca46eece" (UID: "e98d0941-0faf-4719-88a1-ff04ca46eece"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:14:45 crc kubenswrapper[4830]: I0227 16:14:45.234674 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e98d0941-0faf-4719-88a1-ff04ca46eece-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "e98d0941-0faf-4719-88a1-ff04ca46eece" (UID: "e98d0941-0faf-4719-88a1-ff04ca46eece"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:14:45 crc kubenswrapper[4830]: I0227 16:14:45.237043 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "e98d0941-0faf-4719-88a1-ff04ca46eece" (UID: "e98d0941-0faf-4719-88a1-ff04ca46eece"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 27 16:14:45 crc kubenswrapper[4830]: I0227 16:14:45.237238 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e98d0941-0faf-4719-88a1-ff04ca46eece-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "e98d0941-0faf-4719-88a1-ff04ca46eece" (UID: "e98d0941-0faf-4719-88a1-ff04ca46eece"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:14:45 crc kubenswrapper[4830]: I0227 16:14:45.260308 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e98d0941-0faf-4719-88a1-ff04ca46eece-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "e98d0941-0faf-4719-88a1-ff04ca46eece" (UID: "e98d0941-0faf-4719-88a1-ff04ca46eece"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 16:14:45 crc kubenswrapper[4830]: I0227 16:14:45.264749 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-9gfr4" event={"ID":"e98d0941-0faf-4719-88a1-ff04ca46eece","Type":"ContainerDied","Data":"00271d2f05b33a512bb343f4ac3027c7f04d956d7222bb1ce98e8ceb2b3ca9ac"} Feb 27 16:14:45 crc kubenswrapper[4830]: I0227 16:14:45.264795 4830 scope.go:117] "RemoveContainer" containerID="b86aa15a8c214c6b6148673776675862a26671cf68d29f3585b2fdffbe01f6a2" Feb 27 16:14:45 crc kubenswrapper[4830]: I0227 16:14:45.264862 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-9gfr4" Feb 27 16:14:45 crc kubenswrapper[4830]: I0227 16:14:45.312559 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-9gfr4"] Feb 27 16:14:45 crc kubenswrapper[4830]: I0227 16:14:45.315846 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-9gfr4"] Feb 27 16:14:45 crc kubenswrapper[4830]: I0227 16:14:45.325869 4830 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e98d0941-0faf-4719-88a1-ff04ca46eece-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 27 16:14:45 crc kubenswrapper[4830]: I0227 16:14:45.325906 4830 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/e98d0941-0faf-4719-88a1-ff04ca46eece-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 27 16:14:45 crc kubenswrapper[4830]: I0227 16:14:45.325919 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4whrx\" (UniqueName: \"kubernetes.io/projected/e98d0941-0faf-4719-88a1-ff04ca46eece-kube-api-access-4whrx\") on node \"crc\" DevicePath \"\"" Feb 27 16:14:45 crc 
kubenswrapper[4830]: I0227 16:14:45.325963 4830 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e98d0941-0faf-4719-88a1-ff04ca46eece-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 27 16:14:45 crc kubenswrapper[4830]: I0227 16:14:45.325976 4830 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/e98d0941-0faf-4719-88a1-ff04ca46eece-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 27 16:14:45 crc kubenswrapper[4830]: I0227 16:14:45.326613 4830 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/e98d0941-0faf-4719-88a1-ff04ca46eece-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 27 16:14:45 crc kubenswrapper[4830]: I0227 16:14:45.326652 4830 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/e98d0941-0faf-4719-88a1-ff04ca46eece-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 27 16:14:46 crc kubenswrapper[4830]: I0227 16:14:46.773417 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e98d0941-0faf-4719-88a1-ff04ca46eece" path="/var/lib/kubelet/pods/e98d0941-0faf-4719-88a1-ff04ca46eece/volumes" Feb 27 16:14:46 crc kubenswrapper[4830]: I0227 16:14:46.881778 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-k7l8d"] Feb 27 16:14:46 crc kubenswrapper[4830]: I0227 16:14:46.882088 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-k7l8d" podUID="f2579681-6b81-4b58-9d2c-c26b123be8ec" containerName="registry-server" containerID="cri-o://bf8f7f00dabc83ed88321c54eb8ecc1093da98806c893dfc048a629d090d59ac" gracePeriod=30 Feb 27 16:14:46 crc kubenswrapper[4830]: I0227 16:14:46.903978 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-966h2"] Feb 27 16:14:46 crc kubenswrapper[4830]: I0227 16:14:46.908479 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-45mg7"] Feb 27 16:14:46 crc kubenswrapper[4830]: I0227 16:14:46.908739 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-45mg7" podUID="32e984aa-8399-4cf1-8a4a-b36525c67e35" containerName="marketplace-operator" containerID="cri-o://0204c4fa0fcb07e9b828264151181c41186fb5044b2e7a58f9938a37bc377462" gracePeriod=30 Feb 27 16:14:46 crc kubenswrapper[4830]: I0227 16:14:46.919062 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kkwcl"] Feb 27 16:14:46 crc kubenswrapper[4830]: I0227 16:14:46.919388 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-kkwcl" podUID="a7e1e0a3-a7d4-4508-b84e-6ba87fced6fc" containerName="registry-server" containerID="cri-o://1eba51cb0d496eae004fbfc8ec830b99d548b112ebca7690cb09e1819ee1e443" gracePeriod=30 Feb 27 16:14:46 crc kubenswrapper[4830]: I0227 16:14:46.924366 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-w2snv"] Feb 27 16:14:46 crc kubenswrapper[4830]: E0227 16:14:46.929936 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e98d0941-0faf-4719-88a1-ff04ca46eece" containerName="registry" Feb 27 16:14:46 crc kubenswrapper[4830]: I0227 16:14:46.929985 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="e98d0941-0faf-4719-88a1-ff04ca46eece" containerName="registry" Feb 27 16:14:46 crc kubenswrapper[4830]: I0227 16:14:46.930124 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="e98d0941-0faf-4719-88a1-ff04ca46eece" containerName="registry" Feb 27 16:14:46 crc kubenswrapper[4830]: I0227 16:14:46.930519 4830 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tr5cj"] Feb 27 16:14:46 crc kubenswrapper[4830]: I0227 16:14:46.930660 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-w2snv" Feb 27 16:14:46 crc kubenswrapper[4830]: I0227 16:14:46.930758 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-tr5cj" podUID="48011108-ee2c-4d3b-9f28-65cfc91b90ab" containerName="registry-server" containerID="cri-o://98e33657c6848e8254bbea28a416aa7f0ca69f675bd06430a56e87cfef186663" gracePeriod=30 Feb 27 16:14:46 crc kubenswrapper[4830]: I0227 16:14:46.948660 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-w2snv"] Feb 27 16:14:47 crc kubenswrapper[4830]: I0227 16:14:47.049750 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/79d764bd-68e2-4846-a2c3-3f6bdc2db5e7-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-w2snv\" (UID: \"79d764bd-68e2-4846-a2c3-3f6bdc2db5e7\") " pod="openshift-marketplace/marketplace-operator-79b997595-w2snv" Feb 27 16:14:47 crc kubenswrapper[4830]: I0227 16:14:47.049795 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/79d764bd-68e2-4846-a2c3-3f6bdc2db5e7-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-w2snv\" (UID: \"79d764bd-68e2-4846-a2c3-3f6bdc2db5e7\") " pod="openshift-marketplace/marketplace-operator-79b997595-w2snv" Feb 27 16:14:47 crc kubenswrapper[4830]: I0227 16:14:47.049837 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkdm6\" (UniqueName: 
\"kubernetes.io/projected/79d764bd-68e2-4846-a2c3-3f6bdc2db5e7-kube-api-access-lkdm6\") pod \"marketplace-operator-79b997595-w2snv\" (UID: \"79d764bd-68e2-4846-a2c3-3f6bdc2db5e7\") " pod="openshift-marketplace/marketplace-operator-79b997595-w2snv" Feb 27 16:14:47 crc kubenswrapper[4830]: I0227 16:14:47.150869 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/79d764bd-68e2-4846-a2c3-3f6bdc2db5e7-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-w2snv\" (UID: \"79d764bd-68e2-4846-a2c3-3f6bdc2db5e7\") " pod="openshift-marketplace/marketplace-operator-79b997595-w2snv" Feb 27 16:14:47 crc kubenswrapper[4830]: I0227 16:14:47.150927 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/79d764bd-68e2-4846-a2c3-3f6bdc2db5e7-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-w2snv\" (UID: \"79d764bd-68e2-4846-a2c3-3f6bdc2db5e7\") " pod="openshift-marketplace/marketplace-operator-79b997595-w2snv" Feb 27 16:14:47 crc kubenswrapper[4830]: I0227 16:14:47.150994 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lkdm6\" (UniqueName: \"kubernetes.io/projected/79d764bd-68e2-4846-a2c3-3f6bdc2db5e7-kube-api-access-lkdm6\") pod \"marketplace-operator-79b997595-w2snv\" (UID: \"79d764bd-68e2-4846-a2c3-3f6bdc2db5e7\") " pod="openshift-marketplace/marketplace-operator-79b997595-w2snv" Feb 27 16:14:47 crc kubenswrapper[4830]: I0227 16:14:47.152437 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/79d764bd-68e2-4846-a2c3-3f6bdc2db5e7-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-w2snv\" (UID: \"79d764bd-68e2-4846-a2c3-3f6bdc2db5e7\") " pod="openshift-marketplace/marketplace-operator-79b997595-w2snv" Feb 27 16:14:47 crc 
kubenswrapper[4830]: I0227 16:14:47.157797 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/79d764bd-68e2-4846-a2c3-3f6bdc2db5e7-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-w2snv\" (UID: \"79d764bd-68e2-4846-a2c3-3f6bdc2db5e7\") " pod="openshift-marketplace/marketplace-operator-79b997595-w2snv" Feb 27 16:14:47 crc kubenswrapper[4830]: I0227 16:14:47.169047 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lkdm6\" (UniqueName: \"kubernetes.io/projected/79d764bd-68e2-4846-a2c3-3f6bdc2db5e7-kube-api-access-lkdm6\") pod \"marketplace-operator-79b997595-w2snv\" (UID: \"79d764bd-68e2-4846-a2c3-3f6bdc2db5e7\") " pod="openshift-marketplace/marketplace-operator-79b997595-w2snv" Feb 27 16:14:47 crc kubenswrapper[4830]: I0227 16:14:47.281853 4830 generic.go:334] "Generic (PLEG): container finished" podID="f2579681-6b81-4b58-9d2c-c26b123be8ec" containerID="bf8f7f00dabc83ed88321c54eb8ecc1093da98806c893dfc048a629d090d59ac" exitCode=0 Feb 27 16:14:47 crc kubenswrapper[4830]: I0227 16:14:47.282337 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-966h2" podUID="8b33138a-5b9d-4af8-b13d-4db4c2613983" containerName="registry-server" containerID="cri-o://425f05b409c5b9847f770836cb23fa92d243640eae8fc7ca0ac2121b3fb5332b" gracePeriod=30 Feb 27 16:14:47 crc kubenswrapper[4830]: I0227 16:14:47.282396 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k7l8d" event={"ID":"f2579681-6b81-4b58-9d2c-c26b123be8ec","Type":"ContainerDied","Data":"bf8f7f00dabc83ed88321c54eb8ecc1093da98806c893dfc048a629d090d59ac"} Feb 27 16:14:47 crc kubenswrapper[4830]: I0227 16:14:47.286982 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-w2snv" Feb 27 16:14:47 crc kubenswrapper[4830]: I0227 16:14:47.769213 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-w2snv"] Feb 27 16:14:47 crc kubenswrapper[4830]: I0227 16:14:47.931192 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-45mg7" Feb 27 16:14:47 crc kubenswrapper[4830]: I0227 16:14:47.965076 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8pf5f\" (UniqueName: \"kubernetes.io/projected/32e984aa-8399-4cf1-8a4a-b36525c67e35-kube-api-access-8pf5f\") pod \"32e984aa-8399-4cf1-8a4a-b36525c67e35\" (UID: \"32e984aa-8399-4cf1-8a4a-b36525c67e35\") " Feb 27 16:14:47 crc kubenswrapper[4830]: I0227 16:14:47.965375 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/32e984aa-8399-4cf1-8a4a-b36525c67e35-marketplace-trusted-ca\") pod \"32e984aa-8399-4cf1-8a4a-b36525c67e35\" (UID: \"32e984aa-8399-4cf1-8a4a-b36525c67e35\") " Feb 27 16:14:47 crc kubenswrapper[4830]: I0227 16:14:47.965455 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/32e984aa-8399-4cf1-8a4a-b36525c67e35-marketplace-operator-metrics\") pod \"32e984aa-8399-4cf1-8a4a-b36525c67e35\" (UID: \"32e984aa-8399-4cf1-8a4a-b36525c67e35\") " Feb 27 16:14:47 crc kubenswrapper[4830]: I0227 16:14:47.966085 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32e984aa-8399-4cf1-8a4a-b36525c67e35-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "32e984aa-8399-4cf1-8a4a-b36525c67e35" (UID: "32e984aa-8399-4cf1-8a4a-b36525c67e35"). 
InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:14:47 crc kubenswrapper[4830]: I0227 16:14:47.985149 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32e984aa-8399-4cf1-8a4a-b36525c67e35-kube-api-access-8pf5f" (OuterVolumeSpecName: "kube-api-access-8pf5f") pod "32e984aa-8399-4cf1-8a4a-b36525c67e35" (UID: "32e984aa-8399-4cf1-8a4a-b36525c67e35"). InnerVolumeSpecName "kube-api-access-8pf5f". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:14:47 crc kubenswrapper[4830]: I0227 16:14:47.986004 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32e984aa-8399-4cf1-8a4a-b36525c67e35-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "32e984aa-8399-4cf1-8a4a-b36525c67e35" (UID: "32e984aa-8399-4cf1-8a4a-b36525c67e35"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:14:48 crc kubenswrapper[4830]: I0227 16:14:48.066610 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8pf5f\" (UniqueName: \"kubernetes.io/projected/32e984aa-8399-4cf1-8a4a-b36525c67e35-kube-api-access-8pf5f\") on node \"crc\" DevicePath \"\"" Feb 27 16:14:48 crc kubenswrapper[4830]: I0227 16:14:48.066643 4830 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/32e984aa-8399-4cf1-8a4a-b36525c67e35-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 27 16:14:48 crc kubenswrapper[4830]: I0227 16:14:48.066656 4830 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/32e984aa-8399-4cf1-8a4a-b36525c67e35-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 27 16:14:48 crc kubenswrapper[4830]: I0227 16:14:48.134922 4830 util.go:48] "No ready sandbox 
for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kkwcl" Feb 27 16:14:48 crc kubenswrapper[4830]: I0227 16:14:48.139457 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-k7l8d" Feb 27 16:14:48 crc kubenswrapper[4830]: I0227 16:14:48.142902 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tr5cj" Feb 27 16:14:48 crc kubenswrapper[4830]: I0227 16:14:48.174719 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4pq52\" (UniqueName: \"kubernetes.io/projected/f2579681-6b81-4b58-9d2c-c26b123be8ec-kube-api-access-4pq52\") pod \"f2579681-6b81-4b58-9d2c-c26b123be8ec\" (UID: \"f2579681-6b81-4b58-9d2c-c26b123be8ec\") " Feb 27 16:14:48 crc kubenswrapper[4830]: I0227 16:14:48.174784 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48011108-ee2c-4d3b-9f28-65cfc91b90ab-utilities\") pod \"48011108-ee2c-4d3b-9f28-65cfc91b90ab\" (UID: \"48011108-ee2c-4d3b-9f28-65cfc91b90ab\") " Feb 27 16:14:48 crc kubenswrapper[4830]: I0227 16:14:48.174819 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-46ljt\" (UniqueName: \"kubernetes.io/projected/a7e1e0a3-a7d4-4508-b84e-6ba87fced6fc-kube-api-access-46ljt\") pod \"a7e1e0a3-a7d4-4508-b84e-6ba87fced6fc\" (UID: \"a7e1e0a3-a7d4-4508-b84e-6ba87fced6fc\") " Feb 27 16:14:48 crc kubenswrapper[4830]: I0227 16:14:48.174889 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48011108-ee2c-4d3b-9f28-65cfc91b90ab-catalog-content\") pod \"48011108-ee2c-4d3b-9f28-65cfc91b90ab\" (UID: \"48011108-ee2c-4d3b-9f28-65cfc91b90ab\") " Feb 27 16:14:48 crc kubenswrapper[4830]: I0227 16:14:48.174923 4830 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2579681-6b81-4b58-9d2c-c26b123be8ec-utilities\") pod \"f2579681-6b81-4b58-9d2c-c26b123be8ec\" (UID: \"f2579681-6b81-4b58-9d2c-c26b123be8ec\") " Feb 27 16:14:48 crc kubenswrapper[4830]: I0227 16:14:48.175000 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7e1e0a3-a7d4-4508-b84e-6ba87fced6fc-catalog-content\") pod \"a7e1e0a3-a7d4-4508-b84e-6ba87fced6fc\" (UID: \"a7e1e0a3-a7d4-4508-b84e-6ba87fced6fc\") " Feb 27 16:14:48 crc kubenswrapper[4830]: I0227 16:14:48.175026 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qk54l\" (UniqueName: \"kubernetes.io/projected/48011108-ee2c-4d3b-9f28-65cfc91b90ab-kube-api-access-qk54l\") pod \"48011108-ee2c-4d3b-9f28-65cfc91b90ab\" (UID: \"48011108-ee2c-4d3b-9f28-65cfc91b90ab\") " Feb 27 16:14:48 crc kubenswrapper[4830]: I0227 16:14:48.175072 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2579681-6b81-4b58-9d2c-c26b123be8ec-catalog-content\") pod \"f2579681-6b81-4b58-9d2c-c26b123be8ec\" (UID: \"f2579681-6b81-4b58-9d2c-c26b123be8ec\") " Feb 27 16:14:48 crc kubenswrapper[4830]: I0227 16:14:48.175092 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7e1e0a3-a7d4-4508-b84e-6ba87fced6fc-utilities\") pod \"a7e1e0a3-a7d4-4508-b84e-6ba87fced6fc\" (UID: \"a7e1e0a3-a7d4-4508-b84e-6ba87fced6fc\") " Feb 27 16:14:48 crc kubenswrapper[4830]: I0227 16:14:48.176040 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7e1e0a3-a7d4-4508-b84e-6ba87fced6fc-utilities" (OuterVolumeSpecName: "utilities") pod "a7e1e0a3-a7d4-4508-b84e-6ba87fced6fc" (UID: 
"a7e1e0a3-a7d4-4508-b84e-6ba87fced6fc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 16:14:48 crc kubenswrapper[4830]: I0227 16:14:48.177169 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f2579681-6b81-4b58-9d2c-c26b123be8ec-utilities" (OuterVolumeSpecName: "utilities") pod "f2579681-6b81-4b58-9d2c-c26b123be8ec" (UID: "f2579681-6b81-4b58-9d2c-c26b123be8ec"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 16:14:48 crc kubenswrapper[4830]: I0227 16:14:48.180800 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48011108-ee2c-4d3b-9f28-65cfc91b90ab-kube-api-access-qk54l" (OuterVolumeSpecName: "kube-api-access-qk54l") pod "48011108-ee2c-4d3b-9f28-65cfc91b90ab" (UID: "48011108-ee2c-4d3b-9f28-65cfc91b90ab"). InnerVolumeSpecName "kube-api-access-qk54l". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:14:48 crc kubenswrapper[4830]: I0227 16:14:48.182847 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/48011108-ee2c-4d3b-9f28-65cfc91b90ab-utilities" (OuterVolumeSpecName: "utilities") pod "48011108-ee2c-4d3b-9f28-65cfc91b90ab" (UID: "48011108-ee2c-4d3b-9f28-65cfc91b90ab"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 16:14:48 crc kubenswrapper[4830]: I0227 16:14:48.186003 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2579681-6b81-4b58-9d2c-c26b123be8ec-kube-api-access-4pq52" (OuterVolumeSpecName: "kube-api-access-4pq52") pod "f2579681-6b81-4b58-9d2c-c26b123be8ec" (UID: "f2579681-6b81-4b58-9d2c-c26b123be8ec"). InnerVolumeSpecName "kube-api-access-4pq52". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:14:48 crc kubenswrapper[4830]: I0227 16:14:48.186039 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7e1e0a3-a7d4-4508-b84e-6ba87fced6fc-kube-api-access-46ljt" (OuterVolumeSpecName: "kube-api-access-46ljt") pod "a7e1e0a3-a7d4-4508-b84e-6ba87fced6fc" (UID: "a7e1e0a3-a7d4-4508-b84e-6ba87fced6fc"). InnerVolumeSpecName "kube-api-access-46ljt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:14:48 crc kubenswrapper[4830]: I0227 16:14:48.214706 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7e1e0a3-a7d4-4508-b84e-6ba87fced6fc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a7e1e0a3-a7d4-4508-b84e-6ba87fced6fc" (UID: "a7e1e0a3-a7d4-4508-b84e-6ba87fced6fc"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 16:14:48 crc kubenswrapper[4830]: I0227 16:14:48.241347 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f2579681-6b81-4b58-9d2c-c26b123be8ec-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f2579681-6b81-4b58-9d2c-c26b123be8ec" (UID: "f2579681-6b81-4b58-9d2c-c26b123be8ec"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 16:14:48 crc kubenswrapper[4830]: I0227 16:14:48.276818 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-46ljt\" (UniqueName: \"kubernetes.io/projected/a7e1e0a3-a7d4-4508-b84e-6ba87fced6fc-kube-api-access-46ljt\") on node \"crc\" DevicePath \"\"" Feb 27 16:14:48 crc kubenswrapper[4830]: I0227 16:14:48.276849 4830 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2579681-6b81-4b58-9d2c-c26b123be8ec-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 16:14:48 crc kubenswrapper[4830]: I0227 16:14:48.276860 4830 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7e1e0a3-a7d4-4508-b84e-6ba87fced6fc-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 16:14:48 crc kubenswrapper[4830]: I0227 16:14:48.276869 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qk54l\" (UniqueName: \"kubernetes.io/projected/48011108-ee2c-4d3b-9f28-65cfc91b90ab-kube-api-access-qk54l\") on node \"crc\" DevicePath \"\"" Feb 27 16:14:48 crc kubenswrapper[4830]: I0227 16:14:48.276879 4830 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2579681-6b81-4b58-9d2c-c26b123be8ec-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 16:14:48 crc kubenswrapper[4830]: I0227 16:14:48.276888 4830 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7e1e0a3-a7d4-4508-b84e-6ba87fced6fc-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 16:14:48 crc kubenswrapper[4830]: I0227 16:14:48.276897 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4pq52\" (UniqueName: \"kubernetes.io/projected/f2579681-6b81-4b58-9d2c-c26b123be8ec-kube-api-access-4pq52\") on node \"crc\" DevicePath \"\"" Feb 27 16:14:48 crc kubenswrapper[4830]: 
I0227 16:14:48.276905 4830 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48011108-ee2c-4d3b-9f28-65cfc91b90ab-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 16:14:48 crc kubenswrapper[4830]: I0227 16:14:48.286843 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-w2snv" event={"ID":"79d764bd-68e2-4846-a2c3-3f6bdc2db5e7","Type":"ContainerStarted","Data":"a2e058fe2faf58db9896916e64f8b58af11486c19f1395619d8ca44f078315ec"} Feb 27 16:14:48 crc kubenswrapper[4830]: I0227 16:14:48.286879 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-w2snv" event={"ID":"79d764bd-68e2-4846-a2c3-3f6bdc2db5e7","Type":"ContainerStarted","Data":"2d0149fe2f284a74d3b176519df7bdfa2a3f4a66a2e2afb1968d292bf78cbe7b"} Feb 27 16:14:48 crc kubenswrapper[4830]: I0227 16:14:48.289028 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k7l8d" event={"ID":"f2579681-6b81-4b58-9d2c-c26b123be8ec","Type":"ContainerDied","Data":"84793fd5fcaacd431824841d6e2b8422e958b94c7956d1b13be0f87bbce99d67"} Feb 27 16:14:48 crc kubenswrapper[4830]: I0227 16:14:48.289060 4830 scope.go:117] "RemoveContainer" containerID="bf8f7f00dabc83ed88321c54eb8ecc1093da98806c893dfc048a629d090d59ac" Feb 27 16:14:48 crc kubenswrapper[4830]: I0227 16:14:48.289155 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-k7l8d" Feb 27 16:14:48 crc kubenswrapper[4830]: I0227 16:14:48.297155 4830 generic.go:334] "Generic (PLEG): container finished" podID="a7e1e0a3-a7d4-4508-b84e-6ba87fced6fc" containerID="1eba51cb0d496eae004fbfc8ec830b99d548b112ebca7690cb09e1819ee1e443" exitCode=0 Feb 27 16:14:48 crc kubenswrapper[4830]: I0227 16:14:48.297227 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kkwcl" event={"ID":"a7e1e0a3-a7d4-4508-b84e-6ba87fced6fc","Type":"ContainerDied","Data":"1eba51cb0d496eae004fbfc8ec830b99d548b112ebca7690cb09e1819ee1e443"} Feb 27 16:14:48 crc kubenswrapper[4830]: I0227 16:14:48.297253 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kkwcl" event={"ID":"a7e1e0a3-a7d4-4508-b84e-6ba87fced6fc","Type":"ContainerDied","Data":"a9663769ecc6d6c865b5af6cebc5d814f7292ed85c27ca2c0aa948e8dc7dfc90"} Feb 27 16:14:48 crc kubenswrapper[4830]: I0227 16:14:48.297398 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kkwcl" Feb 27 16:14:48 crc kubenswrapper[4830]: I0227 16:14:48.321469 4830 generic.go:334] "Generic (PLEG): container finished" podID="48011108-ee2c-4d3b-9f28-65cfc91b90ab" containerID="98e33657c6848e8254bbea28a416aa7f0ca69f675bd06430a56e87cfef186663" exitCode=0 Feb 27 16:14:48 crc kubenswrapper[4830]: I0227 16:14:48.321570 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tr5cj" event={"ID":"48011108-ee2c-4d3b-9f28-65cfc91b90ab","Type":"ContainerDied","Data":"98e33657c6848e8254bbea28a416aa7f0ca69f675bd06430a56e87cfef186663"} Feb 27 16:14:48 crc kubenswrapper[4830]: I0227 16:14:48.321603 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tr5cj" event={"ID":"48011108-ee2c-4d3b-9f28-65cfc91b90ab","Type":"ContainerDied","Data":"612e1dd9538265d782e323493367826df990f79faf4fd468a9fc7ab772bb8719"} Feb 27 16:14:48 crc kubenswrapper[4830]: I0227 16:14:48.321689 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tr5cj" Feb 27 16:14:48 crc kubenswrapper[4830]: I0227 16:14:48.327600 4830 scope.go:117] "RemoveContainer" containerID="5006b9250f9894eb42bca91b07eebb8aab60e723730dfc9f81383c40b15104d1" Feb 27 16:14:48 crc kubenswrapper[4830]: I0227 16:14:48.328920 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kkwcl"] Feb 27 16:14:48 crc kubenswrapper[4830]: I0227 16:14:48.329879 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/48011108-ee2c-4d3b-9f28-65cfc91b90ab-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "48011108-ee2c-4d3b-9f28-65cfc91b90ab" (UID: "48011108-ee2c-4d3b-9f28-65cfc91b90ab"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 16:14:48 crc kubenswrapper[4830]: I0227 16:14:48.330333 4830 generic.go:334] "Generic (PLEG): container finished" podID="32e984aa-8399-4cf1-8a4a-b36525c67e35" containerID="0204c4fa0fcb07e9b828264151181c41186fb5044b2e7a58f9938a37bc377462" exitCode=0 Feb 27 16:14:48 crc kubenswrapper[4830]: I0227 16:14:48.330396 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-45mg7" Feb 27 16:14:48 crc kubenswrapper[4830]: I0227 16:14:48.330402 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-45mg7" event={"ID":"32e984aa-8399-4cf1-8a4a-b36525c67e35","Type":"ContainerDied","Data":"0204c4fa0fcb07e9b828264151181c41186fb5044b2e7a58f9938a37bc377462"} Feb 27 16:14:48 crc kubenswrapper[4830]: I0227 16:14:48.330446 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-45mg7" event={"ID":"32e984aa-8399-4cf1-8a4a-b36525c67e35","Type":"ContainerDied","Data":"49054df53335758b3881b76c3a3c62d68b35f8674db1ebfbd73ee163d939df11"} Feb 27 16:14:48 crc kubenswrapper[4830]: I0227 16:14:48.333863 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-kkwcl"] Feb 27 16:14:48 crc kubenswrapper[4830]: I0227 16:14:48.334360 4830 generic.go:334] "Generic (PLEG): container finished" podID="8b33138a-5b9d-4af8-b13d-4db4c2613983" containerID="425f05b409c5b9847f770836cb23fa92d243640eae8fc7ca0ac2121b3fb5332b" exitCode=0 Feb 27 16:14:48 crc kubenswrapper[4830]: I0227 16:14:48.334383 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-966h2" event={"ID":"8b33138a-5b9d-4af8-b13d-4db4c2613983","Type":"ContainerDied","Data":"425f05b409c5b9847f770836cb23fa92d243640eae8fc7ca0ac2121b3fb5332b"} Feb 27 16:14:48 crc kubenswrapper[4830]: I0227 16:14:48.343001 
4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-k7l8d"] Feb 27 16:14:48 crc kubenswrapper[4830]: I0227 16:14:48.349819 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-k7l8d"] Feb 27 16:14:48 crc kubenswrapper[4830]: I0227 16:14:48.362844 4830 scope.go:117] "RemoveContainer" containerID="580d57ccadc2b72e237a298049219e0b38ea38314a110fdcef6eddd7d98a3314" Feb 27 16:14:48 crc kubenswrapper[4830]: I0227 16:14:48.368906 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-45mg7"] Feb 27 16:14:48 crc kubenswrapper[4830]: I0227 16:14:48.372089 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-45mg7"] Feb 27 16:14:48 crc kubenswrapper[4830]: I0227 16:14:48.378972 4830 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48011108-ee2c-4d3b-9f28-65cfc91b90ab-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 16:14:48 crc kubenswrapper[4830]: I0227 16:14:48.384865 4830 scope.go:117] "RemoveContainer" containerID="1eba51cb0d496eae004fbfc8ec830b99d548b112ebca7690cb09e1819ee1e443" Feb 27 16:14:48 crc kubenswrapper[4830]: I0227 16:14:48.419562 4830 scope.go:117] "RemoveContainer" containerID="8600cbda840369d0b64909468a7d15d1b52aef5711388bc2d83b0df75cfd43dc" Feb 27 16:14:48 crc kubenswrapper[4830]: I0227 16:14:48.444470 4830 scope.go:117] "RemoveContainer" containerID="73f0f72216ba308c51c1f84db1ce043b27aa6c0dfd99bda506b1cb5082cee083" Feb 27 16:14:48 crc kubenswrapper[4830]: I0227 16:14:48.462332 4830 scope.go:117] "RemoveContainer" containerID="1eba51cb0d496eae004fbfc8ec830b99d548b112ebca7690cb09e1819ee1e443" Feb 27 16:14:48 crc kubenswrapper[4830]: E0227 16:14:48.462784 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find 
container \"1eba51cb0d496eae004fbfc8ec830b99d548b112ebca7690cb09e1819ee1e443\": container with ID starting with 1eba51cb0d496eae004fbfc8ec830b99d548b112ebca7690cb09e1819ee1e443 not found: ID does not exist" containerID="1eba51cb0d496eae004fbfc8ec830b99d548b112ebca7690cb09e1819ee1e443" Feb 27 16:14:48 crc kubenswrapper[4830]: I0227 16:14:48.462860 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1eba51cb0d496eae004fbfc8ec830b99d548b112ebca7690cb09e1819ee1e443"} err="failed to get container status \"1eba51cb0d496eae004fbfc8ec830b99d548b112ebca7690cb09e1819ee1e443\": rpc error: code = NotFound desc = could not find container \"1eba51cb0d496eae004fbfc8ec830b99d548b112ebca7690cb09e1819ee1e443\": container with ID starting with 1eba51cb0d496eae004fbfc8ec830b99d548b112ebca7690cb09e1819ee1e443 not found: ID does not exist" Feb 27 16:14:48 crc kubenswrapper[4830]: I0227 16:14:48.462895 4830 scope.go:117] "RemoveContainer" containerID="8600cbda840369d0b64909468a7d15d1b52aef5711388bc2d83b0df75cfd43dc" Feb 27 16:14:48 crc kubenswrapper[4830]: E0227 16:14:48.463841 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8600cbda840369d0b64909468a7d15d1b52aef5711388bc2d83b0df75cfd43dc\": container with ID starting with 8600cbda840369d0b64909468a7d15d1b52aef5711388bc2d83b0df75cfd43dc not found: ID does not exist" containerID="8600cbda840369d0b64909468a7d15d1b52aef5711388bc2d83b0df75cfd43dc" Feb 27 16:14:48 crc kubenswrapper[4830]: I0227 16:14:48.463874 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8600cbda840369d0b64909468a7d15d1b52aef5711388bc2d83b0df75cfd43dc"} err="failed to get container status \"8600cbda840369d0b64909468a7d15d1b52aef5711388bc2d83b0df75cfd43dc\": rpc error: code = NotFound desc = could not find container \"8600cbda840369d0b64909468a7d15d1b52aef5711388bc2d83b0df75cfd43dc\": container 
with ID starting with 8600cbda840369d0b64909468a7d15d1b52aef5711388bc2d83b0df75cfd43dc not found: ID does not exist" Feb 27 16:14:48 crc kubenswrapper[4830]: I0227 16:14:48.463897 4830 scope.go:117] "RemoveContainer" containerID="73f0f72216ba308c51c1f84db1ce043b27aa6c0dfd99bda506b1cb5082cee083" Feb 27 16:14:48 crc kubenswrapper[4830]: E0227 16:14:48.464166 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"73f0f72216ba308c51c1f84db1ce043b27aa6c0dfd99bda506b1cb5082cee083\": container with ID starting with 73f0f72216ba308c51c1f84db1ce043b27aa6c0dfd99bda506b1cb5082cee083 not found: ID does not exist" containerID="73f0f72216ba308c51c1f84db1ce043b27aa6c0dfd99bda506b1cb5082cee083" Feb 27 16:14:48 crc kubenswrapper[4830]: I0227 16:14:48.464186 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"73f0f72216ba308c51c1f84db1ce043b27aa6c0dfd99bda506b1cb5082cee083"} err="failed to get container status \"73f0f72216ba308c51c1f84db1ce043b27aa6c0dfd99bda506b1cb5082cee083\": rpc error: code = NotFound desc = could not find container \"73f0f72216ba308c51c1f84db1ce043b27aa6c0dfd99bda506b1cb5082cee083\": container with ID starting with 73f0f72216ba308c51c1f84db1ce043b27aa6c0dfd99bda506b1cb5082cee083 not found: ID does not exist" Feb 27 16:14:48 crc kubenswrapper[4830]: I0227 16:14:48.464200 4830 scope.go:117] "RemoveContainer" containerID="98e33657c6848e8254bbea28a416aa7f0ca69f675bd06430a56e87cfef186663" Feb 27 16:14:48 crc kubenswrapper[4830]: I0227 16:14:48.475960 4830 scope.go:117] "RemoveContainer" containerID="d7e48c6aa9dd849482268d80a315d75cf18dcf794580cf30768ac6ce0a1c2753" Feb 27 16:14:48 crc kubenswrapper[4830]: I0227 16:14:48.490081 4830 scope.go:117] "RemoveContainer" containerID="19152c7e45c1d0d863dc124c17373bb842b76968ed71362f85243c4e84f80696" Feb 27 16:14:48 crc kubenswrapper[4830]: I0227 16:14:48.505813 4830 scope.go:117] "RemoveContainer" 
containerID="98e33657c6848e8254bbea28a416aa7f0ca69f675bd06430a56e87cfef186663" Feb 27 16:14:48 crc kubenswrapper[4830]: E0227 16:14:48.506250 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"98e33657c6848e8254bbea28a416aa7f0ca69f675bd06430a56e87cfef186663\": container with ID starting with 98e33657c6848e8254bbea28a416aa7f0ca69f675bd06430a56e87cfef186663 not found: ID does not exist" containerID="98e33657c6848e8254bbea28a416aa7f0ca69f675bd06430a56e87cfef186663" Feb 27 16:14:48 crc kubenswrapper[4830]: I0227 16:14:48.506277 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98e33657c6848e8254bbea28a416aa7f0ca69f675bd06430a56e87cfef186663"} err="failed to get container status \"98e33657c6848e8254bbea28a416aa7f0ca69f675bd06430a56e87cfef186663\": rpc error: code = NotFound desc = could not find container \"98e33657c6848e8254bbea28a416aa7f0ca69f675bd06430a56e87cfef186663\": container with ID starting with 98e33657c6848e8254bbea28a416aa7f0ca69f675bd06430a56e87cfef186663 not found: ID does not exist" Feb 27 16:14:48 crc kubenswrapper[4830]: I0227 16:14:48.506298 4830 scope.go:117] "RemoveContainer" containerID="d7e48c6aa9dd849482268d80a315d75cf18dcf794580cf30768ac6ce0a1c2753" Feb 27 16:14:48 crc kubenswrapper[4830]: E0227 16:14:48.506586 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d7e48c6aa9dd849482268d80a315d75cf18dcf794580cf30768ac6ce0a1c2753\": container with ID starting with d7e48c6aa9dd849482268d80a315d75cf18dcf794580cf30768ac6ce0a1c2753 not found: ID does not exist" containerID="d7e48c6aa9dd849482268d80a315d75cf18dcf794580cf30768ac6ce0a1c2753" Feb 27 16:14:48 crc kubenswrapper[4830]: I0227 16:14:48.506619 4830 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"d7e48c6aa9dd849482268d80a315d75cf18dcf794580cf30768ac6ce0a1c2753"} err="failed to get container status \"d7e48c6aa9dd849482268d80a315d75cf18dcf794580cf30768ac6ce0a1c2753\": rpc error: code = NotFound desc = could not find container \"d7e48c6aa9dd849482268d80a315d75cf18dcf794580cf30768ac6ce0a1c2753\": container with ID starting with d7e48c6aa9dd849482268d80a315d75cf18dcf794580cf30768ac6ce0a1c2753 not found: ID does not exist" Feb 27 16:14:48 crc kubenswrapper[4830]: I0227 16:14:48.506645 4830 scope.go:117] "RemoveContainer" containerID="19152c7e45c1d0d863dc124c17373bb842b76968ed71362f85243c4e84f80696" Feb 27 16:14:48 crc kubenswrapper[4830]: E0227 16:14:48.506969 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"19152c7e45c1d0d863dc124c17373bb842b76968ed71362f85243c4e84f80696\": container with ID starting with 19152c7e45c1d0d863dc124c17373bb842b76968ed71362f85243c4e84f80696 not found: ID does not exist" containerID="19152c7e45c1d0d863dc124c17373bb842b76968ed71362f85243c4e84f80696" Feb 27 16:14:48 crc kubenswrapper[4830]: I0227 16:14:48.506993 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19152c7e45c1d0d863dc124c17373bb842b76968ed71362f85243c4e84f80696"} err="failed to get container status \"19152c7e45c1d0d863dc124c17373bb842b76968ed71362f85243c4e84f80696\": rpc error: code = NotFound desc = could not find container \"19152c7e45c1d0d863dc124c17373bb842b76968ed71362f85243c4e84f80696\": container with ID starting with 19152c7e45c1d0d863dc124c17373bb842b76968ed71362f85243c4e84f80696 not found: ID does not exist" Feb 27 16:14:48 crc kubenswrapper[4830]: I0227 16:14:48.507006 4830 scope.go:117] "RemoveContainer" containerID="0204c4fa0fcb07e9b828264151181c41186fb5044b2e7a58f9938a37bc377462" Feb 27 16:14:48 crc kubenswrapper[4830]: I0227 16:14:48.519551 4830 scope.go:117] "RemoveContainer" 
containerID="0204c4fa0fcb07e9b828264151181c41186fb5044b2e7a58f9938a37bc377462" Feb 27 16:14:48 crc kubenswrapper[4830]: E0227 16:14:48.520007 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0204c4fa0fcb07e9b828264151181c41186fb5044b2e7a58f9938a37bc377462\": container with ID starting with 0204c4fa0fcb07e9b828264151181c41186fb5044b2e7a58f9938a37bc377462 not found: ID does not exist" containerID="0204c4fa0fcb07e9b828264151181c41186fb5044b2e7a58f9938a37bc377462" Feb 27 16:14:48 crc kubenswrapper[4830]: I0227 16:14:48.520052 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0204c4fa0fcb07e9b828264151181c41186fb5044b2e7a58f9938a37bc377462"} err="failed to get container status \"0204c4fa0fcb07e9b828264151181c41186fb5044b2e7a58f9938a37bc377462\": rpc error: code = NotFound desc = could not find container \"0204c4fa0fcb07e9b828264151181c41186fb5044b2e7a58f9938a37bc377462\": container with ID starting with 0204c4fa0fcb07e9b828264151181c41186fb5044b2e7a58f9938a37bc377462 not found: ID does not exist" Feb 27 16:14:48 crc kubenswrapper[4830]: I0227 16:14:48.533468 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-966h2" Feb 27 16:14:48 crc kubenswrapper[4830]: I0227 16:14:48.581169 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b33138a-5b9d-4af8-b13d-4db4c2613983-utilities\") pod \"8b33138a-5b9d-4af8-b13d-4db4c2613983\" (UID: \"8b33138a-5b9d-4af8-b13d-4db4c2613983\") " Feb 27 16:14:48 crc kubenswrapper[4830]: I0227 16:14:48.581221 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x989f\" (UniqueName: \"kubernetes.io/projected/8b33138a-5b9d-4af8-b13d-4db4c2613983-kube-api-access-x989f\") pod \"8b33138a-5b9d-4af8-b13d-4db4c2613983\" (UID: \"8b33138a-5b9d-4af8-b13d-4db4c2613983\") " Feb 27 16:14:48 crc kubenswrapper[4830]: I0227 16:14:48.581276 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b33138a-5b9d-4af8-b13d-4db4c2613983-catalog-content\") pod \"8b33138a-5b9d-4af8-b13d-4db4c2613983\" (UID: \"8b33138a-5b9d-4af8-b13d-4db4c2613983\") " Feb 27 16:14:48 crc kubenswrapper[4830]: I0227 16:14:48.588669 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8b33138a-5b9d-4af8-b13d-4db4c2613983-utilities" (OuterVolumeSpecName: "utilities") pod "8b33138a-5b9d-4af8-b13d-4db4c2613983" (UID: "8b33138a-5b9d-4af8-b13d-4db4c2613983"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 16:14:48 crc kubenswrapper[4830]: I0227 16:14:48.594188 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b33138a-5b9d-4af8-b13d-4db4c2613983-kube-api-access-x989f" (OuterVolumeSpecName: "kube-api-access-x989f") pod "8b33138a-5b9d-4af8-b13d-4db4c2613983" (UID: "8b33138a-5b9d-4af8-b13d-4db4c2613983"). InnerVolumeSpecName "kube-api-access-x989f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:14:48 crc kubenswrapper[4830]: I0227 16:14:48.633204 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8b33138a-5b9d-4af8-b13d-4db4c2613983-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8b33138a-5b9d-4af8-b13d-4db4c2613983" (UID: "8b33138a-5b9d-4af8-b13d-4db4c2613983"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 16:14:48 crc kubenswrapper[4830]: I0227 16:14:48.659007 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tr5cj"] Feb 27 16:14:48 crc kubenswrapper[4830]: I0227 16:14:48.665739 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-tr5cj"] Feb 27 16:14:48 crc kubenswrapper[4830]: I0227 16:14:48.683225 4830 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b33138a-5b9d-4af8-b13d-4db4c2613983-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 16:14:48 crc kubenswrapper[4830]: I0227 16:14:48.683281 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x989f\" (UniqueName: \"kubernetes.io/projected/8b33138a-5b9d-4af8-b13d-4db4c2613983-kube-api-access-x989f\") on node \"crc\" DevicePath \"\"" Feb 27 16:14:48 crc kubenswrapper[4830]: I0227 16:14:48.683339 4830 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b33138a-5b9d-4af8-b13d-4db4c2613983-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 16:14:48 crc kubenswrapper[4830]: I0227 16:14:48.771596 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32e984aa-8399-4cf1-8a4a-b36525c67e35" path="/var/lib/kubelet/pods/32e984aa-8399-4cf1-8a4a-b36525c67e35/volumes" Feb 27 16:14:48 crc kubenswrapper[4830]: I0227 16:14:48.772054 4830 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="48011108-ee2c-4d3b-9f28-65cfc91b90ab" path="/var/lib/kubelet/pods/48011108-ee2c-4d3b-9f28-65cfc91b90ab/volumes" Feb 27 16:14:48 crc kubenswrapper[4830]: I0227 16:14:48.772593 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7e1e0a3-a7d4-4508-b84e-6ba87fced6fc" path="/var/lib/kubelet/pods/a7e1e0a3-a7d4-4508-b84e-6ba87fced6fc/volumes" Feb 27 16:14:48 crc kubenswrapper[4830]: I0227 16:14:48.773727 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2579681-6b81-4b58-9d2c-c26b123be8ec" path="/var/lib/kubelet/pods/f2579681-6b81-4b58-9d2c-c26b123be8ec/volumes" Feb 27 16:14:49 crc kubenswrapper[4830]: I0227 16:14:49.103723 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-jt6jl"] Feb 27 16:14:49 crc kubenswrapper[4830]: E0227 16:14:49.104048 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32e984aa-8399-4cf1-8a4a-b36525c67e35" containerName="marketplace-operator" Feb 27 16:14:49 crc kubenswrapper[4830]: I0227 16:14:49.104064 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="32e984aa-8399-4cf1-8a4a-b36525c67e35" containerName="marketplace-operator" Feb 27 16:14:49 crc kubenswrapper[4830]: E0227 16:14:49.104075 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b33138a-5b9d-4af8-b13d-4db4c2613983" containerName="registry-server" Feb 27 16:14:49 crc kubenswrapper[4830]: I0227 16:14:49.104085 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b33138a-5b9d-4af8-b13d-4db4c2613983" containerName="registry-server" Feb 27 16:14:49 crc kubenswrapper[4830]: E0227 16:14:49.104096 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7e1e0a3-a7d4-4508-b84e-6ba87fced6fc" containerName="extract-utilities" Feb 27 16:14:49 crc kubenswrapper[4830]: I0227 16:14:49.104105 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7e1e0a3-a7d4-4508-b84e-6ba87fced6fc" 
containerName="extract-utilities" Feb 27 16:14:49 crc kubenswrapper[4830]: E0227 16:14:49.104116 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7e1e0a3-a7d4-4508-b84e-6ba87fced6fc" containerName="registry-server" Feb 27 16:14:49 crc kubenswrapper[4830]: I0227 16:14:49.104124 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7e1e0a3-a7d4-4508-b84e-6ba87fced6fc" containerName="registry-server" Feb 27 16:14:49 crc kubenswrapper[4830]: E0227 16:14:49.104135 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48011108-ee2c-4d3b-9f28-65cfc91b90ab" containerName="registry-server" Feb 27 16:14:49 crc kubenswrapper[4830]: I0227 16:14:49.104153 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="48011108-ee2c-4d3b-9f28-65cfc91b90ab" containerName="registry-server" Feb 27 16:14:49 crc kubenswrapper[4830]: E0227 16:14:49.104166 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b33138a-5b9d-4af8-b13d-4db4c2613983" containerName="extract-utilities" Feb 27 16:14:49 crc kubenswrapper[4830]: I0227 16:14:49.104173 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b33138a-5b9d-4af8-b13d-4db4c2613983" containerName="extract-utilities" Feb 27 16:14:49 crc kubenswrapper[4830]: E0227 16:14:49.104203 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48011108-ee2c-4d3b-9f28-65cfc91b90ab" containerName="extract-utilities" Feb 27 16:14:49 crc kubenswrapper[4830]: I0227 16:14:49.104213 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="48011108-ee2c-4d3b-9f28-65cfc91b90ab" containerName="extract-utilities" Feb 27 16:14:49 crc kubenswrapper[4830]: E0227 16:14:49.104222 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2579681-6b81-4b58-9d2c-c26b123be8ec" containerName="registry-server" Feb 27 16:14:49 crc kubenswrapper[4830]: I0227 16:14:49.104228 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2579681-6b81-4b58-9d2c-c26b123be8ec" 
containerName="registry-server" Feb 27 16:14:49 crc kubenswrapper[4830]: E0227 16:14:49.104238 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48011108-ee2c-4d3b-9f28-65cfc91b90ab" containerName="extract-content" Feb 27 16:14:49 crc kubenswrapper[4830]: I0227 16:14:49.104246 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="48011108-ee2c-4d3b-9f28-65cfc91b90ab" containerName="extract-content" Feb 27 16:14:49 crc kubenswrapper[4830]: E0227 16:14:49.104253 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2579681-6b81-4b58-9d2c-c26b123be8ec" containerName="extract-content" Feb 27 16:14:49 crc kubenswrapper[4830]: I0227 16:14:49.104259 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2579681-6b81-4b58-9d2c-c26b123be8ec" containerName="extract-content" Feb 27 16:14:49 crc kubenswrapper[4830]: E0227 16:14:49.104266 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2579681-6b81-4b58-9d2c-c26b123be8ec" containerName="extract-utilities" Feb 27 16:14:49 crc kubenswrapper[4830]: I0227 16:14:49.104272 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2579681-6b81-4b58-9d2c-c26b123be8ec" containerName="extract-utilities" Feb 27 16:14:49 crc kubenswrapper[4830]: E0227 16:14:49.104282 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7e1e0a3-a7d4-4508-b84e-6ba87fced6fc" containerName="extract-content" Feb 27 16:14:49 crc kubenswrapper[4830]: I0227 16:14:49.104291 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7e1e0a3-a7d4-4508-b84e-6ba87fced6fc" containerName="extract-content" Feb 27 16:14:49 crc kubenswrapper[4830]: E0227 16:14:49.104300 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b33138a-5b9d-4af8-b13d-4db4c2613983" containerName="extract-content" Feb 27 16:14:49 crc kubenswrapper[4830]: I0227 16:14:49.104309 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b33138a-5b9d-4af8-b13d-4db4c2613983" 
containerName="extract-content" Feb 27 16:14:49 crc kubenswrapper[4830]: I0227 16:14:49.104426 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b33138a-5b9d-4af8-b13d-4db4c2613983" containerName="registry-server" Feb 27 16:14:49 crc kubenswrapper[4830]: I0227 16:14:49.104440 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="48011108-ee2c-4d3b-9f28-65cfc91b90ab" containerName="registry-server" Feb 27 16:14:49 crc kubenswrapper[4830]: I0227 16:14:49.104449 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2579681-6b81-4b58-9d2c-c26b123be8ec" containerName="registry-server" Feb 27 16:14:49 crc kubenswrapper[4830]: I0227 16:14:49.104461 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="32e984aa-8399-4cf1-8a4a-b36525c67e35" containerName="marketplace-operator" Feb 27 16:14:49 crc kubenswrapper[4830]: I0227 16:14:49.104476 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7e1e0a3-a7d4-4508-b84e-6ba87fced6fc" containerName="registry-server" Feb 27 16:14:49 crc kubenswrapper[4830]: I0227 16:14:49.105200 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jt6jl" Feb 27 16:14:49 crc kubenswrapper[4830]: I0227 16:14:49.108161 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 27 16:14:49 crc kubenswrapper[4830]: I0227 16:14:49.114073 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jt6jl"] Feb 27 16:14:49 crc kubenswrapper[4830]: I0227 16:14:49.188135 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07c2162b-fcb8-4423-b0c6-75eefad7b1f8-catalog-content\") pod \"certified-operators-jt6jl\" (UID: \"07c2162b-fcb8-4423-b0c6-75eefad7b1f8\") " pod="openshift-marketplace/certified-operators-jt6jl" Feb 27 16:14:49 crc kubenswrapper[4830]: I0227 16:14:49.188185 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07c2162b-fcb8-4423-b0c6-75eefad7b1f8-utilities\") pod \"certified-operators-jt6jl\" (UID: \"07c2162b-fcb8-4423-b0c6-75eefad7b1f8\") " pod="openshift-marketplace/certified-operators-jt6jl" Feb 27 16:14:49 crc kubenswrapper[4830]: I0227 16:14:49.188215 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sp74f\" (UniqueName: \"kubernetes.io/projected/07c2162b-fcb8-4423-b0c6-75eefad7b1f8-kube-api-access-sp74f\") pod \"certified-operators-jt6jl\" (UID: \"07c2162b-fcb8-4423-b0c6-75eefad7b1f8\") " pod="openshift-marketplace/certified-operators-jt6jl" Feb 27 16:14:49 crc kubenswrapper[4830]: I0227 16:14:49.289759 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07c2162b-fcb8-4423-b0c6-75eefad7b1f8-catalog-content\") pod \"certified-operators-jt6jl\" (UID: 
\"07c2162b-fcb8-4423-b0c6-75eefad7b1f8\") " pod="openshift-marketplace/certified-operators-jt6jl" Feb 27 16:14:49 crc kubenswrapper[4830]: I0227 16:14:49.289858 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07c2162b-fcb8-4423-b0c6-75eefad7b1f8-utilities\") pod \"certified-operators-jt6jl\" (UID: \"07c2162b-fcb8-4423-b0c6-75eefad7b1f8\") " pod="openshift-marketplace/certified-operators-jt6jl" Feb 27 16:14:49 crc kubenswrapper[4830]: I0227 16:14:49.289915 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sp74f\" (UniqueName: \"kubernetes.io/projected/07c2162b-fcb8-4423-b0c6-75eefad7b1f8-kube-api-access-sp74f\") pod \"certified-operators-jt6jl\" (UID: \"07c2162b-fcb8-4423-b0c6-75eefad7b1f8\") " pod="openshift-marketplace/certified-operators-jt6jl" Feb 27 16:14:49 crc kubenswrapper[4830]: I0227 16:14:49.290527 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07c2162b-fcb8-4423-b0c6-75eefad7b1f8-catalog-content\") pod \"certified-operators-jt6jl\" (UID: \"07c2162b-fcb8-4423-b0c6-75eefad7b1f8\") " pod="openshift-marketplace/certified-operators-jt6jl" Feb 27 16:14:49 crc kubenswrapper[4830]: I0227 16:14:49.290627 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07c2162b-fcb8-4423-b0c6-75eefad7b1f8-utilities\") pod \"certified-operators-jt6jl\" (UID: \"07c2162b-fcb8-4423-b0c6-75eefad7b1f8\") " pod="openshift-marketplace/certified-operators-jt6jl" Feb 27 16:14:49 crc kubenswrapper[4830]: I0227 16:14:49.309018 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-62jtg"] Feb 27 16:14:49 crc kubenswrapper[4830]: I0227 16:14:49.311301 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-62jtg" Feb 27 16:14:49 crc kubenswrapper[4830]: I0227 16:14:49.315224 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sp74f\" (UniqueName: \"kubernetes.io/projected/07c2162b-fcb8-4423-b0c6-75eefad7b1f8-kube-api-access-sp74f\") pod \"certified-operators-jt6jl\" (UID: \"07c2162b-fcb8-4423-b0c6-75eefad7b1f8\") " pod="openshift-marketplace/certified-operators-jt6jl" Feb 27 16:14:49 crc kubenswrapper[4830]: I0227 16:14:49.316859 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 27 16:14:49 crc kubenswrapper[4830]: I0227 16:14:49.320289 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-62jtg"] Feb 27 16:14:49 crc kubenswrapper[4830]: I0227 16:14:49.360645 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-966h2" event={"ID":"8b33138a-5b9d-4af8-b13d-4db4c2613983","Type":"ContainerDied","Data":"23c33b55f6a16c12ef7ea8bc14ae6050c37f27dc6b7d943048c0535970ac6972"} Feb 27 16:14:49 crc kubenswrapper[4830]: I0227 16:14:49.360782 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-w2snv" Feb 27 16:14:49 crc kubenswrapper[4830]: I0227 16:14:49.360808 4830 scope.go:117] "RemoveContainer" containerID="425f05b409c5b9847f770836cb23fa92d243640eae8fc7ca0ac2121b3fb5332b" Feb 27 16:14:49 crc kubenswrapper[4830]: I0227 16:14:49.365089 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-966h2" Feb 27 16:14:49 crc kubenswrapper[4830]: I0227 16:14:49.367019 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-w2snv" Feb 27 16:14:49 crc kubenswrapper[4830]: I0227 16:14:49.384778 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-w2snv" podStartSLOduration=3.384751884 podStartE2EDuration="3.384751884s" podCreationTimestamp="2026-02-27 16:14:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:14:49.379384398 +0000 UTC m=+485.468656861" watchObservedRunningTime="2026-02-27 16:14:49.384751884 +0000 UTC m=+485.474024367" Feb 27 16:14:49 crc kubenswrapper[4830]: I0227 16:14:49.390863 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/85b6b000-62ad-4dfa-b384-c603bec84bbd-catalog-content\") pod \"redhat-marketplace-62jtg\" (UID: \"85b6b000-62ad-4dfa-b384-c603bec84bbd\") " pod="openshift-marketplace/redhat-marketplace-62jtg" Feb 27 16:14:49 crc kubenswrapper[4830]: I0227 16:14:49.390907 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlm59\" (UniqueName: \"kubernetes.io/projected/85b6b000-62ad-4dfa-b384-c603bec84bbd-kube-api-access-rlm59\") pod \"redhat-marketplace-62jtg\" (UID: \"85b6b000-62ad-4dfa-b384-c603bec84bbd\") " pod="openshift-marketplace/redhat-marketplace-62jtg" Feb 27 16:14:49 crc kubenswrapper[4830]: I0227 16:14:49.391056 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/85b6b000-62ad-4dfa-b384-c603bec84bbd-utilities\") pod 
\"redhat-marketplace-62jtg\" (UID: \"85b6b000-62ad-4dfa-b384-c603bec84bbd\") " pod="openshift-marketplace/redhat-marketplace-62jtg" Feb 27 16:14:49 crc kubenswrapper[4830]: I0227 16:14:49.420877 4830 scope.go:117] "RemoveContainer" containerID="3d0927005c6ee0d40ef4812464f5f371dda4630446091c790bf59a1173396d25" Feb 27 16:14:49 crc kubenswrapper[4830]: I0227 16:14:49.426122 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-966h2"] Feb 27 16:14:49 crc kubenswrapper[4830]: I0227 16:14:49.431156 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-966h2"] Feb 27 16:14:49 crc kubenswrapper[4830]: I0227 16:14:49.434559 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jt6jl" Feb 27 16:14:49 crc kubenswrapper[4830]: I0227 16:14:49.454546 4830 scope.go:117] "RemoveContainer" containerID="5c9aab3a73c629bd869eed924733be4ab0e2f3a57268750c95d6c1598dcd566c" Feb 27 16:14:49 crc kubenswrapper[4830]: I0227 16:14:49.492059 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rlm59\" (UniqueName: \"kubernetes.io/projected/85b6b000-62ad-4dfa-b384-c603bec84bbd-kube-api-access-rlm59\") pod \"redhat-marketplace-62jtg\" (UID: \"85b6b000-62ad-4dfa-b384-c603bec84bbd\") " pod="openshift-marketplace/redhat-marketplace-62jtg" Feb 27 16:14:49 crc kubenswrapper[4830]: I0227 16:14:49.492154 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/85b6b000-62ad-4dfa-b384-c603bec84bbd-utilities\") pod \"redhat-marketplace-62jtg\" (UID: \"85b6b000-62ad-4dfa-b384-c603bec84bbd\") " pod="openshift-marketplace/redhat-marketplace-62jtg" Feb 27 16:14:49 crc kubenswrapper[4830]: I0227 16:14:49.492187 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/85b6b000-62ad-4dfa-b384-c603bec84bbd-catalog-content\") pod \"redhat-marketplace-62jtg\" (UID: \"85b6b000-62ad-4dfa-b384-c603bec84bbd\") " pod="openshift-marketplace/redhat-marketplace-62jtg" Feb 27 16:14:49 crc kubenswrapper[4830]: I0227 16:14:49.492608 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/85b6b000-62ad-4dfa-b384-c603bec84bbd-catalog-content\") pod \"redhat-marketplace-62jtg\" (UID: \"85b6b000-62ad-4dfa-b384-c603bec84bbd\") " pod="openshift-marketplace/redhat-marketplace-62jtg" Feb 27 16:14:49 crc kubenswrapper[4830]: I0227 16:14:49.493233 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/85b6b000-62ad-4dfa-b384-c603bec84bbd-utilities\") pod \"redhat-marketplace-62jtg\" (UID: \"85b6b000-62ad-4dfa-b384-c603bec84bbd\") " pod="openshift-marketplace/redhat-marketplace-62jtg" Feb 27 16:14:49 crc kubenswrapper[4830]: I0227 16:14:49.515647 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rlm59\" (UniqueName: \"kubernetes.io/projected/85b6b000-62ad-4dfa-b384-c603bec84bbd-kube-api-access-rlm59\") pod \"redhat-marketplace-62jtg\" (UID: \"85b6b000-62ad-4dfa-b384-c603bec84bbd\") " pod="openshift-marketplace/redhat-marketplace-62jtg" Feb 27 16:14:49 crc kubenswrapper[4830]: I0227 16:14:49.668186 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-62jtg" Feb 27 16:14:49 crc kubenswrapper[4830]: I0227 16:14:49.844179 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jt6jl"] Feb 27 16:14:50 crc kubenswrapper[4830]: I0227 16:14:50.081811 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-62jtg"] Feb 27 16:14:50 crc kubenswrapper[4830]: W0227 16:14:50.091581 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod85b6b000_62ad_4dfa_b384_c603bec84bbd.slice/crio-d741af54dde7c6f6724981c4337da18c02e6a4ffb413bee875335c3d62b6b90e WatchSource:0}: Error finding container d741af54dde7c6f6724981c4337da18c02e6a4ffb413bee875335c3d62b6b90e: Status 404 returned error can't find the container with id d741af54dde7c6f6724981c4337da18c02e6a4ffb413bee875335c3d62b6b90e Feb 27 16:14:50 crc kubenswrapper[4830]: I0227 16:14:50.370767 4830 generic.go:334] "Generic (PLEG): container finished" podID="85b6b000-62ad-4dfa-b384-c603bec84bbd" containerID="56d416325ff1d6dac76d28894105ea322e80d4334cb93e943e978cce6184b6e7" exitCode=0 Feb 27 16:14:50 crc kubenswrapper[4830]: I0227 16:14:50.370880 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-62jtg" event={"ID":"85b6b000-62ad-4dfa-b384-c603bec84bbd","Type":"ContainerDied","Data":"56d416325ff1d6dac76d28894105ea322e80d4334cb93e943e978cce6184b6e7"} Feb 27 16:14:50 crc kubenswrapper[4830]: I0227 16:14:50.371008 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-62jtg" event={"ID":"85b6b000-62ad-4dfa-b384-c603bec84bbd","Type":"ContainerStarted","Data":"d741af54dde7c6f6724981c4337da18c02e6a4ffb413bee875335c3d62b6b90e"} Feb 27 16:14:50 crc kubenswrapper[4830]: I0227 16:14:50.374483 4830 generic.go:334] "Generic (PLEG): container finished" 
podID="07c2162b-fcb8-4423-b0c6-75eefad7b1f8" containerID="e3dab58ca71daa8e06ecd55b99936ff2fd36914c8c19964004de91fffec7a5e0" exitCode=0 Feb 27 16:14:50 crc kubenswrapper[4830]: I0227 16:14:50.374684 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jt6jl" event={"ID":"07c2162b-fcb8-4423-b0c6-75eefad7b1f8","Type":"ContainerDied","Data":"e3dab58ca71daa8e06ecd55b99936ff2fd36914c8c19964004de91fffec7a5e0"} Feb 27 16:14:50 crc kubenswrapper[4830]: I0227 16:14:50.374737 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jt6jl" event={"ID":"07c2162b-fcb8-4423-b0c6-75eefad7b1f8","Type":"ContainerStarted","Data":"8062a7294767a91727df0948ed7ae665be46cde3e52644c0e41a65f779e353c2"} Feb 27 16:14:50 crc kubenswrapper[4830]: I0227 16:14:50.882436 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b33138a-5b9d-4af8-b13d-4db4c2613983" path="/var/lib/kubelet/pods/8b33138a-5b9d-4af8-b13d-4db4c2613983/volumes" Feb 27 16:14:51 crc kubenswrapper[4830]: I0227 16:14:51.499116 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-jq4fp"] Feb 27 16:14:51 crc kubenswrapper[4830]: I0227 16:14:51.505253 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jq4fp" Feb 27 16:14:51 crc kubenswrapper[4830]: I0227 16:14:51.509663 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 27 16:14:51 crc kubenswrapper[4830]: I0227 16:14:51.514536 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jq4fp"] Feb 27 16:14:51 crc kubenswrapper[4830]: I0227 16:14:51.518242 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r52ll\" (UniqueName: \"kubernetes.io/projected/6ce624ae-e85d-456f-9da1-fb880e9640ca-kube-api-access-r52ll\") pod \"redhat-operators-jq4fp\" (UID: \"6ce624ae-e85d-456f-9da1-fb880e9640ca\") " pod="openshift-marketplace/redhat-operators-jq4fp" Feb 27 16:14:51 crc kubenswrapper[4830]: I0227 16:14:51.518320 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ce624ae-e85d-456f-9da1-fb880e9640ca-utilities\") pod \"redhat-operators-jq4fp\" (UID: \"6ce624ae-e85d-456f-9da1-fb880e9640ca\") " pod="openshift-marketplace/redhat-operators-jq4fp" Feb 27 16:14:51 crc kubenswrapper[4830]: I0227 16:14:51.518407 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ce624ae-e85d-456f-9da1-fb880e9640ca-catalog-content\") pod \"redhat-operators-jq4fp\" (UID: \"6ce624ae-e85d-456f-9da1-fb880e9640ca\") " pod="openshift-marketplace/redhat-operators-jq4fp" Feb 27 16:14:51 crc kubenswrapper[4830]: I0227 16:14:51.619525 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r52ll\" (UniqueName: \"kubernetes.io/projected/6ce624ae-e85d-456f-9da1-fb880e9640ca-kube-api-access-r52ll\") pod \"redhat-operators-jq4fp\" (UID: 
\"6ce624ae-e85d-456f-9da1-fb880e9640ca\") " pod="openshift-marketplace/redhat-operators-jq4fp" Feb 27 16:14:51 crc kubenswrapper[4830]: I0227 16:14:51.619571 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ce624ae-e85d-456f-9da1-fb880e9640ca-utilities\") pod \"redhat-operators-jq4fp\" (UID: \"6ce624ae-e85d-456f-9da1-fb880e9640ca\") " pod="openshift-marketplace/redhat-operators-jq4fp" Feb 27 16:14:51 crc kubenswrapper[4830]: I0227 16:14:51.619616 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ce624ae-e85d-456f-9da1-fb880e9640ca-catalog-content\") pod \"redhat-operators-jq4fp\" (UID: \"6ce624ae-e85d-456f-9da1-fb880e9640ca\") " pod="openshift-marketplace/redhat-operators-jq4fp" Feb 27 16:14:51 crc kubenswrapper[4830]: I0227 16:14:51.620388 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ce624ae-e85d-456f-9da1-fb880e9640ca-utilities\") pod \"redhat-operators-jq4fp\" (UID: \"6ce624ae-e85d-456f-9da1-fb880e9640ca\") " pod="openshift-marketplace/redhat-operators-jq4fp" Feb 27 16:14:51 crc kubenswrapper[4830]: I0227 16:14:51.620446 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ce624ae-e85d-456f-9da1-fb880e9640ca-catalog-content\") pod \"redhat-operators-jq4fp\" (UID: \"6ce624ae-e85d-456f-9da1-fb880e9640ca\") " pod="openshift-marketplace/redhat-operators-jq4fp" Feb 27 16:14:51 crc kubenswrapper[4830]: I0227 16:14:51.649534 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r52ll\" (UniqueName: \"kubernetes.io/projected/6ce624ae-e85d-456f-9da1-fb880e9640ca-kube-api-access-r52ll\") pod \"redhat-operators-jq4fp\" (UID: \"6ce624ae-e85d-456f-9da1-fb880e9640ca\") " 
pod="openshift-marketplace/redhat-operators-jq4fp" Feb 27 16:14:51 crc kubenswrapper[4830]: I0227 16:14:51.708749 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-nckd4"] Feb 27 16:14:51 crc kubenswrapper[4830]: I0227 16:14:51.712994 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nckd4" Feb 27 16:14:51 crc kubenswrapper[4830]: I0227 16:14:51.717514 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nckd4"] Feb 27 16:14:51 crc kubenswrapper[4830]: I0227 16:14:51.719586 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 27 16:14:51 crc kubenswrapper[4830]: I0227 16:14:51.822161 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0dbf914-3579-4535-94f5-ea7382816919-utilities\") pod \"community-operators-nckd4\" (UID: \"f0dbf914-3579-4535-94f5-ea7382816919\") " pod="openshift-marketplace/community-operators-nckd4" Feb 27 16:14:51 crc kubenswrapper[4830]: I0227 16:14:51.822241 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0dbf914-3579-4535-94f5-ea7382816919-catalog-content\") pod \"community-operators-nckd4\" (UID: \"f0dbf914-3579-4535-94f5-ea7382816919\") " pod="openshift-marketplace/community-operators-nckd4" Feb 27 16:14:51 crc kubenswrapper[4830]: I0227 16:14:51.822325 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtcm5\" (UniqueName: \"kubernetes.io/projected/f0dbf914-3579-4535-94f5-ea7382816919-kube-api-access-wtcm5\") pod \"community-operators-nckd4\" (UID: \"f0dbf914-3579-4535-94f5-ea7382816919\") " 
pod="openshift-marketplace/community-operators-nckd4" Feb 27 16:14:51 crc kubenswrapper[4830]: I0227 16:14:51.868717 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jq4fp" Feb 27 16:14:51 crc kubenswrapper[4830]: I0227 16:14:51.924498 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0dbf914-3579-4535-94f5-ea7382816919-utilities\") pod \"community-operators-nckd4\" (UID: \"f0dbf914-3579-4535-94f5-ea7382816919\") " pod="openshift-marketplace/community-operators-nckd4" Feb 27 16:14:51 crc kubenswrapper[4830]: I0227 16:14:51.924840 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0dbf914-3579-4535-94f5-ea7382816919-catalog-content\") pod \"community-operators-nckd4\" (UID: \"f0dbf914-3579-4535-94f5-ea7382816919\") " pod="openshift-marketplace/community-operators-nckd4" Feb 27 16:14:51 crc kubenswrapper[4830]: I0227 16:14:51.924902 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wtcm5\" (UniqueName: \"kubernetes.io/projected/f0dbf914-3579-4535-94f5-ea7382816919-kube-api-access-wtcm5\") pod \"community-operators-nckd4\" (UID: \"f0dbf914-3579-4535-94f5-ea7382816919\") " pod="openshift-marketplace/community-operators-nckd4" Feb 27 16:14:51 crc kubenswrapper[4830]: I0227 16:14:51.925737 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0dbf914-3579-4535-94f5-ea7382816919-catalog-content\") pod \"community-operators-nckd4\" (UID: \"f0dbf914-3579-4535-94f5-ea7382816919\") " pod="openshift-marketplace/community-operators-nckd4" Feb 27 16:14:51 crc kubenswrapper[4830]: I0227 16:14:51.925769 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/f0dbf914-3579-4535-94f5-ea7382816919-utilities\") pod \"community-operators-nckd4\" (UID: \"f0dbf914-3579-4535-94f5-ea7382816919\") " pod="openshift-marketplace/community-operators-nckd4" Feb 27 16:14:51 crc kubenswrapper[4830]: I0227 16:14:51.946670 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wtcm5\" (UniqueName: \"kubernetes.io/projected/f0dbf914-3579-4535-94f5-ea7382816919-kube-api-access-wtcm5\") pod \"community-operators-nckd4\" (UID: \"f0dbf914-3579-4535-94f5-ea7382816919\") " pod="openshift-marketplace/community-operators-nckd4" Feb 27 16:14:52 crc kubenswrapper[4830]: I0227 16:14:52.041243 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nckd4" Feb 27 16:14:52 crc kubenswrapper[4830]: I0227 16:14:52.333513 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jq4fp"] Feb 27 16:14:52 crc kubenswrapper[4830]: I0227 16:14:52.388821 4830 generic.go:334] "Generic (PLEG): container finished" podID="85b6b000-62ad-4dfa-b384-c603bec84bbd" containerID="6ad8fa6b59aea2592e305b3ef5cccc451a0529120ab0bed9bc3f760fcf6ba915" exitCode=0 Feb 27 16:14:52 crc kubenswrapper[4830]: I0227 16:14:52.388904 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-62jtg" event={"ID":"85b6b000-62ad-4dfa-b384-c603bec84bbd","Type":"ContainerDied","Data":"6ad8fa6b59aea2592e305b3ef5cccc451a0529120ab0bed9bc3f760fcf6ba915"} Feb 27 16:14:52 crc kubenswrapper[4830]: I0227 16:14:52.397042 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jt6jl" event={"ID":"07c2162b-fcb8-4423-b0c6-75eefad7b1f8","Type":"ContainerStarted","Data":"9a9ff0563ccdf509a46a54799d82b5e6abf49aee6f0c8ae60d4db9d084ff65d3"} Feb 27 16:14:52 crc kubenswrapper[4830]: W0227 16:14:52.397913 4830 manager.go:1169] Failed to process watch event 
{EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6ce624ae_e85d_456f_9da1_fb880e9640ca.slice/crio-1a599ad8a5a601b5f2d964560f58309295a66494fdd914f432139e825919ba7d WatchSource:0}: Error finding container 1a599ad8a5a601b5f2d964560f58309295a66494fdd914f432139e825919ba7d: Status 404 returned error can't find the container with id 1a599ad8a5a601b5f2d964560f58309295a66494fdd914f432139e825919ba7d Feb 27 16:14:52 crc kubenswrapper[4830]: I0227 16:14:52.489050 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nckd4"] Feb 27 16:14:53 crc kubenswrapper[4830]: I0227 16:14:53.407105 4830 generic.go:334] "Generic (PLEG): container finished" podID="07c2162b-fcb8-4423-b0c6-75eefad7b1f8" containerID="9a9ff0563ccdf509a46a54799d82b5e6abf49aee6f0c8ae60d4db9d084ff65d3" exitCode=0 Feb 27 16:14:53 crc kubenswrapper[4830]: I0227 16:14:53.407238 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jt6jl" event={"ID":"07c2162b-fcb8-4423-b0c6-75eefad7b1f8","Type":"ContainerDied","Data":"9a9ff0563ccdf509a46a54799d82b5e6abf49aee6f0c8ae60d4db9d084ff65d3"} Feb 27 16:14:53 crc kubenswrapper[4830]: I0227 16:14:53.411938 4830 generic.go:334] "Generic (PLEG): container finished" podID="6ce624ae-e85d-456f-9da1-fb880e9640ca" containerID="29d530d863c17513c0412a004e1572baa23716c132b6ed3f6ebaf7a5e29485b5" exitCode=0 Feb 27 16:14:53 crc kubenswrapper[4830]: I0227 16:14:53.413206 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jq4fp" event={"ID":"6ce624ae-e85d-456f-9da1-fb880e9640ca","Type":"ContainerDied","Data":"29d530d863c17513c0412a004e1572baa23716c132b6ed3f6ebaf7a5e29485b5"} Feb 27 16:14:53 crc kubenswrapper[4830]: I0227 16:14:53.413320 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jq4fp" 
event={"ID":"6ce624ae-e85d-456f-9da1-fb880e9640ca","Type":"ContainerStarted","Data":"1a599ad8a5a601b5f2d964560f58309295a66494fdd914f432139e825919ba7d"} Feb 27 16:14:53 crc kubenswrapper[4830]: I0227 16:14:53.422790 4830 generic.go:334] "Generic (PLEG): container finished" podID="f0dbf914-3579-4535-94f5-ea7382816919" containerID="a318a9080d0d40a642928c7eb71b0739a2dcae925da555dbad2f3e3c0b67014c" exitCode=0 Feb 27 16:14:53 crc kubenswrapper[4830]: I0227 16:14:53.422918 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nckd4" event={"ID":"f0dbf914-3579-4535-94f5-ea7382816919","Type":"ContainerDied","Data":"a318a9080d0d40a642928c7eb71b0739a2dcae925da555dbad2f3e3c0b67014c"} Feb 27 16:14:53 crc kubenswrapper[4830]: I0227 16:14:53.422974 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nckd4" event={"ID":"f0dbf914-3579-4535-94f5-ea7382816919","Type":"ContainerStarted","Data":"edc6f18460257024fbcd83b35e4b01331f222dc79ff63cfbc1147e4acd6ea6a6"} Feb 27 16:14:53 crc kubenswrapper[4830]: I0227 16:14:53.440181 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-62jtg" event={"ID":"85b6b000-62ad-4dfa-b384-c603bec84bbd","Type":"ContainerStarted","Data":"5e7528da3e362e3215ee9df9e0583599e63d25ec03f2c7acbe6f262e04536df6"} Feb 27 16:14:53 crc kubenswrapper[4830]: I0227 16:14:53.506788 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-62jtg" podStartSLOduration=1.9536788459999999 podStartE2EDuration="4.506769061s" podCreationTimestamp="2026-02-27 16:14:49 +0000 UTC" firstStartedPulling="2026-02-27 16:14:50.372311872 +0000 UTC m=+486.461584335" lastFinishedPulling="2026-02-27 16:14:52.925402047 +0000 UTC m=+489.014674550" observedRunningTime="2026-02-27 16:14:53.501388315 +0000 UTC m=+489.590660808" watchObservedRunningTime="2026-02-27 16:14:53.506769061 +0000 UTC 
m=+489.596041544" Feb 27 16:14:54 crc kubenswrapper[4830]: I0227 16:14:54.477709 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jt6jl" event={"ID":"07c2162b-fcb8-4423-b0c6-75eefad7b1f8","Type":"ContainerStarted","Data":"1b6401b8d147a2db37c19e8880e46cca00360015fce4613f84d8255fb65355c2"} Feb 27 16:14:54 crc kubenswrapper[4830]: I0227 16:14:54.501433 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-jt6jl" podStartSLOduration=1.964862708 podStartE2EDuration="5.501395938s" podCreationTimestamp="2026-02-27 16:14:49 +0000 UTC" firstStartedPulling="2026-02-27 16:14:50.376234152 +0000 UTC m=+486.465506615" lastFinishedPulling="2026-02-27 16:14:53.912767352 +0000 UTC m=+490.002039845" observedRunningTime="2026-02-27 16:14:54.497751286 +0000 UTC m=+490.587023759" watchObservedRunningTime="2026-02-27 16:14:54.501395938 +0000 UTC m=+490.590668442" Feb 27 16:14:56 crc kubenswrapper[4830]: I0227 16:14:56.492817 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jq4fp" event={"ID":"6ce624ae-e85d-456f-9da1-fb880e9640ca","Type":"ContainerStarted","Data":"54a25320571cb228c6816c54f1f9f29af84617095a05311d8909e46eada3cd5d"} Feb 27 16:14:56 crc kubenswrapper[4830]: I0227 16:14:56.503340 4830 generic.go:334] "Generic (PLEG): container finished" podID="f0dbf914-3579-4535-94f5-ea7382816919" containerID="a8c43b368280b1d13fb9612d19fe2ae6a95e154470972a17c53f4c3303fc2b3e" exitCode=0 Feb 27 16:14:56 crc kubenswrapper[4830]: I0227 16:14:56.503536 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nckd4" event={"ID":"f0dbf914-3579-4535-94f5-ea7382816919","Type":"ContainerDied","Data":"a8c43b368280b1d13fb9612d19fe2ae6a95e154470972a17c53f4c3303fc2b3e"} Feb 27 16:14:57 crc kubenswrapper[4830]: I0227 16:14:57.513371 4830 generic.go:334] "Generic (PLEG): container finished" 
podID="6ce624ae-e85d-456f-9da1-fb880e9640ca" containerID="54a25320571cb228c6816c54f1f9f29af84617095a05311d8909e46eada3cd5d" exitCode=0 Feb 27 16:14:57 crc kubenswrapper[4830]: I0227 16:14:57.513459 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jq4fp" event={"ID":"6ce624ae-e85d-456f-9da1-fb880e9640ca","Type":"ContainerDied","Data":"54a25320571cb228c6816c54f1f9f29af84617095a05311d8909e46eada3cd5d"} Feb 27 16:14:58 crc kubenswrapper[4830]: I0227 16:14:58.541341 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nckd4" event={"ID":"f0dbf914-3579-4535-94f5-ea7382816919","Type":"ContainerStarted","Data":"40d6e44fe47d1af4d9b4d3003f9b0088e31006c9a2e424cd48f63c77d75129bd"} Feb 27 16:14:58 crc kubenswrapper[4830]: I0227 16:14:58.572652 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-nckd4" podStartSLOduration=3.755429289 podStartE2EDuration="7.572617753s" podCreationTimestamp="2026-02-27 16:14:51 +0000 UTC" firstStartedPulling="2026-02-27 16:14:53.427656911 +0000 UTC m=+489.516929394" lastFinishedPulling="2026-02-27 16:14:57.244845395 +0000 UTC m=+493.334117858" observedRunningTime="2026-02-27 16:14:58.566816066 +0000 UTC m=+494.656088529" watchObservedRunningTime="2026-02-27 16:14:58.572617753 +0000 UTC m=+494.661890256" Feb 27 16:14:59 crc kubenswrapper[4830]: I0227 16:14:59.435126 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-jt6jl" Feb 27 16:14:59 crc kubenswrapper[4830]: I0227 16:14:59.435227 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-jt6jl" Feb 27 16:14:59 crc kubenswrapper[4830]: I0227 16:14:59.505634 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-jt6jl" Feb 27 16:14:59 crc 
kubenswrapper[4830]: I0227 16:14:59.553662 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jq4fp" event={"ID":"6ce624ae-e85d-456f-9da1-fb880e9640ca","Type":"ContainerStarted","Data":"afd5974f2fdc499677a0cb1dc1699eb5ca31160c38debdc76598889193447d5c"} Feb 27 16:14:59 crc kubenswrapper[4830]: I0227 16:14:59.577052 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-jq4fp" podStartSLOduration=3.48798607 podStartE2EDuration="8.577024187s" podCreationTimestamp="2026-02-27 16:14:51 +0000 UTC" firstStartedPulling="2026-02-27 16:14:53.41927785 +0000 UTC m=+489.508550353" lastFinishedPulling="2026-02-27 16:14:58.508316007 +0000 UTC m=+494.597588470" observedRunningTime="2026-02-27 16:14:59.572492553 +0000 UTC m=+495.661765056" watchObservedRunningTime="2026-02-27 16:14:59.577024187 +0000 UTC m=+495.666296690" Feb 27 16:14:59 crc kubenswrapper[4830]: I0227 16:14:59.617210 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-jt6jl" Feb 27 16:14:59 crc kubenswrapper[4830]: I0227 16:14:59.694354 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-62jtg" Feb 27 16:14:59 crc kubenswrapper[4830]: I0227 16:14:59.694423 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-62jtg" Feb 27 16:14:59 crc kubenswrapper[4830]: I0227 16:14:59.730584 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-62jtg" Feb 27 16:15:00 crc kubenswrapper[4830]: I0227 16:15:00.146391 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536815-jjxmq"] Feb 27 16:15:00 crc kubenswrapper[4830]: I0227 16:15:00.147222 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29536815-jjxmq" Feb 27 16:15:00 crc kubenswrapper[4830]: I0227 16:15:00.150899 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 27 16:15:00 crc kubenswrapper[4830]: I0227 16:15:00.151285 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 27 16:15:00 crc kubenswrapper[4830]: I0227 16:15:00.162748 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536815-jjxmq"] Feb 27 16:15:00 crc kubenswrapper[4830]: I0227 16:15:00.309609 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b1d91153-884f-474d-a8f6-e14287fd0a16-config-volume\") pod \"collect-profiles-29536815-jjxmq\" (UID: \"b1d91153-884f-474d-a8f6-e14287fd0a16\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536815-jjxmq" Feb 27 16:15:00 crc kubenswrapper[4830]: I0227 16:15:00.309993 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b1d91153-884f-474d-a8f6-e14287fd0a16-secret-volume\") pod \"collect-profiles-29536815-jjxmq\" (UID: \"b1d91153-884f-474d-a8f6-e14287fd0a16\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536815-jjxmq" Feb 27 16:15:00 crc kubenswrapper[4830]: I0227 16:15:00.312122 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54g7x\" (UniqueName: \"kubernetes.io/projected/b1d91153-884f-474d-a8f6-e14287fd0a16-kube-api-access-54g7x\") pod \"collect-profiles-29536815-jjxmq\" (UID: \"b1d91153-884f-474d-a8f6-e14287fd0a16\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29536815-jjxmq" Feb 27 16:15:00 crc kubenswrapper[4830]: I0227 16:15:00.413321 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-54g7x\" (UniqueName: \"kubernetes.io/projected/b1d91153-884f-474d-a8f6-e14287fd0a16-kube-api-access-54g7x\") pod \"collect-profiles-29536815-jjxmq\" (UID: \"b1d91153-884f-474d-a8f6-e14287fd0a16\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536815-jjxmq" Feb 27 16:15:00 crc kubenswrapper[4830]: I0227 16:15:00.413440 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b1d91153-884f-474d-a8f6-e14287fd0a16-config-volume\") pod \"collect-profiles-29536815-jjxmq\" (UID: \"b1d91153-884f-474d-a8f6-e14287fd0a16\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536815-jjxmq" Feb 27 16:15:00 crc kubenswrapper[4830]: I0227 16:15:00.413488 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b1d91153-884f-474d-a8f6-e14287fd0a16-secret-volume\") pod \"collect-profiles-29536815-jjxmq\" (UID: \"b1d91153-884f-474d-a8f6-e14287fd0a16\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536815-jjxmq" Feb 27 16:15:00 crc kubenswrapper[4830]: I0227 16:15:00.415612 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b1d91153-884f-474d-a8f6-e14287fd0a16-config-volume\") pod \"collect-profiles-29536815-jjxmq\" (UID: \"b1d91153-884f-474d-a8f6-e14287fd0a16\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536815-jjxmq" Feb 27 16:15:00 crc kubenswrapper[4830]: I0227 16:15:00.423343 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/b1d91153-884f-474d-a8f6-e14287fd0a16-secret-volume\") pod \"collect-profiles-29536815-jjxmq\" (UID: \"b1d91153-884f-474d-a8f6-e14287fd0a16\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536815-jjxmq" Feb 27 16:15:00 crc kubenswrapper[4830]: I0227 16:15:00.433196 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-54g7x\" (UniqueName: \"kubernetes.io/projected/b1d91153-884f-474d-a8f6-e14287fd0a16-kube-api-access-54g7x\") pod \"collect-profiles-29536815-jjxmq\" (UID: \"b1d91153-884f-474d-a8f6-e14287fd0a16\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536815-jjxmq" Feb 27 16:15:00 crc kubenswrapper[4830]: I0227 16:15:00.473384 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29536815-jjxmq" Feb 27 16:15:00 crc kubenswrapper[4830]: I0227 16:15:00.612766 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-62jtg" Feb 27 16:15:00 crc kubenswrapper[4830]: I0227 16:15:00.931887 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536815-jjxmq"] Feb 27 16:15:00 crc kubenswrapper[4830]: W0227 16:15:00.939175 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb1d91153_884f_474d_a8f6_e14287fd0a16.slice/crio-0ff0400f004b7ae1317495bc68f7b208eb11d3b721cf25953c79fd3049ae2283 WatchSource:0}: Error finding container 0ff0400f004b7ae1317495bc68f7b208eb11d3b721cf25953c79fd3049ae2283: Status 404 returned error can't find the container with id 0ff0400f004b7ae1317495bc68f7b208eb11d3b721cf25953c79fd3049ae2283 Feb 27 16:15:01 crc kubenswrapper[4830]: I0227 16:15:01.578125 4830 generic.go:334] "Generic (PLEG): container finished" podID="b1d91153-884f-474d-a8f6-e14287fd0a16" 
containerID="8cbee3107edaf59aab527082dd6fc221346233646ff72e54576a142fadfef314" exitCode=0 Feb 27 16:15:01 crc kubenswrapper[4830]: I0227 16:15:01.578224 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29536815-jjxmq" event={"ID":"b1d91153-884f-474d-a8f6-e14287fd0a16","Type":"ContainerDied","Data":"8cbee3107edaf59aab527082dd6fc221346233646ff72e54576a142fadfef314"} Feb 27 16:15:01 crc kubenswrapper[4830]: I0227 16:15:01.578426 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29536815-jjxmq" event={"ID":"b1d91153-884f-474d-a8f6-e14287fd0a16","Type":"ContainerStarted","Data":"0ff0400f004b7ae1317495bc68f7b208eb11d3b721cf25953c79fd3049ae2283"} Feb 27 16:15:01 crc kubenswrapper[4830]: I0227 16:15:01.869638 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-jq4fp" Feb 27 16:15:01 crc kubenswrapper[4830]: I0227 16:15:01.869724 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-jq4fp" Feb 27 16:15:02 crc kubenswrapper[4830]: I0227 16:15:02.042228 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-nckd4" Feb 27 16:15:02 crc kubenswrapper[4830]: I0227 16:15:02.042278 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-nckd4" Feb 27 16:15:02 crc kubenswrapper[4830]: I0227 16:15:02.111030 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-nckd4" Feb 27 16:15:02 crc kubenswrapper[4830]: I0227 16:15:02.653999 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-nckd4" Feb 27 16:15:02 crc kubenswrapper[4830]: I0227 16:15:02.929776 4830 prober.go:107] 
"Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-jq4fp" podUID="6ce624ae-e85d-456f-9da1-fb880e9640ca" containerName="registry-server" probeResult="failure" output=< Feb 27 16:15:02 crc kubenswrapper[4830]: timeout: failed to connect service ":50051" within 1s Feb 27 16:15:02 crc kubenswrapper[4830]: > Feb 27 16:15:02 crc kubenswrapper[4830]: I0227 16:15:02.950231 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29536815-jjxmq" Feb 27 16:15:03 crc kubenswrapper[4830]: I0227 16:15:03.056841 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-54g7x\" (UniqueName: \"kubernetes.io/projected/b1d91153-884f-474d-a8f6-e14287fd0a16-kube-api-access-54g7x\") pod \"b1d91153-884f-474d-a8f6-e14287fd0a16\" (UID: \"b1d91153-884f-474d-a8f6-e14287fd0a16\") " Feb 27 16:15:03 crc kubenswrapper[4830]: I0227 16:15:03.056919 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b1d91153-884f-474d-a8f6-e14287fd0a16-config-volume\") pod \"b1d91153-884f-474d-a8f6-e14287fd0a16\" (UID: \"b1d91153-884f-474d-a8f6-e14287fd0a16\") " Feb 27 16:15:03 crc kubenswrapper[4830]: I0227 16:15:03.057039 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b1d91153-884f-474d-a8f6-e14287fd0a16-secret-volume\") pod \"b1d91153-884f-474d-a8f6-e14287fd0a16\" (UID: \"b1d91153-884f-474d-a8f6-e14287fd0a16\") " Feb 27 16:15:03 crc kubenswrapper[4830]: I0227 16:15:03.057836 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b1d91153-884f-474d-a8f6-e14287fd0a16-config-volume" (OuterVolumeSpecName: "config-volume") pod "b1d91153-884f-474d-a8f6-e14287fd0a16" (UID: "b1d91153-884f-474d-a8f6-e14287fd0a16"). 
InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:15:03 crc kubenswrapper[4830]: I0227 16:15:03.061825 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1d91153-884f-474d-a8f6-e14287fd0a16-kube-api-access-54g7x" (OuterVolumeSpecName: "kube-api-access-54g7x") pod "b1d91153-884f-474d-a8f6-e14287fd0a16" (UID: "b1d91153-884f-474d-a8f6-e14287fd0a16"). InnerVolumeSpecName "kube-api-access-54g7x". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:15:03 crc kubenswrapper[4830]: I0227 16:15:03.062069 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1d91153-884f-474d-a8f6-e14287fd0a16-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "b1d91153-884f-474d-a8f6-e14287fd0a16" (UID: "b1d91153-884f-474d-a8f6-e14287fd0a16"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:15:03 crc kubenswrapper[4830]: I0227 16:15:03.158153 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-54g7x\" (UniqueName: \"kubernetes.io/projected/b1d91153-884f-474d-a8f6-e14287fd0a16-kube-api-access-54g7x\") on node \"crc\" DevicePath \"\"" Feb 27 16:15:03 crc kubenswrapper[4830]: I0227 16:15:03.158199 4830 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b1d91153-884f-474d-a8f6-e14287fd0a16-config-volume\") on node \"crc\" DevicePath \"\"" Feb 27 16:15:03 crc kubenswrapper[4830]: I0227 16:15:03.158217 4830 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b1d91153-884f-474d-a8f6-e14287fd0a16-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 27 16:15:03 crc kubenswrapper[4830]: I0227 16:15:03.591967 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29536815-jjxmq" 
event={"ID":"b1d91153-884f-474d-a8f6-e14287fd0a16","Type":"ContainerDied","Data":"0ff0400f004b7ae1317495bc68f7b208eb11d3b721cf25953c79fd3049ae2283"} Feb 27 16:15:03 crc kubenswrapper[4830]: I0227 16:15:03.592015 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0ff0400f004b7ae1317495bc68f7b208eb11d3b721cf25953c79fd3049ae2283" Feb 27 16:15:03 crc kubenswrapper[4830]: I0227 16:15:03.591982 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29536815-jjxmq" Feb 27 16:15:11 crc kubenswrapper[4830]: I0227 16:15:11.949499 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-jq4fp" Feb 27 16:15:12 crc kubenswrapper[4830]: I0227 16:15:12.018615 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-jq4fp" Feb 27 16:16:00 crc kubenswrapper[4830]: I0227 16:16:00.148110 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536816-xjsj9"] Feb 27 16:16:00 crc kubenswrapper[4830]: E0227 16:16:00.149212 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1d91153-884f-474d-a8f6-e14287fd0a16" containerName="collect-profiles" Feb 27 16:16:00 crc kubenswrapper[4830]: I0227 16:16:00.149247 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1d91153-884f-474d-a8f6-e14287fd0a16" containerName="collect-profiles" Feb 27 16:16:00 crc kubenswrapper[4830]: I0227 16:16:00.149525 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1d91153-884f-474d-a8f6-e14287fd0a16" containerName="collect-profiles" Feb 27 16:16:00 crc kubenswrapper[4830]: I0227 16:16:00.150344 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536816-xjsj9" Feb 27 16:16:00 crc kubenswrapper[4830]: I0227 16:16:00.152730 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lmbtm" Feb 27 16:16:00 crc kubenswrapper[4830]: I0227 16:16:00.153042 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 16:16:00 crc kubenswrapper[4830]: I0227 16:16:00.153560 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 16:16:00 crc kubenswrapper[4830]: I0227 16:16:00.156114 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536816-xjsj9"] Feb 27 16:16:00 crc kubenswrapper[4830]: I0227 16:16:00.285445 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmj59\" (UniqueName: \"kubernetes.io/projected/24d73d21-c3de-47b8-a9cf-38fba733a4b8-kube-api-access-cmj59\") pod \"auto-csr-approver-29536816-xjsj9\" (UID: \"24d73d21-c3de-47b8-a9cf-38fba733a4b8\") " pod="openshift-infra/auto-csr-approver-29536816-xjsj9" Feb 27 16:16:00 crc kubenswrapper[4830]: I0227 16:16:00.386788 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cmj59\" (UniqueName: \"kubernetes.io/projected/24d73d21-c3de-47b8-a9cf-38fba733a4b8-kube-api-access-cmj59\") pod \"auto-csr-approver-29536816-xjsj9\" (UID: \"24d73d21-c3de-47b8-a9cf-38fba733a4b8\") " pod="openshift-infra/auto-csr-approver-29536816-xjsj9" Feb 27 16:16:00 crc kubenswrapper[4830]: I0227 16:16:00.431436 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cmj59\" (UniqueName: \"kubernetes.io/projected/24d73d21-c3de-47b8-a9cf-38fba733a4b8-kube-api-access-cmj59\") pod \"auto-csr-approver-29536816-xjsj9\" (UID: \"24d73d21-c3de-47b8-a9cf-38fba733a4b8\") " 
pod="openshift-infra/auto-csr-approver-29536816-xjsj9" Feb 27 16:16:00 crc kubenswrapper[4830]: I0227 16:16:00.500547 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536816-xjsj9" Feb 27 16:16:00 crc kubenswrapper[4830]: I0227 16:16:00.774007 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536816-xjsj9"] Feb 27 16:16:00 crc kubenswrapper[4830]: I0227 16:16:00.777677 4830 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 27 16:16:01 crc kubenswrapper[4830]: I0227 16:16:01.127585 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536816-xjsj9" event={"ID":"24d73d21-c3de-47b8-a9cf-38fba733a4b8","Type":"ContainerStarted","Data":"1228c958f8514140def9764a1acf90cabd64070b641495af7a678fc90f2de57b"} Feb 27 16:16:03 crc kubenswrapper[4830]: I0227 16:16:03.139551 4830 generic.go:334] "Generic (PLEG): container finished" podID="24d73d21-c3de-47b8-a9cf-38fba733a4b8" containerID="1468d3f52c12e00c0351a45bf01df6e20300dfed38123d1bc936e2b88628e636" exitCode=0 Feb 27 16:16:03 crc kubenswrapper[4830]: I0227 16:16:03.139626 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536816-xjsj9" event={"ID":"24d73d21-c3de-47b8-a9cf-38fba733a4b8","Type":"ContainerDied","Data":"1468d3f52c12e00c0351a45bf01df6e20300dfed38123d1bc936e2b88628e636"} Feb 27 16:16:04 crc kubenswrapper[4830]: I0227 16:16:04.462184 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536816-xjsj9" Feb 27 16:16:04 crc kubenswrapper[4830]: I0227 16:16:04.549236 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cmj59\" (UniqueName: \"kubernetes.io/projected/24d73d21-c3de-47b8-a9cf-38fba733a4b8-kube-api-access-cmj59\") pod \"24d73d21-c3de-47b8-a9cf-38fba733a4b8\" (UID: \"24d73d21-c3de-47b8-a9cf-38fba733a4b8\") " Feb 27 16:16:04 crc kubenswrapper[4830]: I0227 16:16:04.557827 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24d73d21-c3de-47b8-a9cf-38fba733a4b8-kube-api-access-cmj59" (OuterVolumeSpecName: "kube-api-access-cmj59") pod "24d73d21-c3de-47b8-a9cf-38fba733a4b8" (UID: "24d73d21-c3de-47b8-a9cf-38fba733a4b8"). InnerVolumeSpecName "kube-api-access-cmj59". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:16:04 crc kubenswrapper[4830]: I0227 16:16:04.650553 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cmj59\" (UniqueName: \"kubernetes.io/projected/24d73d21-c3de-47b8-a9cf-38fba733a4b8-kube-api-access-cmj59\") on node \"crc\" DevicePath \"\"" Feb 27 16:16:05 crc kubenswrapper[4830]: I0227 16:16:05.155734 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536816-xjsj9" event={"ID":"24d73d21-c3de-47b8-a9cf-38fba733a4b8","Type":"ContainerDied","Data":"1228c958f8514140def9764a1acf90cabd64070b641495af7a678fc90f2de57b"} Feb 27 16:16:05 crc kubenswrapper[4830]: I0227 16:16:05.155796 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536816-xjsj9" Feb 27 16:16:05 crc kubenswrapper[4830]: I0227 16:16:05.155819 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1228c958f8514140def9764a1acf90cabd64070b641495af7a678fc90f2de57b" Feb 27 16:16:05 crc kubenswrapper[4830]: I0227 16:16:05.522403 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536810-bc446"] Feb 27 16:16:05 crc kubenswrapper[4830]: I0227 16:16:05.525177 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536810-bc446"] Feb 27 16:16:06 crc kubenswrapper[4830]: I0227 16:16:06.773705 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1eb064bc-39af-405a-bdbf-665e31fa07c3" path="/var/lib/kubelet/pods/1eb064bc-39af-405a-bdbf-665e31fa07c3/volumes" Feb 27 16:17:03 crc kubenswrapper[4830]: I0227 16:17:03.160661 4830 patch_prober.go:28] interesting pod/machine-config-daemon-2tv5v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 16:17:03 crc kubenswrapper[4830]: I0227 16:17:03.161582 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 16:17:25 crc kubenswrapper[4830]: I0227 16:17:25.945648 4830 scope.go:117] "RemoveContainer" containerID="2752d42115a6a9ee8f1db79008a40907b77e6730aee724c7ce880c7ef63ed522" Feb 27 16:17:33 crc kubenswrapper[4830]: I0227 16:17:33.160680 4830 patch_prober.go:28] interesting pod/machine-config-daemon-2tv5v container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 16:17:33 crc kubenswrapper[4830]: I0227 16:17:33.161261 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 16:18:00 crc kubenswrapper[4830]: I0227 16:18:00.149285 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536818-l5tbc"] Feb 27 16:18:00 crc kubenswrapper[4830]: E0227 16:18:00.150462 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24d73d21-c3de-47b8-a9cf-38fba733a4b8" containerName="oc" Feb 27 16:18:00 crc kubenswrapper[4830]: I0227 16:18:00.150483 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="24d73d21-c3de-47b8-a9cf-38fba733a4b8" containerName="oc" Feb 27 16:18:00 crc kubenswrapper[4830]: I0227 16:18:00.150659 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="24d73d21-c3de-47b8-a9cf-38fba733a4b8" containerName="oc" Feb 27 16:18:00 crc kubenswrapper[4830]: I0227 16:18:00.151253 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536818-l5tbc" Feb 27 16:18:00 crc kubenswrapper[4830]: I0227 16:18:00.155233 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lmbtm" Feb 27 16:18:00 crc kubenswrapper[4830]: I0227 16:18:00.155287 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 16:18:00 crc kubenswrapper[4830]: I0227 16:18:00.155642 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 16:18:00 crc kubenswrapper[4830]: I0227 16:18:00.179682 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536818-l5tbc"] Feb 27 16:18:00 crc kubenswrapper[4830]: I0227 16:18:00.273423 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d757f\" (UniqueName: \"kubernetes.io/projected/dd17a9f9-0fb3-4c98-bbb0-36e8d23f71a0-kube-api-access-d757f\") pod \"auto-csr-approver-29536818-l5tbc\" (UID: \"dd17a9f9-0fb3-4c98-bbb0-36e8d23f71a0\") " pod="openshift-infra/auto-csr-approver-29536818-l5tbc" Feb 27 16:18:00 crc kubenswrapper[4830]: I0227 16:18:00.375211 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d757f\" (UniqueName: \"kubernetes.io/projected/dd17a9f9-0fb3-4c98-bbb0-36e8d23f71a0-kube-api-access-d757f\") pod \"auto-csr-approver-29536818-l5tbc\" (UID: \"dd17a9f9-0fb3-4c98-bbb0-36e8d23f71a0\") " pod="openshift-infra/auto-csr-approver-29536818-l5tbc" Feb 27 16:18:00 crc kubenswrapper[4830]: I0227 16:18:00.410060 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d757f\" (UniqueName: \"kubernetes.io/projected/dd17a9f9-0fb3-4c98-bbb0-36e8d23f71a0-kube-api-access-d757f\") pod \"auto-csr-approver-29536818-l5tbc\" (UID: \"dd17a9f9-0fb3-4c98-bbb0-36e8d23f71a0\") " 
pod="openshift-infra/auto-csr-approver-29536818-l5tbc" Feb 27 16:18:00 crc kubenswrapper[4830]: I0227 16:18:00.482149 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536818-l5tbc" Feb 27 16:18:00 crc kubenswrapper[4830]: I0227 16:18:00.808918 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536818-l5tbc"] Feb 27 16:18:00 crc kubenswrapper[4830]: I0227 16:18:00.981146 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536818-l5tbc" event={"ID":"dd17a9f9-0fb3-4c98-bbb0-36e8d23f71a0","Type":"ContainerStarted","Data":"eb6567c7d683a01b471e0d90a90b30777d1fea34697d5906c4561162d1189df4"} Feb 27 16:18:03 crc kubenswrapper[4830]: I0227 16:18:03.160142 4830 patch_prober.go:28] interesting pod/machine-config-daemon-2tv5v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 16:18:03 crc kubenswrapper[4830]: I0227 16:18:03.160657 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 16:18:03 crc kubenswrapper[4830]: I0227 16:18:03.160725 4830 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" Feb 27 16:18:03 crc kubenswrapper[4830]: I0227 16:18:03.161505 4830 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a6e439bde057753a649382c8178958e1e7d593adbfc771d6e3b530cc84fe06fb"} 
pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 27 16:18:03 crc kubenswrapper[4830]: I0227 16:18:03.161605 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" containerName="machine-config-daemon" containerID="cri-o://a6e439bde057753a649382c8178958e1e7d593adbfc771d6e3b530cc84fe06fb" gracePeriod=600 Feb 27 16:18:04 crc kubenswrapper[4830]: I0227 16:18:04.002849 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536818-l5tbc" event={"ID":"dd17a9f9-0fb3-4c98-bbb0-36e8d23f71a0","Type":"ContainerStarted","Data":"e2d6e44f8d67831444414ecc436155070fa81b8ab9b4f4dbc3aa08611cd8b99e"} Feb 27 16:18:04 crc kubenswrapper[4830]: I0227 16:18:04.006378 4830 generic.go:334] "Generic (PLEG): container finished" podID="00d6b7ce-4757-4275-8345-60c1b546ce25" containerID="a6e439bde057753a649382c8178958e1e7d593adbfc771d6e3b530cc84fe06fb" exitCode=0 Feb 27 16:18:04 crc kubenswrapper[4830]: I0227 16:18:04.006435 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" event={"ID":"00d6b7ce-4757-4275-8345-60c1b546ce25","Type":"ContainerDied","Data":"a6e439bde057753a649382c8178958e1e7d593adbfc771d6e3b530cc84fe06fb"} Feb 27 16:18:04 crc kubenswrapper[4830]: I0227 16:18:04.006513 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" event={"ID":"00d6b7ce-4757-4275-8345-60c1b546ce25","Type":"ContainerStarted","Data":"4111740fc2dfad5826ea06b4b6f06e8a362844590f5bbcb26cd71fafa0b5a6e3"} Feb 27 16:18:04 crc kubenswrapper[4830]: I0227 16:18:04.006557 4830 scope.go:117] "RemoveContainer" containerID="ad7b3479bfc7bc824e438e72666ce37c850e7de1824a4243534d5a7cc2b790bd" Feb 27 
16:18:04 crc kubenswrapper[4830]: I0227 16:18:04.023042 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29536818-l5tbc" podStartSLOduration=1.36956334 podStartE2EDuration="4.023024511s" podCreationTimestamp="2026-02-27 16:18:00 +0000 UTC" firstStartedPulling="2026-02-27 16:18:00.82661828 +0000 UTC m=+676.915890773" lastFinishedPulling="2026-02-27 16:18:03.480079441 +0000 UTC m=+679.569351944" observedRunningTime="2026-02-27 16:18:04.022816856 +0000 UTC m=+680.112089329" watchObservedRunningTime="2026-02-27 16:18:04.023024511 +0000 UTC m=+680.112296974" Feb 27 16:18:05 crc kubenswrapper[4830]: I0227 16:18:05.016542 4830 generic.go:334] "Generic (PLEG): container finished" podID="dd17a9f9-0fb3-4c98-bbb0-36e8d23f71a0" containerID="e2d6e44f8d67831444414ecc436155070fa81b8ab9b4f4dbc3aa08611cd8b99e" exitCode=0 Feb 27 16:18:05 crc kubenswrapper[4830]: I0227 16:18:05.016643 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536818-l5tbc" event={"ID":"dd17a9f9-0fb3-4c98-bbb0-36e8d23f71a0","Type":"ContainerDied","Data":"e2d6e44f8d67831444414ecc436155070fa81b8ab9b4f4dbc3aa08611cd8b99e"} Feb 27 16:18:06 crc kubenswrapper[4830]: I0227 16:18:06.346589 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536818-l5tbc" Feb 27 16:18:06 crc kubenswrapper[4830]: I0227 16:18:06.474460 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d757f\" (UniqueName: \"kubernetes.io/projected/dd17a9f9-0fb3-4c98-bbb0-36e8d23f71a0-kube-api-access-d757f\") pod \"dd17a9f9-0fb3-4c98-bbb0-36e8d23f71a0\" (UID: \"dd17a9f9-0fb3-4c98-bbb0-36e8d23f71a0\") " Feb 27 16:18:06 crc kubenswrapper[4830]: I0227 16:18:06.485850 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd17a9f9-0fb3-4c98-bbb0-36e8d23f71a0-kube-api-access-d757f" (OuterVolumeSpecName: "kube-api-access-d757f") pod "dd17a9f9-0fb3-4c98-bbb0-36e8d23f71a0" (UID: "dd17a9f9-0fb3-4c98-bbb0-36e8d23f71a0"). InnerVolumeSpecName "kube-api-access-d757f". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:18:06 crc kubenswrapper[4830]: I0227 16:18:06.576927 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d757f\" (UniqueName: \"kubernetes.io/projected/dd17a9f9-0fb3-4c98-bbb0-36e8d23f71a0-kube-api-access-d757f\") on node \"crc\" DevicePath \"\"" Feb 27 16:18:07 crc kubenswrapper[4830]: I0227 16:18:07.037985 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536818-l5tbc" event={"ID":"dd17a9f9-0fb3-4c98-bbb0-36e8d23f71a0","Type":"ContainerDied","Data":"eb6567c7d683a01b471e0d90a90b30777d1fea34697d5906c4561162d1189df4"} Feb 27 16:18:07 crc kubenswrapper[4830]: I0227 16:18:07.038036 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eb6567c7d683a01b471e0d90a90b30777d1fea34697d5906c4561162d1189df4" Feb 27 16:18:07 crc kubenswrapper[4830]: I0227 16:18:07.038065 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536818-l5tbc" Feb 27 16:18:07 crc kubenswrapper[4830]: I0227 16:18:07.098091 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536812-7wkbt"] Feb 27 16:18:07 crc kubenswrapper[4830]: I0227 16:18:07.106670 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536812-7wkbt"] Feb 27 16:18:08 crc kubenswrapper[4830]: I0227 16:18:08.774205 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dce3358b-25c4-4fe9-a3fa-0a0be053e8f0" path="/var/lib/kubelet/pods/dce3358b-25c4-4fe9-a3fa-0a0be053e8f0/volumes" Feb 27 16:18:26 crc kubenswrapper[4830]: I0227 16:18:26.033882 4830 scope.go:117] "RemoveContainer" containerID="7669a6f647f383b53f489bdf9bfd485dae7bcaf4da2d4c3f77794eda9777dccf" Feb 27 16:20:00 crc kubenswrapper[4830]: I0227 16:20:00.149045 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536820-c4dp5"] Feb 27 16:20:00 crc kubenswrapper[4830]: E0227 16:20:00.149856 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd17a9f9-0fb3-4c98-bbb0-36e8d23f71a0" containerName="oc" Feb 27 16:20:00 crc kubenswrapper[4830]: I0227 16:20:00.149875 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd17a9f9-0fb3-4c98-bbb0-36e8d23f71a0" containerName="oc" Feb 27 16:20:00 crc kubenswrapper[4830]: I0227 16:20:00.150063 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd17a9f9-0fb3-4c98-bbb0-36e8d23f71a0" containerName="oc" Feb 27 16:20:00 crc kubenswrapper[4830]: I0227 16:20:00.150607 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536820-c4dp5" Feb 27 16:20:00 crc kubenswrapper[4830]: I0227 16:20:00.154093 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 16:20:00 crc kubenswrapper[4830]: I0227 16:20:00.154815 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lmbtm" Feb 27 16:20:00 crc kubenswrapper[4830]: I0227 16:20:00.154916 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 16:20:00 crc kubenswrapper[4830]: I0227 16:20:00.159033 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536820-c4dp5"] Feb 27 16:20:00 crc kubenswrapper[4830]: I0227 16:20:00.293830 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fmqq\" (UniqueName: \"kubernetes.io/projected/a82f7818-da64-486f-a7e7-66af2352917b-kube-api-access-4fmqq\") pod \"auto-csr-approver-29536820-c4dp5\" (UID: \"a82f7818-da64-486f-a7e7-66af2352917b\") " pod="openshift-infra/auto-csr-approver-29536820-c4dp5" Feb 27 16:20:00 crc kubenswrapper[4830]: I0227 16:20:00.395461 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4fmqq\" (UniqueName: \"kubernetes.io/projected/a82f7818-da64-486f-a7e7-66af2352917b-kube-api-access-4fmqq\") pod \"auto-csr-approver-29536820-c4dp5\" (UID: \"a82f7818-da64-486f-a7e7-66af2352917b\") " pod="openshift-infra/auto-csr-approver-29536820-c4dp5" Feb 27 16:20:00 crc kubenswrapper[4830]: I0227 16:20:00.434135 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4fmqq\" (UniqueName: \"kubernetes.io/projected/a82f7818-da64-486f-a7e7-66af2352917b-kube-api-access-4fmqq\") pod \"auto-csr-approver-29536820-c4dp5\" (UID: \"a82f7818-da64-486f-a7e7-66af2352917b\") " 
pod="openshift-infra/auto-csr-approver-29536820-c4dp5" Feb 27 16:20:00 crc kubenswrapper[4830]: I0227 16:20:00.509566 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536820-c4dp5" Feb 27 16:20:00 crc kubenswrapper[4830]: I0227 16:20:00.909266 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536820-c4dp5"] Feb 27 16:20:01 crc kubenswrapper[4830]: I0227 16:20:01.917283 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536820-c4dp5" event={"ID":"a82f7818-da64-486f-a7e7-66af2352917b","Type":"ContainerStarted","Data":"e26cdbda7fc9fb31f79df4cd36ce4747a3093bec910395f43eb8ebdadd6a6abb"} Feb 27 16:20:02 crc kubenswrapper[4830]: I0227 16:20:02.927166 4830 generic.go:334] "Generic (PLEG): container finished" podID="a82f7818-da64-486f-a7e7-66af2352917b" containerID="f4b1b9938bdb9a55e6f8062ca783b7910d7fec344c1af23042a5cec75f9761ae" exitCode=0 Feb 27 16:20:02 crc kubenswrapper[4830]: I0227 16:20:02.927281 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536820-c4dp5" event={"ID":"a82f7818-da64-486f-a7e7-66af2352917b","Type":"ContainerDied","Data":"f4b1b9938bdb9a55e6f8062ca783b7910d7fec344c1af23042a5cec75f9761ae"} Feb 27 16:20:03 crc kubenswrapper[4830]: I0227 16:20:03.161051 4830 patch_prober.go:28] interesting pod/machine-config-daemon-2tv5v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 16:20:03 crc kubenswrapper[4830]: I0227 16:20:03.161136 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 16:20:04 crc kubenswrapper[4830]: I0227 16:20:04.192702 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536820-c4dp5" Feb 27 16:20:04 crc kubenswrapper[4830]: I0227 16:20:04.352893 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4fmqq\" (UniqueName: \"kubernetes.io/projected/a82f7818-da64-486f-a7e7-66af2352917b-kube-api-access-4fmqq\") pod \"a82f7818-da64-486f-a7e7-66af2352917b\" (UID: \"a82f7818-da64-486f-a7e7-66af2352917b\") " Feb 27 16:20:04 crc kubenswrapper[4830]: I0227 16:20:04.363197 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a82f7818-da64-486f-a7e7-66af2352917b-kube-api-access-4fmqq" (OuterVolumeSpecName: "kube-api-access-4fmqq") pod "a82f7818-da64-486f-a7e7-66af2352917b" (UID: "a82f7818-da64-486f-a7e7-66af2352917b"). InnerVolumeSpecName "kube-api-access-4fmqq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:20:04 crc kubenswrapper[4830]: I0227 16:20:04.454641 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4fmqq\" (UniqueName: \"kubernetes.io/projected/a82f7818-da64-486f-a7e7-66af2352917b-kube-api-access-4fmqq\") on node \"crc\" DevicePath \"\"" Feb 27 16:20:04 crc kubenswrapper[4830]: I0227 16:20:04.949051 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536820-c4dp5" event={"ID":"a82f7818-da64-486f-a7e7-66af2352917b","Type":"ContainerDied","Data":"e26cdbda7fc9fb31f79df4cd36ce4747a3093bec910395f43eb8ebdadd6a6abb"} Feb 27 16:20:04 crc kubenswrapper[4830]: I0227 16:20:04.949092 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e26cdbda7fc9fb31f79df4cd36ce4747a3093bec910395f43eb8ebdadd6a6abb" Feb 27 16:20:04 crc kubenswrapper[4830]: I0227 16:20:04.949175 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536820-c4dp5" Feb 27 16:20:05 crc kubenswrapper[4830]: I0227 16:20:05.269619 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536814-mtslw"] Feb 27 16:20:05 crc kubenswrapper[4830]: I0227 16:20:05.277865 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536814-mtslw"] Feb 27 16:20:06 crc kubenswrapper[4830]: I0227 16:20:06.774388 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7cc8e4cc-918f-47f8-8baf-b531cbeedc76" path="/var/lib/kubelet/pods/7cc8e4cc-918f-47f8-8baf-b531cbeedc76/volumes" Feb 27 16:20:26 crc kubenswrapper[4830]: I0227 16:20:26.086311 4830 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 27 16:20:26 crc kubenswrapper[4830]: I0227 16:20:26.130518 4830 scope.go:117] "RemoveContainer" 
containerID="1f0a4854add6d99771670874866bcf97f5e49cc2c063cc2b8cf4261525405a9f" Feb 27 16:20:33 crc kubenswrapper[4830]: I0227 16:20:33.160995 4830 patch_prober.go:28] interesting pod/machine-config-daemon-2tv5v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 16:20:33 crc kubenswrapper[4830]: I0227 16:20:33.161748 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 16:21:03 crc kubenswrapper[4830]: I0227 16:21:03.159940 4830 patch_prober.go:28] interesting pod/machine-config-daemon-2tv5v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 16:21:03 crc kubenswrapper[4830]: I0227 16:21:03.160752 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 16:21:03 crc kubenswrapper[4830]: I0227 16:21:03.160817 4830 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" Feb 27 16:21:03 crc kubenswrapper[4830]: I0227 16:21:03.161648 4830 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"4111740fc2dfad5826ea06b4b6f06e8a362844590f5bbcb26cd71fafa0b5a6e3"} pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 27 16:21:03 crc kubenswrapper[4830]: I0227 16:21:03.161746 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" containerName="machine-config-daemon" containerID="cri-o://4111740fc2dfad5826ea06b4b6f06e8a362844590f5bbcb26cd71fafa0b5a6e3" gracePeriod=600 Feb 27 16:21:03 crc kubenswrapper[4830]: I0227 16:21:03.360662 4830 generic.go:334] "Generic (PLEG): container finished" podID="00d6b7ce-4757-4275-8345-60c1b546ce25" containerID="4111740fc2dfad5826ea06b4b6f06e8a362844590f5bbcb26cd71fafa0b5a6e3" exitCode=0 Feb 27 16:21:03 crc kubenswrapper[4830]: I0227 16:21:03.360790 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" event={"ID":"00d6b7ce-4757-4275-8345-60c1b546ce25","Type":"ContainerDied","Data":"4111740fc2dfad5826ea06b4b6f06e8a362844590f5bbcb26cd71fafa0b5a6e3"} Feb 27 16:21:03 crc kubenswrapper[4830]: I0227 16:21:03.361302 4830 scope.go:117] "RemoveContainer" containerID="a6e439bde057753a649382c8178958e1e7d593adbfc771d6e3b530cc84fe06fb" Feb 27 16:21:04 crc kubenswrapper[4830]: I0227 16:21:04.371113 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" event={"ID":"00d6b7ce-4757-4275-8345-60c1b546ce25","Type":"ContainerStarted","Data":"e43810c75db22ebd0d19e92c6c2850742cda834a0ba155fedd3f4498a6dd6d20"} Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.263920 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-bf9lh"] Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.265109 
4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bf9lh" podUID="2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904" containerName="ovn-controller" containerID="cri-o://41cda3a7a261313b65434f3e8b885a9628d71f9c9a9c55f3e559226341c8aa41" gracePeriod=30 Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.265192 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bf9lh" podUID="2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904" containerName="nbdb" containerID="cri-o://a05759873cd3a2fb58756795e02d8247af4d8204388bf669ddc7cb1a8fbd7baa" gracePeriod=30 Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.265227 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bf9lh" podUID="2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://273a7927f1c00c51d8c6157c6050f697430275ea8b1cac13ebe621d6556b4fef" gracePeriod=30 Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.265268 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bf9lh" podUID="2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904" containerName="ovn-acl-logging" containerID="cri-o://05614397aaaa68f352f4263160c3463ce65513156517137bf09a3b5deb505c05" gracePeriod=30 Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.265324 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bf9lh" podUID="2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904" containerName="sbdb" containerID="cri-o://4c9e139c0599bb81726ae3f42b0d9e6ee394e3d6641eb356cf5c63fe9260f4c0" gracePeriod=30 Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.265362 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bf9lh" podUID="2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904" 
containerName="northd" containerID="cri-o://dc95e89196ccb52580164d57b311b57b2bcc7e666c3801f53378c59446c0e73f" gracePeriod=30 Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.265397 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bf9lh" podUID="2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904" containerName="kube-rbac-proxy-node" containerID="cri-o://f3371981f3cdb5468ef6e9038108b3f9d884cec2f2db2f2754a3b5bf908d0372" gracePeriod=30 Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.323187 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bf9lh" podUID="2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904" containerName="ovnkube-controller" containerID="cri-o://32af4b45910f57f6a3fa4635f90216703a4ed17597dbac7320a131b2e5119ac5" gracePeriod=30 Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.627307 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bf9lh_2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904/ovnkube-controller/3.log" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.630579 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bf9lh_2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904/ovn-acl-logging/0.log" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.631257 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bf9lh_2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904/ovn-controller/0.log" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.632134 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-bf9lh" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.655123 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bf9lh_2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904/ovnkube-controller/3.log" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.659673 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bf9lh_2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904/ovn-acl-logging/0.log" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.660427 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bf9lh_2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904/ovn-controller/0.log" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.661305 4830 generic.go:334] "Generic (PLEG): container finished" podID="2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904" containerID="32af4b45910f57f6a3fa4635f90216703a4ed17597dbac7320a131b2e5119ac5" exitCode=0 Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.661345 4830 generic.go:334] "Generic (PLEG): container finished" podID="2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904" containerID="4c9e139c0599bb81726ae3f42b0d9e6ee394e3d6641eb356cf5c63fe9260f4c0" exitCode=0 Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.661382 4830 generic.go:334] "Generic (PLEG): container finished" podID="2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904" containerID="a05759873cd3a2fb58756795e02d8247af4d8204388bf669ddc7cb1a8fbd7baa" exitCode=0 Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.661396 4830 generic.go:334] "Generic (PLEG): container finished" podID="2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904" containerID="dc95e89196ccb52580164d57b311b57b2bcc7e666c3801f53378c59446c0e73f" exitCode=0 Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.661387 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bf9lh" 
event={"ID":"2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904","Type":"ContainerDied","Data":"32af4b45910f57f6a3fa4635f90216703a4ed17597dbac7320a131b2e5119ac5"} Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.661468 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bf9lh" event={"ID":"2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904","Type":"ContainerDied","Data":"4c9e139c0599bb81726ae3f42b0d9e6ee394e3d6641eb356cf5c63fe9260f4c0"} Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.661493 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bf9lh" event={"ID":"2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904","Type":"ContainerDied","Data":"a05759873cd3a2fb58756795e02d8247af4d8204388bf669ddc7cb1a8fbd7baa"} Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.661513 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bf9lh" event={"ID":"2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904","Type":"ContainerDied","Data":"dc95e89196ccb52580164d57b311b57b2bcc7e666c3801f53378c59446c0e73f"} Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.661534 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bf9lh" event={"ID":"2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904","Type":"ContainerDied","Data":"273a7927f1c00c51d8c6157c6050f697430275ea8b1cac13ebe621d6556b4fef"} Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.661535 4830 scope.go:117] "RemoveContainer" containerID="32af4b45910f57f6a3fa4635f90216703a4ed17597dbac7320a131b2e5119ac5" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.661411 4830 generic.go:334] "Generic (PLEG): container finished" podID="2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904" containerID="273a7927f1c00c51d8c6157c6050f697430275ea8b1cac13ebe621d6556b4fef" exitCode=0 Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.661701 4830 generic.go:334] "Generic (PLEG): container finished" 
podID="2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904" containerID="f3371981f3cdb5468ef6e9038108b3f9d884cec2f2db2f2754a3b5bf908d0372" exitCode=0 Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.661724 4830 generic.go:334] "Generic (PLEG): container finished" podID="2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904" containerID="05614397aaaa68f352f4263160c3463ce65513156517137bf09a3b5deb505c05" exitCode=143 Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.661743 4830 generic.go:334] "Generic (PLEG): container finished" podID="2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904" containerID="41cda3a7a261313b65434f3e8b885a9628d71f9c9a9c55f3e559226341c8aa41" exitCode=143 Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.661793 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bf9lh" event={"ID":"2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904","Type":"ContainerDied","Data":"f3371981f3cdb5468ef6e9038108b3f9d884cec2f2db2f2754a3b5bf908d0372"} Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.661814 4830 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e570bdefb1555e447bc4d5c56d6ea5e2639dbdc50b7a78dd40e3026e0a5ca140"} Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.661830 4830 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4c9e139c0599bb81726ae3f42b0d9e6ee394e3d6641eb356cf5c63fe9260f4c0"} Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.661841 4830 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a05759873cd3a2fb58756795e02d8247af4d8204388bf669ddc7cb1a8fbd7baa"} Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.661852 4830 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"dc95e89196ccb52580164d57b311b57b2bcc7e666c3801f53378c59446c0e73f"} Feb 27 16:21:46 crc 
kubenswrapper[4830]: I0227 16:21:46.661862 4830 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"273a7927f1c00c51d8c6157c6050f697430275ea8b1cac13ebe621d6556b4fef"} Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.661872 4830 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f3371981f3cdb5468ef6e9038108b3f9d884cec2f2db2f2754a3b5bf908d0372"} Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.661882 4830 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"05614397aaaa68f352f4263160c3463ce65513156517137bf09a3b5deb505c05"} Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.661892 4830 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"41cda3a7a261313b65434f3e8b885a9628d71f9c9a9c55f3e559226341c8aa41"} Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.661903 4830 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"45e519054da5e98ba4be7deaac7911617f787b44d308bda8d4ee6fb7573336a2"} Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.661917 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bf9lh" event={"ID":"2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904","Type":"ContainerDied","Data":"05614397aaaa68f352f4263160c3463ce65513156517137bf09a3b5deb505c05"} Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.661933 4830 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"32af4b45910f57f6a3fa4635f90216703a4ed17597dbac7320a131b2e5119ac5"} Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.661977 4830 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"e570bdefb1555e447bc4d5c56d6ea5e2639dbdc50b7a78dd40e3026e0a5ca140"} Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.661988 4830 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4c9e139c0599bb81726ae3f42b0d9e6ee394e3d6641eb356cf5c63fe9260f4c0"} Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.662002 4830 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a05759873cd3a2fb58756795e02d8247af4d8204388bf669ddc7cb1a8fbd7baa"} Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.662012 4830 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"dc95e89196ccb52580164d57b311b57b2bcc7e666c3801f53378c59446c0e73f"} Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.662023 4830 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"273a7927f1c00c51d8c6157c6050f697430275ea8b1cac13ebe621d6556b4fef"} Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.662033 4830 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f3371981f3cdb5468ef6e9038108b3f9d884cec2f2db2f2754a3b5bf908d0372"} Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.662045 4830 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"05614397aaaa68f352f4263160c3463ce65513156517137bf09a3b5deb505c05"} Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.662055 4830 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"41cda3a7a261313b65434f3e8b885a9628d71f9c9a9c55f3e559226341c8aa41"} Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.662083 4830 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"45e519054da5e98ba4be7deaac7911617f787b44d308bda8d4ee6fb7573336a2"} Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.662098 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bf9lh" event={"ID":"2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904","Type":"ContainerDied","Data":"41cda3a7a261313b65434f3e8b885a9628d71f9c9a9c55f3e559226341c8aa41"} Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.662115 4830 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"32af4b45910f57f6a3fa4635f90216703a4ed17597dbac7320a131b2e5119ac5"} Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.662128 4830 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e570bdefb1555e447bc4d5c56d6ea5e2639dbdc50b7a78dd40e3026e0a5ca140"} Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.662138 4830 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4c9e139c0599bb81726ae3f42b0d9e6ee394e3d6641eb356cf5c63fe9260f4c0"} Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.662149 4830 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a05759873cd3a2fb58756795e02d8247af4d8204388bf669ddc7cb1a8fbd7baa"} Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.662160 4830 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"dc95e89196ccb52580164d57b311b57b2bcc7e666c3801f53378c59446c0e73f"} Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.662170 4830 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"273a7927f1c00c51d8c6157c6050f697430275ea8b1cac13ebe621d6556b4fef"} Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.662180 4830 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f3371981f3cdb5468ef6e9038108b3f9d884cec2f2db2f2754a3b5bf908d0372"} Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.662190 4830 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"05614397aaaa68f352f4263160c3463ce65513156517137bf09a3b5deb505c05"} Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.662201 4830 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"41cda3a7a261313b65434f3e8b885a9628d71f9c9a9c55f3e559226341c8aa41"} Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.662213 4830 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"45e519054da5e98ba4be7deaac7911617f787b44d308bda8d4ee6fb7573336a2"} Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.662216 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-bf9lh" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.662230 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bf9lh" event={"ID":"2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904","Type":"ContainerDied","Data":"6be9cfa3d02e0e72c62c85546d0d78fbfbe835257b1e639aa5a10fea773570ff"} Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.662369 4830 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"32af4b45910f57f6a3fa4635f90216703a4ed17597dbac7320a131b2e5119ac5"} Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.662384 4830 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e570bdefb1555e447bc4d5c56d6ea5e2639dbdc50b7a78dd40e3026e0a5ca140"} Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.662396 4830 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4c9e139c0599bb81726ae3f42b0d9e6ee394e3d6641eb356cf5c63fe9260f4c0"} Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.662406 4830 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a05759873cd3a2fb58756795e02d8247af4d8204388bf669ddc7cb1a8fbd7baa"} Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.662419 4830 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"dc95e89196ccb52580164d57b311b57b2bcc7e666c3801f53378c59446c0e73f"} Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.662430 4830 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"273a7927f1c00c51d8c6157c6050f697430275ea8b1cac13ebe621d6556b4fef"} Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.662440 4830 pod_container_deletor.go:114] "Failed to 
issue the request to remove container" containerID={"Type":"cri-o","ID":"f3371981f3cdb5468ef6e9038108b3f9d884cec2f2db2f2754a3b5bf908d0372"} Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.662451 4830 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"05614397aaaa68f352f4263160c3463ce65513156517137bf09a3b5deb505c05"} Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.662463 4830 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"41cda3a7a261313b65434f3e8b885a9628d71f9c9a9c55f3e559226341c8aa41"} Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.662473 4830 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"45e519054da5e98ba4be7deaac7911617f787b44d308bda8d4ee6fb7573336a2"} Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.665476 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fsrq9_bb72b0f7-1d22-4d13-9653-b1607aa2235d/kube-multus/2.log" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.666063 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fsrq9_bb72b0f7-1d22-4d13-9653-b1607aa2235d/kube-multus/1.log" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.666206 4830 generic.go:334] "Generic (PLEG): container finished" podID="bb72b0f7-1d22-4d13-9653-b1607aa2235d" containerID="ae5ebcddc959e70697cd3baeda6440556cbec5ca5056d85333946284a2e0f292" exitCode=2 Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.666337 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-fsrq9" event={"ID":"bb72b0f7-1d22-4d13-9653-b1607aa2235d","Type":"ContainerDied","Data":"ae5ebcddc959e70697cd3baeda6440556cbec5ca5056d85333946284a2e0f292"} Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.666409 4830 pod_container_deletor.go:114] "Failed to issue the request 
to remove container" containerID={"Type":"cri-o","ID":"787ecf758c99d969efde354b66bb37b5f9a8c2d79cccd8bab200423af61d4109"} Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.667268 4830 scope.go:117] "RemoveContainer" containerID="ae5ebcddc959e70697cd3baeda6440556cbec5ca5056d85333946284a2e0f292" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.714026 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-p6hxx"] Feb 27 16:21:46 crc kubenswrapper[4830]: E0227 16:21:46.714358 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a82f7818-da64-486f-a7e7-66af2352917b" containerName="oc" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.714391 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="a82f7818-da64-486f-a7e7-66af2352917b" containerName="oc" Feb 27 16:21:46 crc kubenswrapper[4830]: E0227 16:21:46.714405 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904" containerName="northd" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.714416 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904" containerName="northd" Feb 27 16:21:46 crc kubenswrapper[4830]: E0227 16:21:46.714434 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904" containerName="ovnkube-controller" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.714447 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904" containerName="ovnkube-controller" Feb 27 16:21:46 crc kubenswrapper[4830]: E0227 16:21:46.714463 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904" containerName="ovnkube-controller" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.714473 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904" 
containerName="ovnkube-controller" Feb 27 16:21:46 crc kubenswrapper[4830]: E0227 16:21:46.714487 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904" containerName="ovn-acl-logging" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.714497 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904" containerName="ovn-acl-logging" Feb 27 16:21:46 crc kubenswrapper[4830]: E0227 16:21:46.714510 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904" containerName="kube-rbac-proxy-ovn-metrics" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.714520 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904" containerName="kube-rbac-proxy-ovn-metrics" Feb 27 16:21:46 crc kubenswrapper[4830]: E0227 16:21:46.714537 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904" containerName="ovnkube-controller" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.714549 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904" containerName="ovnkube-controller" Feb 27 16:21:46 crc kubenswrapper[4830]: E0227 16:21:46.714562 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904" containerName="nbdb" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.714571 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904" containerName="nbdb" Feb 27 16:21:46 crc kubenswrapper[4830]: E0227 16:21:46.714587 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904" containerName="sbdb" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.714597 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904" 
containerName="sbdb" Feb 27 16:21:46 crc kubenswrapper[4830]: E0227 16:21:46.714614 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904" containerName="kubecfg-setup" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.714624 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904" containerName="kubecfg-setup" Feb 27 16:21:46 crc kubenswrapper[4830]: E0227 16:21:46.714641 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904" containerName="kube-rbac-proxy-node" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.714651 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904" containerName="kube-rbac-proxy-node" Feb 27 16:21:46 crc kubenswrapper[4830]: E0227 16:21:46.714667 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904" containerName="ovn-controller" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.714677 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904" containerName="ovn-controller" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.714819 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904" containerName="ovnkube-controller" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.714834 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904" containerName="sbdb" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.714848 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904" containerName="ovnkube-controller" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.714859 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904" 
containerName="kube-rbac-proxy-ovn-metrics" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.714875 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="a82f7818-da64-486f-a7e7-66af2352917b" containerName="oc" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.714886 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904" containerName="ovnkube-controller" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.714895 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904" containerName="ovn-acl-logging" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.714910 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904" containerName="nbdb" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.714923 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904" containerName="northd" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.714937 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904" containerName="ovnkube-controller" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.714984 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904" containerName="kube-rbac-proxy-node" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.714996 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904" containerName="ovn-controller" Feb 27 16:21:46 crc kubenswrapper[4830]: E0227 16:21:46.715146 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904" containerName="ovnkube-controller" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.715161 4830 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904" containerName="ovnkube-controller" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.715321 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904" containerName="ovnkube-controller" Feb 27 16:21:46 crc kubenswrapper[4830]: E0227 16:21:46.715480 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904" containerName="ovnkube-controller" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.715494 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904" containerName="ovnkube-controller" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.718790 4830 scope.go:117] "RemoveContainer" containerID="e570bdefb1555e447bc4d5c56d6ea5e2639dbdc50b7a78dd40e3026e0a5ca140" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.721369 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-p6hxx" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.754408 4830 scope.go:117] "RemoveContainer" containerID="4c9e139c0599bb81726ae3f42b0d9e6ee394e3d6641eb356cf5c63fe9260f4c0" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.773505 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904-host-slash\") pod \"2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904\" (UID: \"2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904\") " Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.773552 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904-run-openvswitch\") pod \"2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904\" (UID: \"2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904\") " Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 
16:21:46.773574 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904-run-ovn\") pod \"2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904\" (UID: \"2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904\") " Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.773608 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904-host-cni-bin\") pod \"2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904\" (UID: \"2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904\") " Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.773659 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904-host-run-netns\") pod \"2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904\" (UID: \"2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904\") " Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.773676 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904-run-systemd\") pod \"2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904\" (UID: \"2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904\") " Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.773707 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904-ovn-node-metrics-cert\") pod \"2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904\" (UID: \"2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904\") " Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.773730 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904-systemd-units\") pod 
\"2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904\" (UID: \"2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904\") " Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.773761 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tf9wc\" (UniqueName: \"kubernetes.io/projected/2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904-kube-api-access-tf9wc\") pod \"2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904\" (UID: \"2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904\") " Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.773798 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904-host-var-lib-cni-networks-ovn-kubernetes\") pod \"2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904\" (UID: \"2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904\") " Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.773818 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904-log-socket\") pod \"2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904\" (UID: \"2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904\") " Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.773837 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904-host-kubelet\") pod \"2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904\" (UID: \"2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904\") " Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.773857 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904-env-overrides\") pod \"2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904\" (UID: \"2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904\") " Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.773881 4830 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904-ovnkube-script-lib\") pod \"2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904\" (UID: \"2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904\") " Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.773906 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904-var-lib-openvswitch\") pod \"2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904\" (UID: \"2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904\") " Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.773936 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904-ovnkube-config\") pod \"2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904\" (UID: \"2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904\") " Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.773979 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904-node-log\") pod \"2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904\" (UID: \"2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904\") " Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.774003 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904-host-cni-netd\") pod \"2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904\" (UID: \"2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904\") " Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.774022 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904-etc-openvswitch\") pod 
\"2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904\" (UID: \"2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904\") " Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.774047 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904-host-run-ovn-kubernetes\") pod \"2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904\" (UID: \"2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904\") " Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.773750 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904" (UID: "2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.773792 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904-host-slash" (OuterVolumeSpecName: "host-slash") pod "2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904" (UID: "2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.773825 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904" (UID: "2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904"). InnerVolumeSpecName "run-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.773865 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904" (UID: "2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.773890 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904" (UID: "2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.773915 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904" (UID: "2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.774765 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904" (UID: "2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.775011 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904" (UID: "2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.775129 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904" (UID: "2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.775157 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904" (UID: "2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.775520 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904" (UID: "2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904"). InnerVolumeSpecName "host-run-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.775553 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904" (UID: "2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.775591 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904" (UID: "2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.775627 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904-node-log" (OuterVolumeSpecName: "node-log") pod "2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904" (UID: "2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.775827 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904-log-socket" (OuterVolumeSpecName: "log-socket") pod "2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904" (UID: "2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904"). InnerVolumeSpecName "log-socket". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.775856 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904" (UID: "2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.776935 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904" (UID: "2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.785058 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904-kube-api-access-tf9wc" (OuterVolumeSpecName: "kube-api-access-tf9wc") pod "2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904" (UID: "2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904"). InnerVolumeSpecName "kube-api-access-tf9wc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.786629 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904" (UID: "2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.790671 4830 scope.go:117] "RemoveContainer" containerID="a05759873cd3a2fb58756795e02d8247af4d8204388bf669ddc7cb1a8fbd7baa" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.795513 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904" (UID: "2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.806343 4830 scope.go:117] "RemoveContainer" containerID="dc95e89196ccb52580164d57b311b57b2bcc7e666c3801f53378c59446c0e73f" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.821990 4830 scope.go:117] "RemoveContainer" containerID="273a7927f1c00c51d8c6157c6050f697430275ea8b1cac13ebe621d6556b4fef" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.837604 4830 scope.go:117] "RemoveContainer" containerID="f3371981f3cdb5468ef6e9038108b3f9d884cec2f2db2f2754a3b5bf908d0372" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.865201 4830 scope.go:117] "RemoveContainer" containerID="05614397aaaa68f352f4263160c3463ce65513156517137bf09a3b5deb505c05" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.874880 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/181f1cb7-02f1-4252-a53d-3e83ca7c290d-host-run-netns\") pod \"ovnkube-node-p6hxx\" (UID: \"181f1cb7-02f1-4252-a53d-3e83ca7c290d\") " pod="openshift-ovn-kubernetes/ovnkube-node-p6hxx" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.874933 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: 
\"kubernetes.io/host-path/181f1cb7-02f1-4252-a53d-3e83ca7c290d-systemd-units\") pod \"ovnkube-node-p6hxx\" (UID: \"181f1cb7-02f1-4252-a53d-3e83ca7c290d\") " pod="openshift-ovn-kubernetes/ovnkube-node-p6hxx" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.874977 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/181f1cb7-02f1-4252-a53d-3e83ca7c290d-etc-openvswitch\") pod \"ovnkube-node-p6hxx\" (UID: \"181f1cb7-02f1-4252-a53d-3e83ca7c290d\") " pod="openshift-ovn-kubernetes/ovnkube-node-p6hxx" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.875005 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/181f1cb7-02f1-4252-a53d-3e83ca7c290d-node-log\") pod \"ovnkube-node-p6hxx\" (UID: \"181f1cb7-02f1-4252-a53d-3e83ca7c290d\") " pod="openshift-ovn-kubernetes/ovnkube-node-p6hxx" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.875037 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/181f1cb7-02f1-4252-a53d-3e83ca7c290d-env-overrides\") pod \"ovnkube-node-p6hxx\" (UID: \"181f1cb7-02f1-4252-a53d-3e83ca7c290d\") " pod="openshift-ovn-kubernetes/ovnkube-node-p6hxx" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.875058 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/181f1cb7-02f1-4252-a53d-3e83ca7c290d-run-ovn\") pod \"ovnkube-node-p6hxx\" (UID: \"181f1cb7-02f1-4252-a53d-3e83ca7c290d\") " pod="openshift-ovn-kubernetes/ovnkube-node-p6hxx" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.875083 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/181f1cb7-02f1-4252-a53d-3e83ca7c290d-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-p6hxx\" (UID: \"181f1cb7-02f1-4252-a53d-3e83ca7c290d\") " pod="openshift-ovn-kubernetes/ovnkube-node-p6hxx" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.875105 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmdms\" (UniqueName: \"kubernetes.io/projected/181f1cb7-02f1-4252-a53d-3e83ca7c290d-kube-api-access-cmdms\") pod \"ovnkube-node-p6hxx\" (UID: \"181f1cb7-02f1-4252-a53d-3e83ca7c290d\") " pod="openshift-ovn-kubernetes/ovnkube-node-p6hxx" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.875126 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/181f1cb7-02f1-4252-a53d-3e83ca7c290d-ovnkube-script-lib\") pod \"ovnkube-node-p6hxx\" (UID: \"181f1cb7-02f1-4252-a53d-3e83ca7c290d\") " pod="openshift-ovn-kubernetes/ovnkube-node-p6hxx" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.875147 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/181f1cb7-02f1-4252-a53d-3e83ca7c290d-ovn-node-metrics-cert\") pod \"ovnkube-node-p6hxx\" (UID: \"181f1cb7-02f1-4252-a53d-3e83ca7c290d\") " pod="openshift-ovn-kubernetes/ovnkube-node-p6hxx" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.875173 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/181f1cb7-02f1-4252-a53d-3e83ca7c290d-host-kubelet\") pod \"ovnkube-node-p6hxx\" (UID: \"181f1cb7-02f1-4252-a53d-3e83ca7c290d\") " pod="openshift-ovn-kubernetes/ovnkube-node-p6hxx" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.875207 
4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/181f1cb7-02f1-4252-a53d-3e83ca7c290d-host-cni-netd\") pod \"ovnkube-node-p6hxx\" (UID: \"181f1cb7-02f1-4252-a53d-3e83ca7c290d\") " pod="openshift-ovn-kubernetes/ovnkube-node-p6hxx" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.875251 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/181f1cb7-02f1-4252-a53d-3e83ca7c290d-run-openvswitch\") pod \"ovnkube-node-p6hxx\" (UID: \"181f1cb7-02f1-4252-a53d-3e83ca7c290d\") " pod="openshift-ovn-kubernetes/ovnkube-node-p6hxx" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.875271 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/181f1cb7-02f1-4252-a53d-3e83ca7c290d-var-lib-openvswitch\") pod \"ovnkube-node-p6hxx\" (UID: \"181f1cb7-02f1-4252-a53d-3e83ca7c290d\") " pod="openshift-ovn-kubernetes/ovnkube-node-p6hxx" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.875297 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/181f1cb7-02f1-4252-a53d-3e83ca7c290d-host-slash\") pod \"ovnkube-node-p6hxx\" (UID: \"181f1cb7-02f1-4252-a53d-3e83ca7c290d\") " pod="openshift-ovn-kubernetes/ovnkube-node-p6hxx" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.875324 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/181f1cb7-02f1-4252-a53d-3e83ca7c290d-host-run-ovn-kubernetes\") pod \"ovnkube-node-p6hxx\" (UID: \"181f1cb7-02f1-4252-a53d-3e83ca7c290d\") " pod="openshift-ovn-kubernetes/ovnkube-node-p6hxx" Feb 27 16:21:46 crc 
kubenswrapper[4830]: I0227 16:21:46.875348 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/181f1cb7-02f1-4252-a53d-3e83ca7c290d-log-socket\") pod \"ovnkube-node-p6hxx\" (UID: \"181f1cb7-02f1-4252-a53d-3e83ca7c290d\") " pod="openshift-ovn-kubernetes/ovnkube-node-p6hxx" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.875377 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/181f1cb7-02f1-4252-a53d-3e83ca7c290d-run-systemd\") pod \"ovnkube-node-p6hxx\" (UID: \"181f1cb7-02f1-4252-a53d-3e83ca7c290d\") " pod="openshift-ovn-kubernetes/ovnkube-node-p6hxx" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.875400 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/181f1cb7-02f1-4252-a53d-3e83ca7c290d-host-cni-bin\") pod \"ovnkube-node-p6hxx\" (UID: \"181f1cb7-02f1-4252-a53d-3e83ca7c290d\") " pod="openshift-ovn-kubernetes/ovnkube-node-p6hxx" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.875419 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/181f1cb7-02f1-4252-a53d-3e83ca7c290d-ovnkube-config\") pod \"ovnkube-node-p6hxx\" (UID: \"181f1cb7-02f1-4252-a53d-3e83ca7c290d\") " pod="openshift-ovn-kubernetes/ovnkube-node-p6hxx" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.875474 4830 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904-host-cni-bin\") on node \"crc\" DevicePath \"\"" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.875489 4830 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: 
\"kubernetes.io/host-path/2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904-host-run-netns\") on node \"crc\" DevicePath \"\"" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.875502 4830 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904-run-systemd\") on node \"crc\" DevicePath \"\"" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.875514 4830 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.875527 4830 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904-systemd-units\") on node \"crc\" DevicePath \"\"" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.875539 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tf9wc\" (UniqueName: \"kubernetes.io/projected/2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904-kube-api-access-tf9wc\") on node \"crc\" DevicePath \"\"" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.875551 4830 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904-log-socket\") on node \"crc\" DevicePath \"\"" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.875564 4830 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.875579 4830 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904-host-kubelet\") on node \"crc\" DevicePath \"\"" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.875593 4830 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.875606 4830 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.875618 4830 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.875630 4830 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.875643 4830 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904-node-log\") on node \"crc\" DevicePath \"\"" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.875656 4830 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904-host-cni-netd\") on node \"crc\" DevicePath \"\"" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.875667 4830 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 27 
16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.875681 4830 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.875693 4830 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904-host-slash\") on node \"crc\" DevicePath \"\"" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.875705 4830 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904-run-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.875717 4830 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.879529 4830 scope.go:117] "RemoveContainer" containerID="41cda3a7a261313b65434f3e8b885a9628d71f9c9a9c55f3e559226341c8aa41" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.896431 4830 scope.go:117] "RemoveContainer" containerID="45e519054da5e98ba4be7deaac7911617f787b44d308bda8d4ee6fb7573336a2" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.912706 4830 scope.go:117] "RemoveContainer" containerID="32af4b45910f57f6a3fa4635f90216703a4ed17597dbac7320a131b2e5119ac5" Feb 27 16:21:46 crc kubenswrapper[4830]: E0227 16:21:46.913192 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"32af4b45910f57f6a3fa4635f90216703a4ed17597dbac7320a131b2e5119ac5\": container with ID starting with 32af4b45910f57f6a3fa4635f90216703a4ed17597dbac7320a131b2e5119ac5 not found: ID does not exist" 
containerID="32af4b45910f57f6a3fa4635f90216703a4ed17597dbac7320a131b2e5119ac5" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.913254 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32af4b45910f57f6a3fa4635f90216703a4ed17597dbac7320a131b2e5119ac5"} err="failed to get container status \"32af4b45910f57f6a3fa4635f90216703a4ed17597dbac7320a131b2e5119ac5\": rpc error: code = NotFound desc = could not find container \"32af4b45910f57f6a3fa4635f90216703a4ed17597dbac7320a131b2e5119ac5\": container with ID starting with 32af4b45910f57f6a3fa4635f90216703a4ed17597dbac7320a131b2e5119ac5 not found: ID does not exist" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.913287 4830 scope.go:117] "RemoveContainer" containerID="e570bdefb1555e447bc4d5c56d6ea5e2639dbdc50b7a78dd40e3026e0a5ca140" Feb 27 16:21:46 crc kubenswrapper[4830]: E0227 16:21:46.913714 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e570bdefb1555e447bc4d5c56d6ea5e2639dbdc50b7a78dd40e3026e0a5ca140\": container with ID starting with e570bdefb1555e447bc4d5c56d6ea5e2639dbdc50b7a78dd40e3026e0a5ca140 not found: ID does not exist" containerID="e570bdefb1555e447bc4d5c56d6ea5e2639dbdc50b7a78dd40e3026e0a5ca140" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.913756 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e570bdefb1555e447bc4d5c56d6ea5e2639dbdc50b7a78dd40e3026e0a5ca140"} err="failed to get container status \"e570bdefb1555e447bc4d5c56d6ea5e2639dbdc50b7a78dd40e3026e0a5ca140\": rpc error: code = NotFound desc = could not find container \"e570bdefb1555e447bc4d5c56d6ea5e2639dbdc50b7a78dd40e3026e0a5ca140\": container with ID starting with e570bdefb1555e447bc4d5c56d6ea5e2639dbdc50b7a78dd40e3026e0a5ca140 not found: ID does not exist" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.913785 4830 scope.go:117] 
"RemoveContainer" containerID="4c9e139c0599bb81726ae3f42b0d9e6ee394e3d6641eb356cf5c63fe9260f4c0" Feb 27 16:21:46 crc kubenswrapper[4830]: E0227 16:21:46.914123 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c9e139c0599bb81726ae3f42b0d9e6ee394e3d6641eb356cf5c63fe9260f4c0\": container with ID starting with 4c9e139c0599bb81726ae3f42b0d9e6ee394e3d6641eb356cf5c63fe9260f4c0 not found: ID does not exist" containerID="4c9e139c0599bb81726ae3f42b0d9e6ee394e3d6641eb356cf5c63fe9260f4c0" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.914155 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c9e139c0599bb81726ae3f42b0d9e6ee394e3d6641eb356cf5c63fe9260f4c0"} err="failed to get container status \"4c9e139c0599bb81726ae3f42b0d9e6ee394e3d6641eb356cf5c63fe9260f4c0\": rpc error: code = NotFound desc = could not find container \"4c9e139c0599bb81726ae3f42b0d9e6ee394e3d6641eb356cf5c63fe9260f4c0\": container with ID starting with 4c9e139c0599bb81726ae3f42b0d9e6ee394e3d6641eb356cf5c63fe9260f4c0 not found: ID does not exist" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.914174 4830 scope.go:117] "RemoveContainer" containerID="a05759873cd3a2fb58756795e02d8247af4d8204388bf669ddc7cb1a8fbd7baa" Feb 27 16:21:46 crc kubenswrapper[4830]: E0227 16:21:46.914560 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a05759873cd3a2fb58756795e02d8247af4d8204388bf669ddc7cb1a8fbd7baa\": container with ID starting with a05759873cd3a2fb58756795e02d8247af4d8204388bf669ddc7cb1a8fbd7baa not found: ID does not exist" containerID="a05759873cd3a2fb58756795e02d8247af4d8204388bf669ddc7cb1a8fbd7baa" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.914602 4830 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"a05759873cd3a2fb58756795e02d8247af4d8204388bf669ddc7cb1a8fbd7baa"} err="failed to get container status \"a05759873cd3a2fb58756795e02d8247af4d8204388bf669ddc7cb1a8fbd7baa\": rpc error: code = NotFound desc = could not find container \"a05759873cd3a2fb58756795e02d8247af4d8204388bf669ddc7cb1a8fbd7baa\": container with ID starting with a05759873cd3a2fb58756795e02d8247af4d8204388bf669ddc7cb1a8fbd7baa not found: ID does not exist" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.914626 4830 scope.go:117] "RemoveContainer" containerID="dc95e89196ccb52580164d57b311b57b2bcc7e666c3801f53378c59446c0e73f" Feb 27 16:21:46 crc kubenswrapper[4830]: E0227 16:21:46.915083 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc95e89196ccb52580164d57b311b57b2bcc7e666c3801f53378c59446c0e73f\": container with ID starting with dc95e89196ccb52580164d57b311b57b2bcc7e666c3801f53378c59446c0e73f not found: ID does not exist" containerID="dc95e89196ccb52580164d57b311b57b2bcc7e666c3801f53378c59446c0e73f" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.915111 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc95e89196ccb52580164d57b311b57b2bcc7e666c3801f53378c59446c0e73f"} err="failed to get container status \"dc95e89196ccb52580164d57b311b57b2bcc7e666c3801f53378c59446c0e73f\": rpc error: code = NotFound desc = could not find container \"dc95e89196ccb52580164d57b311b57b2bcc7e666c3801f53378c59446c0e73f\": container with ID starting with dc95e89196ccb52580164d57b311b57b2bcc7e666c3801f53378c59446c0e73f not found: ID does not exist" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.915126 4830 scope.go:117] "RemoveContainer" containerID="273a7927f1c00c51d8c6157c6050f697430275ea8b1cac13ebe621d6556b4fef" Feb 27 16:21:46 crc kubenswrapper[4830]: E0227 16:21:46.915447 4830 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"273a7927f1c00c51d8c6157c6050f697430275ea8b1cac13ebe621d6556b4fef\": container with ID starting with 273a7927f1c00c51d8c6157c6050f697430275ea8b1cac13ebe621d6556b4fef not found: ID does not exist" containerID="273a7927f1c00c51d8c6157c6050f697430275ea8b1cac13ebe621d6556b4fef" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.915478 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"273a7927f1c00c51d8c6157c6050f697430275ea8b1cac13ebe621d6556b4fef"} err="failed to get container status \"273a7927f1c00c51d8c6157c6050f697430275ea8b1cac13ebe621d6556b4fef\": rpc error: code = NotFound desc = could not find container \"273a7927f1c00c51d8c6157c6050f697430275ea8b1cac13ebe621d6556b4fef\": container with ID starting with 273a7927f1c00c51d8c6157c6050f697430275ea8b1cac13ebe621d6556b4fef not found: ID does not exist" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.915498 4830 scope.go:117] "RemoveContainer" containerID="f3371981f3cdb5468ef6e9038108b3f9d884cec2f2db2f2754a3b5bf908d0372" Feb 27 16:21:46 crc kubenswrapper[4830]: E0227 16:21:46.915777 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f3371981f3cdb5468ef6e9038108b3f9d884cec2f2db2f2754a3b5bf908d0372\": container with ID starting with f3371981f3cdb5468ef6e9038108b3f9d884cec2f2db2f2754a3b5bf908d0372 not found: ID does not exist" containerID="f3371981f3cdb5468ef6e9038108b3f9d884cec2f2db2f2754a3b5bf908d0372" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.915810 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3371981f3cdb5468ef6e9038108b3f9d884cec2f2db2f2754a3b5bf908d0372"} err="failed to get container status \"f3371981f3cdb5468ef6e9038108b3f9d884cec2f2db2f2754a3b5bf908d0372\": rpc error: code = NotFound desc = could not find container 
\"f3371981f3cdb5468ef6e9038108b3f9d884cec2f2db2f2754a3b5bf908d0372\": container with ID starting with f3371981f3cdb5468ef6e9038108b3f9d884cec2f2db2f2754a3b5bf908d0372 not found: ID does not exist" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.915828 4830 scope.go:117] "RemoveContainer" containerID="05614397aaaa68f352f4263160c3463ce65513156517137bf09a3b5deb505c05" Feb 27 16:21:46 crc kubenswrapper[4830]: E0227 16:21:46.916100 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"05614397aaaa68f352f4263160c3463ce65513156517137bf09a3b5deb505c05\": container with ID starting with 05614397aaaa68f352f4263160c3463ce65513156517137bf09a3b5deb505c05 not found: ID does not exist" containerID="05614397aaaa68f352f4263160c3463ce65513156517137bf09a3b5deb505c05" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.916143 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"05614397aaaa68f352f4263160c3463ce65513156517137bf09a3b5deb505c05"} err="failed to get container status \"05614397aaaa68f352f4263160c3463ce65513156517137bf09a3b5deb505c05\": rpc error: code = NotFound desc = could not find container \"05614397aaaa68f352f4263160c3463ce65513156517137bf09a3b5deb505c05\": container with ID starting with 05614397aaaa68f352f4263160c3463ce65513156517137bf09a3b5deb505c05 not found: ID does not exist" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.916167 4830 scope.go:117] "RemoveContainer" containerID="41cda3a7a261313b65434f3e8b885a9628d71f9c9a9c55f3e559226341c8aa41" Feb 27 16:21:46 crc kubenswrapper[4830]: E0227 16:21:46.916427 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"41cda3a7a261313b65434f3e8b885a9628d71f9c9a9c55f3e559226341c8aa41\": container with ID starting with 41cda3a7a261313b65434f3e8b885a9628d71f9c9a9c55f3e559226341c8aa41 not found: ID does not exist" 
containerID="41cda3a7a261313b65434f3e8b885a9628d71f9c9a9c55f3e559226341c8aa41" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.916456 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41cda3a7a261313b65434f3e8b885a9628d71f9c9a9c55f3e559226341c8aa41"} err="failed to get container status \"41cda3a7a261313b65434f3e8b885a9628d71f9c9a9c55f3e559226341c8aa41\": rpc error: code = NotFound desc = could not find container \"41cda3a7a261313b65434f3e8b885a9628d71f9c9a9c55f3e559226341c8aa41\": container with ID starting with 41cda3a7a261313b65434f3e8b885a9628d71f9c9a9c55f3e559226341c8aa41 not found: ID does not exist" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.916473 4830 scope.go:117] "RemoveContainer" containerID="45e519054da5e98ba4be7deaac7911617f787b44d308bda8d4ee6fb7573336a2" Feb 27 16:21:46 crc kubenswrapper[4830]: E0227 16:21:46.916714 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"45e519054da5e98ba4be7deaac7911617f787b44d308bda8d4ee6fb7573336a2\": container with ID starting with 45e519054da5e98ba4be7deaac7911617f787b44d308bda8d4ee6fb7573336a2 not found: ID does not exist" containerID="45e519054da5e98ba4be7deaac7911617f787b44d308bda8d4ee6fb7573336a2" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.916746 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45e519054da5e98ba4be7deaac7911617f787b44d308bda8d4ee6fb7573336a2"} err="failed to get container status \"45e519054da5e98ba4be7deaac7911617f787b44d308bda8d4ee6fb7573336a2\": rpc error: code = NotFound desc = could not find container \"45e519054da5e98ba4be7deaac7911617f787b44d308bda8d4ee6fb7573336a2\": container with ID starting with 45e519054da5e98ba4be7deaac7911617f787b44d308bda8d4ee6fb7573336a2 not found: ID does not exist" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.916767 4830 scope.go:117] 
"RemoveContainer" containerID="32af4b45910f57f6a3fa4635f90216703a4ed17597dbac7320a131b2e5119ac5" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.917071 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32af4b45910f57f6a3fa4635f90216703a4ed17597dbac7320a131b2e5119ac5"} err="failed to get container status \"32af4b45910f57f6a3fa4635f90216703a4ed17597dbac7320a131b2e5119ac5\": rpc error: code = NotFound desc = could not find container \"32af4b45910f57f6a3fa4635f90216703a4ed17597dbac7320a131b2e5119ac5\": container with ID starting with 32af4b45910f57f6a3fa4635f90216703a4ed17597dbac7320a131b2e5119ac5 not found: ID does not exist" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.917096 4830 scope.go:117] "RemoveContainer" containerID="e570bdefb1555e447bc4d5c56d6ea5e2639dbdc50b7a78dd40e3026e0a5ca140" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.917392 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e570bdefb1555e447bc4d5c56d6ea5e2639dbdc50b7a78dd40e3026e0a5ca140"} err="failed to get container status \"e570bdefb1555e447bc4d5c56d6ea5e2639dbdc50b7a78dd40e3026e0a5ca140\": rpc error: code = NotFound desc = could not find container \"e570bdefb1555e447bc4d5c56d6ea5e2639dbdc50b7a78dd40e3026e0a5ca140\": container with ID starting with e570bdefb1555e447bc4d5c56d6ea5e2639dbdc50b7a78dd40e3026e0a5ca140 not found: ID does not exist" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.917422 4830 scope.go:117] "RemoveContainer" containerID="4c9e139c0599bb81726ae3f42b0d9e6ee394e3d6641eb356cf5c63fe9260f4c0" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.917684 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c9e139c0599bb81726ae3f42b0d9e6ee394e3d6641eb356cf5c63fe9260f4c0"} err="failed to get container status \"4c9e139c0599bb81726ae3f42b0d9e6ee394e3d6641eb356cf5c63fe9260f4c0\": rpc error: code = 
NotFound desc = could not find container \"4c9e139c0599bb81726ae3f42b0d9e6ee394e3d6641eb356cf5c63fe9260f4c0\": container with ID starting with 4c9e139c0599bb81726ae3f42b0d9e6ee394e3d6641eb356cf5c63fe9260f4c0 not found: ID does not exist" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.917709 4830 scope.go:117] "RemoveContainer" containerID="a05759873cd3a2fb58756795e02d8247af4d8204388bf669ddc7cb1a8fbd7baa" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.918011 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a05759873cd3a2fb58756795e02d8247af4d8204388bf669ddc7cb1a8fbd7baa"} err="failed to get container status \"a05759873cd3a2fb58756795e02d8247af4d8204388bf669ddc7cb1a8fbd7baa\": rpc error: code = NotFound desc = could not find container \"a05759873cd3a2fb58756795e02d8247af4d8204388bf669ddc7cb1a8fbd7baa\": container with ID starting with a05759873cd3a2fb58756795e02d8247af4d8204388bf669ddc7cb1a8fbd7baa not found: ID does not exist" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.918047 4830 scope.go:117] "RemoveContainer" containerID="dc95e89196ccb52580164d57b311b57b2bcc7e666c3801f53378c59446c0e73f" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.918305 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc95e89196ccb52580164d57b311b57b2bcc7e666c3801f53378c59446c0e73f"} err="failed to get container status \"dc95e89196ccb52580164d57b311b57b2bcc7e666c3801f53378c59446c0e73f\": rpc error: code = NotFound desc = could not find container \"dc95e89196ccb52580164d57b311b57b2bcc7e666c3801f53378c59446c0e73f\": container with ID starting with dc95e89196ccb52580164d57b311b57b2bcc7e666c3801f53378c59446c0e73f not found: ID does not exist" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.918327 4830 scope.go:117] "RemoveContainer" containerID="273a7927f1c00c51d8c6157c6050f697430275ea8b1cac13ebe621d6556b4fef" Feb 27 16:21:46 crc 
kubenswrapper[4830]: I0227 16:21:46.918583 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"273a7927f1c00c51d8c6157c6050f697430275ea8b1cac13ebe621d6556b4fef"} err="failed to get container status \"273a7927f1c00c51d8c6157c6050f697430275ea8b1cac13ebe621d6556b4fef\": rpc error: code = NotFound desc = could not find container \"273a7927f1c00c51d8c6157c6050f697430275ea8b1cac13ebe621d6556b4fef\": container with ID starting with 273a7927f1c00c51d8c6157c6050f697430275ea8b1cac13ebe621d6556b4fef not found: ID does not exist" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.918610 4830 scope.go:117] "RemoveContainer" containerID="f3371981f3cdb5468ef6e9038108b3f9d884cec2f2db2f2754a3b5bf908d0372" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.918866 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3371981f3cdb5468ef6e9038108b3f9d884cec2f2db2f2754a3b5bf908d0372"} err="failed to get container status \"f3371981f3cdb5468ef6e9038108b3f9d884cec2f2db2f2754a3b5bf908d0372\": rpc error: code = NotFound desc = could not find container \"f3371981f3cdb5468ef6e9038108b3f9d884cec2f2db2f2754a3b5bf908d0372\": container with ID starting with f3371981f3cdb5468ef6e9038108b3f9d884cec2f2db2f2754a3b5bf908d0372 not found: ID does not exist" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.918895 4830 scope.go:117] "RemoveContainer" containerID="05614397aaaa68f352f4263160c3463ce65513156517137bf09a3b5deb505c05" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.919239 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"05614397aaaa68f352f4263160c3463ce65513156517137bf09a3b5deb505c05"} err="failed to get container status \"05614397aaaa68f352f4263160c3463ce65513156517137bf09a3b5deb505c05\": rpc error: code = NotFound desc = could not find container \"05614397aaaa68f352f4263160c3463ce65513156517137bf09a3b5deb505c05\": container 
with ID starting with 05614397aaaa68f352f4263160c3463ce65513156517137bf09a3b5deb505c05 not found: ID does not exist" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.919265 4830 scope.go:117] "RemoveContainer" containerID="41cda3a7a261313b65434f3e8b885a9628d71f9c9a9c55f3e559226341c8aa41" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.919515 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41cda3a7a261313b65434f3e8b885a9628d71f9c9a9c55f3e559226341c8aa41"} err="failed to get container status \"41cda3a7a261313b65434f3e8b885a9628d71f9c9a9c55f3e559226341c8aa41\": rpc error: code = NotFound desc = could not find container \"41cda3a7a261313b65434f3e8b885a9628d71f9c9a9c55f3e559226341c8aa41\": container with ID starting with 41cda3a7a261313b65434f3e8b885a9628d71f9c9a9c55f3e559226341c8aa41 not found: ID does not exist" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.919539 4830 scope.go:117] "RemoveContainer" containerID="45e519054da5e98ba4be7deaac7911617f787b44d308bda8d4ee6fb7573336a2" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.919793 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45e519054da5e98ba4be7deaac7911617f787b44d308bda8d4ee6fb7573336a2"} err="failed to get container status \"45e519054da5e98ba4be7deaac7911617f787b44d308bda8d4ee6fb7573336a2\": rpc error: code = NotFound desc = could not find container \"45e519054da5e98ba4be7deaac7911617f787b44d308bda8d4ee6fb7573336a2\": container with ID starting with 45e519054da5e98ba4be7deaac7911617f787b44d308bda8d4ee6fb7573336a2 not found: ID does not exist" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.919820 4830 scope.go:117] "RemoveContainer" containerID="32af4b45910f57f6a3fa4635f90216703a4ed17597dbac7320a131b2e5119ac5" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.920109 4830 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"32af4b45910f57f6a3fa4635f90216703a4ed17597dbac7320a131b2e5119ac5"} err="failed to get container status \"32af4b45910f57f6a3fa4635f90216703a4ed17597dbac7320a131b2e5119ac5\": rpc error: code = NotFound desc = could not find container \"32af4b45910f57f6a3fa4635f90216703a4ed17597dbac7320a131b2e5119ac5\": container with ID starting with 32af4b45910f57f6a3fa4635f90216703a4ed17597dbac7320a131b2e5119ac5 not found: ID does not exist" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.920134 4830 scope.go:117] "RemoveContainer" containerID="e570bdefb1555e447bc4d5c56d6ea5e2639dbdc50b7a78dd40e3026e0a5ca140" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.920386 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e570bdefb1555e447bc4d5c56d6ea5e2639dbdc50b7a78dd40e3026e0a5ca140"} err="failed to get container status \"e570bdefb1555e447bc4d5c56d6ea5e2639dbdc50b7a78dd40e3026e0a5ca140\": rpc error: code = NotFound desc = could not find container \"e570bdefb1555e447bc4d5c56d6ea5e2639dbdc50b7a78dd40e3026e0a5ca140\": container with ID starting with e570bdefb1555e447bc4d5c56d6ea5e2639dbdc50b7a78dd40e3026e0a5ca140 not found: ID does not exist" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.920411 4830 scope.go:117] "RemoveContainer" containerID="4c9e139c0599bb81726ae3f42b0d9e6ee394e3d6641eb356cf5c63fe9260f4c0" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.920665 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c9e139c0599bb81726ae3f42b0d9e6ee394e3d6641eb356cf5c63fe9260f4c0"} err="failed to get container status \"4c9e139c0599bb81726ae3f42b0d9e6ee394e3d6641eb356cf5c63fe9260f4c0\": rpc error: code = NotFound desc = could not find container \"4c9e139c0599bb81726ae3f42b0d9e6ee394e3d6641eb356cf5c63fe9260f4c0\": container with ID starting with 4c9e139c0599bb81726ae3f42b0d9e6ee394e3d6641eb356cf5c63fe9260f4c0 not found: ID does not 
exist" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.920692 4830 scope.go:117] "RemoveContainer" containerID="a05759873cd3a2fb58756795e02d8247af4d8204388bf669ddc7cb1a8fbd7baa" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.921042 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a05759873cd3a2fb58756795e02d8247af4d8204388bf669ddc7cb1a8fbd7baa"} err="failed to get container status \"a05759873cd3a2fb58756795e02d8247af4d8204388bf669ddc7cb1a8fbd7baa\": rpc error: code = NotFound desc = could not find container \"a05759873cd3a2fb58756795e02d8247af4d8204388bf669ddc7cb1a8fbd7baa\": container with ID starting with a05759873cd3a2fb58756795e02d8247af4d8204388bf669ddc7cb1a8fbd7baa not found: ID does not exist" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.921075 4830 scope.go:117] "RemoveContainer" containerID="dc95e89196ccb52580164d57b311b57b2bcc7e666c3801f53378c59446c0e73f" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.921344 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc95e89196ccb52580164d57b311b57b2bcc7e666c3801f53378c59446c0e73f"} err="failed to get container status \"dc95e89196ccb52580164d57b311b57b2bcc7e666c3801f53378c59446c0e73f\": rpc error: code = NotFound desc = could not find container \"dc95e89196ccb52580164d57b311b57b2bcc7e666c3801f53378c59446c0e73f\": container with ID starting with dc95e89196ccb52580164d57b311b57b2bcc7e666c3801f53378c59446c0e73f not found: ID does not exist" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.921372 4830 scope.go:117] "RemoveContainer" containerID="273a7927f1c00c51d8c6157c6050f697430275ea8b1cac13ebe621d6556b4fef" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.921620 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"273a7927f1c00c51d8c6157c6050f697430275ea8b1cac13ebe621d6556b4fef"} err="failed to get container status 
\"273a7927f1c00c51d8c6157c6050f697430275ea8b1cac13ebe621d6556b4fef\": rpc error: code = NotFound desc = could not find container \"273a7927f1c00c51d8c6157c6050f697430275ea8b1cac13ebe621d6556b4fef\": container with ID starting with 273a7927f1c00c51d8c6157c6050f697430275ea8b1cac13ebe621d6556b4fef not found: ID does not exist" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.921647 4830 scope.go:117] "RemoveContainer" containerID="f3371981f3cdb5468ef6e9038108b3f9d884cec2f2db2f2754a3b5bf908d0372" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.921843 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3371981f3cdb5468ef6e9038108b3f9d884cec2f2db2f2754a3b5bf908d0372"} err="failed to get container status \"f3371981f3cdb5468ef6e9038108b3f9d884cec2f2db2f2754a3b5bf908d0372\": rpc error: code = NotFound desc = could not find container \"f3371981f3cdb5468ef6e9038108b3f9d884cec2f2db2f2754a3b5bf908d0372\": container with ID starting with f3371981f3cdb5468ef6e9038108b3f9d884cec2f2db2f2754a3b5bf908d0372 not found: ID does not exist" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.921860 4830 scope.go:117] "RemoveContainer" containerID="05614397aaaa68f352f4263160c3463ce65513156517137bf09a3b5deb505c05" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.922064 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"05614397aaaa68f352f4263160c3463ce65513156517137bf09a3b5deb505c05"} err="failed to get container status \"05614397aaaa68f352f4263160c3463ce65513156517137bf09a3b5deb505c05\": rpc error: code = NotFound desc = could not find container \"05614397aaaa68f352f4263160c3463ce65513156517137bf09a3b5deb505c05\": container with ID starting with 05614397aaaa68f352f4263160c3463ce65513156517137bf09a3b5deb505c05 not found: ID does not exist" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.922088 4830 scope.go:117] "RemoveContainer" 
containerID="41cda3a7a261313b65434f3e8b885a9628d71f9c9a9c55f3e559226341c8aa41" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.922262 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41cda3a7a261313b65434f3e8b885a9628d71f9c9a9c55f3e559226341c8aa41"} err="failed to get container status \"41cda3a7a261313b65434f3e8b885a9628d71f9c9a9c55f3e559226341c8aa41\": rpc error: code = NotFound desc = could not find container \"41cda3a7a261313b65434f3e8b885a9628d71f9c9a9c55f3e559226341c8aa41\": container with ID starting with 41cda3a7a261313b65434f3e8b885a9628d71f9c9a9c55f3e559226341c8aa41 not found: ID does not exist" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.922284 4830 scope.go:117] "RemoveContainer" containerID="45e519054da5e98ba4be7deaac7911617f787b44d308bda8d4ee6fb7573336a2" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.922493 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45e519054da5e98ba4be7deaac7911617f787b44d308bda8d4ee6fb7573336a2"} err="failed to get container status \"45e519054da5e98ba4be7deaac7911617f787b44d308bda8d4ee6fb7573336a2\": rpc error: code = NotFound desc = could not find container \"45e519054da5e98ba4be7deaac7911617f787b44d308bda8d4ee6fb7573336a2\": container with ID starting with 45e519054da5e98ba4be7deaac7911617f787b44d308bda8d4ee6fb7573336a2 not found: ID does not exist" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.922522 4830 scope.go:117] "RemoveContainer" containerID="32af4b45910f57f6a3fa4635f90216703a4ed17597dbac7320a131b2e5119ac5" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.922778 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32af4b45910f57f6a3fa4635f90216703a4ed17597dbac7320a131b2e5119ac5"} err="failed to get container status \"32af4b45910f57f6a3fa4635f90216703a4ed17597dbac7320a131b2e5119ac5\": rpc error: code = NotFound desc = could 
not find container \"32af4b45910f57f6a3fa4635f90216703a4ed17597dbac7320a131b2e5119ac5\": container with ID starting with 32af4b45910f57f6a3fa4635f90216703a4ed17597dbac7320a131b2e5119ac5 not found: ID does not exist" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.922808 4830 scope.go:117] "RemoveContainer" containerID="e570bdefb1555e447bc4d5c56d6ea5e2639dbdc50b7a78dd40e3026e0a5ca140" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.923089 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e570bdefb1555e447bc4d5c56d6ea5e2639dbdc50b7a78dd40e3026e0a5ca140"} err="failed to get container status \"e570bdefb1555e447bc4d5c56d6ea5e2639dbdc50b7a78dd40e3026e0a5ca140\": rpc error: code = NotFound desc = could not find container \"e570bdefb1555e447bc4d5c56d6ea5e2639dbdc50b7a78dd40e3026e0a5ca140\": container with ID starting with e570bdefb1555e447bc4d5c56d6ea5e2639dbdc50b7a78dd40e3026e0a5ca140 not found: ID does not exist" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.923128 4830 scope.go:117] "RemoveContainer" containerID="4c9e139c0599bb81726ae3f42b0d9e6ee394e3d6641eb356cf5c63fe9260f4c0" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.923402 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c9e139c0599bb81726ae3f42b0d9e6ee394e3d6641eb356cf5c63fe9260f4c0"} err="failed to get container status \"4c9e139c0599bb81726ae3f42b0d9e6ee394e3d6641eb356cf5c63fe9260f4c0\": rpc error: code = NotFound desc = could not find container \"4c9e139c0599bb81726ae3f42b0d9e6ee394e3d6641eb356cf5c63fe9260f4c0\": container with ID starting with 4c9e139c0599bb81726ae3f42b0d9e6ee394e3d6641eb356cf5c63fe9260f4c0 not found: ID does not exist" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.923428 4830 scope.go:117] "RemoveContainer" containerID="a05759873cd3a2fb58756795e02d8247af4d8204388bf669ddc7cb1a8fbd7baa" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 
16:21:46.923664 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a05759873cd3a2fb58756795e02d8247af4d8204388bf669ddc7cb1a8fbd7baa"} err="failed to get container status \"a05759873cd3a2fb58756795e02d8247af4d8204388bf669ddc7cb1a8fbd7baa\": rpc error: code = NotFound desc = could not find container \"a05759873cd3a2fb58756795e02d8247af4d8204388bf669ddc7cb1a8fbd7baa\": container with ID starting with a05759873cd3a2fb58756795e02d8247af4d8204388bf669ddc7cb1a8fbd7baa not found: ID does not exist" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.923690 4830 scope.go:117] "RemoveContainer" containerID="dc95e89196ccb52580164d57b311b57b2bcc7e666c3801f53378c59446c0e73f" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.923918 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc95e89196ccb52580164d57b311b57b2bcc7e666c3801f53378c59446c0e73f"} err="failed to get container status \"dc95e89196ccb52580164d57b311b57b2bcc7e666c3801f53378c59446c0e73f\": rpc error: code = NotFound desc = could not find container \"dc95e89196ccb52580164d57b311b57b2bcc7e666c3801f53378c59446c0e73f\": container with ID starting with dc95e89196ccb52580164d57b311b57b2bcc7e666c3801f53378c59446c0e73f not found: ID does not exist" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.923962 4830 scope.go:117] "RemoveContainer" containerID="273a7927f1c00c51d8c6157c6050f697430275ea8b1cac13ebe621d6556b4fef" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.924212 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"273a7927f1c00c51d8c6157c6050f697430275ea8b1cac13ebe621d6556b4fef"} err="failed to get container status \"273a7927f1c00c51d8c6157c6050f697430275ea8b1cac13ebe621d6556b4fef\": rpc error: code = NotFound desc = could not find container \"273a7927f1c00c51d8c6157c6050f697430275ea8b1cac13ebe621d6556b4fef\": container with ID starting with 
273a7927f1c00c51d8c6157c6050f697430275ea8b1cac13ebe621d6556b4fef not found: ID does not exist" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.924237 4830 scope.go:117] "RemoveContainer" containerID="f3371981f3cdb5468ef6e9038108b3f9d884cec2f2db2f2754a3b5bf908d0372" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.924513 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3371981f3cdb5468ef6e9038108b3f9d884cec2f2db2f2754a3b5bf908d0372"} err="failed to get container status \"f3371981f3cdb5468ef6e9038108b3f9d884cec2f2db2f2754a3b5bf908d0372\": rpc error: code = NotFound desc = could not find container \"f3371981f3cdb5468ef6e9038108b3f9d884cec2f2db2f2754a3b5bf908d0372\": container with ID starting with f3371981f3cdb5468ef6e9038108b3f9d884cec2f2db2f2754a3b5bf908d0372 not found: ID does not exist" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.924542 4830 scope.go:117] "RemoveContainer" containerID="05614397aaaa68f352f4263160c3463ce65513156517137bf09a3b5deb505c05" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.924772 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"05614397aaaa68f352f4263160c3463ce65513156517137bf09a3b5deb505c05"} err="failed to get container status \"05614397aaaa68f352f4263160c3463ce65513156517137bf09a3b5deb505c05\": rpc error: code = NotFound desc = could not find container \"05614397aaaa68f352f4263160c3463ce65513156517137bf09a3b5deb505c05\": container with ID starting with 05614397aaaa68f352f4263160c3463ce65513156517137bf09a3b5deb505c05 not found: ID does not exist" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.924796 4830 scope.go:117] "RemoveContainer" containerID="41cda3a7a261313b65434f3e8b885a9628d71f9c9a9c55f3e559226341c8aa41" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.925893 4830 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"41cda3a7a261313b65434f3e8b885a9628d71f9c9a9c55f3e559226341c8aa41"} err="failed to get container status \"41cda3a7a261313b65434f3e8b885a9628d71f9c9a9c55f3e559226341c8aa41\": rpc error: code = NotFound desc = could not find container \"41cda3a7a261313b65434f3e8b885a9628d71f9c9a9c55f3e559226341c8aa41\": container with ID starting with 41cda3a7a261313b65434f3e8b885a9628d71f9c9a9c55f3e559226341c8aa41 not found: ID does not exist" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.925925 4830 scope.go:117] "RemoveContainer" containerID="45e519054da5e98ba4be7deaac7911617f787b44d308bda8d4ee6fb7573336a2" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.926206 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45e519054da5e98ba4be7deaac7911617f787b44d308bda8d4ee6fb7573336a2"} err="failed to get container status \"45e519054da5e98ba4be7deaac7911617f787b44d308bda8d4ee6fb7573336a2\": rpc error: code = NotFound desc = could not find container \"45e519054da5e98ba4be7deaac7911617f787b44d308bda8d4ee6fb7573336a2\": container with ID starting with 45e519054da5e98ba4be7deaac7911617f787b44d308bda8d4ee6fb7573336a2 not found: ID does not exist" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.926236 4830 scope.go:117] "RemoveContainer" containerID="32af4b45910f57f6a3fa4635f90216703a4ed17597dbac7320a131b2e5119ac5" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.926626 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32af4b45910f57f6a3fa4635f90216703a4ed17597dbac7320a131b2e5119ac5"} err="failed to get container status \"32af4b45910f57f6a3fa4635f90216703a4ed17597dbac7320a131b2e5119ac5\": rpc error: code = NotFound desc = could not find container \"32af4b45910f57f6a3fa4635f90216703a4ed17597dbac7320a131b2e5119ac5\": container with ID starting with 32af4b45910f57f6a3fa4635f90216703a4ed17597dbac7320a131b2e5119ac5 not found: ID does not 
exist" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.976888 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/181f1cb7-02f1-4252-a53d-3e83ca7c290d-etc-openvswitch\") pod \"ovnkube-node-p6hxx\" (UID: \"181f1cb7-02f1-4252-a53d-3e83ca7c290d\") " pod="openshift-ovn-kubernetes/ovnkube-node-p6hxx" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.977026 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/181f1cb7-02f1-4252-a53d-3e83ca7c290d-node-log\") pod \"ovnkube-node-p6hxx\" (UID: \"181f1cb7-02f1-4252-a53d-3e83ca7c290d\") " pod="openshift-ovn-kubernetes/ovnkube-node-p6hxx" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.977070 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/181f1cb7-02f1-4252-a53d-3e83ca7c290d-node-log\") pod \"ovnkube-node-p6hxx\" (UID: \"181f1cb7-02f1-4252-a53d-3e83ca7c290d\") " pod="openshift-ovn-kubernetes/ovnkube-node-p6hxx" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.977033 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/181f1cb7-02f1-4252-a53d-3e83ca7c290d-etc-openvswitch\") pod \"ovnkube-node-p6hxx\" (UID: \"181f1cb7-02f1-4252-a53d-3e83ca7c290d\") " pod="openshift-ovn-kubernetes/ovnkube-node-p6hxx" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.977112 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/181f1cb7-02f1-4252-a53d-3e83ca7c290d-env-overrides\") pod \"ovnkube-node-p6hxx\" (UID: \"181f1cb7-02f1-4252-a53d-3e83ca7c290d\") " pod="openshift-ovn-kubernetes/ovnkube-node-p6hxx" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.977169 4830 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/181f1cb7-02f1-4252-a53d-3e83ca7c290d-run-ovn\") pod \"ovnkube-node-p6hxx\" (UID: \"181f1cb7-02f1-4252-a53d-3e83ca7c290d\") " pod="openshift-ovn-kubernetes/ovnkube-node-p6hxx" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.977228 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/181f1cb7-02f1-4252-a53d-3e83ca7c290d-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-p6hxx\" (UID: \"181f1cb7-02f1-4252-a53d-3e83ca7c290d\") " pod="openshift-ovn-kubernetes/ovnkube-node-p6hxx" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.977278 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/181f1cb7-02f1-4252-a53d-3e83ca7c290d-ovnkube-script-lib\") pod \"ovnkube-node-p6hxx\" (UID: \"181f1cb7-02f1-4252-a53d-3e83ca7c290d\") " pod="openshift-ovn-kubernetes/ovnkube-node-p6hxx" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.977328 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cmdms\" (UniqueName: \"kubernetes.io/projected/181f1cb7-02f1-4252-a53d-3e83ca7c290d-kube-api-access-cmdms\") pod \"ovnkube-node-p6hxx\" (UID: \"181f1cb7-02f1-4252-a53d-3e83ca7c290d\") " pod="openshift-ovn-kubernetes/ovnkube-node-p6hxx" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.977375 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/181f1cb7-02f1-4252-a53d-3e83ca7c290d-ovn-node-metrics-cert\") pod \"ovnkube-node-p6hxx\" (UID: \"181f1cb7-02f1-4252-a53d-3e83ca7c290d\") " pod="openshift-ovn-kubernetes/ovnkube-node-p6hxx" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.977434 4830 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/181f1cb7-02f1-4252-a53d-3e83ca7c290d-host-kubelet\") pod \"ovnkube-node-p6hxx\" (UID: \"181f1cb7-02f1-4252-a53d-3e83ca7c290d\") " pod="openshift-ovn-kubernetes/ovnkube-node-p6hxx" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.977500 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/181f1cb7-02f1-4252-a53d-3e83ca7c290d-host-cni-netd\") pod \"ovnkube-node-p6hxx\" (UID: \"181f1cb7-02f1-4252-a53d-3e83ca7c290d\") " pod="openshift-ovn-kubernetes/ovnkube-node-p6hxx" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.977564 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/181f1cb7-02f1-4252-a53d-3e83ca7c290d-run-openvswitch\") pod \"ovnkube-node-p6hxx\" (UID: \"181f1cb7-02f1-4252-a53d-3e83ca7c290d\") " pod="openshift-ovn-kubernetes/ovnkube-node-p6hxx" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.977624 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/181f1cb7-02f1-4252-a53d-3e83ca7c290d-var-lib-openvswitch\") pod \"ovnkube-node-p6hxx\" (UID: \"181f1cb7-02f1-4252-a53d-3e83ca7c290d\") " pod="openshift-ovn-kubernetes/ovnkube-node-p6hxx" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.977670 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/181f1cb7-02f1-4252-a53d-3e83ca7c290d-host-slash\") pod \"ovnkube-node-p6hxx\" (UID: \"181f1cb7-02f1-4252-a53d-3e83ca7c290d\") " pod="openshift-ovn-kubernetes/ovnkube-node-p6hxx" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.977731 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" 
(UniqueName: \"kubernetes.io/host-path/181f1cb7-02f1-4252-a53d-3e83ca7c290d-host-run-ovn-kubernetes\") pod \"ovnkube-node-p6hxx\" (UID: \"181f1cb7-02f1-4252-a53d-3e83ca7c290d\") " pod="openshift-ovn-kubernetes/ovnkube-node-p6hxx" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.977786 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/181f1cb7-02f1-4252-a53d-3e83ca7c290d-log-socket\") pod \"ovnkube-node-p6hxx\" (UID: \"181f1cb7-02f1-4252-a53d-3e83ca7c290d\") " pod="openshift-ovn-kubernetes/ovnkube-node-p6hxx" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.977821 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/181f1cb7-02f1-4252-a53d-3e83ca7c290d-env-overrides\") pod \"ovnkube-node-p6hxx\" (UID: \"181f1cb7-02f1-4252-a53d-3e83ca7c290d\") " pod="openshift-ovn-kubernetes/ovnkube-node-p6hxx" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.977848 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/181f1cb7-02f1-4252-a53d-3e83ca7c290d-run-systemd\") pod \"ovnkube-node-p6hxx\" (UID: \"181f1cb7-02f1-4252-a53d-3e83ca7c290d\") " pod="openshift-ovn-kubernetes/ovnkube-node-p6hxx" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.978282 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/181f1cb7-02f1-4252-a53d-3e83ca7c290d-ovnkube-script-lib\") pod \"ovnkube-node-p6hxx\" (UID: \"181f1cb7-02f1-4252-a53d-3e83ca7c290d\") " pod="openshift-ovn-kubernetes/ovnkube-node-p6hxx" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.979470 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/181f1cb7-02f1-4252-a53d-3e83ca7c290d-run-openvswitch\") pod 
\"ovnkube-node-p6hxx\" (UID: \"181f1cb7-02f1-4252-a53d-3e83ca7c290d\") " pod="openshift-ovn-kubernetes/ovnkube-node-p6hxx" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.979526 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/181f1cb7-02f1-4252-a53d-3e83ca7c290d-host-cni-bin\") pod \"ovnkube-node-p6hxx\" (UID: \"181f1cb7-02f1-4252-a53d-3e83ca7c290d\") " pod="openshift-ovn-kubernetes/ovnkube-node-p6hxx" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.979546 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/181f1cb7-02f1-4252-a53d-3e83ca7c290d-ovnkube-config\") pod \"ovnkube-node-p6hxx\" (UID: \"181f1cb7-02f1-4252-a53d-3e83ca7c290d\") " pod="openshift-ovn-kubernetes/ovnkube-node-p6hxx" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.979554 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/181f1cb7-02f1-4252-a53d-3e83ca7c290d-host-run-ovn-kubernetes\") pod \"ovnkube-node-p6hxx\" (UID: \"181f1cb7-02f1-4252-a53d-3e83ca7c290d\") " pod="openshift-ovn-kubernetes/ovnkube-node-p6hxx" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.979585 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/181f1cb7-02f1-4252-a53d-3e83ca7c290d-run-ovn\") pod \"ovnkube-node-p6hxx\" (UID: \"181f1cb7-02f1-4252-a53d-3e83ca7c290d\") " pod="openshift-ovn-kubernetes/ovnkube-node-p6hxx" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.979601 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/181f1cb7-02f1-4252-a53d-3e83ca7c290d-host-cni-netd\") pod \"ovnkube-node-p6hxx\" (UID: \"181f1cb7-02f1-4252-a53d-3e83ca7c290d\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-p6hxx" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.979610 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/181f1cb7-02f1-4252-a53d-3e83ca7c290d-var-lib-openvswitch\") pod \"ovnkube-node-p6hxx\" (UID: \"181f1cb7-02f1-4252-a53d-3e83ca7c290d\") " pod="openshift-ovn-kubernetes/ovnkube-node-p6hxx" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.979629 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/181f1cb7-02f1-4252-a53d-3e83ca7c290d-host-cni-bin\") pod \"ovnkube-node-p6hxx\" (UID: \"181f1cb7-02f1-4252-a53d-3e83ca7c290d\") " pod="openshift-ovn-kubernetes/ovnkube-node-p6hxx" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.979632 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/181f1cb7-02f1-4252-a53d-3e83ca7c290d-host-slash\") pod \"ovnkube-node-p6hxx\" (UID: \"181f1cb7-02f1-4252-a53d-3e83ca7c290d\") " pod="openshift-ovn-kubernetes/ovnkube-node-p6hxx" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.979648 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/181f1cb7-02f1-4252-a53d-3e83ca7c290d-log-socket\") pod \"ovnkube-node-p6hxx\" (UID: \"181f1cb7-02f1-4252-a53d-3e83ca7c290d\") " pod="openshift-ovn-kubernetes/ovnkube-node-p6hxx" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.979669 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/181f1cb7-02f1-4252-a53d-3e83ca7c290d-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-p6hxx\" (UID: \"181f1cb7-02f1-4252-a53d-3e83ca7c290d\") " pod="openshift-ovn-kubernetes/ovnkube-node-p6hxx" Feb 27 16:21:46 crc 
kubenswrapper[4830]: I0227 16:21:46.979697 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/181f1cb7-02f1-4252-a53d-3e83ca7c290d-host-kubelet\") pod \"ovnkube-node-p6hxx\" (UID: \"181f1cb7-02f1-4252-a53d-3e83ca7c290d\") " pod="openshift-ovn-kubernetes/ovnkube-node-p6hxx" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.980031 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/181f1cb7-02f1-4252-a53d-3e83ca7c290d-run-systemd\") pod \"ovnkube-node-p6hxx\" (UID: \"181f1cb7-02f1-4252-a53d-3e83ca7c290d\") " pod="openshift-ovn-kubernetes/ovnkube-node-p6hxx" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.980100 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/181f1cb7-02f1-4252-a53d-3e83ca7c290d-ovnkube-config\") pod \"ovnkube-node-p6hxx\" (UID: \"181f1cb7-02f1-4252-a53d-3e83ca7c290d\") " pod="openshift-ovn-kubernetes/ovnkube-node-p6hxx" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.980193 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/181f1cb7-02f1-4252-a53d-3e83ca7c290d-host-run-netns\") pod \"ovnkube-node-p6hxx\" (UID: \"181f1cb7-02f1-4252-a53d-3e83ca7c290d\") " pod="openshift-ovn-kubernetes/ovnkube-node-p6hxx" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.983505 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/181f1cb7-02f1-4252-a53d-3e83ca7c290d-systemd-units\") pod \"ovnkube-node-p6hxx\" (UID: \"181f1cb7-02f1-4252-a53d-3e83ca7c290d\") " pod="openshift-ovn-kubernetes/ovnkube-node-p6hxx" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.983582 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"systemd-units\" (UniqueName: \"kubernetes.io/host-path/181f1cb7-02f1-4252-a53d-3e83ca7c290d-systemd-units\") pod \"ovnkube-node-p6hxx\" (UID: \"181f1cb7-02f1-4252-a53d-3e83ca7c290d\") " pod="openshift-ovn-kubernetes/ovnkube-node-p6hxx" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.980276 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/181f1cb7-02f1-4252-a53d-3e83ca7c290d-host-run-netns\") pod \"ovnkube-node-p6hxx\" (UID: \"181f1cb7-02f1-4252-a53d-3e83ca7c290d\") " pod="openshift-ovn-kubernetes/ovnkube-node-p6hxx" Feb 27 16:21:46 crc kubenswrapper[4830]: I0227 16:21:46.984825 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/181f1cb7-02f1-4252-a53d-3e83ca7c290d-ovn-node-metrics-cert\") pod \"ovnkube-node-p6hxx\" (UID: \"181f1cb7-02f1-4252-a53d-3e83ca7c290d\") " pod="openshift-ovn-kubernetes/ovnkube-node-p6hxx" Feb 27 16:21:47 crc kubenswrapper[4830]: I0227 16:21:47.014473 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cmdms\" (UniqueName: \"kubernetes.io/projected/181f1cb7-02f1-4252-a53d-3e83ca7c290d-kube-api-access-cmdms\") pod \"ovnkube-node-p6hxx\" (UID: \"181f1cb7-02f1-4252-a53d-3e83ca7c290d\") " pod="openshift-ovn-kubernetes/ovnkube-node-p6hxx" Feb 27 16:21:47 crc kubenswrapper[4830]: I0227 16:21:47.053898 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-bf9lh"] Feb 27 16:21:47 crc kubenswrapper[4830]: I0227 16:21:47.072804 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-p6hxx" Feb 27 16:21:47 crc kubenswrapper[4830]: I0227 16:21:47.080894 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-bf9lh"] Feb 27 16:21:47 crc kubenswrapper[4830]: W0227 16:21:47.093153 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod181f1cb7_02f1_4252_a53d_3e83ca7c290d.slice/crio-de9f90208d7c63f59f64a70df665d95a6c649ab250b43c5e825ea04da0193216 WatchSource:0}: Error finding container de9f90208d7c63f59f64a70df665d95a6c649ab250b43c5e825ea04da0193216: Status 404 returned error can't find the container with id de9f90208d7c63f59f64a70df665d95a6c649ab250b43c5e825ea04da0193216 Feb 27 16:21:47 crc kubenswrapper[4830]: I0227 16:21:47.676098 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fsrq9_bb72b0f7-1d22-4d13-9653-b1607aa2235d/kube-multus/2.log" Feb 27 16:21:47 crc kubenswrapper[4830]: I0227 16:21:47.677720 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fsrq9_bb72b0f7-1d22-4d13-9653-b1607aa2235d/kube-multus/1.log" Feb 27 16:21:47 crc kubenswrapper[4830]: I0227 16:21:47.677820 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-fsrq9" event={"ID":"bb72b0f7-1d22-4d13-9653-b1607aa2235d","Type":"ContainerStarted","Data":"da72ce489a11c25daaad9c2cfb0f81675fdd12a68f719702c509ce3a5fd0d8cd"} Feb 27 16:21:47 crc kubenswrapper[4830]: I0227 16:21:47.679926 4830 generic.go:334] "Generic (PLEG): container finished" podID="181f1cb7-02f1-4252-a53d-3e83ca7c290d" containerID="2a2bf9975ba9d250354d5c278248409222251d43d73518ab8903f6813c12aca9" exitCode=0 Feb 27 16:21:47 crc kubenswrapper[4830]: I0227 16:21:47.679976 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-p6hxx" 
event={"ID":"181f1cb7-02f1-4252-a53d-3e83ca7c290d","Type":"ContainerDied","Data":"2a2bf9975ba9d250354d5c278248409222251d43d73518ab8903f6813c12aca9"} Feb 27 16:21:47 crc kubenswrapper[4830]: I0227 16:21:47.680034 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-p6hxx" event={"ID":"181f1cb7-02f1-4252-a53d-3e83ca7c290d","Type":"ContainerStarted","Data":"de9f90208d7c63f59f64a70df665d95a6c649ab250b43c5e825ea04da0193216"} Feb 27 16:21:48 crc kubenswrapper[4830]: I0227 16:21:48.694984 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-p6hxx" event={"ID":"181f1cb7-02f1-4252-a53d-3e83ca7c290d","Type":"ContainerStarted","Data":"9706487c3d035bcdb8aa2b8405269a857e121d4419e7a10a550869bdd0fae81b"} Feb 27 16:21:48 crc kubenswrapper[4830]: I0227 16:21:48.695673 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-p6hxx" event={"ID":"181f1cb7-02f1-4252-a53d-3e83ca7c290d","Type":"ContainerStarted","Data":"49bed89ca7ff6cd5c7b1cf80bf1ad4ade02f670bb2b454de06c5f7574acc6a2b"} Feb 27 16:21:48 crc kubenswrapper[4830]: I0227 16:21:48.695701 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-p6hxx" event={"ID":"181f1cb7-02f1-4252-a53d-3e83ca7c290d","Type":"ContainerStarted","Data":"2c720d13a97b72841b71451264fe0fe3e90d86fe60cfde8b12716e050cc2a8c9"} Feb 27 16:21:48 crc kubenswrapper[4830]: I0227 16:21:48.695720 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-p6hxx" event={"ID":"181f1cb7-02f1-4252-a53d-3e83ca7c290d","Type":"ContainerStarted","Data":"1521e888096d456651033a146ebe486331ce9de760e87a9ae4259cf4f94f0773"} Feb 27 16:21:48 crc kubenswrapper[4830]: I0227 16:21:48.695736 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-p6hxx" 
event={"ID":"181f1cb7-02f1-4252-a53d-3e83ca7c290d","Type":"ContainerStarted","Data":"d5815426dd19c7beb2e37939a1c2171f31f23a5191dea184ab2980b8467bce62"} Feb 27 16:21:48 crc kubenswrapper[4830]: I0227 16:21:48.695755 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-p6hxx" event={"ID":"181f1cb7-02f1-4252-a53d-3e83ca7c290d","Type":"ContainerStarted","Data":"7149066e4a68cb9e700769edf882f0763365d108df047665883a2cdb421f6b30"} Feb 27 16:21:48 crc kubenswrapper[4830]: I0227 16:21:48.772479 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904" path="/var/lib/kubelet/pods/2fcf8ee6-7d12-4dd9-aa0e-8e2c1d0e6904/volumes" Feb 27 16:21:51 crc kubenswrapper[4830]: I0227 16:21:51.729278 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-p6hxx" event={"ID":"181f1cb7-02f1-4252-a53d-3e83ca7c290d","Type":"ContainerStarted","Data":"8ee6894b4b361441e8a32b297d0378dae277a91a0d9fefa8c986f196c6b1af18"} Feb 27 16:21:51 crc kubenswrapper[4830]: I0227 16:21:51.862117 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-kk7k8"] Feb 27 16:21:51 crc kubenswrapper[4830]: I0227 16:21:51.863221 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-kk7k8" Feb 27 16:21:51 crc kubenswrapper[4830]: I0227 16:21:51.867402 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt" Feb 27 16:21:51 crc kubenswrapper[4830]: I0227 16:21:51.867911 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage" Feb 27 16:21:51 crc kubenswrapper[4830]: I0227 16:21:51.868686 4830 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-8pl97" Feb 27 16:21:51 crc kubenswrapper[4830]: I0227 16:21:51.869214 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt" Feb 27 16:21:51 crc kubenswrapper[4830]: I0227 16:21:51.976388 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dm4x4\" (UniqueName: \"kubernetes.io/projected/961ae73e-ba27-404a-9805-a10277c078b1-kube-api-access-dm4x4\") pod \"crc-storage-crc-kk7k8\" (UID: \"961ae73e-ba27-404a-9805-a10277c078b1\") " pod="crc-storage/crc-storage-crc-kk7k8" Feb 27 16:21:51 crc kubenswrapper[4830]: I0227 16:21:51.976996 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/961ae73e-ba27-404a-9805-a10277c078b1-node-mnt\") pod \"crc-storage-crc-kk7k8\" (UID: \"961ae73e-ba27-404a-9805-a10277c078b1\") " pod="crc-storage/crc-storage-crc-kk7k8" Feb 27 16:21:51 crc kubenswrapper[4830]: I0227 16:21:51.977286 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/961ae73e-ba27-404a-9805-a10277c078b1-crc-storage\") pod \"crc-storage-crc-kk7k8\" (UID: \"961ae73e-ba27-404a-9805-a10277c078b1\") " pod="crc-storage/crc-storage-crc-kk7k8" Feb 27 16:21:52 crc kubenswrapper[4830]: I0227 16:21:52.079146 4830 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/961ae73e-ba27-404a-9805-a10277c078b1-crc-storage\") pod \"crc-storage-crc-kk7k8\" (UID: \"961ae73e-ba27-404a-9805-a10277c078b1\") " pod="crc-storage/crc-storage-crc-kk7k8" Feb 27 16:21:52 crc kubenswrapper[4830]: I0227 16:21:52.079320 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dm4x4\" (UniqueName: \"kubernetes.io/projected/961ae73e-ba27-404a-9805-a10277c078b1-kube-api-access-dm4x4\") pod \"crc-storage-crc-kk7k8\" (UID: \"961ae73e-ba27-404a-9805-a10277c078b1\") " pod="crc-storage/crc-storage-crc-kk7k8" Feb 27 16:21:52 crc kubenswrapper[4830]: I0227 16:21:52.079386 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/961ae73e-ba27-404a-9805-a10277c078b1-node-mnt\") pod \"crc-storage-crc-kk7k8\" (UID: \"961ae73e-ba27-404a-9805-a10277c078b1\") " pod="crc-storage/crc-storage-crc-kk7k8" Feb 27 16:21:52 crc kubenswrapper[4830]: I0227 16:21:52.079739 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/961ae73e-ba27-404a-9805-a10277c078b1-node-mnt\") pod \"crc-storage-crc-kk7k8\" (UID: \"961ae73e-ba27-404a-9805-a10277c078b1\") " pod="crc-storage/crc-storage-crc-kk7k8" Feb 27 16:21:52 crc kubenswrapper[4830]: I0227 16:21:52.080746 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/961ae73e-ba27-404a-9805-a10277c078b1-crc-storage\") pod \"crc-storage-crc-kk7k8\" (UID: \"961ae73e-ba27-404a-9805-a10277c078b1\") " pod="crc-storage/crc-storage-crc-kk7k8" Feb 27 16:21:52 crc kubenswrapper[4830]: I0227 16:21:52.151336 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dm4x4\" (UniqueName: 
\"kubernetes.io/projected/961ae73e-ba27-404a-9805-a10277c078b1-kube-api-access-dm4x4\") pod \"crc-storage-crc-kk7k8\" (UID: \"961ae73e-ba27-404a-9805-a10277c078b1\") " pod="crc-storage/crc-storage-crc-kk7k8" Feb 27 16:21:52 crc kubenswrapper[4830]: I0227 16:21:52.192771 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-kk7k8" Feb 27 16:21:52 crc kubenswrapper[4830]: E0227 16:21:52.222677 4830 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-kk7k8_crc-storage_961ae73e-ba27-404a-9805-a10277c078b1_0(37452542b2a9fd5923742429fb23f32c5872d3db40d1f97dde1b0f25a2f439a7): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 27 16:21:52 crc kubenswrapper[4830]: E0227 16:21:52.222789 4830 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-kk7k8_crc-storage_961ae73e-ba27-404a-9805-a10277c078b1_0(37452542b2a9fd5923742429fb23f32c5872d3db40d1f97dde1b0f25a2f439a7): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-kk7k8" Feb 27 16:21:52 crc kubenswrapper[4830]: E0227 16:21:52.222829 4830 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-kk7k8_crc-storage_961ae73e-ba27-404a-9805-a10277c078b1_0(37452542b2a9fd5923742429fb23f32c5872d3db40d1f97dde1b0f25a2f439a7): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="crc-storage/crc-storage-crc-kk7k8" Feb 27 16:21:52 crc kubenswrapper[4830]: E0227 16:21:52.222914 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"crc-storage-crc-kk7k8_crc-storage(961ae73e-ba27-404a-9805-a10277c078b1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"crc-storage-crc-kk7k8_crc-storage(961ae73e-ba27-404a-9805-a10277c078b1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-kk7k8_crc-storage_961ae73e-ba27-404a-9805-a10277c078b1_0(37452542b2a9fd5923742429fb23f32c5872d3db40d1f97dde1b0f25a2f439a7): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="crc-storage/crc-storage-crc-kk7k8" podUID="961ae73e-ba27-404a-9805-a10277c078b1" Feb 27 16:21:53 crc kubenswrapper[4830]: I0227 16:21:53.751557 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-p6hxx" event={"ID":"181f1cb7-02f1-4252-a53d-3e83ca7c290d","Type":"ContainerStarted","Data":"6da8d8b015889b314355798fa122572301b0a1baec6a21cf3fd7253e2ee1cb4f"} Feb 27 16:21:53 crc kubenswrapper[4830]: I0227 16:21:53.751930 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-p6hxx" Feb 27 16:21:53 crc kubenswrapper[4830]: I0227 16:21:53.752142 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-p6hxx" Feb 27 16:21:53 crc kubenswrapper[4830]: I0227 16:21:53.752213 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-p6hxx" Feb 27 16:21:53 crc kubenswrapper[4830]: I0227 16:21:53.802198 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-p6hxx" podStartSLOduration=7.8021708 podStartE2EDuration="7.8021708s" podCreationTimestamp="2026-02-27 16:21:46 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:21:53.797566535 +0000 UTC m=+909.886839028" watchObservedRunningTime="2026-02-27 16:21:53.8021708 +0000 UTC m=+909.891443293" Feb 27 16:21:53 crc kubenswrapper[4830]: I0227 16:21:53.805643 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-p6hxx" Feb 27 16:21:53 crc kubenswrapper[4830]: I0227 16:21:53.809286 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-p6hxx" Feb 27 16:21:55 crc kubenswrapper[4830]: I0227 16:21:55.155495 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-kk7k8"] Feb 27 16:21:55 crc kubenswrapper[4830]: I0227 16:21:55.155884 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-kk7k8" Feb 27 16:21:55 crc kubenswrapper[4830]: I0227 16:21:55.156391 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-kk7k8" Feb 27 16:21:55 crc kubenswrapper[4830]: E0227 16:21:55.188158 4830 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-kk7k8_crc-storage_961ae73e-ba27-404a-9805-a10277c078b1_0(ba4cfca2e66eecb354166277412030e4c18111b3fa8d5d61213ac6e964770a34): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 27 16:21:55 crc kubenswrapper[4830]: E0227 16:21:55.188249 4830 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-kk7k8_crc-storage_961ae73e-ba27-404a-9805-a10277c078b1_0(ba4cfca2e66eecb354166277412030e4c18111b3fa8d5d61213ac6e964770a34): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="crc-storage/crc-storage-crc-kk7k8" Feb 27 16:21:55 crc kubenswrapper[4830]: E0227 16:21:55.188288 4830 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-kk7k8_crc-storage_961ae73e-ba27-404a-9805-a10277c078b1_0(ba4cfca2e66eecb354166277412030e4c18111b3fa8d5d61213ac6e964770a34): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-kk7k8" Feb 27 16:21:55 crc kubenswrapper[4830]: E0227 16:21:55.188363 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"crc-storage-crc-kk7k8_crc-storage(961ae73e-ba27-404a-9805-a10277c078b1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"crc-storage-crc-kk7k8_crc-storage(961ae73e-ba27-404a-9805-a10277c078b1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-kk7k8_crc-storage_961ae73e-ba27-404a-9805-a10277c078b1_0(ba4cfca2e66eecb354166277412030e4c18111b3fa8d5d61213ac6e964770a34): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="crc-storage/crc-storage-crc-kk7k8" podUID="961ae73e-ba27-404a-9805-a10277c078b1" Feb 27 16:22:00 crc kubenswrapper[4830]: I0227 16:22:00.141438 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536822-xbmsc"] Feb 27 16:22:00 crc kubenswrapper[4830]: I0227 16:22:00.143094 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536822-xbmsc" Feb 27 16:22:00 crc kubenswrapper[4830]: I0227 16:22:00.148389 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lmbtm" Feb 27 16:22:00 crc kubenswrapper[4830]: I0227 16:22:00.148685 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 16:22:00 crc kubenswrapper[4830]: I0227 16:22:00.148979 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 16:22:00 crc kubenswrapper[4830]: I0227 16:22:00.156705 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536822-xbmsc"] Feb 27 16:22:00 crc kubenswrapper[4830]: I0227 16:22:00.288982 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktc5z\" (UniqueName: \"kubernetes.io/projected/275d93b7-6091-41c7-98d8-7a7a67d6f043-kube-api-access-ktc5z\") pod \"auto-csr-approver-29536822-xbmsc\" (UID: \"275d93b7-6091-41c7-98d8-7a7a67d6f043\") " pod="openshift-infra/auto-csr-approver-29536822-xbmsc" Feb 27 16:22:00 crc kubenswrapper[4830]: I0227 16:22:00.391004 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ktc5z\" (UniqueName: \"kubernetes.io/projected/275d93b7-6091-41c7-98d8-7a7a67d6f043-kube-api-access-ktc5z\") pod \"auto-csr-approver-29536822-xbmsc\" (UID: \"275d93b7-6091-41c7-98d8-7a7a67d6f043\") " pod="openshift-infra/auto-csr-approver-29536822-xbmsc" Feb 27 16:22:00 crc kubenswrapper[4830]: I0227 16:22:00.425319 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ktc5z\" (UniqueName: \"kubernetes.io/projected/275d93b7-6091-41c7-98d8-7a7a67d6f043-kube-api-access-ktc5z\") pod \"auto-csr-approver-29536822-xbmsc\" (UID: \"275d93b7-6091-41c7-98d8-7a7a67d6f043\") " 
pod="openshift-infra/auto-csr-approver-29536822-xbmsc" Feb 27 16:22:00 crc kubenswrapper[4830]: I0227 16:22:00.492421 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536822-xbmsc" Feb 27 16:22:00 crc kubenswrapper[4830]: I0227 16:22:00.758565 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536822-xbmsc"] Feb 27 16:22:00 crc kubenswrapper[4830]: I0227 16:22:00.761261 4830 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 27 16:22:00 crc kubenswrapper[4830]: I0227 16:22:00.799851 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536822-xbmsc" event={"ID":"275d93b7-6091-41c7-98d8-7a7a67d6f043","Type":"ContainerStarted","Data":"c219f4a7e541d3328e03acf89e6dbc3ac793adeeb775fc354f90db421c1bc700"} Feb 27 16:22:02 crc kubenswrapper[4830]: I0227 16:22:02.815790 4830 generic.go:334] "Generic (PLEG): container finished" podID="275d93b7-6091-41c7-98d8-7a7a67d6f043" containerID="886dac081110561ac958d0214372fee20a21a53a90469a1c53e73815d1340221" exitCode=0 Feb 27 16:22:02 crc kubenswrapper[4830]: I0227 16:22:02.815857 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536822-xbmsc" event={"ID":"275d93b7-6091-41c7-98d8-7a7a67d6f043","Type":"ContainerDied","Data":"886dac081110561ac958d0214372fee20a21a53a90469a1c53e73815d1340221"} Feb 27 16:22:04 crc kubenswrapper[4830]: I0227 16:22:04.164226 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536822-xbmsc" Feb 27 16:22:04 crc kubenswrapper[4830]: I0227 16:22:04.345326 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ktc5z\" (UniqueName: \"kubernetes.io/projected/275d93b7-6091-41c7-98d8-7a7a67d6f043-kube-api-access-ktc5z\") pod \"275d93b7-6091-41c7-98d8-7a7a67d6f043\" (UID: \"275d93b7-6091-41c7-98d8-7a7a67d6f043\") " Feb 27 16:22:04 crc kubenswrapper[4830]: I0227 16:22:04.353582 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/275d93b7-6091-41c7-98d8-7a7a67d6f043-kube-api-access-ktc5z" (OuterVolumeSpecName: "kube-api-access-ktc5z") pod "275d93b7-6091-41c7-98d8-7a7a67d6f043" (UID: "275d93b7-6091-41c7-98d8-7a7a67d6f043"). InnerVolumeSpecName "kube-api-access-ktc5z". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:22:04 crc kubenswrapper[4830]: I0227 16:22:04.447554 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ktc5z\" (UniqueName: \"kubernetes.io/projected/275d93b7-6091-41c7-98d8-7a7a67d6f043-kube-api-access-ktc5z\") on node \"crc\" DevicePath \"\"" Feb 27 16:22:04 crc kubenswrapper[4830]: I0227 16:22:04.831520 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536822-xbmsc" event={"ID":"275d93b7-6091-41c7-98d8-7a7a67d6f043","Type":"ContainerDied","Data":"c219f4a7e541d3328e03acf89e6dbc3ac793adeeb775fc354f90db421c1bc700"} Feb 27 16:22:04 crc kubenswrapper[4830]: I0227 16:22:04.831573 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c219f4a7e541d3328e03acf89e6dbc3ac793adeeb775fc354f90db421c1bc700" Feb 27 16:22:04 crc kubenswrapper[4830]: I0227 16:22:04.831617 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536822-xbmsc" Feb 27 16:22:05 crc kubenswrapper[4830]: I0227 16:22:05.244449 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536816-xjsj9"] Feb 27 16:22:05 crc kubenswrapper[4830]: I0227 16:22:05.248580 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536816-xjsj9"] Feb 27 16:22:05 crc kubenswrapper[4830]: I0227 16:22:05.761667 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-kk7k8" Feb 27 16:22:05 crc kubenswrapper[4830]: I0227 16:22:05.762444 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-kk7k8" Feb 27 16:22:06 crc kubenswrapper[4830]: I0227 16:22:06.065123 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-kk7k8"] Feb 27 16:22:06 crc kubenswrapper[4830]: W0227 16:22:06.074830 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod961ae73e_ba27_404a_9805_a10277c078b1.slice/crio-16d10ba6fbbe4dcb5219bc3ca9f42026f98319df7ac9b2861ae57f5044ca190d WatchSource:0}: Error finding container 16d10ba6fbbe4dcb5219bc3ca9f42026f98319df7ac9b2861ae57f5044ca190d: Status 404 returned error can't find the container with id 16d10ba6fbbe4dcb5219bc3ca9f42026f98319df7ac9b2861ae57f5044ca190d Feb 27 16:22:06 crc kubenswrapper[4830]: I0227 16:22:06.772538 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24d73d21-c3de-47b8-a9cf-38fba733a4b8" path="/var/lib/kubelet/pods/24d73d21-c3de-47b8-a9cf-38fba733a4b8/volumes" Feb 27 16:22:06 crc kubenswrapper[4830]: I0227 16:22:06.845503 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-kk7k8" 
event={"ID":"961ae73e-ba27-404a-9805-a10277c078b1","Type":"ContainerStarted","Data":"16d10ba6fbbe4dcb5219bc3ca9f42026f98319df7ac9b2861ae57f5044ca190d"} Feb 27 16:22:07 crc kubenswrapper[4830]: I0227 16:22:07.857524 4830 generic.go:334] "Generic (PLEG): container finished" podID="961ae73e-ba27-404a-9805-a10277c078b1" containerID="6551ca8307c310307738252d5c343368661b9835bd2aa4841de7ad6adee6d3b5" exitCode=0 Feb 27 16:22:07 crc kubenswrapper[4830]: I0227 16:22:07.859804 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-kk7k8" event={"ID":"961ae73e-ba27-404a-9805-a10277c078b1","Type":"ContainerDied","Data":"6551ca8307c310307738252d5c343368661b9835bd2aa4841de7ad6adee6d3b5"} Feb 27 16:22:09 crc kubenswrapper[4830]: I0227 16:22:09.203054 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-kk7k8" Feb 27 16:22:09 crc kubenswrapper[4830]: I0227 16:22:09.310718 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/961ae73e-ba27-404a-9805-a10277c078b1-node-mnt\") pod \"961ae73e-ba27-404a-9805-a10277c078b1\" (UID: \"961ae73e-ba27-404a-9805-a10277c078b1\") " Feb 27 16:22:09 crc kubenswrapper[4830]: I0227 16:22:09.310811 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/961ae73e-ba27-404a-9805-a10277c078b1-crc-storage\") pod \"961ae73e-ba27-404a-9805-a10277c078b1\" (UID: \"961ae73e-ba27-404a-9805-a10277c078b1\") " Feb 27 16:22:09 crc kubenswrapper[4830]: I0227 16:22:09.310883 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dm4x4\" (UniqueName: \"kubernetes.io/projected/961ae73e-ba27-404a-9805-a10277c078b1-kube-api-access-dm4x4\") pod \"961ae73e-ba27-404a-9805-a10277c078b1\" (UID: \"961ae73e-ba27-404a-9805-a10277c078b1\") " Feb 27 16:22:09 crc kubenswrapper[4830]: 
I0227 16:22:09.311140 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/961ae73e-ba27-404a-9805-a10277c078b1-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "961ae73e-ba27-404a-9805-a10277c078b1" (UID: "961ae73e-ba27-404a-9805-a10277c078b1"). InnerVolumeSpecName "node-mnt". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 16:22:09 crc kubenswrapper[4830]: I0227 16:22:09.311354 4830 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/961ae73e-ba27-404a-9805-a10277c078b1-node-mnt\") on node \"crc\" DevicePath \"\"" Feb 27 16:22:09 crc kubenswrapper[4830]: I0227 16:22:09.319136 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/961ae73e-ba27-404a-9805-a10277c078b1-kube-api-access-dm4x4" (OuterVolumeSpecName: "kube-api-access-dm4x4") pod "961ae73e-ba27-404a-9805-a10277c078b1" (UID: "961ae73e-ba27-404a-9805-a10277c078b1"). InnerVolumeSpecName "kube-api-access-dm4x4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:22:09 crc kubenswrapper[4830]: I0227 16:22:09.339271 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/961ae73e-ba27-404a-9805-a10277c078b1-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "961ae73e-ba27-404a-9805-a10277c078b1" (UID: "961ae73e-ba27-404a-9805-a10277c078b1"). InnerVolumeSpecName "crc-storage". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:22:09 crc kubenswrapper[4830]: I0227 16:22:09.412625 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dm4x4\" (UniqueName: \"kubernetes.io/projected/961ae73e-ba27-404a-9805-a10277c078b1-kube-api-access-dm4x4\") on node \"crc\" DevicePath \"\"" Feb 27 16:22:09 crc kubenswrapper[4830]: I0227 16:22:09.412709 4830 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/961ae73e-ba27-404a-9805-a10277c078b1-crc-storage\") on node \"crc\" DevicePath \"\"" Feb 27 16:22:09 crc kubenswrapper[4830]: I0227 16:22:09.877545 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-kk7k8" event={"ID":"961ae73e-ba27-404a-9805-a10277c078b1","Type":"ContainerDied","Data":"16d10ba6fbbe4dcb5219bc3ca9f42026f98319df7ac9b2861ae57f5044ca190d"} Feb 27 16:22:09 crc kubenswrapper[4830]: I0227 16:22:09.878062 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="16d10ba6fbbe4dcb5219bc3ca9f42026f98319df7ac9b2861ae57f5044ca190d" Feb 27 16:22:09 crc kubenswrapper[4830]: I0227 16:22:09.877620 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-kk7k8" Feb 27 16:22:17 crc kubenswrapper[4830]: I0227 16:22:17.095927 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-p6hxx" Feb 27 16:22:17 crc kubenswrapper[4830]: I0227 16:22:17.628031 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a828swbr"] Feb 27 16:22:17 crc kubenswrapper[4830]: E0227 16:22:17.628704 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="275d93b7-6091-41c7-98d8-7a7a67d6f043" containerName="oc" Feb 27 16:22:17 crc kubenswrapper[4830]: I0227 16:22:17.628732 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="275d93b7-6091-41c7-98d8-7a7a67d6f043" containerName="oc" Feb 27 16:22:17 crc kubenswrapper[4830]: E0227 16:22:17.628752 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="961ae73e-ba27-404a-9805-a10277c078b1" containerName="storage" Feb 27 16:22:17 crc kubenswrapper[4830]: I0227 16:22:17.628768 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="961ae73e-ba27-404a-9805-a10277c078b1" containerName="storage" Feb 27 16:22:17 crc kubenswrapper[4830]: I0227 16:22:17.628932 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="961ae73e-ba27-404a-9805-a10277c078b1" containerName="storage" Feb 27 16:22:17 crc kubenswrapper[4830]: I0227 16:22:17.628989 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="275d93b7-6091-41c7-98d8-7a7a67d6f043" containerName="oc" Feb 27 16:22:17 crc kubenswrapper[4830]: I0227 16:22:17.630332 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a828swbr" Feb 27 16:22:17 crc kubenswrapper[4830]: I0227 16:22:17.634135 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 27 16:22:17 crc kubenswrapper[4830]: I0227 16:22:17.634667 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c2c326b5-3888-4022-8171-e06f87caf906-bundle\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a828swbr\" (UID: \"c2c326b5-3888-4022-8171-e06f87caf906\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a828swbr" Feb 27 16:22:17 crc kubenswrapper[4830]: I0227 16:22:17.634738 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrbvh\" (UniqueName: \"kubernetes.io/projected/c2c326b5-3888-4022-8171-e06f87caf906-kube-api-access-hrbvh\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a828swbr\" (UID: \"c2c326b5-3888-4022-8171-e06f87caf906\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a828swbr" Feb 27 16:22:17 crc kubenswrapper[4830]: I0227 16:22:17.634868 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c2c326b5-3888-4022-8171-e06f87caf906-util\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a828swbr\" (UID: \"c2c326b5-3888-4022-8171-e06f87caf906\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a828swbr" Feb 27 16:22:17 crc kubenswrapper[4830]: I0227 16:22:17.637115 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a828swbr"] Feb 27 16:22:17 crc kubenswrapper[4830]: 
I0227 16:22:17.735913 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c2c326b5-3888-4022-8171-e06f87caf906-util\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a828swbr\" (UID: \"c2c326b5-3888-4022-8171-e06f87caf906\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a828swbr" Feb 27 16:22:17 crc kubenswrapper[4830]: I0227 16:22:17.736035 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c2c326b5-3888-4022-8171-e06f87caf906-bundle\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a828swbr\" (UID: \"c2c326b5-3888-4022-8171-e06f87caf906\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a828swbr" Feb 27 16:22:17 crc kubenswrapper[4830]: I0227 16:22:17.736095 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hrbvh\" (UniqueName: \"kubernetes.io/projected/c2c326b5-3888-4022-8171-e06f87caf906-kube-api-access-hrbvh\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a828swbr\" (UID: \"c2c326b5-3888-4022-8171-e06f87caf906\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a828swbr" Feb 27 16:22:17 crc kubenswrapper[4830]: I0227 16:22:17.737498 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c2c326b5-3888-4022-8171-e06f87caf906-util\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a828swbr\" (UID: \"c2c326b5-3888-4022-8171-e06f87caf906\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a828swbr" Feb 27 16:22:17 crc kubenswrapper[4830]: I0227 16:22:17.737616 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/c2c326b5-3888-4022-8171-e06f87caf906-bundle\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a828swbr\" (UID: \"c2c326b5-3888-4022-8171-e06f87caf906\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a828swbr" Feb 27 16:22:17 crc kubenswrapper[4830]: I0227 16:22:17.760160 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrbvh\" (UniqueName: \"kubernetes.io/projected/c2c326b5-3888-4022-8171-e06f87caf906-kube-api-access-hrbvh\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a828swbr\" (UID: \"c2c326b5-3888-4022-8171-e06f87caf906\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a828swbr" Feb 27 16:22:17 crc kubenswrapper[4830]: I0227 16:22:17.960469 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a828swbr" Feb 27 16:22:18 crc kubenswrapper[4830]: I0227 16:22:18.281327 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a828swbr"] Feb 27 16:22:18 crc kubenswrapper[4830]: I0227 16:22:18.942481 4830 generic.go:334] "Generic (PLEG): container finished" podID="c2c326b5-3888-4022-8171-e06f87caf906" containerID="89ac861f923d0216d2cdd876f7688165e8fd909d562219c065d74436cf577746" exitCode=0 Feb 27 16:22:18 crc kubenswrapper[4830]: I0227 16:22:18.942729 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a828swbr" event={"ID":"c2c326b5-3888-4022-8171-e06f87caf906","Type":"ContainerDied","Data":"89ac861f923d0216d2cdd876f7688165e8fd909d562219c065d74436cf577746"} Feb 27 16:22:18 crc kubenswrapper[4830]: I0227 16:22:18.942991 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a828swbr" event={"ID":"c2c326b5-3888-4022-8171-e06f87caf906","Type":"ContainerStarted","Data":"e9a33432abb9641d1040876fe473a43121d21278cba05ab0253b39c728016af4"} Feb 27 16:22:19 crc kubenswrapper[4830]: I0227 16:22:19.791382 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-4cp9f"] Feb 27 16:22:19 crc kubenswrapper[4830]: I0227 16:22:19.793182 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4cp9f" Feb 27 16:22:19 crc kubenswrapper[4830]: I0227 16:22:19.813745 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4cp9f"] Feb 27 16:22:19 crc kubenswrapper[4830]: I0227 16:22:19.973316 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfjkn\" (UniqueName: \"kubernetes.io/projected/1ec659cc-62c3-4a0a-b505-dc3b7cc01941-kube-api-access-zfjkn\") pod \"redhat-operators-4cp9f\" (UID: \"1ec659cc-62c3-4a0a-b505-dc3b7cc01941\") " pod="openshift-marketplace/redhat-operators-4cp9f" Feb 27 16:22:19 crc kubenswrapper[4830]: I0227 16:22:19.973406 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ec659cc-62c3-4a0a-b505-dc3b7cc01941-catalog-content\") pod \"redhat-operators-4cp9f\" (UID: \"1ec659cc-62c3-4a0a-b505-dc3b7cc01941\") " pod="openshift-marketplace/redhat-operators-4cp9f" Feb 27 16:22:19 crc kubenswrapper[4830]: I0227 16:22:19.973426 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ec659cc-62c3-4a0a-b505-dc3b7cc01941-utilities\") pod \"redhat-operators-4cp9f\" (UID: \"1ec659cc-62c3-4a0a-b505-dc3b7cc01941\") " 
pod="openshift-marketplace/redhat-operators-4cp9f" Feb 27 16:22:20 crc kubenswrapper[4830]: I0227 16:22:20.074135 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zfjkn\" (UniqueName: \"kubernetes.io/projected/1ec659cc-62c3-4a0a-b505-dc3b7cc01941-kube-api-access-zfjkn\") pod \"redhat-operators-4cp9f\" (UID: \"1ec659cc-62c3-4a0a-b505-dc3b7cc01941\") " pod="openshift-marketplace/redhat-operators-4cp9f" Feb 27 16:22:20 crc kubenswrapper[4830]: I0227 16:22:20.074196 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ec659cc-62c3-4a0a-b505-dc3b7cc01941-catalog-content\") pod \"redhat-operators-4cp9f\" (UID: \"1ec659cc-62c3-4a0a-b505-dc3b7cc01941\") " pod="openshift-marketplace/redhat-operators-4cp9f" Feb 27 16:22:20 crc kubenswrapper[4830]: I0227 16:22:20.074217 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ec659cc-62c3-4a0a-b505-dc3b7cc01941-utilities\") pod \"redhat-operators-4cp9f\" (UID: \"1ec659cc-62c3-4a0a-b505-dc3b7cc01941\") " pod="openshift-marketplace/redhat-operators-4cp9f" Feb 27 16:22:20 crc kubenswrapper[4830]: I0227 16:22:20.074679 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ec659cc-62c3-4a0a-b505-dc3b7cc01941-utilities\") pod \"redhat-operators-4cp9f\" (UID: \"1ec659cc-62c3-4a0a-b505-dc3b7cc01941\") " pod="openshift-marketplace/redhat-operators-4cp9f" Feb 27 16:22:20 crc kubenswrapper[4830]: I0227 16:22:20.075005 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ec659cc-62c3-4a0a-b505-dc3b7cc01941-catalog-content\") pod \"redhat-operators-4cp9f\" (UID: \"1ec659cc-62c3-4a0a-b505-dc3b7cc01941\") " pod="openshift-marketplace/redhat-operators-4cp9f" Feb 27 16:22:20 crc 
kubenswrapper[4830]: I0227 16:22:20.093112 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zfjkn\" (UniqueName: \"kubernetes.io/projected/1ec659cc-62c3-4a0a-b505-dc3b7cc01941-kube-api-access-zfjkn\") pod \"redhat-operators-4cp9f\" (UID: \"1ec659cc-62c3-4a0a-b505-dc3b7cc01941\") " pod="openshift-marketplace/redhat-operators-4cp9f" Feb 27 16:22:20 crc kubenswrapper[4830]: I0227 16:22:20.120465 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4cp9f" Feb 27 16:22:20 crc kubenswrapper[4830]: I0227 16:22:20.582678 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4cp9f"] Feb 27 16:22:20 crc kubenswrapper[4830]: I0227 16:22:20.956740 4830 generic.go:334] "Generic (PLEG): container finished" podID="c2c326b5-3888-4022-8171-e06f87caf906" containerID="56543c2110f5031bc0b1d440eaeea43b9d641f10baa1bd9c38a3dc0fad9125ac" exitCode=0 Feb 27 16:22:20 crc kubenswrapper[4830]: I0227 16:22:20.956827 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a828swbr" event={"ID":"c2c326b5-3888-4022-8171-e06f87caf906","Type":"ContainerDied","Data":"56543c2110f5031bc0b1d440eaeea43b9d641f10baa1bd9c38a3dc0fad9125ac"} Feb 27 16:22:20 crc kubenswrapper[4830]: I0227 16:22:20.960978 4830 generic.go:334] "Generic (PLEG): container finished" podID="1ec659cc-62c3-4a0a-b505-dc3b7cc01941" containerID="d7bfc2bddd0bf01f9e5e3b175d414d353dc990d019cea899717d6b5c250f79e3" exitCode=0 Feb 27 16:22:20 crc kubenswrapper[4830]: I0227 16:22:20.961004 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4cp9f" event={"ID":"1ec659cc-62c3-4a0a-b505-dc3b7cc01941","Type":"ContainerDied","Data":"d7bfc2bddd0bf01f9e5e3b175d414d353dc990d019cea899717d6b5c250f79e3"} Feb 27 16:22:20 crc kubenswrapper[4830]: I0227 16:22:20.961019 4830 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4cp9f" event={"ID":"1ec659cc-62c3-4a0a-b505-dc3b7cc01941","Type":"ContainerStarted","Data":"6ea40fa7198b3329b493ae6cfe1c2218755ffbc5f3c7c866c8240077359a4b33"} Feb 27 16:22:21 crc kubenswrapper[4830]: I0227 16:22:21.972037 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4cp9f" event={"ID":"1ec659cc-62c3-4a0a-b505-dc3b7cc01941","Type":"ContainerStarted","Data":"6304f60a01cf6fa75942c09f48d506248fac45333d2f25ca84543b86f13248a8"} Feb 27 16:22:21 crc kubenswrapper[4830]: I0227 16:22:21.975057 4830 generic.go:334] "Generic (PLEG): container finished" podID="c2c326b5-3888-4022-8171-e06f87caf906" containerID="6ea00e8b3b191b43202daeeeff05e0a6e9710d8c4b9dc5d61b9e09032773ca40" exitCode=0 Feb 27 16:22:21 crc kubenswrapper[4830]: I0227 16:22:21.975143 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a828swbr" event={"ID":"c2c326b5-3888-4022-8171-e06f87caf906","Type":"ContainerDied","Data":"6ea00e8b3b191b43202daeeeff05e0a6e9710d8c4b9dc5d61b9e09032773ca40"} Feb 27 16:22:22 crc kubenswrapper[4830]: I0227 16:22:22.984616 4830 generic.go:334] "Generic (PLEG): container finished" podID="1ec659cc-62c3-4a0a-b505-dc3b7cc01941" containerID="6304f60a01cf6fa75942c09f48d506248fac45333d2f25ca84543b86f13248a8" exitCode=0 Feb 27 16:22:22 crc kubenswrapper[4830]: I0227 16:22:22.984700 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4cp9f" event={"ID":"1ec659cc-62c3-4a0a-b505-dc3b7cc01941","Type":"ContainerDied","Data":"6304f60a01cf6fa75942c09f48d506248fac45333d2f25ca84543b86f13248a8"} Feb 27 16:22:23 crc kubenswrapper[4830]: I0227 16:22:23.290283 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a828swbr" Feb 27 16:22:23 crc kubenswrapper[4830]: I0227 16:22:23.418835 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c2c326b5-3888-4022-8171-e06f87caf906-bundle\") pod \"c2c326b5-3888-4022-8171-e06f87caf906\" (UID: \"c2c326b5-3888-4022-8171-e06f87caf906\") " Feb 27 16:22:23 crc kubenswrapper[4830]: I0227 16:22:23.419243 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c2c326b5-3888-4022-8171-e06f87caf906-util\") pod \"c2c326b5-3888-4022-8171-e06f87caf906\" (UID: \"c2c326b5-3888-4022-8171-e06f87caf906\") " Feb 27 16:22:23 crc kubenswrapper[4830]: I0227 16:22:23.419281 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hrbvh\" (UniqueName: \"kubernetes.io/projected/c2c326b5-3888-4022-8171-e06f87caf906-kube-api-access-hrbvh\") pod \"c2c326b5-3888-4022-8171-e06f87caf906\" (UID: \"c2c326b5-3888-4022-8171-e06f87caf906\") " Feb 27 16:22:23 crc kubenswrapper[4830]: I0227 16:22:23.419881 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c2c326b5-3888-4022-8171-e06f87caf906-bundle" (OuterVolumeSpecName: "bundle") pod "c2c326b5-3888-4022-8171-e06f87caf906" (UID: "c2c326b5-3888-4022-8171-e06f87caf906"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 16:22:23 crc kubenswrapper[4830]: I0227 16:22:23.422184 4830 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c2c326b5-3888-4022-8171-e06f87caf906-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 16:22:23 crc kubenswrapper[4830]: I0227 16:22:23.428381 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2c326b5-3888-4022-8171-e06f87caf906-kube-api-access-hrbvh" (OuterVolumeSpecName: "kube-api-access-hrbvh") pod "c2c326b5-3888-4022-8171-e06f87caf906" (UID: "c2c326b5-3888-4022-8171-e06f87caf906"). InnerVolumeSpecName "kube-api-access-hrbvh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:22:23 crc kubenswrapper[4830]: I0227 16:22:23.440791 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c2c326b5-3888-4022-8171-e06f87caf906-util" (OuterVolumeSpecName: "util") pod "c2c326b5-3888-4022-8171-e06f87caf906" (UID: "c2c326b5-3888-4022-8171-e06f87caf906"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 16:22:23 crc kubenswrapper[4830]: I0227 16:22:23.523888 4830 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c2c326b5-3888-4022-8171-e06f87caf906-util\") on node \"crc\" DevicePath \"\"" Feb 27 16:22:23 crc kubenswrapper[4830]: I0227 16:22:23.523924 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hrbvh\" (UniqueName: \"kubernetes.io/projected/c2c326b5-3888-4022-8171-e06f87caf906-kube-api-access-hrbvh\") on node \"crc\" DevicePath \"\"" Feb 27 16:22:23 crc kubenswrapper[4830]: I0227 16:22:23.996679 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4cp9f" event={"ID":"1ec659cc-62c3-4a0a-b505-dc3b7cc01941","Type":"ContainerStarted","Data":"42377dc3294e2822f2b1d8afbd849d5a1098c3fb95292108c58275d6541e4ca0"} Feb 27 16:22:24 crc kubenswrapper[4830]: I0227 16:22:24.000792 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a828swbr" event={"ID":"c2c326b5-3888-4022-8171-e06f87caf906","Type":"ContainerDied","Data":"e9a33432abb9641d1040876fe473a43121d21278cba05ab0253b39c728016af4"} Feb 27 16:22:24 crc kubenswrapper[4830]: I0227 16:22:24.000937 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e9a33432abb9641d1040876fe473a43121d21278cba05ab0253b39c728016af4" Feb 27 16:22:24 crc kubenswrapper[4830]: I0227 16:22:24.000858 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a828swbr" Feb 27 16:22:24 crc kubenswrapper[4830]: I0227 16:22:24.030830 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-4cp9f" podStartSLOduration=2.600399351 podStartE2EDuration="5.030802761s" podCreationTimestamp="2026-02-27 16:22:19 +0000 UTC" firstStartedPulling="2026-02-27 16:22:20.962222624 +0000 UTC m=+937.051495097" lastFinishedPulling="2026-02-27 16:22:23.392626034 +0000 UTC m=+939.481898507" observedRunningTime="2026-02-27 16:22:24.026485773 +0000 UTC m=+940.115758276" watchObservedRunningTime="2026-02-27 16:22:24.030802761 +0000 UTC m=+940.120075254" Feb 27 16:22:26 crc kubenswrapper[4830]: I0227 16:22:26.228232 4830 scope.go:117] "RemoveContainer" containerID="1468d3f52c12e00c0351a45bf01df6e20300dfed38123d1bc936e2b88628e636" Feb 27 16:22:26 crc kubenswrapper[4830]: I0227 16:22:26.279365 4830 scope.go:117] "RemoveContainer" containerID="787ecf758c99d969efde354b66bb37b5f9a8c2d79cccd8bab200423af61d4109" Feb 27 16:22:27 crc kubenswrapper[4830]: I0227 16:22:27.025607 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fsrq9_bb72b0f7-1d22-4d13-9653-b1607aa2235d/kube-multus/2.log" Feb 27 16:22:28 crc kubenswrapper[4830]: I0227 16:22:28.113753 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-75c5dccd6c-5tq6p"] Feb 27 16:22:28 crc kubenswrapper[4830]: E0227 16:22:28.113999 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2c326b5-3888-4022-8171-e06f87caf906" containerName="pull" Feb 27 16:22:28 crc kubenswrapper[4830]: I0227 16:22:28.114012 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2c326b5-3888-4022-8171-e06f87caf906" containerName="pull" Feb 27 16:22:28 crc kubenswrapper[4830]: E0227 16:22:28.114029 4830 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="c2c326b5-3888-4022-8171-e06f87caf906" containerName="util" Feb 27 16:22:28 crc kubenswrapper[4830]: I0227 16:22:28.114037 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2c326b5-3888-4022-8171-e06f87caf906" containerName="util" Feb 27 16:22:28 crc kubenswrapper[4830]: E0227 16:22:28.114060 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2c326b5-3888-4022-8171-e06f87caf906" containerName="extract" Feb 27 16:22:28 crc kubenswrapper[4830]: I0227 16:22:28.114068 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2c326b5-3888-4022-8171-e06f87caf906" containerName="extract" Feb 27 16:22:28 crc kubenswrapper[4830]: I0227 16:22:28.114174 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2c326b5-3888-4022-8171-e06f87caf906" containerName="extract" Feb 27 16:22:28 crc kubenswrapper[4830]: I0227 16:22:28.114831 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-75c5dccd6c-5tq6p" Feb 27 16:22:28 crc kubenswrapper[4830]: I0227 16:22:28.120010 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Feb 27 16:22:28 crc kubenswrapper[4830]: I0227 16:22:28.120253 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-xvh44" Feb 27 16:22:28 crc kubenswrapper[4830]: I0227 16:22:28.120256 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Feb 27 16:22:28 crc kubenswrapper[4830]: I0227 16:22:28.143580 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-75c5dccd6c-5tq6p"] Feb 27 16:22:28 crc kubenswrapper[4830]: I0227 16:22:28.219336 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6tft4\" (UniqueName: 
\"kubernetes.io/projected/760fa7ab-d23d-4c12-afd2-fe11766fd7d1-kube-api-access-6tft4\") pod \"nmstate-operator-75c5dccd6c-5tq6p\" (UID: \"760fa7ab-d23d-4c12-afd2-fe11766fd7d1\") " pod="openshift-nmstate/nmstate-operator-75c5dccd6c-5tq6p" Feb 27 16:22:28 crc kubenswrapper[4830]: I0227 16:22:28.321376 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6tft4\" (UniqueName: \"kubernetes.io/projected/760fa7ab-d23d-4c12-afd2-fe11766fd7d1-kube-api-access-6tft4\") pod \"nmstate-operator-75c5dccd6c-5tq6p\" (UID: \"760fa7ab-d23d-4c12-afd2-fe11766fd7d1\") " pod="openshift-nmstate/nmstate-operator-75c5dccd6c-5tq6p" Feb 27 16:22:28 crc kubenswrapper[4830]: I0227 16:22:28.355932 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6tft4\" (UniqueName: \"kubernetes.io/projected/760fa7ab-d23d-4c12-afd2-fe11766fd7d1-kube-api-access-6tft4\") pod \"nmstate-operator-75c5dccd6c-5tq6p\" (UID: \"760fa7ab-d23d-4c12-afd2-fe11766fd7d1\") " pod="openshift-nmstate/nmstate-operator-75c5dccd6c-5tq6p" Feb 27 16:22:28 crc kubenswrapper[4830]: I0227 16:22:28.431428 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-75c5dccd6c-5tq6p" Feb 27 16:22:28 crc kubenswrapper[4830]: I0227 16:22:28.671542 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-75c5dccd6c-5tq6p"] Feb 27 16:22:29 crc kubenswrapper[4830]: I0227 16:22:29.037197 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-75c5dccd6c-5tq6p" event={"ID":"760fa7ab-d23d-4c12-afd2-fe11766fd7d1","Type":"ContainerStarted","Data":"7163289ae6e9d9ccd58f971d74e330391624b9bf84d09077013dcb5e62ab1ee8"} Feb 27 16:22:30 crc kubenswrapper[4830]: I0227 16:22:30.120971 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-4cp9f" Feb 27 16:22:30 crc kubenswrapper[4830]: I0227 16:22:30.121042 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-4cp9f" Feb 27 16:22:31 crc kubenswrapper[4830]: I0227 16:22:31.195810 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-4cp9f" podUID="1ec659cc-62c3-4a0a-b505-dc3b7cc01941" containerName="registry-server" probeResult="failure" output=< Feb 27 16:22:31 crc kubenswrapper[4830]: timeout: failed to connect service ":50051" within 1s Feb 27 16:22:31 crc kubenswrapper[4830]: > Feb 27 16:22:32 crc kubenswrapper[4830]: I0227 16:22:32.053900 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-75c5dccd6c-5tq6p" event={"ID":"760fa7ab-d23d-4c12-afd2-fe11766fd7d1","Type":"ContainerStarted","Data":"a91ba0189eba0faa771abfec31262cbed576be6610167b71dacb5148c1b6eb2b"} Feb 27 16:22:32 crc kubenswrapper[4830]: I0227 16:22:32.084500 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-75c5dccd6c-5tq6p" podStartSLOduration=1.301046699 podStartE2EDuration="4.084480526s" podCreationTimestamp="2026-02-27 
16:22:28 +0000 UTC" firstStartedPulling="2026-02-27 16:22:28.683489748 +0000 UTC m=+944.772762221" lastFinishedPulling="2026-02-27 16:22:31.466923575 +0000 UTC m=+947.556196048" observedRunningTime="2026-02-27 16:22:32.081059091 +0000 UTC m=+948.170331614" watchObservedRunningTime="2026-02-27 16:22:32.084480526 +0000 UTC m=+948.173752999" Feb 27 16:22:37 crc kubenswrapper[4830]: I0227 16:22:37.839853 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-69594cc75-5mjm7"] Feb 27 16:22:37 crc kubenswrapper[4830]: I0227 16:22:37.841104 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-69594cc75-5mjm7" Feb 27 16:22:37 crc kubenswrapper[4830]: I0227 16:22:37.849872 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-786f45cff4-6kgbb"] Feb 27 16:22:37 crc kubenswrapper[4830]: I0227 16:22:37.851547 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-786f45cff4-6kgbb" Feb 27 16:22:37 crc kubenswrapper[4830]: I0227 16:22:37.854587 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-69594cc75-5mjm7"] Feb 27 16:22:37 crc kubenswrapper[4830]: I0227 16:22:37.856084 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-tgc8r" Feb 27 16:22:37 crc kubenswrapper[4830]: I0227 16:22:37.856262 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Feb 27 16:22:37 crc kubenswrapper[4830]: I0227 16:22:37.895733 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-jpmgn"] Feb 27 16:22:37 crc kubenswrapper[4830]: I0227 16:22:37.896789 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-jpmgn" Feb 27 16:22:37 crc kubenswrapper[4830]: I0227 16:22:37.974042 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/0e5821a0-b1d4-49d4-becb-f08af1b6a92f-tls-key-pair\") pod \"nmstate-webhook-786f45cff4-6kgbb\" (UID: \"0e5821a0-b1d4-49d4-becb-f08af1b6a92f\") " pod="openshift-nmstate/nmstate-webhook-786f45cff4-6kgbb" Feb 27 16:22:37 crc kubenswrapper[4830]: I0227 16:22:37.974101 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmvx7\" (UniqueName: \"kubernetes.io/projected/0e5821a0-b1d4-49d4-becb-f08af1b6a92f-kube-api-access-qmvx7\") pod \"nmstate-webhook-786f45cff4-6kgbb\" (UID: \"0e5821a0-b1d4-49d4-becb-f08af1b6a92f\") " pod="openshift-nmstate/nmstate-webhook-786f45cff4-6kgbb" Feb 27 16:22:37 crc kubenswrapper[4830]: I0227 16:22:37.974128 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kl8kh\" (UniqueName: \"kubernetes.io/projected/35d07d48-0cd6-4813-9737-497857d9e40b-kube-api-access-kl8kh\") pod \"nmstate-handler-jpmgn\" (UID: \"35d07d48-0cd6-4813-9737-497857d9e40b\") " pod="openshift-nmstate/nmstate-handler-jpmgn" Feb 27 16:22:37 crc kubenswrapper[4830]: I0227 16:22:37.974159 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/35d07d48-0cd6-4813-9737-497857d9e40b-dbus-socket\") pod \"nmstate-handler-jpmgn\" (UID: \"35d07d48-0cd6-4813-9737-497857d9e40b\") " pod="openshift-nmstate/nmstate-handler-jpmgn" Feb 27 16:22:37 crc kubenswrapper[4830]: I0227 16:22:37.974193 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: 
\"kubernetes.io/host-path/35d07d48-0cd6-4813-9737-497857d9e40b-ovs-socket\") pod \"nmstate-handler-jpmgn\" (UID: \"35d07d48-0cd6-4813-9737-497857d9e40b\") " pod="openshift-nmstate/nmstate-handler-jpmgn" Feb 27 16:22:37 crc kubenswrapper[4830]: I0227 16:22:37.974222 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwtxc\" (UniqueName: \"kubernetes.io/projected/6865cd8a-83de-4744-8631-7b95fd599910-kube-api-access-kwtxc\") pod \"nmstate-metrics-69594cc75-5mjm7\" (UID: \"6865cd8a-83de-4744-8631-7b95fd599910\") " pod="openshift-nmstate/nmstate-metrics-69594cc75-5mjm7" Feb 27 16:22:37 crc kubenswrapper[4830]: I0227 16:22:37.974244 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/35d07d48-0cd6-4813-9737-497857d9e40b-nmstate-lock\") pod \"nmstate-handler-jpmgn\" (UID: \"35d07d48-0cd6-4813-9737-497857d9e40b\") " pod="openshift-nmstate/nmstate-handler-jpmgn" Feb 27 16:22:37 crc kubenswrapper[4830]: I0227 16:22:37.981474 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-786f45cff4-6kgbb"] Feb 27 16:22:38 crc kubenswrapper[4830]: I0227 16:22:38.050504 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-fl5kr"] Feb 27 16:22:38 crc kubenswrapper[4830]: I0227 16:22:38.051211 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-fl5kr" Feb 27 16:22:38 crc kubenswrapper[4830]: I0227 16:22:38.052918 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-p8w98" Feb 27 16:22:38 crc kubenswrapper[4830]: I0227 16:22:38.057045 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Feb 27 16:22:38 crc kubenswrapper[4830]: I0227 16:22:38.057051 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Feb 27 16:22:38 crc kubenswrapper[4830]: I0227 16:22:38.061613 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-fl5kr"] Feb 27 16:22:38 crc kubenswrapper[4830]: I0227 16:22:38.075337 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/0e5821a0-b1d4-49d4-becb-f08af1b6a92f-tls-key-pair\") pod \"nmstate-webhook-786f45cff4-6kgbb\" (UID: \"0e5821a0-b1d4-49d4-becb-f08af1b6a92f\") " pod="openshift-nmstate/nmstate-webhook-786f45cff4-6kgbb" Feb 27 16:22:38 crc kubenswrapper[4830]: I0227 16:22:38.075586 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qmvx7\" (UniqueName: \"kubernetes.io/projected/0e5821a0-b1d4-49d4-becb-f08af1b6a92f-kube-api-access-qmvx7\") pod \"nmstate-webhook-786f45cff4-6kgbb\" (UID: \"0e5821a0-b1d4-49d4-becb-f08af1b6a92f\") " pod="openshift-nmstate/nmstate-webhook-786f45cff4-6kgbb" Feb 27 16:22:38 crc kubenswrapper[4830]: I0227 16:22:38.075676 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kl8kh\" (UniqueName: \"kubernetes.io/projected/35d07d48-0cd6-4813-9737-497857d9e40b-kube-api-access-kl8kh\") pod \"nmstate-handler-jpmgn\" (UID: \"35d07d48-0cd6-4813-9737-497857d9e40b\") " pod="openshift-nmstate/nmstate-handler-jpmgn" 
Feb 27 16:22:38 crc kubenswrapper[4830]: I0227 16:22:38.075759 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/35d07d48-0cd6-4813-9737-497857d9e40b-dbus-socket\") pod \"nmstate-handler-jpmgn\" (UID: \"35d07d48-0cd6-4813-9737-497857d9e40b\") " pod="openshift-nmstate/nmstate-handler-jpmgn" Feb 27 16:22:38 crc kubenswrapper[4830]: I0227 16:22:38.075855 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/35d07d48-0cd6-4813-9737-497857d9e40b-ovs-socket\") pod \"nmstate-handler-jpmgn\" (UID: \"35d07d48-0cd6-4813-9737-497857d9e40b\") " pod="openshift-nmstate/nmstate-handler-jpmgn" Feb 27 16:22:38 crc kubenswrapper[4830]: I0227 16:22:38.075957 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwtxc\" (UniqueName: \"kubernetes.io/projected/6865cd8a-83de-4744-8631-7b95fd599910-kube-api-access-kwtxc\") pod \"nmstate-metrics-69594cc75-5mjm7\" (UID: \"6865cd8a-83de-4744-8631-7b95fd599910\") " pod="openshift-nmstate/nmstate-metrics-69594cc75-5mjm7" Feb 27 16:22:38 crc kubenswrapper[4830]: I0227 16:22:38.076045 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/35d07d48-0cd6-4813-9737-497857d9e40b-nmstate-lock\") pod \"nmstate-handler-jpmgn\" (UID: \"35d07d48-0cd6-4813-9737-497857d9e40b\") " pod="openshift-nmstate/nmstate-handler-jpmgn" Feb 27 16:22:38 crc kubenswrapper[4830]: I0227 16:22:38.076170 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/35d07d48-0cd6-4813-9737-497857d9e40b-nmstate-lock\") pod \"nmstate-handler-jpmgn\" (UID: \"35d07d48-0cd6-4813-9737-497857d9e40b\") " pod="openshift-nmstate/nmstate-handler-jpmgn" Feb 27 16:22:38 crc kubenswrapper[4830]: I0227 16:22:38.076244 4830 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/35d07d48-0cd6-4813-9737-497857d9e40b-ovs-socket\") pod \"nmstate-handler-jpmgn\" (UID: \"35d07d48-0cd6-4813-9737-497857d9e40b\") " pod="openshift-nmstate/nmstate-handler-jpmgn" Feb 27 16:22:38 crc kubenswrapper[4830]: I0227 16:22:38.076503 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/35d07d48-0cd6-4813-9737-497857d9e40b-dbus-socket\") pod \"nmstate-handler-jpmgn\" (UID: \"35d07d48-0cd6-4813-9737-497857d9e40b\") " pod="openshift-nmstate/nmstate-handler-jpmgn" Feb 27 16:22:38 crc kubenswrapper[4830]: I0227 16:22:38.083959 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/0e5821a0-b1d4-49d4-becb-f08af1b6a92f-tls-key-pair\") pod \"nmstate-webhook-786f45cff4-6kgbb\" (UID: \"0e5821a0-b1d4-49d4-becb-f08af1b6a92f\") " pod="openshift-nmstate/nmstate-webhook-786f45cff4-6kgbb" Feb 27 16:22:38 crc kubenswrapper[4830]: I0227 16:22:38.091575 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qmvx7\" (UniqueName: \"kubernetes.io/projected/0e5821a0-b1d4-49d4-becb-f08af1b6a92f-kube-api-access-qmvx7\") pod \"nmstate-webhook-786f45cff4-6kgbb\" (UID: \"0e5821a0-b1d4-49d4-becb-f08af1b6a92f\") " pod="openshift-nmstate/nmstate-webhook-786f45cff4-6kgbb" Feb 27 16:22:38 crc kubenswrapper[4830]: I0227 16:22:38.091726 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwtxc\" (UniqueName: \"kubernetes.io/projected/6865cd8a-83de-4744-8631-7b95fd599910-kube-api-access-kwtxc\") pod \"nmstate-metrics-69594cc75-5mjm7\" (UID: \"6865cd8a-83de-4744-8631-7b95fd599910\") " pod="openshift-nmstate/nmstate-metrics-69594cc75-5mjm7" Feb 27 16:22:38 crc kubenswrapper[4830]: I0227 16:22:38.093262 4830 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-kl8kh\" (UniqueName: \"kubernetes.io/projected/35d07d48-0cd6-4813-9737-497857d9e40b-kube-api-access-kl8kh\") pod \"nmstate-handler-jpmgn\" (UID: \"35d07d48-0cd6-4813-9737-497857d9e40b\") " pod="openshift-nmstate/nmstate-handler-jpmgn" Feb 27 16:22:38 crc kubenswrapper[4830]: I0227 16:22:38.177102 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/cb2e0063-5469-4239-836b-131854f77207-nginx-conf\") pod \"nmstate-console-plugin-5dcbbd79cf-fl5kr\" (UID: \"cb2e0063-5469-4239-836b-131854f77207\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-fl5kr" Feb 27 16:22:38 crc kubenswrapper[4830]: I0227 16:22:38.177220 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmszl\" (UniqueName: \"kubernetes.io/projected/cb2e0063-5469-4239-836b-131854f77207-kube-api-access-zmszl\") pod \"nmstate-console-plugin-5dcbbd79cf-fl5kr\" (UID: \"cb2e0063-5469-4239-836b-131854f77207\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-fl5kr" Feb 27 16:22:38 crc kubenswrapper[4830]: I0227 16:22:38.177293 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/cb2e0063-5469-4239-836b-131854f77207-plugin-serving-cert\") pod \"nmstate-console-plugin-5dcbbd79cf-fl5kr\" (UID: \"cb2e0063-5469-4239-836b-131854f77207\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-fl5kr" Feb 27 16:22:38 crc kubenswrapper[4830]: I0227 16:22:38.199636 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-69594cc75-5mjm7" Feb 27 16:22:38 crc kubenswrapper[4830]: I0227 16:22:38.231334 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-786f45cff4-6kgbb" Feb 27 16:22:38 crc kubenswrapper[4830]: I0227 16:22:38.232223 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-67bf8b5dd8-dmtxx"] Feb 27 16:22:38 crc kubenswrapper[4830]: I0227 16:22:38.232875 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-67bf8b5dd8-dmtxx" Feb 27 16:22:38 crc kubenswrapper[4830]: I0227 16:22:38.247218 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-67bf8b5dd8-dmtxx"] Feb 27 16:22:38 crc kubenswrapper[4830]: I0227 16:22:38.254578 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-jpmgn" Feb 27 16:22:38 crc kubenswrapper[4830]: I0227 16:22:38.278072 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zmszl\" (UniqueName: \"kubernetes.io/projected/cb2e0063-5469-4239-836b-131854f77207-kube-api-access-zmszl\") pod \"nmstate-console-plugin-5dcbbd79cf-fl5kr\" (UID: \"cb2e0063-5469-4239-836b-131854f77207\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-fl5kr" Feb 27 16:22:38 crc kubenswrapper[4830]: I0227 16:22:38.278145 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/cb2e0063-5469-4239-836b-131854f77207-plugin-serving-cert\") pod \"nmstate-console-plugin-5dcbbd79cf-fl5kr\" (UID: \"cb2e0063-5469-4239-836b-131854f77207\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-fl5kr" Feb 27 16:22:38 crc kubenswrapper[4830]: I0227 16:22:38.278197 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/cb2e0063-5469-4239-836b-131854f77207-nginx-conf\") pod \"nmstate-console-plugin-5dcbbd79cf-fl5kr\" (UID: \"cb2e0063-5469-4239-836b-131854f77207\") " 
pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-fl5kr" Feb 27 16:22:38 crc kubenswrapper[4830]: E0227 16:22:38.279179 4830 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Feb 27 16:22:38 crc kubenswrapper[4830]: I0227 16:22:38.279279 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/cb2e0063-5469-4239-836b-131854f77207-nginx-conf\") pod \"nmstate-console-plugin-5dcbbd79cf-fl5kr\" (UID: \"cb2e0063-5469-4239-836b-131854f77207\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-fl5kr" Feb 27 16:22:38 crc kubenswrapper[4830]: E0227 16:22:38.279434 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cb2e0063-5469-4239-836b-131854f77207-plugin-serving-cert podName:cb2e0063-5469-4239-836b-131854f77207 nodeName:}" failed. No retries permitted until 2026-02-27 16:22:38.779410905 +0000 UTC m=+954.868683368 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/cb2e0063-5469-4239-836b-131854f77207-plugin-serving-cert") pod "nmstate-console-plugin-5dcbbd79cf-fl5kr" (UID: "cb2e0063-5469-4239-836b-131854f77207") : secret "plugin-serving-cert" not found Feb 27 16:22:38 crc kubenswrapper[4830]: I0227 16:22:38.296030 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zmszl\" (UniqueName: \"kubernetes.io/projected/cb2e0063-5469-4239-836b-131854f77207-kube-api-access-zmszl\") pod \"nmstate-console-plugin-5dcbbd79cf-fl5kr\" (UID: \"cb2e0063-5469-4239-836b-131854f77207\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-fl5kr" Feb 27 16:22:38 crc kubenswrapper[4830]: I0227 16:22:38.380661 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h87br\" (UniqueName: \"kubernetes.io/projected/8e7b1ec3-ba0e-4c58-8763-e8ba008a7940-kube-api-access-h87br\") pod \"console-67bf8b5dd8-dmtxx\" (UID: \"8e7b1ec3-ba0e-4c58-8763-e8ba008a7940\") " pod="openshift-console/console-67bf8b5dd8-dmtxx" Feb 27 16:22:38 crc kubenswrapper[4830]: I0227 16:22:38.380716 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8e7b1ec3-ba0e-4c58-8763-e8ba008a7940-service-ca\") pod \"console-67bf8b5dd8-dmtxx\" (UID: \"8e7b1ec3-ba0e-4c58-8763-e8ba008a7940\") " pod="openshift-console/console-67bf8b5dd8-dmtxx" Feb 27 16:22:38 crc kubenswrapper[4830]: I0227 16:22:38.380746 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/8e7b1ec3-ba0e-4c58-8763-e8ba008a7940-console-oauth-config\") pod \"console-67bf8b5dd8-dmtxx\" (UID: \"8e7b1ec3-ba0e-4c58-8763-e8ba008a7940\") " pod="openshift-console/console-67bf8b5dd8-dmtxx" Feb 27 16:22:38 crc kubenswrapper[4830]: 
I0227 16:22:38.380763 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/8e7b1ec3-ba0e-4c58-8763-e8ba008a7940-console-config\") pod \"console-67bf8b5dd8-dmtxx\" (UID: \"8e7b1ec3-ba0e-4c58-8763-e8ba008a7940\") " pod="openshift-console/console-67bf8b5dd8-dmtxx" Feb 27 16:22:38 crc kubenswrapper[4830]: I0227 16:22:38.380815 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/8e7b1ec3-ba0e-4c58-8763-e8ba008a7940-oauth-serving-cert\") pod \"console-67bf8b5dd8-dmtxx\" (UID: \"8e7b1ec3-ba0e-4c58-8763-e8ba008a7940\") " pod="openshift-console/console-67bf8b5dd8-dmtxx" Feb 27 16:22:38 crc kubenswrapper[4830]: I0227 16:22:38.380842 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8e7b1ec3-ba0e-4c58-8763-e8ba008a7940-trusted-ca-bundle\") pod \"console-67bf8b5dd8-dmtxx\" (UID: \"8e7b1ec3-ba0e-4c58-8763-e8ba008a7940\") " pod="openshift-console/console-67bf8b5dd8-dmtxx" Feb 27 16:22:38 crc kubenswrapper[4830]: I0227 16:22:38.380876 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8e7b1ec3-ba0e-4c58-8763-e8ba008a7940-console-serving-cert\") pod \"console-67bf8b5dd8-dmtxx\" (UID: \"8e7b1ec3-ba0e-4c58-8763-e8ba008a7940\") " pod="openshift-console/console-67bf8b5dd8-dmtxx" Feb 27 16:22:38 crc kubenswrapper[4830]: I0227 16:22:38.418276 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-69594cc75-5mjm7"] Feb 27 16:22:38 crc kubenswrapper[4830]: W0227 16:22:38.426458 4830 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6865cd8a_83de_4744_8631_7b95fd599910.slice/crio-dfd0d91740f96592711ba81511a27662ac1646a5ecef79bc66428b194b7fbc8f WatchSource:0}: Error finding container dfd0d91740f96592711ba81511a27662ac1646a5ecef79bc66428b194b7fbc8f: Status 404 returned error can't find the container with id dfd0d91740f96592711ba81511a27662ac1646a5ecef79bc66428b194b7fbc8f Feb 27 16:22:38 crc kubenswrapper[4830]: I0227 16:22:38.481892 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/8e7b1ec3-ba0e-4c58-8763-e8ba008a7940-console-oauth-config\") pod \"console-67bf8b5dd8-dmtxx\" (UID: \"8e7b1ec3-ba0e-4c58-8763-e8ba008a7940\") " pod="openshift-console/console-67bf8b5dd8-dmtxx" Feb 27 16:22:38 crc kubenswrapper[4830]: I0227 16:22:38.481923 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/8e7b1ec3-ba0e-4c58-8763-e8ba008a7940-console-config\") pod \"console-67bf8b5dd8-dmtxx\" (UID: \"8e7b1ec3-ba0e-4c58-8763-e8ba008a7940\") " pod="openshift-console/console-67bf8b5dd8-dmtxx" Feb 27 16:22:38 crc kubenswrapper[4830]: I0227 16:22:38.481975 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/8e7b1ec3-ba0e-4c58-8763-e8ba008a7940-oauth-serving-cert\") pod \"console-67bf8b5dd8-dmtxx\" (UID: \"8e7b1ec3-ba0e-4c58-8763-e8ba008a7940\") " pod="openshift-console/console-67bf8b5dd8-dmtxx" Feb 27 16:22:38 crc kubenswrapper[4830]: I0227 16:22:38.482001 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8e7b1ec3-ba0e-4c58-8763-e8ba008a7940-trusted-ca-bundle\") pod \"console-67bf8b5dd8-dmtxx\" (UID: \"8e7b1ec3-ba0e-4c58-8763-e8ba008a7940\") " pod="openshift-console/console-67bf8b5dd8-dmtxx" Feb 27 
16:22:38 crc kubenswrapper[4830]: I0227 16:22:38.482031 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8e7b1ec3-ba0e-4c58-8763-e8ba008a7940-console-serving-cert\") pod \"console-67bf8b5dd8-dmtxx\" (UID: \"8e7b1ec3-ba0e-4c58-8763-e8ba008a7940\") " pod="openshift-console/console-67bf8b5dd8-dmtxx" Feb 27 16:22:38 crc kubenswrapper[4830]: I0227 16:22:38.482065 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h87br\" (UniqueName: \"kubernetes.io/projected/8e7b1ec3-ba0e-4c58-8763-e8ba008a7940-kube-api-access-h87br\") pod \"console-67bf8b5dd8-dmtxx\" (UID: \"8e7b1ec3-ba0e-4c58-8763-e8ba008a7940\") " pod="openshift-console/console-67bf8b5dd8-dmtxx" Feb 27 16:22:38 crc kubenswrapper[4830]: I0227 16:22:38.482082 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8e7b1ec3-ba0e-4c58-8763-e8ba008a7940-service-ca\") pod \"console-67bf8b5dd8-dmtxx\" (UID: \"8e7b1ec3-ba0e-4c58-8763-e8ba008a7940\") " pod="openshift-console/console-67bf8b5dd8-dmtxx" Feb 27 16:22:38 crc kubenswrapper[4830]: I0227 16:22:38.483608 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8e7b1ec3-ba0e-4c58-8763-e8ba008a7940-trusted-ca-bundle\") pod \"console-67bf8b5dd8-dmtxx\" (UID: \"8e7b1ec3-ba0e-4c58-8763-e8ba008a7940\") " pod="openshift-console/console-67bf8b5dd8-dmtxx" Feb 27 16:22:38 crc kubenswrapper[4830]: I0227 16:22:38.484144 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/8e7b1ec3-ba0e-4c58-8763-e8ba008a7940-console-config\") pod \"console-67bf8b5dd8-dmtxx\" (UID: \"8e7b1ec3-ba0e-4c58-8763-e8ba008a7940\") " pod="openshift-console/console-67bf8b5dd8-dmtxx" Feb 27 16:22:38 crc kubenswrapper[4830]: I0227 
16:22:38.484187 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8e7b1ec3-ba0e-4c58-8763-e8ba008a7940-service-ca\") pod \"console-67bf8b5dd8-dmtxx\" (UID: \"8e7b1ec3-ba0e-4c58-8763-e8ba008a7940\") " pod="openshift-console/console-67bf8b5dd8-dmtxx" Feb 27 16:22:38 crc kubenswrapper[4830]: I0227 16:22:38.484236 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/8e7b1ec3-ba0e-4c58-8763-e8ba008a7940-oauth-serving-cert\") pod \"console-67bf8b5dd8-dmtxx\" (UID: \"8e7b1ec3-ba0e-4c58-8763-e8ba008a7940\") " pod="openshift-console/console-67bf8b5dd8-dmtxx" Feb 27 16:22:38 crc kubenswrapper[4830]: I0227 16:22:38.485429 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/8e7b1ec3-ba0e-4c58-8763-e8ba008a7940-console-oauth-config\") pod \"console-67bf8b5dd8-dmtxx\" (UID: \"8e7b1ec3-ba0e-4c58-8763-e8ba008a7940\") " pod="openshift-console/console-67bf8b5dd8-dmtxx" Feb 27 16:22:38 crc kubenswrapper[4830]: I0227 16:22:38.485734 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8e7b1ec3-ba0e-4c58-8763-e8ba008a7940-console-serving-cert\") pod \"console-67bf8b5dd8-dmtxx\" (UID: \"8e7b1ec3-ba0e-4c58-8763-e8ba008a7940\") " pod="openshift-console/console-67bf8b5dd8-dmtxx" Feb 27 16:22:38 crc kubenswrapper[4830]: I0227 16:22:38.495393 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-786f45cff4-6kgbb"] Feb 27 16:22:38 crc kubenswrapper[4830]: I0227 16:22:38.498371 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h87br\" (UniqueName: \"kubernetes.io/projected/8e7b1ec3-ba0e-4c58-8763-e8ba008a7940-kube-api-access-h87br\") pod \"console-67bf8b5dd8-dmtxx\" (UID: 
\"8e7b1ec3-ba0e-4c58-8763-e8ba008a7940\") " pod="openshift-console/console-67bf8b5dd8-dmtxx" Feb 27 16:22:38 crc kubenswrapper[4830]: W0227 16:22:38.505566 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0e5821a0_b1d4_49d4_becb_f08af1b6a92f.slice/crio-e439052b54218f49d100f52c525c55c7e6af0784883ea750281412604418e9d6 WatchSource:0}: Error finding container e439052b54218f49d100f52c525c55c7e6af0784883ea750281412604418e9d6: Status 404 returned error can't find the container with id e439052b54218f49d100f52c525c55c7e6af0784883ea750281412604418e9d6 Feb 27 16:22:38 crc kubenswrapper[4830]: I0227 16:22:38.563643 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-67bf8b5dd8-dmtxx" Feb 27 16:22:38 crc kubenswrapper[4830]: I0227 16:22:38.774391 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-67bf8b5dd8-dmtxx"] Feb 27 16:22:38 crc kubenswrapper[4830]: W0227 16:22:38.779581 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8e7b1ec3_ba0e_4c58_8763_e8ba008a7940.slice/crio-43f3436b0d11e1aab50cbb9b4cbd0fe908cbcd3d2344db0e281e6b9219b3deff WatchSource:0}: Error finding container 43f3436b0d11e1aab50cbb9b4cbd0fe908cbcd3d2344db0e281e6b9219b3deff: Status 404 returned error can't find the container with id 43f3436b0d11e1aab50cbb9b4cbd0fe908cbcd3d2344db0e281e6b9219b3deff Feb 27 16:22:38 crc kubenswrapper[4830]: I0227 16:22:38.785260 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/cb2e0063-5469-4239-836b-131854f77207-plugin-serving-cert\") pod \"nmstate-console-plugin-5dcbbd79cf-fl5kr\" (UID: \"cb2e0063-5469-4239-836b-131854f77207\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-fl5kr" Feb 27 16:22:38 crc kubenswrapper[4830]: I0227 
16:22:38.789479 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/cb2e0063-5469-4239-836b-131854f77207-plugin-serving-cert\") pod \"nmstate-console-plugin-5dcbbd79cf-fl5kr\" (UID: \"cb2e0063-5469-4239-836b-131854f77207\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-fl5kr" Feb 27 16:22:38 crc kubenswrapper[4830]: I0227 16:22:38.964882 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-fl5kr" Feb 27 16:22:39 crc kubenswrapper[4830]: I0227 16:22:39.105762 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-jpmgn" event={"ID":"35d07d48-0cd6-4813-9737-497857d9e40b","Type":"ContainerStarted","Data":"f24d4da768cd9355c16a1282f8a7f7ee98cd16d9f6f235fc3ae869be3608ef98"} Feb 27 16:22:39 crc kubenswrapper[4830]: I0227 16:22:39.108223 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-67bf8b5dd8-dmtxx" event={"ID":"8e7b1ec3-ba0e-4c58-8763-e8ba008a7940","Type":"ContainerStarted","Data":"c2885e7feb9062ebc5228847b0b1c240ec2eb55b983a1f04f446a93e96da1da5"} Feb 27 16:22:39 crc kubenswrapper[4830]: I0227 16:22:39.108315 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-67bf8b5dd8-dmtxx" event={"ID":"8e7b1ec3-ba0e-4c58-8763-e8ba008a7940","Type":"ContainerStarted","Data":"43f3436b0d11e1aab50cbb9b4cbd0fe908cbcd3d2344db0e281e6b9219b3deff"} Feb 27 16:22:39 crc kubenswrapper[4830]: I0227 16:22:39.111286 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-69594cc75-5mjm7" event={"ID":"6865cd8a-83de-4744-8631-7b95fd599910","Type":"ContainerStarted","Data":"dfd0d91740f96592711ba81511a27662ac1646a5ecef79bc66428b194b7fbc8f"} Feb 27 16:22:39 crc kubenswrapper[4830]: I0227 16:22:39.113359 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-nmstate/nmstate-webhook-786f45cff4-6kgbb" event={"ID":"0e5821a0-b1d4-49d4-becb-f08af1b6a92f","Type":"ContainerStarted","Data":"e439052b54218f49d100f52c525c55c7e6af0784883ea750281412604418e9d6"} Feb 27 16:22:39 crc kubenswrapper[4830]: I0227 16:22:39.129541 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-67bf8b5dd8-dmtxx" podStartSLOduration=1.1295173219999999 podStartE2EDuration="1.129517322s" podCreationTimestamp="2026-02-27 16:22:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:22:39.128104557 +0000 UTC m=+955.217377060" watchObservedRunningTime="2026-02-27 16:22:39.129517322 +0000 UTC m=+955.218789815" Feb 27 16:22:39 crc kubenswrapper[4830]: I0227 16:22:39.257086 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-fl5kr"] Feb 27 16:22:40 crc kubenswrapper[4830]: I0227 16:22:40.119402 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-fl5kr" event={"ID":"cb2e0063-5469-4239-836b-131854f77207","Type":"ContainerStarted","Data":"afe613fb8daec966bc5c6e27b4f4c1d48a9ef31b46648dac2a8e2647148372c0"} Feb 27 16:22:40 crc kubenswrapper[4830]: I0227 16:22:40.172063 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-4cp9f" Feb 27 16:22:40 crc kubenswrapper[4830]: I0227 16:22:40.234330 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-4cp9f" Feb 27 16:22:40 crc kubenswrapper[4830]: I0227 16:22:40.405734 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4cp9f"] Feb 27 16:22:42 crc kubenswrapper[4830]: I0227 16:22:42.130631 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-nmstate/nmstate-metrics-69594cc75-5mjm7" event={"ID":"6865cd8a-83de-4744-8631-7b95fd599910","Type":"ContainerStarted","Data":"44c603b55e2b44cbac61b6102d4ae555ee97951c34e9b6faacc6f5a44d5bd4ca"} Feb 27 16:22:42 crc kubenswrapper[4830]: I0227 16:22:42.131695 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-786f45cff4-6kgbb" event={"ID":"0e5821a0-b1d4-49d4-becb-f08af1b6a92f","Type":"ContainerStarted","Data":"7873d8a0ff4bd208339ef2764ae471bf1f86e539d9e22a07146a36fce634e84b"} Feb 27 16:22:42 crc kubenswrapper[4830]: I0227 16:22:42.131830 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-786f45cff4-6kgbb" Feb 27 16:22:42 crc kubenswrapper[4830]: I0227 16:22:42.133863 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-jpmgn" event={"ID":"35d07d48-0cd6-4813-9737-497857d9e40b","Type":"ContainerStarted","Data":"22950b28c09f8cf6aaa800d3e11060b209c93a2a30c16400f9b687878c93b5ff"} Feb 27 16:22:42 crc kubenswrapper[4830]: I0227 16:22:42.134116 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-4cp9f" podUID="1ec659cc-62c3-4a0a-b505-dc3b7cc01941" containerName="registry-server" containerID="cri-o://42377dc3294e2822f2b1d8afbd849d5a1098c3fb95292108c58275d6541e4ca0" gracePeriod=2 Feb 27 16:22:42 crc kubenswrapper[4830]: I0227 16:22:42.134482 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-jpmgn" Feb 27 16:22:42 crc kubenswrapper[4830]: I0227 16:22:42.169764 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-jpmgn" podStartSLOduration=2.193285335 podStartE2EDuration="5.1697456s" podCreationTimestamp="2026-02-27 16:22:37 +0000 UTC" firstStartedPulling="2026-02-27 16:22:38.292564914 +0000 UTC m=+954.381837377" lastFinishedPulling="2026-02-27 
16:22:41.269025139 +0000 UTC m=+957.358297642" observedRunningTime="2026-02-27 16:22:42.16855273 +0000 UTC m=+958.257825223" watchObservedRunningTime="2026-02-27 16:22:42.1697456 +0000 UTC m=+958.259018083" Feb 27 16:22:42 crc kubenswrapper[4830]: I0227 16:22:42.172159 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-786f45cff4-6kgbb" podStartSLOduration=2.408341654 podStartE2EDuration="5.17214813s" podCreationTimestamp="2026-02-27 16:22:37 +0000 UTC" firstStartedPulling="2026-02-27 16:22:38.509910009 +0000 UTC m=+954.599182472" lastFinishedPulling="2026-02-27 16:22:41.273716455 +0000 UTC m=+957.362988948" observedRunningTime="2026-02-27 16:22:42.149015241 +0000 UTC m=+958.238287744" watchObservedRunningTime="2026-02-27 16:22:42.17214813 +0000 UTC m=+958.261420603" Feb 27 16:22:42 crc kubenswrapper[4830]: I0227 16:22:42.514832 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4cp9f" Feb 27 16:22:42 crc kubenswrapper[4830]: I0227 16:22:42.540256 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zfjkn\" (UniqueName: \"kubernetes.io/projected/1ec659cc-62c3-4a0a-b505-dc3b7cc01941-kube-api-access-zfjkn\") pod \"1ec659cc-62c3-4a0a-b505-dc3b7cc01941\" (UID: \"1ec659cc-62c3-4a0a-b505-dc3b7cc01941\") " Feb 27 16:22:42 crc kubenswrapper[4830]: I0227 16:22:42.540509 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ec659cc-62c3-4a0a-b505-dc3b7cc01941-utilities\") pod \"1ec659cc-62c3-4a0a-b505-dc3b7cc01941\" (UID: \"1ec659cc-62c3-4a0a-b505-dc3b7cc01941\") " Feb 27 16:22:42 crc kubenswrapper[4830]: I0227 16:22:42.540532 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/1ec659cc-62c3-4a0a-b505-dc3b7cc01941-catalog-content\") pod \"1ec659cc-62c3-4a0a-b505-dc3b7cc01941\" (UID: \"1ec659cc-62c3-4a0a-b505-dc3b7cc01941\") " Feb 27 16:22:42 crc kubenswrapper[4830]: I0227 16:22:42.543256 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ec659cc-62c3-4a0a-b505-dc3b7cc01941-utilities" (OuterVolumeSpecName: "utilities") pod "1ec659cc-62c3-4a0a-b505-dc3b7cc01941" (UID: "1ec659cc-62c3-4a0a-b505-dc3b7cc01941"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 16:22:42 crc kubenswrapper[4830]: I0227 16:22:42.569393 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ec659cc-62c3-4a0a-b505-dc3b7cc01941-kube-api-access-zfjkn" (OuterVolumeSpecName: "kube-api-access-zfjkn") pod "1ec659cc-62c3-4a0a-b505-dc3b7cc01941" (UID: "1ec659cc-62c3-4a0a-b505-dc3b7cc01941"). InnerVolumeSpecName "kube-api-access-zfjkn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:22:42 crc kubenswrapper[4830]: I0227 16:22:42.642323 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zfjkn\" (UniqueName: \"kubernetes.io/projected/1ec659cc-62c3-4a0a-b505-dc3b7cc01941-kube-api-access-zfjkn\") on node \"crc\" DevicePath \"\"" Feb 27 16:22:42 crc kubenswrapper[4830]: I0227 16:22:42.642369 4830 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ec659cc-62c3-4a0a-b505-dc3b7cc01941-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 16:22:42 crc kubenswrapper[4830]: I0227 16:22:42.724101 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ec659cc-62c3-4a0a-b505-dc3b7cc01941-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1ec659cc-62c3-4a0a-b505-dc3b7cc01941" (UID: "1ec659cc-62c3-4a0a-b505-dc3b7cc01941"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 16:22:42 crc kubenswrapper[4830]: I0227 16:22:42.743600 4830 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ec659cc-62c3-4a0a-b505-dc3b7cc01941-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 16:22:43 crc kubenswrapper[4830]: I0227 16:22:43.146406 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-fl5kr" event={"ID":"cb2e0063-5469-4239-836b-131854f77207","Type":"ContainerStarted","Data":"1bce4e837c62cc3879e5730ce7eaf33ac09c3594d5bdb4ac63d386c47e23ac64"} Feb 27 16:22:43 crc kubenswrapper[4830]: I0227 16:22:43.151035 4830 generic.go:334] "Generic (PLEG): container finished" podID="1ec659cc-62c3-4a0a-b505-dc3b7cc01941" containerID="42377dc3294e2822f2b1d8afbd849d5a1098c3fb95292108c58275d6541e4ca0" exitCode=0 Feb 27 16:22:43 crc kubenswrapper[4830]: I0227 16:22:43.151184 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4cp9f" event={"ID":"1ec659cc-62c3-4a0a-b505-dc3b7cc01941","Type":"ContainerDied","Data":"42377dc3294e2822f2b1d8afbd849d5a1098c3fb95292108c58275d6541e4ca0"} Feb 27 16:22:43 crc kubenswrapper[4830]: I0227 16:22:43.151240 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4cp9f" event={"ID":"1ec659cc-62c3-4a0a-b505-dc3b7cc01941","Type":"ContainerDied","Data":"6ea40fa7198b3329b493ae6cfe1c2218755ffbc5f3c7c866c8240077359a4b33"} Feb 27 16:22:43 crc kubenswrapper[4830]: I0227 16:22:43.151273 4830 scope.go:117] "RemoveContainer" containerID="42377dc3294e2822f2b1d8afbd849d5a1098c3fb95292108c58275d6541e4ca0" Feb 27 16:22:43 crc kubenswrapper[4830]: I0227 16:22:43.151408 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4cp9f" Feb 27 16:22:43 crc kubenswrapper[4830]: I0227 16:22:43.174114 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-fl5kr" podStartSLOduration=2.213922454 podStartE2EDuration="5.174086722s" podCreationTimestamp="2026-02-27 16:22:38 +0000 UTC" firstStartedPulling="2026-02-27 16:22:39.271937012 +0000 UTC m=+955.361209515" lastFinishedPulling="2026-02-27 16:22:42.23210131 +0000 UTC m=+958.321373783" observedRunningTime="2026-02-27 16:22:43.168650776 +0000 UTC m=+959.257923319" watchObservedRunningTime="2026-02-27 16:22:43.174086722 +0000 UTC m=+959.263359215" Feb 27 16:22:43 crc kubenswrapper[4830]: I0227 16:22:43.190284 4830 scope.go:117] "RemoveContainer" containerID="6304f60a01cf6fa75942c09f48d506248fac45333d2f25ca84543b86f13248a8" Feb 27 16:22:43 crc kubenswrapper[4830]: I0227 16:22:43.202139 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4cp9f"] Feb 27 16:22:43 crc kubenswrapper[4830]: I0227 16:22:43.205667 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-4cp9f"] Feb 27 16:22:43 crc kubenswrapper[4830]: I0227 16:22:43.234357 4830 scope.go:117] "RemoveContainer" containerID="d7bfc2bddd0bf01f9e5e3b175d414d353dc990d019cea899717d6b5c250f79e3" Feb 27 16:22:43 crc kubenswrapper[4830]: I0227 16:22:43.249876 4830 scope.go:117] "RemoveContainer" containerID="42377dc3294e2822f2b1d8afbd849d5a1098c3fb95292108c58275d6541e4ca0" Feb 27 16:22:43 crc kubenswrapper[4830]: E0227 16:22:43.250539 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"42377dc3294e2822f2b1d8afbd849d5a1098c3fb95292108c58275d6541e4ca0\": container with ID starting with 42377dc3294e2822f2b1d8afbd849d5a1098c3fb95292108c58275d6541e4ca0 not found: ID does not exist" 
containerID="42377dc3294e2822f2b1d8afbd849d5a1098c3fb95292108c58275d6541e4ca0" Feb 27 16:22:43 crc kubenswrapper[4830]: I0227 16:22:43.250588 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42377dc3294e2822f2b1d8afbd849d5a1098c3fb95292108c58275d6541e4ca0"} err="failed to get container status \"42377dc3294e2822f2b1d8afbd849d5a1098c3fb95292108c58275d6541e4ca0\": rpc error: code = NotFound desc = could not find container \"42377dc3294e2822f2b1d8afbd849d5a1098c3fb95292108c58275d6541e4ca0\": container with ID starting with 42377dc3294e2822f2b1d8afbd849d5a1098c3fb95292108c58275d6541e4ca0 not found: ID does not exist" Feb 27 16:22:43 crc kubenswrapper[4830]: I0227 16:22:43.250620 4830 scope.go:117] "RemoveContainer" containerID="6304f60a01cf6fa75942c09f48d506248fac45333d2f25ca84543b86f13248a8" Feb 27 16:22:43 crc kubenswrapper[4830]: E0227 16:22:43.251215 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6304f60a01cf6fa75942c09f48d506248fac45333d2f25ca84543b86f13248a8\": container with ID starting with 6304f60a01cf6fa75942c09f48d506248fac45333d2f25ca84543b86f13248a8 not found: ID does not exist" containerID="6304f60a01cf6fa75942c09f48d506248fac45333d2f25ca84543b86f13248a8" Feb 27 16:22:43 crc kubenswrapper[4830]: I0227 16:22:43.251257 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6304f60a01cf6fa75942c09f48d506248fac45333d2f25ca84543b86f13248a8"} err="failed to get container status \"6304f60a01cf6fa75942c09f48d506248fac45333d2f25ca84543b86f13248a8\": rpc error: code = NotFound desc = could not find container \"6304f60a01cf6fa75942c09f48d506248fac45333d2f25ca84543b86f13248a8\": container with ID starting with 6304f60a01cf6fa75942c09f48d506248fac45333d2f25ca84543b86f13248a8 not found: ID does not exist" Feb 27 16:22:43 crc kubenswrapper[4830]: I0227 16:22:43.251286 4830 scope.go:117] 
"RemoveContainer" containerID="d7bfc2bddd0bf01f9e5e3b175d414d353dc990d019cea899717d6b5c250f79e3" Feb 27 16:22:43 crc kubenswrapper[4830]: E0227 16:22:43.251591 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d7bfc2bddd0bf01f9e5e3b175d414d353dc990d019cea899717d6b5c250f79e3\": container with ID starting with d7bfc2bddd0bf01f9e5e3b175d414d353dc990d019cea899717d6b5c250f79e3 not found: ID does not exist" containerID="d7bfc2bddd0bf01f9e5e3b175d414d353dc990d019cea899717d6b5c250f79e3" Feb 27 16:22:43 crc kubenswrapper[4830]: I0227 16:22:43.251614 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7bfc2bddd0bf01f9e5e3b175d414d353dc990d019cea899717d6b5c250f79e3"} err="failed to get container status \"d7bfc2bddd0bf01f9e5e3b175d414d353dc990d019cea899717d6b5c250f79e3\": rpc error: code = NotFound desc = could not find container \"d7bfc2bddd0bf01f9e5e3b175d414d353dc990d019cea899717d6b5c250f79e3\": container with ID starting with d7bfc2bddd0bf01f9e5e3b175d414d353dc990d019cea899717d6b5c250f79e3 not found: ID does not exist" Feb 27 16:22:44 crc kubenswrapper[4830]: I0227 16:22:44.777548 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ec659cc-62c3-4a0a-b505-dc3b7cc01941" path="/var/lib/kubelet/pods/1ec659cc-62c3-4a0a-b505-dc3b7cc01941/volumes" Feb 27 16:22:45 crc kubenswrapper[4830]: I0227 16:22:45.178051 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-69594cc75-5mjm7" event={"ID":"6865cd8a-83de-4744-8631-7b95fd599910","Type":"ContainerStarted","Data":"21112981f44cd45a09fd3d53aa61cb62f505d9d55304729732bec786b1b18f49"} Feb 27 16:22:45 crc kubenswrapper[4830]: I0227 16:22:45.200990 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-69594cc75-5mjm7" podStartSLOduration=2.653316708 podStartE2EDuration="8.200934352s" 
podCreationTimestamp="2026-02-27 16:22:37 +0000 UTC" firstStartedPulling="2026-02-27 16:22:38.428954824 +0000 UTC m=+954.518227287" lastFinishedPulling="2026-02-27 16:22:43.976572428 +0000 UTC m=+960.065844931" observedRunningTime="2026-02-27 16:22:45.200252235 +0000 UTC m=+961.289524738" watchObservedRunningTime="2026-02-27 16:22:45.200934352 +0000 UTC m=+961.290206895" Feb 27 16:22:48 crc kubenswrapper[4830]: I0227 16:22:48.292159 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-jpmgn" Feb 27 16:22:48 crc kubenswrapper[4830]: I0227 16:22:48.564005 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-67bf8b5dd8-dmtxx" Feb 27 16:22:48 crc kubenswrapper[4830]: I0227 16:22:48.564469 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-67bf8b5dd8-dmtxx" Feb 27 16:22:48 crc kubenswrapper[4830]: I0227 16:22:48.572790 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-67bf8b5dd8-dmtxx" Feb 27 16:22:49 crc kubenswrapper[4830]: I0227 16:22:49.209249 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-67bf8b5dd8-dmtxx" Feb 27 16:22:49 crc kubenswrapper[4830]: I0227 16:22:49.300381 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-kjfn6"] Feb 27 16:22:58 crc kubenswrapper[4830]: I0227 16:22:58.240824 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-786f45cff4-6kgbb" Feb 27 16:23:03 crc kubenswrapper[4830]: I0227 16:23:03.160720 4830 patch_prober.go:28] interesting pod/machine-config-daemon-2tv5v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
start-of-body= Feb 27 16:23:03 crc kubenswrapper[4830]: I0227 16:23:03.160802 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 16:23:13 crc kubenswrapper[4830]: I0227 16:23:13.815688 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4bq74j"] Feb 27 16:23:13 crc kubenswrapper[4830]: E0227 16:23:13.816660 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ec659cc-62c3-4a0a-b505-dc3b7cc01941" containerName="extract-utilities" Feb 27 16:23:13 crc kubenswrapper[4830]: I0227 16:23:13.816685 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ec659cc-62c3-4a0a-b505-dc3b7cc01941" containerName="extract-utilities" Feb 27 16:23:13 crc kubenswrapper[4830]: E0227 16:23:13.816708 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ec659cc-62c3-4a0a-b505-dc3b7cc01941" containerName="extract-content" Feb 27 16:23:13 crc kubenswrapper[4830]: I0227 16:23:13.816721 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ec659cc-62c3-4a0a-b505-dc3b7cc01941" containerName="extract-content" Feb 27 16:23:13 crc kubenswrapper[4830]: E0227 16:23:13.816735 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ec659cc-62c3-4a0a-b505-dc3b7cc01941" containerName="registry-server" Feb 27 16:23:13 crc kubenswrapper[4830]: I0227 16:23:13.816773 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ec659cc-62c3-4a0a-b505-dc3b7cc01941" containerName="registry-server" Feb 27 16:23:13 crc kubenswrapper[4830]: I0227 16:23:13.817055 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ec659cc-62c3-4a0a-b505-dc3b7cc01941" 
containerName="registry-server" Feb 27 16:23:13 crc kubenswrapper[4830]: I0227 16:23:13.818288 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4bq74j" Feb 27 16:23:13 crc kubenswrapper[4830]: I0227 16:23:13.826367 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 27 16:23:13 crc kubenswrapper[4830]: I0227 16:23:13.830767 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4bq74j"] Feb 27 16:23:13 crc kubenswrapper[4830]: I0227 16:23:13.916343 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9ea172fb-feaf-4174-9aaf-e50231dcdf04-bundle\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4bq74j\" (UID: \"9ea172fb-feaf-4174-9aaf-e50231dcdf04\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4bq74j" Feb 27 16:23:13 crc kubenswrapper[4830]: I0227 16:23:13.916720 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bswvp\" (UniqueName: \"kubernetes.io/projected/9ea172fb-feaf-4174-9aaf-e50231dcdf04-kube-api-access-bswvp\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4bq74j\" (UID: \"9ea172fb-feaf-4174-9aaf-e50231dcdf04\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4bq74j" Feb 27 16:23:13 crc kubenswrapper[4830]: I0227 16:23:13.916824 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9ea172fb-feaf-4174-9aaf-e50231dcdf04-util\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4bq74j\" (UID: 
\"9ea172fb-feaf-4174-9aaf-e50231dcdf04\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4bq74j" Feb 27 16:23:14 crc kubenswrapper[4830]: I0227 16:23:14.017842 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9ea172fb-feaf-4174-9aaf-e50231dcdf04-bundle\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4bq74j\" (UID: \"9ea172fb-feaf-4174-9aaf-e50231dcdf04\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4bq74j" Feb 27 16:23:14 crc kubenswrapper[4830]: I0227 16:23:14.018042 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bswvp\" (UniqueName: \"kubernetes.io/projected/9ea172fb-feaf-4174-9aaf-e50231dcdf04-kube-api-access-bswvp\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4bq74j\" (UID: \"9ea172fb-feaf-4174-9aaf-e50231dcdf04\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4bq74j" Feb 27 16:23:14 crc kubenswrapper[4830]: I0227 16:23:14.018084 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9ea172fb-feaf-4174-9aaf-e50231dcdf04-util\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4bq74j\" (UID: \"9ea172fb-feaf-4174-9aaf-e50231dcdf04\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4bq74j" Feb 27 16:23:14 crc kubenswrapper[4830]: I0227 16:23:14.018873 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9ea172fb-feaf-4174-9aaf-e50231dcdf04-bundle\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4bq74j\" (UID: \"9ea172fb-feaf-4174-9aaf-e50231dcdf04\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4bq74j" Feb 27 16:23:14 crc 
kubenswrapper[4830]: I0227 16:23:14.018896 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9ea172fb-feaf-4174-9aaf-e50231dcdf04-util\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4bq74j\" (UID: \"9ea172fb-feaf-4174-9aaf-e50231dcdf04\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4bq74j" Feb 27 16:23:14 crc kubenswrapper[4830]: I0227 16:23:14.048124 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bswvp\" (UniqueName: \"kubernetes.io/projected/9ea172fb-feaf-4174-9aaf-e50231dcdf04-kube-api-access-bswvp\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4bq74j\" (UID: \"9ea172fb-feaf-4174-9aaf-e50231dcdf04\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4bq74j" Feb 27 16:23:14 crc kubenswrapper[4830]: I0227 16:23:14.144252 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4bq74j" Feb 27 16:23:14 crc kubenswrapper[4830]: I0227 16:23:14.362383 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-kjfn6" podUID="11fbaa05-cf66-40dd-be15-c6474a011768" containerName="console" containerID="cri-o://30e1c78398b7de2fa04252f58c541cb457aea0dcce0e056e3d2ee80cbd726291" gracePeriod=15 Feb 27 16:23:14 crc kubenswrapper[4830]: I0227 16:23:14.419467 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4bq74j"] Feb 27 16:23:14 crc kubenswrapper[4830]: I0227 16:23:14.719602 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-kjfn6_11fbaa05-cf66-40dd-be15-c6474a011768/console/0.log" Feb 27 16:23:14 crc kubenswrapper[4830]: I0227 16:23:14.719892 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-kjfn6" Feb 27 16:23:14 crc kubenswrapper[4830]: I0227 16:23:14.830597 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/11fbaa05-cf66-40dd-be15-c6474a011768-console-config\") pod \"11fbaa05-cf66-40dd-be15-c6474a011768\" (UID: \"11fbaa05-cf66-40dd-be15-c6474a011768\") " Feb 27 16:23:14 crc kubenswrapper[4830]: I0227 16:23:14.830663 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/11fbaa05-cf66-40dd-be15-c6474a011768-oauth-serving-cert\") pod \"11fbaa05-cf66-40dd-be15-c6474a011768\" (UID: \"11fbaa05-cf66-40dd-be15-c6474a011768\") " Feb 27 16:23:14 crc kubenswrapper[4830]: I0227 16:23:14.830709 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/11fbaa05-cf66-40dd-be15-c6474a011768-trusted-ca-bundle\") pod \"11fbaa05-cf66-40dd-be15-c6474a011768\" (UID: \"11fbaa05-cf66-40dd-be15-c6474a011768\") " Feb 27 16:23:14 crc kubenswrapper[4830]: I0227 16:23:14.830774 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/11fbaa05-cf66-40dd-be15-c6474a011768-console-oauth-config\") pod \"11fbaa05-cf66-40dd-be15-c6474a011768\" (UID: \"11fbaa05-cf66-40dd-be15-c6474a011768\") " Feb 27 16:23:14 crc kubenswrapper[4830]: I0227 16:23:14.830819 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z4ntq\" (UniqueName: \"kubernetes.io/projected/11fbaa05-cf66-40dd-be15-c6474a011768-kube-api-access-z4ntq\") pod \"11fbaa05-cf66-40dd-be15-c6474a011768\" (UID: \"11fbaa05-cf66-40dd-be15-c6474a011768\") " Feb 27 16:23:14 crc kubenswrapper[4830]: I0227 16:23:14.830892 4830 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/11fbaa05-cf66-40dd-be15-c6474a011768-service-ca\") pod \"11fbaa05-cf66-40dd-be15-c6474a011768\" (UID: \"11fbaa05-cf66-40dd-be15-c6474a011768\") " Feb 27 16:23:14 crc kubenswrapper[4830]: I0227 16:23:14.830993 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/11fbaa05-cf66-40dd-be15-c6474a011768-console-serving-cert\") pod \"11fbaa05-cf66-40dd-be15-c6474a011768\" (UID: \"11fbaa05-cf66-40dd-be15-c6474a011768\") " Feb 27 16:23:14 crc kubenswrapper[4830]: I0227 16:23:14.831613 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11fbaa05-cf66-40dd-be15-c6474a011768-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "11fbaa05-cf66-40dd-be15-c6474a011768" (UID: "11fbaa05-cf66-40dd-be15-c6474a011768"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:23:14 crc kubenswrapper[4830]: I0227 16:23:14.831593 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11fbaa05-cf66-40dd-be15-c6474a011768-console-config" (OuterVolumeSpecName: "console-config") pod "11fbaa05-cf66-40dd-be15-c6474a011768" (UID: "11fbaa05-cf66-40dd-be15-c6474a011768"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:23:14 crc kubenswrapper[4830]: I0227 16:23:14.832338 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11fbaa05-cf66-40dd-be15-c6474a011768-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "11fbaa05-cf66-40dd-be15-c6474a011768" (UID: "11fbaa05-cf66-40dd-be15-c6474a011768"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:23:14 crc kubenswrapper[4830]: I0227 16:23:14.833010 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11fbaa05-cf66-40dd-be15-c6474a011768-service-ca" (OuterVolumeSpecName: "service-ca") pod "11fbaa05-cf66-40dd-be15-c6474a011768" (UID: "11fbaa05-cf66-40dd-be15-c6474a011768"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:23:14 crc kubenswrapper[4830]: I0227 16:23:14.839143 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11fbaa05-cf66-40dd-be15-c6474a011768-kube-api-access-z4ntq" (OuterVolumeSpecName: "kube-api-access-z4ntq") pod "11fbaa05-cf66-40dd-be15-c6474a011768" (UID: "11fbaa05-cf66-40dd-be15-c6474a011768"). InnerVolumeSpecName "kube-api-access-z4ntq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:23:14 crc kubenswrapper[4830]: I0227 16:23:14.839141 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11fbaa05-cf66-40dd-be15-c6474a011768-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "11fbaa05-cf66-40dd-be15-c6474a011768" (UID: "11fbaa05-cf66-40dd-be15-c6474a011768"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:23:14 crc kubenswrapper[4830]: I0227 16:23:14.842163 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11fbaa05-cf66-40dd-be15-c6474a011768-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "11fbaa05-cf66-40dd-be15-c6474a011768" (UID: "11fbaa05-cf66-40dd-be15-c6474a011768"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:23:14 crc kubenswrapper[4830]: I0227 16:23:14.933074 4830 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/11fbaa05-cf66-40dd-be15-c6474a011768-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 16:23:14 crc kubenswrapper[4830]: I0227 16:23:14.933123 4830 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/11fbaa05-cf66-40dd-be15-c6474a011768-console-config\") on node \"crc\" DevicePath \"\"" Feb 27 16:23:14 crc kubenswrapper[4830]: I0227 16:23:14.933143 4830 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/11fbaa05-cf66-40dd-be15-c6474a011768-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 16:23:14 crc kubenswrapper[4830]: I0227 16:23:14.933161 4830 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/11fbaa05-cf66-40dd-be15-c6474a011768-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 16:23:14 crc kubenswrapper[4830]: I0227 16:23:14.933180 4830 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/11fbaa05-cf66-40dd-be15-c6474a011768-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 27 16:23:14 crc kubenswrapper[4830]: I0227 16:23:14.933197 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z4ntq\" (UniqueName: \"kubernetes.io/projected/11fbaa05-cf66-40dd-be15-c6474a011768-kube-api-access-z4ntq\") on node \"crc\" DevicePath \"\"" Feb 27 16:23:14 crc kubenswrapper[4830]: I0227 16:23:14.933217 4830 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/11fbaa05-cf66-40dd-be15-c6474a011768-service-ca\") on node \"crc\" DevicePath \"\"" Feb 27 16:23:15 crc 
kubenswrapper[4830]: I0227 16:23:15.428402 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-kjfn6_11fbaa05-cf66-40dd-be15-c6474a011768/console/0.log" Feb 27 16:23:15 crc kubenswrapper[4830]: I0227 16:23:15.428480 4830 generic.go:334] "Generic (PLEG): container finished" podID="11fbaa05-cf66-40dd-be15-c6474a011768" containerID="30e1c78398b7de2fa04252f58c541cb457aea0dcce0e056e3d2ee80cbd726291" exitCode=2 Feb 27 16:23:15 crc kubenswrapper[4830]: I0227 16:23:15.428633 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-kjfn6" Feb 27 16:23:15 crc kubenswrapper[4830]: I0227 16:23:15.429171 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-kjfn6" event={"ID":"11fbaa05-cf66-40dd-be15-c6474a011768","Type":"ContainerDied","Data":"30e1c78398b7de2fa04252f58c541cb457aea0dcce0e056e3d2ee80cbd726291"} Feb 27 16:23:15 crc kubenswrapper[4830]: I0227 16:23:15.429405 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-kjfn6" event={"ID":"11fbaa05-cf66-40dd-be15-c6474a011768","Type":"ContainerDied","Data":"ed5a8190bcc1ccd763f39d2a6d76a6f0e916da530bc60d35fc51ab3831ea9848"} Feb 27 16:23:15 crc kubenswrapper[4830]: I0227 16:23:15.429572 4830 scope.go:117] "RemoveContainer" containerID="30e1c78398b7de2fa04252f58c541cb457aea0dcce0e056e3d2ee80cbd726291" Feb 27 16:23:15 crc kubenswrapper[4830]: I0227 16:23:15.433173 4830 generic.go:334] "Generic (PLEG): container finished" podID="9ea172fb-feaf-4174-9aaf-e50231dcdf04" containerID="690c3a69fe5fb2f8f02180ebe2637e4fb367b3871c0ba06f9dea7868d6f44baa" exitCode=0 Feb 27 16:23:15 crc kubenswrapper[4830]: I0227 16:23:15.433230 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4bq74j" 
event={"ID":"9ea172fb-feaf-4174-9aaf-e50231dcdf04","Type":"ContainerDied","Data":"690c3a69fe5fb2f8f02180ebe2637e4fb367b3871c0ba06f9dea7868d6f44baa"} Feb 27 16:23:15 crc kubenswrapper[4830]: I0227 16:23:15.433272 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4bq74j" event={"ID":"9ea172fb-feaf-4174-9aaf-e50231dcdf04","Type":"ContainerStarted","Data":"be05c73eb0e0b37d423463ca5949c36b40274f2968e50f148650c9386eef312e"} Feb 27 16:23:15 crc kubenswrapper[4830]: I0227 16:23:15.466665 4830 scope.go:117] "RemoveContainer" containerID="30e1c78398b7de2fa04252f58c541cb457aea0dcce0e056e3d2ee80cbd726291" Feb 27 16:23:15 crc kubenswrapper[4830]: E0227 16:23:15.467337 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"30e1c78398b7de2fa04252f58c541cb457aea0dcce0e056e3d2ee80cbd726291\": container with ID starting with 30e1c78398b7de2fa04252f58c541cb457aea0dcce0e056e3d2ee80cbd726291 not found: ID does not exist" containerID="30e1c78398b7de2fa04252f58c541cb457aea0dcce0e056e3d2ee80cbd726291" Feb 27 16:23:15 crc kubenswrapper[4830]: I0227 16:23:15.467401 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"30e1c78398b7de2fa04252f58c541cb457aea0dcce0e056e3d2ee80cbd726291"} err="failed to get container status \"30e1c78398b7de2fa04252f58c541cb457aea0dcce0e056e3d2ee80cbd726291\": rpc error: code = NotFound desc = could not find container \"30e1c78398b7de2fa04252f58c541cb457aea0dcce0e056e3d2ee80cbd726291\": container with ID starting with 30e1c78398b7de2fa04252f58c541cb457aea0dcce0e056e3d2ee80cbd726291 not found: ID does not exist" Feb 27 16:23:15 crc kubenswrapper[4830]: I0227 16:23:15.481874 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-kjfn6"] Feb 27 16:23:15 crc kubenswrapper[4830]: I0227 16:23:15.489311 4830 kubelet.go:2431] 
"SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-kjfn6"] Feb 27 16:23:16 crc kubenswrapper[4830]: I0227 16:23:16.778024 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11fbaa05-cf66-40dd-be15-c6474a011768" path="/var/lib/kubelet/pods/11fbaa05-cf66-40dd-be15-c6474a011768/volumes" Feb 27 16:23:18 crc kubenswrapper[4830]: I0227 16:23:18.457395 4830 generic.go:334] "Generic (PLEG): container finished" podID="9ea172fb-feaf-4174-9aaf-e50231dcdf04" containerID="9f46fc51cf0a887f85f2d4d94f97e0b8169728ef8f3e5bbfa6f9e530b69612c1" exitCode=0 Feb 27 16:23:18 crc kubenswrapper[4830]: I0227 16:23:18.457457 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4bq74j" event={"ID":"9ea172fb-feaf-4174-9aaf-e50231dcdf04","Type":"ContainerDied","Data":"9f46fc51cf0a887f85f2d4d94f97e0b8169728ef8f3e5bbfa6f9e530b69612c1"} Feb 27 16:23:19 crc kubenswrapper[4830]: I0227 16:23:19.466971 4830 generic.go:334] "Generic (PLEG): container finished" podID="9ea172fb-feaf-4174-9aaf-e50231dcdf04" containerID="1883e66efb5b8d82613299f10a5da49aabfab0d51453289ebb36ae6a9a79d086" exitCode=0 Feb 27 16:23:19 crc kubenswrapper[4830]: I0227 16:23:19.467375 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4bq74j" event={"ID":"9ea172fb-feaf-4174-9aaf-e50231dcdf04","Type":"ContainerDied","Data":"1883e66efb5b8d82613299f10a5da49aabfab0d51453289ebb36ae6a9a79d086"} Feb 27 16:23:20 crc kubenswrapper[4830]: I0227 16:23:20.842995 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4bq74j" Feb 27 16:23:20 crc kubenswrapper[4830]: I0227 16:23:20.920606 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9ea172fb-feaf-4174-9aaf-e50231dcdf04-util\") pod \"9ea172fb-feaf-4174-9aaf-e50231dcdf04\" (UID: \"9ea172fb-feaf-4174-9aaf-e50231dcdf04\") " Feb 27 16:23:20 crc kubenswrapper[4830]: I0227 16:23:20.920705 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9ea172fb-feaf-4174-9aaf-e50231dcdf04-bundle\") pod \"9ea172fb-feaf-4174-9aaf-e50231dcdf04\" (UID: \"9ea172fb-feaf-4174-9aaf-e50231dcdf04\") " Feb 27 16:23:20 crc kubenswrapper[4830]: I0227 16:23:20.920764 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bswvp\" (UniqueName: \"kubernetes.io/projected/9ea172fb-feaf-4174-9aaf-e50231dcdf04-kube-api-access-bswvp\") pod \"9ea172fb-feaf-4174-9aaf-e50231dcdf04\" (UID: \"9ea172fb-feaf-4174-9aaf-e50231dcdf04\") " Feb 27 16:23:20 crc kubenswrapper[4830]: I0227 16:23:20.922363 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ea172fb-feaf-4174-9aaf-e50231dcdf04-bundle" (OuterVolumeSpecName: "bundle") pod "9ea172fb-feaf-4174-9aaf-e50231dcdf04" (UID: "9ea172fb-feaf-4174-9aaf-e50231dcdf04"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 16:23:20 crc kubenswrapper[4830]: I0227 16:23:20.929108 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ea172fb-feaf-4174-9aaf-e50231dcdf04-kube-api-access-bswvp" (OuterVolumeSpecName: "kube-api-access-bswvp") pod "9ea172fb-feaf-4174-9aaf-e50231dcdf04" (UID: "9ea172fb-feaf-4174-9aaf-e50231dcdf04"). InnerVolumeSpecName "kube-api-access-bswvp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:23:20 crc kubenswrapper[4830]: I0227 16:23:20.935628 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ea172fb-feaf-4174-9aaf-e50231dcdf04-util" (OuterVolumeSpecName: "util") pod "9ea172fb-feaf-4174-9aaf-e50231dcdf04" (UID: "9ea172fb-feaf-4174-9aaf-e50231dcdf04"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 16:23:21 crc kubenswrapper[4830]: I0227 16:23:21.022445 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bswvp\" (UniqueName: \"kubernetes.io/projected/9ea172fb-feaf-4174-9aaf-e50231dcdf04-kube-api-access-bswvp\") on node \"crc\" DevicePath \"\"" Feb 27 16:23:21 crc kubenswrapper[4830]: I0227 16:23:21.022491 4830 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9ea172fb-feaf-4174-9aaf-e50231dcdf04-util\") on node \"crc\" DevicePath \"\"" Feb 27 16:23:21 crc kubenswrapper[4830]: I0227 16:23:21.022511 4830 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9ea172fb-feaf-4174-9aaf-e50231dcdf04-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 16:23:21 crc kubenswrapper[4830]: I0227 16:23:21.484200 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4bq74j" event={"ID":"9ea172fb-feaf-4174-9aaf-e50231dcdf04","Type":"ContainerDied","Data":"be05c73eb0e0b37d423463ca5949c36b40274f2968e50f148650c9386eef312e"} Feb 27 16:23:21 crc kubenswrapper[4830]: I0227 16:23:21.484263 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="be05c73eb0e0b37d423463ca5949c36b40274f2968e50f148650c9386eef312e" Feb 27 16:23:21 crc kubenswrapper[4830]: I0227 16:23:21.484329 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4bq74j" Feb 27 16:23:29 crc kubenswrapper[4830]: I0227 16:23:29.264672 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-64c4cc7899-7w4m7"] Feb 27 16:23:29 crc kubenswrapper[4830]: E0227 16:23:29.265452 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ea172fb-feaf-4174-9aaf-e50231dcdf04" containerName="util" Feb 27 16:23:29 crc kubenswrapper[4830]: I0227 16:23:29.265468 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ea172fb-feaf-4174-9aaf-e50231dcdf04" containerName="util" Feb 27 16:23:29 crc kubenswrapper[4830]: E0227 16:23:29.265484 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11fbaa05-cf66-40dd-be15-c6474a011768" containerName="console" Feb 27 16:23:29 crc kubenswrapper[4830]: I0227 16:23:29.265495 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="11fbaa05-cf66-40dd-be15-c6474a011768" containerName="console" Feb 27 16:23:29 crc kubenswrapper[4830]: E0227 16:23:29.265509 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ea172fb-feaf-4174-9aaf-e50231dcdf04" containerName="pull" Feb 27 16:23:29 crc kubenswrapper[4830]: I0227 16:23:29.265519 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ea172fb-feaf-4174-9aaf-e50231dcdf04" containerName="pull" Feb 27 16:23:29 crc kubenswrapper[4830]: E0227 16:23:29.265535 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ea172fb-feaf-4174-9aaf-e50231dcdf04" containerName="extract" Feb 27 16:23:29 crc kubenswrapper[4830]: I0227 16:23:29.265544 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ea172fb-feaf-4174-9aaf-e50231dcdf04" containerName="extract" Feb 27 16:23:29 crc kubenswrapper[4830]: I0227 16:23:29.265680 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ea172fb-feaf-4174-9aaf-e50231dcdf04" 
containerName="extract" Feb 27 16:23:29 crc kubenswrapper[4830]: I0227 16:23:29.265692 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="11fbaa05-cf66-40dd-be15-c6474a011768" containerName="console" Feb 27 16:23:29 crc kubenswrapper[4830]: I0227 16:23:29.266133 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-64c4cc7899-7w4m7" Feb 27 16:23:29 crc kubenswrapper[4830]: I0227 16:23:29.268833 4830 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Feb 27 16:23:29 crc kubenswrapper[4830]: I0227 16:23:29.268896 4830 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Feb 27 16:23:29 crc kubenswrapper[4830]: I0227 16:23:29.269829 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Feb 27 16:23:29 crc kubenswrapper[4830]: I0227 16:23:29.270851 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Feb 27 16:23:29 crc kubenswrapper[4830]: I0227 16:23:29.271734 4830 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-sjcgb" Feb 27 16:23:29 crc kubenswrapper[4830]: I0227 16:23:29.315740 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-64c4cc7899-7w4m7"] Feb 27 16:23:29 crc kubenswrapper[4830]: I0227 16:23:29.431663 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5352a317-0150-4796-91dc-e91251c1bc20-apiservice-cert\") pod \"metallb-operator-controller-manager-64c4cc7899-7w4m7\" (UID: \"5352a317-0150-4796-91dc-e91251c1bc20\") " pod="metallb-system/metallb-operator-controller-manager-64c4cc7899-7w4m7" Feb 
27 16:23:29 crc kubenswrapper[4830]: I0227 16:23:29.431964 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7vcb\" (UniqueName: \"kubernetes.io/projected/5352a317-0150-4796-91dc-e91251c1bc20-kube-api-access-j7vcb\") pod \"metallb-operator-controller-manager-64c4cc7899-7w4m7\" (UID: \"5352a317-0150-4796-91dc-e91251c1bc20\") " pod="metallb-system/metallb-operator-controller-manager-64c4cc7899-7w4m7" Feb 27 16:23:29 crc kubenswrapper[4830]: I0227 16:23:29.432065 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5352a317-0150-4796-91dc-e91251c1bc20-webhook-cert\") pod \"metallb-operator-controller-manager-64c4cc7899-7w4m7\" (UID: \"5352a317-0150-4796-91dc-e91251c1bc20\") " pod="metallb-system/metallb-operator-controller-manager-64c4cc7899-7w4m7" Feb 27 16:23:29 crc kubenswrapper[4830]: I0227 16:23:29.533844 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7vcb\" (UniqueName: \"kubernetes.io/projected/5352a317-0150-4796-91dc-e91251c1bc20-kube-api-access-j7vcb\") pod \"metallb-operator-controller-manager-64c4cc7899-7w4m7\" (UID: \"5352a317-0150-4796-91dc-e91251c1bc20\") " pod="metallb-system/metallb-operator-controller-manager-64c4cc7899-7w4m7" Feb 27 16:23:29 crc kubenswrapper[4830]: I0227 16:23:29.533914 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5352a317-0150-4796-91dc-e91251c1bc20-webhook-cert\") pod \"metallb-operator-controller-manager-64c4cc7899-7w4m7\" (UID: \"5352a317-0150-4796-91dc-e91251c1bc20\") " pod="metallb-system/metallb-operator-controller-manager-64c4cc7899-7w4m7" Feb 27 16:23:29 crc kubenswrapper[4830]: I0227 16:23:29.534018 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/5352a317-0150-4796-91dc-e91251c1bc20-apiservice-cert\") pod \"metallb-operator-controller-manager-64c4cc7899-7w4m7\" (UID: \"5352a317-0150-4796-91dc-e91251c1bc20\") " pod="metallb-system/metallb-operator-controller-manager-64c4cc7899-7w4m7" Feb 27 16:23:29 crc kubenswrapper[4830]: I0227 16:23:29.539168 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5352a317-0150-4796-91dc-e91251c1bc20-webhook-cert\") pod \"metallb-operator-controller-manager-64c4cc7899-7w4m7\" (UID: \"5352a317-0150-4796-91dc-e91251c1bc20\") " pod="metallb-system/metallb-operator-controller-manager-64c4cc7899-7w4m7" Feb 27 16:23:29 crc kubenswrapper[4830]: I0227 16:23:29.539216 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5352a317-0150-4796-91dc-e91251c1bc20-apiservice-cert\") pod \"metallb-operator-controller-manager-64c4cc7899-7w4m7\" (UID: \"5352a317-0150-4796-91dc-e91251c1bc20\") " pod="metallb-system/metallb-operator-controller-manager-64c4cc7899-7w4m7" Feb 27 16:23:29 crc kubenswrapper[4830]: I0227 16:23:29.575276 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7vcb\" (UniqueName: \"kubernetes.io/projected/5352a317-0150-4796-91dc-e91251c1bc20-kube-api-access-j7vcb\") pod \"metallb-operator-controller-manager-64c4cc7899-7w4m7\" (UID: \"5352a317-0150-4796-91dc-e91251c1bc20\") " pod="metallb-system/metallb-operator-controller-manager-64c4cc7899-7w4m7" Feb 27 16:23:29 crc kubenswrapper[4830]: I0227 16:23:29.580371 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-64c4cc7899-7w4m7" Feb 27 16:23:29 crc kubenswrapper[4830]: I0227 16:23:29.653356 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-7955dd9b7b-tb4vn"] Feb 27 16:23:29 crc kubenswrapper[4830]: I0227 16:23:29.654226 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-7955dd9b7b-tb4vn" Feb 27 16:23:29 crc kubenswrapper[4830]: I0227 16:23:29.656230 4830 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-hscp2" Feb 27 16:23:29 crc kubenswrapper[4830]: I0227 16:23:29.656511 4830 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Feb 27 16:23:29 crc kubenswrapper[4830]: I0227 16:23:29.656728 4830 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 27 16:23:29 crc kubenswrapper[4830]: I0227 16:23:29.673268 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-7955dd9b7b-tb4vn"] Feb 27 16:23:29 crc kubenswrapper[4830]: I0227 16:23:29.736067 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/76c3c72e-3bfb-4b1c-9ab1-fdb798994872-apiservice-cert\") pod \"metallb-operator-webhook-server-7955dd9b7b-tb4vn\" (UID: \"76c3c72e-3bfb-4b1c-9ab1-fdb798994872\") " pod="metallb-system/metallb-operator-webhook-server-7955dd9b7b-tb4vn" Feb 27 16:23:29 crc kubenswrapper[4830]: I0227 16:23:29.736121 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d72wk\" (UniqueName: \"kubernetes.io/projected/76c3c72e-3bfb-4b1c-9ab1-fdb798994872-kube-api-access-d72wk\") pod 
\"metallb-operator-webhook-server-7955dd9b7b-tb4vn\" (UID: \"76c3c72e-3bfb-4b1c-9ab1-fdb798994872\") " pod="metallb-system/metallb-operator-webhook-server-7955dd9b7b-tb4vn" Feb 27 16:23:29 crc kubenswrapper[4830]: I0227 16:23:29.736171 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/76c3c72e-3bfb-4b1c-9ab1-fdb798994872-webhook-cert\") pod \"metallb-operator-webhook-server-7955dd9b7b-tb4vn\" (UID: \"76c3c72e-3bfb-4b1c-9ab1-fdb798994872\") " pod="metallb-system/metallb-operator-webhook-server-7955dd9b7b-tb4vn" Feb 27 16:23:29 crc kubenswrapper[4830]: I0227 16:23:29.836800 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d72wk\" (UniqueName: \"kubernetes.io/projected/76c3c72e-3bfb-4b1c-9ab1-fdb798994872-kube-api-access-d72wk\") pod \"metallb-operator-webhook-server-7955dd9b7b-tb4vn\" (UID: \"76c3c72e-3bfb-4b1c-9ab1-fdb798994872\") " pod="metallb-system/metallb-operator-webhook-server-7955dd9b7b-tb4vn" Feb 27 16:23:29 crc kubenswrapper[4830]: I0227 16:23:29.836877 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/76c3c72e-3bfb-4b1c-9ab1-fdb798994872-webhook-cert\") pod \"metallb-operator-webhook-server-7955dd9b7b-tb4vn\" (UID: \"76c3c72e-3bfb-4b1c-9ab1-fdb798994872\") " pod="metallb-system/metallb-operator-webhook-server-7955dd9b7b-tb4vn" Feb 27 16:23:29 crc kubenswrapper[4830]: I0227 16:23:29.836916 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/76c3c72e-3bfb-4b1c-9ab1-fdb798994872-apiservice-cert\") pod \"metallb-operator-webhook-server-7955dd9b7b-tb4vn\" (UID: \"76c3c72e-3bfb-4b1c-9ab1-fdb798994872\") " pod="metallb-system/metallb-operator-webhook-server-7955dd9b7b-tb4vn" Feb 27 16:23:29 crc kubenswrapper[4830]: I0227 16:23:29.841721 4830 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/76c3c72e-3bfb-4b1c-9ab1-fdb798994872-webhook-cert\") pod \"metallb-operator-webhook-server-7955dd9b7b-tb4vn\" (UID: \"76c3c72e-3bfb-4b1c-9ab1-fdb798994872\") " pod="metallb-system/metallb-operator-webhook-server-7955dd9b7b-tb4vn" Feb 27 16:23:29 crc kubenswrapper[4830]: I0227 16:23:29.845647 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/76c3c72e-3bfb-4b1c-9ab1-fdb798994872-apiservice-cert\") pod \"metallb-operator-webhook-server-7955dd9b7b-tb4vn\" (UID: \"76c3c72e-3bfb-4b1c-9ab1-fdb798994872\") " pod="metallb-system/metallb-operator-webhook-server-7955dd9b7b-tb4vn" Feb 27 16:23:29 crc kubenswrapper[4830]: I0227 16:23:29.856923 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d72wk\" (UniqueName: \"kubernetes.io/projected/76c3c72e-3bfb-4b1c-9ab1-fdb798994872-kube-api-access-d72wk\") pod \"metallb-operator-webhook-server-7955dd9b7b-tb4vn\" (UID: \"76c3c72e-3bfb-4b1c-9ab1-fdb798994872\") " pod="metallb-system/metallb-operator-webhook-server-7955dd9b7b-tb4vn" Feb 27 16:23:29 crc kubenswrapper[4830]: I0227 16:23:29.973469 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-7955dd9b7b-tb4vn" Feb 27 16:23:30 crc kubenswrapper[4830]: I0227 16:23:30.137375 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-64c4cc7899-7w4m7"] Feb 27 16:23:30 crc kubenswrapper[4830]: I0227 16:23:30.426332 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-7955dd9b7b-tb4vn"] Feb 27 16:23:30 crc kubenswrapper[4830]: W0227 16:23:30.431189 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod76c3c72e_3bfb_4b1c_9ab1_fdb798994872.slice/crio-50ace13269a03d6b70cc4a05d9471e4fc1843981b58273ec1be0c6a46abdd94e WatchSource:0}: Error finding container 50ace13269a03d6b70cc4a05d9471e4fc1843981b58273ec1be0c6a46abdd94e: Status 404 returned error can't find the container with id 50ace13269a03d6b70cc4a05d9471e4fc1843981b58273ec1be0c6a46abdd94e Feb 27 16:23:30 crc kubenswrapper[4830]: I0227 16:23:30.537347 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-64c4cc7899-7w4m7" event={"ID":"5352a317-0150-4796-91dc-e91251c1bc20","Type":"ContainerStarted","Data":"5ce225e606a73ef94023603131060c337dcc6e4c9784eb7b301eab4ff960c8d9"} Feb 27 16:23:30 crc kubenswrapper[4830]: I0227 16:23:30.538767 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-7955dd9b7b-tb4vn" event={"ID":"76c3c72e-3bfb-4b1c-9ab1-fdb798994872","Type":"ContainerStarted","Data":"50ace13269a03d6b70cc4a05d9471e4fc1843981b58273ec1be0c6a46abdd94e"} Feb 27 16:23:33 crc kubenswrapper[4830]: I0227 16:23:33.161102 4830 patch_prober.go:28] interesting pod/machine-config-daemon-2tv5v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 16:23:33 crc kubenswrapper[4830]: I0227 16:23:33.161470 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 16:23:33 crc kubenswrapper[4830]: I0227 16:23:33.571676 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-64c4cc7899-7w4m7" event={"ID":"5352a317-0150-4796-91dc-e91251c1bc20","Type":"ContainerStarted","Data":"3bf1090b439593435dfa14ce907994bb1fa7f2046af627383f39113f42852089"} Feb 27 16:23:33 crc kubenswrapper[4830]: I0227 16:23:33.572218 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-64c4cc7899-7w4m7" Feb 27 16:23:33 crc kubenswrapper[4830]: I0227 16:23:33.595960 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-64c4cc7899-7w4m7" podStartSLOduration=1.385947898 podStartE2EDuration="4.595928306s" podCreationTimestamp="2026-02-27 16:23:29 +0000 UTC" firstStartedPulling="2026-02-27 16:23:30.142217286 +0000 UTC m=+1006.231489759" lastFinishedPulling="2026-02-27 16:23:33.352197694 +0000 UTC m=+1009.441470167" observedRunningTime="2026-02-27 16:23:33.589420613 +0000 UTC m=+1009.678693076" watchObservedRunningTime="2026-02-27 16:23:33.595928306 +0000 UTC m=+1009.685200769" Feb 27 16:23:37 crc kubenswrapper[4830]: I0227 16:23:37.596814 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-7955dd9b7b-tb4vn" event={"ID":"76c3c72e-3bfb-4b1c-9ab1-fdb798994872","Type":"ContainerStarted","Data":"4b5ca985905f05c8a8bd470b14e984603343e3bd6296e9b5f9f81e12202aa284"} 
Feb 27 16:23:37 crc kubenswrapper[4830]: I0227 16:23:37.597320 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-7955dd9b7b-tb4vn" Feb 27 16:23:37 crc kubenswrapper[4830]: I0227 16:23:37.627931 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-7955dd9b7b-tb4vn" podStartSLOduration=2.355249156 podStartE2EDuration="8.627902054s" podCreationTimestamp="2026-02-27 16:23:29 +0000 UTC" firstStartedPulling="2026-02-27 16:23:30.436060832 +0000 UTC m=+1006.525333295" lastFinishedPulling="2026-02-27 16:23:36.70871372 +0000 UTC m=+1012.797986193" observedRunningTime="2026-02-27 16:23:37.625523194 +0000 UTC m=+1013.714795657" watchObservedRunningTime="2026-02-27 16:23:37.627902054 +0000 UTC m=+1013.717174557" Feb 27 16:23:49 crc kubenswrapper[4830]: I0227 16:23:49.980488 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-7955dd9b7b-tb4vn" Feb 27 16:23:50 crc kubenswrapper[4830]: I0227 16:23:50.188707 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-p25hj"] Feb 27 16:23:50 crc kubenswrapper[4830]: I0227 16:23:50.189897 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p25hj" Feb 27 16:23:50 crc kubenswrapper[4830]: I0227 16:23:50.208807 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-p25hj"] Feb 27 16:23:50 crc kubenswrapper[4830]: I0227 16:23:50.230410 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f488eb5b-f0bb-4fae-875a-0dc00a9d63d8-catalog-content\") pod \"redhat-marketplace-p25hj\" (UID: \"f488eb5b-f0bb-4fae-875a-0dc00a9d63d8\") " pod="openshift-marketplace/redhat-marketplace-p25hj" Feb 27 16:23:50 crc kubenswrapper[4830]: I0227 16:23:50.230677 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6hl4\" (UniqueName: \"kubernetes.io/projected/f488eb5b-f0bb-4fae-875a-0dc00a9d63d8-kube-api-access-f6hl4\") pod \"redhat-marketplace-p25hj\" (UID: \"f488eb5b-f0bb-4fae-875a-0dc00a9d63d8\") " pod="openshift-marketplace/redhat-marketplace-p25hj" Feb 27 16:23:50 crc kubenswrapper[4830]: I0227 16:23:50.230818 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f488eb5b-f0bb-4fae-875a-0dc00a9d63d8-utilities\") pod \"redhat-marketplace-p25hj\" (UID: \"f488eb5b-f0bb-4fae-875a-0dc00a9d63d8\") " pod="openshift-marketplace/redhat-marketplace-p25hj" Feb 27 16:23:50 crc kubenswrapper[4830]: I0227 16:23:50.332248 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f488eb5b-f0bb-4fae-875a-0dc00a9d63d8-catalog-content\") pod \"redhat-marketplace-p25hj\" (UID: \"f488eb5b-f0bb-4fae-875a-0dc00a9d63d8\") " pod="openshift-marketplace/redhat-marketplace-p25hj" Feb 27 16:23:50 crc kubenswrapper[4830]: I0227 16:23:50.332293 4830 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-f6hl4\" (UniqueName: \"kubernetes.io/projected/f488eb5b-f0bb-4fae-875a-0dc00a9d63d8-kube-api-access-f6hl4\") pod \"redhat-marketplace-p25hj\" (UID: \"f488eb5b-f0bb-4fae-875a-0dc00a9d63d8\") " pod="openshift-marketplace/redhat-marketplace-p25hj" Feb 27 16:23:50 crc kubenswrapper[4830]: I0227 16:23:50.332333 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f488eb5b-f0bb-4fae-875a-0dc00a9d63d8-utilities\") pod \"redhat-marketplace-p25hj\" (UID: \"f488eb5b-f0bb-4fae-875a-0dc00a9d63d8\") " pod="openshift-marketplace/redhat-marketplace-p25hj" Feb 27 16:23:50 crc kubenswrapper[4830]: I0227 16:23:50.332725 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f488eb5b-f0bb-4fae-875a-0dc00a9d63d8-utilities\") pod \"redhat-marketplace-p25hj\" (UID: \"f488eb5b-f0bb-4fae-875a-0dc00a9d63d8\") " pod="openshift-marketplace/redhat-marketplace-p25hj" Feb 27 16:23:50 crc kubenswrapper[4830]: I0227 16:23:50.332807 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f488eb5b-f0bb-4fae-875a-0dc00a9d63d8-catalog-content\") pod \"redhat-marketplace-p25hj\" (UID: \"f488eb5b-f0bb-4fae-875a-0dc00a9d63d8\") " pod="openshift-marketplace/redhat-marketplace-p25hj" Feb 27 16:23:50 crc kubenswrapper[4830]: I0227 16:23:50.353216 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f6hl4\" (UniqueName: \"kubernetes.io/projected/f488eb5b-f0bb-4fae-875a-0dc00a9d63d8-kube-api-access-f6hl4\") pod \"redhat-marketplace-p25hj\" (UID: \"f488eb5b-f0bb-4fae-875a-0dc00a9d63d8\") " pod="openshift-marketplace/redhat-marketplace-p25hj" Feb 27 16:23:50 crc kubenswrapper[4830]: I0227 16:23:50.506651 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p25hj" Feb 27 16:23:50 crc kubenswrapper[4830]: I0227 16:23:50.748731 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-p25hj"] Feb 27 16:23:50 crc kubenswrapper[4830]: W0227 16:23:50.756158 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf488eb5b_f0bb_4fae_875a_0dc00a9d63d8.slice/crio-ec35caf6c6def119f0e943c2cbbb73d6dfc680a72f8b609845649d329c98fce6 WatchSource:0}: Error finding container ec35caf6c6def119f0e943c2cbbb73d6dfc680a72f8b609845649d329c98fce6: Status 404 returned error can't find the container with id ec35caf6c6def119f0e943c2cbbb73d6dfc680a72f8b609845649d329c98fce6 Feb 27 16:23:51 crc kubenswrapper[4830]: I0227 16:23:51.691367 4830 generic.go:334] "Generic (PLEG): container finished" podID="f488eb5b-f0bb-4fae-875a-0dc00a9d63d8" containerID="f1d80cf62168e26e5b574015c75de9840256c891c760d8163a22ca4d060cb58b" exitCode=0 Feb 27 16:23:51 crc kubenswrapper[4830]: I0227 16:23:51.691484 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p25hj" event={"ID":"f488eb5b-f0bb-4fae-875a-0dc00a9d63d8","Type":"ContainerDied","Data":"f1d80cf62168e26e5b574015c75de9840256c891c760d8163a22ca4d060cb58b"} Feb 27 16:23:51 crc kubenswrapper[4830]: I0227 16:23:51.691854 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p25hj" event={"ID":"f488eb5b-f0bb-4fae-875a-0dc00a9d63d8","Type":"ContainerStarted","Data":"ec35caf6c6def119f0e943c2cbbb73d6dfc680a72f8b609845649d329c98fce6"} Feb 27 16:23:52 crc kubenswrapper[4830]: I0227 16:23:52.702619 4830 generic.go:334] "Generic (PLEG): container finished" podID="f488eb5b-f0bb-4fae-875a-0dc00a9d63d8" containerID="0f1b3b48167a509df687681e7351a4f58bb46d4915943352746beb5f578a0865" exitCode=0 Feb 27 16:23:52 crc kubenswrapper[4830]: I0227 
16:23:52.702776 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p25hj" event={"ID":"f488eb5b-f0bb-4fae-875a-0dc00a9d63d8","Type":"ContainerDied","Data":"0f1b3b48167a509df687681e7351a4f58bb46d4915943352746beb5f578a0865"} Feb 27 16:23:53 crc kubenswrapper[4830]: I0227 16:23:53.712197 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p25hj" event={"ID":"f488eb5b-f0bb-4fae-875a-0dc00a9d63d8","Type":"ContainerStarted","Data":"edf366612e25af39bbb1ae77746eef4fd44f63e503a640e1b719118d113a2168"} Feb 27 16:23:53 crc kubenswrapper[4830]: I0227 16:23:53.736862 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-p25hj" podStartSLOduration=2.356277056 podStartE2EDuration="3.736839471s" podCreationTimestamp="2026-02-27 16:23:50 +0000 UTC" firstStartedPulling="2026-02-27 16:23:51.700042776 +0000 UTC m=+1027.789315249" lastFinishedPulling="2026-02-27 16:23:53.080605201 +0000 UTC m=+1029.169877664" observedRunningTime="2026-02-27 16:23:53.734351738 +0000 UTC m=+1029.823624231" watchObservedRunningTime="2026-02-27 16:23:53.736839471 +0000 UTC m=+1029.826111964" Feb 27 16:24:00 crc kubenswrapper[4830]: I0227 16:24:00.146076 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536824-fd6f8"] Feb 27 16:24:00 crc kubenswrapper[4830]: I0227 16:24:00.147760 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536824-fd6f8" Feb 27 16:24:00 crc kubenswrapper[4830]: I0227 16:24:00.151441 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 16:24:00 crc kubenswrapper[4830]: I0227 16:24:00.151583 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 16:24:00 crc kubenswrapper[4830]: I0227 16:24:00.151634 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lmbtm" Feb 27 16:24:00 crc kubenswrapper[4830]: I0227 16:24:00.159010 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536824-fd6f8"] Feb 27 16:24:00 crc kubenswrapper[4830]: I0227 16:24:00.279605 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nh2v\" (UniqueName: \"kubernetes.io/projected/9d13ca02-7160-46d2-9c14-c123b6e44512-kube-api-access-9nh2v\") pod \"auto-csr-approver-29536824-fd6f8\" (UID: \"9d13ca02-7160-46d2-9c14-c123b6e44512\") " pod="openshift-infra/auto-csr-approver-29536824-fd6f8" Feb 27 16:24:00 crc kubenswrapper[4830]: I0227 16:24:00.381433 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9nh2v\" (UniqueName: \"kubernetes.io/projected/9d13ca02-7160-46d2-9c14-c123b6e44512-kube-api-access-9nh2v\") pod \"auto-csr-approver-29536824-fd6f8\" (UID: \"9d13ca02-7160-46d2-9c14-c123b6e44512\") " pod="openshift-infra/auto-csr-approver-29536824-fd6f8" Feb 27 16:24:00 crc kubenswrapper[4830]: I0227 16:24:00.417984 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9nh2v\" (UniqueName: \"kubernetes.io/projected/9d13ca02-7160-46d2-9c14-c123b6e44512-kube-api-access-9nh2v\") pod \"auto-csr-approver-29536824-fd6f8\" (UID: \"9d13ca02-7160-46d2-9c14-c123b6e44512\") " 
pod="openshift-infra/auto-csr-approver-29536824-fd6f8" Feb 27 16:24:00 crc kubenswrapper[4830]: I0227 16:24:00.498183 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536824-fd6f8" Feb 27 16:24:00 crc kubenswrapper[4830]: I0227 16:24:00.507417 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-p25hj" Feb 27 16:24:00 crc kubenswrapper[4830]: I0227 16:24:00.507859 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-p25hj" Feb 27 16:24:00 crc kubenswrapper[4830]: I0227 16:24:00.590601 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-p25hj" Feb 27 16:24:00 crc kubenswrapper[4830]: I0227 16:24:00.815857 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-p25hj" Feb 27 16:24:00 crc kubenswrapper[4830]: I0227 16:24:00.841114 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536824-fd6f8"] Feb 27 16:24:00 crc kubenswrapper[4830]: I0227 16:24:00.856489 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-p25hj"] Feb 27 16:24:01 crc kubenswrapper[4830]: I0227 16:24:01.768970 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536824-fd6f8" event={"ID":"9d13ca02-7160-46d2-9c14-c123b6e44512","Type":"ContainerStarted","Data":"8eb69447fcf25aba4623b256f694c2573b8f717050e1623aa8d383c5a2d0a065"} Feb 27 16:24:02 crc kubenswrapper[4830]: I0227 16:24:02.782308 4830 generic.go:334] "Generic (PLEG): container finished" podID="9d13ca02-7160-46d2-9c14-c123b6e44512" containerID="427efb10b90dced1a6f6d81475fe71ba7d102b5583d7add27988e759bbb7b566" exitCode=0 Feb 27 16:24:02 crc kubenswrapper[4830]: I0227 16:24:02.782452 
4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536824-fd6f8" event={"ID":"9d13ca02-7160-46d2-9c14-c123b6e44512","Type":"ContainerDied","Data":"427efb10b90dced1a6f6d81475fe71ba7d102b5583d7add27988e759bbb7b566"} Feb 27 16:24:02 crc kubenswrapper[4830]: I0227 16:24:02.782601 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-p25hj" podUID="f488eb5b-f0bb-4fae-875a-0dc00a9d63d8" containerName="registry-server" containerID="cri-o://edf366612e25af39bbb1ae77746eef4fd44f63e503a640e1b719118d113a2168" gracePeriod=2 Feb 27 16:24:03 crc kubenswrapper[4830]: I0227 16:24:03.160770 4830 patch_prober.go:28] interesting pod/machine-config-daemon-2tv5v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 16:24:03 crc kubenswrapper[4830]: I0227 16:24:03.161284 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 16:24:03 crc kubenswrapper[4830]: I0227 16:24:03.161349 4830 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" Feb 27 16:24:03 crc kubenswrapper[4830]: I0227 16:24:03.162230 4830 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e43810c75db22ebd0d19e92c6c2850742cda834a0ba155fedd3f4498a6dd6d20"} pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" containerMessage="Container machine-config-daemon failed liveness probe, 
will be restarted" Feb 27 16:24:03 crc kubenswrapper[4830]: I0227 16:24:03.162335 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" containerName="machine-config-daemon" containerID="cri-o://e43810c75db22ebd0d19e92c6c2850742cda834a0ba155fedd3f4498a6dd6d20" gracePeriod=600 Feb 27 16:24:03 crc kubenswrapper[4830]: I0227 16:24:03.233149 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p25hj" Feb 27 16:24:03 crc kubenswrapper[4830]: I0227 16:24:03.325646 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f488eb5b-f0bb-4fae-875a-0dc00a9d63d8-catalog-content\") pod \"f488eb5b-f0bb-4fae-875a-0dc00a9d63d8\" (UID: \"f488eb5b-f0bb-4fae-875a-0dc00a9d63d8\") " Feb 27 16:24:03 crc kubenswrapper[4830]: I0227 16:24:03.325819 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f488eb5b-f0bb-4fae-875a-0dc00a9d63d8-utilities\") pod \"f488eb5b-f0bb-4fae-875a-0dc00a9d63d8\" (UID: \"f488eb5b-f0bb-4fae-875a-0dc00a9d63d8\") " Feb 27 16:24:03 crc kubenswrapper[4830]: I0227 16:24:03.325890 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f6hl4\" (UniqueName: \"kubernetes.io/projected/f488eb5b-f0bb-4fae-875a-0dc00a9d63d8-kube-api-access-f6hl4\") pod \"f488eb5b-f0bb-4fae-875a-0dc00a9d63d8\" (UID: \"f488eb5b-f0bb-4fae-875a-0dc00a9d63d8\") " Feb 27 16:24:03 crc kubenswrapper[4830]: I0227 16:24:03.327143 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f488eb5b-f0bb-4fae-875a-0dc00a9d63d8-utilities" (OuterVolumeSpecName: "utilities") pod "f488eb5b-f0bb-4fae-875a-0dc00a9d63d8" (UID: 
"f488eb5b-f0bb-4fae-875a-0dc00a9d63d8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 16:24:03 crc kubenswrapper[4830]: I0227 16:24:03.338106 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f488eb5b-f0bb-4fae-875a-0dc00a9d63d8-kube-api-access-f6hl4" (OuterVolumeSpecName: "kube-api-access-f6hl4") pod "f488eb5b-f0bb-4fae-875a-0dc00a9d63d8" (UID: "f488eb5b-f0bb-4fae-875a-0dc00a9d63d8"). InnerVolumeSpecName "kube-api-access-f6hl4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:24:03 crc kubenswrapper[4830]: I0227 16:24:03.365531 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f488eb5b-f0bb-4fae-875a-0dc00a9d63d8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f488eb5b-f0bb-4fae-875a-0dc00a9d63d8" (UID: "f488eb5b-f0bb-4fae-875a-0dc00a9d63d8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 16:24:03 crc kubenswrapper[4830]: I0227 16:24:03.427202 4830 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f488eb5b-f0bb-4fae-875a-0dc00a9d63d8-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 16:24:03 crc kubenswrapper[4830]: I0227 16:24:03.427461 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f6hl4\" (UniqueName: \"kubernetes.io/projected/f488eb5b-f0bb-4fae-875a-0dc00a9d63d8-kube-api-access-f6hl4\") on node \"crc\" DevicePath \"\"" Feb 27 16:24:03 crc kubenswrapper[4830]: I0227 16:24:03.427471 4830 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f488eb5b-f0bb-4fae-875a-0dc00a9d63d8-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 16:24:03 crc kubenswrapper[4830]: I0227 16:24:03.793738 4830 generic.go:334] "Generic (PLEG): container finished" 
podID="00d6b7ce-4757-4275-8345-60c1b546ce25" containerID="e43810c75db22ebd0d19e92c6c2850742cda834a0ba155fedd3f4498a6dd6d20" exitCode=0 Feb 27 16:24:03 crc kubenswrapper[4830]: I0227 16:24:03.793841 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" event={"ID":"00d6b7ce-4757-4275-8345-60c1b546ce25","Type":"ContainerDied","Data":"e43810c75db22ebd0d19e92c6c2850742cda834a0ba155fedd3f4498a6dd6d20"} Feb 27 16:24:03 crc kubenswrapper[4830]: I0227 16:24:03.793909 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" event={"ID":"00d6b7ce-4757-4275-8345-60c1b546ce25","Type":"ContainerStarted","Data":"471097b7c348ccaf71a4c92a38d56632d777ed06a5ddca169a907c05253b1349"} Feb 27 16:24:03 crc kubenswrapper[4830]: I0227 16:24:03.793975 4830 scope.go:117] "RemoveContainer" containerID="4111740fc2dfad5826ea06b4b6f06e8a362844590f5bbcb26cd71fafa0b5a6e3" Feb 27 16:24:03 crc kubenswrapper[4830]: I0227 16:24:03.799774 4830 generic.go:334] "Generic (PLEG): container finished" podID="f488eb5b-f0bb-4fae-875a-0dc00a9d63d8" containerID="edf366612e25af39bbb1ae77746eef4fd44f63e503a640e1b719118d113a2168" exitCode=0 Feb 27 16:24:03 crc kubenswrapper[4830]: I0227 16:24:03.800053 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p25hj" Feb 27 16:24:03 crc kubenswrapper[4830]: I0227 16:24:03.801128 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p25hj" event={"ID":"f488eb5b-f0bb-4fae-875a-0dc00a9d63d8","Type":"ContainerDied","Data":"edf366612e25af39bbb1ae77746eef4fd44f63e503a640e1b719118d113a2168"} Feb 27 16:24:03 crc kubenswrapper[4830]: I0227 16:24:03.801214 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p25hj" event={"ID":"f488eb5b-f0bb-4fae-875a-0dc00a9d63d8","Type":"ContainerDied","Data":"ec35caf6c6def119f0e943c2cbbb73d6dfc680a72f8b609845649d329c98fce6"} Feb 27 16:24:03 crc kubenswrapper[4830]: I0227 16:24:03.839219 4830 scope.go:117] "RemoveContainer" containerID="edf366612e25af39bbb1ae77746eef4fd44f63e503a640e1b719118d113a2168" Feb 27 16:24:03 crc kubenswrapper[4830]: I0227 16:24:03.842888 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-p25hj"] Feb 27 16:24:03 crc kubenswrapper[4830]: I0227 16:24:03.847154 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-p25hj"] Feb 27 16:24:03 crc kubenswrapper[4830]: I0227 16:24:03.866963 4830 scope.go:117] "RemoveContainer" containerID="0f1b3b48167a509df687681e7351a4f58bb46d4915943352746beb5f578a0865" Feb 27 16:24:03 crc kubenswrapper[4830]: I0227 16:24:03.885384 4830 scope.go:117] "RemoveContainer" containerID="f1d80cf62168e26e5b574015c75de9840256c891c760d8163a22ca4d060cb58b" Feb 27 16:24:03 crc kubenswrapper[4830]: I0227 16:24:03.905691 4830 scope.go:117] "RemoveContainer" containerID="edf366612e25af39bbb1ae77746eef4fd44f63e503a640e1b719118d113a2168" Feb 27 16:24:03 crc kubenswrapper[4830]: E0227 16:24:03.907418 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"edf366612e25af39bbb1ae77746eef4fd44f63e503a640e1b719118d113a2168\": container with ID starting with edf366612e25af39bbb1ae77746eef4fd44f63e503a640e1b719118d113a2168 not found: ID does not exist" containerID="edf366612e25af39bbb1ae77746eef4fd44f63e503a640e1b719118d113a2168" Feb 27 16:24:03 crc kubenswrapper[4830]: I0227 16:24:03.907465 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"edf366612e25af39bbb1ae77746eef4fd44f63e503a640e1b719118d113a2168"} err="failed to get container status \"edf366612e25af39bbb1ae77746eef4fd44f63e503a640e1b719118d113a2168\": rpc error: code = NotFound desc = could not find container \"edf366612e25af39bbb1ae77746eef4fd44f63e503a640e1b719118d113a2168\": container with ID starting with edf366612e25af39bbb1ae77746eef4fd44f63e503a640e1b719118d113a2168 not found: ID does not exist" Feb 27 16:24:03 crc kubenswrapper[4830]: I0227 16:24:03.907503 4830 scope.go:117] "RemoveContainer" containerID="0f1b3b48167a509df687681e7351a4f58bb46d4915943352746beb5f578a0865" Feb 27 16:24:03 crc kubenswrapper[4830]: E0227 16:24:03.907847 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f1b3b48167a509df687681e7351a4f58bb46d4915943352746beb5f578a0865\": container with ID starting with 0f1b3b48167a509df687681e7351a4f58bb46d4915943352746beb5f578a0865 not found: ID does not exist" containerID="0f1b3b48167a509df687681e7351a4f58bb46d4915943352746beb5f578a0865" Feb 27 16:24:03 crc kubenswrapper[4830]: I0227 16:24:03.907883 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f1b3b48167a509df687681e7351a4f58bb46d4915943352746beb5f578a0865"} err="failed to get container status \"0f1b3b48167a509df687681e7351a4f58bb46d4915943352746beb5f578a0865\": rpc error: code = NotFound desc = could not find container \"0f1b3b48167a509df687681e7351a4f58bb46d4915943352746beb5f578a0865\": container with ID 
starting with 0f1b3b48167a509df687681e7351a4f58bb46d4915943352746beb5f578a0865 not found: ID does not exist" Feb 27 16:24:03 crc kubenswrapper[4830]: I0227 16:24:03.907910 4830 scope.go:117] "RemoveContainer" containerID="f1d80cf62168e26e5b574015c75de9840256c891c760d8163a22ca4d060cb58b" Feb 27 16:24:03 crc kubenswrapper[4830]: E0227 16:24:03.908284 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f1d80cf62168e26e5b574015c75de9840256c891c760d8163a22ca4d060cb58b\": container with ID starting with f1d80cf62168e26e5b574015c75de9840256c891c760d8163a22ca4d060cb58b not found: ID does not exist" containerID="f1d80cf62168e26e5b574015c75de9840256c891c760d8163a22ca4d060cb58b" Feb 27 16:24:03 crc kubenswrapper[4830]: I0227 16:24:03.908323 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1d80cf62168e26e5b574015c75de9840256c891c760d8163a22ca4d060cb58b"} err="failed to get container status \"f1d80cf62168e26e5b574015c75de9840256c891c760d8163a22ca4d060cb58b\": rpc error: code = NotFound desc = could not find container \"f1d80cf62168e26e5b574015c75de9840256c891c760d8163a22ca4d060cb58b\": container with ID starting with f1d80cf62168e26e5b574015c75de9840256c891c760d8163a22ca4d060cb58b not found: ID does not exist" Feb 27 16:24:04 crc kubenswrapper[4830]: I0227 16:24:04.076263 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536824-fd6f8" Feb 27 16:24:04 crc kubenswrapper[4830]: I0227 16:24:04.135782 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9nh2v\" (UniqueName: \"kubernetes.io/projected/9d13ca02-7160-46d2-9c14-c123b6e44512-kube-api-access-9nh2v\") pod \"9d13ca02-7160-46d2-9c14-c123b6e44512\" (UID: \"9d13ca02-7160-46d2-9c14-c123b6e44512\") " Feb 27 16:24:04 crc kubenswrapper[4830]: I0227 16:24:04.144504 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d13ca02-7160-46d2-9c14-c123b6e44512-kube-api-access-9nh2v" (OuterVolumeSpecName: "kube-api-access-9nh2v") pod "9d13ca02-7160-46d2-9c14-c123b6e44512" (UID: "9d13ca02-7160-46d2-9c14-c123b6e44512"). InnerVolumeSpecName "kube-api-access-9nh2v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:24:04 crc kubenswrapper[4830]: I0227 16:24:04.236936 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9nh2v\" (UniqueName: \"kubernetes.io/projected/9d13ca02-7160-46d2-9c14-c123b6e44512-kube-api-access-9nh2v\") on node \"crc\" DevicePath \"\"" Feb 27 16:24:04 crc kubenswrapper[4830]: I0227 16:24:04.778212 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f488eb5b-f0bb-4fae-875a-0dc00a9d63d8" path="/var/lib/kubelet/pods/f488eb5b-f0bb-4fae-875a-0dc00a9d63d8/volumes" Feb 27 16:24:04 crc kubenswrapper[4830]: I0227 16:24:04.814743 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536824-fd6f8" event={"ID":"9d13ca02-7160-46d2-9c14-c123b6e44512","Type":"ContainerDied","Data":"8eb69447fcf25aba4623b256f694c2573b8f717050e1623aa8d383c5a2d0a065"} Feb 27 16:24:04 crc kubenswrapper[4830]: I0227 16:24:04.814781 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8eb69447fcf25aba4623b256f694c2573b8f717050e1623aa8d383c5a2d0a065" Feb 27 16:24:04 
crc kubenswrapper[4830]: I0227 16:24:04.814853 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536824-fd6f8" Feb 27 16:24:05 crc kubenswrapper[4830]: I0227 16:24:05.156037 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536818-l5tbc"] Feb 27 16:24:05 crc kubenswrapper[4830]: I0227 16:24:05.164287 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536818-l5tbc"] Feb 27 16:24:06 crc kubenswrapper[4830]: I0227 16:24:06.250837 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-rmfp5"] Feb 27 16:24:06 crc kubenswrapper[4830]: E0227 16:24:06.251189 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f488eb5b-f0bb-4fae-875a-0dc00a9d63d8" containerName="extract-content" Feb 27 16:24:06 crc kubenswrapper[4830]: I0227 16:24:06.251210 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="f488eb5b-f0bb-4fae-875a-0dc00a9d63d8" containerName="extract-content" Feb 27 16:24:06 crc kubenswrapper[4830]: E0227 16:24:06.251233 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f488eb5b-f0bb-4fae-875a-0dc00a9d63d8" containerName="extract-utilities" Feb 27 16:24:06 crc kubenswrapper[4830]: I0227 16:24:06.251246 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="f488eb5b-f0bb-4fae-875a-0dc00a9d63d8" containerName="extract-utilities" Feb 27 16:24:06 crc kubenswrapper[4830]: E0227 16:24:06.251260 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d13ca02-7160-46d2-9c14-c123b6e44512" containerName="oc" Feb 27 16:24:06 crc kubenswrapper[4830]: I0227 16:24:06.251273 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d13ca02-7160-46d2-9c14-c123b6e44512" containerName="oc" Feb 27 16:24:06 crc kubenswrapper[4830]: E0227 16:24:06.251300 4830 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f488eb5b-f0bb-4fae-875a-0dc00a9d63d8" containerName="registry-server" Feb 27 16:24:06 crc kubenswrapper[4830]: I0227 16:24:06.251319 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="f488eb5b-f0bb-4fae-875a-0dc00a9d63d8" containerName="registry-server" Feb 27 16:24:06 crc kubenswrapper[4830]: I0227 16:24:06.251521 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d13ca02-7160-46d2-9c14-c123b6e44512" containerName="oc" Feb 27 16:24:06 crc kubenswrapper[4830]: I0227 16:24:06.251551 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="f488eb5b-f0bb-4fae-875a-0dc00a9d63d8" containerName="registry-server" Feb 27 16:24:06 crc kubenswrapper[4830]: I0227 16:24:06.253406 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rmfp5" Feb 27 16:24:06 crc kubenswrapper[4830]: I0227 16:24:06.267122 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxjl2\" (UniqueName: \"kubernetes.io/projected/01d484a1-3bf1-4f54-b0b1-7ef6adfe9a0d-kube-api-access-hxjl2\") pod \"community-operators-rmfp5\" (UID: \"01d484a1-3bf1-4f54-b0b1-7ef6adfe9a0d\") " pod="openshift-marketplace/community-operators-rmfp5" Feb 27 16:24:06 crc kubenswrapper[4830]: I0227 16:24:06.267233 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01d484a1-3bf1-4f54-b0b1-7ef6adfe9a0d-utilities\") pod \"community-operators-rmfp5\" (UID: \"01d484a1-3bf1-4f54-b0b1-7ef6adfe9a0d\") " pod="openshift-marketplace/community-operators-rmfp5" Feb 27 16:24:06 crc kubenswrapper[4830]: I0227 16:24:06.267335 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01d484a1-3bf1-4f54-b0b1-7ef6adfe9a0d-catalog-content\") pod \"community-operators-rmfp5\" 
(UID: \"01d484a1-3bf1-4f54-b0b1-7ef6adfe9a0d\") " pod="openshift-marketplace/community-operators-rmfp5" Feb 27 16:24:06 crc kubenswrapper[4830]: I0227 16:24:06.269834 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rmfp5"] Feb 27 16:24:06 crc kubenswrapper[4830]: I0227 16:24:06.368755 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01d484a1-3bf1-4f54-b0b1-7ef6adfe9a0d-utilities\") pod \"community-operators-rmfp5\" (UID: \"01d484a1-3bf1-4f54-b0b1-7ef6adfe9a0d\") " pod="openshift-marketplace/community-operators-rmfp5" Feb 27 16:24:06 crc kubenswrapper[4830]: I0227 16:24:06.368845 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01d484a1-3bf1-4f54-b0b1-7ef6adfe9a0d-catalog-content\") pod \"community-operators-rmfp5\" (UID: \"01d484a1-3bf1-4f54-b0b1-7ef6adfe9a0d\") " pod="openshift-marketplace/community-operators-rmfp5" Feb 27 16:24:06 crc kubenswrapper[4830]: I0227 16:24:06.368963 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hxjl2\" (UniqueName: \"kubernetes.io/projected/01d484a1-3bf1-4f54-b0b1-7ef6adfe9a0d-kube-api-access-hxjl2\") pod \"community-operators-rmfp5\" (UID: \"01d484a1-3bf1-4f54-b0b1-7ef6adfe9a0d\") " pod="openshift-marketplace/community-operators-rmfp5" Feb 27 16:24:06 crc kubenswrapper[4830]: I0227 16:24:06.369462 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01d484a1-3bf1-4f54-b0b1-7ef6adfe9a0d-utilities\") pod \"community-operators-rmfp5\" (UID: \"01d484a1-3bf1-4f54-b0b1-7ef6adfe9a0d\") " pod="openshift-marketplace/community-operators-rmfp5" Feb 27 16:24:06 crc kubenswrapper[4830]: I0227 16:24:06.369685 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01d484a1-3bf1-4f54-b0b1-7ef6adfe9a0d-catalog-content\") pod \"community-operators-rmfp5\" (UID: \"01d484a1-3bf1-4f54-b0b1-7ef6adfe9a0d\") " pod="openshift-marketplace/community-operators-rmfp5" Feb 27 16:24:06 crc kubenswrapper[4830]: I0227 16:24:06.388631 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxjl2\" (UniqueName: \"kubernetes.io/projected/01d484a1-3bf1-4f54-b0b1-7ef6adfe9a0d-kube-api-access-hxjl2\") pod \"community-operators-rmfp5\" (UID: \"01d484a1-3bf1-4f54-b0b1-7ef6adfe9a0d\") " pod="openshift-marketplace/community-operators-rmfp5" Feb 27 16:24:06 crc kubenswrapper[4830]: I0227 16:24:06.573095 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rmfp5" Feb 27 16:24:06 crc kubenswrapper[4830]: I0227 16:24:06.771206 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd17a9f9-0fb3-4c98-bbb0-36e8d23f71a0" path="/var/lib/kubelet/pods/dd17a9f9-0fb3-4c98-bbb0-36e8d23f71a0/volumes" Feb 27 16:24:07 crc kubenswrapper[4830]: I0227 16:24:07.066432 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rmfp5"] Feb 27 16:24:07 crc kubenswrapper[4830]: I0227 16:24:07.834118 4830 generic.go:334] "Generic (PLEG): container finished" podID="01d484a1-3bf1-4f54-b0b1-7ef6adfe9a0d" containerID="22d1ae98f4a4b0688e2cca6a74c96037de865e0379e49caf928586c7a5d50b3f" exitCode=0 Feb 27 16:24:07 crc kubenswrapper[4830]: I0227 16:24:07.834187 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rmfp5" event={"ID":"01d484a1-3bf1-4f54-b0b1-7ef6adfe9a0d","Type":"ContainerDied","Data":"22d1ae98f4a4b0688e2cca6a74c96037de865e0379e49caf928586c7a5d50b3f"} Feb 27 16:24:07 crc kubenswrapper[4830]: I0227 16:24:07.834221 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-rmfp5" event={"ID":"01d484a1-3bf1-4f54-b0b1-7ef6adfe9a0d","Type":"ContainerStarted","Data":"6a11770deede6caccb3ba4790e080da31b80eb3a05cf259be443ab85d8437dea"} Feb 27 16:24:08 crc kubenswrapper[4830]: I0227 16:24:08.840720 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rmfp5" event={"ID":"01d484a1-3bf1-4f54-b0b1-7ef6adfe9a0d","Type":"ContainerStarted","Data":"572ccac4826f9a0b5e8ba15133a9fba334c2cab8cbff4c932d291ed16a4ca0da"} Feb 27 16:24:09 crc kubenswrapper[4830]: I0227 16:24:09.585331 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-64c4cc7899-7w4m7" Feb 27 16:24:09 crc kubenswrapper[4830]: I0227 16:24:09.850869 4830 generic.go:334] "Generic (PLEG): container finished" podID="01d484a1-3bf1-4f54-b0b1-7ef6adfe9a0d" containerID="572ccac4826f9a0b5e8ba15133a9fba334c2cab8cbff4c932d291ed16a4ca0da" exitCode=0 Feb 27 16:24:09 crc kubenswrapper[4830]: I0227 16:24:09.850926 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rmfp5" event={"ID":"01d484a1-3bf1-4f54-b0b1-7ef6adfe9a0d","Type":"ContainerDied","Data":"572ccac4826f9a0b5e8ba15133a9fba334c2cab8cbff4c932d291ed16a4ca0da"} Feb 27 16:24:10 crc kubenswrapper[4830]: I0227 16:24:10.346166 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7f989f654f-mcd8b"] Feb 27 16:24:10 crc kubenswrapper[4830]: I0227 16:24:10.346930 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-mcd8b" Feb 27 16:24:10 crc kubenswrapper[4830]: I0227 16:24:10.354082 4830 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Feb 27 16:24:10 crc kubenswrapper[4830]: I0227 16:24:10.354148 4830 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-ls72m" Feb 27 16:24:10 crc kubenswrapper[4830]: I0227 16:24:10.365365 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-t7kgx"] Feb 27 16:24:10 crc kubenswrapper[4830]: I0227 16:24:10.367344 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-t7kgx" Feb 27 16:24:10 crc kubenswrapper[4830]: I0227 16:24:10.371527 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Feb 27 16:24:10 crc kubenswrapper[4830]: I0227 16:24:10.371706 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7f989f654f-mcd8b"] Feb 27 16:24:10 crc kubenswrapper[4830]: I0227 16:24:10.374105 4830 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Feb 27 16:24:10 crc kubenswrapper[4830]: I0227 16:24:10.427729 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnj7b\" (UniqueName: \"kubernetes.io/projected/053107e9-9202-4a31-8c74-a54d8a3cf63b-kube-api-access-tnj7b\") pod \"frr-k8s-t7kgx\" (UID: \"053107e9-9202-4a31-8c74-a54d8a3cf63b\") " pod="metallb-system/frr-k8s-t7kgx" Feb 27 16:24:10 crc kubenswrapper[4830]: I0227 16:24:10.428040 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/053107e9-9202-4a31-8c74-a54d8a3cf63b-frr-conf\") pod \"frr-k8s-t7kgx\" (UID: 
\"053107e9-9202-4a31-8c74-a54d8a3cf63b\") " pod="metallb-system/frr-k8s-t7kgx" Feb 27 16:24:10 crc kubenswrapper[4830]: I0227 16:24:10.428083 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/68745f95-bd81-4609-bc51-f6222d4b2f27-cert\") pod \"frr-k8s-webhook-server-7f989f654f-mcd8b\" (UID: \"68745f95-bd81-4609-bc51-f6222d4b2f27\") " pod="metallb-system/frr-k8s-webhook-server-7f989f654f-mcd8b" Feb 27 16:24:10 crc kubenswrapper[4830]: I0227 16:24:10.428108 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgbkh\" (UniqueName: \"kubernetes.io/projected/68745f95-bd81-4609-bc51-f6222d4b2f27-kube-api-access-mgbkh\") pod \"frr-k8s-webhook-server-7f989f654f-mcd8b\" (UID: \"68745f95-bd81-4609-bc51-f6222d4b2f27\") " pod="metallb-system/frr-k8s-webhook-server-7f989f654f-mcd8b" Feb 27 16:24:10 crc kubenswrapper[4830]: I0227 16:24:10.428128 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/053107e9-9202-4a31-8c74-a54d8a3cf63b-frr-sockets\") pod \"frr-k8s-t7kgx\" (UID: \"053107e9-9202-4a31-8c74-a54d8a3cf63b\") " pod="metallb-system/frr-k8s-t7kgx" Feb 27 16:24:10 crc kubenswrapper[4830]: I0227 16:24:10.428143 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/053107e9-9202-4a31-8c74-a54d8a3cf63b-frr-startup\") pod \"frr-k8s-t7kgx\" (UID: \"053107e9-9202-4a31-8c74-a54d8a3cf63b\") " pod="metallb-system/frr-k8s-t7kgx" Feb 27 16:24:10 crc kubenswrapper[4830]: I0227 16:24:10.428170 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/053107e9-9202-4a31-8c74-a54d8a3cf63b-reloader\") pod \"frr-k8s-t7kgx\" (UID: 
\"053107e9-9202-4a31-8c74-a54d8a3cf63b\") " pod="metallb-system/frr-k8s-t7kgx" Feb 27 16:24:10 crc kubenswrapper[4830]: I0227 16:24:10.428200 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/053107e9-9202-4a31-8c74-a54d8a3cf63b-metrics\") pod \"frr-k8s-t7kgx\" (UID: \"053107e9-9202-4a31-8c74-a54d8a3cf63b\") " pod="metallb-system/frr-k8s-t7kgx" Feb 27 16:24:10 crc kubenswrapper[4830]: I0227 16:24:10.428214 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/053107e9-9202-4a31-8c74-a54d8a3cf63b-metrics-certs\") pod \"frr-k8s-t7kgx\" (UID: \"053107e9-9202-4a31-8c74-a54d8a3cf63b\") " pod="metallb-system/frr-k8s-t7kgx" Feb 27 16:24:10 crc kubenswrapper[4830]: I0227 16:24:10.434122 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-skvmw"] Feb 27 16:24:10 crc kubenswrapper[4830]: I0227 16:24:10.434922 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-skvmw" Feb 27 16:24:10 crc kubenswrapper[4830]: I0227 16:24:10.438098 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Feb 27 16:24:10 crc kubenswrapper[4830]: I0227 16:24:10.438233 4830 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Feb 27 16:24:10 crc kubenswrapper[4830]: I0227 16:24:10.438662 4830 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-d2n9z" Feb 27 16:24:10 crc kubenswrapper[4830]: I0227 16:24:10.441529 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-86ddb6bd46-tlhc9"] Feb 27 16:24:10 crc kubenswrapper[4830]: I0227 16:24:10.442373 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-86ddb6bd46-tlhc9" Feb 27 16:24:10 crc kubenswrapper[4830]: I0227 16:24:10.449176 4830 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Feb 27 16:24:10 crc kubenswrapper[4830]: I0227 16:24:10.449205 4830 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Feb 27 16:24:10 crc kubenswrapper[4830]: I0227 16:24:10.480375 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-86ddb6bd46-tlhc9"] Feb 27 16:24:10 crc kubenswrapper[4830]: I0227 16:24:10.528909 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/053107e9-9202-4a31-8c74-a54d8a3cf63b-frr-sockets\") pod \"frr-k8s-t7kgx\" (UID: \"053107e9-9202-4a31-8c74-a54d8a3cf63b\") " pod="metallb-system/frr-k8s-t7kgx" Feb 27 16:24:10 crc kubenswrapper[4830]: I0227 16:24:10.528958 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/053107e9-9202-4a31-8c74-a54d8a3cf63b-frr-startup\") pod \"frr-k8s-t7kgx\" (UID: \"053107e9-9202-4a31-8c74-a54d8a3cf63b\") " pod="metallb-system/frr-k8s-t7kgx" Feb 27 16:24:10 crc kubenswrapper[4830]: I0227 16:24:10.529001 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/053107e9-9202-4a31-8c74-a54d8a3cf63b-reloader\") pod \"frr-k8s-t7kgx\" (UID: \"053107e9-9202-4a31-8c74-a54d8a3cf63b\") " pod="metallb-system/frr-k8s-t7kgx" Feb 27 16:24:10 crc kubenswrapper[4830]: I0227 16:24:10.529019 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/053107e9-9202-4a31-8c74-a54d8a3cf63b-metrics\") pod \"frr-k8s-t7kgx\" (UID: \"053107e9-9202-4a31-8c74-a54d8a3cf63b\") " 
pod="metallb-system/frr-k8s-t7kgx" Feb 27 16:24:10 crc kubenswrapper[4830]: I0227 16:24:10.529037 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/053107e9-9202-4a31-8c74-a54d8a3cf63b-metrics-certs\") pod \"frr-k8s-t7kgx\" (UID: \"053107e9-9202-4a31-8c74-a54d8a3cf63b\") " pod="metallb-system/frr-k8s-t7kgx" Feb 27 16:24:10 crc kubenswrapper[4830]: I0227 16:24:10.529060 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/e9ed2887-fafc-4283-baf2-1ecd1da2da58-metallb-excludel2\") pod \"speaker-skvmw\" (UID: \"e9ed2887-fafc-4283-baf2-1ecd1da2da58\") " pod="metallb-system/speaker-skvmw" Feb 27 16:24:10 crc kubenswrapper[4830]: I0227 16:24:10.529081 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a7982aec-1d5b-4ab1-a8ae-a027dab24864-cert\") pod \"controller-86ddb6bd46-tlhc9\" (UID: \"a7982aec-1d5b-4ab1-a8ae-a027dab24864\") " pod="metallb-system/controller-86ddb6bd46-tlhc9" Feb 27 16:24:10 crc kubenswrapper[4830]: I0227 16:24:10.529101 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tnj7b\" (UniqueName: \"kubernetes.io/projected/053107e9-9202-4a31-8c74-a54d8a3cf63b-kube-api-access-tnj7b\") pod \"frr-k8s-t7kgx\" (UID: \"053107e9-9202-4a31-8c74-a54d8a3cf63b\") " pod="metallb-system/frr-k8s-t7kgx" Feb 27 16:24:10 crc kubenswrapper[4830]: I0227 16:24:10.529119 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/053107e9-9202-4a31-8c74-a54d8a3cf63b-frr-conf\") pod \"frr-k8s-t7kgx\" (UID: \"053107e9-9202-4a31-8c74-a54d8a3cf63b\") " pod="metallb-system/frr-k8s-t7kgx" Feb 27 16:24:10 crc kubenswrapper[4830]: I0227 16:24:10.529138 4830 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e9ed2887-fafc-4283-baf2-1ecd1da2da58-metrics-certs\") pod \"speaker-skvmw\" (UID: \"e9ed2887-fafc-4283-baf2-1ecd1da2da58\") " pod="metallb-system/speaker-skvmw" Feb 27 16:24:10 crc kubenswrapper[4830]: I0227 16:24:10.529158 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wl9w7\" (UniqueName: \"kubernetes.io/projected/a7982aec-1d5b-4ab1-a8ae-a027dab24864-kube-api-access-wl9w7\") pod \"controller-86ddb6bd46-tlhc9\" (UID: \"a7982aec-1d5b-4ab1-a8ae-a027dab24864\") " pod="metallb-system/controller-86ddb6bd46-tlhc9" Feb 27 16:24:10 crc kubenswrapper[4830]: I0227 16:24:10.529181 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a7982aec-1d5b-4ab1-a8ae-a027dab24864-metrics-certs\") pod \"controller-86ddb6bd46-tlhc9\" (UID: \"a7982aec-1d5b-4ab1-a8ae-a027dab24864\") " pod="metallb-system/controller-86ddb6bd46-tlhc9" Feb 27 16:24:10 crc kubenswrapper[4830]: I0227 16:24:10.529214 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/68745f95-bd81-4609-bc51-f6222d4b2f27-cert\") pod \"frr-k8s-webhook-server-7f989f654f-mcd8b\" (UID: \"68745f95-bd81-4609-bc51-f6222d4b2f27\") " pod="metallb-system/frr-k8s-webhook-server-7f989f654f-mcd8b" Feb 27 16:24:10 crc kubenswrapper[4830]: I0227 16:24:10.529235 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mgbkh\" (UniqueName: \"kubernetes.io/projected/68745f95-bd81-4609-bc51-f6222d4b2f27-kube-api-access-mgbkh\") pod \"frr-k8s-webhook-server-7f989f654f-mcd8b\" (UID: \"68745f95-bd81-4609-bc51-f6222d4b2f27\") " pod="metallb-system/frr-k8s-webhook-server-7f989f654f-mcd8b" Feb 27 16:24:10 crc kubenswrapper[4830]: E0227 
16:24:10.529311 4830 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found Feb 27 16:24:10 crc kubenswrapper[4830]: I0227 16:24:10.529445 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/053107e9-9202-4a31-8c74-a54d8a3cf63b-frr-sockets\") pod \"frr-k8s-t7kgx\" (UID: \"053107e9-9202-4a31-8c74-a54d8a3cf63b\") " pod="metallb-system/frr-k8s-t7kgx" Feb 27 16:24:10 crc kubenswrapper[4830]: E0227 16:24:10.529474 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/68745f95-bd81-4609-bc51-f6222d4b2f27-cert podName:68745f95-bd81-4609-bc51-f6222d4b2f27 nodeName:}" failed. No retries permitted until 2026-02-27 16:24:11.029446445 +0000 UTC m=+1047.118718898 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/68745f95-bd81-4609-bc51-f6222d4b2f27-cert") pod "frr-k8s-webhook-server-7f989f654f-mcd8b" (UID: "68745f95-bd81-4609-bc51-f6222d4b2f27") : secret "frr-k8s-webhook-server-cert" not found Feb 27 16:24:10 crc kubenswrapper[4830]: I0227 16:24:10.529471 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/053107e9-9202-4a31-8c74-a54d8a3cf63b-reloader\") pod \"frr-k8s-t7kgx\" (UID: \"053107e9-9202-4a31-8c74-a54d8a3cf63b\") " pod="metallb-system/frr-k8s-t7kgx" Feb 27 16:24:10 crc kubenswrapper[4830]: I0227 16:24:10.529450 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/e9ed2887-fafc-4283-baf2-1ecd1da2da58-memberlist\") pod \"speaker-skvmw\" (UID: \"e9ed2887-fafc-4283-baf2-1ecd1da2da58\") " pod="metallb-system/speaker-skvmw" Feb 27 16:24:10 crc kubenswrapper[4830]: I0227 16:24:10.529561 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-5dtr8\" (UniqueName: \"kubernetes.io/projected/e9ed2887-fafc-4283-baf2-1ecd1da2da58-kube-api-access-5dtr8\") pod \"speaker-skvmw\" (UID: \"e9ed2887-fafc-4283-baf2-1ecd1da2da58\") " pod="metallb-system/speaker-skvmw" Feb 27 16:24:10 crc kubenswrapper[4830]: I0227 16:24:10.530038 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/053107e9-9202-4a31-8c74-a54d8a3cf63b-frr-startup\") pod \"frr-k8s-t7kgx\" (UID: \"053107e9-9202-4a31-8c74-a54d8a3cf63b\") " pod="metallb-system/frr-k8s-t7kgx" Feb 27 16:24:10 crc kubenswrapper[4830]: I0227 16:24:10.530142 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/053107e9-9202-4a31-8c74-a54d8a3cf63b-metrics\") pod \"frr-k8s-t7kgx\" (UID: \"053107e9-9202-4a31-8c74-a54d8a3cf63b\") " pod="metallb-system/frr-k8s-t7kgx" Feb 27 16:24:10 crc kubenswrapper[4830]: I0227 16:24:10.531160 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/053107e9-9202-4a31-8c74-a54d8a3cf63b-frr-conf\") pod \"frr-k8s-t7kgx\" (UID: \"053107e9-9202-4a31-8c74-a54d8a3cf63b\") " pod="metallb-system/frr-k8s-t7kgx" Feb 27 16:24:10 crc kubenswrapper[4830]: I0227 16:24:10.544160 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tnj7b\" (UniqueName: \"kubernetes.io/projected/053107e9-9202-4a31-8c74-a54d8a3cf63b-kube-api-access-tnj7b\") pod \"frr-k8s-t7kgx\" (UID: \"053107e9-9202-4a31-8c74-a54d8a3cf63b\") " pod="metallb-system/frr-k8s-t7kgx" Feb 27 16:24:10 crc kubenswrapper[4830]: I0227 16:24:10.544450 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/053107e9-9202-4a31-8c74-a54d8a3cf63b-metrics-certs\") pod \"frr-k8s-t7kgx\" (UID: \"053107e9-9202-4a31-8c74-a54d8a3cf63b\") " 
pod="metallb-system/frr-k8s-t7kgx" Feb 27 16:24:10 crc kubenswrapper[4830]: I0227 16:24:10.550008 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mgbkh\" (UniqueName: \"kubernetes.io/projected/68745f95-bd81-4609-bc51-f6222d4b2f27-kube-api-access-mgbkh\") pod \"frr-k8s-webhook-server-7f989f654f-mcd8b\" (UID: \"68745f95-bd81-4609-bc51-f6222d4b2f27\") " pod="metallb-system/frr-k8s-webhook-server-7f989f654f-mcd8b" Feb 27 16:24:10 crc kubenswrapper[4830]: I0227 16:24:10.629900 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/e9ed2887-fafc-4283-baf2-1ecd1da2da58-metallb-excludel2\") pod \"speaker-skvmw\" (UID: \"e9ed2887-fafc-4283-baf2-1ecd1da2da58\") " pod="metallb-system/speaker-skvmw" Feb 27 16:24:10 crc kubenswrapper[4830]: I0227 16:24:10.629938 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a7982aec-1d5b-4ab1-a8ae-a027dab24864-cert\") pod \"controller-86ddb6bd46-tlhc9\" (UID: \"a7982aec-1d5b-4ab1-a8ae-a027dab24864\") " pod="metallb-system/controller-86ddb6bd46-tlhc9" Feb 27 16:24:10 crc kubenswrapper[4830]: I0227 16:24:10.629997 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e9ed2887-fafc-4283-baf2-1ecd1da2da58-metrics-certs\") pod \"speaker-skvmw\" (UID: \"e9ed2887-fafc-4283-baf2-1ecd1da2da58\") " pod="metallb-system/speaker-skvmw" Feb 27 16:24:10 crc kubenswrapper[4830]: I0227 16:24:10.630034 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wl9w7\" (UniqueName: \"kubernetes.io/projected/a7982aec-1d5b-4ab1-a8ae-a027dab24864-kube-api-access-wl9w7\") pod \"controller-86ddb6bd46-tlhc9\" (UID: \"a7982aec-1d5b-4ab1-a8ae-a027dab24864\") " pod="metallb-system/controller-86ddb6bd46-tlhc9" Feb 27 16:24:10 crc 
kubenswrapper[4830]: I0227 16:24:10.630057 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a7982aec-1d5b-4ab1-a8ae-a027dab24864-metrics-certs\") pod \"controller-86ddb6bd46-tlhc9\" (UID: \"a7982aec-1d5b-4ab1-a8ae-a027dab24864\") " pod="metallb-system/controller-86ddb6bd46-tlhc9" Feb 27 16:24:10 crc kubenswrapper[4830]: I0227 16:24:10.630093 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/e9ed2887-fafc-4283-baf2-1ecd1da2da58-memberlist\") pod \"speaker-skvmw\" (UID: \"e9ed2887-fafc-4283-baf2-1ecd1da2da58\") " pod="metallb-system/speaker-skvmw" Feb 27 16:24:10 crc kubenswrapper[4830]: I0227 16:24:10.630110 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5dtr8\" (UniqueName: \"kubernetes.io/projected/e9ed2887-fafc-4283-baf2-1ecd1da2da58-kube-api-access-5dtr8\") pod \"speaker-skvmw\" (UID: \"e9ed2887-fafc-4283-baf2-1ecd1da2da58\") " pod="metallb-system/speaker-skvmw" Feb 27 16:24:10 crc kubenswrapper[4830]: I0227 16:24:10.630867 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/e9ed2887-fafc-4283-baf2-1ecd1da2da58-metallb-excludel2\") pod \"speaker-skvmw\" (UID: \"e9ed2887-fafc-4283-baf2-1ecd1da2da58\") " pod="metallb-system/speaker-skvmw" Feb 27 16:24:10 crc kubenswrapper[4830]: E0227 16:24:10.631606 4830 secret.go:188] Couldn't get secret metallb-system/controller-certs-secret: secret "controller-certs-secret" not found Feb 27 16:24:10 crc kubenswrapper[4830]: E0227 16:24:10.631629 4830 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 27 16:24:10 crc kubenswrapper[4830]: E0227 16:24:10.631657 4830 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/a7982aec-1d5b-4ab1-a8ae-a027dab24864-metrics-certs podName:a7982aec-1d5b-4ab1-a8ae-a027dab24864 nodeName:}" failed. No retries permitted until 2026-02-27 16:24:11.131646244 +0000 UTC m=+1047.220918697 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a7982aec-1d5b-4ab1-a8ae-a027dab24864-metrics-certs") pod "controller-86ddb6bd46-tlhc9" (UID: "a7982aec-1d5b-4ab1-a8ae-a027dab24864") : secret "controller-certs-secret" not found
Feb 27 16:24:10 crc kubenswrapper[4830]: E0227 16:24:10.631670 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e9ed2887-fafc-4283-baf2-1ecd1da2da58-memberlist podName:e9ed2887-fafc-4283-baf2-1ecd1da2da58 nodeName:}" failed. No retries permitted until 2026-02-27 16:24:11.131664744 +0000 UTC m=+1047.220937207 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/e9ed2887-fafc-4283-baf2-1ecd1da2da58-memberlist") pod "speaker-skvmw" (UID: "e9ed2887-fafc-4283-baf2-1ecd1da2da58") : secret "metallb-memberlist" not found
Feb 27 16:24:10 crc kubenswrapper[4830]: I0227 16:24:10.635273 4830 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert"
Feb 27 16:24:10 crc kubenswrapper[4830]: I0227 16:24:10.635296 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e9ed2887-fafc-4283-baf2-1ecd1da2da58-metrics-certs\") pod \"speaker-skvmw\" (UID: \"e9ed2887-fafc-4283-baf2-1ecd1da2da58\") " pod="metallb-system/speaker-skvmw"
Feb 27 16:24:10 crc kubenswrapper[4830]: I0227 16:24:10.644490 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a7982aec-1d5b-4ab1-a8ae-a027dab24864-cert\") pod \"controller-86ddb6bd46-tlhc9\" (UID: \"a7982aec-1d5b-4ab1-a8ae-a027dab24864\") " pod="metallb-system/controller-86ddb6bd46-tlhc9"
Feb 27 16:24:10 crc kubenswrapper[4830]: I0227 16:24:10.644893 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5dtr8\" (UniqueName: \"kubernetes.io/projected/e9ed2887-fafc-4283-baf2-1ecd1da2da58-kube-api-access-5dtr8\") pod \"speaker-skvmw\" (UID: \"e9ed2887-fafc-4283-baf2-1ecd1da2da58\") " pod="metallb-system/speaker-skvmw"
Feb 27 16:24:10 crc kubenswrapper[4830]: I0227 16:24:10.646330 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wl9w7\" (UniqueName: \"kubernetes.io/projected/a7982aec-1d5b-4ab1-a8ae-a027dab24864-kube-api-access-wl9w7\") pod \"controller-86ddb6bd46-tlhc9\" (UID: \"a7982aec-1d5b-4ab1-a8ae-a027dab24864\") " pod="metallb-system/controller-86ddb6bd46-tlhc9"
Feb 27 16:24:10 crc kubenswrapper[4830]: I0227 16:24:10.698707 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-t7kgx"
Feb 27 16:24:10 crc kubenswrapper[4830]: I0227 16:24:10.857304 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rmfp5" event={"ID":"01d484a1-3bf1-4f54-b0b1-7ef6adfe9a0d","Type":"ContainerStarted","Data":"4fe62d6384146ea1888564864a79bb53bb3c96170492f09315cf42ce4b5a15c3"}
Feb 27 16:24:10 crc kubenswrapper[4830]: I0227 16:24:10.858674 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-t7kgx" event={"ID":"053107e9-9202-4a31-8c74-a54d8a3cf63b","Type":"ContainerStarted","Data":"a7dd4b4783304d3dff7bb60b412c18e88099fc973b9020e1ab4416ee2d8ada79"}
Feb 27 16:24:10 crc kubenswrapper[4830]: I0227 16:24:10.885425 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-rmfp5" podStartSLOduration=2.442057973 podStartE2EDuration="4.885406707s" podCreationTimestamp="2026-02-27 16:24:06 +0000 UTC" firstStartedPulling="2026-02-27 16:24:07.836710088 +0000 UTC m=+1043.925982581" lastFinishedPulling="2026-02-27 16:24:10.280058842 +0000 UTC m=+1046.369331315" observedRunningTime="2026-02-27 16:24:10.878003172 +0000 UTC m=+1046.967275635" watchObservedRunningTime="2026-02-27 16:24:10.885406707 +0000 UTC m=+1046.974679180"
Feb 27 16:24:11 crc kubenswrapper[4830]: I0227 16:24:11.033908 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/68745f95-bd81-4609-bc51-f6222d4b2f27-cert\") pod \"frr-k8s-webhook-server-7f989f654f-mcd8b\" (UID: \"68745f95-bd81-4609-bc51-f6222d4b2f27\") " pod="metallb-system/frr-k8s-webhook-server-7f989f654f-mcd8b"
Feb 27 16:24:11 crc kubenswrapper[4830]: I0227 16:24:11.038484 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/68745f95-bd81-4609-bc51-f6222d4b2f27-cert\") pod \"frr-k8s-webhook-server-7f989f654f-mcd8b\" (UID: \"68745f95-bd81-4609-bc51-f6222d4b2f27\") " pod="metallb-system/frr-k8s-webhook-server-7f989f654f-mcd8b"
Feb 27 16:24:11 crc kubenswrapper[4830]: I0227 16:24:11.135821 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/e9ed2887-fafc-4283-baf2-1ecd1da2da58-memberlist\") pod \"speaker-skvmw\" (UID: \"e9ed2887-fafc-4283-baf2-1ecd1da2da58\") " pod="metallb-system/speaker-skvmw"
Feb 27 16:24:11 crc kubenswrapper[4830]: E0227 16:24:11.136017 4830 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found
Feb 27 16:24:11 crc kubenswrapper[4830]: I0227 16:24:11.136053 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a7982aec-1d5b-4ab1-a8ae-a027dab24864-metrics-certs\") pod \"controller-86ddb6bd46-tlhc9\" (UID: \"a7982aec-1d5b-4ab1-a8ae-a027dab24864\") " pod="metallb-system/controller-86ddb6bd46-tlhc9"
Feb 27 16:24:11 crc kubenswrapper[4830]: E0227 16:24:11.136100 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e9ed2887-fafc-4283-baf2-1ecd1da2da58-memberlist podName:e9ed2887-fafc-4283-baf2-1ecd1da2da58 nodeName:}" failed. No retries permitted until 2026-02-27 16:24:12.136078063 +0000 UTC m=+1048.225350546 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/e9ed2887-fafc-4283-baf2-1ecd1da2da58-memberlist") pod "speaker-skvmw" (UID: "e9ed2887-fafc-4283-baf2-1ecd1da2da58") : secret "metallb-memberlist" not found
Feb 27 16:24:11 crc kubenswrapper[4830]: I0227 16:24:11.140805 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a7982aec-1d5b-4ab1-a8ae-a027dab24864-metrics-certs\") pod \"controller-86ddb6bd46-tlhc9\" (UID: \"a7982aec-1d5b-4ab1-a8ae-a027dab24864\") " pod="metallb-system/controller-86ddb6bd46-tlhc9"
Feb 27 16:24:11 crc kubenswrapper[4830]: I0227 16:24:11.284795 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-mcd8b"
Feb 27 16:24:11 crc kubenswrapper[4830]: I0227 16:24:11.358740 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-86ddb6bd46-tlhc9"
Feb 27 16:24:11 crc kubenswrapper[4830]: I0227 16:24:11.523584 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7f989f654f-mcd8b"]
Feb 27 16:24:11 crc kubenswrapper[4830]: I0227 16:24:11.643744 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-86ddb6bd46-tlhc9"]
Feb 27 16:24:11 crc kubenswrapper[4830]: I0227 16:24:11.868621 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-86ddb6bd46-tlhc9" event={"ID":"a7982aec-1d5b-4ab1-a8ae-a027dab24864","Type":"ContainerStarted","Data":"70a41aa5cbce575487a5476dd718760ce2d54dbbd4f6558959488c36e1faef87"}
Feb 27 16:24:11 crc kubenswrapper[4830]: I0227 16:24:11.868919 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-86ddb6bd46-tlhc9" event={"ID":"a7982aec-1d5b-4ab1-a8ae-a027dab24864","Type":"ContainerStarted","Data":"4be49be0715a7809f227e042d7a75ae33bc8ac1a167984b0374fff93df7151fc"}
Feb 27 16:24:11 crc kubenswrapper[4830]: I0227 16:24:11.875896 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-mcd8b" event={"ID":"68745f95-bd81-4609-bc51-f6222d4b2f27","Type":"ContainerStarted","Data":"792ef423f17576c88090323b5c26ee11c68e9723eb878d10b8634aeeba755d02"}
Feb 27 16:24:12 crc kubenswrapper[4830]: I0227 16:24:12.155086 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/e9ed2887-fafc-4283-baf2-1ecd1da2da58-memberlist\") pod \"speaker-skvmw\" (UID: \"e9ed2887-fafc-4283-baf2-1ecd1da2da58\") " pod="metallb-system/speaker-skvmw"
Feb 27 16:24:12 crc kubenswrapper[4830]: I0227 16:24:12.162847 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/e9ed2887-fafc-4283-baf2-1ecd1da2da58-memberlist\") pod \"speaker-skvmw\" (UID: \"e9ed2887-fafc-4283-baf2-1ecd1da2da58\") " pod="metallb-system/speaker-skvmw"
Feb 27 16:24:12 crc kubenswrapper[4830]: I0227 16:24:12.250914 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-skvmw"
Feb 27 16:24:12 crc kubenswrapper[4830]: W0227 16:24:12.277971 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode9ed2887_fafc_4283_baf2_1ecd1da2da58.slice/crio-d4e07554247dc3c8d2fb82860f72bcba6e5ae4f594870c290873d4d233a1b2ae WatchSource:0}: Error finding container d4e07554247dc3c8d2fb82860f72bcba6e5ae4f594870c290873d4d233a1b2ae: Status 404 returned error can't find the container with id d4e07554247dc3c8d2fb82860f72bcba6e5ae4f594870c290873d4d233a1b2ae
Feb 27 16:24:12 crc kubenswrapper[4830]: I0227 16:24:12.884836 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-86ddb6bd46-tlhc9" event={"ID":"a7982aec-1d5b-4ab1-a8ae-a027dab24864","Type":"ContainerStarted","Data":"565e574c7a3374485112c98e0b77288eeac3079ed93753203af7fc717e539b70"}
Feb 27 16:24:12 crc kubenswrapper[4830]: I0227 16:24:12.885447 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-86ddb6bd46-tlhc9"
Feb 27 16:24:12 crc kubenswrapper[4830]: I0227 16:24:12.889003 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-skvmw" event={"ID":"e9ed2887-fafc-4283-baf2-1ecd1da2da58","Type":"ContainerStarted","Data":"4945a8f2622ba76cfb14f556647e0004cf6804d4a64aecc83f7c360a29c6a91d"}
Feb 27 16:24:12 crc kubenswrapper[4830]: I0227 16:24:12.889211 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-skvmw" event={"ID":"e9ed2887-fafc-4283-baf2-1ecd1da2da58","Type":"ContainerStarted","Data":"fdc8ca86d18776311b071dff963a4517ce9d9dbaeae144b8855b2b5849b1f199"}
Feb 27 16:24:12 crc kubenswrapper[4830]: I0227 16:24:12.889373 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-skvmw" event={"ID":"e9ed2887-fafc-4283-baf2-1ecd1da2da58","Type":"ContainerStarted","Data":"d4e07554247dc3c8d2fb82860f72bcba6e5ae4f594870c290873d4d233a1b2ae"}
Feb 27 16:24:12 crc kubenswrapper[4830]: I0227 16:24:12.889558 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-skvmw"
Feb 27 16:24:12 crc kubenswrapper[4830]: I0227 16:24:12.919403 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-86ddb6bd46-tlhc9" podStartSLOduration=2.919386222 podStartE2EDuration="2.919386222s" podCreationTimestamp="2026-02-27 16:24:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:24:12.913264578 +0000 UTC m=+1049.002537061" watchObservedRunningTime="2026-02-27 16:24:12.919386222 +0000 UTC m=+1049.008658695"
Feb 27 16:24:12 crc kubenswrapper[4830]: I0227 16:24:12.937068 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-skvmw" podStartSLOduration=2.937045563 podStartE2EDuration="2.937045563s" podCreationTimestamp="2026-02-27 16:24:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:24:12.932457489 +0000 UTC m=+1049.021729962" watchObservedRunningTime="2026-02-27 16:24:12.937045563 +0000 UTC m=+1049.026318046"
Feb 27 16:24:16 crc kubenswrapper[4830]: I0227 16:24:16.574112 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-rmfp5"
Feb 27 16:24:16 crc kubenswrapper[4830]: I0227 16:24:16.574441 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-rmfp5"
Feb 27 16:24:16 crc kubenswrapper[4830]: I0227 16:24:16.624247 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-rmfp5"
Feb 27 16:24:16 crc kubenswrapper[4830]: I0227 16:24:16.965671 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-rmfp5"
Feb 27 16:24:17 crc kubenswrapper[4830]: I0227 16:24:17.000108 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rmfp5"]
Feb 27 16:24:18 crc kubenswrapper[4830]: I0227 16:24:18.934648 4830 generic.go:334] "Generic (PLEG): container finished" podID="053107e9-9202-4a31-8c74-a54d8a3cf63b" containerID="1630bb2f48ad1b2204b79724e367f903b97e0d323e06b4bbeceb6568160ca642" exitCode=0
Feb 27 16:24:18 crc kubenswrapper[4830]: I0227 16:24:18.934722 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-t7kgx" event={"ID":"053107e9-9202-4a31-8c74-a54d8a3cf63b","Type":"ContainerDied","Data":"1630bb2f48ad1b2204b79724e367f903b97e0d323e06b4bbeceb6568160ca642"}
Feb 27 16:24:18 crc kubenswrapper[4830]: I0227 16:24:18.937630 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-mcd8b" event={"ID":"68745f95-bd81-4609-bc51-f6222d4b2f27","Type":"ContainerStarted","Data":"935628716a2adc0cd2594496e8d4b71b2604459e1f570aa0fccf812176e652b2"}
Feb 27 16:24:18 crc kubenswrapper[4830]: I0227 16:24:18.937795 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-rmfp5" podUID="01d484a1-3bf1-4f54-b0b1-7ef6adfe9a0d" containerName="registry-server" containerID="cri-o://4fe62d6384146ea1888564864a79bb53bb3c96170492f09315cf42ce4b5a15c3" gracePeriod=2
Feb 27 16:24:19 crc kubenswrapper[4830]: I0227 16:24:19.003520 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-mcd8b" podStartSLOduration=2.074261142 podStartE2EDuration="9.003499649s" podCreationTimestamp="2026-02-27 16:24:10 +0000 UTC" firstStartedPulling="2026-02-27 16:24:11.551628537 +0000 UTC m=+1047.640901010" lastFinishedPulling="2026-02-27 16:24:18.480867034 +0000 UTC m=+1054.570139517" observedRunningTime="2026-02-27 16:24:19.003217922 +0000 UTC m=+1055.092490395" watchObservedRunningTime="2026-02-27 16:24:19.003499649 +0000 UTC m=+1055.092772122"
Feb 27 16:24:19 crc kubenswrapper[4830]: I0227 16:24:19.327433 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rmfp5"
Feb 27 16:24:19 crc kubenswrapper[4830]: E0227 16:24:19.396664 4830 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod053107e9_9202_4a31_8c74_a54d8a3cf63b.slice/crio-conmon-274f8f042a83f52eab0312c09f806e7ef020c928d4cd55bb34cf7e20a7ba244a.scope\": RecentStats: unable to find data in memory cache]"
Feb 27 16:24:19 crc kubenswrapper[4830]: I0227 16:24:19.461982 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01d484a1-3bf1-4f54-b0b1-7ef6adfe9a0d-utilities\") pod \"01d484a1-3bf1-4f54-b0b1-7ef6adfe9a0d\" (UID: \"01d484a1-3bf1-4f54-b0b1-7ef6adfe9a0d\") "
Feb 27 16:24:19 crc kubenswrapper[4830]: I0227 16:24:19.462055 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hxjl2\" (UniqueName: \"kubernetes.io/projected/01d484a1-3bf1-4f54-b0b1-7ef6adfe9a0d-kube-api-access-hxjl2\") pod \"01d484a1-3bf1-4f54-b0b1-7ef6adfe9a0d\" (UID: \"01d484a1-3bf1-4f54-b0b1-7ef6adfe9a0d\") "
Feb 27 16:24:19 crc kubenswrapper[4830]: I0227 16:24:19.462094 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01d484a1-3bf1-4f54-b0b1-7ef6adfe9a0d-catalog-content\") pod \"01d484a1-3bf1-4f54-b0b1-7ef6adfe9a0d\" (UID: \"01d484a1-3bf1-4f54-b0b1-7ef6adfe9a0d\") "
Feb 27 16:24:19 crc kubenswrapper[4830]: I0227 16:24:19.463657 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/01d484a1-3bf1-4f54-b0b1-7ef6adfe9a0d-utilities" (OuterVolumeSpecName: "utilities") pod "01d484a1-3bf1-4f54-b0b1-7ef6adfe9a0d" (UID: "01d484a1-3bf1-4f54-b0b1-7ef6adfe9a0d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 27 16:24:19 crc kubenswrapper[4830]: I0227 16:24:19.470345 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01d484a1-3bf1-4f54-b0b1-7ef6adfe9a0d-kube-api-access-hxjl2" (OuterVolumeSpecName: "kube-api-access-hxjl2") pod "01d484a1-3bf1-4f54-b0b1-7ef6adfe9a0d" (UID: "01d484a1-3bf1-4f54-b0b1-7ef6adfe9a0d"). InnerVolumeSpecName "kube-api-access-hxjl2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 16:24:19 crc kubenswrapper[4830]: I0227 16:24:19.521476 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/01d484a1-3bf1-4f54-b0b1-7ef6adfe9a0d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "01d484a1-3bf1-4f54-b0b1-7ef6adfe9a0d" (UID: "01d484a1-3bf1-4f54-b0b1-7ef6adfe9a0d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 27 16:24:19 crc kubenswrapper[4830]: I0227 16:24:19.563339 4830 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01d484a1-3bf1-4f54-b0b1-7ef6adfe9a0d-utilities\") on node \"crc\" DevicePath \"\""
Feb 27 16:24:19 crc kubenswrapper[4830]: I0227 16:24:19.563386 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hxjl2\" (UniqueName: \"kubernetes.io/projected/01d484a1-3bf1-4f54-b0b1-7ef6adfe9a0d-kube-api-access-hxjl2\") on node \"crc\" DevicePath \"\""
Feb 27 16:24:19 crc kubenswrapper[4830]: I0227 16:24:19.563401 4830 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01d484a1-3bf1-4f54-b0b1-7ef6adfe9a0d-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 27 16:24:19 crc kubenswrapper[4830]: I0227 16:24:19.951541 4830 generic.go:334] "Generic (PLEG): container finished" podID="01d484a1-3bf1-4f54-b0b1-7ef6adfe9a0d" containerID="4fe62d6384146ea1888564864a79bb53bb3c96170492f09315cf42ce4b5a15c3" exitCode=0
Feb 27 16:24:19 crc kubenswrapper[4830]: I0227 16:24:19.951658 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rmfp5" event={"ID":"01d484a1-3bf1-4f54-b0b1-7ef6adfe9a0d","Type":"ContainerDied","Data":"4fe62d6384146ea1888564864a79bb53bb3c96170492f09315cf42ce4b5a15c3"}
Feb 27 16:24:19 crc kubenswrapper[4830]: I0227 16:24:19.951702 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rmfp5" event={"ID":"01d484a1-3bf1-4f54-b0b1-7ef6adfe9a0d","Type":"ContainerDied","Data":"6a11770deede6caccb3ba4790e080da31b80eb3a05cf259be443ab85d8437dea"}
Feb 27 16:24:19 crc kubenswrapper[4830]: I0227 16:24:19.951699 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rmfp5"
Feb 27 16:24:19 crc kubenswrapper[4830]: I0227 16:24:19.951746 4830 scope.go:117] "RemoveContainer" containerID="4fe62d6384146ea1888564864a79bb53bb3c96170492f09315cf42ce4b5a15c3"
Feb 27 16:24:19 crc kubenswrapper[4830]: I0227 16:24:19.960321 4830 generic.go:334] "Generic (PLEG): container finished" podID="053107e9-9202-4a31-8c74-a54d8a3cf63b" containerID="274f8f042a83f52eab0312c09f806e7ef020c928d4cd55bb34cf7e20a7ba244a" exitCode=0
Feb 27 16:24:19 crc kubenswrapper[4830]: I0227 16:24:19.960431 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-t7kgx" event={"ID":"053107e9-9202-4a31-8c74-a54d8a3cf63b","Type":"ContainerDied","Data":"274f8f042a83f52eab0312c09f806e7ef020c928d4cd55bb34cf7e20a7ba244a"}
Feb 27 16:24:19 crc kubenswrapper[4830]: I0227 16:24:19.961013 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-mcd8b"
Feb 27 16:24:19 crc kubenswrapper[4830]: I0227 16:24:19.983868 4830 scope.go:117] "RemoveContainer" containerID="572ccac4826f9a0b5e8ba15133a9fba334c2cab8cbff4c932d291ed16a4ca0da"
Feb 27 16:24:20 crc kubenswrapper[4830]: I0227 16:24:20.023141 4830 scope.go:117] "RemoveContainer" containerID="22d1ae98f4a4b0688e2cca6a74c96037de865e0379e49caf928586c7a5d50b3f"
Feb 27 16:24:20 crc kubenswrapper[4830]: I0227 16:24:20.037021 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rmfp5"]
Feb 27 16:24:20 crc kubenswrapper[4830]: I0227 16:24:20.041960 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-rmfp5"]
Feb 27 16:24:20 crc kubenswrapper[4830]: I0227 16:24:20.058355 4830 scope.go:117] "RemoveContainer" containerID="4fe62d6384146ea1888564864a79bb53bb3c96170492f09315cf42ce4b5a15c3"
Feb 27 16:24:20 crc kubenswrapper[4830]: E0227 16:24:20.059474 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4fe62d6384146ea1888564864a79bb53bb3c96170492f09315cf42ce4b5a15c3\": container with ID starting with 4fe62d6384146ea1888564864a79bb53bb3c96170492f09315cf42ce4b5a15c3 not found: ID does not exist" containerID="4fe62d6384146ea1888564864a79bb53bb3c96170492f09315cf42ce4b5a15c3"
Feb 27 16:24:20 crc kubenswrapper[4830]: I0227 16:24:20.059525 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4fe62d6384146ea1888564864a79bb53bb3c96170492f09315cf42ce4b5a15c3"} err="failed to get container status \"4fe62d6384146ea1888564864a79bb53bb3c96170492f09315cf42ce4b5a15c3\": rpc error: code = NotFound desc = could not find container \"4fe62d6384146ea1888564864a79bb53bb3c96170492f09315cf42ce4b5a15c3\": container with ID starting with 4fe62d6384146ea1888564864a79bb53bb3c96170492f09315cf42ce4b5a15c3 not found: ID does not exist"
Feb 27 16:24:20 crc kubenswrapper[4830]: I0227 16:24:20.059554 4830 scope.go:117] "RemoveContainer" containerID="572ccac4826f9a0b5e8ba15133a9fba334c2cab8cbff4c932d291ed16a4ca0da"
Feb 27 16:24:20 crc kubenswrapper[4830]: E0227 16:24:20.060039 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"572ccac4826f9a0b5e8ba15133a9fba334c2cab8cbff4c932d291ed16a4ca0da\": container with ID starting with 572ccac4826f9a0b5e8ba15133a9fba334c2cab8cbff4c932d291ed16a4ca0da not found: ID does not exist" containerID="572ccac4826f9a0b5e8ba15133a9fba334c2cab8cbff4c932d291ed16a4ca0da"
Feb 27 16:24:20 crc kubenswrapper[4830]: I0227 16:24:20.060081 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"572ccac4826f9a0b5e8ba15133a9fba334c2cab8cbff4c932d291ed16a4ca0da"} err="failed to get container status \"572ccac4826f9a0b5e8ba15133a9fba334c2cab8cbff4c932d291ed16a4ca0da\": rpc error: code = NotFound desc = could not find container \"572ccac4826f9a0b5e8ba15133a9fba334c2cab8cbff4c932d291ed16a4ca0da\": container with ID starting with 572ccac4826f9a0b5e8ba15133a9fba334c2cab8cbff4c932d291ed16a4ca0da not found: ID does not exist"
Feb 27 16:24:20 crc kubenswrapper[4830]: I0227 16:24:20.060112 4830 scope.go:117] "RemoveContainer" containerID="22d1ae98f4a4b0688e2cca6a74c96037de865e0379e49caf928586c7a5d50b3f"
Feb 27 16:24:20 crc kubenswrapper[4830]: E0227 16:24:20.060525 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"22d1ae98f4a4b0688e2cca6a74c96037de865e0379e49caf928586c7a5d50b3f\": container with ID starting with 22d1ae98f4a4b0688e2cca6a74c96037de865e0379e49caf928586c7a5d50b3f not found: ID does not exist" containerID="22d1ae98f4a4b0688e2cca6a74c96037de865e0379e49caf928586c7a5d50b3f"
Feb 27 16:24:20 crc kubenswrapper[4830]: I0227 16:24:20.060563 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22d1ae98f4a4b0688e2cca6a74c96037de865e0379e49caf928586c7a5d50b3f"} err="failed to get container status \"22d1ae98f4a4b0688e2cca6a74c96037de865e0379e49caf928586c7a5d50b3f\": rpc error: code = NotFound desc = could not find container \"22d1ae98f4a4b0688e2cca6a74c96037de865e0379e49caf928586c7a5d50b3f\": container with ID starting with 22d1ae98f4a4b0688e2cca6a74c96037de865e0379e49caf928586c7a5d50b3f not found: ID does not exist"
Feb 27 16:24:20 crc kubenswrapper[4830]: I0227 16:24:20.775259 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01d484a1-3bf1-4f54-b0b1-7ef6adfe9a0d" path="/var/lib/kubelet/pods/01d484a1-3bf1-4f54-b0b1-7ef6adfe9a0d/volumes"
Feb 27 16:24:20 crc kubenswrapper[4830]: I0227 16:24:20.971102 4830 generic.go:334] "Generic (PLEG): container finished" podID="053107e9-9202-4a31-8c74-a54d8a3cf63b" containerID="cafbb0f6ab48c26e05a195187e4f3f7313af149377a3e6bbd5ab5aecbb8671f1" exitCode=0
Feb 27 16:24:20 crc kubenswrapper[4830]: I0227 16:24:20.971177 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-t7kgx" event={"ID":"053107e9-9202-4a31-8c74-a54d8a3cf63b","Type":"ContainerDied","Data":"cafbb0f6ab48c26e05a195187e4f3f7313af149377a3e6bbd5ab5aecbb8671f1"}
Feb 27 16:24:21 crc kubenswrapper[4830]: I0227 16:24:21.364727 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-86ddb6bd46-tlhc9"
Feb 27 16:24:21 crc kubenswrapper[4830]: I0227 16:24:21.983197 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-t7kgx" event={"ID":"053107e9-9202-4a31-8c74-a54d8a3cf63b","Type":"ContainerStarted","Data":"4cfb389100a6b572faad48f8bd9a92790cd199011fd8e2a2b5a8fa423ebab524"}
Feb 27 16:24:21 crc kubenswrapper[4830]: I0227 16:24:21.983750 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-t7kgx" event={"ID":"053107e9-9202-4a31-8c74-a54d8a3cf63b","Type":"ContainerStarted","Data":"e34be5015548ab34893391cf9cc592eb4c74a015d105dd0f7b9d09b0065003f7"}
Feb 27 16:24:21 crc kubenswrapper[4830]: I0227 16:24:21.983764 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-t7kgx" event={"ID":"053107e9-9202-4a31-8c74-a54d8a3cf63b","Type":"ContainerStarted","Data":"2b8763f3d2f83490e3e4d691bd409d601795e70ec6146b13368912e6e4eb11a4"}
Feb 27 16:24:21 crc kubenswrapper[4830]: I0227 16:24:21.983780 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-t7kgx" event={"ID":"053107e9-9202-4a31-8c74-a54d8a3cf63b","Type":"ContainerStarted","Data":"50ddefd1a869cb180e1c90b02f5eb909edcf173bba14ba6421e3c714842d1b17"}
Feb 27 16:24:22 crc kubenswrapper[4830]: I0227 16:24:22.255454 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-skvmw"
Feb 27 16:24:22 crc kubenswrapper[4830]: I0227 16:24:22.993525 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-t7kgx" event={"ID":"053107e9-9202-4a31-8c74-a54d8a3cf63b","Type":"ContainerStarted","Data":"e813f59a36306d13f7b41dee673f917a6dea8c46585c2d34e3614e8190cede76"}
Feb 27 16:24:23 crc kubenswrapper[4830]: I0227 16:24:23.937454 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5qd9b8"]
Feb 27 16:24:23 crc kubenswrapper[4830]: E0227 16:24:23.938233 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01d484a1-3bf1-4f54-b0b1-7ef6adfe9a0d" containerName="extract-utilities"
Feb 27 16:24:23 crc kubenswrapper[4830]: I0227 16:24:23.938263 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="01d484a1-3bf1-4f54-b0b1-7ef6adfe9a0d" containerName="extract-utilities"
Feb 27 16:24:23 crc kubenswrapper[4830]: E0227 16:24:23.938287 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01d484a1-3bf1-4f54-b0b1-7ef6adfe9a0d" containerName="extract-content"
Feb 27 16:24:23 crc kubenswrapper[4830]: I0227 16:24:23.938300 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="01d484a1-3bf1-4f54-b0b1-7ef6adfe9a0d" containerName="extract-content"
Feb 27 16:24:23 crc kubenswrapper[4830]: E0227 16:24:23.938323 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01d484a1-3bf1-4f54-b0b1-7ef6adfe9a0d" containerName="registry-server"
Feb 27 16:24:23 crc kubenswrapper[4830]: I0227 16:24:23.938336 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="01d484a1-3bf1-4f54-b0b1-7ef6adfe9a0d" containerName="registry-server"
Feb 27 16:24:23 crc kubenswrapper[4830]: I0227 16:24:23.938520 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="01d484a1-3bf1-4f54-b0b1-7ef6adfe9a0d" containerName="registry-server"
Feb 27 16:24:23 crc kubenswrapper[4830]: I0227 16:24:23.940004 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5qd9b8"
Feb 27 16:24:23 crc kubenswrapper[4830]: I0227 16:24:23.942202 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Feb 27 16:24:23 crc kubenswrapper[4830]: I0227 16:24:23.950995 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5qd9b8"]
Feb 27 16:24:24 crc kubenswrapper[4830]: I0227 16:24:24.002453 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-t7kgx" event={"ID":"053107e9-9202-4a31-8c74-a54d8a3cf63b","Type":"ContainerStarted","Data":"112f854d39797c1c7069889c189d96bd1c1988d82657120839ce0910f7dabc6a"}
Feb 27 16:24:24 crc kubenswrapper[4830]: I0227 16:24:24.002622 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-t7kgx"
Feb 27 16:24:24 crc kubenswrapper[4830]: I0227 16:24:24.058878 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-t7kgx" podStartSLOduration=6.427745301 podStartE2EDuration="14.05886232s" podCreationTimestamp="2026-02-27 16:24:10 +0000 UTC" firstStartedPulling="2026-02-27 16:24:10.822301447 +0000 UTC m=+1046.911573930" lastFinishedPulling="2026-02-27 16:24:18.453418476 +0000 UTC m=+1054.542690949" observedRunningTime="2026-02-27 16:24:24.056621183 +0000 UTC m=+1060.145893646" watchObservedRunningTime="2026-02-27 16:24:24.05886232 +0000 UTC m=+1060.148134783"
Feb 27 16:24:24 crc kubenswrapper[4830]: I0227 16:24:24.129447 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a8fb3e00-3a8c-4ffd-9638-e1d738fc1651-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5qd9b8\" (UID: \"a8fb3e00-3a8c-4ffd-9638-e1d738fc1651\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5qd9b8"
Feb 27 16:24:24 crc kubenswrapper[4830]: I0227 16:24:24.129508 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhtp6\" (UniqueName: \"kubernetes.io/projected/a8fb3e00-3a8c-4ffd-9638-e1d738fc1651-kube-api-access-fhtp6\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5qd9b8\" (UID: \"a8fb3e00-3a8c-4ffd-9638-e1d738fc1651\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5qd9b8"
Feb 27 16:24:24 crc kubenswrapper[4830]: I0227 16:24:24.129776 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a8fb3e00-3a8c-4ffd-9638-e1d738fc1651-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5qd9b8\" (UID: \"a8fb3e00-3a8c-4ffd-9638-e1d738fc1651\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5qd9b8"
Feb 27 16:24:24 crc kubenswrapper[4830]: I0227 16:24:24.230876 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a8fb3e00-3a8c-4ffd-9638-e1d738fc1651-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5qd9b8\" (UID: \"a8fb3e00-3a8c-4ffd-9638-e1d738fc1651\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5qd9b8"
Feb 27 16:24:24 crc kubenswrapper[4830]: I0227 16:24:24.230988 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhtp6\" (UniqueName: \"kubernetes.io/projected/a8fb3e00-3a8c-4ffd-9638-e1d738fc1651-kube-api-access-fhtp6\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5qd9b8\" (UID: \"a8fb3e00-3a8c-4ffd-9638-e1d738fc1651\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5qd9b8"
Feb 27 16:24:24 crc kubenswrapper[4830]: I0227 16:24:24.231057 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a8fb3e00-3a8c-4ffd-9638-e1d738fc1651-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5qd9b8\" (UID: \"a8fb3e00-3a8c-4ffd-9638-e1d738fc1651\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5qd9b8"
Feb 27 16:24:24 crc kubenswrapper[4830]: I0227 16:24:24.231639 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a8fb3e00-3a8c-4ffd-9638-e1d738fc1651-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5qd9b8\" (UID: \"a8fb3e00-3a8c-4ffd-9638-e1d738fc1651\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5qd9b8"
Feb 27 16:24:24 crc kubenswrapper[4830]: I0227 16:24:24.231824 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a8fb3e00-3a8c-4ffd-9638-e1d738fc1651-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5qd9b8\" (UID: \"a8fb3e00-3a8c-4ffd-9638-e1d738fc1651\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5qd9b8"
Feb 27 16:24:24 crc kubenswrapper[4830]: I0227 16:24:24.265506 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhtp6\" (UniqueName: \"kubernetes.io/projected/a8fb3e00-3a8c-4ffd-9638-e1d738fc1651-kube-api-access-fhtp6\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5qd9b8\" (UID: \"a8fb3e00-3a8c-4ffd-9638-e1d738fc1651\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5qd9b8"
Feb 27 16:24:24 crc kubenswrapper[4830]: I0227 16:24:24.312797 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5qd9b8"
Feb 27 16:24:24 crc kubenswrapper[4830]: I0227 16:24:24.776684 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5qd9b8"]
Feb 27 16:24:25 crc kubenswrapper[4830]: I0227 16:24:25.011239 4830 generic.go:334] "Generic (PLEG): container finished" podID="a8fb3e00-3a8c-4ffd-9638-e1d738fc1651" containerID="032ee5cd814266aa57d023c8439022e606e1c575f040b68a12b1a64187782e4b" exitCode=0
Feb 27 16:24:25 crc kubenswrapper[4830]: I0227 16:24:25.011311 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5qd9b8" event={"ID":"a8fb3e00-3a8c-4ffd-9638-e1d738fc1651","Type":"ContainerDied","Data":"032ee5cd814266aa57d023c8439022e606e1c575f040b68a12b1a64187782e4b"}
Feb 27 16:24:25 crc kubenswrapper[4830]: I0227 16:24:25.011374 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5qd9b8" event={"ID":"a8fb3e00-3a8c-4ffd-9638-e1d738fc1651","Type":"ContainerStarted","Data":"82396102f0e4687f33d51fc4d61a237fa84ed99bff687c2fe90a1abd25e4db5c"}
Feb 27 16:24:25 crc kubenswrapper[4830]: I0227 16:24:25.699988 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-t7kgx"
Feb 27 16:24:25 crc kubenswrapper[4830]: I0227 16:24:25.768624 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-t7kgx"
Feb 27 16:24:26 crc kubenswrapper[4830]: I0227 16:24:26.419030 4830 scope.go:117] "RemoveContainer" containerID="e2d6e44f8d67831444414ecc436155070fa81b8ab9b4f4dbc3aa08611cd8b99e"
Feb 27 16:24:29 crc kubenswrapper[4830]: I0227 16:24:29.047329 4830 generic.go:334] "Generic (PLEG): container finished" podID="a8fb3e00-3a8c-4ffd-9638-e1d738fc1651" containerID="4bacbeaf177d4b7efea08e90b9dfa5dacd46205bd734ad1cb3415aa11d090970" exitCode=0
Feb 27 16:24:29 crc kubenswrapper[4830]: I0227 16:24:29.047472 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5qd9b8" event={"ID":"a8fb3e00-3a8c-4ffd-9638-e1d738fc1651","Type":"ContainerDied","Data":"4bacbeaf177d4b7efea08e90b9dfa5dacd46205bd734ad1cb3415aa11d090970"}
Feb 27 16:24:30 crc kubenswrapper[4830]: I0227 16:24:30.056512 4830 generic.go:334] "Generic (PLEG): container finished" podID="a8fb3e00-3a8c-4ffd-9638-e1d738fc1651" containerID="b18e33860c5cd45b21b3922987ab1d8a87ad1c28da1c6d55c769cfa4e0285893" exitCode=0
Feb 27 16:24:30 crc kubenswrapper[4830]: I0227 16:24:30.056577 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5qd9b8" event={"ID":"a8fb3e00-3a8c-4ffd-9638-e1d738fc1651","Type":"ContainerDied","Data":"b18e33860c5cd45b21b3922987ab1d8a87ad1c28da1c6d55c769cfa4e0285893"}
Feb 27 16:24:31 crc kubenswrapper[4830]: I0227 16:24:31.290888 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-mcd8b"
Feb 27 16:24:31 crc kubenswrapper[4830]: I0227 16:24:31.398564 4830 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5qd9b8" Feb 27 16:24:31 crc kubenswrapper[4830]: I0227 16:24:31.460633 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a8fb3e00-3a8c-4ffd-9638-e1d738fc1651-util\") pod \"a8fb3e00-3a8c-4ffd-9638-e1d738fc1651\" (UID: \"a8fb3e00-3a8c-4ffd-9638-e1d738fc1651\") " Feb 27 16:24:31 crc kubenswrapper[4830]: I0227 16:24:31.460717 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fhtp6\" (UniqueName: \"kubernetes.io/projected/a8fb3e00-3a8c-4ffd-9638-e1d738fc1651-kube-api-access-fhtp6\") pod \"a8fb3e00-3a8c-4ffd-9638-e1d738fc1651\" (UID: \"a8fb3e00-3a8c-4ffd-9638-e1d738fc1651\") " Feb 27 16:24:31 crc kubenswrapper[4830]: I0227 16:24:31.460820 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a8fb3e00-3a8c-4ffd-9638-e1d738fc1651-bundle\") pod \"a8fb3e00-3a8c-4ffd-9638-e1d738fc1651\" (UID: \"a8fb3e00-3a8c-4ffd-9638-e1d738fc1651\") " Feb 27 16:24:31 crc kubenswrapper[4830]: I0227 16:24:31.461706 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a8fb3e00-3a8c-4ffd-9638-e1d738fc1651-bundle" (OuterVolumeSpecName: "bundle") pod "a8fb3e00-3a8c-4ffd-9638-e1d738fc1651" (UID: "a8fb3e00-3a8c-4ffd-9638-e1d738fc1651"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 16:24:31 crc kubenswrapper[4830]: I0227 16:24:31.469371 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8fb3e00-3a8c-4ffd-9638-e1d738fc1651-kube-api-access-fhtp6" (OuterVolumeSpecName: "kube-api-access-fhtp6") pod "a8fb3e00-3a8c-4ffd-9638-e1d738fc1651" (UID: "a8fb3e00-3a8c-4ffd-9638-e1d738fc1651"). InnerVolumeSpecName "kube-api-access-fhtp6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:24:31 crc kubenswrapper[4830]: I0227 16:24:31.476166 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a8fb3e00-3a8c-4ffd-9638-e1d738fc1651-util" (OuterVolumeSpecName: "util") pod "a8fb3e00-3a8c-4ffd-9638-e1d738fc1651" (UID: "a8fb3e00-3a8c-4ffd-9638-e1d738fc1651"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 16:24:31 crc kubenswrapper[4830]: I0227 16:24:31.562983 4830 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a8fb3e00-3a8c-4ffd-9638-e1d738fc1651-util\") on node \"crc\" DevicePath \"\"" Feb 27 16:24:31 crc kubenswrapper[4830]: I0227 16:24:31.563021 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fhtp6\" (UniqueName: \"kubernetes.io/projected/a8fb3e00-3a8c-4ffd-9638-e1d738fc1651-kube-api-access-fhtp6\") on node \"crc\" DevicePath \"\"" Feb 27 16:24:31 crc kubenswrapper[4830]: I0227 16:24:31.563036 4830 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a8fb3e00-3a8c-4ffd-9638-e1d738fc1651-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 16:24:32 crc kubenswrapper[4830]: I0227 16:24:32.071195 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5qd9b8" event={"ID":"a8fb3e00-3a8c-4ffd-9638-e1d738fc1651","Type":"ContainerDied","Data":"82396102f0e4687f33d51fc4d61a237fa84ed99bff687c2fe90a1abd25e4db5c"} Feb 27 16:24:32 crc kubenswrapper[4830]: I0227 16:24:32.071474 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="82396102f0e4687f33d51fc4d61a237fa84ed99bff687c2fe90a1abd25e4db5c" Feb 27 16:24:32 crc kubenswrapper[4830]: I0227 16:24:32.071449 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5qd9b8" Feb 27 16:24:33 crc kubenswrapper[4830]: I0227 16:24:33.468770 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-cgjpn"] Feb 27 16:24:33 crc kubenswrapper[4830]: E0227 16:24:33.470612 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8fb3e00-3a8c-4ffd-9638-e1d738fc1651" containerName="util" Feb 27 16:24:33 crc kubenswrapper[4830]: I0227 16:24:33.470759 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8fb3e00-3a8c-4ffd-9638-e1d738fc1651" containerName="util" Feb 27 16:24:33 crc kubenswrapper[4830]: E0227 16:24:33.470884 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8fb3e00-3a8c-4ffd-9638-e1d738fc1651" containerName="pull" Feb 27 16:24:33 crc kubenswrapper[4830]: I0227 16:24:33.478023 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8fb3e00-3a8c-4ffd-9638-e1d738fc1651" containerName="pull" Feb 27 16:24:33 crc kubenswrapper[4830]: E0227 16:24:33.478080 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8fb3e00-3a8c-4ffd-9638-e1d738fc1651" containerName="extract" Feb 27 16:24:33 crc kubenswrapper[4830]: I0227 16:24:33.478097 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8fb3e00-3a8c-4ffd-9638-e1d738fc1651" containerName="extract" Feb 27 16:24:33 crc kubenswrapper[4830]: I0227 16:24:33.478461 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8fb3e00-3a8c-4ffd-9638-e1d738fc1651" containerName="extract" Feb 27 16:24:33 crc kubenswrapper[4830]: I0227 16:24:33.479940 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-cgjpn" Feb 27 16:24:33 crc kubenswrapper[4830]: I0227 16:24:33.486146 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cgjpn"] Feb 27 16:24:33 crc kubenswrapper[4830]: I0227 16:24:33.624763 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ff1e8d8-4559-4849-9e50-ed97beeba7af-utilities\") pod \"certified-operators-cgjpn\" (UID: \"8ff1e8d8-4559-4849-9e50-ed97beeba7af\") " pod="openshift-marketplace/certified-operators-cgjpn" Feb 27 16:24:33 crc kubenswrapper[4830]: I0227 16:24:33.624854 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwv5w\" (UniqueName: \"kubernetes.io/projected/8ff1e8d8-4559-4849-9e50-ed97beeba7af-kube-api-access-rwv5w\") pod \"certified-operators-cgjpn\" (UID: \"8ff1e8d8-4559-4849-9e50-ed97beeba7af\") " pod="openshift-marketplace/certified-operators-cgjpn" Feb 27 16:24:33 crc kubenswrapper[4830]: I0227 16:24:33.624933 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ff1e8d8-4559-4849-9e50-ed97beeba7af-catalog-content\") pod \"certified-operators-cgjpn\" (UID: \"8ff1e8d8-4559-4849-9e50-ed97beeba7af\") " pod="openshift-marketplace/certified-operators-cgjpn" Feb 27 16:24:33 crc kubenswrapper[4830]: I0227 16:24:33.726109 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ff1e8d8-4559-4849-9e50-ed97beeba7af-utilities\") pod \"certified-operators-cgjpn\" (UID: \"8ff1e8d8-4559-4849-9e50-ed97beeba7af\") " pod="openshift-marketplace/certified-operators-cgjpn" Feb 27 16:24:33 crc kubenswrapper[4830]: I0227 16:24:33.726167 4830 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-rwv5w\" (UniqueName: \"kubernetes.io/projected/8ff1e8d8-4559-4849-9e50-ed97beeba7af-kube-api-access-rwv5w\") pod \"certified-operators-cgjpn\" (UID: \"8ff1e8d8-4559-4849-9e50-ed97beeba7af\") " pod="openshift-marketplace/certified-operators-cgjpn" Feb 27 16:24:33 crc kubenswrapper[4830]: I0227 16:24:33.726202 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ff1e8d8-4559-4849-9e50-ed97beeba7af-catalog-content\") pod \"certified-operators-cgjpn\" (UID: \"8ff1e8d8-4559-4849-9e50-ed97beeba7af\") " pod="openshift-marketplace/certified-operators-cgjpn" Feb 27 16:24:33 crc kubenswrapper[4830]: I0227 16:24:33.726696 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ff1e8d8-4559-4849-9e50-ed97beeba7af-utilities\") pod \"certified-operators-cgjpn\" (UID: \"8ff1e8d8-4559-4849-9e50-ed97beeba7af\") " pod="openshift-marketplace/certified-operators-cgjpn" Feb 27 16:24:33 crc kubenswrapper[4830]: I0227 16:24:33.726708 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ff1e8d8-4559-4849-9e50-ed97beeba7af-catalog-content\") pod \"certified-operators-cgjpn\" (UID: \"8ff1e8d8-4559-4849-9e50-ed97beeba7af\") " pod="openshift-marketplace/certified-operators-cgjpn" Feb 27 16:24:33 crc kubenswrapper[4830]: I0227 16:24:33.755810 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rwv5w\" (UniqueName: \"kubernetes.io/projected/8ff1e8d8-4559-4849-9e50-ed97beeba7af-kube-api-access-rwv5w\") pod \"certified-operators-cgjpn\" (UID: \"8ff1e8d8-4559-4849-9e50-ed97beeba7af\") " pod="openshift-marketplace/certified-operators-cgjpn" Feb 27 16:24:33 crc kubenswrapper[4830]: I0227 16:24:33.835457 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-cgjpn" Feb 27 16:24:34 crc kubenswrapper[4830]: I0227 16:24:34.163907 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cgjpn"] Feb 27 16:24:35 crc kubenswrapper[4830]: I0227 16:24:35.100711 4830 generic.go:334] "Generic (PLEG): container finished" podID="8ff1e8d8-4559-4849-9e50-ed97beeba7af" containerID="022f033d4f557ed60e552f59d36eb0b79044c702a2846b2289f2f9992478af34" exitCode=0 Feb 27 16:24:35 crc kubenswrapper[4830]: I0227 16:24:35.100860 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cgjpn" event={"ID":"8ff1e8d8-4559-4849-9e50-ed97beeba7af","Type":"ContainerDied","Data":"022f033d4f557ed60e552f59d36eb0b79044c702a2846b2289f2f9992478af34"} Feb 27 16:24:35 crc kubenswrapper[4830]: I0227 16:24:35.101082 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cgjpn" event={"ID":"8ff1e8d8-4559-4849-9e50-ed97beeba7af","Type":"ContainerStarted","Data":"363695dc7d03da74d1219c1a98d965692265dea051d344f10a3d368b60b53bdc"} Feb 27 16:24:36 crc kubenswrapper[4830]: I0227 16:24:36.110392 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cgjpn" event={"ID":"8ff1e8d8-4559-4849-9e50-ed97beeba7af","Type":"ContainerStarted","Data":"53e54faf3a811750187ae03d7728e1285e690f85b74165fe4331b3392e2cabe1"} Feb 27 16:24:36 crc kubenswrapper[4830]: I0227 16:24:36.267406 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-mvj69"] Feb 27 16:24:36 crc kubenswrapper[4830]: I0227 16:24:36.268439 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-mvj69" Feb 27 16:24:36 crc kubenswrapper[4830]: I0227 16:24:36.272387 4830 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager-operator"/"cert-manager-operator-controller-manager-dockercfg-crvhl" Feb 27 16:24:36 crc kubenswrapper[4830]: I0227 16:24:36.272756 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"kube-root-ca.crt" Feb 27 16:24:36 crc kubenswrapper[4830]: I0227 16:24:36.273047 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"openshift-service-ca.crt" Feb 27 16:24:36 crc kubenswrapper[4830]: I0227 16:24:36.290474 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-mvj69"] Feb 27 16:24:36 crc kubenswrapper[4830]: I0227 16:24:36.362677 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfbns\" (UniqueName: \"kubernetes.io/projected/52619f3a-9522-4097-97d2-032caec65e26-kube-api-access-dfbns\") pod \"cert-manager-operator-controller-manager-66c8bdd694-mvj69\" (UID: \"52619f3a-9522-4097-97d2-032caec65e26\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-mvj69" Feb 27 16:24:36 crc kubenswrapper[4830]: I0227 16:24:36.362752 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/52619f3a-9522-4097-97d2-032caec65e26-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-mvj69\" (UID: \"52619f3a-9522-4097-97d2-032caec65e26\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-mvj69" Feb 27 16:24:36 crc kubenswrapper[4830]: I0227 16:24:36.464055 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-dfbns\" (UniqueName: \"kubernetes.io/projected/52619f3a-9522-4097-97d2-032caec65e26-kube-api-access-dfbns\") pod \"cert-manager-operator-controller-manager-66c8bdd694-mvj69\" (UID: \"52619f3a-9522-4097-97d2-032caec65e26\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-mvj69" Feb 27 16:24:36 crc kubenswrapper[4830]: I0227 16:24:36.464331 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/52619f3a-9522-4097-97d2-032caec65e26-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-mvj69\" (UID: \"52619f3a-9522-4097-97d2-032caec65e26\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-mvj69" Feb 27 16:24:36 crc kubenswrapper[4830]: I0227 16:24:36.464750 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/52619f3a-9522-4097-97d2-032caec65e26-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-mvj69\" (UID: \"52619f3a-9522-4097-97d2-032caec65e26\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-mvj69" Feb 27 16:24:36 crc kubenswrapper[4830]: I0227 16:24:36.487018 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dfbns\" (UniqueName: \"kubernetes.io/projected/52619f3a-9522-4097-97d2-032caec65e26-kube-api-access-dfbns\") pod \"cert-manager-operator-controller-manager-66c8bdd694-mvj69\" (UID: \"52619f3a-9522-4097-97d2-032caec65e26\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-mvj69" Feb 27 16:24:36 crc kubenswrapper[4830]: I0227 16:24:36.582916 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-mvj69" Feb 27 16:24:36 crc kubenswrapper[4830]: I0227 16:24:36.879145 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-mvj69"] Feb 27 16:24:36 crc kubenswrapper[4830]: W0227 16:24:36.887128 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod52619f3a_9522_4097_97d2_032caec65e26.slice/crio-c527d2cba1ea0661b8a5b3f5d43861dd7afaa201569ea1c3383c496ee72c5ee9 WatchSource:0}: Error finding container c527d2cba1ea0661b8a5b3f5d43861dd7afaa201569ea1c3383c496ee72c5ee9: Status 404 returned error can't find the container with id c527d2cba1ea0661b8a5b3f5d43861dd7afaa201569ea1c3383c496ee72c5ee9 Feb 27 16:24:37 crc kubenswrapper[4830]: I0227 16:24:37.117796 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-mvj69" event={"ID":"52619f3a-9522-4097-97d2-032caec65e26","Type":"ContainerStarted","Data":"c527d2cba1ea0661b8a5b3f5d43861dd7afaa201569ea1c3383c496ee72c5ee9"} Feb 27 16:24:37 crc kubenswrapper[4830]: I0227 16:24:37.119579 4830 generic.go:334] "Generic (PLEG): container finished" podID="8ff1e8d8-4559-4849-9e50-ed97beeba7af" containerID="53e54faf3a811750187ae03d7728e1285e690f85b74165fe4331b3392e2cabe1" exitCode=0 Feb 27 16:24:37 crc kubenswrapper[4830]: I0227 16:24:37.119626 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cgjpn" event={"ID":"8ff1e8d8-4559-4849-9e50-ed97beeba7af","Type":"ContainerDied","Data":"53e54faf3a811750187ae03d7728e1285e690f85b74165fe4331b3392e2cabe1"} Feb 27 16:24:38 crc kubenswrapper[4830]: I0227 16:24:38.132206 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cgjpn" 
event={"ID":"8ff1e8d8-4559-4849-9e50-ed97beeba7af","Type":"ContainerStarted","Data":"000fa361406936737f4b4a626bcf6f4993bf3ceb2e4d03f290f013e1b8173fb8"} Feb 27 16:24:38 crc kubenswrapper[4830]: I0227 16:24:38.168826 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-cgjpn" podStartSLOduration=2.606610078 podStartE2EDuration="5.168806088s" podCreationTimestamp="2026-02-27 16:24:33 +0000 UTC" firstStartedPulling="2026-02-27 16:24:35.102711112 +0000 UTC m=+1071.191983605" lastFinishedPulling="2026-02-27 16:24:37.664907122 +0000 UTC m=+1073.754179615" observedRunningTime="2026-02-27 16:24:38.165518146 +0000 UTC m=+1074.254790629" watchObservedRunningTime="2026-02-27 16:24:38.168806088 +0000 UTC m=+1074.258078561" Feb 27 16:24:40 crc kubenswrapper[4830]: I0227 16:24:40.144605 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-mvj69" event={"ID":"52619f3a-9522-4097-97d2-032caec65e26","Type":"ContainerStarted","Data":"47230c84491181b2e1f1607ea642ca739037f3659513c366c0a8b7e317114e7e"} Feb 27 16:24:40 crc kubenswrapper[4830]: I0227 16:24:40.166634 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-mvj69" podStartSLOduration=1.098784808 podStartE2EDuration="4.166610607s" podCreationTimestamp="2026-02-27 16:24:36 +0000 UTC" firstStartedPulling="2026-02-27 16:24:36.889399535 +0000 UTC m=+1072.978671998" lastFinishedPulling="2026-02-27 16:24:39.957225334 +0000 UTC m=+1076.046497797" observedRunningTime="2026-02-27 16:24:40.163391896 +0000 UTC m=+1076.252664369" watchObservedRunningTime="2026-02-27 16:24:40.166610607 +0000 UTC m=+1076.255883120" Feb 27 16:24:40 crc kubenswrapper[4830]: I0227 16:24:40.706297 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-t7kgx" Feb 27 16:24:43 
crc kubenswrapper[4830]: I0227 16:24:43.149649 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-fr5hs"] Feb 27 16:24:43 crc kubenswrapper[4830]: I0227 16:24:43.150808 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-6888856db4-fr5hs" Feb 27 16:24:43 crc kubenswrapper[4830]: I0227 16:24:43.155068 4830 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-ktq8r" Feb 27 16:24:43 crc kubenswrapper[4830]: I0227 16:24:43.155744 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Feb 27 16:24:43 crc kubenswrapper[4830]: I0227 16:24:43.157294 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Feb 27 16:24:43 crc kubenswrapper[4830]: I0227 16:24:43.166088 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-fr5hs"] Feb 27 16:24:43 crc kubenswrapper[4830]: I0227 16:24:43.263561 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b04573a0-1535-4606-8551-ba1c3a53f933-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-fr5hs\" (UID: \"b04573a0-1535-4606-8551-ba1c3a53f933\") " pod="cert-manager/cert-manager-webhook-6888856db4-fr5hs" Feb 27 16:24:43 crc kubenswrapper[4830]: I0227 16:24:43.263637 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vl4sl\" (UniqueName: \"kubernetes.io/projected/b04573a0-1535-4606-8551-ba1c3a53f933-kube-api-access-vl4sl\") pod \"cert-manager-webhook-6888856db4-fr5hs\" (UID: \"b04573a0-1535-4606-8551-ba1c3a53f933\") " pod="cert-manager/cert-manager-webhook-6888856db4-fr5hs" Feb 27 16:24:43 crc kubenswrapper[4830]: I0227 16:24:43.364912 4830 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b04573a0-1535-4606-8551-ba1c3a53f933-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-fr5hs\" (UID: \"b04573a0-1535-4606-8551-ba1c3a53f933\") " pod="cert-manager/cert-manager-webhook-6888856db4-fr5hs" Feb 27 16:24:43 crc kubenswrapper[4830]: I0227 16:24:43.365003 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vl4sl\" (UniqueName: \"kubernetes.io/projected/b04573a0-1535-4606-8551-ba1c3a53f933-kube-api-access-vl4sl\") pod \"cert-manager-webhook-6888856db4-fr5hs\" (UID: \"b04573a0-1535-4606-8551-ba1c3a53f933\") " pod="cert-manager/cert-manager-webhook-6888856db4-fr5hs" Feb 27 16:24:43 crc kubenswrapper[4830]: I0227 16:24:43.387507 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vl4sl\" (UniqueName: \"kubernetes.io/projected/b04573a0-1535-4606-8551-ba1c3a53f933-kube-api-access-vl4sl\") pod \"cert-manager-webhook-6888856db4-fr5hs\" (UID: \"b04573a0-1535-4606-8551-ba1c3a53f933\") " pod="cert-manager/cert-manager-webhook-6888856db4-fr5hs" Feb 27 16:24:43 crc kubenswrapper[4830]: I0227 16:24:43.388604 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b04573a0-1535-4606-8551-ba1c3a53f933-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-fr5hs\" (UID: \"b04573a0-1535-4606-8551-ba1c3a53f933\") " pod="cert-manager/cert-manager-webhook-6888856db4-fr5hs" Feb 27 16:24:43 crc kubenswrapper[4830]: I0227 16:24:43.494277 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-6888856db4-fr5hs" Feb 27 16:24:43 crc kubenswrapper[4830]: I0227 16:24:43.764302 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-fr5hs"] Feb 27 16:24:43 crc kubenswrapper[4830]: W0227 16:24:43.769842 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb04573a0_1535_4606_8551_ba1c3a53f933.slice/crio-98ae55ab1d44b15374ca9a07e5d7143fa666f6053d6b136b5898de263af12e1e WatchSource:0}: Error finding container 98ae55ab1d44b15374ca9a07e5d7143fa666f6053d6b136b5898de263af12e1e: Status 404 returned error can't find the container with id 98ae55ab1d44b15374ca9a07e5d7143fa666f6053d6b136b5898de263af12e1e Feb 27 16:24:43 crc kubenswrapper[4830]: I0227 16:24:43.836331 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-cgjpn" Feb 27 16:24:43 crc kubenswrapper[4830]: I0227 16:24:43.837434 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-cgjpn" Feb 27 16:24:43 crc kubenswrapper[4830]: I0227 16:24:43.891561 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-cgjpn" Feb 27 16:24:44 crc kubenswrapper[4830]: I0227 16:24:44.175584 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-6888856db4-fr5hs" event={"ID":"b04573a0-1535-4606-8551-ba1c3a53f933","Type":"ContainerStarted","Data":"98ae55ab1d44b15374ca9a07e5d7143fa666f6053d6b136b5898de263af12e1e"} Feb 27 16:24:44 crc kubenswrapper[4830]: I0227 16:24:44.229205 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-cgjpn" Feb 27 16:24:44 crc kubenswrapper[4830]: I0227 16:24:44.453853 4830 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openshift-marketplace/certified-operators-cgjpn"] Feb 27 16:24:46 crc kubenswrapper[4830]: I0227 16:24:46.195154 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-cgjpn" podUID="8ff1e8d8-4559-4849-9e50-ed97beeba7af" containerName="registry-server" containerID="cri-o://000fa361406936737f4b4a626bcf6f4993bf3ceb2e4d03f290f013e1b8173fb8" gracePeriod=2 Feb 27 16:24:46 crc kubenswrapper[4830]: I0227 16:24:46.626690 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cgjpn" Feb 27 16:24:46 crc kubenswrapper[4830]: I0227 16:24:46.710084 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ff1e8d8-4559-4849-9e50-ed97beeba7af-catalog-content\") pod \"8ff1e8d8-4559-4849-9e50-ed97beeba7af\" (UID: \"8ff1e8d8-4559-4849-9e50-ed97beeba7af\") " Feb 27 16:24:46 crc kubenswrapper[4830]: I0227 16:24:46.710204 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rwv5w\" (UniqueName: \"kubernetes.io/projected/8ff1e8d8-4559-4849-9e50-ed97beeba7af-kube-api-access-rwv5w\") pod \"8ff1e8d8-4559-4849-9e50-ed97beeba7af\" (UID: \"8ff1e8d8-4559-4849-9e50-ed97beeba7af\") " Feb 27 16:24:46 crc kubenswrapper[4830]: I0227 16:24:46.710281 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ff1e8d8-4559-4849-9e50-ed97beeba7af-utilities\") pod \"8ff1e8d8-4559-4849-9e50-ed97beeba7af\" (UID: \"8ff1e8d8-4559-4849-9e50-ed97beeba7af\") " Feb 27 16:24:46 crc kubenswrapper[4830]: I0227 16:24:46.711378 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ff1e8d8-4559-4849-9e50-ed97beeba7af-utilities" (OuterVolumeSpecName: "utilities") pod "8ff1e8d8-4559-4849-9e50-ed97beeba7af" 
(UID: "8ff1e8d8-4559-4849-9e50-ed97beeba7af"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 16:24:46 crc kubenswrapper[4830]: I0227 16:24:46.717254 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ff1e8d8-4559-4849-9e50-ed97beeba7af-kube-api-access-rwv5w" (OuterVolumeSpecName: "kube-api-access-rwv5w") pod "8ff1e8d8-4559-4849-9e50-ed97beeba7af" (UID: "8ff1e8d8-4559-4849-9e50-ed97beeba7af"). InnerVolumeSpecName "kube-api-access-rwv5w". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:24:46 crc kubenswrapper[4830]: I0227 16:24:46.780819 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ff1e8d8-4559-4849-9e50-ed97beeba7af-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8ff1e8d8-4559-4849-9e50-ed97beeba7af" (UID: "8ff1e8d8-4559-4849-9e50-ed97beeba7af"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 16:24:46 crc kubenswrapper[4830]: I0227 16:24:46.811527 4830 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ff1e8d8-4559-4849-9e50-ed97beeba7af-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 16:24:46 crc kubenswrapper[4830]: I0227 16:24:46.811561 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rwv5w\" (UniqueName: \"kubernetes.io/projected/8ff1e8d8-4559-4849-9e50-ed97beeba7af-kube-api-access-rwv5w\") on node \"crc\" DevicePath \"\"" Feb 27 16:24:46 crc kubenswrapper[4830]: I0227 16:24:46.811577 4830 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ff1e8d8-4559-4849-9e50-ed97beeba7af-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 16:24:47 crc kubenswrapper[4830]: I0227 16:24:47.202013 4830 generic.go:334] "Generic (PLEG): container finished" 
podID="8ff1e8d8-4559-4849-9e50-ed97beeba7af" containerID="000fa361406936737f4b4a626bcf6f4993bf3ceb2e4d03f290f013e1b8173fb8" exitCode=0 Feb 27 16:24:47 crc kubenswrapper[4830]: I0227 16:24:47.202236 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cgjpn" event={"ID":"8ff1e8d8-4559-4849-9e50-ed97beeba7af","Type":"ContainerDied","Data":"000fa361406936737f4b4a626bcf6f4993bf3ceb2e4d03f290f013e1b8173fb8"} Feb 27 16:24:47 crc kubenswrapper[4830]: I0227 16:24:47.202401 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cgjpn" event={"ID":"8ff1e8d8-4559-4849-9e50-ed97beeba7af","Type":"ContainerDied","Data":"363695dc7d03da74d1219c1a98d965692265dea051d344f10a3d368b60b53bdc"} Feb 27 16:24:47 crc kubenswrapper[4830]: I0227 16:24:47.202423 4830 scope.go:117] "RemoveContainer" containerID="000fa361406936737f4b4a626bcf6f4993bf3ceb2e4d03f290f013e1b8173fb8" Feb 27 16:24:47 crc kubenswrapper[4830]: I0227 16:24:47.202360 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-cgjpn" Feb 27 16:24:47 crc kubenswrapper[4830]: I0227 16:24:47.244665 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cgjpn"] Feb 27 16:24:47 crc kubenswrapper[4830]: I0227 16:24:47.250563 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-cgjpn"] Feb 27 16:24:48 crc kubenswrapper[4830]: I0227 16:24:48.775235 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ff1e8d8-4559-4849-9e50-ed97beeba7af" path="/var/lib/kubelet/pods/8ff1e8d8-4559-4849-9e50-ed97beeba7af/volumes" Feb 27 16:24:49 crc kubenswrapper[4830]: I0227 16:24:49.496637 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-s29zc"] Feb 27 16:24:49 crc kubenswrapper[4830]: E0227 16:24:49.496905 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ff1e8d8-4559-4849-9e50-ed97beeba7af" containerName="extract-content" Feb 27 16:24:49 crc kubenswrapper[4830]: I0227 16:24:49.496921 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ff1e8d8-4559-4849-9e50-ed97beeba7af" containerName="extract-content" Feb 27 16:24:49 crc kubenswrapper[4830]: E0227 16:24:49.496937 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ff1e8d8-4559-4849-9e50-ed97beeba7af" containerName="registry-server" Feb 27 16:24:49 crc kubenswrapper[4830]: I0227 16:24:49.496962 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ff1e8d8-4559-4849-9e50-ed97beeba7af" containerName="registry-server" Feb 27 16:24:49 crc kubenswrapper[4830]: E0227 16:24:49.496984 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ff1e8d8-4559-4849-9e50-ed97beeba7af" containerName="extract-utilities" Feb 27 16:24:49 crc kubenswrapper[4830]: I0227 16:24:49.496992 4830 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="8ff1e8d8-4559-4849-9e50-ed97beeba7af" containerName="extract-utilities" Feb 27 16:24:49 crc kubenswrapper[4830]: I0227 16:24:49.497121 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ff1e8d8-4559-4849-9e50-ed97beeba7af" containerName="registry-server" Feb 27 16:24:49 crc kubenswrapper[4830]: I0227 16:24:49.497544 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-5545bd876-s29zc" Feb 27 16:24:49 crc kubenswrapper[4830]: I0227 16:24:49.502065 4830 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-kbqbs" Feb 27 16:24:49 crc kubenswrapper[4830]: I0227 16:24:49.517035 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-s29zc"] Feb 27 16:24:49 crc kubenswrapper[4830]: I0227 16:24:49.553985 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w489q\" (UniqueName: \"kubernetes.io/projected/bcd67c59-ad0e-4ca7-b11a-91a4f441ddb4-kube-api-access-w489q\") pod \"cert-manager-cainjector-5545bd876-s29zc\" (UID: \"bcd67c59-ad0e-4ca7-b11a-91a4f441ddb4\") " pod="cert-manager/cert-manager-cainjector-5545bd876-s29zc" Feb 27 16:24:49 crc kubenswrapper[4830]: I0227 16:24:49.554094 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bcd67c59-ad0e-4ca7-b11a-91a4f441ddb4-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-s29zc\" (UID: \"bcd67c59-ad0e-4ca7-b11a-91a4f441ddb4\") " pod="cert-manager/cert-manager-cainjector-5545bd876-s29zc" Feb 27 16:24:49 crc kubenswrapper[4830]: I0227 16:24:49.655163 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bcd67c59-ad0e-4ca7-b11a-91a4f441ddb4-bound-sa-token\") pod 
\"cert-manager-cainjector-5545bd876-s29zc\" (UID: \"bcd67c59-ad0e-4ca7-b11a-91a4f441ddb4\") " pod="cert-manager/cert-manager-cainjector-5545bd876-s29zc" Feb 27 16:24:49 crc kubenswrapper[4830]: I0227 16:24:49.655278 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w489q\" (UniqueName: \"kubernetes.io/projected/bcd67c59-ad0e-4ca7-b11a-91a4f441ddb4-kube-api-access-w489q\") pod \"cert-manager-cainjector-5545bd876-s29zc\" (UID: \"bcd67c59-ad0e-4ca7-b11a-91a4f441ddb4\") " pod="cert-manager/cert-manager-cainjector-5545bd876-s29zc" Feb 27 16:24:49 crc kubenswrapper[4830]: I0227 16:24:49.698573 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w489q\" (UniqueName: \"kubernetes.io/projected/bcd67c59-ad0e-4ca7-b11a-91a4f441ddb4-kube-api-access-w489q\") pod \"cert-manager-cainjector-5545bd876-s29zc\" (UID: \"bcd67c59-ad0e-4ca7-b11a-91a4f441ddb4\") " pod="cert-manager/cert-manager-cainjector-5545bd876-s29zc" Feb 27 16:24:49 crc kubenswrapper[4830]: I0227 16:24:49.705566 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bcd67c59-ad0e-4ca7-b11a-91a4f441ddb4-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-s29zc\" (UID: \"bcd67c59-ad0e-4ca7-b11a-91a4f441ddb4\") " pod="cert-manager/cert-manager-cainjector-5545bd876-s29zc" Feb 27 16:24:49 crc kubenswrapper[4830]: I0227 16:24:49.777278 4830 scope.go:117] "RemoveContainer" containerID="53e54faf3a811750187ae03d7728e1285e690f85b74165fe4331b3392e2cabe1" Feb 27 16:24:49 crc kubenswrapper[4830]: I0227 16:24:49.816435 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-5545bd876-s29zc" Feb 27 16:24:49 crc kubenswrapper[4830]: I0227 16:24:49.843692 4830 scope.go:117] "RemoveContainer" containerID="022f033d4f557ed60e552f59d36eb0b79044c702a2846b2289f2f9992478af34" Feb 27 16:24:49 crc kubenswrapper[4830]: I0227 16:24:49.869522 4830 scope.go:117] "RemoveContainer" containerID="000fa361406936737f4b4a626bcf6f4993bf3ceb2e4d03f290f013e1b8173fb8" Feb 27 16:24:49 crc kubenswrapper[4830]: E0227 16:24:49.870329 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"000fa361406936737f4b4a626bcf6f4993bf3ceb2e4d03f290f013e1b8173fb8\": container with ID starting with 000fa361406936737f4b4a626bcf6f4993bf3ceb2e4d03f290f013e1b8173fb8 not found: ID does not exist" containerID="000fa361406936737f4b4a626bcf6f4993bf3ceb2e4d03f290f013e1b8173fb8" Feb 27 16:24:49 crc kubenswrapper[4830]: I0227 16:24:49.870371 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"000fa361406936737f4b4a626bcf6f4993bf3ceb2e4d03f290f013e1b8173fb8"} err="failed to get container status \"000fa361406936737f4b4a626bcf6f4993bf3ceb2e4d03f290f013e1b8173fb8\": rpc error: code = NotFound desc = could not find container \"000fa361406936737f4b4a626bcf6f4993bf3ceb2e4d03f290f013e1b8173fb8\": container with ID starting with 000fa361406936737f4b4a626bcf6f4993bf3ceb2e4d03f290f013e1b8173fb8 not found: ID does not exist" Feb 27 16:24:49 crc kubenswrapper[4830]: I0227 16:24:49.870400 4830 scope.go:117] "RemoveContainer" containerID="53e54faf3a811750187ae03d7728e1285e690f85b74165fe4331b3392e2cabe1" Feb 27 16:24:49 crc kubenswrapper[4830]: E0227 16:24:49.870820 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"53e54faf3a811750187ae03d7728e1285e690f85b74165fe4331b3392e2cabe1\": container with ID starting with 
53e54faf3a811750187ae03d7728e1285e690f85b74165fe4331b3392e2cabe1 not found: ID does not exist" containerID="53e54faf3a811750187ae03d7728e1285e690f85b74165fe4331b3392e2cabe1" Feb 27 16:24:49 crc kubenswrapper[4830]: I0227 16:24:49.870848 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"53e54faf3a811750187ae03d7728e1285e690f85b74165fe4331b3392e2cabe1"} err="failed to get container status \"53e54faf3a811750187ae03d7728e1285e690f85b74165fe4331b3392e2cabe1\": rpc error: code = NotFound desc = could not find container \"53e54faf3a811750187ae03d7728e1285e690f85b74165fe4331b3392e2cabe1\": container with ID starting with 53e54faf3a811750187ae03d7728e1285e690f85b74165fe4331b3392e2cabe1 not found: ID does not exist" Feb 27 16:24:49 crc kubenswrapper[4830]: I0227 16:24:49.870873 4830 scope.go:117] "RemoveContainer" containerID="022f033d4f557ed60e552f59d36eb0b79044c702a2846b2289f2f9992478af34" Feb 27 16:24:49 crc kubenswrapper[4830]: E0227 16:24:49.871138 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"022f033d4f557ed60e552f59d36eb0b79044c702a2846b2289f2f9992478af34\": container with ID starting with 022f033d4f557ed60e552f59d36eb0b79044c702a2846b2289f2f9992478af34 not found: ID does not exist" containerID="022f033d4f557ed60e552f59d36eb0b79044c702a2846b2289f2f9992478af34" Feb 27 16:24:49 crc kubenswrapper[4830]: I0227 16:24:49.871163 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"022f033d4f557ed60e552f59d36eb0b79044c702a2846b2289f2f9992478af34"} err="failed to get container status \"022f033d4f557ed60e552f59d36eb0b79044c702a2846b2289f2f9992478af34\": rpc error: code = NotFound desc = could not find container \"022f033d4f557ed60e552f59d36eb0b79044c702a2846b2289f2f9992478af34\": container with ID starting with 022f033d4f557ed60e552f59d36eb0b79044c702a2846b2289f2f9992478af34 not found: ID does not 
exist" Feb 27 16:24:50 crc kubenswrapper[4830]: I0227 16:24:50.227512 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-6888856db4-fr5hs" event={"ID":"b04573a0-1535-4606-8551-ba1c3a53f933","Type":"ContainerStarted","Data":"4b26b39d5afd2c70666793ae8d4581613b3459ddcf757cc08d8836f3e3ec8d76"} Feb 27 16:24:50 crc kubenswrapper[4830]: I0227 16:24:50.227978 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-6888856db4-fr5hs" Feb 27 16:24:50 crc kubenswrapper[4830]: I0227 16:24:50.256782 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-6888856db4-fr5hs" podStartSLOduration=1.174363078 podStartE2EDuration="7.256753062s" podCreationTimestamp="2026-02-27 16:24:43 +0000 UTC" firstStartedPulling="2026-02-27 16:24:43.772149098 +0000 UTC m=+1079.861421561" lastFinishedPulling="2026-02-27 16:24:49.854539072 +0000 UTC m=+1085.943811545" observedRunningTime="2026-02-27 16:24:50.247550612 +0000 UTC m=+1086.336823115" watchObservedRunningTime="2026-02-27 16:24:50.256753062 +0000 UTC m=+1086.346025595" Feb 27 16:24:50 crc kubenswrapper[4830]: I0227 16:24:50.296032 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-s29zc"] Feb 27 16:24:50 crc kubenswrapper[4830]: W0227 16:24:50.302550 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbcd67c59_ad0e_4ca7_b11a_91a4f441ddb4.slice/crio-ce0c9aee1f619d83612825905357a408386a09dcf5c8146d458f3337920f0213 WatchSource:0}: Error finding container ce0c9aee1f619d83612825905357a408386a09dcf5c8146d458f3337920f0213: Status 404 returned error can't find the container with id ce0c9aee1f619d83612825905357a408386a09dcf5c8146d458f3337920f0213 Feb 27 16:24:51 crc kubenswrapper[4830]: I0227 16:24:51.240507 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="cert-manager/cert-manager-cainjector-5545bd876-s29zc" event={"ID":"bcd67c59-ad0e-4ca7-b11a-91a4f441ddb4","Type":"ContainerStarted","Data":"47e0f3f880c9d1f4adc18e225393239f4a1a5cdeadb3b97969900801f06c47ba"} Feb 27 16:24:51 crc kubenswrapper[4830]: I0227 16:24:51.240868 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-5545bd876-s29zc" event={"ID":"bcd67c59-ad0e-4ca7-b11a-91a4f441ddb4","Type":"ContainerStarted","Data":"ce0c9aee1f619d83612825905357a408386a09dcf5c8146d458f3337920f0213"} Feb 27 16:24:58 crc kubenswrapper[4830]: I0227 16:24:58.498131 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-6888856db4-fr5hs" Feb 27 16:24:58 crc kubenswrapper[4830]: I0227 16:24:58.518914 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-5545bd876-s29zc" podStartSLOduration=9.51889471 podStartE2EDuration="9.51889471s" podCreationTimestamp="2026-02-27 16:24:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:24:51.278451622 +0000 UTC m=+1087.367724095" watchObservedRunningTime="2026-02-27 16:24:58.51889471 +0000 UTC m=+1094.608167183" Feb 27 16:25:02 crc kubenswrapper[4830]: I0227 16:25:02.033840 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-545d4d4674-xc4z5"] Feb 27 16:25:02 crc kubenswrapper[4830]: I0227 16:25:02.035449 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-545d4d4674-xc4z5" Feb 27 16:25:02 crc kubenswrapper[4830]: I0227 16:25:02.039286 4830 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-76vjf" Feb 27 16:25:02 crc kubenswrapper[4830]: I0227 16:25:02.053166 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-545d4d4674-xc4z5"] Feb 27 16:25:02 crc kubenswrapper[4830]: I0227 16:25:02.142998 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qj7g\" (UniqueName: \"kubernetes.io/projected/15de8621-6ef1-450c-8af3-e039897a9a14-kube-api-access-9qj7g\") pod \"cert-manager-545d4d4674-xc4z5\" (UID: \"15de8621-6ef1-450c-8af3-e039897a9a14\") " pod="cert-manager/cert-manager-545d4d4674-xc4z5" Feb 27 16:25:02 crc kubenswrapper[4830]: I0227 16:25:02.143214 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/15de8621-6ef1-450c-8af3-e039897a9a14-bound-sa-token\") pod \"cert-manager-545d4d4674-xc4z5\" (UID: \"15de8621-6ef1-450c-8af3-e039897a9a14\") " pod="cert-manager/cert-manager-545d4d4674-xc4z5" Feb 27 16:25:02 crc kubenswrapper[4830]: I0227 16:25:02.244612 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qj7g\" (UniqueName: \"kubernetes.io/projected/15de8621-6ef1-450c-8af3-e039897a9a14-kube-api-access-9qj7g\") pod \"cert-manager-545d4d4674-xc4z5\" (UID: \"15de8621-6ef1-450c-8af3-e039897a9a14\") " pod="cert-manager/cert-manager-545d4d4674-xc4z5" Feb 27 16:25:02 crc kubenswrapper[4830]: I0227 16:25:02.245333 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/15de8621-6ef1-450c-8af3-e039897a9a14-bound-sa-token\") pod \"cert-manager-545d4d4674-xc4z5\" (UID: 
\"15de8621-6ef1-450c-8af3-e039897a9a14\") " pod="cert-manager/cert-manager-545d4d4674-xc4z5" Feb 27 16:25:02 crc kubenswrapper[4830]: I0227 16:25:02.278024 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/15de8621-6ef1-450c-8af3-e039897a9a14-bound-sa-token\") pod \"cert-manager-545d4d4674-xc4z5\" (UID: \"15de8621-6ef1-450c-8af3-e039897a9a14\") " pod="cert-manager/cert-manager-545d4d4674-xc4z5" Feb 27 16:25:02 crc kubenswrapper[4830]: I0227 16:25:02.279426 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qj7g\" (UniqueName: \"kubernetes.io/projected/15de8621-6ef1-450c-8af3-e039897a9a14-kube-api-access-9qj7g\") pod \"cert-manager-545d4d4674-xc4z5\" (UID: \"15de8621-6ef1-450c-8af3-e039897a9a14\") " pod="cert-manager/cert-manager-545d4d4674-xc4z5" Feb 27 16:25:02 crc kubenswrapper[4830]: I0227 16:25:02.363830 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-545d4d4674-xc4z5" Feb 27 16:25:02 crc kubenswrapper[4830]: I0227 16:25:02.877004 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-545d4d4674-xc4z5"] Feb 27 16:25:03 crc kubenswrapper[4830]: I0227 16:25:03.339311 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-545d4d4674-xc4z5" event={"ID":"15de8621-6ef1-450c-8af3-e039897a9a14","Type":"ContainerStarted","Data":"14001b119f7ea4bdd98b385e08bf7e71b737aa4f25bca472f33544737083a29e"} Feb 27 16:25:03 crc kubenswrapper[4830]: I0227 16:25:03.339379 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-545d4d4674-xc4z5" event={"ID":"15de8621-6ef1-450c-8af3-e039897a9a14","Type":"ContainerStarted","Data":"62c2a5a465dfb85f3a04b3cb8eabcad9a72c3df34e6c93329550df84d53fe728"} Feb 27 16:25:03 crc kubenswrapper[4830]: I0227 16:25:03.366302 4830 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="cert-manager/cert-manager-545d4d4674-xc4z5" podStartSLOduration=1.366277953 podStartE2EDuration="1.366277953s" podCreationTimestamp="2026-02-27 16:25:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:25:03.364705384 +0000 UTC m=+1099.453977887" watchObservedRunningTime="2026-02-27 16:25:03.366277953 +0000 UTC m=+1099.455550446" Feb 27 16:25:11 crc kubenswrapper[4830]: I0227 16:25:11.961161 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-pj26p"] Feb 27 16:25:11 crc kubenswrapper[4830]: I0227 16:25:11.962709 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-pj26p" Feb 27 16:25:11 crc kubenswrapper[4830]: I0227 16:25:11.965908 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Feb 27 16:25:11 crc kubenswrapper[4830]: I0227 16:25:11.967339 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Feb 27 16:25:11 crc kubenswrapper[4830]: I0227 16:25:11.972277 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-zjcrx" Feb 27 16:25:11 crc kubenswrapper[4830]: I0227 16:25:11.983778 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-pj26p"] Feb 27 16:25:12 crc kubenswrapper[4830]: I0227 16:25:12.046562 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jpkx\" (UniqueName: \"kubernetes.io/projected/7b3c87ca-4cef-42ce-8cdb-0f618eb0e342-kube-api-access-5jpkx\") pod \"openstack-operator-index-pj26p\" (UID: \"7b3c87ca-4cef-42ce-8cdb-0f618eb0e342\") " pod="openstack-operators/openstack-operator-index-pj26p" Feb 27 16:25:12 crc 
kubenswrapper[4830]: I0227 16:25:12.147595 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jpkx\" (UniqueName: \"kubernetes.io/projected/7b3c87ca-4cef-42ce-8cdb-0f618eb0e342-kube-api-access-5jpkx\") pod \"openstack-operator-index-pj26p\" (UID: \"7b3c87ca-4cef-42ce-8cdb-0f618eb0e342\") " pod="openstack-operators/openstack-operator-index-pj26p" Feb 27 16:25:12 crc kubenswrapper[4830]: I0227 16:25:12.170018 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5jpkx\" (UniqueName: \"kubernetes.io/projected/7b3c87ca-4cef-42ce-8cdb-0f618eb0e342-kube-api-access-5jpkx\") pod \"openstack-operator-index-pj26p\" (UID: \"7b3c87ca-4cef-42ce-8cdb-0f618eb0e342\") " pod="openstack-operators/openstack-operator-index-pj26p" Feb 27 16:25:12 crc kubenswrapper[4830]: I0227 16:25:12.283353 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-pj26p" Feb 27 16:25:12 crc kubenswrapper[4830]: I0227 16:25:12.523297 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-pj26p"] Feb 27 16:25:13 crc kubenswrapper[4830]: I0227 16:25:13.425162 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-pj26p" event={"ID":"7b3c87ca-4cef-42ce-8cdb-0f618eb0e342","Type":"ContainerStarted","Data":"58f35d846b62d0f564172096fa58ad284c767c34594f83df4adc1e9037ea6a31"} Feb 27 16:25:15 crc kubenswrapper[4830]: I0227 16:25:15.319699 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-pj26p"] Feb 27 16:25:15 crc kubenswrapper[4830]: I0227 16:25:15.936091 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-4p2qq"] Feb 27 16:25:15 crc kubenswrapper[4830]: I0227 16:25:15.937822 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-4p2qq" Feb 27 16:25:15 crc kubenswrapper[4830]: I0227 16:25:15.984093 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-4p2qq"] Feb 27 16:25:16 crc kubenswrapper[4830]: I0227 16:25:16.119083 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpdw5\" (UniqueName: \"kubernetes.io/projected/b2c0ed51-a6e9-40cd-8ce9-fa9f810528a1-kube-api-access-bpdw5\") pod \"openstack-operator-index-4p2qq\" (UID: \"b2c0ed51-a6e9-40cd-8ce9-fa9f810528a1\") " pod="openstack-operators/openstack-operator-index-4p2qq" Feb 27 16:25:16 crc kubenswrapper[4830]: I0227 16:25:16.220364 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bpdw5\" (UniqueName: \"kubernetes.io/projected/b2c0ed51-a6e9-40cd-8ce9-fa9f810528a1-kube-api-access-bpdw5\") pod \"openstack-operator-index-4p2qq\" (UID: \"b2c0ed51-a6e9-40cd-8ce9-fa9f810528a1\") " pod="openstack-operators/openstack-operator-index-4p2qq" Feb 27 16:25:16 crc kubenswrapper[4830]: I0227 16:25:16.250467 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bpdw5\" (UniqueName: \"kubernetes.io/projected/b2c0ed51-a6e9-40cd-8ce9-fa9f810528a1-kube-api-access-bpdw5\") pod \"openstack-operator-index-4p2qq\" (UID: \"b2c0ed51-a6e9-40cd-8ce9-fa9f810528a1\") " pod="openstack-operators/openstack-operator-index-4p2qq" Feb 27 16:25:16 crc kubenswrapper[4830]: I0227 16:25:16.293902 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-4p2qq" Feb 27 16:25:16 crc kubenswrapper[4830]: I0227 16:25:16.460004 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-pj26p" event={"ID":"7b3c87ca-4cef-42ce-8cdb-0f618eb0e342","Type":"ContainerStarted","Data":"8e31b9ddfefd0cf80f9f2027f9e12ab11f5fb59e0de116f402215ef55d0e813b"} Feb 27 16:25:16 crc kubenswrapper[4830]: I0227 16:25:16.460345 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-pj26p" podUID="7b3c87ca-4cef-42ce-8cdb-0f618eb0e342" containerName="registry-server" containerID="cri-o://8e31b9ddfefd0cf80f9f2027f9e12ab11f5fb59e0de116f402215ef55d0e813b" gracePeriod=2 Feb 27 16:25:16 crc kubenswrapper[4830]: I0227 16:25:16.477970 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-pj26p" podStartSLOduration=2.538682309 podStartE2EDuration="5.477937739s" podCreationTimestamp="2026-02-27 16:25:11 +0000 UTC" firstStartedPulling="2026-02-27 16:25:12.532548285 +0000 UTC m=+1108.621820758" lastFinishedPulling="2026-02-27 16:25:15.471803725 +0000 UTC m=+1111.561076188" observedRunningTime="2026-02-27 16:25:16.475059838 +0000 UTC m=+1112.564332341" watchObservedRunningTime="2026-02-27 16:25:16.477937739 +0000 UTC m=+1112.567210212" Feb 27 16:25:16 crc kubenswrapper[4830]: I0227 16:25:16.587266 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-4p2qq"] Feb 27 16:25:16 crc kubenswrapper[4830]: W0227 16:25:16.589850 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb2c0ed51_a6e9_40cd_8ce9_fa9f810528a1.slice/crio-6443666bb8d9a2f838cc91a64e1129773d9383e8c43bb4f55126ae7d41272197 WatchSource:0}: Error finding container 
6443666bb8d9a2f838cc91a64e1129773d9383e8c43bb4f55126ae7d41272197: Status 404 returned error can't find the container with id 6443666bb8d9a2f838cc91a64e1129773d9383e8c43bb4f55126ae7d41272197 Feb 27 16:25:16 crc kubenswrapper[4830]: I0227 16:25:16.856292 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-pj26p" Feb 27 16:25:17 crc kubenswrapper[4830]: I0227 16:25:17.033711 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5jpkx\" (UniqueName: \"kubernetes.io/projected/7b3c87ca-4cef-42ce-8cdb-0f618eb0e342-kube-api-access-5jpkx\") pod \"7b3c87ca-4cef-42ce-8cdb-0f618eb0e342\" (UID: \"7b3c87ca-4cef-42ce-8cdb-0f618eb0e342\") " Feb 27 16:25:17 crc kubenswrapper[4830]: I0227 16:25:17.042407 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b3c87ca-4cef-42ce-8cdb-0f618eb0e342-kube-api-access-5jpkx" (OuterVolumeSpecName: "kube-api-access-5jpkx") pod "7b3c87ca-4cef-42ce-8cdb-0f618eb0e342" (UID: "7b3c87ca-4cef-42ce-8cdb-0f618eb0e342"). InnerVolumeSpecName "kube-api-access-5jpkx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:25:17 crc kubenswrapper[4830]: I0227 16:25:17.135844 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5jpkx\" (UniqueName: \"kubernetes.io/projected/7b3c87ca-4cef-42ce-8cdb-0f618eb0e342-kube-api-access-5jpkx\") on node \"crc\" DevicePath \"\"" Feb 27 16:25:17 crc kubenswrapper[4830]: I0227 16:25:17.467313 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-4p2qq" event={"ID":"b2c0ed51-a6e9-40cd-8ce9-fa9f810528a1","Type":"ContainerStarted","Data":"ec3c7f4df7b12da5beeeba46d1f9c139cfbbd25fa4994b239ad65870021d9af3"} Feb 27 16:25:17 crc kubenswrapper[4830]: I0227 16:25:17.467356 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-4p2qq" event={"ID":"b2c0ed51-a6e9-40cd-8ce9-fa9f810528a1","Type":"ContainerStarted","Data":"6443666bb8d9a2f838cc91a64e1129773d9383e8c43bb4f55126ae7d41272197"} Feb 27 16:25:17 crc kubenswrapper[4830]: I0227 16:25:17.469018 4830 generic.go:334] "Generic (PLEG): container finished" podID="7b3c87ca-4cef-42ce-8cdb-0f618eb0e342" containerID="8e31b9ddfefd0cf80f9f2027f9e12ab11f5fb59e0de116f402215ef55d0e813b" exitCode=0 Feb 27 16:25:17 crc kubenswrapper[4830]: I0227 16:25:17.469047 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-pj26p" event={"ID":"7b3c87ca-4cef-42ce-8cdb-0f618eb0e342","Type":"ContainerDied","Data":"8e31b9ddfefd0cf80f9f2027f9e12ab11f5fb59e0de116f402215ef55d0e813b"} Feb 27 16:25:17 crc kubenswrapper[4830]: I0227 16:25:17.469083 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-pj26p" event={"ID":"7b3c87ca-4cef-42ce-8cdb-0f618eb0e342","Type":"ContainerDied","Data":"58f35d846b62d0f564172096fa58ad284c767c34594f83df4adc1e9037ea6a31"} Feb 27 16:25:17 crc kubenswrapper[4830]: I0227 16:25:17.469105 4830 scope.go:117] "RemoveContainer" 
containerID="8e31b9ddfefd0cf80f9f2027f9e12ab11f5fb59e0de116f402215ef55d0e813b" Feb 27 16:25:17 crc kubenswrapper[4830]: I0227 16:25:17.469454 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-pj26p" Feb 27 16:25:17 crc kubenswrapper[4830]: I0227 16:25:17.489598 4830 scope.go:117] "RemoveContainer" containerID="8e31b9ddfefd0cf80f9f2027f9e12ab11f5fb59e0de116f402215ef55d0e813b" Feb 27 16:25:17 crc kubenswrapper[4830]: E0227 16:25:17.491155 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e31b9ddfefd0cf80f9f2027f9e12ab11f5fb59e0de116f402215ef55d0e813b\": container with ID starting with 8e31b9ddfefd0cf80f9f2027f9e12ab11f5fb59e0de116f402215ef55d0e813b not found: ID does not exist" containerID="8e31b9ddfefd0cf80f9f2027f9e12ab11f5fb59e0de116f402215ef55d0e813b" Feb 27 16:25:17 crc kubenswrapper[4830]: I0227 16:25:17.491207 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e31b9ddfefd0cf80f9f2027f9e12ab11f5fb59e0de116f402215ef55d0e813b"} err="failed to get container status \"8e31b9ddfefd0cf80f9f2027f9e12ab11f5fb59e0de116f402215ef55d0e813b\": rpc error: code = NotFound desc = could not find container \"8e31b9ddfefd0cf80f9f2027f9e12ab11f5fb59e0de116f402215ef55d0e813b\": container with ID starting with 8e31b9ddfefd0cf80f9f2027f9e12ab11f5fb59e0de116f402215ef55d0e813b not found: ID does not exist" Feb 27 16:25:17 crc kubenswrapper[4830]: I0227 16:25:17.491402 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-4p2qq" podStartSLOduration=2.44003135 podStartE2EDuration="2.491377994s" podCreationTimestamp="2026-02-27 16:25:15 +0000 UTC" firstStartedPulling="2026-02-27 16:25:16.594076768 +0000 UTC m=+1112.683349241" lastFinishedPulling="2026-02-27 16:25:16.645423422 +0000 UTC m=+1112.734695885" 
observedRunningTime="2026-02-27 16:25:17.486812811 +0000 UTC m=+1113.576085284" watchObservedRunningTime="2026-02-27 16:25:17.491377994 +0000 UTC m=+1113.580650467" Feb 27 16:25:17 crc kubenswrapper[4830]: I0227 16:25:17.512516 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-pj26p"] Feb 27 16:25:17 crc kubenswrapper[4830]: I0227 16:25:17.521077 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-pj26p"] Feb 27 16:25:18 crc kubenswrapper[4830]: I0227 16:25:18.772534 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b3c87ca-4cef-42ce-8cdb-0f618eb0e342" path="/var/lib/kubelet/pods/7b3c87ca-4cef-42ce-8cdb-0f618eb0e342/volumes" Feb 27 16:25:26 crc kubenswrapper[4830]: I0227 16:25:26.294588 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-4p2qq" Feb 27 16:25:26 crc kubenswrapper[4830]: I0227 16:25:26.295261 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-4p2qq" Feb 27 16:25:26 crc kubenswrapper[4830]: I0227 16:25:26.332638 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-4p2qq" Feb 27 16:25:26 crc kubenswrapper[4830]: I0227 16:25:26.580160 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-4p2qq" Feb 27 16:25:33 crc kubenswrapper[4830]: I0227 16:25:33.772704 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/56ecd13db59bdd13e8f9c434db3243d93729dc89d3d863bd79a8b38d1ddq5l4"] Feb 27 16:25:33 crc kubenswrapper[4830]: E0227 16:25:33.773803 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b3c87ca-4cef-42ce-8cdb-0f618eb0e342" containerName="registry-server" Feb 27 16:25:33 crc kubenswrapper[4830]: I0227 
16:25:33.773825 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b3c87ca-4cef-42ce-8cdb-0f618eb0e342" containerName="registry-server" Feb 27 16:25:33 crc kubenswrapper[4830]: I0227 16:25:33.774087 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b3c87ca-4cef-42ce-8cdb-0f618eb0e342" containerName="registry-server" Feb 27 16:25:33 crc kubenswrapper[4830]: I0227 16:25:33.775596 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/56ecd13db59bdd13e8f9c434db3243d93729dc89d3d863bd79a8b38d1ddq5l4" Feb 27 16:25:33 crc kubenswrapper[4830]: I0227 16:25:33.782370 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-v5p65" Feb 27 16:25:33 crc kubenswrapper[4830]: I0227 16:25:33.783308 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/56ecd13db59bdd13e8f9c434db3243d93729dc89d3d863bd79a8b38d1ddq5l4"] Feb 27 16:25:33 crc kubenswrapper[4830]: I0227 16:25:33.797065 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vr95r\" (UniqueName: \"kubernetes.io/projected/56801599-f8f5-494d-88bf-2c4786ed93d3-kube-api-access-vr95r\") pod \"56ecd13db59bdd13e8f9c434db3243d93729dc89d3d863bd79a8b38d1ddq5l4\" (UID: \"56801599-f8f5-494d-88bf-2c4786ed93d3\") " pod="openstack-operators/56ecd13db59bdd13e8f9c434db3243d93729dc89d3d863bd79a8b38d1ddq5l4" Feb 27 16:25:33 crc kubenswrapper[4830]: I0227 16:25:33.797218 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/56801599-f8f5-494d-88bf-2c4786ed93d3-util\") pod \"56ecd13db59bdd13e8f9c434db3243d93729dc89d3d863bd79a8b38d1ddq5l4\" (UID: \"56801599-f8f5-494d-88bf-2c4786ed93d3\") " pod="openstack-operators/56ecd13db59bdd13e8f9c434db3243d93729dc89d3d863bd79a8b38d1ddq5l4" Feb 27 16:25:33 crc kubenswrapper[4830]: I0227 16:25:33.797321 
4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/56801599-f8f5-494d-88bf-2c4786ed93d3-bundle\") pod \"56ecd13db59bdd13e8f9c434db3243d93729dc89d3d863bd79a8b38d1ddq5l4\" (UID: \"56801599-f8f5-494d-88bf-2c4786ed93d3\") " pod="openstack-operators/56ecd13db59bdd13e8f9c434db3243d93729dc89d3d863bd79a8b38d1ddq5l4" Feb 27 16:25:33 crc kubenswrapper[4830]: I0227 16:25:33.899012 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/56801599-f8f5-494d-88bf-2c4786ed93d3-bundle\") pod \"56ecd13db59bdd13e8f9c434db3243d93729dc89d3d863bd79a8b38d1ddq5l4\" (UID: \"56801599-f8f5-494d-88bf-2c4786ed93d3\") " pod="openstack-operators/56ecd13db59bdd13e8f9c434db3243d93729dc89d3d863bd79a8b38d1ddq5l4" Feb 27 16:25:33 crc kubenswrapper[4830]: I0227 16:25:33.899095 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vr95r\" (UniqueName: \"kubernetes.io/projected/56801599-f8f5-494d-88bf-2c4786ed93d3-kube-api-access-vr95r\") pod \"56ecd13db59bdd13e8f9c434db3243d93729dc89d3d863bd79a8b38d1ddq5l4\" (UID: \"56801599-f8f5-494d-88bf-2c4786ed93d3\") " pod="openstack-operators/56ecd13db59bdd13e8f9c434db3243d93729dc89d3d863bd79a8b38d1ddq5l4" Feb 27 16:25:33 crc kubenswrapper[4830]: I0227 16:25:33.899152 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/56801599-f8f5-494d-88bf-2c4786ed93d3-util\") pod \"56ecd13db59bdd13e8f9c434db3243d93729dc89d3d863bd79a8b38d1ddq5l4\" (UID: \"56801599-f8f5-494d-88bf-2c4786ed93d3\") " pod="openstack-operators/56ecd13db59bdd13e8f9c434db3243d93729dc89d3d863bd79a8b38d1ddq5l4" Feb 27 16:25:33 crc kubenswrapper[4830]: I0227 16:25:33.899723 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/56801599-f8f5-494d-88bf-2c4786ed93d3-bundle\") pod \"56ecd13db59bdd13e8f9c434db3243d93729dc89d3d863bd79a8b38d1ddq5l4\" (UID: \"56801599-f8f5-494d-88bf-2c4786ed93d3\") " pod="openstack-operators/56ecd13db59bdd13e8f9c434db3243d93729dc89d3d863bd79a8b38d1ddq5l4" Feb 27 16:25:33 crc kubenswrapper[4830]: I0227 16:25:33.899735 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/56801599-f8f5-494d-88bf-2c4786ed93d3-util\") pod \"56ecd13db59bdd13e8f9c434db3243d93729dc89d3d863bd79a8b38d1ddq5l4\" (UID: \"56801599-f8f5-494d-88bf-2c4786ed93d3\") " pod="openstack-operators/56ecd13db59bdd13e8f9c434db3243d93729dc89d3d863bd79a8b38d1ddq5l4" Feb 27 16:25:33 crc kubenswrapper[4830]: I0227 16:25:33.924739 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vr95r\" (UniqueName: \"kubernetes.io/projected/56801599-f8f5-494d-88bf-2c4786ed93d3-kube-api-access-vr95r\") pod \"56ecd13db59bdd13e8f9c434db3243d93729dc89d3d863bd79a8b38d1ddq5l4\" (UID: \"56801599-f8f5-494d-88bf-2c4786ed93d3\") " pod="openstack-operators/56ecd13db59bdd13e8f9c434db3243d93729dc89d3d863bd79a8b38d1ddq5l4" Feb 27 16:25:34 crc kubenswrapper[4830]: I0227 16:25:34.141252 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/56ecd13db59bdd13e8f9c434db3243d93729dc89d3d863bd79a8b38d1ddq5l4" Feb 27 16:25:34 crc kubenswrapper[4830]: I0227 16:25:34.604429 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/56ecd13db59bdd13e8f9c434db3243d93729dc89d3d863bd79a8b38d1ddq5l4"] Feb 27 16:25:34 crc kubenswrapper[4830]: W0227 16:25:34.614965 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod56801599_f8f5_494d_88bf_2c4786ed93d3.slice/crio-7bd840fd487947aa19a898943a71956486f09cc27f411b65e31b786bcdf396d1 WatchSource:0}: Error finding container 7bd840fd487947aa19a898943a71956486f09cc27f411b65e31b786bcdf396d1: Status 404 returned error can't find the container with id 7bd840fd487947aa19a898943a71956486f09cc27f411b65e31b786bcdf396d1 Feb 27 16:25:35 crc kubenswrapper[4830]: I0227 16:25:35.614890 4830 generic.go:334] "Generic (PLEG): container finished" podID="56801599-f8f5-494d-88bf-2c4786ed93d3" containerID="28d0b269cca788c557ce5ebec33fd60e7d2612a69293d1b1cbcd5639c92a6831" exitCode=0 Feb 27 16:25:35 crc kubenswrapper[4830]: I0227 16:25:35.615025 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/56ecd13db59bdd13e8f9c434db3243d93729dc89d3d863bd79a8b38d1ddq5l4" event={"ID":"56801599-f8f5-494d-88bf-2c4786ed93d3","Type":"ContainerDied","Data":"28d0b269cca788c557ce5ebec33fd60e7d2612a69293d1b1cbcd5639c92a6831"} Feb 27 16:25:35 crc kubenswrapper[4830]: I0227 16:25:35.615394 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/56ecd13db59bdd13e8f9c434db3243d93729dc89d3d863bd79a8b38d1ddq5l4" event={"ID":"56801599-f8f5-494d-88bf-2c4786ed93d3","Type":"ContainerStarted","Data":"7bd840fd487947aa19a898943a71956486f09cc27f411b65e31b786bcdf396d1"} Feb 27 16:25:36 crc kubenswrapper[4830]: I0227 16:25:36.627436 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/56ecd13db59bdd13e8f9c434db3243d93729dc89d3d863bd79a8b38d1ddq5l4" event={"ID":"56801599-f8f5-494d-88bf-2c4786ed93d3","Type":"ContainerStarted","Data":"788c3ffbe8fcf74ec77051415465d36cf84784021dd4d481396bbfa772f78766"} Feb 27 16:25:37 crc kubenswrapper[4830]: I0227 16:25:37.637982 4830 generic.go:334] "Generic (PLEG): container finished" podID="56801599-f8f5-494d-88bf-2c4786ed93d3" containerID="788c3ffbe8fcf74ec77051415465d36cf84784021dd4d481396bbfa772f78766" exitCode=0 Feb 27 16:25:37 crc kubenswrapper[4830]: I0227 16:25:37.638359 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/56ecd13db59bdd13e8f9c434db3243d93729dc89d3d863bd79a8b38d1ddq5l4" event={"ID":"56801599-f8f5-494d-88bf-2c4786ed93d3","Type":"ContainerDied","Data":"788c3ffbe8fcf74ec77051415465d36cf84784021dd4d481396bbfa772f78766"} Feb 27 16:25:38 crc kubenswrapper[4830]: I0227 16:25:38.651626 4830 generic.go:334] "Generic (PLEG): container finished" podID="56801599-f8f5-494d-88bf-2c4786ed93d3" containerID="918a721534b426d311f9b9f10372b5d674ed2aa70bdd81dad8ce4cf7233f752b" exitCode=0 Feb 27 16:25:38 crc kubenswrapper[4830]: I0227 16:25:38.651730 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/56ecd13db59bdd13e8f9c434db3243d93729dc89d3d863bd79a8b38d1ddq5l4" event={"ID":"56801599-f8f5-494d-88bf-2c4786ed93d3","Type":"ContainerDied","Data":"918a721534b426d311f9b9f10372b5d674ed2aa70bdd81dad8ce4cf7233f752b"} Feb 27 16:25:39 crc kubenswrapper[4830]: I0227 16:25:39.957919 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/56ecd13db59bdd13e8f9c434db3243d93729dc89d3d863bd79a8b38d1ddq5l4" Feb 27 16:25:40 crc kubenswrapper[4830]: I0227 16:25:40.113651 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/56801599-f8f5-494d-88bf-2c4786ed93d3-bundle\") pod \"56801599-f8f5-494d-88bf-2c4786ed93d3\" (UID: \"56801599-f8f5-494d-88bf-2c4786ed93d3\") " Feb 27 16:25:40 crc kubenswrapper[4830]: I0227 16:25:40.113814 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/56801599-f8f5-494d-88bf-2c4786ed93d3-util\") pod \"56801599-f8f5-494d-88bf-2c4786ed93d3\" (UID: \"56801599-f8f5-494d-88bf-2c4786ed93d3\") " Feb 27 16:25:40 crc kubenswrapper[4830]: I0227 16:25:40.113912 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vr95r\" (UniqueName: \"kubernetes.io/projected/56801599-f8f5-494d-88bf-2c4786ed93d3-kube-api-access-vr95r\") pod \"56801599-f8f5-494d-88bf-2c4786ed93d3\" (UID: \"56801599-f8f5-494d-88bf-2c4786ed93d3\") " Feb 27 16:25:40 crc kubenswrapper[4830]: I0227 16:25:40.115264 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/56801599-f8f5-494d-88bf-2c4786ed93d3-bundle" (OuterVolumeSpecName: "bundle") pod "56801599-f8f5-494d-88bf-2c4786ed93d3" (UID: "56801599-f8f5-494d-88bf-2c4786ed93d3"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 16:25:40 crc kubenswrapper[4830]: I0227 16:25:40.122468 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56801599-f8f5-494d-88bf-2c4786ed93d3-kube-api-access-vr95r" (OuterVolumeSpecName: "kube-api-access-vr95r") pod "56801599-f8f5-494d-88bf-2c4786ed93d3" (UID: "56801599-f8f5-494d-88bf-2c4786ed93d3"). InnerVolumeSpecName "kube-api-access-vr95r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:25:40 crc kubenswrapper[4830]: I0227 16:25:40.215623 4830 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/56801599-f8f5-494d-88bf-2c4786ed93d3-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 16:25:40 crc kubenswrapper[4830]: I0227 16:25:40.216372 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vr95r\" (UniqueName: \"kubernetes.io/projected/56801599-f8f5-494d-88bf-2c4786ed93d3-kube-api-access-vr95r\") on node \"crc\" DevicePath \"\"" Feb 27 16:25:40 crc kubenswrapper[4830]: I0227 16:25:40.440403 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/56801599-f8f5-494d-88bf-2c4786ed93d3-util" (OuterVolumeSpecName: "util") pod "56801599-f8f5-494d-88bf-2c4786ed93d3" (UID: "56801599-f8f5-494d-88bf-2c4786ed93d3"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 16:25:40 crc kubenswrapper[4830]: I0227 16:25:40.521521 4830 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/56801599-f8f5-494d-88bf-2c4786ed93d3-util\") on node \"crc\" DevicePath \"\"" Feb 27 16:25:40 crc kubenswrapper[4830]: I0227 16:25:40.673562 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/56ecd13db59bdd13e8f9c434db3243d93729dc89d3d863bd79a8b38d1ddq5l4" event={"ID":"56801599-f8f5-494d-88bf-2c4786ed93d3","Type":"ContainerDied","Data":"7bd840fd487947aa19a898943a71956486f09cc27f411b65e31b786bcdf396d1"} Feb 27 16:25:40 crc kubenswrapper[4830]: I0227 16:25:40.673619 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7bd840fd487947aa19a898943a71956486f09cc27f411b65e31b786bcdf396d1" Feb 27 16:25:40 crc kubenswrapper[4830]: I0227 16:25:40.673713 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/56ecd13db59bdd13e8f9c434db3243d93729dc89d3d863bd79a8b38d1ddq5l4" Feb 27 16:25:46 crc kubenswrapper[4830]: I0227 16:25:46.852472 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-966449766-gf8mn"] Feb 27 16:25:46 crc kubenswrapper[4830]: E0227 16:25:46.853134 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56801599-f8f5-494d-88bf-2c4786ed93d3" containerName="pull" Feb 27 16:25:46 crc kubenswrapper[4830]: I0227 16:25:46.853146 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="56801599-f8f5-494d-88bf-2c4786ed93d3" containerName="pull" Feb 27 16:25:46 crc kubenswrapper[4830]: E0227 16:25:46.853156 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56801599-f8f5-494d-88bf-2c4786ed93d3" containerName="util" Feb 27 16:25:46 crc kubenswrapper[4830]: I0227 16:25:46.853162 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="56801599-f8f5-494d-88bf-2c4786ed93d3" containerName="util" Feb 27 16:25:46 crc kubenswrapper[4830]: E0227 16:25:46.853174 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56801599-f8f5-494d-88bf-2c4786ed93d3" containerName="extract" Feb 27 16:25:46 crc kubenswrapper[4830]: I0227 16:25:46.853179 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="56801599-f8f5-494d-88bf-2c4786ed93d3" containerName="extract" Feb 27 16:25:46 crc kubenswrapper[4830]: I0227 16:25:46.853283 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="56801599-f8f5-494d-88bf-2c4786ed93d3" containerName="extract" Feb 27 16:25:46 crc kubenswrapper[4830]: I0227 16:25:46.853647 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-966449766-gf8mn" Feb 27 16:25:46 crc kubenswrapper[4830]: I0227 16:25:46.862028 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-vzs9x" Feb 27 16:25:46 crc kubenswrapper[4830]: I0227 16:25:46.930879 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snr4d\" (UniqueName: \"kubernetes.io/projected/d2358885-c27e-4483-9e57-fdd68a711164-kube-api-access-snr4d\") pod \"openstack-operator-controller-init-966449766-gf8mn\" (UID: \"d2358885-c27e-4483-9e57-fdd68a711164\") " pod="openstack-operators/openstack-operator-controller-init-966449766-gf8mn" Feb 27 16:25:46 crc kubenswrapper[4830]: I0227 16:25:46.940983 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-966449766-gf8mn"] Feb 27 16:25:47 crc kubenswrapper[4830]: I0227 16:25:47.031904 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-snr4d\" (UniqueName: \"kubernetes.io/projected/d2358885-c27e-4483-9e57-fdd68a711164-kube-api-access-snr4d\") pod \"openstack-operator-controller-init-966449766-gf8mn\" (UID: \"d2358885-c27e-4483-9e57-fdd68a711164\") " pod="openstack-operators/openstack-operator-controller-init-966449766-gf8mn" Feb 27 16:25:47 crc kubenswrapper[4830]: I0227 16:25:47.053570 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-snr4d\" (UniqueName: \"kubernetes.io/projected/d2358885-c27e-4483-9e57-fdd68a711164-kube-api-access-snr4d\") pod \"openstack-operator-controller-init-966449766-gf8mn\" (UID: \"d2358885-c27e-4483-9e57-fdd68a711164\") " pod="openstack-operators/openstack-operator-controller-init-966449766-gf8mn" Feb 27 16:25:47 crc kubenswrapper[4830]: I0227 16:25:47.172133 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-966449766-gf8mn" Feb 27 16:25:47 crc kubenswrapper[4830]: I0227 16:25:47.413238 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-966449766-gf8mn"] Feb 27 16:25:47 crc kubenswrapper[4830]: I0227 16:25:47.740313 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-966449766-gf8mn" event={"ID":"d2358885-c27e-4483-9e57-fdd68a711164","Type":"ContainerStarted","Data":"532e0709c3a343d8c46c14d5e9072d85f4e8836f6c2e7d90f1880bc3ec4a7a50"} Feb 27 16:25:53 crc kubenswrapper[4830]: I0227 16:25:53.788882 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-966449766-gf8mn" event={"ID":"d2358885-c27e-4483-9e57-fdd68a711164","Type":"ContainerStarted","Data":"8041b6dab6f1080e1d371d66b29c476a87a02f0b9940f481fc4c6fe645263662"} Feb 27 16:25:53 crc kubenswrapper[4830]: I0227 16:25:53.789493 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-966449766-gf8mn" Feb 27 16:25:53 crc kubenswrapper[4830]: I0227 16:25:53.842775 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-966449766-gf8mn" podStartSLOduration=2.098779308 podStartE2EDuration="7.842750493s" podCreationTimestamp="2026-02-27 16:25:46 +0000 UTC" firstStartedPulling="2026-02-27 16:25:47.418862112 +0000 UTC m=+1143.508134615" lastFinishedPulling="2026-02-27 16:25:53.162833297 +0000 UTC m=+1149.252105800" observedRunningTime="2026-02-27 16:25:53.833843132 +0000 UTC m=+1149.923115625" watchObservedRunningTime="2026-02-27 16:25:53.842750493 +0000 UTC m=+1149.932022996" Feb 27 16:26:00 crc kubenswrapper[4830]: I0227 16:26:00.148432 4830 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-infra/auto-csr-approver-29536826-jgfgr"] Feb 27 16:26:00 crc kubenswrapper[4830]: I0227 16:26:00.150221 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536826-jgfgr" Feb 27 16:26:00 crc kubenswrapper[4830]: I0227 16:26:00.152845 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 16:26:00 crc kubenswrapper[4830]: I0227 16:26:00.153270 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lmbtm" Feb 27 16:26:00 crc kubenswrapper[4830]: I0227 16:26:00.155804 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 16:26:00 crc kubenswrapper[4830]: I0227 16:26:00.165360 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536826-jgfgr"] Feb 27 16:26:00 crc kubenswrapper[4830]: I0227 16:26:00.334740 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgjkq\" (UniqueName: \"kubernetes.io/projected/1192f6ae-a29b-4553-a293-6f4e41814652-kube-api-access-kgjkq\") pod \"auto-csr-approver-29536826-jgfgr\" (UID: \"1192f6ae-a29b-4553-a293-6f4e41814652\") " pod="openshift-infra/auto-csr-approver-29536826-jgfgr" Feb 27 16:26:00 crc kubenswrapper[4830]: I0227 16:26:00.435728 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kgjkq\" (UniqueName: \"kubernetes.io/projected/1192f6ae-a29b-4553-a293-6f4e41814652-kube-api-access-kgjkq\") pod \"auto-csr-approver-29536826-jgfgr\" (UID: \"1192f6ae-a29b-4553-a293-6f4e41814652\") " pod="openshift-infra/auto-csr-approver-29536826-jgfgr" Feb 27 16:26:00 crc kubenswrapper[4830]: I0227 16:26:00.467698 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kgjkq\" (UniqueName: 
\"kubernetes.io/projected/1192f6ae-a29b-4553-a293-6f4e41814652-kube-api-access-kgjkq\") pod \"auto-csr-approver-29536826-jgfgr\" (UID: \"1192f6ae-a29b-4553-a293-6f4e41814652\") " pod="openshift-infra/auto-csr-approver-29536826-jgfgr" Feb 27 16:26:00 crc kubenswrapper[4830]: I0227 16:26:00.484760 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536826-jgfgr" Feb 27 16:26:00 crc kubenswrapper[4830]: I0227 16:26:00.734929 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536826-jgfgr"] Feb 27 16:26:00 crc kubenswrapper[4830]: W0227 16:26:00.748311 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1192f6ae_a29b_4553_a293_6f4e41814652.slice/crio-9257ebcf6bafc59422ff57e2f5b0a296c0651884fed04bcd7476727cc8a8e6d1 WatchSource:0}: Error finding container 9257ebcf6bafc59422ff57e2f5b0a296c0651884fed04bcd7476727cc8a8e6d1: Status 404 returned error can't find the container with id 9257ebcf6bafc59422ff57e2f5b0a296c0651884fed04bcd7476727cc8a8e6d1 Feb 27 16:26:00 crc kubenswrapper[4830]: I0227 16:26:00.842278 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536826-jgfgr" event={"ID":"1192f6ae-a29b-4553-a293-6f4e41814652","Type":"ContainerStarted","Data":"9257ebcf6bafc59422ff57e2f5b0a296c0651884fed04bcd7476727cc8a8e6d1"} Feb 27 16:26:02 crc kubenswrapper[4830]: I0227 16:26:02.870165 4830 generic.go:334] "Generic (PLEG): container finished" podID="1192f6ae-a29b-4553-a293-6f4e41814652" containerID="578837b4acd572cc743f96ab1ca35beed66af3c7803c66e45ec9a5459c53e247" exitCode=0 Feb 27 16:26:02 crc kubenswrapper[4830]: I0227 16:26:02.870555 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536826-jgfgr" 
event={"ID":"1192f6ae-a29b-4553-a293-6f4e41814652","Type":"ContainerDied","Data":"578837b4acd572cc743f96ab1ca35beed66af3c7803c66e45ec9a5459c53e247"} Feb 27 16:26:03 crc kubenswrapper[4830]: I0227 16:26:03.160601 4830 patch_prober.go:28] interesting pod/machine-config-daemon-2tv5v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 16:26:03 crc kubenswrapper[4830]: I0227 16:26:03.160710 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 16:26:04 crc kubenswrapper[4830]: I0227 16:26:04.237644 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536826-jgfgr" Feb 27 16:26:04 crc kubenswrapper[4830]: I0227 16:26:04.396716 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kgjkq\" (UniqueName: \"kubernetes.io/projected/1192f6ae-a29b-4553-a293-6f4e41814652-kube-api-access-kgjkq\") pod \"1192f6ae-a29b-4553-a293-6f4e41814652\" (UID: \"1192f6ae-a29b-4553-a293-6f4e41814652\") " Feb 27 16:26:04 crc kubenswrapper[4830]: I0227 16:26:04.405701 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1192f6ae-a29b-4553-a293-6f4e41814652-kube-api-access-kgjkq" (OuterVolumeSpecName: "kube-api-access-kgjkq") pod "1192f6ae-a29b-4553-a293-6f4e41814652" (UID: "1192f6ae-a29b-4553-a293-6f4e41814652"). InnerVolumeSpecName "kube-api-access-kgjkq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:26:04 crc kubenswrapper[4830]: I0227 16:26:04.499345 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kgjkq\" (UniqueName: \"kubernetes.io/projected/1192f6ae-a29b-4553-a293-6f4e41814652-kube-api-access-kgjkq\") on node \"crc\" DevicePath \"\"" Feb 27 16:26:04 crc kubenswrapper[4830]: I0227 16:26:04.900849 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536826-jgfgr" event={"ID":"1192f6ae-a29b-4553-a293-6f4e41814652","Type":"ContainerDied","Data":"9257ebcf6bafc59422ff57e2f5b0a296c0651884fed04bcd7476727cc8a8e6d1"} Feb 27 16:26:04 crc kubenswrapper[4830]: I0227 16:26:04.900883 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9257ebcf6bafc59422ff57e2f5b0a296c0651884fed04bcd7476727cc8a8e6d1" Feb 27 16:26:04 crc kubenswrapper[4830]: I0227 16:26:04.900937 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536826-jgfgr" Feb 27 16:26:05 crc kubenswrapper[4830]: I0227 16:26:05.318180 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536820-c4dp5"] Feb 27 16:26:05 crc kubenswrapper[4830]: I0227 16:26:05.326293 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536820-c4dp5"] Feb 27 16:26:06 crc kubenswrapper[4830]: I0227 16:26:06.776871 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a82f7818-da64-486f-a7e7-66af2352917b" path="/var/lib/kubelet/pods/a82f7818-da64-486f-a7e7-66af2352917b/volumes" Feb 27 16:26:07 crc kubenswrapper[4830]: I0227 16:26:07.175656 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-966449766-gf8mn" Feb 27 16:26:26 crc kubenswrapper[4830]: I0227 16:26:26.618311 4830 scope.go:117] "RemoveContainer" 
containerID="f4b1b9938bdb9a55e6f8062ca783b7910d7fec344c1af23042a5cec75f9761ae" Feb 27 16:26:33 crc kubenswrapper[4830]: I0227 16:26:33.159871 4830 patch_prober.go:28] interesting pod/machine-config-daemon-2tv5v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 16:26:33 crc kubenswrapper[4830]: I0227 16:26:33.160650 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 16:26:33 crc kubenswrapper[4830]: I0227 16:26:33.301989 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-768c8b45bb-7pp52"] Feb 27 16:26:33 crc kubenswrapper[4830]: E0227 16:26:33.302409 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1192f6ae-a29b-4553-a293-6f4e41814652" containerName="oc" Feb 27 16:26:33 crc kubenswrapper[4830]: I0227 16:26:33.302437 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="1192f6ae-a29b-4553-a293-6f4e41814652" containerName="oc" Feb 27 16:26:33 crc kubenswrapper[4830]: I0227 16:26:33.302672 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="1192f6ae-a29b-4553-a293-6f4e41814652" containerName="oc" Feb 27 16:26:33 crc kubenswrapper[4830]: I0227 16:26:33.303372 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-768c8b45bb-7pp52" Feb 27 16:26:33 crc kubenswrapper[4830]: I0227 16:26:33.310401 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-7lb8f" Feb 27 16:26:33 crc kubenswrapper[4830]: I0227 16:26:33.314873 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-6fb74c6d59-zw5q9"] Feb 27 16:26:33 crc kubenswrapper[4830]: I0227 16:26:33.315808 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-6fb74c6d59-zw5q9" Feb 27 16:26:33 crc kubenswrapper[4830]: I0227 16:26:33.321605 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-9fk97" Feb 27 16:26:33 crc kubenswrapper[4830]: I0227 16:26:33.326651 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-6fb74c6d59-zw5q9"] Feb 27 16:26:33 crc kubenswrapper[4830]: I0227 16:26:33.333673 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-55cc45767f-m892c"] Feb 27 16:26:33 crc kubenswrapper[4830]: I0227 16:26:33.334454 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-55cc45767f-m892c" Feb 27 16:26:33 crc kubenswrapper[4830]: I0227 16:26:33.335873 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-rgs27" Feb 27 16:26:33 crc kubenswrapper[4830]: I0227 16:26:33.338605 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-768c8b45bb-7pp52"] Feb 27 16:26:33 crc kubenswrapper[4830]: I0227 16:26:33.340906 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6vk8\" (UniqueName: \"kubernetes.io/projected/9526e5f2-4fd2-42bb-b96a-f9cd615313b9-kube-api-access-c6vk8\") pod \"cinder-operator-controller-manager-768c8b45bb-7pp52\" (UID: \"9526e5f2-4fd2-42bb-b96a-f9cd615313b9\") " pod="openstack-operators/cinder-operator-controller-manager-768c8b45bb-7pp52" Feb 27 16:26:33 crc kubenswrapper[4830]: I0227 16:26:33.367449 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-55cc45767f-m892c"] Feb 27 16:26:33 crc kubenswrapper[4830]: I0227 16:26:33.372661 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-7f748f8b74-f9pxf"] Feb 27 16:26:33 crc kubenswrapper[4830]: I0227 16:26:33.373709 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-7f748f8b74-f9pxf" Feb 27 16:26:33 crc kubenswrapper[4830]: I0227 16:26:33.376270 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-sgzlg" Feb 27 16:26:33 crc kubenswrapper[4830]: I0227 16:26:33.391745 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-7f748f8b74-f9pxf"] Feb 27 16:26:33 crc kubenswrapper[4830]: I0227 16:26:33.421499 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-585b788787-slc8g"] Feb 27 16:26:33 crc kubenswrapper[4830]: I0227 16:26:33.422717 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-585b788787-slc8g" Feb 27 16:26:33 crc kubenswrapper[4830]: I0227 16:26:33.424909 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-2vgkv" Feb 27 16:26:33 crc kubenswrapper[4830]: I0227 16:26:33.429017 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-7db95d7ffb-59k4p"] Feb 27 16:26:33 crc kubenswrapper[4830]: I0227 16:26:33.430004 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-7db95d7ffb-59k4p" Feb 27 16:26:33 crc kubenswrapper[4830]: I0227 16:26:33.431403 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-lfwbs" Feb 27 16:26:33 crc kubenswrapper[4830]: I0227 16:26:33.433294 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-585b788787-slc8g"] Feb 27 16:26:33 crc kubenswrapper[4830]: I0227 16:26:33.437714 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-c77466965-24fz2"] Feb 27 16:26:33 crc kubenswrapper[4830]: I0227 16:26:33.440496 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-c77466965-24fz2" Feb 27 16:26:33 crc kubenswrapper[4830]: I0227 16:26:33.441753 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-7db95d7ffb-59k4p"] Feb 27 16:26:33 crc kubenswrapper[4830]: I0227 16:26:33.441890 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-4jt5n" Feb 27 16:26:33 crc kubenswrapper[4830]: I0227 16:26:33.442275 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnxv2\" (UniqueName: \"kubernetes.io/projected/33e3f2f7-6a6a-4e59-84d6-a7bb2a7b14e2-kube-api-access-bnxv2\") pod \"designate-operator-controller-manager-55cc45767f-m892c\" (UID: \"33e3f2f7-6a6a-4e59-84d6-a7bb2a7b14e2\") " pod="openstack-operators/designate-operator-controller-manager-55cc45767f-m892c" Feb 27 16:26:33 crc kubenswrapper[4830]: I0227 16:26:33.442326 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c6vk8\" (UniqueName: 
\"kubernetes.io/projected/9526e5f2-4fd2-42bb-b96a-f9cd615313b9-kube-api-access-c6vk8\") pod \"cinder-operator-controller-manager-768c8b45bb-7pp52\" (UID: \"9526e5f2-4fd2-42bb-b96a-f9cd615313b9\") " pod="openstack-operators/cinder-operator-controller-manager-768c8b45bb-7pp52" Feb 27 16:26:33 crc kubenswrapper[4830]: I0227 16:26:33.442378 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smmrf\" (UniqueName: \"kubernetes.io/projected/ddc86b78-f250-426e-80a2-1e0da35ea2a5-kube-api-access-smmrf\") pod \"glance-operator-controller-manager-7f748f8b74-f9pxf\" (UID: \"ddc86b78-f250-426e-80a2-1e0da35ea2a5\") " pod="openstack-operators/glance-operator-controller-manager-7f748f8b74-f9pxf" Feb 27 16:26:33 crc kubenswrapper[4830]: I0227 16:26:33.442419 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvj5m\" (UniqueName: \"kubernetes.io/projected/04f72aa7-3bab-4ac9-9fb6-106c7e40b9fb-kube-api-access-cvj5m\") pod \"barbican-operator-controller-manager-6fb74c6d59-zw5q9\" (UID: \"04f72aa7-3bab-4ac9-9fb6-106c7e40b9fb\") " pod="openstack-operators/barbican-operator-controller-manager-6fb74c6d59-zw5q9" Feb 27 16:26:33 crc kubenswrapper[4830]: I0227 16:26:33.443514 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Feb 27 16:26:33 crc kubenswrapper[4830]: I0227 16:26:33.445900 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-c77466965-24fz2"] Feb 27 16:26:33 crc kubenswrapper[4830]: I0227 16:26:33.464066 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-78b64779b9-fhwn5"] Feb 27 16:26:33 crc kubenswrapper[4830]: I0227 16:26:33.465819 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-78b64779b9-fhwn5" Feb 27 16:26:33 crc kubenswrapper[4830]: I0227 16:26:33.477679 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-zxlcc" Feb 27 16:26:33 crc kubenswrapper[4830]: I0227 16:26:33.499444 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c6vk8\" (UniqueName: \"kubernetes.io/projected/9526e5f2-4fd2-42bb-b96a-f9cd615313b9-kube-api-access-c6vk8\") pod \"cinder-operator-controller-manager-768c8b45bb-7pp52\" (UID: \"9526e5f2-4fd2-42bb-b96a-f9cd615313b9\") " pod="openstack-operators/cinder-operator-controller-manager-768c8b45bb-7pp52" Feb 27 16:26:33 crc kubenswrapper[4830]: I0227 16:26:33.508325 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-78b64779b9-fhwn5"] Feb 27 16:26:33 crc kubenswrapper[4830]: I0227 16:26:33.544801 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgw59\" (UniqueName: \"kubernetes.io/projected/5b73c28e-36b3-4845-9336-299fc3dd2551-kube-api-access-kgw59\") pod \"infra-operator-controller-manager-c77466965-24fz2\" (UID: \"5b73c28e-36b3-4845-9336-299fc3dd2551\") " pod="openstack-operators/infra-operator-controller-manager-c77466965-24fz2" Feb 27 16:26:33 crc kubenswrapper[4830]: I0227 16:26:33.544850 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdnr5\" (UniqueName: \"kubernetes.io/projected/53b4e8e1-00b7-4744-8fcf-a723ae104e53-kube-api-access-fdnr5\") pod \"keystone-operator-controller-manager-78b64779b9-fhwn5\" (UID: \"53b4e8e1-00b7-4744-8fcf-a723ae104e53\") " pod="openstack-operators/keystone-operator-controller-manager-78b64779b9-fhwn5" Feb 27 16:26:33 crc kubenswrapper[4830]: I0227 16:26:33.544881 4830 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bnxv2\" (UniqueName: \"kubernetes.io/projected/33e3f2f7-6a6a-4e59-84d6-a7bb2a7b14e2-kube-api-access-bnxv2\") pod \"designate-operator-controller-manager-55cc45767f-m892c\" (UID: \"33e3f2f7-6a6a-4e59-84d6-a7bb2a7b14e2\") " pod="openstack-operators/designate-operator-controller-manager-55cc45767f-m892c" Feb 27 16:26:33 crc kubenswrapper[4830]: I0227 16:26:33.544918 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-smmrf\" (UniqueName: \"kubernetes.io/projected/ddc86b78-f250-426e-80a2-1e0da35ea2a5-kube-api-access-smmrf\") pod \"glance-operator-controller-manager-7f748f8b74-f9pxf\" (UID: \"ddc86b78-f250-426e-80a2-1e0da35ea2a5\") " pod="openstack-operators/glance-operator-controller-manager-7f748f8b74-f9pxf" Feb 27 16:26:33 crc kubenswrapper[4830]: I0227 16:26:33.544940 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmldm\" (UniqueName: \"kubernetes.io/projected/190a4a9c-ee4a-4c6d-a45c-1febc5a67e9d-kube-api-access-vmldm\") pod \"heat-operator-controller-manager-585b788787-slc8g\" (UID: \"190a4a9c-ee4a-4c6d-a45c-1febc5a67e9d\") " pod="openstack-operators/heat-operator-controller-manager-585b788787-slc8g" Feb 27 16:26:33 crc kubenswrapper[4830]: I0227 16:26:33.544979 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cvj5m\" (UniqueName: \"kubernetes.io/projected/04f72aa7-3bab-4ac9-9fb6-106c7e40b9fb-kube-api-access-cvj5m\") pod \"barbican-operator-controller-manager-6fb74c6d59-zw5q9\" (UID: \"04f72aa7-3bab-4ac9-9fb6-106c7e40b9fb\") " pod="openstack-operators/barbican-operator-controller-manager-6fb74c6d59-zw5q9" Feb 27 16:26:33 crc kubenswrapper[4830]: I0227 16:26:33.544999 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/5b73c28e-36b3-4845-9336-299fc3dd2551-cert\") pod \"infra-operator-controller-manager-c77466965-24fz2\" (UID: \"5b73c28e-36b3-4845-9336-299fc3dd2551\") " pod="openstack-operators/infra-operator-controller-manager-c77466965-24fz2" Feb 27 16:26:33 crc kubenswrapper[4830]: I0227 16:26:33.545032 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sq7x5\" (UniqueName: \"kubernetes.io/projected/23c25dea-fae4-4381-9b97-98fd17aee9d8-kube-api-access-sq7x5\") pod \"horizon-operator-controller-manager-7db95d7ffb-59k4p\" (UID: \"23c25dea-fae4-4381-9b97-98fd17aee9d8\") " pod="openstack-operators/horizon-operator-controller-manager-7db95d7ffb-59k4p" Feb 27 16:26:33 crc kubenswrapper[4830]: I0227 16:26:33.552896 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-8784b4656-29x7g"] Feb 27 16:26:33 crc kubenswrapper[4830]: I0227 16:26:33.553707 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-8784b4656-29x7g" Feb 27 16:26:33 crc kubenswrapper[4830]: I0227 16:26:33.563200 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-qgc9v" Feb 27 16:26:33 crc kubenswrapper[4830]: I0227 16:26:33.578073 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-76fd76856-vtdk8"] Feb 27 16:26:33 crc kubenswrapper[4830]: I0227 16:26:33.578895 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-76fd76856-vtdk8" Feb 27 16:26:33 crc kubenswrapper[4830]: I0227 16:26:33.580878 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-smmrf\" (UniqueName: \"kubernetes.io/projected/ddc86b78-f250-426e-80a2-1e0da35ea2a5-kube-api-access-smmrf\") pod \"glance-operator-controller-manager-7f748f8b74-f9pxf\" (UID: \"ddc86b78-f250-426e-80a2-1e0da35ea2a5\") " pod="openstack-operators/glance-operator-controller-manager-7f748f8b74-f9pxf" Feb 27 16:26:33 crc kubenswrapper[4830]: I0227 16:26:33.587432 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-nnc9m" Feb 27 16:26:33 crc kubenswrapper[4830]: I0227 16:26:33.587549 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cvj5m\" (UniqueName: \"kubernetes.io/projected/04f72aa7-3bab-4ac9-9fb6-106c7e40b9fb-kube-api-access-cvj5m\") pod \"barbican-operator-controller-manager-6fb74c6d59-zw5q9\" (UID: \"04f72aa7-3bab-4ac9-9fb6-106c7e40b9fb\") " pod="openstack-operators/barbican-operator-controller-manager-6fb74c6d59-zw5q9" Feb 27 16:26:33 crc kubenswrapper[4830]: I0227 16:26:33.593033 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bnxv2\" (UniqueName: \"kubernetes.io/projected/33e3f2f7-6a6a-4e59-84d6-a7bb2a7b14e2-kube-api-access-bnxv2\") pod \"designate-operator-controller-manager-55cc45767f-m892c\" (UID: \"33e3f2f7-6a6a-4e59-84d6-a7bb2a7b14e2\") " pod="openstack-operators/designate-operator-controller-manager-55cc45767f-m892c" Feb 27 16:26:33 crc kubenswrapper[4830]: I0227 16:26:33.597051 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-8784b4656-29x7g"] Feb 27 16:26:33 crc kubenswrapper[4830]: I0227 16:26:33.607574 4830 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack-operators/mariadb-operator-controller-manager-745fc45789-w8lqb"] Feb 27 16:26:33 crc kubenswrapper[4830]: I0227 16:26:33.608326 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-745fc45789-w8lqb" Feb 27 16:26:33 crc kubenswrapper[4830]: I0227 16:26:33.610179 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-76fd76856-vtdk8"] Feb 27 16:26:33 crc kubenswrapper[4830]: I0227 16:26:33.610543 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-x9xr8" Feb 27 16:26:33 crc kubenswrapper[4830]: I0227 16:26:33.625394 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-6c67ff7674-ftbbj"] Feb 27 16:26:33 crc kubenswrapper[4830]: I0227 16:26:33.626251 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-6c67ff7674-ftbbj" Feb 27 16:26:33 crc kubenswrapper[4830]: I0227 16:26:33.630712 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-768c8b45bb-7pp52" Feb 27 16:26:33 crc kubenswrapper[4830]: I0227 16:26:33.642341 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-cspln" Feb 27 16:26:33 crc kubenswrapper[4830]: I0227 16:26:33.646584 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sq7x5\" (UniqueName: \"kubernetes.io/projected/23c25dea-fae4-4381-9b97-98fd17aee9d8-kube-api-access-sq7x5\") pod \"horizon-operator-controller-manager-7db95d7ffb-59k4p\" (UID: \"23c25dea-fae4-4381-9b97-98fd17aee9d8\") " pod="openstack-operators/horizon-operator-controller-manager-7db95d7ffb-59k4p" Feb 27 16:26:33 crc kubenswrapper[4830]: I0227 16:26:33.646638 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kgw59\" (UniqueName: \"kubernetes.io/projected/5b73c28e-36b3-4845-9336-299fc3dd2551-kube-api-access-kgw59\") pod \"infra-operator-controller-manager-c77466965-24fz2\" (UID: \"5b73c28e-36b3-4845-9336-299fc3dd2551\") " pod="openstack-operators/infra-operator-controller-manager-c77466965-24fz2" Feb 27 16:26:33 crc kubenswrapper[4830]: I0227 16:26:33.646677 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stbq8\" (UniqueName: \"kubernetes.io/projected/531e48d4-bbe4-4527-944e-4b27dc957ff4-kube-api-access-stbq8\") pod \"nova-operator-controller-manager-6c67ff7674-ftbbj\" (UID: \"531e48d4-bbe4-4527-944e-4b27dc957ff4\") " pod="openstack-operators/nova-operator-controller-manager-6c67ff7674-ftbbj" Feb 27 16:26:33 crc kubenswrapper[4830]: I0227 16:26:33.646696 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fdnr5\" (UniqueName: \"kubernetes.io/projected/53b4e8e1-00b7-4744-8fcf-a723ae104e53-kube-api-access-fdnr5\") pod 
\"keystone-operator-controller-manager-78b64779b9-fhwn5\" (UID: \"53b4e8e1-00b7-4744-8fcf-a723ae104e53\") " pod="openstack-operators/keystone-operator-controller-manager-78b64779b9-fhwn5" Feb 27 16:26:33 crc kubenswrapper[4830]: I0227 16:26:33.646706 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-6fb74c6d59-zw5q9" Feb 27 16:26:33 crc kubenswrapper[4830]: I0227 16:26:33.646719 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ppqmg\" (UniqueName: \"kubernetes.io/projected/e42044d1-1153-4216-8d8f-b8333d2bcb00-kube-api-access-ppqmg\") pod \"mariadb-operator-controller-manager-745fc45789-w8lqb\" (UID: \"e42044d1-1153-4216-8d8f-b8333d2bcb00\") " pod="openstack-operators/mariadb-operator-controller-manager-745fc45789-w8lqb" Feb 27 16:26:33 crc kubenswrapper[4830]: I0227 16:26:33.646772 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4m9gt\" (UniqueName: \"kubernetes.io/projected/b9dbfa18-3a80-408c-9a7d-34a96b2c411e-kube-api-access-4m9gt\") pod \"manila-operator-controller-manager-76fd76856-vtdk8\" (UID: \"b9dbfa18-3a80-408c-9a7d-34a96b2c411e\") " pod="openstack-operators/manila-operator-controller-manager-76fd76856-vtdk8" Feb 27 16:26:33 crc kubenswrapper[4830]: I0227 16:26:33.646825 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vmldm\" (UniqueName: \"kubernetes.io/projected/190a4a9c-ee4a-4c6d-a45c-1febc5a67e9d-kube-api-access-vmldm\") pod \"heat-operator-controller-manager-585b788787-slc8g\" (UID: \"190a4a9c-ee4a-4c6d-a45c-1febc5a67e9d\") " pod="openstack-operators/heat-operator-controller-manager-585b788787-slc8g" Feb 27 16:26:33 crc kubenswrapper[4830]: I0227 16:26:33.646855 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-cgrnr\" (UniqueName: \"kubernetes.io/projected/e68ac45c-7b30-4cd5-932a-9a0e8a3824f3-kube-api-access-cgrnr\") pod \"ironic-operator-controller-manager-8784b4656-29x7g\" (UID: \"e68ac45c-7b30-4cd5-932a-9a0e8a3824f3\") " pod="openstack-operators/ironic-operator-controller-manager-8784b4656-29x7g" Feb 27 16:26:33 crc kubenswrapper[4830]: I0227 16:26:33.646873 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5b73c28e-36b3-4845-9336-299fc3dd2551-cert\") pod \"infra-operator-controller-manager-c77466965-24fz2\" (UID: \"5b73c28e-36b3-4845-9336-299fc3dd2551\") " pod="openstack-operators/infra-operator-controller-manager-c77466965-24fz2" Feb 27 16:26:33 crc kubenswrapper[4830]: E0227 16:26:33.647014 4830 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 27 16:26:33 crc kubenswrapper[4830]: E0227 16:26:33.647067 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5b73c28e-36b3-4845-9336-299fc3dd2551-cert podName:5b73c28e-36b3-4845-9336-299fc3dd2551 nodeName:}" failed. No retries permitted until 2026-02-27 16:26:34.147048566 +0000 UTC m=+1190.236321029 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5b73c28e-36b3-4845-9336-299fc3dd2551-cert") pod "infra-operator-controller-manager-c77466965-24fz2" (UID: "5b73c28e-36b3-4845-9336-299fc3dd2551") : secret "infra-operator-webhook-server-cert" not found Feb 27 16:26:33 crc kubenswrapper[4830]: I0227 16:26:33.653002 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-745fc45789-w8lqb"] Feb 27 16:26:33 crc kubenswrapper[4830]: I0227 16:26:33.662635 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-55cc45767f-m892c" Feb 27 16:26:33 crc kubenswrapper[4830]: I0227 16:26:33.664620 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-6c67ff7674-ftbbj"] Feb 27 16:26:33 crc kubenswrapper[4830]: I0227 16:26:33.681515 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-768f998cf4-qvwzn"] Feb 27 16:26:33 crc kubenswrapper[4830]: I0227 16:26:33.682366 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-768f998cf4-qvwzn" Feb 27 16:26:33 crc kubenswrapper[4830]: I0227 16:26:33.689650 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-8xkhs" Feb 27 16:26:33 crc kubenswrapper[4830]: I0227 16:26:33.693326 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-7f748f8b74-f9pxf" Feb 27 16:26:33 crc kubenswrapper[4830]: I0227 16:26:33.696792 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdnr5\" (UniqueName: \"kubernetes.io/projected/53b4e8e1-00b7-4744-8fcf-a723ae104e53-kube-api-access-fdnr5\") pod \"keystone-operator-controller-manager-78b64779b9-fhwn5\" (UID: \"53b4e8e1-00b7-4744-8fcf-a723ae104e53\") " pod="openstack-operators/keystone-operator-controller-manager-78b64779b9-fhwn5" Feb 27 16:26:33 crc kubenswrapper[4830]: I0227 16:26:33.703730 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-768f998cf4-qvwzn"] Feb 27 16:26:33 crc kubenswrapper[4830]: I0227 16:26:33.710533 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmldm\" (UniqueName: \"kubernetes.io/projected/190a4a9c-ee4a-4c6d-a45c-1febc5a67e9d-kube-api-access-vmldm\") pod \"heat-operator-controller-manager-585b788787-slc8g\" (UID: \"190a4a9c-ee4a-4c6d-a45c-1febc5a67e9d\") " pod="openstack-operators/heat-operator-controller-manager-585b788787-slc8g" Feb 27 16:26:33 crc kubenswrapper[4830]: I0227 16:26:33.713552 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sq7x5\" (UniqueName: \"kubernetes.io/projected/23c25dea-fae4-4381-9b97-98fd17aee9d8-kube-api-access-sq7x5\") pod \"horizon-operator-controller-manager-7db95d7ffb-59k4p\" (UID: \"23c25dea-fae4-4381-9b97-98fd17aee9d8\") " pod="openstack-operators/horizon-operator-controller-manager-7db95d7ffb-59k4p" Feb 27 16:26:33 crc kubenswrapper[4830]: I0227 16:26:33.722126 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-cc79fdffd-2wlpz"] Feb 27 16:26:33 crc kubenswrapper[4830]: I0227 16:26:33.722911 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-cc79fdffd-2wlpz" Feb 27 16:26:33 crc kubenswrapper[4830]: I0227 16:26:33.726586 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kgw59\" (UniqueName: \"kubernetes.io/projected/5b73c28e-36b3-4845-9336-299fc3dd2551-kube-api-access-kgw59\") pod \"infra-operator-controller-manager-c77466965-24fz2\" (UID: \"5b73c28e-36b3-4845-9336-299fc3dd2551\") " pod="openstack-operators/infra-operator-controller-manager-c77466965-24fz2" Feb 27 16:26:33 crc kubenswrapper[4830]: I0227 16:26:33.726977 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-cbbc8" Feb 27 16:26:33 crc kubenswrapper[4830]: I0227 16:26:33.751253 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-stbq8\" (UniqueName: \"kubernetes.io/projected/531e48d4-bbe4-4527-944e-4b27dc957ff4-kube-api-access-stbq8\") pod \"nova-operator-controller-manager-6c67ff7674-ftbbj\" (UID: \"531e48d4-bbe4-4527-944e-4b27dc957ff4\") " pod="openstack-operators/nova-operator-controller-manager-6c67ff7674-ftbbj" Feb 27 16:26:33 crc kubenswrapper[4830]: I0227 16:26:33.751311 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ppqmg\" (UniqueName: \"kubernetes.io/projected/e42044d1-1153-4216-8d8f-b8333d2bcb00-kube-api-access-ppqmg\") pod \"mariadb-operator-controller-manager-745fc45789-w8lqb\" (UID: \"e42044d1-1153-4216-8d8f-b8333d2bcb00\") " pod="openstack-operators/mariadb-operator-controller-manager-745fc45789-w8lqb" Feb 27 16:26:33 crc kubenswrapper[4830]: I0227 16:26:33.751352 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxl55\" (UniqueName: \"kubernetes.io/projected/bbd18a52-1057-4183-bb46-f1c270691eac-kube-api-access-mxl55\") pod 
\"neutron-operator-controller-manager-768f998cf4-qvwzn\" (UID: \"bbd18a52-1057-4183-bb46-f1c270691eac\") " pod="openstack-operators/neutron-operator-controller-manager-768f998cf4-qvwzn" Feb 27 16:26:33 crc kubenswrapper[4830]: I0227 16:26:33.751389 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4m9gt\" (UniqueName: \"kubernetes.io/projected/b9dbfa18-3a80-408c-9a7d-34a96b2c411e-kube-api-access-4m9gt\") pod \"manila-operator-controller-manager-76fd76856-vtdk8\" (UID: \"b9dbfa18-3a80-408c-9a7d-34a96b2c411e\") " pod="openstack-operators/manila-operator-controller-manager-76fd76856-vtdk8" Feb 27 16:26:33 crc kubenswrapper[4830]: I0227 16:26:33.751443 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cgrnr\" (UniqueName: \"kubernetes.io/projected/e68ac45c-7b30-4cd5-932a-9a0e8a3824f3-kube-api-access-cgrnr\") pod \"ironic-operator-controller-manager-8784b4656-29x7g\" (UID: \"e68ac45c-7b30-4cd5-932a-9a0e8a3824f3\") " pod="openstack-operators/ironic-operator-controller-manager-8784b4656-29x7g" Feb 27 16:26:33 crc kubenswrapper[4830]: I0227 16:26:33.751492 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pl95s\" (UniqueName: \"kubernetes.io/projected/7237e49f-cb23-40bd-b5ab-f1460c620f13-kube-api-access-pl95s\") pod \"octavia-operator-controller-manager-cc79fdffd-2wlpz\" (UID: \"7237e49f-cb23-40bd-b5ab-f1460c620f13\") " pod="openstack-operators/octavia-operator-controller-manager-cc79fdffd-2wlpz" Feb 27 16:26:33 crc kubenswrapper[4830]: I0227 16:26:33.765019 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-684c7d77b-2n88g"] Feb 27 16:26:33 crc kubenswrapper[4830]: I0227 16:26:33.769898 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-684c7d77b-2n88g" Feb 27 16:26:33 crc kubenswrapper[4830]: I0227 16:26:33.773981 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-stbq8\" (UniqueName: \"kubernetes.io/projected/531e48d4-bbe4-4527-944e-4b27dc957ff4-kube-api-access-stbq8\") pod \"nova-operator-controller-manager-6c67ff7674-ftbbj\" (UID: \"531e48d4-bbe4-4527-944e-4b27dc957ff4\") " pod="openstack-operators/nova-operator-controller-manager-6c67ff7674-ftbbj" Feb 27 16:26:33 crc kubenswrapper[4830]: I0227 16:26:33.774489 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-vfzhb" Feb 27 16:26:33 crc kubenswrapper[4830]: I0227 16:26:33.779730 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cgrnr\" (UniqueName: \"kubernetes.io/projected/e68ac45c-7b30-4cd5-932a-9a0e8a3824f3-kube-api-access-cgrnr\") pod \"ironic-operator-controller-manager-8784b4656-29x7g\" (UID: \"e68ac45c-7b30-4cd5-932a-9a0e8a3824f3\") " pod="openstack-operators/ironic-operator-controller-manager-8784b4656-29x7g" Feb 27 16:26:33 crc kubenswrapper[4830]: I0227 16:26:33.779936 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4m9gt\" (UniqueName: \"kubernetes.io/projected/b9dbfa18-3a80-408c-9a7d-34a96b2c411e-kube-api-access-4m9gt\") pod \"manila-operator-controller-manager-76fd76856-vtdk8\" (UID: \"b9dbfa18-3a80-408c-9a7d-34a96b2c411e\") " pod="openstack-operators/manila-operator-controller-manager-76fd76856-vtdk8" Feb 27 16:26:33 crc kubenswrapper[4830]: I0227 16:26:33.794708 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-cc79fdffd-2wlpz"] Feb 27 16:26:33 crc kubenswrapper[4830]: I0227 16:26:33.800011 4830 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-68j87"] Feb 27 16:26:33 crc kubenswrapper[4830]: I0227 16:26:33.800867 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-68j87" Feb 27 16:26:33 crc kubenswrapper[4830]: I0227 16:26:33.803861 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-tlkch" Feb 27 16:26:33 crc kubenswrapper[4830]: I0227 16:26:33.804560 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Feb 27 16:26:33 crc kubenswrapper[4830]: I0227 16:26:33.810269 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-585b788787-slc8g" Feb 27 16:26:33 crc kubenswrapper[4830]: I0227 16:26:33.813492 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ppqmg\" (UniqueName: \"kubernetes.io/projected/e42044d1-1153-4216-8d8f-b8333d2bcb00-kube-api-access-ppqmg\") pod \"mariadb-operator-controller-manager-745fc45789-w8lqb\" (UID: \"e42044d1-1153-4216-8d8f-b8333d2bcb00\") " pod="openstack-operators/mariadb-operator-controller-manager-745fc45789-w8lqb" Feb 27 16:26:33 crc kubenswrapper[4830]: I0227 16:26:33.835706 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-7db95d7ffb-59k4p" Feb 27 16:26:33 crc kubenswrapper[4830]: I0227 16:26:33.861160 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-78b64779b9-fhwn5" Feb 27 16:26:33 crc kubenswrapper[4830]: I0227 16:26:33.862405 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b719a387-109a-49fe-b4df-98038c202a0f-cert\") pod \"openstack-baremetal-operator-controller-manager-c5677dc5d-68j87\" (UID: \"b719a387-109a-49fe-b4df-98038c202a0f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-68j87" Feb 27 16:26:33 crc kubenswrapper[4830]: I0227 16:26:33.862464 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mxl55\" (UniqueName: \"kubernetes.io/projected/bbd18a52-1057-4183-bb46-f1c270691eac-kube-api-access-mxl55\") pod \"neutron-operator-controller-manager-768f998cf4-qvwzn\" (UID: \"bbd18a52-1057-4183-bb46-f1c270691eac\") " pod="openstack-operators/neutron-operator-controller-manager-768f998cf4-qvwzn" Feb 27 16:26:33 crc kubenswrapper[4830]: I0227 16:26:33.862611 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzjjl\" (UniqueName: \"kubernetes.io/projected/b719a387-109a-49fe-b4df-98038c202a0f-kube-api-access-vzjjl\") pod \"openstack-baremetal-operator-controller-manager-c5677dc5d-68j87\" (UID: \"b719a387-109a-49fe-b4df-98038c202a0f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-68j87" Feb 27 16:26:33 crc kubenswrapper[4830]: I0227 16:26:33.862654 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pl95s\" (UniqueName: \"kubernetes.io/projected/7237e49f-cb23-40bd-b5ab-f1460c620f13-kube-api-access-pl95s\") pod \"octavia-operator-controller-manager-cc79fdffd-2wlpz\" (UID: \"7237e49f-cb23-40bd-b5ab-f1460c620f13\") " pod="openstack-operators/octavia-operator-controller-manager-cc79fdffd-2wlpz" Feb 27 
16:26:33 crc kubenswrapper[4830]: I0227 16:26:33.862752 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7pgc\" (UniqueName: \"kubernetes.io/projected/f179e5c8-193f-47fc-841e-2dc3feff31cd-kube-api-access-b7pgc\") pod \"ovn-operator-controller-manager-684c7d77b-2n88g\" (UID: \"f179e5c8-193f-47fc-841e-2dc3feff31cd\") " pod="openstack-operators/ovn-operator-controller-manager-684c7d77b-2n88g" Feb 27 16:26:33 crc kubenswrapper[4830]: I0227 16:26:33.869499 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-8784b4656-29x7g" Feb 27 16:26:33 crc kubenswrapper[4830]: I0227 16:26:33.876098 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-684c7d77b-2n88g"] Feb 27 16:26:33 crc kubenswrapper[4830]: I0227 16:26:33.886955 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-bff955cc4-fhgdd"] Feb 27 16:26:33 crc kubenswrapper[4830]: I0227 16:26:33.887009 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pl95s\" (UniqueName: \"kubernetes.io/projected/7237e49f-cb23-40bd-b5ab-f1460c620f13-kube-api-access-pl95s\") pod \"octavia-operator-controller-manager-cc79fdffd-2wlpz\" (UID: \"7237e49f-cb23-40bd-b5ab-f1460c620f13\") " pod="openstack-operators/octavia-operator-controller-manager-cc79fdffd-2wlpz" Feb 27 16:26:33 crc kubenswrapper[4830]: I0227 16:26:33.890047 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-bff955cc4-fhgdd" Feb 27 16:26:33 crc kubenswrapper[4830]: I0227 16:26:33.892915 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxl55\" (UniqueName: \"kubernetes.io/projected/bbd18a52-1057-4183-bb46-f1c270691eac-kube-api-access-mxl55\") pod \"neutron-operator-controller-manager-768f998cf4-qvwzn\" (UID: \"bbd18a52-1057-4183-bb46-f1c270691eac\") " pod="openstack-operators/neutron-operator-controller-manager-768f998cf4-qvwzn" Feb 27 16:26:33 crc kubenswrapper[4830]: I0227 16:26:33.895860 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-ll9xv" Feb 27 16:26:33 crc kubenswrapper[4830]: I0227 16:26:33.917307 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-68j87"] Feb 27 16:26:33 crc kubenswrapper[4830]: I0227 16:26:33.934075 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-76fd76856-vtdk8" Feb 27 16:26:33 crc kubenswrapper[4830]: I0227 16:26:33.934630 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-55f4bf89cb-lqgtj"] Feb 27 16:26:33 crc kubenswrapper[4830]: I0227 16:26:33.935426 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-55f4bf89cb-lqgtj" Feb 27 16:26:33 crc kubenswrapper[4830]: I0227 16:26:33.939604 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-67pm2" Feb 27 16:26:33 crc kubenswrapper[4830]: I0227 16:26:33.948445 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-bff955cc4-fhgdd"] Feb 27 16:26:33 crc kubenswrapper[4830]: I0227 16:26:33.951230 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-745fc45789-w8lqb" Feb 27 16:26:33 crc kubenswrapper[4830]: I0227 16:26:33.990912 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbjdc\" (UniqueName: \"kubernetes.io/projected/8cf505f8-023a-4cfe-be27-2b920c8875cc-kube-api-access-tbjdc\") pod \"placement-operator-controller-manager-bff955cc4-fhgdd\" (UID: \"8cf505f8-023a-4cfe-be27-2b920c8875cc\") " pod="openstack-operators/placement-operator-controller-manager-bff955cc4-fhgdd" Feb 27 16:26:33 crc kubenswrapper[4830]: I0227 16:26:33.990999 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vzjjl\" (UniqueName: \"kubernetes.io/projected/b719a387-109a-49fe-b4df-98038c202a0f-kube-api-access-vzjjl\") pod \"openstack-baremetal-operator-controller-manager-c5677dc5d-68j87\" (UID: \"b719a387-109a-49fe-b4df-98038c202a0f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-68j87" Feb 27 16:26:33 crc kubenswrapper[4830]: I0227 16:26:33.991084 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b7pgc\" (UniqueName: \"kubernetes.io/projected/f179e5c8-193f-47fc-841e-2dc3feff31cd-kube-api-access-b7pgc\") pod 
\"ovn-operator-controller-manager-684c7d77b-2n88g\" (UID: \"f179e5c8-193f-47fc-841e-2dc3feff31cd\") " pod="openstack-operators/ovn-operator-controller-manager-684c7d77b-2n88g" Feb 27 16:26:33 crc kubenswrapper[4830]: I0227 16:26:33.991158 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfg4v\" (UniqueName: \"kubernetes.io/projected/c0bb3f6f-67ec-4669-be22-2122ae624cdd-kube-api-access-mfg4v\") pod \"swift-operator-controller-manager-55f4bf89cb-lqgtj\" (UID: \"c0bb3f6f-67ec-4669-be22-2122ae624cdd\") " pod="openstack-operators/swift-operator-controller-manager-55f4bf89cb-lqgtj" Feb 27 16:26:33 crc kubenswrapper[4830]: I0227 16:26:33.991202 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b719a387-109a-49fe-b4df-98038c202a0f-cert\") pod \"openstack-baremetal-operator-controller-manager-c5677dc5d-68j87\" (UID: \"b719a387-109a-49fe-b4df-98038c202a0f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-68j87" Feb 27 16:26:33 crc kubenswrapper[4830]: E0227 16:26:33.991424 4830 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 27 16:26:33 crc kubenswrapper[4830]: E0227 16:26:33.991476 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b719a387-109a-49fe-b4df-98038c202a0f-cert podName:b719a387-109a-49fe-b4df-98038c202a0f nodeName:}" failed. No retries permitted until 2026-02-27 16:26:34.491462085 +0000 UTC m=+1190.580734548 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/b719a387-109a-49fe-b4df-98038c202a0f-cert") pod "openstack-baremetal-operator-controller-manager-c5677dc5d-68j87" (UID: "b719a387-109a-49fe-b4df-98038c202a0f") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 27 16:26:34 crc kubenswrapper[4830]: I0227 16:26:34.004008 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-55f4bf89cb-lqgtj"] Feb 27 16:26:34 crc kubenswrapper[4830]: I0227 16:26:34.035230 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b7pgc\" (UniqueName: \"kubernetes.io/projected/f179e5c8-193f-47fc-841e-2dc3feff31cd-kube-api-access-b7pgc\") pod \"ovn-operator-controller-manager-684c7d77b-2n88g\" (UID: \"f179e5c8-193f-47fc-841e-2dc3feff31cd\") " pod="openstack-operators/ovn-operator-controller-manager-684c7d77b-2n88g" Feb 27 16:26:34 crc kubenswrapper[4830]: I0227 16:26:34.037691 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-6c67ff7674-ftbbj" Feb 27 16:26:34 crc kubenswrapper[4830]: I0227 16:26:34.046799 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vzjjl\" (UniqueName: \"kubernetes.io/projected/b719a387-109a-49fe-b4df-98038c202a0f-kube-api-access-vzjjl\") pod \"openstack-baremetal-operator-controller-manager-c5677dc5d-68j87\" (UID: \"b719a387-109a-49fe-b4df-98038c202a0f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-68j87" Feb 27 16:26:34 crc kubenswrapper[4830]: I0227 16:26:34.053138 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-56dc67d744-44hlt"] Feb 27 16:26:34 crc kubenswrapper[4830]: I0227 16:26:34.063231 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-56dc67d744-44hlt" Feb 27 16:26:34 crc kubenswrapper[4830]: I0227 16:26:34.065560 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-2ng7b" Feb 27 16:26:34 crc kubenswrapper[4830]: I0227 16:26:34.080409 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-768f998cf4-qvwzn" Feb 27 16:26:34 crc kubenswrapper[4830]: I0227 16:26:34.103088 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-56dc67d744-44hlt"] Feb 27 16:26:34 crc kubenswrapper[4830]: I0227 16:26:34.106252 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-cc79fdffd-2wlpz" Feb 27 16:26:34 crc kubenswrapper[4830]: I0227 16:26:34.106721 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-8467ccb4c8-mh9d6"] Feb 27 16:26:34 crc kubenswrapper[4830]: I0227 16:26:34.107481 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-8467ccb4c8-mh9d6" Feb 27 16:26:34 crc kubenswrapper[4830]: I0227 16:26:34.112029 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-5zbrl" Feb 27 16:26:34 crc kubenswrapper[4830]: I0227 16:26:34.116795 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tbjdc\" (UniqueName: \"kubernetes.io/projected/8cf505f8-023a-4cfe-be27-2b920c8875cc-kube-api-access-tbjdc\") pod \"placement-operator-controller-manager-bff955cc4-fhgdd\" (UID: \"8cf505f8-023a-4cfe-be27-2b920c8875cc\") " pod="openstack-operators/placement-operator-controller-manager-bff955cc4-fhgdd" Feb 27 16:26:34 crc kubenswrapper[4830]: I0227 16:26:34.116929 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mfg4v\" (UniqueName: \"kubernetes.io/projected/c0bb3f6f-67ec-4669-be22-2122ae624cdd-kube-api-access-mfg4v\") pod \"swift-operator-controller-manager-55f4bf89cb-lqgtj\" (UID: \"c0bb3f6f-67ec-4669-be22-2122ae624cdd\") " pod="openstack-operators/swift-operator-controller-manager-55f4bf89cb-lqgtj" Feb 27 16:26:34 crc kubenswrapper[4830]: I0227 16:26:34.130564 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-8467ccb4c8-mh9d6"] Feb 27 16:26:34 crc kubenswrapper[4830]: I0227 16:26:34.136698 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tbjdc\" (UniqueName: \"kubernetes.io/projected/8cf505f8-023a-4cfe-be27-2b920c8875cc-kube-api-access-tbjdc\") pod \"placement-operator-controller-manager-bff955cc4-fhgdd\" (UID: \"8cf505f8-023a-4cfe-be27-2b920c8875cc\") " pod="openstack-operators/placement-operator-controller-manager-bff955cc4-fhgdd" Feb 27 16:26:34 crc kubenswrapper[4830]: I0227 16:26:34.137491 4830 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"kube-api-access-mfg4v\" (UniqueName: \"kubernetes.io/projected/c0bb3f6f-67ec-4669-be22-2122ae624cdd-kube-api-access-mfg4v\") pod \"swift-operator-controller-manager-55f4bf89cb-lqgtj\" (UID: \"c0bb3f6f-67ec-4669-be22-2122ae624cdd\") " pod="openstack-operators/swift-operator-controller-manager-55f4bf89cb-lqgtj" Feb 27 16:26:34 crc kubenswrapper[4830]: I0227 16:26:34.139854 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-65c9f4f6b-w6kw7"] Feb 27 16:26:34 crc kubenswrapper[4830]: I0227 16:26:34.141304 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-65c9f4f6b-w6kw7" Feb 27 16:26:34 crc kubenswrapper[4830]: I0227 16:26:34.143353 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-vk62s" Feb 27 16:26:34 crc kubenswrapper[4830]: I0227 16:26:34.144210 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-65c9f4f6b-w6kw7"] Feb 27 16:26:34 crc kubenswrapper[4830]: I0227 16:26:34.161766 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-7987977d84-9b7m9"] Feb 27 16:26:34 crc kubenswrapper[4830]: I0227 16:26:34.162614 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-7987977d84-9b7m9" Feb 27 16:26:34 crc kubenswrapper[4830]: I0227 16:26:34.163676 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-684c7d77b-2n88g" Feb 27 16:26:34 crc kubenswrapper[4830]: I0227 16:26:34.164718 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-87x5q" Feb 27 16:26:34 crc kubenswrapper[4830]: I0227 16:26:34.164863 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Feb 27 16:26:34 crc kubenswrapper[4830]: I0227 16:26:34.164989 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Feb 27 16:26:34 crc kubenswrapper[4830]: I0227 16:26:34.165583 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-7987977d84-9b7m9"] Feb 27 16:26:34 crc kubenswrapper[4830]: I0227 16:26:34.171014 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4pzb7"] Feb 27 16:26:34 crc kubenswrapper[4830]: I0227 16:26:34.172059 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4pzb7" Feb 27 16:26:34 crc kubenswrapper[4830]: I0227 16:26:34.173213 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-qbhpl" Feb 27 16:26:34 crc kubenswrapper[4830]: I0227 16:26:34.179135 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4pzb7"] Feb 27 16:26:34 crc kubenswrapper[4830]: I0227 16:26:34.217861 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtsq6\" (UniqueName: \"kubernetes.io/projected/37c91ba3-1b2b-4717-b591-d4a4c2ec9d62-kube-api-access-mtsq6\") pod \"watcher-operator-controller-manager-65c9f4f6b-w6kw7\" (UID: \"37c91ba3-1b2b-4717-b591-d4a4c2ec9d62\") " pod="openstack-operators/watcher-operator-controller-manager-65c9f4f6b-w6kw7" Feb 27 16:26:34 crc kubenswrapper[4830]: I0227 16:26:34.217926 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdkc6\" (UniqueName: \"kubernetes.io/projected/a358af53-9ef3-4686-8e96-528d08c2e7a2-kube-api-access-vdkc6\") pod \"telemetry-operator-controller-manager-56dc67d744-44hlt\" (UID: \"a358af53-9ef3-4686-8e96-528d08c2e7a2\") " pod="openstack-operators/telemetry-operator-controller-manager-56dc67d744-44hlt" Feb 27 16:26:34 crc kubenswrapper[4830]: E0227 16:26:34.218144 4830 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 27 16:26:34 crc kubenswrapper[4830]: E0227 16:26:34.218199 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5b73c28e-36b3-4845-9336-299fc3dd2551-cert podName:5b73c28e-36b3-4845-9336-299fc3dd2551 nodeName:}" failed. 
No retries permitted until 2026-02-27 16:26:35.218185076 +0000 UTC m=+1191.307457539 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5b73c28e-36b3-4845-9336-299fc3dd2551-cert") pod "infra-operator-controller-manager-c77466965-24fz2" (UID: "5b73c28e-36b3-4845-9336-299fc3dd2551") : secret "infra-operator-webhook-server-cert" not found Feb 27 16:26:34 crc kubenswrapper[4830]: I0227 16:26:34.219801 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5b73c28e-36b3-4845-9336-299fc3dd2551-cert\") pod \"infra-operator-controller-manager-c77466965-24fz2\" (UID: \"5b73c28e-36b3-4845-9336-299fc3dd2551\") " pod="openstack-operators/infra-operator-controller-manager-c77466965-24fz2" Feb 27 16:26:34 crc kubenswrapper[4830]: I0227 16:26:34.219856 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtfbp\" (UniqueName: \"kubernetes.io/projected/33a4c588-56bf-40d2-892c-9fbe458de600-kube-api-access-mtfbp\") pod \"test-operator-controller-manager-8467ccb4c8-mh9d6\" (UID: \"33a4c588-56bf-40d2-892c-9fbe458de600\") " pod="openstack-operators/test-operator-controller-manager-8467ccb4c8-mh9d6" Feb 27 16:26:34 crc kubenswrapper[4830]: I0227 16:26:34.250189 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-bff955cc4-fhgdd" Feb 27 16:26:34 crc kubenswrapper[4830]: I0227 16:26:34.276442 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-55f4bf89cb-lqgtj" Feb 27 16:26:34 crc kubenswrapper[4830]: I0227 16:26:34.322217 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mtsq6\" (UniqueName: \"kubernetes.io/projected/37c91ba3-1b2b-4717-b591-d4a4c2ec9d62-kube-api-access-mtsq6\") pod \"watcher-operator-controller-manager-65c9f4f6b-w6kw7\" (UID: \"37c91ba3-1b2b-4717-b591-d4a4c2ec9d62\") " pod="openstack-operators/watcher-operator-controller-manager-65c9f4f6b-w6kw7" Feb 27 16:26:34 crc kubenswrapper[4830]: I0227 16:26:34.322512 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4r57\" (UniqueName: \"kubernetes.io/projected/af786cf1-6705-4c96-9c45-882daad96637-kube-api-access-j4r57\") pod \"rabbitmq-cluster-operator-manager-668c99d594-4pzb7\" (UID: \"af786cf1-6705-4c96-9c45-882daad96637\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4pzb7" Feb 27 16:26:34 crc kubenswrapper[4830]: I0227 16:26:34.322540 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7dcda287-c580-4c6d-881d-d2500541cfba-metrics-certs\") pod \"openstack-operator-controller-manager-7987977d84-9b7m9\" (UID: \"7dcda287-c580-4c6d-881d-d2500541cfba\") " pod="openstack-operators/openstack-operator-controller-manager-7987977d84-9b7m9" Feb 27 16:26:34 crc kubenswrapper[4830]: I0227 16:26:34.322559 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vdkc6\" (UniqueName: \"kubernetes.io/projected/a358af53-9ef3-4686-8e96-528d08c2e7a2-kube-api-access-vdkc6\") pod \"telemetry-operator-controller-manager-56dc67d744-44hlt\" (UID: \"a358af53-9ef3-4686-8e96-528d08c2e7a2\") " pod="openstack-operators/telemetry-operator-controller-manager-56dc67d744-44hlt" Feb 27 16:26:34 crc 
kubenswrapper[4830]: I0227 16:26:34.322590 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7dcda287-c580-4c6d-881d-d2500541cfba-webhook-certs\") pod \"openstack-operator-controller-manager-7987977d84-9b7m9\" (UID: \"7dcda287-c580-4c6d-881d-d2500541cfba\") " pod="openstack-operators/openstack-operator-controller-manager-7987977d84-9b7m9" Feb 27 16:26:34 crc kubenswrapper[4830]: I0227 16:26:34.322893 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vj4tx\" (UniqueName: \"kubernetes.io/projected/7dcda287-c580-4c6d-881d-d2500541cfba-kube-api-access-vj4tx\") pod \"openstack-operator-controller-manager-7987977d84-9b7m9\" (UID: \"7dcda287-c580-4c6d-881d-d2500541cfba\") " pod="openstack-operators/openstack-operator-controller-manager-7987977d84-9b7m9" Feb 27 16:26:34 crc kubenswrapper[4830]: I0227 16:26:34.322930 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mtfbp\" (UniqueName: \"kubernetes.io/projected/33a4c588-56bf-40d2-892c-9fbe458de600-kube-api-access-mtfbp\") pod \"test-operator-controller-manager-8467ccb4c8-mh9d6\" (UID: \"33a4c588-56bf-40d2-892c-9fbe458de600\") " pod="openstack-operators/test-operator-controller-manager-8467ccb4c8-mh9d6" Feb 27 16:26:34 crc kubenswrapper[4830]: I0227 16:26:34.340265 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-6fb74c6d59-zw5q9"] Feb 27 16:26:34 crc kubenswrapper[4830]: I0227 16:26:34.349528 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mtfbp\" (UniqueName: \"kubernetes.io/projected/33a4c588-56bf-40d2-892c-9fbe458de600-kube-api-access-mtfbp\") pod \"test-operator-controller-manager-8467ccb4c8-mh9d6\" (UID: \"33a4c588-56bf-40d2-892c-9fbe458de600\") " 
pod="openstack-operators/test-operator-controller-manager-8467ccb4c8-mh9d6" Feb 27 16:26:34 crc kubenswrapper[4830]: I0227 16:26:34.350596 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mtsq6\" (UniqueName: \"kubernetes.io/projected/37c91ba3-1b2b-4717-b591-d4a4c2ec9d62-kube-api-access-mtsq6\") pod \"watcher-operator-controller-manager-65c9f4f6b-w6kw7\" (UID: \"37c91ba3-1b2b-4717-b591-d4a4c2ec9d62\") " pod="openstack-operators/watcher-operator-controller-manager-65c9f4f6b-w6kw7" Feb 27 16:26:34 crc kubenswrapper[4830]: I0227 16:26:34.355009 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vdkc6\" (UniqueName: \"kubernetes.io/projected/a358af53-9ef3-4686-8e96-528d08c2e7a2-kube-api-access-vdkc6\") pod \"telemetry-operator-controller-manager-56dc67d744-44hlt\" (UID: \"a358af53-9ef3-4686-8e96-528d08c2e7a2\") " pod="openstack-operators/telemetry-operator-controller-manager-56dc67d744-44hlt" Feb 27 16:26:34 crc kubenswrapper[4830]: I0227 16:26:34.370097 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-768c8b45bb-7pp52"] Feb 27 16:26:34 crc kubenswrapper[4830]: I0227 16:26:34.405470 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-56dc67d744-44hlt" Feb 27 16:26:34 crc kubenswrapper[4830]: I0227 16:26:34.425675 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j4r57\" (UniqueName: \"kubernetes.io/projected/af786cf1-6705-4c96-9c45-882daad96637-kube-api-access-j4r57\") pod \"rabbitmq-cluster-operator-manager-668c99d594-4pzb7\" (UID: \"af786cf1-6705-4c96-9c45-882daad96637\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4pzb7" Feb 27 16:26:34 crc kubenswrapper[4830]: I0227 16:26:34.425716 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7dcda287-c580-4c6d-881d-d2500541cfba-metrics-certs\") pod \"openstack-operator-controller-manager-7987977d84-9b7m9\" (UID: \"7dcda287-c580-4c6d-881d-d2500541cfba\") " pod="openstack-operators/openstack-operator-controller-manager-7987977d84-9b7m9" Feb 27 16:26:34 crc kubenswrapper[4830]: I0227 16:26:34.425750 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7dcda287-c580-4c6d-881d-d2500541cfba-webhook-certs\") pod \"openstack-operator-controller-manager-7987977d84-9b7m9\" (UID: \"7dcda287-c580-4c6d-881d-d2500541cfba\") " pod="openstack-operators/openstack-operator-controller-manager-7987977d84-9b7m9" Feb 27 16:26:34 crc kubenswrapper[4830]: I0227 16:26:34.425773 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vj4tx\" (UniqueName: \"kubernetes.io/projected/7dcda287-c580-4c6d-881d-d2500541cfba-kube-api-access-vj4tx\") pod \"openstack-operator-controller-manager-7987977d84-9b7m9\" (UID: \"7dcda287-c580-4c6d-881d-d2500541cfba\") " pod="openstack-operators/openstack-operator-controller-manager-7987977d84-9b7m9" Feb 27 16:26:34 crc kubenswrapper[4830]: E0227 16:26:34.426120 4830 
secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 27 16:26:34 crc kubenswrapper[4830]: E0227 16:26:34.426203 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7dcda287-c580-4c6d-881d-d2500541cfba-metrics-certs podName:7dcda287-c580-4c6d-881d-d2500541cfba nodeName:}" failed. No retries permitted until 2026-02-27 16:26:34.926183713 +0000 UTC m=+1191.015456176 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7dcda287-c580-4c6d-881d-d2500541cfba-metrics-certs") pod "openstack-operator-controller-manager-7987977d84-9b7m9" (UID: "7dcda287-c580-4c6d-881d-d2500541cfba") : secret "metrics-server-cert" not found Feb 27 16:26:34 crc kubenswrapper[4830]: E0227 16:26:34.426210 4830 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 27 16:26:34 crc kubenswrapper[4830]: E0227 16:26:34.426252 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7dcda287-c580-4c6d-881d-d2500541cfba-webhook-certs podName:7dcda287-c580-4c6d-881d-d2500541cfba nodeName:}" failed. No retries permitted until 2026-02-27 16:26:34.926237284 +0000 UTC m=+1191.015509747 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/7dcda287-c580-4c6d-881d-d2500541cfba-webhook-certs") pod "openstack-operator-controller-manager-7987977d84-9b7m9" (UID: "7dcda287-c580-4c6d-881d-d2500541cfba") : secret "webhook-server-cert" not found Feb 27 16:26:34 crc kubenswrapper[4830]: I0227 16:26:34.442464 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vj4tx\" (UniqueName: \"kubernetes.io/projected/7dcda287-c580-4c6d-881d-d2500541cfba-kube-api-access-vj4tx\") pod \"openstack-operator-controller-manager-7987977d84-9b7m9\" (UID: \"7dcda287-c580-4c6d-881d-d2500541cfba\") " pod="openstack-operators/openstack-operator-controller-manager-7987977d84-9b7m9" Feb 27 16:26:34 crc kubenswrapper[4830]: I0227 16:26:34.442493 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j4r57\" (UniqueName: \"kubernetes.io/projected/af786cf1-6705-4c96-9c45-882daad96637-kube-api-access-j4r57\") pod \"rabbitmq-cluster-operator-manager-668c99d594-4pzb7\" (UID: \"af786cf1-6705-4c96-9c45-882daad96637\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4pzb7" Feb 27 16:26:34 crc kubenswrapper[4830]: I0227 16:26:34.499461 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-8467ccb4c8-mh9d6" Feb 27 16:26:34 crc kubenswrapper[4830]: I0227 16:26:34.519239 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-65c9f4f6b-w6kw7" Feb 27 16:26:34 crc kubenswrapper[4830]: I0227 16:26:34.526853 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b719a387-109a-49fe-b4df-98038c202a0f-cert\") pod \"openstack-baremetal-operator-controller-manager-c5677dc5d-68j87\" (UID: \"b719a387-109a-49fe-b4df-98038c202a0f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-68j87" Feb 27 16:26:34 crc kubenswrapper[4830]: E0227 16:26:34.526995 4830 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 27 16:26:34 crc kubenswrapper[4830]: E0227 16:26:34.527053 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b719a387-109a-49fe-b4df-98038c202a0f-cert podName:b719a387-109a-49fe-b4df-98038c202a0f nodeName:}" failed. No retries permitted until 2026-02-27 16:26:35.527039113 +0000 UTC m=+1191.616311576 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/b719a387-109a-49fe-b4df-98038c202a0f-cert") pod "openstack-baremetal-operator-controller-manager-c5677dc5d-68j87" (UID: "b719a387-109a-49fe-b4df-98038c202a0f") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 27 16:26:34 crc kubenswrapper[4830]: I0227 16:26:34.554282 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4pzb7" Feb 27 16:26:34 crc kubenswrapper[4830]: I0227 16:26:34.667725 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-585b788787-slc8g"] Feb 27 16:26:34 crc kubenswrapper[4830]: I0227 16:26:34.672934 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-7db95d7ffb-59k4p"] Feb 27 16:26:34 crc kubenswrapper[4830]: I0227 16:26:34.689767 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-55cc45767f-m892c"] Feb 27 16:26:34 crc kubenswrapper[4830]: W0227 16:26:34.694121 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod190a4a9c_ee4a_4c6d_a45c_1febc5a67e9d.slice/crio-46eff94ebb95bb2df8f1cb80766e717bc6bb2a6c2cf6bec91885d3cd295da007 WatchSource:0}: Error finding container 46eff94ebb95bb2df8f1cb80766e717bc6bb2a6c2cf6bec91885d3cd295da007: Status 404 returned error can't find the container with id 46eff94ebb95bb2df8f1cb80766e717bc6bb2a6c2cf6bec91885d3cd295da007 Feb 27 16:26:34 crc kubenswrapper[4830]: I0227 16:26:34.705639 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-7f748f8b74-f9pxf"] Feb 27 16:26:34 crc kubenswrapper[4830]: W0227 16:26:34.713442 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod33e3f2f7_6a6a_4e59_84d6_a7bb2a7b14e2.slice/crio-4cbd7ef291ac6e8da8622a102d4efff692e5200cd03d7d58c0a88b551aa38a9b WatchSource:0}: Error finding container 4cbd7ef291ac6e8da8622a102d4efff692e5200cd03d7d58c0a88b551aa38a9b: Status 404 returned error can't find the container with id 4cbd7ef291ac6e8da8622a102d4efff692e5200cd03d7d58c0a88b551aa38a9b Feb 27 16:26:34 crc 
kubenswrapper[4830]: W0227 16:26:34.732037 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podddc86b78_f250_426e_80a2_1e0da35ea2a5.slice/crio-6cec0230a023b2ddef5f10ad1394f13375db8d8d783d640e88801332fa16c346 WatchSource:0}: Error finding container 6cec0230a023b2ddef5f10ad1394f13375db8d8d783d640e88801332fa16c346: Status 404 returned error can't find the container with id 6cec0230a023b2ddef5f10ad1394f13375db8d8d783d640e88801332fa16c346 Feb 27 16:26:34 crc kubenswrapper[4830]: I0227 16:26:34.933110 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7dcda287-c580-4c6d-881d-d2500541cfba-webhook-certs\") pod \"openstack-operator-controller-manager-7987977d84-9b7m9\" (UID: \"7dcda287-c580-4c6d-881d-d2500541cfba\") " pod="openstack-operators/openstack-operator-controller-manager-7987977d84-9b7m9" Feb 27 16:26:34 crc kubenswrapper[4830]: I0227 16:26:34.933589 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7dcda287-c580-4c6d-881d-d2500541cfba-metrics-certs\") pod \"openstack-operator-controller-manager-7987977d84-9b7m9\" (UID: \"7dcda287-c580-4c6d-881d-d2500541cfba\") " pod="openstack-operators/openstack-operator-controller-manager-7987977d84-9b7m9" Feb 27 16:26:34 crc kubenswrapper[4830]: E0227 16:26:34.933745 4830 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 27 16:26:34 crc kubenswrapper[4830]: E0227 16:26:34.933806 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7dcda287-c580-4c6d-881d-d2500541cfba-metrics-certs podName:7dcda287-c580-4c6d-881d-d2500541cfba nodeName:}" failed. No retries permitted until 2026-02-27 16:26:35.933789437 +0000 UTC m=+1192.023061920 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7dcda287-c580-4c6d-881d-d2500541cfba-metrics-certs") pod "openstack-operator-controller-manager-7987977d84-9b7m9" (UID: "7dcda287-c580-4c6d-881d-d2500541cfba") : secret "metrics-server-cert" not found Feb 27 16:26:34 crc kubenswrapper[4830]: E0227 16:26:34.934206 4830 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 27 16:26:34 crc kubenswrapper[4830]: E0227 16:26:34.934248 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7dcda287-c580-4c6d-881d-d2500541cfba-webhook-certs podName:7dcda287-c580-4c6d-881d-d2500541cfba nodeName:}" failed. No retries permitted until 2026-02-27 16:26:35.934236829 +0000 UTC m=+1192.023509312 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/7dcda287-c580-4c6d-881d-d2500541cfba-webhook-certs") pod "openstack-operator-controller-manager-7987977d84-9b7m9" (UID: "7dcda287-c580-4c6d-881d-d2500541cfba") : secret "webhook-server-cert" not found Feb 27 16:26:35 crc kubenswrapper[4830]: I0227 16:26:35.132239 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-768f998cf4-qvwzn"] Feb 27 16:26:35 crc kubenswrapper[4830]: I0227 16:26:35.149033 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-bff955cc4-fhgdd"] Feb 27 16:26:35 crc kubenswrapper[4830]: I0227 16:26:35.154102 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-684c7d77b-2n88g"] Feb 27 16:26:35 crc kubenswrapper[4830]: W0227 16:26:35.167184 4830 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8cf505f8_023a_4cfe_be27_2b920c8875cc.slice/crio-69c75429dc6e24fac8d1837366f20a54407eb9fde830bb0b41f8cada99b50a28 WatchSource:0}: Error finding container 69c75429dc6e24fac8d1837366f20a54407eb9fde830bb0b41f8cada99b50a28: Status 404 returned error can't find the container with id 69c75429dc6e24fac8d1837366f20a54407eb9fde830bb0b41f8cada99b50a28 Feb 27 16:26:35 crc kubenswrapper[4830]: I0227 16:26:35.170515 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-78b64779b9-fhwn5"] Feb 27 16:26:35 crc kubenswrapper[4830]: I0227 16:26:35.173701 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-585b788787-slc8g" event={"ID":"190a4a9c-ee4a-4c6d-a45c-1febc5a67e9d","Type":"ContainerStarted","Data":"46eff94ebb95bb2df8f1cb80766e717bc6bb2a6c2cf6bec91885d3cd295da007"} Feb 27 16:26:35 crc kubenswrapper[4830]: I0227 16:26:35.177578 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-cc79fdffd-2wlpz"] Feb 27 16:26:35 crc kubenswrapper[4830]: I0227 16:26:35.183011 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-7db95d7ffb-59k4p" event={"ID":"23c25dea-fae4-4381-9b97-98fd17aee9d8","Type":"ContainerStarted","Data":"6b3a2006af0d2efaacff0a2e7c2dc3a4730aec6375f15499757d493c550164d7"} Feb 27 16:26:35 crc kubenswrapper[4830]: I0227 16:26:35.183542 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-745fc45789-w8lqb"] Feb 27 16:26:35 crc kubenswrapper[4830]: I0227 16:26:35.188765 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-56dc67d744-44hlt"] Feb 27 16:26:35 crc kubenswrapper[4830]: I0227 16:26:35.195097 4830 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-6fb74c6d59-zw5q9" event={"ID":"04f72aa7-3bab-4ac9-9fb6-106c7e40b9fb","Type":"ContainerStarted","Data":"acfbb31cbabee14bab96279c77e57f16db11054e775dbe385459c83d08f76217"} Feb 27 16:26:35 crc kubenswrapper[4830]: W0227 16:26:35.195603 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb9dbfa18_3a80_408c_9a7d_34a96b2c411e.slice/crio-144ecae26df9944afd4d295e4a283eda0d2fcd9256f08ab700cfa5897a56eeed WatchSource:0}: Error finding container 144ecae26df9944afd4d295e4a283eda0d2fcd9256f08ab700cfa5897a56eeed: Status 404 returned error can't find the container with id 144ecae26df9944afd4d295e4a283eda0d2fcd9256f08ab700cfa5897a56eeed Feb 27 16:26:35 crc kubenswrapper[4830]: I0227 16:26:35.198218 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-768c8b45bb-7pp52" event={"ID":"9526e5f2-4fd2-42bb-b96a-f9cd615313b9","Type":"ContainerStarted","Data":"4fe51ed65f8fd4ddadc81525d91eef59e8f5173b2ea47d3ad19c052e0db4ded3"} Feb 27 16:26:35 crc kubenswrapper[4830]: I0227 16:26:35.203570 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-8784b4656-29x7g"] Feb 27 16:26:35 crc kubenswrapper[4830]: E0227 16:26:35.205301 4830 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-j4r57,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-4pzb7_openstack-operators(af786cf1-6705-4c96-9c45-882daad96637): 
ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 27 16:26:35 crc kubenswrapper[4830]: E0227 16:26:35.206384 4830 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:b61730aa07404c6893c94c73cb7c80f16eb4d92a759740393430aca41f416b28,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tbjdc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-bff955cc4-fhgdd_openstack-operators(8cf505f8-023a-4cfe-be27-2b920c8875cc): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 27 16:26:35 crc kubenswrapper[4830]: E0227 16:26:35.206443 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4pzb7" podUID="af786cf1-6705-4c96-9c45-882daad96637" Feb 27 16:26:35 crc kubenswrapper[4830]: I0227 16:26:35.206897 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-7f748f8b74-f9pxf" event={"ID":"ddc86b78-f250-426e-80a2-1e0da35ea2a5","Type":"ContainerStarted","Data":"6cec0230a023b2ddef5f10ad1394f13375db8d8d783d640e88801332fa16c346"} Feb 27 16:26:35 crc kubenswrapper[4830]: E0227 16:26:35.208450 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/placement-operator-controller-manager-bff955cc4-fhgdd" podUID="8cf505f8-023a-4cfe-be27-2b920c8875cc" 
Feb 27 16:26:35 crc kubenswrapper[4830]: E0227 16:26:35.213665 4830 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:4b10e23983c3ec518c35aeabb33ac228063e56c81b4d7a100c5d91139ad7d7fc,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vdkc6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-56dc67d744-44hlt_openstack-operators(a358af53-9ef3-4686-8e96-528d08c2e7a2): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 27 16:26:35 crc kubenswrapper[4830]: E0227 16:26:35.214109 4830 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:9a940ee50452c206923805ba7bf69dded7fcf53cb7ec14e22e793bd56501e242,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mtsq6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-65c9f4f6b-w6kw7_openstack-operators(37c91ba3-1b2b-4717-b591-d4a4c2ec9d62): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 27 16:26:35 crc kubenswrapper[4830]: I0227 16:26:35.214380 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-55cc45767f-m892c" event={"ID":"33e3f2f7-6a6a-4e59-84d6-a7bb2a7b14e2","Type":"ContainerStarted","Data":"4cbd7ef291ac6e8da8622a102d4efff692e5200cd03d7d58c0a88b551aa38a9b"} Feb 27 
16:26:35 crc kubenswrapper[4830]: W0227 16:26:35.214890 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode42044d1_1153_4216_8d8f_b8333d2bcb00.slice/crio-3331a872ba0e4e2fb3e9494115b58466dfba0c5d06f578fc527e47bd4fff50f9 WatchSource:0}: Error finding container 3331a872ba0e4e2fb3e9494115b58466dfba0c5d06f578fc527e47bd4fff50f9: Status 404 returned error can't find the container with id 3331a872ba0e4e2fb3e9494115b58466dfba0c5d06f578fc527e47bd4fff50f9 Feb 27 16:26:35 crc kubenswrapper[4830]: E0227 16:26:35.214916 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-56dc67d744-44hlt" podUID="a358af53-9ef3-4686-8e96-528d08c2e7a2" Feb 27 16:26:35 crc kubenswrapper[4830]: E0227 16:26:35.215231 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-65c9f4f6b-w6kw7" podUID="37c91ba3-1b2b-4717-b591-d4a4c2ec9d62" Feb 27 16:26:35 crc kubenswrapper[4830]: E0227 16:26:35.216860 4830 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:14c0fc05afebbccb71f9ac9a6913125154a886b697f21002c77d7d1151e26b8e,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m 
DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ppqmg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-745fc45789-w8lqb_openstack-operators(e42044d1-1153-4216-8d8f-b8333d2bcb00): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 27 16:26:35 crc kubenswrapper[4830]: I0227 16:26:35.217817 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4pzb7"] Feb 27 16:26:35 crc 
kubenswrapper[4830]: E0227 16:26:35.218112 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/mariadb-operator-controller-manager-745fc45789-w8lqb" podUID="e42044d1-1153-4216-8d8f-b8333d2bcb00" Feb 27 16:26:35 crc kubenswrapper[4830]: E0227 16:26:35.226209 4830 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:6f2cb7c21f4c284ce007f6a00ed4ac1e073036e50efae6285c3ee8d3fe1ae5e3,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-stbq8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-6c67ff7674-ftbbj_openstack-operators(531e48d4-bbe4-4527-944e-4b27dc957ff4): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 27 16:26:35 crc kubenswrapper[4830]: E0227 16:26:35.228921 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/nova-operator-controller-manager-6c67ff7674-ftbbj" podUID="531e48d4-bbe4-4527-944e-4b27dc957ff4" Feb 27 16:26:35 crc kubenswrapper[4830]: I0227 16:26:35.231550 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-55f4bf89cb-lqgtj"] Feb 27 16:26:35 crc kubenswrapper[4830]: I0227 16:26:35.238069 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-8467ccb4c8-mh9d6"] Feb 27 16:26:35 crc kubenswrapper[4830]: I0227 16:26:35.238992 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5b73c28e-36b3-4845-9336-299fc3dd2551-cert\") pod \"infra-operator-controller-manager-c77466965-24fz2\" (UID: 
\"5b73c28e-36b3-4845-9336-299fc3dd2551\") " pod="openstack-operators/infra-operator-controller-manager-c77466965-24fz2" Feb 27 16:26:35 crc kubenswrapper[4830]: E0227 16:26:35.239116 4830 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 27 16:26:35 crc kubenswrapper[4830]: E0227 16:26:35.239163 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5b73c28e-36b3-4845-9336-299fc3dd2551-cert podName:5b73c28e-36b3-4845-9336-299fc3dd2551 nodeName:}" failed. No retries permitted until 2026-02-27 16:26:37.239150857 +0000 UTC m=+1193.328423310 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5b73c28e-36b3-4845-9336-299fc3dd2551-cert") pod "infra-operator-controller-manager-c77466965-24fz2" (UID: "5b73c28e-36b3-4845-9336-299fc3dd2551") : secret "infra-operator-webhook-server-cert" not found Feb 27 16:26:35 crc kubenswrapper[4830]: I0227 16:26:35.242510 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-76fd76856-vtdk8"] Feb 27 16:26:35 crc kubenswrapper[4830]: I0227 16:26:35.246838 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-6c67ff7674-ftbbj"] Feb 27 16:26:35 crc kubenswrapper[4830]: I0227 16:26:35.251042 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-65c9f4f6b-w6kw7"] Feb 27 16:26:35 crc kubenswrapper[4830]: I0227 16:26:35.548770 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b719a387-109a-49fe-b4df-98038c202a0f-cert\") pod \"openstack-baremetal-operator-controller-manager-c5677dc5d-68j87\" (UID: \"b719a387-109a-49fe-b4df-98038c202a0f\") " 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-68j87" Feb 27 16:26:35 crc kubenswrapper[4830]: E0227 16:26:35.550105 4830 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 27 16:26:35 crc kubenswrapper[4830]: E0227 16:26:35.550196 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b719a387-109a-49fe-b4df-98038c202a0f-cert podName:b719a387-109a-49fe-b4df-98038c202a0f nodeName:}" failed. No retries permitted until 2026-02-27 16:26:37.550171588 +0000 UTC m=+1193.639444051 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/b719a387-109a-49fe-b4df-98038c202a0f-cert") pod "openstack-baremetal-operator-controller-manager-c5677dc5d-68j87" (UID: "b719a387-109a-49fe-b4df-98038c202a0f") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 27 16:26:35 crc kubenswrapper[4830]: I0227 16:26:35.955346 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7dcda287-c580-4c6d-881d-d2500541cfba-metrics-certs\") pod \"openstack-operator-controller-manager-7987977d84-9b7m9\" (UID: \"7dcda287-c580-4c6d-881d-d2500541cfba\") " pod="openstack-operators/openstack-operator-controller-manager-7987977d84-9b7m9" Feb 27 16:26:35 crc kubenswrapper[4830]: I0227 16:26:35.955501 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7dcda287-c580-4c6d-881d-d2500541cfba-webhook-certs\") pod \"openstack-operator-controller-manager-7987977d84-9b7m9\" (UID: \"7dcda287-c580-4c6d-881d-d2500541cfba\") " pod="openstack-operators/openstack-operator-controller-manager-7987977d84-9b7m9" Feb 27 16:26:35 crc kubenswrapper[4830]: E0227 16:26:35.955610 4830 secret.go:188] Couldn't get 
secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 27 16:26:35 crc kubenswrapper[4830]: E0227 16:26:35.955674 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7dcda287-c580-4c6d-881d-d2500541cfba-webhook-certs podName:7dcda287-c580-4c6d-881d-d2500541cfba nodeName:}" failed. No retries permitted until 2026-02-27 16:26:37.955660131 +0000 UTC m=+1194.044932594 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/7dcda287-c580-4c6d-881d-d2500541cfba-webhook-certs") pod "openstack-operator-controller-manager-7987977d84-9b7m9" (UID: "7dcda287-c580-4c6d-881d-d2500541cfba") : secret "webhook-server-cert" not found Feb 27 16:26:35 crc kubenswrapper[4830]: E0227 16:26:35.955846 4830 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 27 16:26:35 crc kubenswrapper[4830]: E0227 16:26:35.955925 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7dcda287-c580-4c6d-881d-d2500541cfba-metrics-certs podName:7dcda287-c580-4c6d-881d-d2500541cfba nodeName:}" failed. No retries permitted until 2026-02-27 16:26:37.955907117 +0000 UTC m=+1194.045179580 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7dcda287-c580-4c6d-881d-d2500541cfba-metrics-certs") pod "openstack-operator-controller-manager-7987977d84-9b7m9" (UID: "7dcda287-c580-4c6d-881d-d2500541cfba") : secret "metrics-server-cert" not found Feb 27 16:26:36 crc kubenswrapper[4830]: I0227 16:26:36.229836 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-6c67ff7674-ftbbj" event={"ID":"531e48d4-bbe4-4527-944e-4b27dc957ff4","Type":"ContainerStarted","Data":"923c30c70995da3e31fd37221d06d00135d234f751f204ce6c371c662f434a70"} Feb 27 16:26:36 crc kubenswrapper[4830]: E0227 16:26:36.231583 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:6f2cb7c21f4c284ce007f6a00ed4ac1e073036e50efae6285c3ee8d3fe1ae5e3\\\"\"" pod="openstack-operators/nova-operator-controller-manager-6c67ff7674-ftbbj" podUID="531e48d4-bbe4-4527-944e-4b27dc957ff4" Feb 27 16:26:36 crc kubenswrapper[4830]: I0227 16:26:36.233647 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-65c9f4f6b-w6kw7" event={"ID":"37c91ba3-1b2b-4717-b591-d4a4c2ec9d62","Type":"ContainerStarted","Data":"3c322a674e99f3a9c5cac17ef868d102bf70b6c150e7eb947b7526b97c1801c8"} Feb 27 16:26:36 crc kubenswrapper[4830]: E0227 16:26:36.234509 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:9a940ee50452c206923805ba7bf69dded7fcf53cb7ec14e22e793bd56501e242\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-65c9f4f6b-w6kw7" podUID="37c91ba3-1b2b-4717-b591-d4a4c2ec9d62" Feb 27 16:26:36 crc kubenswrapper[4830]: I0227 16:26:36.264410 4830 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-cc79fdffd-2wlpz" event={"ID":"7237e49f-cb23-40bd-b5ab-f1460c620f13","Type":"ContainerStarted","Data":"e4aebd67cea5dfaded09cde4c0f8b97818763f270c30e37bcf968a0267d52a91"} Feb 27 16:26:36 crc kubenswrapper[4830]: I0227 16:26:36.266289 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-8467ccb4c8-mh9d6" event={"ID":"33a4c588-56bf-40d2-892c-9fbe458de600","Type":"ContainerStarted","Data":"4d14fabd3ed499cd334428dab6ed83408dc6695e6266902ed3dde163ccaa9442"} Feb 27 16:26:36 crc kubenswrapper[4830]: I0227 16:26:36.267876 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-76fd76856-vtdk8" event={"ID":"b9dbfa18-3a80-408c-9a7d-34a96b2c411e","Type":"ContainerStarted","Data":"144ecae26df9944afd4d295e4a283eda0d2fcd9256f08ab700cfa5897a56eeed"} Feb 27 16:26:36 crc kubenswrapper[4830]: I0227 16:26:36.285371 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4pzb7" event={"ID":"af786cf1-6705-4c96-9c45-882daad96637","Type":"ContainerStarted","Data":"284b7ec1bad7930cefe67ca1cfffddf63c338f16b45b3dd482e0c707765369b3"} Feb 27 16:26:36 crc kubenswrapper[4830]: E0227 16:26:36.288059 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4pzb7" podUID="af786cf1-6705-4c96-9c45-882daad96637" Feb 27 16:26:36 crc kubenswrapper[4830]: I0227 16:26:36.305509 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-745fc45789-w8lqb" 
event={"ID":"e42044d1-1153-4216-8d8f-b8333d2bcb00","Type":"ContainerStarted","Data":"3331a872ba0e4e2fb3e9494115b58466dfba0c5d06f578fc527e47bd4fff50f9"}
Feb 27 16:26:36 crc kubenswrapper[4830]: E0227 16:26:36.308222 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:14c0fc05afebbccb71f9ac9a6913125154a886b697f21002c77d7d1151e26b8e\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-745fc45789-w8lqb" podUID="e42044d1-1153-4216-8d8f-b8333d2bcb00"
Feb 27 16:26:36 crc kubenswrapper[4830]: I0227 16:26:36.308239 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-55f4bf89cb-lqgtj" event={"ID":"c0bb3f6f-67ec-4669-be22-2122ae624cdd","Type":"ContainerStarted","Data":"2f63dbacdbbd4ed7068efcf082801fb08443d2dc60a047e164879b45948a940e"}
Feb 27 16:26:36 crc kubenswrapper[4830]: I0227 16:26:36.314790 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-56dc67d744-44hlt" event={"ID":"a358af53-9ef3-4686-8e96-528d08c2e7a2","Type":"ContainerStarted","Data":"217fb43c31863ea5bb594b0d87d9bdbd2be09e930c8255e83d48f86aeb192108"}
Feb 27 16:26:36 crc kubenswrapper[4830]: E0227 16:26:36.315914 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:4b10e23983c3ec518c35aeabb33ac228063e56c81b4d7a100c5d91139ad7d7fc\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-56dc67d744-44hlt" podUID="a358af53-9ef3-4686-8e96-528d08c2e7a2"
Feb 27 16:26:36 crc kubenswrapper[4830]: I0227 16:26:36.316414 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-78b64779b9-fhwn5"
event={"ID":"53b4e8e1-00b7-4744-8fcf-a723ae104e53","Type":"ContainerStarted","Data":"84ff61ea8943ef2ab23c3f010276ea385934515fafb641ed681771d68b063f01"}
Feb 27 16:26:36 crc kubenswrapper[4830]: I0227 16:26:36.324488 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-8784b4656-29x7g" event={"ID":"e68ac45c-7b30-4cd5-932a-9a0e8a3824f3","Type":"ContainerStarted","Data":"6ce8313a8be86734d50d1790826185883275d13bd7b5bb6aeaf6b345a22c8706"}
Feb 27 16:26:36 crc kubenswrapper[4830]: I0227 16:26:36.326567 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-684c7d77b-2n88g" event={"ID":"f179e5c8-193f-47fc-841e-2dc3feff31cd","Type":"ContainerStarted","Data":"438ecc708922eebfbfee3815ceda1a2c45fb0dd58b90fa99d597c7ce83757da6"}
Feb 27 16:26:36 crc kubenswrapper[4830]: I0227 16:26:36.329763 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-bff955cc4-fhgdd" event={"ID":"8cf505f8-023a-4cfe-be27-2b920c8875cc","Type":"ContainerStarted","Data":"69c75429dc6e24fac8d1837366f20a54407eb9fde830bb0b41f8cada99b50a28"}
Feb 27 16:26:36 crc kubenswrapper[4830]: E0227 16:26:36.331815 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:b61730aa07404c6893c94c73cb7c80f16eb4d92a759740393430aca41f416b28\\\"\"" pod="openstack-operators/placement-operator-controller-manager-bff955cc4-fhgdd" podUID="8cf505f8-023a-4cfe-be27-2b920c8875cc"
Feb 27 16:26:36 crc kubenswrapper[4830]: I0227 16:26:36.332409 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-768f998cf4-qvwzn"
event={"ID":"bbd18a52-1057-4183-bb46-f1c270691eac","Type":"ContainerStarted","Data":"fdd2245d0474f7c792eae8fce6aff908b51e3ad11ab68b7c113c22be63769f5b"}
Feb 27 16:26:37 crc kubenswrapper[4830]: I0227 16:26:37.280110 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5b73c28e-36b3-4845-9336-299fc3dd2551-cert\") pod \"infra-operator-controller-manager-c77466965-24fz2\" (UID: \"5b73c28e-36b3-4845-9336-299fc3dd2551\") " pod="openstack-operators/infra-operator-controller-manager-c77466965-24fz2"
Feb 27 16:26:37 crc kubenswrapper[4830]: E0227 16:26:37.280320 4830 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Feb 27 16:26:37 crc kubenswrapper[4830]: E0227 16:26:37.280373 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5b73c28e-36b3-4845-9336-299fc3dd2551-cert podName:5b73c28e-36b3-4845-9336-299fc3dd2551 nodeName:}" failed. No retries permitted until 2026-02-27 16:26:41.280359143 +0000 UTC m=+1197.369631606 (durationBeforeRetry 4s).
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5b73c28e-36b3-4845-9336-299fc3dd2551-cert") pod "infra-operator-controller-manager-c77466965-24fz2" (UID: "5b73c28e-36b3-4845-9336-299fc3dd2551") : secret "infra-operator-webhook-server-cert" not found
Feb 27 16:26:37 crc kubenswrapper[4830]: E0227 16:26:37.341262 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:6f2cb7c21f4c284ce007f6a00ed4ac1e073036e50efae6285c3ee8d3fe1ae5e3\\\"\"" pod="openstack-operators/nova-operator-controller-manager-6c67ff7674-ftbbj" podUID="531e48d4-bbe4-4527-944e-4b27dc957ff4"
Feb 27 16:26:37 crc kubenswrapper[4830]: E0227 16:26:37.342028 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:14c0fc05afebbccb71f9ac9a6913125154a886b697f21002c77d7d1151e26b8e\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-745fc45789-w8lqb" podUID="e42044d1-1153-4216-8d8f-b8333d2bcb00"
Feb 27 16:26:37 crc kubenswrapper[4830]: E0227 16:26:37.342078 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:4b10e23983c3ec518c35aeabb33ac228063e56c81b4d7a100c5d91139ad7d7fc\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-56dc67d744-44hlt" podUID="a358af53-9ef3-4686-8e96-528d08c2e7a2"
Feb 27 16:26:37 crc kubenswrapper[4830]: E0227 16:26:37.342099 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image
\\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4pzb7" podUID="af786cf1-6705-4c96-9c45-882daad96637"
Feb 27 16:26:37 crc kubenswrapper[4830]: E0227 16:26:37.347585 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:b61730aa07404c6893c94c73cb7c80f16eb4d92a759740393430aca41f416b28\\\"\"" pod="openstack-operators/placement-operator-controller-manager-bff955cc4-fhgdd" podUID="8cf505f8-023a-4cfe-be27-2b920c8875cc"
Feb 27 16:26:37 crc kubenswrapper[4830]: E0227 16:26:37.348673 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:9a940ee50452c206923805ba7bf69dded7fcf53cb7ec14e22e793bd56501e242\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-65c9f4f6b-w6kw7" podUID="37c91ba3-1b2b-4717-b591-d4a4c2ec9d62"
Feb 27 16:26:37 crc kubenswrapper[4830]: I0227 16:26:37.584589 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b719a387-109a-49fe-b4df-98038c202a0f-cert\") pod \"openstack-baremetal-operator-controller-manager-c5677dc5d-68j87\" (UID: \"b719a387-109a-49fe-b4df-98038c202a0f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-68j87"
Feb 27 16:26:37 crc kubenswrapper[4830]: E0227 16:26:37.584767 4830 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Feb 27 16:26:37 crc kubenswrapper[4830]: E0227 16:26:37.584825 4830
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b719a387-109a-49fe-b4df-98038c202a0f-cert podName:b719a387-109a-49fe-b4df-98038c202a0f nodeName:}" failed. No retries permitted until 2026-02-27 16:26:41.584808701 +0000 UTC m=+1197.674081164 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/b719a387-109a-49fe-b4df-98038c202a0f-cert") pod "openstack-baremetal-operator-controller-manager-c5677dc5d-68j87" (UID: "b719a387-109a-49fe-b4df-98038c202a0f") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Feb 27 16:26:38 crc kubenswrapper[4830]: I0227 16:26:38.012088 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7dcda287-c580-4c6d-881d-d2500541cfba-metrics-certs\") pod \"openstack-operator-controller-manager-7987977d84-9b7m9\" (UID: \"7dcda287-c580-4c6d-881d-d2500541cfba\") " pod="openstack-operators/openstack-operator-controller-manager-7987977d84-9b7m9"
Feb 27 16:26:38 crc kubenswrapper[4830]: I0227 16:26:38.012147 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7dcda287-c580-4c6d-881d-d2500541cfba-webhook-certs\") pod \"openstack-operator-controller-manager-7987977d84-9b7m9\" (UID: \"7dcda287-c580-4c6d-881d-d2500541cfba\") " pod="openstack-operators/openstack-operator-controller-manager-7987977d84-9b7m9"
Feb 27 16:26:38 crc kubenswrapper[4830]: E0227 16:26:38.012281 4830 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Feb 27 16:26:38 crc kubenswrapper[4830]: E0227 16:26:38.012328 4830 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Feb 27 16:26:38 crc kubenswrapper[4830]: E0227 16:26:38.012358 4830 nestedpendingoperations.go:348] Operation for
"{volumeName:kubernetes.io/secret/7dcda287-c580-4c6d-881d-d2500541cfba-metrics-certs podName:7dcda287-c580-4c6d-881d-d2500541cfba nodeName:}" failed. No retries permitted until 2026-02-27 16:26:42.0123391 +0000 UTC m=+1198.101611563 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7dcda287-c580-4c6d-881d-d2500541cfba-metrics-certs") pod "openstack-operator-controller-manager-7987977d84-9b7m9" (UID: "7dcda287-c580-4c6d-881d-d2500541cfba") : secret "metrics-server-cert" not found
Feb 27 16:26:38 crc kubenswrapper[4830]: E0227 16:26:38.012378 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7dcda287-c580-4c6d-881d-d2500541cfba-webhook-certs podName:7dcda287-c580-4c6d-881d-d2500541cfba nodeName:}" failed. No retries permitted until 2026-02-27 16:26:42.012370161 +0000 UTC m=+1198.101642624 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/7dcda287-c580-4c6d-881d-d2500541cfba-webhook-certs") pod "openstack-operator-controller-manager-7987977d84-9b7m9" (UID: "7dcda287-c580-4c6d-881d-d2500541cfba") : secret "webhook-server-cert" not found
Feb 27 16:26:41 crc kubenswrapper[4830]: I0227 16:26:41.372577 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5b73c28e-36b3-4845-9336-299fc3dd2551-cert\") pod \"infra-operator-controller-manager-c77466965-24fz2\" (UID: \"5b73c28e-36b3-4845-9336-299fc3dd2551\") " pod="openstack-operators/infra-operator-controller-manager-c77466965-24fz2"
Feb 27 16:26:41 crc kubenswrapper[4830]: E0227 16:26:41.372830 4830 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Feb 27 16:26:41 crc kubenswrapper[4830]: E0227 16:26:41.373350 4830 nestedpendingoperations.go:348] Operation for
"{volumeName:kubernetes.io/secret/5b73c28e-36b3-4845-9336-299fc3dd2551-cert podName:5b73c28e-36b3-4845-9336-299fc3dd2551 nodeName:}" failed. No retries permitted until 2026-02-27 16:26:49.373322775 +0000 UTC m=+1205.462595278 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5b73c28e-36b3-4845-9336-299fc3dd2551-cert") pod "infra-operator-controller-manager-c77466965-24fz2" (UID: "5b73c28e-36b3-4845-9336-299fc3dd2551") : secret "infra-operator-webhook-server-cert" not found
Feb 27 16:26:41 crc kubenswrapper[4830]: I0227 16:26:41.677770 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b719a387-109a-49fe-b4df-98038c202a0f-cert\") pod \"openstack-baremetal-operator-controller-manager-c5677dc5d-68j87\" (UID: \"b719a387-109a-49fe-b4df-98038c202a0f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-68j87"
Feb 27 16:26:41 crc kubenswrapper[4830]: E0227 16:26:41.678015 4830 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Feb 27 16:26:41 crc kubenswrapper[4830]: E0227 16:26:41.678073 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b719a387-109a-49fe-b4df-98038c202a0f-cert podName:b719a387-109a-49fe-b4df-98038c202a0f nodeName:}" failed. No retries permitted until 2026-02-27 16:26:49.67805443 +0000 UTC m=+1205.767326903 (durationBeforeRetry 8s).
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/b719a387-109a-49fe-b4df-98038c202a0f-cert") pod "openstack-baremetal-operator-controller-manager-c5677dc5d-68j87" (UID: "b719a387-109a-49fe-b4df-98038c202a0f") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Feb 27 16:26:42 crc kubenswrapper[4830]: I0227 16:26:42.085060 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7dcda287-c580-4c6d-881d-d2500541cfba-webhook-certs\") pod \"openstack-operator-controller-manager-7987977d84-9b7m9\" (UID: \"7dcda287-c580-4c6d-881d-d2500541cfba\") " pod="openstack-operators/openstack-operator-controller-manager-7987977d84-9b7m9"
Feb 27 16:26:42 crc kubenswrapper[4830]: I0227 16:26:42.085235 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7dcda287-c580-4c6d-881d-d2500541cfba-metrics-certs\") pod \"openstack-operator-controller-manager-7987977d84-9b7m9\" (UID: \"7dcda287-c580-4c6d-881d-d2500541cfba\") " pod="openstack-operators/openstack-operator-controller-manager-7987977d84-9b7m9"
Feb 27 16:26:42 crc kubenswrapper[4830]: E0227 16:26:42.085242 4830 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Feb 27 16:26:42 crc kubenswrapper[4830]: E0227 16:26:42.085315 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7dcda287-c580-4c6d-881d-d2500541cfba-webhook-certs podName:7dcda287-c580-4c6d-881d-d2500541cfba nodeName:}" failed. No retries permitted until 2026-02-27 16:26:50.085294737 +0000 UTC m=+1206.174567200 (durationBeforeRetry 8s).
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/7dcda287-c580-4c6d-881d-d2500541cfba-webhook-certs") pod "openstack-operator-controller-manager-7987977d84-9b7m9" (UID: "7dcda287-c580-4c6d-881d-d2500541cfba") : secret "webhook-server-cert" not found
Feb 27 16:26:42 crc kubenswrapper[4830]: E0227 16:26:42.085371 4830 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Feb 27 16:26:42 crc kubenswrapper[4830]: E0227 16:26:42.085438 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7dcda287-c580-4c6d-881d-d2500541cfba-metrics-certs podName:7dcda287-c580-4c6d-881d-d2500541cfba nodeName:}" failed. No retries permitted until 2026-02-27 16:26:50.08542031 +0000 UTC m=+1206.174692783 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7dcda287-c580-4c6d-881d-d2500541cfba-metrics-certs") pod "openstack-operator-controller-manager-7987977d84-9b7m9" (UID: "7dcda287-c580-4c6d-881d-d2500541cfba") : secret "metrics-server-cert" not found
Feb 27 16:26:46 crc kubenswrapper[4830]: E0227 16:26:46.696633 4830 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/glance-operator@sha256:bff5fd9d4a6b97d53632cd4d4c0bd1a2925b409be6a2bbcb088b53b21cc3a54a"
Feb 27 16:26:46 crc kubenswrapper[4830]: E0227 16:26:46.698673 4830 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/glance-operator@sha256:bff5fd9d4a6b97d53632cd4d4c0bd1a2925b409be6a2bbcb088b53b21cc3a54a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-smmrf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-operator-controller-manager-7f748f8b74-f9pxf_openstack-operators(ddc86b78-f250-426e-80a2-1e0da35ea2a5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Feb 27 16:26:46 crc kubenswrapper[4830]: E0227 16:26:46.699905 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/glance-operator-controller-manager-7f748f8b74-f9pxf" podUID="ddc86b78-f250-426e-80a2-1e0da35ea2a5"
Feb 27 16:26:47 crc kubenswrapper[4830]: E0227 16:26:47.421382 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/glance-operator@sha256:bff5fd9d4a6b97d53632cd4d4c0bd1a2925b409be6a2bbcb088b53b21cc3a54a\\\"\"" pod="openstack-operators/glance-operator-controller-manager-7f748f8b74-f9pxf" podUID="ddc86b78-f250-426e-80a2-1e0da35ea2a5"
Feb 27 16:26:47 crc kubenswrapper[4830]: E0227 16:26:47.457148 4830 log.go:32] "PullImage from image service failed"
err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/test-operator@sha256:f9b2e00617c7f219932ea0d5e2bb795cc4361a335a72743077948d8108695c27"
Feb 27 16:26:47 crc kubenswrapper[4830]: E0227 16:26:47.457457 4830 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:f9b2e00617c7f219932ea0d5e2bb795cc4361a335a72743077948d8108695c27,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mtfbp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-8467ccb4c8-mh9d6_openstack-operators(33a4c588-56bf-40d2-892c-9fbe458de600): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Feb 27 16:26:47 crc kubenswrapper[4830]: E0227 16:26:47.459196 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/test-operator-controller-manager-8467ccb4c8-mh9d6" podUID="33a4c588-56bf-40d2-892c-9fbe458de600"
Feb 27 16:26:48 crc kubenswrapper[4830]: E0227 16:26:48.428906 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:f9b2e00617c7f219932ea0d5e2bb795cc4361a335a72743077948d8108695c27\\\"\"" pod="openstack-operators/test-operator-controller-manager-8467ccb4c8-mh9d6" podUID="33a4c588-56bf-40d2-892c-9fbe458de600"
Feb 27 16:26:49 crc kubenswrapper[4830]: I0227 16:26:49.376413 4830 reconciler_common.go:218] "operationExecutor.MountVolume
started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5b73c28e-36b3-4845-9336-299fc3dd2551-cert\") pod \"infra-operator-controller-manager-c77466965-24fz2\" (UID: \"5b73c28e-36b3-4845-9336-299fc3dd2551\") " pod="openstack-operators/infra-operator-controller-manager-c77466965-24fz2"
Feb 27 16:26:49 crc kubenswrapper[4830]: E0227 16:26:49.376723 4830 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Feb 27 16:26:49 crc kubenswrapper[4830]: E0227 16:26:49.376849 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5b73c28e-36b3-4845-9336-299fc3dd2551-cert podName:5b73c28e-36b3-4845-9336-299fc3dd2551 nodeName:}" failed. No retries permitted until 2026-02-27 16:27:05.376819297 +0000 UTC m=+1221.466091790 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5b73c28e-36b3-4845-9336-299fc3dd2551-cert") pod "infra-operator-controller-manager-c77466965-24fz2" (UID: "5b73c28e-36b3-4845-9336-299fc3dd2551") : secret "infra-operator-webhook-server-cert" not found
Feb 27 16:26:49 crc kubenswrapper[4830]: I0227 16:26:49.681467 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b719a387-109a-49fe-b4df-98038c202a0f-cert\") pod \"openstack-baremetal-operator-controller-manager-c5677dc5d-68j87\" (UID: \"b719a387-109a-49fe-b4df-98038c202a0f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-68j87"
Feb 27 16:26:49 crc kubenswrapper[4830]: E0227 16:26:49.681660 4830 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Feb 27 16:26:49 crc kubenswrapper[4830]: E0227 16:26:49.681838 4830 nestedpendingoperations.go:348] Operation for
"{volumeName:kubernetes.io/secret/b719a387-109a-49fe-b4df-98038c202a0f-cert podName:b719a387-109a-49fe-b4df-98038c202a0f nodeName:}" failed. No retries permitted until 2026-02-27 16:27:05.681818748 +0000 UTC m=+1221.771091211 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/b719a387-109a-49fe-b4df-98038c202a0f-cert") pod "openstack-baremetal-operator-controller-manager-c5677dc5d-68j87" (UID: "b719a387-109a-49fe-b4df-98038c202a0f") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Feb 27 16:26:49 crc kubenswrapper[4830]: E0227 16:26:49.993521 4830 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/heat-operator@sha256:2014d974f067289517c3bc3ae780a234a9c6577cfbee44c3a50f9856ec12bf76"
Feb 27 16:26:49 crc kubenswrapper[4830]: E0227 16:26:49.993879 4830 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/heat-operator@sha256:2014d974f067289517c3bc3ae780a234a9c6577cfbee44c3a50f9856ec12bf76,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {}
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vmldm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-operator-controller-manager-585b788787-slc8g_openstack-operators(190a4a9c-ee4a-4c6d-a45c-1febc5a67e9d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Feb 27 16:26:49 crc kubenswrapper[4830]: E0227 16:26:49.995217 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\""
pod="openstack-operators/heat-operator-controller-manager-585b788787-slc8g" podUID="190a4a9c-ee4a-4c6d-a45c-1febc5a67e9d"
Feb 27 16:26:50 crc kubenswrapper[4830]: I0227 16:26:50.087816 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7dcda287-c580-4c6d-881d-d2500541cfba-webhook-certs\") pod \"openstack-operator-controller-manager-7987977d84-9b7m9\" (UID: \"7dcda287-c580-4c6d-881d-d2500541cfba\") " pod="openstack-operators/openstack-operator-controller-manager-7987977d84-9b7m9"
Feb 27 16:26:50 crc kubenswrapper[4830]: E0227 16:26:50.088086 4830 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Feb 27 16:26:50 crc kubenswrapper[4830]: E0227 16:26:50.088204 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7dcda287-c580-4c6d-881d-d2500541cfba-webhook-certs podName:7dcda287-c580-4c6d-881d-d2500541cfba nodeName:}" failed. No retries permitted until 2026-02-27 16:27:06.088171113 +0000 UTC m=+1222.177443616 (durationBeforeRetry 16s).
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/7dcda287-c580-4c6d-881d-d2500541cfba-webhook-certs") pod "openstack-operator-controller-manager-7987977d84-9b7m9" (UID: "7dcda287-c580-4c6d-881d-d2500541cfba") : secret "webhook-server-cert" not found
Feb 27 16:26:50 crc kubenswrapper[4830]: I0227 16:26:50.088249 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7dcda287-c580-4c6d-881d-d2500541cfba-metrics-certs\") pod \"openstack-operator-controller-manager-7987977d84-9b7m9\" (UID: \"7dcda287-c580-4c6d-881d-d2500541cfba\") " pod="openstack-operators/openstack-operator-controller-manager-7987977d84-9b7m9"
Feb 27 16:26:50 crc kubenswrapper[4830]: E0227 16:26:50.088441 4830 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Feb 27 16:26:50 crc kubenswrapper[4830]: E0227 16:26:50.088511 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7dcda287-c580-4c6d-881d-d2500541cfba-metrics-certs podName:7dcda287-c580-4c6d-881d-d2500541cfba nodeName:}" failed. No retries permitted until 2026-02-27 16:27:06.088492831 +0000 UTC m=+1222.177765334 (durationBeforeRetry 16s).
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7dcda287-c580-4c6d-881d-d2500541cfba-metrics-certs") pod "openstack-operator-controller-manager-7987977d84-9b7m9" (UID: "7dcda287-c580-4c6d-881d-d2500541cfba") : secret "metrics-server-cert" not found Feb 27 16:26:50 crc kubenswrapper[4830]: E0227 16:26:50.442148 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/heat-operator@sha256:2014d974f067289517c3bc3ae780a234a9c6577cfbee44c3a50f9856ec12bf76\\\"\"" pod="openstack-operators/heat-operator-controller-manager-585b788787-slc8g" podUID="190a4a9c-ee4a-4c6d-a45c-1febc5a67e9d" Feb 27 16:26:50 crc kubenswrapper[4830]: E0227 16:26:50.823650 4830 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/octavia-operator@sha256:590f89a86c5811a79df9be7fb864fbfb2d8579cf464bed1343f29e639d47b96d" Feb 27 16:26:50 crc kubenswrapper[4830]: E0227 16:26:50.824072 4830 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:590f89a86c5811a79df9be7fb864fbfb2d8579cf464bed1343f29e639d47b96d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pl95s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-cc79fdffd-2wlpz_openstack-operators(7237e49f-cb23-40bd-b5ab-f1460c620f13): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 27 16:26:50 crc kubenswrapper[4830]: E0227 16:26:50.826881 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/octavia-operator-controller-manager-cc79fdffd-2wlpz" podUID="7237e49f-cb23-40bd-b5ab-f1460c620f13" Feb 27 16:26:51 crc kubenswrapper[4830]: E0227 16:26:51.312419 4830 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ironic-operator@sha256:77d02d087aee0298fb35e3b79cb131e53d43eff020bf07023c98b2ddf65a195d" Feb 27 16:26:51 crc kubenswrapper[4830]: E0227 16:26:51.312634 4830 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ironic-operator@sha256:77d02d087aee0298fb35e3b79cb131e53d43eff020bf07023c98b2ddf65a195d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-cgrnr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-8784b4656-29x7g_openstack-operators(e68ac45c-7b30-4cd5-932a-9a0e8a3824f3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 27 16:26:51 crc kubenswrapper[4830]: E0227 16:26:51.314621 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ironic-operator-controller-manager-8784b4656-29x7g" podUID="e68ac45c-7b30-4cd5-932a-9a0e8a3824f3" Feb 27 16:26:51 crc kubenswrapper[4830]: E0227 16:26:51.448521 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:590f89a86c5811a79df9be7fb864fbfb2d8579cf464bed1343f29e639d47b96d\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-cc79fdffd-2wlpz" podUID="7237e49f-cb23-40bd-b5ab-f1460c620f13" Feb 27 16:26:51 crc kubenswrapper[4830]: E0227 16:26:51.449181 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ironic-operator@sha256:77d02d087aee0298fb35e3b79cb131e53d43eff020bf07023c98b2ddf65a195d\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-8784b4656-29x7g" podUID="e68ac45c-7b30-4cd5-932a-9a0e8a3824f3" Feb 27 16:26:53 crc kubenswrapper[4830]: E0227 16:26:53.107484 4830 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:b235599fe44c901b7ac0b51dfbcc9e0cea2bf5a9dc8295bafe16bba528d72997" Feb 27 16:26:53 crc kubenswrapper[4830]: E0227 16:26:53.108357 4830 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:b235599fe44c901b7ac0b51dfbcc9e0cea2bf5a9dc8295bafe16bba528d72997,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fdnr5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-78b64779b9-fhwn5_openstack-operators(53b4e8e1-00b7-4744-8fcf-a723ae104e53): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 27 16:26:53 crc kubenswrapper[4830]: E0227 16:26:53.109567 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/keystone-operator-controller-manager-78b64779b9-fhwn5" podUID="53b4e8e1-00b7-4744-8fcf-a723ae104e53" Feb 27 16:26:53 crc kubenswrapper[4830]: E0227 16:26:53.460243 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:b235599fe44c901b7ac0b51dfbcc9e0cea2bf5a9dc8295bafe16bba528d72997\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-78b64779b9-fhwn5" podUID="53b4e8e1-00b7-4744-8fcf-a723ae104e53" Feb 27 16:27:00 crc kubenswrapper[4830]: I0227 16:27:00.765555 4830 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 27 16:27:02 crc kubenswrapper[4830]: I0227 16:27:02.530818 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4pzb7" event={"ID":"af786cf1-6705-4c96-9c45-882daad96637","Type":"ContainerStarted","Data":"642abd4e778e1e933ceb2b455e375d4cba68d7eae3dfe82c7dbc410493879e04"} Feb 27 16:27:02 crc kubenswrapper[4830]: I0227 16:27:02.534141 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-7f748f8b74-f9pxf" event={"ID":"ddc86b78-f250-426e-80a2-1e0da35ea2a5","Type":"ContainerStarted","Data":"ffc523faa75b530c01634e7cfe5618e60269de6438568c34dcd70536a68c7314"} Feb 27 16:27:02 crc kubenswrapper[4830]: I0227 16:27:02.534601 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-7f748f8b74-f9pxf" Feb 27 16:27:02 crc kubenswrapper[4830]: I0227 16:27:02.535542 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-bff955cc4-fhgdd" 
event={"ID":"8cf505f8-023a-4cfe-be27-2b920c8875cc","Type":"ContainerStarted","Data":"80ad945aa4f8ec827e7828fb51f00b2aaa3925445d299d6df2c2c115e78f734f"} Feb 27 16:27:02 crc kubenswrapper[4830]: I0227 16:27:02.535857 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-bff955cc4-fhgdd" Feb 27 16:27:02 crc kubenswrapper[4830]: I0227 16:27:02.536707 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-56dc67d744-44hlt" event={"ID":"a358af53-9ef3-4686-8e96-528d08c2e7a2","Type":"ContainerStarted","Data":"271e870f4ae5fb66a9b359a730aba33edb1e64b3244d68c0228938bde98e865d"} Feb 27 16:27:02 crc kubenswrapper[4830]: I0227 16:27:02.536901 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-56dc67d744-44hlt" Feb 27 16:27:02 crc kubenswrapper[4830]: I0227 16:27:02.538230 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-55f4bf89cb-lqgtj" event={"ID":"c0bb3f6f-67ec-4669-be22-2122ae624cdd","Type":"ContainerStarted","Data":"49d4dab1db82c000ffc30f14651cdb613a34ce0a608f86d3dd1d8be6ec3bca2f"} Feb 27 16:27:02 crc kubenswrapper[4830]: I0227 16:27:02.538552 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-55f4bf89cb-lqgtj" Feb 27 16:27:02 crc kubenswrapper[4830]: I0227 16:27:02.539708 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-65c9f4f6b-w6kw7" event={"ID":"37c91ba3-1b2b-4717-b591-d4a4c2ec9d62","Type":"ContainerStarted","Data":"8ca82108b6f69f1c7f86a42c81f14e457c39cb96b8341cf9560f4cf28d8f6c96"} Feb 27 16:27:02 crc kubenswrapper[4830]: I0227 16:27:02.540069 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/watcher-operator-controller-manager-65c9f4f6b-w6kw7" Feb 27 16:27:02 crc kubenswrapper[4830]: I0227 16:27:02.541729 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-768c8b45bb-7pp52" event={"ID":"9526e5f2-4fd2-42bb-b96a-f9cd615313b9","Type":"ContainerStarted","Data":"946bfcfa35c95529f7c9d50af67d551e3d58dda67f89d3403fa14a6f172f21a9"} Feb 27 16:27:02 crc kubenswrapper[4830]: I0227 16:27:02.541797 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-768c8b45bb-7pp52" Feb 27 16:27:02 crc kubenswrapper[4830]: I0227 16:27:02.543302 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-55cc45767f-m892c" event={"ID":"33e3f2f7-6a6a-4e59-84d6-a7bb2a7b14e2","Type":"ContainerStarted","Data":"a6199be1211b5886d787e0a0c12f1fd825b99c9ec46a7bcaff9384771fa309ac"} Feb 27 16:27:02 crc kubenswrapper[4830]: I0227 16:27:02.543657 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-55cc45767f-m892c" Feb 27 16:27:02 crc kubenswrapper[4830]: I0227 16:27:02.544773 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-7db95d7ffb-59k4p" event={"ID":"23c25dea-fae4-4381-9b97-98fd17aee9d8","Type":"ContainerStarted","Data":"e2ccf7b069da7231cc6accee6bb7c801d7cb562d8fb78cfeecd710551022d35d"} Feb 27 16:27:02 crc kubenswrapper[4830]: I0227 16:27:02.545242 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-7db95d7ffb-59k4p" Feb 27 16:27:02 crc kubenswrapper[4830]: I0227 16:27:02.546587 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-6fb74c6d59-zw5q9" 
event={"ID":"04f72aa7-3bab-4ac9-9fb6-106c7e40b9fb","Type":"ContainerStarted","Data":"2ca7f13460ee6cfe4403a41497d6dc627a691cfeb5b6d94ce9e859311b2c0571"} Feb 27 16:27:02 crc kubenswrapper[4830]: I0227 16:27:02.547044 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-6fb74c6d59-zw5q9" Feb 27 16:27:02 crc kubenswrapper[4830]: I0227 16:27:02.548329 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-6c67ff7674-ftbbj" event={"ID":"531e48d4-bbe4-4527-944e-4b27dc957ff4","Type":"ContainerStarted","Data":"09c3590bd2aaa153176ab0318c9cb91ef32c54d54f9485d8c5530100678af481"} Feb 27 16:27:02 crc kubenswrapper[4830]: I0227 16:27:02.548687 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-6c67ff7674-ftbbj" Feb 27 16:27:02 crc kubenswrapper[4830]: I0227 16:27:02.549775 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-cc79fdffd-2wlpz" event={"ID":"7237e49f-cb23-40bd-b5ab-f1460c620f13","Type":"ContainerStarted","Data":"1848cc39736e52333cf84a7f787b58d50eb8e065db826ac9d2f56ef820ef4b45"} Feb 27 16:27:02 crc kubenswrapper[4830]: I0227 16:27:02.550221 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-cc79fdffd-2wlpz" Feb 27 16:27:02 crc kubenswrapper[4830]: I0227 16:27:02.554208 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-768f998cf4-qvwzn" event={"ID":"bbd18a52-1057-4183-bb46-f1c270691eac","Type":"ContainerStarted","Data":"77851c09d193c507535ac797a0e93f02a6214b816d252e3a95daee277f0145e3"} Feb 27 16:27:02 crc kubenswrapper[4830]: I0227 16:27:02.554756 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/neutron-operator-controller-manager-768f998cf4-qvwzn" Feb 27 16:27:02 crc kubenswrapper[4830]: I0227 16:27:02.556170 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4pzb7" podStartSLOduration=1.960492362 podStartE2EDuration="28.556158509s" podCreationTimestamp="2026-02-27 16:26:34 +0000 UTC" firstStartedPulling="2026-02-27 16:26:35.205199856 +0000 UTC m=+1191.294472319" lastFinishedPulling="2026-02-27 16:27:01.800866003 +0000 UTC m=+1217.890138466" observedRunningTime="2026-02-27 16:27:02.551583674 +0000 UTC m=+1218.640856137" watchObservedRunningTime="2026-02-27 16:27:02.556158509 +0000 UTC m=+1218.645430972" Feb 27 16:27:02 crc kubenswrapper[4830]: I0227 16:27:02.560314 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-8467ccb4c8-mh9d6" event={"ID":"33a4c588-56bf-40d2-892c-9fbe458de600","Type":"ContainerStarted","Data":"c79fe12371071589598f7201e06836b7498657747a61924065322b315d06e3cf"} Feb 27 16:27:02 crc kubenswrapper[4830]: I0227 16:27:02.560717 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-8467ccb4c8-mh9d6" Feb 27 16:27:02 crc kubenswrapper[4830]: I0227 16:27:02.561907 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-76fd76856-vtdk8" event={"ID":"b9dbfa18-3a80-408c-9a7d-34a96b2c411e","Type":"ContainerStarted","Data":"e7aa6851e5a49d4d0a86c75721682e3a46f65a7b8570e7c782e04345cb16b5a1"} Feb 27 16:27:02 crc kubenswrapper[4830]: I0227 16:27:02.562063 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-76fd76856-vtdk8" Feb 27 16:27:02 crc kubenswrapper[4830]: I0227 16:27:02.572687 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/ovn-operator-controller-manager-684c7d77b-2n88g" event={"ID":"f179e5c8-193f-47fc-841e-2dc3feff31cd","Type":"ContainerStarted","Data":"f929243735d512ebc1aaa4e6b669f874b890b3259432edf38c24717f1853d9ec"} Feb 27 16:27:02 crc kubenswrapper[4830]: I0227 16:27:02.573133 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-684c7d77b-2n88g" Feb 27 16:27:02 crc kubenswrapper[4830]: I0227 16:27:02.583656 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-745fc45789-w8lqb" event={"ID":"e42044d1-1153-4216-8d8f-b8333d2bcb00","Type":"ContainerStarted","Data":"013d869ffab1a31c06f11a0370792620e1a34ca166ca46a1ca01376077986d22"} Feb 27 16:27:02 crc kubenswrapper[4830]: I0227 16:27:02.584301 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-745fc45789-w8lqb" Feb 27 16:27:02 crc kubenswrapper[4830]: I0227 16:27:02.587529 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-6fb74c6d59-zw5q9" podStartSLOduration=4.7674313680000004 podStartE2EDuration="29.587513015s" podCreationTimestamp="2026-02-27 16:26:33 +0000 UTC" firstStartedPulling="2026-02-27 16:26:34.357739226 +0000 UTC m=+1190.447011689" lastFinishedPulling="2026-02-27 16:26:59.177820873 +0000 UTC m=+1215.267093336" observedRunningTime="2026-02-27 16:27:02.586053889 +0000 UTC m=+1218.675326342" watchObservedRunningTime="2026-02-27 16:27:02.587513015 +0000 UTC m=+1218.676785478" Feb 27 16:27:02 crc kubenswrapper[4830]: I0227 16:27:02.623080 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-65c9f4f6b-w6kw7" podStartSLOduration=3.157493776 podStartE2EDuration="29.623065097s" podCreationTimestamp="2026-02-27 16:26:33 +0000 UTC" 
firstStartedPulling="2026-02-27 16:26:35.213996594 +0000 UTC m=+1191.303269057" lastFinishedPulling="2026-02-27 16:27:01.679567905 +0000 UTC m=+1217.768840378" observedRunningTime="2026-02-27 16:27:02.619230772 +0000 UTC m=+1218.708503235" watchObservedRunningTime="2026-02-27 16:27:02.623065097 +0000 UTC m=+1218.712337560" Feb 27 16:27:02 crc kubenswrapper[4830]: I0227 16:27:02.647723 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-7db95d7ffb-59k4p" podStartSLOduration=5.153025798 podStartE2EDuration="29.647708338s" podCreationTimestamp="2026-02-27 16:26:33 +0000 UTC" firstStartedPulling="2026-02-27 16:26:34.682939478 +0000 UTC m=+1190.772211941" lastFinishedPulling="2026-02-27 16:26:59.177622018 +0000 UTC m=+1215.266894481" observedRunningTime="2026-02-27 16:27:02.644709834 +0000 UTC m=+1218.733982297" watchObservedRunningTime="2026-02-27 16:27:02.647708338 +0000 UTC m=+1218.736980801" Feb 27 16:27:02 crc kubenswrapper[4830]: I0227 16:27:02.667502 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-cc79fdffd-2wlpz" podStartSLOduration=2.648015554 podStartE2EDuration="29.667485168s" podCreationTimestamp="2026-02-27 16:26:33 +0000 UTC" firstStartedPulling="2026-02-27 16:26:35.181517259 +0000 UTC m=+1191.270789722" lastFinishedPulling="2026-02-27 16:27:02.200986873 +0000 UTC m=+1218.290259336" observedRunningTime="2026-02-27 16:27:02.66113404 +0000 UTC m=+1218.750406503" watchObservedRunningTime="2026-02-27 16:27:02.667485168 +0000 UTC m=+1218.756757631" Feb 27 16:27:02 crc kubenswrapper[4830]: I0227 16:27:02.761726 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-55cc45767f-m892c" podStartSLOduration=6.731208274 podStartE2EDuration="29.761702043s" podCreationTimestamp="2026-02-27 16:26:33 +0000 UTC" 
firstStartedPulling="2026-02-27 16:26:34.716147042 +0000 UTC m=+1190.805419505" lastFinishedPulling="2026-02-27 16:26:57.746640781 +0000 UTC m=+1213.835913274" observedRunningTime="2026-02-27 16:27:02.739296818 +0000 UTC m=+1218.828569281" watchObservedRunningTime="2026-02-27 16:27:02.761702043 +0000 UTC m=+1218.850974506" Feb 27 16:27:02 crc kubenswrapper[4830]: I0227 16:27:02.785746 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-6c67ff7674-ftbbj" podStartSLOduration=3.256977742 podStartE2EDuration="29.785724929s" podCreationTimestamp="2026-02-27 16:26:33 +0000 UTC" firstStartedPulling="2026-02-27 16:26:35.226123795 +0000 UTC m=+1191.315396258" lastFinishedPulling="2026-02-27 16:27:01.754870982 +0000 UTC m=+1217.844143445" observedRunningTime="2026-02-27 16:27:02.771577778 +0000 UTC m=+1218.860850231" watchObservedRunningTime="2026-02-27 16:27:02.785724929 +0000 UTC m=+1218.874997392" Feb 27 16:27:02 crc kubenswrapper[4830]: I0227 16:27:02.806534 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-55f4bf89cb-lqgtj" podStartSLOduration=5.833624221 podStartE2EDuration="29.806515785s" podCreationTimestamp="2026-02-27 16:26:33 +0000 UTC" firstStartedPulling="2026-02-27 16:26:35.204757135 +0000 UTC m=+1191.294029598" lastFinishedPulling="2026-02-27 16:26:59.177648699 +0000 UTC m=+1215.266921162" observedRunningTime="2026-02-27 16:27:02.804255359 +0000 UTC m=+1218.893527822" watchObservedRunningTime="2026-02-27 16:27:02.806515785 +0000 UTC m=+1218.895788248" Feb 27 16:27:02 crc kubenswrapper[4830]: I0227 16:27:02.851822 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-768c8b45bb-7pp52" podStartSLOduration=6.506198126 podStartE2EDuration="29.851802248s" podCreationTimestamp="2026-02-27 16:26:33 +0000 UTC" 
firstStartedPulling="2026-02-27 16:26:34.401110761 +0000 UTC m=+1190.490383224" lastFinishedPulling="2026-02-27 16:26:57.746714853 +0000 UTC m=+1213.835987346" observedRunningTime="2026-02-27 16:27:02.845236435 +0000 UTC m=+1218.934508898" watchObservedRunningTime="2026-02-27 16:27:02.851802248 +0000 UTC m=+1218.941074701" Feb 27 16:27:02 crc kubenswrapper[4830]: I0227 16:27:02.873749 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-56dc67d744-44hlt" podStartSLOduration=3.420669519 podStartE2EDuration="29.873732741s" podCreationTimestamp="2026-02-27 16:26:33 +0000 UTC" firstStartedPulling="2026-02-27 16:26:35.213568823 +0000 UTC m=+1191.302841286" lastFinishedPulling="2026-02-27 16:27:01.666632055 +0000 UTC m=+1217.755904508" observedRunningTime="2026-02-27 16:27:02.871216339 +0000 UTC m=+1218.960488802" watchObservedRunningTime="2026-02-27 16:27:02.873732741 +0000 UTC m=+1218.963005194" Feb 27 16:27:02 crc kubenswrapper[4830]: I0227 16:27:02.889564 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-bff955cc4-fhgdd" podStartSLOduration=3.340650268 podStartE2EDuration="29.889545494s" podCreationTimestamp="2026-02-27 16:26:33 +0000 UTC" firstStartedPulling="2026-02-27 16:26:35.205975156 +0000 UTC m=+1191.295247619" lastFinishedPulling="2026-02-27 16:27:01.754870362 +0000 UTC m=+1217.844142845" observedRunningTime="2026-02-27 16:27:02.886264362 +0000 UTC m=+1218.975536825" watchObservedRunningTime="2026-02-27 16:27:02.889545494 +0000 UTC m=+1218.978817957" Feb 27 16:27:02 crc kubenswrapper[4830]: I0227 16:27:02.914772 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-7f748f8b74-f9pxf" podStartSLOduration=2.452526208 podStartE2EDuration="29.914756449s" podCreationTimestamp="2026-02-27 16:26:33 +0000 UTC" 
firstStartedPulling="2026-02-27 16:26:34.737936221 +0000 UTC m=+1190.827208684" lastFinishedPulling="2026-02-27 16:27:02.200166462 +0000 UTC m=+1218.289438925" observedRunningTime="2026-02-27 16:27:02.910087382 +0000 UTC m=+1218.999359845" watchObservedRunningTime="2026-02-27 16:27:02.914756449 +0000 UTC m=+1219.004028912" Feb 27 16:27:02 crc kubenswrapper[4830]: I0227 16:27:02.933605 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-76fd76856-vtdk8" podStartSLOduration=5.499121469 podStartE2EDuration="29.933586615s" podCreationTimestamp="2026-02-27 16:26:33 +0000 UTC" firstStartedPulling="2026-02-27 16:26:35.203080104 +0000 UTC m=+1191.292352567" lastFinishedPulling="2026-02-27 16:26:59.63754521 +0000 UTC m=+1215.726817713" observedRunningTime="2026-02-27 16:27:02.926692144 +0000 UTC m=+1219.015964607" watchObservedRunningTime="2026-02-27 16:27:02.933586615 +0000 UTC m=+1219.022859078" Feb 27 16:27:02 crc kubenswrapper[4830]: I0227 16:27:02.953898 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-745fc45789-w8lqb" podStartSLOduration=3.459609445 podStartE2EDuration="29.953881929s" podCreationTimestamp="2026-02-27 16:26:33 +0000 UTC" firstStartedPulling="2026-02-27 16:26:35.216637929 +0000 UTC m=+1191.305910392" lastFinishedPulling="2026-02-27 16:27:01.710910423 +0000 UTC m=+1217.800182876" observedRunningTime="2026-02-27 16:27:02.949667984 +0000 UTC m=+1219.038940447" watchObservedRunningTime="2026-02-27 16:27:02.953881929 +0000 UTC m=+1219.043154382" Feb 27 16:27:02 crc kubenswrapper[4830]: I0227 16:27:02.966154 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-684c7d77b-2n88g" podStartSLOduration=5.956400675 podStartE2EDuration="29.966139562s" podCreationTimestamp="2026-02-27 16:26:33 +0000 UTC" 
firstStartedPulling="2026-02-27 16:26:35.167994514 +0000 UTC m=+1191.257266977" lastFinishedPulling="2026-02-27 16:26:59.177733401 +0000 UTC m=+1215.267005864" observedRunningTime="2026-02-27 16:27:02.964395869 +0000 UTC m=+1219.053668332" watchObservedRunningTime="2026-02-27 16:27:02.966139562 +0000 UTC m=+1219.055412025" Feb 27 16:27:03 crc kubenswrapper[4830]: I0227 16:27:03.035472 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-768f998cf4-qvwzn" podStartSLOduration=6.0287901 podStartE2EDuration="30.035454291s" podCreationTimestamp="2026-02-27 16:26:33 +0000 UTC" firstStartedPulling="2026-02-27 16:26:35.171015509 +0000 UTC m=+1191.260287972" lastFinishedPulling="2026-02-27 16:26:59.17767967 +0000 UTC m=+1215.266952163" observedRunningTime="2026-02-27 16:27:03.000575786 +0000 UTC m=+1219.089848259" watchObservedRunningTime="2026-02-27 16:27:03.035454291 +0000 UTC m=+1219.124726754" Feb 27 16:27:03 crc kubenswrapper[4830]: I0227 16:27:03.055555 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-8467ccb4c8-mh9d6" podStartSLOduration=3.478281838 podStartE2EDuration="30.055539688s" podCreationTimestamp="2026-02-27 16:26:33 +0000 UTC" firstStartedPulling="2026-02-27 16:26:35.193289331 +0000 UTC m=+1191.282561794" lastFinishedPulling="2026-02-27 16:27:01.770547181 +0000 UTC m=+1217.859819644" observedRunningTime="2026-02-27 16:27:03.054799801 +0000 UTC m=+1219.144072264" watchObservedRunningTime="2026-02-27 16:27:03.055539688 +0000 UTC m=+1219.144812151" Feb 27 16:27:03 crc kubenswrapper[4830]: I0227 16:27:03.160322 4830 patch_prober.go:28] interesting pod/machine-config-daemon-2tv5v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 
27 16:27:03 crc kubenswrapper[4830]: I0227 16:27:03.160391 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 16:27:03 crc kubenswrapper[4830]: I0227 16:27:03.160447 4830 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" Feb 27 16:27:03 crc kubenswrapper[4830]: I0227 16:27:03.161043 4830 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"471097b7c348ccaf71a4c92a38d56632d777ed06a5ddca169a907c05253b1349"} pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 27 16:27:03 crc kubenswrapper[4830]: I0227 16:27:03.161094 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" containerName="machine-config-daemon" containerID="cri-o://471097b7c348ccaf71a4c92a38d56632d777ed06a5ddca169a907c05253b1349" gracePeriod=600 Feb 27 16:27:03 crc kubenswrapper[4830]: I0227 16:27:03.592838 4830 generic.go:334] "Generic (PLEG): container finished" podID="00d6b7ce-4757-4275-8345-60c1b546ce25" containerID="471097b7c348ccaf71a4c92a38d56632d777ed06a5ddca169a907c05253b1349" exitCode=0 Feb 27 16:27:03 crc kubenswrapper[4830]: I0227 16:27:03.592926 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" 
event={"ID":"00d6b7ce-4757-4275-8345-60c1b546ce25","Type":"ContainerDied","Data":"471097b7c348ccaf71a4c92a38d56632d777ed06a5ddca169a907c05253b1349"} Feb 27 16:27:03 crc kubenswrapper[4830]: I0227 16:27:03.593259 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" event={"ID":"00d6b7ce-4757-4275-8345-60c1b546ce25","Type":"ContainerStarted","Data":"4451f44bd5a230af740184dd479b8e8cef56c8f4c478f47a91288db9cb943456"} Feb 27 16:27:03 crc kubenswrapper[4830]: I0227 16:27:03.593280 4830 scope.go:117] "RemoveContainer" containerID="e43810c75db22ebd0d19e92c6c2850742cda834a0ba155fedd3f4498a6dd6d20" Feb 27 16:27:03 crc kubenswrapper[4830]: I0227 16:27:03.595442 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-585b788787-slc8g" event={"ID":"190a4a9c-ee4a-4c6d-a45c-1febc5a67e9d","Type":"ContainerStarted","Data":"aef1025db8224bc0eff51d49ce2d4537e0df6766a0e85304712ef19ad7ea57fe"} Feb 27 16:27:03 crc kubenswrapper[4830]: I0227 16:27:03.630062 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-585b788787-slc8g" podStartSLOduration=1.943524799 podStartE2EDuration="30.629934579s" podCreationTimestamp="2026-02-27 16:26:33 +0000 UTC" firstStartedPulling="2026-02-27 16:26:34.698157956 +0000 UTC m=+1190.787430419" lastFinishedPulling="2026-02-27 16:27:03.384567736 +0000 UTC m=+1219.473840199" observedRunningTime="2026-02-27 16:27:03.627853857 +0000 UTC m=+1219.717126320" watchObservedRunningTime="2026-02-27 16:27:03.629934579 +0000 UTC m=+1219.719207042" Feb 27 16:27:03 crc kubenswrapper[4830]: I0227 16:27:03.811409 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-585b788787-slc8g" Feb 27 16:27:05 crc kubenswrapper[4830]: I0227 16:27:05.435587 4830 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5b73c28e-36b3-4845-9336-299fc3dd2551-cert\") pod \"infra-operator-controller-manager-c77466965-24fz2\" (UID: \"5b73c28e-36b3-4845-9336-299fc3dd2551\") " pod="openstack-operators/infra-operator-controller-manager-c77466965-24fz2" Feb 27 16:27:05 crc kubenswrapper[4830]: I0227 16:27:05.444854 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5b73c28e-36b3-4845-9336-299fc3dd2551-cert\") pod \"infra-operator-controller-manager-c77466965-24fz2\" (UID: \"5b73c28e-36b3-4845-9336-299fc3dd2551\") " pod="openstack-operators/infra-operator-controller-manager-c77466965-24fz2" Feb 27 16:27:05 crc kubenswrapper[4830]: I0227 16:27:05.648490 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-4jt5n" Feb 27 16:27:05 crc kubenswrapper[4830]: I0227 16:27:05.655937 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-c77466965-24fz2" Feb 27 16:27:05 crc kubenswrapper[4830]: I0227 16:27:05.739746 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b719a387-109a-49fe-b4df-98038c202a0f-cert\") pod \"openstack-baremetal-operator-controller-manager-c5677dc5d-68j87\" (UID: \"b719a387-109a-49fe-b4df-98038c202a0f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-68j87" Feb 27 16:27:05 crc kubenswrapper[4830]: I0227 16:27:05.745349 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b719a387-109a-49fe-b4df-98038c202a0f-cert\") pod \"openstack-baremetal-operator-controller-manager-c5677dc5d-68j87\" (UID: \"b719a387-109a-49fe-b4df-98038c202a0f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-68j87" Feb 27 16:27:05 crc kubenswrapper[4830]: I0227 16:27:05.981745 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-tlkch" Feb 27 16:27:05 crc kubenswrapper[4830]: I0227 16:27:05.984018 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-c77466965-24fz2"] Feb 27 16:27:05 crc kubenswrapper[4830]: W0227 16:27:05.987752 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5b73c28e_36b3_4845_9336_299fc3dd2551.slice/crio-0d73ae6116e8c8498b67e1181d0bf997a90a997eae969ecd6342d24e06de3e06 WatchSource:0}: Error finding container 0d73ae6116e8c8498b67e1181d0bf997a90a997eae969ecd6342d24e06de3e06: Status 404 returned error can't find the container with id 0d73ae6116e8c8498b67e1181d0bf997a90a997eae969ecd6342d24e06de3e06 Feb 27 16:27:05 crc kubenswrapper[4830]: I0227 16:27:05.989858 
4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-68j87" Feb 27 16:27:06 crc kubenswrapper[4830]: I0227 16:27:06.145645 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7dcda287-c580-4c6d-881d-d2500541cfba-metrics-certs\") pod \"openstack-operator-controller-manager-7987977d84-9b7m9\" (UID: \"7dcda287-c580-4c6d-881d-d2500541cfba\") " pod="openstack-operators/openstack-operator-controller-manager-7987977d84-9b7m9" Feb 27 16:27:06 crc kubenswrapper[4830]: I0227 16:27:06.145741 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7dcda287-c580-4c6d-881d-d2500541cfba-webhook-certs\") pod \"openstack-operator-controller-manager-7987977d84-9b7m9\" (UID: \"7dcda287-c580-4c6d-881d-d2500541cfba\") " pod="openstack-operators/openstack-operator-controller-manager-7987977d84-9b7m9" Feb 27 16:27:06 crc kubenswrapper[4830]: I0227 16:27:06.152543 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7dcda287-c580-4c6d-881d-d2500541cfba-metrics-certs\") pod \"openstack-operator-controller-manager-7987977d84-9b7m9\" (UID: \"7dcda287-c580-4c6d-881d-d2500541cfba\") " pod="openstack-operators/openstack-operator-controller-manager-7987977d84-9b7m9" Feb 27 16:27:06 crc kubenswrapper[4830]: I0227 16:27:06.153404 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7dcda287-c580-4c6d-881d-d2500541cfba-webhook-certs\") pod \"openstack-operator-controller-manager-7987977d84-9b7m9\" (UID: \"7dcda287-c580-4c6d-881d-d2500541cfba\") " pod="openstack-operators/openstack-operator-controller-manager-7987977d84-9b7m9" Feb 27 16:27:06 crc kubenswrapper[4830]: I0227 16:27:06.337498 
4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-87x5q" Feb 27 16:27:06 crc kubenswrapper[4830]: I0227 16:27:06.345694 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-7987977d84-9b7m9" Feb 27 16:27:06 crc kubenswrapper[4830]: I0227 16:27:06.456259 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-68j87"] Feb 27 16:27:06 crc kubenswrapper[4830]: W0227 16:27:06.493881 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb719a387_109a_49fe_b4df_98038c202a0f.slice/crio-7dd851c3d64574a129a97f3c670364c7d58d3524bb3f11dcc1c0708dbd279b76 WatchSource:0}: Error finding container 7dd851c3d64574a129a97f3c670364c7d58d3524bb3f11dcc1c0708dbd279b76: Status 404 returned error can't find the container with id 7dd851c3d64574a129a97f3c670364c7d58d3524bb3f11dcc1c0708dbd279b76 Feb 27 16:27:06 crc kubenswrapper[4830]: I0227 16:27:06.609091 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-7987977d84-9b7m9"] Feb 27 16:27:06 crc kubenswrapper[4830]: I0227 16:27:06.626896 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-68j87" event={"ID":"b719a387-109a-49fe-b4df-98038c202a0f","Type":"ContainerStarted","Data":"7dd851c3d64574a129a97f3c670364c7d58d3524bb3f11dcc1c0708dbd279b76"} Feb 27 16:27:06 crc kubenswrapper[4830]: W0227 16:27:06.628591 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7dcda287_c580_4c6d_881d_d2500541cfba.slice/crio-0cdce8b70d003b827ea90d647869ddad1bf875a59b9b8ab9d6d0e6eefc395a0d WatchSource:0}: 
Error finding container 0cdce8b70d003b827ea90d647869ddad1bf875a59b9b8ab9d6d0e6eefc395a0d: Status 404 returned error can't find the container with id 0cdce8b70d003b827ea90d647869ddad1bf875a59b9b8ab9d6d0e6eefc395a0d Feb 27 16:27:06 crc kubenswrapper[4830]: I0227 16:27:06.629049 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-78b64779b9-fhwn5" event={"ID":"53b4e8e1-00b7-4744-8fcf-a723ae104e53","Type":"ContainerStarted","Data":"e0a422d98d601e588894497fbddee341a52b30f22b178a7f356d411731a0bc5b"} Feb 27 16:27:06 crc kubenswrapper[4830]: I0227 16:27:06.629270 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-78b64779b9-fhwn5" Feb 27 16:27:06 crc kubenswrapper[4830]: I0227 16:27:06.630048 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-c77466965-24fz2" event={"ID":"5b73c28e-36b3-4845-9336-299fc3dd2551","Type":"ContainerStarted","Data":"0d73ae6116e8c8498b67e1181d0bf997a90a997eae969ecd6342d24e06de3e06"} Feb 27 16:27:06 crc kubenswrapper[4830]: I0227 16:27:06.631119 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-8784b4656-29x7g" event={"ID":"e68ac45c-7b30-4cd5-932a-9a0e8a3824f3","Type":"ContainerStarted","Data":"650cac413d62ab8ceba2ef9bed92ce04ee7999f813365e501d9d561da7b5368b"} Feb 27 16:27:06 crc kubenswrapper[4830]: I0227 16:27:06.631366 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-8784b4656-29x7g" Feb 27 16:27:06 crc kubenswrapper[4830]: I0227 16:27:06.653919 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-78b64779b9-fhwn5" podStartSLOduration=2.605756016 podStartE2EDuration="33.653896868s" podCreationTimestamp="2026-02-27 
16:26:33 +0000 UTC" firstStartedPulling="2026-02-27 16:26:35.167632744 +0000 UTC m=+1191.256905207" lastFinishedPulling="2026-02-27 16:27:06.215773586 +0000 UTC m=+1222.305046059" observedRunningTime="2026-02-27 16:27:06.644859335 +0000 UTC m=+1222.734131808" watchObservedRunningTime="2026-02-27 16:27:06.653896868 +0000 UTC m=+1222.743169331" Feb 27 16:27:06 crc kubenswrapper[4830]: I0227 16:27:06.668120 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-8784b4656-29x7g" podStartSLOduration=3.37265156 podStartE2EDuration="33.66809993s" podCreationTimestamp="2026-02-27 16:26:33 +0000 UTC" firstStartedPulling="2026-02-27 16:26:35.205011452 +0000 UTC m=+1191.294283905" lastFinishedPulling="2026-02-27 16:27:05.500459802 +0000 UTC m=+1221.589732275" observedRunningTime="2026-02-27 16:27:06.66121818 +0000 UTC m=+1222.750490663" watchObservedRunningTime="2026-02-27 16:27:06.66809993 +0000 UTC m=+1222.757372403" Feb 27 16:27:07 crc kubenswrapper[4830]: I0227 16:27:07.647173 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-7987977d84-9b7m9" event={"ID":"7dcda287-c580-4c6d-881d-d2500541cfba","Type":"ContainerStarted","Data":"5a5aad40d9e54427f308f38af937aeb5346653af2446ddc6b06c247c4bba6d1c"} Feb 27 16:27:07 crc kubenswrapper[4830]: I0227 16:27:07.647558 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-7987977d84-9b7m9" event={"ID":"7dcda287-c580-4c6d-881d-d2500541cfba","Type":"ContainerStarted","Data":"0cdce8b70d003b827ea90d647869ddad1bf875a59b9b8ab9d6d0e6eefc395a0d"} Feb 27 16:27:07 crc kubenswrapper[4830]: I0227 16:27:07.683203 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-7987977d84-9b7m9" podStartSLOduration=34.683186626 podStartE2EDuration="34.683186626s" 
podCreationTimestamp="2026-02-27 16:26:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:27:07.678166362 +0000 UTC m=+1223.767438825" watchObservedRunningTime="2026-02-27 16:27:07.683186626 +0000 UTC m=+1223.772459099" Feb 27 16:27:08 crc kubenswrapper[4830]: I0227 16:27:08.667466 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-c77466965-24fz2" event={"ID":"5b73c28e-36b3-4845-9336-299fc3dd2551","Type":"ContainerStarted","Data":"6b99873691798cb8efc6d1650fdd53c5ebba0029418be60b1570ecdc70b20e1b"} Feb 27 16:27:08 crc kubenswrapper[4830]: I0227 16:27:08.667858 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-7987977d84-9b7m9" Feb 27 16:27:08 crc kubenswrapper[4830]: I0227 16:27:08.686988 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-c77466965-24fz2" podStartSLOduration=33.683868501 podStartE2EDuration="35.686969912s" podCreationTimestamp="2026-02-27 16:26:33 +0000 UTC" firstStartedPulling="2026-02-27 16:27:05.989907937 +0000 UTC m=+1222.079180410" lastFinishedPulling="2026-02-27 16:27:07.993009358 +0000 UTC m=+1224.082281821" observedRunningTime="2026-02-27 16:27:08.68365893 +0000 UTC m=+1224.772931433" watchObservedRunningTime="2026-02-27 16:27:08.686969912 +0000 UTC m=+1224.776242375" Feb 27 16:27:09 crc kubenswrapper[4830]: I0227 16:27:09.681155 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-68j87" event={"ID":"b719a387-109a-49fe-b4df-98038c202a0f","Type":"ContainerStarted","Data":"a193b3e0b30d1e2f5ea7a00b09592b66115f0b98cfaf31ffe8ce6cacdff4a506"} Feb 27 16:27:09 crc kubenswrapper[4830]: I0227 16:27:09.681637 4830 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-c77466965-24fz2" Feb 27 16:27:09 crc kubenswrapper[4830]: I0227 16:27:09.681676 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-68j87" Feb 27 16:27:09 crc kubenswrapper[4830]: I0227 16:27:09.736705 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-68j87" podStartSLOduration=34.265599865 podStartE2EDuration="36.736675757s" podCreationTimestamp="2026-02-27 16:26:33 +0000 UTC" firstStartedPulling="2026-02-27 16:27:06.500427374 +0000 UTC m=+1222.589699867" lastFinishedPulling="2026-02-27 16:27:08.971503266 +0000 UTC m=+1225.060775759" observedRunningTime="2026-02-27 16:27:09.724093825 +0000 UTC m=+1225.813366298" watchObservedRunningTime="2026-02-27 16:27:09.736675757 +0000 UTC m=+1225.825948250" Feb 27 16:27:13 crc kubenswrapper[4830]: I0227 16:27:13.634505 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-768c8b45bb-7pp52" Feb 27 16:27:13 crc kubenswrapper[4830]: I0227 16:27:13.652097 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-6fb74c6d59-zw5q9" Feb 27 16:27:13 crc kubenswrapper[4830]: I0227 16:27:13.667346 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-55cc45767f-m892c" Feb 27 16:27:13 crc kubenswrapper[4830]: I0227 16:27:13.697080 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-7f748f8b74-f9pxf" Feb 27 16:27:13 crc kubenswrapper[4830]: I0227 16:27:13.813707 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack-operators/heat-operator-controller-manager-585b788787-slc8g" Feb 27 16:27:13 crc kubenswrapper[4830]: I0227 16:27:13.840720 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-7db95d7ffb-59k4p" Feb 27 16:27:13 crc kubenswrapper[4830]: I0227 16:27:13.880453 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-78b64779b9-fhwn5" Feb 27 16:27:13 crc kubenswrapper[4830]: I0227 16:27:13.881015 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-8784b4656-29x7g" Feb 27 16:27:13 crc kubenswrapper[4830]: I0227 16:27:13.939714 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-76fd76856-vtdk8" Feb 27 16:27:13 crc kubenswrapper[4830]: I0227 16:27:13.954805 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-745fc45789-w8lqb" Feb 27 16:27:14 crc kubenswrapper[4830]: I0227 16:27:14.042933 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-6c67ff7674-ftbbj" Feb 27 16:27:14 crc kubenswrapper[4830]: I0227 16:27:14.083289 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-768f998cf4-qvwzn" Feb 27 16:27:14 crc kubenswrapper[4830]: I0227 16:27:14.110185 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-cc79fdffd-2wlpz" Feb 27 16:27:14 crc kubenswrapper[4830]: I0227 16:27:14.166558 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/ovn-operator-controller-manager-684c7d77b-2n88g" Feb 27 16:27:14 crc kubenswrapper[4830]: I0227 16:27:14.253099 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-bff955cc4-fhgdd" Feb 27 16:27:14 crc kubenswrapper[4830]: I0227 16:27:14.281049 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-55f4bf89cb-lqgtj" Feb 27 16:27:14 crc kubenswrapper[4830]: I0227 16:27:14.412640 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-56dc67d744-44hlt" Feb 27 16:27:14 crc kubenswrapper[4830]: I0227 16:27:14.503467 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-8467ccb4c8-mh9d6" Feb 27 16:27:14 crc kubenswrapper[4830]: I0227 16:27:14.522479 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-65c9f4f6b-w6kw7" Feb 27 16:27:15 crc kubenswrapper[4830]: I0227 16:27:15.669924 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-c77466965-24fz2" Feb 27 16:27:15 crc kubenswrapper[4830]: I0227 16:27:15.998592 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-c5677dc5d-68j87" Feb 27 16:27:16 crc kubenswrapper[4830]: I0227 16:27:16.356436 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-7987977d84-9b7m9" Feb 27 16:27:34 crc kubenswrapper[4830]: I0227 16:27:34.391288 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-r77wz"] Feb 27 16:27:34 crc 
kubenswrapper[4830]: I0227 16:27:34.393175 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-r77wz" Feb 27 16:27:34 crc kubenswrapper[4830]: I0227 16:27:34.395340 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Feb 27 16:27:34 crc kubenswrapper[4830]: I0227 16:27:34.395938 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Feb 27 16:27:34 crc kubenswrapper[4830]: I0227 16:27:34.396148 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Feb 27 16:27:34 crc kubenswrapper[4830]: I0227 16:27:34.396425 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-bq2r9" Feb 27 16:27:34 crc kubenswrapper[4830]: I0227 16:27:34.396824 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-r77wz"] Feb 27 16:27:34 crc kubenswrapper[4830]: I0227 16:27:34.408287 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-7vqj6"] Feb 27 16:27:34 crc kubenswrapper[4830]: I0227 16:27:34.421446 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-7vqj6" Feb 27 16:27:34 crc kubenswrapper[4830]: I0227 16:27:34.425925 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Feb 27 16:27:34 crc kubenswrapper[4830]: I0227 16:27:34.440829 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-7vqj6"] Feb 27 16:27:34 crc kubenswrapper[4830]: I0227 16:27:34.498251 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9f2n\" (UniqueName: \"kubernetes.io/projected/80f367ea-86aa-4385-b62c-35fd25a2355f-kube-api-access-f9f2n\") pod \"dnsmasq-dns-675f4bcbfc-r77wz\" (UID: \"80f367ea-86aa-4385-b62c-35fd25a2355f\") " pod="openstack/dnsmasq-dns-675f4bcbfc-r77wz" Feb 27 16:27:34 crc kubenswrapper[4830]: I0227 16:27:34.498438 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/80f367ea-86aa-4385-b62c-35fd25a2355f-config\") pod \"dnsmasq-dns-675f4bcbfc-r77wz\" (UID: \"80f367ea-86aa-4385-b62c-35fd25a2355f\") " pod="openstack/dnsmasq-dns-675f4bcbfc-r77wz" Feb 27 16:27:34 crc kubenswrapper[4830]: I0227 16:27:34.599650 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f9f2n\" (UniqueName: \"kubernetes.io/projected/80f367ea-86aa-4385-b62c-35fd25a2355f-kube-api-access-f9f2n\") pod \"dnsmasq-dns-675f4bcbfc-r77wz\" (UID: \"80f367ea-86aa-4385-b62c-35fd25a2355f\") " pod="openstack/dnsmasq-dns-675f4bcbfc-r77wz" Feb 27 16:27:34 crc kubenswrapper[4830]: I0227 16:27:34.599741 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3a40d14a-19ad-406b-bc28-5be5b879ba20-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-7vqj6\" (UID: \"3a40d14a-19ad-406b-bc28-5be5b879ba20\") " pod="openstack/dnsmasq-dns-78dd6ddcc-7vqj6" Feb 
27 16:27:34 crc kubenswrapper[4830]: I0227 16:27:34.599797 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a40d14a-19ad-406b-bc28-5be5b879ba20-config\") pod \"dnsmasq-dns-78dd6ddcc-7vqj6\" (UID: \"3a40d14a-19ad-406b-bc28-5be5b879ba20\") " pod="openstack/dnsmasq-dns-78dd6ddcc-7vqj6" Feb 27 16:27:34 crc kubenswrapper[4830]: I0227 16:27:34.599856 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjmlz\" (UniqueName: \"kubernetes.io/projected/3a40d14a-19ad-406b-bc28-5be5b879ba20-kube-api-access-vjmlz\") pod \"dnsmasq-dns-78dd6ddcc-7vqj6\" (UID: \"3a40d14a-19ad-406b-bc28-5be5b879ba20\") " pod="openstack/dnsmasq-dns-78dd6ddcc-7vqj6" Feb 27 16:27:34 crc kubenswrapper[4830]: I0227 16:27:34.600243 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/80f367ea-86aa-4385-b62c-35fd25a2355f-config\") pod \"dnsmasq-dns-675f4bcbfc-r77wz\" (UID: \"80f367ea-86aa-4385-b62c-35fd25a2355f\") " pod="openstack/dnsmasq-dns-675f4bcbfc-r77wz" Feb 27 16:27:34 crc kubenswrapper[4830]: I0227 16:27:34.602713 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/80f367ea-86aa-4385-b62c-35fd25a2355f-config\") pod \"dnsmasq-dns-675f4bcbfc-r77wz\" (UID: \"80f367ea-86aa-4385-b62c-35fd25a2355f\") " pod="openstack/dnsmasq-dns-675f4bcbfc-r77wz" Feb 27 16:27:34 crc kubenswrapper[4830]: I0227 16:27:34.629003 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9f2n\" (UniqueName: \"kubernetes.io/projected/80f367ea-86aa-4385-b62c-35fd25a2355f-kube-api-access-f9f2n\") pod \"dnsmasq-dns-675f4bcbfc-r77wz\" (UID: \"80f367ea-86aa-4385-b62c-35fd25a2355f\") " pod="openstack/dnsmasq-dns-675f4bcbfc-r77wz" Feb 27 16:27:34 crc kubenswrapper[4830]: I0227 
16:27:34.702647 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3a40d14a-19ad-406b-bc28-5be5b879ba20-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-7vqj6\" (UID: \"3a40d14a-19ad-406b-bc28-5be5b879ba20\") " pod="openstack/dnsmasq-dns-78dd6ddcc-7vqj6" Feb 27 16:27:34 crc kubenswrapper[4830]: I0227 16:27:34.702769 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a40d14a-19ad-406b-bc28-5be5b879ba20-config\") pod \"dnsmasq-dns-78dd6ddcc-7vqj6\" (UID: \"3a40d14a-19ad-406b-bc28-5be5b879ba20\") " pod="openstack/dnsmasq-dns-78dd6ddcc-7vqj6" Feb 27 16:27:34 crc kubenswrapper[4830]: I0227 16:27:34.702856 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vjmlz\" (UniqueName: \"kubernetes.io/projected/3a40d14a-19ad-406b-bc28-5be5b879ba20-kube-api-access-vjmlz\") pod \"dnsmasq-dns-78dd6ddcc-7vqj6\" (UID: \"3a40d14a-19ad-406b-bc28-5be5b879ba20\") " pod="openstack/dnsmasq-dns-78dd6ddcc-7vqj6" Feb 27 16:27:34 crc kubenswrapper[4830]: I0227 16:27:34.704660 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3a40d14a-19ad-406b-bc28-5be5b879ba20-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-7vqj6\" (UID: \"3a40d14a-19ad-406b-bc28-5be5b879ba20\") " pod="openstack/dnsmasq-dns-78dd6ddcc-7vqj6" Feb 27 16:27:34 crc kubenswrapper[4830]: I0227 16:27:34.704712 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a40d14a-19ad-406b-bc28-5be5b879ba20-config\") pod \"dnsmasq-dns-78dd6ddcc-7vqj6\" (UID: \"3a40d14a-19ad-406b-bc28-5be5b879ba20\") " pod="openstack/dnsmasq-dns-78dd6ddcc-7vqj6" Feb 27 16:27:34 crc kubenswrapper[4830]: I0227 16:27:34.714242 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-r77wz" Feb 27 16:27:34 crc kubenswrapper[4830]: I0227 16:27:34.737315 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vjmlz\" (UniqueName: \"kubernetes.io/projected/3a40d14a-19ad-406b-bc28-5be5b879ba20-kube-api-access-vjmlz\") pod \"dnsmasq-dns-78dd6ddcc-7vqj6\" (UID: \"3a40d14a-19ad-406b-bc28-5be5b879ba20\") " pod="openstack/dnsmasq-dns-78dd6ddcc-7vqj6" Feb 27 16:27:34 crc kubenswrapper[4830]: I0227 16:27:34.740576 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-7vqj6" Feb 27 16:27:35 crc kubenswrapper[4830]: I0227 16:27:35.060432 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-r77wz"] Feb 27 16:27:35 crc kubenswrapper[4830]: I0227 16:27:35.117894 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-7vqj6"] Feb 27 16:27:35 crc kubenswrapper[4830]: I0227 16:27:35.928486 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-7vqj6" event={"ID":"3a40d14a-19ad-406b-bc28-5be5b879ba20","Type":"ContainerStarted","Data":"9cca093b215ef4d9e3cd6e67196954aad77f1b39abe8c8c9ea9da812cdadb87f"} Feb 27 16:27:35 crc kubenswrapper[4830]: I0227 16:27:35.930287 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-r77wz" event={"ID":"80f367ea-86aa-4385-b62c-35fd25a2355f","Type":"ContainerStarted","Data":"11c567ab777215389d798b4b5a99ea77276cd0e685ab7f5f79cf981d52c2e44d"} Feb 27 16:27:36 crc kubenswrapper[4830]: I0227 16:27:36.398844 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-r77wz"] Feb 27 16:27:36 crc kubenswrapper[4830]: I0227 16:27:36.422097 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-f5dpx"] Feb 27 16:27:36 crc kubenswrapper[4830]: I0227 16:27:36.423932 4830 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-f5dpx" Feb 27 16:27:36 crc kubenswrapper[4830]: I0227 16:27:36.427360 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-f5dpx"] Feb 27 16:27:36 crc kubenswrapper[4830]: I0227 16:27:36.537144 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0800fc83-7606-4be1-8a04-aab5b8226a0c-config\") pod \"dnsmasq-dns-5ccc8479f9-f5dpx\" (UID: \"0800fc83-7606-4be1-8a04-aab5b8226a0c\") " pod="openstack/dnsmasq-dns-5ccc8479f9-f5dpx" Feb 27 16:27:36 crc kubenswrapper[4830]: I0227 16:27:36.537225 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0800fc83-7606-4be1-8a04-aab5b8226a0c-dns-svc\") pod \"dnsmasq-dns-5ccc8479f9-f5dpx\" (UID: \"0800fc83-7606-4be1-8a04-aab5b8226a0c\") " pod="openstack/dnsmasq-dns-5ccc8479f9-f5dpx" Feb 27 16:27:36 crc kubenswrapper[4830]: I0227 16:27:36.537281 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gz975\" (UniqueName: \"kubernetes.io/projected/0800fc83-7606-4be1-8a04-aab5b8226a0c-kube-api-access-gz975\") pod \"dnsmasq-dns-5ccc8479f9-f5dpx\" (UID: \"0800fc83-7606-4be1-8a04-aab5b8226a0c\") " pod="openstack/dnsmasq-dns-5ccc8479f9-f5dpx" Feb 27 16:27:36 crc kubenswrapper[4830]: I0227 16:27:36.638484 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0800fc83-7606-4be1-8a04-aab5b8226a0c-dns-svc\") pod \"dnsmasq-dns-5ccc8479f9-f5dpx\" (UID: \"0800fc83-7606-4be1-8a04-aab5b8226a0c\") " pod="openstack/dnsmasq-dns-5ccc8479f9-f5dpx" Feb 27 16:27:36 crc kubenswrapper[4830]: I0227 16:27:36.638575 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-gz975\" (UniqueName: \"kubernetes.io/projected/0800fc83-7606-4be1-8a04-aab5b8226a0c-kube-api-access-gz975\") pod \"dnsmasq-dns-5ccc8479f9-f5dpx\" (UID: \"0800fc83-7606-4be1-8a04-aab5b8226a0c\") " pod="openstack/dnsmasq-dns-5ccc8479f9-f5dpx" Feb 27 16:27:36 crc kubenswrapper[4830]: I0227 16:27:36.638616 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0800fc83-7606-4be1-8a04-aab5b8226a0c-config\") pod \"dnsmasq-dns-5ccc8479f9-f5dpx\" (UID: \"0800fc83-7606-4be1-8a04-aab5b8226a0c\") " pod="openstack/dnsmasq-dns-5ccc8479f9-f5dpx" Feb 27 16:27:36 crc kubenswrapper[4830]: I0227 16:27:36.639570 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0800fc83-7606-4be1-8a04-aab5b8226a0c-config\") pod \"dnsmasq-dns-5ccc8479f9-f5dpx\" (UID: \"0800fc83-7606-4be1-8a04-aab5b8226a0c\") " pod="openstack/dnsmasq-dns-5ccc8479f9-f5dpx" Feb 27 16:27:36 crc kubenswrapper[4830]: I0227 16:27:36.641537 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0800fc83-7606-4be1-8a04-aab5b8226a0c-dns-svc\") pod \"dnsmasq-dns-5ccc8479f9-f5dpx\" (UID: \"0800fc83-7606-4be1-8a04-aab5b8226a0c\") " pod="openstack/dnsmasq-dns-5ccc8479f9-f5dpx" Feb 27 16:27:36 crc kubenswrapper[4830]: I0227 16:27:36.645928 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-7vqj6"] Feb 27 16:27:36 crc kubenswrapper[4830]: I0227 16:27:36.662689 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-b4knd"] Feb 27 16:27:36 crc kubenswrapper[4830]: I0227 16:27:36.663759 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gz975\" (UniqueName: \"kubernetes.io/projected/0800fc83-7606-4be1-8a04-aab5b8226a0c-kube-api-access-gz975\") pod \"dnsmasq-dns-5ccc8479f9-f5dpx\" 
(UID: \"0800fc83-7606-4be1-8a04-aab5b8226a0c\") " pod="openstack/dnsmasq-dns-5ccc8479f9-f5dpx" Feb 27 16:27:36 crc kubenswrapper[4830]: I0227 16:27:36.663778 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-b4knd" Feb 27 16:27:36 crc kubenswrapper[4830]: I0227 16:27:36.680368 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-b4knd"] Feb 27 16:27:36 crc kubenswrapper[4830]: I0227 16:27:36.750572 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-f5dpx" Feb 27 16:27:36 crc kubenswrapper[4830]: I0227 16:27:36.841238 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebbe238c-4f40-46a8-b549-b9b0ae97fb82-config\") pod \"dnsmasq-dns-57d769cc4f-b4knd\" (UID: \"ebbe238c-4f40-46a8-b549-b9b0ae97fb82\") " pod="openstack/dnsmasq-dns-57d769cc4f-b4knd" Feb 27 16:27:36 crc kubenswrapper[4830]: I0227 16:27:36.841655 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-682g6\" (UniqueName: \"kubernetes.io/projected/ebbe238c-4f40-46a8-b549-b9b0ae97fb82-kube-api-access-682g6\") pod \"dnsmasq-dns-57d769cc4f-b4knd\" (UID: \"ebbe238c-4f40-46a8-b549-b9b0ae97fb82\") " pod="openstack/dnsmasq-dns-57d769cc4f-b4knd" Feb 27 16:27:36 crc kubenswrapper[4830]: I0227 16:27:36.841691 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ebbe238c-4f40-46a8-b549-b9b0ae97fb82-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-b4knd\" (UID: \"ebbe238c-4f40-46a8-b549-b9b0ae97fb82\") " pod="openstack/dnsmasq-dns-57d769cc4f-b4knd" Feb 27 16:27:36 crc kubenswrapper[4830]: I0227 16:27:36.943540 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/ebbe238c-4f40-46a8-b549-b9b0ae97fb82-config\") pod \"dnsmasq-dns-57d769cc4f-b4knd\" (UID: \"ebbe238c-4f40-46a8-b549-b9b0ae97fb82\") " pod="openstack/dnsmasq-dns-57d769cc4f-b4knd" Feb 27 16:27:36 crc kubenswrapper[4830]: I0227 16:27:36.943584 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-682g6\" (UniqueName: \"kubernetes.io/projected/ebbe238c-4f40-46a8-b549-b9b0ae97fb82-kube-api-access-682g6\") pod \"dnsmasq-dns-57d769cc4f-b4knd\" (UID: \"ebbe238c-4f40-46a8-b549-b9b0ae97fb82\") " pod="openstack/dnsmasq-dns-57d769cc4f-b4knd" Feb 27 16:27:36 crc kubenswrapper[4830]: I0227 16:27:36.943605 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ebbe238c-4f40-46a8-b549-b9b0ae97fb82-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-b4knd\" (UID: \"ebbe238c-4f40-46a8-b549-b9b0ae97fb82\") " pod="openstack/dnsmasq-dns-57d769cc4f-b4knd" Feb 27 16:27:36 crc kubenswrapper[4830]: I0227 16:27:36.944384 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ebbe238c-4f40-46a8-b549-b9b0ae97fb82-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-b4knd\" (UID: \"ebbe238c-4f40-46a8-b549-b9b0ae97fb82\") " pod="openstack/dnsmasq-dns-57d769cc4f-b4knd" Feb 27 16:27:36 crc kubenswrapper[4830]: I0227 16:27:36.945481 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebbe238c-4f40-46a8-b549-b9b0ae97fb82-config\") pod \"dnsmasq-dns-57d769cc4f-b4knd\" (UID: \"ebbe238c-4f40-46a8-b549-b9b0ae97fb82\") " pod="openstack/dnsmasq-dns-57d769cc4f-b4knd" Feb 27 16:27:36 crc kubenswrapper[4830]: I0227 16:27:36.962590 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-682g6\" (UniqueName: \"kubernetes.io/projected/ebbe238c-4f40-46a8-b549-b9b0ae97fb82-kube-api-access-682g6\") pod 
\"dnsmasq-dns-57d769cc4f-b4knd\" (UID: \"ebbe238c-4f40-46a8-b549-b9b0ae97fb82\") " pod="openstack/dnsmasq-dns-57d769cc4f-b4knd" Feb 27 16:27:37 crc kubenswrapper[4830]: I0227 16:27:37.006794 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-b4knd" Feb 27 16:27:37 crc kubenswrapper[4830]: I0227 16:27:37.254970 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-f5dpx"] Feb 27 16:27:37 crc kubenswrapper[4830]: W0227 16:27:37.258882 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0800fc83_7606_4be1_8a04_aab5b8226a0c.slice/crio-c6e3c0711184c89c7dee67ccc7e15a7d797cec7ad03266664a5a6d7d03fab54c WatchSource:0}: Error finding container c6e3c0711184c89c7dee67ccc7e15a7d797cec7ad03266664a5a6d7d03fab54c: Status 404 returned error can't find the container with id c6e3c0711184c89c7dee67ccc7e15a7d797cec7ad03266664a5a6d7d03fab54c Feb 27 16:27:37 crc kubenswrapper[4830]: I0227 16:27:37.417656 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-b4knd"] Feb 27 16:27:37 crc kubenswrapper[4830]: W0227 16:27:37.424732 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podebbe238c_4f40_46a8_b549_b9b0ae97fb82.slice/crio-5c8363827c77e3f477de62fb43eb9f24db779ae4eb4b79a0b31817d5b319fe0a WatchSource:0}: Error finding container 5c8363827c77e3f477de62fb43eb9f24db779ae4eb4b79a0b31817d5b319fe0a: Status 404 returned error can't find the container with id 5c8363827c77e3f477de62fb43eb9f24db779ae4eb4b79a0b31817d5b319fe0a Feb 27 16:27:37 crc kubenswrapper[4830]: I0227 16:27:37.554939 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 27 16:27:37 crc kubenswrapper[4830]: I0227 16:27:37.556183 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 27 16:27:37 crc kubenswrapper[4830]: I0227 16:27:37.560282 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 27 16:27:37 crc kubenswrapper[4830]: I0227 16:27:37.560461 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 27 16:27:37 crc kubenswrapper[4830]: I0227 16:27:37.560519 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-bxbvd" Feb 27 16:27:37 crc kubenswrapper[4830]: I0227 16:27:37.560619 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 27 16:27:37 crc kubenswrapper[4830]: I0227 16:27:37.560703 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 27 16:27:37 crc kubenswrapper[4830]: I0227 16:27:37.560790 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 27 16:27:37 crc kubenswrapper[4830]: I0227 16:27:37.560854 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 27 16:27:37 crc kubenswrapper[4830]: I0227 16:27:37.566311 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 27 16:27:37 crc kubenswrapper[4830]: I0227 16:27:37.668078 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/47514135-95a6-4b77-815a-ebf23a3cab82-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"47514135-95a6-4b77-815a-ebf23a3cab82\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 16:27:37 crc kubenswrapper[4830]: I0227 16:27:37.668122 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"server-conf\" (UniqueName: \"kubernetes.io/configmap/47514135-95a6-4b77-815a-ebf23a3cab82-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"47514135-95a6-4b77-815a-ebf23a3cab82\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 16:27:37 crc kubenswrapper[4830]: I0227 16:27:37.668144 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/47514135-95a6-4b77-815a-ebf23a3cab82-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"47514135-95a6-4b77-815a-ebf23a3cab82\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 16:27:37 crc kubenswrapper[4830]: I0227 16:27:37.668165 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/47514135-95a6-4b77-815a-ebf23a3cab82-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"47514135-95a6-4b77-815a-ebf23a3cab82\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 16:27:37 crc kubenswrapper[4830]: I0227 16:27:37.668253 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kc6fh\" (UniqueName: \"kubernetes.io/projected/47514135-95a6-4b77-815a-ebf23a3cab82-kube-api-access-kc6fh\") pod \"rabbitmq-cell1-server-0\" (UID: \"47514135-95a6-4b77-815a-ebf23a3cab82\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 16:27:37 crc kubenswrapper[4830]: I0227 16:27:37.668284 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/47514135-95a6-4b77-815a-ebf23a3cab82-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"47514135-95a6-4b77-815a-ebf23a3cab82\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 16:27:37 crc kubenswrapper[4830]: I0227 16:27:37.668301 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/47514135-95a6-4b77-815a-ebf23a3cab82-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"47514135-95a6-4b77-815a-ebf23a3cab82\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 16:27:37 crc kubenswrapper[4830]: I0227 16:27:37.668317 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/47514135-95a6-4b77-815a-ebf23a3cab82-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"47514135-95a6-4b77-815a-ebf23a3cab82\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 16:27:37 crc kubenswrapper[4830]: I0227 16:27:37.668337 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/47514135-95a6-4b77-815a-ebf23a3cab82-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"47514135-95a6-4b77-815a-ebf23a3cab82\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 16:27:37 crc kubenswrapper[4830]: I0227 16:27:37.668408 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"47514135-95a6-4b77-815a-ebf23a3cab82\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 16:27:37 crc kubenswrapper[4830]: I0227 16:27:37.668539 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/47514135-95a6-4b77-815a-ebf23a3cab82-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"47514135-95a6-4b77-815a-ebf23a3cab82\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 16:27:37 crc kubenswrapper[4830]: I0227 16:27:37.770083 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/47514135-95a6-4b77-815a-ebf23a3cab82-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"47514135-95a6-4b77-815a-ebf23a3cab82\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 16:27:37 crc kubenswrapper[4830]: I0227 16:27:37.770143 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/47514135-95a6-4b77-815a-ebf23a3cab82-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"47514135-95a6-4b77-815a-ebf23a3cab82\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 16:27:37 crc kubenswrapper[4830]: I0227 16:27:37.770168 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/47514135-95a6-4b77-815a-ebf23a3cab82-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"47514135-95a6-4b77-815a-ebf23a3cab82\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 16:27:37 crc kubenswrapper[4830]: I0227 16:27:37.770197 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/47514135-95a6-4b77-815a-ebf23a3cab82-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"47514135-95a6-4b77-815a-ebf23a3cab82\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 16:27:37 crc kubenswrapper[4830]: I0227 16:27:37.770225 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"47514135-95a6-4b77-815a-ebf23a3cab82\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 16:27:37 crc kubenswrapper[4830]: I0227 16:27:37.770283 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/47514135-95a6-4b77-815a-ebf23a3cab82-rabbitmq-erlang-cookie\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"47514135-95a6-4b77-815a-ebf23a3cab82\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 16:27:37 crc kubenswrapper[4830]: I0227 16:27:37.770437 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/47514135-95a6-4b77-815a-ebf23a3cab82-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"47514135-95a6-4b77-815a-ebf23a3cab82\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 16:27:37 crc kubenswrapper[4830]: I0227 16:27:37.770465 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/47514135-95a6-4b77-815a-ebf23a3cab82-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"47514135-95a6-4b77-815a-ebf23a3cab82\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 16:27:37 crc kubenswrapper[4830]: I0227 16:27:37.770492 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/47514135-95a6-4b77-815a-ebf23a3cab82-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"47514135-95a6-4b77-815a-ebf23a3cab82\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 16:27:37 crc kubenswrapper[4830]: I0227 16:27:37.770522 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/47514135-95a6-4b77-815a-ebf23a3cab82-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"47514135-95a6-4b77-815a-ebf23a3cab82\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 16:27:37 crc kubenswrapper[4830]: I0227 16:27:37.770554 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kc6fh\" (UniqueName: \"kubernetes.io/projected/47514135-95a6-4b77-815a-ebf23a3cab82-kube-api-access-kc6fh\") pod \"rabbitmq-cell1-server-0\" (UID: \"47514135-95a6-4b77-815a-ebf23a3cab82\") " 
pod="openstack/rabbitmq-cell1-server-0" Feb 27 16:27:37 crc kubenswrapper[4830]: I0227 16:27:37.771011 4830 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"47514135-95a6-4b77-815a-ebf23a3cab82\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/rabbitmq-cell1-server-0" Feb 27 16:27:37 crc kubenswrapper[4830]: I0227 16:27:37.771226 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/47514135-95a6-4b77-815a-ebf23a3cab82-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"47514135-95a6-4b77-815a-ebf23a3cab82\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 16:27:37 crc kubenswrapper[4830]: I0227 16:27:37.771762 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/47514135-95a6-4b77-815a-ebf23a3cab82-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"47514135-95a6-4b77-815a-ebf23a3cab82\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 16:27:37 crc kubenswrapper[4830]: I0227 16:27:37.772428 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/47514135-95a6-4b77-815a-ebf23a3cab82-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"47514135-95a6-4b77-815a-ebf23a3cab82\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 16:27:37 crc kubenswrapper[4830]: I0227 16:27:37.772647 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/47514135-95a6-4b77-815a-ebf23a3cab82-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"47514135-95a6-4b77-815a-ebf23a3cab82\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 16:27:37 crc kubenswrapper[4830]: I0227 16:27:37.776117 4830 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/47514135-95a6-4b77-815a-ebf23a3cab82-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"47514135-95a6-4b77-815a-ebf23a3cab82\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 16:27:37 crc kubenswrapper[4830]: I0227 16:27:37.777517 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/47514135-95a6-4b77-815a-ebf23a3cab82-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"47514135-95a6-4b77-815a-ebf23a3cab82\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 16:27:37 crc kubenswrapper[4830]: I0227 16:27:37.783100 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/47514135-95a6-4b77-815a-ebf23a3cab82-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"47514135-95a6-4b77-815a-ebf23a3cab82\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 16:27:37 crc kubenswrapper[4830]: I0227 16:27:37.785672 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/47514135-95a6-4b77-815a-ebf23a3cab82-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"47514135-95a6-4b77-815a-ebf23a3cab82\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 16:27:37 crc kubenswrapper[4830]: I0227 16:27:37.787541 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/47514135-95a6-4b77-815a-ebf23a3cab82-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"47514135-95a6-4b77-815a-ebf23a3cab82\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 16:27:37 crc kubenswrapper[4830]: I0227 16:27:37.791309 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kc6fh\" (UniqueName: 
\"kubernetes.io/projected/47514135-95a6-4b77-815a-ebf23a3cab82-kube-api-access-kc6fh\") pod \"rabbitmq-cell1-server-0\" (UID: \"47514135-95a6-4b77-815a-ebf23a3cab82\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 16:27:37 crc kubenswrapper[4830]: I0227 16:27:37.800472 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 27 16:27:37 crc kubenswrapper[4830]: I0227 16:27:37.801586 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 27 16:27:37 crc kubenswrapper[4830]: I0227 16:27:37.806511 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Feb 27 16:27:37 crc kubenswrapper[4830]: I0227 16:27:37.806640 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 27 16:27:37 crc kubenswrapper[4830]: I0227 16:27:37.807258 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Feb 27 16:27:37 crc kubenswrapper[4830]: I0227 16:27:37.807664 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 27 16:27:37 crc kubenswrapper[4830]: I0227 16:27:37.807673 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-mx8md" Feb 27 16:27:37 crc kubenswrapper[4830]: I0227 16:27:37.807744 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 27 16:27:37 crc kubenswrapper[4830]: I0227 16:27:37.814718 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 27 16:27:37 crc kubenswrapper[4830]: I0227 16:27:37.815153 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 27 16:27:37 crc kubenswrapper[4830]: I0227 16:27:37.834485 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"47514135-95a6-4b77-815a-ebf23a3cab82\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 16:27:37 crc kubenswrapper[4830]: I0227 16:27:37.872065 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/aa5b7bdd-50bb-4123-a32a-0c7e97035a3f-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"aa5b7bdd-50bb-4123-a32a-0c7e97035a3f\") " pod="openstack/rabbitmq-server-0" Feb 27 16:27:37 crc kubenswrapper[4830]: I0227 16:27:37.872097 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/aa5b7bdd-50bb-4123-a32a-0c7e97035a3f-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"aa5b7bdd-50bb-4123-a32a-0c7e97035a3f\") " pod="openstack/rabbitmq-server-0" Feb 27 16:27:37 crc kubenswrapper[4830]: I0227 16:27:37.872114 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jb7f9\" (UniqueName: \"kubernetes.io/projected/aa5b7bdd-50bb-4123-a32a-0c7e97035a3f-kube-api-access-jb7f9\") pod \"rabbitmq-server-0\" (UID: \"aa5b7bdd-50bb-4123-a32a-0c7e97035a3f\") " pod="openstack/rabbitmq-server-0" Feb 27 16:27:37 crc kubenswrapper[4830]: I0227 16:27:37.872152 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/aa5b7bdd-50bb-4123-a32a-0c7e97035a3f-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"aa5b7bdd-50bb-4123-a32a-0c7e97035a3f\") " pod="openstack/rabbitmq-server-0" Feb 27 16:27:37 crc kubenswrapper[4830]: I0227 16:27:37.872171 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/aa5b7bdd-50bb-4123-a32a-0c7e97035a3f-server-conf\") pod \"rabbitmq-server-0\" (UID: \"aa5b7bdd-50bb-4123-a32a-0c7e97035a3f\") " pod="openstack/rabbitmq-server-0" Feb 27 16:27:37 crc kubenswrapper[4830]: I0227 16:27:37.872193 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/aa5b7bdd-50bb-4123-a32a-0c7e97035a3f-config-data\") pod \"rabbitmq-server-0\" (UID: \"aa5b7bdd-50bb-4123-a32a-0c7e97035a3f\") " pod="openstack/rabbitmq-server-0" Feb 27 16:27:37 crc kubenswrapper[4830]: I0227 16:27:37.872210 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/aa5b7bdd-50bb-4123-a32a-0c7e97035a3f-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"aa5b7bdd-50bb-4123-a32a-0c7e97035a3f\") " pod="openstack/rabbitmq-server-0" Feb 27 16:27:37 crc kubenswrapper[4830]: I0227 16:27:37.872243 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/aa5b7bdd-50bb-4123-a32a-0c7e97035a3f-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"aa5b7bdd-50bb-4123-a32a-0c7e97035a3f\") " pod="openstack/rabbitmq-server-0" Feb 27 16:27:37 crc kubenswrapper[4830]: I0227 16:27:37.872298 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-server-0\" (UID: \"aa5b7bdd-50bb-4123-a32a-0c7e97035a3f\") " pod="openstack/rabbitmq-server-0" Feb 27 16:27:37 crc kubenswrapper[4830]: I0227 16:27:37.872337 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/aa5b7bdd-50bb-4123-a32a-0c7e97035a3f-pod-info\") pod \"rabbitmq-server-0\" 
(UID: \"aa5b7bdd-50bb-4123-a32a-0c7e97035a3f\") " pod="openstack/rabbitmq-server-0" Feb 27 16:27:37 crc kubenswrapper[4830]: I0227 16:27:37.872356 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/aa5b7bdd-50bb-4123-a32a-0c7e97035a3f-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"aa5b7bdd-50bb-4123-a32a-0c7e97035a3f\") " pod="openstack/rabbitmq-server-0" Feb 27 16:27:37 crc kubenswrapper[4830]: I0227 16:27:37.924972 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 27 16:27:37 crc kubenswrapper[4830]: I0227 16:27:37.970833 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-b4knd" event={"ID":"ebbe238c-4f40-46a8-b549-b9b0ae97fb82","Type":"ContainerStarted","Data":"5c8363827c77e3f477de62fb43eb9f24db779ae4eb4b79a0b31817d5b319fe0a"} Feb 27 16:27:37 crc kubenswrapper[4830]: I0227 16:27:37.973575 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/aa5b7bdd-50bb-4123-a32a-0c7e97035a3f-pod-info\") pod \"rabbitmq-server-0\" (UID: \"aa5b7bdd-50bb-4123-a32a-0c7e97035a3f\") " pod="openstack/rabbitmq-server-0" Feb 27 16:27:37 crc kubenswrapper[4830]: I0227 16:27:37.973603 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/aa5b7bdd-50bb-4123-a32a-0c7e97035a3f-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"aa5b7bdd-50bb-4123-a32a-0c7e97035a3f\") " pod="openstack/rabbitmq-server-0" Feb 27 16:27:37 crc kubenswrapper[4830]: I0227 16:27:37.973633 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/aa5b7bdd-50bb-4123-a32a-0c7e97035a3f-rabbitmq-tls\") pod \"rabbitmq-server-0\" 
(UID: \"aa5b7bdd-50bb-4123-a32a-0c7e97035a3f\") " pod="openstack/rabbitmq-server-0" Feb 27 16:27:37 crc kubenswrapper[4830]: I0227 16:27:37.973647 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/aa5b7bdd-50bb-4123-a32a-0c7e97035a3f-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"aa5b7bdd-50bb-4123-a32a-0c7e97035a3f\") " pod="openstack/rabbitmq-server-0" Feb 27 16:27:37 crc kubenswrapper[4830]: I0227 16:27:37.973664 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jb7f9\" (UniqueName: \"kubernetes.io/projected/aa5b7bdd-50bb-4123-a32a-0c7e97035a3f-kube-api-access-jb7f9\") pod \"rabbitmq-server-0\" (UID: \"aa5b7bdd-50bb-4123-a32a-0c7e97035a3f\") " pod="openstack/rabbitmq-server-0" Feb 27 16:27:37 crc kubenswrapper[4830]: I0227 16:27:37.973686 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/aa5b7bdd-50bb-4123-a32a-0c7e97035a3f-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"aa5b7bdd-50bb-4123-a32a-0c7e97035a3f\") " pod="openstack/rabbitmq-server-0" Feb 27 16:27:37 crc kubenswrapper[4830]: I0227 16:27:37.973699 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/aa5b7bdd-50bb-4123-a32a-0c7e97035a3f-server-conf\") pod \"rabbitmq-server-0\" (UID: \"aa5b7bdd-50bb-4123-a32a-0c7e97035a3f\") " pod="openstack/rabbitmq-server-0" Feb 27 16:27:37 crc kubenswrapper[4830]: I0227 16:27:37.973718 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/aa5b7bdd-50bb-4123-a32a-0c7e97035a3f-config-data\") pod \"rabbitmq-server-0\" (UID: \"aa5b7bdd-50bb-4123-a32a-0c7e97035a3f\") " pod="openstack/rabbitmq-server-0" Feb 27 16:27:37 crc kubenswrapper[4830]: I0227 16:27:37.973731 4830 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/aa5b7bdd-50bb-4123-a32a-0c7e97035a3f-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"aa5b7bdd-50bb-4123-a32a-0c7e97035a3f\") " pod="openstack/rabbitmq-server-0" Feb 27 16:27:37 crc kubenswrapper[4830]: I0227 16:27:37.973758 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/aa5b7bdd-50bb-4123-a32a-0c7e97035a3f-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"aa5b7bdd-50bb-4123-a32a-0c7e97035a3f\") " pod="openstack/rabbitmq-server-0" Feb 27 16:27:37 crc kubenswrapper[4830]: I0227 16:27:37.973786 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-server-0\" (UID: \"aa5b7bdd-50bb-4123-a32a-0c7e97035a3f\") " pod="openstack/rabbitmq-server-0" Feb 27 16:27:37 crc kubenswrapper[4830]: I0227 16:27:37.973954 4830 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-server-0\" (UID: \"aa5b7bdd-50bb-4123-a32a-0c7e97035a3f\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/rabbitmq-server-0" Feb 27 16:27:37 crc kubenswrapper[4830]: I0227 16:27:37.978978 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/aa5b7bdd-50bb-4123-a32a-0c7e97035a3f-pod-info\") pod \"rabbitmq-server-0\" (UID: \"aa5b7bdd-50bb-4123-a32a-0c7e97035a3f\") " pod="openstack/rabbitmq-server-0" Feb 27 16:27:37 crc kubenswrapper[4830]: I0227 16:27:37.979499 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/aa5b7bdd-50bb-4123-a32a-0c7e97035a3f-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"aa5b7bdd-50bb-4123-a32a-0c7e97035a3f\") " pod="openstack/rabbitmq-server-0" Feb 27 16:27:37 crc kubenswrapper[4830]: I0227 16:27:37.980644 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/aa5b7bdd-50bb-4123-a32a-0c7e97035a3f-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"aa5b7bdd-50bb-4123-a32a-0c7e97035a3f\") " pod="openstack/rabbitmq-server-0" Feb 27 16:27:37 crc kubenswrapper[4830]: I0227 16:27:37.980662 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/aa5b7bdd-50bb-4123-a32a-0c7e97035a3f-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"aa5b7bdd-50bb-4123-a32a-0c7e97035a3f\") " pod="openstack/rabbitmq-server-0" Feb 27 16:27:37 crc kubenswrapper[4830]: I0227 16:27:37.982059 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/aa5b7bdd-50bb-4123-a32a-0c7e97035a3f-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"aa5b7bdd-50bb-4123-a32a-0c7e97035a3f\") " pod="openstack/rabbitmq-server-0" Feb 27 16:27:37 crc kubenswrapper[4830]: I0227 16:27:37.982139 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/aa5b7bdd-50bb-4123-a32a-0c7e97035a3f-config-data\") pod \"rabbitmq-server-0\" (UID: \"aa5b7bdd-50bb-4123-a32a-0c7e97035a3f\") " pod="openstack/rabbitmq-server-0" Feb 27 16:27:37 crc kubenswrapper[4830]: I0227 16:27:37.983042 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/aa5b7bdd-50bb-4123-a32a-0c7e97035a3f-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"aa5b7bdd-50bb-4123-a32a-0c7e97035a3f\") " pod="openstack/rabbitmq-server-0" Feb 27 16:27:37 
crc kubenswrapper[4830]: I0227 16:27:37.987845 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/aa5b7bdd-50bb-4123-a32a-0c7e97035a3f-server-conf\") pod \"rabbitmq-server-0\" (UID: \"aa5b7bdd-50bb-4123-a32a-0c7e97035a3f\") " pod="openstack/rabbitmq-server-0" Feb 27 16:27:37 crc kubenswrapper[4830]: I0227 16:27:37.996801 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/aa5b7bdd-50bb-4123-a32a-0c7e97035a3f-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"aa5b7bdd-50bb-4123-a32a-0c7e97035a3f\") " pod="openstack/rabbitmq-server-0" Feb 27 16:27:37 crc kubenswrapper[4830]: I0227 16:27:37.997610 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-server-0\" (UID: \"aa5b7bdd-50bb-4123-a32a-0c7e97035a3f\") " pod="openstack/rabbitmq-server-0" Feb 27 16:27:38 crc kubenswrapper[4830]: I0227 16:27:38.001144 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jb7f9\" (UniqueName: \"kubernetes.io/projected/aa5b7bdd-50bb-4123-a32a-0c7e97035a3f-kube-api-access-jb7f9\") pod \"rabbitmq-server-0\" (UID: \"aa5b7bdd-50bb-4123-a32a-0c7e97035a3f\") " pod="openstack/rabbitmq-server-0" Feb 27 16:27:38 crc kubenswrapper[4830]: I0227 16:27:38.003034 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-f5dpx" event={"ID":"0800fc83-7606-4be1-8a04-aab5b8226a0c","Type":"ContainerStarted","Data":"c6e3c0711184c89c7dee67ccc7e15a7d797cec7ad03266664a5a6d7d03fab54c"} Feb 27 16:27:38 crc kubenswrapper[4830]: I0227 16:27:38.178821 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 27 16:27:38 crc kubenswrapper[4830]: I0227 16:27:38.611308 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 27 16:27:38 crc kubenswrapper[4830]: W0227 16:27:38.623005 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod47514135_95a6_4b77_815a_ebf23a3cab82.slice/crio-bd95fbc21262734a4243970c4ca8c0c8132b401d0826f72dac04a87fe9febbf4 WatchSource:0}: Error finding container bd95fbc21262734a4243970c4ca8c0c8132b401d0826f72dac04a87fe9febbf4: Status 404 returned error can't find the container with id bd95fbc21262734a4243970c4ca8c0c8132b401d0826f72dac04a87fe9febbf4 Feb 27 16:27:38 crc kubenswrapper[4830]: I0227 16:27:38.722260 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 27 16:27:39 crc kubenswrapper[4830]: I0227 16:27:39.010482 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"47514135-95a6-4b77-815a-ebf23a3cab82","Type":"ContainerStarted","Data":"bd95fbc21262734a4243970c4ca8c0c8132b401d0826f72dac04a87fe9febbf4"} Feb 27 16:27:39 crc kubenswrapper[4830]: I0227 16:27:39.048078 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Feb 27 16:27:39 crc kubenswrapper[4830]: I0227 16:27:39.049307 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Feb 27 16:27:39 crc kubenswrapper[4830]: I0227 16:27:39.054244 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-jd86w" Feb 27 16:27:39 crc kubenswrapper[4830]: I0227 16:27:39.054830 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 27 16:27:39 crc kubenswrapper[4830]: I0227 16:27:39.054988 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Feb 27 16:27:39 crc kubenswrapper[4830]: I0227 16:27:39.055114 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Feb 27 16:27:39 crc kubenswrapper[4830]: I0227 16:27:39.055523 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Feb 27 16:27:39 crc kubenswrapper[4830]: I0227 16:27:39.059826 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Feb 27 16:27:39 crc kubenswrapper[4830]: I0227 16:27:39.190935 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/bf33c958-d345-4a0b-a2d8-7c8aedfb5cf3-kolla-config\") pod \"openstack-galera-0\" (UID: \"bf33c958-d345-4a0b-a2d8-7c8aedfb5cf3\") " pod="openstack/openstack-galera-0" Feb 27 16:27:39 crc kubenswrapper[4830]: I0227 16:27:39.191062 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdmdk\" (UniqueName: \"kubernetes.io/projected/bf33c958-d345-4a0b-a2d8-7c8aedfb5cf3-kube-api-access-pdmdk\") pod \"openstack-galera-0\" (UID: \"bf33c958-d345-4a0b-a2d8-7c8aedfb5cf3\") " pod="openstack/openstack-galera-0" Feb 27 16:27:39 crc kubenswrapper[4830]: I0227 16:27:39.191092 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/bf33c958-d345-4a0b-a2d8-7c8aedfb5cf3-config-data-generated\") pod \"openstack-galera-0\" (UID: \"bf33c958-d345-4a0b-a2d8-7c8aedfb5cf3\") " pod="openstack/openstack-galera-0" Feb 27 16:27:39 crc kubenswrapper[4830]: I0227 16:27:39.191141 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/bf33c958-d345-4a0b-a2d8-7c8aedfb5cf3-config-data-default\") pod \"openstack-galera-0\" (UID: \"bf33c958-d345-4a0b-a2d8-7c8aedfb5cf3\") " pod="openstack/openstack-galera-0" Feb 27 16:27:39 crc kubenswrapper[4830]: I0227 16:27:39.191160 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf33c958-d345-4a0b-a2d8-7c8aedfb5cf3-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"bf33c958-d345-4a0b-a2d8-7c8aedfb5cf3\") " pod="openstack/openstack-galera-0" Feb 27 16:27:39 crc kubenswrapper[4830]: I0227 16:27:39.191180 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-galera-0\" (UID: \"bf33c958-d345-4a0b-a2d8-7c8aedfb5cf3\") " pod="openstack/openstack-galera-0" Feb 27 16:27:39 crc kubenswrapper[4830]: I0227 16:27:39.191210 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf33c958-d345-4a0b-a2d8-7c8aedfb5cf3-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"bf33c958-d345-4a0b-a2d8-7c8aedfb5cf3\") " pod="openstack/openstack-galera-0" Feb 27 16:27:39 crc kubenswrapper[4830]: I0227 16:27:39.191244 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/bf33c958-d345-4a0b-a2d8-7c8aedfb5cf3-operator-scripts\") pod \"openstack-galera-0\" (UID: \"bf33c958-d345-4a0b-a2d8-7c8aedfb5cf3\") " pod="openstack/openstack-galera-0" Feb 27 16:27:39 crc kubenswrapper[4830]: I0227 16:27:39.292805 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/bf33c958-d345-4a0b-a2d8-7c8aedfb5cf3-config-data-default\") pod \"openstack-galera-0\" (UID: \"bf33c958-d345-4a0b-a2d8-7c8aedfb5cf3\") " pod="openstack/openstack-galera-0" Feb 27 16:27:39 crc kubenswrapper[4830]: I0227 16:27:39.292857 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-galera-0\" (UID: \"bf33c958-d345-4a0b-a2d8-7c8aedfb5cf3\") " pod="openstack/openstack-galera-0" Feb 27 16:27:39 crc kubenswrapper[4830]: I0227 16:27:39.292901 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf33c958-d345-4a0b-a2d8-7c8aedfb5cf3-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"bf33c958-d345-4a0b-a2d8-7c8aedfb5cf3\") " pod="openstack/openstack-galera-0" Feb 27 16:27:39 crc kubenswrapper[4830]: I0227 16:27:39.292936 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf33c958-d345-4a0b-a2d8-7c8aedfb5cf3-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"bf33c958-d345-4a0b-a2d8-7c8aedfb5cf3\") " pod="openstack/openstack-galera-0" Feb 27 16:27:39 crc kubenswrapper[4830]: I0227 16:27:39.292980 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bf33c958-d345-4a0b-a2d8-7c8aedfb5cf3-operator-scripts\") pod \"openstack-galera-0\" (UID: \"bf33c958-d345-4a0b-a2d8-7c8aedfb5cf3\") 
" pod="openstack/openstack-galera-0" Feb 27 16:27:39 crc kubenswrapper[4830]: I0227 16:27:39.293011 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/bf33c958-d345-4a0b-a2d8-7c8aedfb5cf3-kolla-config\") pod \"openstack-galera-0\" (UID: \"bf33c958-d345-4a0b-a2d8-7c8aedfb5cf3\") " pod="openstack/openstack-galera-0" Feb 27 16:27:39 crc kubenswrapper[4830]: I0227 16:27:39.293032 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pdmdk\" (UniqueName: \"kubernetes.io/projected/bf33c958-d345-4a0b-a2d8-7c8aedfb5cf3-kube-api-access-pdmdk\") pod \"openstack-galera-0\" (UID: \"bf33c958-d345-4a0b-a2d8-7c8aedfb5cf3\") " pod="openstack/openstack-galera-0" Feb 27 16:27:39 crc kubenswrapper[4830]: I0227 16:27:39.293054 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/bf33c958-d345-4a0b-a2d8-7c8aedfb5cf3-config-data-generated\") pod \"openstack-galera-0\" (UID: \"bf33c958-d345-4a0b-a2d8-7c8aedfb5cf3\") " pod="openstack/openstack-galera-0" Feb 27 16:27:39 crc kubenswrapper[4830]: I0227 16:27:39.293413 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/bf33c958-d345-4a0b-a2d8-7c8aedfb5cf3-config-data-generated\") pod \"openstack-galera-0\" (UID: \"bf33c958-d345-4a0b-a2d8-7c8aedfb5cf3\") " pod="openstack/openstack-galera-0" Feb 27 16:27:39 crc kubenswrapper[4830]: I0227 16:27:39.294351 4830 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-galera-0\" (UID: \"bf33c958-d345-4a0b-a2d8-7c8aedfb5cf3\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/openstack-galera-0" Feb 27 16:27:39 crc kubenswrapper[4830]: I0227 16:27:39.294570 4830 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/bf33c958-d345-4a0b-a2d8-7c8aedfb5cf3-kolla-config\") pod \"openstack-galera-0\" (UID: \"bf33c958-d345-4a0b-a2d8-7c8aedfb5cf3\") " pod="openstack/openstack-galera-0" Feb 27 16:27:39 crc kubenswrapper[4830]: I0227 16:27:39.294769 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/bf33c958-d345-4a0b-a2d8-7c8aedfb5cf3-config-data-default\") pod \"openstack-galera-0\" (UID: \"bf33c958-d345-4a0b-a2d8-7c8aedfb5cf3\") " pod="openstack/openstack-galera-0" Feb 27 16:27:39 crc kubenswrapper[4830]: I0227 16:27:39.296319 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bf33c958-d345-4a0b-a2d8-7c8aedfb5cf3-operator-scripts\") pod \"openstack-galera-0\" (UID: \"bf33c958-d345-4a0b-a2d8-7c8aedfb5cf3\") " pod="openstack/openstack-galera-0" Feb 27 16:27:39 crc kubenswrapper[4830]: I0227 16:27:39.313930 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf33c958-d345-4a0b-a2d8-7c8aedfb5cf3-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"bf33c958-d345-4a0b-a2d8-7c8aedfb5cf3\") " pod="openstack/openstack-galera-0" Feb 27 16:27:39 crc kubenswrapper[4830]: I0227 16:27:39.313976 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf33c958-d345-4a0b-a2d8-7c8aedfb5cf3-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"bf33c958-d345-4a0b-a2d8-7c8aedfb5cf3\") " pod="openstack/openstack-galera-0" Feb 27 16:27:39 crc kubenswrapper[4830]: I0227 16:27:39.318513 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pdmdk\" (UniqueName: 
\"kubernetes.io/projected/bf33c958-d345-4a0b-a2d8-7c8aedfb5cf3-kube-api-access-pdmdk\") pod \"openstack-galera-0\" (UID: \"bf33c958-d345-4a0b-a2d8-7c8aedfb5cf3\") " pod="openstack/openstack-galera-0" Feb 27 16:27:39 crc kubenswrapper[4830]: I0227 16:27:39.326712 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-galera-0\" (UID: \"bf33c958-d345-4a0b-a2d8-7c8aedfb5cf3\") " pod="openstack/openstack-galera-0" Feb 27 16:27:39 crc kubenswrapper[4830]: I0227 16:27:39.387143 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Feb 27 16:27:40 crc kubenswrapper[4830]: I0227 16:27:40.423743 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 27 16:27:40 crc kubenswrapper[4830]: I0227 16:27:40.425614 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 27 16:27:40 crc kubenswrapper[4830]: I0227 16:27:40.427371 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-jrlvg" Feb 27 16:27:40 crc kubenswrapper[4830]: I0227 16:27:40.428702 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Feb 27 16:27:40 crc kubenswrapper[4830]: I0227 16:27:40.429363 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Feb 27 16:27:40 crc kubenswrapper[4830]: I0227 16:27:40.429530 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Feb 27 16:27:40 crc kubenswrapper[4830]: I0227 16:27:40.430901 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 27 16:27:40 crc kubenswrapper[4830]: I0227 16:27:40.612095 4830 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-cell1-galera-0\" (UID: \"b63af300-2b1c-47a7-ae1d-1334deeefdb1\") " pod="openstack/openstack-cell1-galera-0" Feb 27 16:27:40 crc kubenswrapper[4830]: I0227 16:27:40.612156 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/b63af300-2b1c-47a7-ae1d-1334deeefdb1-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"b63af300-2b1c-47a7-ae1d-1334deeefdb1\") " pod="openstack/openstack-cell1-galera-0" Feb 27 16:27:40 crc kubenswrapper[4830]: I0227 16:27:40.612214 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/b63af300-2b1c-47a7-ae1d-1334deeefdb1-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"b63af300-2b1c-47a7-ae1d-1334deeefdb1\") " pod="openstack/openstack-cell1-galera-0" Feb 27 16:27:40 crc kubenswrapper[4830]: I0227 16:27:40.612362 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b63af300-2b1c-47a7-ae1d-1334deeefdb1-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"b63af300-2b1c-47a7-ae1d-1334deeefdb1\") " pod="openstack/openstack-cell1-galera-0" Feb 27 16:27:40 crc kubenswrapper[4830]: I0227 16:27:40.612382 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b63af300-2b1c-47a7-ae1d-1334deeefdb1-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"b63af300-2b1c-47a7-ae1d-1334deeefdb1\") " pod="openstack/openstack-cell1-galera-0" Feb 27 16:27:40 crc kubenswrapper[4830]: I0227 16:27:40.612432 4830 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b63af300-2b1c-47a7-ae1d-1334deeefdb1-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"b63af300-2b1c-47a7-ae1d-1334deeefdb1\") " pod="openstack/openstack-cell1-galera-0" Feb 27 16:27:40 crc kubenswrapper[4830]: I0227 16:27:40.612458 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/b63af300-2b1c-47a7-ae1d-1334deeefdb1-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"b63af300-2b1c-47a7-ae1d-1334deeefdb1\") " pod="openstack/openstack-cell1-galera-0" Feb 27 16:27:40 crc kubenswrapper[4830]: I0227 16:27:40.612505 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p76kb\" (UniqueName: \"kubernetes.io/projected/b63af300-2b1c-47a7-ae1d-1334deeefdb1-kube-api-access-p76kb\") pod \"openstack-cell1-galera-0\" (UID: \"b63af300-2b1c-47a7-ae1d-1334deeefdb1\") " pod="openstack/openstack-cell1-galera-0" Feb 27 16:27:40 crc kubenswrapper[4830]: I0227 16:27:40.625014 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Feb 27 16:27:40 crc kubenswrapper[4830]: I0227 16:27:40.625850 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Feb 27 16:27:40 crc kubenswrapper[4830]: I0227 16:27:40.629451 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-sgsls" Feb 27 16:27:40 crc kubenswrapper[4830]: I0227 16:27:40.629639 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Feb 27 16:27:40 crc kubenswrapper[4830]: I0227 16:27:40.629783 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Feb 27 16:27:40 crc kubenswrapper[4830]: I0227 16:27:40.636674 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 27 16:27:40 crc kubenswrapper[4830]: I0227 16:27:40.713823 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/b63af300-2b1c-47a7-ae1d-1334deeefdb1-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"b63af300-2b1c-47a7-ae1d-1334deeefdb1\") " pod="openstack/openstack-cell1-galera-0" Feb 27 16:27:40 crc kubenswrapper[4830]: I0227 16:27:40.713888 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/b63af300-2b1c-47a7-ae1d-1334deeefdb1-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"b63af300-2b1c-47a7-ae1d-1334deeefdb1\") " pod="openstack/openstack-cell1-galera-0" Feb 27 16:27:40 crc kubenswrapper[4830]: I0227 16:27:40.713923 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/eb3cdab6-15fa-40e1-a187-e277086227da-config-data\") pod \"memcached-0\" (UID: \"eb3cdab6-15fa-40e1-a187-e277086227da\") " pod="openstack/memcached-0" Feb 27 16:27:40 crc kubenswrapper[4830]: I0227 16:27:40.713970 4830 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5j8x9\" (UniqueName: \"kubernetes.io/projected/eb3cdab6-15fa-40e1-a187-e277086227da-kube-api-access-5j8x9\") pod \"memcached-0\" (UID: \"eb3cdab6-15fa-40e1-a187-e277086227da\") " pod="openstack/memcached-0" Feb 27 16:27:40 crc kubenswrapper[4830]: I0227 16:27:40.713988 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/eb3cdab6-15fa-40e1-a187-e277086227da-kolla-config\") pod \"memcached-0\" (UID: \"eb3cdab6-15fa-40e1-a187-e277086227da\") " pod="openstack/memcached-0" Feb 27 16:27:40 crc kubenswrapper[4830]: I0227 16:27:40.714009 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb3cdab6-15fa-40e1-a187-e277086227da-memcached-tls-certs\") pod \"memcached-0\" (UID: \"eb3cdab6-15fa-40e1-a187-e277086227da\") " pod="openstack/memcached-0" Feb 27 16:27:40 crc kubenswrapper[4830]: I0227 16:27:40.714031 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b63af300-2b1c-47a7-ae1d-1334deeefdb1-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"b63af300-2b1c-47a7-ae1d-1334deeefdb1\") " pod="openstack/openstack-cell1-galera-0" Feb 27 16:27:40 crc kubenswrapper[4830]: I0227 16:27:40.714049 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b63af300-2b1c-47a7-ae1d-1334deeefdb1-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"b63af300-2b1c-47a7-ae1d-1334deeefdb1\") " pod="openstack/openstack-cell1-galera-0" Feb 27 16:27:40 crc kubenswrapper[4830]: I0227 16:27:40.714066 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: 
\"kubernetes.io/configmap/b63af300-2b1c-47a7-ae1d-1334deeefdb1-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"b63af300-2b1c-47a7-ae1d-1334deeefdb1\") " pod="openstack/openstack-cell1-galera-0" Feb 27 16:27:40 crc kubenswrapper[4830]: I0227 16:27:40.714083 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb3cdab6-15fa-40e1-a187-e277086227da-combined-ca-bundle\") pod \"memcached-0\" (UID: \"eb3cdab6-15fa-40e1-a187-e277086227da\") " pod="openstack/memcached-0" Feb 27 16:27:40 crc kubenswrapper[4830]: I0227 16:27:40.714106 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/b63af300-2b1c-47a7-ae1d-1334deeefdb1-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"b63af300-2b1c-47a7-ae1d-1334deeefdb1\") " pod="openstack/openstack-cell1-galera-0" Feb 27 16:27:40 crc kubenswrapper[4830]: I0227 16:27:40.714126 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p76kb\" (UniqueName: \"kubernetes.io/projected/b63af300-2b1c-47a7-ae1d-1334deeefdb1-kube-api-access-p76kb\") pod \"openstack-cell1-galera-0\" (UID: \"b63af300-2b1c-47a7-ae1d-1334deeefdb1\") " pod="openstack/openstack-cell1-galera-0" Feb 27 16:27:40 crc kubenswrapper[4830]: I0227 16:27:40.714152 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-cell1-galera-0\" (UID: \"b63af300-2b1c-47a7-ae1d-1334deeefdb1\") " pod="openstack/openstack-cell1-galera-0" Feb 27 16:27:40 crc kubenswrapper[4830]: I0227 16:27:40.714297 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/b63af300-2b1c-47a7-ae1d-1334deeefdb1-config-data-generated\") pod 
\"openstack-cell1-galera-0\" (UID: \"b63af300-2b1c-47a7-ae1d-1334deeefdb1\") " pod="openstack/openstack-cell1-galera-0" Feb 27 16:27:40 crc kubenswrapper[4830]: I0227 16:27:40.714430 4830 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-cell1-galera-0\" (UID: \"b63af300-2b1c-47a7-ae1d-1334deeefdb1\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/openstack-cell1-galera-0" Feb 27 16:27:40 crc kubenswrapper[4830]: I0227 16:27:40.715322 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b63af300-2b1c-47a7-ae1d-1334deeefdb1-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"b63af300-2b1c-47a7-ae1d-1334deeefdb1\") " pod="openstack/openstack-cell1-galera-0" Feb 27 16:27:40 crc kubenswrapper[4830]: I0227 16:27:40.716480 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/b63af300-2b1c-47a7-ae1d-1334deeefdb1-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"b63af300-2b1c-47a7-ae1d-1334deeefdb1\") " pod="openstack/openstack-cell1-galera-0" Feb 27 16:27:40 crc kubenswrapper[4830]: I0227 16:27:40.716651 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b63af300-2b1c-47a7-ae1d-1334deeefdb1-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"b63af300-2b1c-47a7-ae1d-1334deeefdb1\") " pod="openstack/openstack-cell1-galera-0" Feb 27 16:27:40 crc kubenswrapper[4830]: I0227 16:27:40.723178 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b63af300-2b1c-47a7-ae1d-1334deeefdb1-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"b63af300-2b1c-47a7-ae1d-1334deeefdb1\") " 
pod="openstack/openstack-cell1-galera-0" Feb 27 16:27:40 crc kubenswrapper[4830]: I0227 16:27:40.728065 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/b63af300-2b1c-47a7-ae1d-1334deeefdb1-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"b63af300-2b1c-47a7-ae1d-1334deeefdb1\") " pod="openstack/openstack-cell1-galera-0" Feb 27 16:27:40 crc kubenswrapper[4830]: I0227 16:27:40.732991 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-cell1-galera-0\" (UID: \"b63af300-2b1c-47a7-ae1d-1334deeefdb1\") " pod="openstack/openstack-cell1-galera-0" Feb 27 16:27:40 crc kubenswrapper[4830]: I0227 16:27:40.733872 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p76kb\" (UniqueName: \"kubernetes.io/projected/b63af300-2b1c-47a7-ae1d-1334deeefdb1-kube-api-access-p76kb\") pod \"openstack-cell1-galera-0\" (UID: \"b63af300-2b1c-47a7-ae1d-1334deeefdb1\") " pod="openstack/openstack-cell1-galera-0" Feb 27 16:27:40 crc kubenswrapper[4830]: I0227 16:27:40.741770 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 27 16:27:40 crc kubenswrapper[4830]: I0227 16:27:40.816121 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb3cdab6-15fa-40e1-a187-e277086227da-memcached-tls-certs\") pod \"memcached-0\" (UID: \"eb3cdab6-15fa-40e1-a187-e277086227da\") " pod="openstack/memcached-0" Feb 27 16:27:40 crc kubenswrapper[4830]: I0227 16:27:40.819449 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb3cdab6-15fa-40e1-a187-e277086227da-combined-ca-bundle\") pod \"memcached-0\" (UID: \"eb3cdab6-15fa-40e1-a187-e277086227da\") " pod="openstack/memcached-0" Feb 27 16:27:40 crc kubenswrapper[4830]: I0227 16:27:40.819704 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/eb3cdab6-15fa-40e1-a187-e277086227da-config-data\") pod \"memcached-0\" (UID: \"eb3cdab6-15fa-40e1-a187-e277086227da\") " pod="openstack/memcached-0" Feb 27 16:27:40 crc kubenswrapper[4830]: I0227 16:27:40.819778 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5j8x9\" (UniqueName: \"kubernetes.io/projected/eb3cdab6-15fa-40e1-a187-e277086227da-kube-api-access-5j8x9\") pod \"memcached-0\" (UID: \"eb3cdab6-15fa-40e1-a187-e277086227da\") " pod="openstack/memcached-0" Feb 27 16:27:40 crc kubenswrapper[4830]: I0227 16:27:40.819800 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/eb3cdab6-15fa-40e1-a187-e277086227da-kolla-config\") pod \"memcached-0\" (UID: \"eb3cdab6-15fa-40e1-a187-e277086227da\") " pod="openstack/memcached-0" Feb 27 16:27:40 crc kubenswrapper[4830]: I0227 16:27:40.820534 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kolla-config\" (UniqueName: \"kubernetes.io/configmap/eb3cdab6-15fa-40e1-a187-e277086227da-kolla-config\") pod \"memcached-0\" (UID: \"eb3cdab6-15fa-40e1-a187-e277086227da\") " pod="openstack/memcached-0" Feb 27 16:27:40 crc kubenswrapper[4830]: I0227 16:27:40.821579 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/eb3cdab6-15fa-40e1-a187-e277086227da-config-data\") pod \"memcached-0\" (UID: \"eb3cdab6-15fa-40e1-a187-e277086227da\") " pod="openstack/memcached-0" Feb 27 16:27:40 crc kubenswrapper[4830]: I0227 16:27:40.827532 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb3cdab6-15fa-40e1-a187-e277086227da-combined-ca-bundle\") pod \"memcached-0\" (UID: \"eb3cdab6-15fa-40e1-a187-e277086227da\") " pod="openstack/memcached-0" Feb 27 16:27:40 crc kubenswrapper[4830]: I0227 16:27:40.834511 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb3cdab6-15fa-40e1-a187-e277086227da-memcached-tls-certs\") pod \"memcached-0\" (UID: \"eb3cdab6-15fa-40e1-a187-e277086227da\") " pod="openstack/memcached-0" Feb 27 16:27:40 crc kubenswrapper[4830]: I0227 16:27:40.853427 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5j8x9\" (UniqueName: \"kubernetes.io/projected/eb3cdab6-15fa-40e1-a187-e277086227da-kube-api-access-5j8x9\") pod \"memcached-0\" (UID: \"eb3cdab6-15fa-40e1-a187-e277086227da\") " pod="openstack/memcached-0" Feb 27 16:27:40 crc kubenswrapper[4830]: I0227 16:27:40.938897 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Feb 27 16:27:42 crc kubenswrapper[4830]: W0227 16:27:42.613062 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaa5b7bdd_50bb_4123_a32a_0c7e97035a3f.slice/crio-04a0e9026bdd37ee0f8f5e146fe81b31fad50f2da7639fc5b02226cffac84e09 WatchSource:0}: Error finding container 04a0e9026bdd37ee0f8f5e146fe81b31fad50f2da7639fc5b02226cffac84e09: Status 404 returned error can't find the container with id 04a0e9026bdd37ee0f8f5e146fe81b31fad50f2da7639fc5b02226cffac84e09 Feb 27 16:27:42 crc kubenswrapper[4830]: I0227 16:27:42.860482 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 27 16:27:42 crc kubenswrapper[4830]: I0227 16:27:42.861341 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 27 16:27:42 crc kubenswrapper[4830]: I0227 16:27:42.869242 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 27 16:27:42 crc kubenswrapper[4830]: I0227 16:27:42.874572 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-qfx54" Feb 27 16:27:42 crc kubenswrapper[4830]: I0227 16:27:42.954220 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6965\" (UniqueName: \"kubernetes.io/projected/4627a6ad-d0c1-4e72-9090-3ed47a060c24-kube-api-access-w6965\") pod \"kube-state-metrics-0\" (UID: \"4627a6ad-d0c1-4e72-9090-3ed47a060c24\") " pod="openstack/kube-state-metrics-0" Feb 27 16:27:43 crc kubenswrapper[4830]: I0227 16:27:43.040720 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"aa5b7bdd-50bb-4123-a32a-0c7e97035a3f","Type":"ContainerStarted","Data":"04a0e9026bdd37ee0f8f5e146fe81b31fad50f2da7639fc5b02226cffac84e09"} Feb 27 16:27:43 crc 
kubenswrapper[4830]: I0227 16:27:43.055800 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w6965\" (UniqueName: \"kubernetes.io/projected/4627a6ad-d0c1-4e72-9090-3ed47a060c24-kube-api-access-w6965\") pod \"kube-state-metrics-0\" (UID: \"4627a6ad-d0c1-4e72-9090-3ed47a060c24\") " pod="openstack/kube-state-metrics-0" Feb 27 16:27:43 crc kubenswrapper[4830]: I0227 16:27:43.072820 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6965\" (UniqueName: \"kubernetes.io/projected/4627a6ad-d0c1-4e72-9090-3ed47a060c24-kube-api-access-w6965\") pod \"kube-state-metrics-0\" (UID: \"4627a6ad-d0c1-4e72-9090-3ed47a060c24\") " pod="openstack/kube-state-metrics-0" Feb 27 16:27:43 crc kubenswrapper[4830]: I0227 16:27:43.202686 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 27 16:27:46 crc kubenswrapper[4830]: I0227 16:27:46.511973 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 27 16:27:46 crc kubenswrapper[4830]: I0227 16:27:46.514627 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 27 16:27:46 crc kubenswrapper[4830]: I0227 16:27:46.517292 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-99lwc" Feb 27 16:27:46 crc kubenswrapper[4830]: I0227 16:27:46.517511 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Feb 27 16:27:46 crc kubenswrapper[4830]: I0227 16:27:46.517834 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Feb 27 16:27:46 crc kubenswrapper[4830]: I0227 16:27:46.518019 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Feb 27 16:27:46 crc kubenswrapper[4830]: I0227 16:27:46.518732 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 27 16:27:46 crc kubenswrapper[4830]: I0227 16:27:46.535450 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Feb 27 16:27:46 crc kubenswrapper[4830]: I0227 16:27:46.620714 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f17706c-2060-4191-b63a-df7dea2c4c95-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"9f17706c-2060-4191-b63a-df7dea2c4c95\") " pod="openstack/ovsdbserver-nb-0" Feb 27 16:27:46 crc kubenswrapper[4830]: I0227 16:27:46.620782 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/9f17706c-2060-4191-b63a-df7dea2c4c95-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"9f17706c-2060-4191-b63a-df7dea2c4c95\") " pod="openstack/ovsdbserver-nb-0" Feb 27 16:27:46 crc kubenswrapper[4830]: I0227 16:27:46.620882 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f17706c-2060-4191-b63a-df7dea2c4c95-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"9f17706c-2060-4191-b63a-df7dea2c4c95\") " pod="openstack/ovsdbserver-nb-0" Feb 27 16:27:46 crc kubenswrapper[4830]: I0227 16:27:46.620937 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9f17706c-2060-4191-b63a-df7dea2c4c95-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"9f17706c-2060-4191-b63a-df7dea2c4c95\") " pod="openstack/ovsdbserver-nb-0" Feb 27 16:27:46 crc kubenswrapper[4830]: I0227 16:27:46.621015 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f17706c-2060-4191-b63a-df7dea2c4c95-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"9f17706c-2060-4191-b63a-df7dea2c4c95\") " pod="openstack/ovsdbserver-nb-0" Feb 27 16:27:46 crc kubenswrapper[4830]: I0227 16:27:46.621040 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f17706c-2060-4191-b63a-df7dea2c4c95-config\") pod \"ovsdbserver-nb-0\" (UID: \"9f17706c-2060-4191-b63a-df7dea2c4c95\") " pod="openstack/ovsdbserver-nb-0" Feb 27 16:27:46 crc kubenswrapper[4830]: I0227 16:27:46.621063 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-nb-0\" (UID: \"9f17706c-2060-4191-b63a-df7dea2c4c95\") " pod="openstack/ovsdbserver-nb-0" Feb 27 16:27:46 crc kubenswrapper[4830]: I0227 16:27:46.621166 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clnqr\" (UniqueName: 
\"kubernetes.io/projected/9f17706c-2060-4191-b63a-df7dea2c4c95-kube-api-access-clnqr\") pod \"ovsdbserver-nb-0\" (UID: \"9f17706c-2060-4191-b63a-df7dea2c4c95\") " pod="openstack/ovsdbserver-nb-0" Feb 27 16:27:46 crc kubenswrapper[4830]: I0227 16:27:46.722345 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-nb-0\" (UID: \"9f17706c-2060-4191-b63a-df7dea2c4c95\") " pod="openstack/ovsdbserver-nb-0" Feb 27 16:27:46 crc kubenswrapper[4830]: I0227 16:27:46.722577 4830 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-nb-0\" (UID: \"9f17706c-2060-4191-b63a-df7dea2c4c95\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/ovsdbserver-nb-0" Feb 27 16:27:46 crc kubenswrapper[4830]: I0227 16:27:46.723050 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f17706c-2060-4191-b63a-df7dea2c4c95-config\") pod \"ovsdbserver-nb-0\" (UID: \"9f17706c-2060-4191-b63a-df7dea2c4c95\") " pod="openstack/ovsdbserver-nb-0" Feb 27 16:27:46 crc kubenswrapper[4830]: I0227 16:27:46.723199 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-clnqr\" (UniqueName: \"kubernetes.io/projected/9f17706c-2060-4191-b63a-df7dea2c4c95-kube-api-access-clnqr\") pod \"ovsdbserver-nb-0\" (UID: \"9f17706c-2060-4191-b63a-df7dea2c4c95\") " pod="openstack/ovsdbserver-nb-0" Feb 27 16:27:46 crc kubenswrapper[4830]: I0227 16:27:46.723352 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f17706c-2060-4191-b63a-df7dea2c4c95-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"9f17706c-2060-4191-b63a-df7dea2c4c95\") " 
pod="openstack/ovsdbserver-nb-0" Feb 27 16:27:46 crc kubenswrapper[4830]: I0227 16:27:46.723444 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/9f17706c-2060-4191-b63a-df7dea2c4c95-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"9f17706c-2060-4191-b63a-df7dea2c4c95\") " pod="openstack/ovsdbserver-nb-0" Feb 27 16:27:46 crc kubenswrapper[4830]: I0227 16:27:46.723555 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f17706c-2060-4191-b63a-df7dea2c4c95-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"9f17706c-2060-4191-b63a-df7dea2c4c95\") " pod="openstack/ovsdbserver-nb-0" Feb 27 16:27:46 crc kubenswrapper[4830]: I0227 16:27:46.723617 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9f17706c-2060-4191-b63a-df7dea2c4c95-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"9f17706c-2060-4191-b63a-df7dea2c4c95\") " pod="openstack/ovsdbserver-nb-0" Feb 27 16:27:46 crc kubenswrapper[4830]: I0227 16:27:46.723774 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f17706c-2060-4191-b63a-df7dea2c4c95-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"9f17706c-2060-4191-b63a-df7dea2c4c95\") " pod="openstack/ovsdbserver-nb-0" Feb 27 16:27:46 crc kubenswrapper[4830]: I0227 16:27:46.723967 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/9f17706c-2060-4191-b63a-df7dea2c4c95-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"9f17706c-2060-4191-b63a-df7dea2c4c95\") " pod="openstack/ovsdbserver-nb-0" Feb 27 16:27:46 crc kubenswrapper[4830]: I0227 16:27:46.725162 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" 
(UniqueName: \"kubernetes.io/configmap/9f17706c-2060-4191-b63a-df7dea2c4c95-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"9f17706c-2060-4191-b63a-df7dea2c4c95\") " pod="openstack/ovsdbserver-nb-0" Feb 27 16:27:46 crc kubenswrapper[4830]: I0227 16:27:46.725381 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f17706c-2060-4191-b63a-df7dea2c4c95-config\") pod \"ovsdbserver-nb-0\" (UID: \"9f17706c-2060-4191-b63a-df7dea2c4c95\") " pod="openstack/ovsdbserver-nb-0" Feb 27 16:27:46 crc kubenswrapper[4830]: I0227 16:27:46.730098 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f17706c-2060-4191-b63a-df7dea2c4c95-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"9f17706c-2060-4191-b63a-df7dea2c4c95\") " pod="openstack/ovsdbserver-nb-0" Feb 27 16:27:46 crc kubenswrapper[4830]: I0227 16:27:46.730580 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f17706c-2060-4191-b63a-df7dea2c4c95-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"9f17706c-2060-4191-b63a-df7dea2c4c95\") " pod="openstack/ovsdbserver-nb-0" Feb 27 16:27:46 crc kubenswrapper[4830]: I0227 16:27:46.730882 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f17706c-2060-4191-b63a-df7dea2c4c95-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"9f17706c-2060-4191-b63a-df7dea2c4c95\") " pod="openstack/ovsdbserver-nb-0" Feb 27 16:27:46 crc kubenswrapper[4830]: I0227 16:27:46.743529 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-nb-0\" (UID: \"9f17706c-2060-4191-b63a-df7dea2c4c95\") " pod="openstack/ovsdbserver-nb-0" Feb 27 16:27:46 crc 
kubenswrapper[4830]: I0227 16:27:46.746990 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-clnqr\" (UniqueName: \"kubernetes.io/projected/9f17706c-2060-4191-b63a-df7dea2c4c95-kube-api-access-clnqr\") pod \"ovsdbserver-nb-0\" (UID: \"9f17706c-2060-4191-b63a-df7dea2c4c95\") " pod="openstack/ovsdbserver-nb-0" Feb 27 16:27:46 crc kubenswrapper[4830]: I0227 16:27:46.845676 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 27 16:27:47 crc kubenswrapper[4830]: I0227 16:27:47.279285 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-mncqx"] Feb 27 16:27:47 crc kubenswrapper[4830]: I0227 16:27:47.280421 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-mncqx" Feb 27 16:27:47 crc kubenswrapper[4830]: I0227 16:27:47.284165 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-2964v" Feb 27 16:27:47 crc kubenswrapper[4830]: I0227 16:27:47.284648 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Feb 27 16:27:47 crc kubenswrapper[4830]: I0227 16:27:47.284875 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Feb 27 16:27:47 crc kubenswrapper[4830]: I0227 16:27:47.304994 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-mncqx"] Feb 27 16:27:47 crc kubenswrapper[4830]: I0227 16:27:47.364637 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-qt6mr"] Feb 27 16:27:47 crc kubenswrapper[4830]: I0227 16:27:47.366592 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-qt6mr" Feb 27 16:27:47 crc kubenswrapper[4830]: I0227 16:27:47.382260 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-qt6mr"] Feb 27 16:27:47 crc kubenswrapper[4830]: I0227 16:27:47.450541 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2eacdbe3-1c63-4811-a7ea-5dc6fd8dce60-var-run\") pod \"ovn-controller-mncqx\" (UID: \"2eacdbe3-1c63-4811-a7ea-5dc6fd8dce60\") " pod="openstack/ovn-controller-mncqx" Feb 27 16:27:47 crc kubenswrapper[4830]: I0227 16:27:47.450865 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/2eacdbe3-1c63-4811-a7ea-5dc6fd8dce60-ovn-controller-tls-certs\") pod \"ovn-controller-mncqx\" (UID: \"2eacdbe3-1c63-4811-a7ea-5dc6fd8dce60\") " pod="openstack/ovn-controller-mncqx" Feb 27 16:27:47 crc kubenswrapper[4830]: I0227 16:27:47.450984 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lffj5\" (UniqueName: \"kubernetes.io/projected/2eacdbe3-1c63-4811-a7ea-5dc6fd8dce60-kube-api-access-lffj5\") pod \"ovn-controller-mncqx\" (UID: \"2eacdbe3-1c63-4811-a7ea-5dc6fd8dce60\") " pod="openstack/ovn-controller-mncqx" Feb 27 16:27:47 crc kubenswrapper[4830]: I0227 16:27:47.451123 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2eacdbe3-1c63-4811-a7ea-5dc6fd8dce60-var-log-ovn\") pod \"ovn-controller-mncqx\" (UID: \"2eacdbe3-1c63-4811-a7ea-5dc6fd8dce60\") " pod="openstack/ovn-controller-mncqx" Feb 27 16:27:47 crc kubenswrapper[4830]: I0227 16:27:47.451268 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/2eacdbe3-1c63-4811-a7ea-5dc6fd8dce60-combined-ca-bundle\") pod \"ovn-controller-mncqx\" (UID: \"2eacdbe3-1c63-4811-a7ea-5dc6fd8dce60\") " pod="openstack/ovn-controller-mncqx" Feb 27 16:27:47 crc kubenswrapper[4830]: I0227 16:27:47.451373 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2eacdbe3-1c63-4811-a7ea-5dc6fd8dce60-scripts\") pod \"ovn-controller-mncqx\" (UID: \"2eacdbe3-1c63-4811-a7ea-5dc6fd8dce60\") " pod="openstack/ovn-controller-mncqx" Feb 27 16:27:47 crc kubenswrapper[4830]: I0227 16:27:47.451477 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2eacdbe3-1c63-4811-a7ea-5dc6fd8dce60-var-run-ovn\") pod \"ovn-controller-mncqx\" (UID: \"2eacdbe3-1c63-4811-a7ea-5dc6fd8dce60\") " pod="openstack/ovn-controller-mncqx" Feb 27 16:27:47 crc kubenswrapper[4830]: I0227 16:27:47.552667 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/2eacdbe3-1c63-4811-a7ea-5dc6fd8dce60-ovn-controller-tls-certs\") pod \"ovn-controller-mncqx\" (UID: \"2eacdbe3-1c63-4811-a7ea-5dc6fd8dce60\") " pod="openstack/ovn-controller-mncqx" Feb 27 16:27:47 crc kubenswrapper[4830]: I0227 16:27:47.552721 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/bc737ee4-d87c-4276-a6d1-6f3144879542-var-log\") pod \"ovn-controller-ovs-qt6mr\" (UID: \"bc737ee4-d87c-4276-a6d1-6f3144879542\") " pod="openstack/ovn-controller-ovs-qt6mr" Feb 27 16:27:47 crc kubenswrapper[4830]: I0227 16:27:47.552739 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: 
\"kubernetes.io/host-path/2eacdbe3-1c63-4811-a7ea-5dc6fd8dce60-var-log-ovn\") pod \"ovn-controller-mncqx\" (UID: \"2eacdbe3-1c63-4811-a7ea-5dc6fd8dce60\") " pod="openstack/ovn-controller-mncqx" Feb 27 16:27:47 crc kubenswrapper[4830]: I0227 16:27:47.552773 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2eacdbe3-1c63-4811-a7ea-5dc6fd8dce60-var-run-ovn\") pod \"ovn-controller-mncqx\" (UID: \"2eacdbe3-1c63-4811-a7ea-5dc6fd8dce60\") " pod="openstack/ovn-controller-mncqx" Feb 27 16:27:47 crc kubenswrapper[4830]: I0227 16:27:47.552792 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bc737ee4-d87c-4276-a6d1-6f3144879542-scripts\") pod \"ovn-controller-ovs-qt6mr\" (UID: \"bc737ee4-d87c-4276-a6d1-6f3144879542\") " pod="openstack/ovn-controller-ovs-qt6mr" Feb 27 16:27:47 crc kubenswrapper[4830]: I0227 16:27:47.553117 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/bc737ee4-d87c-4276-a6d1-6f3144879542-var-run\") pod \"ovn-controller-ovs-qt6mr\" (UID: \"bc737ee4-d87c-4276-a6d1-6f3144879542\") " pod="openstack/ovn-controller-ovs-qt6mr" Feb 27 16:27:47 crc kubenswrapper[4830]: I0227 16:27:47.553662 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2eacdbe3-1c63-4811-a7ea-5dc6fd8dce60-var-run\") pod \"ovn-controller-mncqx\" (UID: \"2eacdbe3-1c63-4811-a7ea-5dc6fd8dce60\") " pod="openstack/ovn-controller-mncqx" Feb 27 16:27:47 crc kubenswrapper[4830]: I0227 16:27:47.553739 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49zwz\" (UniqueName: \"kubernetes.io/projected/bc737ee4-d87c-4276-a6d1-6f3144879542-kube-api-access-49zwz\") pod 
\"ovn-controller-ovs-qt6mr\" (UID: \"bc737ee4-d87c-4276-a6d1-6f3144879542\") " pod="openstack/ovn-controller-ovs-qt6mr" Feb 27 16:27:47 crc kubenswrapper[4830]: I0227 16:27:47.553789 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/bc737ee4-d87c-4276-a6d1-6f3144879542-etc-ovs\") pod \"ovn-controller-ovs-qt6mr\" (UID: \"bc737ee4-d87c-4276-a6d1-6f3144879542\") " pod="openstack/ovn-controller-ovs-qt6mr" Feb 27 16:27:47 crc kubenswrapper[4830]: I0227 16:27:47.553844 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lffj5\" (UniqueName: \"kubernetes.io/projected/2eacdbe3-1c63-4811-a7ea-5dc6fd8dce60-kube-api-access-lffj5\") pod \"ovn-controller-mncqx\" (UID: \"2eacdbe3-1c63-4811-a7ea-5dc6fd8dce60\") " pod="openstack/ovn-controller-mncqx" Feb 27 16:27:47 crc kubenswrapper[4830]: I0227 16:27:47.553977 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2eacdbe3-1c63-4811-a7ea-5dc6fd8dce60-combined-ca-bundle\") pod \"ovn-controller-mncqx\" (UID: \"2eacdbe3-1c63-4811-a7ea-5dc6fd8dce60\") " pod="openstack/ovn-controller-mncqx" Feb 27 16:27:47 crc kubenswrapper[4830]: I0227 16:27:47.554038 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2eacdbe3-1c63-4811-a7ea-5dc6fd8dce60-scripts\") pod \"ovn-controller-mncqx\" (UID: \"2eacdbe3-1c63-4811-a7ea-5dc6fd8dce60\") " pod="openstack/ovn-controller-mncqx" Feb 27 16:27:47 crc kubenswrapper[4830]: I0227 16:27:47.554137 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2eacdbe3-1c63-4811-a7ea-5dc6fd8dce60-var-run\") pod \"ovn-controller-mncqx\" (UID: \"2eacdbe3-1c63-4811-a7ea-5dc6fd8dce60\") " pod="openstack/ovn-controller-mncqx" Feb 27 16:27:47 
crc kubenswrapper[4830]: I0227 16:27:47.553298 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2eacdbe3-1c63-4811-a7ea-5dc6fd8dce60-var-run-ovn\") pod \"ovn-controller-mncqx\" (UID: \"2eacdbe3-1c63-4811-a7ea-5dc6fd8dce60\") " pod="openstack/ovn-controller-mncqx" Feb 27 16:27:47 crc kubenswrapper[4830]: I0227 16:27:47.553388 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2eacdbe3-1c63-4811-a7ea-5dc6fd8dce60-var-log-ovn\") pod \"ovn-controller-mncqx\" (UID: \"2eacdbe3-1c63-4811-a7ea-5dc6fd8dce60\") " pod="openstack/ovn-controller-mncqx" Feb 27 16:27:47 crc kubenswrapper[4830]: I0227 16:27:47.555270 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/bc737ee4-d87c-4276-a6d1-6f3144879542-var-lib\") pod \"ovn-controller-ovs-qt6mr\" (UID: \"bc737ee4-d87c-4276-a6d1-6f3144879542\") " pod="openstack/ovn-controller-ovs-qt6mr" Feb 27 16:27:47 crc kubenswrapper[4830]: I0227 16:27:47.558925 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2eacdbe3-1c63-4811-a7ea-5dc6fd8dce60-combined-ca-bundle\") pod \"ovn-controller-mncqx\" (UID: \"2eacdbe3-1c63-4811-a7ea-5dc6fd8dce60\") " pod="openstack/ovn-controller-mncqx" Feb 27 16:27:47 crc kubenswrapper[4830]: I0227 16:27:47.559351 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2eacdbe3-1c63-4811-a7ea-5dc6fd8dce60-scripts\") pod \"ovn-controller-mncqx\" (UID: \"2eacdbe3-1c63-4811-a7ea-5dc6fd8dce60\") " pod="openstack/ovn-controller-mncqx" Feb 27 16:27:47 crc kubenswrapper[4830]: I0227 16:27:47.560002 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/2eacdbe3-1c63-4811-a7ea-5dc6fd8dce60-ovn-controller-tls-certs\") pod \"ovn-controller-mncqx\" (UID: \"2eacdbe3-1c63-4811-a7ea-5dc6fd8dce60\") " pod="openstack/ovn-controller-mncqx" Feb 27 16:27:47 crc kubenswrapper[4830]: I0227 16:27:47.587693 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lffj5\" (UniqueName: \"kubernetes.io/projected/2eacdbe3-1c63-4811-a7ea-5dc6fd8dce60-kube-api-access-lffj5\") pod \"ovn-controller-mncqx\" (UID: \"2eacdbe3-1c63-4811-a7ea-5dc6fd8dce60\") " pod="openstack/ovn-controller-mncqx" Feb 27 16:27:47 crc kubenswrapper[4830]: I0227 16:27:47.603485 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-mncqx" Feb 27 16:27:47 crc kubenswrapper[4830]: I0227 16:27:47.656910 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/bc737ee4-d87c-4276-a6d1-6f3144879542-var-lib\") pod \"ovn-controller-ovs-qt6mr\" (UID: \"bc737ee4-d87c-4276-a6d1-6f3144879542\") " pod="openstack/ovn-controller-ovs-qt6mr" Feb 27 16:27:47 crc kubenswrapper[4830]: I0227 16:27:47.657038 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/bc737ee4-d87c-4276-a6d1-6f3144879542-var-log\") pod \"ovn-controller-ovs-qt6mr\" (UID: \"bc737ee4-d87c-4276-a6d1-6f3144879542\") " pod="openstack/ovn-controller-ovs-qt6mr" Feb 27 16:27:47 crc kubenswrapper[4830]: I0227 16:27:47.657110 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bc737ee4-d87c-4276-a6d1-6f3144879542-scripts\") pod \"ovn-controller-ovs-qt6mr\" (UID: \"bc737ee4-d87c-4276-a6d1-6f3144879542\") " pod="openstack/ovn-controller-ovs-qt6mr" Feb 27 16:27:47 crc kubenswrapper[4830]: I0227 16:27:47.657181 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/bc737ee4-d87c-4276-a6d1-6f3144879542-var-run\") pod \"ovn-controller-ovs-qt6mr\" (UID: \"bc737ee4-d87c-4276-a6d1-6f3144879542\") " pod="openstack/ovn-controller-ovs-qt6mr" Feb 27 16:27:47 crc kubenswrapper[4830]: I0227 16:27:47.657223 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-49zwz\" (UniqueName: \"kubernetes.io/projected/bc737ee4-d87c-4276-a6d1-6f3144879542-kube-api-access-49zwz\") pod \"ovn-controller-ovs-qt6mr\" (UID: \"bc737ee4-d87c-4276-a6d1-6f3144879542\") " pod="openstack/ovn-controller-ovs-qt6mr" Feb 27 16:27:47 crc kubenswrapper[4830]: I0227 16:27:47.657255 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/bc737ee4-d87c-4276-a6d1-6f3144879542-etc-ovs\") pod \"ovn-controller-ovs-qt6mr\" (UID: \"bc737ee4-d87c-4276-a6d1-6f3144879542\") " pod="openstack/ovn-controller-ovs-qt6mr" Feb 27 16:27:47 crc kubenswrapper[4830]: I0227 16:27:47.657258 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/bc737ee4-d87c-4276-a6d1-6f3144879542-var-lib\") pod \"ovn-controller-ovs-qt6mr\" (UID: \"bc737ee4-d87c-4276-a6d1-6f3144879542\") " pod="openstack/ovn-controller-ovs-qt6mr" Feb 27 16:27:47 crc kubenswrapper[4830]: I0227 16:27:47.657470 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/bc737ee4-d87c-4276-a6d1-6f3144879542-var-log\") pod \"ovn-controller-ovs-qt6mr\" (UID: \"bc737ee4-d87c-4276-a6d1-6f3144879542\") " pod="openstack/ovn-controller-ovs-qt6mr" Feb 27 16:27:47 crc kubenswrapper[4830]: I0227 16:27:47.657574 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/bc737ee4-d87c-4276-a6d1-6f3144879542-var-run\") pod \"ovn-controller-ovs-qt6mr\" (UID: 
\"bc737ee4-d87c-4276-a6d1-6f3144879542\") " pod="openstack/ovn-controller-ovs-qt6mr" Feb 27 16:27:47 crc kubenswrapper[4830]: I0227 16:27:47.657598 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/bc737ee4-d87c-4276-a6d1-6f3144879542-etc-ovs\") pod \"ovn-controller-ovs-qt6mr\" (UID: \"bc737ee4-d87c-4276-a6d1-6f3144879542\") " pod="openstack/ovn-controller-ovs-qt6mr" Feb 27 16:27:47 crc kubenswrapper[4830]: I0227 16:27:47.659745 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bc737ee4-d87c-4276-a6d1-6f3144879542-scripts\") pod \"ovn-controller-ovs-qt6mr\" (UID: \"bc737ee4-d87c-4276-a6d1-6f3144879542\") " pod="openstack/ovn-controller-ovs-qt6mr" Feb 27 16:27:47 crc kubenswrapper[4830]: I0227 16:27:47.687352 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-49zwz\" (UniqueName: \"kubernetes.io/projected/bc737ee4-d87c-4276-a6d1-6f3144879542-kube-api-access-49zwz\") pod \"ovn-controller-ovs-qt6mr\" (UID: \"bc737ee4-d87c-4276-a6d1-6f3144879542\") " pod="openstack/ovn-controller-ovs-qt6mr" Feb 27 16:27:47 crc kubenswrapper[4830]: I0227 16:27:47.989627 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-qt6mr" Feb 27 16:27:50 crc kubenswrapper[4830]: I0227 16:27:50.064929 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 27 16:27:50 crc kubenswrapper[4830]: I0227 16:27:50.067568 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 27 16:27:50 crc kubenswrapper[4830]: I0227 16:27:50.070541 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-bnq6k" Feb 27 16:27:50 crc kubenswrapper[4830]: I0227 16:27:50.070665 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Feb 27 16:27:50 crc kubenswrapper[4830]: I0227 16:27:50.071111 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Feb 27 16:27:50 crc kubenswrapper[4830]: I0227 16:27:50.071925 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Feb 27 16:27:50 crc kubenswrapper[4830]: I0227 16:27:50.090478 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 27 16:27:50 crc kubenswrapper[4830]: I0227 16:27:50.234637 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7285a360-7ff1-4e35-b91a-d472a0ee591b-config\") pod \"ovsdbserver-sb-0\" (UID: \"7285a360-7ff1-4e35-b91a-d472a0ee591b\") " pod="openstack/ovsdbserver-sb-0" Feb 27 16:27:50 crc kubenswrapper[4830]: I0227 16:27:50.234800 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7285a360-7ff1-4e35-b91a-d472a0ee591b-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"7285a360-7ff1-4e35-b91a-d472a0ee591b\") " pod="openstack/ovsdbserver-sb-0" Feb 27 16:27:50 crc kubenswrapper[4830]: I0227 16:27:50.234865 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-sb-0\" (UID: \"7285a360-7ff1-4e35-b91a-d472a0ee591b\") " 
pod="openstack/ovsdbserver-sb-0" Feb 27 16:27:50 crc kubenswrapper[4830]: I0227 16:27:50.235091 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7285a360-7ff1-4e35-b91a-d472a0ee591b-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"7285a360-7ff1-4e35-b91a-d472a0ee591b\") " pod="openstack/ovsdbserver-sb-0" Feb 27 16:27:50 crc kubenswrapper[4830]: I0227 16:27:50.235148 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wn479\" (UniqueName: \"kubernetes.io/projected/7285a360-7ff1-4e35-b91a-d472a0ee591b-kube-api-access-wn479\") pod \"ovsdbserver-sb-0\" (UID: \"7285a360-7ff1-4e35-b91a-d472a0ee591b\") " pod="openstack/ovsdbserver-sb-0" Feb 27 16:27:50 crc kubenswrapper[4830]: I0227 16:27:50.235265 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7285a360-7ff1-4e35-b91a-d472a0ee591b-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"7285a360-7ff1-4e35-b91a-d472a0ee591b\") " pod="openstack/ovsdbserver-sb-0" Feb 27 16:27:50 crc kubenswrapper[4830]: I0227 16:27:50.235325 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7285a360-7ff1-4e35-b91a-d472a0ee591b-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"7285a360-7ff1-4e35-b91a-d472a0ee591b\") " pod="openstack/ovsdbserver-sb-0" Feb 27 16:27:50 crc kubenswrapper[4830]: I0227 16:27:50.235406 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/7285a360-7ff1-4e35-b91a-d472a0ee591b-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"7285a360-7ff1-4e35-b91a-d472a0ee591b\") " pod="openstack/ovsdbserver-sb-0" 
Feb 27 16:27:50 crc kubenswrapper[4830]: I0227 16:27:50.337686 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7285a360-7ff1-4e35-b91a-d472a0ee591b-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"7285a360-7ff1-4e35-b91a-d472a0ee591b\") " pod="openstack/ovsdbserver-sb-0" Feb 27 16:27:50 crc kubenswrapper[4830]: I0227 16:27:50.337752 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-sb-0\" (UID: \"7285a360-7ff1-4e35-b91a-d472a0ee591b\") " pod="openstack/ovsdbserver-sb-0" Feb 27 16:27:50 crc kubenswrapper[4830]: I0227 16:27:50.337811 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7285a360-7ff1-4e35-b91a-d472a0ee591b-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"7285a360-7ff1-4e35-b91a-d472a0ee591b\") " pod="openstack/ovsdbserver-sb-0" Feb 27 16:27:50 crc kubenswrapper[4830]: I0227 16:27:50.337837 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wn479\" (UniqueName: \"kubernetes.io/projected/7285a360-7ff1-4e35-b91a-d472a0ee591b-kube-api-access-wn479\") pod \"ovsdbserver-sb-0\" (UID: \"7285a360-7ff1-4e35-b91a-d472a0ee591b\") " pod="openstack/ovsdbserver-sb-0" Feb 27 16:27:50 crc kubenswrapper[4830]: I0227 16:27:50.337880 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7285a360-7ff1-4e35-b91a-d472a0ee591b-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"7285a360-7ff1-4e35-b91a-d472a0ee591b\") " pod="openstack/ovsdbserver-sb-0" Feb 27 16:27:50 crc kubenswrapper[4830]: I0227 16:27:50.337909 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/7285a360-7ff1-4e35-b91a-d472a0ee591b-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"7285a360-7ff1-4e35-b91a-d472a0ee591b\") " pod="openstack/ovsdbserver-sb-0" Feb 27 16:27:50 crc kubenswrapper[4830]: I0227 16:27:50.337970 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/7285a360-7ff1-4e35-b91a-d472a0ee591b-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"7285a360-7ff1-4e35-b91a-d472a0ee591b\") " pod="openstack/ovsdbserver-sb-0" Feb 27 16:27:50 crc kubenswrapper[4830]: I0227 16:27:50.338003 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7285a360-7ff1-4e35-b91a-d472a0ee591b-config\") pod \"ovsdbserver-sb-0\" (UID: \"7285a360-7ff1-4e35-b91a-d472a0ee591b\") " pod="openstack/ovsdbserver-sb-0" Feb 27 16:27:50 crc kubenswrapper[4830]: I0227 16:27:50.338386 4830 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-sb-0\" (UID: \"7285a360-7ff1-4e35-b91a-d472a0ee591b\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/ovsdbserver-sb-0" Feb 27 16:27:50 crc kubenswrapper[4830]: I0227 16:27:50.339249 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/7285a360-7ff1-4e35-b91a-d472a0ee591b-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"7285a360-7ff1-4e35-b91a-d472a0ee591b\") " pod="openstack/ovsdbserver-sb-0" Feb 27 16:27:50 crc kubenswrapper[4830]: I0227 16:27:50.339734 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7285a360-7ff1-4e35-b91a-d472a0ee591b-config\") pod \"ovsdbserver-sb-0\" (UID: \"7285a360-7ff1-4e35-b91a-d472a0ee591b\") " pod="openstack/ovsdbserver-sb-0" Feb 27 
16:27:50 crc kubenswrapper[4830]: I0227 16:27:50.339993 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7285a360-7ff1-4e35-b91a-d472a0ee591b-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"7285a360-7ff1-4e35-b91a-d472a0ee591b\") " pod="openstack/ovsdbserver-sb-0" Feb 27 16:27:50 crc kubenswrapper[4830]: I0227 16:27:50.346207 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7285a360-7ff1-4e35-b91a-d472a0ee591b-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"7285a360-7ff1-4e35-b91a-d472a0ee591b\") " pod="openstack/ovsdbserver-sb-0" Feb 27 16:27:50 crc kubenswrapper[4830]: I0227 16:27:50.347164 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7285a360-7ff1-4e35-b91a-d472a0ee591b-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"7285a360-7ff1-4e35-b91a-d472a0ee591b\") " pod="openstack/ovsdbserver-sb-0" Feb 27 16:27:50 crc kubenswrapper[4830]: I0227 16:27:50.353592 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7285a360-7ff1-4e35-b91a-d472a0ee591b-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"7285a360-7ff1-4e35-b91a-d472a0ee591b\") " pod="openstack/ovsdbserver-sb-0" Feb 27 16:27:50 crc kubenswrapper[4830]: I0227 16:27:50.369489 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wn479\" (UniqueName: \"kubernetes.io/projected/7285a360-7ff1-4e35-b91a-d472a0ee591b-kube-api-access-wn479\") pod \"ovsdbserver-sb-0\" (UID: \"7285a360-7ff1-4e35-b91a-d472a0ee591b\") " pod="openstack/ovsdbserver-sb-0" Feb 27 16:27:50 crc kubenswrapper[4830]: I0227 16:27:50.374915 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-sb-0\" (UID: \"7285a360-7ff1-4e35-b91a-d472a0ee591b\") " pod="openstack/ovsdbserver-sb-0" Feb 27 16:27:50 crc kubenswrapper[4830]: I0227 16:27:50.406295 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 27 16:27:57 crc kubenswrapper[4830]: E0227 16:27:57.466519 4830 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 27 16:27:57 crc kubenswrapper[4830]: E0227 16:27:57.467112 4830 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f9f2n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-r77wz_openstack(80f367ea-86aa-4385-b62c-35fd25a2355f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 27 16:27:57 crc kubenswrapper[4830]: E0227 16:27:57.470075 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack/dnsmasq-dns-675f4bcbfc-r77wz" podUID="80f367ea-86aa-4385-b62c-35fd25a2355f" Feb 27 16:27:57 crc kubenswrapper[4830]: E0227 16:27:57.531306 4830 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 27 16:27:57 crc kubenswrapper[4830]: E0227 16:27:57.531545 4830 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nfdh5dfhb6h64h676hc4h78h97h669h54chfbh696hb5h54bh5d4h6bh64h644h677h584h5cbh698h9dh5bbh5f8h5b8hcdh644h5c7h694hbfh589q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gz975,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPol
icy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-5ccc8479f9-f5dpx_openstack(0800fc83-7606-4be1-8a04-aab5b8226a0c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 27 16:27:57 crc kubenswrapper[4830]: E0227 16:27:57.535109 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-5ccc8479f9-f5dpx" podUID="0800fc83-7606-4be1-8a04-aab5b8226a0c" Feb 27 16:27:57 crc kubenswrapper[4830]: E0227 16:27:57.549131 4830 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 27 16:27:57 crc kubenswrapper[4830]: E0227 16:27:57.549283 4830 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-682g6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-b4knd_openstack(ebbe238c-4f40-46a8-b549-b9b0ae97fb82): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 27 16:27:57 crc kubenswrapper[4830]: E0227 16:27:57.549500 4830 log.go:32] "PullImage from image 
service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 27 16:27:57 crc kubenswrapper[4830]: E0227 16:27:57.549556 4830 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vjmlz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFile
system:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-7vqj6_openstack(3a40d14a-19ad-406b-bc28-5be5b879ba20): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 27 16:27:57 crc kubenswrapper[4830]: E0227 16:27:57.550668 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-7vqj6" podUID="3a40d14a-19ad-406b-bc28-5be5b879ba20" Feb 27 16:27:57 crc kubenswrapper[4830]: E0227 16:27:57.550709 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-b4knd" podUID="ebbe238c-4f40-46a8-b549-b9b0ae97fb82" Feb 27 16:27:57 crc kubenswrapper[4830]: I0227 16:27:57.930017 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 27 16:27:58 crc kubenswrapper[4830]: E0227 16:27:58.158396 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-5ccc8479f9-f5dpx" podUID="0800fc83-7606-4be1-8a04-aab5b8226a0c" Feb 27 16:27:58 crc kubenswrapper[4830]: E0227 16:27:58.158448 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with 
ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-b4knd" podUID="ebbe238c-4f40-46a8-b549-b9b0ae97fb82" Feb 27 16:27:58 crc kubenswrapper[4830]: W0227 16:27:58.842238 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeb3cdab6_15fa_40e1_a187_e277086227da.slice/crio-23f9b2043dd7472d750b86599a6ec4fd73edb0ad6c2affdab8a506cb40cd6394 WatchSource:0}: Error finding container 23f9b2043dd7472d750b86599a6ec4fd73edb0ad6c2affdab8a506cb40cd6394: Status 404 returned error can't find the container with id 23f9b2043dd7472d750b86599a6ec4fd73edb0ad6c2affdab8a506cb40cd6394 Feb 27 16:27:59 crc kubenswrapper[4830]: I0227 16:27:59.007616 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-r77wz" Feb 27 16:27:59 crc kubenswrapper[4830]: I0227 16:27:59.028391 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-7vqj6" Feb 27 16:27:59 crc kubenswrapper[4830]: I0227 16:27:59.122527 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/80f367ea-86aa-4385-b62c-35fd25a2355f-config\") pod \"80f367ea-86aa-4385-b62c-35fd25a2355f\" (UID: \"80f367ea-86aa-4385-b62c-35fd25a2355f\") " Feb 27 16:27:59 crc kubenswrapper[4830]: I0227 16:27:59.122566 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f9f2n\" (UniqueName: \"kubernetes.io/projected/80f367ea-86aa-4385-b62c-35fd25a2355f-kube-api-access-f9f2n\") pod \"80f367ea-86aa-4385-b62c-35fd25a2355f\" (UID: \"80f367ea-86aa-4385-b62c-35fd25a2355f\") " Feb 27 16:27:59 crc kubenswrapper[4830]: I0227 16:27:59.123317 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/80f367ea-86aa-4385-b62c-35fd25a2355f-config" (OuterVolumeSpecName: "config") pod "80f367ea-86aa-4385-b62c-35fd25a2355f" (UID: "80f367ea-86aa-4385-b62c-35fd25a2355f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:27:59 crc kubenswrapper[4830]: I0227 16:27:59.127632 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80f367ea-86aa-4385-b62c-35fd25a2355f-kube-api-access-f9f2n" (OuterVolumeSpecName: "kube-api-access-f9f2n") pod "80f367ea-86aa-4385-b62c-35fd25a2355f" (UID: "80f367ea-86aa-4385-b62c-35fd25a2355f"). InnerVolumeSpecName "kube-api-access-f9f2n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:27:59 crc kubenswrapper[4830]: I0227 16:27:59.164573 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-r77wz" event={"ID":"80f367ea-86aa-4385-b62c-35fd25a2355f","Type":"ContainerDied","Data":"11c567ab777215389d798b4b5a99ea77276cd0e685ab7f5f79cf981d52c2e44d"} Feb 27 16:27:59 crc kubenswrapper[4830]: I0227 16:27:59.164649 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-r77wz" Feb 27 16:27:59 crc kubenswrapper[4830]: I0227 16:27:59.168177 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"eb3cdab6-15fa-40e1-a187-e277086227da","Type":"ContainerStarted","Data":"23f9b2043dd7472d750b86599a6ec4fd73edb0ad6c2affdab8a506cb40cd6394"} Feb 27 16:27:59 crc kubenswrapper[4830]: I0227 16:27:59.169811 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-7vqj6" event={"ID":"3a40d14a-19ad-406b-bc28-5be5b879ba20","Type":"ContainerDied","Data":"9cca093b215ef4d9e3cd6e67196954aad77f1b39abe8c8c9ea9da812cdadb87f"} Feb 27 16:27:59 crc kubenswrapper[4830]: I0227 16:27:59.169899 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-7vqj6" Feb 27 16:27:59 crc kubenswrapper[4830]: I0227 16:27:59.213332 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-r77wz"] Feb 27 16:27:59 crc kubenswrapper[4830]: I0227 16:27:59.220091 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-r77wz"] Feb 27 16:27:59 crc kubenswrapper[4830]: I0227 16:27:59.224298 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3a40d14a-19ad-406b-bc28-5be5b879ba20-dns-svc\") pod \"3a40d14a-19ad-406b-bc28-5be5b879ba20\" (UID: \"3a40d14a-19ad-406b-bc28-5be5b879ba20\") " Feb 27 16:27:59 crc kubenswrapper[4830]: I0227 16:27:59.224392 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vjmlz\" (UniqueName: \"kubernetes.io/projected/3a40d14a-19ad-406b-bc28-5be5b879ba20-kube-api-access-vjmlz\") pod \"3a40d14a-19ad-406b-bc28-5be5b879ba20\" (UID: \"3a40d14a-19ad-406b-bc28-5be5b879ba20\") " Feb 27 16:27:59 crc kubenswrapper[4830]: I0227 16:27:59.224475 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a40d14a-19ad-406b-bc28-5be5b879ba20-config\") pod \"3a40d14a-19ad-406b-bc28-5be5b879ba20\" (UID: \"3a40d14a-19ad-406b-bc28-5be5b879ba20\") " Feb 27 16:27:59 crc kubenswrapper[4830]: I0227 16:27:59.224755 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a40d14a-19ad-406b-bc28-5be5b879ba20-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3a40d14a-19ad-406b-bc28-5be5b879ba20" (UID: "3a40d14a-19ad-406b-bc28-5be5b879ba20"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:27:59 crc kubenswrapper[4830]: I0227 16:27:59.224825 4830 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/80f367ea-86aa-4385-b62c-35fd25a2355f-config\") on node \"crc\" DevicePath \"\"" Feb 27 16:27:59 crc kubenswrapper[4830]: I0227 16:27:59.224845 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f9f2n\" (UniqueName: \"kubernetes.io/projected/80f367ea-86aa-4385-b62c-35fd25a2355f-kube-api-access-f9f2n\") on node \"crc\" DevicePath \"\"" Feb 27 16:27:59 crc kubenswrapper[4830]: I0227 16:27:59.225117 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a40d14a-19ad-406b-bc28-5be5b879ba20-config" (OuterVolumeSpecName: "config") pod "3a40d14a-19ad-406b-bc28-5be5b879ba20" (UID: "3a40d14a-19ad-406b-bc28-5be5b879ba20"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:27:59 crc kubenswrapper[4830]: I0227 16:27:59.227084 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a40d14a-19ad-406b-bc28-5be5b879ba20-kube-api-access-vjmlz" (OuterVolumeSpecName: "kube-api-access-vjmlz") pod "3a40d14a-19ad-406b-bc28-5be5b879ba20" (UID: "3a40d14a-19ad-406b-bc28-5be5b879ba20"). InnerVolumeSpecName "kube-api-access-vjmlz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:27:59 crc kubenswrapper[4830]: I0227 16:27:59.326241 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vjmlz\" (UniqueName: \"kubernetes.io/projected/3a40d14a-19ad-406b-bc28-5be5b879ba20-kube-api-access-vjmlz\") on node \"crc\" DevicePath \"\"" Feb 27 16:27:59 crc kubenswrapper[4830]: I0227 16:27:59.326268 4830 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a40d14a-19ad-406b-bc28-5be5b879ba20-config\") on node \"crc\" DevicePath \"\"" Feb 27 16:27:59 crc kubenswrapper[4830]: I0227 16:27:59.326277 4830 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3a40d14a-19ad-406b-bc28-5be5b879ba20-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 27 16:27:59 crc kubenswrapper[4830]: I0227 16:27:59.436774 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-mncqx"] Feb 27 16:27:59 crc kubenswrapper[4830]: I0227 16:27:59.442297 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 27 16:27:59 crc kubenswrapper[4830]: W0227 16:27:59.453458 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbf33c958_d345_4a0b_a2d8_7c8aedfb5cf3.slice/crio-cfd42446c7904e4ee2b3cc8caf83bb44f68fa23e91c0df8dccd39789d5275b09 WatchSource:0}: Error finding container cfd42446c7904e4ee2b3cc8caf83bb44f68fa23e91c0df8dccd39789d5275b09: Status 404 returned error can't find the container with id cfd42446c7904e4ee2b3cc8caf83bb44f68fa23e91c0df8dccd39789d5275b09 Feb 27 16:27:59 crc kubenswrapper[4830]: I0227 16:27:59.565357 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-7vqj6"] Feb 27 16:27:59 crc kubenswrapper[4830]: W0227 16:27:59.569193 4830 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7285a360_7ff1_4e35_b91a_d472a0ee591b.slice/crio-49bf1f87f98ae2644a84087142c3c92892d9fad6b91ed15bc982a4c0b71e5a49 WatchSource:0}: Error finding container 49bf1f87f98ae2644a84087142c3c92892d9fad6b91ed15bc982a4c0b71e5a49: Status 404 returned error can't find the container with id 49bf1f87f98ae2644a84087142c3c92892d9fad6b91ed15bc982a4c0b71e5a49 Feb 27 16:27:59 crc kubenswrapper[4830]: I0227 16:27:59.572563 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-7vqj6"] Feb 27 16:27:59 crc kubenswrapper[4830]: I0227 16:27:59.580630 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 27 16:28:00 crc kubenswrapper[4830]: I0227 16:28:00.156447 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536828-8jpfz"] Feb 27 16:28:00 crc kubenswrapper[4830]: I0227 16:28:00.158052 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536828-8jpfz" Feb 27 16:28:00 crc kubenswrapper[4830]: I0227 16:28:00.161012 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lmbtm" Feb 27 16:28:00 crc kubenswrapper[4830]: I0227 16:28:00.161124 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 16:28:00 crc kubenswrapper[4830]: I0227 16:28:00.161399 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 16:28:00 crc kubenswrapper[4830]: I0227 16:28:00.173501 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536828-8jpfz"] Feb 27 16:28:00 crc kubenswrapper[4830]: I0227 16:28:00.184055 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"7285a360-7ff1-4e35-b91a-d472a0ee591b","Type":"ContainerStarted","Data":"49bf1f87f98ae2644a84087142c3c92892d9fad6b91ed15bc982a4c0b71e5a49"} Feb 27 16:28:00 crc kubenswrapper[4830]: I0227 16:28:00.185348 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"bf33c958-d345-4a0b-a2d8-7c8aedfb5cf3","Type":"ContainerStarted","Data":"cfd42446c7904e4ee2b3cc8caf83bb44f68fa23e91c0df8dccd39789d5275b09"} Feb 27 16:28:00 crc kubenswrapper[4830]: I0227 16:28:00.187303 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"aa5b7bdd-50bb-4123-a32a-0c7e97035a3f","Type":"ContainerStarted","Data":"aea522c2ecab41c50d2a7430cd094093e90f5bf0a044bc4b659d102558a7db55"} Feb 27 16:28:00 crc kubenswrapper[4830]: I0227 16:28:00.189188 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-mncqx" 
event={"ID":"2eacdbe3-1c63-4811-a7ea-5dc6fd8dce60","Type":"ContainerStarted","Data":"76bb760f76d65ac29dbbac945a7c3f50503139f52918e3dcc5f430bb0fd782bc"} Feb 27 16:28:00 crc kubenswrapper[4830]: I0227 16:28:00.208372 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 27 16:28:00 crc kubenswrapper[4830]: W0227 16:28:00.224635 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb63af300_2b1c_47a7_ae1d_1334deeefdb1.slice/crio-b7a67994406dc1ea6f1f20f4e7e5d5e87710cb482538e09640f4bf18261843b5 WatchSource:0}: Error finding container b7a67994406dc1ea6f1f20f4e7e5d5e87710cb482538e09640f4bf18261843b5: Status 404 returned error can't find the container with id b7a67994406dc1ea6f1f20f4e7e5d5e87710cb482538e09640f4bf18261843b5 Feb 27 16:28:00 crc kubenswrapper[4830]: I0227 16:28:00.357378 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9t4mf\" (UniqueName: \"kubernetes.io/projected/ea3b2c46-c98b-4cf7-b7a1-0a7dfe22cc63-kube-api-access-9t4mf\") pod \"auto-csr-approver-29536828-8jpfz\" (UID: \"ea3b2c46-c98b-4cf7-b7a1-0a7dfe22cc63\") " pod="openshift-infra/auto-csr-approver-29536828-8jpfz" Feb 27 16:28:00 crc kubenswrapper[4830]: I0227 16:28:00.360290 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 27 16:28:00 crc kubenswrapper[4830]: I0227 16:28:00.451068 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 27 16:28:00 crc kubenswrapper[4830]: W0227 16:28:00.456498 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9f17706c_2060_4191_b63a_df7dea2c4c95.slice/crio-00f79ecb78dd4a17bddeadf9a166b9472a51ed8ecdd2c84a404a74f15cdc18f4 WatchSource:0}: Error finding container 
00f79ecb78dd4a17bddeadf9a166b9472a51ed8ecdd2c84a404a74f15cdc18f4: Status 404 returned error can't find the container with id 00f79ecb78dd4a17bddeadf9a166b9472a51ed8ecdd2c84a404a74f15cdc18f4 Feb 27 16:28:00 crc kubenswrapper[4830]: I0227 16:28:00.458816 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9t4mf\" (UniqueName: \"kubernetes.io/projected/ea3b2c46-c98b-4cf7-b7a1-0a7dfe22cc63-kube-api-access-9t4mf\") pod \"auto-csr-approver-29536828-8jpfz\" (UID: \"ea3b2c46-c98b-4cf7-b7a1-0a7dfe22cc63\") " pod="openshift-infra/auto-csr-approver-29536828-8jpfz" Feb 27 16:28:00 crc kubenswrapper[4830]: I0227 16:28:00.481458 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9t4mf\" (UniqueName: \"kubernetes.io/projected/ea3b2c46-c98b-4cf7-b7a1-0a7dfe22cc63-kube-api-access-9t4mf\") pod \"auto-csr-approver-29536828-8jpfz\" (UID: \"ea3b2c46-c98b-4cf7-b7a1-0a7dfe22cc63\") " pod="openshift-infra/auto-csr-approver-29536828-8jpfz" Feb 27 16:28:00 crc kubenswrapper[4830]: I0227 16:28:00.484194 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536828-8jpfz" Feb 27 16:28:00 crc kubenswrapper[4830]: I0227 16:28:00.544537 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-qt6mr"] Feb 27 16:28:00 crc kubenswrapper[4830]: I0227 16:28:00.770004 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a40d14a-19ad-406b-bc28-5be5b879ba20" path="/var/lib/kubelet/pods/3a40d14a-19ad-406b-bc28-5be5b879ba20/volumes" Feb 27 16:28:00 crc kubenswrapper[4830]: I0227 16:28:00.770377 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80f367ea-86aa-4385-b62c-35fd25a2355f" path="/var/lib/kubelet/pods/80f367ea-86aa-4385-b62c-35fd25a2355f/volumes" Feb 27 16:28:00 crc kubenswrapper[4830]: I0227 16:28:00.886705 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536828-8jpfz"] Feb 27 16:28:00 crc kubenswrapper[4830]: W0227 16:28:00.897053 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podea3b2c46_c98b_4cf7_b7a1_0a7dfe22cc63.slice/crio-5eeff5f088f246d6702de38b6d1d4f6bb60d6a193c2e2444a04fe0847890aa0e WatchSource:0}: Error finding container 5eeff5f088f246d6702de38b6d1d4f6bb60d6a193c2e2444a04fe0847890aa0e: Status 404 returned error can't find the container with id 5eeff5f088f246d6702de38b6d1d4f6bb60d6a193c2e2444a04fe0847890aa0e Feb 27 16:28:01 crc kubenswrapper[4830]: I0227 16:28:01.198769 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536828-8jpfz" event={"ID":"ea3b2c46-c98b-4cf7-b7a1-0a7dfe22cc63","Type":"ContainerStarted","Data":"5eeff5f088f246d6702de38b6d1d4f6bb60d6a193c2e2444a04fe0847890aa0e"} Feb 27 16:28:01 crc kubenswrapper[4830]: I0227 16:28:01.202525 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-qt6mr" 
event={"ID":"bc737ee4-d87c-4276-a6d1-6f3144879542","Type":"ContainerStarted","Data":"55e9b8ebb3a52da47ce3bb0f86fc446908427f7967fa006357a03cd8be4789b9"} Feb 27 16:28:01 crc kubenswrapper[4830]: I0227 16:28:01.204439 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"47514135-95a6-4b77-815a-ebf23a3cab82","Type":"ContainerStarted","Data":"5a4ec36b1a76d0a19cb17b92fc8ea7c7d1d244acdec968ae755d558d3eadddc7"} Feb 27 16:28:01 crc kubenswrapper[4830]: I0227 16:28:01.206208 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"4627a6ad-d0c1-4e72-9090-3ed47a060c24","Type":"ContainerStarted","Data":"75df5a7f07d1ff3fee1c155c7a5ec6bec4d204132d3f0a4ac9f4c73374d43908"} Feb 27 16:28:01 crc kubenswrapper[4830]: I0227 16:28:01.208761 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"9f17706c-2060-4191-b63a-df7dea2c4c95","Type":"ContainerStarted","Data":"00f79ecb78dd4a17bddeadf9a166b9472a51ed8ecdd2c84a404a74f15cdc18f4"} Feb 27 16:28:01 crc kubenswrapper[4830]: I0227 16:28:01.210751 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"b63af300-2b1c-47a7-ae1d-1334deeefdb1","Type":"ContainerStarted","Data":"b7a67994406dc1ea6f1f20f4e7e5d5e87710cb482538e09640f4bf18261843b5"} Feb 27 16:28:08 crc kubenswrapper[4830]: I0227 16:28:08.275931 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"4627a6ad-d0c1-4e72-9090-3ed47a060c24","Type":"ContainerStarted","Data":"ffd8687317c4fccae792a8d33e62b6e1f2b8b993f9c6bb0e24dc3a98f16c4c50"} Feb 27 16:28:08 crc kubenswrapper[4830]: I0227 16:28:08.277297 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 27 16:28:08 crc kubenswrapper[4830]: I0227 16:28:08.280464 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovsdbserver-nb-0" event={"ID":"9f17706c-2060-4191-b63a-df7dea2c4c95","Type":"ContainerStarted","Data":"6ec8f1e6a925dda75bf2b25d6d091880ed805d81e677fbee45551ce4d31bc846"} Feb 27 16:28:08 crc kubenswrapper[4830]: I0227 16:28:08.282651 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"bf33c958-d345-4a0b-a2d8-7c8aedfb5cf3","Type":"ContainerStarted","Data":"1b96ec56ecc45649c019ca46229cb367a2a6fcf878e737c27d2446d8365254f8"} Feb 27 16:28:08 crc kubenswrapper[4830]: I0227 16:28:08.285697 4830 generic.go:334] "Generic (PLEG): container finished" podID="ea3b2c46-c98b-4cf7-b7a1-0a7dfe22cc63" containerID="cb84941fad9c3a38a9d12732b8e29c8e9b49915990ba8ad56e2677abfe635ad9" exitCode=0 Feb 27 16:28:08 crc kubenswrapper[4830]: I0227 16:28:08.285784 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536828-8jpfz" event={"ID":"ea3b2c46-c98b-4cf7-b7a1-0a7dfe22cc63","Type":"ContainerDied","Data":"cb84941fad9c3a38a9d12732b8e29c8e9b49915990ba8ad56e2677abfe635ad9"} Feb 27 16:28:08 crc kubenswrapper[4830]: I0227 16:28:08.288246 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-mncqx" event={"ID":"2eacdbe3-1c63-4811-a7ea-5dc6fd8dce60","Type":"ContainerStarted","Data":"37bbcf553bf7957f279c1d2e295e8937f67d4c1dc6186d8ebb25e160632ad917"} Feb 27 16:28:08 crc kubenswrapper[4830]: I0227 16:28:08.302477 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-mncqx" Feb 27 16:28:08 crc kubenswrapper[4830]: I0227 16:28:08.302757 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"b63af300-2b1c-47a7-ae1d-1334deeefdb1","Type":"ContainerStarted","Data":"5d8587b51be5ddb11f190a631ac9ccd9976c6c15ea332cdd922d4924a56f8686"} Feb 27 16:28:08 crc kubenswrapper[4830]: I0227 16:28:08.304837 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/kube-state-metrics-0" podStartSLOduration=18.981956226 podStartE2EDuration="26.304823886s" podCreationTimestamp="2026-02-27 16:27:42 +0000 UTC" firstStartedPulling="2026-02-27 16:28:00.369910401 +0000 UTC m=+1276.459182864" lastFinishedPulling="2026-02-27 16:28:07.692778061 +0000 UTC m=+1283.782050524" observedRunningTime="2026-02-27 16:28:08.292937375 +0000 UTC m=+1284.382209838" watchObservedRunningTime="2026-02-27 16:28:08.304823886 +0000 UTC m=+1284.394096349" Feb 27 16:28:08 crc kubenswrapper[4830]: I0227 16:28:08.317435 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"7285a360-7ff1-4e35-b91a-d472a0ee591b","Type":"ContainerStarted","Data":"5618df31dec13a8fa8c264acbc16b8fc53b1c9f9523f6216c8bce6be25fbacb1"} Feb 27 16:28:08 crc kubenswrapper[4830]: I0227 16:28:08.320125 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"eb3cdab6-15fa-40e1-a187-e277086227da","Type":"ContainerStarted","Data":"1d243201cb634428da46e5d01d1c419016026f2c349204898c21d5e7060a1280"} Feb 27 16:28:08 crc kubenswrapper[4830]: I0227 16:28:08.320810 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Feb 27 16:28:08 crc kubenswrapper[4830]: I0227 16:28:08.321444 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-mncqx" podStartSLOduration=13.733171113000001 podStartE2EDuration="21.321426334s" podCreationTimestamp="2026-02-27 16:27:47 +0000 UTC" firstStartedPulling="2026-02-27 16:27:59.451155321 +0000 UTC m=+1275.540427824" lastFinishedPulling="2026-02-27 16:28:07.039410542 +0000 UTC m=+1283.128683045" observedRunningTime="2026-02-27 16:28:08.31436361 +0000 UTC m=+1284.403636083" watchObservedRunningTime="2026-02-27 16:28:08.321426334 +0000 UTC m=+1284.410698817" Feb 27 16:28:08 crc kubenswrapper[4830]: I0227 16:28:08.322897 4830 generic.go:334] "Generic (PLEG): container finished" 
podID="bc737ee4-d87c-4276-a6d1-6f3144879542" containerID="75f105c69a81a404a85e4253f51be6a0844b8fa41fe1407a258ae3b5998a42f6" exitCode=0 Feb 27 16:28:08 crc kubenswrapper[4830]: I0227 16:28:08.322931 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-qt6mr" event={"ID":"bc737ee4-d87c-4276-a6d1-6f3144879542","Type":"ContainerDied","Data":"75f105c69a81a404a85e4253f51be6a0844b8fa41fe1407a258ae3b5998a42f6"} Feb 27 16:28:08 crc kubenswrapper[4830]: I0227 16:28:08.393033 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=20.829515346 podStartE2EDuration="28.39301844s" podCreationTimestamp="2026-02-27 16:27:40 +0000 UTC" firstStartedPulling="2026-02-27 16:27:58.875098698 +0000 UTC m=+1274.964371161" lastFinishedPulling="2026-02-27 16:28:06.438601762 +0000 UTC m=+1282.527874255" observedRunningTime="2026-02-27 16:28:08.368886948 +0000 UTC m=+1284.458159411" watchObservedRunningTime="2026-02-27 16:28:08.39301844 +0000 UTC m=+1284.482290903" Feb 27 16:28:09 crc kubenswrapper[4830]: I0227 16:28:09.340228 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-qt6mr" event={"ID":"bc737ee4-d87c-4276-a6d1-6f3144879542","Type":"ContainerStarted","Data":"4a66491103ecf784427f1721d3379810efd654720c7344f0ebbf5e10bc7a1b3d"} Feb 27 16:28:09 crc kubenswrapper[4830]: I0227 16:28:09.340513 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-qt6mr" event={"ID":"bc737ee4-d87c-4276-a6d1-6f3144879542","Type":"ContainerStarted","Data":"6dd6bce0125fbdcdd86f5eb9074b21ac3c2ba5ef9063bcabe41f111d338472f5"} Feb 27 16:28:09 crc kubenswrapper[4830]: I0227 16:28:09.371632 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-qt6mr" podStartSLOduration=16.438979155 podStartE2EDuration="22.371609638s" podCreationTimestamp="2026-02-27 16:27:47 +0000 UTC" 
firstStartedPulling="2026-02-27 16:28:00.550557413 +0000 UTC m=+1276.639829876" lastFinishedPulling="2026-02-27 16:28:06.483187886 +0000 UTC m=+1282.572460359" observedRunningTime="2026-02-27 16:28:09.366055701 +0000 UTC m=+1285.455328164" watchObservedRunningTime="2026-02-27 16:28:09.371609638 +0000 UTC m=+1285.460882101" Feb 27 16:28:09 crc kubenswrapper[4830]: I0227 16:28:09.686520 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536828-8jpfz" Feb 27 16:28:09 crc kubenswrapper[4830]: I0227 16:28:09.836067 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9t4mf\" (UniqueName: \"kubernetes.io/projected/ea3b2c46-c98b-4cf7-b7a1-0a7dfe22cc63-kube-api-access-9t4mf\") pod \"ea3b2c46-c98b-4cf7-b7a1-0a7dfe22cc63\" (UID: \"ea3b2c46-c98b-4cf7-b7a1-0a7dfe22cc63\") " Feb 27 16:28:09 crc kubenswrapper[4830]: I0227 16:28:09.845210 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea3b2c46-c98b-4cf7-b7a1-0a7dfe22cc63-kube-api-access-9t4mf" (OuterVolumeSpecName: "kube-api-access-9t4mf") pod "ea3b2c46-c98b-4cf7-b7a1-0a7dfe22cc63" (UID: "ea3b2c46-c98b-4cf7-b7a1-0a7dfe22cc63"). InnerVolumeSpecName "kube-api-access-9t4mf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:28:09 crc kubenswrapper[4830]: I0227 16:28:09.940784 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9t4mf\" (UniqueName: \"kubernetes.io/projected/ea3b2c46-c98b-4cf7-b7a1-0a7dfe22cc63-kube-api-access-9t4mf\") on node \"crc\" DevicePath \"\"" Feb 27 16:28:10 crc kubenswrapper[4830]: I0227 16:28:10.351595 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536828-8jpfz" event={"ID":"ea3b2c46-c98b-4cf7-b7a1-0a7dfe22cc63","Type":"ContainerDied","Data":"5eeff5f088f246d6702de38b6d1d4f6bb60d6a193c2e2444a04fe0847890aa0e"} Feb 27 16:28:10 crc kubenswrapper[4830]: I0227 16:28:10.351675 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5eeff5f088f246d6702de38b6d1d4f6bb60d6a193c2e2444a04fe0847890aa0e" Feb 27 16:28:10 crc kubenswrapper[4830]: I0227 16:28:10.351873 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536828-8jpfz" Feb 27 16:28:10 crc kubenswrapper[4830]: I0227 16:28:10.353304 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-qt6mr" Feb 27 16:28:10 crc kubenswrapper[4830]: I0227 16:28:10.353370 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-qt6mr" Feb 27 16:28:10 crc kubenswrapper[4830]: I0227 16:28:10.772913 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536822-xbmsc"] Feb 27 16:28:10 crc kubenswrapper[4830]: I0227 16:28:10.774645 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536822-xbmsc"] Feb 27 16:28:12 crc kubenswrapper[4830]: I0227 16:28:12.390680 4830 generic.go:334] "Generic (PLEG): container finished" podID="b63af300-2b1c-47a7-ae1d-1334deeefdb1" 
containerID="5d8587b51be5ddb11f190a631ac9ccd9976c6c15ea332cdd922d4924a56f8686" exitCode=0 Feb 27 16:28:12 crc kubenswrapper[4830]: I0227 16:28:12.390800 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"b63af300-2b1c-47a7-ae1d-1334deeefdb1","Type":"ContainerDied","Data":"5d8587b51be5ddb11f190a631ac9ccd9976c6c15ea332cdd922d4924a56f8686"} Feb 27 16:28:12 crc kubenswrapper[4830]: I0227 16:28:12.400361 4830 generic.go:334] "Generic (PLEG): container finished" podID="bf33c958-d345-4a0b-a2d8-7c8aedfb5cf3" containerID="1b96ec56ecc45649c019ca46229cb367a2a6fcf878e737c27d2446d8365254f8" exitCode=0 Feb 27 16:28:12 crc kubenswrapper[4830]: I0227 16:28:12.400442 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"bf33c958-d345-4a0b-a2d8-7c8aedfb5cf3","Type":"ContainerDied","Data":"1b96ec56ecc45649c019ca46229cb367a2a6fcf878e737c27d2446d8365254f8"} Feb 27 16:28:12 crc kubenswrapper[4830]: I0227 16:28:12.458672 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=10.911599377 podStartE2EDuration="23.458644771s" podCreationTimestamp="2026-02-27 16:27:49 +0000 UTC" firstStartedPulling="2026-02-27 16:27:59.57218095 +0000 UTC m=+1275.661453423" lastFinishedPulling="2026-02-27 16:28:12.119226314 +0000 UTC m=+1288.208498817" observedRunningTime="2026-02-27 16:28:12.450233685 +0000 UTC m=+1288.539506148" watchObservedRunningTime="2026-02-27 16:28:12.458644771 +0000 UTC m=+1288.547917284" Feb 27 16:28:12 crc kubenswrapper[4830]: I0227 16:28:12.487690 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=15.796294472 podStartE2EDuration="27.487669304s" podCreationTimestamp="2026-02-27 16:27:45 +0000 UTC" firstStartedPulling="2026-02-27 16:28:00.45998686 +0000 UTC m=+1276.549259323" lastFinishedPulling="2026-02-27 16:28:12.151361652 +0000 
UTC m=+1288.240634155" observedRunningTime="2026-02-27 16:28:12.479993175 +0000 UTC m=+1288.569265648" watchObservedRunningTime="2026-02-27 16:28:12.487669304 +0000 UTC m=+1288.576941777" Feb 27 16:28:12 crc kubenswrapper[4830]: I0227 16:28:12.778643 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="275d93b7-6091-41c7-98d8-7a7a67d6f043" path="/var/lib/kubelet/pods/275d93b7-6091-41c7-98d8-7a7a67d6f043/volumes" Feb 27 16:28:13 crc kubenswrapper[4830]: I0227 16:28:13.208843 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 27 16:28:13 crc kubenswrapper[4830]: I0227 16:28:13.417755 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"b63af300-2b1c-47a7-ae1d-1334deeefdb1","Type":"ContainerStarted","Data":"58b3931eed123fb0912adbb48ae5835fb65012c51cabfe8279f65b2fb158c0e1"} Feb 27 16:28:13 crc kubenswrapper[4830]: I0227 16:28:13.421501 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"7285a360-7ff1-4e35-b91a-d472a0ee591b","Type":"ContainerStarted","Data":"03fae1fb8e9a6d2c747afacdabeb6fc5b1752527700bbfdf259b9f15c3429baa"} Feb 27 16:28:13 crc kubenswrapper[4830]: I0227 16:28:13.431030 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"bf33c958-d345-4a0b-a2d8-7c8aedfb5cf3","Type":"ContainerStarted","Data":"68dcbd84b2ee99bb92f47d75adccd5e677bcf1de6646eeea5b827c8e802fad81"} Feb 27 16:28:13 crc kubenswrapper[4830]: I0227 16:28:13.432759 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"9f17706c-2060-4191-b63a-df7dea2c4c95","Type":"ContainerStarted","Data":"aef48ea8d72edf5f1504d9101a6b5d6f742a96bb0bdea5a1647ced04e0be6ed1"} Feb 27 16:28:13 crc kubenswrapper[4830]: I0227 16:28:13.454768 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/openstack-cell1-galera-0" podStartSLOduration=27.803692899 podStartE2EDuration="34.454753048s" podCreationTimestamp="2026-02-27 16:27:39 +0000 UTC" firstStartedPulling="2026-02-27 16:28:00.229429905 +0000 UTC m=+1276.318702388" lastFinishedPulling="2026-02-27 16:28:06.880490064 +0000 UTC m=+1282.969762537" observedRunningTime="2026-02-27 16:28:13.438034028 +0000 UTC m=+1289.527306521" watchObservedRunningTime="2026-02-27 16:28:13.454753048 +0000 UTC m=+1289.544025511" Feb 27 16:28:13 crc kubenswrapper[4830]: I0227 16:28:13.472825 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=28.449722795 podStartE2EDuration="35.472808331s" podCreationTimestamp="2026-02-27 16:27:38 +0000 UTC" firstStartedPulling="2026-02-27 16:27:59.460319416 +0000 UTC m=+1275.549591879" lastFinishedPulling="2026-02-27 16:28:06.483404912 +0000 UTC m=+1282.572677415" observedRunningTime="2026-02-27 16:28:13.470770302 +0000 UTC m=+1289.560042765" watchObservedRunningTime="2026-02-27 16:28:13.472808331 +0000 UTC m=+1289.562080794" Feb 27 16:28:13 crc kubenswrapper[4830]: I0227 16:28:13.847030 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Feb 27 16:28:13 crc kubenswrapper[4830]: I0227 16:28:13.901577 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Feb 27 16:28:14 crc kubenswrapper[4830]: I0227 16:28:14.406930 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Feb 27 16:28:14 crc kubenswrapper[4830]: I0227 16:28:14.447375 4830 generic.go:334] "Generic (PLEG): container finished" podID="0800fc83-7606-4be1-8a04-aab5b8226a0c" containerID="f9937c68c2876e1a07e8537a17025ca71c90da298a92642f2d67a01e8e039a92" exitCode=0 Feb 27 16:28:14 crc kubenswrapper[4830]: I0227 16:28:14.447528 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-5ccc8479f9-f5dpx" event={"ID":"0800fc83-7606-4be1-8a04-aab5b8226a0c","Type":"ContainerDied","Data":"f9937c68c2876e1a07e8537a17025ca71c90da298a92642f2d67a01e8e039a92"} Feb 27 16:28:14 crc kubenswrapper[4830]: I0227 16:28:14.451045 4830 generic.go:334] "Generic (PLEG): container finished" podID="ebbe238c-4f40-46a8-b549-b9b0ae97fb82" containerID="ee561560d393480957a5f923d11840252443c08568992587a34ef179e28cdaec" exitCode=0 Feb 27 16:28:14 crc kubenswrapper[4830]: I0227 16:28:14.451116 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-b4knd" event={"ID":"ebbe238c-4f40-46a8-b549-b9b0ae97fb82","Type":"ContainerDied","Data":"ee561560d393480957a5f923d11840252443c08568992587a34ef179e28cdaec"} Feb 27 16:28:14 crc kubenswrapper[4830]: I0227 16:28:14.451844 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Feb 27 16:28:14 crc kubenswrapper[4830]: I0227 16:28:14.494674 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Feb 27 16:28:14 crc kubenswrapper[4830]: I0227 16:28:14.529199 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Feb 27 16:28:14 crc kubenswrapper[4830]: I0227 16:28:14.808441 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-b4knd"] Feb 27 16:28:14 crc kubenswrapper[4830]: I0227 16:28:14.834288 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-zbxr4"] Feb 27 16:28:14 crc kubenswrapper[4830]: E0227 16:28:14.834600 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea3b2c46-c98b-4cf7-b7a1-0a7dfe22cc63" containerName="oc" Feb 27 16:28:14 crc kubenswrapper[4830]: I0227 16:28:14.834617 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea3b2c46-c98b-4cf7-b7a1-0a7dfe22cc63" containerName="oc" Feb 27 16:28:14 crc 
kubenswrapper[4830]: I0227 16:28:14.834808 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea3b2c46-c98b-4cf7-b7a1-0a7dfe22cc63" containerName="oc" Feb 27 16:28:14 crc kubenswrapper[4830]: I0227 16:28:14.835607 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-zbxr4" Feb 27 16:28:14 crc kubenswrapper[4830]: I0227 16:28:14.840195 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Feb 27 16:28:14 crc kubenswrapper[4830]: I0227 16:28:14.859059 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-zbxr4"] Feb 27 16:28:14 crc kubenswrapper[4830]: I0227 16:28:14.888761 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-mtj7r"] Feb 27 16:28:14 crc kubenswrapper[4830]: I0227 16:28:14.889765 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-mtj7r" Feb 27 16:28:14 crc kubenswrapper[4830]: I0227 16:28:14.894373 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Feb 27 16:28:14 crc kubenswrapper[4830]: I0227 16:28:14.915345 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-mtj7r"] Feb 27 16:28:14 crc kubenswrapper[4830]: I0227 16:28:14.932053 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b64de41e-9e05-48b2-87e5-387aad57532a-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-mtj7r\" (UID: \"b64de41e-9e05-48b2-87e5-387aad57532a\") " pod="openstack/ovn-controller-metrics-mtj7r" Feb 27 16:28:14 crc kubenswrapper[4830]: I0227 16:28:14.932125 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/6f384d75-651d-4e2b-9944-6df7727f9878-config\") pod \"dnsmasq-dns-5bf47b49b7-zbxr4\" (UID: \"6f384d75-651d-4e2b-9944-6df7727f9878\") " pod="openstack/dnsmasq-dns-5bf47b49b7-zbxr4" Feb 27 16:28:14 crc kubenswrapper[4830]: I0227 16:28:14.932161 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/b64de41e-9e05-48b2-87e5-387aad57532a-ovn-rundir\") pod \"ovn-controller-metrics-mtj7r\" (UID: \"b64de41e-9e05-48b2-87e5-387aad57532a\") " pod="openstack/ovn-controller-metrics-mtj7r" Feb 27 16:28:14 crc kubenswrapper[4830]: I0227 16:28:14.932186 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5dsc\" (UniqueName: \"kubernetes.io/projected/6f384d75-651d-4e2b-9944-6df7727f9878-kube-api-access-d5dsc\") pod \"dnsmasq-dns-5bf47b49b7-zbxr4\" (UID: \"6f384d75-651d-4e2b-9944-6df7727f9878\") " pod="openstack/dnsmasq-dns-5bf47b49b7-zbxr4" Feb 27 16:28:14 crc kubenswrapper[4830]: I0227 16:28:14.932207 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6f384d75-651d-4e2b-9944-6df7727f9878-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-zbxr4\" (UID: \"6f384d75-651d-4e2b-9944-6df7727f9878\") " pod="openstack/dnsmasq-dns-5bf47b49b7-zbxr4" Feb 27 16:28:14 crc kubenswrapper[4830]: I0227 16:28:14.932224 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6f384d75-651d-4e2b-9944-6df7727f9878-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-zbxr4\" (UID: \"6f384d75-651d-4e2b-9944-6df7727f9878\") " pod="openstack/dnsmasq-dns-5bf47b49b7-zbxr4" Feb 27 16:28:14 crc kubenswrapper[4830]: I0227 16:28:14.932252 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/b64de41e-9e05-48b2-87e5-387aad57532a-config\") pod \"ovn-controller-metrics-mtj7r\" (UID: \"b64de41e-9e05-48b2-87e5-387aad57532a\") " pod="openstack/ovn-controller-metrics-mtj7r" Feb 27 16:28:14 crc kubenswrapper[4830]: I0227 16:28:14.932270 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqgvk\" (UniqueName: \"kubernetes.io/projected/b64de41e-9e05-48b2-87e5-387aad57532a-kube-api-access-sqgvk\") pod \"ovn-controller-metrics-mtj7r\" (UID: \"b64de41e-9e05-48b2-87e5-387aad57532a\") " pod="openstack/ovn-controller-metrics-mtj7r" Feb 27 16:28:14 crc kubenswrapper[4830]: I0227 16:28:14.932322 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b64de41e-9e05-48b2-87e5-387aad57532a-combined-ca-bundle\") pod \"ovn-controller-metrics-mtj7r\" (UID: \"b64de41e-9e05-48b2-87e5-387aad57532a\") " pod="openstack/ovn-controller-metrics-mtj7r" Feb 27 16:28:14 crc kubenswrapper[4830]: I0227 16:28:14.932384 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/b64de41e-9e05-48b2-87e5-387aad57532a-ovs-rundir\") pod \"ovn-controller-metrics-mtj7r\" (UID: \"b64de41e-9e05-48b2-87e5-387aad57532a\") " pod="openstack/ovn-controller-metrics-mtj7r" Feb 27 16:28:15 crc kubenswrapper[4830]: I0227 16:28:15.033593 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f384d75-651d-4e2b-9944-6df7727f9878-config\") pod \"dnsmasq-dns-5bf47b49b7-zbxr4\" (UID: \"6f384d75-651d-4e2b-9944-6df7727f9878\") " pod="openstack/dnsmasq-dns-5bf47b49b7-zbxr4" Feb 27 16:28:15 crc kubenswrapper[4830]: I0227 16:28:15.033644 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" 
(UniqueName: \"kubernetes.io/host-path/b64de41e-9e05-48b2-87e5-387aad57532a-ovn-rundir\") pod \"ovn-controller-metrics-mtj7r\" (UID: \"b64de41e-9e05-48b2-87e5-387aad57532a\") " pod="openstack/ovn-controller-metrics-mtj7r" Feb 27 16:28:15 crc kubenswrapper[4830]: I0227 16:28:15.033667 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5dsc\" (UniqueName: \"kubernetes.io/projected/6f384d75-651d-4e2b-9944-6df7727f9878-kube-api-access-d5dsc\") pod \"dnsmasq-dns-5bf47b49b7-zbxr4\" (UID: \"6f384d75-651d-4e2b-9944-6df7727f9878\") " pod="openstack/dnsmasq-dns-5bf47b49b7-zbxr4" Feb 27 16:28:15 crc kubenswrapper[4830]: I0227 16:28:15.033686 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6f384d75-651d-4e2b-9944-6df7727f9878-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-zbxr4\" (UID: \"6f384d75-651d-4e2b-9944-6df7727f9878\") " pod="openstack/dnsmasq-dns-5bf47b49b7-zbxr4" Feb 27 16:28:15 crc kubenswrapper[4830]: I0227 16:28:15.033707 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6f384d75-651d-4e2b-9944-6df7727f9878-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-zbxr4\" (UID: \"6f384d75-651d-4e2b-9944-6df7727f9878\") " pod="openstack/dnsmasq-dns-5bf47b49b7-zbxr4" Feb 27 16:28:15 crc kubenswrapper[4830]: I0227 16:28:15.033726 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b64de41e-9e05-48b2-87e5-387aad57532a-config\") pod \"ovn-controller-metrics-mtj7r\" (UID: \"b64de41e-9e05-48b2-87e5-387aad57532a\") " pod="openstack/ovn-controller-metrics-mtj7r" Feb 27 16:28:15 crc kubenswrapper[4830]: I0227 16:28:15.033744 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sqgvk\" (UniqueName: 
\"kubernetes.io/projected/b64de41e-9e05-48b2-87e5-387aad57532a-kube-api-access-sqgvk\") pod \"ovn-controller-metrics-mtj7r\" (UID: \"b64de41e-9e05-48b2-87e5-387aad57532a\") " pod="openstack/ovn-controller-metrics-mtj7r" Feb 27 16:28:15 crc kubenswrapper[4830]: I0227 16:28:15.033763 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b64de41e-9e05-48b2-87e5-387aad57532a-combined-ca-bundle\") pod \"ovn-controller-metrics-mtj7r\" (UID: \"b64de41e-9e05-48b2-87e5-387aad57532a\") " pod="openstack/ovn-controller-metrics-mtj7r" Feb 27 16:28:15 crc kubenswrapper[4830]: I0227 16:28:15.033807 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/b64de41e-9e05-48b2-87e5-387aad57532a-ovs-rundir\") pod \"ovn-controller-metrics-mtj7r\" (UID: \"b64de41e-9e05-48b2-87e5-387aad57532a\") " pod="openstack/ovn-controller-metrics-mtj7r" Feb 27 16:28:15 crc kubenswrapper[4830]: I0227 16:28:15.033850 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b64de41e-9e05-48b2-87e5-387aad57532a-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-mtj7r\" (UID: \"b64de41e-9e05-48b2-87e5-387aad57532a\") " pod="openstack/ovn-controller-metrics-mtj7r" Feb 27 16:28:15 crc kubenswrapper[4830]: I0227 16:28:15.034774 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/b64de41e-9e05-48b2-87e5-387aad57532a-ovs-rundir\") pod \"ovn-controller-metrics-mtj7r\" (UID: \"b64de41e-9e05-48b2-87e5-387aad57532a\") " pod="openstack/ovn-controller-metrics-mtj7r" Feb 27 16:28:15 crc kubenswrapper[4830]: I0227 16:28:15.034848 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: 
\"kubernetes.io/host-path/b64de41e-9e05-48b2-87e5-387aad57532a-ovn-rundir\") pod \"ovn-controller-metrics-mtj7r\" (UID: \"b64de41e-9e05-48b2-87e5-387aad57532a\") " pod="openstack/ovn-controller-metrics-mtj7r" Feb 27 16:28:15 crc kubenswrapper[4830]: I0227 16:28:15.035209 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6f384d75-651d-4e2b-9944-6df7727f9878-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-zbxr4\" (UID: \"6f384d75-651d-4e2b-9944-6df7727f9878\") " pod="openstack/dnsmasq-dns-5bf47b49b7-zbxr4" Feb 27 16:28:15 crc kubenswrapper[4830]: I0227 16:28:15.035263 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b64de41e-9e05-48b2-87e5-387aad57532a-config\") pod \"ovn-controller-metrics-mtj7r\" (UID: \"b64de41e-9e05-48b2-87e5-387aad57532a\") " pod="openstack/ovn-controller-metrics-mtj7r" Feb 27 16:28:15 crc kubenswrapper[4830]: I0227 16:28:15.035411 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6f384d75-651d-4e2b-9944-6df7727f9878-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-zbxr4\" (UID: \"6f384d75-651d-4e2b-9944-6df7727f9878\") " pod="openstack/dnsmasq-dns-5bf47b49b7-zbxr4" Feb 27 16:28:15 crc kubenswrapper[4830]: I0227 16:28:15.035421 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f384d75-651d-4e2b-9944-6df7727f9878-config\") pod \"dnsmasq-dns-5bf47b49b7-zbxr4\" (UID: \"6f384d75-651d-4e2b-9944-6df7727f9878\") " pod="openstack/dnsmasq-dns-5bf47b49b7-zbxr4" Feb 27 16:28:15 crc kubenswrapper[4830]: I0227 16:28:15.038294 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b64de41e-9e05-48b2-87e5-387aad57532a-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-mtj7r\" (UID: 
\"b64de41e-9e05-48b2-87e5-387aad57532a\") " pod="openstack/ovn-controller-metrics-mtj7r" Feb 27 16:28:15 crc kubenswrapper[4830]: I0227 16:28:15.038715 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b64de41e-9e05-48b2-87e5-387aad57532a-combined-ca-bundle\") pod \"ovn-controller-metrics-mtj7r\" (UID: \"b64de41e-9e05-48b2-87e5-387aad57532a\") " pod="openstack/ovn-controller-metrics-mtj7r" Feb 27 16:28:15 crc kubenswrapper[4830]: I0227 16:28:15.050886 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sqgvk\" (UniqueName: \"kubernetes.io/projected/b64de41e-9e05-48b2-87e5-387aad57532a-kube-api-access-sqgvk\") pod \"ovn-controller-metrics-mtj7r\" (UID: \"b64de41e-9e05-48b2-87e5-387aad57532a\") " pod="openstack/ovn-controller-metrics-mtj7r" Feb 27 16:28:15 crc kubenswrapper[4830]: I0227 16:28:15.052766 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5dsc\" (UniqueName: \"kubernetes.io/projected/6f384d75-651d-4e2b-9944-6df7727f9878-kube-api-access-d5dsc\") pod \"dnsmasq-dns-5bf47b49b7-zbxr4\" (UID: \"6f384d75-651d-4e2b-9944-6df7727f9878\") " pod="openstack/dnsmasq-dns-5bf47b49b7-zbxr4" Feb 27 16:28:15 crc kubenswrapper[4830]: I0227 16:28:15.150363 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-zbxr4" Feb 27 16:28:15 crc kubenswrapper[4830]: I0227 16:28:15.161106 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-f5dpx"] Feb 27 16:28:15 crc kubenswrapper[4830]: I0227 16:28:15.204003 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8554648995-mp6xh"] Feb 27 16:28:15 crc kubenswrapper[4830]: I0227 16:28:15.205310 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-mp6xh" Feb 27 16:28:15 crc kubenswrapper[4830]: I0227 16:28:15.207848 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Feb 27 16:28:15 crc kubenswrapper[4830]: I0227 16:28:15.217443 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-mtj7r" Feb 27 16:28:15 crc kubenswrapper[4830]: I0227 16:28:15.225132 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-mp6xh"] Feb 27 16:28:15 crc kubenswrapper[4830]: I0227 16:28:15.236345 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/abf16d54-1d80-400e-8da6-077a9b307708-config\") pod \"dnsmasq-dns-8554648995-mp6xh\" (UID: \"abf16d54-1d80-400e-8da6-077a9b307708\") " pod="openstack/dnsmasq-dns-8554648995-mp6xh" Feb 27 16:28:15 crc kubenswrapper[4830]: I0227 16:28:15.236402 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/abf16d54-1d80-400e-8da6-077a9b307708-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-mp6xh\" (UID: \"abf16d54-1d80-400e-8da6-077a9b307708\") " pod="openstack/dnsmasq-dns-8554648995-mp6xh" Feb 27 16:28:15 crc kubenswrapper[4830]: I0227 16:28:15.236553 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/abf16d54-1d80-400e-8da6-077a9b307708-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-mp6xh\" (UID: \"abf16d54-1d80-400e-8da6-077a9b307708\") " pod="openstack/dnsmasq-dns-8554648995-mp6xh" Feb 27 16:28:15 crc kubenswrapper[4830]: I0227 16:28:15.236674 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xks98\" (UniqueName: 
\"kubernetes.io/projected/abf16d54-1d80-400e-8da6-077a9b307708-kube-api-access-xks98\") pod \"dnsmasq-dns-8554648995-mp6xh\" (UID: \"abf16d54-1d80-400e-8da6-077a9b307708\") " pod="openstack/dnsmasq-dns-8554648995-mp6xh" Feb 27 16:28:15 crc kubenswrapper[4830]: I0227 16:28:15.236696 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/abf16d54-1d80-400e-8da6-077a9b307708-dns-svc\") pod \"dnsmasq-dns-8554648995-mp6xh\" (UID: \"abf16d54-1d80-400e-8da6-077a9b307708\") " pod="openstack/dnsmasq-dns-8554648995-mp6xh" Feb 27 16:28:15 crc kubenswrapper[4830]: I0227 16:28:15.343567 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/abf16d54-1d80-400e-8da6-077a9b307708-config\") pod \"dnsmasq-dns-8554648995-mp6xh\" (UID: \"abf16d54-1d80-400e-8da6-077a9b307708\") " pod="openstack/dnsmasq-dns-8554648995-mp6xh" Feb 27 16:28:15 crc kubenswrapper[4830]: I0227 16:28:15.344123 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/abf16d54-1d80-400e-8da6-077a9b307708-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-mp6xh\" (UID: \"abf16d54-1d80-400e-8da6-077a9b307708\") " pod="openstack/dnsmasq-dns-8554648995-mp6xh" Feb 27 16:28:15 crc kubenswrapper[4830]: I0227 16:28:15.344183 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/abf16d54-1d80-400e-8da6-077a9b307708-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-mp6xh\" (UID: \"abf16d54-1d80-400e-8da6-077a9b307708\") " pod="openstack/dnsmasq-dns-8554648995-mp6xh" Feb 27 16:28:15 crc kubenswrapper[4830]: I0227 16:28:15.344231 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xks98\" (UniqueName: 
\"kubernetes.io/projected/abf16d54-1d80-400e-8da6-077a9b307708-kube-api-access-xks98\") pod \"dnsmasq-dns-8554648995-mp6xh\" (UID: \"abf16d54-1d80-400e-8da6-077a9b307708\") " pod="openstack/dnsmasq-dns-8554648995-mp6xh" Feb 27 16:28:15 crc kubenswrapper[4830]: I0227 16:28:15.344246 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/abf16d54-1d80-400e-8da6-077a9b307708-dns-svc\") pod \"dnsmasq-dns-8554648995-mp6xh\" (UID: \"abf16d54-1d80-400e-8da6-077a9b307708\") " pod="openstack/dnsmasq-dns-8554648995-mp6xh" Feb 27 16:28:15 crc kubenswrapper[4830]: I0227 16:28:15.345189 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/abf16d54-1d80-400e-8da6-077a9b307708-config\") pod \"dnsmasq-dns-8554648995-mp6xh\" (UID: \"abf16d54-1d80-400e-8da6-077a9b307708\") " pod="openstack/dnsmasq-dns-8554648995-mp6xh" Feb 27 16:28:15 crc kubenswrapper[4830]: I0227 16:28:15.349142 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/abf16d54-1d80-400e-8da6-077a9b307708-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-mp6xh\" (UID: \"abf16d54-1d80-400e-8da6-077a9b307708\") " pod="openstack/dnsmasq-dns-8554648995-mp6xh" Feb 27 16:28:15 crc kubenswrapper[4830]: I0227 16:28:15.349362 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/abf16d54-1d80-400e-8da6-077a9b307708-dns-svc\") pod \"dnsmasq-dns-8554648995-mp6xh\" (UID: \"abf16d54-1d80-400e-8da6-077a9b307708\") " pod="openstack/dnsmasq-dns-8554648995-mp6xh" Feb 27 16:28:15 crc kubenswrapper[4830]: I0227 16:28:15.349393 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/abf16d54-1d80-400e-8da6-077a9b307708-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-mp6xh\" (UID: 
\"abf16d54-1d80-400e-8da6-077a9b307708\") " pod="openstack/dnsmasq-dns-8554648995-mp6xh" Feb 27 16:28:15 crc kubenswrapper[4830]: I0227 16:28:15.372399 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xks98\" (UniqueName: \"kubernetes.io/projected/abf16d54-1d80-400e-8da6-077a9b307708-kube-api-access-xks98\") pod \"dnsmasq-dns-8554648995-mp6xh\" (UID: \"abf16d54-1d80-400e-8da6-077a9b307708\") " pod="openstack/dnsmasq-dns-8554648995-mp6xh" Feb 27 16:28:15 crc kubenswrapper[4830]: I0227 16:28:15.406500 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Feb 27 16:28:15 crc kubenswrapper[4830]: I0227 16:28:15.458576 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Feb 27 16:28:15 crc kubenswrapper[4830]: I0227 16:28:15.463367 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-b4knd" event={"ID":"ebbe238c-4f40-46a8-b549-b9b0ae97fb82","Type":"ContainerStarted","Data":"0ca007d751ca916b44683981f64267af656c255f4e4123eb603b027d73f416af"} Feb 27 16:28:15 crc kubenswrapper[4830]: I0227 16:28:15.463494 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d769cc4f-b4knd" Feb 27 16:28:15 crc kubenswrapper[4830]: I0227 16:28:15.463500 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d769cc4f-b4knd" podUID="ebbe238c-4f40-46a8-b549-b9b0ae97fb82" containerName="dnsmasq-dns" containerID="cri-o://0ca007d751ca916b44683981f64267af656c255f4e4123eb603b027d73f416af" gracePeriod=10 Feb 27 16:28:15 crc kubenswrapper[4830]: I0227 16:28:15.468124 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-f5dpx" event={"ID":"0800fc83-7606-4be1-8a04-aab5b8226a0c","Type":"ContainerStarted","Data":"5d1b87569297a1bd470ba8ca9e4a299b273f2827077b7afc4b9b9e9535c0fbb8"} Feb 27 
16:28:15 crc kubenswrapper[4830]: I0227 16:28:15.469053 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5ccc8479f9-f5dpx" podUID="0800fc83-7606-4be1-8a04-aab5b8226a0c" containerName="dnsmasq-dns" containerID="cri-o://5d1b87569297a1bd470ba8ca9e4a299b273f2827077b7afc4b9b9e9535c0fbb8" gracePeriod=10 Feb 27 16:28:15 crc kubenswrapper[4830]: I0227 16:28:15.517107 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5ccc8479f9-f5dpx" podStartSLOduration=3.536184863 podStartE2EDuration="39.517085873s" podCreationTimestamp="2026-02-27 16:27:36 +0000 UTC" firstStartedPulling="2026-02-27 16:27:37.280623158 +0000 UTC m=+1253.369895621" lastFinishedPulling="2026-02-27 16:28:13.261524138 +0000 UTC m=+1289.350796631" observedRunningTime="2026-02-27 16:28:15.497222006 +0000 UTC m=+1291.586494469" watchObservedRunningTime="2026-02-27 16:28:15.517085873 +0000 UTC m=+1291.606358336" Feb 27 16:28:15 crc kubenswrapper[4830]: I0227 16:28:15.524495 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d769cc4f-b4knd" podStartSLOduration=3.600762317 podStartE2EDuration="39.524483824s" podCreationTimestamp="2026-02-27 16:27:36 +0000 UTC" firstStartedPulling="2026-02-27 16:27:37.426984919 +0000 UTC m=+1253.516257372" lastFinishedPulling="2026-02-27 16:28:13.350706406 +0000 UTC m=+1289.439978879" observedRunningTime="2026-02-27 16:28:15.516830357 +0000 UTC m=+1291.606102810" watchObservedRunningTime="2026-02-27 16:28:15.524483824 +0000 UTC m=+1291.613756287" Feb 27 16:28:15 crc kubenswrapper[4830]: I0227 16:28:15.548782 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-mp6xh" Feb 27 16:28:15 crc kubenswrapper[4830]: I0227 16:28:15.585759 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-mtj7r"] Feb 27 16:28:15 crc kubenswrapper[4830]: I0227 16:28:15.624349 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Feb 27 16:28:15 crc kubenswrapper[4830]: I0227 16:28:15.625431 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Feb 27 16:28:15 crc kubenswrapper[4830]: I0227 16:28:15.627975 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Feb 27 16:28:15 crc kubenswrapper[4830]: I0227 16:28:15.628000 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Feb 27 16:28:15 crc kubenswrapper[4830]: I0227 16:28:15.628129 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-c5rkt" Feb 27 16:28:15 crc kubenswrapper[4830]: I0227 16:28:15.628149 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Feb 27 16:28:15 crc kubenswrapper[4830]: I0227 16:28:15.666598 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 27 16:28:15 crc kubenswrapper[4830]: W0227 16:28:15.678173 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb64de41e_9e05_48b2_87e5_387aad57532a.slice/crio-6474c9f1bef2ad51145b280febe52f680adfa6000d6ca748d69a65cd5b075580 WatchSource:0}: Error finding container 6474c9f1bef2ad51145b280febe52f680adfa6000d6ca748d69a65cd5b075580: Status 404 returned error can't find the container with id 6474c9f1bef2ad51145b280febe52f680adfa6000d6ca748d69a65cd5b075580 Feb 27 16:28:15 crc kubenswrapper[4830]: I0227 16:28:15.687464 4830 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-zbxr4"] Feb 27 16:28:15 crc kubenswrapper[4830]: I0227 16:28:15.756159 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c017daa-cb8f-4629-80e6-a671a8455149-config\") pod \"ovn-northd-0\" (UID: \"7c017daa-cb8f-4629-80e6-a671a8455149\") " pod="openstack/ovn-northd-0" Feb 27 16:28:15 crc kubenswrapper[4830]: I0227 16:28:15.756207 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/7c017daa-cb8f-4629-80e6-a671a8455149-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"7c017daa-cb8f-4629-80e6-a671a8455149\") " pod="openstack/ovn-northd-0" Feb 27 16:28:15 crc kubenswrapper[4830]: I0227 16:28:15.756227 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7c017daa-cb8f-4629-80e6-a671a8455149-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"7c017daa-cb8f-4629-80e6-a671a8455149\") " pod="openstack/ovn-northd-0" Feb 27 16:28:15 crc kubenswrapper[4830]: I0227 16:28:15.756248 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7c017daa-cb8f-4629-80e6-a671a8455149-scripts\") pod \"ovn-northd-0\" (UID: \"7c017daa-cb8f-4629-80e6-a671a8455149\") " pod="openstack/ovn-northd-0" Feb 27 16:28:15 crc kubenswrapper[4830]: I0227 16:28:15.756380 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dcrzd\" (UniqueName: \"kubernetes.io/projected/7c017daa-cb8f-4629-80e6-a671a8455149-kube-api-access-dcrzd\") pod \"ovn-northd-0\" (UID: \"7c017daa-cb8f-4629-80e6-a671a8455149\") " pod="openstack/ovn-northd-0" Feb 27 16:28:15 crc 
kubenswrapper[4830]: I0227 16:28:15.756413 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c017daa-cb8f-4629-80e6-a671a8455149-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"7c017daa-cb8f-4629-80e6-a671a8455149\") " pod="openstack/ovn-northd-0" Feb 27 16:28:15 crc kubenswrapper[4830]: I0227 16:28:15.756434 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/7c017daa-cb8f-4629-80e6-a671a8455149-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"7c017daa-cb8f-4629-80e6-a671a8455149\") " pod="openstack/ovn-northd-0" Feb 27 16:28:15 crc kubenswrapper[4830]: I0227 16:28:15.859096 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c017daa-cb8f-4629-80e6-a671a8455149-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"7c017daa-cb8f-4629-80e6-a671a8455149\") " pod="openstack/ovn-northd-0" Feb 27 16:28:15 crc kubenswrapper[4830]: I0227 16:28:15.859712 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/7c017daa-cb8f-4629-80e6-a671a8455149-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"7c017daa-cb8f-4629-80e6-a671a8455149\") " pod="openstack/ovn-northd-0" Feb 27 16:28:15 crc kubenswrapper[4830]: I0227 16:28:15.859764 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c017daa-cb8f-4629-80e6-a671a8455149-config\") pod \"ovn-northd-0\" (UID: \"7c017daa-cb8f-4629-80e6-a671a8455149\") " pod="openstack/ovn-northd-0" Feb 27 16:28:15 crc kubenswrapper[4830]: I0227 16:28:15.859794 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/7c017daa-cb8f-4629-80e6-a671a8455149-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"7c017daa-cb8f-4629-80e6-a671a8455149\") " pod="openstack/ovn-northd-0" Feb 27 16:28:15 crc kubenswrapper[4830]: I0227 16:28:15.859823 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7c017daa-cb8f-4629-80e6-a671a8455149-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"7c017daa-cb8f-4629-80e6-a671a8455149\") " pod="openstack/ovn-northd-0" Feb 27 16:28:15 crc kubenswrapper[4830]: I0227 16:28:15.859838 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7c017daa-cb8f-4629-80e6-a671a8455149-scripts\") pod \"ovn-northd-0\" (UID: \"7c017daa-cb8f-4629-80e6-a671a8455149\") " pod="openstack/ovn-northd-0" Feb 27 16:28:15 crc kubenswrapper[4830]: I0227 16:28:15.860004 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dcrzd\" (UniqueName: \"kubernetes.io/projected/7c017daa-cb8f-4629-80e6-a671a8455149-kube-api-access-dcrzd\") pod \"ovn-northd-0\" (UID: \"7c017daa-cb8f-4629-80e6-a671a8455149\") " pod="openstack/ovn-northd-0" Feb 27 16:28:15 crc kubenswrapper[4830]: I0227 16:28:15.860895 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/7c017daa-cb8f-4629-80e6-a671a8455149-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"7c017daa-cb8f-4629-80e6-a671a8455149\") " pod="openstack/ovn-northd-0" Feb 27 16:28:15 crc kubenswrapper[4830]: I0227 16:28:15.861052 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c017daa-cb8f-4629-80e6-a671a8455149-config\") pod \"ovn-northd-0\" (UID: \"7c017daa-cb8f-4629-80e6-a671a8455149\") " pod="openstack/ovn-northd-0" Feb 27 16:28:15 crc kubenswrapper[4830]: I0227 
16:28:15.861305 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7c017daa-cb8f-4629-80e6-a671a8455149-scripts\") pod \"ovn-northd-0\" (UID: \"7c017daa-cb8f-4629-80e6-a671a8455149\") " pod="openstack/ovn-northd-0" Feb 27 16:28:15 crc kubenswrapper[4830]: I0227 16:28:15.869207 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7c017daa-cb8f-4629-80e6-a671a8455149-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"7c017daa-cb8f-4629-80e6-a671a8455149\") " pod="openstack/ovn-northd-0" Feb 27 16:28:15 crc kubenswrapper[4830]: I0227 16:28:15.872652 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c017daa-cb8f-4629-80e6-a671a8455149-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"7c017daa-cb8f-4629-80e6-a671a8455149\") " pod="openstack/ovn-northd-0" Feb 27 16:28:15 crc kubenswrapper[4830]: I0227 16:28:15.873339 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/7c017daa-cb8f-4629-80e6-a671a8455149-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"7c017daa-cb8f-4629-80e6-a671a8455149\") " pod="openstack/ovn-northd-0" Feb 27 16:28:15 crc kubenswrapper[4830]: I0227 16:28:15.876963 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dcrzd\" (UniqueName: \"kubernetes.io/projected/7c017daa-cb8f-4629-80e6-a671a8455149-kube-api-access-dcrzd\") pod \"ovn-northd-0\" (UID: \"7c017daa-cb8f-4629-80e6-a671a8455149\") " pod="openstack/ovn-northd-0" Feb 27 16:28:15 crc kubenswrapper[4830]: I0227 16:28:15.941477 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Feb 27 16:28:15 crc kubenswrapper[4830]: I0227 16:28:15.997174 4830 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/ovn-northd-0" Feb 27 16:28:16 crc kubenswrapper[4830]: I0227 16:28:16.081896 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-mp6xh"] Feb 27 16:28:16 crc kubenswrapper[4830]: W0227 16:28:16.083221 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podabf16d54_1d80_400e_8da6_077a9b307708.slice/crio-7c1f5d325ae11fd1ceea92c542969658abbfbc49317e4cdeba5a24fe32372723 WatchSource:0}: Error finding container 7c1f5d325ae11fd1ceea92c542969658abbfbc49317e4cdeba5a24fe32372723: Status 404 returned error can't find the container with id 7c1f5d325ae11fd1ceea92c542969658abbfbc49317e4cdeba5a24fe32372723 Feb 27 16:28:16 crc kubenswrapper[4830]: I0227 16:28:16.479401 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-zbxr4" event={"ID":"6f384d75-651d-4e2b-9944-6df7727f9878","Type":"ContainerStarted","Data":"eaf69066a3542729d5977c59c8669428d4fbe9e310644d14aaff2447fb4a1cbd"} Feb 27 16:28:16 crc kubenswrapper[4830]: I0227 16:28:16.482456 4830 generic.go:334] "Generic (PLEG): container finished" podID="0800fc83-7606-4be1-8a04-aab5b8226a0c" containerID="5d1b87569297a1bd470ba8ca9e4a299b273f2827077b7afc4b9b9e9535c0fbb8" exitCode=0 Feb 27 16:28:16 crc kubenswrapper[4830]: I0227 16:28:16.482541 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-f5dpx" event={"ID":"0800fc83-7606-4be1-8a04-aab5b8226a0c","Type":"ContainerDied","Data":"5d1b87569297a1bd470ba8ca9e4a299b273f2827077b7afc4b9b9e9535c0fbb8"} Feb 27 16:28:16 crc kubenswrapper[4830]: I0227 16:28:16.484177 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-mp6xh" event={"ID":"abf16d54-1d80-400e-8da6-077a9b307708","Type":"ContainerStarted","Data":"7c1f5d325ae11fd1ceea92c542969658abbfbc49317e4cdeba5a24fe32372723"} Feb 27 16:28:16 crc 
kubenswrapper[4830]: I0227 16:28:16.485505 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-mtj7r" event={"ID":"b64de41e-9e05-48b2-87e5-387aad57532a","Type":"ContainerStarted","Data":"6474c9f1bef2ad51145b280febe52f680adfa6000d6ca748d69a65cd5b075580"} Feb 27 16:28:16 crc kubenswrapper[4830]: I0227 16:28:16.487821 4830 generic.go:334] "Generic (PLEG): container finished" podID="ebbe238c-4f40-46a8-b549-b9b0ae97fb82" containerID="0ca007d751ca916b44683981f64267af656c255f4e4123eb603b027d73f416af" exitCode=0 Feb 27 16:28:16 crc kubenswrapper[4830]: I0227 16:28:16.487898 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-b4knd" event={"ID":"ebbe238c-4f40-46a8-b549-b9b0ae97fb82","Type":"ContainerDied","Data":"0ca007d751ca916b44683981f64267af656c255f4e4123eb603b027d73f416af"} Feb 27 16:28:16 crc kubenswrapper[4830]: I0227 16:28:16.502566 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 27 16:28:16 crc kubenswrapper[4830]: W0227 16:28:16.513935 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7c017daa_cb8f_4629_80e6_a671a8455149.slice/crio-3cc30a613b2117b4f5cbfde73330d0349be12252716eaff7963497d00f69d2cd WatchSource:0}: Error finding container 3cc30a613b2117b4f5cbfde73330d0349be12252716eaff7963497d00f69d2cd: Status 404 returned error can't find the container with id 3cc30a613b2117b4f5cbfde73330d0349be12252716eaff7963497d00f69d2cd Feb 27 16:28:16 crc kubenswrapper[4830]: I0227 16:28:16.794906 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5ccc8479f9-f5dpx" Feb 27 16:28:17 crc kubenswrapper[4830]: I0227 16:28:17.433930 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-f5dpx" Feb 27 16:28:17 crc kubenswrapper[4830]: I0227 16:28:17.504876 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-mtj7r" event={"ID":"b64de41e-9e05-48b2-87e5-387aad57532a","Type":"ContainerStarted","Data":"68e148d9c338e25590dbfaf5b9ed31c09c1d25b0cdfd43f35a0878475443aaf7"} Feb 27 16:28:17 crc kubenswrapper[4830]: I0227 16:28:17.508296 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-b4knd" Feb 27 16:28:17 crc kubenswrapper[4830]: I0227 16:28:17.508886 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-b4knd" event={"ID":"ebbe238c-4f40-46a8-b549-b9b0ae97fb82","Type":"ContainerDied","Data":"5c8363827c77e3f477de62fb43eb9f24db779ae4eb4b79a0b31817d5b319fe0a"} Feb 27 16:28:17 crc kubenswrapper[4830]: I0227 16:28:17.508974 4830 scope.go:117] "RemoveContainer" containerID="0ca007d751ca916b44683981f64267af656c255f4e4123eb603b027d73f416af" Feb 27 16:28:17 crc kubenswrapper[4830]: I0227 16:28:17.511515 4830 generic.go:334] "Generic (PLEG): container finished" podID="6f384d75-651d-4e2b-9944-6df7727f9878" containerID="d4ee9c2c430661332588c970967f4c09f2e829e0985441d7f545389edd89de23" exitCode=0 Feb 27 16:28:17 crc kubenswrapper[4830]: I0227 16:28:17.511630 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-zbxr4" event={"ID":"6f384d75-651d-4e2b-9944-6df7727f9878","Type":"ContainerDied","Data":"d4ee9c2c430661332588c970967f4c09f2e829e0985441d7f545389edd89de23"} Feb 27 16:28:17 crc kubenswrapper[4830]: I0227 16:28:17.513085 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"7c017daa-cb8f-4629-80e6-a671a8455149","Type":"ContainerStarted","Data":"3cc30a613b2117b4f5cbfde73330d0349be12252716eaff7963497d00f69d2cd"} Feb 27 16:28:17 crc kubenswrapper[4830]: I0227 16:28:17.515820 
4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-f5dpx" event={"ID":"0800fc83-7606-4be1-8a04-aab5b8226a0c","Type":"ContainerDied","Data":"c6e3c0711184c89c7dee67ccc7e15a7d797cec7ad03266664a5a6d7d03fab54c"} Feb 27 16:28:17 crc kubenswrapper[4830]: I0227 16:28:17.515867 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-f5dpx" Feb 27 16:28:17 crc kubenswrapper[4830]: I0227 16:28:17.518382 4830 generic.go:334] "Generic (PLEG): container finished" podID="abf16d54-1d80-400e-8da6-077a9b307708" containerID="f9d1fa7dc71dd0c9d71ed9b2a227548876fd5f7b2701cb7e12592040345369b8" exitCode=0 Feb 27 16:28:17 crc kubenswrapper[4830]: I0227 16:28:17.518425 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-mp6xh" event={"ID":"abf16d54-1d80-400e-8da6-077a9b307708","Type":"ContainerDied","Data":"f9d1fa7dc71dd0c9d71ed9b2a227548876fd5f7b2701cb7e12592040345369b8"} Feb 27 16:28:17 crc kubenswrapper[4830]: I0227 16:28:17.532566 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-mtj7r" podStartSLOduration=3.532545979 podStartE2EDuration="3.532545979s" podCreationTimestamp="2026-02-27 16:28:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:28:17.522242455 +0000 UTC m=+1293.611514928" watchObservedRunningTime="2026-02-27 16:28:17.532545979 +0000 UTC m=+1293.621818442" Feb 27 16:28:17 crc kubenswrapper[4830]: I0227 16:28:17.538250 4830 scope.go:117] "RemoveContainer" containerID="ee561560d393480957a5f923d11840252443c08568992587a34ef179e28cdaec" Feb 27 16:28:17 crc kubenswrapper[4830]: I0227 16:28:17.598254 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/0800fc83-7606-4be1-8a04-aab5b8226a0c-dns-svc\") pod \"0800fc83-7606-4be1-8a04-aab5b8226a0c\" (UID: \"0800fc83-7606-4be1-8a04-aab5b8226a0c\") " Feb 27 16:28:17 crc kubenswrapper[4830]: I0227 16:28:17.598293 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0800fc83-7606-4be1-8a04-aab5b8226a0c-config\") pod \"0800fc83-7606-4be1-8a04-aab5b8226a0c\" (UID: \"0800fc83-7606-4be1-8a04-aab5b8226a0c\") " Feb 27 16:28:17 crc kubenswrapper[4830]: I0227 16:28:17.598367 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-682g6\" (UniqueName: \"kubernetes.io/projected/ebbe238c-4f40-46a8-b549-b9b0ae97fb82-kube-api-access-682g6\") pod \"ebbe238c-4f40-46a8-b549-b9b0ae97fb82\" (UID: \"ebbe238c-4f40-46a8-b549-b9b0ae97fb82\") " Feb 27 16:28:17 crc kubenswrapper[4830]: I0227 16:28:17.598393 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ebbe238c-4f40-46a8-b549-b9b0ae97fb82-dns-svc\") pod \"ebbe238c-4f40-46a8-b549-b9b0ae97fb82\" (UID: \"ebbe238c-4f40-46a8-b549-b9b0ae97fb82\") " Feb 27 16:28:17 crc kubenswrapper[4830]: I0227 16:28:17.598437 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebbe238c-4f40-46a8-b549-b9b0ae97fb82-config\") pod \"ebbe238c-4f40-46a8-b549-b9b0ae97fb82\" (UID: \"ebbe238c-4f40-46a8-b549-b9b0ae97fb82\") " Feb 27 16:28:17 crc kubenswrapper[4830]: I0227 16:28:17.598518 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gz975\" (UniqueName: \"kubernetes.io/projected/0800fc83-7606-4be1-8a04-aab5b8226a0c-kube-api-access-gz975\") pod \"0800fc83-7606-4be1-8a04-aab5b8226a0c\" (UID: \"0800fc83-7606-4be1-8a04-aab5b8226a0c\") " Feb 27 16:28:17 crc kubenswrapper[4830]: I0227 16:28:17.608983 4830 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0800fc83-7606-4be1-8a04-aab5b8226a0c-kube-api-access-gz975" (OuterVolumeSpecName: "kube-api-access-gz975") pod "0800fc83-7606-4be1-8a04-aab5b8226a0c" (UID: "0800fc83-7606-4be1-8a04-aab5b8226a0c"). InnerVolumeSpecName "kube-api-access-gz975". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:28:17 crc kubenswrapper[4830]: I0227 16:28:17.631137 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ebbe238c-4f40-46a8-b549-b9b0ae97fb82-kube-api-access-682g6" (OuterVolumeSpecName: "kube-api-access-682g6") pod "ebbe238c-4f40-46a8-b549-b9b0ae97fb82" (UID: "ebbe238c-4f40-46a8-b549-b9b0ae97fb82"). InnerVolumeSpecName "kube-api-access-682g6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:28:17 crc kubenswrapper[4830]: I0227 16:28:17.634992 4830 scope.go:117] "RemoveContainer" containerID="5d1b87569297a1bd470ba8ca9e4a299b273f2827077b7afc4b9b9e9535c0fbb8" Feb 27 16:28:17 crc kubenswrapper[4830]: I0227 16:28:17.650861 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0800fc83-7606-4be1-8a04-aab5b8226a0c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0800fc83-7606-4be1-8a04-aab5b8226a0c" (UID: "0800fc83-7606-4be1-8a04-aab5b8226a0c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:28:17 crc kubenswrapper[4830]: I0227 16:28:17.651045 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ebbe238c-4f40-46a8-b549-b9b0ae97fb82-config" (OuterVolumeSpecName: "config") pod "ebbe238c-4f40-46a8-b549-b9b0ae97fb82" (UID: "ebbe238c-4f40-46a8-b549-b9b0ae97fb82"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:28:17 crc kubenswrapper[4830]: I0227 16:28:17.658813 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0800fc83-7606-4be1-8a04-aab5b8226a0c-config" (OuterVolumeSpecName: "config") pod "0800fc83-7606-4be1-8a04-aab5b8226a0c" (UID: "0800fc83-7606-4be1-8a04-aab5b8226a0c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:28:17 crc kubenswrapper[4830]: I0227 16:28:17.666854 4830 scope.go:117] "RemoveContainer" containerID="f9937c68c2876e1a07e8537a17025ca71c90da298a92642f2d67a01e8e039a92" Feb 27 16:28:17 crc kubenswrapper[4830]: I0227 16:28:17.668221 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ebbe238c-4f40-46a8-b549-b9b0ae97fb82-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ebbe238c-4f40-46a8-b549-b9b0ae97fb82" (UID: "ebbe238c-4f40-46a8-b549-b9b0ae97fb82"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:28:17 crc kubenswrapper[4830]: I0227 16:28:17.699934 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gz975\" (UniqueName: \"kubernetes.io/projected/0800fc83-7606-4be1-8a04-aab5b8226a0c-kube-api-access-gz975\") on node \"crc\" DevicePath \"\"" Feb 27 16:28:17 crc kubenswrapper[4830]: I0227 16:28:17.700007 4830 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0800fc83-7606-4be1-8a04-aab5b8226a0c-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 27 16:28:17 crc kubenswrapper[4830]: I0227 16:28:17.700021 4830 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0800fc83-7606-4be1-8a04-aab5b8226a0c-config\") on node \"crc\" DevicePath \"\"" Feb 27 16:28:17 crc kubenswrapper[4830]: I0227 16:28:17.700031 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-682g6\" (UniqueName: \"kubernetes.io/projected/ebbe238c-4f40-46a8-b549-b9b0ae97fb82-kube-api-access-682g6\") on node \"crc\" DevicePath \"\"" Feb 27 16:28:17 crc kubenswrapper[4830]: I0227 16:28:17.700040 4830 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ebbe238c-4f40-46a8-b549-b9b0ae97fb82-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 27 16:28:17 crc kubenswrapper[4830]: I0227 16:28:17.700049 4830 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebbe238c-4f40-46a8-b549-b9b0ae97fb82-config\") on node \"crc\" DevicePath \"\"" Feb 27 16:28:17 crc kubenswrapper[4830]: I0227 16:28:17.852254 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-f5dpx"] Feb 27 16:28:17 crc kubenswrapper[4830]: I0227 16:28:17.866137 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-f5dpx"] Feb 27 16:28:18 crc 
kubenswrapper[4830]: I0227 16:28:18.530210 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-b4knd" Feb 27 16:28:18 crc kubenswrapper[4830]: I0227 16:28:18.532767 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-zbxr4" event={"ID":"6f384d75-651d-4e2b-9944-6df7727f9878","Type":"ContainerStarted","Data":"ab1c302f8e2dc9c6d9032fc223f76d68d879aed2dbc5335e79baa1bd10e14fc5"} Feb 27 16:28:18 crc kubenswrapper[4830]: I0227 16:28:18.533044 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5bf47b49b7-zbxr4" Feb 27 16:28:18 crc kubenswrapper[4830]: I0227 16:28:18.549877 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"7c017daa-cb8f-4629-80e6-a671a8455149","Type":"ContainerStarted","Data":"2bcb11d594f79ad72480623964189734a371dbef467c7efb79b03ffa6975a1e6"} Feb 27 16:28:18 crc kubenswrapper[4830]: I0227 16:28:18.550000 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"7c017daa-cb8f-4629-80e6-a671a8455149","Type":"ContainerStarted","Data":"3a1363ee7dd262f992c78bd580ddc30893c1d6cb37be4314ebecd1d79f83e351"} Feb 27 16:28:18 crc kubenswrapper[4830]: I0227 16:28:18.551053 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Feb 27 16:28:18 crc kubenswrapper[4830]: I0227 16:28:18.560604 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-mp6xh" event={"ID":"abf16d54-1d80-400e-8da6-077a9b307708","Type":"ContainerStarted","Data":"ab904d5717e9c6f6ed5f342d0e7d57fb54ad0abf3b0d854b486d6dc5a0825b5f"} Feb 27 16:28:18 crc kubenswrapper[4830]: I0227 16:28:18.560887 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8554648995-mp6xh" Feb 27 16:28:18 crc kubenswrapper[4830]: I0227 16:28:18.576497 4830 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5bf47b49b7-zbxr4" podStartSLOduration=4.576466498 podStartE2EDuration="4.576466498s" podCreationTimestamp="2026-02-27 16:28:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:28:18.565371686 +0000 UTC m=+1294.654644159" watchObservedRunningTime="2026-02-27 16:28:18.576466498 +0000 UTC m=+1294.665738961" Feb 27 16:28:18 crc kubenswrapper[4830]: I0227 16:28:18.604925 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.163556196 podStartE2EDuration="3.604903177s" podCreationTimestamp="2026-02-27 16:28:15 +0000 UTC" firstStartedPulling="2026-02-27 16:28:16.52257042 +0000 UTC m=+1292.611842883" lastFinishedPulling="2026-02-27 16:28:17.963917401 +0000 UTC m=+1294.053189864" observedRunningTime="2026-02-27 16:28:18.599429182 +0000 UTC m=+1294.688701685" watchObservedRunningTime="2026-02-27 16:28:18.604903177 +0000 UTC m=+1294.694175650" Feb 27 16:28:18 crc kubenswrapper[4830]: I0227 16:28:18.647271 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8554648995-mp6xh" podStartSLOduration=3.647251905 podStartE2EDuration="3.647251905s" podCreationTimestamp="2026-02-27 16:28:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:28:18.639430373 +0000 UTC m=+1294.728702846" watchObservedRunningTime="2026-02-27 16:28:18.647251905 +0000 UTC m=+1294.736524378" Feb 27 16:28:18 crc kubenswrapper[4830]: I0227 16:28:18.662226 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-b4knd"] Feb 27 16:28:18 crc kubenswrapper[4830]: I0227 16:28:18.667635 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/dnsmasq-dns-57d769cc4f-b4knd"] Feb 27 16:28:18 crc kubenswrapper[4830]: I0227 16:28:18.777764 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0800fc83-7606-4be1-8a04-aab5b8226a0c" path="/var/lib/kubelet/pods/0800fc83-7606-4be1-8a04-aab5b8226a0c/volumes" Feb 27 16:28:18 crc kubenswrapper[4830]: I0227 16:28:18.779099 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ebbe238c-4f40-46a8-b549-b9b0ae97fb82" path="/var/lib/kubelet/pods/ebbe238c-4f40-46a8-b549-b9b0ae97fb82/volumes" Feb 27 16:28:19 crc kubenswrapper[4830]: I0227 16:28:19.388941 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Feb 27 16:28:19 crc kubenswrapper[4830]: I0227 16:28:19.389043 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Feb 27 16:28:19 crc kubenswrapper[4830]: I0227 16:28:19.500382 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Feb 27 16:28:19 crc kubenswrapper[4830]: I0227 16:28:19.696212 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Feb 27 16:28:20 crc kubenswrapper[4830]: I0227 16:28:20.742711 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Feb 27 16:28:20 crc kubenswrapper[4830]: I0227 16:28:20.744223 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Feb 27 16:28:21 crc kubenswrapper[4830]: I0227 16:28:21.334651 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Feb 27 16:28:21 crc kubenswrapper[4830]: I0227 16:28:21.718874 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Feb 27 16:28:22 crc kubenswrapper[4830]: I0227 
16:28:22.229838 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-mz2rm"] Feb 27 16:28:22 crc kubenswrapper[4830]: E0227 16:28:22.230730 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebbe238c-4f40-46a8-b549-b9b0ae97fb82" containerName="dnsmasq-dns" Feb 27 16:28:22 crc kubenswrapper[4830]: I0227 16:28:22.230753 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebbe238c-4f40-46a8-b549-b9b0ae97fb82" containerName="dnsmasq-dns" Feb 27 16:28:22 crc kubenswrapper[4830]: E0227 16:28:22.230783 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebbe238c-4f40-46a8-b549-b9b0ae97fb82" containerName="init" Feb 27 16:28:22 crc kubenswrapper[4830]: I0227 16:28:22.230795 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebbe238c-4f40-46a8-b549-b9b0ae97fb82" containerName="init" Feb 27 16:28:22 crc kubenswrapper[4830]: E0227 16:28:22.230816 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0800fc83-7606-4be1-8a04-aab5b8226a0c" containerName="dnsmasq-dns" Feb 27 16:28:22 crc kubenswrapper[4830]: I0227 16:28:22.230829 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="0800fc83-7606-4be1-8a04-aab5b8226a0c" containerName="dnsmasq-dns" Feb 27 16:28:22 crc kubenswrapper[4830]: E0227 16:28:22.230864 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0800fc83-7606-4be1-8a04-aab5b8226a0c" containerName="init" Feb 27 16:28:22 crc kubenswrapper[4830]: I0227 16:28:22.230875 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="0800fc83-7606-4be1-8a04-aab5b8226a0c" containerName="init" Feb 27 16:28:22 crc kubenswrapper[4830]: I0227 16:28:22.231211 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="ebbe238c-4f40-46a8-b549-b9b0ae97fb82" containerName="dnsmasq-dns" Feb 27 16:28:22 crc kubenswrapper[4830]: I0227 16:28:22.231237 4830 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="0800fc83-7606-4be1-8a04-aab5b8226a0c" containerName="dnsmasq-dns" Feb 27 16:28:22 crc kubenswrapper[4830]: I0227 16:28:22.232114 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-mz2rm" Feb 27 16:28:22 crc kubenswrapper[4830]: I0227 16:28:22.241712 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-5550-account-create-update-5hslr"] Feb 27 16:28:22 crc kubenswrapper[4830]: I0227 16:28:22.242742 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-5550-account-create-update-5hslr" Feb 27 16:28:22 crc kubenswrapper[4830]: I0227 16:28:22.245396 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Feb 27 16:28:22 crc kubenswrapper[4830]: I0227 16:28:22.258472 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-mz2rm"] Feb 27 16:28:22 crc kubenswrapper[4830]: I0227 16:28:22.267123 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-5550-account-create-update-5hslr"] Feb 27 16:28:22 crc kubenswrapper[4830]: I0227 16:28:22.296879 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab8f9b5d-9b2e-4cb2-ab5c-392f0bf8ad89-operator-scripts\") pod \"keystone-5550-account-create-update-5hslr\" (UID: \"ab8f9b5d-9b2e-4cb2-ab5c-392f0bf8ad89\") " pod="openstack/keystone-5550-account-create-update-5hslr" Feb 27 16:28:22 crc kubenswrapper[4830]: I0227 16:28:22.296973 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l82gk\" (UniqueName: \"kubernetes.io/projected/fc88df57-1ce1-47f5-b850-7072073c4d72-kube-api-access-l82gk\") pod \"keystone-db-create-mz2rm\" (UID: \"fc88df57-1ce1-47f5-b850-7072073c4d72\") " pod="openstack/keystone-db-create-mz2rm" Feb 27 16:28:22 crc 
kubenswrapper[4830]: I0227 16:28:22.296995 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmczj\" (UniqueName: \"kubernetes.io/projected/ab8f9b5d-9b2e-4cb2-ab5c-392f0bf8ad89-kube-api-access-nmczj\") pod \"keystone-5550-account-create-update-5hslr\" (UID: \"ab8f9b5d-9b2e-4cb2-ab5c-392f0bf8ad89\") " pod="openstack/keystone-5550-account-create-update-5hslr" Feb 27 16:28:22 crc kubenswrapper[4830]: I0227 16:28:22.297116 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fc88df57-1ce1-47f5-b850-7072073c4d72-operator-scripts\") pod \"keystone-db-create-mz2rm\" (UID: \"fc88df57-1ce1-47f5-b850-7072073c4d72\") " pod="openstack/keystone-db-create-mz2rm" Feb 27 16:28:22 crc kubenswrapper[4830]: I0227 16:28:22.396465 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-2jzwm"] Feb 27 16:28:22 crc kubenswrapper[4830]: I0227 16:28:22.397388 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-2jzwm" Feb 27 16:28:22 crc kubenswrapper[4830]: I0227 16:28:22.398551 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l82gk\" (UniqueName: \"kubernetes.io/projected/fc88df57-1ce1-47f5-b850-7072073c4d72-kube-api-access-l82gk\") pod \"keystone-db-create-mz2rm\" (UID: \"fc88df57-1ce1-47f5-b850-7072073c4d72\") " pod="openstack/keystone-db-create-mz2rm" Feb 27 16:28:22 crc kubenswrapper[4830]: I0227 16:28:22.398620 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmczj\" (UniqueName: \"kubernetes.io/projected/ab8f9b5d-9b2e-4cb2-ab5c-392f0bf8ad89-kube-api-access-nmczj\") pod \"keystone-5550-account-create-update-5hslr\" (UID: \"ab8f9b5d-9b2e-4cb2-ab5c-392f0bf8ad89\") " pod="openstack/keystone-5550-account-create-update-5hslr" Feb 27 16:28:22 crc kubenswrapper[4830]: I0227 16:28:22.398780 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fc88df57-1ce1-47f5-b850-7072073c4d72-operator-scripts\") pod \"keystone-db-create-mz2rm\" (UID: \"fc88df57-1ce1-47f5-b850-7072073c4d72\") " pod="openstack/keystone-db-create-mz2rm" Feb 27 16:28:22 crc kubenswrapper[4830]: I0227 16:28:22.398892 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab8f9b5d-9b2e-4cb2-ab5c-392f0bf8ad89-operator-scripts\") pod \"keystone-5550-account-create-update-5hslr\" (UID: \"ab8f9b5d-9b2e-4cb2-ab5c-392f0bf8ad89\") " pod="openstack/keystone-5550-account-create-update-5hslr" Feb 27 16:28:22 crc kubenswrapper[4830]: I0227 16:28:22.400132 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fc88df57-1ce1-47f5-b850-7072073c4d72-operator-scripts\") pod \"keystone-db-create-mz2rm\" (UID: 
\"fc88df57-1ce1-47f5-b850-7072073c4d72\") " pod="openstack/keystone-db-create-mz2rm" Feb 27 16:28:22 crc kubenswrapper[4830]: I0227 16:28:22.400192 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab8f9b5d-9b2e-4cb2-ab5c-392f0bf8ad89-operator-scripts\") pod \"keystone-5550-account-create-update-5hslr\" (UID: \"ab8f9b5d-9b2e-4cb2-ab5c-392f0bf8ad89\") " pod="openstack/keystone-5550-account-create-update-5hslr" Feb 27 16:28:22 crc kubenswrapper[4830]: I0227 16:28:22.414889 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-2jzwm"] Feb 27 16:28:22 crc kubenswrapper[4830]: I0227 16:28:22.425813 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmczj\" (UniqueName: \"kubernetes.io/projected/ab8f9b5d-9b2e-4cb2-ab5c-392f0bf8ad89-kube-api-access-nmczj\") pod \"keystone-5550-account-create-update-5hslr\" (UID: \"ab8f9b5d-9b2e-4cb2-ab5c-392f0bf8ad89\") " pod="openstack/keystone-5550-account-create-update-5hslr" Feb 27 16:28:22 crc kubenswrapper[4830]: I0227 16:28:22.425839 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l82gk\" (UniqueName: \"kubernetes.io/projected/fc88df57-1ce1-47f5-b850-7072073c4d72-kube-api-access-l82gk\") pod \"keystone-db-create-mz2rm\" (UID: \"fc88df57-1ce1-47f5-b850-7072073c4d72\") " pod="openstack/keystone-db-create-mz2rm" Feb 27 16:28:22 crc kubenswrapper[4830]: I0227 16:28:22.499926 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phvsh\" (UniqueName: \"kubernetes.io/projected/e94cb22b-b51c-4f6d-8cdd-45d6180f8462-kube-api-access-phvsh\") pod \"placement-db-create-2jzwm\" (UID: \"e94cb22b-b51c-4f6d-8cdd-45d6180f8462\") " pod="openstack/placement-db-create-2jzwm" Feb 27 16:28:22 crc kubenswrapper[4830]: I0227 16:28:22.500008 4830 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e94cb22b-b51c-4f6d-8cdd-45d6180f8462-operator-scripts\") pod \"placement-db-create-2jzwm\" (UID: \"e94cb22b-b51c-4f6d-8cdd-45d6180f8462\") " pod="openstack/placement-db-create-2jzwm" Feb 27 16:28:22 crc kubenswrapper[4830]: I0227 16:28:22.512730 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-776e-account-create-update-dkfsh"] Feb 27 16:28:22 crc kubenswrapper[4830]: I0227 16:28:22.513586 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-776e-account-create-update-dkfsh" Feb 27 16:28:22 crc kubenswrapper[4830]: I0227 16:28:22.516692 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Feb 27 16:28:22 crc kubenswrapper[4830]: I0227 16:28:22.533711 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-776e-account-create-update-dkfsh"] Feb 27 16:28:22 crc kubenswrapper[4830]: I0227 16:28:22.549529 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-mz2rm" Feb 27 16:28:22 crc kubenswrapper[4830]: I0227 16:28:22.567702 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-5550-account-create-update-5hslr" Feb 27 16:28:22 crc kubenswrapper[4830]: I0227 16:28:22.601079 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-phvsh\" (UniqueName: \"kubernetes.io/projected/e94cb22b-b51c-4f6d-8cdd-45d6180f8462-kube-api-access-phvsh\") pod \"placement-db-create-2jzwm\" (UID: \"e94cb22b-b51c-4f6d-8cdd-45d6180f8462\") " pod="openstack/placement-db-create-2jzwm" Feb 27 16:28:22 crc kubenswrapper[4830]: I0227 16:28:22.601153 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e94cb22b-b51c-4f6d-8cdd-45d6180f8462-operator-scripts\") pod \"placement-db-create-2jzwm\" (UID: \"e94cb22b-b51c-4f6d-8cdd-45d6180f8462\") " pod="openstack/placement-db-create-2jzwm" Feb 27 16:28:22 crc kubenswrapper[4830]: I0227 16:28:22.601206 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gk25c\" (UniqueName: \"kubernetes.io/projected/b44b6447-25d6-4a6a-986d-b49fc2729061-kube-api-access-gk25c\") pod \"placement-776e-account-create-update-dkfsh\" (UID: \"b44b6447-25d6-4a6a-986d-b49fc2729061\") " pod="openstack/placement-776e-account-create-update-dkfsh" Feb 27 16:28:22 crc kubenswrapper[4830]: I0227 16:28:22.601284 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b44b6447-25d6-4a6a-986d-b49fc2729061-operator-scripts\") pod \"placement-776e-account-create-update-dkfsh\" (UID: \"b44b6447-25d6-4a6a-986d-b49fc2729061\") " pod="openstack/placement-776e-account-create-update-dkfsh" Feb 27 16:28:22 crc kubenswrapper[4830]: I0227 16:28:22.602053 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/e94cb22b-b51c-4f6d-8cdd-45d6180f8462-operator-scripts\") pod \"placement-db-create-2jzwm\" (UID: \"e94cb22b-b51c-4f6d-8cdd-45d6180f8462\") " pod="openstack/placement-db-create-2jzwm" Feb 27 16:28:22 crc kubenswrapper[4830]: I0227 16:28:22.620165 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-phvsh\" (UniqueName: \"kubernetes.io/projected/e94cb22b-b51c-4f6d-8cdd-45d6180f8462-kube-api-access-phvsh\") pod \"placement-db-create-2jzwm\" (UID: \"e94cb22b-b51c-4f6d-8cdd-45d6180f8462\") " pod="openstack/placement-db-create-2jzwm" Feb 27 16:28:22 crc kubenswrapper[4830]: I0227 16:28:22.703325 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gk25c\" (UniqueName: \"kubernetes.io/projected/b44b6447-25d6-4a6a-986d-b49fc2729061-kube-api-access-gk25c\") pod \"placement-776e-account-create-update-dkfsh\" (UID: \"b44b6447-25d6-4a6a-986d-b49fc2729061\") " pod="openstack/placement-776e-account-create-update-dkfsh" Feb 27 16:28:22 crc kubenswrapper[4830]: I0227 16:28:22.703683 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b44b6447-25d6-4a6a-986d-b49fc2729061-operator-scripts\") pod \"placement-776e-account-create-update-dkfsh\" (UID: \"b44b6447-25d6-4a6a-986d-b49fc2729061\") " pod="openstack/placement-776e-account-create-update-dkfsh" Feb 27 16:28:22 crc kubenswrapper[4830]: I0227 16:28:22.707588 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b44b6447-25d6-4a6a-986d-b49fc2729061-operator-scripts\") pod \"placement-776e-account-create-update-dkfsh\" (UID: \"b44b6447-25d6-4a6a-986d-b49fc2729061\") " pod="openstack/placement-776e-account-create-update-dkfsh" Feb 27 16:28:22 crc kubenswrapper[4830]: I0227 16:28:22.713030 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-2jzwm" Feb 27 16:28:22 crc kubenswrapper[4830]: I0227 16:28:22.724026 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gk25c\" (UniqueName: \"kubernetes.io/projected/b44b6447-25d6-4a6a-986d-b49fc2729061-kube-api-access-gk25c\") pod \"placement-776e-account-create-update-dkfsh\" (UID: \"b44b6447-25d6-4a6a-986d-b49fc2729061\") " pod="openstack/placement-776e-account-create-update-dkfsh" Feb 27 16:28:22 crc kubenswrapper[4830]: I0227 16:28:22.834549 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-776e-account-create-update-dkfsh" Feb 27 16:28:22 crc kubenswrapper[4830]: I0227 16:28:22.985253 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-2jzwm"] Feb 27 16:28:22 crc kubenswrapper[4830]: W0227 16:28:22.998387 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode94cb22b_b51c_4f6d_8cdd_45d6180f8462.slice/crio-bc56063b69929f6b51d6e8d1cfcbbb6fec3e54ce967766bc0bbf75c487f5806d WatchSource:0}: Error finding container bc56063b69929f6b51d6e8d1cfcbbb6fec3e54ce967766bc0bbf75c487f5806d: Status 404 returned error can't find the container with id bc56063b69929f6b51d6e8d1cfcbbb6fec3e54ce967766bc0bbf75c487f5806d Feb 27 16:28:23 crc kubenswrapper[4830]: I0227 16:28:23.033922 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-mz2rm"] Feb 27 16:28:23 crc kubenswrapper[4830]: I0227 16:28:23.116324 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-5550-account-create-update-5hslr"] Feb 27 16:28:23 crc kubenswrapper[4830]: I0227 16:28:23.144312 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-zbxr4"] Feb 27 16:28:23 crc kubenswrapper[4830]: I0227 16:28:23.144513 4830 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openstack/dnsmasq-dns-5bf47b49b7-zbxr4" podUID="6f384d75-651d-4e2b-9944-6df7727f9878" containerName="dnsmasq-dns" containerID="cri-o://ab1c302f8e2dc9c6d9032fc223f76d68d879aed2dbc5335e79baa1bd10e14fc5" gracePeriod=10 Feb 27 16:28:23 crc kubenswrapper[4830]: I0227 16:28:23.149468 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5bf47b49b7-zbxr4" Feb 27 16:28:23 crc kubenswrapper[4830]: I0227 16:28:23.178985 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-cjq7v"] Feb 27 16:28:23 crc kubenswrapper[4830]: I0227 16:28:23.180576 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-cjq7v" Feb 27 16:28:23 crc kubenswrapper[4830]: I0227 16:28:23.184936 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-cjq7v"] Feb 27 16:28:23 crc kubenswrapper[4830]: I0227 16:28:23.211681 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1434c895-fa3e-4feb-a56a-0451f1f16a3b-config\") pod \"dnsmasq-dns-b8fbc5445-cjq7v\" (UID: \"1434c895-fa3e-4feb-a56a-0451f1f16a3b\") " pod="openstack/dnsmasq-dns-b8fbc5445-cjq7v" Feb 27 16:28:23 crc kubenswrapper[4830]: I0227 16:28:23.211736 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1434c895-fa3e-4feb-a56a-0451f1f16a3b-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-cjq7v\" (UID: \"1434c895-fa3e-4feb-a56a-0451f1f16a3b\") " pod="openstack/dnsmasq-dns-b8fbc5445-cjq7v" Feb 27 16:28:23 crc kubenswrapper[4830]: I0227 16:28:23.211810 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l57bg\" (UniqueName: 
\"kubernetes.io/projected/1434c895-fa3e-4feb-a56a-0451f1f16a3b-kube-api-access-l57bg\") pod \"dnsmasq-dns-b8fbc5445-cjq7v\" (UID: \"1434c895-fa3e-4feb-a56a-0451f1f16a3b\") " pod="openstack/dnsmasq-dns-b8fbc5445-cjq7v" Feb 27 16:28:23 crc kubenswrapper[4830]: I0227 16:28:23.211848 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1434c895-fa3e-4feb-a56a-0451f1f16a3b-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-cjq7v\" (UID: \"1434c895-fa3e-4feb-a56a-0451f1f16a3b\") " pod="openstack/dnsmasq-dns-b8fbc5445-cjq7v" Feb 27 16:28:23 crc kubenswrapper[4830]: I0227 16:28:23.212007 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1434c895-fa3e-4feb-a56a-0451f1f16a3b-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-cjq7v\" (UID: \"1434c895-fa3e-4feb-a56a-0451f1f16a3b\") " pod="openstack/dnsmasq-dns-b8fbc5445-cjq7v" Feb 27 16:28:23 crc kubenswrapper[4830]: I0227 16:28:23.315572 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-776e-account-create-update-dkfsh"] Feb 27 16:28:23 crc kubenswrapper[4830]: I0227 16:28:23.316400 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1434c895-fa3e-4feb-a56a-0451f1f16a3b-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-cjq7v\" (UID: \"1434c895-fa3e-4feb-a56a-0451f1f16a3b\") " pod="openstack/dnsmasq-dns-b8fbc5445-cjq7v" Feb 27 16:28:23 crc kubenswrapper[4830]: I0227 16:28:23.316480 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l57bg\" (UniqueName: \"kubernetes.io/projected/1434c895-fa3e-4feb-a56a-0451f1f16a3b-kube-api-access-l57bg\") pod \"dnsmasq-dns-b8fbc5445-cjq7v\" (UID: \"1434c895-fa3e-4feb-a56a-0451f1f16a3b\") " pod="openstack/dnsmasq-dns-b8fbc5445-cjq7v" Feb 27 16:28:23 
crc kubenswrapper[4830]: I0227 16:28:23.316509 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1434c895-fa3e-4feb-a56a-0451f1f16a3b-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-cjq7v\" (UID: \"1434c895-fa3e-4feb-a56a-0451f1f16a3b\") " pod="openstack/dnsmasq-dns-b8fbc5445-cjq7v" Feb 27 16:28:23 crc kubenswrapper[4830]: I0227 16:28:23.316527 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1434c895-fa3e-4feb-a56a-0451f1f16a3b-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-cjq7v\" (UID: \"1434c895-fa3e-4feb-a56a-0451f1f16a3b\") " pod="openstack/dnsmasq-dns-b8fbc5445-cjq7v" Feb 27 16:28:23 crc kubenswrapper[4830]: I0227 16:28:23.316591 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1434c895-fa3e-4feb-a56a-0451f1f16a3b-config\") pod \"dnsmasq-dns-b8fbc5445-cjq7v\" (UID: \"1434c895-fa3e-4feb-a56a-0451f1f16a3b\") " pod="openstack/dnsmasq-dns-b8fbc5445-cjq7v" Feb 27 16:28:23 crc kubenswrapper[4830]: I0227 16:28:23.317585 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1434c895-fa3e-4feb-a56a-0451f1f16a3b-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-cjq7v\" (UID: \"1434c895-fa3e-4feb-a56a-0451f1f16a3b\") " pod="openstack/dnsmasq-dns-b8fbc5445-cjq7v" Feb 27 16:28:23 crc kubenswrapper[4830]: I0227 16:28:23.317699 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1434c895-fa3e-4feb-a56a-0451f1f16a3b-config\") pod \"dnsmasq-dns-b8fbc5445-cjq7v\" (UID: \"1434c895-fa3e-4feb-a56a-0451f1f16a3b\") " pod="openstack/dnsmasq-dns-b8fbc5445-cjq7v" Feb 27 16:28:23 crc kubenswrapper[4830]: I0227 16:28:23.317856 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1434c895-fa3e-4feb-a56a-0451f1f16a3b-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-cjq7v\" (UID: \"1434c895-fa3e-4feb-a56a-0451f1f16a3b\") " pod="openstack/dnsmasq-dns-b8fbc5445-cjq7v" Feb 27 16:28:23 crc kubenswrapper[4830]: I0227 16:28:23.318430 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1434c895-fa3e-4feb-a56a-0451f1f16a3b-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-cjq7v\" (UID: \"1434c895-fa3e-4feb-a56a-0451f1f16a3b\") " pod="openstack/dnsmasq-dns-b8fbc5445-cjq7v" Feb 27 16:28:23 crc kubenswrapper[4830]: I0227 16:28:23.336341 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l57bg\" (UniqueName: \"kubernetes.io/projected/1434c895-fa3e-4feb-a56a-0451f1f16a3b-kube-api-access-l57bg\") pod \"dnsmasq-dns-b8fbc5445-cjq7v\" (UID: \"1434c895-fa3e-4feb-a56a-0451f1f16a3b\") " pod="openstack/dnsmasq-dns-b8fbc5445-cjq7v" Feb 27 16:28:23 crc kubenswrapper[4830]: I0227 16:28:23.495329 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-cjq7v" Feb 27 16:28:23 crc kubenswrapper[4830]: I0227 16:28:23.610686 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-776e-account-create-update-dkfsh" event={"ID":"b44b6447-25d6-4a6a-986d-b49fc2729061","Type":"ContainerStarted","Data":"ecb8bd5d0eb2c9090d00fc7c2e75ec3a65b6414bbffd98c8d36fcdd1b36d3983"} Feb 27 16:28:23 crc kubenswrapper[4830]: I0227 16:28:23.610722 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-776e-account-create-update-dkfsh" event={"ID":"b44b6447-25d6-4a6a-986d-b49fc2729061","Type":"ContainerStarted","Data":"ae798cbab13771b93416497d24547f216d0d071f63b00a90a8857e432c41aaeb"} Feb 27 16:28:23 crc kubenswrapper[4830]: I0227 16:28:23.614762 4830 generic.go:334] "Generic (PLEG): container finished" podID="6f384d75-651d-4e2b-9944-6df7727f9878" containerID="ab1c302f8e2dc9c6d9032fc223f76d68d879aed2dbc5335e79baa1bd10e14fc5" exitCode=0 Feb 27 16:28:23 crc kubenswrapper[4830]: I0227 16:28:23.615177 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-zbxr4" event={"ID":"6f384d75-651d-4e2b-9944-6df7727f9878","Type":"ContainerDied","Data":"ab1c302f8e2dc9c6d9032fc223f76d68d879aed2dbc5335e79baa1bd10e14fc5"} Feb 27 16:28:23 crc kubenswrapper[4830]: I0227 16:28:23.617479 4830 generic.go:334] "Generic (PLEG): container finished" podID="e94cb22b-b51c-4f6d-8cdd-45d6180f8462" containerID="4ad23027e7d75e6249247d76978f4d82e1283097eecebb5ce536bbb32a4f656a" exitCode=0 Feb 27 16:28:23 crc kubenswrapper[4830]: I0227 16:28:23.617522 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-2jzwm" event={"ID":"e94cb22b-b51c-4f6d-8cdd-45d6180f8462","Type":"ContainerDied","Data":"4ad23027e7d75e6249247d76978f4d82e1283097eecebb5ce536bbb32a4f656a"} Feb 27 16:28:23 crc kubenswrapper[4830]: I0227 16:28:23.617549 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/placement-db-create-2jzwm" event={"ID":"e94cb22b-b51c-4f6d-8cdd-45d6180f8462","Type":"ContainerStarted","Data":"bc56063b69929f6b51d6e8d1cfcbbb6fec3e54ce967766bc0bbf75c487f5806d"} Feb 27 16:28:23 crc kubenswrapper[4830]: I0227 16:28:23.618475 4830 generic.go:334] "Generic (PLEG): container finished" podID="ab8f9b5d-9b2e-4cb2-ab5c-392f0bf8ad89" containerID="3f5b67b1fe465ff975e3223d66d6907410f1c1f41206c171986f3359ac5885d2" exitCode=0 Feb 27 16:28:23 crc kubenswrapper[4830]: I0227 16:28:23.618512 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5550-account-create-update-5hslr" event={"ID":"ab8f9b5d-9b2e-4cb2-ab5c-392f0bf8ad89","Type":"ContainerDied","Data":"3f5b67b1fe465ff975e3223d66d6907410f1c1f41206c171986f3359ac5885d2"} Feb 27 16:28:23 crc kubenswrapper[4830]: I0227 16:28:23.618525 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5550-account-create-update-5hslr" event={"ID":"ab8f9b5d-9b2e-4cb2-ab5c-392f0bf8ad89","Type":"ContainerStarted","Data":"20866c5c55d45c055eb5f895b36f6b10133000fd30b60a89515a8ce49b77dadf"} Feb 27 16:28:23 crc kubenswrapper[4830]: I0227 16:28:23.620810 4830 generic.go:334] "Generic (PLEG): container finished" podID="fc88df57-1ce1-47f5-b850-7072073c4d72" containerID="edf7280348701155c989d49d0431a7c220e4237323ae8e514c1fed6e11d215dd" exitCode=0 Feb 27 16:28:23 crc kubenswrapper[4830]: I0227 16:28:23.620890 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-mz2rm" event={"ID":"fc88df57-1ce1-47f5-b850-7072073c4d72","Type":"ContainerDied","Data":"edf7280348701155c989d49d0431a7c220e4237323ae8e514c1fed6e11d215dd"} Feb 27 16:28:23 crc kubenswrapper[4830]: I0227 16:28:23.620903 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-mz2rm" event={"ID":"fc88df57-1ce1-47f5-b850-7072073c4d72","Type":"ContainerStarted","Data":"6b0b5d6efbcc72da6e2cca0c5849de3070d77d3f6f18aec76ccd8c335fc148d7"} Feb 27 16:28:23 crc 
kubenswrapper[4830]: I0227 16:28:23.632051 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-776e-account-create-update-dkfsh" podStartSLOduration=1.632035876 podStartE2EDuration="1.632035876s" podCreationTimestamp="2026-02-27 16:28:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:28:23.622282576 +0000 UTC m=+1299.711555039" watchObservedRunningTime="2026-02-27 16:28:23.632035876 +0000 UTC m=+1299.721308329" Feb 27 16:28:23 crc kubenswrapper[4830]: I0227 16:28:23.824508 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-zbxr4" Feb 27 16:28:23 crc kubenswrapper[4830]: I0227 16:28:23.933746 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6f384d75-651d-4e2b-9944-6df7727f9878-dns-svc\") pod \"6f384d75-651d-4e2b-9944-6df7727f9878\" (UID: \"6f384d75-651d-4e2b-9944-6df7727f9878\") " Feb 27 16:28:23 crc kubenswrapper[4830]: I0227 16:28:23.933810 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6f384d75-651d-4e2b-9944-6df7727f9878-ovsdbserver-nb\") pod \"6f384d75-651d-4e2b-9944-6df7727f9878\" (UID: \"6f384d75-651d-4e2b-9944-6df7727f9878\") " Feb 27 16:28:23 crc kubenswrapper[4830]: I0227 16:28:23.934125 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f384d75-651d-4e2b-9944-6df7727f9878-config\") pod \"6f384d75-651d-4e2b-9944-6df7727f9878\" (UID: \"6f384d75-651d-4e2b-9944-6df7727f9878\") " Feb 27 16:28:23 crc kubenswrapper[4830]: I0227 16:28:23.934154 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d5dsc\" (UniqueName: 
\"kubernetes.io/projected/6f384d75-651d-4e2b-9944-6df7727f9878-kube-api-access-d5dsc\") pod \"6f384d75-651d-4e2b-9944-6df7727f9878\" (UID: \"6f384d75-651d-4e2b-9944-6df7727f9878\") " Feb 27 16:28:23 crc kubenswrapper[4830]: I0227 16:28:23.939759 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f384d75-651d-4e2b-9944-6df7727f9878-kube-api-access-d5dsc" (OuterVolumeSpecName: "kube-api-access-d5dsc") pod "6f384d75-651d-4e2b-9944-6df7727f9878" (UID: "6f384d75-651d-4e2b-9944-6df7727f9878"). InnerVolumeSpecName "kube-api-access-d5dsc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:28:23 crc kubenswrapper[4830]: I0227 16:28:23.941022 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-cjq7v"] Feb 27 16:28:23 crc kubenswrapper[4830]: W0227 16:28:23.944993 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1434c895_fa3e_4feb_a56a_0451f1f16a3b.slice/crio-6712aaa3bc10e702ad1242fad1603a54acc7021a5eb45e728d20766d74ad02f8 WatchSource:0}: Error finding container 6712aaa3bc10e702ad1242fad1603a54acc7021a5eb45e728d20766d74ad02f8: Status 404 returned error can't find the container with id 6712aaa3bc10e702ad1242fad1603a54acc7021a5eb45e728d20766d74ad02f8 Feb 27 16:28:23 crc kubenswrapper[4830]: I0227 16:28:23.974335 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f384d75-651d-4e2b-9944-6df7727f9878-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6f384d75-651d-4e2b-9944-6df7727f9878" (UID: "6f384d75-651d-4e2b-9944-6df7727f9878"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:28:23 crc kubenswrapper[4830]: I0227 16:28:23.975308 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f384d75-651d-4e2b-9944-6df7727f9878-config" (OuterVolumeSpecName: "config") pod "6f384d75-651d-4e2b-9944-6df7727f9878" (UID: "6f384d75-651d-4e2b-9944-6df7727f9878"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:28:23 crc kubenswrapper[4830]: I0227 16:28:23.977515 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f384d75-651d-4e2b-9944-6df7727f9878-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6f384d75-651d-4e2b-9944-6df7727f9878" (UID: "6f384d75-651d-4e2b-9944-6df7727f9878"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:28:24 crc kubenswrapper[4830]: I0227 16:28:24.041253 4830 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6f384d75-651d-4e2b-9944-6df7727f9878-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 27 16:28:24 crc kubenswrapper[4830]: I0227 16:28:24.041286 4830 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6f384d75-651d-4e2b-9944-6df7727f9878-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 27 16:28:24 crc kubenswrapper[4830]: I0227 16:28:24.041300 4830 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f384d75-651d-4e2b-9944-6df7727f9878-config\") on node \"crc\" DevicePath \"\"" Feb 27 16:28:24 crc kubenswrapper[4830]: I0227 16:28:24.041312 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d5dsc\" (UniqueName: \"kubernetes.io/projected/6f384d75-651d-4e2b-9944-6df7727f9878-kube-api-access-d5dsc\") on node \"crc\" DevicePath \"\"" Feb 27 16:28:24 crc 
kubenswrapper[4830]: I0227 16:28:24.260604 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Feb 27 16:28:24 crc kubenswrapper[4830]: E0227 16:28:24.261436 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f384d75-651d-4e2b-9944-6df7727f9878" containerName="init" Feb 27 16:28:24 crc kubenswrapper[4830]: I0227 16:28:24.261461 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f384d75-651d-4e2b-9944-6df7727f9878" containerName="init" Feb 27 16:28:24 crc kubenswrapper[4830]: E0227 16:28:24.261481 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f384d75-651d-4e2b-9944-6df7727f9878" containerName="dnsmasq-dns" Feb 27 16:28:24 crc kubenswrapper[4830]: I0227 16:28:24.261489 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f384d75-651d-4e2b-9944-6df7727f9878" containerName="dnsmasq-dns" Feb 27 16:28:24 crc kubenswrapper[4830]: I0227 16:28:24.261670 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f384d75-651d-4e2b-9944-6df7727f9878" containerName="dnsmasq-dns" Feb 27 16:28:24 crc kubenswrapper[4830]: I0227 16:28:24.267708 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Feb 27 16:28:24 crc kubenswrapper[4830]: I0227 16:28:24.270865 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Feb 27 16:28:24 crc kubenswrapper[4830]: I0227 16:28:24.271210 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Feb 27 16:28:24 crc kubenswrapper[4830]: I0227 16:28:24.271869 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-wp9zs" Feb 27 16:28:24 crc kubenswrapper[4830]: I0227 16:28:24.272208 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Feb 27 16:28:24 crc kubenswrapper[4830]: I0227 16:28:24.276787 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 27 16:28:24 crc kubenswrapper[4830]: I0227 16:28:24.346665 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrcrs\" (UniqueName: \"kubernetes.io/projected/f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f-kube-api-access-wrcrs\") pod \"swift-storage-0\" (UID: \"f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f\") " pod="openstack/swift-storage-0" Feb 27 16:28:24 crc kubenswrapper[4830]: I0227 16:28:24.346779 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f\") " pod="openstack/swift-storage-0" Feb 27 16:28:24 crc kubenswrapper[4830]: I0227 16:28:24.346878 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"swift-storage-0\" (UID: \"f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f\") " 
pod="openstack/swift-storage-0" Feb 27 16:28:24 crc kubenswrapper[4830]: I0227 16:28:24.346908 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f-lock\") pod \"swift-storage-0\" (UID: \"f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f\") " pod="openstack/swift-storage-0" Feb 27 16:28:24 crc kubenswrapper[4830]: I0227 16:28:24.347245 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f-cache\") pod \"swift-storage-0\" (UID: \"f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f\") " pod="openstack/swift-storage-0" Feb 27 16:28:24 crc kubenswrapper[4830]: I0227 16:28:24.347275 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f-etc-swift\") pod \"swift-storage-0\" (UID: \"f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f\") " pod="openstack/swift-storage-0" Feb 27 16:28:24 crc kubenswrapper[4830]: I0227 16:28:24.449555 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f-cache\") pod \"swift-storage-0\" (UID: \"f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f\") " pod="openstack/swift-storage-0" Feb 27 16:28:24 crc kubenswrapper[4830]: I0227 16:28:24.449637 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f-etc-swift\") pod \"swift-storage-0\" (UID: \"f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f\") " pod="openstack/swift-storage-0" Feb 27 16:28:24 crc kubenswrapper[4830]: I0227 16:28:24.449765 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wrcrs\" 
(UniqueName: \"kubernetes.io/projected/f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f-kube-api-access-wrcrs\") pod \"swift-storage-0\" (UID: \"f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f\") " pod="openstack/swift-storage-0" Feb 27 16:28:24 crc kubenswrapper[4830]: I0227 16:28:24.449814 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f\") " pod="openstack/swift-storage-0" Feb 27 16:28:24 crc kubenswrapper[4830]: I0227 16:28:24.449913 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"swift-storage-0\" (UID: \"f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f\") " pod="openstack/swift-storage-0" Feb 27 16:28:24 crc kubenswrapper[4830]: I0227 16:28:24.449971 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f-lock\") pod \"swift-storage-0\" (UID: \"f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f\") " pod="openstack/swift-storage-0" Feb 27 16:28:24 crc kubenswrapper[4830]: E0227 16:28:24.450035 4830 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 27 16:28:24 crc kubenswrapper[4830]: E0227 16:28:24.450084 4830 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 27 16:28:24 crc kubenswrapper[4830]: E0227 16:28:24.450178 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f-etc-swift podName:f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f nodeName:}" failed. 
No retries permitted until 2026-02-27 16:28:24.950149566 +0000 UTC m=+1301.039422109 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f-etc-swift") pod "swift-storage-0" (UID: "f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f") : configmap "swift-ring-files" not found Feb 27 16:28:24 crc kubenswrapper[4830]: I0227 16:28:24.450433 4830 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"swift-storage-0\" (UID: \"f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/swift-storage-0" Feb 27 16:28:24 crc kubenswrapper[4830]: I0227 16:28:24.450534 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f-cache\") pod \"swift-storage-0\" (UID: \"f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f\") " pod="openstack/swift-storage-0" Feb 27 16:28:24 crc kubenswrapper[4830]: I0227 16:28:24.450874 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f-lock\") pod \"swift-storage-0\" (UID: \"f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f\") " pod="openstack/swift-storage-0" Feb 27 16:28:24 crc kubenswrapper[4830]: I0227 16:28:24.456720 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f\") " pod="openstack/swift-storage-0" Feb 27 16:28:24 crc kubenswrapper[4830]: I0227 16:28:24.479502 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrcrs\" (UniqueName: 
\"kubernetes.io/projected/f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f-kube-api-access-wrcrs\") pod \"swift-storage-0\" (UID: \"f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f\") " pod="openstack/swift-storage-0" Feb 27 16:28:24 crc kubenswrapper[4830]: I0227 16:28:24.489577 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"swift-storage-0\" (UID: \"f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f\") " pod="openstack/swift-storage-0" Feb 27 16:28:24 crc kubenswrapper[4830]: I0227 16:28:24.647838 4830 generic.go:334] "Generic (PLEG): container finished" podID="b44b6447-25d6-4a6a-986d-b49fc2729061" containerID="ecb8bd5d0eb2c9090d00fc7c2e75ec3a65b6414bbffd98c8d36fcdd1b36d3983" exitCode=0 Feb 27 16:28:24 crc kubenswrapper[4830]: I0227 16:28:24.647956 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-776e-account-create-update-dkfsh" event={"ID":"b44b6447-25d6-4a6a-986d-b49fc2729061","Type":"ContainerDied","Data":"ecb8bd5d0eb2c9090d00fc7c2e75ec3a65b6414bbffd98c8d36fcdd1b36d3983"} Feb 27 16:28:24 crc kubenswrapper[4830]: I0227 16:28:24.652102 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-zbxr4" Feb 27 16:28:24 crc kubenswrapper[4830]: I0227 16:28:24.652112 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-zbxr4" event={"ID":"6f384d75-651d-4e2b-9944-6df7727f9878","Type":"ContainerDied","Data":"eaf69066a3542729d5977c59c8669428d4fbe9e310644d14aaff2447fb4a1cbd"} Feb 27 16:28:24 crc kubenswrapper[4830]: I0227 16:28:24.652460 4830 scope.go:117] "RemoveContainer" containerID="ab1c302f8e2dc9c6d9032fc223f76d68d879aed2dbc5335e79baa1bd10e14fc5" Feb 27 16:28:24 crc kubenswrapper[4830]: I0227 16:28:24.658458 4830 generic.go:334] "Generic (PLEG): container finished" podID="1434c895-fa3e-4feb-a56a-0451f1f16a3b" containerID="0e61e2eba7cefcaeb7cc49da2fcf3fb946c76fa49968f3858bb6de35d92d599a" exitCode=0 Feb 27 16:28:24 crc kubenswrapper[4830]: I0227 16:28:24.659475 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-cjq7v" event={"ID":"1434c895-fa3e-4feb-a56a-0451f1f16a3b","Type":"ContainerDied","Data":"0e61e2eba7cefcaeb7cc49da2fcf3fb946c76fa49968f3858bb6de35d92d599a"} Feb 27 16:28:24 crc kubenswrapper[4830]: I0227 16:28:24.659505 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-cjq7v" event={"ID":"1434c895-fa3e-4feb-a56a-0451f1f16a3b","Type":"ContainerStarted","Data":"6712aaa3bc10e702ad1242fad1603a54acc7021a5eb45e728d20766d74ad02f8"} Feb 27 16:28:24 crc kubenswrapper[4830]: I0227 16:28:24.795493 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-zbxr4"] Feb 27 16:28:24 crc kubenswrapper[4830]: I0227 16:28:24.797465 4830 scope.go:117] "RemoveContainer" containerID="d4ee9c2c430661332588c970967f4c09f2e829e0985441d7f545389edd89de23" Feb 27 16:28:24 crc kubenswrapper[4830]: I0227 16:28:24.797707 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-zbxr4"] Feb 27 16:28:24 crc kubenswrapper[4830]: I0227 
16:28:24.966105 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f-etc-swift\") pod \"swift-storage-0\" (UID: \"f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f\") " pod="openstack/swift-storage-0" Feb 27 16:28:24 crc kubenswrapper[4830]: E0227 16:28:24.966377 4830 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 27 16:28:24 crc kubenswrapper[4830]: E0227 16:28:24.966390 4830 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 27 16:28:24 crc kubenswrapper[4830]: E0227 16:28:24.966431 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f-etc-swift podName:f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f nodeName:}" failed. No retries permitted until 2026-02-27 16:28:25.966416982 +0000 UTC m=+1302.055689445 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f-etc-swift") pod "swift-storage-0" (UID: "f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f") : configmap "swift-ring-files" not found Feb 27 16:28:25 crc kubenswrapper[4830]: I0227 16:28:25.042171 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-2jzwm" Feb 27 16:28:25 crc kubenswrapper[4830]: I0227 16:28:25.067655 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-phvsh\" (UniqueName: \"kubernetes.io/projected/e94cb22b-b51c-4f6d-8cdd-45d6180f8462-kube-api-access-phvsh\") pod \"e94cb22b-b51c-4f6d-8cdd-45d6180f8462\" (UID: \"e94cb22b-b51c-4f6d-8cdd-45d6180f8462\") " Feb 27 16:28:25 crc kubenswrapper[4830]: I0227 16:28:25.067829 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e94cb22b-b51c-4f6d-8cdd-45d6180f8462-operator-scripts\") pod \"e94cb22b-b51c-4f6d-8cdd-45d6180f8462\" (UID: \"e94cb22b-b51c-4f6d-8cdd-45d6180f8462\") " Feb 27 16:28:25 crc kubenswrapper[4830]: I0227 16:28:25.068658 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e94cb22b-b51c-4f6d-8cdd-45d6180f8462-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e94cb22b-b51c-4f6d-8cdd-45d6180f8462" (UID: "e94cb22b-b51c-4f6d-8cdd-45d6180f8462"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:28:25 crc kubenswrapper[4830]: I0227 16:28:25.073116 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e94cb22b-b51c-4f6d-8cdd-45d6180f8462-kube-api-access-phvsh" (OuterVolumeSpecName: "kube-api-access-phvsh") pod "e94cb22b-b51c-4f6d-8cdd-45d6180f8462" (UID: "e94cb22b-b51c-4f6d-8cdd-45d6180f8462"). InnerVolumeSpecName "kube-api-access-phvsh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:28:25 crc kubenswrapper[4830]: I0227 16:28:25.142768 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-5550-account-create-update-5hslr" Feb 27 16:28:25 crc kubenswrapper[4830]: I0227 16:28:25.146226 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-mz2rm" Feb 27 16:28:25 crc kubenswrapper[4830]: I0227 16:28:25.169045 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nmczj\" (UniqueName: \"kubernetes.io/projected/ab8f9b5d-9b2e-4cb2-ab5c-392f0bf8ad89-kube-api-access-nmczj\") pod \"ab8f9b5d-9b2e-4cb2-ab5c-392f0bf8ad89\" (UID: \"ab8f9b5d-9b2e-4cb2-ab5c-392f0bf8ad89\") " Feb 27 16:28:25 crc kubenswrapper[4830]: I0227 16:28:25.169111 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l82gk\" (UniqueName: \"kubernetes.io/projected/fc88df57-1ce1-47f5-b850-7072073c4d72-kube-api-access-l82gk\") pod \"fc88df57-1ce1-47f5-b850-7072073c4d72\" (UID: \"fc88df57-1ce1-47f5-b850-7072073c4d72\") " Feb 27 16:28:25 crc kubenswrapper[4830]: I0227 16:28:25.169156 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab8f9b5d-9b2e-4cb2-ab5c-392f0bf8ad89-operator-scripts\") pod \"ab8f9b5d-9b2e-4cb2-ab5c-392f0bf8ad89\" (UID: \"ab8f9b5d-9b2e-4cb2-ab5c-392f0bf8ad89\") " Feb 27 16:28:25 crc kubenswrapper[4830]: I0227 16:28:25.169250 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fc88df57-1ce1-47f5-b850-7072073c4d72-operator-scripts\") pod \"fc88df57-1ce1-47f5-b850-7072073c4d72\" (UID: \"fc88df57-1ce1-47f5-b850-7072073c4d72\") " Feb 27 16:28:25 crc kubenswrapper[4830]: I0227 16:28:25.169595 4830 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e94cb22b-b51c-4f6d-8cdd-45d6180f8462-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 
16:28:25 crc kubenswrapper[4830]: I0227 16:28:25.169605 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-phvsh\" (UniqueName: \"kubernetes.io/projected/e94cb22b-b51c-4f6d-8cdd-45d6180f8462-kube-api-access-phvsh\") on node \"crc\" DevicePath \"\"" Feb 27 16:28:25 crc kubenswrapper[4830]: I0227 16:28:25.169956 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc88df57-1ce1-47f5-b850-7072073c4d72-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fc88df57-1ce1-47f5-b850-7072073c4d72" (UID: "fc88df57-1ce1-47f5-b850-7072073c4d72"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:28:25 crc kubenswrapper[4830]: I0227 16:28:25.171167 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab8f9b5d-9b2e-4cb2-ab5c-392f0bf8ad89-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ab8f9b5d-9b2e-4cb2-ab5c-392f0bf8ad89" (UID: "ab8f9b5d-9b2e-4cb2-ab5c-392f0bf8ad89"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:28:25 crc kubenswrapper[4830]: I0227 16:28:25.173068 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc88df57-1ce1-47f5-b850-7072073c4d72-kube-api-access-l82gk" (OuterVolumeSpecName: "kube-api-access-l82gk") pod "fc88df57-1ce1-47f5-b850-7072073c4d72" (UID: "fc88df57-1ce1-47f5-b850-7072073c4d72"). InnerVolumeSpecName "kube-api-access-l82gk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:28:25 crc kubenswrapper[4830]: I0227 16:28:25.174207 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab8f9b5d-9b2e-4cb2-ab5c-392f0bf8ad89-kube-api-access-nmczj" (OuterVolumeSpecName: "kube-api-access-nmczj") pod "ab8f9b5d-9b2e-4cb2-ab5c-392f0bf8ad89" (UID: "ab8f9b5d-9b2e-4cb2-ab5c-392f0bf8ad89"). 
InnerVolumeSpecName "kube-api-access-nmczj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:28:25 crc kubenswrapper[4830]: I0227 16:28:25.271554 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nmczj\" (UniqueName: \"kubernetes.io/projected/ab8f9b5d-9b2e-4cb2-ab5c-392f0bf8ad89-kube-api-access-nmczj\") on node \"crc\" DevicePath \"\"" Feb 27 16:28:25 crc kubenswrapper[4830]: I0227 16:28:25.271619 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l82gk\" (UniqueName: \"kubernetes.io/projected/fc88df57-1ce1-47f5-b850-7072073c4d72-kube-api-access-l82gk\") on node \"crc\" DevicePath \"\"" Feb 27 16:28:25 crc kubenswrapper[4830]: I0227 16:28:25.271639 4830 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab8f9b5d-9b2e-4cb2-ab5c-392f0bf8ad89-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 16:28:25 crc kubenswrapper[4830]: I0227 16:28:25.271658 4830 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fc88df57-1ce1-47f5-b850-7072073c4d72-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 16:28:25 crc kubenswrapper[4830]: I0227 16:28:25.551147 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8554648995-mp6xh" Feb 27 16:28:25 crc kubenswrapper[4830]: I0227 16:28:25.668340 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-2jzwm" Feb 27 16:28:25 crc kubenswrapper[4830]: I0227 16:28:25.668432 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-2jzwm" event={"ID":"e94cb22b-b51c-4f6d-8cdd-45d6180f8462","Type":"ContainerDied","Data":"bc56063b69929f6b51d6e8d1cfcbbb6fec3e54ce967766bc0bbf75c487f5806d"} Feb 27 16:28:25 crc kubenswrapper[4830]: I0227 16:28:25.669000 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bc56063b69929f6b51d6e8d1cfcbbb6fec3e54ce967766bc0bbf75c487f5806d" Feb 27 16:28:25 crc kubenswrapper[4830]: I0227 16:28:25.670118 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-5550-account-create-update-5hslr" Feb 27 16:28:25 crc kubenswrapper[4830]: I0227 16:28:25.670110 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5550-account-create-update-5hslr" event={"ID":"ab8f9b5d-9b2e-4cb2-ab5c-392f0bf8ad89","Type":"ContainerDied","Data":"20866c5c55d45c055eb5f895b36f6b10133000fd30b60a89515a8ce49b77dadf"} Feb 27 16:28:25 crc kubenswrapper[4830]: I0227 16:28:25.670340 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="20866c5c55d45c055eb5f895b36f6b10133000fd30b60a89515a8ce49b77dadf" Feb 27 16:28:25 crc kubenswrapper[4830]: I0227 16:28:25.671855 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-mz2rm" Feb 27 16:28:25 crc kubenswrapper[4830]: I0227 16:28:25.671906 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-mz2rm" event={"ID":"fc88df57-1ce1-47f5-b850-7072073c4d72","Type":"ContainerDied","Data":"6b0b5d6efbcc72da6e2cca0c5849de3070d77d3f6f18aec76ccd8c335fc148d7"} Feb 27 16:28:25 crc kubenswrapper[4830]: I0227 16:28:25.671995 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6b0b5d6efbcc72da6e2cca0c5849de3070d77d3f6f18aec76ccd8c335fc148d7" Feb 27 16:28:25 crc kubenswrapper[4830]: I0227 16:28:25.673990 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-cjq7v" event={"ID":"1434c895-fa3e-4feb-a56a-0451f1f16a3b","Type":"ContainerStarted","Data":"51057a0e1285abbf0d8d8183a853aec44ee1a9c4c03ece1d5f094ba69d645778"} Feb 27 16:28:25 crc kubenswrapper[4830]: I0227 16:28:25.674131 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-b8fbc5445-cjq7v" Feb 27 16:28:25 crc kubenswrapper[4830]: I0227 16:28:25.711088 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-b8fbc5445-cjq7v" podStartSLOduration=2.711073011 podStartE2EDuration="2.711073011s" podCreationTimestamp="2026-02-27 16:28:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:28:25.696563445 +0000 UTC m=+1301.785835918" watchObservedRunningTime="2026-02-27 16:28:25.711073011 +0000 UTC m=+1301.800345474" Feb 27 16:28:25 crc kubenswrapper[4830]: I0227 16:28:25.988656 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f-etc-swift\") pod \"swift-storage-0\" (UID: \"f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f\") " 
pod="openstack/swift-storage-0" Feb 27 16:28:25 crc kubenswrapper[4830]: E0227 16:28:25.988827 4830 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 27 16:28:25 crc kubenswrapper[4830]: E0227 16:28:25.988849 4830 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 27 16:28:25 crc kubenswrapper[4830]: E0227 16:28:25.988892 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f-etc-swift podName:f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f nodeName:}" failed. No retries permitted until 2026-02-27 16:28:27.988876095 +0000 UTC m=+1304.078148568 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f-etc-swift") pod "swift-storage-0" (UID: "f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f") : configmap "swift-ring-files" not found Feb 27 16:28:26 crc kubenswrapper[4830]: I0227 16:28:26.026014 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-776e-account-create-update-dkfsh" Feb 27 16:28:26 crc kubenswrapper[4830]: I0227 16:28:26.089612 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gk25c\" (UniqueName: \"kubernetes.io/projected/b44b6447-25d6-4a6a-986d-b49fc2729061-kube-api-access-gk25c\") pod \"b44b6447-25d6-4a6a-986d-b49fc2729061\" (UID: \"b44b6447-25d6-4a6a-986d-b49fc2729061\") " Feb 27 16:28:26 crc kubenswrapper[4830]: I0227 16:28:26.089783 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b44b6447-25d6-4a6a-986d-b49fc2729061-operator-scripts\") pod \"b44b6447-25d6-4a6a-986d-b49fc2729061\" (UID: \"b44b6447-25d6-4a6a-986d-b49fc2729061\") " Feb 27 16:28:26 crc kubenswrapper[4830]: I0227 16:28:26.090397 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b44b6447-25d6-4a6a-986d-b49fc2729061-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b44b6447-25d6-4a6a-986d-b49fc2729061" (UID: "b44b6447-25d6-4a6a-986d-b49fc2729061"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:28:26 crc kubenswrapper[4830]: I0227 16:28:26.097259 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b44b6447-25d6-4a6a-986d-b49fc2729061-kube-api-access-gk25c" (OuterVolumeSpecName: "kube-api-access-gk25c") pod "b44b6447-25d6-4a6a-986d-b49fc2729061" (UID: "b44b6447-25d6-4a6a-986d-b49fc2729061"). InnerVolumeSpecName "kube-api-access-gk25c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:28:26 crc kubenswrapper[4830]: I0227 16:28:26.181288 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-mz689"] Feb 27 16:28:26 crc kubenswrapper[4830]: E0227 16:28:26.181701 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e94cb22b-b51c-4f6d-8cdd-45d6180f8462" containerName="mariadb-database-create" Feb 27 16:28:26 crc kubenswrapper[4830]: I0227 16:28:26.181722 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="e94cb22b-b51c-4f6d-8cdd-45d6180f8462" containerName="mariadb-database-create" Feb 27 16:28:26 crc kubenswrapper[4830]: E0227 16:28:26.181738 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc88df57-1ce1-47f5-b850-7072073c4d72" containerName="mariadb-database-create" Feb 27 16:28:26 crc kubenswrapper[4830]: I0227 16:28:26.181747 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc88df57-1ce1-47f5-b850-7072073c4d72" containerName="mariadb-database-create" Feb 27 16:28:26 crc kubenswrapper[4830]: E0227 16:28:26.181774 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b44b6447-25d6-4a6a-986d-b49fc2729061" containerName="mariadb-account-create-update" Feb 27 16:28:26 crc kubenswrapper[4830]: I0227 16:28:26.181783 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="b44b6447-25d6-4a6a-986d-b49fc2729061" containerName="mariadb-account-create-update" Feb 27 16:28:26 crc kubenswrapper[4830]: E0227 16:28:26.181802 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab8f9b5d-9b2e-4cb2-ab5c-392f0bf8ad89" containerName="mariadb-account-create-update" Feb 27 16:28:26 crc kubenswrapper[4830]: I0227 16:28:26.181810 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab8f9b5d-9b2e-4cb2-ab5c-392f0bf8ad89" containerName="mariadb-account-create-update" Feb 27 16:28:26 crc kubenswrapper[4830]: I0227 16:28:26.182015 4830 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="ab8f9b5d-9b2e-4cb2-ab5c-392f0bf8ad89" containerName="mariadb-account-create-update" Feb 27 16:28:26 crc kubenswrapper[4830]: I0227 16:28:26.182029 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="b44b6447-25d6-4a6a-986d-b49fc2729061" containerName="mariadb-account-create-update" Feb 27 16:28:26 crc kubenswrapper[4830]: I0227 16:28:26.182044 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc88df57-1ce1-47f5-b850-7072073c4d72" containerName="mariadb-database-create" Feb 27 16:28:26 crc kubenswrapper[4830]: I0227 16:28:26.182058 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="e94cb22b-b51c-4f6d-8cdd-45d6180f8462" containerName="mariadb-database-create" Feb 27 16:28:26 crc kubenswrapper[4830]: I0227 16:28:26.182661 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-mz689" Feb 27 16:28:26 crc kubenswrapper[4830]: I0227 16:28:26.192048 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gk25c\" (UniqueName: \"kubernetes.io/projected/b44b6447-25d6-4a6a-986d-b49fc2729061-kube-api-access-gk25c\") on node \"crc\" DevicePath \"\"" Feb 27 16:28:26 crc kubenswrapper[4830]: I0227 16:28:26.192091 4830 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b44b6447-25d6-4a6a-986d-b49fc2729061-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 16:28:26 crc kubenswrapper[4830]: I0227 16:28:26.214749 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-mz689"] Feb 27 16:28:26 crc kubenswrapper[4830]: I0227 16:28:26.286615 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-7668-account-create-update-fvfp5"] Feb 27 16:28:26 crc kubenswrapper[4830]: I0227 16:28:26.288011 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-7668-account-create-update-fvfp5" Feb 27 16:28:26 crc kubenswrapper[4830]: I0227 16:28:26.293318 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Feb 27 16:28:26 crc kubenswrapper[4830]: I0227 16:28:26.294298 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9d8cj\" (UniqueName: \"kubernetes.io/projected/df197887-2b7c-4c2c-b482-d411aad7f89d-kube-api-access-9d8cj\") pod \"glance-db-create-mz689\" (UID: \"df197887-2b7c-4c2c-b482-d411aad7f89d\") " pod="openstack/glance-db-create-mz689" Feb 27 16:28:26 crc kubenswrapper[4830]: I0227 16:28:26.294389 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/df197887-2b7c-4c2c-b482-d411aad7f89d-operator-scripts\") pod \"glance-db-create-mz689\" (UID: \"df197887-2b7c-4c2c-b482-d411aad7f89d\") " pod="openstack/glance-db-create-mz689" Feb 27 16:28:26 crc kubenswrapper[4830]: I0227 16:28:26.298003 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-7668-account-create-update-fvfp5"] Feb 27 16:28:26 crc kubenswrapper[4830]: I0227 16:28:26.396399 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9d8cj\" (UniqueName: \"kubernetes.io/projected/df197887-2b7c-4c2c-b482-d411aad7f89d-kube-api-access-9d8cj\") pod \"glance-db-create-mz689\" (UID: \"df197887-2b7c-4c2c-b482-d411aad7f89d\") " pod="openstack/glance-db-create-mz689" Feb 27 16:28:26 crc kubenswrapper[4830]: I0227 16:28:26.396466 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/df197887-2b7c-4c2c-b482-d411aad7f89d-operator-scripts\") pod \"glance-db-create-mz689\" (UID: \"df197887-2b7c-4c2c-b482-d411aad7f89d\") " pod="openstack/glance-db-create-mz689" 
Feb 27 16:28:26 crc kubenswrapper[4830]: I0227 16:28:26.396531 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptlhf\" (UniqueName: \"kubernetes.io/projected/e1998358-5e92-4f90-8163-1705c1614197-kube-api-access-ptlhf\") pod \"glance-7668-account-create-update-fvfp5\" (UID: \"e1998358-5e92-4f90-8163-1705c1614197\") " pod="openstack/glance-7668-account-create-update-fvfp5" Feb 27 16:28:26 crc kubenswrapper[4830]: I0227 16:28:26.396561 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e1998358-5e92-4f90-8163-1705c1614197-operator-scripts\") pod \"glance-7668-account-create-update-fvfp5\" (UID: \"e1998358-5e92-4f90-8163-1705c1614197\") " pod="openstack/glance-7668-account-create-update-fvfp5" Feb 27 16:28:26 crc kubenswrapper[4830]: I0227 16:28:26.401280 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/df197887-2b7c-4c2c-b482-d411aad7f89d-operator-scripts\") pod \"glance-db-create-mz689\" (UID: \"df197887-2b7c-4c2c-b482-d411aad7f89d\") " pod="openstack/glance-db-create-mz689" Feb 27 16:28:26 crc kubenswrapper[4830]: I0227 16:28:26.419206 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9d8cj\" (UniqueName: \"kubernetes.io/projected/df197887-2b7c-4c2c-b482-d411aad7f89d-kube-api-access-9d8cj\") pod \"glance-db-create-mz689\" (UID: \"df197887-2b7c-4c2c-b482-d411aad7f89d\") " pod="openstack/glance-db-create-mz689" Feb 27 16:28:26 crc kubenswrapper[4830]: I0227 16:28:26.497612 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ptlhf\" (UniqueName: \"kubernetes.io/projected/e1998358-5e92-4f90-8163-1705c1614197-kube-api-access-ptlhf\") pod \"glance-7668-account-create-update-fvfp5\" (UID: 
\"e1998358-5e92-4f90-8163-1705c1614197\") " pod="openstack/glance-7668-account-create-update-fvfp5" Feb 27 16:28:26 crc kubenswrapper[4830]: I0227 16:28:26.497662 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e1998358-5e92-4f90-8163-1705c1614197-operator-scripts\") pod \"glance-7668-account-create-update-fvfp5\" (UID: \"e1998358-5e92-4f90-8163-1705c1614197\") " pod="openstack/glance-7668-account-create-update-fvfp5" Feb 27 16:28:26 crc kubenswrapper[4830]: I0227 16:28:26.498444 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e1998358-5e92-4f90-8163-1705c1614197-operator-scripts\") pod \"glance-7668-account-create-update-fvfp5\" (UID: \"e1998358-5e92-4f90-8163-1705c1614197\") " pod="openstack/glance-7668-account-create-update-fvfp5" Feb 27 16:28:26 crc kubenswrapper[4830]: I0227 16:28:26.503221 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-mz689" Feb 27 16:28:26 crc kubenswrapper[4830]: I0227 16:28:26.549548 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ptlhf\" (UniqueName: \"kubernetes.io/projected/e1998358-5e92-4f90-8163-1705c1614197-kube-api-access-ptlhf\") pod \"glance-7668-account-create-update-fvfp5\" (UID: \"e1998358-5e92-4f90-8163-1705c1614197\") " pod="openstack/glance-7668-account-create-update-fvfp5" Feb 27 16:28:26 crc kubenswrapper[4830]: I0227 16:28:26.611305 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-7668-account-create-update-fvfp5" Feb 27 16:28:26 crc kubenswrapper[4830]: I0227 16:28:26.694783 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-776e-account-create-update-dkfsh" Feb 27 16:28:26 crc kubenswrapper[4830]: I0227 16:28:26.694828 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-776e-account-create-update-dkfsh" event={"ID":"b44b6447-25d6-4a6a-986d-b49fc2729061","Type":"ContainerDied","Data":"ae798cbab13771b93416497d24547f216d0d071f63b00a90a8857e432c41aaeb"} Feb 27 16:28:26 crc kubenswrapper[4830]: I0227 16:28:26.694863 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ae798cbab13771b93416497d24547f216d0d071f63b00a90a8857e432c41aaeb" Feb 27 16:28:26 crc kubenswrapper[4830]: I0227 16:28:26.783082 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f384d75-651d-4e2b-9944-6df7727f9878" path="/var/lib/kubelet/pods/6f384d75-651d-4e2b-9944-6df7727f9878/volumes" Feb 27 16:28:26 crc kubenswrapper[4830]: I0227 16:28:26.809512 4830 scope.go:117] "RemoveContainer" containerID="886dac081110561ac958d0214372fee20a21a53a90469a1c53e73815d1340221" Feb 27 16:28:27 crc kubenswrapper[4830]: I0227 16:28:27.076167 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-mz689"] Feb 27 16:28:27 crc kubenswrapper[4830]: W0227 16:28:27.081387 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddf197887_2b7c_4c2c_b482_d411aad7f89d.slice/crio-fadd91929e5c5620326ec75e0f35cfc9c29df22b3db2ff587779d65bb0a61c9a WatchSource:0}: Error finding container fadd91929e5c5620326ec75e0f35cfc9c29df22b3db2ff587779d65bb0a61c9a: Status 404 returned error can't find the container with id fadd91929e5c5620326ec75e0f35cfc9c29df22b3db2ff587779d65bb0a61c9a Feb 27 16:28:27 crc kubenswrapper[4830]: I0227 16:28:27.140245 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-7668-account-create-update-fvfp5"] Feb 27 16:28:27 crc kubenswrapper[4830]: W0227 16:28:27.149615 4830 
manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode1998358_5e92_4f90_8163_1705c1614197.slice/crio-aa4efa6ef92b19014b3498a1cb1b19e31c4e14e105ec2f5c5a7f0e7a0b82cd16 WatchSource:0}: Error finding container aa4efa6ef92b19014b3498a1cb1b19e31c4e14e105ec2f5c5a7f0e7a0b82cd16: Status 404 returned error can't find the container with id aa4efa6ef92b19014b3498a1cb1b19e31c4e14e105ec2f5c5a7f0e7a0b82cd16 Feb 27 16:28:27 crc kubenswrapper[4830]: I0227 16:28:27.707838 4830 generic.go:334] "Generic (PLEG): container finished" podID="e1998358-5e92-4f90-8163-1705c1614197" containerID="500cf1204bd29c7d932fe8fd9f4fcaa432d627c80cd7cc1c4807fae6e659c38a" exitCode=0 Feb 27 16:28:27 crc kubenswrapper[4830]: I0227 16:28:27.707905 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-7668-account-create-update-fvfp5" event={"ID":"e1998358-5e92-4f90-8163-1705c1614197","Type":"ContainerDied","Data":"500cf1204bd29c7d932fe8fd9f4fcaa432d627c80cd7cc1c4807fae6e659c38a"} Feb 27 16:28:27 crc kubenswrapper[4830]: I0227 16:28:27.708023 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-7668-account-create-update-fvfp5" event={"ID":"e1998358-5e92-4f90-8163-1705c1614197","Type":"ContainerStarted","Data":"aa4efa6ef92b19014b3498a1cb1b19e31c4e14e105ec2f5c5a7f0e7a0b82cd16"} Feb 27 16:28:27 crc kubenswrapper[4830]: I0227 16:28:27.710441 4830 generic.go:334] "Generic (PLEG): container finished" podID="df197887-2b7c-4c2c-b482-d411aad7f89d" containerID="32f67a0fa88a204c52134df945dd8bacfe73574220c11eccbe9250a8c9a31014" exitCode=0 Feb 27 16:28:27 crc kubenswrapper[4830]: I0227 16:28:27.710504 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-mz689" event={"ID":"df197887-2b7c-4c2c-b482-d411aad7f89d","Type":"ContainerDied","Data":"32f67a0fa88a204c52134df945dd8bacfe73574220c11eccbe9250a8c9a31014"} Feb 27 16:28:27 crc kubenswrapper[4830]: I0227 
16:28:27.710539 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-mz689" event={"ID":"df197887-2b7c-4c2c-b482-d411aad7f89d","Type":"ContainerStarted","Data":"fadd91929e5c5620326ec75e0f35cfc9c29df22b3db2ff587779d65bb0a61c9a"} Feb 27 16:28:28 crc kubenswrapper[4830]: I0227 16:28:28.021462 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f-etc-swift\") pod \"swift-storage-0\" (UID: \"f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f\") " pod="openstack/swift-storage-0" Feb 27 16:28:28 crc kubenswrapper[4830]: E0227 16:28:28.021775 4830 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 27 16:28:28 crc kubenswrapper[4830]: E0227 16:28:28.021846 4830 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 27 16:28:28 crc kubenswrapper[4830]: E0227 16:28:28.022019 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f-etc-swift podName:f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f nodeName:}" failed. No retries permitted until 2026-02-27 16:28:32.021925922 +0000 UTC m=+1308.111198435 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f-etc-swift") pod "swift-storage-0" (UID: "f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f") : configmap "swift-ring-files" not found Feb 27 16:28:28 crc kubenswrapper[4830]: I0227 16:28:28.025813 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-vw4sx"] Feb 27 16:28:28 crc kubenswrapper[4830]: I0227 16:28:28.030090 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-vw4sx" Feb 27 16:28:28 crc kubenswrapper[4830]: I0227 16:28:28.034561 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Feb 27 16:28:28 crc kubenswrapper[4830]: I0227 16:28:28.061497 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-vw4sx"] Feb 27 16:28:28 crc kubenswrapper[4830]: I0227 16:28:28.124066 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnhjx\" (UniqueName: \"kubernetes.io/projected/c3bcc49e-737a-4e05-a8e2-e8007c30d9c7-kube-api-access-nnhjx\") pod \"root-account-create-update-vw4sx\" (UID: \"c3bcc49e-737a-4e05-a8e2-e8007c30d9c7\") " pod="openstack/root-account-create-update-vw4sx" Feb 27 16:28:28 crc kubenswrapper[4830]: I0227 16:28:28.124219 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c3bcc49e-737a-4e05-a8e2-e8007c30d9c7-operator-scripts\") pod \"root-account-create-update-vw4sx\" (UID: \"c3bcc49e-737a-4e05-a8e2-e8007c30d9c7\") " pod="openstack/root-account-create-update-vw4sx" Feb 27 16:28:28 crc kubenswrapper[4830]: I0227 16:28:28.225780 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nnhjx\" (UniqueName: \"kubernetes.io/projected/c3bcc49e-737a-4e05-a8e2-e8007c30d9c7-kube-api-access-nnhjx\") pod \"root-account-create-update-vw4sx\" (UID: \"c3bcc49e-737a-4e05-a8e2-e8007c30d9c7\") " pod="openstack/root-account-create-update-vw4sx" Feb 27 16:28:28 crc kubenswrapper[4830]: I0227 16:28:28.225926 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c3bcc49e-737a-4e05-a8e2-e8007c30d9c7-operator-scripts\") pod \"root-account-create-update-vw4sx\" (UID: 
\"c3bcc49e-737a-4e05-a8e2-e8007c30d9c7\") " pod="openstack/root-account-create-update-vw4sx" Feb 27 16:28:28 crc kubenswrapper[4830]: I0227 16:28:28.227256 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c3bcc49e-737a-4e05-a8e2-e8007c30d9c7-operator-scripts\") pod \"root-account-create-update-vw4sx\" (UID: \"c3bcc49e-737a-4e05-a8e2-e8007c30d9c7\") " pod="openstack/root-account-create-update-vw4sx" Feb 27 16:28:28 crc kubenswrapper[4830]: I0227 16:28:28.270231 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nnhjx\" (UniqueName: \"kubernetes.io/projected/c3bcc49e-737a-4e05-a8e2-e8007c30d9c7-kube-api-access-nnhjx\") pod \"root-account-create-update-vw4sx\" (UID: \"c3bcc49e-737a-4e05-a8e2-e8007c30d9c7\") " pod="openstack/root-account-create-update-vw4sx" Feb 27 16:28:28 crc kubenswrapper[4830]: I0227 16:28:28.286639 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-v5xs2"] Feb 27 16:28:28 crc kubenswrapper[4830]: I0227 16:28:28.288041 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-v5xs2" Feb 27 16:28:28 crc kubenswrapper[4830]: I0227 16:28:28.293162 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Feb 27 16:28:28 crc kubenswrapper[4830]: I0227 16:28:28.293294 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Feb 27 16:28:28 crc kubenswrapper[4830]: I0227 16:28:28.293398 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 27 16:28:28 crc kubenswrapper[4830]: I0227 16:28:28.297426 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-v5xs2"] Feb 27 16:28:28 crc kubenswrapper[4830]: I0227 16:28:28.327184 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/2687dd0d-1fea-48d6-a53a-b10ccfa7d223-etc-swift\") pod \"swift-ring-rebalance-v5xs2\" (UID: \"2687dd0d-1fea-48d6-a53a-b10ccfa7d223\") " pod="openstack/swift-ring-rebalance-v5xs2" Feb 27 16:28:28 crc kubenswrapper[4830]: I0227 16:28:28.327684 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbhhm\" (UniqueName: \"kubernetes.io/projected/2687dd0d-1fea-48d6-a53a-b10ccfa7d223-kube-api-access-hbhhm\") pod \"swift-ring-rebalance-v5xs2\" (UID: \"2687dd0d-1fea-48d6-a53a-b10ccfa7d223\") " pod="openstack/swift-ring-rebalance-v5xs2" Feb 27 16:28:28 crc kubenswrapper[4830]: I0227 16:28:28.327749 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/2687dd0d-1fea-48d6-a53a-b10ccfa7d223-dispersionconf\") pod \"swift-ring-rebalance-v5xs2\" (UID: \"2687dd0d-1fea-48d6-a53a-b10ccfa7d223\") " pod="openstack/swift-ring-rebalance-v5xs2" Feb 27 16:28:28 crc kubenswrapper[4830]: I0227 
16:28:28.327848 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/2687dd0d-1fea-48d6-a53a-b10ccfa7d223-ring-data-devices\") pod \"swift-ring-rebalance-v5xs2\" (UID: \"2687dd0d-1fea-48d6-a53a-b10ccfa7d223\") " pod="openstack/swift-ring-rebalance-v5xs2" Feb 27 16:28:28 crc kubenswrapper[4830]: I0227 16:28:28.327940 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2687dd0d-1fea-48d6-a53a-b10ccfa7d223-scripts\") pod \"swift-ring-rebalance-v5xs2\" (UID: \"2687dd0d-1fea-48d6-a53a-b10ccfa7d223\") " pod="openstack/swift-ring-rebalance-v5xs2" Feb 27 16:28:28 crc kubenswrapper[4830]: I0227 16:28:28.328010 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/2687dd0d-1fea-48d6-a53a-b10ccfa7d223-swiftconf\") pod \"swift-ring-rebalance-v5xs2\" (UID: \"2687dd0d-1fea-48d6-a53a-b10ccfa7d223\") " pod="openstack/swift-ring-rebalance-v5xs2" Feb 27 16:28:28 crc kubenswrapper[4830]: I0227 16:28:28.328047 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2687dd0d-1fea-48d6-a53a-b10ccfa7d223-combined-ca-bundle\") pod \"swift-ring-rebalance-v5xs2\" (UID: \"2687dd0d-1fea-48d6-a53a-b10ccfa7d223\") " pod="openstack/swift-ring-rebalance-v5xs2" Feb 27 16:28:28 crc kubenswrapper[4830]: I0227 16:28:28.360202 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-vw4sx" Feb 27 16:28:28 crc kubenswrapper[4830]: I0227 16:28:28.430437 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/2687dd0d-1fea-48d6-a53a-b10ccfa7d223-dispersionconf\") pod \"swift-ring-rebalance-v5xs2\" (UID: \"2687dd0d-1fea-48d6-a53a-b10ccfa7d223\") " pod="openstack/swift-ring-rebalance-v5xs2" Feb 27 16:28:28 crc kubenswrapper[4830]: I0227 16:28:28.430538 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/2687dd0d-1fea-48d6-a53a-b10ccfa7d223-ring-data-devices\") pod \"swift-ring-rebalance-v5xs2\" (UID: \"2687dd0d-1fea-48d6-a53a-b10ccfa7d223\") " pod="openstack/swift-ring-rebalance-v5xs2" Feb 27 16:28:28 crc kubenswrapper[4830]: I0227 16:28:28.430590 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2687dd0d-1fea-48d6-a53a-b10ccfa7d223-scripts\") pod \"swift-ring-rebalance-v5xs2\" (UID: \"2687dd0d-1fea-48d6-a53a-b10ccfa7d223\") " pod="openstack/swift-ring-rebalance-v5xs2" Feb 27 16:28:28 crc kubenswrapper[4830]: I0227 16:28:28.430631 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/2687dd0d-1fea-48d6-a53a-b10ccfa7d223-swiftconf\") pod \"swift-ring-rebalance-v5xs2\" (UID: \"2687dd0d-1fea-48d6-a53a-b10ccfa7d223\") " pod="openstack/swift-ring-rebalance-v5xs2" Feb 27 16:28:28 crc kubenswrapper[4830]: I0227 16:28:28.430651 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2687dd0d-1fea-48d6-a53a-b10ccfa7d223-combined-ca-bundle\") pod \"swift-ring-rebalance-v5xs2\" (UID: \"2687dd0d-1fea-48d6-a53a-b10ccfa7d223\") " pod="openstack/swift-ring-rebalance-v5xs2" Feb 27 16:28:28 crc 
kubenswrapper[4830]: I0227 16:28:28.430718 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/2687dd0d-1fea-48d6-a53a-b10ccfa7d223-etc-swift\") pod \"swift-ring-rebalance-v5xs2\" (UID: \"2687dd0d-1fea-48d6-a53a-b10ccfa7d223\") " pod="openstack/swift-ring-rebalance-v5xs2" Feb 27 16:28:28 crc kubenswrapper[4830]: I0227 16:28:28.430750 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbhhm\" (UniqueName: \"kubernetes.io/projected/2687dd0d-1fea-48d6-a53a-b10ccfa7d223-kube-api-access-hbhhm\") pod \"swift-ring-rebalance-v5xs2\" (UID: \"2687dd0d-1fea-48d6-a53a-b10ccfa7d223\") " pod="openstack/swift-ring-rebalance-v5xs2" Feb 27 16:28:28 crc kubenswrapper[4830]: I0227 16:28:28.431595 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/2687dd0d-1fea-48d6-a53a-b10ccfa7d223-etc-swift\") pod \"swift-ring-rebalance-v5xs2\" (UID: \"2687dd0d-1fea-48d6-a53a-b10ccfa7d223\") " pod="openstack/swift-ring-rebalance-v5xs2" Feb 27 16:28:28 crc kubenswrapper[4830]: I0227 16:28:28.432141 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2687dd0d-1fea-48d6-a53a-b10ccfa7d223-scripts\") pod \"swift-ring-rebalance-v5xs2\" (UID: \"2687dd0d-1fea-48d6-a53a-b10ccfa7d223\") " pod="openstack/swift-ring-rebalance-v5xs2" Feb 27 16:28:28 crc kubenswrapper[4830]: I0227 16:28:28.432279 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/2687dd0d-1fea-48d6-a53a-b10ccfa7d223-ring-data-devices\") pod \"swift-ring-rebalance-v5xs2\" (UID: \"2687dd0d-1fea-48d6-a53a-b10ccfa7d223\") " pod="openstack/swift-ring-rebalance-v5xs2" Feb 27 16:28:28 crc kubenswrapper[4830]: I0227 16:28:28.434209 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2687dd0d-1fea-48d6-a53a-b10ccfa7d223-combined-ca-bundle\") pod \"swift-ring-rebalance-v5xs2\" (UID: \"2687dd0d-1fea-48d6-a53a-b10ccfa7d223\") " pod="openstack/swift-ring-rebalance-v5xs2" Feb 27 16:28:28 crc kubenswrapper[4830]: I0227 16:28:28.434338 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/2687dd0d-1fea-48d6-a53a-b10ccfa7d223-dispersionconf\") pod \"swift-ring-rebalance-v5xs2\" (UID: \"2687dd0d-1fea-48d6-a53a-b10ccfa7d223\") " pod="openstack/swift-ring-rebalance-v5xs2" Feb 27 16:28:28 crc kubenswrapper[4830]: I0227 16:28:28.436925 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/2687dd0d-1fea-48d6-a53a-b10ccfa7d223-swiftconf\") pod \"swift-ring-rebalance-v5xs2\" (UID: \"2687dd0d-1fea-48d6-a53a-b10ccfa7d223\") " pod="openstack/swift-ring-rebalance-v5xs2" Feb 27 16:28:28 crc kubenswrapper[4830]: I0227 16:28:28.448115 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbhhm\" (UniqueName: \"kubernetes.io/projected/2687dd0d-1fea-48d6-a53a-b10ccfa7d223-kube-api-access-hbhhm\") pod \"swift-ring-rebalance-v5xs2\" (UID: \"2687dd0d-1fea-48d6-a53a-b10ccfa7d223\") " pod="openstack/swift-ring-rebalance-v5xs2" Feb 27 16:28:28 crc kubenswrapper[4830]: I0227 16:28:28.629438 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-v5xs2" Feb 27 16:28:28 crc kubenswrapper[4830]: I0227 16:28:28.829940 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-vw4sx"] Feb 27 16:28:28 crc kubenswrapper[4830]: W0227 16:28:28.835680 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc3bcc49e_737a_4e05_a8e2_e8007c30d9c7.slice/crio-713b8135a475a2ecfe89a7ec9ef43e70f72a25057c613d2c4cafbc7677ed79cd WatchSource:0}: Error finding container 713b8135a475a2ecfe89a7ec9ef43e70f72a25057c613d2c4cafbc7677ed79cd: Status 404 returned error can't find the container with id 713b8135a475a2ecfe89a7ec9ef43e70f72a25057c613d2c4cafbc7677ed79cd Feb 27 16:28:29 crc kubenswrapper[4830]: I0227 16:28:29.085743 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-mz689" Feb 27 16:28:29 crc kubenswrapper[4830]: I0227 16:28:29.113932 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-v5xs2"] Feb 27 16:28:29 crc kubenswrapper[4830]: I0227 16:28:29.143651 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9d8cj\" (UniqueName: \"kubernetes.io/projected/df197887-2b7c-4c2c-b482-d411aad7f89d-kube-api-access-9d8cj\") pod \"df197887-2b7c-4c2c-b482-d411aad7f89d\" (UID: \"df197887-2b7c-4c2c-b482-d411aad7f89d\") " Feb 27 16:28:29 crc kubenswrapper[4830]: I0227 16:28:29.143762 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/df197887-2b7c-4c2c-b482-d411aad7f89d-operator-scripts\") pod \"df197887-2b7c-4c2c-b482-d411aad7f89d\" (UID: \"df197887-2b7c-4c2c-b482-d411aad7f89d\") " Feb 27 16:28:29 crc kubenswrapper[4830]: I0227 16:28:29.144771 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/df197887-2b7c-4c2c-b482-d411aad7f89d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "df197887-2b7c-4c2c-b482-d411aad7f89d" (UID: "df197887-2b7c-4c2c-b482-d411aad7f89d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:28:29 crc kubenswrapper[4830]: I0227 16:28:29.151766 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df197887-2b7c-4c2c-b482-d411aad7f89d-kube-api-access-9d8cj" (OuterVolumeSpecName: "kube-api-access-9d8cj") pod "df197887-2b7c-4c2c-b482-d411aad7f89d" (UID: "df197887-2b7c-4c2c-b482-d411aad7f89d"). InnerVolumeSpecName "kube-api-access-9d8cj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:28:29 crc kubenswrapper[4830]: I0227 16:28:29.153479 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-7668-account-create-update-fvfp5" Feb 27 16:28:29 crc kubenswrapper[4830]: I0227 16:28:29.245444 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ptlhf\" (UniqueName: \"kubernetes.io/projected/e1998358-5e92-4f90-8163-1705c1614197-kube-api-access-ptlhf\") pod \"e1998358-5e92-4f90-8163-1705c1614197\" (UID: \"e1998358-5e92-4f90-8163-1705c1614197\") " Feb 27 16:28:29 crc kubenswrapper[4830]: I0227 16:28:29.245495 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e1998358-5e92-4f90-8163-1705c1614197-operator-scripts\") pod \"e1998358-5e92-4f90-8163-1705c1614197\" (UID: \"e1998358-5e92-4f90-8163-1705c1614197\") " Feb 27 16:28:29 crc kubenswrapper[4830]: I0227 16:28:29.245851 4830 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/df197887-2b7c-4c2c-b482-d411aad7f89d-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 16:28:29 crc 
kubenswrapper[4830]: I0227 16:28:29.245872 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9d8cj\" (UniqueName: \"kubernetes.io/projected/df197887-2b7c-4c2c-b482-d411aad7f89d-kube-api-access-9d8cj\") on node \"crc\" DevicePath \"\"" Feb 27 16:28:29 crc kubenswrapper[4830]: I0227 16:28:29.246385 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1998358-5e92-4f90-8163-1705c1614197-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e1998358-5e92-4f90-8163-1705c1614197" (UID: "e1998358-5e92-4f90-8163-1705c1614197"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:28:29 crc kubenswrapper[4830]: I0227 16:28:29.258508 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1998358-5e92-4f90-8163-1705c1614197-kube-api-access-ptlhf" (OuterVolumeSpecName: "kube-api-access-ptlhf") pod "e1998358-5e92-4f90-8163-1705c1614197" (UID: "e1998358-5e92-4f90-8163-1705c1614197"). InnerVolumeSpecName "kube-api-access-ptlhf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:28:29 crc kubenswrapper[4830]: I0227 16:28:29.347921 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ptlhf\" (UniqueName: \"kubernetes.io/projected/e1998358-5e92-4f90-8163-1705c1614197-kube-api-access-ptlhf\") on node \"crc\" DevicePath \"\"" Feb 27 16:28:29 crc kubenswrapper[4830]: I0227 16:28:29.347976 4830 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e1998358-5e92-4f90-8163-1705c1614197-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 16:28:29 crc kubenswrapper[4830]: I0227 16:28:29.733990 4830 generic.go:334] "Generic (PLEG): container finished" podID="c3bcc49e-737a-4e05-a8e2-e8007c30d9c7" containerID="c38481cf7ee01c4ffc8908412dd17ed7ec743f3072b5c6e5861cbac77132070e" exitCode=0 Feb 27 16:28:29 crc kubenswrapper[4830]: I0227 16:28:29.734090 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-vw4sx" event={"ID":"c3bcc49e-737a-4e05-a8e2-e8007c30d9c7","Type":"ContainerDied","Data":"c38481cf7ee01c4ffc8908412dd17ed7ec743f3072b5c6e5861cbac77132070e"} Feb 27 16:28:29 crc kubenswrapper[4830]: I0227 16:28:29.734128 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-vw4sx" event={"ID":"c3bcc49e-737a-4e05-a8e2-e8007c30d9c7","Type":"ContainerStarted","Data":"713b8135a475a2ecfe89a7ec9ef43e70f72a25057c613d2c4cafbc7677ed79cd"} Feb 27 16:28:29 crc kubenswrapper[4830]: I0227 16:28:29.736399 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-v5xs2" event={"ID":"2687dd0d-1fea-48d6-a53a-b10ccfa7d223","Type":"ContainerStarted","Data":"27ac5df190b55dc161aa97f52b0d755b5b217c436b44c42416a82e14035c31d0"} Feb 27 16:28:29 crc kubenswrapper[4830]: I0227 16:28:29.741849 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-7668-account-create-update-fvfp5" Feb 27 16:28:29 crc kubenswrapper[4830]: I0227 16:28:29.742489 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-7668-account-create-update-fvfp5" event={"ID":"e1998358-5e92-4f90-8163-1705c1614197","Type":"ContainerDied","Data":"aa4efa6ef92b19014b3498a1cb1b19e31c4e14e105ec2f5c5a7f0e7a0b82cd16"} Feb 27 16:28:29 crc kubenswrapper[4830]: I0227 16:28:29.742544 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aa4efa6ef92b19014b3498a1cb1b19e31c4e14e105ec2f5c5a7f0e7a0b82cd16" Feb 27 16:28:29 crc kubenswrapper[4830]: I0227 16:28:29.745295 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-mz689" event={"ID":"df197887-2b7c-4c2c-b482-d411aad7f89d","Type":"ContainerDied","Data":"fadd91929e5c5620326ec75e0f35cfc9c29df22b3db2ff587779d65bb0a61c9a"} Feb 27 16:28:29 crc kubenswrapper[4830]: I0227 16:28:29.745358 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fadd91929e5c5620326ec75e0f35cfc9c29df22b3db2ff587779d65bb0a61c9a" Feb 27 16:28:29 crc kubenswrapper[4830]: I0227 16:28:29.745441 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-mz689" Feb 27 16:28:31 crc kubenswrapper[4830]: I0227 16:28:31.518656 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-jhwfg"] Feb 27 16:28:31 crc kubenswrapper[4830]: E0227 16:28:31.519453 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1998358-5e92-4f90-8163-1705c1614197" containerName="mariadb-account-create-update" Feb 27 16:28:31 crc kubenswrapper[4830]: I0227 16:28:31.519469 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1998358-5e92-4f90-8163-1705c1614197" containerName="mariadb-account-create-update" Feb 27 16:28:31 crc kubenswrapper[4830]: E0227 16:28:31.519488 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df197887-2b7c-4c2c-b482-d411aad7f89d" containerName="mariadb-database-create" Feb 27 16:28:31 crc kubenswrapper[4830]: I0227 16:28:31.519494 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="df197887-2b7c-4c2c-b482-d411aad7f89d" containerName="mariadb-database-create" Feb 27 16:28:31 crc kubenswrapper[4830]: I0227 16:28:31.519661 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="df197887-2b7c-4c2c-b482-d411aad7f89d" containerName="mariadb-database-create" Feb 27 16:28:31 crc kubenswrapper[4830]: I0227 16:28:31.519678 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1998358-5e92-4f90-8163-1705c1614197" containerName="mariadb-account-create-update" Feb 27 16:28:31 crc kubenswrapper[4830]: I0227 16:28:31.520219 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-jhwfg" Feb 27 16:28:31 crc kubenswrapper[4830]: I0227 16:28:31.522220 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Feb 27 16:28:31 crc kubenswrapper[4830]: I0227 16:28:31.522252 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-mh994" Feb 27 16:28:31 crc kubenswrapper[4830]: I0227 16:28:31.528007 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-jhwfg"] Feb 27 16:28:31 crc kubenswrapper[4830]: I0227 16:28:31.587699 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khdl9\" (UniqueName: \"kubernetes.io/projected/034a69b5-6540-4b46-b0d5-55098d2f6467-kube-api-access-khdl9\") pod \"glance-db-sync-jhwfg\" (UID: \"034a69b5-6540-4b46-b0d5-55098d2f6467\") " pod="openstack/glance-db-sync-jhwfg" Feb 27 16:28:31 crc kubenswrapper[4830]: I0227 16:28:31.587776 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/034a69b5-6540-4b46-b0d5-55098d2f6467-db-sync-config-data\") pod \"glance-db-sync-jhwfg\" (UID: \"034a69b5-6540-4b46-b0d5-55098d2f6467\") " pod="openstack/glance-db-sync-jhwfg" Feb 27 16:28:31 crc kubenswrapper[4830]: I0227 16:28:31.587806 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/034a69b5-6540-4b46-b0d5-55098d2f6467-combined-ca-bundle\") pod \"glance-db-sync-jhwfg\" (UID: \"034a69b5-6540-4b46-b0d5-55098d2f6467\") " pod="openstack/glance-db-sync-jhwfg" Feb 27 16:28:31 crc kubenswrapper[4830]: I0227 16:28:31.588003 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/034a69b5-6540-4b46-b0d5-55098d2f6467-config-data\") pod \"glance-db-sync-jhwfg\" (UID: \"034a69b5-6540-4b46-b0d5-55098d2f6467\") " pod="openstack/glance-db-sync-jhwfg" Feb 27 16:28:31 crc kubenswrapper[4830]: I0227 16:28:31.688640 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/034a69b5-6540-4b46-b0d5-55098d2f6467-config-data\") pod \"glance-db-sync-jhwfg\" (UID: \"034a69b5-6540-4b46-b0d5-55098d2f6467\") " pod="openstack/glance-db-sync-jhwfg" Feb 27 16:28:31 crc kubenswrapper[4830]: I0227 16:28:31.688698 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-khdl9\" (UniqueName: \"kubernetes.io/projected/034a69b5-6540-4b46-b0d5-55098d2f6467-kube-api-access-khdl9\") pod \"glance-db-sync-jhwfg\" (UID: \"034a69b5-6540-4b46-b0d5-55098d2f6467\") " pod="openstack/glance-db-sync-jhwfg" Feb 27 16:28:31 crc kubenswrapper[4830]: I0227 16:28:31.688733 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/034a69b5-6540-4b46-b0d5-55098d2f6467-db-sync-config-data\") pod \"glance-db-sync-jhwfg\" (UID: \"034a69b5-6540-4b46-b0d5-55098d2f6467\") " pod="openstack/glance-db-sync-jhwfg" Feb 27 16:28:31 crc kubenswrapper[4830]: I0227 16:28:31.688757 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/034a69b5-6540-4b46-b0d5-55098d2f6467-combined-ca-bundle\") pod \"glance-db-sync-jhwfg\" (UID: \"034a69b5-6540-4b46-b0d5-55098d2f6467\") " pod="openstack/glance-db-sync-jhwfg" Feb 27 16:28:31 crc kubenswrapper[4830]: I0227 16:28:31.697548 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/034a69b5-6540-4b46-b0d5-55098d2f6467-combined-ca-bundle\") pod \"glance-db-sync-jhwfg\" (UID: 
\"034a69b5-6540-4b46-b0d5-55098d2f6467\") " pod="openstack/glance-db-sync-jhwfg" Feb 27 16:28:31 crc kubenswrapper[4830]: I0227 16:28:31.701483 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/034a69b5-6540-4b46-b0d5-55098d2f6467-db-sync-config-data\") pod \"glance-db-sync-jhwfg\" (UID: \"034a69b5-6540-4b46-b0d5-55098d2f6467\") " pod="openstack/glance-db-sync-jhwfg" Feb 27 16:28:31 crc kubenswrapper[4830]: I0227 16:28:31.705578 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/034a69b5-6540-4b46-b0d5-55098d2f6467-config-data\") pod \"glance-db-sync-jhwfg\" (UID: \"034a69b5-6540-4b46-b0d5-55098d2f6467\") " pod="openstack/glance-db-sync-jhwfg" Feb 27 16:28:31 crc kubenswrapper[4830]: I0227 16:28:31.708702 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-khdl9\" (UniqueName: \"kubernetes.io/projected/034a69b5-6540-4b46-b0d5-55098d2f6467-kube-api-access-khdl9\") pod \"glance-db-sync-jhwfg\" (UID: \"034a69b5-6540-4b46-b0d5-55098d2f6467\") " pod="openstack/glance-db-sync-jhwfg" Feb 27 16:28:31 crc kubenswrapper[4830]: I0227 16:28:31.838745 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-jhwfg" Feb 27 16:28:32 crc kubenswrapper[4830]: I0227 16:28:32.094332 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f-etc-swift\") pod \"swift-storage-0\" (UID: \"f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f\") " pod="openstack/swift-storage-0" Feb 27 16:28:32 crc kubenswrapper[4830]: E0227 16:28:32.094565 4830 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 27 16:28:32 crc kubenswrapper[4830]: E0227 16:28:32.094599 4830 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 27 16:28:32 crc kubenswrapper[4830]: E0227 16:28:32.094665 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f-etc-swift podName:f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f nodeName:}" failed. No retries permitted until 2026-02-27 16:28:40.094643945 +0000 UTC m=+1316.183916418 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f-etc-swift") pod "swift-storage-0" (UID: "f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f") : configmap "swift-ring-files" not found Feb 27 16:28:32 crc kubenswrapper[4830]: I0227 16:28:32.526204 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-vw4sx" Feb 27 16:28:32 crc kubenswrapper[4830]: I0227 16:28:32.612705 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c3bcc49e-737a-4e05-a8e2-e8007c30d9c7-operator-scripts\") pod \"c3bcc49e-737a-4e05-a8e2-e8007c30d9c7\" (UID: \"c3bcc49e-737a-4e05-a8e2-e8007c30d9c7\") " Feb 27 16:28:32 crc kubenswrapper[4830]: I0227 16:28:32.612831 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nnhjx\" (UniqueName: \"kubernetes.io/projected/c3bcc49e-737a-4e05-a8e2-e8007c30d9c7-kube-api-access-nnhjx\") pod \"c3bcc49e-737a-4e05-a8e2-e8007c30d9c7\" (UID: \"c3bcc49e-737a-4e05-a8e2-e8007c30d9c7\") " Feb 27 16:28:32 crc kubenswrapper[4830]: I0227 16:28:32.613727 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3bcc49e-737a-4e05-a8e2-e8007c30d9c7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c3bcc49e-737a-4e05-a8e2-e8007c30d9c7" (UID: "c3bcc49e-737a-4e05-a8e2-e8007c30d9c7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:28:32 crc kubenswrapper[4830]: I0227 16:28:32.617195 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3bcc49e-737a-4e05-a8e2-e8007c30d9c7-kube-api-access-nnhjx" (OuterVolumeSpecName: "kube-api-access-nnhjx") pod "c3bcc49e-737a-4e05-a8e2-e8007c30d9c7" (UID: "c3bcc49e-737a-4e05-a8e2-e8007c30d9c7"). InnerVolumeSpecName "kube-api-access-nnhjx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:28:32 crc kubenswrapper[4830]: I0227 16:28:32.714687 4830 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c3bcc49e-737a-4e05-a8e2-e8007c30d9c7-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 16:28:32 crc kubenswrapper[4830]: I0227 16:28:32.714715 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nnhjx\" (UniqueName: \"kubernetes.io/projected/c3bcc49e-737a-4e05-a8e2-e8007c30d9c7-kube-api-access-nnhjx\") on node \"crc\" DevicePath \"\"" Feb 27 16:28:32 crc kubenswrapper[4830]: I0227 16:28:32.777678 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-v5xs2" event={"ID":"2687dd0d-1fea-48d6-a53a-b10ccfa7d223","Type":"ContainerStarted","Data":"03feac40296d7a4209bb84be744dfc7a7221fe91f52d107820ff8c50b9949c8f"} Feb 27 16:28:32 crc kubenswrapper[4830]: I0227 16:28:32.780283 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-vw4sx" event={"ID":"c3bcc49e-737a-4e05-a8e2-e8007c30d9c7","Type":"ContainerDied","Data":"713b8135a475a2ecfe89a7ec9ef43e70f72a25057c613d2c4cafbc7677ed79cd"} Feb 27 16:28:32 crc kubenswrapper[4830]: I0227 16:28:32.780326 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-vw4sx" Feb 27 16:28:32 crc kubenswrapper[4830]: I0227 16:28:32.780358 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="713b8135a475a2ecfe89a7ec9ef43e70f72a25057c613d2c4cafbc7677ed79cd" Feb 27 16:28:32 crc kubenswrapper[4830]: I0227 16:28:32.796729 4830 generic.go:334] "Generic (PLEG): container finished" podID="aa5b7bdd-50bb-4123-a32a-0c7e97035a3f" containerID="aea522c2ecab41c50d2a7430cd094093e90f5bf0a044bc4b659d102558a7db55" exitCode=0 Feb 27 16:28:32 crc kubenswrapper[4830]: I0227 16:28:32.796928 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"aa5b7bdd-50bb-4123-a32a-0c7e97035a3f","Type":"ContainerDied","Data":"aea522c2ecab41c50d2a7430cd094093e90f5bf0a044bc4b659d102558a7db55"} Feb 27 16:28:32 crc kubenswrapper[4830]: I0227 16:28:32.805101 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-v5xs2" podStartSLOduration=1.420931433 podStartE2EDuration="4.805081085s" podCreationTimestamp="2026-02-27 16:28:28 +0000 UTC" firstStartedPulling="2026-02-27 16:28:29.127164976 +0000 UTC m=+1305.216437459" lastFinishedPulling="2026-02-27 16:28:32.511314648 +0000 UTC m=+1308.600587111" observedRunningTime="2026-02-27 16:28:32.798636496 +0000 UTC m=+1308.887908959" watchObservedRunningTime="2026-02-27 16:28:32.805081085 +0000 UTC m=+1308.894353568" Feb 27 16:28:32 crc kubenswrapper[4830]: I0227 16:28:32.806353 4830 generic.go:334] "Generic (PLEG): container finished" podID="47514135-95a6-4b77-815a-ebf23a3cab82" containerID="5a4ec36b1a76d0a19cb17b92fc8ea7c7d1d244acdec968ae755d558d3eadddc7" exitCode=0 Feb 27 16:28:32 crc kubenswrapper[4830]: I0227 16:28:32.806426 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" 
event={"ID":"47514135-95a6-4b77-815a-ebf23a3cab82","Type":"ContainerDied","Data":"5a4ec36b1a76d0a19cb17b92fc8ea7c7d1d244acdec968ae755d558d3eadddc7"} Feb 27 16:28:32 crc kubenswrapper[4830]: I0227 16:28:32.975368 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-jhwfg"] Feb 27 16:28:32 crc kubenswrapper[4830]: W0227 16:28:32.983480 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod034a69b5_6540_4b46_b0d5_55098d2f6467.slice/crio-f1d0d41e8d36e7c156813c4bc49762c505ab5fd297e882b232c71c2dc440ea7f WatchSource:0}: Error finding container f1d0d41e8d36e7c156813c4bc49762c505ab5fd297e882b232c71c2dc440ea7f: Status 404 returned error can't find the container with id f1d0d41e8d36e7c156813c4bc49762c505ab5fd297e882b232c71c2dc440ea7f Feb 27 16:28:33 crc kubenswrapper[4830]: I0227 16:28:33.497228 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-b8fbc5445-cjq7v" Feb 27 16:28:33 crc kubenswrapper[4830]: I0227 16:28:33.578759 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-mp6xh"] Feb 27 16:28:33 crc kubenswrapper[4830]: I0227 16:28:33.579006 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8554648995-mp6xh" podUID="abf16d54-1d80-400e-8da6-077a9b307708" containerName="dnsmasq-dns" containerID="cri-o://ab904d5717e9c6f6ed5f342d0e7d57fb54ad0abf3b0d854b486d6dc5a0825b5f" gracePeriod=10 Feb 27 16:28:33 crc kubenswrapper[4830]: I0227 16:28:33.821447 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-jhwfg" event={"ID":"034a69b5-6540-4b46-b0d5-55098d2f6467","Type":"ContainerStarted","Data":"f1d0d41e8d36e7c156813c4bc49762c505ab5fd297e882b232c71c2dc440ea7f"} Feb 27 16:28:33 crc kubenswrapper[4830]: I0227 16:28:33.824296 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/rabbitmq-server-0" event={"ID":"aa5b7bdd-50bb-4123-a32a-0c7e97035a3f","Type":"ContainerStarted","Data":"60b83b906afc06b23e5e1362e3117ceeff1474cd84090478f13efba3e31b7cf5"} Feb 27 16:28:33 crc kubenswrapper[4830]: I0227 16:28:33.824516 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 27 16:28:33 crc kubenswrapper[4830]: I0227 16:28:33.830350 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"47514135-95a6-4b77-815a-ebf23a3cab82","Type":"ContainerStarted","Data":"bfe6a195e3ce6d71c1ea474fd65d55d683efc1b57f94f1cbfda130d8058970ed"} Feb 27 16:28:33 crc kubenswrapper[4830]: I0227 16:28:33.830602 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 27 16:28:33 crc kubenswrapper[4830]: I0227 16:28:33.831800 4830 generic.go:334] "Generic (PLEG): container finished" podID="abf16d54-1d80-400e-8da6-077a9b307708" containerID="ab904d5717e9c6f6ed5f342d0e7d57fb54ad0abf3b0d854b486d6dc5a0825b5f" exitCode=0 Feb 27 16:28:33 crc kubenswrapper[4830]: I0227 16:28:33.831926 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-mp6xh" event={"ID":"abf16d54-1d80-400e-8da6-077a9b307708","Type":"ContainerDied","Data":"ab904d5717e9c6f6ed5f342d0e7d57fb54ad0abf3b0d854b486d6dc5a0825b5f"} Feb 27 16:28:33 crc kubenswrapper[4830]: I0227 16:28:33.855819 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=41.500479922 podStartE2EDuration="57.855803632s" podCreationTimestamp="2026-02-27 16:27:36 +0000 UTC" firstStartedPulling="2026-02-27 16:27:42.615247911 +0000 UTC m=+1258.704520374" lastFinishedPulling="2026-02-27 16:27:58.970571621 +0000 UTC m=+1275.059844084" observedRunningTime="2026-02-27 16:28:33.847644901 +0000 UTC m=+1309.936917364" watchObservedRunningTime="2026-02-27 16:28:33.855803632 +0000 UTC 
m=+1309.945076095" Feb 27 16:28:33 crc kubenswrapper[4830]: I0227 16:28:33.870940 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=37.50696843 podStartE2EDuration="57.870923523s" podCreationTimestamp="2026-02-27 16:27:36 +0000 UTC" firstStartedPulling="2026-02-27 16:27:38.626981538 +0000 UTC m=+1254.716254001" lastFinishedPulling="2026-02-27 16:27:58.990936641 +0000 UTC m=+1275.080209094" observedRunningTime="2026-02-27 16:28:33.870291377 +0000 UTC m=+1309.959563830" watchObservedRunningTime="2026-02-27 16:28:33.870923523 +0000 UTC m=+1309.960195986" Feb 27 16:28:34 crc kubenswrapper[4830]: I0227 16:28:34.073350 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-mp6xh" Feb 27 16:28:34 crc kubenswrapper[4830]: I0227 16:28:34.253483 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xks98\" (UniqueName: \"kubernetes.io/projected/abf16d54-1d80-400e-8da6-077a9b307708-kube-api-access-xks98\") pod \"abf16d54-1d80-400e-8da6-077a9b307708\" (UID: \"abf16d54-1d80-400e-8da6-077a9b307708\") " Feb 27 16:28:34 crc kubenswrapper[4830]: I0227 16:28:34.253538 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/abf16d54-1d80-400e-8da6-077a9b307708-ovsdbserver-sb\") pod \"abf16d54-1d80-400e-8da6-077a9b307708\" (UID: \"abf16d54-1d80-400e-8da6-077a9b307708\") " Feb 27 16:28:34 crc kubenswrapper[4830]: I0227 16:28:34.253585 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/abf16d54-1d80-400e-8da6-077a9b307708-config\") pod \"abf16d54-1d80-400e-8da6-077a9b307708\" (UID: \"abf16d54-1d80-400e-8da6-077a9b307708\") " Feb 27 16:28:34 crc kubenswrapper[4830]: I0227 16:28:34.253615 4830 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/abf16d54-1d80-400e-8da6-077a9b307708-dns-svc\") pod \"abf16d54-1d80-400e-8da6-077a9b307708\" (UID: \"abf16d54-1d80-400e-8da6-077a9b307708\") " Feb 27 16:28:34 crc kubenswrapper[4830]: I0227 16:28:34.253744 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/abf16d54-1d80-400e-8da6-077a9b307708-ovsdbserver-nb\") pod \"abf16d54-1d80-400e-8da6-077a9b307708\" (UID: \"abf16d54-1d80-400e-8da6-077a9b307708\") " Feb 27 16:28:34 crc kubenswrapper[4830]: I0227 16:28:34.258571 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/abf16d54-1d80-400e-8da6-077a9b307708-kube-api-access-xks98" (OuterVolumeSpecName: "kube-api-access-xks98") pod "abf16d54-1d80-400e-8da6-077a9b307708" (UID: "abf16d54-1d80-400e-8da6-077a9b307708"). InnerVolumeSpecName "kube-api-access-xks98". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:28:34 crc kubenswrapper[4830]: I0227 16:28:34.293745 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/abf16d54-1d80-400e-8da6-077a9b307708-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "abf16d54-1d80-400e-8da6-077a9b307708" (UID: "abf16d54-1d80-400e-8da6-077a9b307708"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:28:34 crc kubenswrapper[4830]: I0227 16:28:34.297070 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/abf16d54-1d80-400e-8da6-077a9b307708-config" (OuterVolumeSpecName: "config") pod "abf16d54-1d80-400e-8da6-077a9b307708" (UID: "abf16d54-1d80-400e-8da6-077a9b307708"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:28:34 crc kubenswrapper[4830]: I0227 16:28:34.312637 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/abf16d54-1d80-400e-8da6-077a9b307708-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "abf16d54-1d80-400e-8da6-077a9b307708" (UID: "abf16d54-1d80-400e-8da6-077a9b307708"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:28:34 crc kubenswrapper[4830]: I0227 16:28:34.315019 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/abf16d54-1d80-400e-8da6-077a9b307708-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "abf16d54-1d80-400e-8da6-077a9b307708" (UID: "abf16d54-1d80-400e-8da6-077a9b307708"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:28:34 crc kubenswrapper[4830]: I0227 16:28:34.355510 4830 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/abf16d54-1d80-400e-8da6-077a9b307708-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 27 16:28:34 crc kubenswrapper[4830]: I0227 16:28:34.355545 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xks98\" (UniqueName: \"kubernetes.io/projected/abf16d54-1d80-400e-8da6-077a9b307708-kube-api-access-xks98\") on node \"crc\" DevicePath \"\"" Feb 27 16:28:34 crc kubenswrapper[4830]: I0227 16:28:34.355557 4830 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/abf16d54-1d80-400e-8da6-077a9b307708-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 27 16:28:34 crc kubenswrapper[4830]: I0227 16:28:34.355565 4830 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/abf16d54-1d80-400e-8da6-077a9b307708-config\") on node \"crc\" DevicePath \"\"" Feb 27 
16:28:34 crc kubenswrapper[4830]: I0227 16:28:34.355575 4830 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/abf16d54-1d80-400e-8da6-077a9b307708-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 27 16:28:34 crc kubenswrapper[4830]: I0227 16:28:34.387346 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-vw4sx"] Feb 27 16:28:34 crc kubenswrapper[4830]: I0227 16:28:34.398717 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-vw4sx"] Feb 27 16:28:34 crc kubenswrapper[4830]: I0227 16:28:34.781972 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3bcc49e-737a-4e05-a8e2-e8007c30d9c7" path="/var/lib/kubelet/pods/c3bcc49e-737a-4e05-a8e2-e8007c30d9c7/volumes" Feb 27 16:28:34 crc kubenswrapper[4830]: I0227 16:28:34.849733 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-mp6xh" event={"ID":"abf16d54-1d80-400e-8da6-077a9b307708","Type":"ContainerDied","Data":"7c1f5d325ae11fd1ceea92c542969658abbfbc49317e4cdeba5a24fe32372723"} Feb 27 16:28:34 crc kubenswrapper[4830]: I0227 16:28:34.850091 4830 scope.go:117] "RemoveContainer" containerID="ab904d5717e9c6f6ed5f342d0e7d57fb54ad0abf3b0d854b486d6dc5a0825b5f" Feb 27 16:28:34 crc kubenswrapper[4830]: I0227 16:28:34.849811 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-mp6xh" Feb 27 16:28:34 crc kubenswrapper[4830]: I0227 16:28:34.877048 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-mp6xh"] Feb 27 16:28:34 crc kubenswrapper[4830]: I0227 16:28:34.877865 4830 scope.go:117] "RemoveContainer" containerID="f9d1fa7dc71dd0c9d71ed9b2a227548876fd5f7b2701cb7e12592040345369b8" Feb 27 16:28:34 crc kubenswrapper[4830]: I0227 16:28:34.883443 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8554648995-mp6xh"] Feb 27 16:28:36 crc kubenswrapper[4830]: I0227 16:28:36.072770 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Feb 27 16:28:36 crc kubenswrapper[4830]: I0227 16:28:36.778559 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="abf16d54-1d80-400e-8da6-077a9b307708" path="/var/lib/kubelet/pods/abf16d54-1d80-400e-8da6-077a9b307708/volumes" Feb 27 16:28:37 crc kubenswrapper[4830]: I0227 16:28:37.643836 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-mncqx" podUID="2eacdbe3-1c63-4811-a7ea-5dc6fd8dce60" containerName="ovn-controller" probeResult="failure" output=< Feb 27 16:28:37 crc kubenswrapper[4830]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Feb 27 16:28:37 crc kubenswrapper[4830]: > Feb 27 16:28:38 crc kubenswrapper[4830]: I0227 16:28:38.045885 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-wn56z"] Feb 27 16:28:38 crc kubenswrapper[4830]: E0227 16:28:38.046323 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abf16d54-1d80-400e-8da6-077a9b307708" containerName="init" Feb 27 16:28:38 crc kubenswrapper[4830]: I0227 16:28:38.046337 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="abf16d54-1d80-400e-8da6-077a9b307708" containerName="init" Feb 27 16:28:38 crc 
kubenswrapper[4830]: E0227 16:28:38.046367 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3bcc49e-737a-4e05-a8e2-e8007c30d9c7" containerName="mariadb-account-create-update" Feb 27 16:28:38 crc kubenswrapper[4830]: I0227 16:28:38.046373 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3bcc49e-737a-4e05-a8e2-e8007c30d9c7" containerName="mariadb-account-create-update" Feb 27 16:28:38 crc kubenswrapper[4830]: E0227 16:28:38.046381 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abf16d54-1d80-400e-8da6-077a9b307708" containerName="dnsmasq-dns" Feb 27 16:28:38 crc kubenswrapper[4830]: I0227 16:28:38.046388 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="abf16d54-1d80-400e-8da6-077a9b307708" containerName="dnsmasq-dns" Feb 27 16:28:38 crc kubenswrapper[4830]: I0227 16:28:38.046544 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3bcc49e-737a-4e05-a8e2-e8007c30d9c7" containerName="mariadb-account-create-update" Feb 27 16:28:38 crc kubenswrapper[4830]: I0227 16:28:38.046561 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="abf16d54-1d80-400e-8da6-077a9b307708" containerName="dnsmasq-dns" Feb 27 16:28:38 crc kubenswrapper[4830]: I0227 16:28:38.047209 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-wn56z" Feb 27 16:28:38 crc kubenswrapper[4830]: I0227 16:28:38.050166 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Feb 27 16:28:38 crc kubenswrapper[4830]: I0227 16:28:38.053681 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-wn56z"] Feb 27 16:28:38 crc kubenswrapper[4830]: I0227 16:28:38.229891 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w28dn\" (UniqueName: \"kubernetes.io/projected/8c54825e-123b-4328-a0d5-c5afb0670045-kube-api-access-w28dn\") pod \"root-account-create-update-wn56z\" (UID: \"8c54825e-123b-4328-a0d5-c5afb0670045\") " pod="openstack/root-account-create-update-wn56z" Feb 27 16:28:38 crc kubenswrapper[4830]: I0227 16:28:38.230093 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c54825e-123b-4328-a0d5-c5afb0670045-operator-scripts\") pod \"root-account-create-update-wn56z\" (UID: \"8c54825e-123b-4328-a0d5-c5afb0670045\") " pod="openstack/root-account-create-update-wn56z" Feb 27 16:28:38 crc kubenswrapper[4830]: I0227 16:28:38.332427 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c54825e-123b-4328-a0d5-c5afb0670045-operator-scripts\") pod \"root-account-create-update-wn56z\" (UID: \"8c54825e-123b-4328-a0d5-c5afb0670045\") " pod="openstack/root-account-create-update-wn56z" Feb 27 16:28:38 crc kubenswrapper[4830]: I0227 16:28:38.332582 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w28dn\" (UniqueName: \"kubernetes.io/projected/8c54825e-123b-4328-a0d5-c5afb0670045-kube-api-access-w28dn\") pod \"root-account-create-update-wn56z\" (UID: 
\"8c54825e-123b-4328-a0d5-c5afb0670045\") " pod="openstack/root-account-create-update-wn56z" Feb 27 16:28:38 crc kubenswrapper[4830]: I0227 16:28:38.333532 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c54825e-123b-4328-a0d5-c5afb0670045-operator-scripts\") pod \"root-account-create-update-wn56z\" (UID: \"8c54825e-123b-4328-a0d5-c5afb0670045\") " pod="openstack/root-account-create-update-wn56z" Feb 27 16:28:38 crc kubenswrapper[4830]: I0227 16:28:38.357552 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w28dn\" (UniqueName: \"kubernetes.io/projected/8c54825e-123b-4328-a0d5-c5afb0670045-kube-api-access-w28dn\") pod \"root-account-create-update-wn56z\" (UID: \"8c54825e-123b-4328-a0d5-c5afb0670045\") " pod="openstack/root-account-create-update-wn56z" Feb 27 16:28:38 crc kubenswrapper[4830]: I0227 16:28:38.370572 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-wn56z" Feb 27 16:28:39 crc kubenswrapper[4830]: I0227 16:28:39.896387 4830 generic.go:334] "Generic (PLEG): container finished" podID="2687dd0d-1fea-48d6-a53a-b10ccfa7d223" containerID="03feac40296d7a4209bb84be744dfc7a7221fe91f52d107820ff8c50b9949c8f" exitCode=0 Feb 27 16:28:39 crc kubenswrapper[4830]: I0227 16:28:39.896518 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-v5xs2" event={"ID":"2687dd0d-1fea-48d6-a53a-b10ccfa7d223","Type":"ContainerDied","Data":"03feac40296d7a4209bb84be744dfc7a7221fe91f52d107820ff8c50b9949c8f"} Feb 27 16:28:40 crc kubenswrapper[4830]: I0227 16:28:40.166430 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f-etc-swift\") pod \"swift-storage-0\" (UID: \"f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f\") " pod="openstack/swift-storage-0" Feb 27 16:28:40 crc kubenswrapper[4830]: I0227 16:28:40.181359 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f-etc-swift\") pod \"swift-storage-0\" (UID: \"f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f\") " pod="openstack/swift-storage-0" Feb 27 16:28:40 crc kubenswrapper[4830]: I0227 16:28:40.190653 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Feb 27 16:28:42 crc kubenswrapper[4830]: I0227 16:28:42.642976 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-mncqx" podUID="2eacdbe3-1c63-4811-a7ea-5dc6fd8dce60" containerName="ovn-controller" probeResult="failure" output=< Feb 27 16:28:42 crc kubenswrapper[4830]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Feb 27 16:28:42 crc kubenswrapper[4830]: > Feb 27 16:28:43 crc kubenswrapper[4830]: I0227 16:28:43.041371 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-qt6mr" Feb 27 16:28:43 crc kubenswrapper[4830]: I0227 16:28:43.084181 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-qt6mr" Feb 27 16:28:43 crc kubenswrapper[4830]: I0227 16:28:43.289274 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-mncqx-config-xrs8p"] Feb 27 16:28:43 crc kubenswrapper[4830]: I0227 16:28:43.292029 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-mncqx-config-xrs8p" Feb 27 16:28:43 crc kubenswrapper[4830]: I0227 16:28:43.299119 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Feb 27 16:28:43 crc kubenswrapper[4830]: I0227 16:28:43.303331 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-mncqx-config-xrs8p"] Feb 27 16:28:43 crc kubenswrapper[4830]: I0227 16:28:43.443606 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/1ff07a1d-2f3d-4360-b724-76db3d44e464-var-log-ovn\") pod \"ovn-controller-mncqx-config-xrs8p\" (UID: \"1ff07a1d-2f3d-4360-b724-76db3d44e464\") " pod="openstack/ovn-controller-mncqx-config-xrs8p" Feb 27 16:28:43 crc kubenswrapper[4830]: I0227 16:28:43.443660 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cm42b\" (UniqueName: \"kubernetes.io/projected/1ff07a1d-2f3d-4360-b724-76db3d44e464-kube-api-access-cm42b\") pod \"ovn-controller-mncqx-config-xrs8p\" (UID: \"1ff07a1d-2f3d-4360-b724-76db3d44e464\") " pod="openstack/ovn-controller-mncqx-config-xrs8p" Feb 27 16:28:43 crc kubenswrapper[4830]: I0227 16:28:43.443693 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1ff07a1d-2f3d-4360-b724-76db3d44e464-scripts\") pod \"ovn-controller-mncqx-config-xrs8p\" (UID: \"1ff07a1d-2f3d-4360-b724-76db3d44e464\") " pod="openstack/ovn-controller-mncqx-config-xrs8p" Feb 27 16:28:43 crc kubenswrapper[4830]: I0227 16:28:43.443851 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/1ff07a1d-2f3d-4360-b724-76db3d44e464-additional-scripts\") pod \"ovn-controller-mncqx-config-xrs8p\" (UID: 
\"1ff07a1d-2f3d-4360-b724-76db3d44e464\") " pod="openstack/ovn-controller-mncqx-config-xrs8p" Feb 27 16:28:43 crc kubenswrapper[4830]: I0227 16:28:43.444008 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/1ff07a1d-2f3d-4360-b724-76db3d44e464-var-run-ovn\") pod \"ovn-controller-mncqx-config-xrs8p\" (UID: \"1ff07a1d-2f3d-4360-b724-76db3d44e464\") " pod="openstack/ovn-controller-mncqx-config-xrs8p" Feb 27 16:28:43 crc kubenswrapper[4830]: I0227 16:28:43.444158 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/1ff07a1d-2f3d-4360-b724-76db3d44e464-var-run\") pod \"ovn-controller-mncqx-config-xrs8p\" (UID: \"1ff07a1d-2f3d-4360-b724-76db3d44e464\") " pod="openstack/ovn-controller-mncqx-config-xrs8p" Feb 27 16:28:43 crc kubenswrapper[4830]: I0227 16:28:43.546013 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/1ff07a1d-2f3d-4360-b724-76db3d44e464-additional-scripts\") pod \"ovn-controller-mncqx-config-xrs8p\" (UID: \"1ff07a1d-2f3d-4360-b724-76db3d44e464\") " pod="openstack/ovn-controller-mncqx-config-xrs8p" Feb 27 16:28:43 crc kubenswrapper[4830]: I0227 16:28:43.546085 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/1ff07a1d-2f3d-4360-b724-76db3d44e464-var-run-ovn\") pod \"ovn-controller-mncqx-config-xrs8p\" (UID: \"1ff07a1d-2f3d-4360-b724-76db3d44e464\") " pod="openstack/ovn-controller-mncqx-config-xrs8p" Feb 27 16:28:43 crc kubenswrapper[4830]: I0227 16:28:43.546146 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/1ff07a1d-2f3d-4360-b724-76db3d44e464-var-run\") pod \"ovn-controller-mncqx-config-xrs8p\" 
(UID: \"1ff07a1d-2f3d-4360-b724-76db3d44e464\") " pod="openstack/ovn-controller-mncqx-config-xrs8p" Feb 27 16:28:43 crc kubenswrapper[4830]: I0227 16:28:43.546182 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/1ff07a1d-2f3d-4360-b724-76db3d44e464-var-log-ovn\") pod \"ovn-controller-mncqx-config-xrs8p\" (UID: \"1ff07a1d-2f3d-4360-b724-76db3d44e464\") " pod="openstack/ovn-controller-mncqx-config-xrs8p" Feb 27 16:28:43 crc kubenswrapper[4830]: I0227 16:28:43.546218 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cm42b\" (UniqueName: \"kubernetes.io/projected/1ff07a1d-2f3d-4360-b724-76db3d44e464-kube-api-access-cm42b\") pod \"ovn-controller-mncqx-config-xrs8p\" (UID: \"1ff07a1d-2f3d-4360-b724-76db3d44e464\") " pod="openstack/ovn-controller-mncqx-config-xrs8p" Feb 27 16:28:43 crc kubenswrapper[4830]: I0227 16:28:43.546255 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1ff07a1d-2f3d-4360-b724-76db3d44e464-scripts\") pod \"ovn-controller-mncqx-config-xrs8p\" (UID: \"1ff07a1d-2f3d-4360-b724-76db3d44e464\") " pod="openstack/ovn-controller-mncqx-config-xrs8p" Feb 27 16:28:43 crc kubenswrapper[4830]: I0227 16:28:43.546481 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/1ff07a1d-2f3d-4360-b724-76db3d44e464-var-run-ovn\") pod \"ovn-controller-mncqx-config-xrs8p\" (UID: \"1ff07a1d-2f3d-4360-b724-76db3d44e464\") " pod="openstack/ovn-controller-mncqx-config-xrs8p" Feb 27 16:28:43 crc kubenswrapper[4830]: I0227 16:28:43.546504 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/1ff07a1d-2f3d-4360-b724-76db3d44e464-var-run\") pod \"ovn-controller-mncqx-config-xrs8p\" (UID: 
\"1ff07a1d-2f3d-4360-b724-76db3d44e464\") " pod="openstack/ovn-controller-mncqx-config-xrs8p" Feb 27 16:28:43 crc kubenswrapper[4830]: I0227 16:28:43.546576 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/1ff07a1d-2f3d-4360-b724-76db3d44e464-var-log-ovn\") pod \"ovn-controller-mncqx-config-xrs8p\" (UID: \"1ff07a1d-2f3d-4360-b724-76db3d44e464\") " pod="openstack/ovn-controller-mncqx-config-xrs8p" Feb 27 16:28:43 crc kubenswrapper[4830]: I0227 16:28:43.546887 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/1ff07a1d-2f3d-4360-b724-76db3d44e464-additional-scripts\") pod \"ovn-controller-mncqx-config-xrs8p\" (UID: \"1ff07a1d-2f3d-4360-b724-76db3d44e464\") " pod="openstack/ovn-controller-mncqx-config-xrs8p" Feb 27 16:28:43 crc kubenswrapper[4830]: I0227 16:28:43.548534 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1ff07a1d-2f3d-4360-b724-76db3d44e464-scripts\") pod \"ovn-controller-mncqx-config-xrs8p\" (UID: \"1ff07a1d-2f3d-4360-b724-76db3d44e464\") " pod="openstack/ovn-controller-mncqx-config-xrs8p" Feb 27 16:28:43 crc kubenswrapper[4830]: I0227 16:28:43.569588 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cm42b\" (UniqueName: \"kubernetes.io/projected/1ff07a1d-2f3d-4360-b724-76db3d44e464-kube-api-access-cm42b\") pod \"ovn-controller-mncqx-config-xrs8p\" (UID: \"1ff07a1d-2f3d-4360-b724-76db3d44e464\") " pod="openstack/ovn-controller-mncqx-config-xrs8p" Feb 27 16:28:43 crc kubenswrapper[4830]: I0227 16:28:43.631541 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-mncqx-config-xrs8p" Feb 27 16:28:44 crc kubenswrapper[4830]: I0227 16:28:44.952106 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-v5xs2" event={"ID":"2687dd0d-1fea-48d6-a53a-b10ccfa7d223","Type":"ContainerDied","Data":"27ac5df190b55dc161aa97f52b0d755b5b217c436b44c42416a82e14035c31d0"} Feb 27 16:28:44 crc kubenswrapper[4830]: I0227 16:28:44.952865 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="27ac5df190b55dc161aa97f52b0d755b5b217c436b44c42416a82e14035c31d0" Feb 27 16:28:44 crc kubenswrapper[4830]: I0227 16:28:44.970492 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-v5xs2" Feb 27 16:28:45 crc kubenswrapper[4830]: I0227 16:28:45.068602 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2687dd0d-1fea-48d6-a53a-b10ccfa7d223-combined-ca-bundle\") pod \"2687dd0d-1fea-48d6-a53a-b10ccfa7d223\" (UID: \"2687dd0d-1fea-48d6-a53a-b10ccfa7d223\") " Feb 27 16:28:45 crc kubenswrapper[4830]: I0227 16:28:45.068663 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hbhhm\" (UniqueName: \"kubernetes.io/projected/2687dd0d-1fea-48d6-a53a-b10ccfa7d223-kube-api-access-hbhhm\") pod \"2687dd0d-1fea-48d6-a53a-b10ccfa7d223\" (UID: \"2687dd0d-1fea-48d6-a53a-b10ccfa7d223\") " Feb 27 16:28:45 crc kubenswrapper[4830]: I0227 16:28:45.068694 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/2687dd0d-1fea-48d6-a53a-b10ccfa7d223-dispersionconf\") pod \"2687dd0d-1fea-48d6-a53a-b10ccfa7d223\" (UID: \"2687dd0d-1fea-48d6-a53a-b10ccfa7d223\") " Feb 27 16:28:45 crc kubenswrapper[4830]: I0227 16:28:45.068757 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/2687dd0d-1fea-48d6-a53a-b10ccfa7d223-swiftconf\") pod \"2687dd0d-1fea-48d6-a53a-b10ccfa7d223\" (UID: \"2687dd0d-1fea-48d6-a53a-b10ccfa7d223\") " Feb 27 16:28:45 crc kubenswrapper[4830]: I0227 16:28:45.068815 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/2687dd0d-1fea-48d6-a53a-b10ccfa7d223-ring-data-devices\") pod \"2687dd0d-1fea-48d6-a53a-b10ccfa7d223\" (UID: \"2687dd0d-1fea-48d6-a53a-b10ccfa7d223\") " Feb 27 16:28:45 crc kubenswrapper[4830]: I0227 16:28:45.068863 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2687dd0d-1fea-48d6-a53a-b10ccfa7d223-scripts\") pod \"2687dd0d-1fea-48d6-a53a-b10ccfa7d223\" (UID: \"2687dd0d-1fea-48d6-a53a-b10ccfa7d223\") " Feb 27 16:28:45 crc kubenswrapper[4830]: I0227 16:28:45.069038 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/2687dd0d-1fea-48d6-a53a-b10ccfa7d223-etc-swift\") pod \"2687dd0d-1fea-48d6-a53a-b10ccfa7d223\" (UID: \"2687dd0d-1fea-48d6-a53a-b10ccfa7d223\") " Feb 27 16:28:45 crc kubenswrapper[4830]: I0227 16:28:45.070286 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2687dd0d-1fea-48d6-a53a-b10ccfa7d223-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "2687dd0d-1fea-48d6-a53a-b10ccfa7d223" (UID: "2687dd0d-1fea-48d6-a53a-b10ccfa7d223"). InnerVolumeSpecName "ring-data-devices". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:28:45 crc kubenswrapper[4830]: I0227 16:28:45.070372 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2687dd0d-1fea-48d6-a53a-b10ccfa7d223-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "2687dd0d-1fea-48d6-a53a-b10ccfa7d223" (UID: "2687dd0d-1fea-48d6-a53a-b10ccfa7d223"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 16:28:45 crc kubenswrapper[4830]: I0227 16:28:45.077540 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2687dd0d-1fea-48d6-a53a-b10ccfa7d223-kube-api-access-hbhhm" (OuterVolumeSpecName: "kube-api-access-hbhhm") pod "2687dd0d-1fea-48d6-a53a-b10ccfa7d223" (UID: "2687dd0d-1fea-48d6-a53a-b10ccfa7d223"). InnerVolumeSpecName "kube-api-access-hbhhm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:28:45 crc kubenswrapper[4830]: I0227 16:28:45.089910 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2687dd0d-1fea-48d6-a53a-b10ccfa7d223-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "2687dd0d-1fea-48d6-a53a-b10ccfa7d223" (UID: "2687dd0d-1fea-48d6-a53a-b10ccfa7d223"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:28:45 crc kubenswrapper[4830]: I0227 16:28:45.099549 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2687dd0d-1fea-48d6-a53a-b10ccfa7d223-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "2687dd0d-1fea-48d6-a53a-b10ccfa7d223" (UID: "2687dd0d-1fea-48d6-a53a-b10ccfa7d223"). InnerVolumeSpecName "swiftconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:28:45 crc kubenswrapper[4830]: I0227 16:28:45.099826 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2687dd0d-1fea-48d6-a53a-b10ccfa7d223-scripts" (OuterVolumeSpecName: "scripts") pod "2687dd0d-1fea-48d6-a53a-b10ccfa7d223" (UID: "2687dd0d-1fea-48d6-a53a-b10ccfa7d223"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:28:45 crc kubenswrapper[4830]: I0227 16:28:45.114641 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2687dd0d-1fea-48d6-a53a-b10ccfa7d223-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2687dd0d-1fea-48d6-a53a-b10ccfa7d223" (UID: "2687dd0d-1fea-48d6-a53a-b10ccfa7d223"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:28:45 crc kubenswrapper[4830]: I0227 16:28:45.171322 4830 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/2687dd0d-1fea-48d6-a53a-b10ccfa7d223-etc-swift\") on node \"crc\" DevicePath \"\"" Feb 27 16:28:45 crc kubenswrapper[4830]: I0227 16:28:45.171353 4830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2687dd0d-1fea-48d6-a53a-b10ccfa7d223-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 16:28:45 crc kubenswrapper[4830]: I0227 16:28:45.171365 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hbhhm\" (UniqueName: \"kubernetes.io/projected/2687dd0d-1fea-48d6-a53a-b10ccfa7d223-kube-api-access-hbhhm\") on node \"crc\" DevicePath \"\"" Feb 27 16:28:45 crc kubenswrapper[4830]: I0227 16:28:45.171374 4830 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/2687dd0d-1fea-48d6-a53a-b10ccfa7d223-dispersionconf\") on node \"crc\" DevicePath \"\"" 
Feb 27 16:28:45 crc kubenswrapper[4830]: I0227 16:28:45.171384 4830 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/2687dd0d-1fea-48d6-a53a-b10ccfa7d223-swiftconf\") on node \"crc\" DevicePath \"\"" Feb 27 16:28:45 crc kubenswrapper[4830]: I0227 16:28:45.171392 4830 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/2687dd0d-1fea-48d6-a53a-b10ccfa7d223-ring-data-devices\") on node \"crc\" DevicePath \"\"" Feb 27 16:28:45 crc kubenswrapper[4830]: I0227 16:28:45.171400 4830 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2687dd0d-1fea-48d6-a53a-b10ccfa7d223-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 16:28:45 crc kubenswrapper[4830]: I0227 16:28:45.501829 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-wn56z"] Feb 27 16:28:45 crc kubenswrapper[4830]: W0227 16:28:45.503146 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8c54825e_123b_4328_a0d5_c5afb0670045.slice/crio-e37040d021906fa79eea781b160fa98108badcd39afe903837c4e7454c2c4b13 WatchSource:0}: Error finding container e37040d021906fa79eea781b160fa98108badcd39afe903837c4e7454c2c4b13: Status 404 returned error can't find the container with id e37040d021906fa79eea781b160fa98108badcd39afe903837c4e7454c2c4b13 Feb 27 16:28:45 crc kubenswrapper[4830]: I0227 16:28:45.511521 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Feb 27 16:28:45 crc kubenswrapper[4830]: I0227 16:28:45.559111 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-mncqx-config-xrs8p"] Feb 27 16:28:45 crc kubenswrapper[4830]: I0227 16:28:45.661685 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 27 
16:28:45 crc kubenswrapper[4830]: W0227 16:28:45.666868 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf0f13fa9_3e9d_4d0b_8f8f_bcca14e1617f.slice/crio-901d194be787f5ed6546be3354e5327541c03bd1ff10b0104ee52b902078a56c WatchSource:0}: Error finding container 901d194be787f5ed6546be3354e5327541c03bd1ff10b0104ee52b902078a56c: Status 404 returned error can't find the container with id 901d194be787f5ed6546be3354e5327541c03bd1ff10b0104ee52b902078a56c Feb 27 16:28:45 crc kubenswrapper[4830]: I0227 16:28:45.964890 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-mncqx-config-xrs8p" event={"ID":"1ff07a1d-2f3d-4360-b724-76db3d44e464","Type":"ContainerStarted","Data":"fcd0cf746476d0b7be97ffb1c75b108abc11696f572c833e767692e6bea3ef19"} Feb 27 16:28:45 crc kubenswrapper[4830]: I0227 16:28:45.964939 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-mncqx-config-xrs8p" event={"ID":"1ff07a1d-2f3d-4360-b724-76db3d44e464","Type":"ContainerStarted","Data":"73503cac9a3503678ad9a3f011d6ed9c505271635f0b233cd088142a7c8f47cc"} Feb 27 16:28:45 crc kubenswrapper[4830]: I0227 16:28:45.966515 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-wn56z" event={"ID":"8c54825e-123b-4328-a0d5-c5afb0670045","Type":"ContainerStarted","Data":"5ba71f57d4ef167e52e99073d88d3906f54807c2add0033c3a350acff76a4f58"} Feb 27 16:28:45 crc kubenswrapper[4830]: I0227 16:28:45.966549 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-wn56z" event={"ID":"8c54825e-123b-4328-a0d5-c5afb0670045","Type":"ContainerStarted","Data":"e37040d021906fa79eea781b160fa98108badcd39afe903837c4e7454c2c4b13"} Feb 27 16:28:45 crc kubenswrapper[4830]: I0227 16:28:45.971657 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f","Type":"ContainerStarted","Data":"901d194be787f5ed6546be3354e5327541c03bd1ff10b0104ee52b902078a56c"} Feb 27 16:28:45 crc kubenswrapper[4830]: I0227 16:28:45.976376 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-jhwfg" event={"ID":"034a69b5-6540-4b46-b0d5-55098d2f6467","Type":"ContainerStarted","Data":"2dd0daef7553edc948d313e884252b38bd2ca52a2e86007a5c75ebe4c3a88a04"} Feb 27 16:28:45 crc kubenswrapper[4830]: I0227 16:28:45.976400 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-v5xs2" Feb 27 16:28:45 crc kubenswrapper[4830]: I0227 16:28:45.986835 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-mncqx-config-xrs8p" podStartSLOduration=2.986820009 podStartE2EDuration="2.986820009s" podCreationTimestamp="2026-02-27 16:28:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:28:45.980832533 +0000 UTC m=+1322.070105026" watchObservedRunningTime="2026-02-27 16:28:45.986820009 +0000 UTC m=+1322.076092472" Feb 27 16:28:46 crc kubenswrapper[4830]: I0227 16:28:46.006616 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-jhwfg" podStartSLOduration=2.9111437589999998 podStartE2EDuration="15.006596354s" podCreationTimestamp="2026-02-27 16:28:31 +0000 UTC" firstStartedPulling="2026-02-27 16:28:32.986179318 +0000 UTC m=+1309.075451781" lastFinishedPulling="2026-02-27 16:28:45.081631913 +0000 UTC m=+1321.170904376" observedRunningTime="2026-02-27 16:28:45.997697956 +0000 UTC m=+1322.086970419" watchObservedRunningTime="2026-02-27 16:28:46.006596354 +0000 UTC m=+1322.095868837" Feb 27 16:28:46 crc kubenswrapper[4830]: I0227 16:28:46.038013 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/root-account-create-update-wn56z" podStartSLOduration=8.037986615 podStartE2EDuration="8.037986615s" podCreationTimestamp="2026-02-27 16:28:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:28:46.01580112 +0000 UTC m=+1322.105073583" watchObservedRunningTime="2026-02-27 16:28:46.037986615 +0000 UTC m=+1322.127259088" Feb 27 16:28:46 crc kubenswrapper[4830]: I0227 16:28:46.987040 4830 generic.go:334] "Generic (PLEG): container finished" podID="8c54825e-123b-4328-a0d5-c5afb0670045" containerID="5ba71f57d4ef167e52e99073d88d3906f54807c2add0033c3a350acff76a4f58" exitCode=0 Feb 27 16:28:46 crc kubenswrapper[4830]: I0227 16:28:46.987168 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-wn56z" event={"ID":"8c54825e-123b-4328-a0d5-c5afb0670045","Type":"ContainerDied","Data":"5ba71f57d4ef167e52e99073d88d3906f54807c2add0033c3a350acff76a4f58"} Feb 27 16:28:46 crc kubenswrapper[4830]: I0227 16:28:46.989907 4830 generic.go:334] "Generic (PLEG): container finished" podID="1ff07a1d-2f3d-4360-b724-76db3d44e464" containerID="fcd0cf746476d0b7be97ffb1c75b108abc11696f572c833e767692e6bea3ef19" exitCode=0 Feb 27 16:28:46 crc kubenswrapper[4830]: I0227 16:28:46.989991 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-mncqx-config-xrs8p" event={"ID":"1ff07a1d-2f3d-4360-b724-76db3d44e464","Type":"ContainerDied","Data":"fcd0cf746476d0b7be97ffb1c75b108abc11696f572c833e767692e6bea3ef19"} Feb 27 16:28:47 crc kubenswrapper[4830]: I0227 16:28:47.643306 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-mncqx" Feb 27 16:28:47 crc kubenswrapper[4830]: I0227 16:28:47.929254 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 27 16:28:48 crc kubenswrapper[4830]: I0227 16:28:48.024852 
4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f","Type":"ContainerStarted","Data":"b54307be9a881794a66b55a9bca85b4703855db739e2c59f98b8842a64710ed1"} Feb 27 16:28:48 crc kubenswrapper[4830]: I0227 16:28:48.024916 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f","Type":"ContainerStarted","Data":"09edcd425fc07104a2a290237930b325e8877e8ef116e51111ef81ba1b7710e2"} Feb 27 16:28:48 crc kubenswrapper[4830]: I0227 16:28:48.024934 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f","Type":"ContainerStarted","Data":"a6f8e6e02ca541ffa4fab936a485162a21cf976d73c728274bb3fd83cc01abb4"} Feb 27 16:28:48 crc kubenswrapper[4830]: I0227 16:28:48.024996 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f","Type":"ContainerStarted","Data":"d31525bce81210150593ba3db8f8611a5b2d43ff82b2e5c7435f34ad45248c17"} Feb 27 16:28:48 crc kubenswrapper[4830]: I0227 16:28:48.183287 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 27 16:28:48 crc kubenswrapper[4830]: I0227 16:28:48.491214 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-mncqx-config-xrs8p" Feb 27 16:28:48 crc kubenswrapper[4830]: I0227 16:28:48.496995 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-wn56z" Feb 27 16:28:48 crc kubenswrapper[4830]: I0227 16:28:48.535897 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/1ff07a1d-2f3d-4360-b724-76db3d44e464-var-run-ovn\") pod \"1ff07a1d-2f3d-4360-b724-76db3d44e464\" (UID: \"1ff07a1d-2f3d-4360-b724-76db3d44e464\") " Feb 27 16:28:48 crc kubenswrapper[4830]: I0227 16:28:48.536011 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/1ff07a1d-2f3d-4360-b724-76db3d44e464-additional-scripts\") pod \"1ff07a1d-2f3d-4360-b724-76db3d44e464\" (UID: \"1ff07a1d-2f3d-4360-b724-76db3d44e464\") " Feb 27 16:28:48 crc kubenswrapper[4830]: I0227 16:28:48.536022 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1ff07a1d-2f3d-4360-b724-76db3d44e464-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "1ff07a1d-2f3d-4360-b724-76db3d44e464" (UID: "1ff07a1d-2f3d-4360-b724-76db3d44e464"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 16:28:48 crc kubenswrapper[4830]: I0227 16:28:48.536070 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/1ff07a1d-2f3d-4360-b724-76db3d44e464-var-run\") pod \"1ff07a1d-2f3d-4360-b724-76db3d44e464\" (UID: \"1ff07a1d-2f3d-4360-b724-76db3d44e464\") " Feb 27 16:28:48 crc kubenswrapper[4830]: I0227 16:28:48.536161 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w28dn\" (UniqueName: \"kubernetes.io/projected/8c54825e-123b-4328-a0d5-c5afb0670045-kube-api-access-w28dn\") pod \"8c54825e-123b-4328-a0d5-c5afb0670045\" (UID: \"8c54825e-123b-4328-a0d5-c5afb0670045\") " Feb 27 16:28:48 crc kubenswrapper[4830]: I0227 16:28:48.536229 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c54825e-123b-4328-a0d5-c5afb0670045-operator-scripts\") pod \"8c54825e-123b-4328-a0d5-c5afb0670045\" (UID: \"8c54825e-123b-4328-a0d5-c5afb0670045\") " Feb 27 16:28:48 crc kubenswrapper[4830]: I0227 16:28:48.536272 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1ff07a1d-2f3d-4360-b724-76db3d44e464-scripts\") pod \"1ff07a1d-2f3d-4360-b724-76db3d44e464\" (UID: \"1ff07a1d-2f3d-4360-b724-76db3d44e464\") " Feb 27 16:28:48 crc kubenswrapper[4830]: I0227 16:28:48.536321 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cm42b\" (UniqueName: \"kubernetes.io/projected/1ff07a1d-2f3d-4360-b724-76db3d44e464-kube-api-access-cm42b\") pod \"1ff07a1d-2f3d-4360-b724-76db3d44e464\" (UID: \"1ff07a1d-2f3d-4360-b724-76db3d44e464\") " Feb 27 16:28:48 crc kubenswrapper[4830]: I0227 16:28:48.536484 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" 
(UniqueName: \"kubernetes.io/host-path/1ff07a1d-2f3d-4360-b724-76db3d44e464-var-log-ovn\") pod \"1ff07a1d-2f3d-4360-b724-76db3d44e464\" (UID: \"1ff07a1d-2f3d-4360-b724-76db3d44e464\") " Feb 27 16:28:48 crc kubenswrapper[4830]: I0227 16:28:48.536483 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1ff07a1d-2f3d-4360-b724-76db3d44e464-var-run" (OuterVolumeSpecName: "var-run") pod "1ff07a1d-2f3d-4360-b724-76db3d44e464" (UID: "1ff07a1d-2f3d-4360-b724-76db3d44e464"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 16:28:48 crc kubenswrapper[4830]: I0227 16:28:48.536896 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c54825e-123b-4328-a0d5-c5afb0670045-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8c54825e-123b-4328-a0d5-c5afb0670045" (UID: "8c54825e-123b-4328-a0d5-c5afb0670045"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:28:48 crc kubenswrapper[4830]: I0227 16:28:48.536928 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1ff07a1d-2f3d-4360-b724-76db3d44e464-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "1ff07a1d-2f3d-4360-b724-76db3d44e464" (UID: "1ff07a1d-2f3d-4360-b724-76db3d44e464"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:28:48 crc kubenswrapper[4830]: I0227 16:28:48.536988 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1ff07a1d-2f3d-4360-b724-76db3d44e464-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "1ff07a1d-2f3d-4360-b724-76db3d44e464" (UID: "1ff07a1d-2f3d-4360-b724-76db3d44e464"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 16:28:48 crc kubenswrapper[4830]: I0227 16:28:48.537164 4830 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c54825e-123b-4328-a0d5-c5afb0670045-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 16:28:48 crc kubenswrapper[4830]: I0227 16:28:48.537190 4830 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/1ff07a1d-2f3d-4360-b724-76db3d44e464-var-log-ovn\") on node \"crc\" DevicePath \"\"" Feb 27 16:28:48 crc kubenswrapper[4830]: I0227 16:28:48.537207 4830 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/1ff07a1d-2f3d-4360-b724-76db3d44e464-var-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 27 16:28:48 crc kubenswrapper[4830]: I0227 16:28:48.537227 4830 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/1ff07a1d-2f3d-4360-b724-76db3d44e464-additional-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 16:28:48 crc kubenswrapper[4830]: I0227 16:28:48.537245 4830 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/1ff07a1d-2f3d-4360-b724-76db3d44e464-var-run\") on node \"crc\" DevicePath \"\"" Feb 27 16:28:48 crc kubenswrapper[4830]: I0227 16:28:48.537264 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1ff07a1d-2f3d-4360-b724-76db3d44e464-scripts" (OuterVolumeSpecName: "scripts") pod "1ff07a1d-2f3d-4360-b724-76db3d44e464" (UID: "1ff07a1d-2f3d-4360-b724-76db3d44e464"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:28:48 crc kubenswrapper[4830]: I0227 16:28:48.542028 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c54825e-123b-4328-a0d5-c5afb0670045-kube-api-access-w28dn" (OuterVolumeSpecName: "kube-api-access-w28dn") pod "8c54825e-123b-4328-a0d5-c5afb0670045" (UID: "8c54825e-123b-4328-a0d5-c5afb0670045"). InnerVolumeSpecName "kube-api-access-w28dn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:28:48 crc kubenswrapper[4830]: I0227 16:28:48.544882 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ff07a1d-2f3d-4360-b724-76db3d44e464-kube-api-access-cm42b" (OuterVolumeSpecName: "kube-api-access-cm42b") pod "1ff07a1d-2f3d-4360-b724-76db3d44e464" (UID: "1ff07a1d-2f3d-4360-b724-76db3d44e464"). InnerVolumeSpecName "kube-api-access-cm42b". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:28:48 crc kubenswrapper[4830]: I0227 16:28:48.638911 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w28dn\" (UniqueName: \"kubernetes.io/projected/8c54825e-123b-4328-a0d5-c5afb0670045-kube-api-access-w28dn\") on node \"crc\" DevicePath \"\"" Feb 27 16:28:48 crc kubenswrapper[4830]: I0227 16:28:48.638977 4830 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1ff07a1d-2f3d-4360-b724-76db3d44e464-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 16:28:48 crc kubenswrapper[4830]: I0227 16:28:48.638991 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cm42b\" (UniqueName: \"kubernetes.io/projected/1ff07a1d-2f3d-4360-b724-76db3d44e464-kube-api-access-cm42b\") on node \"crc\" DevicePath \"\"" Feb 27 16:28:48 crc kubenswrapper[4830]: I0227 16:28:48.652056 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-mncqx-config-xrs8p"] Feb 27 16:28:48 crc 
kubenswrapper[4830]: I0227 16:28:48.660795 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-mncqx-config-xrs8p"] Feb 27 16:28:48 crc kubenswrapper[4830]: I0227 16:28:48.780700 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ff07a1d-2f3d-4360-b724-76db3d44e464" path="/var/lib/kubelet/pods/1ff07a1d-2f3d-4360-b724-76db3d44e464/volumes" Feb 27 16:28:48 crc kubenswrapper[4830]: I0227 16:28:48.785110 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-mncqx-config-x5ftz"] Feb 27 16:28:48 crc kubenswrapper[4830]: E0227 16:28:48.788265 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ff07a1d-2f3d-4360-b724-76db3d44e464" containerName="ovn-config" Feb 27 16:28:48 crc kubenswrapper[4830]: I0227 16:28:48.788559 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ff07a1d-2f3d-4360-b724-76db3d44e464" containerName="ovn-config" Feb 27 16:28:48 crc kubenswrapper[4830]: E0227 16:28:48.788588 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c54825e-123b-4328-a0d5-c5afb0670045" containerName="mariadb-account-create-update" Feb 27 16:28:48 crc kubenswrapper[4830]: I0227 16:28:48.788602 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c54825e-123b-4328-a0d5-c5afb0670045" containerName="mariadb-account-create-update" Feb 27 16:28:48 crc kubenswrapper[4830]: E0227 16:28:48.788620 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2687dd0d-1fea-48d6-a53a-b10ccfa7d223" containerName="swift-ring-rebalance" Feb 27 16:28:48 crc kubenswrapper[4830]: I0227 16:28:48.788632 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="2687dd0d-1fea-48d6-a53a-b10ccfa7d223" containerName="swift-ring-rebalance" Feb 27 16:28:48 crc kubenswrapper[4830]: I0227 16:28:48.788878 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ff07a1d-2f3d-4360-b724-76db3d44e464" containerName="ovn-config" Feb 27 16:28:48 
crc kubenswrapper[4830]: I0227 16:28:48.789680 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="2687dd0d-1fea-48d6-a53a-b10ccfa7d223" containerName="swift-ring-rebalance" Feb 27 16:28:48 crc kubenswrapper[4830]: I0227 16:28:48.789712 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c54825e-123b-4328-a0d5-c5afb0670045" containerName="mariadb-account-create-update" Feb 27 16:28:48 crc kubenswrapper[4830]: I0227 16:28:48.790585 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-mncqx-config-x5ftz"] Feb 27 16:28:48 crc kubenswrapper[4830]: I0227 16:28:48.790696 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-mncqx-config-x5ftz" Feb 27 16:28:48 crc kubenswrapper[4830]: I0227 16:28:48.841852 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/903af975-adfa-4548-b8bb-45994e5dc194-var-run\") pod \"ovn-controller-mncqx-config-x5ftz\" (UID: \"903af975-adfa-4548-b8bb-45994e5dc194\") " pod="openstack/ovn-controller-mncqx-config-x5ftz" Feb 27 16:28:48 crc kubenswrapper[4830]: I0227 16:28:48.842344 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/903af975-adfa-4548-b8bb-45994e5dc194-var-log-ovn\") pod \"ovn-controller-mncqx-config-x5ftz\" (UID: \"903af975-adfa-4548-b8bb-45994e5dc194\") " pod="openstack/ovn-controller-mncqx-config-x5ftz" Feb 27 16:28:48 crc kubenswrapper[4830]: I0227 16:28:48.842400 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/903af975-adfa-4548-b8bb-45994e5dc194-var-run-ovn\") pod \"ovn-controller-mncqx-config-x5ftz\" (UID: \"903af975-adfa-4548-b8bb-45994e5dc194\") " pod="openstack/ovn-controller-mncqx-config-x5ftz" Feb 
27 16:28:48 crc kubenswrapper[4830]: I0227 16:28:48.842433 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjdmj\" (UniqueName: \"kubernetes.io/projected/903af975-adfa-4548-b8bb-45994e5dc194-kube-api-access-pjdmj\") pod \"ovn-controller-mncqx-config-x5ftz\" (UID: \"903af975-adfa-4548-b8bb-45994e5dc194\") " pod="openstack/ovn-controller-mncqx-config-x5ftz" Feb 27 16:28:48 crc kubenswrapper[4830]: I0227 16:28:48.842558 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/903af975-adfa-4548-b8bb-45994e5dc194-additional-scripts\") pod \"ovn-controller-mncqx-config-x5ftz\" (UID: \"903af975-adfa-4548-b8bb-45994e5dc194\") " pod="openstack/ovn-controller-mncqx-config-x5ftz" Feb 27 16:28:48 crc kubenswrapper[4830]: I0227 16:28:48.842712 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/903af975-adfa-4548-b8bb-45994e5dc194-scripts\") pod \"ovn-controller-mncqx-config-x5ftz\" (UID: \"903af975-adfa-4548-b8bb-45994e5dc194\") " pod="openstack/ovn-controller-mncqx-config-x5ftz" Feb 27 16:28:48 crc kubenswrapper[4830]: I0227 16:28:48.944229 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/903af975-adfa-4548-b8bb-45994e5dc194-var-run\") pod \"ovn-controller-mncqx-config-x5ftz\" (UID: \"903af975-adfa-4548-b8bb-45994e5dc194\") " pod="openstack/ovn-controller-mncqx-config-x5ftz" Feb 27 16:28:48 crc kubenswrapper[4830]: I0227 16:28:48.944309 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/903af975-adfa-4548-b8bb-45994e5dc194-var-log-ovn\") pod \"ovn-controller-mncqx-config-x5ftz\" (UID: \"903af975-adfa-4548-b8bb-45994e5dc194\") " 
pod="openstack/ovn-controller-mncqx-config-x5ftz" Feb 27 16:28:48 crc kubenswrapper[4830]: I0227 16:28:48.944394 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/903af975-adfa-4548-b8bb-45994e5dc194-var-run-ovn\") pod \"ovn-controller-mncqx-config-x5ftz\" (UID: \"903af975-adfa-4548-b8bb-45994e5dc194\") " pod="openstack/ovn-controller-mncqx-config-x5ftz" Feb 27 16:28:48 crc kubenswrapper[4830]: I0227 16:28:48.944445 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pjdmj\" (UniqueName: \"kubernetes.io/projected/903af975-adfa-4548-b8bb-45994e5dc194-kube-api-access-pjdmj\") pod \"ovn-controller-mncqx-config-x5ftz\" (UID: \"903af975-adfa-4548-b8bb-45994e5dc194\") " pod="openstack/ovn-controller-mncqx-config-x5ftz" Feb 27 16:28:48 crc kubenswrapper[4830]: I0227 16:28:48.944492 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/903af975-adfa-4548-b8bb-45994e5dc194-var-run\") pod \"ovn-controller-mncqx-config-x5ftz\" (UID: \"903af975-adfa-4548-b8bb-45994e5dc194\") " pod="openstack/ovn-controller-mncqx-config-x5ftz" Feb 27 16:28:48 crc kubenswrapper[4830]: I0227 16:28:48.944515 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/903af975-adfa-4548-b8bb-45994e5dc194-additional-scripts\") pod \"ovn-controller-mncqx-config-x5ftz\" (UID: \"903af975-adfa-4548-b8bb-45994e5dc194\") " pod="openstack/ovn-controller-mncqx-config-x5ftz" Feb 27 16:28:48 crc kubenswrapper[4830]: I0227 16:28:48.944563 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/903af975-adfa-4548-b8bb-45994e5dc194-var-run-ovn\") pod \"ovn-controller-mncqx-config-x5ftz\" (UID: \"903af975-adfa-4548-b8bb-45994e5dc194\") " 
pod="openstack/ovn-controller-mncqx-config-x5ftz" Feb 27 16:28:48 crc kubenswrapper[4830]: I0227 16:28:48.944576 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/903af975-adfa-4548-b8bb-45994e5dc194-var-log-ovn\") pod \"ovn-controller-mncqx-config-x5ftz\" (UID: \"903af975-adfa-4548-b8bb-45994e5dc194\") " pod="openstack/ovn-controller-mncqx-config-x5ftz" Feb 27 16:28:48 crc kubenswrapper[4830]: I0227 16:28:48.944623 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/903af975-adfa-4548-b8bb-45994e5dc194-scripts\") pod \"ovn-controller-mncqx-config-x5ftz\" (UID: \"903af975-adfa-4548-b8bb-45994e5dc194\") " pod="openstack/ovn-controller-mncqx-config-x5ftz" Feb 27 16:28:48 crc kubenswrapper[4830]: I0227 16:28:48.945311 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/903af975-adfa-4548-b8bb-45994e5dc194-additional-scripts\") pod \"ovn-controller-mncqx-config-x5ftz\" (UID: \"903af975-adfa-4548-b8bb-45994e5dc194\") " pod="openstack/ovn-controller-mncqx-config-x5ftz" Feb 27 16:28:48 crc kubenswrapper[4830]: I0227 16:28:48.948523 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/903af975-adfa-4548-b8bb-45994e5dc194-scripts\") pod \"ovn-controller-mncqx-config-x5ftz\" (UID: \"903af975-adfa-4548-b8bb-45994e5dc194\") " pod="openstack/ovn-controller-mncqx-config-x5ftz" Feb 27 16:28:48 crc kubenswrapper[4830]: I0227 16:28:48.963144 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pjdmj\" (UniqueName: \"kubernetes.io/projected/903af975-adfa-4548-b8bb-45994e5dc194-kube-api-access-pjdmj\") pod \"ovn-controller-mncqx-config-x5ftz\" (UID: \"903af975-adfa-4548-b8bb-45994e5dc194\") " 
pod="openstack/ovn-controller-mncqx-config-x5ftz" Feb 27 16:28:49 crc kubenswrapper[4830]: I0227 16:28:49.034460 4830 scope.go:117] "RemoveContainer" containerID="fcd0cf746476d0b7be97ffb1c75b108abc11696f572c833e767692e6bea3ef19" Feb 27 16:28:49 crc kubenswrapper[4830]: I0227 16:28:49.034573 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-mncqx-config-xrs8p" Feb 27 16:28:49 crc kubenswrapper[4830]: I0227 16:28:49.036972 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-wn56z" event={"ID":"8c54825e-123b-4328-a0d5-c5afb0670045","Type":"ContainerDied","Data":"e37040d021906fa79eea781b160fa98108badcd39afe903837c4e7454c2c4b13"} Feb 27 16:28:49 crc kubenswrapper[4830]: I0227 16:28:49.037016 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e37040d021906fa79eea781b160fa98108badcd39afe903837c4e7454c2c4b13" Feb 27 16:28:49 crc kubenswrapper[4830]: I0227 16:28:49.037078 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-wn56z" Feb 27 16:28:49 crc kubenswrapper[4830]: I0227 16:28:49.125095 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-mncqx-config-x5ftz" Feb 27 16:28:49 crc kubenswrapper[4830]: I0227 16:28:49.724164 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-mncqx-config-x5ftz"] Feb 27 16:28:49 crc kubenswrapper[4830]: I0227 16:28:49.792504 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-h7wpk"] Feb 27 16:28:49 crc kubenswrapper[4830]: I0227 16:28:49.793746 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-h7wpk" Feb 27 16:28:49 crc kubenswrapper[4830]: I0227 16:28:49.799633 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-h7wpk"] Feb 27 16:28:49 crc kubenswrapper[4830]: I0227 16:28:49.879536 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-37dc-account-create-update-j859r"] Feb 27 16:28:49 crc kubenswrapper[4830]: I0227 16:28:49.881133 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-37dc-account-create-update-j859r" Feb 27 16:28:49 crc kubenswrapper[4830]: I0227 16:28:49.888622 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-37dc-account-create-update-j859r"] Feb 27 16:28:49 crc kubenswrapper[4830]: I0227 16:28:49.890666 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Feb 27 16:28:49 crc kubenswrapper[4830]: I0227 16:28:49.966411 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p87sb\" (UniqueName: \"kubernetes.io/projected/d0d104e4-315f-406d-ac89-21878f96a166-kube-api-access-p87sb\") pod \"cinder-db-create-h7wpk\" (UID: \"d0d104e4-315f-406d-ac89-21878f96a166\") " pod="openstack/cinder-db-create-h7wpk" Feb 27 16:28:49 crc kubenswrapper[4830]: I0227 16:28:49.966550 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d0d104e4-315f-406d-ac89-21878f96a166-operator-scripts\") pod \"cinder-db-create-h7wpk\" (UID: \"d0d104e4-315f-406d-ac89-21878f96a166\") " pod="openstack/cinder-db-create-h7wpk" Feb 27 16:28:49 crc kubenswrapper[4830]: I0227 16:28:49.971104 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-v5pmq"] Feb 27 16:28:49 crc kubenswrapper[4830]: I0227 16:28:49.972066 4830 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/barbican-db-create-v5pmq" Feb 27 16:28:49 crc kubenswrapper[4830]: I0227 16:28:49.978694 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-v5pmq"] Feb 27 16:28:49 crc kubenswrapper[4830]: W0227 16:28:49.986645 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod903af975_adfa_4548_b8bb_45994e5dc194.slice/crio-fe250d220dc5d3773ac5027417726112c047d03308064c5aadd25614db41b870 WatchSource:0}: Error finding container fe250d220dc5d3773ac5027417726112c047d03308064c5aadd25614db41b870: Status 404 returned error can't find the container with id fe250d220dc5d3773ac5027417726112c047d03308064c5aadd25614db41b870 Feb 27 16:28:50 crc kubenswrapper[4830]: I0227 16:28:50.069010 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-mncqx-config-x5ftz" event={"ID":"903af975-adfa-4548-b8bb-45994e5dc194","Type":"ContainerStarted","Data":"fe250d220dc5d3773ac5027417726112c047d03308064c5aadd25614db41b870"} Feb 27 16:28:50 crc kubenswrapper[4830]: I0227 16:28:50.070197 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6af8e619-e07a-4702-ac64-7fcf5077aef8-operator-scripts\") pod \"cinder-37dc-account-create-update-j859r\" (UID: \"6af8e619-e07a-4702-ac64-7fcf5077aef8\") " pod="openstack/cinder-37dc-account-create-update-j859r" Feb 27 16:28:50 crc kubenswrapper[4830]: I0227 16:28:50.070298 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d0d104e4-315f-406d-ac89-21878f96a166-operator-scripts\") pod \"cinder-db-create-h7wpk\" (UID: \"d0d104e4-315f-406d-ac89-21878f96a166\") " pod="openstack/cinder-db-create-h7wpk" Feb 27 16:28:50 crc kubenswrapper[4830]: I0227 16:28:50.070351 4830 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lpd2\" (UniqueName: \"kubernetes.io/projected/6af8e619-e07a-4702-ac64-7fcf5077aef8-kube-api-access-9lpd2\") pod \"cinder-37dc-account-create-update-j859r\" (UID: \"6af8e619-e07a-4702-ac64-7fcf5077aef8\") " pod="openstack/cinder-37dc-account-create-update-j859r" Feb 27 16:28:50 crc kubenswrapper[4830]: I0227 16:28:50.070988 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p87sb\" (UniqueName: \"kubernetes.io/projected/d0d104e4-315f-406d-ac89-21878f96a166-kube-api-access-p87sb\") pod \"cinder-db-create-h7wpk\" (UID: \"d0d104e4-315f-406d-ac89-21878f96a166\") " pod="openstack/cinder-db-create-h7wpk" Feb 27 16:28:50 crc kubenswrapper[4830]: I0227 16:28:50.071156 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d0d104e4-315f-406d-ac89-21878f96a166-operator-scripts\") pod \"cinder-db-create-h7wpk\" (UID: \"d0d104e4-315f-406d-ac89-21878f96a166\") " pod="openstack/cinder-db-create-h7wpk" Feb 27 16:28:50 crc kubenswrapper[4830]: I0227 16:28:50.096272 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p87sb\" (UniqueName: \"kubernetes.io/projected/d0d104e4-315f-406d-ac89-21878f96a166-kube-api-access-p87sb\") pod \"cinder-db-create-h7wpk\" (UID: \"d0d104e4-315f-406d-ac89-21878f96a166\") " pod="openstack/cinder-db-create-h7wpk" Feb 27 16:28:50 crc kubenswrapper[4830]: I0227 16:28:50.110055 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-d4d2-account-create-update-qbmct"] Feb 27 16:28:50 crc kubenswrapper[4830]: I0227 16:28:50.111361 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-d4d2-account-create-update-qbmct" Feb 27 16:28:50 crc kubenswrapper[4830]: I0227 16:28:50.113878 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Feb 27 16:28:50 crc kubenswrapper[4830]: I0227 16:28:50.119996 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-h7wpk" Feb 27 16:28:50 crc kubenswrapper[4830]: I0227 16:28:50.122323 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-d4d2-account-create-update-qbmct"] Feb 27 16:28:50 crc kubenswrapper[4830]: I0227 16:28:50.170834 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-w22r8"] Feb 27 16:28:50 crc kubenswrapper[4830]: I0227 16:28:50.172841 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fc8f3bdd-7355-46ce-8ac6-75cb6a21f325-operator-scripts\") pod \"barbican-db-create-v5pmq\" (UID: \"fc8f3bdd-7355-46ce-8ac6-75cb6a21f325\") " pod="openstack/barbican-db-create-v5pmq" Feb 27 16:28:50 crc kubenswrapper[4830]: I0227 16:28:50.172911 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5hdz\" (UniqueName: \"kubernetes.io/projected/fc8f3bdd-7355-46ce-8ac6-75cb6a21f325-kube-api-access-x5hdz\") pod \"barbican-db-create-v5pmq\" (UID: \"fc8f3bdd-7355-46ce-8ac6-75cb6a21f325\") " pod="openstack/barbican-db-create-v5pmq" Feb 27 16:28:50 crc kubenswrapper[4830]: I0227 16:28:50.172969 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nczw\" (UniqueName: \"kubernetes.io/projected/120619c7-5358-455a-bf71-e3d60389fb05-kube-api-access-4nczw\") pod \"barbican-d4d2-account-create-update-qbmct\" (UID: \"120619c7-5358-455a-bf71-e3d60389fb05\") " 
pod="openstack/barbican-d4d2-account-create-update-qbmct" Feb 27 16:28:50 crc kubenswrapper[4830]: I0227 16:28:50.173002 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6af8e619-e07a-4702-ac64-7fcf5077aef8-operator-scripts\") pod \"cinder-37dc-account-create-update-j859r\" (UID: \"6af8e619-e07a-4702-ac64-7fcf5077aef8\") " pod="openstack/cinder-37dc-account-create-update-j859r" Feb 27 16:28:50 crc kubenswrapper[4830]: I0227 16:28:50.173032 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/120619c7-5358-455a-bf71-e3d60389fb05-operator-scripts\") pod \"barbican-d4d2-account-create-update-qbmct\" (UID: \"120619c7-5358-455a-bf71-e3d60389fb05\") " pod="openstack/barbican-d4d2-account-create-update-qbmct" Feb 27 16:28:50 crc kubenswrapper[4830]: I0227 16:28:50.173061 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9lpd2\" (UniqueName: \"kubernetes.io/projected/6af8e619-e07a-4702-ac64-7fcf5077aef8-kube-api-access-9lpd2\") pod \"cinder-37dc-account-create-update-j859r\" (UID: \"6af8e619-e07a-4702-ac64-7fcf5077aef8\") " pod="openstack/cinder-37dc-account-create-update-j859r" Feb 27 16:28:50 crc kubenswrapper[4830]: I0227 16:28:50.173382 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-w22r8" Feb 27 16:28:50 crc kubenswrapper[4830]: I0227 16:28:50.176875 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6af8e619-e07a-4702-ac64-7fcf5077aef8-operator-scripts\") pod \"cinder-37dc-account-create-update-j859r\" (UID: \"6af8e619-e07a-4702-ac64-7fcf5077aef8\") " pod="openstack/cinder-37dc-account-create-update-j859r" Feb 27 16:28:50 crc kubenswrapper[4830]: I0227 16:28:50.177936 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-w22r8"] Feb 27 16:28:50 crc kubenswrapper[4830]: I0227 16:28:50.184260 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 27 16:28:50 crc kubenswrapper[4830]: I0227 16:28:50.184416 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 27 16:28:50 crc kubenswrapper[4830]: I0227 16:28:50.184564 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 27 16:28:50 crc kubenswrapper[4830]: I0227 16:28:50.184712 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-zm2zz" Feb 27 16:28:50 crc kubenswrapper[4830]: I0227 16:28:50.193227 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9lpd2\" (UniqueName: \"kubernetes.io/projected/6af8e619-e07a-4702-ac64-7fcf5077aef8-kube-api-access-9lpd2\") pod \"cinder-37dc-account-create-update-j859r\" (UID: \"6af8e619-e07a-4702-ac64-7fcf5077aef8\") " pod="openstack/cinder-37dc-account-create-update-j859r" Feb 27 16:28:50 crc kubenswrapper[4830]: I0227 16:28:50.196427 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-37dc-account-create-update-j859r" Feb 27 16:28:50 crc kubenswrapper[4830]: I0227 16:28:50.202036 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-jhhnm"] Feb 27 16:28:50 crc kubenswrapper[4830]: I0227 16:28:50.203448 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-jhhnm" Feb 27 16:28:50 crc kubenswrapper[4830]: I0227 16:28:50.217062 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-2d81-account-create-update-6xn6z"] Feb 27 16:28:50 crc kubenswrapper[4830]: I0227 16:28:50.218087 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-2d81-account-create-update-6xn6z" Feb 27 16:28:50 crc kubenswrapper[4830]: I0227 16:28:50.222095 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Feb 27 16:28:50 crc kubenswrapper[4830]: I0227 16:28:50.228154 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-jhhnm"] Feb 27 16:28:50 crc kubenswrapper[4830]: I0227 16:28:50.252018 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-2d81-account-create-update-6xn6z"] Feb 27 16:28:50 crc kubenswrapper[4830]: I0227 16:28:50.279262 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c44ae554-632d-4347-ac9c-ce0c467ddce7-config-data\") pod \"keystone-db-sync-w22r8\" (UID: \"c44ae554-632d-4347-ac9c-ce0c467ddce7\") " pod="openstack/keystone-db-sync-w22r8" Feb 27 16:28:50 crc kubenswrapper[4830]: I0227 16:28:50.279313 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x5hdz\" (UniqueName: \"kubernetes.io/projected/fc8f3bdd-7355-46ce-8ac6-75cb6a21f325-kube-api-access-x5hdz\") pod \"barbican-db-create-v5pmq\" (UID: 
\"fc8f3bdd-7355-46ce-8ac6-75cb6a21f325\") " pod="openstack/barbican-db-create-v5pmq" Feb 27 16:28:50 crc kubenswrapper[4830]: I0227 16:28:50.279336 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wt4nz\" (UniqueName: \"kubernetes.io/projected/4989a1bf-9609-47ae-99c3-561023cff325-kube-api-access-wt4nz\") pod \"neutron-2d81-account-create-update-6xn6z\" (UID: \"4989a1bf-9609-47ae-99c3-561023cff325\") " pod="openstack/neutron-2d81-account-create-update-6xn6z" Feb 27 16:28:50 crc kubenswrapper[4830]: I0227 16:28:50.279364 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtgf8\" (UniqueName: \"kubernetes.io/projected/ce3b7271-1b27-437c-a5a3-7a2f2511d3de-kube-api-access-gtgf8\") pod \"neutron-db-create-jhhnm\" (UID: \"ce3b7271-1b27-437c-a5a3-7a2f2511d3de\") " pod="openstack/neutron-db-create-jhhnm" Feb 27 16:28:50 crc kubenswrapper[4830]: I0227 16:28:50.279384 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4nczw\" (UniqueName: \"kubernetes.io/projected/120619c7-5358-455a-bf71-e3d60389fb05-kube-api-access-4nczw\") pod \"barbican-d4d2-account-create-update-qbmct\" (UID: \"120619c7-5358-455a-bf71-e3d60389fb05\") " pod="openstack/barbican-d4d2-account-create-update-qbmct" Feb 27 16:28:50 crc kubenswrapper[4830]: I0227 16:28:50.279408 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce3b7271-1b27-437c-a5a3-7a2f2511d3de-operator-scripts\") pod \"neutron-db-create-jhhnm\" (UID: \"ce3b7271-1b27-437c-a5a3-7a2f2511d3de\") " pod="openstack/neutron-db-create-jhhnm" Feb 27 16:28:50 crc kubenswrapper[4830]: I0227 16:28:50.279431 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/120619c7-5358-455a-bf71-e3d60389fb05-operator-scripts\") pod \"barbican-d4d2-account-create-update-qbmct\" (UID: \"120619c7-5358-455a-bf71-e3d60389fb05\") " pod="openstack/barbican-d4d2-account-create-update-qbmct" Feb 27 16:28:50 crc kubenswrapper[4830]: I0227 16:28:50.279467 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4989a1bf-9609-47ae-99c3-561023cff325-operator-scripts\") pod \"neutron-2d81-account-create-update-6xn6z\" (UID: \"4989a1bf-9609-47ae-99c3-561023cff325\") " pod="openstack/neutron-2d81-account-create-update-6xn6z" Feb 27 16:28:50 crc kubenswrapper[4830]: I0227 16:28:50.279512 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rcvj\" (UniqueName: \"kubernetes.io/projected/c44ae554-632d-4347-ac9c-ce0c467ddce7-kube-api-access-5rcvj\") pod \"keystone-db-sync-w22r8\" (UID: \"c44ae554-632d-4347-ac9c-ce0c467ddce7\") " pod="openstack/keystone-db-sync-w22r8" Feb 27 16:28:50 crc kubenswrapper[4830]: I0227 16:28:50.279538 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fc8f3bdd-7355-46ce-8ac6-75cb6a21f325-operator-scripts\") pod \"barbican-db-create-v5pmq\" (UID: \"fc8f3bdd-7355-46ce-8ac6-75cb6a21f325\") " pod="openstack/barbican-db-create-v5pmq" Feb 27 16:28:50 crc kubenswrapper[4830]: I0227 16:28:50.279557 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c44ae554-632d-4347-ac9c-ce0c467ddce7-combined-ca-bundle\") pod \"keystone-db-sync-w22r8\" (UID: \"c44ae554-632d-4347-ac9c-ce0c467ddce7\") " pod="openstack/keystone-db-sync-w22r8" Feb 27 16:28:50 crc kubenswrapper[4830]: I0227 16:28:50.280294 4830 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/120619c7-5358-455a-bf71-e3d60389fb05-operator-scripts\") pod \"barbican-d4d2-account-create-update-qbmct\" (UID: \"120619c7-5358-455a-bf71-e3d60389fb05\") " pod="openstack/barbican-d4d2-account-create-update-qbmct" Feb 27 16:28:50 crc kubenswrapper[4830]: I0227 16:28:50.280886 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fc8f3bdd-7355-46ce-8ac6-75cb6a21f325-operator-scripts\") pod \"barbican-db-create-v5pmq\" (UID: \"fc8f3bdd-7355-46ce-8ac6-75cb6a21f325\") " pod="openstack/barbican-db-create-v5pmq" Feb 27 16:28:50 crc kubenswrapper[4830]: I0227 16:28:50.297848 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x5hdz\" (UniqueName: \"kubernetes.io/projected/fc8f3bdd-7355-46ce-8ac6-75cb6a21f325-kube-api-access-x5hdz\") pod \"barbican-db-create-v5pmq\" (UID: \"fc8f3bdd-7355-46ce-8ac6-75cb6a21f325\") " pod="openstack/barbican-db-create-v5pmq" Feb 27 16:28:50 crc kubenswrapper[4830]: I0227 16:28:50.302662 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4nczw\" (UniqueName: \"kubernetes.io/projected/120619c7-5358-455a-bf71-e3d60389fb05-kube-api-access-4nczw\") pod \"barbican-d4d2-account-create-update-qbmct\" (UID: \"120619c7-5358-455a-bf71-e3d60389fb05\") " pod="openstack/barbican-d4d2-account-create-update-qbmct" Feb 27 16:28:50 crc kubenswrapper[4830]: I0227 16:28:50.381192 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c44ae554-632d-4347-ac9c-ce0c467ddce7-combined-ca-bundle\") pod \"keystone-db-sync-w22r8\" (UID: \"c44ae554-632d-4347-ac9c-ce0c467ddce7\") " pod="openstack/keystone-db-sync-w22r8" Feb 27 16:28:50 crc kubenswrapper[4830]: I0227 16:28:50.381246 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/c44ae554-632d-4347-ac9c-ce0c467ddce7-config-data\") pod \"keystone-db-sync-w22r8\" (UID: \"c44ae554-632d-4347-ac9c-ce0c467ddce7\") " pod="openstack/keystone-db-sync-w22r8" Feb 27 16:28:50 crc kubenswrapper[4830]: I0227 16:28:50.381315 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wt4nz\" (UniqueName: \"kubernetes.io/projected/4989a1bf-9609-47ae-99c3-561023cff325-kube-api-access-wt4nz\") pod \"neutron-2d81-account-create-update-6xn6z\" (UID: \"4989a1bf-9609-47ae-99c3-561023cff325\") " pod="openstack/neutron-2d81-account-create-update-6xn6z" Feb 27 16:28:50 crc kubenswrapper[4830]: I0227 16:28:50.381343 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gtgf8\" (UniqueName: \"kubernetes.io/projected/ce3b7271-1b27-437c-a5a3-7a2f2511d3de-kube-api-access-gtgf8\") pod \"neutron-db-create-jhhnm\" (UID: \"ce3b7271-1b27-437c-a5a3-7a2f2511d3de\") " pod="openstack/neutron-db-create-jhhnm" Feb 27 16:28:50 crc kubenswrapper[4830]: I0227 16:28:50.381379 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce3b7271-1b27-437c-a5a3-7a2f2511d3de-operator-scripts\") pod \"neutron-db-create-jhhnm\" (UID: \"ce3b7271-1b27-437c-a5a3-7a2f2511d3de\") " pod="openstack/neutron-db-create-jhhnm" Feb 27 16:28:50 crc kubenswrapper[4830]: I0227 16:28:50.381447 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4989a1bf-9609-47ae-99c3-561023cff325-operator-scripts\") pod \"neutron-2d81-account-create-update-6xn6z\" (UID: \"4989a1bf-9609-47ae-99c3-561023cff325\") " pod="openstack/neutron-2d81-account-create-update-6xn6z" Feb 27 16:28:50 crc kubenswrapper[4830]: I0227 16:28:50.381520 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-5rcvj\" (UniqueName: \"kubernetes.io/projected/c44ae554-632d-4347-ac9c-ce0c467ddce7-kube-api-access-5rcvj\") pod \"keystone-db-sync-w22r8\" (UID: \"c44ae554-632d-4347-ac9c-ce0c467ddce7\") " pod="openstack/keystone-db-sync-w22r8" Feb 27 16:28:50 crc kubenswrapper[4830]: I0227 16:28:50.387600 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c44ae554-632d-4347-ac9c-ce0c467ddce7-combined-ca-bundle\") pod \"keystone-db-sync-w22r8\" (UID: \"c44ae554-632d-4347-ac9c-ce0c467ddce7\") " pod="openstack/keystone-db-sync-w22r8" Feb 27 16:28:50 crc kubenswrapper[4830]: I0227 16:28:50.392636 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c44ae554-632d-4347-ac9c-ce0c467ddce7-config-data\") pod \"keystone-db-sync-w22r8\" (UID: \"c44ae554-632d-4347-ac9c-ce0c467ddce7\") " pod="openstack/keystone-db-sync-w22r8" Feb 27 16:28:50 crc kubenswrapper[4830]: I0227 16:28:50.395433 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce3b7271-1b27-437c-a5a3-7a2f2511d3de-operator-scripts\") pod \"neutron-db-create-jhhnm\" (UID: \"ce3b7271-1b27-437c-a5a3-7a2f2511d3de\") " pod="openstack/neutron-db-create-jhhnm" Feb 27 16:28:50 crc kubenswrapper[4830]: I0227 16:28:50.395488 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4989a1bf-9609-47ae-99c3-561023cff325-operator-scripts\") pod \"neutron-2d81-account-create-update-6xn6z\" (UID: \"4989a1bf-9609-47ae-99c3-561023cff325\") " pod="openstack/neutron-2d81-account-create-update-6xn6z" Feb 27 16:28:50 crc kubenswrapper[4830]: I0227 16:28:50.400457 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5rcvj\" (UniqueName: 
\"kubernetes.io/projected/c44ae554-632d-4347-ac9c-ce0c467ddce7-kube-api-access-5rcvj\") pod \"keystone-db-sync-w22r8\" (UID: \"c44ae554-632d-4347-ac9c-ce0c467ddce7\") " pod="openstack/keystone-db-sync-w22r8" Feb 27 16:28:50 crc kubenswrapper[4830]: I0227 16:28:50.400884 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gtgf8\" (UniqueName: \"kubernetes.io/projected/ce3b7271-1b27-437c-a5a3-7a2f2511d3de-kube-api-access-gtgf8\") pod \"neutron-db-create-jhhnm\" (UID: \"ce3b7271-1b27-437c-a5a3-7a2f2511d3de\") " pod="openstack/neutron-db-create-jhhnm" Feb 27 16:28:50 crc kubenswrapper[4830]: I0227 16:28:50.404494 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wt4nz\" (UniqueName: \"kubernetes.io/projected/4989a1bf-9609-47ae-99c3-561023cff325-kube-api-access-wt4nz\") pod \"neutron-2d81-account-create-update-6xn6z\" (UID: \"4989a1bf-9609-47ae-99c3-561023cff325\") " pod="openstack/neutron-2d81-account-create-update-6xn6z" Feb 27 16:28:50 crc kubenswrapper[4830]: I0227 16:28:50.522779 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-v5pmq" Feb 27 16:28:50 crc kubenswrapper[4830]: I0227 16:28:50.567462 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-d4d2-account-create-update-qbmct" Feb 27 16:28:50 crc kubenswrapper[4830]: I0227 16:28:50.579454 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-w22r8" Feb 27 16:28:50 crc kubenswrapper[4830]: I0227 16:28:50.611580 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-jhhnm" Feb 27 16:28:50 crc kubenswrapper[4830]: I0227 16:28:50.617196 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-2d81-account-create-update-6xn6z" Feb 27 16:28:50 crc kubenswrapper[4830]: I0227 16:28:50.628901 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-h7wpk"] Feb 27 16:28:50 crc kubenswrapper[4830]: I0227 16:28:50.753507 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-37dc-account-create-update-j859r"] Feb 27 16:28:50 crc kubenswrapper[4830]: W0227 16:28:50.870233 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6af8e619_e07a_4702_ac64_7fcf5077aef8.slice/crio-f800e7d4f0a139c33d06541a9faabda0a942498ba3218f42119cf26600088f78 WatchSource:0}: Error finding container f800e7d4f0a139c33d06541a9faabda0a942498ba3218f42119cf26600088f78: Status 404 returned error can't find the container with id f800e7d4f0a139c33d06541a9faabda0a942498ba3218f42119cf26600088f78 Feb 27 16:28:51 crc kubenswrapper[4830]: I0227 16:28:51.137385 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f","Type":"ContainerStarted","Data":"fe39e07eaf48b0f3b6310a52d48a7901fe69c67e61f2bc86fcae68e60845e160"} Feb 27 16:28:51 crc kubenswrapper[4830]: I0227 16:28:51.137650 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f","Type":"ContainerStarted","Data":"abb82842a2a5f9faa42c2a6d73afbddfe73443d7841d35f06ec15c1730975fed"} Feb 27 16:28:51 crc kubenswrapper[4830]: I0227 16:28:51.161358 4830 generic.go:334] "Generic (PLEG): container finished" podID="903af975-adfa-4548-b8bb-45994e5dc194" containerID="e4819e5de70fd14096f08664e021aa68dbcaff8638b286c6df70bcb4924b7183" exitCode=0 Feb 27 16:28:51 crc kubenswrapper[4830]: I0227 16:28:51.161456 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-mncqx-config-x5ftz" 
event={"ID":"903af975-adfa-4548-b8bb-45994e5dc194","Type":"ContainerDied","Data":"e4819e5de70fd14096f08664e021aa68dbcaff8638b286c6df70bcb4924b7183"} Feb 27 16:28:51 crc kubenswrapper[4830]: I0227 16:28:51.178133 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-h7wpk" event={"ID":"d0d104e4-315f-406d-ac89-21878f96a166","Type":"ContainerStarted","Data":"7dcfb06edbf48f080c76b2086930bea386a1ba743bbaaaa1e7102badcf04c020"} Feb 27 16:28:51 crc kubenswrapper[4830]: I0227 16:28:51.189498 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-37dc-account-create-update-j859r" event={"ID":"6af8e619-e07a-4702-ac64-7fcf5077aef8","Type":"ContainerStarted","Data":"f800e7d4f0a139c33d06541a9faabda0a942498ba3218f42119cf26600088f78"} Feb 27 16:28:51 crc kubenswrapper[4830]: I0227 16:28:51.261685 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-v5pmq"] Feb 27 16:28:51 crc kubenswrapper[4830]: I0227 16:28:51.291898 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-w22r8"] Feb 27 16:28:51 crc kubenswrapper[4830]: I0227 16:28:51.330025 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-2d81-account-create-update-6xn6z"] Feb 27 16:28:51 crc kubenswrapper[4830]: I0227 16:28:51.503544 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-jhhnm"] Feb 27 16:28:51 crc kubenswrapper[4830]: W0227 16:28:51.508768 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podce3b7271_1b27_437c_a5a3_7a2f2511d3de.slice/crio-2f6aab744471319f45f9444d1aab0cc069238c729af2aaeb5f6588eedb999008 WatchSource:0}: Error finding container 2f6aab744471319f45f9444d1aab0cc069238c729af2aaeb5f6588eedb999008: Status 404 returned error can't find the container with id 2f6aab744471319f45f9444d1aab0cc069238c729af2aaeb5f6588eedb999008 Feb 27 16:28:51 crc 
kubenswrapper[4830]: I0227 16:28:51.519863 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-d4d2-account-create-update-qbmct"] Feb 27 16:28:51 crc kubenswrapper[4830]: W0227 16:28:51.527564 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod120619c7_5358_455a_bf71_e3d60389fb05.slice/crio-901a6413935bfba67e9c4c75be4794d70abf5a67fef67ae11b5bcac54adddeb6 WatchSource:0}: Error finding container 901a6413935bfba67e9c4c75be4794d70abf5a67fef67ae11b5bcac54adddeb6: Status 404 returned error can't find the container with id 901a6413935bfba67e9c4c75be4794d70abf5a67fef67ae11b5bcac54adddeb6 Feb 27 16:28:52 crc kubenswrapper[4830]: I0227 16:28:52.204820 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-w22r8" event={"ID":"c44ae554-632d-4347-ac9c-ce0c467ddce7","Type":"ContainerStarted","Data":"65f3155ec69117ad776a3c494a001ad23fdf46b8514ff6c1248fade6ba701d99"} Feb 27 16:28:52 crc kubenswrapper[4830]: I0227 16:28:52.207382 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-v5pmq" event={"ID":"fc8f3bdd-7355-46ce-8ac6-75cb6a21f325","Type":"ContainerStarted","Data":"451fb0be371d26426de1032670cbe01b5e0d72f0687f212f205ce0a0f1049841"} Feb 27 16:28:52 crc kubenswrapper[4830]: I0227 16:28:52.207450 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-v5pmq" event={"ID":"fc8f3bdd-7355-46ce-8ac6-75cb6a21f325","Type":"ContainerStarted","Data":"6ffb785269bea0777d12006401f9faf28b3306775ed72b2e27a96fca7a88ec22"} Feb 27 16:28:52 crc kubenswrapper[4830]: I0227 16:28:52.210289 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-37dc-account-create-update-j859r" event={"ID":"6af8e619-e07a-4702-ac64-7fcf5077aef8","Type":"ContainerStarted","Data":"45a724c55f887a9873187e6e48da3fda84671199ed89e01366feec742267f675"} Feb 27 16:28:52 crc kubenswrapper[4830]: 
I0227 16:28:52.215423 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f","Type":"ContainerStarted","Data":"63b86b7398c02b758efbf23ee7393a15e9d70cbae4e28af8dae65670306da7a0"} Feb 27 16:28:52 crc kubenswrapper[4830]: I0227 16:28:52.215467 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f","Type":"ContainerStarted","Data":"fddbdac256b4a79af48834ea268b02e9852631ab71cc27740d8344fa2927b417"} Feb 27 16:28:52 crc kubenswrapper[4830]: I0227 16:28:52.217176 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-d4d2-account-create-update-qbmct" event={"ID":"120619c7-5358-455a-bf71-e3d60389fb05","Type":"ContainerStarted","Data":"9c796b9641b31cc033bbc7ab7769fb39d6172c20eb3c7d1d5f5b77f73a0e8a9b"} Feb 27 16:28:52 crc kubenswrapper[4830]: I0227 16:28:52.217232 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-d4d2-account-create-update-qbmct" event={"ID":"120619c7-5358-455a-bf71-e3d60389fb05","Type":"ContainerStarted","Data":"901a6413935bfba67e9c4c75be4794d70abf5a67fef67ae11b5bcac54adddeb6"} Feb 27 16:28:52 crc kubenswrapper[4830]: I0227 16:28:52.223902 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-2d81-account-create-update-6xn6z" event={"ID":"4989a1bf-9609-47ae-99c3-561023cff325","Type":"ContainerStarted","Data":"5aa1a3f44a359ee2559e80363d7b378c4edd45c9e53a4c526fc5cd51ab32b3bd"} Feb 27 16:28:52 crc kubenswrapper[4830]: I0227 16:28:52.223990 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-2d81-account-create-update-6xn6z" event={"ID":"4989a1bf-9609-47ae-99c3-561023cff325","Type":"ContainerStarted","Data":"37773fc1d3e379f0dd1bf5a65ba524dc1363b58265d719fb8a450a1021799476"} Feb 27 16:28:52 crc kubenswrapper[4830]: I0227 16:28:52.225594 4830 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack/barbican-db-create-v5pmq" podStartSLOduration=3.225571774 podStartE2EDuration="3.225571774s" podCreationTimestamp="2026-02-27 16:28:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:28:52.221097884 +0000 UTC m=+1328.310370387" watchObservedRunningTime="2026-02-27 16:28:52.225571774 +0000 UTC m=+1328.314844277" Feb 27 16:28:52 crc kubenswrapper[4830]: I0227 16:28:52.228260 4830 generic.go:334] "Generic (PLEG): container finished" podID="d0d104e4-315f-406d-ac89-21878f96a166" containerID="8fc927ca0d436c6b9abd47100b757b549c955783daccc2bc942d4e651824c752" exitCode=0 Feb 27 16:28:52 crc kubenswrapper[4830]: I0227 16:28:52.228838 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-h7wpk" event={"ID":"d0d104e4-315f-406d-ac89-21878f96a166","Type":"ContainerDied","Data":"8fc927ca0d436c6b9abd47100b757b549c955783daccc2bc942d4e651824c752"} Feb 27 16:28:52 crc kubenswrapper[4830]: I0227 16:28:52.233819 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-jhhnm" event={"ID":"ce3b7271-1b27-437c-a5a3-7a2f2511d3de","Type":"ContainerStarted","Data":"ed60d71a50308f4619438818ff5aee5f8e275b029bf18e8c1c8441cf23db8dd5"} Feb 27 16:28:52 crc kubenswrapper[4830]: I0227 16:28:52.233854 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-jhhnm" event={"ID":"ce3b7271-1b27-437c-a5a3-7a2f2511d3de","Type":"ContainerStarted","Data":"2f6aab744471319f45f9444d1aab0cc069238c729af2aaeb5f6588eedb999008"} Feb 27 16:28:52 crc kubenswrapper[4830]: I0227 16:28:52.241905 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-d4d2-account-create-update-qbmct" podStartSLOduration=2.241885234 podStartE2EDuration="2.241885234s" podCreationTimestamp="2026-02-27 16:28:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:28:52.237860895 +0000 UTC m=+1328.327133368" watchObservedRunningTime="2026-02-27 16:28:52.241885234 +0000 UTC m=+1328.331157697" Feb 27 16:28:52 crc kubenswrapper[4830]: I0227 16:28:52.265906 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-37dc-account-create-update-j859r" podStartSLOduration=3.265885893 podStartE2EDuration="3.265885893s" podCreationTimestamp="2026-02-27 16:28:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:28:52.255515028 +0000 UTC m=+1328.344787491" watchObservedRunningTime="2026-02-27 16:28:52.265885893 +0000 UTC m=+1328.355158356" Feb 27 16:28:52 crc kubenswrapper[4830]: I0227 16:28:52.305401 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-create-jhhnm" podStartSLOduration=2.305383712 podStartE2EDuration="2.305383712s" podCreationTimestamp="2026-02-27 16:28:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:28:52.297658572 +0000 UTC m=+1328.386931045" watchObservedRunningTime="2026-02-27 16:28:52.305383712 +0000 UTC m=+1328.394656175" Feb 27 16:28:52 crc kubenswrapper[4830]: I0227 16:28:52.325166 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-2d81-account-create-update-6xn6z" podStartSLOduration=2.325139506 podStartE2EDuration="2.325139506s" podCreationTimestamp="2026-02-27 16:28:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:28:52.315459389 +0000 UTC m=+1328.404731862" watchObservedRunningTime="2026-02-27 16:28:52.325139506 +0000 UTC m=+1328.414411969" Feb 27 16:28:52 crc kubenswrapper[4830]: I0227 
16:28:52.657432 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-mncqx-config-x5ftz" Feb 27 16:28:52 crc kubenswrapper[4830]: I0227 16:28:52.828240 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/903af975-adfa-4548-b8bb-45994e5dc194-additional-scripts\") pod \"903af975-adfa-4548-b8bb-45994e5dc194\" (UID: \"903af975-adfa-4548-b8bb-45994e5dc194\") " Feb 27 16:28:52 crc kubenswrapper[4830]: I0227 16:28:52.828660 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/903af975-adfa-4548-b8bb-45994e5dc194-var-log-ovn\") pod \"903af975-adfa-4548-b8bb-45994e5dc194\" (UID: \"903af975-adfa-4548-b8bb-45994e5dc194\") " Feb 27 16:28:52 crc kubenswrapper[4830]: I0227 16:28:52.828774 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/903af975-adfa-4548-b8bb-45994e5dc194-var-run\") pod \"903af975-adfa-4548-b8bb-45994e5dc194\" (UID: \"903af975-adfa-4548-b8bb-45994e5dc194\") " Feb 27 16:28:52 crc kubenswrapper[4830]: I0227 16:28:52.828779 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/903af975-adfa-4548-b8bb-45994e5dc194-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "903af975-adfa-4548-b8bb-45994e5dc194" (UID: "903af975-adfa-4548-b8bb-45994e5dc194"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 16:28:52 crc kubenswrapper[4830]: I0227 16:28:52.828810 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjdmj\" (UniqueName: \"kubernetes.io/projected/903af975-adfa-4548-b8bb-45994e5dc194-kube-api-access-pjdmj\") pod \"903af975-adfa-4548-b8bb-45994e5dc194\" (UID: \"903af975-adfa-4548-b8bb-45994e5dc194\") " Feb 27 16:28:52 crc kubenswrapper[4830]: I0227 16:28:52.828838 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/903af975-adfa-4548-b8bb-45994e5dc194-var-run" (OuterVolumeSpecName: "var-run") pod "903af975-adfa-4548-b8bb-45994e5dc194" (UID: "903af975-adfa-4548-b8bb-45994e5dc194"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 16:28:52 crc kubenswrapper[4830]: I0227 16:28:52.828846 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/903af975-adfa-4548-b8bb-45994e5dc194-var-run-ovn\") pod \"903af975-adfa-4548-b8bb-45994e5dc194\" (UID: \"903af975-adfa-4548-b8bb-45994e5dc194\") " Feb 27 16:28:52 crc kubenswrapper[4830]: I0227 16:28:52.828931 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/903af975-adfa-4548-b8bb-45994e5dc194-scripts\") pod \"903af975-adfa-4548-b8bb-45994e5dc194\" (UID: \"903af975-adfa-4548-b8bb-45994e5dc194\") " Feb 27 16:28:52 crc kubenswrapper[4830]: I0227 16:28:52.829056 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/903af975-adfa-4548-b8bb-45994e5dc194-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "903af975-adfa-4548-b8bb-45994e5dc194" (UID: "903af975-adfa-4548-b8bb-45994e5dc194"). InnerVolumeSpecName "additional-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:28:52 crc kubenswrapper[4830]: I0227 16:28:52.829117 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/903af975-adfa-4548-b8bb-45994e5dc194-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "903af975-adfa-4548-b8bb-45994e5dc194" (UID: "903af975-adfa-4548-b8bb-45994e5dc194"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 16:28:52 crc kubenswrapper[4830]: I0227 16:28:52.829419 4830 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/903af975-adfa-4548-b8bb-45994e5dc194-var-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 27 16:28:52 crc kubenswrapper[4830]: I0227 16:28:52.829448 4830 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/903af975-adfa-4548-b8bb-45994e5dc194-additional-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 16:28:52 crc kubenswrapper[4830]: I0227 16:28:52.829460 4830 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/903af975-adfa-4548-b8bb-45994e5dc194-var-log-ovn\") on node \"crc\" DevicePath \"\"" Feb 27 16:28:52 crc kubenswrapper[4830]: I0227 16:28:52.829470 4830 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/903af975-adfa-4548-b8bb-45994e5dc194-var-run\") on node \"crc\" DevicePath \"\"" Feb 27 16:28:52 crc kubenswrapper[4830]: I0227 16:28:52.829868 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/903af975-adfa-4548-b8bb-45994e5dc194-scripts" (OuterVolumeSpecName: "scripts") pod "903af975-adfa-4548-b8bb-45994e5dc194" (UID: "903af975-adfa-4548-b8bb-45994e5dc194"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:28:52 crc kubenswrapper[4830]: I0227 16:28:52.835848 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/903af975-adfa-4548-b8bb-45994e5dc194-kube-api-access-pjdmj" (OuterVolumeSpecName: "kube-api-access-pjdmj") pod "903af975-adfa-4548-b8bb-45994e5dc194" (UID: "903af975-adfa-4548-b8bb-45994e5dc194"). InnerVolumeSpecName "kube-api-access-pjdmj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:28:52 crc kubenswrapper[4830]: I0227 16:28:52.933542 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjdmj\" (UniqueName: \"kubernetes.io/projected/903af975-adfa-4548-b8bb-45994e5dc194-kube-api-access-pjdmj\") on node \"crc\" DevicePath \"\"" Feb 27 16:28:52 crc kubenswrapper[4830]: I0227 16:28:52.933573 4830 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/903af975-adfa-4548-b8bb-45994e5dc194-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 16:28:53 crc kubenswrapper[4830]: I0227 16:28:53.243787 4830 generic.go:334] "Generic (PLEG): container finished" podID="4989a1bf-9609-47ae-99c3-561023cff325" containerID="5aa1a3f44a359ee2559e80363d7b378c4edd45c9e53a4c526fc5cd51ab32b3bd" exitCode=0 Feb 27 16:28:53 crc kubenswrapper[4830]: I0227 16:28:53.243867 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-2d81-account-create-update-6xn6z" event={"ID":"4989a1bf-9609-47ae-99c3-561023cff325","Type":"ContainerDied","Data":"5aa1a3f44a359ee2559e80363d7b378c4edd45c9e53a4c526fc5cd51ab32b3bd"} Feb 27 16:28:53 crc kubenswrapper[4830]: I0227 16:28:53.245836 4830 generic.go:334] "Generic (PLEG): container finished" podID="ce3b7271-1b27-437c-a5a3-7a2f2511d3de" containerID="ed60d71a50308f4619438818ff5aee5f8e275b029bf18e8c1c8441cf23db8dd5" exitCode=0 Feb 27 16:28:53 crc kubenswrapper[4830]: I0227 16:28:53.245889 4830 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/neutron-db-create-jhhnm" event={"ID":"ce3b7271-1b27-437c-a5a3-7a2f2511d3de","Type":"ContainerDied","Data":"ed60d71a50308f4619438818ff5aee5f8e275b029bf18e8c1c8441cf23db8dd5"} Feb 27 16:28:53 crc kubenswrapper[4830]: I0227 16:28:53.269274 4830 generic.go:334] "Generic (PLEG): container finished" podID="fc8f3bdd-7355-46ce-8ac6-75cb6a21f325" containerID="451fb0be371d26426de1032670cbe01b5e0d72f0687f212f205ce0a0f1049841" exitCode=0 Feb 27 16:28:53 crc kubenswrapper[4830]: I0227 16:28:53.269362 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-v5pmq" event={"ID":"fc8f3bdd-7355-46ce-8ac6-75cb6a21f325","Type":"ContainerDied","Data":"451fb0be371d26426de1032670cbe01b5e0d72f0687f212f205ce0a0f1049841"} Feb 27 16:28:53 crc kubenswrapper[4830]: I0227 16:28:53.272748 4830 generic.go:334] "Generic (PLEG): container finished" podID="6af8e619-e07a-4702-ac64-7fcf5077aef8" containerID="45a724c55f887a9873187e6e48da3fda84671199ed89e01366feec742267f675" exitCode=0 Feb 27 16:28:53 crc kubenswrapper[4830]: I0227 16:28:53.272794 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-37dc-account-create-update-j859r" event={"ID":"6af8e619-e07a-4702-ac64-7fcf5077aef8","Type":"ContainerDied","Data":"45a724c55f887a9873187e6e48da3fda84671199ed89e01366feec742267f675"} Feb 27 16:28:53 crc kubenswrapper[4830]: I0227 16:28:53.291255 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f","Type":"ContainerStarted","Data":"2b750caa248530febbfbd4731fc41f64ef7a9129eab2a66780052a81ccfecb65"} Feb 27 16:28:53 crc kubenswrapper[4830]: I0227 16:28:53.303555 4830 generic.go:334] "Generic (PLEG): container finished" podID="120619c7-5358-455a-bf71-e3d60389fb05" containerID="9c796b9641b31cc033bbc7ab7769fb39d6172c20eb3c7d1d5f5b77f73a0e8a9b" exitCode=0 Feb 27 16:28:53 crc kubenswrapper[4830]: I0227 16:28:53.303643 4830 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/barbican-d4d2-account-create-update-qbmct" event={"ID":"120619c7-5358-455a-bf71-e3d60389fb05","Type":"ContainerDied","Data":"9c796b9641b31cc033bbc7ab7769fb39d6172c20eb3c7d1d5f5b77f73a0e8a9b"} Feb 27 16:28:53 crc kubenswrapper[4830]: I0227 16:28:53.307717 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-mncqx-config-x5ftz" event={"ID":"903af975-adfa-4548-b8bb-45994e5dc194","Type":"ContainerDied","Data":"fe250d220dc5d3773ac5027417726112c047d03308064c5aadd25614db41b870"} Feb 27 16:28:53 crc kubenswrapper[4830]: I0227 16:28:53.307738 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-mncqx-config-x5ftz" Feb 27 16:28:53 crc kubenswrapper[4830]: I0227 16:28:53.307755 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe250d220dc5d3773ac5027417726112c047d03308064c5aadd25614db41b870" Feb 27 16:28:53 crc kubenswrapper[4830]: I0227 16:28:53.566471 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-h7wpk" Feb 27 16:28:53 crc kubenswrapper[4830]: I0227 16:28:53.737008 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-mncqx-config-x5ftz"] Feb 27 16:28:53 crc kubenswrapper[4830]: I0227 16:28:53.748141 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-mncqx-config-x5ftz"] Feb 27 16:28:53 crc kubenswrapper[4830]: I0227 16:28:53.752937 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d0d104e4-315f-406d-ac89-21878f96a166-operator-scripts\") pod \"d0d104e4-315f-406d-ac89-21878f96a166\" (UID: \"d0d104e4-315f-406d-ac89-21878f96a166\") " Feb 27 16:28:53 crc kubenswrapper[4830]: I0227 16:28:53.753239 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p87sb\" (UniqueName: \"kubernetes.io/projected/d0d104e4-315f-406d-ac89-21878f96a166-kube-api-access-p87sb\") pod \"d0d104e4-315f-406d-ac89-21878f96a166\" (UID: \"d0d104e4-315f-406d-ac89-21878f96a166\") " Feb 27 16:28:53 crc kubenswrapper[4830]: I0227 16:28:53.754537 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d0d104e4-315f-406d-ac89-21878f96a166-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d0d104e4-315f-406d-ac89-21878f96a166" (UID: "d0d104e4-315f-406d-ac89-21878f96a166"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:28:53 crc kubenswrapper[4830]: I0227 16:28:53.762856 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0d104e4-315f-406d-ac89-21878f96a166-kube-api-access-p87sb" (OuterVolumeSpecName: "kube-api-access-p87sb") pod "d0d104e4-315f-406d-ac89-21878f96a166" (UID: "d0d104e4-315f-406d-ac89-21878f96a166"). InnerVolumeSpecName "kube-api-access-p87sb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:28:53 crc kubenswrapper[4830]: I0227 16:28:53.856152 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p87sb\" (UniqueName: \"kubernetes.io/projected/d0d104e4-315f-406d-ac89-21878f96a166-kube-api-access-p87sb\") on node \"crc\" DevicePath \"\"" Feb 27 16:28:53 crc kubenswrapper[4830]: I0227 16:28:53.856201 4830 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d0d104e4-315f-406d-ac89-21878f96a166-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 16:28:54 crc kubenswrapper[4830]: I0227 16:28:54.327105 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f","Type":"ContainerStarted","Data":"2111c96223f006387077459f4429b67f715648783b2df873c937a40d47be2181"} Feb 27 16:28:54 crc kubenswrapper[4830]: I0227 16:28:54.327359 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f","Type":"ContainerStarted","Data":"ee0b677352a33d7fbcb2e9fab57bf5d672b03867dad9240c6c1fbd8e2b1f0b37"} Feb 27 16:28:54 crc kubenswrapper[4830]: I0227 16:28:54.328774 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-h7wpk" event={"ID":"d0d104e4-315f-406d-ac89-21878f96a166","Type":"ContainerDied","Data":"7dcfb06edbf48f080c76b2086930bea386a1ba743bbaaaa1e7102badcf04c020"} Feb 27 16:28:54 crc kubenswrapper[4830]: I0227 16:28:54.328830 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7dcfb06edbf48f080c76b2086930bea386a1ba743bbaaaa1e7102badcf04c020" Feb 27 16:28:54 crc kubenswrapper[4830]: I0227 16:28:54.328797 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-h7wpk" Feb 27 16:28:54 crc kubenswrapper[4830]: I0227 16:28:54.422067 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-wn56z"] Feb 27 16:28:54 crc kubenswrapper[4830]: I0227 16:28:54.435662 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-wn56z"] Feb 27 16:28:54 crc kubenswrapper[4830]: I0227 16:28:54.784979 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c54825e-123b-4328-a0d5-c5afb0670045" path="/var/lib/kubelet/pods/8c54825e-123b-4328-a0d5-c5afb0670045/volumes" Feb 27 16:28:54 crc kubenswrapper[4830]: I0227 16:28:54.785485 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="903af975-adfa-4548-b8bb-45994e5dc194" path="/var/lib/kubelet/pods/903af975-adfa-4548-b8bb-45994e5dc194/volumes" Feb 27 16:28:56 crc kubenswrapper[4830]: I0227 16:28:56.580516 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-d4d2-account-create-update-qbmct" Feb 27 16:28:56 crc kubenswrapper[4830]: I0227 16:28:56.626821 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-2d81-account-create-update-6xn6z" Feb 27 16:28:56 crc kubenswrapper[4830]: I0227 16:28:56.630820 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-37dc-account-create-update-j859r" Feb 27 16:28:56 crc kubenswrapper[4830]: I0227 16:28:56.631082 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-v5pmq" Feb 27 16:28:56 crc kubenswrapper[4830]: I0227 16:28:56.668986 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-jhhnm" Feb 27 16:28:56 crc kubenswrapper[4830]: I0227 16:28:56.704503 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4nczw\" (UniqueName: \"kubernetes.io/projected/120619c7-5358-455a-bf71-e3d60389fb05-kube-api-access-4nczw\") pod \"120619c7-5358-455a-bf71-e3d60389fb05\" (UID: \"120619c7-5358-455a-bf71-e3d60389fb05\") " Feb 27 16:28:56 crc kubenswrapper[4830]: I0227 16:28:56.705011 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/120619c7-5358-455a-bf71-e3d60389fb05-operator-scripts\") pod \"120619c7-5358-455a-bf71-e3d60389fb05\" (UID: \"120619c7-5358-455a-bf71-e3d60389fb05\") " Feb 27 16:28:56 crc kubenswrapper[4830]: I0227 16:28:56.706667 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/120619c7-5358-455a-bf71-e3d60389fb05-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "120619c7-5358-455a-bf71-e3d60389fb05" (UID: "120619c7-5358-455a-bf71-e3d60389fb05"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:28:56 crc kubenswrapper[4830]: I0227 16:28:56.714592 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/120619c7-5358-455a-bf71-e3d60389fb05-kube-api-access-4nczw" (OuterVolumeSpecName: "kube-api-access-4nczw") pod "120619c7-5358-455a-bf71-e3d60389fb05" (UID: "120619c7-5358-455a-bf71-e3d60389fb05"). InnerVolumeSpecName "kube-api-access-4nczw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:28:56 crc kubenswrapper[4830]: I0227 16:28:56.807834 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wt4nz\" (UniqueName: \"kubernetes.io/projected/4989a1bf-9609-47ae-99c3-561023cff325-kube-api-access-wt4nz\") pod \"4989a1bf-9609-47ae-99c3-561023cff325\" (UID: \"4989a1bf-9609-47ae-99c3-561023cff325\") " Feb 27 16:28:56 crc kubenswrapper[4830]: I0227 16:28:56.807940 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x5hdz\" (UniqueName: \"kubernetes.io/projected/fc8f3bdd-7355-46ce-8ac6-75cb6a21f325-kube-api-access-x5hdz\") pod \"fc8f3bdd-7355-46ce-8ac6-75cb6a21f325\" (UID: \"fc8f3bdd-7355-46ce-8ac6-75cb6a21f325\") " Feb 27 16:28:56 crc kubenswrapper[4830]: I0227 16:28:56.808002 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6af8e619-e07a-4702-ac64-7fcf5077aef8-operator-scripts\") pod \"6af8e619-e07a-4702-ac64-7fcf5077aef8\" (UID: \"6af8e619-e07a-4702-ac64-7fcf5077aef8\") " Feb 27 16:28:56 crc kubenswrapper[4830]: I0227 16:28:56.808060 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce3b7271-1b27-437c-a5a3-7a2f2511d3de-operator-scripts\") pod \"ce3b7271-1b27-437c-a5a3-7a2f2511d3de\" (UID: \"ce3b7271-1b27-437c-a5a3-7a2f2511d3de\") " Feb 27 16:28:56 crc kubenswrapper[4830]: I0227 16:28:56.808086 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4989a1bf-9609-47ae-99c3-561023cff325-operator-scripts\") pod \"4989a1bf-9609-47ae-99c3-561023cff325\" (UID: \"4989a1bf-9609-47ae-99c3-561023cff325\") " Feb 27 16:28:56 crc kubenswrapper[4830]: I0227 16:28:56.808132 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access-9lpd2\" (UniqueName: \"kubernetes.io/projected/6af8e619-e07a-4702-ac64-7fcf5077aef8-kube-api-access-9lpd2\") pod \"6af8e619-e07a-4702-ac64-7fcf5077aef8\" (UID: \"6af8e619-e07a-4702-ac64-7fcf5077aef8\") " Feb 27 16:28:56 crc kubenswrapper[4830]: I0227 16:28:56.808159 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fc8f3bdd-7355-46ce-8ac6-75cb6a21f325-operator-scripts\") pod \"fc8f3bdd-7355-46ce-8ac6-75cb6a21f325\" (UID: \"fc8f3bdd-7355-46ce-8ac6-75cb6a21f325\") " Feb 27 16:28:56 crc kubenswrapper[4830]: I0227 16:28:56.808236 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gtgf8\" (UniqueName: \"kubernetes.io/projected/ce3b7271-1b27-437c-a5a3-7a2f2511d3de-kube-api-access-gtgf8\") pod \"ce3b7271-1b27-437c-a5a3-7a2f2511d3de\" (UID: \"ce3b7271-1b27-437c-a5a3-7a2f2511d3de\") " Feb 27 16:28:56 crc kubenswrapper[4830]: I0227 16:28:56.808534 4830 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/120619c7-5358-455a-bf71-e3d60389fb05-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 16:28:56 crc kubenswrapper[4830]: I0227 16:28:56.808555 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4nczw\" (UniqueName: \"kubernetes.io/projected/120619c7-5358-455a-bf71-e3d60389fb05-kube-api-access-4nczw\") on node \"crc\" DevicePath \"\"" Feb 27 16:28:56 crc kubenswrapper[4830]: I0227 16:28:56.808647 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce3b7271-1b27-437c-a5a3-7a2f2511d3de-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ce3b7271-1b27-437c-a5a3-7a2f2511d3de" (UID: "ce3b7271-1b27-437c-a5a3-7a2f2511d3de"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:28:56 crc kubenswrapper[4830]: I0227 16:28:56.808776 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6af8e619-e07a-4702-ac64-7fcf5077aef8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6af8e619-e07a-4702-ac64-7fcf5077aef8" (UID: "6af8e619-e07a-4702-ac64-7fcf5077aef8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:28:56 crc kubenswrapper[4830]: I0227 16:28:56.809248 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4989a1bf-9609-47ae-99c3-561023cff325-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4989a1bf-9609-47ae-99c3-561023cff325" (UID: "4989a1bf-9609-47ae-99c3-561023cff325"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:28:56 crc kubenswrapper[4830]: I0227 16:28:56.810565 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc8f3bdd-7355-46ce-8ac6-75cb6a21f325-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fc8f3bdd-7355-46ce-8ac6-75cb6a21f325" (UID: "fc8f3bdd-7355-46ce-8ac6-75cb6a21f325"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:28:56 crc kubenswrapper[4830]: I0227 16:28:56.815266 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6af8e619-e07a-4702-ac64-7fcf5077aef8-kube-api-access-9lpd2" (OuterVolumeSpecName: "kube-api-access-9lpd2") pod "6af8e619-e07a-4702-ac64-7fcf5077aef8" (UID: "6af8e619-e07a-4702-ac64-7fcf5077aef8"). InnerVolumeSpecName "kube-api-access-9lpd2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:28:56 crc kubenswrapper[4830]: I0227 16:28:56.823613 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4989a1bf-9609-47ae-99c3-561023cff325-kube-api-access-wt4nz" (OuterVolumeSpecName: "kube-api-access-wt4nz") pod "4989a1bf-9609-47ae-99c3-561023cff325" (UID: "4989a1bf-9609-47ae-99c3-561023cff325"). InnerVolumeSpecName "kube-api-access-wt4nz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:28:56 crc kubenswrapper[4830]: I0227 16:28:56.835586 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc8f3bdd-7355-46ce-8ac6-75cb6a21f325-kube-api-access-x5hdz" (OuterVolumeSpecName: "kube-api-access-x5hdz") pod "fc8f3bdd-7355-46ce-8ac6-75cb6a21f325" (UID: "fc8f3bdd-7355-46ce-8ac6-75cb6a21f325"). InnerVolumeSpecName "kube-api-access-x5hdz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:28:56 crc kubenswrapper[4830]: I0227 16:28:56.835638 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce3b7271-1b27-437c-a5a3-7a2f2511d3de-kube-api-access-gtgf8" (OuterVolumeSpecName: "kube-api-access-gtgf8") pod "ce3b7271-1b27-437c-a5a3-7a2f2511d3de" (UID: "ce3b7271-1b27-437c-a5a3-7a2f2511d3de"). InnerVolumeSpecName "kube-api-access-gtgf8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:28:56 crc kubenswrapper[4830]: I0227 16:28:56.910087 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gtgf8\" (UniqueName: \"kubernetes.io/projected/ce3b7271-1b27-437c-a5a3-7a2f2511d3de-kube-api-access-gtgf8\") on node \"crc\" DevicePath \"\"" Feb 27 16:28:56 crc kubenswrapper[4830]: I0227 16:28:56.910127 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wt4nz\" (UniqueName: \"kubernetes.io/projected/4989a1bf-9609-47ae-99c3-561023cff325-kube-api-access-wt4nz\") on node \"crc\" DevicePath \"\"" Feb 27 16:28:56 crc kubenswrapper[4830]: I0227 16:28:56.910146 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x5hdz\" (UniqueName: \"kubernetes.io/projected/fc8f3bdd-7355-46ce-8ac6-75cb6a21f325-kube-api-access-x5hdz\") on node \"crc\" DevicePath \"\"" Feb 27 16:28:56 crc kubenswrapper[4830]: I0227 16:28:56.910165 4830 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6af8e619-e07a-4702-ac64-7fcf5077aef8-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 16:28:56 crc kubenswrapper[4830]: I0227 16:28:56.910177 4830 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce3b7271-1b27-437c-a5a3-7a2f2511d3de-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 16:28:56 crc kubenswrapper[4830]: I0227 16:28:56.910189 4830 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4989a1bf-9609-47ae-99c3-561023cff325-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 16:28:56 crc kubenswrapper[4830]: I0227 16:28:56.910201 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9lpd2\" (UniqueName: \"kubernetes.io/projected/6af8e619-e07a-4702-ac64-7fcf5077aef8-kube-api-access-9lpd2\") on node \"crc\" DevicePath \"\"" 
Feb 27 16:28:56 crc kubenswrapper[4830]: I0227 16:28:56.910213 4830 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fc8f3bdd-7355-46ce-8ac6-75cb6a21f325-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 16:28:57 crc kubenswrapper[4830]: I0227 16:28:57.375269 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-jhhnm" event={"ID":"ce3b7271-1b27-437c-a5a3-7a2f2511d3de","Type":"ContainerDied","Data":"2f6aab744471319f45f9444d1aab0cc069238c729af2aaeb5f6588eedb999008"} Feb 27 16:28:57 crc kubenswrapper[4830]: I0227 16:28:57.375340 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2f6aab744471319f45f9444d1aab0cc069238c729af2aaeb5f6588eedb999008" Feb 27 16:28:57 crc kubenswrapper[4830]: I0227 16:28:57.375303 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-jhhnm" Feb 27 16:28:57 crc kubenswrapper[4830]: I0227 16:28:57.379878 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-w22r8" event={"ID":"c44ae554-632d-4347-ac9c-ce0c467ddce7","Type":"ContainerStarted","Data":"5f5402616bb7611817535016b614d7887cf7031895a3cb81400c32e205dcc9d4"} Feb 27 16:28:57 crc kubenswrapper[4830]: I0227 16:28:57.383046 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-v5pmq" event={"ID":"fc8f3bdd-7355-46ce-8ac6-75cb6a21f325","Type":"ContainerDied","Data":"6ffb785269bea0777d12006401f9faf28b3306775ed72b2e27a96fca7a88ec22"} Feb 27 16:28:57 crc kubenswrapper[4830]: I0227 16:28:57.383147 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6ffb785269bea0777d12006401f9faf28b3306775ed72b2e27a96fca7a88ec22" Feb 27 16:28:57 crc kubenswrapper[4830]: I0227 16:28:57.383372 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-v5pmq" Feb 27 16:28:57 crc kubenswrapper[4830]: I0227 16:28:57.411960 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-w22r8" podStartSLOduration=2.26174622 podStartE2EDuration="7.411919559s" podCreationTimestamp="2026-02-27 16:28:50 +0000 UTC" firstStartedPulling="2026-02-27 16:28:51.311882428 +0000 UTC m=+1327.401154881" lastFinishedPulling="2026-02-27 16:28:56.462055767 +0000 UTC m=+1332.551328220" observedRunningTime="2026-02-27 16:28:57.408723521 +0000 UTC m=+1333.497996004" watchObservedRunningTime="2026-02-27 16:28:57.411919559 +0000 UTC m=+1333.501192022" Feb 27 16:28:57 crc kubenswrapper[4830]: I0227 16:28:57.419830 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f","Type":"ContainerStarted","Data":"2ecea93ad489597ba408891f7afe44675c8c3d67fbcc4edfbe9a3debbac6c3a1"} Feb 27 16:28:57 crc kubenswrapper[4830]: I0227 16:28:57.419872 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f","Type":"ContainerStarted","Data":"7cfd581745eb62c04447e2179fa4d6397a6ffb2801133df8571673fd2fc8908e"} Feb 27 16:28:57 crc kubenswrapper[4830]: I0227 16:28:57.419882 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f","Type":"ContainerStarted","Data":"d7c3c63f60fa6c0faabdef005cd6435637f7aa45e44077b6d1579dbcfce2ffa5"} Feb 27 16:28:57 crc kubenswrapper[4830]: I0227 16:28:57.422819 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-37dc-account-create-update-j859r" event={"ID":"6af8e619-e07a-4702-ac64-7fcf5077aef8","Type":"ContainerDied","Data":"f800e7d4f0a139c33d06541a9faabda0a942498ba3218f42119cf26600088f78"} Feb 27 16:28:57 crc kubenswrapper[4830]: I0227 16:28:57.422842 4830 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f800e7d4f0a139c33d06541a9faabda0a942498ba3218f42119cf26600088f78" Feb 27 16:28:57 crc kubenswrapper[4830]: I0227 16:28:57.422905 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-37dc-account-create-update-j859r" Feb 27 16:28:57 crc kubenswrapper[4830]: I0227 16:28:57.426024 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-d4d2-account-create-update-qbmct" event={"ID":"120619c7-5358-455a-bf71-e3d60389fb05","Type":"ContainerDied","Data":"901a6413935bfba67e9c4c75be4794d70abf5a67fef67ae11b5bcac54adddeb6"} Feb 27 16:28:57 crc kubenswrapper[4830]: I0227 16:28:57.426073 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="901a6413935bfba67e9c4c75be4794d70abf5a67fef67ae11b5bcac54adddeb6" Feb 27 16:28:57 crc kubenswrapper[4830]: I0227 16:28:57.426120 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-d4d2-account-create-update-qbmct" Feb 27 16:28:57 crc kubenswrapper[4830]: I0227 16:28:57.428911 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-2d81-account-create-update-6xn6z" event={"ID":"4989a1bf-9609-47ae-99c3-561023cff325","Type":"ContainerDied","Data":"37773fc1d3e379f0dd1bf5a65ba524dc1363b58265d719fb8a450a1021799476"} Feb 27 16:28:57 crc kubenswrapper[4830]: I0227 16:28:57.428984 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="37773fc1d3e379f0dd1bf5a65ba524dc1363b58265d719fb8a450a1021799476" Feb 27 16:28:57 crc kubenswrapper[4830]: I0227 16:28:57.429110 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-2d81-account-create-update-6xn6z" Feb 27 16:28:59 crc kubenswrapper[4830]: I0227 16:28:59.467084 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-vd8js"] Feb 27 16:28:59 crc kubenswrapper[4830]: E0227 16:28:59.467675 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0d104e4-315f-406d-ac89-21878f96a166" containerName="mariadb-database-create" Feb 27 16:28:59 crc kubenswrapper[4830]: I0227 16:28:59.467691 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0d104e4-315f-406d-ac89-21878f96a166" containerName="mariadb-database-create" Feb 27 16:28:59 crc kubenswrapper[4830]: E0227 16:28:59.467711 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6af8e619-e07a-4702-ac64-7fcf5077aef8" containerName="mariadb-account-create-update" Feb 27 16:28:59 crc kubenswrapper[4830]: I0227 16:28:59.467720 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="6af8e619-e07a-4702-ac64-7fcf5077aef8" containerName="mariadb-account-create-update" Feb 27 16:28:59 crc kubenswrapper[4830]: E0227 16:28:59.467746 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc8f3bdd-7355-46ce-8ac6-75cb6a21f325" containerName="mariadb-database-create" Feb 27 16:28:59 crc kubenswrapper[4830]: I0227 16:28:59.467754 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc8f3bdd-7355-46ce-8ac6-75cb6a21f325" containerName="mariadb-database-create" Feb 27 16:28:59 crc kubenswrapper[4830]: E0227 16:28:59.467776 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="120619c7-5358-455a-bf71-e3d60389fb05" containerName="mariadb-account-create-update" Feb 27 16:28:59 crc kubenswrapper[4830]: I0227 16:28:59.467784 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="120619c7-5358-455a-bf71-e3d60389fb05" containerName="mariadb-account-create-update" Feb 27 16:28:59 crc kubenswrapper[4830]: E0227 16:28:59.467797 4830 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce3b7271-1b27-437c-a5a3-7a2f2511d3de" containerName="mariadb-database-create" Feb 27 16:28:59 crc kubenswrapper[4830]: I0227 16:28:59.467805 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce3b7271-1b27-437c-a5a3-7a2f2511d3de" containerName="mariadb-database-create" Feb 27 16:28:59 crc kubenswrapper[4830]: E0227 16:28:59.467821 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4989a1bf-9609-47ae-99c3-561023cff325" containerName="mariadb-account-create-update" Feb 27 16:28:59 crc kubenswrapper[4830]: I0227 16:28:59.467829 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="4989a1bf-9609-47ae-99c3-561023cff325" containerName="mariadb-account-create-update" Feb 27 16:28:59 crc kubenswrapper[4830]: E0227 16:28:59.467845 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="903af975-adfa-4548-b8bb-45994e5dc194" containerName="ovn-config" Feb 27 16:28:59 crc kubenswrapper[4830]: I0227 16:28:59.467854 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="903af975-adfa-4548-b8bb-45994e5dc194" containerName="ovn-config" Feb 27 16:28:59 crc kubenswrapper[4830]: I0227 16:28:59.468095 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="6af8e619-e07a-4702-ac64-7fcf5077aef8" containerName="mariadb-account-create-update" Feb 27 16:28:59 crc kubenswrapper[4830]: I0227 16:28:59.468113 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="4989a1bf-9609-47ae-99c3-561023cff325" containerName="mariadb-account-create-update" Feb 27 16:28:59 crc kubenswrapper[4830]: I0227 16:28:59.468129 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc8f3bdd-7355-46ce-8ac6-75cb6a21f325" containerName="mariadb-database-create" Feb 27 16:28:59 crc kubenswrapper[4830]: I0227 16:28:59.468148 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="120619c7-5358-455a-bf71-e3d60389fb05" 
containerName="mariadb-account-create-update" Feb 27 16:28:59 crc kubenswrapper[4830]: I0227 16:28:59.468166 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce3b7271-1b27-437c-a5a3-7a2f2511d3de" containerName="mariadb-database-create" Feb 27 16:28:59 crc kubenswrapper[4830]: I0227 16:28:59.468190 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0d104e4-315f-406d-ac89-21878f96a166" containerName="mariadb-database-create" Feb 27 16:28:59 crc kubenswrapper[4830]: I0227 16:28:59.468204 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="903af975-adfa-4548-b8bb-45994e5dc194" containerName="ovn-config" Feb 27 16:28:59 crc kubenswrapper[4830]: I0227 16:28:59.468780 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-vd8js" Feb 27 16:28:59 crc kubenswrapper[4830]: I0227 16:28:59.471682 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Feb 27 16:28:59 crc kubenswrapper[4830]: I0227 16:28:59.475689 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-vd8js"] Feb 27 16:28:59 crc kubenswrapper[4830]: I0227 16:28:59.476065 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f","Type":"ContainerStarted","Data":"bd8b53933ff6dda1af3029d46d29a1b791028b8a3ae0508dffa6e043e33ce932"} Feb 27 16:28:59 crc kubenswrapper[4830]: I0227 16:28:59.655129 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8n4w\" (UniqueName: \"kubernetes.io/projected/624a4c06-2a5c-480c-89f1-addc261412f0-kube-api-access-p8n4w\") pod \"root-account-create-update-vd8js\" (UID: \"624a4c06-2a5c-480c-89f1-addc261412f0\") " pod="openstack/root-account-create-update-vd8js" Feb 27 16:28:59 crc kubenswrapper[4830]: I0227 16:28:59.655218 
4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/624a4c06-2a5c-480c-89f1-addc261412f0-operator-scripts\") pod \"root-account-create-update-vd8js\" (UID: \"624a4c06-2a5c-480c-89f1-addc261412f0\") " pod="openstack/root-account-create-update-vd8js" Feb 27 16:28:59 crc kubenswrapper[4830]: I0227 16:28:59.756980 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p8n4w\" (UniqueName: \"kubernetes.io/projected/624a4c06-2a5c-480c-89f1-addc261412f0-kube-api-access-p8n4w\") pod \"root-account-create-update-vd8js\" (UID: \"624a4c06-2a5c-480c-89f1-addc261412f0\") " pod="openstack/root-account-create-update-vd8js" Feb 27 16:28:59 crc kubenswrapper[4830]: I0227 16:28:59.757052 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/624a4c06-2a5c-480c-89f1-addc261412f0-operator-scripts\") pod \"root-account-create-update-vd8js\" (UID: \"624a4c06-2a5c-480c-89f1-addc261412f0\") " pod="openstack/root-account-create-update-vd8js" Feb 27 16:28:59 crc kubenswrapper[4830]: I0227 16:28:59.757868 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/624a4c06-2a5c-480c-89f1-addc261412f0-operator-scripts\") pod \"root-account-create-update-vd8js\" (UID: \"624a4c06-2a5c-480c-89f1-addc261412f0\") " pod="openstack/root-account-create-update-vd8js" Feb 27 16:28:59 crc kubenswrapper[4830]: I0227 16:28:59.798887 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p8n4w\" (UniqueName: \"kubernetes.io/projected/624a4c06-2a5c-480c-89f1-addc261412f0-kube-api-access-p8n4w\") pod \"root-account-create-update-vd8js\" (UID: \"624a4c06-2a5c-480c-89f1-addc261412f0\") " pod="openstack/root-account-create-update-vd8js" Feb 27 16:29:00 crc 
kubenswrapper[4830]: I0227 16:29:00.087477 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-vd8js" Feb 27 16:29:00 crc kubenswrapper[4830]: I0227 16:29:00.547005 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=30.178791339 podStartE2EDuration="37.546982281s" podCreationTimestamp="2026-02-27 16:28:23 +0000 UTC" firstStartedPulling="2026-02-27 16:28:45.669632888 +0000 UTC m=+1321.758905351" lastFinishedPulling="2026-02-27 16:28:53.03782382 +0000 UTC m=+1329.127096293" observedRunningTime="2026-02-27 16:29:00.534988247 +0000 UTC m=+1336.624260720" watchObservedRunningTime="2026-02-27 16:29:00.546982281 +0000 UTC m=+1336.636254754" Feb 27 16:29:00 crc kubenswrapper[4830]: I0227 16:29:00.659153 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-vd8js"] Feb 27 16:29:00 crc kubenswrapper[4830]: W0227 16:29:00.663345 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod624a4c06_2a5c_480c_89f1_addc261412f0.slice/crio-25c7f1ebf6edcd80f7a9b90d8e5c0d5c937fa66717e8dd08baa8c7c466ddea54 WatchSource:0}: Error finding container 25c7f1ebf6edcd80f7a9b90d8e5c0d5c937fa66717e8dd08baa8c7c466ddea54: Status 404 returned error can't find the container with id 25c7f1ebf6edcd80f7a9b90d8e5c0d5c937fa66717e8dd08baa8c7c466ddea54 Feb 27 16:29:00 crc kubenswrapper[4830]: I0227 16:29:00.829354 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-pchsl"] Feb 27 16:29:00 crc kubenswrapper[4830]: I0227 16:29:00.831094 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-pchsl" Feb 27 16:29:00 crc kubenswrapper[4830]: I0227 16:29:00.833118 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Feb 27 16:29:00 crc kubenswrapper[4830]: I0227 16:29:00.838841 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-pchsl"] Feb 27 16:29:00 crc kubenswrapper[4830]: I0227 16:29:00.992120 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1dd5a364-2f28-4e8b-831c-08ed09984745-config\") pod \"dnsmasq-dns-5c79d794d7-pchsl\" (UID: \"1dd5a364-2f28-4e8b-831c-08ed09984745\") " pod="openstack/dnsmasq-dns-5c79d794d7-pchsl" Feb 27 16:29:00 crc kubenswrapper[4830]: I0227 16:29:00.992364 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1dd5a364-2f28-4e8b-831c-08ed09984745-ovsdbserver-nb\") pod \"dnsmasq-dns-5c79d794d7-pchsl\" (UID: \"1dd5a364-2f28-4e8b-831c-08ed09984745\") " pod="openstack/dnsmasq-dns-5c79d794d7-pchsl" Feb 27 16:29:00 crc kubenswrapper[4830]: I0227 16:29:00.992453 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1dd5a364-2f28-4e8b-831c-08ed09984745-ovsdbserver-sb\") pod \"dnsmasq-dns-5c79d794d7-pchsl\" (UID: \"1dd5a364-2f28-4e8b-831c-08ed09984745\") " pod="openstack/dnsmasq-dns-5c79d794d7-pchsl" Feb 27 16:29:00 crc kubenswrapper[4830]: I0227 16:29:00.992534 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1dd5a364-2f28-4e8b-831c-08ed09984745-dns-svc\") pod \"dnsmasq-dns-5c79d794d7-pchsl\" (UID: \"1dd5a364-2f28-4e8b-831c-08ed09984745\") " 
pod="openstack/dnsmasq-dns-5c79d794d7-pchsl" Feb 27 16:29:00 crc kubenswrapper[4830]: I0227 16:29:00.992654 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1dd5a364-2f28-4e8b-831c-08ed09984745-dns-swift-storage-0\") pod \"dnsmasq-dns-5c79d794d7-pchsl\" (UID: \"1dd5a364-2f28-4e8b-831c-08ed09984745\") " pod="openstack/dnsmasq-dns-5c79d794d7-pchsl" Feb 27 16:29:00 crc kubenswrapper[4830]: I0227 16:29:00.992734 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxb9v\" (UniqueName: \"kubernetes.io/projected/1dd5a364-2f28-4e8b-831c-08ed09984745-kube-api-access-sxb9v\") pod \"dnsmasq-dns-5c79d794d7-pchsl\" (UID: \"1dd5a364-2f28-4e8b-831c-08ed09984745\") " pod="openstack/dnsmasq-dns-5c79d794d7-pchsl" Feb 27 16:29:01 crc kubenswrapper[4830]: I0227 16:29:01.094387 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1dd5a364-2f28-4e8b-831c-08ed09984745-dns-swift-storage-0\") pod \"dnsmasq-dns-5c79d794d7-pchsl\" (UID: \"1dd5a364-2f28-4e8b-831c-08ed09984745\") " pod="openstack/dnsmasq-dns-5c79d794d7-pchsl" Feb 27 16:29:01 crc kubenswrapper[4830]: I0227 16:29:01.094466 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sxb9v\" (UniqueName: \"kubernetes.io/projected/1dd5a364-2f28-4e8b-831c-08ed09984745-kube-api-access-sxb9v\") pod \"dnsmasq-dns-5c79d794d7-pchsl\" (UID: \"1dd5a364-2f28-4e8b-831c-08ed09984745\") " pod="openstack/dnsmasq-dns-5c79d794d7-pchsl" Feb 27 16:29:01 crc kubenswrapper[4830]: I0227 16:29:01.094599 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1dd5a364-2f28-4e8b-831c-08ed09984745-config\") pod \"dnsmasq-dns-5c79d794d7-pchsl\" (UID: 
\"1dd5a364-2f28-4e8b-831c-08ed09984745\") " pod="openstack/dnsmasq-dns-5c79d794d7-pchsl" Feb 27 16:29:01 crc kubenswrapper[4830]: I0227 16:29:01.094655 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1dd5a364-2f28-4e8b-831c-08ed09984745-ovsdbserver-nb\") pod \"dnsmasq-dns-5c79d794d7-pchsl\" (UID: \"1dd5a364-2f28-4e8b-831c-08ed09984745\") " pod="openstack/dnsmasq-dns-5c79d794d7-pchsl" Feb 27 16:29:01 crc kubenswrapper[4830]: I0227 16:29:01.094704 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1dd5a364-2f28-4e8b-831c-08ed09984745-ovsdbserver-sb\") pod \"dnsmasq-dns-5c79d794d7-pchsl\" (UID: \"1dd5a364-2f28-4e8b-831c-08ed09984745\") " pod="openstack/dnsmasq-dns-5c79d794d7-pchsl" Feb 27 16:29:01 crc kubenswrapper[4830]: I0227 16:29:01.094764 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1dd5a364-2f28-4e8b-831c-08ed09984745-dns-svc\") pod \"dnsmasq-dns-5c79d794d7-pchsl\" (UID: \"1dd5a364-2f28-4e8b-831c-08ed09984745\") " pod="openstack/dnsmasq-dns-5c79d794d7-pchsl" Feb 27 16:29:01 crc kubenswrapper[4830]: I0227 16:29:01.096266 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1dd5a364-2f28-4e8b-831c-08ed09984745-dns-svc\") pod \"dnsmasq-dns-5c79d794d7-pchsl\" (UID: \"1dd5a364-2f28-4e8b-831c-08ed09984745\") " pod="openstack/dnsmasq-dns-5c79d794d7-pchsl" Feb 27 16:29:01 crc kubenswrapper[4830]: I0227 16:29:01.096564 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1dd5a364-2f28-4e8b-831c-08ed09984745-ovsdbserver-sb\") pod \"dnsmasq-dns-5c79d794d7-pchsl\" (UID: \"1dd5a364-2f28-4e8b-831c-08ed09984745\") " pod="openstack/dnsmasq-dns-5c79d794d7-pchsl" Feb 27 
16:29:01 crc kubenswrapper[4830]: I0227 16:29:01.096807 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1dd5a364-2f28-4e8b-831c-08ed09984745-config\") pod \"dnsmasq-dns-5c79d794d7-pchsl\" (UID: \"1dd5a364-2f28-4e8b-831c-08ed09984745\") " pod="openstack/dnsmasq-dns-5c79d794d7-pchsl" Feb 27 16:29:01 crc kubenswrapper[4830]: I0227 16:29:01.096938 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1dd5a364-2f28-4e8b-831c-08ed09984745-ovsdbserver-nb\") pod \"dnsmasq-dns-5c79d794d7-pchsl\" (UID: \"1dd5a364-2f28-4e8b-831c-08ed09984745\") " pod="openstack/dnsmasq-dns-5c79d794d7-pchsl" Feb 27 16:29:01 crc kubenswrapper[4830]: I0227 16:29:01.097348 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1dd5a364-2f28-4e8b-831c-08ed09984745-dns-swift-storage-0\") pod \"dnsmasq-dns-5c79d794d7-pchsl\" (UID: \"1dd5a364-2f28-4e8b-831c-08ed09984745\") " pod="openstack/dnsmasq-dns-5c79d794d7-pchsl" Feb 27 16:29:01 crc kubenswrapper[4830]: I0227 16:29:01.117457 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sxb9v\" (UniqueName: \"kubernetes.io/projected/1dd5a364-2f28-4e8b-831c-08ed09984745-kube-api-access-sxb9v\") pod \"dnsmasq-dns-5c79d794d7-pchsl\" (UID: \"1dd5a364-2f28-4e8b-831c-08ed09984745\") " pod="openstack/dnsmasq-dns-5c79d794d7-pchsl" Feb 27 16:29:01 crc kubenswrapper[4830]: I0227 16:29:01.148818 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-pchsl" Feb 27 16:29:01 crc kubenswrapper[4830]: I0227 16:29:01.508489 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-vd8js" event={"ID":"624a4c06-2a5c-480c-89f1-addc261412f0","Type":"ContainerStarted","Data":"3d21eb9349f83e5f5678001a64d350ad6000cb3e4a2539605409baceb3f4194e"} Feb 27 16:29:01 crc kubenswrapper[4830]: I0227 16:29:01.508543 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-vd8js" event={"ID":"624a4c06-2a5c-480c-89f1-addc261412f0","Type":"ContainerStarted","Data":"25c7f1ebf6edcd80f7a9b90d8e5c0d5c937fa66717e8dd08baa8c7c466ddea54"} Feb 27 16:29:01 crc kubenswrapper[4830]: I0227 16:29:01.534354 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-vd8js" podStartSLOduration=2.534331563 podStartE2EDuration="2.534331563s" podCreationTimestamp="2026-02-27 16:28:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:29:01.531059363 +0000 UTC m=+1337.620331856" watchObservedRunningTime="2026-02-27 16:29:01.534331563 +0000 UTC m=+1337.623604066" Feb 27 16:29:01 crc kubenswrapper[4830]: I0227 16:29:01.652161 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-pchsl"] Feb 27 16:29:01 crc kubenswrapper[4830]: W0227 16:29:01.666084 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1dd5a364_2f28_4e8b_831c_08ed09984745.slice/crio-7667a4a15aedca8225515d5ddc41f7663a718322ed85628b4048831fac00fc13 WatchSource:0}: Error finding container 7667a4a15aedca8225515d5ddc41f7663a718322ed85628b4048831fac00fc13: Status 404 returned error can't find the container with id 7667a4a15aedca8225515d5ddc41f7663a718322ed85628b4048831fac00fc13 Feb 27 
16:29:02 crc kubenswrapper[4830]: I0227 16:29:02.524048 4830 generic.go:334] "Generic (PLEG): container finished" podID="1dd5a364-2f28-4e8b-831c-08ed09984745" containerID="83468be4a573a535ebb115952f4765ad160eb3dfbc1efdfc8c056f4eb57a9f74" exitCode=0 Feb 27 16:29:02 crc kubenswrapper[4830]: I0227 16:29:02.524089 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-pchsl" event={"ID":"1dd5a364-2f28-4e8b-831c-08ed09984745","Type":"ContainerDied","Data":"83468be4a573a535ebb115952f4765ad160eb3dfbc1efdfc8c056f4eb57a9f74"} Feb 27 16:29:02 crc kubenswrapper[4830]: I0227 16:29:02.524415 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-pchsl" event={"ID":"1dd5a364-2f28-4e8b-831c-08ed09984745","Type":"ContainerStarted","Data":"7667a4a15aedca8225515d5ddc41f7663a718322ed85628b4048831fac00fc13"} Feb 27 16:29:02 crc kubenswrapper[4830]: I0227 16:29:02.526882 4830 generic.go:334] "Generic (PLEG): container finished" podID="624a4c06-2a5c-480c-89f1-addc261412f0" containerID="3d21eb9349f83e5f5678001a64d350ad6000cb3e4a2539605409baceb3f4194e" exitCode=0 Feb 27 16:29:02 crc kubenswrapper[4830]: I0227 16:29:02.526932 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-vd8js" event={"ID":"624a4c06-2a5c-480c-89f1-addc261412f0","Type":"ContainerDied","Data":"3d21eb9349f83e5f5678001a64d350ad6000cb3e4a2539605409baceb3f4194e"} Feb 27 16:29:03 crc kubenswrapper[4830]: I0227 16:29:03.160405 4830 patch_prober.go:28] interesting pod/machine-config-daemon-2tv5v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 16:29:03 crc kubenswrapper[4830]: I0227 16:29:03.160784 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" 
podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 16:29:03 crc kubenswrapper[4830]: I0227 16:29:03.564550 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-pchsl" event={"ID":"1dd5a364-2f28-4e8b-831c-08ed09984745","Type":"ContainerStarted","Data":"15a23e14b83d11b94ee3dc1d1a64b6c64f14f01947565c6e8dd5152c025f9fa1"} Feb 27 16:29:03 crc kubenswrapper[4830]: I0227 16:29:03.564616 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c79d794d7-pchsl" Feb 27 16:29:03 crc kubenswrapper[4830]: I0227 16:29:03.940280 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-vd8js" Feb 27 16:29:03 crc kubenswrapper[4830]: I0227 16:29:03.960621 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c79d794d7-pchsl" podStartSLOduration=3.960598177 podStartE2EDuration="3.960598177s" podCreationTimestamp="2026-02-27 16:29:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:29:03.615685065 +0000 UTC m=+1339.704957528" watchObservedRunningTime="2026-02-27 16:29:03.960598177 +0000 UTC m=+1340.049870640" Feb 27 16:29:04 crc kubenswrapper[4830]: I0227 16:29:04.086084 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p8n4w\" (UniqueName: \"kubernetes.io/projected/624a4c06-2a5c-480c-89f1-addc261412f0-kube-api-access-p8n4w\") pod \"624a4c06-2a5c-480c-89f1-addc261412f0\" (UID: \"624a4c06-2a5c-480c-89f1-addc261412f0\") " Feb 27 16:29:04 crc kubenswrapper[4830]: I0227 16:29:04.086211 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" 
(UniqueName: \"kubernetes.io/configmap/624a4c06-2a5c-480c-89f1-addc261412f0-operator-scripts\") pod \"624a4c06-2a5c-480c-89f1-addc261412f0\" (UID: \"624a4c06-2a5c-480c-89f1-addc261412f0\") " Feb 27 16:29:04 crc kubenswrapper[4830]: I0227 16:29:04.087271 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/624a4c06-2a5c-480c-89f1-addc261412f0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "624a4c06-2a5c-480c-89f1-addc261412f0" (UID: "624a4c06-2a5c-480c-89f1-addc261412f0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:29:04 crc kubenswrapper[4830]: I0227 16:29:04.092033 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/624a4c06-2a5c-480c-89f1-addc261412f0-kube-api-access-p8n4w" (OuterVolumeSpecName: "kube-api-access-p8n4w") pod "624a4c06-2a5c-480c-89f1-addc261412f0" (UID: "624a4c06-2a5c-480c-89f1-addc261412f0"). InnerVolumeSpecName "kube-api-access-p8n4w". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:29:04 crc kubenswrapper[4830]: I0227 16:29:04.188130 4830 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/624a4c06-2a5c-480c-89f1-addc261412f0-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 16:29:04 crc kubenswrapper[4830]: I0227 16:29:04.188553 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p8n4w\" (UniqueName: \"kubernetes.io/projected/624a4c06-2a5c-480c-89f1-addc261412f0-kube-api-access-p8n4w\") on node \"crc\" DevicePath \"\"" Feb 27 16:29:04 crc kubenswrapper[4830]: I0227 16:29:04.577443 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-vd8js" event={"ID":"624a4c06-2a5c-480c-89f1-addc261412f0","Type":"ContainerDied","Data":"25c7f1ebf6edcd80f7a9b90d8e5c0d5c937fa66717e8dd08baa8c7c466ddea54"} Feb 27 16:29:04 crc kubenswrapper[4830]: I0227 16:29:04.577494 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="25c7f1ebf6edcd80f7a9b90d8e5c0d5c937fa66717e8dd08baa8c7c466ddea54" Feb 27 16:29:04 crc kubenswrapper[4830]: I0227 16:29:04.577505 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-vd8js" Feb 27 16:29:04 crc kubenswrapper[4830]: I0227 16:29:04.580469 4830 generic.go:334] "Generic (PLEG): container finished" podID="034a69b5-6540-4b46-b0d5-55098d2f6467" containerID="2dd0daef7553edc948d313e884252b38bd2ca52a2e86007a5c75ebe4c3a88a04" exitCode=0 Feb 27 16:29:04 crc kubenswrapper[4830]: I0227 16:29:04.580582 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-jhwfg" event={"ID":"034a69b5-6540-4b46-b0d5-55098d2f6467","Type":"ContainerDied","Data":"2dd0daef7553edc948d313e884252b38bd2ca52a2e86007a5c75ebe4c3a88a04"} Feb 27 16:29:05 crc kubenswrapper[4830]: I0227 16:29:05.593912 4830 generic.go:334] "Generic (PLEG): container finished" podID="c44ae554-632d-4347-ac9c-ce0c467ddce7" containerID="5f5402616bb7611817535016b614d7887cf7031895a3cb81400c32e205dcc9d4" exitCode=0 Feb 27 16:29:05 crc kubenswrapper[4830]: I0227 16:29:05.594054 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-w22r8" event={"ID":"c44ae554-632d-4347-ac9c-ce0c467ddce7","Type":"ContainerDied","Data":"5f5402616bb7611817535016b614d7887cf7031895a3cb81400c32e205dcc9d4"} Feb 27 16:29:06 crc kubenswrapper[4830]: I0227 16:29:06.436567 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-jhwfg" Feb 27 16:29:06 crc kubenswrapper[4830]: I0227 16:29:06.529128 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/034a69b5-6540-4b46-b0d5-55098d2f6467-combined-ca-bundle\") pod \"034a69b5-6540-4b46-b0d5-55098d2f6467\" (UID: \"034a69b5-6540-4b46-b0d5-55098d2f6467\") " Feb 27 16:29:06 crc kubenswrapper[4830]: I0227 16:29:06.529753 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-khdl9\" (UniqueName: \"kubernetes.io/projected/034a69b5-6540-4b46-b0d5-55098d2f6467-kube-api-access-khdl9\") pod \"034a69b5-6540-4b46-b0d5-55098d2f6467\" (UID: \"034a69b5-6540-4b46-b0d5-55098d2f6467\") " Feb 27 16:29:06 crc kubenswrapper[4830]: I0227 16:29:06.529854 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/034a69b5-6540-4b46-b0d5-55098d2f6467-config-data\") pod \"034a69b5-6540-4b46-b0d5-55098d2f6467\" (UID: \"034a69b5-6540-4b46-b0d5-55098d2f6467\") " Feb 27 16:29:06 crc kubenswrapper[4830]: I0227 16:29:06.529901 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/034a69b5-6540-4b46-b0d5-55098d2f6467-db-sync-config-data\") pod \"034a69b5-6540-4b46-b0d5-55098d2f6467\" (UID: \"034a69b5-6540-4b46-b0d5-55098d2f6467\") " Feb 27 16:29:06 crc kubenswrapper[4830]: I0227 16:29:06.536703 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/034a69b5-6540-4b46-b0d5-55098d2f6467-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "034a69b5-6540-4b46-b0d5-55098d2f6467" (UID: "034a69b5-6540-4b46-b0d5-55098d2f6467"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:29:06 crc kubenswrapper[4830]: I0227 16:29:06.539189 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/034a69b5-6540-4b46-b0d5-55098d2f6467-kube-api-access-khdl9" (OuterVolumeSpecName: "kube-api-access-khdl9") pod "034a69b5-6540-4b46-b0d5-55098d2f6467" (UID: "034a69b5-6540-4b46-b0d5-55098d2f6467"). InnerVolumeSpecName "kube-api-access-khdl9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:29:06 crc kubenswrapper[4830]: I0227 16:29:06.570889 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/034a69b5-6540-4b46-b0d5-55098d2f6467-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "034a69b5-6540-4b46-b0d5-55098d2f6467" (UID: "034a69b5-6540-4b46-b0d5-55098d2f6467"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:29:06 crc kubenswrapper[4830]: I0227 16:29:06.584878 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/034a69b5-6540-4b46-b0d5-55098d2f6467-config-data" (OuterVolumeSpecName: "config-data") pod "034a69b5-6540-4b46-b0d5-55098d2f6467" (UID: "034a69b5-6540-4b46-b0d5-55098d2f6467"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:29:06 crc kubenswrapper[4830]: I0227 16:29:06.606108 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-jhwfg" event={"ID":"034a69b5-6540-4b46-b0d5-55098d2f6467","Type":"ContainerDied","Data":"f1d0d41e8d36e7c156813c4bc49762c505ab5fd297e882b232c71c2dc440ea7f"} Feb 27 16:29:06 crc kubenswrapper[4830]: I0227 16:29:06.606154 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f1d0d41e8d36e7c156813c4bc49762c505ab5fd297e882b232c71c2dc440ea7f" Feb 27 16:29:06 crc kubenswrapper[4830]: I0227 16:29:06.606157 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-jhwfg" Feb 27 16:29:06 crc kubenswrapper[4830]: I0227 16:29:06.632584 4830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/034a69b5-6540-4b46-b0d5-55098d2f6467-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 16:29:06 crc kubenswrapper[4830]: I0227 16:29:06.632631 4830 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/034a69b5-6540-4b46-b0d5-55098d2f6467-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 16:29:06 crc kubenswrapper[4830]: I0227 16:29:06.632651 4830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/034a69b5-6540-4b46-b0d5-55098d2f6467-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 16:29:06 crc kubenswrapper[4830]: I0227 16:29:06.632671 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-khdl9\" (UniqueName: \"kubernetes.io/projected/034a69b5-6540-4b46-b0d5-55098d2f6467-kube-api-access-khdl9\") on node \"crc\" DevicePath \"\"" Feb 27 16:29:06 crc kubenswrapper[4830]: I0227 16:29:06.937233 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-w22r8" Feb 27 16:29:07 crc kubenswrapper[4830]: I0227 16:29:07.041068 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5rcvj\" (UniqueName: \"kubernetes.io/projected/c44ae554-632d-4347-ac9c-ce0c467ddce7-kube-api-access-5rcvj\") pod \"c44ae554-632d-4347-ac9c-ce0c467ddce7\" (UID: \"c44ae554-632d-4347-ac9c-ce0c467ddce7\") " Feb 27 16:29:07 crc kubenswrapper[4830]: I0227 16:29:07.041202 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c44ae554-632d-4347-ac9c-ce0c467ddce7-config-data\") pod \"c44ae554-632d-4347-ac9c-ce0c467ddce7\" (UID: \"c44ae554-632d-4347-ac9c-ce0c467ddce7\") " Feb 27 16:29:07 crc kubenswrapper[4830]: I0227 16:29:07.041918 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c44ae554-632d-4347-ac9c-ce0c467ddce7-combined-ca-bundle\") pod \"c44ae554-632d-4347-ac9c-ce0c467ddce7\" (UID: \"c44ae554-632d-4347-ac9c-ce0c467ddce7\") " Feb 27 16:29:07 crc kubenswrapper[4830]: I0227 16:29:07.052190 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c44ae554-632d-4347-ac9c-ce0c467ddce7-kube-api-access-5rcvj" (OuterVolumeSpecName: "kube-api-access-5rcvj") pod "c44ae554-632d-4347-ac9c-ce0c467ddce7" (UID: "c44ae554-632d-4347-ac9c-ce0c467ddce7"). InnerVolumeSpecName "kube-api-access-5rcvj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:29:07 crc kubenswrapper[4830]: I0227 16:29:07.056823 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-pchsl"] Feb 27 16:29:07 crc kubenswrapper[4830]: I0227 16:29:07.057053 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c79d794d7-pchsl" podUID="1dd5a364-2f28-4e8b-831c-08ed09984745" containerName="dnsmasq-dns" containerID="cri-o://15a23e14b83d11b94ee3dc1d1a64b6c64f14f01947565c6e8dd5152c025f9fa1" gracePeriod=10 Feb 27 16:29:07 crc kubenswrapper[4830]: I0227 16:29:07.077113 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5c79d794d7-pchsl" Feb 27 16:29:07 crc kubenswrapper[4830]: I0227 16:29:07.143985 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5rcvj\" (UniqueName: \"kubernetes.io/projected/c44ae554-632d-4347-ac9c-ce0c467ddce7-kube-api-access-5rcvj\") on node \"crc\" DevicePath \"\"" Feb 27 16:29:07 crc kubenswrapper[4830]: I0227 16:29:07.174007 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-q9f2f"] Feb 27 16:29:07 crc kubenswrapper[4830]: E0227 16:29:07.174377 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="624a4c06-2a5c-480c-89f1-addc261412f0" containerName="mariadb-account-create-update" Feb 27 16:29:07 crc kubenswrapper[4830]: I0227 16:29:07.174389 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="624a4c06-2a5c-480c-89f1-addc261412f0" containerName="mariadb-account-create-update" Feb 27 16:29:07 crc kubenswrapper[4830]: E0227 16:29:07.174399 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c44ae554-632d-4347-ac9c-ce0c467ddce7" containerName="keystone-db-sync" Feb 27 16:29:07 crc kubenswrapper[4830]: I0227 16:29:07.174405 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="c44ae554-632d-4347-ac9c-ce0c467ddce7" 
containerName="keystone-db-sync" Feb 27 16:29:07 crc kubenswrapper[4830]: E0227 16:29:07.174436 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="034a69b5-6540-4b46-b0d5-55098d2f6467" containerName="glance-db-sync" Feb 27 16:29:07 crc kubenswrapper[4830]: I0227 16:29:07.174442 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="034a69b5-6540-4b46-b0d5-55098d2f6467" containerName="glance-db-sync" Feb 27 16:29:07 crc kubenswrapper[4830]: I0227 16:29:07.174586 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="034a69b5-6540-4b46-b0d5-55098d2f6467" containerName="glance-db-sync" Feb 27 16:29:07 crc kubenswrapper[4830]: I0227 16:29:07.174597 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="624a4c06-2a5c-480c-89f1-addc261412f0" containerName="mariadb-account-create-update" Feb 27 16:29:07 crc kubenswrapper[4830]: I0227 16:29:07.174607 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="c44ae554-632d-4347-ac9c-ce0c467ddce7" containerName="keystone-db-sync" Feb 27 16:29:07 crc kubenswrapper[4830]: I0227 16:29:07.175459 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f59b8f679-q9f2f" Feb 27 16:29:07 crc kubenswrapper[4830]: I0227 16:29:07.187206 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c44ae554-632d-4347-ac9c-ce0c467ddce7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c44ae554-632d-4347-ac9c-ce0c467ddce7" (UID: "c44ae554-632d-4347-ac9c-ce0c467ddce7"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:29:07 crc kubenswrapper[4830]: I0227 16:29:07.225369 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c44ae554-632d-4347-ac9c-ce0c467ddce7-config-data" (OuterVolumeSpecName: "config-data") pod "c44ae554-632d-4347-ac9c-ce0c467ddce7" (UID: "c44ae554-632d-4347-ac9c-ce0c467ddce7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:29:07 crc kubenswrapper[4830]: I0227 16:29:07.245903 4830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c44ae554-632d-4347-ac9c-ce0c467ddce7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 16:29:07 crc kubenswrapper[4830]: I0227 16:29:07.245936 4830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c44ae554-632d-4347-ac9c-ce0c467ddce7-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 16:29:07 crc kubenswrapper[4830]: I0227 16:29:07.254080 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-q9f2f"] Feb 27 16:29:07 crc kubenswrapper[4830]: I0227 16:29:07.347203 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9d40c18b-0e28-47d0-8626-7f544a9cd711-ovsdbserver-nb\") pod \"dnsmasq-dns-5f59b8f679-q9f2f\" (UID: \"9d40c18b-0e28-47d0-8626-7f544a9cd711\") " pod="openstack/dnsmasq-dns-5f59b8f679-q9f2f" Feb 27 16:29:07 crc kubenswrapper[4830]: I0227 16:29:07.347260 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9d40c18b-0e28-47d0-8626-7f544a9cd711-dns-svc\") pod \"dnsmasq-dns-5f59b8f679-q9f2f\" (UID: \"9d40c18b-0e28-47d0-8626-7f544a9cd711\") " pod="openstack/dnsmasq-dns-5f59b8f679-q9f2f" Feb 27 16:29:07 crc 
kubenswrapper[4830]: I0227 16:29:07.347290 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnptc\" (UniqueName: \"kubernetes.io/projected/9d40c18b-0e28-47d0-8626-7f544a9cd711-kube-api-access-nnptc\") pod \"dnsmasq-dns-5f59b8f679-q9f2f\" (UID: \"9d40c18b-0e28-47d0-8626-7f544a9cd711\") " pod="openstack/dnsmasq-dns-5f59b8f679-q9f2f" Feb 27 16:29:07 crc kubenswrapper[4830]: I0227 16:29:07.347502 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9d40c18b-0e28-47d0-8626-7f544a9cd711-ovsdbserver-sb\") pod \"dnsmasq-dns-5f59b8f679-q9f2f\" (UID: \"9d40c18b-0e28-47d0-8626-7f544a9cd711\") " pod="openstack/dnsmasq-dns-5f59b8f679-q9f2f" Feb 27 16:29:07 crc kubenswrapper[4830]: I0227 16:29:07.347562 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d40c18b-0e28-47d0-8626-7f544a9cd711-config\") pod \"dnsmasq-dns-5f59b8f679-q9f2f\" (UID: \"9d40c18b-0e28-47d0-8626-7f544a9cd711\") " pod="openstack/dnsmasq-dns-5f59b8f679-q9f2f" Feb 27 16:29:07 crc kubenswrapper[4830]: I0227 16:29:07.347673 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9d40c18b-0e28-47d0-8626-7f544a9cd711-dns-swift-storage-0\") pod \"dnsmasq-dns-5f59b8f679-q9f2f\" (UID: \"9d40c18b-0e28-47d0-8626-7f544a9cd711\") " pod="openstack/dnsmasq-dns-5f59b8f679-q9f2f" Feb 27 16:29:07 crc kubenswrapper[4830]: I0227 16:29:07.449624 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9d40c18b-0e28-47d0-8626-7f544a9cd711-ovsdbserver-sb\") pod \"dnsmasq-dns-5f59b8f679-q9f2f\" (UID: \"9d40c18b-0e28-47d0-8626-7f544a9cd711\") " 
pod="openstack/dnsmasq-dns-5f59b8f679-q9f2f" Feb 27 16:29:07 crc kubenswrapper[4830]: I0227 16:29:07.449699 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d40c18b-0e28-47d0-8626-7f544a9cd711-config\") pod \"dnsmasq-dns-5f59b8f679-q9f2f\" (UID: \"9d40c18b-0e28-47d0-8626-7f544a9cd711\") " pod="openstack/dnsmasq-dns-5f59b8f679-q9f2f" Feb 27 16:29:07 crc kubenswrapper[4830]: I0227 16:29:07.449777 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9d40c18b-0e28-47d0-8626-7f544a9cd711-dns-swift-storage-0\") pod \"dnsmasq-dns-5f59b8f679-q9f2f\" (UID: \"9d40c18b-0e28-47d0-8626-7f544a9cd711\") " pod="openstack/dnsmasq-dns-5f59b8f679-q9f2f" Feb 27 16:29:07 crc kubenswrapper[4830]: I0227 16:29:07.449922 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9d40c18b-0e28-47d0-8626-7f544a9cd711-ovsdbserver-nb\") pod \"dnsmasq-dns-5f59b8f679-q9f2f\" (UID: \"9d40c18b-0e28-47d0-8626-7f544a9cd711\") " pod="openstack/dnsmasq-dns-5f59b8f679-q9f2f" Feb 27 16:29:07 crc kubenswrapper[4830]: I0227 16:29:07.450021 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9d40c18b-0e28-47d0-8626-7f544a9cd711-dns-svc\") pod \"dnsmasq-dns-5f59b8f679-q9f2f\" (UID: \"9d40c18b-0e28-47d0-8626-7f544a9cd711\") " pod="openstack/dnsmasq-dns-5f59b8f679-q9f2f" Feb 27 16:29:07 crc kubenswrapper[4830]: I0227 16:29:07.450057 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nnptc\" (UniqueName: \"kubernetes.io/projected/9d40c18b-0e28-47d0-8626-7f544a9cd711-kube-api-access-nnptc\") pod \"dnsmasq-dns-5f59b8f679-q9f2f\" (UID: \"9d40c18b-0e28-47d0-8626-7f544a9cd711\") " pod="openstack/dnsmasq-dns-5f59b8f679-q9f2f" Feb 27 16:29:07 
crc kubenswrapper[4830]: I0227 16:29:07.450810 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9d40c18b-0e28-47d0-8626-7f544a9cd711-ovsdbserver-sb\") pod \"dnsmasq-dns-5f59b8f679-q9f2f\" (UID: \"9d40c18b-0e28-47d0-8626-7f544a9cd711\") " pod="openstack/dnsmasq-dns-5f59b8f679-q9f2f" Feb 27 16:29:07 crc kubenswrapper[4830]: I0227 16:29:07.450926 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9d40c18b-0e28-47d0-8626-7f544a9cd711-dns-svc\") pod \"dnsmasq-dns-5f59b8f679-q9f2f\" (UID: \"9d40c18b-0e28-47d0-8626-7f544a9cd711\") " pod="openstack/dnsmasq-dns-5f59b8f679-q9f2f" Feb 27 16:29:07 crc kubenswrapper[4830]: I0227 16:29:07.450968 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9d40c18b-0e28-47d0-8626-7f544a9cd711-ovsdbserver-nb\") pod \"dnsmasq-dns-5f59b8f679-q9f2f\" (UID: \"9d40c18b-0e28-47d0-8626-7f544a9cd711\") " pod="openstack/dnsmasq-dns-5f59b8f679-q9f2f" Feb 27 16:29:07 crc kubenswrapper[4830]: I0227 16:29:07.451480 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9d40c18b-0e28-47d0-8626-7f544a9cd711-dns-swift-storage-0\") pod \"dnsmasq-dns-5f59b8f679-q9f2f\" (UID: \"9d40c18b-0e28-47d0-8626-7f544a9cd711\") " pod="openstack/dnsmasq-dns-5f59b8f679-q9f2f" Feb 27 16:29:07 crc kubenswrapper[4830]: I0227 16:29:07.451602 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d40c18b-0e28-47d0-8626-7f544a9cd711-config\") pod \"dnsmasq-dns-5f59b8f679-q9f2f\" (UID: \"9d40c18b-0e28-47d0-8626-7f544a9cd711\") " pod="openstack/dnsmasq-dns-5f59b8f679-q9f2f" Feb 27 16:29:07 crc kubenswrapper[4830]: I0227 16:29:07.466380 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-nnptc\" (UniqueName: \"kubernetes.io/projected/9d40c18b-0e28-47d0-8626-7f544a9cd711-kube-api-access-nnptc\") pod \"dnsmasq-dns-5f59b8f679-q9f2f\" (UID: \"9d40c18b-0e28-47d0-8626-7f544a9cd711\") " pod="openstack/dnsmasq-dns-5f59b8f679-q9f2f" Feb 27 16:29:07 crc kubenswrapper[4830]: I0227 16:29:07.527018 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f59b8f679-q9f2f" Feb 27 16:29:07 crc kubenswrapper[4830]: I0227 16:29:07.626013 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-w22r8" event={"ID":"c44ae554-632d-4347-ac9c-ce0c467ddce7","Type":"ContainerDied","Data":"65f3155ec69117ad776a3c494a001ad23fdf46b8514ff6c1248fade6ba701d99"} Feb 27 16:29:07 crc kubenswrapper[4830]: I0227 16:29:07.626247 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="65f3155ec69117ad776a3c494a001ad23fdf46b8514ff6c1248fade6ba701d99" Feb 27 16:29:07 crc kubenswrapper[4830]: I0227 16:29:07.626202 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-w22r8" Feb 27 16:29:07 crc kubenswrapper[4830]: I0227 16:29:07.628425 4830 generic.go:334] "Generic (PLEG): container finished" podID="1dd5a364-2f28-4e8b-831c-08ed09984745" containerID="15a23e14b83d11b94ee3dc1d1a64b6c64f14f01947565c6e8dd5152c025f9fa1" exitCode=0 Feb 27 16:29:07 crc kubenswrapper[4830]: I0227 16:29:07.628455 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-pchsl" event={"ID":"1dd5a364-2f28-4e8b-831c-08ed09984745","Type":"ContainerDied","Data":"15a23e14b83d11b94ee3dc1d1a64b6c64f14f01947565c6e8dd5152c025f9fa1"} Feb 27 16:29:07 crc kubenswrapper[4830]: I0227 16:29:07.781618 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-q9f2f"] Feb 27 16:29:07 crc kubenswrapper[4830]: I0227 16:29:07.815746 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-7xpjt"] Feb 27 16:29:07 crc kubenswrapper[4830]: I0227 16:29:07.818326 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bbf5cc879-7xpjt" Feb 27 16:29:07 crc kubenswrapper[4830]: I0227 16:29:07.832021 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-jk57b"] Feb 27 16:29:07 crc kubenswrapper[4830]: I0227 16:29:07.840131 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-jk57b" Feb 27 16:29:07 crc kubenswrapper[4830]: I0227 16:29:07.843267 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 27 16:29:07 crc kubenswrapper[4830]: I0227 16:29:07.843528 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-zm2zz" Feb 27 16:29:07 crc kubenswrapper[4830]: I0227 16:29:07.843855 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 27 16:29:07 crc kubenswrapper[4830]: I0227 16:29:07.843969 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 27 16:29:07 crc kubenswrapper[4830]: I0227 16:29:07.844077 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 27 16:29:07 crc kubenswrapper[4830]: I0227 16:29:07.852470 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-7xpjt"] Feb 27 16:29:07 crc kubenswrapper[4830]: I0227 16:29:07.885014 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-jk57b"] Feb 27 16:29:07 crc kubenswrapper[4830]: I0227 16:29:07.957499 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33ab5b85-8198-4e45-89ad-c1c08e39fe20-scripts\") pod \"keystone-bootstrap-jk57b\" (UID: \"33ab5b85-8198-4e45-89ad-c1c08e39fe20\") " pod="openstack/keystone-bootstrap-jk57b" Feb 27 16:29:07 crc kubenswrapper[4830]: I0227 16:29:07.957548 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bf3ec800-e09f-4fb8-8a1b-55c6bf08dc86-dns-svc\") pod \"dnsmasq-dns-bbf5cc879-7xpjt\" (UID: \"bf3ec800-e09f-4fb8-8a1b-55c6bf08dc86\") " pod="openstack/dnsmasq-dns-bbf5cc879-7xpjt" Feb 27 16:29:07 crc 
kubenswrapper[4830]: I0227 16:29:07.957568 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bf3ec800-e09f-4fb8-8a1b-55c6bf08dc86-ovsdbserver-sb\") pod \"dnsmasq-dns-bbf5cc879-7xpjt\" (UID: \"bf3ec800-e09f-4fb8-8a1b-55c6bf08dc86\") " pod="openstack/dnsmasq-dns-bbf5cc879-7xpjt" Feb 27 16:29:07 crc kubenswrapper[4830]: I0227 16:29:07.957612 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bf3ec800-e09f-4fb8-8a1b-55c6bf08dc86-dns-swift-storage-0\") pod \"dnsmasq-dns-bbf5cc879-7xpjt\" (UID: \"bf3ec800-e09f-4fb8-8a1b-55c6bf08dc86\") " pod="openstack/dnsmasq-dns-bbf5cc879-7xpjt" Feb 27 16:29:07 crc kubenswrapper[4830]: I0227 16:29:07.957639 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf3ec800-e09f-4fb8-8a1b-55c6bf08dc86-config\") pod \"dnsmasq-dns-bbf5cc879-7xpjt\" (UID: \"bf3ec800-e09f-4fb8-8a1b-55c6bf08dc86\") " pod="openstack/dnsmasq-dns-bbf5cc879-7xpjt" Feb 27 16:29:07 crc kubenswrapper[4830]: I0227 16:29:07.957662 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/33ab5b85-8198-4e45-89ad-c1c08e39fe20-credential-keys\") pod \"keystone-bootstrap-jk57b\" (UID: \"33ab5b85-8198-4e45-89ad-c1c08e39fe20\") " pod="openstack/keystone-bootstrap-jk57b" Feb 27 16:29:07 crc kubenswrapper[4830]: I0227 16:29:07.957680 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qffh4\" (UniqueName: \"kubernetes.io/projected/bf3ec800-e09f-4fb8-8a1b-55c6bf08dc86-kube-api-access-qffh4\") pod \"dnsmasq-dns-bbf5cc879-7xpjt\" (UID: \"bf3ec800-e09f-4fb8-8a1b-55c6bf08dc86\") " 
pod="openstack/dnsmasq-dns-bbf5cc879-7xpjt" Feb 27 16:29:07 crc kubenswrapper[4830]: I0227 16:29:07.957702 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33ab5b85-8198-4e45-89ad-c1c08e39fe20-config-data\") pod \"keystone-bootstrap-jk57b\" (UID: \"33ab5b85-8198-4e45-89ad-c1c08e39fe20\") " pod="openstack/keystone-bootstrap-jk57b" Feb 27 16:29:07 crc kubenswrapper[4830]: I0227 16:29:07.957722 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bf3ec800-e09f-4fb8-8a1b-55c6bf08dc86-ovsdbserver-nb\") pod \"dnsmasq-dns-bbf5cc879-7xpjt\" (UID: \"bf3ec800-e09f-4fb8-8a1b-55c6bf08dc86\") " pod="openstack/dnsmasq-dns-bbf5cc879-7xpjt" Feb 27 16:29:07 crc kubenswrapper[4830]: I0227 16:29:07.957739 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cb75x\" (UniqueName: \"kubernetes.io/projected/33ab5b85-8198-4e45-89ad-c1c08e39fe20-kube-api-access-cb75x\") pod \"keystone-bootstrap-jk57b\" (UID: \"33ab5b85-8198-4e45-89ad-c1c08e39fe20\") " pod="openstack/keystone-bootstrap-jk57b" Feb 27 16:29:07 crc kubenswrapper[4830]: I0227 16:29:07.957767 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33ab5b85-8198-4e45-89ad-c1c08e39fe20-combined-ca-bundle\") pod \"keystone-bootstrap-jk57b\" (UID: \"33ab5b85-8198-4e45-89ad-c1c08e39fe20\") " pod="openstack/keystone-bootstrap-jk57b" Feb 27 16:29:07 crc kubenswrapper[4830]: I0227 16:29:07.957784 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/33ab5b85-8198-4e45-89ad-c1c08e39fe20-fernet-keys\") pod \"keystone-bootstrap-jk57b\" (UID: 
\"33ab5b85-8198-4e45-89ad-c1c08e39fe20\") " pod="openstack/keystone-bootstrap-jk57b" Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.002300 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-q9f2f"] Feb 27 16:29:08 crc kubenswrapper[4830]: W0227 16:29:08.033106 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9d40c18b_0e28_47d0_8626_7f544a9cd711.slice/crio-98f9297290042cbff2892384775085fb115d773832475c4860274b3e0196dfb1 WatchSource:0}: Error finding container 98f9297290042cbff2892384775085fb115d773832475c4860274b3e0196dfb1: Status 404 returned error can't find the container with id 98f9297290042cbff2892384775085fb115d773832475c4860274b3e0196dfb1 Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.058816 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bf3ec800-e09f-4fb8-8a1b-55c6bf08dc86-dns-swift-storage-0\") pod \"dnsmasq-dns-bbf5cc879-7xpjt\" (UID: \"bf3ec800-e09f-4fb8-8a1b-55c6bf08dc86\") " pod="openstack/dnsmasq-dns-bbf5cc879-7xpjt" Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.058861 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf3ec800-e09f-4fb8-8a1b-55c6bf08dc86-config\") pod \"dnsmasq-dns-bbf5cc879-7xpjt\" (UID: \"bf3ec800-e09f-4fb8-8a1b-55c6bf08dc86\") " pod="openstack/dnsmasq-dns-bbf5cc879-7xpjt" Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.058894 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/33ab5b85-8198-4e45-89ad-c1c08e39fe20-credential-keys\") pod \"keystone-bootstrap-jk57b\" (UID: \"33ab5b85-8198-4e45-89ad-c1c08e39fe20\") " pod="openstack/keystone-bootstrap-jk57b" Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 
16:29:08.058916 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qffh4\" (UniqueName: \"kubernetes.io/projected/bf3ec800-e09f-4fb8-8a1b-55c6bf08dc86-kube-api-access-qffh4\") pod \"dnsmasq-dns-bbf5cc879-7xpjt\" (UID: \"bf3ec800-e09f-4fb8-8a1b-55c6bf08dc86\") " pod="openstack/dnsmasq-dns-bbf5cc879-7xpjt" Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.058938 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33ab5b85-8198-4e45-89ad-c1c08e39fe20-config-data\") pod \"keystone-bootstrap-jk57b\" (UID: \"33ab5b85-8198-4e45-89ad-c1c08e39fe20\") " pod="openstack/keystone-bootstrap-jk57b" Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.058975 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bf3ec800-e09f-4fb8-8a1b-55c6bf08dc86-ovsdbserver-nb\") pod \"dnsmasq-dns-bbf5cc879-7xpjt\" (UID: \"bf3ec800-e09f-4fb8-8a1b-55c6bf08dc86\") " pod="openstack/dnsmasq-dns-bbf5cc879-7xpjt" Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.058994 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cb75x\" (UniqueName: \"kubernetes.io/projected/33ab5b85-8198-4e45-89ad-c1c08e39fe20-kube-api-access-cb75x\") pod \"keystone-bootstrap-jk57b\" (UID: \"33ab5b85-8198-4e45-89ad-c1c08e39fe20\") " pod="openstack/keystone-bootstrap-jk57b" Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.059024 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33ab5b85-8198-4e45-89ad-c1c08e39fe20-combined-ca-bundle\") pod \"keystone-bootstrap-jk57b\" (UID: \"33ab5b85-8198-4e45-89ad-c1c08e39fe20\") " pod="openstack/keystone-bootstrap-jk57b" Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.059043 4830 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/33ab5b85-8198-4e45-89ad-c1c08e39fe20-fernet-keys\") pod \"keystone-bootstrap-jk57b\" (UID: \"33ab5b85-8198-4e45-89ad-c1c08e39fe20\") " pod="openstack/keystone-bootstrap-jk57b" Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.059090 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33ab5b85-8198-4e45-89ad-c1c08e39fe20-scripts\") pod \"keystone-bootstrap-jk57b\" (UID: \"33ab5b85-8198-4e45-89ad-c1c08e39fe20\") " pod="openstack/keystone-bootstrap-jk57b" Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.059110 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bf3ec800-e09f-4fb8-8a1b-55c6bf08dc86-dns-svc\") pod \"dnsmasq-dns-bbf5cc879-7xpjt\" (UID: \"bf3ec800-e09f-4fb8-8a1b-55c6bf08dc86\") " pod="openstack/dnsmasq-dns-bbf5cc879-7xpjt" Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.059126 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bf3ec800-e09f-4fb8-8a1b-55c6bf08dc86-ovsdbserver-sb\") pod \"dnsmasq-dns-bbf5cc879-7xpjt\" (UID: \"bf3ec800-e09f-4fb8-8a1b-55c6bf08dc86\") " pod="openstack/dnsmasq-dns-bbf5cc879-7xpjt" Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.059909 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bf3ec800-e09f-4fb8-8a1b-55c6bf08dc86-ovsdbserver-sb\") pod \"dnsmasq-dns-bbf5cc879-7xpjt\" (UID: \"bf3ec800-e09f-4fb8-8a1b-55c6bf08dc86\") " pod="openstack/dnsmasq-dns-bbf5cc879-7xpjt" Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.063217 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/bf3ec800-e09f-4fb8-8a1b-55c6bf08dc86-ovsdbserver-nb\") pod \"dnsmasq-dns-bbf5cc879-7xpjt\" (UID: \"bf3ec800-e09f-4fb8-8a1b-55c6bf08dc86\") " pod="openstack/dnsmasq-dns-bbf5cc879-7xpjt" Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.066034 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33ab5b85-8198-4e45-89ad-c1c08e39fe20-scripts\") pod \"keystone-bootstrap-jk57b\" (UID: \"33ab5b85-8198-4e45-89ad-c1c08e39fe20\") " pod="openstack/keystone-bootstrap-jk57b" Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.066499 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/33ab5b85-8198-4e45-89ad-c1c08e39fe20-credential-keys\") pod \"keystone-bootstrap-jk57b\" (UID: \"33ab5b85-8198-4e45-89ad-c1c08e39fe20\") " pod="openstack/keystone-bootstrap-jk57b" Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.066608 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bf3ec800-e09f-4fb8-8a1b-55c6bf08dc86-dns-svc\") pod \"dnsmasq-dns-bbf5cc879-7xpjt\" (UID: \"bf3ec800-e09f-4fb8-8a1b-55c6bf08dc86\") " pod="openstack/dnsmasq-dns-bbf5cc879-7xpjt" Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.067247 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf3ec800-e09f-4fb8-8a1b-55c6bf08dc86-config\") pod \"dnsmasq-dns-bbf5cc879-7xpjt\" (UID: \"bf3ec800-e09f-4fb8-8a1b-55c6bf08dc86\") " pod="openstack/dnsmasq-dns-bbf5cc879-7xpjt" Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.067484 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/33ab5b85-8198-4e45-89ad-c1c08e39fe20-fernet-keys\") pod \"keystone-bootstrap-jk57b\" (UID: \"33ab5b85-8198-4e45-89ad-c1c08e39fe20\") " 
pod="openstack/keystone-bootstrap-jk57b" Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.067642 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bf3ec800-e09f-4fb8-8a1b-55c6bf08dc86-dns-swift-storage-0\") pod \"dnsmasq-dns-bbf5cc879-7xpjt\" (UID: \"bf3ec800-e09f-4fb8-8a1b-55c6bf08dc86\") " pod="openstack/dnsmasq-dns-bbf5cc879-7xpjt" Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.072350 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33ab5b85-8198-4e45-89ad-c1c08e39fe20-combined-ca-bundle\") pod \"keystone-bootstrap-jk57b\" (UID: \"33ab5b85-8198-4e45-89ad-c1c08e39fe20\") " pod="openstack/keystone-bootstrap-jk57b" Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.096119 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33ab5b85-8198-4e45-89ad-c1c08e39fe20-config-data\") pod \"keystone-bootstrap-jk57b\" (UID: \"33ab5b85-8198-4e45-89ad-c1c08e39fe20\") " pod="openstack/keystone-bootstrap-jk57b" Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.104265 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qffh4\" (UniqueName: \"kubernetes.io/projected/bf3ec800-e09f-4fb8-8a1b-55c6bf08dc86-kube-api-access-qffh4\") pod \"dnsmasq-dns-bbf5cc879-7xpjt\" (UID: \"bf3ec800-e09f-4fb8-8a1b-55c6bf08dc86\") " pod="openstack/dnsmasq-dns-bbf5cc879-7xpjt" Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.107677 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cb75x\" (UniqueName: \"kubernetes.io/projected/33ab5b85-8198-4e45-89ad-c1c08e39fe20-kube-api-access-cb75x\") pod \"keystone-bootstrap-jk57b\" (UID: \"33ab5b85-8198-4e45-89ad-c1c08e39fe20\") " pod="openstack/keystone-bootstrap-jk57b" Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 
16:29:08.113014 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.132352 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.141860 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.146613 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bbf5cc879-7xpjt" Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.165007 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-jk57b" Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.165450 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.177015 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.187539 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-4d9ld"] Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.206379 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-4d9ld" Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.209794 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.209830 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-wlzvq" Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.210071 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.211126 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-4d9ld"] Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.239613 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-vrjmz"] Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.240786 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-vrjmz" Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.243894 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.244108 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-jw2lh" Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.244181 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.262822 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-vrjmz"] Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.286782 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/efa2d7d0-3613-4580-be80-b1a72de4501d-scripts\") pod \"ceilometer-0\" (UID: 
\"efa2d7d0-3613-4580-be80-b1a72de4501d\") " pod="openstack/ceilometer-0" Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.286852 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/efa2d7d0-3613-4580-be80-b1a72de4501d-config-data\") pod \"ceilometer-0\" (UID: \"efa2d7d0-3613-4580-be80-b1a72de4501d\") " pod="openstack/ceilometer-0" Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.286884 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/efa2d7d0-3613-4580-be80-b1a72de4501d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"efa2d7d0-3613-4580-be80-b1a72de4501d\") " pod="openstack/ceilometer-0" Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.286900 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/52d332d0-98e5-4cff-8486-151b6593c94f-config\") pod \"neutron-db-sync-4d9ld\" (UID: \"52d332d0-98e5-4cff-8486-151b6593c94f\") " pod="openstack/neutron-db-sync-4d9ld" Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.286915 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52d332d0-98e5-4cff-8486-151b6593c94f-combined-ca-bundle\") pod \"neutron-db-sync-4d9ld\" (UID: \"52d332d0-98e5-4cff-8486-151b6593c94f\") " pod="openstack/neutron-db-sync-4d9ld" Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.286963 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/efa2d7d0-3613-4580-be80-b1a72de4501d-log-httpd\") pod \"ceilometer-0\" (UID: \"efa2d7d0-3613-4580-be80-b1a72de4501d\") " pod="openstack/ceilometer-0" Feb 27 16:29:08 crc kubenswrapper[4830]: 
I0227 16:29:08.286979 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/efa2d7d0-3613-4580-be80-b1a72de4501d-run-httpd\") pod \"ceilometer-0\" (UID: \"efa2d7d0-3613-4580-be80-b1a72de4501d\") " pod="openstack/ceilometer-0" Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.292104 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8s6nl\" (UniqueName: \"kubernetes.io/projected/efa2d7d0-3613-4580-be80-b1a72de4501d-kube-api-access-8s6nl\") pod \"ceilometer-0\" (UID: \"efa2d7d0-3613-4580-be80-b1a72de4501d\") " pod="openstack/ceilometer-0" Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.292170 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efa2d7d0-3613-4580-be80-b1a72de4501d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"efa2d7d0-3613-4580-be80-b1a72de4501d\") " pod="openstack/ceilometer-0" Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.292257 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4z7zp\" (UniqueName: \"kubernetes.io/projected/52d332d0-98e5-4cff-8486-151b6593c94f-kube-api-access-4z7zp\") pod \"neutron-db-sync-4d9ld\" (UID: \"52d332d0-98e5-4cff-8486-151b6593c94f\") " pod="openstack/neutron-db-sync-4d9ld" Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.328008 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-dcxkj"] Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.329452 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-dcxkj" Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.335351 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-nkprd" Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.335546 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.340359 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-dcxkj"] Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.348998 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-7xpjt"] Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.370076 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-vqkl9"] Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.371510 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-vqkl9" Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.384027 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-b9fgg"] Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.385162 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-b9fgg" Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.387150 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-crgzb" Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.387321 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.388049 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.395608 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4z7zp\" (UniqueName: \"kubernetes.io/projected/52d332d0-98e5-4cff-8486-151b6593c94f-kube-api-access-4z7zp\") pod \"neutron-db-sync-4d9ld\" (UID: \"52d332d0-98e5-4cff-8486-151b6593c94f\") " pod="openstack/neutron-db-sync-4d9ld" Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.395653 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/efa2d7d0-3613-4580-be80-b1a72de4501d-scripts\") pod \"ceilometer-0\" (UID: \"efa2d7d0-3613-4580-be80-b1a72de4501d\") " pod="openstack/ceilometer-0" Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.395687 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a69bc2ed-ce70-4828-af02-ccac1c3f0c10-db-sync-config-data\") pod \"cinder-db-sync-vrjmz\" (UID: \"a69bc2ed-ce70-4828-af02-ccac1c3f0c10\") " pod="openstack/cinder-db-sync-vrjmz" Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.395722 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a69bc2ed-ce70-4828-af02-ccac1c3f0c10-scripts\") pod \"cinder-db-sync-vrjmz\" (UID: 
\"a69bc2ed-ce70-4828-af02-ccac1c3f0c10\") " pod="openstack/cinder-db-sync-vrjmz" Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.395748 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/efa2d7d0-3613-4580-be80-b1a72de4501d-config-data\") pod \"ceilometer-0\" (UID: \"efa2d7d0-3613-4580-be80-b1a72de4501d\") " pod="openstack/ceilometer-0" Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.395779 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/efa2d7d0-3613-4580-be80-b1a72de4501d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"efa2d7d0-3613-4580-be80-b1a72de4501d\") " pod="openstack/ceilometer-0" Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.395795 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a69bc2ed-ce70-4828-af02-ccac1c3f0c10-config-data\") pod \"cinder-db-sync-vrjmz\" (UID: \"a69bc2ed-ce70-4828-af02-ccac1c3f0c10\") " pod="openstack/cinder-db-sync-vrjmz" Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.395814 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/52d332d0-98e5-4cff-8486-151b6593c94f-config\") pod \"neutron-db-sync-4d9ld\" (UID: \"52d332d0-98e5-4cff-8486-151b6593c94f\") " pod="openstack/neutron-db-sync-4d9ld" Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.395827 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52d332d0-98e5-4cff-8486-151b6593c94f-combined-ca-bundle\") pod \"neutron-db-sync-4d9ld\" (UID: \"52d332d0-98e5-4cff-8486-151b6593c94f\") " pod="openstack/neutron-db-sync-4d9ld" Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.395873 4830 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/efa2d7d0-3613-4580-be80-b1a72de4501d-log-httpd\") pod \"ceilometer-0\" (UID: \"efa2d7d0-3613-4580-be80-b1a72de4501d\") " pod="openstack/ceilometer-0" Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.395889 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/efa2d7d0-3613-4580-be80-b1a72de4501d-run-httpd\") pod \"ceilometer-0\" (UID: \"efa2d7d0-3613-4580-be80-b1a72de4501d\") " pod="openstack/ceilometer-0" Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.395916 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8s6nl\" (UniqueName: \"kubernetes.io/projected/efa2d7d0-3613-4580-be80-b1a72de4501d-kube-api-access-8s6nl\") pod \"ceilometer-0\" (UID: \"efa2d7d0-3613-4580-be80-b1a72de4501d\") " pod="openstack/ceilometer-0" Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.395931 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmf7s\" (UniqueName: \"kubernetes.io/projected/a69bc2ed-ce70-4828-af02-ccac1c3f0c10-kube-api-access-wmf7s\") pod \"cinder-db-sync-vrjmz\" (UID: \"a69bc2ed-ce70-4828-af02-ccac1c3f0c10\") " pod="openstack/cinder-db-sync-vrjmz" Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.395960 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a69bc2ed-ce70-4828-af02-ccac1c3f0c10-etc-machine-id\") pod \"cinder-db-sync-vrjmz\" (UID: \"a69bc2ed-ce70-4828-af02-ccac1c3f0c10\") " pod="openstack/cinder-db-sync-vrjmz" Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.395980 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/efa2d7d0-3613-4580-be80-b1a72de4501d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"efa2d7d0-3613-4580-be80-b1a72de4501d\") " pod="openstack/ceilometer-0" Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.396006 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a69bc2ed-ce70-4828-af02-ccac1c3f0c10-combined-ca-bundle\") pod \"cinder-db-sync-vrjmz\" (UID: \"a69bc2ed-ce70-4828-af02-ccac1c3f0c10\") " pod="openstack/cinder-db-sync-vrjmz" Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.406635 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52d332d0-98e5-4cff-8486-151b6593c94f-combined-ca-bundle\") pod \"neutron-db-sync-4d9ld\" (UID: \"52d332d0-98e5-4cff-8486-151b6593c94f\") " pod="openstack/neutron-db-sync-4d9ld" Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.409385 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-vqkl9"] Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.414378 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/efa2d7d0-3613-4580-be80-b1a72de4501d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"efa2d7d0-3613-4580-be80-b1a72de4501d\") " pod="openstack/ceilometer-0" Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.418105 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/efa2d7d0-3613-4580-be80-b1a72de4501d-run-httpd\") pod \"ceilometer-0\" (UID: \"efa2d7d0-3613-4580-be80-b1a72de4501d\") " pod="openstack/ceilometer-0" Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.434255 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/efa2d7d0-3613-4580-be80-b1a72de4501d-log-httpd\") pod \"ceilometer-0\" (UID: \"efa2d7d0-3613-4580-be80-b1a72de4501d\") " pod="openstack/ceilometer-0" Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.434379 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-b9fgg"] Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.438321 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/efa2d7d0-3613-4580-be80-b1a72de4501d-scripts\") pod \"ceilometer-0\" (UID: \"efa2d7d0-3613-4580-be80-b1a72de4501d\") " pod="openstack/ceilometer-0" Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.438913 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/efa2d7d0-3613-4580-be80-b1a72de4501d-config-data\") pod \"ceilometer-0\" (UID: \"efa2d7d0-3613-4580-be80-b1a72de4501d\") " pod="openstack/ceilometer-0" Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.443476 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4z7zp\" (UniqueName: \"kubernetes.io/projected/52d332d0-98e5-4cff-8486-151b6593c94f-kube-api-access-4z7zp\") pod \"neutron-db-sync-4d9ld\" (UID: \"52d332d0-98e5-4cff-8486-151b6593c94f\") " pod="openstack/neutron-db-sync-4d9ld" Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.447059 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8s6nl\" (UniqueName: \"kubernetes.io/projected/efa2d7d0-3613-4580-be80-b1a72de4501d-kube-api-access-8s6nl\") pod \"ceilometer-0\" (UID: \"efa2d7d0-3613-4580-be80-b1a72de4501d\") " pod="openstack/ceilometer-0" Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.450394 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/52d332d0-98e5-4cff-8486-151b6593c94f-config\") pod 
\"neutron-db-sync-4d9ld\" (UID: \"52d332d0-98e5-4cff-8486-151b6593c94f\") " pod="openstack/neutron-db-sync-4d9ld" Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.452769 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efa2d7d0-3613-4580-be80-b1a72de4501d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"efa2d7d0-3613-4580-be80-b1a72de4501d\") " pod="openstack/ceilometer-0" Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.472312 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.496897 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b05d69f2-31a8-4212-ad9a-8f2bec833edd-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-vqkl9\" (UID: \"b05d69f2-31a8-4212-ad9a-8f2bec833edd\") " pod="openstack/dnsmasq-dns-56df8fb6b7-vqkl9" Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.496937 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wmf7s\" (UniqueName: \"kubernetes.io/projected/a69bc2ed-ce70-4828-af02-ccac1c3f0c10-kube-api-access-wmf7s\") pod \"cinder-db-sync-vrjmz\" (UID: \"a69bc2ed-ce70-4828-af02-ccac1c3f0c10\") " pod="openstack/cinder-db-sync-vrjmz" Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.496997 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a69bc2ed-ce70-4828-af02-ccac1c3f0c10-etc-machine-id\") pod \"cinder-db-sync-vrjmz\" (UID: \"a69bc2ed-ce70-4828-af02-ccac1c3f0c10\") " pod="openstack/cinder-db-sync-vrjmz" Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.497016 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gm9xq\" (UniqueName: 
\"kubernetes.io/projected/459173e8-7571-47b7-9af8-3bd2d24d4e21-kube-api-access-gm9xq\") pod \"barbican-db-sync-dcxkj\" (UID: \"459173e8-7571-47b7-9af8-3bd2d24d4e21\") " pod="openstack/barbican-db-sync-dcxkj" Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.497047 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b05d69f2-31a8-4212-ad9a-8f2bec833edd-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-vqkl9\" (UID: \"b05d69f2-31a8-4212-ad9a-8f2bec833edd\") " pod="openstack/dnsmasq-dns-56df8fb6b7-vqkl9" Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.497068 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b05d69f2-31a8-4212-ad9a-8f2bec833edd-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-vqkl9\" (UID: \"b05d69f2-31a8-4212-ad9a-8f2bec833edd\") " pod="openstack/dnsmasq-dns-56df8fb6b7-vqkl9" Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.497087 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a69bc2ed-ce70-4828-af02-ccac1c3f0c10-combined-ca-bundle\") pod \"cinder-db-sync-vrjmz\" (UID: \"a69bc2ed-ce70-4828-af02-ccac1c3f0c10\") " pod="openstack/cinder-db-sync-vrjmz" Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.497114 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/67a7f858-b1fb-4547-9880-8f496d704f48-scripts\") pod \"placement-db-sync-b9fgg\" (UID: \"67a7f858-b1fb-4547-9880-8f496d704f48\") " pod="openstack/placement-db-sync-b9fgg" Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.497157 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/b05d69f2-31a8-4212-ad9a-8f2bec833edd-config\") pod \"dnsmasq-dns-56df8fb6b7-vqkl9\" (UID: \"b05d69f2-31a8-4212-ad9a-8f2bec833edd\") " pod="openstack/dnsmasq-dns-56df8fb6b7-vqkl9" Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.497179 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a69bc2ed-ce70-4828-af02-ccac1c3f0c10-db-sync-config-data\") pod \"cinder-db-sync-vrjmz\" (UID: \"a69bc2ed-ce70-4828-af02-ccac1c3f0c10\") " pod="openstack/cinder-db-sync-vrjmz" Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.497207 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a69bc2ed-ce70-4828-af02-ccac1c3f0c10-scripts\") pod \"cinder-db-sync-vrjmz\" (UID: \"a69bc2ed-ce70-4828-af02-ccac1c3f0c10\") " pod="openstack/cinder-db-sync-vrjmz" Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.497250 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/459173e8-7571-47b7-9af8-3bd2d24d4e21-combined-ca-bundle\") pod \"barbican-db-sync-dcxkj\" (UID: \"459173e8-7571-47b7-9af8-3bd2d24d4e21\") " pod="openstack/barbican-db-sync-dcxkj" Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.497273 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7fz4\" (UniqueName: \"kubernetes.io/projected/67a7f858-b1fb-4547-9880-8f496d704f48-kube-api-access-n7fz4\") pod \"placement-db-sync-b9fgg\" (UID: \"67a7f858-b1fb-4547-9880-8f496d704f48\") " pod="openstack/placement-db-sync-b9fgg" Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.497291 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/b05d69f2-31a8-4212-ad9a-8f2bec833edd-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-vqkl9\" (UID: \"b05d69f2-31a8-4212-ad9a-8f2bec833edd\") " pod="openstack/dnsmasq-dns-56df8fb6b7-vqkl9" Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.497308 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67a7f858-b1fb-4547-9880-8f496d704f48-combined-ca-bundle\") pod \"placement-db-sync-b9fgg\" (UID: \"67a7f858-b1fb-4547-9880-8f496d704f48\") " pod="openstack/placement-db-sync-b9fgg" Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.497333 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjs55\" (UniqueName: \"kubernetes.io/projected/b05d69f2-31a8-4212-ad9a-8f2bec833edd-kube-api-access-bjs55\") pod \"dnsmasq-dns-56df8fb6b7-vqkl9\" (UID: \"b05d69f2-31a8-4212-ad9a-8f2bec833edd\") " pod="openstack/dnsmasq-dns-56df8fb6b7-vqkl9" Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.497364 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a69bc2ed-ce70-4828-af02-ccac1c3f0c10-config-data\") pod \"cinder-db-sync-vrjmz\" (UID: \"a69bc2ed-ce70-4828-af02-ccac1c3f0c10\") " pod="openstack/cinder-db-sync-vrjmz" Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.497397 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67a7f858-b1fb-4547-9880-8f496d704f48-config-data\") pod \"placement-db-sync-b9fgg\" (UID: \"67a7f858-b1fb-4547-9880-8f496d704f48\") " pod="openstack/placement-db-sync-b9fgg" Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.497417 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: 
\"kubernetes.io/secret/459173e8-7571-47b7-9af8-3bd2d24d4e21-db-sync-config-data\") pod \"barbican-db-sync-dcxkj\" (UID: \"459173e8-7571-47b7-9af8-3bd2d24d4e21\") " pod="openstack/barbican-db-sync-dcxkj" Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.498474 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a69bc2ed-ce70-4828-af02-ccac1c3f0c10-etc-machine-id\") pod \"cinder-db-sync-vrjmz\" (UID: \"a69bc2ed-ce70-4828-af02-ccac1c3f0c10\") " pod="openstack/cinder-db-sync-vrjmz" Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.498830 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/67a7f858-b1fb-4547-9880-8f496d704f48-logs\") pod \"placement-db-sync-b9fgg\" (UID: \"67a7f858-b1fb-4547-9880-8f496d704f48\") " pod="openstack/placement-db-sync-b9fgg" Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.507283 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a69bc2ed-ce70-4828-af02-ccac1c3f0c10-config-data\") pod \"cinder-db-sync-vrjmz\" (UID: \"a69bc2ed-ce70-4828-af02-ccac1c3f0c10\") " pod="openstack/cinder-db-sync-vrjmz" Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.514513 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a69bc2ed-ce70-4828-af02-ccac1c3f0c10-combined-ca-bundle\") pod \"cinder-db-sync-vrjmz\" (UID: \"a69bc2ed-ce70-4828-af02-ccac1c3f0c10\") " pod="openstack/cinder-db-sync-vrjmz" Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.514720 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a69bc2ed-ce70-4828-af02-ccac1c3f0c10-scripts\") pod \"cinder-db-sync-vrjmz\" (UID: \"a69bc2ed-ce70-4828-af02-ccac1c3f0c10\") " 
pod="openstack/cinder-db-sync-vrjmz" Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.514840 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a69bc2ed-ce70-4828-af02-ccac1c3f0c10-db-sync-config-data\") pod \"cinder-db-sync-vrjmz\" (UID: \"a69bc2ed-ce70-4828-af02-ccac1c3f0c10\") " pod="openstack/cinder-db-sync-vrjmz" Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.518344 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmf7s\" (UniqueName: \"kubernetes.io/projected/a69bc2ed-ce70-4828-af02-ccac1c3f0c10-kube-api-access-wmf7s\") pod \"cinder-db-sync-vrjmz\" (UID: \"a69bc2ed-ce70-4828-af02-ccac1c3f0c10\") " pod="openstack/cinder-db-sync-vrjmz" Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.552498 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-4d9ld" Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.567489 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-vrjmz" Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.603165 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67a7f858-b1fb-4547-9880-8f496d704f48-config-data\") pod \"placement-db-sync-b9fgg\" (UID: \"67a7f858-b1fb-4547-9880-8f496d704f48\") " pod="openstack/placement-db-sync-b9fgg" Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.603223 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/459173e8-7571-47b7-9af8-3bd2d24d4e21-db-sync-config-data\") pod \"barbican-db-sync-dcxkj\" (UID: \"459173e8-7571-47b7-9af8-3bd2d24d4e21\") " pod="openstack/barbican-db-sync-dcxkj" Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.603256 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/67a7f858-b1fb-4547-9880-8f496d704f48-logs\") pod \"placement-db-sync-b9fgg\" (UID: \"67a7f858-b1fb-4547-9880-8f496d704f48\") " pod="openstack/placement-db-sync-b9fgg" Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.603302 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b05d69f2-31a8-4212-ad9a-8f2bec833edd-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-vqkl9\" (UID: \"b05d69f2-31a8-4212-ad9a-8f2bec833edd\") " pod="openstack/dnsmasq-dns-56df8fb6b7-vqkl9" Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.603323 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gm9xq\" (UniqueName: \"kubernetes.io/projected/459173e8-7571-47b7-9af8-3bd2d24d4e21-kube-api-access-gm9xq\") pod \"barbican-db-sync-dcxkj\" (UID: \"459173e8-7571-47b7-9af8-3bd2d24d4e21\") " pod="openstack/barbican-db-sync-dcxkj" Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 
16:29:08.603352 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b05d69f2-31a8-4212-ad9a-8f2bec833edd-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-vqkl9\" (UID: \"b05d69f2-31a8-4212-ad9a-8f2bec833edd\") " pod="openstack/dnsmasq-dns-56df8fb6b7-vqkl9" Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.603388 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b05d69f2-31a8-4212-ad9a-8f2bec833edd-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-vqkl9\" (UID: \"b05d69f2-31a8-4212-ad9a-8f2bec833edd\") " pod="openstack/dnsmasq-dns-56df8fb6b7-vqkl9" Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.603412 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/67a7f858-b1fb-4547-9880-8f496d704f48-scripts\") pod \"placement-db-sync-b9fgg\" (UID: \"67a7f858-b1fb-4547-9880-8f496d704f48\") " pod="openstack/placement-db-sync-b9fgg" Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.603433 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b05d69f2-31a8-4212-ad9a-8f2bec833edd-config\") pod \"dnsmasq-dns-56df8fb6b7-vqkl9\" (UID: \"b05d69f2-31a8-4212-ad9a-8f2bec833edd\") " pod="openstack/dnsmasq-dns-56df8fb6b7-vqkl9" Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.603501 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/459173e8-7571-47b7-9af8-3bd2d24d4e21-combined-ca-bundle\") pod \"barbican-db-sync-dcxkj\" (UID: \"459173e8-7571-47b7-9af8-3bd2d24d4e21\") " pod="openstack/barbican-db-sync-dcxkj" Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.603522 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-n7fz4\" (UniqueName: \"kubernetes.io/projected/67a7f858-b1fb-4547-9880-8f496d704f48-kube-api-access-n7fz4\") pod \"placement-db-sync-b9fgg\" (UID: \"67a7f858-b1fb-4547-9880-8f496d704f48\") " pod="openstack/placement-db-sync-b9fgg" Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.604431 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b05d69f2-31a8-4212-ad9a-8f2bec833edd-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-vqkl9\" (UID: \"b05d69f2-31a8-4212-ad9a-8f2bec833edd\") " pod="openstack/dnsmasq-dns-56df8fb6b7-vqkl9" Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.604457 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67a7f858-b1fb-4547-9880-8f496d704f48-combined-ca-bundle\") pod \"placement-db-sync-b9fgg\" (UID: \"67a7f858-b1fb-4547-9880-8f496d704f48\") " pod="openstack/placement-db-sync-b9fgg" Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.604514 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bjs55\" (UniqueName: \"kubernetes.io/projected/b05d69f2-31a8-4212-ad9a-8f2bec833edd-kube-api-access-bjs55\") pod \"dnsmasq-dns-56df8fb6b7-vqkl9\" (UID: \"b05d69f2-31a8-4212-ad9a-8f2bec833edd\") " pod="openstack/dnsmasq-dns-56df8fb6b7-vqkl9" Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.607007 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b05d69f2-31a8-4212-ad9a-8f2bec833edd-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-vqkl9\" (UID: \"b05d69f2-31a8-4212-ad9a-8f2bec833edd\") " pod="openstack/dnsmasq-dns-56df8fb6b7-vqkl9" Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.607283 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/b05d69f2-31a8-4212-ad9a-8f2bec833edd-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-vqkl9\" (UID: \"b05d69f2-31a8-4212-ad9a-8f2bec833edd\") " pod="openstack/dnsmasq-dns-56df8fb6b7-vqkl9" Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.607830 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b05d69f2-31a8-4212-ad9a-8f2bec833edd-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-vqkl9\" (UID: \"b05d69f2-31a8-4212-ad9a-8f2bec833edd\") " pod="openstack/dnsmasq-dns-56df8fb6b7-vqkl9" Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.608129 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/67a7f858-b1fb-4547-9880-8f496d704f48-logs\") pod \"placement-db-sync-b9fgg\" (UID: \"67a7f858-b1fb-4547-9880-8f496d704f48\") " pod="openstack/placement-db-sync-b9fgg" Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.610268 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/67a7f858-b1fb-4547-9880-8f496d704f48-scripts\") pod \"placement-db-sync-b9fgg\" (UID: \"67a7f858-b1fb-4547-9880-8f496d704f48\") " pod="openstack/placement-db-sync-b9fgg" Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.610275 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b05d69f2-31a8-4212-ad9a-8f2bec833edd-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-vqkl9\" (UID: \"b05d69f2-31a8-4212-ad9a-8f2bec833edd\") " pod="openstack/dnsmasq-dns-56df8fb6b7-vqkl9" Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.610417 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b05d69f2-31a8-4212-ad9a-8f2bec833edd-config\") pod \"dnsmasq-dns-56df8fb6b7-vqkl9\" (UID: \"b05d69f2-31a8-4212-ad9a-8f2bec833edd\") " 
pod="openstack/dnsmasq-dns-56df8fb6b7-vqkl9" Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.612457 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/459173e8-7571-47b7-9af8-3bd2d24d4e21-combined-ca-bundle\") pod \"barbican-db-sync-dcxkj\" (UID: \"459173e8-7571-47b7-9af8-3bd2d24d4e21\") " pod="openstack/barbican-db-sync-dcxkj" Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.612792 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67a7f858-b1fb-4547-9880-8f496d704f48-config-data\") pod \"placement-db-sync-b9fgg\" (UID: \"67a7f858-b1fb-4547-9880-8f496d704f48\") " pod="openstack/placement-db-sync-b9fgg" Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.614510 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/459173e8-7571-47b7-9af8-3bd2d24d4e21-db-sync-config-data\") pod \"barbican-db-sync-dcxkj\" (UID: \"459173e8-7571-47b7-9af8-3bd2d24d4e21\") " pod="openstack/barbican-db-sync-dcxkj" Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.615086 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67a7f858-b1fb-4547-9880-8f496d704f48-combined-ca-bundle\") pod \"placement-db-sync-b9fgg\" (UID: \"67a7f858-b1fb-4547-9880-8f496d704f48\") " pod="openstack/placement-db-sync-b9fgg" Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.623540 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjs55\" (UniqueName: \"kubernetes.io/projected/b05d69f2-31a8-4212-ad9a-8f2bec833edd-kube-api-access-bjs55\") pod \"dnsmasq-dns-56df8fb6b7-vqkl9\" (UID: \"b05d69f2-31a8-4212-ad9a-8f2bec833edd\") " pod="openstack/dnsmasq-dns-56df8fb6b7-vqkl9" Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.624506 4830 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7fz4\" (UniqueName: \"kubernetes.io/projected/67a7f858-b1fb-4547-9880-8f496d704f48-kube-api-access-n7fz4\") pod \"placement-db-sync-b9fgg\" (UID: \"67a7f858-b1fb-4547-9880-8f496d704f48\") " pod="openstack/placement-db-sync-b9fgg" Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.624889 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gm9xq\" (UniqueName: \"kubernetes.io/projected/459173e8-7571-47b7-9af8-3bd2d24d4e21-kube-api-access-gm9xq\") pod \"barbican-db-sync-dcxkj\" (UID: \"459173e8-7571-47b7-9af8-3bd2d24d4e21\") " pod="openstack/barbican-db-sync-dcxkj" Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.669175 4830 generic.go:334] "Generic (PLEG): container finished" podID="9d40c18b-0e28-47d0-8626-7f544a9cd711" containerID="7d227a20e256f7b1714ec22e233a9003dc5145ea4fac66e3744a979360d5e6e5" exitCode=0 Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.669239 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-q9f2f" event={"ID":"9d40c18b-0e28-47d0-8626-7f544a9cd711","Type":"ContainerDied","Data":"7d227a20e256f7b1714ec22e233a9003dc5145ea4fac66e3744a979360d5e6e5"} Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.669265 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-q9f2f" event={"ID":"9d40c18b-0e28-47d0-8626-7f544a9cd711","Type":"ContainerStarted","Data":"98f9297290042cbff2892384775085fb115d773832475c4860274b3e0196dfb1"} Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.686344 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-pchsl" event={"ID":"1dd5a364-2f28-4e8b-831c-08ed09984745","Type":"ContainerDied","Data":"7667a4a15aedca8225515d5ddc41f7663a718322ed85628b4048831fac00fc13"} Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.686381 4830 pod_container_deletor.go:80] 
"Container not found in pod's containers" containerID="7667a4a15aedca8225515d5ddc41f7663a718322ed85628b4048831fac00fc13" Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.690700 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-pchsl" Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.694068 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-vqkl9" Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.708346 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-b9fgg" Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.807785 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1dd5a364-2f28-4e8b-831c-08ed09984745-ovsdbserver-sb\") pod \"1dd5a364-2f28-4e8b-831c-08ed09984745\" (UID: \"1dd5a364-2f28-4e8b-831c-08ed09984745\") " Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.807827 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1dd5a364-2f28-4e8b-831c-08ed09984745-dns-swift-storage-0\") pod \"1dd5a364-2f28-4e8b-831c-08ed09984745\" (UID: \"1dd5a364-2f28-4e8b-831c-08ed09984745\") " Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.807870 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1dd5a364-2f28-4e8b-831c-08ed09984745-dns-svc\") pod \"1dd5a364-2f28-4e8b-831c-08ed09984745\" (UID: \"1dd5a364-2f28-4e8b-831c-08ed09984745\") " Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.807999 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1dd5a364-2f28-4e8b-831c-08ed09984745-config\") pod 
\"1dd5a364-2f28-4e8b-831c-08ed09984745\" (UID: \"1dd5a364-2f28-4e8b-831c-08ed09984745\") " Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.808034 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sxb9v\" (UniqueName: \"kubernetes.io/projected/1dd5a364-2f28-4e8b-831c-08ed09984745-kube-api-access-sxb9v\") pod \"1dd5a364-2f28-4e8b-831c-08ed09984745\" (UID: \"1dd5a364-2f28-4e8b-831c-08ed09984745\") " Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.808091 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1dd5a364-2f28-4e8b-831c-08ed09984745-ovsdbserver-nb\") pod \"1dd5a364-2f28-4e8b-831c-08ed09984745\" (UID: \"1dd5a364-2f28-4e8b-831c-08ed09984745\") " Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.814788 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1dd5a364-2f28-4e8b-831c-08ed09984745-kube-api-access-sxb9v" (OuterVolumeSpecName: "kube-api-access-sxb9v") pod "1dd5a364-2f28-4e8b-831c-08ed09984745" (UID: "1dd5a364-2f28-4e8b-831c-08ed09984745"). InnerVolumeSpecName "kube-api-access-sxb9v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.864248 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-7xpjt"] Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.872607 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-dcxkj" Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.880041 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1dd5a364-2f28-4e8b-831c-08ed09984745-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "1dd5a364-2f28-4e8b-831c-08ed09984745" (UID: "1dd5a364-2f28-4e8b-831c-08ed09984745"). 
InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.881552 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1dd5a364-2f28-4e8b-831c-08ed09984745-config" (OuterVolumeSpecName: "config") pod "1dd5a364-2f28-4e8b-831c-08ed09984745" (UID: "1dd5a364-2f28-4e8b-831c-08ed09984745"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.911226 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1dd5a364-2f28-4e8b-831c-08ed09984745-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "1dd5a364-2f28-4e8b-831c-08ed09984745" (UID: "1dd5a364-2f28-4e8b-831c-08ed09984745"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.913355 4830 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1dd5a364-2f28-4e8b-831c-08ed09984745-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.913381 4830 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1dd5a364-2f28-4e8b-831c-08ed09984745-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.913393 4830 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1dd5a364-2f28-4e8b-831c-08ed09984745-config\") on node \"crc\" DevicePath \"\"" Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.913403 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sxb9v\" (UniqueName: 
\"kubernetes.io/projected/1dd5a364-2f28-4e8b-831c-08ed09984745-kube-api-access-sxb9v\") on node \"crc\" DevicePath \"\"" Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.915110 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1dd5a364-2f28-4e8b-831c-08ed09984745-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1dd5a364-2f28-4e8b-831c-08ed09984745" (UID: "1dd5a364-2f28-4e8b-831c-08ed09984745"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.946061 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1dd5a364-2f28-4e8b-831c-08ed09984745-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1dd5a364-2f28-4e8b-831c-08ed09984745" (UID: "1dd5a364-2f28-4e8b-831c-08ed09984745"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.949759 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 27 16:29:08 crc kubenswrapper[4830]: E0227 16:29:08.950070 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1dd5a364-2f28-4e8b-831c-08ed09984745" containerName="init" Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.950087 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="1dd5a364-2f28-4e8b-831c-08ed09984745" containerName="init" Feb 27 16:29:08 crc kubenswrapper[4830]: E0227 16:29:08.950099 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1dd5a364-2f28-4e8b-831c-08ed09984745" containerName="dnsmasq-dns" Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.950106 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="1dd5a364-2f28-4e8b-831c-08ed09984745" containerName="dnsmasq-dns" Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.950273 4830 
memory_manager.go:354] "RemoveStaleState removing state" podUID="1dd5a364-2f28-4e8b-831c-08ed09984745" containerName="dnsmasq-dns" Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.953235 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.961652 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-mh994" Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.969524 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.972225 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 27 16:29:08 crc kubenswrapper[4830]: I0227 16:29:08.989195 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Feb 27 16:29:09 crc kubenswrapper[4830]: I0227 16:29:09.012017 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-jk57b"] Feb 27 16:29:09 crc kubenswrapper[4830]: I0227 16:29:09.015453 4830 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1dd5a364-2f28-4e8b-831c-08ed09984745-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 27 16:29:09 crc kubenswrapper[4830]: I0227 16:29:09.015496 4830 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1dd5a364-2f28-4e8b-831c-08ed09984745-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 27 16:29:09 crc kubenswrapper[4830]: I0227 16:29:09.028468 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 27 16:29:09 crc kubenswrapper[4830]: I0227 16:29:09.035590 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 27 16:29:09 crc kubenswrapper[4830]: I0227 16:29:09.038363 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 27 16:29:09 crc kubenswrapper[4830]: I0227 16:29:09.074621 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 27 16:29:09 crc kubenswrapper[4830]: I0227 16:29:09.111997 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 27 16:29:09 crc kubenswrapper[4830]: I0227 16:29:09.126029 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/20d045c1-a920-4c3c-bba8-e3666f4a6549-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"20d045c1-a920-4c3c-bba8-e3666f4a6549\") " pod="openstack/glance-default-external-api-0" Feb 27 16:29:09 crc kubenswrapper[4830]: I0227 16:29:09.126093 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20d045c1-a920-4c3c-bba8-e3666f4a6549-config-data\") pod \"glance-default-external-api-0\" (UID: \"20d045c1-a920-4c3c-bba8-e3666f4a6549\") " pod="openstack/glance-default-external-api-0" Feb 27 16:29:09 crc kubenswrapper[4830]: I0227 16:29:09.126109 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20d045c1-a920-4c3c-bba8-e3666f4a6549-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"20d045c1-a920-4c3c-bba8-e3666f4a6549\") " pod="openstack/glance-default-external-api-0" Feb 27 16:29:09 crc kubenswrapper[4830]: I0227 16:29:09.126129 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/20d045c1-a920-4c3c-bba8-e3666f4a6549-logs\") pod \"glance-default-external-api-0\" (UID: \"20d045c1-a920-4c3c-bba8-e3666f4a6549\") " pod="openstack/glance-default-external-api-0" Feb 27 16:29:09 crc kubenswrapper[4830]: I0227 16:29:09.126175 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jx2qd\" (UniqueName: \"kubernetes.io/projected/20d045c1-a920-4c3c-bba8-e3666f4a6549-kube-api-access-jx2qd\") pod \"glance-default-external-api-0\" (UID: \"20d045c1-a920-4c3c-bba8-e3666f4a6549\") " pod="openstack/glance-default-external-api-0" Feb 27 16:29:09 crc kubenswrapper[4830]: I0227 16:29:09.126218 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20d045c1-a920-4c3c-bba8-e3666f4a6549-scripts\") pod \"glance-default-external-api-0\" (UID: \"20d045c1-a920-4c3c-bba8-e3666f4a6549\") " pod="openstack/glance-default-external-api-0" Feb 27 16:29:09 crc kubenswrapper[4830]: I0227 16:29:09.126244 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"20d045c1-a920-4c3c-bba8-e3666f4a6549\") " pod="openstack/glance-default-external-api-0" Feb 27 16:29:09 crc kubenswrapper[4830]: I0227 16:29:09.161383 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5f59b8f679-q9f2f" Feb 27 16:29:09 crc kubenswrapper[4830]: I0227 16:29:09.227970 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9d40c18b-0e28-47d0-8626-7f544a9cd711-ovsdbserver-nb\") pod \"9d40c18b-0e28-47d0-8626-7f544a9cd711\" (UID: \"9d40c18b-0e28-47d0-8626-7f544a9cd711\") " Feb 27 16:29:09 crc kubenswrapper[4830]: I0227 16:29:09.228568 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20d045c1-a920-4c3c-bba8-e3666f4a6549-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"20d045c1-a920-4c3c-bba8-e3666f4a6549\") " pod="openstack/glance-default-external-api-0" Feb 27 16:29:09 crc kubenswrapper[4830]: I0227 16:29:09.228590 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20d045c1-a920-4c3c-bba8-e3666f4a6549-config-data\") pod \"glance-default-external-api-0\" (UID: \"20d045c1-a920-4c3c-bba8-e3666f4a6549\") " pod="openstack/glance-default-external-api-0" Feb 27 16:29:09 crc kubenswrapper[4830]: I0227 16:29:09.228607 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/20d045c1-a920-4c3c-bba8-e3666f4a6549-logs\") pod \"glance-default-external-api-0\" (UID: \"20d045c1-a920-4c3c-bba8-e3666f4a6549\") " pod="openstack/glance-default-external-api-0" Feb 27 16:29:09 crc kubenswrapper[4830]: I0227 16:29:09.228625 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6c2a6285-eb97-45b0-b45b-b42c78e4e2b5-logs\") pod \"glance-default-internal-api-0\" (UID: \"6c2a6285-eb97-45b0-b45b-b42c78e4e2b5\") " pod="openstack/glance-default-internal-api-0" Feb 27 16:29:09 crc 
kubenswrapper[4830]: I0227 16:29:09.228676 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jx2qd\" (UniqueName: \"kubernetes.io/projected/20d045c1-a920-4c3c-bba8-e3666f4a6549-kube-api-access-jx2qd\") pod \"glance-default-external-api-0\" (UID: \"20d045c1-a920-4c3c-bba8-e3666f4a6549\") " pod="openstack/glance-default-external-api-0" Feb 27 16:29:09 crc kubenswrapper[4830]: I0227 16:29:09.228693 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8w97\" (UniqueName: \"kubernetes.io/projected/6c2a6285-eb97-45b0-b45b-b42c78e4e2b5-kube-api-access-h8w97\") pod \"glance-default-internal-api-0\" (UID: \"6c2a6285-eb97-45b0-b45b-b42c78e4e2b5\") " pod="openstack/glance-default-internal-api-0" Feb 27 16:29:09 crc kubenswrapper[4830]: I0227 16:29:09.228716 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c2a6285-eb97-45b0-b45b-b42c78e4e2b5-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"6c2a6285-eb97-45b0-b45b-b42c78e4e2b5\") " pod="openstack/glance-default-internal-api-0" Feb 27 16:29:09 crc kubenswrapper[4830]: I0227 16:29:09.228735 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c2a6285-eb97-45b0-b45b-b42c78e4e2b5-scripts\") pod \"glance-default-internal-api-0\" (UID: \"6c2a6285-eb97-45b0-b45b-b42c78e4e2b5\") " pod="openstack/glance-default-internal-api-0" Feb 27 16:29:09 crc kubenswrapper[4830]: I0227 16:29:09.228759 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c2a6285-eb97-45b0-b45b-b42c78e4e2b5-config-data\") pod \"glance-default-internal-api-0\" (UID: \"6c2a6285-eb97-45b0-b45b-b42c78e4e2b5\") " 
pod="openstack/glance-default-internal-api-0" Feb 27 16:29:09 crc kubenswrapper[4830]: I0227 16:29:09.228782 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20d045c1-a920-4c3c-bba8-e3666f4a6549-scripts\") pod \"glance-default-external-api-0\" (UID: \"20d045c1-a920-4c3c-bba8-e3666f4a6549\") " pod="openstack/glance-default-external-api-0" Feb 27 16:29:09 crc kubenswrapper[4830]: I0227 16:29:09.228808 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"20d045c1-a920-4c3c-bba8-e3666f4a6549\") " pod="openstack/glance-default-external-api-0" Feb 27 16:29:09 crc kubenswrapper[4830]: I0227 16:29:09.228832 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/20d045c1-a920-4c3c-bba8-e3666f4a6549-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"20d045c1-a920-4c3c-bba8-e3666f4a6549\") " pod="openstack/glance-default-external-api-0" Feb 27 16:29:09 crc kubenswrapper[4830]: I0227 16:29:09.228848 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6c2a6285-eb97-45b0-b45b-b42c78e4e2b5-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"6c2a6285-eb97-45b0-b45b-b42c78e4e2b5\") " pod="openstack/glance-default-internal-api-0" Feb 27 16:29:09 crc kubenswrapper[4830]: I0227 16:29:09.228875 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"6c2a6285-eb97-45b0-b45b-b42c78e4e2b5\") " pod="openstack/glance-default-internal-api-0" Feb 27 16:29:09 crc kubenswrapper[4830]: 
I0227 16:29:09.237394 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/20d045c1-a920-4c3c-bba8-e3666f4a6549-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"20d045c1-a920-4c3c-bba8-e3666f4a6549\") " pod="openstack/glance-default-external-api-0" Feb 27 16:29:09 crc kubenswrapper[4830]: I0227 16:29:09.237660 4830 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"20d045c1-a920-4c3c-bba8-e3666f4a6549\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/glance-default-external-api-0" Feb 27 16:29:09 crc kubenswrapper[4830]: I0227 16:29:09.240909 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/20d045c1-a920-4c3c-bba8-e3666f4a6549-logs\") pod \"glance-default-external-api-0\" (UID: \"20d045c1-a920-4c3c-bba8-e3666f4a6549\") " pod="openstack/glance-default-external-api-0" Feb 27 16:29:09 crc kubenswrapper[4830]: I0227 16:29:09.252292 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20d045c1-a920-4c3c-bba8-e3666f4a6549-scripts\") pod \"glance-default-external-api-0\" (UID: \"20d045c1-a920-4c3c-bba8-e3666f4a6549\") " pod="openstack/glance-default-external-api-0" Feb 27 16:29:09 crc kubenswrapper[4830]: I0227 16:29:09.264859 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20d045c1-a920-4c3c-bba8-e3666f4a6549-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"20d045c1-a920-4c3c-bba8-e3666f4a6549\") " pod="openstack/glance-default-external-api-0" Feb 27 16:29:09 crc kubenswrapper[4830]: I0227 16:29:09.265962 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/20d045c1-a920-4c3c-bba8-e3666f4a6549-config-data\") pod \"glance-default-external-api-0\" (UID: \"20d045c1-a920-4c3c-bba8-e3666f4a6549\") " pod="openstack/glance-default-external-api-0" Feb 27 16:29:09 crc kubenswrapper[4830]: I0227 16:29:09.288565 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jx2qd\" (UniqueName: \"kubernetes.io/projected/20d045c1-a920-4c3c-bba8-e3666f4a6549-kube-api-access-jx2qd\") pod \"glance-default-external-api-0\" (UID: \"20d045c1-a920-4c3c-bba8-e3666f4a6549\") " pod="openstack/glance-default-external-api-0" Feb 27 16:29:09 crc kubenswrapper[4830]: I0227 16:29:09.314389 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d40c18b-0e28-47d0-8626-7f544a9cd711-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9d40c18b-0e28-47d0-8626-7f544a9cd711" (UID: "9d40c18b-0e28-47d0-8626-7f544a9cd711"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:29:09 crc kubenswrapper[4830]: I0227 16:29:09.329707 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-vrjmz"] Feb 27 16:29:09 crc kubenswrapper[4830]: I0227 16:29:09.331195 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"20d045c1-a920-4c3c-bba8-e3666f4a6549\") " pod="openstack/glance-default-external-api-0" Feb 27 16:29:09 crc kubenswrapper[4830]: I0227 16:29:09.331914 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d40c18b-0e28-47d0-8626-7f544a9cd711-config\") pod \"9d40c18b-0e28-47d0-8626-7f544a9cd711\" (UID: \"9d40c18b-0e28-47d0-8626-7f544a9cd711\") " Feb 27 16:29:09 crc kubenswrapper[4830]: I0227 16:29:09.331971 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9d40c18b-0e28-47d0-8626-7f544a9cd711-ovsdbserver-sb\") pod \"9d40c18b-0e28-47d0-8626-7f544a9cd711\" (UID: \"9d40c18b-0e28-47d0-8626-7f544a9cd711\") " Feb 27 16:29:09 crc kubenswrapper[4830]: I0227 16:29:09.332014 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9d40c18b-0e28-47d0-8626-7f544a9cd711-dns-swift-storage-0\") pod \"9d40c18b-0e28-47d0-8626-7f544a9cd711\" (UID: \"9d40c18b-0e28-47d0-8626-7f544a9cd711\") " Feb 27 16:29:09 crc kubenswrapper[4830]: I0227 16:29:09.332099 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nnptc\" (UniqueName: \"kubernetes.io/projected/9d40c18b-0e28-47d0-8626-7f544a9cd711-kube-api-access-nnptc\") pod \"9d40c18b-0e28-47d0-8626-7f544a9cd711\" (UID: \"9d40c18b-0e28-47d0-8626-7f544a9cd711\") " Feb 
27 16:29:09 crc kubenswrapper[4830]: I0227 16:29:09.332117 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9d40c18b-0e28-47d0-8626-7f544a9cd711-dns-svc\") pod \"9d40c18b-0e28-47d0-8626-7f544a9cd711\" (UID: \"9d40c18b-0e28-47d0-8626-7f544a9cd711\") " Feb 27 16:29:09 crc kubenswrapper[4830]: I0227 16:29:09.332380 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6c2a6285-eb97-45b0-b45b-b42c78e4e2b5-logs\") pod \"glance-default-internal-api-0\" (UID: \"6c2a6285-eb97-45b0-b45b-b42c78e4e2b5\") " pod="openstack/glance-default-internal-api-0" Feb 27 16:29:09 crc kubenswrapper[4830]: I0227 16:29:09.332443 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h8w97\" (UniqueName: \"kubernetes.io/projected/6c2a6285-eb97-45b0-b45b-b42c78e4e2b5-kube-api-access-h8w97\") pod \"glance-default-internal-api-0\" (UID: \"6c2a6285-eb97-45b0-b45b-b42c78e4e2b5\") " pod="openstack/glance-default-internal-api-0" Feb 27 16:29:09 crc kubenswrapper[4830]: I0227 16:29:09.332468 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c2a6285-eb97-45b0-b45b-b42c78e4e2b5-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"6c2a6285-eb97-45b0-b45b-b42c78e4e2b5\") " pod="openstack/glance-default-internal-api-0" Feb 27 16:29:09 crc kubenswrapper[4830]: I0227 16:29:09.332486 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c2a6285-eb97-45b0-b45b-b42c78e4e2b5-scripts\") pod \"glance-default-internal-api-0\" (UID: \"6c2a6285-eb97-45b0-b45b-b42c78e4e2b5\") " pod="openstack/glance-default-internal-api-0" Feb 27 16:29:09 crc kubenswrapper[4830]: I0227 16:29:09.332510 4830 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c2a6285-eb97-45b0-b45b-b42c78e4e2b5-config-data\") pod \"glance-default-internal-api-0\" (UID: \"6c2a6285-eb97-45b0-b45b-b42c78e4e2b5\") " pod="openstack/glance-default-internal-api-0" Feb 27 16:29:09 crc kubenswrapper[4830]: I0227 16:29:09.332563 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6c2a6285-eb97-45b0-b45b-b42c78e4e2b5-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"6c2a6285-eb97-45b0-b45b-b42c78e4e2b5\") " pod="openstack/glance-default-internal-api-0" Feb 27 16:29:09 crc kubenswrapper[4830]: I0227 16:29:09.332587 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"6c2a6285-eb97-45b0-b45b-b42c78e4e2b5\") " pod="openstack/glance-default-internal-api-0" Feb 27 16:29:09 crc kubenswrapper[4830]: I0227 16:29:09.332645 4830 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9d40c18b-0e28-47d0-8626-7f544a9cd711-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 27 16:29:09 crc kubenswrapper[4830]: I0227 16:29:09.332737 4830 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"6c2a6285-eb97-45b0-b45b-b42c78e4e2b5\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/glance-default-internal-api-0" Feb 27 16:29:09 crc kubenswrapper[4830]: I0227 16:29:09.343848 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6c2a6285-eb97-45b0-b45b-b42c78e4e2b5-logs\") pod \"glance-default-internal-api-0\" (UID: 
\"6c2a6285-eb97-45b0-b45b-b42c78e4e2b5\") " pod="openstack/glance-default-internal-api-0" Feb 27 16:29:09 crc kubenswrapper[4830]: I0227 16:29:09.346013 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6c2a6285-eb97-45b0-b45b-b42c78e4e2b5-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"6c2a6285-eb97-45b0-b45b-b42c78e4e2b5\") " pod="openstack/glance-default-internal-api-0" Feb 27 16:29:09 crc kubenswrapper[4830]: I0227 16:29:09.350055 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d40c18b-0e28-47d0-8626-7f544a9cd711-kube-api-access-nnptc" (OuterVolumeSpecName: "kube-api-access-nnptc") pod "9d40c18b-0e28-47d0-8626-7f544a9cd711" (UID: "9d40c18b-0e28-47d0-8626-7f544a9cd711"). InnerVolumeSpecName "kube-api-access-nnptc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:29:09 crc kubenswrapper[4830]: I0227 16:29:09.354212 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c2a6285-eb97-45b0-b45b-b42c78e4e2b5-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"6c2a6285-eb97-45b0-b45b-b42c78e4e2b5\") " pod="openstack/glance-default-internal-api-0" Feb 27 16:29:09 crc kubenswrapper[4830]: I0227 16:29:09.355413 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-4d9ld"] Feb 27 16:29:09 crc kubenswrapper[4830]: I0227 16:29:09.358821 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c2a6285-eb97-45b0-b45b-b42c78e4e2b5-scripts\") pod \"glance-default-internal-api-0\" (UID: \"6c2a6285-eb97-45b0-b45b-b42c78e4e2b5\") " pod="openstack/glance-default-internal-api-0" Feb 27 16:29:09 crc kubenswrapper[4830]: I0227 16:29:09.360102 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/6c2a6285-eb97-45b0-b45b-b42c78e4e2b5-config-data\") pod \"glance-default-internal-api-0\" (UID: \"6c2a6285-eb97-45b0-b45b-b42c78e4e2b5\") " pod="openstack/glance-default-internal-api-0" Feb 27 16:29:09 crc kubenswrapper[4830]: W0227 16:29:09.374237 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod52d332d0_98e5_4cff_8486_151b6593c94f.slice/crio-9eb11a8506bba690e55a72bafbc3808cd479c78cb49f73dc26b47b82227ec393 WatchSource:0}: Error finding container 9eb11a8506bba690e55a72bafbc3808cd479c78cb49f73dc26b47b82227ec393: Status 404 returned error can't find the container with id 9eb11a8506bba690e55a72bafbc3808cd479c78cb49f73dc26b47b82227ec393 Feb 27 16:29:09 crc kubenswrapper[4830]: I0227 16:29:09.376722 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d40c18b-0e28-47d0-8626-7f544a9cd711-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9d40c18b-0e28-47d0-8626-7f544a9cd711" (UID: "9d40c18b-0e28-47d0-8626-7f544a9cd711"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:29:09 crc kubenswrapper[4830]: I0227 16:29:09.383450 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d40c18b-0e28-47d0-8626-7f544a9cd711-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "9d40c18b-0e28-47d0-8626-7f544a9cd711" (UID: "9d40c18b-0e28-47d0-8626-7f544a9cd711"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:29:09 crc kubenswrapper[4830]: I0227 16:29:09.386040 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8w97\" (UniqueName: \"kubernetes.io/projected/6c2a6285-eb97-45b0-b45b-b42c78e4e2b5-kube-api-access-h8w97\") pod \"glance-default-internal-api-0\" (UID: \"6c2a6285-eb97-45b0-b45b-b42c78e4e2b5\") " pod="openstack/glance-default-internal-api-0" Feb 27 16:29:09 crc kubenswrapper[4830]: I0227 16:29:09.386106 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d40c18b-0e28-47d0-8626-7f544a9cd711-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "9d40c18b-0e28-47d0-8626-7f544a9cd711" (UID: "9d40c18b-0e28-47d0-8626-7f544a9cd711"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:29:09 crc kubenswrapper[4830]: I0227 16:29:09.392759 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"6c2a6285-eb97-45b0-b45b-b42c78e4e2b5\") " pod="openstack/glance-default-internal-api-0" Feb 27 16:29:09 crc kubenswrapper[4830]: I0227 16:29:09.393208 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d40c18b-0e28-47d0-8626-7f544a9cd711-config" (OuterVolumeSpecName: "config") pod "9d40c18b-0e28-47d0-8626-7f544a9cd711" (UID: "9d40c18b-0e28-47d0-8626-7f544a9cd711"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:29:09 crc kubenswrapper[4830]: I0227 16:29:09.433828 4830 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d40c18b-0e28-47d0-8626-7f544a9cd711-config\") on node \"crc\" DevicePath \"\"" Feb 27 16:29:09 crc kubenswrapper[4830]: I0227 16:29:09.433856 4830 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9d40c18b-0e28-47d0-8626-7f544a9cd711-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 27 16:29:09 crc kubenswrapper[4830]: I0227 16:29:09.433867 4830 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9d40c18b-0e28-47d0-8626-7f544a9cd711-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 27 16:29:09 crc kubenswrapper[4830]: I0227 16:29:09.433876 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nnptc\" (UniqueName: \"kubernetes.io/projected/9d40c18b-0e28-47d0-8626-7f544a9cd711-kube-api-access-nnptc\") on node \"crc\" DevicePath \"\"" Feb 27 16:29:09 crc kubenswrapper[4830]: I0227 16:29:09.433884 4830 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9d40c18b-0e28-47d0-8626-7f544a9cd711-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 27 16:29:09 crc kubenswrapper[4830]: I0227 16:29:09.542741 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-vqkl9"] Feb 27 16:29:09 crc kubenswrapper[4830]: I0227 16:29:09.581372 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-b9fgg"] Feb 27 16:29:09 crc kubenswrapper[4830]: I0227 16:29:09.582035 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 27 16:29:09 crc kubenswrapper[4830]: W0227 16:29:09.593076 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod67a7f858_b1fb_4547_9880_8f496d704f48.slice/crio-fe6195e6d80b4607701cb9026c48088c2d230eb0b7443f4ce3245b3ef14fc6dc WatchSource:0}: Error finding container fe6195e6d80b4607701cb9026c48088c2d230eb0b7443f4ce3245b3ef14fc6dc: Status 404 returned error can't find the container with id fe6195e6d80b4607701cb9026c48088c2d230eb0b7443f4ce3245b3ef14fc6dc Feb 27 16:29:09 crc kubenswrapper[4830]: I0227 16:29:09.659628 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 27 16:29:09 crc kubenswrapper[4830]: I0227 16:29:09.728126 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-b9fgg" event={"ID":"67a7f858-b1fb-4547-9880-8f496d704f48","Type":"ContainerStarted","Data":"fe6195e6d80b4607701cb9026c48088c2d230eb0b7443f4ce3245b3ef14fc6dc"} Feb 27 16:29:09 crc kubenswrapper[4830]: I0227 16:29:09.742960 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-vrjmz" event={"ID":"a69bc2ed-ce70-4828-af02-ccac1c3f0c10","Type":"ContainerStarted","Data":"f1942395b439c33fd144b9ce5069c931029aa29ce43019767aebdb680fc41a8d"} Feb 27 16:29:09 crc kubenswrapper[4830]: I0227 16:29:09.743099 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-dcxkj"] Feb 27 16:29:09 crc kubenswrapper[4830]: I0227 16:29:09.744547 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-jk57b" event={"ID":"33ab5b85-8198-4e45-89ad-c1c08e39fe20","Type":"ContainerStarted","Data":"bb6df788f5e8ca91abf23ff245808d9a7fe090cde362eff70896185f860b5a62"} Feb 27 16:29:09 crc kubenswrapper[4830]: I0227 16:29:09.744571 4830 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/keystone-bootstrap-jk57b" event={"ID":"33ab5b85-8198-4e45-89ad-c1c08e39fe20","Type":"ContainerStarted","Data":"abba77f47639382c49e356bcc94b9b612f65bb5f39941eeae0769a674c3cf451"} Feb 27 16:29:09 crc kubenswrapper[4830]: I0227 16:29:09.749813 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"efa2d7d0-3613-4580-be80-b1a72de4501d","Type":"ContainerStarted","Data":"f072f58e2b7fa0c3b23478fba4b494ee650a4418c8597e8c16f9d2af5cc690f2"} Feb 27 16:29:09 crc kubenswrapper[4830]: I0227 16:29:09.754845 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-q9f2f" event={"ID":"9d40c18b-0e28-47d0-8626-7f544a9cd711","Type":"ContainerDied","Data":"98f9297290042cbff2892384775085fb115d773832475c4860274b3e0196dfb1"} Feb 27 16:29:09 crc kubenswrapper[4830]: I0227 16:29:09.754881 4830 scope.go:117] "RemoveContainer" containerID="7d227a20e256f7b1714ec22e233a9003dc5145ea4fac66e3744a979360d5e6e5" Feb 27 16:29:09 crc kubenswrapper[4830]: I0227 16:29:09.755006 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5f59b8f679-q9f2f" Feb 27 16:29:09 crc kubenswrapper[4830]: I0227 16:29:09.772325 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-jk57b" podStartSLOduration=2.772308614 podStartE2EDuration="2.772308614s" podCreationTimestamp="2026-02-27 16:29:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:29:09.762154426 +0000 UTC m=+1345.851426889" watchObservedRunningTime="2026-02-27 16:29:09.772308614 +0000 UTC m=+1345.861581077" Feb 27 16:29:09 crc kubenswrapper[4830]: I0227 16:29:09.774205 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-4d9ld" event={"ID":"52d332d0-98e5-4cff-8486-151b6593c94f","Type":"ContainerStarted","Data":"9eb11a8506bba690e55a72bafbc3808cd479c78cb49f73dc26b47b82227ec393"} Feb 27 16:29:09 crc kubenswrapper[4830]: I0227 16:29:09.788579 4830 generic.go:334] "Generic (PLEG): container finished" podID="bf3ec800-e09f-4fb8-8a1b-55c6bf08dc86" containerID="9994a024e01b59f506bd96204ca204376e24ce5e43fe986a089e355083e80493" exitCode=0 Feb 27 16:29:09 crc kubenswrapper[4830]: I0227 16:29:09.788688 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bbf5cc879-7xpjt" event={"ID":"bf3ec800-e09f-4fb8-8a1b-55c6bf08dc86","Type":"ContainerDied","Data":"9994a024e01b59f506bd96204ca204376e24ce5e43fe986a089e355083e80493"} Feb 27 16:29:09 crc kubenswrapper[4830]: I0227 16:29:09.788715 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bbf5cc879-7xpjt" event={"ID":"bf3ec800-e09f-4fb8-8a1b-55c6bf08dc86","Type":"ContainerStarted","Data":"15ac19d69a7d13acbf585e198014a7fe676ce847750ce58090f77e9bba2117ef"} Feb 27 16:29:09 crc kubenswrapper[4830]: I0227 16:29:09.794551 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-pchsl" Feb 27 16:29:09 crc kubenswrapper[4830]: I0227 16:29:09.794663 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-vqkl9" event={"ID":"b05d69f2-31a8-4212-ad9a-8f2bec833edd","Type":"ContainerStarted","Data":"e1bfcb91a2322670780b165425fc25ad38100b057c266b270e43e01a14db7849"} Feb 27 16:29:09 crc kubenswrapper[4830]: I0227 16:29:09.879138 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-q9f2f"] Feb 27 16:29:09 crc kubenswrapper[4830]: I0227 16:29:09.889549 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-q9f2f"] Feb 27 16:29:09 crc kubenswrapper[4830]: I0227 16:29:09.896259 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-pchsl"] Feb 27 16:29:09 crc kubenswrapper[4830]: I0227 16:29:09.910667 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-pchsl"] Feb 27 16:29:10 crc kubenswrapper[4830]: I0227 16:29:10.190907 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 27 16:29:10 crc kubenswrapper[4830]: I0227 16:29:10.191352 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bbf5cc879-7xpjt" Feb 27 16:29:10 crc kubenswrapper[4830]: I0227 16:29:10.247741 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf3ec800-e09f-4fb8-8a1b-55c6bf08dc86-config\") pod \"bf3ec800-e09f-4fb8-8a1b-55c6bf08dc86\" (UID: \"bf3ec800-e09f-4fb8-8a1b-55c6bf08dc86\") " Feb 27 16:29:10 crc kubenswrapper[4830]: I0227 16:29:10.247851 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bf3ec800-e09f-4fb8-8a1b-55c6bf08dc86-ovsdbserver-nb\") pod \"bf3ec800-e09f-4fb8-8a1b-55c6bf08dc86\" (UID: \"bf3ec800-e09f-4fb8-8a1b-55c6bf08dc86\") " Feb 27 16:29:10 crc kubenswrapper[4830]: I0227 16:29:10.247886 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qffh4\" (UniqueName: \"kubernetes.io/projected/bf3ec800-e09f-4fb8-8a1b-55c6bf08dc86-kube-api-access-qffh4\") pod \"bf3ec800-e09f-4fb8-8a1b-55c6bf08dc86\" (UID: \"bf3ec800-e09f-4fb8-8a1b-55c6bf08dc86\") " Feb 27 16:29:10 crc kubenswrapper[4830]: I0227 16:29:10.247922 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bf3ec800-e09f-4fb8-8a1b-55c6bf08dc86-dns-svc\") pod \"bf3ec800-e09f-4fb8-8a1b-55c6bf08dc86\" (UID: \"bf3ec800-e09f-4fb8-8a1b-55c6bf08dc86\") " Feb 27 16:29:10 crc kubenswrapper[4830]: I0227 16:29:10.247974 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bf3ec800-e09f-4fb8-8a1b-55c6bf08dc86-ovsdbserver-sb\") pod \"bf3ec800-e09f-4fb8-8a1b-55c6bf08dc86\" (UID: \"bf3ec800-e09f-4fb8-8a1b-55c6bf08dc86\") " Feb 27 16:29:10 crc kubenswrapper[4830]: I0227 16:29:10.248034 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" 
(UniqueName: \"kubernetes.io/configmap/bf3ec800-e09f-4fb8-8a1b-55c6bf08dc86-dns-swift-storage-0\") pod \"bf3ec800-e09f-4fb8-8a1b-55c6bf08dc86\" (UID: \"bf3ec800-e09f-4fb8-8a1b-55c6bf08dc86\") " Feb 27 16:29:10 crc kubenswrapper[4830]: I0227 16:29:10.255353 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf3ec800-e09f-4fb8-8a1b-55c6bf08dc86-kube-api-access-qffh4" (OuterVolumeSpecName: "kube-api-access-qffh4") pod "bf3ec800-e09f-4fb8-8a1b-55c6bf08dc86" (UID: "bf3ec800-e09f-4fb8-8a1b-55c6bf08dc86"). InnerVolumeSpecName "kube-api-access-qffh4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:29:10 crc kubenswrapper[4830]: I0227 16:29:10.277576 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf3ec800-e09f-4fb8-8a1b-55c6bf08dc86-config" (OuterVolumeSpecName: "config") pod "bf3ec800-e09f-4fb8-8a1b-55c6bf08dc86" (UID: "bf3ec800-e09f-4fb8-8a1b-55c6bf08dc86"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:29:10 crc kubenswrapper[4830]: I0227 16:29:10.286028 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf3ec800-e09f-4fb8-8a1b-55c6bf08dc86-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "bf3ec800-e09f-4fb8-8a1b-55c6bf08dc86" (UID: "bf3ec800-e09f-4fb8-8a1b-55c6bf08dc86"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:29:10 crc kubenswrapper[4830]: I0227 16:29:10.298592 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf3ec800-e09f-4fb8-8a1b-55c6bf08dc86-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "bf3ec800-e09f-4fb8-8a1b-55c6bf08dc86" (UID: "bf3ec800-e09f-4fb8-8a1b-55c6bf08dc86"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:29:10 crc kubenswrapper[4830]: I0227 16:29:10.298892 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf3ec800-e09f-4fb8-8a1b-55c6bf08dc86-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "bf3ec800-e09f-4fb8-8a1b-55c6bf08dc86" (UID: "bf3ec800-e09f-4fb8-8a1b-55c6bf08dc86"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:29:10 crc kubenswrapper[4830]: I0227 16:29:10.298986 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf3ec800-e09f-4fb8-8a1b-55c6bf08dc86-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "bf3ec800-e09f-4fb8-8a1b-55c6bf08dc86" (UID: "bf3ec800-e09f-4fb8-8a1b-55c6bf08dc86"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:29:10 crc kubenswrapper[4830]: I0227 16:29:10.350084 4830 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bf3ec800-e09f-4fb8-8a1b-55c6bf08dc86-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 27 16:29:10 crc kubenswrapper[4830]: I0227 16:29:10.350119 4830 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf3ec800-e09f-4fb8-8a1b-55c6bf08dc86-config\") on node \"crc\" DevicePath \"\"" Feb 27 16:29:10 crc kubenswrapper[4830]: I0227 16:29:10.350130 4830 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bf3ec800-e09f-4fb8-8a1b-55c6bf08dc86-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 27 16:29:10 crc kubenswrapper[4830]: I0227 16:29:10.350139 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qffh4\" (UniqueName: 
\"kubernetes.io/projected/bf3ec800-e09f-4fb8-8a1b-55c6bf08dc86-kube-api-access-qffh4\") on node \"crc\" DevicePath \"\"" Feb 27 16:29:10 crc kubenswrapper[4830]: I0227 16:29:10.350149 4830 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bf3ec800-e09f-4fb8-8a1b-55c6bf08dc86-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 27 16:29:10 crc kubenswrapper[4830]: I0227 16:29:10.350157 4830 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bf3ec800-e09f-4fb8-8a1b-55c6bf08dc86-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 27 16:29:10 crc kubenswrapper[4830]: I0227 16:29:10.465005 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 27 16:29:10 crc kubenswrapper[4830]: I0227 16:29:10.516027 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 27 16:29:10 crc kubenswrapper[4830]: I0227 16:29:10.533838 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 27 16:29:10 crc kubenswrapper[4830]: I0227 16:29:10.574127 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 27 16:29:10 crc kubenswrapper[4830]: I0227 16:29:10.776621 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1dd5a364-2f28-4e8b-831c-08ed09984745" path="/var/lib/kubelet/pods/1dd5a364-2f28-4e8b-831c-08ed09984745/volumes" Feb 27 16:29:10 crc kubenswrapper[4830]: I0227 16:29:10.781931 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d40c18b-0e28-47d0-8626-7f544a9cd711" path="/var/lib/kubelet/pods/9d40c18b-0e28-47d0-8626-7f544a9cd711/volumes" Feb 27 16:29:10 crc kubenswrapper[4830]: I0227 16:29:10.819022 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-dcxkj" 
event={"ID":"459173e8-7571-47b7-9af8-3bd2d24d4e21","Type":"ContainerStarted","Data":"13b19b6f06c9501b054b389763ab1794a7ca7f8055e2ecc66c44dddc1a0f6fd0"} Feb 27 16:29:10 crc kubenswrapper[4830]: I0227 16:29:10.832350 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6c2a6285-eb97-45b0-b45b-b42c78e4e2b5","Type":"ContainerStarted","Data":"dddaa0942eb867fe732b34d249cd5ad6a422feb6b9d3a0be1b495b6dcd62e150"} Feb 27 16:29:10 crc kubenswrapper[4830]: I0227 16:29:10.836339 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"20d045c1-a920-4c3c-bba8-e3666f4a6549","Type":"ContainerStarted","Data":"faf487805191db1eb2a94b902cb7142c01176d2e157c9b628ffb52ff8337019e"} Feb 27 16:29:10 crc kubenswrapper[4830]: I0227 16:29:10.855111 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-4d9ld" event={"ID":"52d332d0-98e5-4cff-8486-151b6593c94f","Type":"ContainerStarted","Data":"fa02ddd168c52a09e17f02290dc6532b6d413641b49271f0c4fad4240693f403"} Feb 27 16:29:10 crc kubenswrapper[4830]: I0227 16:29:10.862275 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bbf5cc879-7xpjt" event={"ID":"bf3ec800-e09f-4fb8-8a1b-55c6bf08dc86","Type":"ContainerDied","Data":"15ac19d69a7d13acbf585e198014a7fe676ce847750ce58090f77e9bba2117ef"} Feb 27 16:29:10 crc kubenswrapper[4830]: I0227 16:29:10.862327 4830 scope.go:117] "RemoveContainer" containerID="9994a024e01b59f506bd96204ca204376e24ce5e43fe986a089e355083e80493" Feb 27 16:29:10 crc kubenswrapper[4830]: I0227 16:29:10.862438 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bbf5cc879-7xpjt" Feb 27 16:29:10 crc kubenswrapper[4830]: I0227 16:29:10.867445 4830 generic.go:334] "Generic (PLEG): container finished" podID="b05d69f2-31a8-4212-ad9a-8f2bec833edd" containerID="66691e78bbd70b07b2bdb539dd9a20b73d57e3ed0c6f37039c2c988e694d1d0e" exitCode=0 Feb 27 16:29:10 crc kubenswrapper[4830]: I0227 16:29:10.868935 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-vqkl9" event={"ID":"b05d69f2-31a8-4212-ad9a-8f2bec833edd","Type":"ContainerDied","Data":"66691e78bbd70b07b2bdb539dd9a20b73d57e3ed0c6f37039c2c988e694d1d0e"} Feb 27 16:29:10 crc kubenswrapper[4830]: I0227 16:29:10.877161 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-4d9ld" podStartSLOduration=2.8771444 podStartE2EDuration="2.8771444s" podCreationTimestamp="2026-02-27 16:29:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:29:10.869455591 +0000 UTC m=+1346.958728064" watchObservedRunningTime="2026-02-27 16:29:10.8771444 +0000 UTC m=+1346.966416863" Feb 27 16:29:10 crc kubenswrapper[4830]: I0227 16:29:10.934205 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-7xpjt"] Feb 27 16:29:10 crc kubenswrapper[4830]: I0227 16:29:10.959361 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-7xpjt"] Feb 27 16:29:11 crc kubenswrapper[4830]: I0227 16:29:11.892764 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-vqkl9" event={"ID":"b05d69f2-31a8-4212-ad9a-8f2bec833edd","Type":"ContainerStarted","Data":"1f0e3dc4b4d829a2806ddef6c08bba3c9fbb78fe1be3327ca2005557f8ba019e"} Feb 27 16:29:11 crc kubenswrapper[4830]: I0227 16:29:11.893310 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/dnsmasq-dns-56df8fb6b7-vqkl9" Feb 27 16:29:11 crc kubenswrapper[4830]: I0227 16:29:11.895458 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"20d045c1-a920-4c3c-bba8-e3666f4a6549","Type":"ContainerStarted","Data":"2c074c557312f5e7da22073901b6c08bd89c311ba77a25795b97178682884f7d"} Feb 27 16:29:11 crc kubenswrapper[4830]: I0227 16:29:11.914655 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-56df8fb6b7-vqkl9" podStartSLOduration=3.914631542 podStartE2EDuration="3.914631542s" podCreationTimestamp="2026-02-27 16:29:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:29:11.909068215 +0000 UTC m=+1347.998340688" watchObservedRunningTime="2026-02-27 16:29:11.914631542 +0000 UTC m=+1348.003904005" Feb 27 16:29:12 crc kubenswrapper[4830]: I0227 16:29:12.784996 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf3ec800-e09f-4fb8-8a1b-55c6bf08dc86" path="/var/lib/kubelet/pods/bf3ec800-e09f-4fb8-8a1b-55c6bf08dc86/volumes" Feb 27 16:29:12 crc kubenswrapper[4830]: I0227 16:29:12.909906 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6c2a6285-eb97-45b0-b45b-b42c78e4e2b5","Type":"ContainerStarted","Data":"49b234618e387f9db69268adc60b9d401ddf8860800b126b07c12e1ab28d3e20"} Feb 27 16:29:12 crc kubenswrapper[4830]: I0227 16:29:12.912628 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"20d045c1-a920-4c3c-bba8-e3666f4a6549","Type":"ContainerStarted","Data":"9311e1c659e238d4071bb43b33062672a357e6dfb718e43d49b6339aac2adac4"} Feb 27 16:29:13 crc kubenswrapper[4830]: I0227 16:29:13.937025 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" 
podUID="20d045c1-a920-4c3c-bba8-e3666f4a6549" containerName="glance-log" containerID="cri-o://2c074c557312f5e7da22073901b6c08bd89c311ba77a25795b97178682884f7d" gracePeriod=30 Feb 27 16:29:13 crc kubenswrapper[4830]: I0227 16:29:13.937345 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6c2a6285-eb97-45b0-b45b-b42c78e4e2b5","Type":"ContainerStarted","Data":"724e28a9cf241814a075686e9aca8a465db331a9eb2c26d9920fe90d63f091f3"} Feb 27 16:29:13 crc kubenswrapper[4830]: I0227 16:29:13.937400 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="20d045c1-a920-4c3c-bba8-e3666f4a6549" containerName="glance-httpd" containerID="cri-o://9311e1c659e238d4071bb43b33062672a357e6dfb718e43d49b6339aac2adac4" gracePeriod=30 Feb 27 16:29:13 crc kubenswrapper[4830]: I0227 16:29:13.971987 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=6.971967834 podStartE2EDuration="6.971967834s" podCreationTimestamp="2026-02-27 16:29:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:29:13.964546872 +0000 UTC m=+1350.053819345" watchObservedRunningTime="2026-02-27 16:29:13.971967834 +0000 UTC m=+1350.061240297" Feb 27 16:29:14 crc kubenswrapper[4830]: I0227 16:29:14.954597 4830 generic.go:334] "Generic (PLEG): container finished" podID="20d045c1-a920-4c3c-bba8-e3666f4a6549" containerID="9311e1c659e238d4071bb43b33062672a357e6dfb718e43d49b6339aac2adac4" exitCode=0 Feb 27 16:29:14 crc kubenswrapper[4830]: I0227 16:29:14.954819 4830 generic.go:334] "Generic (PLEG): container finished" podID="20d045c1-a920-4c3c-bba8-e3666f4a6549" containerID="2c074c557312f5e7da22073901b6c08bd89c311ba77a25795b97178682884f7d" exitCode=143 Feb 27 16:29:14 crc kubenswrapper[4830]: I0227 16:29:14.954677 
4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"20d045c1-a920-4c3c-bba8-e3666f4a6549","Type":"ContainerDied","Data":"9311e1c659e238d4071bb43b33062672a357e6dfb718e43d49b6339aac2adac4"} Feb 27 16:29:14 crc kubenswrapper[4830]: I0227 16:29:14.954916 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"20d045c1-a920-4c3c-bba8-e3666f4a6549","Type":"ContainerDied","Data":"2c074c557312f5e7da22073901b6c08bd89c311ba77a25795b97178682884f7d"} Feb 27 16:29:14 crc kubenswrapper[4830]: I0227 16:29:14.954971 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="6c2a6285-eb97-45b0-b45b-b42c78e4e2b5" containerName="glance-log" containerID="cri-o://49b234618e387f9db69268adc60b9d401ddf8860800b126b07c12e1ab28d3e20" gracePeriod=30 Feb 27 16:29:14 crc kubenswrapper[4830]: I0227 16:29:14.955010 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="6c2a6285-eb97-45b0-b45b-b42c78e4e2b5" containerName="glance-httpd" containerID="cri-o://724e28a9cf241814a075686e9aca8a465db331a9eb2c26d9920fe90d63f091f3" gracePeriod=30 Feb 27 16:29:14 crc kubenswrapper[4830]: I0227 16:29:14.988481 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=7.988465362 podStartE2EDuration="7.988465362s" podCreationTimestamp="2026-02-27 16:29:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:29:14.983938661 +0000 UTC m=+1351.073211114" watchObservedRunningTime="2026-02-27 16:29:14.988465362 +0000 UTC m=+1351.077737815" Feb 27 16:29:15 crc kubenswrapper[4830]: E0227 16:29:15.230053 4830 cadvisor_stats_provider.go:516] "Partial failure issuing 
cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6c2a6285_eb97_45b0_b45b_b42c78e4e2b5.slice/crio-724e28a9cf241814a075686e9aca8a465db331a9eb2c26d9920fe90d63f091f3.scope\": RecentStats: unable to find data in memory cache]" Feb 27 16:29:15 crc kubenswrapper[4830]: I0227 16:29:15.966983 4830 generic.go:334] "Generic (PLEG): container finished" podID="6c2a6285-eb97-45b0-b45b-b42c78e4e2b5" containerID="724e28a9cf241814a075686e9aca8a465db331a9eb2c26d9920fe90d63f091f3" exitCode=0 Feb 27 16:29:15 crc kubenswrapper[4830]: I0227 16:29:15.967214 4830 generic.go:334] "Generic (PLEG): container finished" podID="6c2a6285-eb97-45b0-b45b-b42c78e4e2b5" containerID="49b234618e387f9db69268adc60b9d401ddf8860800b126b07c12e1ab28d3e20" exitCode=143 Feb 27 16:29:15 crc kubenswrapper[4830]: I0227 16:29:15.967050 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6c2a6285-eb97-45b0-b45b-b42c78e4e2b5","Type":"ContainerDied","Data":"724e28a9cf241814a075686e9aca8a465db331a9eb2c26d9920fe90d63f091f3"} Feb 27 16:29:15 crc kubenswrapper[4830]: I0227 16:29:15.967251 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6c2a6285-eb97-45b0-b45b-b42c78e4e2b5","Type":"ContainerDied","Data":"49b234618e387f9db69268adc60b9d401ddf8860800b126b07c12e1ab28d3e20"} Feb 27 16:29:18 crc kubenswrapper[4830]: I0227 16:29:18.697277 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-56df8fb6b7-vqkl9" Feb 27 16:29:18 crc kubenswrapper[4830]: I0227 16:29:18.793886 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-cjq7v"] Feb 27 16:29:18 crc kubenswrapper[4830]: I0227 16:29:18.794171 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-b8fbc5445-cjq7v" 
podUID="1434c895-fa3e-4feb-a56a-0451f1f16a3b" containerName="dnsmasq-dns" containerID="cri-o://51057a0e1285abbf0d8d8183a853aec44ee1a9c4c03ece1d5f094ba69d645778" gracePeriod=10 Feb 27 16:29:20 crc kubenswrapper[4830]: I0227 16:29:20.007566 4830 generic.go:334] "Generic (PLEG): container finished" podID="1434c895-fa3e-4feb-a56a-0451f1f16a3b" containerID="51057a0e1285abbf0d8d8183a853aec44ee1a9c4c03ece1d5f094ba69d645778" exitCode=0 Feb 27 16:29:20 crc kubenswrapper[4830]: I0227 16:29:20.007802 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-cjq7v" event={"ID":"1434c895-fa3e-4feb-a56a-0451f1f16a3b","Type":"ContainerDied","Data":"51057a0e1285abbf0d8d8183a853aec44ee1a9c4c03ece1d5f094ba69d645778"} Feb 27 16:29:22 crc kubenswrapper[4830]: I0227 16:29:22.025983 4830 generic.go:334] "Generic (PLEG): container finished" podID="33ab5b85-8198-4e45-89ad-c1c08e39fe20" containerID="bb6df788f5e8ca91abf23ff245808d9a7fe090cde362eff70896185f860b5a62" exitCode=0 Feb 27 16:29:22 crc kubenswrapper[4830]: I0227 16:29:22.026047 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-jk57b" event={"ID":"33ab5b85-8198-4e45-89ad-c1c08e39fe20","Type":"ContainerDied","Data":"bb6df788f5e8ca91abf23ff245808d9a7fe090cde362eff70896185f860b5a62"} Feb 27 16:29:23 crc kubenswrapper[4830]: E0227 16:29:23.055033 4830 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-placement-api:current-podified" Feb 27 16:29:23 crc kubenswrapper[4830]: E0227 16:29:23.055649 4830 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:placement-db-sync,Image:quay.io/podified-antelope-centos9/openstack-placement-api:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/placement,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:placement-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n7fz4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42482,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
placement-db-sync-b9fgg_openstack(67a7f858-b1fb-4547-9880-8f496d704f48): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 27 16:29:23 crc kubenswrapper[4830]: E0227 16:29:23.056895 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/placement-db-sync-b9fgg" podUID="67a7f858-b1fb-4547-9880-8f496d704f48" Feb 27 16:29:23 crc kubenswrapper[4830]: I0227 16:29:23.275583 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 27 16:29:23 crc kubenswrapper[4830]: I0227 16:29:23.361538 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/20d045c1-a920-4c3c-bba8-e3666f4a6549-httpd-run\") pod \"20d045c1-a920-4c3c-bba8-e3666f4a6549\" (UID: \"20d045c1-a920-4c3c-bba8-e3666f4a6549\") " Feb 27 16:29:23 crc kubenswrapper[4830]: I0227 16:29:23.361833 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20d045c1-a920-4c3c-bba8-e3666f4a6549-combined-ca-bundle\") pod \"20d045c1-a920-4c3c-bba8-e3666f4a6549\" (UID: \"20d045c1-a920-4c3c-bba8-e3666f4a6549\") " Feb 27 16:29:23 crc kubenswrapper[4830]: I0227 16:29:23.361855 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/20d045c1-a920-4c3c-bba8-e3666f4a6549-logs\") pod \"20d045c1-a920-4c3c-bba8-e3666f4a6549\" (UID: \"20d045c1-a920-4c3c-bba8-e3666f4a6549\") " Feb 27 16:29:23 crc kubenswrapper[4830]: I0227 16:29:23.361896 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/20d045c1-a920-4c3c-bba8-e3666f4a6549-scripts\") pod \"20d045c1-a920-4c3c-bba8-e3666f4a6549\" (UID: \"20d045c1-a920-4c3c-bba8-e3666f4a6549\") " Feb 27 16:29:23 crc kubenswrapper[4830]: I0227 16:29:23.361998 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20d045c1-a920-4c3c-bba8-e3666f4a6549-config-data\") pod \"20d045c1-a920-4c3c-bba8-e3666f4a6549\" (UID: \"20d045c1-a920-4c3c-bba8-e3666f4a6549\") " Feb 27 16:29:23 crc kubenswrapper[4830]: I0227 16:29:23.362062 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"20d045c1-a920-4c3c-bba8-e3666f4a6549\" (UID: \"20d045c1-a920-4c3c-bba8-e3666f4a6549\") " Feb 27 16:29:23 crc kubenswrapper[4830]: I0227 16:29:23.362116 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jx2qd\" (UniqueName: \"kubernetes.io/projected/20d045c1-a920-4c3c-bba8-e3666f4a6549-kube-api-access-jx2qd\") pod \"20d045c1-a920-4c3c-bba8-e3666f4a6549\" (UID: \"20d045c1-a920-4c3c-bba8-e3666f4a6549\") " Feb 27 16:29:23 crc kubenswrapper[4830]: I0227 16:29:23.362657 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20d045c1-a920-4c3c-bba8-e3666f4a6549-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "20d045c1-a920-4c3c-bba8-e3666f4a6549" (UID: "20d045c1-a920-4c3c-bba8-e3666f4a6549"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 16:29:23 crc kubenswrapper[4830]: I0227 16:29:23.362752 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20d045c1-a920-4c3c-bba8-e3666f4a6549-logs" (OuterVolumeSpecName: "logs") pod "20d045c1-a920-4c3c-bba8-e3666f4a6549" (UID: "20d045c1-a920-4c3c-bba8-e3666f4a6549"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 16:29:23 crc kubenswrapper[4830]: I0227 16:29:23.369266 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20d045c1-a920-4c3c-bba8-e3666f4a6549-kube-api-access-jx2qd" (OuterVolumeSpecName: "kube-api-access-jx2qd") pod "20d045c1-a920-4c3c-bba8-e3666f4a6549" (UID: "20d045c1-a920-4c3c-bba8-e3666f4a6549"). InnerVolumeSpecName "kube-api-access-jx2qd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:29:23 crc kubenswrapper[4830]: I0227 16:29:23.370238 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "glance") pod "20d045c1-a920-4c3c-bba8-e3666f4a6549" (UID: "20d045c1-a920-4c3c-bba8-e3666f4a6549"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 27 16:29:23 crc kubenswrapper[4830]: I0227 16:29:23.384813 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20d045c1-a920-4c3c-bba8-e3666f4a6549-scripts" (OuterVolumeSpecName: "scripts") pod "20d045c1-a920-4c3c-bba8-e3666f4a6549" (UID: "20d045c1-a920-4c3c-bba8-e3666f4a6549"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:29:23 crc kubenswrapper[4830]: I0227 16:29:23.406726 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20d045c1-a920-4c3c-bba8-e3666f4a6549-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "20d045c1-a920-4c3c-bba8-e3666f4a6549" (UID: "20d045c1-a920-4c3c-bba8-e3666f4a6549"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:29:23 crc kubenswrapper[4830]: I0227 16:29:23.430121 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20d045c1-a920-4c3c-bba8-e3666f4a6549-config-data" (OuterVolumeSpecName: "config-data") pod "20d045c1-a920-4c3c-bba8-e3666f4a6549" (UID: "20d045c1-a920-4c3c-bba8-e3666f4a6549"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:29:23 crc kubenswrapper[4830]: I0227 16:29:23.463786 4830 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Feb 27 16:29:23 crc kubenswrapper[4830]: I0227 16:29:23.463888 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jx2qd\" (UniqueName: \"kubernetes.io/projected/20d045c1-a920-4c3c-bba8-e3666f4a6549-kube-api-access-jx2qd\") on node \"crc\" DevicePath \"\"" Feb 27 16:29:23 crc kubenswrapper[4830]: I0227 16:29:23.463914 4830 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/20d045c1-a920-4c3c-bba8-e3666f4a6549-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 27 16:29:23 crc kubenswrapper[4830]: I0227 16:29:23.463924 4830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20d045c1-a920-4c3c-bba8-e3666f4a6549-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 16:29:23 crc kubenswrapper[4830]: I0227 16:29:23.463934 4830 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/20d045c1-a920-4c3c-bba8-e3666f4a6549-logs\") on node \"crc\" DevicePath \"\"" Feb 27 16:29:23 crc kubenswrapper[4830]: I0227 16:29:23.463962 4830 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/20d045c1-a920-4c3c-bba8-e3666f4a6549-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 16:29:23 crc kubenswrapper[4830]: I0227 16:29:23.463972 4830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20d045c1-a920-4c3c-bba8-e3666f4a6549-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 16:29:23 crc kubenswrapper[4830]: I0227 16:29:23.481720 4830 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Feb 27 16:29:23 crc kubenswrapper[4830]: I0227 16:29:23.496277 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-cjq7v" podUID="1434c895-fa3e-4feb-a56a-0451f1f16a3b" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.123:5353: connect: connection refused" Feb 27 16:29:23 crc kubenswrapper[4830]: I0227 16:29:23.565272 4830 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Feb 27 16:29:24 crc kubenswrapper[4830]: I0227 16:29:24.056419 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 27 16:29:24 crc kubenswrapper[4830]: I0227 16:29:24.056751 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"20d045c1-a920-4c3c-bba8-e3666f4a6549","Type":"ContainerDied","Data":"faf487805191db1eb2a94b902cb7142c01176d2e157c9b628ffb52ff8337019e"} Feb 27 16:29:24 crc kubenswrapper[4830]: I0227 16:29:24.056813 4830 scope.go:117] "RemoveContainer" containerID="9311e1c659e238d4071bb43b33062672a357e6dfb718e43d49b6339aac2adac4" Feb 27 16:29:24 crc kubenswrapper[4830]: E0227 16:29:24.071521 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-placement-api:current-podified\\\"\"" pod="openstack/placement-db-sync-b9fgg" podUID="67a7f858-b1fb-4547-9880-8f496d704f48" Feb 27 16:29:24 crc kubenswrapper[4830]: I0227 16:29:24.122479 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 27 16:29:24 crc kubenswrapper[4830]: I0227 16:29:24.136356 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 27 16:29:24 crc kubenswrapper[4830]: I0227 16:29:24.155251 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 27 16:29:24 crc kubenswrapper[4830]: E0227 16:29:24.155681 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20d045c1-a920-4c3c-bba8-e3666f4a6549" containerName="glance-httpd" Feb 27 16:29:24 crc kubenswrapper[4830]: I0227 16:29:24.155705 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="20d045c1-a920-4c3c-bba8-e3666f4a6549" containerName="glance-httpd" Feb 27 16:29:24 crc kubenswrapper[4830]: E0227 16:29:24.155726 4830 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="bf3ec800-e09f-4fb8-8a1b-55c6bf08dc86" containerName="init" Feb 27 16:29:24 crc kubenswrapper[4830]: I0227 16:29:24.155735 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf3ec800-e09f-4fb8-8a1b-55c6bf08dc86" containerName="init" Feb 27 16:29:24 crc kubenswrapper[4830]: E0227 16:29:24.155759 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d40c18b-0e28-47d0-8626-7f544a9cd711" containerName="init" Feb 27 16:29:24 crc kubenswrapper[4830]: I0227 16:29:24.155769 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d40c18b-0e28-47d0-8626-7f544a9cd711" containerName="init" Feb 27 16:29:24 crc kubenswrapper[4830]: E0227 16:29:24.155800 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20d045c1-a920-4c3c-bba8-e3666f4a6549" containerName="glance-log" Feb 27 16:29:24 crc kubenswrapper[4830]: I0227 16:29:24.155808 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="20d045c1-a920-4c3c-bba8-e3666f4a6549" containerName="glance-log" Feb 27 16:29:24 crc kubenswrapper[4830]: I0227 16:29:24.156334 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="20d045c1-a920-4c3c-bba8-e3666f4a6549" containerName="glance-httpd" Feb 27 16:29:24 crc kubenswrapper[4830]: I0227 16:29:24.156366 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d40c18b-0e28-47d0-8626-7f544a9cd711" containerName="init" Feb 27 16:29:24 crc kubenswrapper[4830]: I0227 16:29:24.156380 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="20d045c1-a920-4c3c-bba8-e3666f4a6549" containerName="glance-log" Feb 27 16:29:24 crc kubenswrapper[4830]: I0227 16:29:24.156392 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf3ec800-e09f-4fb8-8a1b-55c6bf08dc86" containerName="init" Feb 27 16:29:24 crc kubenswrapper[4830]: I0227 16:29:24.157853 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 27 16:29:24 crc kubenswrapper[4830]: I0227 16:29:24.160823 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 27 16:29:24 crc kubenswrapper[4830]: I0227 16:29:24.161269 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 27 16:29:24 crc kubenswrapper[4830]: I0227 16:29:24.167812 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 27 16:29:24 crc kubenswrapper[4830]: I0227 16:29:24.280819 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krhd5\" (UniqueName: \"kubernetes.io/projected/bb4fe631-52f0-445f-9e4c-90f4137bdba6-kube-api-access-krhd5\") pod \"glance-default-external-api-0\" (UID: \"bb4fe631-52f0-445f-9e4c-90f4137bdba6\") " pod="openstack/glance-default-external-api-0" Feb 27 16:29:24 crc kubenswrapper[4830]: I0227 16:29:24.281323 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bb4fe631-52f0-445f-9e4c-90f4137bdba6-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"bb4fe631-52f0-445f-9e4c-90f4137bdba6\") " pod="openstack/glance-default-external-api-0" Feb 27 16:29:24 crc kubenswrapper[4830]: I0227 16:29:24.281380 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb4fe631-52f0-445f-9e4c-90f4137bdba6-scripts\") pod \"glance-default-external-api-0\" (UID: \"bb4fe631-52f0-445f-9e4c-90f4137bdba6\") " pod="openstack/glance-default-external-api-0" Feb 27 16:29:24 crc kubenswrapper[4830]: I0227 16:29:24.281614 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/bb4fe631-52f0-445f-9e4c-90f4137bdba6-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"bb4fe631-52f0-445f-9e4c-90f4137bdba6\") " pod="openstack/glance-default-external-api-0" Feb 27 16:29:24 crc kubenswrapper[4830]: I0227 16:29:24.281710 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb4fe631-52f0-445f-9e4c-90f4137bdba6-logs\") pod \"glance-default-external-api-0\" (UID: \"bb4fe631-52f0-445f-9e4c-90f4137bdba6\") " pod="openstack/glance-default-external-api-0" Feb 27 16:29:24 crc kubenswrapper[4830]: I0227 16:29:24.281770 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"bb4fe631-52f0-445f-9e4c-90f4137bdba6\") " pod="openstack/glance-default-external-api-0" Feb 27 16:29:24 crc kubenswrapper[4830]: I0227 16:29:24.281823 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb4fe631-52f0-445f-9e4c-90f4137bdba6-config-data\") pod \"glance-default-external-api-0\" (UID: \"bb4fe631-52f0-445f-9e4c-90f4137bdba6\") " pod="openstack/glance-default-external-api-0" Feb 27 16:29:24 crc kubenswrapper[4830]: I0227 16:29:24.281882 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb4fe631-52f0-445f-9e4c-90f4137bdba6-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"bb4fe631-52f0-445f-9e4c-90f4137bdba6\") " pod="openstack/glance-default-external-api-0" Feb 27 16:29:24 crc kubenswrapper[4830]: I0227 16:29:24.383396 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/bb4fe631-52f0-445f-9e4c-90f4137bdba6-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"bb4fe631-52f0-445f-9e4c-90f4137bdba6\") " pod="openstack/glance-default-external-api-0" Feb 27 16:29:24 crc kubenswrapper[4830]: I0227 16:29:24.383434 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb4fe631-52f0-445f-9e4c-90f4137bdba6-scripts\") pod \"glance-default-external-api-0\" (UID: \"bb4fe631-52f0-445f-9e4c-90f4137bdba6\") " pod="openstack/glance-default-external-api-0" Feb 27 16:29:24 crc kubenswrapper[4830]: I0227 16:29:24.383488 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb4fe631-52f0-445f-9e4c-90f4137bdba6-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"bb4fe631-52f0-445f-9e4c-90f4137bdba6\") " pod="openstack/glance-default-external-api-0" Feb 27 16:29:24 crc kubenswrapper[4830]: I0227 16:29:24.383513 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb4fe631-52f0-445f-9e4c-90f4137bdba6-logs\") pod \"glance-default-external-api-0\" (UID: \"bb4fe631-52f0-445f-9e4c-90f4137bdba6\") " pod="openstack/glance-default-external-api-0" Feb 27 16:29:24 crc kubenswrapper[4830]: I0227 16:29:24.383539 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"bb4fe631-52f0-445f-9e4c-90f4137bdba6\") " pod="openstack/glance-default-external-api-0" Feb 27 16:29:24 crc kubenswrapper[4830]: I0227 16:29:24.383560 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb4fe631-52f0-445f-9e4c-90f4137bdba6-config-data\") pod \"glance-default-external-api-0\" 
(UID: \"bb4fe631-52f0-445f-9e4c-90f4137bdba6\") " pod="openstack/glance-default-external-api-0" Feb 27 16:29:24 crc kubenswrapper[4830]: I0227 16:29:24.383582 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb4fe631-52f0-445f-9e4c-90f4137bdba6-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"bb4fe631-52f0-445f-9e4c-90f4137bdba6\") " pod="openstack/glance-default-external-api-0" Feb 27 16:29:24 crc kubenswrapper[4830]: I0227 16:29:24.383624 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-krhd5\" (UniqueName: \"kubernetes.io/projected/bb4fe631-52f0-445f-9e4c-90f4137bdba6-kube-api-access-krhd5\") pod \"glance-default-external-api-0\" (UID: \"bb4fe631-52f0-445f-9e4c-90f4137bdba6\") " pod="openstack/glance-default-external-api-0" Feb 27 16:29:24 crc kubenswrapper[4830]: I0227 16:29:24.384400 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb4fe631-52f0-445f-9e4c-90f4137bdba6-logs\") pod \"glance-default-external-api-0\" (UID: \"bb4fe631-52f0-445f-9e4c-90f4137bdba6\") " pod="openstack/glance-default-external-api-0" Feb 27 16:29:24 crc kubenswrapper[4830]: I0227 16:29:24.384530 4830 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"bb4fe631-52f0-445f-9e4c-90f4137bdba6\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/glance-default-external-api-0" Feb 27 16:29:24 crc kubenswrapper[4830]: I0227 16:29:24.384612 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bb4fe631-52f0-445f-9e4c-90f4137bdba6-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"bb4fe631-52f0-445f-9e4c-90f4137bdba6\") " 
pod="openstack/glance-default-external-api-0" Feb 27 16:29:24 crc kubenswrapper[4830]: I0227 16:29:24.388665 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb4fe631-52f0-445f-9e4c-90f4137bdba6-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"bb4fe631-52f0-445f-9e4c-90f4137bdba6\") " pod="openstack/glance-default-external-api-0" Feb 27 16:29:24 crc kubenswrapper[4830]: I0227 16:29:24.391048 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb4fe631-52f0-445f-9e4c-90f4137bdba6-config-data\") pod \"glance-default-external-api-0\" (UID: \"bb4fe631-52f0-445f-9e4c-90f4137bdba6\") " pod="openstack/glance-default-external-api-0" Feb 27 16:29:24 crc kubenswrapper[4830]: I0227 16:29:24.407013 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb4fe631-52f0-445f-9e4c-90f4137bdba6-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"bb4fe631-52f0-445f-9e4c-90f4137bdba6\") " pod="openstack/glance-default-external-api-0" Feb 27 16:29:24 crc kubenswrapper[4830]: I0227 16:29:24.412090 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-krhd5\" (UniqueName: \"kubernetes.io/projected/bb4fe631-52f0-445f-9e4c-90f4137bdba6-kube-api-access-krhd5\") pod \"glance-default-external-api-0\" (UID: \"bb4fe631-52f0-445f-9e4c-90f4137bdba6\") " pod="openstack/glance-default-external-api-0" Feb 27 16:29:24 crc kubenswrapper[4830]: I0227 16:29:24.413806 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"bb4fe631-52f0-445f-9e4c-90f4137bdba6\") " pod="openstack/glance-default-external-api-0" Feb 27 16:29:24 crc kubenswrapper[4830]: I0227 
16:29:24.424561 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb4fe631-52f0-445f-9e4c-90f4137bdba6-scripts\") pod \"glance-default-external-api-0\" (UID: \"bb4fe631-52f0-445f-9e4c-90f4137bdba6\") " pod="openstack/glance-default-external-api-0" Feb 27 16:29:24 crc kubenswrapper[4830]: I0227 16:29:24.494491 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 27 16:29:24 crc kubenswrapper[4830]: I0227 16:29:24.773490 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20d045c1-a920-4c3c-bba8-e3666f4a6549" path="/var/lib/kubelet/pods/20d045c1-a920-4c3c-bba8-e3666f4a6549/volumes" Feb 27 16:29:27 crc kubenswrapper[4830]: I0227 16:29:27.087867 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-jk57b" event={"ID":"33ab5b85-8198-4e45-89ad-c1c08e39fe20","Type":"ContainerDied","Data":"abba77f47639382c49e356bcc94b9b612f65bb5f39941eeae0769a674c3cf451"} Feb 27 16:29:27 crc kubenswrapper[4830]: I0227 16:29:27.088263 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="abba77f47639382c49e356bcc94b9b612f65bb5f39941eeae0769a674c3cf451" Feb 27 16:29:27 crc kubenswrapper[4830]: I0227 16:29:27.176255 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-jk57b" Feb 27 16:29:27 crc kubenswrapper[4830]: I0227 16:29:27.358166 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33ab5b85-8198-4e45-89ad-c1c08e39fe20-combined-ca-bundle\") pod \"33ab5b85-8198-4e45-89ad-c1c08e39fe20\" (UID: \"33ab5b85-8198-4e45-89ad-c1c08e39fe20\") " Feb 27 16:29:27 crc kubenswrapper[4830]: I0227 16:29:27.358225 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cb75x\" (UniqueName: \"kubernetes.io/projected/33ab5b85-8198-4e45-89ad-c1c08e39fe20-kube-api-access-cb75x\") pod \"33ab5b85-8198-4e45-89ad-c1c08e39fe20\" (UID: \"33ab5b85-8198-4e45-89ad-c1c08e39fe20\") " Feb 27 16:29:27 crc kubenswrapper[4830]: I0227 16:29:27.358276 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/33ab5b85-8198-4e45-89ad-c1c08e39fe20-fernet-keys\") pod \"33ab5b85-8198-4e45-89ad-c1c08e39fe20\" (UID: \"33ab5b85-8198-4e45-89ad-c1c08e39fe20\") " Feb 27 16:29:27 crc kubenswrapper[4830]: I0227 16:29:27.358458 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33ab5b85-8198-4e45-89ad-c1c08e39fe20-scripts\") pod \"33ab5b85-8198-4e45-89ad-c1c08e39fe20\" (UID: \"33ab5b85-8198-4e45-89ad-c1c08e39fe20\") " Feb 27 16:29:27 crc kubenswrapper[4830]: I0227 16:29:27.358507 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/33ab5b85-8198-4e45-89ad-c1c08e39fe20-credential-keys\") pod \"33ab5b85-8198-4e45-89ad-c1c08e39fe20\" (UID: \"33ab5b85-8198-4e45-89ad-c1c08e39fe20\") " Feb 27 16:29:27 crc kubenswrapper[4830]: I0227 16:29:27.358551 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/33ab5b85-8198-4e45-89ad-c1c08e39fe20-config-data\") pod \"33ab5b85-8198-4e45-89ad-c1c08e39fe20\" (UID: \"33ab5b85-8198-4e45-89ad-c1c08e39fe20\") " Feb 27 16:29:27 crc kubenswrapper[4830]: I0227 16:29:27.364010 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33ab5b85-8198-4e45-89ad-c1c08e39fe20-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "33ab5b85-8198-4e45-89ad-c1c08e39fe20" (UID: "33ab5b85-8198-4e45-89ad-c1c08e39fe20"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:29:27 crc kubenswrapper[4830]: I0227 16:29:27.364552 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33ab5b85-8198-4e45-89ad-c1c08e39fe20-kube-api-access-cb75x" (OuterVolumeSpecName: "kube-api-access-cb75x") pod "33ab5b85-8198-4e45-89ad-c1c08e39fe20" (UID: "33ab5b85-8198-4e45-89ad-c1c08e39fe20"). InnerVolumeSpecName "kube-api-access-cb75x". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:29:27 crc kubenswrapper[4830]: I0227 16:29:27.365480 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33ab5b85-8198-4e45-89ad-c1c08e39fe20-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "33ab5b85-8198-4e45-89ad-c1c08e39fe20" (UID: "33ab5b85-8198-4e45-89ad-c1c08e39fe20"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:29:27 crc kubenswrapper[4830]: I0227 16:29:27.383555 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33ab5b85-8198-4e45-89ad-c1c08e39fe20-scripts" (OuterVolumeSpecName: "scripts") pod "33ab5b85-8198-4e45-89ad-c1c08e39fe20" (UID: "33ab5b85-8198-4e45-89ad-c1c08e39fe20"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:29:27 crc kubenswrapper[4830]: I0227 16:29:27.388990 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33ab5b85-8198-4e45-89ad-c1c08e39fe20-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "33ab5b85-8198-4e45-89ad-c1c08e39fe20" (UID: "33ab5b85-8198-4e45-89ad-c1c08e39fe20"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:29:27 crc kubenswrapper[4830]: I0227 16:29:27.400116 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33ab5b85-8198-4e45-89ad-c1c08e39fe20-config-data" (OuterVolumeSpecName: "config-data") pod "33ab5b85-8198-4e45-89ad-c1c08e39fe20" (UID: "33ab5b85-8198-4e45-89ad-c1c08e39fe20"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:29:27 crc kubenswrapper[4830]: I0227 16:29:27.460545 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cb75x\" (UniqueName: \"kubernetes.io/projected/33ab5b85-8198-4e45-89ad-c1c08e39fe20-kube-api-access-cb75x\") on node \"crc\" DevicePath \"\"" Feb 27 16:29:27 crc kubenswrapper[4830]: I0227 16:29:27.460576 4830 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/33ab5b85-8198-4e45-89ad-c1c08e39fe20-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 27 16:29:27 crc kubenswrapper[4830]: I0227 16:29:27.460587 4830 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33ab5b85-8198-4e45-89ad-c1c08e39fe20-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 16:29:27 crc kubenswrapper[4830]: I0227 16:29:27.460595 4830 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/33ab5b85-8198-4e45-89ad-c1c08e39fe20-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 27 
16:29:27 crc kubenswrapper[4830]: I0227 16:29:27.460602 4830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33ab5b85-8198-4e45-89ad-c1c08e39fe20-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 16:29:27 crc kubenswrapper[4830]: I0227 16:29:27.460610 4830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33ab5b85-8198-4e45-89ad-c1c08e39fe20-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 16:29:28 crc kubenswrapper[4830]: I0227 16:29:28.096214 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-jk57b" Feb 27 16:29:28 crc kubenswrapper[4830]: I0227 16:29:28.258993 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-jk57b"] Feb 27 16:29:28 crc kubenswrapper[4830]: I0227 16:29:28.276307 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-jk57b"] Feb 27 16:29:28 crc kubenswrapper[4830]: I0227 16:29:28.369320 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-wnx5p"] Feb 27 16:29:28 crc kubenswrapper[4830]: E0227 16:29:28.369649 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33ab5b85-8198-4e45-89ad-c1c08e39fe20" containerName="keystone-bootstrap" Feb 27 16:29:28 crc kubenswrapper[4830]: I0227 16:29:28.369664 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="33ab5b85-8198-4e45-89ad-c1c08e39fe20" containerName="keystone-bootstrap" Feb 27 16:29:28 crc kubenswrapper[4830]: I0227 16:29:28.369827 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="33ab5b85-8198-4e45-89ad-c1c08e39fe20" containerName="keystone-bootstrap" Feb 27 16:29:28 crc kubenswrapper[4830]: I0227 16:29:28.370357 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-wnx5p" Feb 27 16:29:28 crc kubenswrapper[4830]: I0227 16:29:28.375698 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 27 16:29:28 crc kubenswrapper[4830]: I0227 16:29:28.376040 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 27 16:29:28 crc kubenswrapper[4830]: I0227 16:29:28.376184 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 27 16:29:28 crc kubenswrapper[4830]: I0227 16:29:28.379500 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-zm2zz" Feb 27 16:29:28 crc kubenswrapper[4830]: I0227 16:29:28.379581 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 27 16:29:28 crc kubenswrapper[4830]: I0227 16:29:28.391009 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-wnx5p"] Feb 27 16:29:28 crc kubenswrapper[4830]: I0227 16:29:28.480527 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-td6pf\" (UniqueName: \"kubernetes.io/projected/f6ddd0b8-58b8-41b0-8555-6292f5d2d3d6-kube-api-access-td6pf\") pod \"keystone-bootstrap-wnx5p\" (UID: \"f6ddd0b8-58b8-41b0-8555-6292f5d2d3d6\") " pod="openstack/keystone-bootstrap-wnx5p" Feb 27 16:29:28 crc kubenswrapper[4830]: I0227 16:29:28.480586 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f6ddd0b8-58b8-41b0-8555-6292f5d2d3d6-scripts\") pod \"keystone-bootstrap-wnx5p\" (UID: \"f6ddd0b8-58b8-41b0-8555-6292f5d2d3d6\") " pod="openstack/keystone-bootstrap-wnx5p" Feb 27 16:29:28 crc kubenswrapper[4830]: I0227 16:29:28.480605 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f6ddd0b8-58b8-41b0-8555-6292f5d2d3d6-fernet-keys\") pod \"keystone-bootstrap-wnx5p\" (UID: \"f6ddd0b8-58b8-41b0-8555-6292f5d2d3d6\") " pod="openstack/keystone-bootstrap-wnx5p" Feb 27 16:29:28 crc kubenswrapper[4830]: I0227 16:29:28.480647 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6ddd0b8-58b8-41b0-8555-6292f5d2d3d6-combined-ca-bundle\") pod \"keystone-bootstrap-wnx5p\" (UID: \"f6ddd0b8-58b8-41b0-8555-6292f5d2d3d6\") " pod="openstack/keystone-bootstrap-wnx5p" Feb 27 16:29:28 crc kubenswrapper[4830]: I0227 16:29:28.480689 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f6ddd0b8-58b8-41b0-8555-6292f5d2d3d6-credential-keys\") pod \"keystone-bootstrap-wnx5p\" (UID: \"f6ddd0b8-58b8-41b0-8555-6292f5d2d3d6\") " pod="openstack/keystone-bootstrap-wnx5p" Feb 27 16:29:28 crc kubenswrapper[4830]: I0227 16:29:28.480716 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6ddd0b8-58b8-41b0-8555-6292f5d2d3d6-config-data\") pod \"keystone-bootstrap-wnx5p\" (UID: \"f6ddd0b8-58b8-41b0-8555-6292f5d2d3d6\") " pod="openstack/keystone-bootstrap-wnx5p" Feb 27 16:29:28 crc kubenswrapper[4830]: I0227 16:29:28.499908 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-cjq7v" podUID="1434c895-fa3e-4feb-a56a-0451f1f16a3b" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.123:5353: connect: connection refused" Feb 27 16:29:28 crc kubenswrapper[4830]: I0227 16:29:28.582740 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f6ddd0b8-58b8-41b0-8555-6292f5d2d3d6-combined-ca-bundle\") pod \"keystone-bootstrap-wnx5p\" (UID: \"f6ddd0b8-58b8-41b0-8555-6292f5d2d3d6\") " pod="openstack/keystone-bootstrap-wnx5p" Feb 27 16:29:28 crc kubenswrapper[4830]: I0227 16:29:28.582811 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f6ddd0b8-58b8-41b0-8555-6292f5d2d3d6-credential-keys\") pod \"keystone-bootstrap-wnx5p\" (UID: \"f6ddd0b8-58b8-41b0-8555-6292f5d2d3d6\") " pod="openstack/keystone-bootstrap-wnx5p" Feb 27 16:29:28 crc kubenswrapper[4830]: I0227 16:29:28.582842 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6ddd0b8-58b8-41b0-8555-6292f5d2d3d6-config-data\") pod \"keystone-bootstrap-wnx5p\" (UID: \"f6ddd0b8-58b8-41b0-8555-6292f5d2d3d6\") " pod="openstack/keystone-bootstrap-wnx5p" Feb 27 16:29:28 crc kubenswrapper[4830]: I0227 16:29:28.582896 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-td6pf\" (UniqueName: \"kubernetes.io/projected/f6ddd0b8-58b8-41b0-8555-6292f5d2d3d6-kube-api-access-td6pf\") pod \"keystone-bootstrap-wnx5p\" (UID: \"f6ddd0b8-58b8-41b0-8555-6292f5d2d3d6\") " pod="openstack/keystone-bootstrap-wnx5p" Feb 27 16:29:28 crc kubenswrapper[4830]: I0227 16:29:28.582962 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f6ddd0b8-58b8-41b0-8555-6292f5d2d3d6-scripts\") pod \"keystone-bootstrap-wnx5p\" (UID: \"f6ddd0b8-58b8-41b0-8555-6292f5d2d3d6\") " pod="openstack/keystone-bootstrap-wnx5p" Feb 27 16:29:28 crc kubenswrapper[4830]: I0227 16:29:28.582979 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f6ddd0b8-58b8-41b0-8555-6292f5d2d3d6-fernet-keys\") pod \"keystone-bootstrap-wnx5p\" 
(UID: \"f6ddd0b8-58b8-41b0-8555-6292f5d2d3d6\") " pod="openstack/keystone-bootstrap-wnx5p" Feb 27 16:29:28 crc kubenswrapper[4830]: I0227 16:29:28.586784 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6ddd0b8-58b8-41b0-8555-6292f5d2d3d6-combined-ca-bundle\") pod \"keystone-bootstrap-wnx5p\" (UID: \"f6ddd0b8-58b8-41b0-8555-6292f5d2d3d6\") " pod="openstack/keystone-bootstrap-wnx5p" Feb 27 16:29:28 crc kubenswrapper[4830]: I0227 16:29:28.587125 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6ddd0b8-58b8-41b0-8555-6292f5d2d3d6-config-data\") pod \"keystone-bootstrap-wnx5p\" (UID: \"f6ddd0b8-58b8-41b0-8555-6292f5d2d3d6\") " pod="openstack/keystone-bootstrap-wnx5p" Feb 27 16:29:28 crc kubenswrapper[4830]: I0227 16:29:28.588064 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f6ddd0b8-58b8-41b0-8555-6292f5d2d3d6-scripts\") pod \"keystone-bootstrap-wnx5p\" (UID: \"f6ddd0b8-58b8-41b0-8555-6292f5d2d3d6\") " pod="openstack/keystone-bootstrap-wnx5p" Feb 27 16:29:28 crc kubenswrapper[4830]: I0227 16:29:28.590460 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f6ddd0b8-58b8-41b0-8555-6292f5d2d3d6-credential-keys\") pod \"keystone-bootstrap-wnx5p\" (UID: \"f6ddd0b8-58b8-41b0-8555-6292f5d2d3d6\") " pod="openstack/keystone-bootstrap-wnx5p" Feb 27 16:29:28 crc kubenswrapper[4830]: I0227 16:29:28.594258 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f6ddd0b8-58b8-41b0-8555-6292f5d2d3d6-fernet-keys\") pod \"keystone-bootstrap-wnx5p\" (UID: \"f6ddd0b8-58b8-41b0-8555-6292f5d2d3d6\") " pod="openstack/keystone-bootstrap-wnx5p" Feb 27 16:29:28 crc kubenswrapper[4830]: I0227 16:29:28.601564 4830 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-td6pf\" (UniqueName: \"kubernetes.io/projected/f6ddd0b8-58b8-41b0-8555-6292f5d2d3d6-kube-api-access-td6pf\") pod \"keystone-bootstrap-wnx5p\" (UID: \"f6ddd0b8-58b8-41b0-8555-6292f5d2d3d6\") " pod="openstack/keystone-bootstrap-wnx5p" Feb 27 16:29:28 crc kubenswrapper[4830]: I0227 16:29:28.691499 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-wnx5p" Feb 27 16:29:28 crc kubenswrapper[4830]: I0227 16:29:28.772507 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33ab5b85-8198-4e45-89ad-c1c08e39fe20" path="/var/lib/kubelet/pods/33ab5b85-8198-4e45-89ad-c1c08e39fe20/volumes" Feb 27 16:29:33 crc kubenswrapper[4830]: I0227 16:29:33.160906 4830 patch_prober.go:28] interesting pod/machine-config-daemon-2tv5v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 16:29:33 crc kubenswrapper[4830]: I0227 16:29:33.161425 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 16:29:33 crc kubenswrapper[4830]: I0227 16:29:33.495645 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-cjq7v" podUID="1434c895-fa3e-4feb-a56a-0451f1f16a3b" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.123:5353: connect: connection refused" Feb 27 16:29:33 crc kubenswrapper[4830]: I0227 16:29:33.496105 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-b8fbc5445-cjq7v" 
Feb 27 16:29:38 crc kubenswrapper[4830]: E0227 16:29:38.277510 4830 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Feb 27 16:29:38 crc kubenswrapper[4830]: E0227 16:29:38.278671 4830 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnl
y:nil,},VolumeMount{Name:kube-api-access-wmf7s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-vrjmz_openstack(a69bc2ed-ce70-4828-af02-ccac1c3f0c10): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 27 16:29:38 crc kubenswrapper[4830]: E0227 16:29:38.280171 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-vrjmz" podUID="a69bc2ed-ce70-4828-af02-ccac1c3f0c10" Feb 27 16:29:38 crc kubenswrapper[4830]: E0227 16:29:38.864664 4830 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Feb 27 16:29:38 crc kubenswrapper[4830]: E0227 16:29:38.865016 4830 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db 
upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gm9xq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-dcxkj_openstack(459173e8-7571-47b7-9af8-3bd2d24d4e21): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 27 16:29:38 crc kubenswrapper[4830]: E0227 16:29:38.866154 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-dcxkj" 
podUID="459173e8-7571-47b7-9af8-3bd2d24d4e21" Feb 27 16:29:38 crc kubenswrapper[4830]: I0227 16:29:38.876164 4830 scope.go:117] "RemoveContainer" containerID="2c074c557312f5e7da22073901b6c08bd89c311ba77a25795b97178682884f7d" Feb 27 16:29:39 crc kubenswrapper[4830]: I0227 16:29:39.023323 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 27 16:29:39 crc kubenswrapper[4830]: I0227 16:29:39.056647 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-cjq7v" Feb 27 16:29:39 crc kubenswrapper[4830]: I0227 16:29:39.193171 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c2a6285-eb97-45b0-b45b-b42c78e4e2b5-combined-ca-bundle\") pod \"6c2a6285-eb97-45b0-b45b-b42c78e4e2b5\" (UID: \"6c2a6285-eb97-45b0-b45b-b42c78e4e2b5\") " Feb 27 16:29:39 crc kubenswrapper[4830]: I0227 16:29:39.193253 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1434c895-fa3e-4feb-a56a-0451f1f16a3b-config\") pod \"1434c895-fa3e-4feb-a56a-0451f1f16a3b\" (UID: \"1434c895-fa3e-4feb-a56a-0451f1f16a3b\") " Feb 27 16:29:39 crc kubenswrapper[4830]: I0227 16:29:39.193290 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l57bg\" (UniqueName: \"kubernetes.io/projected/1434c895-fa3e-4feb-a56a-0451f1f16a3b-kube-api-access-l57bg\") pod \"1434c895-fa3e-4feb-a56a-0451f1f16a3b\" (UID: \"1434c895-fa3e-4feb-a56a-0451f1f16a3b\") " Feb 27 16:29:39 crc kubenswrapper[4830]: I0227 16:29:39.193310 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6c2a6285-eb97-45b0-b45b-b42c78e4e2b5-httpd-run\") pod \"6c2a6285-eb97-45b0-b45b-b42c78e4e2b5\" (UID: 
\"6c2a6285-eb97-45b0-b45b-b42c78e4e2b5\") " Feb 27 16:29:39 crc kubenswrapper[4830]: I0227 16:29:39.193331 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6c2a6285-eb97-45b0-b45b-b42c78e4e2b5-logs\") pod \"6c2a6285-eb97-45b0-b45b-b42c78e4e2b5\" (UID: \"6c2a6285-eb97-45b0-b45b-b42c78e4e2b5\") " Feb 27 16:29:39 crc kubenswrapper[4830]: I0227 16:29:39.193354 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c2a6285-eb97-45b0-b45b-b42c78e4e2b5-scripts\") pod \"6c2a6285-eb97-45b0-b45b-b42c78e4e2b5\" (UID: \"6c2a6285-eb97-45b0-b45b-b42c78e4e2b5\") " Feb 27 16:29:39 crc kubenswrapper[4830]: I0227 16:29:39.193383 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1434c895-fa3e-4feb-a56a-0451f1f16a3b-ovsdbserver-sb\") pod \"1434c895-fa3e-4feb-a56a-0451f1f16a3b\" (UID: \"1434c895-fa3e-4feb-a56a-0451f1f16a3b\") " Feb 27 16:29:39 crc kubenswrapper[4830]: I0227 16:29:39.193409 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1434c895-fa3e-4feb-a56a-0451f1f16a3b-dns-svc\") pod \"1434c895-fa3e-4feb-a56a-0451f1f16a3b\" (UID: \"1434c895-fa3e-4feb-a56a-0451f1f16a3b\") " Feb 27 16:29:39 crc kubenswrapper[4830]: I0227 16:29:39.193454 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c2a6285-eb97-45b0-b45b-b42c78e4e2b5-config-data\") pod \"6c2a6285-eb97-45b0-b45b-b42c78e4e2b5\" (UID: \"6c2a6285-eb97-45b0-b45b-b42c78e4e2b5\") " Feb 27 16:29:39 crc kubenswrapper[4830]: I0227 16:29:39.193470 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/1434c895-fa3e-4feb-a56a-0451f1f16a3b-ovsdbserver-nb\") pod \"1434c895-fa3e-4feb-a56a-0451f1f16a3b\" (UID: \"1434c895-fa3e-4feb-a56a-0451f1f16a3b\") " Feb 27 16:29:39 crc kubenswrapper[4830]: I0227 16:29:39.193547 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"6c2a6285-eb97-45b0-b45b-b42c78e4e2b5\" (UID: \"6c2a6285-eb97-45b0-b45b-b42c78e4e2b5\") " Feb 27 16:29:39 crc kubenswrapper[4830]: I0227 16:29:39.193571 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h8w97\" (UniqueName: \"kubernetes.io/projected/6c2a6285-eb97-45b0-b45b-b42c78e4e2b5-kube-api-access-h8w97\") pod \"6c2a6285-eb97-45b0-b45b-b42c78e4e2b5\" (UID: \"6c2a6285-eb97-45b0-b45b-b42c78e4e2b5\") " Feb 27 16:29:39 crc kubenswrapper[4830]: I0227 16:29:39.202332 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6c2a6285-eb97-45b0-b45b-b42c78e4e2b5-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "6c2a6285-eb97-45b0-b45b-b42c78e4e2b5" (UID: "6c2a6285-eb97-45b0-b45b-b42c78e4e2b5"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 16:29:39 crc kubenswrapper[4830]: I0227 16:29:39.212544 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "glance") pod "6c2a6285-eb97-45b0-b45b-b42c78e4e2b5" (UID: "6c2a6285-eb97-45b0-b45b-b42c78e4e2b5"). InnerVolumeSpecName "local-storage03-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 27 16:29:39 crc kubenswrapper[4830]: I0227 16:29:39.212685 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c2a6285-eb97-45b0-b45b-b42c78e4e2b5-kube-api-access-h8w97" (OuterVolumeSpecName: "kube-api-access-h8w97") pod "6c2a6285-eb97-45b0-b45b-b42c78e4e2b5" (UID: "6c2a6285-eb97-45b0-b45b-b42c78e4e2b5"). InnerVolumeSpecName "kube-api-access-h8w97". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:29:39 crc kubenswrapper[4830]: I0227 16:29:39.213189 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1434c895-fa3e-4feb-a56a-0451f1f16a3b-kube-api-access-l57bg" (OuterVolumeSpecName: "kube-api-access-l57bg") pod "1434c895-fa3e-4feb-a56a-0451f1f16a3b" (UID: "1434c895-fa3e-4feb-a56a-0451f1f16a3b"). InnerVolumeSpecName "kube-api-access-l57bg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:29:39 crc kubenswrapper[4830]: I0227 16:29:39.220118 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6c2a6285-eb97-45b0-b45b-b42c78e4e2b5-logs" (OuterVolumeSpecName: "logs") pod "6c2a6285-eb97-45b0-b45b-b42c78e4e2b5" (UID: "6c2a6285-eb97-45b0-b45b-b42c78e4e2b5"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 16:29:39 crc kubenswrapper[4830]: I0227 16:29:39.261133 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c2a6285-eb97-45b0-b45b-b42c78e4e2b5-scripts" (OuterVolumeSpecName: "scripts") pod "6c2a6285-eb97-45b0-b45b-b42c78e4e2b5" (UID: "6c2a6285-eb97-45b0-b45b-b42c78e4e2b5"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:29:39 crc kubenswrapper[4830]: I0227 16:29:39.261219 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c2a6285-eb97-45b0-b45b-b42c78e4e2b5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6c2a6285-eb97-45b0-b45b-b42c78e4e2b5" (UID: "6c2a6285-eb97-45b0-b45b-b42c78e4e2b5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:29:39 crc kubenswrapper[4830]: I0227 16:29:39.263328 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6c2a6285-eb97-45b0-b45b-b42c78e4e2b5","Type":"ContainerDied","Data":"dddaa0942eb867fe732b34d249cd5ad6a422feb6b9d3a0be1b495b6dcd62e150"} Feb 27 16:29:39 crc kubenswrapper[4830]: I0227 16:29:39.263370 4830 scope.go:117] "RemoveContainer" containerID="724e28a9cf241814a075686e9aca8a465db331a9eb2c26d9920fe90d63f091f3" Feb 27 16:29:39 crc kubenswrapper[4830]: I0227 16:29:39.263473 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 27 16:29:39 crc kubenswrapper[4830]: I0227 16:29:39.303024 4830 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Feb 27 16:29:39 crc kubenswrapper[4830]: I0227 16:29:39.303068 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h8w97\" (UniqueName: \"kubernetes.io/projected/6c2a6285-eb97-45b0-b45b-b42c78e4e2b5-kube-api-access-h8w97\") on node \"crc\" DevicePath \"\"" Feb 27 16:29:39 crc kubenswrapper[4830]: I0227 16:29:39.303081 4830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c2a6285-eb97-45b0-b45b-b42c78e4e2b5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 16:29:39 crc kubenswrapper[4830]: I0227 16:29:39.303089 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l57bg\" (UniqueName: \"kubernetes.io/projected/1434c895-fa3e-4feb-a56a-0451f1f16a3b-kube-api-access-l57bg\") on node \"crc\" DevicePath \"\"" Feb 27 16:29:39 crc kubenswrapper[4830]: I0227 16:29:39.303098 4830 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6c2a6285-eb97-45b0-b45b-b42c78e4e2b5-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 27 16:29:39 crc kubenswrapper[4830]: I0227 16:29:39.303107 4830 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6c2a6285-eb97-45b0-b45b-b42c78e4e2b5-logs\") on node \"crc\" DevicePath \"\"" Feb 27 16:29:39 crc kubenswrapper[4830]: I0227 16:29:39.303114 4830 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c2a6285-eb97-45b0-b45b-b42c78e4e2b5-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 16:29:39 crc kubenswrapper[4830]: I0227 16:29:39.332212 4830 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1434c895-fa3e-4feb-a56a-0451f1f16a3b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "1434c895-fa3e-4feb-a56a-0451f1f16a3b" (UID: "1434c895-fa3e-4feb-a56a-0451f1f16a3b"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:29:39 crc kubenswrapper[4830]: I0227 16:29:39.351248 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-cjq7v" event={"ID":"1434c895-fa3e-4feb-a56a-0451f1f16a3b","Type":"ContainerDied","Data":"6712aaa3bc10e702ad1242fad1603a54acc7021a5eb45e728d20766d74ad02f8"} Feb 27 16:29:39 crc kubenswrapper[4830]: I0227 16:29:39.351398 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-cjq7v" Feb 27 16:29:39 crc kubenswrapper[4830]: E0227 16:29:39.353955 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-dcxkj" podUID="459173e8-7571-47b7-9af8-3bd2d24d4e21" Feb 27 16:29:39 crc kubenswrapper[4830]: E0227 16:29:39.355761 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-vrjmz" podUID="a69bc2ed-ce70-4828-af02-ccac1c3f0c10" Feb 27 16:29:39 crc kubenswrapper[4830]: I0227 16:29:39.369423 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1434c895-fa3e-4feb-a56a-0451f1f16a3b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1434c895-fa3e-4feb-a56a-0451f1f16a3b" (UID: 
"1434c895-fa3e-4feb-a56a-0451f1f16a3b"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:29:39 crc kubenswrapper[4830]: I0227 16:29:39.372973 4830 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Feb 27 16:29:39 crc kubenswrapper[4830]: I0227 16:29:39.383118 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1434c895-fa3e-4feb-a56a-0451f1f16a3b-config" (OuterVolumeSpecName: "config") pod "1434c895-fa3e-4feb-a56a-0451f1f16a3b" (UID: "1434c895-fa3e-4feb-a56a-0451f1f16a3b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:29:39 crc kubenswrapper[4830]: I0227 16:29:39.390883 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-wnx5p"] Feb 27 16:29:39 crc kubenswrapper[4830]: I0227 16:29:39.391124 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c2a6285-eb97-45b0-b45b-b42c78e4e2b5-config-data" (OuterVolumeSpecName: "config-data") pod "6c2a6285-eb97-45b0-b45b-b42c78e4e2b5" (UID: "6c2a6285-eb97-45b0-b45b-b42c78e4e2b5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:29:39 crc kubenswrapper[4830]: I0227 16:29:39.401755 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1434c895-fa3e-4feb-a56a-0451f1f16a3b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1434c895-fa3e-4feb-a56a-0451f1f16a3b" (UID: "1434c895-fa3e-4feb-a56a-0451f1f16a3b"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:29:39 crc kubenswrapper[4830]: I0227 16:29:39.404305 4830 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1434c895-fa3e-4feb-a56a-0451f1f16a3b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 27 16:29:39 crc kubenswrapper[4830]: I0227 16:29:39.404337 4830 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1434c895-fa3e-4feb-a56a-0451f1f16a3b-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 27 16:29:39 crc kubenswrapper[4830]: I0227 16:29:39.404347 4830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c2a6285-eb97-45b0-b45b-b42c78e4e2b5-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 16:29:39 crc kubenswrapper[4830]: I0227 16:29:39.404356 4830 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1434c895-fa3e-4feb-a56a-0451f1f16a3b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 27 16:29:39 crc kubenswrapper[4830]: I0227 16:29:39.404366 4830 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Feb 27 16:29:39 crc kubenswrapper[4830]: I0227 16:29:39.404375 4830 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1434c895-fa3e-4feb-a56a-0451f1f16a3b-config\") on node \"crc\" DevicePath \"\"" Feb 27 16:29:39 crc kubenswrapper[4830]: I0227 16:29:39.469023 4830 scope.go:117] "RemoveContainer" containerID="49b234618e387f9db69268adc60b9d401ddf8860800b126b07c12e1ab28d3e20" Feb 27 16:29:39 crc kubenswrapper[4830]: I0227 16:29:39.495618 4830 scope.go:117] "RemoveContainer" containerID="51057a0e1285abbf0d8d8183a853aec44ee1a9c4c03ece1d5f094ba69d645778" Feb 27 16:29:39 crc kubenswrapper[4830]: 
I0227 16:29:39.498124 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 27 16:29:39 crc kubenswrapper[4830]: I0227 16:29:39.518012 4830 scope.go:117] "RemoveContainer" containerID="0e61e2eba7cefcaeb7cc49da2fcf3fb946c76fa49968f3858bb6de35d92d599a" Feb 27 16:29:39 crc kubenswrapper[4830]: I0227 16:29:39.592157 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 27 16:29:39 crc kubenswrapper[4830]: I0227 16:29:39.606245 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 27 16:29:39 crc kubenswrapper[4830]: I0227 16:29:39.614905 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 27 16:29:39 crc kubenswrapper[4830]: E0227 16:29:39.615236 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1434c895-fa3e-4feb-a56a-0451f1f16a3b" containerName="dnsmasq-dns" Feb 27 16:29:39 crc kubenswrapper[4830]: I0227 16:29:39.615249 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="1434c895-fa3e-4feb-a56a-0451f1f16a3b" containerName="dnsmasq-dns" Feb 27 16:29:39 crc kubenswrapper[4830]: E0227 16:29:39.615268 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1434c895-fa3e-4feb-a56a-0451f1f16a3b" containerName="init" Feb 27 16:29:39 crc kubenswrapper[4830]: I0227 16:29:39.615276 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="1434c895-fa3e-4feb-a56a-0451f1f16a3b" containerName="init" Feb 27 16:29:39 crc kubenswrapper[4830]: E0227 16:29:39.615285 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c2a6285-eb97-45b0-b45b-b42c78e4e2b5" containerName="glance-log" Feb 27 16:29:39 crc kubenswrapper[4830]: I0227 16:29:39.615293 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c2a6285-eb97-45b0-b45b-b42c78e4e2b5" containerName="glance-log" Feb 27 16:29:39 crc kubenswrapper[4830]: E0227 
16:29:39.615325 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c2a6285-eb97-45b0-b45b-b42c78e4e2b5" containerName="glance-httpd" Feb 27 16:29:39 crc kubenswrapper[4830]: I0227 16:29:39.615332 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c2a6285-eb97-45b0-b45b-b42c78e4e2b5" containerName="glance-httpd" Feb 27 16:29:39 crc kubenswrapper[4830]: I0227 16:29:39.615488 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c2a6285-eb97-45b0-b45b-b42c78e4e2b5" containerName="glance-httpd" Feb 27 16:29:39 crc kubenswrapper[4830]: I0227 16:29:39.615509 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="1434c895-fa3e-4feb-a56a-0451f1f16a3b" containerName="dnsmasq-dns" Feb 27 16:29:39 crc kubenswrapper[4830]: I0227 16:29:39.615518 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c2a6285-eb97-45b0-b45b-b42c78e4e2b5" containerName="glance-log" Feb 27 16:29:39 crc kubenswrapper[4830]: I0227 16:29:39.616459 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 27 16:29:39 crc kubenswrapper[4830]: I0227 16:29:39.622642 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 27 16:29:39 crc kubenswrapper[4830]: I0227 16:29:39.622913 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 27 16:29:39 crc kubenswrapper[4830]: I0227 16:29:39.626723 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 27 16:29:39 crc kubenswrapper[4830]: I0227 16:29:39.712874 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-cjq7v"] Feb 27 16:29:39 crc kubenswrapper[4830]: I0227 16:29:39.713271 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab89052d-19a3-4bee-8e41-3fc364424b47-scripts\") pod \"glance-default-internal-api-0\" (UID: \"ab89052d-19a3-4bee-8e41-3fc364424b47\") " pod="openstack/glance-default-internal-api-0" Feb 27 16:29:39 crc kubenswrapper[4830]: I0227 16:29:39.713331 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab89052d-19a3-4bee-8e41-3fc364424b47-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"ab89052d-19a3-4bee-8e41-3fc364424b47\") " pod="openstack/glance-default-internal-api-0" Feb 27 16:29:39 crc kubenswrapper[4830]: I0227 16:29:39.713356 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab89052d-19a3-4bee-8e41-3fc364424b47-config-data\") pod \"glance-default-internal-api-0\" (UID: \"ab89052d-19a3-4bee-8e41-3fc364424b47\") " pod="openstack/glance-default-internal-api-0" Feb 27 16:29:39 crc 
kubenswrapper[4830]: I0227 16:29:39.713382 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab89052d-19a3-4bee-8e41-3fc364424b47-logs\") pod \"glance-default-internal-api-0\" (UID: \"ab89052d-19a3-4bee-8e41-3fc364424b47\") " pod="openstack/glance-default-internal-api-0" Feb 27 16:29:39 crc kubenswrapper[4830]: I0227 16:29:39.713423 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxg2h\" (UniqueName: \"kubernetes.io/projected/ab89052d-19a3-4bee-8e41-3fc364424b47-kube-api-access-bxg2h\") pod \"glance-default-internal-api-0\" (UID: \"ab89052d-19a3-4bee-8e41-3fc364424b47\") " pod="openstack/glance-default-internal-api-0" Feb 27 16:29:39 crc kubenswrapper[4830]: I0227 16:29:39.713469 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ab89052d-19a3-4bee-8e41-3fc364424b47-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"ab89052d-19a3-4bee-8e41-3fc364424b47\") " pod="openstack/glance-default-internal-api-0" Feb 27 16:29:39 crc kubenswrapper[4830]: I0227 16:29:39.713496 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab89052d-19a3-4bee-8e41-3fc364424b47-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"ab89052d-19a3-4bee-8e41-3fc364424b47\") " pod="openstack/glance-default-internal-api-0" Feb 27 16:29:39 crc kubenswrapper[4830]: I0227 16:29:39.713547 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"ab89052d-19a3-4bee-8e41-3fc364424b47\") " pod="openstack/glance-default-internal-api-0" Feb 
27 16:29:39 crc kubenswrapper[4830]: I0227 16:29:39.721611 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-cjq7v"] Feb 27 16:29:39 crc kubenswrapper[4830]: I0227 16:29:39.815544 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bxg2h\" (UniqueName: \"kubernetes.io/projected/ab89052d-19a3-4bee-8e41-3fc364424b47-kube-api-access-bxg2h\") pod \"glance-default-internal-api-0\" (UID: \"ab89052d-19a3-4bee-8e41-3fc364424b47\") " pod="openstack/glance-default-internal-api-0" Feb 27 16:29:39 crc kubenswrapper[4830]: I0227 16:29:39.815618 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ab89052d-19a3-4bee-8e41-3fc364424b47-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"ab89052d-19a3-4bee-8e41-3fc364424b47\") " pod="openstack/glance-default-internal-api-0" Feb 27 16:29:39 crc kubenswrapper[4830]: I0227 16:29:39.815654 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab89052d-19a3-4bee-8e41-3fc364424b47-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"ab89052d-19a3-4bee-8e41-3fc364424b47\") " pod="openstack/glance-default-internal-api-0" Feb 27 16:29:39 crc kubenswrapper[4830]: I0227 16:29:39.815707 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"ab89052d-19a3-4bee-8e41-3fc364424b47\") " pod="openstack/glance-default-internal-api-0" Feb 27 16:29:39 crc kubenswrapper[4830]: I0227 16:29:39.815725 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab89052d-19a3-4bee-8e41-3fc364424b47-scripts\") pod \"glance-default-internal-api-0\" (UID: 
\"ab89052d-19a3-4bee-8e41-3fc364424b47\") " pod="openstack/glance-default-internal-api-0" Feb 27 16:29:39 crc kubenswrapper[4830]: I0227 16:29:39.815754 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab89052d-19a3-4bee-8e41-3fc364424b47-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"ab89052d-19a3-4bee-8e41-3fc364424b47\") " pod="openstack/glance-default-internal-api-0" Feb 27 16:29:39 crc kubenswrapper[4830]: I0227 16:29:39.815773 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab89052d-19a3-4bee-8e41-3fc364424b47-config-data\") pod \"glance-default-internal-api-0\" (UID: \"ab89052d-19a3-4bee-8e41-3fc364424b47\") " pod="openstack/glance-default-internal-api-0" Feb 27 16:29:39 crc kubenswrapper[4830]: I0227 16:29:39.815794 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab89052d-19a3-4bee-8e41-3fc364424b47-logs\") pod \"glance-default-internal-api-0\" (UID: \"ab89052d-19a3-4bee-8e41-3fc364424b47\") " pod="openstack/glance-default-internal-api-0" Feb 27 16:29:39 crc kubenswrapper[4830]: I0227 16:29:39.816088 4830 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"ab89052d-19a3-4bee-8e41-3fc364424b47\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/glance-default-internal-api-0" Feb 27 16:29:39 crc kubenswrapper[4830]: I0227 16:29:39.816288 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab89052d-19a3-4bee-8e41-3fc364424b47-logs\") pod \"glance-default-internal-api-0\" (UID: \"ab89052d-19a3-4bee-8e41-3fc364424b47\") " 
pod="openstack/glance-default-internal-api-0" Feb 27 16:29:39 crc kubenswrapper[4830]: I0227 16:29:39.816512 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ab89052d-19a3-4bee-8e41-3fc364424b47-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"ab89052d-19a3-4bee-8e41-3fc364424b47\") " pod="openstack/glance-default-internal-api-0" Feb 27 16:29:39 crc kubenswrapper[4830]: I0227 16:29:39.820844 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab89052d-19a3-4bee-8e41-3fc364424b47-config-data\") pod \"glance-default-internal-api-0\" (UID: \"ab89052d-19a3-4bee-8e41-3fc364424b47\") " pod="openstack/glance-default-internal-api-0" Feb 27 16:29:39 crc kubenswrapper[4830]: I0227 16:29:39.822076 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab89052d-19a3-4bee-8e41-3fc364424b47-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"ab89052d-19a3-4bee-8e41-3fc364424b47\") " pod="openstack/glance-default-internal-api-0" Feb 27 16:29:39 crc kubenswrapper[4830]: I0227 16:29:39.822870 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab89052d-19a3-4bee-8e41-3fc364424b47-scripts\") pod \"glance-default-internal-api-0\" (UID: \"ab89052d-19a3-4bee-8e41-3fc364424b47\") " pod="openstack/glance-default-internal-api-0" Feb 27 16:29:39 crc kubenswrapper[4830]: I0227 16:29:39.825709 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab89052d-19a3-4bee-8e41-3fc364424b47-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"ab89052d-19a3-4bee-8e41-3fc364424b47\") " pod="openstack/glance-default-internal-api-0" Feb 27 16:29:39 crc kubenswrapper[4830]: I0227 16:29:39.836642 
4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bxg2h\" (UniqueName: \"kubernetes.io/projected/ab89052d-19a3-4bee-8e41-3fc364424b47-kube-api-access-bxg2h\") pod \"glance-default-internal-api-0\" (UID: \"ab89052d-19a3-4bee-8e41-3fc364424b47\") " pod="openstack/glance-default-internal-api-0" Feb 27 16:29:39 crc kubenswrapper[4830]: I0227 16:29:39.849114 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"ab89052d-19a3-4bee-8e41-3fc364424b47\") " pod="openstack/glance-default-internal-api-0" Feb 27 16:29:39 crc kubenswrapper[4830]: I0227 16:29:39.951403 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 27 16:29:40 crc kubenswrapper[4830]: I0227 16:29:40.370086 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-wnx5p" event={"ID":"f6ddd0b8-58b8-41b0-8555-6292f5d2d3d6","Type":"ContainerStarted","Data":"cc733946c2730a559cac7a10dc518215f583a9b93706df17edea75a23418ffdc"} Feb 27 16:29:40 crc kubenswrapper[4830]: I0227 16:29:40.370558 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-wnx5p" event={"ID":"f6ddd0b8-58b8-41b0-8555-6292f5d2d3d6","Type":"ContainerStarted","Data":"ecc391ee6a6772aabb604221214b79d723c56a4577c7adcfcae64c0fd81f82f3"} Feb 27 16:29:40 crc kubenswrapper[4830]: I0227 16:29:40.373384 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-b9fgg" event={"ID":"67a7f858-b1fb-4547-9880-8f496d704f48","Type":"ContainerStarted","Data":"f06c98d4e511d3e89e496c04ad5a11d60444ab50c2a4dc23cb608869e9b5b98a"} Feb 27 16:29:40 crc kubenswrapper[4830]: I0227 16:29:40.376091 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"efa2d7d0-3613-4580-be80-b1a72de4501d","Type":"ContainerStarted","Data":"dec4c52081bcd6b7e6cdc8967cf57151769e02a26e085b39c0fcb8aa6faf71fb"} Feb 27 16:29:40 crc kubenswrapper[4830]: I0227 16:29:40.377252 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"bb4fe631-52f0-445f-9e4c-90f4137bdba6","Type":"ContainerStarted","Data":"1f34e6f642aea7d4125be18b29bd3b54dedeb193e281e665389dc545b3650026"} Feb 27 16:29:40 crc kubenswrapper[4830]: I0227 16:29:40.377275 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"bb4fe631-52f0-445f-9e4c-90f4137bdba6","Type":"ContainerStarted","Data":"4a59a842baf7a5998b965141f2b75707ae5e894067ec0fa43a6b5cf53db034ff"} Feb 27 16:29:40 crc kubenswrapper[4830]: I0227 16:29:40.388569 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-wnx5p" podStartSLOduration=12.388553175 podStartE2EDuration="12.388553175s" podCreationTimestamp="2026-02-27 16:29:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:29:40.388008941 +0000 UTC m=+1376.477281404" watchObservedRunningTime="2026-02-27 16:29:40.388553175 +0000 UTC m=+1376.477825638" Feb 27 16:29:40 crc kubenswrapper[4830]: I0227 16:29:40.415049 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-b9fgg" podStartSLOduration=2.851601773 podStartE2EDuration="32.415030433s" podCreationTimestamp="2026-02-27 16:29:08 +0000 UTC" firstStartedPulling="2026-02-27 16:29:09.626108288 +0000 UTC m=+1345.715380751" lastFinishedPulling="2026-02-27 16:29:39.189536948 +0000 UTC m=+1375.278809411" observedRunningTime="2026-02-27 16:29:40.411711893 +0000 UTC m=+1376.500984346" watchObservedRunningTime="2026-02-27 16:29:40.415030433 +0000 UTC m=+1376.504302896" Feb 27 16:29:40 crc 
kubenswrapper[4830]: I0227 16:29:40.487101 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 27 16:29:40 crc kubenswrapper[4830]: I0227 16:29:40.775983 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1434c895-fa3e-4feb-a56a-0451f1f16a3b" path="/var/lib/kubelet/pods/1434c895-fa3e-4feb-a56a-0451f1f16a3b/volumes" Feb 27 16:29:40 crc kubenswrapper[4830]: I0227 16:29:40.776964 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c2a6285-eb97-45b0-b45b-b42c78e4e2b5" path="/var/lib/kubelet/pods/6c2a6285-eb97-45b0-b45b-b42c78e4e2b5/volumes" Feb 27 16:29:41 crc kubenswrapper[4830]: I0227 16:29:41.388186 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ab89052d-19a3-4bee-8e41-3fc364424b47","Type":"ContainerStarted","Data":"b45a73023f7c179a20bfe1c108b5dba26f62e40ce6291f0fc63775365ac5f555"} Feb 27 16:29:41 crc kubenswrapper[4830]: I0227 16:29:41.388451 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ab89052d-19a3-4bee-8e41-3fc364424b47","Type":"ContainerStarted","Data":"685755162edab3a265bcc645b673533a44e0aacb72e710cd701790c8efb9a257"} Feb 27 16:29:41 crc kubenswrapper[4830]: I0227 16:29:41.390845 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"bb4fe631-52f0-445f-9e4c-90f4137bdba6","Type":"ContainerStarted","Data":"3976783388fcdce522b0afa5b0ca99a1cf893c91a02d58f8d8a5a9a4a19a9296"} Feb 27 16:29:41 crc kubenswrapper[4830]: I0227 16:29:41.423458 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=17.423439083 podStartE2EDuration="17.423439083s" podCreationTimestamp="2026-02-27 16:29:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 
UTC" observedRunningTime="2026-02-27 16:29:41.420424981 +0000 UTC m=+1377.509697444" watchObservedRunningTime="2026-02-27 16:29:41.423439083 +0000 UTC m=+1377.512711546" Feb 27 16:29:42 crc kubenswrapper[4830]: I0227 16:29:42.401156 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ab89052d-19a3-4bee-8e41-3fc364424b47","Type":"ContainerStarted","Data":"d3f9f381fd837c86af349bc5a2ece3fe00a4310cf835db8ff2d27eaf11394c2f"} Feb 27 16:29:42 crc kubenswrapper[4830]: I0227 16:29:42.405434 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"efa2d7d0-3613-4580-be80-b1a72de4501d","Type":"ContainerStarted","Data":"8d582680884d2051e1a83e581994241ae6295cb8526af869e81146a8d3d23a7c"} Feb 27 16:29:42 crc kubenswrapper[4830]: I0227 16:29:42.423693 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.423670735 podStartE2EDuration="3.423670735s" podCreationTimestamp="2026-02-27 16:29:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:29:42.416197675 +0000 UTC m=+1378.505470178" watchObservedRunningTime="2026-02-27 16:29:42.423670735 +0000 UTC m=+1378.512943198" Feb 27 16:29:43 crc kubenswrapper[4830]: I0227 16:29:43.418988 4830 generic.go:334] "Generic (PLEG): container finished" podID="67a7f858-b1fb-4547-9880-8f496d704f48" containerID="f06c98d4e511d3e89e496c04ad5a11d60444ab50c2a4dc23cb608869e9b5b98a" exitCode=0 Feb 27 16:29:43 crc kubenswrapper[4830]: I0227 16:29:43.419088 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-b9fgg" event={"ID":"67a7f858-b1fb-4547-9880-8f496d704f48","Type":"ContainerDied","Data":"f06c98d4e511d3e89e496c04ad5a11d60444ab50c2a4dc23cb608869e9b5b98a"} Feb 27 16:29:43 crc kubenswrapper[4830]: I0227 16:29:43.424035 4830 
generic.go:334] "Generic (PLEG): container finished" podID="f6ddd0b8-58b8-41b0-8555-6292f5d2d3d6" containerID="cc733946c2730a559cac7a10dc518215f583a9b93706df17edea75a23418ffdc" exitCode=0 Feb 27 16:29:43 crc kubenswrapper[4830]: I0227 16:29:43.424128 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-wnx5p" event={"ID":"f6ddd0b8-58b8-41b0-8555-6292f5d2d3d6","Type":"ContainerDied","Data":"cc733946c2730a559cac7a10dc518215f583a9b93706df17edea75a23418ffdc"} Feb 27 16:29:43 crc kubenswrapper[4830]: I0227 16:29:43.495888 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-cjq7v" podUID="1434c895-fa3e-4feb-a56a-0451f1f16a3b" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.123:5353: i/o timeout" Feb 27 16:29:44 crc kubenswrapper[4830]: I0227 16:29:44.496142 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 27 16:29:44 crc kubenswrapper[4830]: I0227 16:29:44.496214 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 27 16:29:44 crc kubenswrapper[4830]: I0227 16:29:44.542495 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 27 16:29:44 crc kubenswrapper[4830]: I0227 16:29:44.546974 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 27 16:29:45 crc kubenswrapper[4830]: I0227 16:29:45.439555 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 27 16:29:45 crc kubenswrapper[4830]: I0227 16:29:45.439806 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 27 16:29:47 crc kubenswrapper[4830]: I0227 16:29:47.124807 4830 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-wnx5p" Feb 27 16:29:47 crc kubenswrapper[4830]: I0227 16:29:47.128711 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-b9fgg" Feb 27 16:29:47 crc kubenswrapper[4830]: I0227 16:29:47.172594 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6ddd0b8-58b8-41b0-8555-6292f5d2d3d6-combined-ca-bundle\") pod \"f6ddd0b8-58b8-41b0-8555-6292f5d2d3d6\" (UID: \"f6ddd0b8-58b8-41b0-8555-6292f5d2d3d6\") " Feb 27 16:29:47 crc kubenswrapper[4830]: I0227 16:29:47.173126 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f6ddd0b8-58b8-41b0-8555-6292f5d2d3d6-credential-keys\") pod \"f6ddd0b8-58b8-41b0-8555-6292f5d2d3d6\" (UID: \"f6ddd0b8-58b8-41b0-8555-6292f5d2d3d6\") " Feb 27 16:29:47 crc kubenswrapper[4830]: I0227 16:29:47.173397 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f6ddd0b8-58b8-41b0-8555-6292f5d2d3d6-fernet-keys\") pod \"f6ddd0b8-58b8-41b0-8555-6292f5d2d3d6\" (UID: \"f6ddd0b8-58b8-41b0-8555-6292f5d2d3d6\") " Feb 27 16:29:47 crc kubenswrapper[4830]: I0227 16:29:47.173499 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6ddd0b8-58b8-41b0-8555-6292f5d2d3d6-config-data\") pod \"f6ddd0b8-58b8-41b0-8555-6292f5d2d3d6\" (UID: \"f6ddd0b8-58b8-41b0-8555-6292f5d2d3d6\") " Feb 27 16:29:47 crc kubenswrapper[4830]: I0227 16:29:47.173626 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67a7f858-b1fb-4547-9880-8f496d704f48-config-data\") pod \"67a7f858-b1fb-4547-9880-8f496d704f48\" (UID: 
\"67a7f858-b1fb-4547-9880-8f496d704f48\") " Feb 27 16:29:47 crc kubenswrapper[4830]: I0227 16:29:47.173706 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/67a7f858-b1fb-4547-9880-8f496d704f48-scripts\") pod \"67a7f858-b1fb-4547-9880-8f496d704f48\" (UID: \"67a7f858-b1fb-4547-9880-8f496d704f48\") " Feb 27 16:29:47 crc kubenswrapper[4830]: I0227 16:29:47.173777 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f6ddd0b8-58b8-41b0-8555-6292f5d2d3d6-scripts\") pod \"f6ddd0b8-58b8-41b0-8555-6292f5d2d3d6\" (UID: \"f6ddd0b8-58b8-41b0-8555-6292f5d2d3d6\") " Feb 27 16:29:47 crc kubenswrapper[4830]: I0227 16:29:47.173852 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67a7f858-b1fb-4547-9880-8f496d704f48-combined-ca-bundle\") pod \"67a7f858-b1fb-4547-9880-8f496d704f48\" (UID: \"67a7f858-b1fb-4547-9880-8f496d704f48\") " Feb 27 16:29:47 crc kubenswrapper[4830]: I0227 16:29:47.173974 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n7fz4\" (UniqueName: \"kubernetes.io/projected/67a7f858-b1fb-4547-9880-8f496d704f48-kube-api-access-n7fz4\") pod \"67a7f858-b1fb-4547-9880-8f496d704f48\" (UID: \"67a7f858-b1fb-4547-9880-8f496d704f48\") " Feb 27 16:29:47 crc kubenswrapper[4830]: I0227 16:29:47.174058 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-td6pf\" (UniqueName: \"kubernetes.io/projected/f6ddd0b8-58b8-41b0-8555-6292f5d2d3d6-kube-api-access-td6pf\") pod \"f6ddd0b8-58b8-41b0-8555-6292f5d2d3d6\" (UID: \"f6ddd0b8-58b8-41b0-8555-6292f5d2d3d6\") " Feb 27 16:29:47 crc kubenswrapper[4830]: I0227 16:29:47.174155 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/67a7f858-b1fb-4547-9880-8f496d704f48-logs\") pod \"67a7f858-b1fb-4547-9880-8f496d704f48\" (UID: \"67a7f858-b1fb-4547-9880-8f496d704f48\") " Feb 27 16:29:47 crc kubenswrapper[4830]: I0227 16:29:47.175164 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67a7f858-b1fb-4547-9880-8f496d704f48-logs" (OuterVolumeSpecName: "logs") pod "67a7f858-b1fb-4547-9880-8f496d704f48" (UID: "67a7f858-b1fb-4547-9880-8f496d704f48"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 16:29:47 crc kubenswrapper[4830]: I0227 16:29:47.184097 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6ddd0b8-58b8-41b0-8555-6292f5d2d3d6-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "f6ddd0b8-58b8-41b0-8555-6292f5d2d3d6" (UID: "f6ddd0b8-58b8-41b0-8555-6292f5d2d3d6"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:29:47 crc kubenswrapper[4830]: I0227 16:29:47.196517 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6ddd0b8-58b8-41b0-8555-6292f5d2d3d6-scripts" (OuterVolumeSpecName: "scripts") pod "f6ddd0b8-58b8-41b0-8555-6292f5d2d3d6" (UID: "f6ddd0b8-58b8-41b0-8555-6292f5d2d3d6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:29:47 crc kubenswrapper[4830]: I0227 16:29:47.196563 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6ddd0b8-58b8-41b0-8555-6292f5d2d3d6-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "f6ddd0b8-58b8-41b0-8555-6292f5d2d3d6" (UID: "f6ddd0b8-58b8-41b0-8555-6292f5d2d3d6"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:29:47 crc kubenswrapper[4830]: I0227 16:29:47.202792 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67a7f858-b1fb-4547-9880-8f496d704f48-scripts" (OuterVolumeSpecName: "scripts") pod "67a7f858-b1fb-4547-9880-8f496d704f48" (UID: "67a7f858-b1fb-4547-9880-8f496d704f48"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:29:47 crc kubenswrapper[4830]: I0227 16:29:47.203917 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6ddd0b8-58b8-41b0-8555-6292f5d2d3d6-kube-api-access-td6pf" (OuterVolumeSpecName: "kube-api-access-td6pf") pod "f6ddd0b8-58b8-41b0-8555-6292f5d2d3d6" (UID: "f6ddd0b8-58b8-41b0-8555-6292f5d2d3d6"). InnerVolumeSpecName "kube-api-access-td6pf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:29:47 crc kubenswrapper[4830]: I0227 16:29:47.204343 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67a7f858-b1fb-4547-9880-8f496d704f48-kube-api-access-n7fz4" (OuterVolumeSpecName: "kube-api-access-n7fz4") pod "67a7f858-b1fb-4547-9880-8f496d704f48" (UID: "67a7f858-b1fb-4547-9880-8f496d704f48"). InnerVolumeSpecName "kube-api-access-n7fz4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:29:47 crc kubenswrapper[4830]: I0227 16:29:47.222060 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6ddd0b8-58b8-41b0-8555-6292f5d2d3d6-config-data" (OuterVolumeSpecName: "config-data") pod "f6ddd0b8-58b8-41b0-8555-6292f5d2d3d6" (UID: "f6ddd0b8-58b8-41b0-8555-6292f5d2d3d6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:29:47 crc kubenswrapper[4830]: I0227 16:29:47.228049 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67a7f858-b1fb-4547-9880-8f496d704f48-config-data" (OuterVolumeSpecName: "config-data") pod "67a7f858-b1fb-4547-9880-8f496d704f48" (UID: "67a7f858-b1fb-4547-9880-8f496d704f48"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:29:47 crc kubenswrapper[4830]: I0227 16:29:47.240579 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67a7f858-b1fb-4547-9880-8f496d704f48-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "67a7f858-b1fb-4547-9880-8f496d704f48" (UID: "67a7f858-b1fb-4547-9880-8f496d704f48"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:29:47 crc kubenswrapper[4830]: I0227 16:29:47.245808 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6ddd0b8-58b8-41b0-8555-6292f5d2d3d6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f6ddd0b8-58b8-41b0-8555-6292f5d2d3d6" (UID: "f6ddd0b8-58b8-41b0-8555-6292f5d2d3d6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:29:47 crc kubenswrapper[4830]: I0227 16:29:47.276198 4830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6ddd0b8-58b8-41b0-8555-6292f5d2d3d6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 16:29:47 crc kubenswrapper[4830]: I0227 16:29:47.276228 4830 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f6ddd0b8-58b8-41b0-8555-6292f5d2d3d6-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 27 16:29:47 crc kubenswrapper[4830]: I0227 16:29:47.276238 4830 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f6ddd0b8-58b8-41b0-8555-6292f5d2d3d6-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 27 16:29:47 crc kubenswrapper[4830]: I0227 16:29:47.276248 4830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6ddd0b8-58b8-41b0-8555-6292f5d2d3d6-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 16:29:47 crc kubenswrapper[4830]: I0227 16:29:47.276256 4830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67a7f858-b1fb-4547-9880-8f496d704f48-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 16:29:47 crc kubenswrapper[4830]: I0227 16:29:47.276265 4830 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/67a7f858-b1fb-4547-9880-8f496d704f48-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 16:29:47 crc kubenswrapper[4830]: I0227 16:29:47.276273 4830 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f6ddd0b8-58b8-41b0-8555-6292f5d2d3d6-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 16:29:47 crc kubenswrapper[4830]: I0227 16:29:47.276281 4830 reconciler_common.go:293] "Volume detached for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67a7f858-b1fb-4547-9880-8f496d704f48-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 27 16:29:47 crc kubenswrapper[4830]: I0227 16:29:47.276289 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n7fz4\" (UniqueName: \"kubernetes.io/projected/67a7f858-b1fb-4547-9880-8f496d704f48-kube-api-access-n7fz4\") on node \"crc\" DevicePath \"\""
Feb 27 16:29:47 crc kubenswrapper[4830]: I0227 16:29:47.276298 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-td6pf\" (UniqueName: \"kubernetes.io/projected/f6ddd0b8-58b8-41b0-8555-6292f5d2d3d6-kube-api-access-td6pf\") on node \"crc\" DevicePath \"\""
Feb 27 16:29:47 crc kubenswrapper[4830]: I0227 16:29:47.276307 4830 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/67a7f858-b1fb-4547-9880-8f496d704f48-logs\") on node \"crc\" DevicePath \"\""
Feb 27 16:29:47 crc kubenswrapper[4830]: I0227 16:29:47.284288 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Feb 27 16:29:47 crc kubenswrapper[4830]: I0227 16:29:47.284679 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Feb 27 16:29:47 crc kubenswrapper[4830]: I0227 16:29:47.457479 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"efa2d7d0-3613-4580-be80-b1a72de4501d","Type":"ContainerStarted","Data":"a72f96761cb905e307618ed792f50bb4e78875f5d3b39445b8e7c5cc56c2e306"}
Feb 27 16:29:47 crc kubenswrapper[4830]: I0227 16:29:47.459740 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-wnx5p" event={"ID":"f6ddd0b8-58b8-41b0-8555-6292f5d2d3d6","Type":"ContainerDied","Data":"ecc391ee6a6772aabb604221214b79d723c56a4577c7adcfcae64c0fd81f82f3"}
Feb 27 16:29:47 crc kubenswrapper[4830]: I0227 16:29:47.459829 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ecc391ee6a6772aabb604221214b79d723c56a4577c7adcfcae64c0fd81f82f3"
Feb 27 16:29:47 crc kubenswrapper[4830]: I0227 16:29:47.459908 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-wnx5p"
Feb 27 16:29:47 crc kubenswrapper[4830]: I0227 16:29:47.468510 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-b9fgg"
Feb 27 16:29:47 crc kubenswrapper[4830]: I0227 16:29:47.468807 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-b9fgg" event={"ID":"67a7f858-b1fb-4547-9880-8f496d704f48","Type":"ContainerDied","Data":"fe6195e6d80b4607701cb9026c48088c2d230eb0b7443f4ce3245b3ef14fc6dc"}
Feb 27 16:29:47 crc kubenswrapper[4830]: I0227 16:29:47.468859 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe6195e6d80b4607701cb9026c48088c2d230eb0b7443f4ce3245b3ef14fc6dc"
Feb 27 16:29:48 crc kubenswrapper[4830]: I0227 16:29:48.226677 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-6b747d769f-z82kl"]
Feb 27 16:29:48 crc kubenswrapper[4830]: E0227 16:29:48.227977 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6ddd0b8-58b8-41b0-8555-6292f5d2d3d6" containerName="keystone-bootstrap"
Feb 27 16:29:48 crc kubenswrapper[4830]: I0227 16:29:48.227993 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6ddd0b8-58b8-41b0-8555-6292f5d2d3d6" containerName="keystone-bootstrap"
Feb 27 16:29:48 crc kubenswrapper[4830]: E0227 16:29:48.228016 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67a7f858-b1fb-4547-9880-8f496d704f48" containerName="placement-db-sync"
Feb 27 16:29:48 crc kubenswrapper[4830]: I0227 16:29:48.228025 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="67a7f858-b1fb-4547-9880-8f496d704f48" containerName="placement-db-sync"
Feb 27 16:29:48 crc kubenswrapper[4830]: I0227 16:29:48.228222 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6ddd0b8-58b8-41b0-8555-6292f5d2d3d6" containerName="keystone-bootstrap"
Feb 27 16:29:48 crc kubenswrapper[4830]: I0227 16:29:48.228243 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="67a7f858-b1fb-4547-9880-8f496d704f48" containerName="placement-db-sync"
Feb 27 16:29:48 crc kubenswrapper[4830]: I0227 16:29:48.228871 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-6b747d769f-z82kl"
Feb 27 16:29:48 crc kubenswrapper[4830]: I0227 16:29:48.233354 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Feb 27 16:29:48 crc kubenswrapper[4830]: I0227 16:29:48.234142 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc"
Feb 27 16:29:48 crc kubenswrapper[4830]: I0227 16:29:48.234466 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-zm2zz"
Feb 27 16:29:48 crc kubenswrapper[4830]: I0227 16:29:48.234659 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Feb 27 16:29:48 crc kubenswrapper[4830]: I0227 16:29:48.234777 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Feb 27 16:29:48 crc kubenswrapper[4830]: I0227 16:29:48.235230 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc"
Feb 27 16:29:48 crc kubenswrapper[4830]: I0227 16:29:48.259782 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-6b747d769f-z82kl"]
Feb 27 16:29:48 crc kubenswrapper[4830]: I0227 16:29:48.317782 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28316ca0-eb95-47b0-bc7e-d31591facdc5-config-data\") pod \"keystone-6b747d769f-z82kl\" (UID: \"28316ca0-eb95-47b0-bc7e-d31591facdc5\") " pod="openstack/keystone-6b747d769f-z82kl"
Feb 27 16:29:48 crc kubenswrapper[4830]: I0227 16:29:48.317829 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/28316ca0-eb95-47b0-bc7e-d31591facdc5-scripts\") pod \"keystone-6b747d769f-z82kl\" (UID: \"28316ca0-eb95-47b0-bc7e-d31591facdc5\") " pod="openstack/keystone-6b747d769f-z82kl"
Feb 27 16:29:48 crc kubenswrapper[4830]: I0227 16:29:48.317873 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zjxw\" (UniqueName: \"kubernetes.io/projected/28316ca0-eb95-47b0-bc7e-d31591facdc5-kube-api-access-4zjxw\") pod \"keystone-6b747d769f-z82kl\" (UID: \"28316ca0-eb95-47b0-bc7e-d31591facdc5\") " pod="openstack/keystone-6b747d769f-z82kl"
Feb 27 16:29:48 crc kubenswrapper[4830]: I0227 16:29:48.317899 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/28316ca0-eb95-47b0-bc7e-d31591facdc5-fernet-keys\") pod \"keystone-6b747d769f-z82kl\" (UID: \"28316ca0-eb95-47b0-bc7e-d31591facdc5\") " pod="openstack/keystone-6b747d769f-z82kl"
Feb 27 16:29:48 crc kubenswrapper[4830]: I0227 16:29:48.317931 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/28316ca0-eb95-47b0-bc7e-d31591facdc5-internal-tls-certs\") pod \"keystone-6b747d769f-z82kl\" (UID: \"28316ca0-eb95-47b0-bc7e-d31591facdc5\") " pod="openstack/keystone-6b747d769f-z82kl"
Feb 27 16:29:48 crc kubenswrapper[4830]: I0227 16:29:48.317963 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/28316ca0-eb95-47b0-bc7e-d31591facdc5-public-tls-certs\") pod \"keystone-6b747d769f-z82kl\" (UID: \"28316ca0-eb95-47b0-bc7e-d31591facdc5\") " pod="openstack/keystone-6b747d769f-z82kl"
Feb 27 16:29:48 crc kubenswrapper[4830]: I0227 16:29:48.317999 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28316ca0-eb95-47b0-bc7e-d31591facdc5-combined-ca-bundle\") pod \"keystone-6b747d769f-z82kl\" (UID: \"28316ca0-eb95-47b0-bc7e-d31591facdc5\") " pod="openstack/keystone-6b747d769f-z82kl"
Feb 27 16:29:48 crc kubenswrapper[4830]: I0227 16:29:48.318080 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/28316ca0-eb95-47b0-bc7e-d31591facdc5-credential-keys\") pod \"keystone-6b747d769f-z82kl\" (UID: \"28316ca0-eb95-47b0-bc7e-d31591facdc5\") " pod="openstack/keystone-6b747d769f-z82kl"
Feb 27 16:29:48 crc kubenswrapper[4830]: I0227 16:29:48.362249 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-5c55fdd8d8-tv8zp"]
Feb 27 16:29:48 crc kubenswrapper[4830]: I0227 16:29:48.366705 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5c55fdd8d8-tv8zp"
Feb 27 16:29:48 crc kubenswrapper[4830]: I0227 16:29:48.370485 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-crgzb"
Feb 27 16:29:48 crc kubenswrapper[4830]: I0227 16:29:48.370544 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts"
Feb 27 16:29:48 crc kubenswrapper[4830]: I0227 16:29:48.370630 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc"
Feb 27 16:29:48 crc kubenswrapper[4830]: I0227 16:29:48.371528 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc"
Feb 27 16:29:48 crc kubenswrapper[4830]: I0227 16:29:48.374961 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data"
Feb 27 16:29:48 crc kubenswrapper[4830]: I0227 16:29:48.388791 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5c55fdd8d8-tv8zp"]
Feb 27 16:29:48 crc kubenswrapper[4830]: I0227 16:29:48.419902 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28316ca0-eb95-47b0-bc7e-d31591facdc5-config-data\") pod \"keystone-6b747d769f-z82kl\" (UID: \"28316ca0-eb95-47b0-bc7e-d31591facdc5\") " pod="openstack/keystone-6b747d769f-z82kl"
Feb 27 16:29:48 crc kubenswrapper[4830]: I0227 16:29:48.419967 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/28316ca0-eb95-47b0-bc7e-d31591facdc5-scripts\") pod \"keystone-6b747d769f-z82kl\" (UID: \"28316ca0-eb95-47b0-bc7e-d31591facdc5\") " pod="openstack/keystone-6b747d769f-z82kl"
Feb 27 16:29:48 crc kubenswrapper[4830]: I0227 16:29:48.419999 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jzhpt\" (UniqueName: \"kubernetes.io/projected/4da01425-1614-4383-810b-ff1a89832197-kube-api-access-jzhpt\") pod \"placement-5c55fdd8d8-tv8zp\" (UID: \"4da01425-1614-4383-810b-ff1a89832197\") " pod="openstack/placement-5c55fdd8d8-tv8zp"
Feb 27 16:29:48 crc kubenswrapper[4830]: I0227 16:29:48.420019 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4da01425-1614-4383-810b-ff1a89832197-scripts\") pod \"placement-5c55fdd8d8-tv8zp\" (UID: \"4da01425-1614-4383-810b-ff1a89832197\") " pod="openstack/placement-5c55fdd8d8-tv8zp"
Feb 27 16:29:48 crc kubenswrapper[4830]: I0227 16:29:48.420102 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4zjxw\" (UniqueName: \"kubernetes.io/projected/28316ca0-eb95-47b0-bc7e-d31591facdc5-kube-api-access-4zjxw\") pod \"keystone-6b747d769f-z82kl\" (UID: \"28316ca0-eb95-47b0-bc7e-d31591facdc5\") " pod="openstack/keystone-6b747d769f-z82kl"
Feb 27 16:29:48 crc kubenswrapper[4830]: I0227 16:29:48.420127 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/28316ca0-eb95-47b0-bc7e-d31591facdc5-fernet-keys\") pod \"keystone-6b747d769f-z82kl\" (UID: \"28316ca0-eb95-47b0-bc7e-d31591facdc5\") " pod="openstack/keystone-6b747d769f-z82kl"
Feb 27 16:29:48 crc kubenswrapper[4830]: I0227 16:29:48.420786 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4da01425-1614-4383-810b-ff1a89832197-internal-tls-certs\") pod \"placement-5c55fdd8d8-tv8zp\" (UID: \"4da01425-1614-4383-810b-ff1a89832197\") " pod="openstack/placement-5c55fdd8d8-tv8zp"
Feb 27 16:29:48 crc kubenswrapper[4830]: I0227 16:29:48.420854 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/28316ca0-eb95-47b0-bc7e-d31591facdc5-internal-tls-certs\") pod \"keystone-6b747d769f-z82kl\" (UID: \"28316ca0-eb95-47b0-bc7e-d31591facdc5\") " pod="openstack/keystone-6b747d769f-z82kl"
Feb 27 16:29:48 crc kubenswrapper[4830]: I0227 16:29:48.420876 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/28316ca0-eb95-47b0-bc7e-d31591facdc5-public-tls-certs\") pod \"keystone-6b747d769f-z82kl\" (UID: \"28316ca0-eb95-47b0-bc7e-d31591facdc5\") " pod="openstack/keystone-6b747d769f-z82kl"
Feb 27 16:29:48 crc kubenswrapper[4830]: I0227 16:29:48.421139 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4da01425-1614-4383-810b-ff1a89832197-logs\") pod \"placement-5c55fdd8d8-tv8zp\" (UID: \"4da01425-1614-4383-810b-ff1a89832197\") " pod="openstack/placement-5c55fdd8d8-tv8zp"
Feb 27 16:29:48 crc kubenswrapper[4830]: I0227 16:29:48.421188 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4da01425-1614-4383-810b-ff1a89832197-combined-ca-bundle\") pod \"placement-5c55fdd8d8-tv8zp\" (UID: \"4da01425-1614-4383-810b-ff1a89832197\") " pod="openstack/placement-5c55fdd8d8-tv8zp"
Feb 27 16:29:48 crc kubenswrapper[4830]: I0227 16:29:48.421213 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28316ca0-eb95-47b0-bc7e-d31591facdc5-combined-ca-bundle\") pod \"keystone-6b747d769f-z82kl\" (UID: \"28316ca0-eb95-47b0-bc7e-d31591facdc5\") " pod="openstack/keystone-6b747d769f-z82kl"
Feb 27 16:29:48 crc kubenswrapper[4830]: I0227 16:29:48.421235 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4da01425-1614-4383-810b-ff1a89832197-config-data\") pod \"placement-5c55fdd8d8-tv8zp\" (UID: \"4da01425-1614-4383-810b-ff1a89832197\") " pod="openstack/placement-5c55fdd8d8-tv8zp"
Feb 27 16:29:48 crc kubenswrapper[4830]: I0227 16:29:48.421259 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4da01425-1614-4383-810b-ff1a89832197-public-tls-certs\") pod \"placement-5c55fdd8d8-tv8zp\" (UID: \"4da01425-1614-4383-810b-ff1a89832197\") " pod="openstack/placement-5c55fdd8d8-tv8zp"
Feb 27 16:29:48 crc kubenswrapper[4830]: I0227 16:29:48.421296 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/28316ca0-eb95-47b0-bc7e-d31591facdc5-credential-keys\") pod \"keystone-6b747d769f-z82kl\" (UID: \"28316ca0-eb95-47b0-bc7e-d31591facdc5\") " pod="openstack/keystone-6b747d769f-z82kl"
Feb 27 16:29:48 crc kubenswrapper[4830]: I0227 16:29:48.434739 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/28316ca0-eb95-47b0-bc7e-d31591facdc5-scripts\") pod \"keystone-6b747d769f-z82kl\" (UID: \"28316ca0-eb95-47b0-bc7e-d31591facdc5\") " pod="openstack/keystone-6b747d769f-z82kl"
Feb 27 16:29:48 crc kubenswrapper[4830]: I0227 16:29:48.441530 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/28316ca0-eb95-47b0-bc7e-d31591facdc5-fernet-keys\") pod \"keystone-6b747d769f-z82kl\" (UID: \"28316ca0-eb95-47b0-bc7e-d31591facdc5\") " pod="openstack/keystone-6b747d769f-z82kl"
Feb 27 16:29:48 crc kubenswrapper[4830]: I0227 16:29:48.445549 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/28316ca0-eb95-47b0-bc7e-d31591facdc5-internal-tls-certs\") pod \"keystone-6b747d769f-z82kl\" (UID: \"28316ca0-eb95-47b0-bc7e-d31591facdc5\") " pod="openstack/keystone-6b747d769f-z82kl"
Feb 27 16:29:48 crc kubenswrapper[4830]: I0227 16:29:48.445596 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28316ca0-eb95-47b0-bc7e-d31591facdc5-config-data\") pod \"keystone-6b747d769f-z82kl\" (UID: \"28316ca0-eb95-47b0-bc7e-d31591facdc5\") " pod="openstack/keystone-6b747d769f-z82kl"
Feb 27 16:29:48 crc kubenswrapper[4830]: I0227 16:29:48.447720 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/28316ca0-eb95-47b0-bc7e-d31591facdc5-public-tls-certs\") pod \"keystone-6b747d769f-z82kl\" (UID: \"28316ca0-eb95-47b0-bc7e-d31591facdc5\") " pod="openstack/keystone-6b747d769f-z82kl"
Feb 27 16:29:48 crc kubenswrapper[4830]: I0227 16:29:48.451366 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/28316ca0-eb95-47b0-bc7e-d31591facdc5-credential-keys\") pod \"keystone-6b747d769f-z82kl\" (UID: \"28316ca0-eb95-47b0-bc7e-d31591facdc5\") " pod="openstack/keystone-6b747d769f-z82kl"
Feb 27 16:29:48 crc kubenswrapper[4830]: I0227 16:29:48.462130 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28316ca0-eb95-47b0-bc7e-d31591facdc5-combined-ca-bundle\") pod \"keystone-6b747d769f-z82kl\" (UID: \"28316ca0-eb95-47b0-bc7e-d31591facdc5\") " pod="openstack/keystone-6b747d769f-z82kl"
Feb 27 16:29:48 crc kubenswrapper[4830]: I0227 16:29:48.508582 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4zjxw\" (UniqueName: \"kubernetes.io/projected/28316ca0-eb95-47b0-bc7e-d31591facdc5-kube-api-access-4zjxw\") pod \"keystone-6b747d769f-z82kl\" (UID: \"28316ca0-eb95-47b0-bc7e-d31591facdc5\") " pod="openstack/keystone-6b747d769f-z82kl"
Feb 27 16:29:48 crc kubenswrapper[4830]: I0227 16:29:48.522523 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jzhpt\" (UniqueName: \"kubernetes.io/projected/4da01425-1614-4383-810b-ff1a89832197-kube-api-access-jzhpt\") pod \"placement-5c55fdd8d8-tv8zp\" (UID: \"4da01425-1614-4383-810b-ff1a89832197\") " pod="openstack/placement-5c55fdd8d8-tv8zp"
Feb 27 16:29:48 crc kubenswrapper[4830]: I0227 16:29:48.522570 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4da01425-1614-4383-810b-ff1a89832197-scripts\") pod \"placement-5c55fdd8d8-tv8zp\" (UID: \"4da01425-1614-4383-810b-ff1a89832197\") " pod="openstack/placement-5c55fdd8d8-tv8zp"
Feb 27 16:29:48 crc kubenswrapper[4830]: I0227 16:29:48.522625 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4da01425-1614-4383-810b-ff1a89832197-internal-tls-certs\") pod \"placement-5c55fdd8d8-tv8zp\" (UID: \"4da01425-1614-4383-810b-ff1a89832197\") " pod="openstack/placement-5c55fdd8d8-tv8zp"
Feb 27 16:29:48 crc kubenswrapper[4830]: I0227 16:29:48.522655 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4da01425-1614-4383-810b-ff1a89832197-logs\") pod \"placement-5c55fdd8d8-tv8zp\" (UID: \"4da01425-1614-4383-810b-ff1a89832197\") " pod="openstack/placement-5c55fdd8d8-tv8zp"
Feb 27 16:29:48 crc kubenswrapper[4830]: I0227 16:29:48.522695 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4da01425-1614-4383-810b-ff1a89832197-combined-ca-bundle\") pod \"placement-5c55fdd8d8-tv8zp\" (UID: \"4da01425-1614-4383-810b-ff1a89832197\") " pod="openstack/placement-5c55fdd8d8-tv8zp"
Feb 27 16:29:48 crc kubenswrapper[4830]: I0227 16:29:48.522728 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4da01425-1614-4383-810b-ff1a89832197-config-data\") pod \"placement-5c55fdd8d8-tv8zp\" (UID: \"4da01425-1614-4383-810b-ff1a89832197\") " pod="openstack/placement-5c55fdd8d8-tv8zp"
Feb 27 16:29:48 crc kubenswrapper[4830]: I0227 16:29:48.522750 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4da01425-1614-4383-810b-ff1a89832197-public-tls-certs\") pod \"placement-5c55fdd8d8-tv8zp\" (UID: \"4da01425-1614-4383-810b-ff1a89832197\") " pod="openstack/placement-5c55fdd8d8-tv8zp"
Feb 27 16:29:48 crc kubenswrapper[4830]: I0227 16:29:48.524648 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4da01425-1614-4383-810b-ff1a89832197-logs\") pod \"placement-5c55fdd8d8-tv8zp\" (UID: \"4da01425-1614-4383-810b-ff1a89832197\") " pod="openstack/placement-5c55fdd8d8-tv8zp"
Feb 27 16:29:48 crc kubenswrapper[4830]: I0227 16:29:48.527732 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4da01425-1614-4383-810b-ff1a89832197-config-data\") pod \"placement-5c55fdd8d8-tv8zp\" (UID: \"4da01425-1614-4383-810b-ff1a89832197\") " pod="openstack/placement-5c55fdd8d8-tv8zp"
Feb 27 16:29:48 crc kubenswrapper[4830]: I0227 16:29:48.531858 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4da01425-1614-4383-810b-ff1a89832197-scripts\") pod \"placement-5c55fdd8d8-tv8zp\" (UID: \"4da01425-1614-4383-810b-ff1a89832197\") " pod="openstack/placement-5c55fdd8d8-tv8zp"
Feb 27 16:29:48 crc kubenswrapper[4830]: I0227 16:29:48.532401 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4da01425-1614-4383-810b-ff1a89832197-internal-tls-certs\") pod \"placement-5c55fdd8d8-tv8zp\" (UID: \"4da01425-1614-4383-810b-ff1a89832197\") " pod="openstack/placement-5c55fdd8d8-tv8zp"
Feb 27 16:29:48 crc kubenswrapper[4830]: I0227 16:29:48.541274 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4da01425-1614-4383-810b-ff1a89832197-public-tls-certs\") pod \"placement-5c55fdd8d8-tv8zp\" (UID: \"4da01425-1614-4383-810b-ff1a89832197\") " pod="openstack/placement-5c55fdd8d8-tv8zp"
Feb 27 16:29:48 crc kubenswrapper[4830]: I0227 16:29:48.542647 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4da01425-1614-4383-810b-ff1a89832197-combined-ca-bundle\") pod \"placement-5c55fdd8d8-tv8zp\" (UID: \"4da01425-1614-4383-810b-ff1a89832197\") " pod="openstack/placement-5c55fdd8d8-tv8zp"
Feb 27 16:29:48 crc kubenswrapper[4830]: I0227 16:29:48.545774 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jzhpt\" (UniqueName: \"kubernetes.io/projected/4da01425-1614-4383-810b-ff1a89832197-kube-api-access-jzhpt\") pod \"placement-5c55fdd8d8-tv8zp\" (UID: \"4da01425-1614-4383-810b-ff1a89832197\") " pod="openstack/placement-5c55fdd8d8-tv8zp"
Feb 27 16:29:48 crc kubenswrapper[4830]: I0227 16:29:48.548255 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-6b747d769f-z82kl"
Feb 27 16:29:48 crc kubenswrapper[4830]: I0227 16:29:48.637751 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-58db7bd5dd-jr8zt"]
Feb 27 16:29:48 crc kubenswrapper[4830]: I0227 16:29:48.639301 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-58db7bd5dd-jr8zt"
Feb 27 16:29:48 crc kubenswrapper[4830]: I0227 16:29:48.666240 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-58db7bd5dd-jr8zt"]
Feb 27 16:29:48 crc kubenswrapper[4830]: I0227 16:29:48.682674 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5c55fdd8d8-tv8zp"
Feb 27 16:29:48 crc kubenswrapper[4830]: I0227 16:29:48.726834 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf57a5ff-eb3d-4f4b-8ac0-0aac8013fbbf-scripts\") pod \"placement-58db7bd5dd-jr8zt\" (UID: \"bf57a5ff-eb3d-4f4b-8ac0-0aac8013fbbf\") " pod="openstack/placement-58db7bd5dd-jr8zt"
Feb 27 16:29:48 crc kubenswrapper[4830]: I0227 16:29:48.726890 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf57a5ff-eb3d-4f4b-8ac0-0aac8013fbbf-public-tls-certs\") pod \"placement-58db7bd5dd-jr8zt\" (UID: \"bf57a5ff-eb3d-4f4b-8ac0-0aac8013fbbf\") " pod="openstack/placement-58db7bd5dd-jr8zt"
Feb 27 16:29:48 crc kubenswrapper[4830]: I0227 16:29:48.726922 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf57a5ff-eb3d-4f4b-8ac0-0aac8013fbbf-internal-tls-certs\") pod \"placement-58db7bd5dd-jr8zt\" (UID: \"bf57a5ff-eb3d-4f4b-8ac0-0aac8013fbbf\") " pod="openstack/placement-58db7bd5dd-jr8zt"
Feb 27 16:29:48 crc kubenswrapper[4830]: I0227 16:29:48.726975 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf57a5ff-eb3d-4f4b-8ac0-0aac8013fbbf-config-data\") pod \"placement-58db7bd5dd-jr8zt\" (UID: \"bf57a5ff-eb3d-4f4b-8ac0-0aac8013fbbf\") " pod="openstack/placement-58db7bd5dd-jr8zt"
Feb 27 16:29:48 crc kubenswrapper[4830]: I0227 16:29:48.727139 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zldns\" (UniqueName: \"kubernetes.io/projected/bf57a5ff-eb3d-4f4b-8ac0-0aac8013fbbf-kube-api-access-zldns\") pod \"placement-58db7bd5dd-jr8zt\" (UID: \"bf57a5ff-eb3d-4f4b-8ac0-0aac8013fbbf\") " pod="openstack/placement-58db7bd5dd-jr8zt"
Feb 27 16:29:48 crc kubenswrapper[4830]: I0227 16:29:48.727206 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bf57a5ff-eb3d-4f4b-8ac0-0aac8013fbbf-logs\") pod \"placement-58db7bd5dd-jr8zt\" (UID: \"bf57a5ff-eb3d-4f4b-8ac0-0aac8013fbbf\") " pod="openstack/placement-58db7bd5dd-jr8zt"
Feb 27 16:29:48 crc kubenswrapper[4830]: I0227 16:29:48.727254 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf57a5ff-eb3d-4f4b-8ac0-0aac8013fbbf-combined-ca-bundle\") pod \"placement-58db7bd5dd-jr8zt\" (UID: \"bf57a5ff-eb3d-4f4b-8ac0-0aac8013fbbf\") " pod="openstack/placement-58db7bd5dd-jr8zt"
Feb 27 16:29:48 crc kubenswrapper[4830]: I0227 16:29:48.830576 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zldns\" (UniqueName: \"kubernetes.io/projected/bf57a5ff-eb3d-4f4b-8ac0-0aac8013fbbf-kube-api-access-zldns\") pod \"placement-58db7bd5dd-jr8zt\" (UID: \"bf57a5ff-eb3d-4f4b-8ac0-0aac8013fbbf\") " pod="openstack/placement-58db7bd5dd-jr8zt"
Feb 27 16:29:48 crc kubenswrapper[4830]: I0227 16:29:48.830888 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bf57a5ff-eb3d-4f4b-8ac0-0aac8013fbbf-logs\") pod \"placement-58db7bd5dd-jr8zt\" (UID: \"bf57a5ff-eb3d-4f4b-8ac0-0aac8013fbbf\") " pod="openstack/placement-58db7bd5dd-jr8zt"
Feb 27 16:29:48 crc kubenswrapper[4830]: I0227 16:29:48.831003 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf57a5ff-eb3d-4f4b-8ac0-0aac8013fbbf-combined-ca-bundle\") pod \"placement-58db7bd5dd-jr8zt\" (UID: \"bf57a5ff-eb3d-4f4b-8ac0-0aac8013fbbf\") " pod="openstack/placement-58db7bd5dd-jr8zt"
Feb 27 16:29:48 crc kubenswrapper[4830]: I0227 16:29:48.831049 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf57a5ff-eb3d-4f4b-8ac0-0aac8013fbbf-scripts\") pod \"placement-58db7bd5dd-jr8zt\" (UID: \"bf57a5ff-eb3d-4f4b-8ac0-0aac8013fbbf\") " pod="openstack/placement-58db7bd5dd-jr8zt"
Feb 27 16:29:48 crc kubenswrapper[4830]: I0227 16:29:48.831195 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf57a5ff-eb3d-4f4b-8ac0-0aac8013fbbf-public-tls-certs\") pod \"placement-58db7bd5dd-jr8zt\" (UID: \"bf57a5ff-eb3d-4f4b-8ac0-0aac8013fbbf\") " pod="openstack/placement-58db7bd5dd-jr8zt"
Feb 27 16:29:48 crc kubenswrapper[4830]: I0227 16:29:48.831273 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf57a5ff-eb3d-4f4b-8ac0-0aac8013fbbf-internal-tls-certs\") pod \"placement-58db7bd5dd-jr8zt\" (UID: \"bf57a5ff-eb3d-4f4b-8ac0-0aac8013fbbf\") " pod="openstack/placement-58db7bd5dd-jr8zt"
Feb 27 16:29:48 crc kubenswrapper[4830]: I0227 16:29:48.831413 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf57a5ff-eb3d-4f4b-8ac0-0aac8013fbbf-config-data\") pod \"placement-58db7bd5dd-jr8zt\" (UID: \"bf57a5ff-eb3d-4f4b-8ac0-0aac8013fbbf\") " pod="openstack/placement-58db7bd5dd-jr8zt"
Feb 27 16:29:48 crc kubenswrapper[4830]: I0227 16:29:48.834008 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bf57a5ff-eb3d-4f4b-8ac0-0aac8013fbbf-logs\") pod \"placement-58db7bd5dd-jr8zt\" (UID: \"bf57a5ff-eb3d-4f4b-8ac0-0aac8013fbbf\") " pod="openstack/placement-58db7bd5dd-jr8zt"
Feb 27 16:29:48 crc kubenswrapper[4830]: I0227 16:29:48.839241 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf57a5ff-eb3d-4f4b-8ac0-0aac8013fbbf-scripts\") pod \"placement-58db7bd5dd-jr8zt\" (UID: \"bf57a5ff-eb3d-4f4b-8ac0-0aac8013fbbf\") " pod="openstack/placement-58db7bd5dd-jr8zt"
Feb 27 16:29:48 crc kubenswrapper[4830]: I0227 16:29:48.839783 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf57a5ff-eb3d-4f4b-8ac0-0aac8013fbbf-internal-tls-certs\") pod \"placement-58db7bd5dd-jr8zt\" (UID: \"bf57a5ff-eb3d-4f4b-8ac0-0aac8013fbbf\") " pod="openstack/placement-58db7bd5dd-jr8zt"
Feb 27 16:29:48 crc kubenswrapper[4830]: I0227 16:29:48.841785 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf57a5ff-eb3d-4f4b-8ac0-0aac8013fbbf-combined-ca-bundle\") pod \"placement-58db7bd5dd-jr8zt\" (UID: \"bf57a5ff-eb3d-4f4b-8ac0-0aac8013fbbf\") " pod="openstack/placement-58db7bd5dd-jr8zt"
Feb 27 16:29:48 crc kubenswrapper[4830]: I0227 16:29:48.844549 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf57a5ff-eb3d-4f4b-8ac0-0aac8013fbbf-config-data\") pod \"placement-58db7bd5dd-jr8zt\" (UID: \"bf57a5ff-eb3d-4f4b-8ac0-0aac8013fbbf\") " pod="openstack/placement-58db7bd5dd-jr8zt"
Feb 27 16:29:48 crc kubenswrapper[4830]: I0227 16:29:48.847619 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf57a5ff-eb3d-4f4b-8ac0-0aac8013fbbf-public-tls-certs\") pod \"placement-58db7bd5dd-jr8zt\" (UID: \"bf57a5ff-eb3d-4f4b-8ac0-0aac8013fbbf\") " pod="openstack/placement-58db7bd5dd-jr8zt"
Feb 27 16:29:48 crc kubenswrapper[4830]: I0227 16:29:48.851416 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zldns\" (UniqueName: \"kubernetes.io/projected/bf57a5ff-eb3d-4f4b-8ac0-0aac8013fbbf-kube-api-access-zldns\") pod \"placement-58db7bd5dd-jr8zt\" (UID: \"bf57a5ff-eb3d-4f4b-8ac0-0aac8013fbbf\") " pod="openstack/placement-58db7bd5dd-jr8zt"
Feb 27 16:29:48 crc kubenswrapper[4830]: I0227 16:29:48.960067 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-58db7bd5dd-jr8zt"
Feb 27 16:29:49 crc kubenswrapper[4830]: I0227 16:29:49.063711 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-6b747d769f-z82kl"]
Feb 27 16:29:49 crc kubenswrapper[4830]: I0227 16:29:49.185493 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5c55fdd8d8-tv8zp"]
Feb 27 16:29:49 crc kubenswrapper[4830]: W0227 16:29:49.204926 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4da01425_1614_4383_810b_ff1a89832197.slice/crio-b233ee8776a60535dfe76755e1d36fbed27ebca58c1784af3bb02bec34cc6e3a WatchSource:0}: Error finding container b233ee8776a60535dfe76755e1d36fbed27ebca58c1784af3bb02bec34cc6e3a: Status 404 returned error can't find the container with id b233ee8776a60535dfe76755e1d36fbed27ebca58c1784af3bb02bec34cc6e3a
Feb 27 16:29:49 crc kubenswrapper[4830]: I0227 16:29:49.458211 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-58db7bd5dd-jr8zt"]
Feb 27 16:29:49 crc kubenswrapper[4830]: I0227 16:29:49.508826 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5c55fdd8d8-tv8zp" event={"ID":"4da01425-1614-4383-810b-ff1a89832197","Type":"ContainerStarted","Data":"b233ee8776a60535dfe76755e1d36fbed27ebca58c1784af3bb02bec34cc6e3a"}
Feb 27 16:29:49 crc kubenswrapper[4830]: I0227 16:29:49.514739 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-58db7bd5dd-jr8zt" event={"ID":"bf57a5ff-eb3d-4f4b-8ac0-0aac8013fbbf","Type":"ContainerStarted","Data":"ea5deab6a6c50b1124f89741ffb33d0a3789c9617f1574cfb25f4be315dbf7e6"}
Feb 27 16:29:49 crc kubenswrapper[4830]: I0227 16:29:49.515909 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6b747d769f-z82kl" event={"ID":"28316ca0-eb95-47b0-bc7e-d31591facdc5","Type":"ContainerStarted","Data":"c34db6546cf7c5a207be75e43da86cdbe0ee1689c79b1c3a34a3de47326a4399"}
Feb 27 16:29:49 crc kubenswrapper[4830]: I0227 16:29:49.953111 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Feb 27 16:29:49 crc kubenswrapper[4830]: I0227 16:29:49.953372 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Feb 27 16:29:49 crc kubenswrapper[4830]: I0227 16:29:49.992419 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Feb 27 16:29:50 crc kubenswrapper[4830]: I0227 16:29:50.001264 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Feb 27 16:29:50 crc kubenswrapper[4830]: I0227 16:29:50.526690 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-58db7bd5dd-jr8zt" event={"ID":"bf57a5ff-eb3d-4f4b-8ac0-0aac8013fbbf","Type":"ContainerStarted","Data":"b4c2a77141370e51625fa6bf385bb1eb77fc6e2be81322189a2da160e42e03d0"}
Feb 27 16:29:50 crc kubenswrapper[4830]: I0227 16:29:50.527021 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-58db7bd5dd-jr8zt" event={"ID":"bf57a5ff-eb3d-4f4b-8ac0-0aac8013fbbf","Type":"ContainerStarted","Data":"4ad340ff7e5d3dcbe59313ae7a759101ba1b8edf59a86c29f287b2cb3edf2de6"}
Feb 27 16:29:50 crc kubenswrapper[4830]: I0227 16:29:50.527038 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-58db7bd5dd-jr8zt"
Feb 27 16:29:50 crc kubenswrapper[4830]: I0227 16:29:50.527050 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-58db7bd5dd-jr8zt"
Feb 27 16:29:50 crc kubenswrapper[4830]: I0227 16:29:50.529504 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6b747d769f-z82kl" event={"ID":"28316ca0-eb95-47b0-bc7e-d31591facdc5","Type":"ContainerStarted","Data":"0222fc9c68ebb7ebbcbccfa2809183acfbfef310f1d1faa28bd88a72fb86cf67"}
Feb 27 16:29:50 crc kubenswrapper[4830]: I0227 16:29:50.529634 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-6b747d769f-z82kl"
Feb 27 16:29:50 crc kubenswrapper[4830]: I0227 16:29:50.533569 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5c55fdd8d8-tv8zp" event={"ID":"4da01425-1614-4383-810b-ff1a89832197","Type":"ContainerStarted","Data":"e13c29a20be2e0d3d8fc761683b1442db2bdcb77f337bbbcc9f2f64f166246d2"}
Feb 27 16:29:50 crc kubenswrapper[4830]: I0227 16:29:50.533610 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5c55fdd8d8-tv8zp" event={"ID":"4da01425-1614-4383-810b-ff1a89832197","Type":"ContainerStarted","Data":"cdc9a3387155de07c2eb226e985c0ee25a06eea0632b208c44e4fcc19ec8c984"}
Feb 27 16:29:50 crc kubenswrapper[4830]: I0227 16:29:50.533626 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Feb 27 16:29:50 crc kubenswrapper[4830]: I0227 16:29:50.533913 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-5c55fdd8d8-tv8zp"
Feb 27 16:29:50 crc kubenswrapper[4830]: I0227 16:29:50.535391 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Feb 27 16:29:50 crc kubenswrapper[4830]: I0227 16:29:50.535420 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-5c55fdd8d8-tv8zp"
Feb 27 16:29:50 crc kubenswrapper[4830]: I0227 16:29:50.552442 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-58db7bd5dd-jr8zt" podStartSLOduration=2.552424565 podStartE2EDuration="2.552424565s" podCreationTimestamp="2026-02-27 16:29:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:29:50.550224152 +0000 UTC m=+1386.639496605" watchObservedRunningTime="2026-02-27 16:29:50.552424565 +0000 UTC m=+1386.641697028"
Feb 27 16:29:50 crc kubenswrapper[4830]: I0227 16:29:50.568356 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-5c55fdd8d8-tv8zp" podStartSLOduration=2.5683372589999998 podStartE2EDuration="2.568337259s" podCreationTimestamp="2026-02-27 16:29:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:29:50.567584021 +0000 UTC m=+1386.656856484" watchObservedRunningTime="2026-02-27 16:29:50.568337259 +0000 UTC m=+1386.657609722"
Feb 27 16:29:51 crc kubenswrapper[4830]: I0227 16:29:51.782395 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-6b747d769f-z82kl" podStartSLOduration=3.782377067 podStartE2EDuration="3.782377067s" podCreationTimestamp="2026-02-27 16:29:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:29:50.5866407 +0000 UTC m=+1386.675913163" watchObservedRunningTime="2026-02-27
16:29:51.782377067 +0000 UTC m=+1387.871649530" Feb 27 16:29:52 crc kubenswrapper[4830]: I0227 16:29:52.320254 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 27 16:29:52 crc kubenswrapper[4830]: I0227 16:29:52.377552 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 27 16:30:00 crc kubenswrapper[4830]: I0227 16:30:00.208575 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536830-gwpcb"] Feb 27 16:30:00 crc kubenswrapper[4830]: I0227 16:30:00.210160 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536830-gwpcb" Feb 27 16:30:00 crc kubenswrapper[4830]: I0227 16:30:00.215350 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 16:30:00 crc kubenswrapper[4830]: I0227 16:30:00.215522 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lmbtm" Feb 27 16:30:00 crc kubenswrapper[4830]: I0227 16:30:00.220330 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 16:30:00 crc kubenswrapper[4830]: I0227 16:30:00.231156 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536830-9wp9k"] Feb 27 16:30:00 crc kubenswrapper[4830]: I0227 16:30:00.232251 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29536830-9wp9k" Feb 27 16:30:00 crc kubenswrapper[4830]: I0227 16:30:00.236373 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 27 16:30:00 crc kubenswrapper[4830]: I0227 16:30:00.236589 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 27 16:30:00 crc kubenswrapper[4830]: I0227 16:30:00.257863 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536830-gwpcb"] Feb 27 16:30:00 crc kubenswrapper[4830]: I0227 16:30:00.274003 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536830-9wp9k"] Feb 27 16:30:00 crc kubenswrapper[4830]: I0227 16:30:00.286709 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmkk6\" (UniqueName: \"kubernetes.io/projected/4827561e-f60d-4b02-b4c6-7af50ab350ce-kube-api-access-hmkk6\") pod \"collect-profiles-29536830-9wp9k\" (UID: \"4827561e-f60d-4b02-b4c6-7af50ab350ce\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536830-9wp9k" Feb 27 16:30:00 crc kubenswrapper[4830]: I0227 16:30:00.286786 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4827561e-f60d-4b02-b4c6-7af50ab350ce-config-volume\") pod \"collect-profiles-29536830-9wp9k\" (UID: \"4827561e-f60d-4b02-b4c6-7af50ab350ce\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536830-9wp9k" Feb 27 16:30:00 crc kubenswrapper[4830]: I0227 16:30:00.286831 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xp8c\" (UniqueName: 
\"kubernetes.io/projected/1141b071-f448-4a3f-b062-0255dd5dc38a-kube-api-access-5xp8c\") pod \"auto-csr-approver-29536830-gwpcb\" (UID: \"1141b071-f448-4a3f-b062-0255dd5dc38a\") " pod="openshift-infra/auto-csr-approver-29536830-gwpcb" Feb 27 16:30:00 crc kubenswrapper[4830]: I0227 16:30:00.286852 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4827561e-f60d-4b02-b4c6-7af50ab350ce-secret-volume\") pod \"collect-profiles-29536830-9wp9k\" (UID: \"4827561e-f60d-4b02-b4c6-7af50ab350ce\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536830-9wp9k" Feb 27 16:30:00 crc kubenswrapper[4830]: I0227 16:30:00.387916 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4827561e-f60d-4b02-b4c6-7af50ab350ce-config-volume\") pod \"collect-profiles-29536830-9wp9k\" (UID: \"4827561e-f60d-4b02-b4c6-7af50ab350ce\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536830-9wp9k" Feb 27 16:30:00 crc kubenswrapper[4830]: I0227 16:30:00.388018 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xp8c\" (UniqueName: \"kubernetes.io/projected/1141b071-f448-4a3f-b062-0255dd5dc38a-kube-api-access-5xp8c\") pod \"auto-csr-approver-29536830-gwpcb\" (UID: \"1141b071-f448-4a3f-b062-0255dd5dc38a\") " pod="openshift-infra/auto-csr-approver-29536830-gwpcb" Feb 27 16:30:00 crc kubenswrapper[4830]: I0227 16:30:00.388046 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4827561e-f60d-4b02-b4c6-7af50ab350ce-secret-volume\") pod \"collect-profiles-29536830-9wp9k\" (UID: \"4827561e-f60d-4b02-b4c6-7af50ab350ce\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536830-9wp9k" Feb 27 16:30:00 crc kubenswrapper[4830]: I0227 16:30:00.388119 4830 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hmkk6\" (UniqueName: \"kubernetes.io/projected/4827561e-f60d-4b02-b4c6-7af50ab350ce-kube-api-access-hmkk6\") pod \"collect-profiles-29536830-9wp9k\" (UID: \"4827561e-f60d-4b02-b4c6-7af50ab350ce\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536830-9wp9k" Feb 27 16:30:00 crc kubenswrapper[4830]: I0227 16:30:00.388779 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4827561e-f60d-4b02-b4c6-7af50ab350ce-config-volume\") pod \"collect-profiles-29536830-9wp9k\" (UID: \"4827561e-f60d-4b02-b4c6-7af50ab350ce\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536830-9wp9k" Feb 27 16:30:00 crc kubenswrapper[4830]: I0227 16:30:00.394561 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4827561e-f60d-4b02-b4c6-7af50ab350ce-secret-volume\") pod \"collect-profiles-29536830-9wp9k\" (UID: \"4827561e-f60d-4b02-b4c6-7af50ab350ce\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536830-9wp9k" Feb 27 16:30:00 crc kubenswrapper[4830]: I0227 16:30:00.402655 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hmkk6\" (UniqueName: \"kubernetes.io/projected/4827561e-f60d-4b02-b4c6-7af50ab350ce-kube-api-access-hmkk6\") pod \"collect-profiles-29536830-9wp9k\" (UID: \"4827561e-f60d-4b02-b4c6-7af50ab350ce\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536830-9wp9k" Feb 27 16:30:00 crc kubenswrapper[4830]: I0227 16:30:00.409519 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xp8c\" (UniqueName: \"kubernetes.io/projected/1141b071-f448-4a3f-b062-0255dd5dc38a-kube-api-access-5xp8c\") pod \"auto-csr-approver-29536830-gwpcb\" (UID: \"1141b071-f448-4a3f-b062-0255dd5dc38a\") " 
pod="openshift-infra/auto-csr-approver-29536830-gwpcb" Feb 27 16:30:00 crc kubenswrapper[4830]: I0227 16:30:00.543075 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536830-gwpcb" Feb 27 16:30:00 crc kubenswrapper[4830]: I0227 16:30:00.588839 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29536830-9wp9k" Feb 27 16:30:03 crc kubenswrapper[4830]: I0227 16:30:03.160662 4830 patch_prober.go:28] interesting pod/machine-config-daemon-2tv5v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 16:30:03 crc kubenswrapper[4830]: I0227 16:30:03.160983 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 16:30:03 crc kubenswrapper[4830]: I0227 16:30:03.161045 4830 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" Feb 27 16:30:03 crc kubenswrapper[4830]: I0227 16:30:03.161711 4830 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4451f44bd5a230af740184dd479b8e8cef56c8f4c478f47a91288db9cb943456"} pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 27 16:30:03 crc kubenswrapper[4830]: I0227 16:30:03.161797 4830 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" containerName="machine-config-daemon" containerID="cri-o://4451f44bd5a230af740184dd479b8e8cef56c8f4c478f47a91288db9cb943456" gracePeriod=600 Feb 27 16:30:03 crc kubenswrapper[4830]: E0227 16:30:03.378973 4830 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/ubi9/httpd-24:latest" Feb 27 16:30:03 crc kubenswrapper[4830]: E0227 16:30:03.379434 4830 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:proxy-httpd,Image:registry.redhat.io/ubi9/httpd-24:latest,Command:[/usr/sbin/httpd],Args:[-DFOREGROUND],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:proxy-httpd,HostPort:0,ContainerPort:3000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf/httpd.conf,SubPath:httpd.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf.d/ssl.conf,SubPath:ssl.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:run-httpd,ReadOnly:false,MountPath:/run/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:log-httpd,ReadOnly:false,MountPath:/var/log/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8s6nl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},Li
venessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(efa2d7d0-3613-4580-be80-b1a72de4501d): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 27 16:30:03 crc kubenswrapper[4830]: E0227 16:30:03.380855 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="efa2d7d0-3613-4580-be80-b1a72de4501d" Feb 27 16:30:03 crc kubenswrapper[4830]: I0227 16:30:03.637298 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536830-9wp9k"] Feb 
27 16:30:03 crc kubenswrapper[4830]: I0227 16:30:03.650811 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536830-gwpcb"] Feb 27 16:30:03 crc kubenswrapper[4830]: I0227 16:30:03.661696 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-dcxkj" event={"ID":"459173e8-7571-47b7-9af8-3bd2d24d4e21","Type":"ContainerStarted","Data":"45fba76ddd5f2fe4e68c5bc218edf28d6a195079fa1921a738dce0674accf471"} Feb 27 16:30:03 crc kubenswrapper[4830]: I0227 16:30:03.665897 4830 generic.go:334] "Generic (PLEG): container finished" podID="00d6b7ce-4757-4275-8345-60c1b546ce25" containerID="4451f44bd5a230af740184dd479b8e8cef56c8f4c478f47a91288db9cb943456" exitCode=0 Feb 27 16:30:03 crc kubenswrapper[4830]: I0227 16:30:03.665934 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" event={"ID":"00d6b7ce-4757-4275-8345-60c1b546ce25","Type":"ContainerDied","Data":"4451f44bd5a230af740184dd479b8e8cef56c8f4c478f47a91288db9cb943456"} Feb 27 16:30:03 crc kubenswrapper[4830]: I0227 16:30:03.666004 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" event={"ID":"00d6b7ce-4757-4275-8345-60c1b546ce25","Type":"ContainerStarted","Data":"ca4233a6fa911a1d6b959075103b585a592935c9e9a6d178d17fef41a2fea048"} Feb 27 16:30:03 crc kubenswrapper[4830]: I0227 16:30:03.666023 4830 scope.go:117] "RemoveContainer" containerID="471097b7c348ccaf71a4c92a38d56632d777ed06a5ddca169a907c05253b1349" Feb 27 16:30:03 crc kubenswrapper[4830]: I0227 16:30:03.666083 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="efa2d7d0-3613-4580-be80-b1a72de4501d" containerName="ceilometer-central-agent" containerID="cri-o://dec4c52081bcd6b7e6cdc8967cf57151769e02a26e085b39c0fcb8aa6faf71fb" gracePeriod=30 Feb 27 16:30:03 crc kubenswrapper[4830]: I0227 
16:30:03.666125 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="efa2d7d0-3613-4580-be80-b1a72de4501d" containerName="ceilometer-notification-agent" containerID="cri-o://8d582680884d2051e1a83e581994241ae6295cb8526af869e81146a8d3d23a7c" gracePeriod=30 Feb 27 16:30:03 crc kubenswrapper[4830]: I0227 16:30:03.666137 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="efa2d7d0-3613-4580-be80-b1a72de4501d" containerName="sg-core" containerID="cri-o://a72f96761cb905e307618ed792f50bb4e78875f5d3b39445b8e7c5cc56c2e306" gracePeriod=30 Feb 27 16:30:03 crc kubenswrapper[4830]: I0227 16:30:03.690431 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-dcxkj" podStartSLOduration=2.281946746 podStartE2EDuration="55.690408183s" podCreationTimestamp="2026-02-27 16:29:08 +0000 UTC" firstStartedPulling="2026-02-27 16:29:09.749889354 +0000 UTC m=+1345.839161817" lastFinishedPulling="2026-02-27 16:30:03.158350771 +0000 UTC m=+1399.247623254" observedRunningTime="2026-02-27 16:30:03.681072467 +0000 UTC m=+1399.770344930" watchObservedRunningTime="2026-02-27 16:30:03.690408183 +0000 UTC m=+1399.779680646" Feb 27 16:30:04 crc kubenswrapper[4830]: W0227 16:30:04.013150 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1141b071_f448_4a3f_b062_0255dd5dc38a.slice/crio-375f17aeecc61daf66cfb11ec8bc3f1d7f73fefe7fefa6be0438d372f0d38def WatchSource:0}: Error finding container 375f17aeecc61daf66cfb11ec8bc3f1d7f73fefe7fefa6be0438d372f0d38def: Status 404 returned error can't find the container with id 375f17aeecc61daf66cfb11ec8bc3f1d7f73fefe7fefa6be0438d372f0d38def Feb 27 16:30:04 crc kubenswrapper[4830]: I0227 16:30:04.695419 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29536830-9wp9k" event={"ID":"4827561e-f60d-4b02-b4c6-7af50ab350ce","Type":"ContainerStarted","Data":"16aa2b72eb611476bfa1ca732d50197957726b82f3e6029bf856f46816ea160c"} Feb 27 16:30:04 crc kubenswrapper[4830]: I0227 16:30:04.695661 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29536830-9wp9k" event={"ID":"4827561e-f60d-4b02-b4c6-7af50ab350ce","Type":"ContainerStarted","Data":"45540be0d73514d6ff630b36211f6104415ba9c796b6c8d49a74fd73e9950ec8"} Feb 27 16:30:04 crc kubenswrapper[4830]: I0227 16:30:04.697893 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536830-gwpcb" event={"ID":"1141b071-f448-4a3f-b062-0255dd5dc38a","Type":"ContainerStarted","Data":"375f17aeecc61daf66cfb11ec8bc3f1d7f73fefe7fefa6be0438d372f0d38def"} Feb 27 16:30:04 crc kubenswrapper[4830]: I0227 16:30:04.704104 4830 generic.go:334] "Generic (PLEG): container finished" podID="efa2d7d0-3613-4580-be80-b1a72de4501d" containerID="a72f96761cb905e307618ed792f50bb4e78875f5d3b39445b8e7c5cc56c2e306" exitCode=2 Feb 27 16:30:04 crc kubenswrapper[4830]: I0227 16:30:04.704126 4830 generic.go:334] "Generic (PLEG): container finished" podID="efa2d7d0-3613-4580-be80-b1a72de4501d" containerID="dec4c52081bcd6b7e6cdc8967cf57151769e02a26e085b39c0fcb8aa6faf71fb" exitCode=0 Feb 27 16:30:04 crc kubenswrapper[4830]: I0227 16:30:04.704142 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"efa2d7d0-3613-4580-be80-b1a72de4501d","Type":"ContainerDied","Data":"a72f96761cb905e307618ed792f50bb4e78875f5d3b39445b8e7c5cc56c2e306"} Feb 27 16:30:04 crc kubenswrapper[4830]: I0227 16:30:04.704157 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"efa2d7d0-3613-4580-be80-b1a72de4501d","Type":"ContainerDied","Data":"dec4c52081bcd6b7e6cdc8967cf57151769e02a26e085b39c0fcb8aa6faf71fb"} Feb 27 16:30:04 crc kubenswrapper[4830]: I0227 16:30:04.718475 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29536830-9wp9k" podStartSLOduration=4.718458286 podStartE2EDuration="4.718458286s" podCreationTimestamp="2026-02-27 16:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:30:04.711324464 +0000 UTC m=+1400.800596927" watchObservedRunningTime="2026-02-27 16:30:04.718458286 +0000 UTC m=+1400.807730749" Feb 27 16:30:05 crc kubenswrapper[4830]: I0227 16:30:05.367434 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 27 16:30:05 crc kubenswrapper[4830]: I0227 16:30:05.479015 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/efa2d7d0-3613-4580-be80-b1a72de4501d-config-data\") pod \"efa2d7d0-3613-4580-be80-b1a72de4501d\" (UID: \"efa2d7d0-3613-4580-be80-b1a72de4501d\") " Feb 27 16:30:05 crc kubenswrapper[4830]: I0227 16:30:05.479472 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efa2d7d0-3613-4580-be80-b1a72de4501d-combined-ca-bundle\") pod \"efa2d7d0-3613-4580-be80-b1a72de4501d\" (UID: \"efa2d7d0-3613-4580-be80-b1a72de4501d\") " Feb 27 16:30:05 crc kubenswrapper[4830]: I0227 16:30:05.479565 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/efa2d7d0-3613-4580-be80-b1a72de4501d-log-httpd\") pod \"efa2d7d0-3613-4580-be80-b1a72de4501d\" (UID: \"efa2d7d0-3613-4580-be80-b1a72de4501d\") " Feb 27 16:30:05 crc 
kubenswrapper[4830]: I0227 16:30:05.479604 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8s6nl\" (UniqueName: \"kubernetes.io/projected/efa2d7d0-3613-4580-be80-b1a72de4501d-kube-api-access-8s6nl\") pod \"efa2d7d0-3613-4580-be80-b1a72de4501d\" (UID: \"efa2d7d0-3613-4580-be80-b1a72de4501d\") " Feb 27 16:30:05 crc kubenswrapper[4830]: I0227 16:30:05.479633 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/efa2d7d0-3613-4580-be80-b1a72de4501d-sg-core-conf-yaml\") pod \"efa2d7d0-3613-4580-be80-b1a72de4501d\" (UID: \"efa2d7d0-3613-4580-be80-b1a72de4501d\") " Feb 27 16:30:05 crc kubenswrapper[4830]: I0227 16:30:05.479650 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/efa2d7d0-3613-4580-be80-b1a72de4501d-run-httpd\") pod \"efa2d7d0-3613-4580-be80-b1a72de4501d\" (UID: \"efa2d7d0-3613-4580-be80-b1a72de4501d\") " Feb 27 16:30:05 crc kubenswrapper[4830]: I0227 16:30:05.479698 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/efa2d7d0-3613-4580-be80-b1a72de4501d-scripts\") pod \"efa2d7d0-3613-4580-be80-b1a72de4501d\" (UID: \"efa2d7d0-3613-4580-be80-b1a72de4501d\") " Feb 27 16:30:05 crc kubenswrapper[4830]: I0227 16:30:05.480955 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/efa2d7d0-3613-4580-be80-b1a72de4501d-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "efa2d7d0-3613-4580-be80-b1a72de4501d" (UID: "efa2d7d0-3613-4580-be80-b1a72de4501d"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 16:30:05 crc kubenswrapper[4830]: I0227 16:30:05.481000 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/efa2d7d0-3613-4580-be80-b1a72de4501d-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "efa2d7d0-3613-4580-be80-b1a72de4501d" (UID: "efa2d7d0-3613-4580-be80-b1a72de4501d"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 16:30:05 crc kubenswrapper[4830]: I0227 16:30:05.491420 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efa2d7d0-3613-4580-be80-b1a72de4501d-kube-api-access-8s6nl" (OuterVolumeSpecName: "kube-api-access-8s6nl") pod "efa2d7d0-3613-4580-be80-b1a72de4501d" (UID: "efa2d7d0-3613-4580-be80-b1a72de4501d"). InnerVolumeSpecName "kube-api-access-8s6nl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:30:05 crc kubenswrapper[4830]: I0227 16:30:05.499182 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efa2d7d0-3613-4580-be80-b1a72de4501d-scripts" (OuterVolumeSpecName: "scripts") pod "efa2d7d0-3613-4580-be80-b1a72de4501d" (UID: "efa2d7d0-3613-4580-be80-b1a72de4501d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:30:05 crc kubenswrapper[4830]: I0227 16:30:05.532651 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efa2d7d0-3613-4580-be80-b1a72de4501d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "efa2d7d0-3613-4580-be80-b1a72de4501d" (UID: "efa2d7d0-3613-4580-be80-b1a72de4501d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:30:05 crc kubenswrapper[4830]: I0227 16:30:05.534532 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efa2d7d0-3613-4580-be80-b1a72de4501d-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "efa2d7d0-3613-4580-be80-b1a72de4501d" (UID: "efa2d7d0-3613-4580-be80-b1a72de4501d"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:30:05 crc kubenswrapper[4830]: I0227 16:30:05.543126 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efa2d7d0-3613-4580-be80-b1a72de4501d-config-data" (OuterVolumeSpecName: "config-data") pod "efa2d7d0-3613-4580-be80-b1a72de4501d" (UID: "efa2d7d0-3613-4580-be80-b1a72de4501d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:30:05 crc kubenswrapper[4830]: I0227 16:30:05.582527 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8s6nl\" (UniqueName: \"kubernetes.io/projected/efa2d7d0-3613-4580-be80-b1a72de4501d-kube-api-access-8s6nl\") on node \"crc\" DevicePath \"\"" Feb 27 16:30:05 crc kubenswrapper[4830]: I0227 16:30:05.582564 4830 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/efa2d7d0-3613-4580-be80-b1a72de4501d-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 27 16:30:05 crc kubenswrapper[4830]: I0227 16:30:05.582577 4830 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/efa2d7d0-3613-4580-be80-b1a72de4501d-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 27 16:30:05 crc kubenswrapper[4830]: I0227 16:30:05.582590 4830 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/efa2d7d0-3613-4580-be80-b1a72de4501d-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 
16:30:05 crc kubenswrapper[4830]: I0227 16:30:05.582602 4830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/efa2d7d0-3613-4580-be80-b1a72de4501d-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 16:30:05 crc kubenswrapper[4830]: I0227 16:30:05.582612 4830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efa2d7d0-3613-4580-be80-b1a72de4501d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 16:30:05 crc kubenswrapper[4830]: I0227 16:30:05.582625 4830 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/efa2d7d0-3613-4580-be80-b1a72de4501d-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 27 16:30:05 crc kubenswrapper[4830]: I0227 16:30:05.718124 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-vrjmz" event={"ID":"a69bc2ed-ce70-4828-af02-ccac1c3f0c10","Type":"ContainerStarted","Data":"78f7362752654ea3426af2a1f637ac858637b23cda39620187459b1ca0eb954f"} Feb 27 16:30:05 crc kubenswrapper[4830]: I0227 16:30:05.720258 4830 generic.go:334] "Generic (PLEG): container finished" podID="4827561e-f60d-4b02-b4c6-7af50ab350ce" containerID="16aa2b72eb611476bfa1ca732d50197957726b82f3e6029bf856f46816ea160c" exitCode=0 Feb 27 16:30:05 crc kubenswrapper[4830]: I0227 16:30:05.720371 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29536830-9wp9k" event={"ID":"4827561e-f60d-4b02-b4c6-7af50ab350ce","Type":"ContainerDied","Data":"16aa2b72eb611476bfa1ca732d50197957726b82f3e6029bf856f46816ea160c"} Feb 27 16:30:05 crc kubenswrapper[4830]: I0227 16:30:05.724178 4830 generic.go:334] "Generic (PLEG): container finished" podID="efa2d7d0-3613-4580-be80-b1a72de4501d" containerID="8d582680884d2051e1a83e581994241ae6295cb8526af869e81146a8d3d23a7c" exitCode=0 Feb 27 16:30:05 crc kubenswrapper[4830]: I0227 
16:30:05.724212 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"efa2d7d0-3613-4580-be80-b1a72de4501d","Type":"ContainerDied","Data":"8d582680884d2051e1a83e581994241ae6295cb8526af869e81146a8d3d23a7c"} Feb 27 16:30:05 crc kubenswrapper[4830]: I0227 16:30:05.724233 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"efa2d7d0-3613-4580-be80-b1a72de4501d","Type":"ContainerDied","Data":"f072f58e2b7fa0c3b23478fba4b494ee650a4418c8597e8c16f9d2af5cc690f2"} Feb 27 16:30:05 crc kubenswrapper[4830]: I0227 16:30:05.724253 4830 scope.go:117] "RemoveContainer" containerID="a72f96761cb905e307618ed792f50bb4e78875f5d3b39445b8e7c5cc56c2e306" Feb 27 16:30:05 crc kubenswrapper[4830]: I0227 16:30:05.724383 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 27 16:30:05 crc kubenswrapper[4830]: I0227 16:30:05.745310 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-vrjmz" podStartSLOduration=2.99051711 podStartE2EDuration="57.7452897s" podCreationTimestamp="2026-02-27 16:29:08 +0000 UTC" firstStartedPulling="2026-02-27 16:29:09.355299784 +0000 UTC m=+1345.444572247" lastFinishedPulling="2026-02-27 16:30:04.110072384 +0000 UTC m=+1400.199344837" observedRunningTime="2026-02-27 16:30:05.742738829 +0000 UTC m=+1401.832011302" watchObservedRunningTime="2026-02-27 16:30:05.7452897 +0000 UTC m=+1401.834562173" Feb 27 16:30:05 crc kubenswrapper[4830]: I0227 16:30:05.766793 4830 scope.go:117] "RemoveContainer" containerID="8d582680884d2051e1a83e581994241ae6295cb8526af869e81146a8d3d23a7c" Feb 27 16:30:05 crc kubenswrapper[4830]: I0227 16:30:05.815195 4830 scope.go:117] "RemoveContainer" containerID="dec4c52081bcd6b7e6cdc8967cf57151769e02a26e085b39c0fcb8aa6faf71fb" Feb 27 16:30:05 crc kubenswrapper[4830]: I0227 16:30:05.830837 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/ceilometer-0"] Feb 27 16:30:05 crc kubenswrapper[4830]: I0227 16:30:05.844118 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 27 16:30:05 crc kubenswrapper[4830]: I0227 16:30:05.845382 4830 scope.go:117] "RemoveContainer" containerID="a72f96761cb905e307618ed792f50bb4e78875f5d3b39445b8e7c5cc56c2e306" Feb 27 16:30:05 crc kubenswrapper[4830]: E0227 16:30:05.845784 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a72f96761cb905e307618ed792f50bb4e78875f5d3b39445b8e7c5cc56c2e306\": container with ID starting with a72f96761cb905e307618ed792f50bb4e78875f5d3b39445b8e7c5cc56c2e306 not found: ID does not exist" containerID="a72f96761cb905e307618ed792f50bb4e78875f5d3b39445b8e7c5cc56c2e306" Feb 27 16:30:05 crc kubenswrapper[4830]: I0227 16:30:05.845811 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a72f96761cb905e307618ed792f50bb4e78875f5d3b39445b8e7c5cc56c2e306"} err="failed to get container status \"a72f96761cb905e307618ed792f50bb4e78875f5d3b39445b8e7c5cc56c2e306\": rpc error: code = NotFound desc = could not find container \"a72f96761cb905e307618ed792f50bb4e78875f5d3b39445b8e7c5cc56c2e306\": container with ID starting with a72f96761cb905e307618ed792f50bb4e78875f5d3b39445b8e7c5cc56c2e306 not found: ID does not exist" Feb 27 16:30:05 crc kubenswrapper[4830]: I0227 16:30:05.845839 4830 scope.go:117] "RemoveContainer" containerID="8d582680884d2051e1a83e581994241ae6295cb8526af869e81146a8d3d23a7c" Feb 27 16:30:05 crc kubenswrapper[4830]: E0227 16:30:05.846061 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d582680884d2051e1a83e581994241ae6295cb8526af869e81146a8d3d23a7c\": container with ID starting with 8d582680884d2051e1a83e581994241ae6295cb8526af869e81146a8d3d23a7c not found: ID does not exist" 
containerID="8d582680884d2051e1a83e581994241ae6295cb8526af869e81146a8d3d23a7c" Feb 27 16:30:05 crc kubenswrapper[4830]: I0227 16:30:05.846083 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d582680884d2051e1a83e581994241ae6295cb8526af869e81146a8d3d23a7c"} err="failed to get container status \"8d582680884d2051e1a83e581994241ae6295cb8526af869e81146a8d3d23a7c\": rpc error: code = NotFound desc = could not find container \"8d582680884d2051e1a83e581994241ae6295cb8526af869e81146a8d3d23a7c\": container with ID starting with 8d582680884d2051e1a83e581994241ae6295cb8526af869e81146a8d3d23a7c not found: ID does not exist" Feb 27 16:30:05 crc kubenswrapper[4830]: I0227 16:30:05.846098 4830 scope.go:117] "RemoveContainer" containerID="dec4c52081bcd6b7e6cdc8967cf57151769e02a26e085b39c0fcb8aa6faf71fb" Feb 27 16:30:05 crc kubenswrapper[4830]: E0227 16:30:05.846293 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dec4c52081bcd6b7e6cdc8967cf57151769e02a26e085b39c0fcb8aa6faf71fb\": container with ID starting with dec4c52081bcd6b7e6cdc8967cf57151769e02a26e085b39c0fcb8aa6faf71fb not found: ID does not exist" containerID="dec4c52081bcd6b7e6cdc8967cf57151769e02a26e085b39c0fcb8aa6faf71fb" Feb 27 16:30:05 crc kubenswrapper[4830]: I0227 16:30:05.846309 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dec4c52081bcd6b7e6cdc8967cf57151769e02a26e085b39c0fcb8aa6faf71fb"} err="failed to get container status \"dec4c52081bcd6b7e6cdc8967cf57151769e02a26e085b39c0fcb8aa6faf71fb\": rpc error: code = NotFound desc = could not find container \"dec4c52081bcd6b7e6cdc8967cf57151769e02a26e085b39c0fcb8aa6faf71fb\": container with ID starting with dec4c52081bcd6b7e6cdc8967cf57151769e02a26e085b39c0fcb8aa6faf71fb not found: ID does not exist" Feb 27 16:30:05 crc kubenswrapper[4830]: I0227 16:30:05.852834 4830 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 27 16:30:05 crc kubenswrapper[4830]: E0227 16:30:05.853297 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efa2d7d0-3613-4580-be80-b1a72de4501d" containerName="ceilometer-central-agent" Feb 27 16:30:05 crc kubenswrapper[4830]: I0227 16:30:05.853315 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="efa2d7d0-3613-4580-be80-b1a72de4501d" containerName="ceilometer-central-agent" Feb 27 16:30:05 crc kubenswrapper[4830]: E0227 16:30:05.853331 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efa2d7d0-3613-4580-be80-b1a72de4501d" containerName="ceilometer-notification-agent" Feb 27 16:30:05 crc kubenswrapper[4830]: I0227 16:30:05.853338 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="efa2d7d0-3613-4580-be80-b1a72de4501d" containerName="ceilometer-notification-agent" Feb 27 16:30:05 crc kubenswrapper[4830]: E0227 16:30:05.853357 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efa2d7d0-3613-4580-be80-b1a72de4501d" containerName="sg-core" Feb 27 16:30:05 crc kubenswrapper[4830]: I0227 16:30:05.853364 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="efa2d7d0-3613-4580-be80-b1a72de4501d" containerName="sg-core" Feb 27 16:30:05 crc kubenswrapper[4830]: I0227 16:30:05.853532 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="efa2d7d0-3613-4580-be80-b1a72de4501d" containerName="sg-core" Feb 27 16:30:05 crc kubenswrapper[4830]: I0227 16:30:05.853550 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="efa2d7d0-3613-4580-be80-b1a72de4501d" containerName="ceilometer-central-agent" Feb 27 16:30:05 crc kubenswrapper[4830]: I0227 16:30:05.853580 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="efa2d7d0-3613-4580-be80-b1a72de4501d" containerName="ceilometer-notification-agent" Feb 27 16:30:05 crc kubenswrapper[4830]: I0227 16:30:05.855092 4830 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 27 16:30:05 crc kubenswrapper[4830]: I0227 16:30:05.860866 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 27 16:30:05 crc kubenswrapper[4830]: I0227 16:30:05.861898 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 27 16:30:05 crc kubenswrapper[4830]: I0227 16:30:05.862059 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 27 16:30:05 crc kubenswrapper[4830]: I0227 16:30:05.889572 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a0443650-95ce-4e86-97cd-5700be47571c-run-httpd\") pod \"ceilometer-0\" (UID: \"a0443650-95ce-4e86-97cd-5700be47571c\") " pod="openstack/ceilometer-0" Feb 27 16:30:05 crc kubenswrapper[4830]: I0227 16:30:05.889653 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a0443650-95ce-4e86-97cd-5700be47571c-scripts\") pod \"ceilometer-0\" (UID: \"a0443650-95ce-4e86-97cd-5700be47571c\") " pod="openstack/ceilometer-0" Feb 27 16:30:05 crc kubenswrapper[4830]: I0227 16:30:05.889697 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a0443650-95ce-4e86-97cd-5700be47571c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a0443650-95ce-4e86-97cd-5700be47571c\") " pod="openstack/ceilometer-0" Feb 27 16:30:05 crc kubenswrapper[4830]: I0227 16:30:05.889725 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0443650-95ce-4e86-97cd-5700be47571c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: 
\"a0443650-95ce-4e86-97cd-5700be47571c\") " pod="openstack/ceilometer-0" Feb 27 16:30:05 crc kubenswrapper[4830]: I0227 16:30:05.889745 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a0443650-95ce-4e86-97cd-5700be47571c-log-httpd\") pod \"ceilometer-0\" (UID: \"a0443650-95ce-4e86-97cd-5700be47571c\") " pod="openstack/ceilometer-0" Feb 27 16:30:05 crc kubenswrapper[4830]: I0227 16:30:05.889764 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0443650-95ce-4e86-97cd-5700be47571c-config-data\") pod \"ceilometer-0\" (UID: \"a0443650-95ce-4e86-97cd-5700be47571c\") " pod="openstack/ceilometer-0" Feb 27 16:30:05 crc kubenswrapper[4830]: I0227 16:30:05.889792 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6nmjs\" (UniqueName: \"kubernetes.io/projected/a0443650-95ce-4e86-97cd-5700be47571c-kube-api-access-6nmjs\") pod \"ceilometer-0\" (UID: \"a0443650-95ce-4e86-97cd-5700be47571c\") " pod="openstack/ceilometer-0" Feb 27 16:30:05 crc kubenswrapper[4830]: I0227 16:30:05.928788 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 27 16:30:05 crc kubenswrapper[4830]: E0227 16:30:05.929404 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle config-data kube-api-access-6nmjs log-httpd run-httpd scripts sg-core-conf-yaml], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/ceilometer-0" podUID="a0443650-95ce-4e86-97cd-5700be47571c" Feb 27 16:30:05 crc kubenswrapper[4830]: I0227 16:30:05.991524 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0443650-95ce-4e86-97cd-5700be47571c-combined-ca-bundle\") pod 
\"ceilometer-0\" (UID: \"a0443650-95ce-4e86-97cd-5700be47571c\") " pod="openstack/ceilometer-0" Feb 27 16:30:05 crc kubenswrapper[4830]: I0227 16:30:05.991843 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a0443650-95ce-4e86-97cd-5700be47571c-log-httpd\") pod \"ceilometer-0\" (UID: \"a0443650-95ce-4e86-97cd-5700be47571c\") " pod="openstack/ceilometer-0" Feb 27 16:30:05 crc kubenswrapper[4830]: I0227 16:30:05.991879 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0443650-95ce-4e86-97cd-5700be47571c-config-data\") pod \"ceilometer-0\" (UID: \"a0443650-95ce-4e86-97cd-5700be47571c\") " pod="openstack/ceilometer-0" Feb 27 16:30:05 crc kubenswrapper[4830]: I0227 16:30:05.991920 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6nmjs\" (UniqueName: \"kubernetes.io/projected/a0443650-95ce-4e86-97cd-5700be47571c-kube-api-access-6nmjs\") pod \"ceilometer-0\" (UID: \"a0443650-95ce-4e86-97cd-5700be47571c\") " pod="openstack/ceilometer-0" Feb 27 16:30:05 crc kubenswrapper[4830]: I0227 16:30:05.992035 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a0443650-95ce-4e86-97cd-5700be47571c-run-httpd\") pod \"ceilometer-0\" (UID: \"a0443650-95ce-4e86-97cd-5700be47571c\") " pod="openstack/ceilometer-0" Feb 27 16:30:05 crc kubenswrapper[4830]: I0227 16:30:05.992084 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a0443650-95ce-4e86-97cd-5700be47571c-scripts\") pod \"ceilometer-0\" (UID: \"a0443650-95ce-4e86-97cd-5700be47571c\") " pod="openstack/ceilometer-0" Feb 27 16:30:05 crc kubenswrapper[4830]: I0227 16:30:05.992145 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a0443650-95ce-4e86-97cd-5700be47571c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a0443650-95ce-4e86-97cd-5700be47571c\") " pod="openstack/ceilometer-0" Feb 27 16:30:05 crc kubenswrapper[4830]: I0227 16:30:05.992303 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a0443650-95ce-4e86-97cd-5700be47571c-log-httpd\") pod \"ceilometer-0\" (UID: \"a0443650-95ce-4e86-97cd-5700be47571c\") " pod="openstack/ceilometer-0" Feb 27 16:30:05 crc kubenswrapper[4830]: I0227 16:30:05.992551 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a0443650-95ce-4e86-97cd-5700be47571c-run-httpd\") pod \"ceilometer-0\" (UID: \"a0443650-95ce-4e86-97cd-5700be47571c\") " pod="openstack/ceilometer-0" Feb 27 16:30:05 crc kubenswrapper[4830]: I0227 16:30:05.996338 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a0443650-95ce-4e86-97cd-5700be47571c-scripts\") pod \"ceilometer-0\" (UID: \"a0443650-95ce-4e86-97cd-5700be47571c\") " pod="openstack/ceilometer-0" Feb 27 16:30:05 crc kubenswrapper[4830]: I0227 16:30:05.996613 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a0443650-95ce-4e86-97cd-5700be47571c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a0443650-95ce-4e86-97cd-5700be47571c\") " pod="openstack/ceilometer-0" Feb 27 16:30:05 crc kubenswrapper[4830]: I0227 16:30:05.998757 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0443650-95ce-4e86-97cd-5700be47571c-config-data\") pod \"ceilometer-0\" (UID: \"a0443650-95ce-4e86-97cd-5700be47571c\") " pod="openstack/ceilometer-0" Feb 27 16:30:05 crc kubenswrapper[4830]: I0227 16:30:05.999018 4830 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0443650-95ce-4e86-97cd-5700be47571c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a0443650-95ce-4e86-97cd-5700be47571c\") " pod="openstack/ceilometer-0" Feb 27 16:30:06 crc kubenswrapper[4830]: I0227 16:30:06.013852 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6nmjs\" (UniqueName: \"kubernetes.io/projected/a0443650-95ce-4e86-97cd-5700be47571c-kube-api-access-6nmjs\") pod \"ceilometer-0\" (UID: \"a0443650-95ce-4e86-97cd-5700be47571c\") " pod="openstack/ceilometer-0" Feb 27 16:30:06 crc kubenswrapper[4830]: I0227 16:30:06.737801 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 27 16:30:06 crc kubenswrapper[4830]: I0227 16:30:06.756560 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 27 16:30:06 crc kubenswrapper[4830]: I0227 16:30:06.777718 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efa2d7d0-3613-4580-be80-b1a72de4501d" path="/var/lib/kubelet/pods/efa2d7d0-3613-4580-be80-b1a72de4501d/volumes" Feb 27 16:30:06 crc kubenswrapper[4830]: I0227 16:30:06.907673 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a0443650-95ce-4e86-97cd-5700be47571c-log-httpd\") pod \"a0443650-95ce-4e86-97cd-5700be47571c\" (UID: \"a0443650-95ce-4e86-97cd-5700be47571c\") " Feb 27 16:30:06 crc kubenswrapper[4830]: I0227 16:30:06.907751 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a0443650-95ce-4e86-97cd-5700be47571c-run-httpd\") pod \"a0443650-95ce-4e86-97cd-5700be47571c\" (UID: \"a0443650-95ce-4e86-97cd-5700be47571c\") " Feb 27 16:30:06 crc kubenswrapper[4830]: I0227 16:30:06.907799 4830 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0443650-95ce-4e86-97cd-5700be47571c-combined-ca-bundle\") pod \"a0443650-95ce-4e86-97cd-5700be47571c\" (UID: \"a0443650-95ce-4e86-97cd-5700be47571c\") " Feb 27 16:30:06 crc kubenswrapper[4830]: I0227 16:30:06.907909 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6nmjs\" (UniqueName: \"kubernetes.io/projected/a0443650-95ce-4e86-97cd-5700be47571c-kube-api-access-6nmjs\") pod \"a0443650-95ce-4e86-97cd-5700be47571c\" (UID: \"a0443650-95ce-4e86-97cd-5700be47571c\") " Feb 27 16:30:06 crc kubenswrapper[4830]: I0227 16:30:06.908023 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0443650-95ce-4e86-97cd-5700be47571c-config-data\") pod \"a0443650-95ce-4e86-97cd-5700be47571c\" (UID: \"a0443650-95ce-4e86-97cd-5700be47571c\") " Feb 27 16:30:06 crc kubenswrapper[4830]: I0227 16:30:06.908063 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a0443650-95ce-4e86-97cd-5700be47571c-scripts\") pod \"a0443650-95ce-4e86-97cd-5700be47571c\" (UID: \"a0443650-95ce-4e86-97cd-5700be47571c\") " Feb 27 16:30:06 crc kubenswrapper[4830]: I0227 16:30:06.908257 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a0443650-95ce-4e86-97cd-5700be47571c-sg-core-conf-yaml\") pod \"a0443650-95ce-4e86-97cd-5700be47571c\" (UID: \"a0443650-95ce-4e86-97cd-5700be47571c\") " Feb 27 16:30:06 crc kubenswrapper[4830]: I0227 16:30:06.909031 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a0443650-95ce-4e86-97cd-5700be47571c-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "a0443650-95ce-4e86-97cd-5700be47571c" (UID: 
"a0443650-95ce-4e86-97cd-5700be47571c"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 16:30:06 crc kubenswrapper[4830]: I0227 16:30:06.909680 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a0443650-95ce-4e86-97cd-5700be47571c-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "a0443650-95ce-4e86-97cd-5700be47571c" (UID: "a0443650-95ce-4e86-97cd-5700be47571c"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 16:30:06 crc kubenswrapper[4830]: I0227 16:30:06.910799 4830 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a0443650-95ce-4e86-97cd-5700be47571c-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 27 16:30:06 crc kubenswrapper[4830]: I0227 16:30:06.910833 4830 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a0443650-95ce-4e86-97cd-5700be47571c-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 27 16:30:06 crc kubenswrapper[4830]: I0227 16:30:06.931573 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0443650-95ce-4e86-97cd-5700be47571c-kube-api-access-6nmjs" (OuterVolumeSpecName: "kube-api-access-6nmjs") pod "a0443650-95ce-4e86-97cd-5700be47571c" (UID: "a0443650-95ce-4e86-97cd-5700be47571c"). InnerVolumeSpecName "kube-api-access-6nmjs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:30:06 crc kubenswrapper[4830]: I0227 16:30:06.935823 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0443650-95ce-4e86-97cd-5700be47571c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a0443650-95ce-4e86-97cd-5700be47571c" (UID: "a0443650-95ce-4e86-97cd-5700be47571c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:30:06 crc kubenswrapper[4830]: I0227 16:30:06.936023 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0443650-95ce-4e86-97cd-5700be47571c-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "a0443650-95ce-4e86-97cd-5700be47571c" (UID: "a0443650-95ce-4e86-97cd-5700be47571c"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:30:06 crc kubenswrapper[4830]: I0227 16:30:06.939232 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0443650-95ce-4e86-97cd-5700be47571c-scripts" (OuterVolumeSpecName: "scripts") pod "a0443650-95ce-4e86-97cd-5700be47571c" (UID: "a0443650-95ce-4e86-97cd-5700be47571c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:30:06 crc kubenswrapper[4830]: I0227 16:30:06.947095 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0443650-95ce-4e86-97cd-5700be47571c-config-data" (OuterVolumeSpecName: "config-data") pod "a0443650-95ce-4e86-97cd-5700be47571c" (UID: "a0443650-95ce-4e86-97cd-5700be47571c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:30:07 crc kubenswrapper[4830]: I0227 16:30:07.020737 4830 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a0443650-95ce-4e86-97cd-5700be47571c-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 27 16:30:07 crc kubenswrapper[4830]: I0227 16:30:07.020789 4830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0443650-95ce-4e86-97cd-5700be47571c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 16:30:07 crc kubenswrapper[4830]: I0227 16:30:07.020808 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6nmjs\" (UniqueName: \"kubernetes.io/projected/a0443650-95ce-4e86-97cd-5700be47571c-kube-api-access-6nmjs\") on node \"crc\" DevicePath \"\"" Feb 27 16:30:07 crc kubenswrapper[4830]: I0227 16:30:07.020831 4830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0443650-95ce-4e86-97cd-5700be47571c-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 16:30:07 crc kubenswrapper[4830]: I0227 16:30:07.020847 4830 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a0443650-95ce-4e86-97cd-5700be47571c-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 16:30:07 crc kubenswrapper[4830]: I0227 16:30:07.125304 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29536830-9wp9k" Feb 27 16:30:07 crc kubenswrapper[4830]: I0227 16:30:07.223918 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4827561e-f60d-4b02-b4c6-7af50ab350ce-config-volume\") pod \"4827561e-f60d-4b02-b4c6-7af50ab350ce\" (UID: \"4827561e-f60d-4b02-b4c6-7af50ab350ce\") " Feb 27 16:30:07 crc kubenswrapper[4830]: I0227 16:30:07.224091 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hmkk6\" (UniqueName: \"kubernetes.io/projected/4827561e-f60d-4b02-b4c6-7af50ab350ce-kube-api-access-hmkk6\") pod \"4827561e-f60d-4b02-b4c6-7af50ab350ce\" (UID: \"4827561e-f60d-4b02-b4c6-7af50ab350ce\") " Feb 27 16:30:07 crc kubenswrapper[4830]: I0227 16:30:07.224258 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4827561e-f60d-4b02-b4c6-7af50ab350ce-secret-volume\") pod \"4827561e-f60d-4b02-b4c6-7af50ab350ce\" (UID: \"4827561e-f60d-4b02-b4c6-7af50ab350ce\") " Feb 27 16:30:07 crc kubenswrapper[4830]: I0227 16:30:07.225331 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4827561e-f60d-4b02-b4c6-7af50ab350ce-config-volume" (OuterVolumeSpecName: "config-volume") pod "4827561e-f60d-4b02-b4c6-7af50ab350ce" (UID: "4827561e-f60d-4b02-b4c6-7af50ab350ce"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:30:07 crc kubenswrapper[4830]: I0227 16:30:07.228373 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4827561e-f60d-4b02-b4c6-7af50ab350ce-kube-api-access-hmkk6" (OuterVolumeSpecName: "kube-api-access-hmkk6") pod "4827561e-f60d-4b02-b4c6-7af50ab350ce" (UID: "4827561e-f60d-4b02-b4c6-7af50ab350ce"). 
InnerVolumeSpecName "kube-api-access-hmkk6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:30:07 crc kubenswrapper[4830]: I0227 16:30:07.228575 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4827561e-f60d-4b02-b4c6-7af50ab350ce-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "4827561e-f60d-4b02-b4c6-7af50ab350ce" (UID: "4827561e-f60d-4b02-b4c6-7af50ab350ce"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:30:07 crc kubenswrapper[4830]: I0227 16:30:07.327211 4830 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4827561e-f60d-4b02-b4c6-7af50ab350ce-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 27 16:30:07 crc kubenswrapper[4830]: I0227 16:30:07.327578 4830 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4827561e-f60d-4b02-b4c6-7af50ab350ce-config-volume\") on node \"crc\" DevicePath \"\"" Feb 27 16:30:07 crc kubenswrapper[4830]: I0227 16:30:07.327598 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hmkk6\" (UniqueName: \"kubernetes.io/projected/4827561e-f60d-4b02-b4c6-7af50ab350ce-kube-api-access-hmkk6\") on node \"crc\" DevicePath \"\"" Feb 27 16:30:07 crc kubenswrapper[4830]: I0227 16:30:07.750835 4830 generic.go:334] "Generic (PLEG): container finished" podID="459173e8-7571-47b7-9af8-3bd2d24d4e21" containerID="45fba76ddd5f2fe4e68c5bc218edf28d6a195079fa1921a738dce0674accf471" exitCode=0 Feb 27 16:30:07 crc kubenswrapper[4830]: I0227 16:30:07.750914 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-dcxkj" event={"ID":"459173e8-7571-47b7-9af8-3bd2d24d4e21","Type":"ContainerDied","Data":"45fba76ddd5f2fe4e68c5bc218edf28d6a195079fa1921a738dce0674accf471"} Feb 27 16:30:07 crc kubenswrapper[4830]: I0227 16:30:07.755023 4830 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29536830-9wp9k" Feb 27 16:30:07 crc kubenswrapper[4830]: I0227 16:30:07.755013 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29536830-9wp9k" event={"ID":"4827561e-f60d-4b02-b4c6-7af50ab350ce","Type":"ContainerDied","Data":"45540be0d73514d6ff630b36211f6104415ba9c796b6c8d49a74fd73e9950ec8"} Feb 27 16:30:07 crc kubenswrapper[4830]: I0227 16:30:07.755255 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="45540be0d73514d6ff630b36211f6104415ba9c796b6c8d49a74fd73e9950ec8" Feb 27 16:30:07 crc kubenswrapper[4830]: I0227 16:30:07.758017 4830 generic.go:334] "Generic (PLEG): container finished" podID="52d332d0-98e5-4cff-8486-151b6593c94f" containerID="fa02ddd168c52a09e17f02290dc6532b6d413641b49271f0c4fad4240693f403" exitCode=0 Feb 27 16:30:07 crc kubenswrapper[4830]: I0227 16:30:07.758141 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 27 16:30:07 crc kubenswrapper[4830]: I0227 16:30:07.758974 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-4d9ld" event={"ID":"52d332d0-98e5-4cff-8486-151b6593c94f","Type":"ContainerDied","Data":"fa02ddd168c52a09e17f02290dc6532b6d413641b49271f0c4fad4240693f403"} Feb 27 16:30:07 crc kubenswrapper[4830]: I0227 16:30:07.865198 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 27 16:30:07 crc kubenswrapper[4830]: I0227 16:30:07.865246 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 27 16:30:07 crc kubenswrapper[4830]: I0227 16:30:07.903517 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 27 16:30:07 crc kubenswrapper[4830]: E0227 16:30:07.904141 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4827561e-f60d-4b02-b4c6-7af50ab350ce" containerName="collect-profiles" Feb 27 16:30:07 crc kubenswrapper[4830]: I0227 16:30:07.904233 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="4827561e-f60d-4b02-b4c6-7af50ab350ce" containerName="collect-profiles" Feb 27 16:30:07 crc kubenswrapper[4830]: I0227 16:30:07.904575 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="4827561e-f60d-4b02-b4c6-7af50ab350ce" containerName="collect-profiles" Feb 27 16:30:07 crc kubenswrapper[4830]: I0227 16:30:07.907917 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 27 16:30:07 crc kubenswrapper[4830]: I0227 16:30:07.910051 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 27 16:30:07 crc kubenswrapper[4830]: I0227 16:30:07.916426 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 27 16:30:07 crc kubenswrapper[4830]: I0227 16:30:07.928860 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 27 16:30:08 crc kubenswrapper[4830]: I0227 16:30:08.041172 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aa935bfb-ebfd-4aa9-abc3-84d118252abe-log-httpd\") pod \"ceilometer-0\" (UID: \"aa935bfb-ebfd-4aa9-abc3-84d118252abe\") " pod="openstack/ceilometer-0" Feb 27 16:30:08 crc kubenswrapper[4830]: I0227 16:30:08.041520 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aa935bfb-ebfd-4aa9-abc3-84d118252abe-run-httpd\") pod \"ceilometer-0\" (UID: \"aa935bfb-ebfd-4aa9-abc3-84d118252abe\") " pod="openstack/ceilometer-0" Feb 27 16:30:08 crc kubenswrapper[4830]: I0227 16:30:08.041557 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa935bfb-ebfd-4aa9-abc3-84d118252abe-config-data\") pod \"ceilometer-0\" (UID: \"aa935bfb-ebfd-4aa9-abc3-84d118252abe\") " pod="openstack/ceilometer-0" Feb 27 16:30:08 crc kubenswrapper[4830]: I0227 16:30:08.041593 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/aa935bfb-ebfd-4aa9-abc3-84d118252abe-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"aa935bfb-ebfd-4aa9-abc3-84d118252abe\") " 
pod="openstack/ceilometer-0" Feb 27 16:30:08 crc kubenswrapper[4830]: I0227 16:30:08.041625 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdqcg\" (UniqueName: \"kubernetes.io/projected/aa935bfb-ebfd-4aa9-abc3-84d118252abe-kube-api-access-sdqcg\") pod \"ceilometer-0\" (UID: \"aa935bfb-ebfd-4aa9-abc3-84d118252abe\") " pod="openstack/ceilometer-0" Feb 27 16:30:08 crc kubenswrapper[4830]: I0227 16:30:08.041683 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aa935bfb-ebfd-4aa9-abc3-84d118252abe-scripts\") pod \"ceilometer-0\" (UID: \"aa935bfb-ebfd-4aa9-abc3-84d118252abe\") " pod="openstack/ceilometer-0" Feb 27 16:30:08 crc kubenswrapper[4830]: I0227 16:30:08.041716 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa935bfb-ebfd-4aa9-abc3-84d118252abe-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"aa935bfb-ebfd-4aa9-abc3-84d118252abe\") " pod="openstack/ceilometer-0" Feb 27 16:30:08 crc kubenswrapper[4830]: I0227 16:30:08.144440 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aa935bfb-ebfd-4aa9-abc3-84d118252abe-run-httpd\") pod \"ceilometer-0\" (UID: \"aa935bfb-ebfd-4aa9-abc3-84d118252abe\") " pod="openstack/ceilometer-0" Feb 27 16:30:08 crc kubenswrapper[4830]: I0227 16:30:08.144544 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa935bfb-ebfd-4aa9-abc3-84d118252abe-config-data\") pod \"ceilometer-0\" (UID: \"aa935bfb-ebfd-4aa9-abc3-84d118252abe\") " pod="openstack/ceilometer-0" Feb 27 16:30:08 crc kubenswrapper[4830]: I0227 16:30:08.144608 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/aa935bfb-ebfd-4aa9-abc3-84d118252abe-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"aa935bfb-ebfd-4aa9-abc3-84d118252abe\") " pod="openstack/ceilometer-0" Feb 27 16:30:08 crc kubenswrapper[4830]: I0227 16:30:08.144670 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sdqcg\" (UniqueName: \"kubernetes.io/projected/aa935bfb-ebfd-4aa9-abc3-84d118252abe-kube-api-access-sdqcg\") pod \"ceilometer-0\" (UID: \"aa935bfb-ebfd-4aa9-abc3-84d118252abe\") " pod="openstack/ceilometer-0" Feb 27 16:30:08 crc kubenswrapper[4830]: I0227 16:30:08.144772 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aa935bfb-ebfd-4aa9-abc3-84d118252abe-scripts\") pod \"ceilometer-0\" (UID: \"aa935bfb-ebfd-4aa9-abc3-84d118252abe\") " pod="openstack/ceilometer-0" Feb 27 16:30:08 crc kubenswrapper[4830]: I0227 16:30:08.144833 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa935bfb-ebfd-4aa9-abc3-84d118252abe-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"aa935bfb-ebfd-4aa9-abc3-84d118252abe\") " pod="openstack/ceilometer-0" Feb 27 16:30:08 crc kubenswrapper[4830]: I0227 16:30:08.144937 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aa935bfb-ebfd-4aa9-abc3-84d118252abe-log-httpd\") pod \"ceilometer-0\" (UID: \"aa935bfb-ebfd-4aa9-abc3-84d118252abe\") " pod="openstack/ceilometer-0" Feb 27 16:30:08 crc kubenswrapper[4830]: I0227 16:30:08.145599 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aa935bfb-ebfd-4aa9-abc3-84d118252abe-run-httpd\") pod \"ceilometer-0\" (UID: \"aa935bfb-ebfd-4aa9-abc3-84d118252abe\") " pod="openstack/ceilometer-0" Feb 27 16:30:08 crc 
kubenswrapper[4830]: I0227 16:30:08.145703 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aa935bfb-ebfd-4aa9-abc3-84d118252abe-log-httpd\") pod \"ceilometer-0\" (UID: \"aa935bfb-ebfd-4aa9-abc3-84d118252abe\") " pod="openstack/ceilometer-0" Feb 27 16:30:08 crc kubenswrapper[4830]: I0227 16:30:08.156924 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aa935bfb-ebfd-4aa9-abc3-84d118252abe-scripts\") pod \"ceilometer-0\" (UID: \"aa935bfb-ebfd-4aa9-abc3-84d118252abe\") " pod="openstack/ceilometer-0" Feb 27 16:30:08 crc kubenswrapper[4830]: I0227 16:30:08.157205 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/aa935bfb-ebfd-4aa9-abc3-84d118252abe-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"aa935bfb-ebfd-4aa9-abc3-84d118252abe\") " pod="openstack/ceilometer-0" Feb 27 16:30:08 crc kubenswrapper[4830]: I0227 16:30:08.157655 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa935bfb-ebfd-4aa9-abc3-84d118252abe-config-data\") pod \"ceilometer-0\" (UID: \"aa935bfb-ebfd-4aa9-abc3-84d118252abe\") " pod="openstack/ceilometer-0" Feb 27 16:30:08 crc kubenswrapper[4830]: I0227 16:30:08.158604 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa935bfb-ebfd-4aa9-abc3-84d118252abe-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"aa935bfb-ebfd-4aa9-abc3-84d118252abe\") " pod="openstack/ceilometer-0" Feb 27 16:30:08 crc kubenswrapper[4830]: I0227 16:30:08.161497 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sdqcg\" (UniqueName: \"kubernetes.io/projected/aa935bfb-ebfd-4aa9-abc3-84d118252abe-kube-api-access-sdqcg\") pod \"ceilometer-0\" (UID: 
\"aa935bfb-ebfd-4aa9-abc3-84d118252abe\") " pod="openstack/ceilometer-0" Feb 27 16:30:08 crc kubenswrapper[4830]: I0227 16:30:08.226359 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 27 16:30:08 crc kubenswrapper[4830]: I0227 16:30:08.505125 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 27 16:30:08 crc kubenswrapper[4830]: W0227 16:30:08.506423 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaa935bfb_ebfd_4aa9_abc3_84d118252abe.slice/crio-b20b2a8db2343e30af76eeef218b63e3151bb756ad774c454a43318439550ffe WatchSource:0}: Error finding container b20b2a8db2343e30af76eeef218b63e3151bb756ad774c454a43318439550ffe: Status 404 returned error can't find the container with id b20b2a8db2343e30af76eeef218b63e3151bb756ad774c454a43318439550ffe Feb 27 16:30:08 crc kubenswrapper[4830]: I0227 16:30:08.780038 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0443650-95ce-4e86-97cd-5700be47571c" path="/var/lib/kubelet/pods/a0443650-95ce-4e86-97cd-5700be47571c/volumes" Feb 27 16:30:08 crc kubenswrapper[4830]: I0227 16:30:08.781728 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aa935bfb-ebfd-4aa9-abc3-84d118252abe","Type":"ContainerStarted","Data":"b20b2a8db2343e30af76eeef218b63e3151bb756ad774c454a43318439550ffe"} Feb 27 16:30:09 crc kubenswrapper[4830]: I0227 16:30:09.187887 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-dcxkj" Feb 27 16:30:09 crc kubenswrapper[4830]: I0227 16:30:09.196347 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-4d9ld" Feb 27 16:30:09 crc kubenswrapper[4830]: I0227 16:30:09.366631 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52d332d0-98e5-4cff-8486-151b6593c94f-combined-ca-bundle\") pod \"52d332d0-98e5-4cff-8486-151b6593c94f\" (UID: \"52d332d0-98e5-4cff-8486-151b6593c94f\") " Feb 27 16:30:09 crc kubenswrapper[4830]: I0227 16:30:09.367125 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4z7zp\" (UniqueName: \"kubernetes.io/projected/52d332d0-98e5-4cff-8486-151b6593c94f-kube-api-access-4z7zp\") pod \"52d332d0-98e5-4cff-8486-151b6593c94f\" (UID: \"52d332d0-98e5-4cff-8486-151b6593c94f\") " Feb 27 16:30:09 crc kubenswrapper[4830]: I0227 16:30:09.367281 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gm9xq\" (UniqueName: \"kubernetes.io/projected/459173e8-7571-47b7-9af8-3bd2d24d4e21-kube-api-access-gm9xq\") pod \"459173e8-7571-47b7-9af8-3bd2d24d4e21\" (UID: \"459173e8-7571-47b7-9af8-3bd2d24d4e21\") " Feb 27 16:30:09 crc kubenswrapper[4830]: I0227 16:30:09.367556 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/459173e8-7571-47b7-9af8-3bd2d24d4e21-combined-ca-bundle\") pod \"459173e8-7571-47b7-9af8-3bd2d24d4e21\" (UID: \"459173e8-7571-47b7-9af8-3bd2d24d4e21\") " Feb 27 16:30:09 crc kubenswrapper[4830]: I0227 16:30:09.367693 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/52d332d0-98e5-4cff-8486-151b6593c94f-config\") pod \"52d332d0-98e5-4cff-8486-151b6593c94f\" (UID: \"52d332d0-98e5-4cff-8486-151b6593c94f\") " Feb 27 16:30:09 crc kubenswrapper[4830]: I0227 16:30:09.367840 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/459173e8-7571-47b7-9af8-3bd2d24d4e21-db-sync-config-data\") pod \"459173e8-7571-47b7-9af8-3bd2d24d4e21\" (UID: \"459173e8-7571-47b7-9af8-3bd2d24d4e21\") " Feb 27 16:30:09 crc kubenswrapper[4830]: I0227 16:30:09.370698 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52d332d0-98e5-4cff-8486-151b6593c94f-kube-api-access-4z7zp" (OuterVolumeSpecName: "kube-api-access-4z7zp") pod "52d332d0-98e5-4cff-8486-151b6593c94f" (UID: "52d332d0-98e5-4cff-8486-151b6593c94f"). InnerVolumeSpecName "kube-api-access-4z7zp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:30:09 crc kubenswrapper[4830]: I0227 16:30:09.372672 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/459173e8-7571-47b7-9af8-3bd2d24d4e21-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "459173e8-7571-47b7-9af8-3bd2d24d4e21" (UID: "459173e8-7571-47b7-9af8-3bd2d24d4e21"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:30:09 crc kubenswrapper[4830]: I0227 16:30:09.374379 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/459173e8-7571-47b7-9af8-3bd2d24d4e21-kube-api-access-gm9xq" (OuterVolumeSpecName: "kube-api-access-gm9xq") pod "459173e8-7571-47b7-9af8-3bd2d24d4e21" (UID: "459173e8-7571-47b7-9af8-3bd2d24d4e21"). InnerVolumeSpecName "kube-api-access-gm9xq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:30:09 crc kubenswrapper[4830]: I0227 16:30:09.403895 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/459173e8-7571-47b7-9af8-3bd2d24d4e21-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "459173e8-7571-47b7-9af8-3bd2d24d4e21" (UID: "459173e8-7571-47b7-9af8-3bd2d24d4e21"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:30:09 crc kubenswrapper[4830]: I0227 16:30:09.413755 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52d332d0-98e5-4cff-8486-151b6593c94f-config" (OuterVolumeSpecName: "config") pod "52d332d0-98e5-4cff-8486-151b6593c94f" (UID: "52d332d0-98e5-4cff-8486-151b6593c94f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:30:09 crc kubenswrapper[4830]: I0227 16:30:09.425644 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52d332d0-98e5-4cff-8486-151b6593c94f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "52d332d0-98e5-4cff-8486-151b6593c94f" (UID: "52d332d0-98e5-4cff-8486-151b6593c94f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:30:09 crc kubenswrapper[4830]: I0227 16:30:09.472454 4830 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/459173e8-7571-47b7-9af8-3bd2d24d4e21-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 16:30:09 crc kubenswrapper[4830]: I0227 16:30:09.472522 4830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52d332d0-98e5-4cff-8486-151b6593c94f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 16:30:09 crc kubenswrapper[4830]: I0227 16:30:09.472550 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4z7zp\" (UniqueName: \"kubernetes.io/projected/52d332d0-98e5-4cff-8486-151b6593c94f-kube-api-access-4z7zp\") on node \"crc\" DevicePath \"\"" Feb 27 16:30:09 crc kubenswrapper[4830]: I0227 16:30:09.472580 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gm9xq\" (UniqueName: \"kubernetes.io/projected/459173e8-7571-47b7-9af8-3bd2d24d4e21-kube-api-access-gm9xq\") on node 
\"crc\" DevicePath \"\"" Feb 27 16:30:09 crc kubenswrapper[4830]: I0227 16:30:09.472605 4830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/459173e8-7571-47b7-9af8-3bd2d24d4e21-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 16:30:09 crc kubenswrapper[4830]: I0227 16:30:09.472628 4830 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/52d332d0-98e5-4cff-8486-151b6593c94f-config\") on node \"crc\" DevicePath \"\"" Feb 27 16:30:09 crc kubenswrapper[4830]: I0227 16:30:09.790484 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-dcxkj" event={"ID":"459173e8-7571-47b7-9af8-3bd2d24d4e21","Type":"ContainerDied","Data":"13b19b6f06c9501b054b389763ab1794a7ca7f8055e2ecc66c44dddc1a0f6fd0"} Feb 27 16:30:09 crc kubenswrapper[4830]: I0227 16:30:09.790523 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="13b19b6f06c9501b054b389763ab1794a7ca7f8055e2ecc66c44dddc1a0f6fd0" Feb 27 16:30:09 crc kubenswrapper[4830]: I0227 16:30:09.790531 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-dcxkj" Feb 27 16:30:09 crc kubenswrapper[4830]: I0227 16:30:09.793303 4830 generic.go:334] "Generic (PLEG): container finished" podID="a69bc2ed-ce70-4828-af02-ccac1c3f0c10" containerID="78f7362752654ea3426af2a1f637ac858637b23cda39620187459b1ca0eb954f" exitCode=0 Feb 27 16:30:09 crc kubenswrapper[4830]: I0227 16:30:09.793353 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-vrjmz" event={"ID":"a69bc2ed-ce70-4828-af02-ccac1c3f0c10","Type":"ContainerDied","Data":"78f7362752654ea3426af2a1f637ac858637b23cda39620187459b1ca0eb954f"} Feb 27 16:30:09 crc kubenswrapper[4830]: I0227 16:30:09.808297 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aa935bfb-ebfd-4aa9-abc3-84d118252abe","Type":"ContainerStarted","Data":"9d7903ee5c9c8d27d585b4910f200bfb80e3e1da5bdfb566e661396da94d6a68"} Feb 27 16:30:09 crc kubenswrapper[4830]: I0227 16:30:09.811007 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-4d9ld" event={"ID":"52d332d0-98e5-4cff-8486-151b6593c94f","Type":"ContainerDied","Data":"9eb11a8506bba690e55a72bafbc3808cd479c78cb49f73dc26b47b82227ec393"} Feb 27 16:30:09 crc kubenswrapper[4830]: I0227 16:30:09.811053 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-4d9ld" Feb 27 16:30:09 crc kubenswrapper[4830]: I0227 16:30:09.811066 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9eb11a8506bba690e55a72bafbc3808cd479c78cb49f73dc26b47b82227ec393" Feb 27 16:30:09 crc kubenswrapper[4830]: I0227 16:30:09.985102 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-948fdb9cd-ncm6f"] Feb 27 16:30:09 crc kubenswrapper[4830]: E0227 16:30:09.985711 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="459173e8-7571-47b7-9af8-3bd2d24d4e21" containerName="barbican-db-sync" Feb 27 16:30:09 crc kubenswrapper[4830]: I0227 16:30:09.985727 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="459173e8-7571-47b7-9af8-3bd2d24d4e21" containerName="barbican-db-sync" Feb 27 16:30:09 crc kubenswrapper[4830]: E0227 16:30:09.985744 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52d332d0-98e5-4cff-8486-151b6593c94f" containerName="neutron-db-sync" Feb 27 16:30:09 crc kubenswrapper[4830]: I0227 16:30:09.985750 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="52d332d0-98e5-4cff-8486-151b6593c94f" containerName="neutron-db-sync" Feb 27 16:30:09 crc kubenswrapper[4830]: I0227 16:30:09.985911 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="459173e8-7571-47b7-9af8-3bd2d24d4e21" containerName="barbican-db-sync" Feb 27 16:30:09 crc kubenswrapper[4830]: I0227 16:30:09.985933 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="52d332d0-98e5-4cff-8486-151b6593c94f" containerName="neutron-db-sync" Feb 27 16:30:09 crc kubenswrapper[4830]: I0227 16:30:09.987610 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-948fdb9cd-ncm6f" Feb 27 16:30:09 crc kubenswrapper[4830]: I0227 16:30:09.990504 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-nkprd" Feb 27 16:30:09 crc kubenswrapper[4830]: I0227 16:30:09.990771 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Feb 27 16:30:09 crc kubenswrapper[4830]: I0227 16:30:09.991093 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 27 16:30:09 crc kubenswrapper[4830]: I0227 16:30:09.997875 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-58c49587-cz4f5"] Feb 27 16:30:10 crc kubenswrapper[4830]: I0227 16:30:09.999203 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-58c49587-cz4f5" Feb 27 16:30:10 crc kubenswrapper[4830]: I0227 16:30:10.001856 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/22232c9c-ecf7-443e-834f-ad39b37735b2-config-data-custom\") pod \"barbican-keystone-listener-948fdb9cd-ncm6f\" (UID: \"22232c9c-ecf7-443e-834f-ad39b37735b2\") " pod="openstack/barbican-keystone-listener-948fdb9cd-ncm6f" Feb 27 16:30:10 crc kubenswrapper[4830]: I0227 16:30:10.001909 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22232c9c-ecf7-443e-834f-ad39b37735b2-config-data\") pod \"barbican-keystone-listener-948fdb9cd-ncm6f\" (UID: \"22232c9c-ecf7-443e-834f-ad39b37735b2\") " pod="openstack/barbican-keystone-listener-948fdb9cd-ncm6f" Feb 27 16:30:10 crc kubenswrapper[4830]: I0227 16:30:10.001930 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/22232c9c-ecf7-443e-834f-ad39b37735b2-logs\") pod \"barbican-keystone-listener-948fdb9cd-ncm6f\" (UID: \"22232c9c-ecf7-443e-834f-ad39b37735b2\") " pod="openstack/barbican-keystone-listener-948fdb9cd-ncm6f" Feb 27 16:30:10 crc kubenswrapper[4830]: I0227 16:30:10.001967 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tt8qf\" (UniqueName: \"kubernetes.io/projected/22232c9c-ecf7-443e-834f-ad39b37735b2-kube-api-access-tt8qf\") pod \"barbican-keystone-listener-948fdb9cd-ncm6f\" (UID: \"22232c9c-ecf7-443e-834f-ad39b37735b2\") " pod="openstack/barbican-keystone-listener-948fdb9cd-ncm6f" Feb 27 16:30:10 crc kubenswrapper[4830]: I0227 16:30:10.002054 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22232c9c-ecf7-443e-834f-ad39b37735b2-combined-ca-bundle\") pod \"barbican-keystone-listener-948fdb9cd-ncm6f\" (UID: \"22232c9c-ecf7-443e-834f-ad39b37735b2\") " pod="openstack/barbican-keystone-listener-948fdb9cd-ncm6f" Feb 27 16:30:10 crc kubenswrapper[4830]: I0227 16:30:10.002373 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Feb 27 16:30:10 crc kubenswrapper[4830]: I0227 16:30:10.027879 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-58c49587-cz4f5"] Feb 27 16:30:10 crc kubenswrapper[4830]: I0227 16:30:10.061837 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-948fdb9cd-ncm6f"] Feb 27 16:30:10 crc kubenswrapper[4830]: I0227 16:30:10.082011 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7c67bffd47-mdpgd"] Feb 27 16:30:10 crc kubenswrapper[4830]: I0227 16:30:10.083423 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7c67bffd47-mdpgd" Feb 27 16:30:10 crc kubenswrapper[4830]: I0227 16:30:10.101214 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7c67bffd47-mdpgd"] Feb 27 16:30:10 crc kubenswrapper[4830]: I0227 16:30:10.107276 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22232c9c-ecf7-443e-834f-ad39b37735b2-combined-ca-bundle\") pod \"barbican-keystone-listener-948fdb9cd-ncm6f\" (UID: \"22232c9c-ecf7-443e-834f-ad39b37735b2\") " pod="openstack/barbican-keystone-listener-948fdb9cd-ncm6f" Feb 27 16:30:10 crc kubenswrapper[4830]: I0227 16:30:10.107402 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/22232c9c-ecf7-443e-834f-ad39b37735b2-config-data-custom\") pod \"barbican-keystone-listener-948fdb9cd-ncm6f\" (UID: \"22232c9c-ecf7-443e-834f-ad39b37735b2\") " pod="openstack/barbican-keystone-listener-948fdb9cd-ncm6f" Feb 27 16:30:10 crc kubenswrapper[4830]: I0227 16:30:10.107449 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22232c9c-ecf7-443e-834f-ad39b37735b2-config-data\") pod \"barbican-keystone-listener-948fdb9cd-ncm6f\" (UID: \"22232c9c-ecf7-443e-834f-ad39b37735b2\") " pod="openstack/barbican-keystone-listener-948fdb9cd-ncm6f" Feb 27 16:30:10 crc kubenswrapper[4830]: I0227 16:30:10.107470 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/22232c9c-ecf7-443e-834f-ad39b37735b2-logs\") pod \"barbican-keystone-listener-948fdb9cd-ncm6f\" (UID: \"22232c9c-ecf7-443e-834f-ad39b37735b2\") " pod="openstack/barbican-keystone-listener-948fdb9cd-ncm6f" Feb 27 16:30:10 crc kubenswrapper[4830]: I0227 16:30:10.107512 4830 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-tt8qf\" (UniqueName: \"kubernetes.io/projected/22232c9c-ecf7-443e-834f-ad39b37735b2-kube-api-access-tt8qf\") pod \"barbican-keystone-listener-948fdb9cd-ncm6f\" (UID: \"22232c9c-ecf7-443e-834f-ad39b37735b2\") " pod="openstack/barbican-keystone-listener-948fdb9cd-ncm6f" Feb 27 16:30:10 crc kubenswrapper[4830]: I0227 16:30:10.110399 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/22232c9c-ecf7-443e-834f-ad39b37735b2-logs\") pod \"barbican-keystone-listener-948fdb9cd-ncm6f\" (UID: \"22232c9c-ecf7-443e-834f-ad39b37735b2\") " pod="openstack/barbican-keystone-listener-948fdb9cd-ncm6f" Feb 27 16:30:10 crc kubenswrapper[4830]: I0227 16:30:10.118601 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/22232c9c-ecf7-443e-834f-ad39b37735b2-config-data-custom\") pod \"barbican-keystone-listener-948fdb9cd-ncm6f\" (UID: \"22232c9c-ecf7-443e-834f-ad39b37735b2\") " pod="openstack/barbican-keystone-listener-948fdb9cd-ncm6f" Feb 27 16:30:10 crc kubenswrapper[4830]: I0227 16:30:10.131254 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22232c9c-ecf7-443e-834f-ad39b37735b2-config-data\") pod \"barbican-keystone-listener-948fdb9cd-ncm6f\" (UID: \"22232c9c-ecf7-443e-834f-ad39b37735b2\") " pod="openstack/barbican-keystone-listener-948fdb9cd-ncm6f" Feb 27 16:30:10 crc kubenswrapper[4830]: I0227 16:30:10.135090 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tt8qf\" (UniqueName: \"kubernetes.io/projected/22232c9c-ecf7-443e-834f-ad39b37735b2-kube-api-access-tt8qf\") pod \"barbican-keystone-listener-948fdb9cd-ncm6f\" (UID: \"22232c9c-ecf7-443e-834f-ad39b37735b2\") " pod="openstack/barbican-keystone-listener-948fdb9cd-ncm6f" Feb 27 16:30:10 crc 
kubenswrapper[4830]: I0227 16:30:10.138517 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22232c9c-ecf7-443e-834f-ad39b37735b2-combined-ca-bundle\") pod \"barbican-keystone-listener-948fdb9cd-ncm6f\" (UID: \"22232c9c-ecf7-443e-834f-ad39b37735b2\") " pod="openstack/barbican-keystone-listener-948fdb9cd-ncm6f" Feb 27 16:30:10 crc kubenswrapper[4830]: I0227 16:30:10.145321 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7c67bffd47-mdpgd"] Feb 27 16:30:10 crc kubenswrapper[4830]: I0227 16:30:10.176881 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-244cw"] Feb 27 16:30:10 crc kubenswrapper[4830]: I0227 16:30:10.180569 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-244cw" Feb 27 16:30:10 crc kubenswrapper[4830]: E0227 16:30:10.199445 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[config dns-svc dns-swift-storage-0 kube-api-access-7jlfc ovsdbserver-nb ovsdbserver-sb], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/dnsmasq-dns-7c67bffd47-mdpgd" podUID="d46941b3-1d7c-4a6a-a9b5-ceb2effb4c5a" Feb 27 16:30:10 crc kubenswrapper[4830]: I0227 16:30:10.208808 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-244cw"] Feb 27 16:30:10 crc kubenswrapper[4830]: I0227 16:30:10.212492 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/03b0201f-0147-4757-b5a1-6109d5c7ed94-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-244cw\" (UID: \"03b0201f-0147-4757-b5a1-6109d5c7ed94\") " pod="openstack/dnsmasq-dns-848cf88cfc-244cw" Feb 27 16:30:10 crc kubenswrapper[4830]: I0227 16:30:10.212544 4830 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/03b0201f-0147-4757-b5a1-6109d5c7ed94-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-244cw\" (UID: \"03b0201f-0147-4757-b5a1-6109d5c7ed94\") " pod="openstack/dnsmasq-dns-848cf88cfc-244cw" Feb 27 16:30:10 crc kubenswrapper[4830]: I0227 16:30:10.212561 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d46941b3-1d7c-4a6a-a9b5-ceb2effb4c5a-ovsdbserver-sb\") pod \"dnsmasq-dns-7c67bffd47-mdpgd\" (UID: \"d46941b3-1d7c-4a6a-a9b5-ceb2effb4c5a\") " pod="openstack/dnsmasq-dns-7c67bffd47-mdpgd" Feb 27 16:30:10 crc kubenswrapper[4830]: I0227 16:30:10.212577 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2h5gt\" (UniqueName: \"kubernetes.io/projected/03b0201f-0147-4757-b5a1-6109d5c7ed94-kube-api-access-2h5gt\") pod \"dnsmasq-dns-848cf88cfc-244cw\" (UID: \"03b0201f-0147-4757-b5a1-6109d5c7ed94\") " pod="openstack/dnsmasq-dns-848cf88cfc-244cw" Feb 27 16:30:10 crc kubenswrapper[4830]: I0227 16:30:10.212598 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d46941b3-1d7c-4a6a-a9b5-ceb2effb4c5a-dns-swift-storage-0\") pod \"dnsmasq-dns-7c67bffd47-mdpgd\" (UID: \"d46941b3-1d7c-4a6a-a9b5-ceb2effb4c5a\") " pod="openstack/dnsmasq-dns-7c67bffd47-mdpgd" Feb 27 16:30:10 crc kubenswrapper[4830]: I0227 16:30:10.212623 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tc59x\" (UniqueName: \"kubernetes.io/projected/f4baf4d8-24c9-4aa8-b72e-9d6d9cdd5f32-kube-api-access-tc59x\") pod \"barbican-worker-58c49587-cz4f5\" (UID: \"f4baf4d8-24c9-4aa8-b72e-9d6d9cdd5f32\") " pod="openstack/barbican-worker-58c49587-cz4f5" Feb 27 16:30:10 crc 
kubenswrapper[4830]: I0227 16:30:10.212643 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jlfc\" (UniqueName: \"kubernetes.io/projected/d46941b3-1d7c-4a6a-a9b5-ceb2effb4c5a-kube-api-access-7jlfc\") pod \"dnsmasq-dns-7c67bffd47-mdpgd\" (UID: \"d46941b3-1d7c-4a6a-a9b5-ceb2effb4c5a\") " pod="openstack/dnsmasq-dns-7c67bffd47-mdpgd" Feb 27 16:30:10 crc kubenswrapper[4830]: I0227 16:30:10.212661 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03b0201f-0147-4757-b5a1-6109d5c7ed94-config\") pod \"dnsmasq-dns-848cf88cfc-244cw\" (UID: \"03b0201f-0147-4757-b5a1-6109d5c7ed94\") " pod="openstack/dnsmasq-dns-848cf88cfc-244cw" Feb 27 16:30:10 crc kubenswrapper[4830]: I0227 16:30:10.212679 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f4baf4d8-24c9-4aa8-b72e-9d6d9cdd5f32-config-data-custom\") pod \"barbican-worker-58c49587-cz4f5\" (UID: \"f4baf4d8-24c9-4aa8-b72e-9d6d9cdd5f32\") " pod="openstack/barbican-worker-58c49587-cz4f5" Feb 27 16:30:10 crc kubenswrapper[4830]: I0227 16:30:10.212715 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f4baf4d8-24c9-4aa8-b72e-9d6d9cdd5f32-logs\") pod \"barbican-worker-58c49587-cz4f5\" (UID: \"f4baf4d8-24c9-4aa8-b72e-9d6d9cdd5f32\") " pod="openstack/barbican-worker-58c49587-cz4f5" Feb 27 16:30:10 crc kubenswrapper[4830]: I0227 16:30:10.212743 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4baf4d8-24c9-4aa8-b72e-9d6d9cdd5f32-combined-ca-bundle\") pod \"barbican-worker-58c49587-cz4f5\" (UID: \"f4baf4d8-24c9-4aa8-b72e-9d6d9cdd5f32\") " 
pod="openstack/barbican-worker-58c49587-cz4f5" Feb 27 16:30:10 crc kubenswrapper[4830]: I0227 16:30:10.212762 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4baf4d8-24c9-4aa8-b72e-9d6d9cdd5f32-config-data\") pod \"barbican-worker-58c49587-cz4f5\" (UID: \"f4baf4d8-24c9-4aa8-b72e-9d6d9cdd5f32\") " pod="openstack/barbican-worker-58c49587-cz4f5" Feb 27 16:30:10 crc kubenswrapper[4830]: I0227 16:30:10.212783 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d46941b3-1d7c-4a6a-a9b5-ceb2effb4c5a-ovsdbserver-nb\") pod \"dnsmasq-dns-7c67bffd47-mdpgd\" (UID: \"d46941b3-1d7c-4a6a-a9b5-ceb2effb4c5a\") " pod="openstack/dnsmasq-dns-7c67bffd47-mdpgd" Feb 27 16:30:10 crc kubenswrapper[4830]: I0227 16:30:10.212816 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d46941b3-1d7c-4a6a-a9b5-ceb2effb4c5a-config\") pod \"dnsmasq-dns-7c67bffd47-mdpgd\" (UID: \"d46941b3-1d7c-4a6a-a9b5-ceb2effb4c5a\") " pod="openstack/dnsmasq-dns-7c67bffd47-mdpgd" Feb 27 16:30:10 crc kubenswrapper[4830]: I0227 16:30:10.212836 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/03b0201f-0147-4757-b5a1-6109d5c7ed94-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-244cw\" (UID: \"03b0201f-0147-4757-b5a1-6109d5c7ed94\") " pod="openstack/dnsmasq-dns-848cf88cfc-244cw" Feb 27 16:30:10 crc kubenswrapper[4830]: I0227 16:30:10.212860 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/03b0201f-0147-4757-b5a1-6109d5c7ed94-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-244cw\" (UID: 
\"03b0201f-0147-4757-b5a1-6109d5c7ed94\") " pod="openstack/dnsmasq-dns-848cf88cfc-244cw" Feb 27 16:30:10 crc kubenswrapper[4830]: I0227 16:30:10.212902 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d46941b3-1d7c-4a6a-a9b5-ceb2effb4c5a-dns-svc\") pod \"dnsmasq-dns-7c67bffd47-mdpgd\" (UID: \"d46941b3-1d7c-4a6a-a9b5-ceb2effb4c5a\") " pod="openstack/dnsmasq-dns-7c67bffd47-mdpgd" Feb 27 16:30:10 crc kubenswrapper[4830]: I0227 16:30:10.233291 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-86c4877d94-j48gv"] Feb 27 16:30:10 crc kubenswrapper[4830]: I0227 16:30:10.234588 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-86c4877d94-j48gv" Feb 27 16:30:10 crc kubenswrapper[4830]: I0227 16:30:10.236233 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Feb 27 16:30:10 crc kubenswrapper[4830]: I0227 16:30:10.246240 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-86c4877d94-j48gv"] Feb 27 16:30:10 crc kubenswrapper[4830]: I0227 16:30:10.299533 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-c844968fb-vzqlt"] Feb 27 16:30:10 crc kubenswrapper[4830]: I0227 16:30:10.301075 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-c844968fb-vzqlt" Feb 27 16:30:10 crc kubenswrapper[4830]: I0227 16:30:10.306407 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 27 16:30:10 crc kubenswrapper[4830]: I0227 16:30:10.306476 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-wlzvq" Feb 27 16:30:10 crc kubenswrapper[4830]: I0227 16:30:10.306657 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 27 16:30:10 crc kubenswrapper[4830]: I0227 16:30:10.306781 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Feb 27 16:30:10 crc kubenswrapper[4830]: I0227 16:30:10.308254 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-c844968fb-vzqlt"] Feb 27 16:30:10 crc kubenswrapper[4830]: I0227 16:30:10.314410 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tc59x\" (UniqueName: \"kubernetes.io/projected/f4baf4d8-24c9-4aa8-b72e-9d6d9cdd5f32-kube-api-access-tc59x\") pod \"barbican-worker-58c49587-cz4f5\" (UID: \"f4baf4d8-24c9-4aa8-b72e-9d6d9cdd5f32\") " pod="openstack/barbican-worker-58c49587-cz4f5" Feb 27 16:30:10 crc kubenswrapper[4830]: I0227 16:30:10.314449 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7jlfc\" (UniqueName: \"kubernetes.io/projected/d46941b3-1d7c-4a6a-a9b5-ceb2effb4c5a-kube-api-access-7jlfc\") pod \"dnsmasq-dns-7c67bffd47-mdpgd\" (UID: \"d46941b3-1d7c-4a6a-a9b5-ceb2effb4c5a\") " pod="openstack/dnsmasq-dns-7c67bffd47-mdpgd" Feb 27 16:30:10 crc kubenswrapper[4830]: I0227 16:30:10.314473 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03b0201f-0147-4757-b5a1-6109d5c7ed94-config\") pod \"dnsmasq-dns-848cf88cfc-244cw\" (UID: 
\"03b0201f-0147-4757-b5a1-6109d5c7ed94\") " pod="openstack/dnsmasq-dns-848cf88cfc-244cw" Feb 27 16:30:10 crc kubenswrapper[4830]: I0227 16:30:10.314494 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f4baf4d8-24c9-4aa8-b72e-9d6d9cdd5f32-config-data-custom\") pod \"barbican-worker-58c49587-cz4f5\" (UID: \"f4baf4d8-24c9-4aa8-b72e-9d6d9cdd5f32\") " pod="openstack/barbican-worker-58c49587-cz4f5" Feb 27 16:30:10 crc kubenswrapper[4830]: I0227 16:30:10.314525 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f4baf4d8-24c9-4aa8-b72e-9d6d9cdd5f32-logs\") pod \"barbican-worker-58c49587-cz4f5\" (UID: \"f4baf4d8-24c9-4aa8-b72e-9d6d9cdd5f32\") " pod="openstack/barbican-worker-58c49587-cz4f5" Feb 27 16:30:10 crc kubenswrapper[4830]: I0227 16:30:10.314552 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4baf4d8-24c9-4aa8-b72e-9d6d9cdd5f32-combined-ca-bundle\") pod \"barbican-worker-58c49587-cz4f5\" (UID: \"f4baf4d8-24c9-4aa8-b72e-9d6d9cdd5f32\") " pod="openstack/barbican-worker-58c49587-cz4f5" Feb 27 16:30:10 crc kubenswrapper[4830]: I0227 16:30:10.314570 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4baf4d8-24c9-4aa8-b72e-9d6d9cdd5f32-config-data\") pod \"barbican-worker-58c49587-cz4f5\" (UID: \"f4baf4d8-24c9-4aa8-b72e-9d6d9cdd5f32\") " pod="openstack/barbican-worker-58c49587-cz4f5" Feb 27 16:30:10 crc kubenswrapper[4830]: I0227 16:30:10.314589 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d46941b3-1d7c-4a6a-a9b5-ceb2effb4c5a-ovsdbserver-nb\") pod \"dnsmasq-dns-7c67bffd47-mdpgd\" (UID: \"d46941b3-1d7c-4a6a-a9b5-ceb2effb4c5a\") " 
pod="openstack/dnsmasq-dns-7c67bffd47-mdpgd" Feb 27 16:30:10 crc kubenswrapper[4830]: I0227 16:30:10.314617 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d46941b3-1d7c-4a6a-a9b5-ceb2effb4c5a-config\") pod \"dnsmasq-dns-7c67bffd47-mdpgd\" (UID: \"d46941b3-1d7c-4a6a-a9b5-ceb2effb4c5a\") " pod="openstack/dnsmasq-dns-7c67bffd47-mdpgd" Feb 27 16:30:10 crc kubenswrapper[4830]: I0227 16:30:10.314756 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/03b0201f-0147-4757-b5a1-6109d5c7ed94-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-244cw\" (UID: \"03b0201f-0147-4757-b5a1-6109d5c7ed94\") " pod="openstack/dnsmasq-dns-848cf88cfc-244cw" Feb 27 16:30:10 crc kubenswrapper[4830]: I0227 16:30:10.314782 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/03b0201f-0147-4757-b5a1-6109d5c7ed94-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-244cw\" (UID: \"03b0201f-0147-4757-b5a1-6109d5c7ed94\") " pod="openstack/dnsmasq-dns-848cf88cfc-244cw" Feb 27 16:30:10 crc kubenswrapper[4830]: I0227 16:30:10.314788 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-948fdb9cd-ncm6f" Feb 27 16:30:10 crc kubenswrapper[4830]: I0227 16:30:10.314823 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d46941b3-1d7c-4a6a-a9b5-ceb2effb4c5a-dns-svc\") pod \"dnsmasq-dns-7c67bffd47-mdpgd\" (UID: \"d46941b3-1d7c-4a6a-a9b5-ceb2effb4c5a\") " pod="openstack/dnsmasq-dns-7c67bffd47-mdpgd" Feb 27 16:30:10 crc kubenswrapper[4830]: I0227 16:30:10.314854 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/03b0201f-0147-4757-b5a1-6109d5c7ed94-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-244cw\" (UID: \"03b0201f-0147-4757-b5a1-6109d5c7ed94\") " pod="openstack/dnsmasq-dns-848cf88cfc-244cw" Feb 27 16:30:10 crc kubenswrapper[4830]: I0227 16:30:10.314883 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/03b0201f-0147-4757-b5a1-6109d5c7ed94-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-244cw\" (UID: \"03b0201f-0147-4757-b5a1-6109d5c7ed94\") " pod="openstack/dnsmasq-dns-848cf88cfc-244cw" Feb 27 16:30:10 crc kubenswrapper[4830]: I0227 16:30:10.314897 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d46941b3-1d7c-4a6a-a9b5-ceb2effb4c5a-ovsdbserver-sb\") pod \"dnsmasq-dns-7c67bffd47-mdpgd\" (UID: \"d46941b3-1d7c-4a6a-a9b5-ceb2effb4c5a\") " pod="openstack/dnsmasq-dns-7c67bffd47-mdpgd" Feb 27 16:30:10 crc kubenswrapper[4830]: I0227 16:30:10.314912 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2h5gt\" (UniqueName: \"kubernetes.io/projected/03b0201f-0147-4757-b5a1-6109d5c7ed94-kube-api-access-2h5gt\") pod \"dnsmasq-dns-848cf88cfc-244cw\" (UID: \"03b0201f-0147-4757-b5a1-6109d5c7ed94\") " 
pod="openstack/dnsmasq-dns-848cf88cfc-244cw" Feb 27 16:30:10 crc kubenswrapper[4830]: I0227 16:30:10.314930 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d46941b3-1d7c-4a6a-a9b5-ceb2effb4c5a-dns-swift-storage-0\") pod \"dnsmasq-dns-7c67bffd47-mdpgd\" (UID: \"d46941b3-1d7c-4a6a-a9b5-ceb2effb4c5a\") " pod="openstack/dnsmasq-dns-7c67bffd47-mdpgd" Feb 27 16:30:10 crc kubenswrapper[4830]: I0227 16:30:10.315607 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f4baf4d8-24c9-4aa8-b72e-9d6d9cdd5f32-logs\") pod \"barbican-worker-58c49587-cz4f5\" (UID: \"f4baf4d8-24c9-4aa8-b72e-9d6d9cdd5f32\") " pod="openstack/barbican-worker-58c49587-cz4f5" Feb 27 16:30:10 crc kubenswrapper[4830]: I0227 16:30:10.316412 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d46941b3-1d7c-4a6a-a9b5-ceb2effb4c5a-dns-swift-storage-0\") pod \"dnsmasq-dns-7c67bffd47-mdpgd\" (UID: \"d46941b3-1d7c-4a6a-a9b5-ceb2effb4c5a\") " pod="openstack/dnsmasq-dns-7c67bffd47-mdpgd" Feb 27 16:30:10 crc kubenswrapper[4830]: I0227 16:30:10.316520 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d46941b3-1d7c-4a6a-a9b5-ceb2effb4c5a-dns-svc\") pod \"dnsmasq-dns-7c67bffd47-mdpgd\" (UID: \"d46941b3-1d7c-4a6a-a9b5-ceb2effb4c5a\") " pod="openstack/dnsmasq-dns-7c67bffd47-mdpgd" Feb 27 16:30:10 crc kubenswrapper[4830]: I0227 16:30:10.316821 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d46941b3-1d7c-4a6a-a9b5-ceb2effb4c5a-config\") pod \"dnsmasq-dns-7c67bffd47-mdpgd\" (UID: \"d46941b3-1d7c-4a6a-a9b5-ceb2effb4c5a\") " pod="openstack/dnsmasq-dns-7c67bffd47-mdpgd" Feb 27 16:30:10 crc kubenswrapper[4830]: I0227 16:30:10.317117 4830 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03b0201f-0147-4757-b5a1-6109d5c7ed94-config\") pod \"dnsmasq-dns-848cf88cfc-244cw\" (UID: \"03b0201f-0147-4757-b5a1-6109d5c7ed94\") " pod="openstack/dnsmasq-dns-848cf88cfc-244cw" Feb 27 16:30:10 crc kubenswrapper[4830]: I0227 16:30:10.317214 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/03b0201f-0147-4757-b5a1-6109d5c7ed94-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-244cw\" (UID: \"03b0201f-0147-4757-b5a1-6109d5c7ed94\") " pod="openstack/dnsmasq-dns-848cf88cfc-244cw" Feb 27 16:30:10 crc kubenswrapper[4830]: I0227 16:30:10.317385 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d46941b3-1d7c-4a6a-a9b5-ceb2effb4c5a-ovsdbserver-nb\") pod \"dnsmasq-dns-7c67bffd47-mdpgd\" (UID: \"d46941b3-1d7c-4a6a-a9b5-ceb2effb4c5a\") " pod="openstack/dnsmasq-dns-7c67bffd47-mdpgd" Feb 27 16:30:10 crc kubenswrapper[4830]: I0227 16:30:10.318367 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/03b0201f-0147-4757-b5a1-6109d5c7ed94-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-244cw\" (UID: \"03b0201f-0147-4757-b5a1-6109d5c7ed94\") " pod="openstack/dnsmasq-dns-848cf88cfc-244cw" Feb 27 16:30:10 crc kubenswrapper[4830]: I0227 16:30:10.318555 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/03b0201f-0147-4757-b5a1-6109d5c7ed94-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-244cw\" (UID: \"03b0201f-0147-4757-b5a1-6109d5c7ed94\") " pod="openstack/dnsmasq-dns-848cf88cfc-244cw" Feb 27 16:30:10 crc kubenswrapper[4830]: I0227 16:30:10.319193 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/f4baf4d8-24c9-4aa8-b72e-9d6d9cdd5f32-config-data\") pod \"barbican-worker-58c49587-cz4f5\" (UID: \"f4baf4d8-24c9-4aa8-b72e-9d6d9cdd5f32\") " pod="openstack/barbican-worker-58c49587-cz4f5" Feb 27 16:30:10 crc kubenswrapper[4830]: I0227 16:30:10.319373 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/03b0201f-0147-4757-b5a1-6109d5c7ed94-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-244cw\" (UID: \"03b0201f-0147-4757-b5a1-6109d5c7ed94\") " pod="openstack/dnsmasq-dns-848cf88cfc-244cw" Feb 27 16:30:10 crc kubenswrapper[4830]: I0227 16:30:10.319847 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4baf4d8-24c9-4aa8-b72e-9d6d9cdd5f32-combined-ca-bundle\") pod \"barbican-worker-58c49587-cz4f5\" (UID: \"f4baf4d8-24c9-4aa8-b72e-9d6d9cdd5f32\") " pod="openstack/barbican-worker-58c49587-cz4f5" Feb 27 16:30:10 crc kubenswrapper[4830]: I0227 16:30:10.327834 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d46941b3-1d7c-4a6a-a9b5-ceb2effb4c5a-ovsdbserver-sb\") pod \"dnsmasq-dns-7c67bffd47-mdpgd\" (UID: \"d46941b3-1d7c-4a6a-a9b5-ceb2effb4c5a\") " pod="openstack/dnsmasq-dns-7c67bffd47-mdpgd" Feb 27 16:30:10 crc kubenswrapper[4830]: I0227 16:30:10.337511 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f4baf4d8-24c9-4aa8-b72e-9d6d9cdd5f32-config-data-custom\") pod \"barbican-worker-58c49587-cz4f5\" (UID: \"f4baf4d8-24c9-4aa8-b72e-9d6d9cdd5f32\") " pod="openstack/barbican-worker-58c49587-cz4f5" Feb 27 16:30:10 crc kubenswrapper[4830]: I0227 16:30:10.339508 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7jlfc\" (UniqueName: 
\"kubernetes.io/projected/d46941b3-1d7c-4a6a-a9b5-ceb2effb4c5a-kube-api-access-7jlfc\") pod \"dnsmasq-dns-7c67bffd47-mdpgd\" (UID: \"d46941b3-1d7c-4a6a-a9b5-ceb2effb4c5a\") " pod="openstack/dnsmasq-dns-7c67bffd47-mdpgd" Feb 27 16:30:10 crc kubenswrapper[4830]: I0227 16:30:10.346054 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2h5gt\" (UniqueName: \"kubernetes.io/projected/03b0201f-0147-4757-b5a1-6109d5c7ed94-kube-api-access-2h5gt\") pod \"dnsmasq-dns-848cf88cfc-244cw\" (UID: \"03b0201f-0147-4757-b5a1-6109d5c7ed94\") " pod="openstack/dnsmasq-dns-848cf88cfc-244cw" Feb 27 16:30:10 crc kubenswrapper[4830]: I0227 16:30:10.346449 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tc59x\" (UniqueName: \"kubernetes.io/projected/f4baf4d8-24c9-4aa8-b72e-9d6d9cdd5f32-kube-api-access-tc59x\") pod \"barbican-worker-58c49587-cz4f5\" (UID: \"f4baf4d8-24c9-4aa8-b72e-9d6d9cdd5f32\") " pod="openstack/barbican-worker-58c49587-cz4f5" Feb 27 16:30:10 crc kubenswrapper[4830]: I0227 16:30:10.416015 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6c68417-9771-4ad5-acfa-b25ddda70e33-combined-ca-bundle\") pod \"barbican-api-86c4877d94-j48gv\" (UID: \"b6c68417-9771-4ad5-acfa-b25ddda70e33\") " pod="openstack/barbican-api-86c4877d94-j48gv" Feb 27 16:30:10 crc kubenswrapper[4830]: I0227 16:30:10.416048 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/92e3fe75-3936-4491-80ad-e2b738f023b2-httpd-config\") pod \"neutron-c844968fb-vzqlt\" (UID: \"92e3fe75-3936-4491-80ad-e2b738f023b2\") " pod="openstack/neutron-c844968fb-vzqlt" Feb 27 16:30:10 crc kubenswrapper[4830]: I0227 16:30:10.416100 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-wdnmh\" (UniqueName: \"kubernetes.io/projected/b6c68417-9771-4ad5-acfa-b25ddda70e33-kube-api-access-wdnmh\") pod \"barbican-api-86c4877d94-j48gv\" (UID: \"b6c68417-9771-4ad5-acfa-b25ddda70e33\") " pod="openstack/barbican-api-86c4877d94-j48gv" Feb 27 16:30:10 crc kubenswrapper[4830]: I0227 16:30:10.416127 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnrhg\" (UniqueName: \"kubernetes.io/projected/92e3fe75-3936-4491-80ad-e2b738f023b2-kube-api-access-jnrhg\") pod \"neutron-c844968fb-vzqlt\" (UID: \"92e3fe75-3936-4491-80ad-e2b738f023b2\") " pod="openstack/neutron-c844968fb-vzqlt" Feb 27 16:30:10 crc kubenswrapper[4830]: I0227 16:30:10.416166 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b6c68417-9771-4ad5-acfa-b25ddda70e33-config-data-custom\") pod \"barbican-api-86c4877d94-j48gv\" (UID: \"b6c68417-9771-4ad5-acfa-b25ddda70e33\") " pod="openstack/barbican-api-86c4877d94-j48gv" Feb 27 16:30:10 crc kubenswrapper[4830]: I0227 16:30:10.416191 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6c68417-9771-4ad5-acfa-b25ddda70e33-config-data\") pod \"barbican-api-86c4877d94-j48gv\" (UID: \"b6c68417-9771-4ad5-acfa-b25ddda70e33\") " pod="openstack/barbican-api-86c4877d94-j48gv" Feb 27 16:30:10 crc kubenswrapper[4830]: I0227 16:30:10.416226 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92e3fe75-3936-4491-80ad-e2b738f023b2-combined-ca-bundle\") pod \"neutron-c844968fb-vzqlt\" (UID: \"92e3fe75-3936-4491-80ad-e2b738f023b2\") " pod="openstack/neutron-c844968fb-vzqlt" Feb 27 16:30:10 crc kubenswrapper[4830]: I0227 16:30:10.417214 4830 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/92e3fe75-3936-4491-80ad-e2b738f023b2-config\") pod \"neutron-c844968fb-vzqlt\" (UID: \"92e3fe75-3936-4491-80ad-e2b738f023b2\") " pod="openstack/neutron-c844968fb-vzqlt" Feb 27 16:30:10 crc kubenswrapper[4830]: I0227 16:30:10.417315 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/92e3fe75-3936-4491-80ad-e2b738f023b2-ovndb-tls-certs\") pod \"neutron-c844968fb-vzqlt\" (UID: \"92e3fe75-3936-4491-80ad-e2b738f023b2\") " pod="openstack/neutron-c844968fb-vzqlt" Feb 27 16:30:10 crc kubenswrapper[4830]: I0227 16:30:10.417367 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b6c68417-9771-4ad5-acfa-b25ddda70e33-logs\") pod \"barbican-api-86c4877d94-j48gv\" (UID: \"b6c68417-9771-4ad5-acfa-b25ddda70e33\") " pod="openstack/barbican-api-86c4877d94-j48gv" Feb 27 16:30:10 crc kubenswrapper[4830]: I0227 16:30:10.525672 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6c68417-9771-4ad5-acfa-b25ddda70e33-config-data\") pod \"barbican-api-86c4877d94-j48gv\" (UID: \"b6c68417-9771-4ad5-acfa-b25ddda70e33\") " pod="openstack/barbican-api-86c4877d94-j48gv" Feb 27 16:30:10 crc kubenswrapper[4830]: I0227 16:30:10.525888 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92e3fe75-3936-4491-80ad-e2b738f023b2-combined-ca-bundle\") pod \"neutron-c844968fb-vzqlt\" (UID: \"92e3fe75-3936-4491-80ad-e2b738f023b2\") " pod="openstack/neutron-c844968fb-vzqlt" Feb 27 16:30:10 crc kubenswrapper[4830]: I0227 16:30:10.529322 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-244cw" Feb 27 16:30:10 crc kubenswrapper[4830]: I0227 16:30:10.529788 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/92e3fe75-3936-4491-80ad-e2b738f023b2-config\") pod \"neutron-c844968fb-vzqlt\" (UID: \"92e3fe75-3936-4491-80ad-e2b738f023b2\") " pod="openstack/neutron-c844968fb-vzqlt" Feb 27 16:30:10 crc kubenswrapper[4830]: I0227 16:30:10.529857 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/92e3fe75-3936-4491-80ad-e2b738f023b2-ovndb-tls-certs\") pod \"neutron-c844968fb-vzqlt\" (UID: \"92e3fe75-3936-4491-80ad-e2b738f023b2\") " pod="openstack/neutron-c844968fb-vzqlt" Feb 27 16:30:10 crc kubenswrapper[4830]: I0227 16:30:10.529893 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b6c68417-9771-4ad5-acfa-b25ddda70e33-logs\") pod \"barbican-api-86c4877d94-j48gv\" (UID: \"b6c68417-9771-4ad5-acfa-b25ddda70e33\") " pod="openstack/barbican-api-86c4877d94-j48gv" Feb 27 16:30:10 crc kubenswrapper[4830]: I0227 16:30:10.529978 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6c68417-9771-4ad5-acfa-b25ddda70e33-combined-ca-bundle\") pod \"barbican-api-86c4877d94-j48gv\" (UID: \"b6c68417-9771-4ad5-acfa-b25ddda70e33\") " pod="openstack/barbican-api-86c4877d94-j48gv" Feb 27 16:30:10 crc kubenswrapper[4830]: I0227 16:30:10.529992 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/92e3fe75-3936-4491-80ad-e2b738f023b2-httpd-config\") pod \"neutron-c844968fb-vzqlt\" (UID: \"92e3fe75-3936-4491-80ad-e2b738f023b2\") " pod="openstack/neutron-c844968fb-vzqlt" Feb 27 16:30:10 crc kubenswrapper[4830]: I0227 
16:30:10.530042 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wdnmh\" (UniqueName: \"kubernetes.io/projected/b6c68417-9771-4ad5-acfa-b25ddda70e33-kube-api-access-wdnmh\") pod \"barbican-api-86c4877d94-j48gv\" (UID: \"b6c68417-9771-4ad5-acfa-b25ddda70e33\") " pod="openstack/barbican-api-86c4877d94-j48gv" Feb 27 16:30:10 crc kubenswrapper[4830]: I0227 16:30:10.530312 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b6c68417-9771-4ad5-acfa-b25ddda70e33-logs\") pod \"barbican-api-86c4877d94-j48gv\" (UID: \"b6c68417-9771-4ad5-acfa-b25ddda70e33\") " pod="openstack/barbican-api-86c4877d94-j48gv" Feb 27 16:30:10 crc kubenswrapper[4830]: I0227 16:30:10.530062 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jnrhg\" (UniqueName: \"kubernetes.io/projected/92e3fe75-3936-4491-80ad-e2b738f023b2-kube-api-access-jnrhg\") pod \"neutron-c844968fb-vzqlt\" (UID: \"92e3fe75-3936-4491-80ad-e2b738f023b2\") " pod="openstack/neutron-c844968fb-vzqlt" Feb 27 16:30:10 crc kubenswrapper[4830]: I0227 16:30:10.530591 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b6c68417-9771-4ad5-acfa-b25ddda70e33-config-data-custom\") pod \"barbican-api-86c4877d94-j48gv\" (UID: \"b6c68417-9771-4ad5-acfa-b25ddda70e33\") " pod="openstack/barbican-api-86c4877d94-j48gv" Feb 27 16:30:10 crc kubenswrapper[4830]: I0227 16:30:10.534741 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/92e3fe75-3936-4491-80ad-e2b738f023b2-config\") pod \"neutron-c844968fb-vzqlt\" (UID: \"92e3fe75-3936-4491-80ad-e2b738f023b2\") " pod="openstack/neutron-c844968fb-vzqlt" Feb 27 16:30:10 crc kubenswrapper[4830]: I0227 16:30:10.535109 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92e3fe75-3936-4491-80ad-e2b738f023b2-combined-ca-bundle\") pod \"neutron-c844968fb-vzqlt\" (UID: \"92e3fe75-3936-4491-80ad-e2b738f023b2\") " pod="openstack/neutron-c844968fb-vzqlt" Feb 27 16:30:10 crc kubenswrapper[4830]: I0227 16:30:10.538593 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6c68417-9771-4ad5-acfa-b25ddda70e33-config-data\") pod \"barbican-api-86c4877d94-j48gv\" (UID: \"b6c68417-9771-4ad5-acfa-b25ddda70e33\") " pod="openstack/barbican-api-86c4877d94-j48gv" Feb 27 16:30:10 crc kubenswrapper[4830]: I0227 16:30:10.543029 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6c68417-9771-4ad5-acfa-b25ddda70e33-combined-ca-bundle\") pod \"barbican-api-86c4877d94-j48gv\" (UID: \"b6c68417-9771-4ad5-acfa-b25ddda70e33\") " pod="openstack/barbican-api-86c4877d94-j48gv" Feb 27 16:30:10 crc kubenswrapper[4830]: I0227 16:30:10.548476 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/92e3fe75-3936-4491-80ad-e2b738f023b2-httpd-config\") pod \"neutron-c844968fb-vzqlt\" (UID: \"92e3fe75-3936-4491-80ad-e2b738f023b2\") " pod="openstack/neutron-c844968fb-vzqlt" Feb 27 16:30:10 crc kubenswrapper[4830]: I0227 16:30:10.552592 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wdnmh\" (UniqueName: \"kubernetes.io/projected/b6c68417-9771-4ad5-acfa-b25ddda70e33-kube-api-access-wdnmh\") pod \"barbican-api-86c4877d94-j48gv\" (UID: \"b6c68417-9771-4ad5-acfa-b25ddda70e33\") " pod="openstack/barbican-api-86c4877d94-j48gv" Feb 27 16:30:10 crc kubenswrapper[4830]: I0227 16:30:10.561213 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jnrhg\" (UniqueName: 
\"kubernetes.io/projected/92e3fe75-3936-4491-80ad-e2b738f023b2-kube-api-access-jnrhg\") pod \"neutron-c844968fb-vzqlt\" (UID: \"92e3fe75-3936-4491-80ad-e2b738f023b2\") " pod="openstack/neutron-c844968fb-vzqlt" Feb 27 16:30:10 crc kubenswrapper[4830]: I0227 16:30:10.568328 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/92e3fe75-3936-4491-80ad-e2b738f023b2-ovndb-tls-certs\") pod \"neutron-c844968fb-vzqlt\" (UID: \"92e3fe75-3936-4491-80ad-e2b738f023b2\") " pod="openstack/neutron-c844968fb-vzqlt" Feb 27 16:30:10 crc kubenswrapper[4830]: I0227 16:30:10.573484 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b6c68417-9771-4ad5-acfa-b25ddda70e33-config-data-custom\") pod \"barbican-api-86c4877d94-j48gv\" (UID: \"b6c68417-9771-4ad5-acfa-b25ddda70e33\") " pod="openstack/barbican-api-86c4877d94-j48gv" Feb 27 16:30:10 crc kubenswrapper[4830]: I0227 16:30:10.623974 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-58c49587-cz4f5" Feb 27 16:30:10 crc kubenswrapper[4830]: I0227 16:30:10.637168 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-c844968fb-vzqlt" Feb 27 16:30:10 crc kubenswrapper[4830]: I0227 16:30:10.639064 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-948fdb9cd-ncm6f"] Feb 27 16:30:10 crc kubenswrapper[4830]: I0227 16:30:10.825430 4830 generic.go:334] "Generic (PLEG): container finished" podID="1141b071-f448-4a3f-b062-0255dd5dc38a" containerID="21b7ce6b7e12d2dc0f7f2b14e5661ca319f4a158bd99eb2265e8cc2844c46aeb" exitCode=0 Feb 27 16:30:10 crc kubenswrapper[4830]: I0227 16:30:10.826426 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536830-gwpcb" event={"ID":"1141b071-f448-4a3f-b062-0255dd5dc38a","Type":"ContainerDied","Data":"21b7ce6b7e12d2dc0f7f2b14e5661ca319f4a158bd99eb2265e8cc2844c46aeb"} Feb 27 16:30:10 crc kubenswrapper[4830]: I0227 16:30:10.832785 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aa935bfb-ebfd-4aa9-abc3-84d118252abe","Type":"ContainerStarted","Data":"f70612a9d7987bfbe011c9d173a99294117f3554823216cc62088541c06772f4"} Feb 27 16:30:10 crc kubenswrapper[4830]: I0227 16:30:10.836751 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-948fdb9cd-ncm6f" event={"ID":"22232c9c-ecf7-443e-834f-ad39b37735b2","Type":"ContainerStarted","Data":"f107c931d523968950e2e2557e2e0c71d1906d8784ee47e8cdbc627751b3a65f"} Feb 27 16:30:10 crc kubenswrapper[4830]: I0227 16:30:10.836783 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c67bffd47-mdpgd" Feb 27 16:30:10 crc kubenswrapper[4830]: I0227 16:30:10.849369 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c67bffd47-mdpgd" Feb 27 16:30:10 crc kubenswrapper[4830]: I0227 16:30:10.860230 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-86c4877d94-j48gv" Feb 27 16:30:11 crc kubenswrapper[4830]: I0227 16:30:11.040440 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d46941b3-1d7c-4a6a-a9b5-ceb2effb4c5a-ovsdbserver-nb\") pod \"d46941b3-1d7c-4a6a-a9b5-ceb2effb4c5a\" (UID: \"d46941b3-1d7c-4a6a-a9b5-ceb2effb4c5a\") " Feb 27 16:30:11 crc kubenswrapper[4830]: I0227 16:30:11.040501 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d46941b3-1d7c-4a6a-a9b5-ceb2effb4c5a-dns-swift-storage-0\") pod \"d46941b3-1d7c-4a6a-a9b5-ceb2effb4c5a\" (UID: \"d46941b3-1d7c-4a6a-a9b5-ceb2effb4c5a\") " Feb 27 16:30:11 crc kubenswrapper[4830]: I0227 16:30:11.040625 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7jlfc\" (UniqueName: \"kubernetes.io/projected/d46941b3-1d7c-4a6a-a9b5-ceb2effb4c5a-kube-api-access-7jlfc\") pod \"d46941b3-1d7c-4a6a-a9b5-ceb2effb4c5a\" (UID: \"d46941b3-1d7c-4a6a-a9b5-ceb2effb4c5a\") " Feb 27 16:30:11 crc kubenswrapper[4830]: I0227 16:30:11.040684 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d46941b3-1d7c-4a6a-a9b5-ceb2effb4c5a-config\") pod \"d46941b3-1d7c-4a6a-a9b5-ceb2effb4c5a\" (UID: \"d46941b3-1d7c-4a6a-a9b5-ceb2effb4c5a\") " Feb 27 16:30:11 crc kubenswrapper[4830]: I0227 16:30:11.040714 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d46941b3-1d7c-4a6a-a9b5-ceb2effb4c5a-dns-svc\") pod \"d46941b3-1d7c-4a6a-a9b5-ceb2effb4c5a\" (UID: \"d46941b3-1d7c-4a6a-a9b5-ceb2effb4c5a\") " Feb 27 16:30:11 crc kubenswrapper[4830]: I0227 16:30:11.040832 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/d46941b3-1d7c-4a6a-a9b5-ceb2effb4c5a-ovsdbserver-sb\") pod \"d46941b3-1d7c-4a6a-a9b5-ceb2effb4c5a\" (UID: \"d46941b3-1d7c-4a6a-a9b5-ceb2effb4c5a\") " Feb 27 16:30:11 crc kubenswrapper[4830]: I0227 16:30:11.041761 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d46941b3-1d7c-4a6a-a9b5-ceb2effb4c5a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d46941b3-1d7c-4a6a-a9b5-ceb2effb4c5a" (UID: "d46941b3-1d7c-4a6a-a9b5-ceb2effb4c5a"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:30:11 crc kubenswrapper[4830]: I0227 16:30:11.041795 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d46941b3-1d7c-4a6a-a9b5-ceb2effb4c5a-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "d46941b3-1d7c-4a6a-a9b5-ceb2effb4c5a" (UID: "d46941b3-1d7c-4a6a-a9b5-ceb2effb4c5a"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:30:11 crc kubenswrapper[4830]: I0227 16:30:11.042578 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d46941b3-1d7c-4a6a-a9b5-ceb2effb4c5a-config" (OuterVolumeSpecName: "config") pod "d46941b3-1d7c-4a6a-a9b5-ceb2effb4c5a" (UID: "d46941b3-1d7c-4a6a-a9b5-ceb2effb4c5a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:30:11 crc kubenswrapper[4830]: I0227 16:30:11.044296 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d46941b3-1d7c-4a6a-a9b5-ceb2effb4c5a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d46941b3-1d7c-4a6a-a9b5-ceb2effb4c5a" (UID: "d46941b3-1d7c-4a6a-a9b5-ceb2effb4c5a"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:30:11 crc kubenswrapper[4830]: I0227 16:30:11.045359 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d46941b3-1d7c-4a6a-a9b5-ceb2effb4c5a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d46941b3-1d7c-4a6a-a9b5-ceb2effb4c5a" (UID: "d46941b3-1d7c-4a6a-a9b5-ceb2effb4c5a"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:30:11 crc kubenswrapper[4830]: I0227 16:30:11.051631 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-244cw"] Feb 27 16:30:11 crc kubenswrapper[4830]: I0227 16:30:11.054742 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d46941b3-1d7c-4a6a-a9b5-ceb2effb4c5a-kube-api-access-7jlfc" (OuterVolumeSpecName: "kube-api-access-7jlfc") pod "d46941b3-1d7c-4a6a-a9b5-ceb2effb4c5a" (UID: "d46941b3-1d7c-4a6a-a9b5-ceb2effb4c5a"). InnerVolumeSpecName "kube-api-access-7jlfc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:30:11 crc kubenswrapper[4830]: W0227 16:30:11.061901 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod03b0201f_0147_4757_b5a1_6109d5c7ed94.slice/crio-a03ba04743b7c4f41d1fea28c83449a214445ab02b3f1f884899e4854c6f74ea WatchSource:0}: Error finding container a03ba04743b7c4f41d1fea28c83449a214445ab02b3f1f884899e4854c6f74ea: Status 404 returned error can't find the container with id a03ba04743b7c4f41d1fea28c83449a214445ab02b3f1f884899e4854c6f74ea Feb 27 16:30:11 crc kubenswrapper[4830]: I0227 16:30:11.143613 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7jlfc\" (UniqueName: \"kubernetes.io/projected/d46941b3-1d7c-4a6a-a9b5-ceb2effb4c5a-kube-api-access-7jlfc\") on node \"crc\" DevicePath \"\"" Feb 27 16:30:11 crc kubenswrapper[4830]: I0227 16:30:11.143647 4830 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d46941b3-1d7c-4a6a-a9b5-ceb2effb4c5a-config\") on node \"crc\" DevicePath \"\"" Feb 27 16:30:11 crc kubenswrapper[4830]: I0227 16:30:11.143677 4830 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d46941b3-1d7c-4a6a-a9b5-ceb2effb4c5a-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 27 16:30:11 crc kubenswrapper[4830]: I0227 16:30:11.143687 4830 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d46941b3-1d7c-4a6a-a9b5-ceb2effb4c5a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 27 16:30:11 crc kubenswrapper[4830]: I0227 16:30:11.143694 4830 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d46941b3-1d7c-4a6a-a9b5-ceb2effb4c5a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 27 16:30:11 crc kubenswrapper[4830]: I0227 16:30:11.143702 4830 
reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d46941b3-1d7c-4a6a-a9b5-ceb2effb4c5a-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 27 16:30:11 crc kubenswrapper[4830]: I0227 16:30:11.198703 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-vrjmz" Feb 27 16:30:11 crc kubenswrapper[4830]: W0227 16:30:11.239740 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf4baf4d8_24c9_4aa8_b72e_9d6d9cdd5f32.slice/crio-847d19249a348581377717aa03626cf8ed77cb6a659d9e8fa65b56a85e33ea72 WatchSource:0}: Error finding container 847d19249a348581377717aa03626cf8ed77cb6a659d9e8fa65b56a85e33ea72: Status 404 returned error can't find the container with id 847d19249a348581377717aa03626cf8ed77cb6a659d9e8fa65b56a85e33ea72 Feb 27 16:30:11 crc kubenswrapper[4830]: I0227 16:30:11.239790 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-58c49587-cz4f5"] Feb 27 16:30:11 crc kubenswrapper[4830]: I0227 16:30:11.280588 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-86c4877d94-j48gv"] Feb 27 16:30:11 crc kubenswrapper[4830]: I0227 16:30:11.313237 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-c844968fb-vzqlt"] Feb 27 16:30:11 crc kubenswrapper[4830]: I0227 16:30:11.346134 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a69bc2ed-ce70-4828-af02-ccac1c3f0c10-etc-machine-id\") pod \"a69bc2ed-ce70-4828-af02-ccac1c3f0c10\" (UID: \"a69bc2ed-ce70-4828-af02-ccac1c3f0c10\") " Feb 27 16:30:11 crc kubenswrapper[4830]: I0227 16:30:11.346511 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/a69bc2ed-ce70-4828-af02-ccac1c3f0c10-scripts\") pod \"a69bc2ed-ce70-4828-af02-ccac1c3f0c10\" (UID: \"a69bc2ed-ce70-4828-af02-ccac1c3f0c10\") " Feb 27 16:30:11 crc kubenswrapper[4830]: I0227 16:30:11.346575 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a69bc2ed-ce70-4828-af02-ccac1c3f0c10-config-data\") pod \"a69bc2ed-ce70-4828-af02-ccac1c3f0c10\" (UID: \"a69bc2ed-ce70-4828-af02-ccac1c3f0c10\") " Feb 27 16:30:11 crc kubenswrapper[4830]: I0227 16:30:11.346601 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wmf7s\" (UniqueName: \"kubernetes.io/projected/a69bc2ed-ce70-4828-af02-ccac1c3f0c10-kube-api-access-wmf7s\") pod \"a69bc2ed-ce70-4828-af02-ccac1c3f0c10\" (UID: \"a69bc2ed-ce70-4828-af02-ccac1c3f0c10\") " Feb 27 16:30:11 crc kubenswrapper[4830]: I0227 16:30:11.346731 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a69bc2ed-ce70-4828-af02-ccac1c3f0c10-db-sync-config-data\") pod \"a69bc2ed-ce70-4828-af02-ccac1c3f0c10\" (UID: \"a69bc2ed-ce70-4828-af02-ccac1c3f0c10\") " Feb 27 16:30:11 crc kubenswrapper[4830]: I0227 16:30:11.346760 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a69bc2ed-ce70-4828-af02-ccac1c3f0c10-combined-ca-bundle\") pod \"a69bc2ed-ce70-4828-af02-ccac1c3f0c10\" (UID: \"a69bc2ed-ce70-4828-af02-ccac1c3f0c10\") " Feb 27 16:30:11 crc kubenswrapper[4830]: I0227 16:30:11.347532 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a69bc2ed-ce70-4828-af02-ccac1c3f0c10-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "a69bc2ed-ce70-4828-af02-ccac1c3f0c10" (UID: "a69bc2ed-ce70-4828-af02-ccac1c3f0c10"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 16:30:11 crc kubenswrapper[4830]: I0227 16:30:11.351404 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a69bc2ed-ce70-4828-af02-ccac1c3f0c10-scripts" (OuterVolumeSpecName: "scripts") pod "a69bc2ed-ce70-4828-af02-ccac1c3f0c10" (UID: "a69bc2ed-ce70-4828-af02-ccac1c3f0c10"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:30:11 crc kubenswrapper[4830]: I0227 16:30:11.351414 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a69bc2ed-ce70-4828-af02-ccac1c3f0c10-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "a69bc2ed-ce70-4828-af02-ccac1c3f0c10" (UID: "a69bc2ed-ce70-4828-af02-ccac1c3f0c10"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:30:11 crc kubenswrapper[4830]: I0227 16:30:11.356668 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a69bc2ed-ce70-4828-af02-ccac1c3f0c10-kube-api-access-wmf7s" (OuterVolumeSpecName: "kube-api-access-wmf7s") pod "a69bc2ed-ce70-4828-af02-ccac1c3f0c10" (UID: "a69bc2ed-ce70-4828-af02-ccac1c3f0c10"). InnerVolumeSpecName "kube-api-access-wmf7s". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:30:11 crc kubenswrapper[4830]: I0227 16:30:11.374575 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a69bc2ed-ce70-4828-af02-ccac1c3f0c10-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a69bc2ed-ce70-4828-af02-ccac1c3f0c10" (UID: "a69bc2ed-ce70-4828-af02-ccac1c3f0c10"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:30:11 crc kubenswrapper[4830]: I0227 16:30:11.399809 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a69bc2ed-ce70-4828-af02-ccac1c3f0c10-config-data" (OuterVolumeSpecName: "config-data") pod "a69bc2ed-ce70-4828-af02-ccac1c3f0c10" (UID: "a69bc2ed-ce70-4828-af02-ccac1c3f0c10"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:30:11 crc kubenswrapper[4830]: I0227 16:30:11.448664 4830 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a69bc2ed-ce70-4828-af02-ccac1c3f0c10-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 16:30:11 crc kubenswrapper[4830]: I0227 16:30:11.448691 4830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a69bc2ed-ce70-4828-af02-ccac1c3f0c10-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 16:30:11 crc kubenswrapper[4830]: I0227 16:30:11.448700 4830 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a69bc2ed-ce70-4828-af02-ccac1c3f0c10-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 27 16:30:11 crc kubenswrapper[4830]: I0227 16:30:11.448709 4830 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a69bc2ed-ce70-4828-af02-ccac1c3f0c10-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 16:30:11 crc kubenswrapper[4830]: I0227 16:30:11.448718 4830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a69bc2ed-ce70-4828-af02-ccac1c3f0c10-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 16:30:11 crc kubenswrapper[4830]: I0227 16:30:11.448726 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wmf7s\" (UniqueName: 
\"kubernetes.io/projected/a69bc2ed-ce70-4828-af02-ccac1c3f0c10-kube-api-access-wmf7s\") on node \"crc\" DevicePath \"\"" Feb 27 16:30:11 crc kubenswrapper[4830]: I0227 16:30:11.854516 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-c844968fb-vzqlt" event={"ID":"92e3fe75-3936-4491-80ad-e2b738f023b2","Type":"ContainerStarted","Data":"370cccbbf378833ab78c48ea79a72b415f5be5b63595a1d5c9da597419ac42f8"} Feb 27 16:30:11 crc kubenswrapper[4830]: I0227 16:30:11.854819 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-c844968fb-vzqlt" event={"ID":"92e3fe75-3936-4491-80ad-e2b738f023b2","Type":"ContainerStarted","Data":"a986bbda403364dd28f3ffc0954e8e1f8595a2d731d8bb3cf54223d09a324a21"} Feb 27 16:30:11 crc kubenswrapper[4830]: I0227 16:30:11.854830 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-c844968fb-vzqlt" event={"ID":"92e3fe75-3936-4491-80ad-e2b738f023b2","Type":"ContainerStarted","Data":"6d503f9c2d8929099442767fef35a030b24a05a6f13adde4a763f40df6a0ba49"} Feb 27 16:30:11 crc kubenswrapper[4830]: I0227 16:30:11.854843 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-c844968fb-vzqlt" Feb 27 16:30:11 crc kubenswrapper[4830]: I0227 16:30:11.872741 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aa935bfb-ebfd-4aa9-abc3-84d118252abe","Type":"ContainerStarted","Data":"10d669d0a72502efb3bd8086dffa6db237238897617d2d5aa7425d67a1a8b135"} Feb 27 16:30:11 crc kubenswrapper[4830]: I0227 16:30:11.880250 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-c844968fb-vzqlt" podStartSLOduration=1.880234465 podStartE2EDuration="1.880234465s" podCreationTimestamp="2026-02-27 16:30:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:30:11.875486871 +0000 UTC 
m=+1407.964759324" watchObservedRunningTime="2026-02-27 16:30:11.880234465 +0000 UTC m=+1407.969506938" Feb 27 16:30:11 crc kubenswrapper[4830]: I0227 16:30:11.889737 4830 generic.go:334] "Generic (PLEG): container finished" podID="03b0201f-0147-4757-b5a1-6109d5c7ed94" containerID="c8b8abf14112a674db4a179c12314ef74d6aa35ee51b308fd44b4896c7805a9f" exitCode=0 Feb 27 16:30:11 crc kubenswrapper[4830]: I0227 16:30:11.889807 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-244cw" event={"ID":"03b0201f-0147-4757-b5a1-6109d5c7ed94","Type":"ContainerDied","Data":"c8b8abf14112a674db4a179c12314ef74d6aa35ee51b308fd44b4896c7805a9f"} Feb 27 16:30:11 crc kubenswrapper[4830]: I0227 16:30:11.889846 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-244cw" event={"ID":"03b0201f-0147-4757-b5a1-6109d5c7ed94","Type":"ContainerStarted","Data":"a03ba04743b7c4f41d1fea28c83449a214445ab02b3f1f884899e4854c6f74ea"} Feb 27 16:30:11 crc kubenswrapper[4830]: I0227 16:30:11.895025 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-58c49587-cz4f5" event={"ID":"f4baf4d8-24c9-4aa8-b72e-9d6d9cdd5f32","Type":"ContainerStarted","Data":"847d19249a348581377717aa03626cf8ed77cb6a659d9e8fa65b56a85e33ea72"} Feb 27 16:30:11 crc kubenswrapper[4830]: I0227 16:30:11.900134 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-86c4877d94-j48gv" event={"ID":"b6c68417-9771-4ad5-acfa-b25ddda70e33","Type":"ContainerStarted","Data":"f6f9eacfd59446aa4e25953cd1e74800b15b5cb949be8f1c201f6b98ceddfaea"} Feb 27 16:30:11 crc kubenswrapper[4830]: I0227 16:30:11.900172 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-86c4877d94-j48gv" event={"ID":"b6c68417-9771-4ad5-acfa-b25ddda70e33","Type":"ContainerStarted","Data":"87fe0cf182a5dc688fa01ab19965899ca6f4035532e1b667dffb3b4e0f3cee8a"} Feb 27 16:30:11 crc kubenswrapper[4830]: I0227 16:30:11.900184 
4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-86c4877d94-j48gv" event={"ID":"b6c68417-9771-4ad5-acfa-b25ddda70e33","Type":"ContainerStarted","Data":"e0a53804b4e17fb255d28d1f34ee08b9a66644b29b7ada9f6f591f074a17fa8c"} Feb 27 16:30:11 crc kubenswrapper[4830]: I0227 16:30:11.900871 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-86c4877d94-j48gv" Feb 27 16:30:11 crc kubenswrapper[4830]: I0227 16:30:11.900899 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-86c4877d94-j48gv" Feb 27 16:30:11 crc kubenswrapper[4830]: I0227 16:30:11.905442 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c67bffd47-mdpgd" Feb 27 16:30:11 crc kubenswrapper[4830]: I0227 16:30:11.907769 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-vrjmz" Feb 27 16:30:11 crc kubenswrapper[4830]: I0227 16:30:11.907872 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-vrjmz" event={"ID":"a69bc2ed-ce70-4828-af02-ccac1c3f0c10","Type":"ContainerDied","Data":"f1942395b439c33fd144b9ce5069c931029aa29ce43019767aebdb680fc41a8d"} Feb 27 16:30:11 crc kubenswrapper[4830]: I0227 16:30:11.907925 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f1942395b439c33fd144b9ce5069c931029aa29ce43019767aebdb680fc41a8d" Feb 27 16:30:11 crc kubenswrapper[4830]: I0227 16:30:11.943075 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-86c4877d94-j48gv" podStartSLOduration=1.94305514 podStartE2EDuration="1.94305514s" podCreationTimestamp="2026-02-27 16:30:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:30:11.939485155 +0000 UTC m=+1408.028757618" 
watchObservedRunningTime="2026-02-27 16:30:11.94305514 +0000 UTC m=+1408.032327603" Feb 27 16:30:12 crc kubenswrapper[4830]: I0227 16:30:12.019040 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7c67bffd47-mdpgd"] Feb 27 16:30:12 crc kubenswrapper[4830]: I0227 16:30:12.042038 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7c67bffd47-mdpgd"] Feb 27 16:30:12 crc kubenswrapper[4830]: I0227 16:30:12.054465 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 27 16:30:12 crc kubenswrapper[4830]: E0227 16:30:12.054883 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a69bc2ed-ce70-4828-af02-ccac1c3f0c10" containerName="cinder-db-sync" Feb 27 16:30:12 crc kubenswrapper[4830]: I0227 16:30:12.054901 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="a69bc2ed-ce70-4828-af02-ccac1c3f0c10" containerName="cinder-db-sync" Feb 27 16:30:12 crc kubenswrapper[4830]: I0227 16:30:12.055110 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="a69bc2ed-ce70-4828-af02-ccac1c3f0c10" containerName="cinder-db-sync" Feb 27 16:30:12 crc kubenswrapper[4830]: I0227 16:30:12.056018 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 27 16:30:12 crc kubenswrapper[4830]: I0227 16:30:12.061428 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 27 16:30:12 crc kubenswrapper[4830]: I0227 16:30:12.061740 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 27 16:30:12 crc kubenswrapper[4830]: I0227 16:30:12.061871 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 27 16:30:12 crc kubenswrapper[4830]: I0227 16:30:12.062054 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-jw2lh" Feb 27 16:30:12 crc kubenswrapper[4830]: I0227 16:30:12.077009 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 27 16:30:12 crc kubenswrapper[4830]: I0227 16:30:12.111482 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-244cw"] Feb 27 16:30:12 crc kubenswrapper[4830]: I0227 16:30:12.128528 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-g94gr"] Feb 27 16:30:12 crc kubenswrapper[4830]: I0227 16:30:12.129911 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-g94gr" Feb 27 16:30:12 crc kubenswrapper[4830]: I0227 16:30:12.148365 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-g94gr"] Feb 27 16:30:12 crc kubenswrapper[4830]: I0227 16:30:12.167790 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvrn7\" (UniqueName: \"kubernetes.io/projected/da0f5d1f-1944-4347-8a18-fd946fb7ed6a-kube-api-access-cvrn7\") pod \"cinder-scheduler-0\" (UID: \"da0f5d1f-1944-4347-8a18-fd946fb7ed6a\") " pod="openstack/cinder-scheduler-0" Feb 27 16:30:12 crc kubenswrapper[4830]: I0227 16:30:12.167838 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da0f5d1f-1944-4347-8a18-fd946fb7ed6a-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"da0f5d1f-1944-4347-8a18-fd946fb7ed6a\") " pod="openstack/cinder-scheduler-0" Feb 27 16:30:12 crc kubenswrapper[4830]: I0227 16:30:12.167887 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da0f5d1f-1944-4347-8a18-fd946fb7ed6a-config-data\") pod \"cinder-scheduler-0\" (UID: \"da0f5d1f-1944-4347-8a18-fd946fb7ed6a\") " pod="openstack/cinder-scheduler-0" Feb 27 16:30:12 crc kubenswrapper[4830]: I0227 16:30:12.167938 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/da0f5d1f-1944-4347-8a18-fd946fb7ed6a-scripts\") pod \"cinder-scheduler-0\" (UID: \"da0f5d1f-1944-4347-8a18-fd946fb7ed6a\") " pod="openstack/cinder-scheduler-0" Feb 27 16:30:12 crc kubenswrapper[4830]: I0227 16:30:12.167983 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/da0f5d1f-1944-4347-8a18-fd946fb7ed6a-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"da0f5d1f-1944-4347-8a18-fd946fb7ed6a\") " pod="openstack/cinder-scheduler-0" Feb 27 16:30:12 crc kubenswrapper[4830]: I0227 16:30:12.168008 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/da0f5d1f-1944-4347-8a18-fd946fb7ed6a-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"da0f5d1f-1944-4347-8a18-fd946fb7ed6a\") " pod="openstack/cinder-scheduler-0" Feb 27 16:30:12 crc kubenswrapper[4830]: I0227 16:30:12.269826 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cvrn7\" (UniqueName: \"kubernetes.io/projected/da0f5d1f-1944-4347-8a18-fd946fb7ed6a-kube-api-access-cvrn7\") pod \"cinder-scheduler-0\" (UID: \"da0f5d1f-1944-4347-8a18-fd946fb7ed6a\") " pod="openstack/cinder-scheduler-0" Feb 27 16:30:12 crc kubenswrapper[4830]: I0227 16:30:12.269868 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da0f5d1f-1944-4347-8a18-fd946fb7ed6a-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"da0f5d1f-1944-4347-8a18-fd946fb7ed6a\") " pod="openstack/cinder-scheduler-0" Feb 27 16:30:12 crc kubenswrapper[4830]: I0227 16:30:12.269911 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ss7wg\" (UniqueName: \"kubernetes.io/projected/eeb02ef6-6b7f-4e31-8446-f2376b49d69a-kube-api-access-ss7wg\") pod \"dnsmasq-dns-6578955fd5-g94gr\" (UID: \"eeb02ef6-6b7f-4e31-8446-f2376b49d69a\") " pod="openstack/dnsmasq-dns-6578955fd5-g94gr" Feb 27 16:30:12 crc kubenswrapper[4830]: I0227 16:30:12.269937 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/da0f5d1f-1944-4347-8a18-fd946fb7ed6a-config-data\") pod \"cinder-scheduler-0\" (UID: \"da0f5d1f-1944-4347-8a18-fd946fb7ed6a\") " pod="openstack/cinder-scheduler-0" Feb 27 16:30:12 crc kubenswrapper[4830]: I0227 16:30:12.269979 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eeb02ef6-6b7f-4e31-8446-f2376b49d69a-dns-svc\") pod \"dnsmasq-dns-6578955fd5-g94gr\" (UID: \"eeb02ef6-6b7f-4e31-8446-f2376b49d69a\") " pod="openstack/dnsmasq-dns-6578955fd5-g94gr" Feb 27 16:30:12 crc kubenswrapper[4830]: I0227 16:30:12.269995 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eeb02ef6-6b7f-4e31-8446-f2376b49d69a-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-g94gr\" (UID: \"eeb02ef6-6b7f-4e31-8446-f2376b49d69a\") " pod="openstack/dnsmasq-dns-6578955fd5-g94gr" Feb 27 16:30:12 crc kubenswrapper[4830]: I0227 16:30:12.270016 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eeb02ef6-6b7f-4e31-8446-f2376b49d69a-config\") pod \"dnsmasq-dns-6578955fd5-g94gr\" (UID: \"eeb02ef6-6b7f-4e31-8446-f2376b49d69a\") " pod="openstack/dnsmasq-dns-6578955fd5-g94gr" Feb 27 16:30:12 crc kubenswrapper[4830]: I0227 16:30:12.270048 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eeb02ef6-6b7f-4e31-8446-f2376b49d69a-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-g94gr\" (UID: \"eeb02ef6-6b7f-4e31-8446-f2376b49d69a\") " pod="openstack/dnsmasq-dns-6578955fd5-g94gr" Feb 27 16:30:12 crc kubenswrapper[4830]: I0227 16:30:12.270071 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/da0f5d1f-1944-4347-8a18-fd946fb7ed6a-scripts\") pod \"cinder-scheduler-0\" (UID: \"da0f5d1f-1944-4347-8a18-fd946fb7ed6a\") " pod="openstack/cinder-scheduler-0" Feb 27 16:30:12 crc kubenswrapper[4830]: I0227 16:30:12.270096 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/da0f5d1f-1944-4347-8a18-fd946fb7ed6a-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"da0f5d1f-1944-4347-8a18-fd946fb7ed6a\") " pod="openstack/cinder-scheduler-0" Feb 27 16:30:12 crc kubenswrapper[4830]: I0227 16:30:12.270115 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/eeb02ef6-6b7f-4e31-8446-f2376b49d69a-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-g94gr\" (UID: \"eeb02ef6-6b7f-4e31-8446-f2376b49d69a\") " pod="openstack/dnsmasq-dns-6578955fd5-g94gr" Feb 27 16:30:12 crc kubenswrapper[4830]: I0227 16:30:12.270142 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/da0f5d1f-1944-4347-8a18-fd946fb7ed6a-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"da0f5d1f-1944-4347-8a18-fd946fb7ed6a\") " pod="openstack/cinder-scheduler-0" Feb 27 16:30:12 crc kubenswrapper[4830]: I0227 16:30:12.274883 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/da0f5d1f-1944-4347-8a18-fd946fb7ed6a-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"da0f5d1f-1944-4347-8a18-fd946fb7ed6a\") " pod="openstack/cinder-scheduler-0" Feb 27 16:30:12 crc kubenswrapper[4830]: I0227 16:30:12.277161 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 27 16:30:12 crc kubenswrapper[4830]: I0227 16:30:12.278152 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da0f5d1f-1944-4347-8a18-fd946fb7ed6a-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"da0f5d1f-1944-4347-8a18-fd946fb7ed6a\") " pod="openstack/cinder-scheduler-0" Feb 27 16:30:12 crc kubenswrapper[4830]: I0227 16:30:12.278198 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/da0f5d1f-1944-4347-8a18-fd946fb7ed6a-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"da0f5d1f-1944-4347-8a18-fd946fb7ed6a\") " pod="openstack/cinder-scheduler-0" Feb 27 16:30:12 crc kubenswrapper[4830]: I0227 16:30:12.278641 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 27 16:30:12 crc kubenswrapper[4830]: I0227 16:30:12.284924 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 27 16:30:12 crc kubenswrapper[4830]: I0227 16:30:12.294119 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/da0f5d1f-1944-4347-8a18-fd946fb7ed6a-scripts\") pod \"cinder-scheduler-0\" (UID: \"da0f5d1f-1944-4347-8a18-fd946fb7ed6a\") " pod="openstack/cinder-scheduler-0" Feb 27 16:30:12 crc kubenswrapper[4830]: I0227 16:30:12.316618 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 27 16:30:12 crc kubenswrapper[4830]: I0227 16:30:12.321842 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da0f5d1f-1944-4347-8a18-fd946fb7ed6a-config-data\") pod \"cinder-scheduler-0\" (UID: \"da0f5d1f-1944-4347-8a18-fd946fb7ed6a\") " pod="openstack/cinder-scheduler-0" Feb 27 16:30:12 crc kubenswrapper[4830]: I0227 16:30:12.332822 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cvrn7\" (UniqueName: 
\"kubernetes.io/projected/da0f5d1f-1944-4347-8a18-fd946fb7ed6a-kube-api-access-cvrn7\") pod \"cinder-scheduler-0\" (UID: \"da0f5d1f-1944-4347-8a18-fd946fb7ed6a\") " pod="openstack/cinder-scheduler-0" Feb 27 16:30:12 crc kubenswrapper[4830]: I0227 16:30:12.373817 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eeb02ef6-6b7f-4e31-8446-f2376b49d69a-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-g94gr\" (UID: \"eeb02ef6-6b7f-4e31-8446-f2376b49d69a\") " pod="openstack/dnsmasq-dns-6578955fd5-g94gr" Feb 27 16:30:12 crc kubenswrapper[4830]: I0227 16:30:12.373884 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8d4b5776-2c37-4d23-a1ef-4738230012db-logs\") pod \"cinder-api-0\" (UID: \"8d4b5776-2c37-4d23-a1ef-4738230012db\") " pod="openstack/cinder-api-0" Feb 27 16:30:12 crc kubenswrapper[4830]: I0227 16:30:12.373902 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/eeb02ef6-6b7f-4e31-8446-f2376b49d69a-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-g94gr\" (UID: \"eeb02ef6-6b7f-4e31-8446-f2376b49d69a\") " pod="openstack/dnsmasq-dns-6578955fd5-g94gr" Feb 27 16:30:12 crc kubenswrapper[4830]: I0227 16:30:12.373954 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d4b5776-2c37-4d23-a1ef-4738230012db-config-data\") pod \"cinder-api-0\" (UID: \"8d4b5776-2c37-4d23-a1ef-4738230012db\") " pod="openstack/cinder-api-0" Feb 27 16:30:12 crc kubenswrapper[4830]: I0227 16:30:12.373978 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-825j4\" (UniqueName: 
\"kubernetes.io/projected/8d4b5776-2c37-4d23-a1ef-4738230012db-kube-api-access-825j4\") pod \"cinder-api-0\" (UID: \"8d4b5776-2c37-4d23-a1ef-4738230012db\") " pod="openstack/cinder-api-0" Feb 27 16:30:12 crc kubenswrapper[4830]: I0227 16:30:12.374003 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d4b5776-2c37-4d23-a1ef-4738230012db-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"8d4b5776-2c37-4d23-a1ef-4738230012db\") " pod="openstack/cinder-api-0" Feb 27 16:30:12 crc kubenswrapper[4830]: I0227 16:30:12.374034 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8d4b5776-2c37-4d23-a1ef-4738230012db-etc-machine-id\") pod \"cinder-api-0\" (UID: \"8d4b5776-2c37-4d23-a1ef-4738230012db\") " pod="openstack/cinder-api-0" Feb 27 16:30:12 crc kubenswrapper[4830]: I0227 16:30:12.374072 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ss7wg\" (UniqueName: \"kubernetes.io/projected/eeb02ef6-6b7f-4e31-8446-f2376b49d69a-kube-api-access-ss7wg\") pod \"dnsmasq-dns-6578955fd5-g94gr\" (UID: \"eeb02ef6-6b7f-4e31-8446-f2376b49d69a\") " pod="openstack/dnsmasq-dns-6578955fd5-g94gr" Feb 27 16:30:12 crc kubenswrapper[4830]: I0227 16:30:12.374102 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d4b5776-2c37-4d23-a1ef-4738230012db-scripts\") pod \"cinder-api-0\" (UID: \"8d4b5776-2c37-4d23-a1ef-4738230012db\") " pod="openstack/cinder-api-0" Feb 27 16:30:12 crc kubenswrapper[4830]: I0227 16:30:12.374123 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eeb02ef6-6b7f-4e31-8446-f2376b49d69a-dns-svc\") pod \"dnsmasq-dns-6578955fd5-g94gr\" (UID: 
\"eeb02ef6-6b7f-4e31-8446-f2376b49d69a\") " pod="openstack/dnsmasq-dns-6578955fd5-g94gr" Feb 27 16:30:12 crc kubenswrapper[4830]: I0227 16:30:12.374140 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eeb02ef6-6b7f-4e31-8446-f2376b49d69a-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-g94gr\" (UID: \"eeb02ef6-6b7f-4e31-8446-f2376b49d69a\") " pod="openstack/dnsmasq-dns-6578955fd5-g94gr" Feb 27 16:30:12 crc kubenswrapper[4830]: I0227 16:30:12.374160 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8d4b5776-2c37-4d23-a1ef-4738230012db-config-data-custom\") pod \"cinder-api-0\" (UID: \"8d4b5776-2c37-4d23-a1ef-4738230012db\") " pod="openstack/cinder-api-0" Feb 27 16:30:12 crc kubenswrapper[4830]: I0227 16:30:12.374175 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eeb02ef6-6b7f-4e31-8446-f2376b49d69a-config\") pod \"dnsmasq-dns-6578955fd5-g94gr\" (UID: \"eeb02ef6-6b7f-4e31-8446-f2376b49d69a\") " pod="openstack/dnsmasq-dns-6578955fd5-g94gr" Feb 27 16:30:12 crc kubenswrapper[4830]: I0227 16:30:12.374936 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eeb02ef6-6b7f-4e31-8446-f2376b49d69a-config\") pod \"dnsmasq-dns-6578955fd5-g94gr\" (UID: \"eeb02ef6-6b7f-4e31-8446-f2376b49d69a\") " pod="openstack/dnsmasq-dns-6578955fd5-g94gr" Feb 27 16:30:12 crc kubenswrapper[4830]: I0227 16:30:12.375475 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eeb02ef6-6b7f-4e31-8446-f2376b49d69a-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-g94gr\" (UID: \"eeb02ef6-6b7f-4e31-8446-f2376b49d69a\") " pod="openstack/dnsmasq-dns-6578955fd5-g94gr" Feb 27 16:30:12 crc 
kubenswrapper[4830]: I0227 16:30:12.379584 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/eeb02ef6-6b7f-4e31-8446-f2376b49d69a-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-g94gr\" (UID: \"eeb02ef6-6b7f-4e31-8446-f2376b49d69a\") " pod="openstack/dnsmasq-dns-6578955fd5-g94gr" Feb 27 16:30:12 crc kubenswrapper[4830]: I0227 16:30:12.379701 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eeb02ef6-6b7f-4e31-8446-f2376b49d69a-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-g94gr\" (UID: \"eeb02ef6-6b7f-4e31-8446-f2376b49d69a\") " pod="openstack/dnsmasq-dns-6578955fd5-g94gr" Feb 27 16:30:12 crc kubenswrapper[4830]: I0227 16:30:12.382438 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eeb02ef6-6b7f-4e31-8446-f2376b49d69a-dns-svc\") pod \"dnsmasq-dns-6578955fd5-g94gr\" (UID: \"eeb02ef6-6b7f-4e31-8446-f2376b49d69a\") " pod="openstack/dnsmasq-dns-6578955fd5-g94gr" Feb 27 16:30:12 crc kubenswrapper[4830]: I0227 16:30:12.397353 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 27 16:30:12 crc kubenswrapper[4830]: I0227 16:30:12.410777 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ss7wg\" (UniqueName: \"kubernetes.io/projected/eeb02ef6-6b7f-4e31-8446-f2376b49d69a-kube-api-access-ss7wg\") pod \"dnsmasq-dns-6578955fd5-g94gr\" (UID: \"eeb02ef6-6b7f-4e31-8446-f2376b49d69a\") " pod="openstack/dnsmasq-dns-6578955fd5-g94gr" Feb 27 16:30:12 crc kubenswrapper[4830]: I0227 16:30:12.477225 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-g94gr" Feb 27 16:30:12 crc kubenswrapper[4830]: I0227 16:30:12.478506 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8d4b5776-2c37-4d23-a1ef-4738230012db-logs\") pod \"cinder-api-0\" (UID: \"8d4b5776-2c37-4d23-a1ef-4738230012db\") " pod="openstack/cinder-api-0" Feb 27 16:30:12 crc kubenswrapper[4830]: I0227 16:30:12.478549 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d4b5776-2c37-4d23-a1ef-4738230012db-config-data\") pod \"cinder-api-0\" (UID: \"8d4b5776-2c37-4d23-a1ef-4738230012db\") " pod="openstack/cinder-api-0" Feb 27 16:30:12 crc kubenswrapper[4830]: I0227 16:30:12.478571 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-825j4\" (UniqueName: \"kubernetes.io/projected/8d4b5776-2c37-4d23-a1ef-4738230012db-kube-api-access-825j4\") pod \"cinder-api-0\" (UID: \"8d4b5776-2c37-4d23-a1ef-4738230012db\") " pod="openstack/cinder-api-0" Feb 27 16:30:12 crc kubenswrapper[4830]: I0227 16:30:12.478595 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d4b5776-2c37-4d23-a1ef-4738230012db-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"8d4b5776-2c37-4d23-a1ef-4738230012db\") " pod="openstack/cinder-api-0" Feb 27 16:30:12 crc kubenswrapper[4830]: I0227 16:30:12.478625 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8d4b5776-2c37-4d23-a1ef-4738230012db-etc-machine-id\") pod \"cinder-api-0\" (UID: \"8d4b5776-2c37-4d23-a1ef-4738230012db\") " pod="openstack/cinder-api-0" Feb 27 16:30:12 crc kubenswrapper[4830]: I0227 16:30:12.478669 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/8d4b5776-2c37-4d23-a1ef-4738230012db-scripts\") pod \"cinder-api-0\" (UID: \"8d4b5776-2c37-4d23-a1ef-4738230012db\") " pod="openstack/cinder-api-0" Feb 27 16:30:12 crc kubenswrapper[4830]: I0227 16:30:12.478693 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8d4b5776-2c37-4d23-a1ef-4738230012db-config-data-custom\") pod \"cinder-api-0\" (UID: \"8d4b5776-2c37-4d23-a1ef-4738230012db\") " pod="openstack/cinder-api-0" Feb 27 16:30:12 crc kubenswrapper[4830]: I0227 16:30:12.479045 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8d4b5776-2c37-4d23-a1ef-4738230012db-etc-machine-id\") pod \"cinder-api-0\" (UID: \"8d4b5776-2c37-4d23-a1ef-4738230012db\") " pod="openstack/cinder-api-0" Feb 27 16:30:12 crc kubenswrapper[4830]: I0227 16:30:12.479452 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8d4b5776-2c37-4d23-a1ef-4738230012db-logs\") pod \"cinder-api-0\" (UID: \"8d4b5776-2c37-4d23-a1ef-4738230012db\") " pod="openstack/cinder-api-0" Feb 27 16:30:12 crc kubenswrapper[4830]: I0227 16:30:12.487620 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d4b5776-2c37-4d23-a1ef-4738230012db-scripts\") pod \"cinder-api-0\" (UID: \"8d4b5776-2c37-4d23-a1ef-4738230012db\") " pod="openstack/cinder-api-0" Feb 27 16:30:12 crc kubenswrapper[4830]: I0227 16:30:12.488676 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d4b5776-2c37-4d23-a1ef-4738230012db-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"8d4b5776-2c37-4d23-a1ef-4738230012db\") " pod="openstack/cinder-api-0" Feb 27 16:30:12 crc kubenswrapper[4830]: I0227 16:30:12.491269 4830 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d4b5776-2c37-4d23-a1ef-4738230012db-config-data\") pod \"cinder-api-0\" (UID: \"8d4b5776-2c37-4d23-a1ef-4738230012db\") " pod="openstack/cinder-api-0" Feb 27 16:30:12 crc kubenswrapper[4830]: I0227 16:30:12.492133 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8d4b5776-2c37-4d23-a1ef-4738230012db-config-data-custom\") pod \"cinder-api-0\" (UID: \"8d4b5776-2c37-4d23-a1ef-4738230012db\") " pod="openstack/cinder-api-0" Feb 27 16:30:12 crc kubenswrapper[4830]: I0227 16:30:12.499987 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-825j4\" (UniqueName: \"kubernetes.io/projected/8d4b5776-2c37-4d23-a1ef-4738230012db-kube-api-access-825j4\") pod \"cinder-api-0\" (UID: \"8d4b5776-2c37-4d23-a1ef-4738230012db\") " pod="openstack/cinder-api-0" Feb 27 16:30:12 crc kubenswrapper[4830]: I0227 16:30:12.660335 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536830-gwpcb" Feb 27 16:30:12 crc kubenswrapper[4830]: I0227 16:30:12.718341 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 27 16:30:12 crc kubenswrapper[4830]: I0227 16:30:12.771486 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d46941b3-1d7c-4a6a-a9b5-ceb2effb4c5a" path="/var/lib/kubelet/pods/d46941b3-1d7c-4a6a-a9b5-ceb2effb4c5a/volumes" Feb 27 16:30:12 crc kubenswrapper[4830]: I0227 16:30:12.787409 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5xp8c\" (UniqueName: \"kubernetes.io/projected/1141b071-f448-4a3f-b062-0255dd5dc38a-kube-api-access-5xp8c\") pod \"1141b071-f448-4a3f-b062-0255dd5dc38a\" (UID: \"1141b071-f448-4a3f-b062-0255dd5dc38a\") " Feb 27 16:30:12 crc kubenswrapper[4830]: I0227 16:30:12.796203 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1141b071-f448-4a3f-b062-0255dd5dc38a-kube-api-access-5xp8c" (OuterVolumeSpecName: "kube-api-access-5xp8c") pod "1141b071-f448-4a3f-b062-0255dd5dc38a" (UID: "1141b071-f448-4a3f-b062-0255dd5dc38a"). InnerVolumeSpecName "kube-api-access-5xp8c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:30:12 crc kubenswrapper[4830]: I0227 16:30:12.889371 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5xp8c\" (UniqueName: \"kubernetes.io/projected/1141b071-f448-4a3f-b062-0255dd5dc38a-kube-api-access-5xp8c\") on node \"crc\" DevicePath \"\"" Feb 27 16:30:12 crc kubenswrapper[4830]: I0227 16:30:12.929842 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536830-gwpcb" event={"ID":"1141b071-f448-4a3f-b062-0255dd5dc38a","Type":"ContainerDied","Data":"375f17aeecc61daf66cfb11ec8bc3f1d7f73fefe7fefa6be0438d372f0d38def"} Feb 27 16:30:12 crc kubenswrapper[4830]: I0227 16:30:12.929891 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="375f17aeecc61daf66cfb11ec8bc3f1d7f73fefe7fefa6be0438d372f0d38def" Feb 27 16:30:12 crc kubenswrapper[4830]: I0227 16:30:12.929929 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536830-gwpcb" Feb 27 16:30:12 crc kubenswrapper[4830]: I0227 16:30:12.938105 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-848cf88cfc-244cw" podUID="03b0201f-0147-4757-b5a1-6109d5c7ed94" containerName="dnsmasq-dns" containerID="cri-o://d7e731781c382958c7640a67734856c1313b66fa64c74a2baa017def22b8e0e5" gracePeriod=10 Feb 27 16:30:12 crc kubenswrapper[4830]: I0227 16:30:12.938494 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-244cw" event={"ID":"03b0201f-0147-4757-b5a1-6109d5c7ed94","Type":"ContainerStarted","Data":"d7e731781c382958c7640a67734856c1313b66fa64c74a2baa017def22b8e0e5"} Feb 27 16:30:12 crc kubenswrapper[4830]: I0227 16:30:12.939524 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-848cf88cfc-244cw" Feb 27 16:30:13 crc kubenswrapper[4830]: I0227 16:30:12.999793 4830 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-848cf88cfc-244cw" podStartSLOduration=2.999773935 podStartE2EDuration="2.999773935s" podCreationTimestamp="2026-02-27 16:30:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:30:12.989798335 +0000 UTC m=+1409.079070798" watchObservedRunningTime="2026-02-27 16:30:12.999773935 +0000 UTC m=+1409.089046388" Feb 27 16:30:13 crc kubenswrapper[4830]: I0227 16:30:13.043872 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-g94gr"] Feb 27 16:30:13 crc kubenswrapper[4830]: W0227 16:30:13.057055 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeeb02ef6_6b7f_4e31_8446_f2376b49d69a.slice/crio-92ee00367a3de4046aa627874cec439510e16c033fb5062bbff130ecb2d11c30 WatchSource:0}: Error finding container 92ee00367a3de4046aa627874cec439510e16c033fb5062bbff130ecb2d11c30: Status 404 returned error can't find the container with id 92ee00367a3de4046aa627874cec439510e16c033fb5062bbff130ecb2d11c30 Feb 27 16:30:13 crc kubenswrapper[4830]: I0227 16:30:13.122485 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 27 16:30:13 crc kubenswrapper[4830]: I0227 16:30:13.252046 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 27 16:30:13 crc kubenswrapper[4830]: I0227 16:30:13.364917 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-244cw" Feb 27 16:30:13 crc kubenswrapper[4830]: I0227 16:30:13.509568 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/03b0201f-0147-4757-b5a1-6109d5c7ed94-dns-svc\") pod \"03b0201f-0147-4757-b5a1-6109d5c7ed94\" (UID: \"03b0201f-0147-4757-b5a1-6109d5c7ed94\") " Feb 27 16:30:13 crc kubenswrapper[4830]: I0227 16:30:13.509662 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/03b0201f-0147-4757-b5a1-6109d5c7ed94-ovsdbserver-sb\") pod \"03b0201f-0147-4757-b5a1-6109d5c7ed94\" (UID: \"03b0201f-0147-4757-b5a1-6109d5c7ed94\") " Feb 27 16:30:13 crc kubenswrapper[4830]: I0227 16:30:13.509695 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03b0201f-0147-4757-b5a1-6109d5c7ed94-config\") pod \"03b0201f-0147-4757-b5a1-6109d5c7ed94\" (UID: \"03b0201f-0147-4757-b5a1-6109d5c7ed94\") " Feb 27 16:30:13 crc kubenswrapper[4830]: I0227 16:30:13.509761 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/03b0201f-0147-4757-b5a1-6109d5c7ed94-ovsdbserver-nb\") pod \"03b0201f-0147-4757-b5a1-6109d5c7ed94\" (UID: \"03b0201f-0147-4757-b5a1-6109d5c7ed94\") " Feb 27 16:30:13 crc kubenswrapper[4830]: I0227 16:30:13.509848 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/03b0201f-0147-4757-b5a1-6109d5c7ed94-dns-swift-storage-0\") pod \"03b0201f-0147-4757-b5a1-6109d5c7ed94\" (UID: \"03b0201f-0147-4757-b5a1-6109d5c7ed94\") " Feb 27 16:30:13 crc kubenswrapper[4830]: I0227 16:30:13.509919 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2h5gt\" 
(UniqueName: \"kubernetes.io/projected/03b0201f-0147-4757-b5a1-6109d5c7ed94-kube-api-access-2h5gt\") pod \"03b0201f-0147-4757-b5a1-6109d5c7ed94\" (UID: \"03b0201f-0147-4757-b5a1-6109d5c7ed94\") " Feb 27 16:30:13 crc kubenswrapper[4830]: I0227 16:30:13.518128 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03b0201f-0147-4757-b5a1-6109d5c7ed94-kube-api-access-2h5gt" (OuterVolumeSpecName: "kube-api-access-2h5gt") pod "03b0201f-0147-4757-b5a1-6109d5c7ed94" (UID: "03b0201f-0147-4757-b5a1-6109d5c7ed94"). InnerVolumeSpecName "kube-api-access-2h5gt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:30:13 crc kubenswrapper[4830]: I0227 16:30:13.583824 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03b0201f-0147-4757-b5a1-6109d5c7ed94-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "03b0201f-0147-4757-b5a1-6109d5c7ed94" (UID: "03b0201f-0147-4757-b5a1-6109d5c7ed94"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:30:13 crc kubenswrapper[4830]: I0227 16:30:13.609618 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03b0201f-0147-4757-b5a1-6109d5c7ed94-config" (OuterVolumeSpecName: "config") pod "03b0201f-0147-4757-b5a1-6109d5c7ed94" (UID: "03b0201f-0147-4757-b5a1-6109d5c7ed94"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:30:13 crc kubenswrapper[4830]: I0227 16:30:13.615069 4830 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/03b0201f-0147-4757-b5a1-6109d5c7ed94-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 27 16:30:13 crc kubenswrapper[4830]: I0227 16:30:13.615181 4830 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03b0201f-0147-4757-b5a1-6109d5c7ed94-config\") on node \"crc\" DevicePath \"\"" Feb 27 16:30:13 crc kubenswrapper[4830]: I0227 16:30:13.615193 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2h5gt\" (UniqueName: \"kubernetes.io/projected/03b0201f-0147-4757-b5a1-6109d5c7ed94-kube-api-access-2h5gt\") on node \"crc\" DevicePath \"\"" Feb 27 16:30:13 crc kubenswrapper[4830]: I0227 16:30:13.617175 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03b0201f-0147-4757-b5a1-6109d5c7ed94-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "03b0201f-0147-4757-b5a1-6109d5c7ed94" (UID: "03b0201f-0147-4757-b5a1-6109d5c7ed94"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:30:13 crc kubenswrapper[4830]: I0227 16:30:13.618576 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03b0201f-0147-4757-b5a1-6109d5c7ed94-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "03b0201f-0147-4757-b5a1-6109d5c7ed94" (UID: "03b0201f-0147-4757-b5a1-6109d5c7ed94"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:30:13 crc kubenswrapper[4830]: I0227 16:30:13.632758 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03b0201f-0147-4757-b5a1-6109d5c7ed94-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "03b0201f-0147-4757-b5a1-6109d5c7ed94" (UID: "03b0201f-0147-4757-b5a1-6109d5c7ed94"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:30:13 crc kubenswrapper[4830]: I0227 16:30:13.720076 4830 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/03b0201f-0147-4757-b5a1-6109d5c7ed94-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 27 16:30:13 crc kubenswrapper[4830]: I0227 16:30:13.720113 4830 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/03b0201f-0147-4757-b5a1-6109d5c7ed94-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 27 16:30:13 crc kubenswrapper[4830]: I0227 16:30:13.720127 4830 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/03b0201f-0147-4757-b5a1-6109d5c7ed94-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 27 16:30:13 crc kubenswrapper[4830]: I0227 16:30:13.743164 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536824-fd6f8"] Feb 27 16:30:13 crc kubenswrapper[4830]: I0227 16:30:13.753866 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536824-fd6f8"] Feb 27 16:30:13 crc kubenswrapper[4830]: I0227 16:30:13.948173 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"da0f5d1f-1944-4347-8a18-fd946fb7ed6a","Type":"ContainerStarted","Data":"910da7804241c63309a92f1879853230e879cdaf110b2204498a53e07581f3cb"} Feb 27 16:30:13 crc kubenswrapper[4830]: I0227 16:30:13.949856 4830 
generic.go:334] "Generic (PLEG): container finished" podID="eeb02ef6-6b7f-4e31-8446-f2376b49d69a" containerID="533b31f6d0ca4f32c6256537889ca87b10608b92bd1415efbb3780a2f2b99d4c" exitCode=0 Feb 27 16:30:13 crc kubenswrapper[4830]: I0227 16:30:13.949958 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-g94gr" event={"ID":"eeb02ef6-6b7f-4e31-8446-f2376b49d69a","Type":"ContainerDied","Data":"533b31f6d0ca4f32c6256537889ca87b10608b92bd1415efbb3780a2f2b99d4c"} Feb 27 16:30:13 crc kubenswrapper[4830]: I0227 16:30:13.950001 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-g94gr" event={"ID":"eeb02ef6-6b7f-4e31-8446-f2376b49d69a","Type":"ContainerStarted","Data":"92ee00367a3de4046aa627874cec439510e16c033fb5062bbff130ecb2d11c30"} Feb 27 16:30:13 crc kubenswrapper[4830]: I0227 16:30:13.955103 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"8d4b5776-2c37-4d23-a1ef-4738230012db","Type":"ContainerStarted","Data":"694e97d353e207e268f0e6313ce9897587342003cd51f80beb7356d64cd38135"} Feb 27 16:30:13 crc kubenswrapper[4830]: I0227 16:30:13.956786 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-948fdb9cd-ncm6f" event={"ID":"22232c9c-ecf7-443e-834f-ad39b37735b2","Type":"ContainerStarted","Data":"91059dd00f11fc333eace4b793fe5a4f3fca466216720380e52c9fb9f6ce33ff"} Feb 27 16:30:13 crc kubenswrapper[4830]: I0227 16:30:13.956812 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-948fdb9cd-ncm6f" event={"ID":"22232c9c-ecf7-443e-834f-ad39b37735b2","Type":"ContainerStarted","Data":"6cf3d9b94980e2ca5aa0032ef28c8b51ac4ff272ea01954cb10fbe1ad64d9f4b"} Feb 27 16:30:13 crc kubenswrapper[4830]: I0227 16:30:13.959706 4830 generic.go:334] "Generic (PLEG): container finished" podID="03b0201f-0147-4757-b5a1-6109d5c7ed94" 
containerID="d7e731781c382958c7640a67734856c1313b66fa64c74a2baa017def22b8e0e5" exitCode=0 Feb 27 16:30:13 crc kubenswrapper[4830]: I0227 16:30:13.960016 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-244cw" event={"ID":"03b0201f-0147-4757-b5a1-6109d5c7ed94","Type":"ContainerDied","Data":"d7e731781c382958c7640a67734856c1313b66fa64c74a2baa017def22b8e0e5"} Feb 27 16:30:13 crc kubenswrapper[4830]: I0227 16:30:13.960055 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-244cw" event={"ID":"03b0201f-0147-4757-b5a1-6109d5c7ed94","Type":"ContainerDied","Data":"a03ba04743b7c4f41d1fea28c83449a214445ab02b3f1f884899e4854c6f74ea"} Feb 27 16:30:13 crc kubenswrapper[4830]: I0227 16:30:13.960075 4830 scope.go:117] "RemoveContainer" containerID="d7e731781c382958c7640a67734856c1313b66fa64c74a2baa017def22b8e0e5" Feb 27 16:30:13 crc kubenswrapper[4830]: I0227 16:30:13.960097 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-244cw" Feb 27 16:30:13 crc kubenswrapper[4830]: I0227 16:30:13.995190 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-948fdb9cd-ncm6f" podStartSLOduration=3.148163387 podStartE2EDuration="4.995173271s" podCreationTimestamp="2026-02-27 16:30:09 +0000 UTC" firstStartedPulling="2026-02-27 16:30:10.725518898 +0000 UTC m=+1406.814791361" lastFinishedPulling="2026-02-27 16:30:12.572528782 +0000 UTC m=+1408.661801245" observedRunningTime="2026-02-27 16:30:13.992755193 +0000 UTC m=+1410.082027656" watchObservedRunningTime="2026-02-27 16:30:13.995173271 +0000 UTC m=+1410.084445734" Feb 27 16:30:14 crc kubenswrapper[4830]: I0227 16:30:14.023618 4830 scope.go:117] "RemoveContainer" containerID="c8b8abf14112a674db4a179c12314ef74d6aa35ee51b308fd44b4896c7805a9f" Feb 27 16:30:14 crc kubenswrapper[4830]: I0227 16:30:14.025528 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-244cw"] Feb 27 16:30:14 crc kubenswrapper[4830]: I0227 16:30:14.040930 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-244cw"] Feb 27 16:30:14 crc kubenswrapper[4830]: I0227 16:30:14.224332 4830 scope.go:117] "RemoveContainer" containerID="d7e731781c382958c7640a67734856c1313b66fa64c74a2baa017def22b8e0e5" Feb 27 16:30:14 crc kubenswrapper[4830]: E0227 16:30:14.224790 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d7e731781c382958c7640a67734856c1313b66fa64c74a2baa017def22b8e0e5\": container with ID starting with d7e731781c382958c7640a67734856c1313b66fa64c74a2baa017def22b8e0e5 not found: ID does not exist" containerID="d7e731781c382958c7640a67734856c1313b66fa64c74a2baa017def22b8e0e5" Feb 27 16:30:14 crc kubenswrapper[4830]: I0227 16:30:14.224829 4830 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"d7e731781c382958c7640a67734856c1313b66fa64c74a2baa017def22b8e0e5"} err="failed to get container status \"d7e731781c382958c7640a67734856c1313b66fa64c74a2baa017def22b8e0e5\": rpc error: code = NotFound desc = could not find container \"d7e731781c382958c7640a67734856c1313b66fa64c74a2baa017def22b8e0e5\": container with ID starting with d7e731781c382958c7640a67734856c1313b66fa64c74a2baa017def22b8e0e5 not found: ID does not exist" Feb 27 16:30:14 crc kubenswrapper[4830]: I0227 16:30:14.224856 4830 scope.go:117] "RemoveContainer" containerID="c8b8abf14112a674db4a179c12314ef74d6aa35ee51b308fd44b4896c7805a9f" Feb 27 16:30:14 crc kubenswrapper[4830]: E0227 16:30:14.225241 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8b8abf14112a674db4a179c12314ef74d6aa35ee51b308fd44b4896c7805a9f\": container with ID starting with c8b8abf14112a674db4a179c12314ef74d6aa35ee51b308fd44b4896c7805a9f not found: ID does not exist" containerID="c8b8abf14112a674db4a179c12314ef74d6aa35ee51b308fd44b4896c7805a9f" Feb 27 16:30:14 crc kubenswrapper[4830]: I0227 16:30:14.225294 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8b8abf14112a674db4a179c12314ef74d6aa35ee51b308fd44b4896c7805a9f"} err="failed to get container status \"c8b8abf14112a674db4a179c12314ef74d6aa35ee51b308fd44b4896c7805a9f\": rpc error: code = NotFound desc = could not find container \"c8b8abf14112a674db4a179c12314ef74d6aa35ee51b308fd44b4896c7805a9f\": container with ID starting with c8b8abf14112a674db4a179c12314ef74d6aa35ee51b308fd44b4896c7805a9f not found: ID does not exist" Feb 27 16:30:14 crc kubenswrapper[4830]: I0227 16:30:14.790156 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03b0201f-0147-4757-b5a1-6109d5c7ed94" path="/var/lib/kubelet/pods/03b0201f-0147-4757-b5a1-6109d5c7ed94/volumes" Feb 27 16:30:14 crc kubenswrapper[4830]: I0227 
16:30:14.791557 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d13ca02-7160-46d2-9c14-c123b6e44512" path="/var/lib/kubelet/pods/9d13ca02-7160-46d2-9c14-c123b6e44512/volumes" Feb 27 16:30:14 crc kubenswrapper[4830]: I0227 16:30:14.982375 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"da0f5d1f-1944-4347-8a18-fd946fb7ed6a","Type":"ContainerStarted","Data":"6c582c45fe25a3893424ec2ce2705ebcf999c507acb0c5f092150ee067d084e7"} Feb 27 16:30:14 crc kubenswrapper[4830]: I0227 16:30:14.991264 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-g94gr" event={"ID":"eeb02ef6-6b7f-4e31-8446-f2376b49d69a","Type":"ContainerStarted","Data":"8cb22e02dc7c56d9a73491851f8034a163c7f8516c7abd172d22f31cec725929"} Feb 27 16:30:14 crc kubenswrapper[4830]: I0227 16:30:14.992471 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6578955fd5-g94gr" Feb 27 16:30:14 crc kubenswrapper[4830]: I0227 16:30:14.998840 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"8d4b5776-2c37-4d23-a1ef-4738230012db","Type":"ContainerStarted","Data":"9f3142e6a356a06e85a723fe844009aa7172519dfe0722019555f2bb001193b3"} Feb 27 16:30:15 crc kubenswrapper[4830]: I0227 16:30:15.001662 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aa935bfb-ebfd-4aa9-abc3-84d118252abe","Type":"ContainerStarted","Data":"78fb851103bba5ab46085de10cfd0d141bab5e2bf3115eeb51d3305f719fb23b"} Feb 27 16:30:15 crc kubenswrapper[4830]: I0227 16:30:15.002239 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 27 16:30:15 crc kubenswrapper[4830]: I0227 16:30:15.004855 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-58c49587-cz4f5" 
event={"ID":"f4baf4d8-24c9-4aa8-b72e-9d6d9cdd5f32","Type":"ContainerStarted","Data":"3bd476206784383c2fbe0db210deee00da003f513b1f05dcbc55ea33c264c212"} Feb 27 16:30:15 crc kubenswrapper[4830]: I0227 16:30:15.004891 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-58c49587-cz4f5" event={"ID":"f4baf4d8-24c9-4aa8-b72e-9d6d9cdd5f32","Type":"ContainerStarted","Data":"d25e9e29213d4dd9d13dc6e8f8443d64cbecee22307bae547934dfd69a24c51a"} Feb 27 16:30:15 crc kubenswrapper[4830]: I0227 16:30:15.020967 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6578955fd5-g94gr" podStartSLOduration=3.02093458 podStartE2EDuration="3.02093458s" podCreationTimestamp="2026-02-27 16:30:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:30:15.018843769 +0000 UTC m=+1411.108116232" watchObservedRunningTime="2026-02-27 16:30:15.02093458 +0000 UTC m=+1411.110207043" Feb 27 16:30:15 crc kubenswrapper[4830]: I0227 16:30:15.071393 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.557471249 podStartE2EDuration="8.071375066s" podCreationTimestamp="2026-02-27 16:30:07 +0000 UTC" firstStartedPulling="2026-02-27 16:30:08.509860943 +0000 UTC m=+1404.599133416" lastFinishedPulling="2026-02-27 16:30:14.02376478 +0000 UTC m=+1410.113037233" observedRunningTime="2026-02-27 16:30:15.049440787 +0000 UTC m=+1411.138713250" watchObservedRunningTime="2026-02-27 16:30:15.071375066 +0000 UTC m=+1411.160647529" Feb 27 16:30:15 crc kubenswrapper[4830]: I0227 16:30:15.118756 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-58c49587-cz4f5" podStartSLOduration=3.337571506 podStartE2EDuration="6.118733568s" podCreationTimestamp="2026-02-27 16:30:09 +0000 UTC" firstStartedPulling="2026-02-27 16:30:11.242375573 
+0000 UTC m=+1407.331648036" lastFinishedPulling="2026-02-27 16:30:14.023537635 +0000 UTC m=+1410.112810098" observedRunningTime="2026-02-27 16:30:15.068546148 +0000 UTC m=+1411.157818611" watchObservedRunningTime="2026-02-27 16:30:15.118733568 +0000 UTC m=+1411.208006031" Feb 27 16:30:16 crc kubenswrapper[4830]: I0227 16:30:16.023485 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"da0f5d1f-1944-4347-8a18-fd946fb7ed6a","Type":"ContainerStarted","Data":"4784515ee13ec5f2e9c5dbdedc925b93fda50d5cc115163838e530a4050b1298"} Feb 27 16:30:16 crc kubenswrapper[4830]: I0227 16:30:16.032347 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"8d4b5776-2c37-4d23-a1ef-4738230012db","Type":"ContainerStarted","Data":"fef4597a9f24d5b1707762a89f295a161fc358fcd78e352499d8f554dadd2b38"} Feb 27 16:30:16 crc kubenswrapper[4830]: I0227 16:30:16.034038 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 27 16:30:16 crc kubenswrapper[4830]: I0227 16:30:16.047748 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.9694250479999997 podStartE2EDuration="5.047728163s" podCreationTimestamp="2026-02-27 16:30:11 +0000 UTC" firstStartedPulling="2026-02-27 16:30:13.16040695 +0000 UTC m=+1409.249679413" lastFinishedPulling="2026-02-27 16:30:14.238710065 +0000 UTC m=+1410.327982528" observedRunningTime="2026-02-27 16:30:16.046998304 +0000 UTC m=+1412.136270777" watchObservedRunningTime="2026-02-27 16:30:16.047728163 +0000 UTC m=+1412.137000626" Feb 27 16:30:16 crc kubenswrapper[4830]: I0227 16:30:16.082323 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.082307097 podStartE2EDuration="4.082307097s" podCreationTimestamp="2026-02-27 16:30:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:30:16.079490399 +0000 UTC m=+1412.168762852" watchObservedRunningTime="2026-02-27 16:30:16.082307097 +0000 UTC m=+1412.171579560" Feb 27 16:30:16 crc kubenswrapper[4830]: I0227 16:30:16.553314 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 27 16:30:16 crc kubenswrapper[4830]: I0227 16:30:16.686592 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-8559c55d4f-z6hpf"] Feb 27 16:30:16 crc kubenswrapper[4830]: E0227 16:30:16.687184 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03b0201f-0147-4757-b5a1-6109d5c7ed94" containerName="dnsmasq-dns" Feb 27 16:30:16 crc kubenswrapper[4830]: I0227 16:30:16.687252 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="03b0201f-0147-4757-b5a1-6109d5c7ed94" containerName="dnsmasq-dns" Feb 27 16:30:16 crc kubenswrapper[4830]: E0227 16:30:16.687314 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1141b071-f448-4a3f-b062-0255dd5dc38a" containerName="oc" Feb 27 16:30:16 crc kubenswrapper[4830]: I0227 16:30:16.687374 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="1141b071-f448-4a3f-b062-0255dd5dc38a" containerName="oc" Feb 27 16:30:16 crc kubenswrapper[4830]: E0227 16:30:16.687433 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03b0201f-0147-4757-b5a1-6109d5c7ed94" containerName="init" Feb 27 16:30:16 crc kubenswrapper[4830]: I0227 16:30:16.687492 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="03b0201f-0147-4757-b5a1-6109d5c7ed94" containerName="init" Feb 27 16:30:16 crc kubenswrapper[4830]: I0227 16:30:16.687715 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="03b0201f-0147-4757-b5a1-6109d5c7ed94" containerName="dnsmasq-dns" Feb 27 16:30:16 crc kubenswrapper[4830]: I0227 16:30:16.687799 4830 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="1141b071-f448-4a3f-b062-0255dd5dc38a" containerName="oc" Feb 27 16:30:16 crc kubenswrapper[4830]: I0227 16:30:16.688815 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-8559c55d4f-z6hpf" Feb 27 16:30:16 crc kubenswrapper[4830]: I0227 16:30:16.691098 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Feb 27 16:30:16 crc kubenswrapper[4830]: I0227 16:30:16.691289 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Feb 27 16:30:16 crc kubenswrapper[4830]: I0227 16:30:16.702258 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-8559c55d4f-z6hpf"] Feb 27 16:30:16 crc kubenswrapper[4830]: I0227 16:30:16.726358 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/acdbf1f3-efd7-4181-b99c-a0697c465c4b-ovndb-tls-certs\") pod \"neutron-8559c55d4f-z6hpf\" (UID: \"acdbf1f3-efd7-4181-b99c-a0697c465c4b\") " pod="openstack/neutron-8559c55d4f-z6hpf" Feb 27 16:30:16 crc kubenswrapper[4830]: I0227 16:30:16.726449 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/acdbf1f3-efd7-4181-b99c-a0697c465c4b-config\") pod \"neutron-8559c55d4f-z6hpf\" (UID: \"acdbf1f3-efd7-4181-b99c-a0697c465c4b\") " pod="openstack/neutron-8559c55d4f-z6hpf" Feb 27 16:30:16 crc kubenswrapper[4830]: I0227 16:30:16.726724 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/acdbf1f3-efd7-4181-b99c-a0697c465c4b-internal-tls-certs\") pod \"neutron-8559c55d4f-z6hpf\" (UID: \"acdbf1f3-efd7-4181-b99c-a0697c465c4b\") " pod="openstack/neutron-8559c55d4f-z6hpf" Feb 27 16:30:16 crc kubenswrapper[4830]: I0227 16:30:16.726998 4830 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acdbf1f3-efd7-4181-b99c-a0697c465c4b-combined-ca-bundle\") pod \"neutron-8559c55d4f-z6hpf\" (UID: \"acdbf1f3-efd7-4181-b99c-a0697c465c4b\") " pod="openstack/neutron-8559c55d4f-z6hpf" Feb 27 16:30:16 crc kubenswrapper[4830]: I0227 16:30:16.727030 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8dkvn\" (UniqueName: \"kubernetes.io/projected/acdbf1f3-efd7-4181-b99c-a0697c465c4b-kube-api-access-8dkvn\") pod \"neutron-8559c55d4f-z6hpf\" (UID: \"acdbf1f3-efd7-4181-b99c-a0697c465c4b\") " pod="openstack/neutron-8559c55d4f-z6hpf" Feb 27 16:30:16 crc kubenswrapper[4830]: I0227 16:30:16.727049 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/acdbf1f3-efd7-4181-b99c-a0697c465c4b-httpd-config\") pod \"neutron-8559c55d4f-z6hpf\" (UID: \"acdbf1f3-efd7-4181-b99c-a0697c465c4b\") " pod="openstack/neutron-8559c55d4f-z6hpf" Feb 27 16:30:16 crc kubenswrapper[4830]: I0227 16:30:16.727166 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/acdbf1f3-efd7-4181-b99c-a0697c465c4b-public-tls-certs\") pod \"neutron-8559c55d4f-z6hpf\" (UID: \"acdbf1f3-efd7-4181-b99c-a0697c465c4b\") " pod="openstack/neutron-8559c55d4f-z6hpf" Feb 27 16:30:16 crc kubenswrapper[4830]: I0227 16:30:16.829319 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/acdbf1f3-efd7-4181-b99c-a0697c465c4b-internal-tls-certs\") pod \"neutron-8559c55d4f-z6hpf\" (UID: \"acdbf1f3-efd7-4181-b99c-a0697c465c4b\") " pod="openstack/neutron-8559c55d4f-z6hpf" Feb 27 16:30:16 crc kubenswrapper[4830]: I0227 16:30:16.829442 4830 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acdbf1f3-efd7-4181-b99c-a0697c465c4b-combined-ca-bundle\") pod \"neutron-8559c55d4f-z6hpf\" (UID: \"acdbf1f3-efd7-4181-b99c-a0697c465c4b\") " pod="openstack/neutron-8559c55d4f-z6hpf" Feb 27 16:30:16 crc kubenswrapper[4830]: I0227 16:30:16.829483 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8dkvn\" (UniqueName: \"kubernetes.io/projected/acdbf1f3-efd7-4181-b99c-a0697c465c4b-kube-api-access-8dkvn\") pod \"neutron-8559c55d4f-z6hpf\" (UID: \"acdbf1f3-efd7-4181-b99c-a0697c465c4b\") " pod="openstack/neutron-8559c55d4f-z6hpf" Feb 27 16:30:16 crc kubenswrapper[4830]: I0227 16:30:16.829508 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/acdbf1f3-efd7-4181-b99c-a0697c465c4b-httpd-config\") pod \"neutron-8559c55d4f-z6hpf\" (UID: \"acdbf1f3-efd7-4181-b99c-a0697c465c4b\") " pod="openstack/neutron-8559c55d4f-z6hpf" Feb 27 16:30:16 crc kubenswrapper[4830]: I0227 16:30:16.829770 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/acdbf1f3-efd7-4181-b99c-a0697c465c4b-public-tls-certs\") pod \"neutron-8559c55d4f-z6hpf\" (UID: \"acdbf1f3-efd7-4181-b99c-a0697c465c4b\") " pod="openstack/neutron-8559c55d4f-z6hpf" Feb 27 16:30:16 crc kubenswrapper[4830]: I0227 16:30:16.829845 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/acdbf1f3-efd7-4181-b99c-a0697c465c4b-ovndb-tls-certs\") pod \"neutron-8559c55d4f-z6hpf\" (UID: \"acdbf1f3-efd7-4181-b99c-a0697c465c4b\") " pod="openstack/neutron-8559c55d4f-z6hpf" Feb 27 16:30:16 crc kubenswrapper[4830]: I0227 16:30:16.831001 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/secret/acdbf1f3-efd7-4181-b99c-a0697c465c4b-config\") pod \"neutron-8559c55d4f-z6hpf\" (UID: \"acdbf1f3-efd7-4181-b99c-a0697c465c4b\") " pod="openstack/neutron-8559c55d4f-z6hpf" Feb 27 16:30:16 crc kubenswrapper[4830]: I0227 16:30:16.838716 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/acdbf1f3-efd7-4181-b99c-a0697c465c4b-internal-tls-certs\") pod \"neutron-8559c55d4f-z6hpf\" (UID: \"acdbf1f3-efd7-4181-b99c-a0697c465c4b\") " pod="openstack/neutron-8559c55d4f-z6hpf" Feb 27 16:30:16 crc kubenswrapper[4830]: I0227 16:30:16.841481 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/acdbf1f3-efd7-4181-b99c-a0697c465c4b-ovndb-tls-certs\") pod \"neutron-8559c55d4f-z6hpf\" (UID: \"acdbf1f3-efd7-4181-b99c-a0697c465c4b\") " pod="openstack/neutron-8559c55d4f-z6hpf" Feb 27 16:30:16 crc kubenswrapper[4830]: I0227 16:30:16.841685 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acdbf1f3-efd7-4181-b99c-a0697c465c4b-combined-ca-bundle\") pod \"neutron-8559c55d4f-z6hpf\" (UID: \"acdbf1f3-efd7-4181-b99c-a0697c465c4b\") " pod="openstack/neutron-8559c55d4f-z6hpf" Feb 27 16:30:16 crc kubenswrapper[4830]: I0227 16:30:16.842287 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/acdbf1f3-efd7-4181-b99c-a0697c465c4b-config\") pod \"neutron-8559c55d4f-z6hpf\" (UID: \"acdbf1f3-efd7-4181-b99c-a0697c465c4b\") " pod="openstack/neutron-8559c55d4f-z6hpf" Feb 27 16:30:16 crc kubenswrapper[4830]: I0227 16:30:16.846270 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/acdbf1f3-efd7-4181-b99c-a0697c465c4b-httpd-config\") pod \"neutron-8559c55d4f-z6hpf\" (UID: 
\"acdbf1f3-efd7-4181-b99c-a0697c465c4b\") " pod="openstack/neutron-8559c55d4f-z6hpf" Feb 27 16:30:16 crc kubenswrapper[4830]: I0227 16:30:16.847125 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/acdbf1f3-efd7-4181-b99c-a0697c465c4b-public-tls-certs\") pod \"neutron-8559c55d4f-z6hpf\" (UID: \"acdbf1f3-efd7-4181-b99c-a0697c465c4b\") " pod="openstack/neutron-8559c55d4f-z6hpf" Feb 27 16:30:16 crc kubenswrapper[4830]: I0227 16:30:16.848688 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8dkvn\" (UniqueName: \"kubernetes.io/projected/acdbf1f3-efd7-4181-b99c-a0697c465c4b-kube-api-access-8dkvn\") pod \"neutron-8559c55d4f-z6hpf\" (UID: \"acdbf1f3-efd7-4181-b99c-a0697c465c4b\") " pod="openstack/neutron-8559c55d4f-z6hpf" Feb 27 16:30:17 crc kubenswrapper[4830]: I0227 16:30:17.004983 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-8559c55d4f-z6hpf" Feb 27 16:30:17 crc kubenswrapper[4830]: I0227 16:30:17.398731 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 27 16:30:17 crc kubenswrapper[4830]: I0227 16:30:17.629084 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-8559c55d4f-z6hpf"] Feb 27 16:30:17 crc kubenswrapper[4830]: W0227 16:30:17.642855 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podacdbf1f3_efd7_4181_b99c_a0697c465c4b.slice/crio-5de618222396caaef75cd85687bfe44cc5a6458f007071c8e6edcbabb8998680 WatchSource:0}: Error finding container 5de618222396caaef75cd85687bfe44cc5a6458f007071c8e6edcbabb8998680: Status 404 returned error can't find the container with id 5de618222396caaef75cd85687bfe44cc5a6458f007071c8e6edcbabb8998680 Feb 27 16:30:18 crc kubenswrapper[4830]: I0227 16:30:18.062914 4830 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/neutron-8559c55d4f-z6hpf" event={"ID":"acdbf1f3-efd7-4181-b99c-a0697c465c4b","Type":"ContainerStarted","Data":"a56e16403fc2d569470e79c24225b344a16dacbbe2255d02caeb6351695ce986"} Feb 27 16:30:18 crc kubenswrapper[4830]: I0227 16:30:18.063298 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8559c55d4f-z6hpf" event={"ID":"acdbf1f3-efd7-4181-b99c-a0697c465c4b","Type":"ContainerStarted","Data":"5de618222396caaef75cd85687bfe44cc5a6458f007071c8e6edcbabb8998680"} Feb 27 16:30:18 crc kubenswrapper[4830]: I0227 16:30:18.063092 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="8d4b5776-2c37-4d23-a1ef-4738230012db" containerName="cinder-api-log" containerID="cri-o://9f3142e6a356a06e85a723fe844009aa7172519dfe0722019555f2bb001193b3" gracePeriod=30 Feb 27 16:30:18 crc kubenswrapper[4830]: I0227 16:30:18.063752 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="8d4b5776-2c37-4d23-a1ef-4738230012db" containerName="cinder-api" containerID="cri-o://fef4597a9f24d5b1707762a89f295a161fc358fcd78e352499d8f554dadd2b38" gracePeriod=30 Feb 27 16:30:18 crc kubenswrapper[4830]: I0227 16:30:18.534739 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 27 16:30:18 crc kubenswrapper[4830]: I0227 16:30:18.675758 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8d4b5776-2c37-4d23-a1ef-4738230012db-logs\") pod \"8d4b5776-2c37-4d23-a1ef-4738230012db\" (UID: \"8d4b5776-2c37-4d23-a1ef-4738230012db\") " Feb 27 16:30:18 crc kubenswrapper[4830]: I0227 16:30:18.675836 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8d4b5776-2c37-4d23-a1ef-4738230012db-config-data-custom\") pod \"8d4b5776-2c37-4d23-a1ef-4738230012db\" (UID: \"8d4b5776-2c37-4d23-a1ef-4738230012db\") " Feb 27 16:30:18 crc kubenswrapper[4830]: I0227 16:30:18.675867 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8d4b5776-2c37-4d23-a1ef-4738230012db-etc-machine-id\") pod \"8d4b5776-2c37-4d23-a1ef-4738230012db\" (UID: \"8d4b5776-2c37-4d23-a1ef-4738230012db\") " Feb 27 16:30:18 crc kubenswrapper[4830]: I0227 16:30:18.675903 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-825j4\" (UniqueName: \"kubernetes.io/projected/8d4b5776-2c37-4d23-a1ef-4738230012db-kube-api-access-825j4\") pod \"8d4b5776-2c37-4d23-a1ef-4738230012db\" (UID: \"8d4b5776-2c37-4d23-a1ef-4738230012db\") " Feb 27 16:30:18 crc kubenswrapper[4830]: I0227 16:30:18.675930 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d4b5776-2c37-4d23-a1ef-4738230012db-config-data\") pod \"8d4b5776-2c37-4d23-a1ef-4738230012db\" (UID: \"8d4b5776-2c37-4d23-a1ef-4738230012db\") " Feb 27 16:30:18 crc kubenswrapper[4830]: I0227 16:30:18.675964 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/8d4b5776-2c37-4d23-a1ef-4738230012db-scripts\") pod \"8d4b5776-2c37-4d23-a1ef-4738230012db\" (UID: \"8d4b5776-2c37-4d23-a1ef-4738230012db\") " Feb 27 16:30:18 crc kubenswrapper[4830]: I0227 16:30:18.675983 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d4b5776-2c37-4d23-a1ef-4738230012db-combined-ca-bundle\") pod \"8d4b5776-2c37-4d23-a1ef-4738230012db\" (UID: \"8d4b5776-2c37-4d23-a1ef-4738230012db\") " Feb 27 16:30:18 crc kubenswrapper[4830]: I0227 16:30:18.676098 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8d4b5776-2c37-4d23-a1ef-4738230012db-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "8d4b5776-2c37-4d23-a1ef-4738230012db" (UID: "8d4b5776-2c37-4d23-a1ef-4738230012db"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 16:30:18 crc kubenswrapper[4830]: I0227 16:30:18.676665 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d4b5776-2c37-4d23-a1ef-4738230012db-logs" (OuterVolumeSpecName: "logs") pod "8d4b5776-2c37-4d23-a1ef-4738230012db" (UID: "8d4b5776-2c37-4d23-a1ef-4738230012db"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 16:30:18 crc kubenswrapper[4830]: I0227 16:30:18.676678 4830 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8d4b5776-2c37-4d23-a1ef-4738230012db-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 27 16:30:18 crc kubenswrapper[4830]: I0227 16:30:18.682025 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d4b5776-2c37-4d23-a1ef-4738230012db-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "8d4b5776-2c37-4d23-a1ef-4738230012db" (UID: "8d4b5776-2c37-4d23-a1ef-4738230012db"). 
InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:30:18 crc kubenswrapper[4830]: I0227 16:30:18.682078 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d4b5776-2c37-4d23-a1ef-4738230012db-kube-api-access-825j4" (OuterVolumeSpecName: "kube-api-access-825j4") pod "8d4b5776-2c37-4d23-a1ef-4738230012db" (UID: "8d4b5776-2c37-4d23-a1ef-4738230012db"). InnerVolumeSpecName "kube-api-access-825j4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:30:18 crc kubenswrapper[4830]: I0227 16:30:18.683104 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d4b5776-2c37-4d23-a1ef-4738230012db-scripts" (OuterVolumeSpecName: "scripts") pod "8d4b5776-2c37-4d23-a1ef-4738230012db" (UID: "8d4b5776-2c37-4d23-a1ef-4738230012db"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:30:18 crc kubenswrapper[4830]: I0227 16:30:18.714500 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d4b5776-2c37-4d23-a1ef-4738230012db-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8d4b5776-2c37-4d23-a1ef-4738230012db" (UID: "8d4b5776-2c37-4d23-a1ef-4738230012db"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:30:18 crc kubenswrapper[4830]: I0227 16:30:18.764186 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d4b5776-2c37-4d23-a1ef-4738230012db-config-data" (OuterVolumeSpecName: "config-data") pod "8d4b5776-2c37-4d23-a1ef-4738230012db" (UID: "8d4b5776-2c37-4d23-a1ef-4738230012db"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:30:18 crc kubenswrapper[4830]: I0227 16:30:18.778514 4830 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d4b5776-2c37-4d23-a1ef-4738230012db-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 16:30:18 crc kubenswrapper[4830]: I0227 16:30:18.778544 4830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d4b5776-2c37-4d23-a1ef-4738230012db-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 16:30:18 crc kubenswrapper[4830]: I0227 16:30:18.778557 4830 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8d4b5776-2c37-4d23-a1ef-4738230012db-logs\") on node \"crc\" DevicePath \"\"" Feb 27 16:30:18 crc kubenswrapper[4830]: I0227 16:30:18.778568 4830 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8d4b5776-2c37-4d23-a1ef-4738230012db-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 27 16:30:18 crc kubenswrapper[4830]: I0227 16:30:18.778577 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-825j4\" (UniqueName: \"kubernetes.io/projected/8d4b5776-2c37-4d23-a1ef-4738230012db-kube-api-access-825j4\") on node \"crc\" DevicePath \"\"" Feb 27 16:30:18 crc kubenswrapper[4830]: I0227 16:30:18.778586 4830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d4b5776-2c37-4d23-a1ef-4738230012db-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 16:30:19 crc kubenswrapper[4830]: I0227 16:30:19.074043 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8559c55d4f-z6hpf" event={"ID":"acdbf1f3-efd7-4181-b99c-a0697c465c4b","Type":"ContainerStarted","Data":"825cde15be9549d56742ccbdc2f57b6324396f78c69861f72b851d87071dd387"} Feb 27 16:30:19 crc kubenswrapper[4830]: I0227 
16:30:19.074178 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-8559c55d4f-z6hpf" Feb 27 16:30:19 crc kubenswrapper[4830]: I0227 16:30:19.076148 4830 generic.go:334] "Generic (PLEG): container finished" podID="8d4b5776-2c37-4d23-a1ef-4738230012db" containerID="fef4597a9f24d5b1707762a89f295a161fc358fcd78e352499d8f554dadd2b38" exitCode=0 Feb 27 16:30:19 crc kubenswrapper[4830]: I0227 16:30:19.076192 4830 generic.go:334] "Generic (PLEG): container finished" podID="8d4b5776-2c37-4d23-a1ef-4738230012db" containerID="9f3142e6a356a06e85a723fe844009aa7172519dfe0722019555f2bb001193b3" exitCode=143 Feb 27 16:30:19 crc kubenswrapper[4830]: I0227 16:30:19.076208 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"8d4b5776-2c37-4d23-a1ef-4738230012db","Type":"ContainerDied","Data":"fef4597a9f24d5b1707762a89f295a161fc358fcd78e352499d8f554dadd2b38"} Feb 27 16:30:19 crc kubenswrapper[4830]: I0227 16:30:19.076258 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"8d4b5776-2c37-4d23-a1ef-4738230012db","Type":"ContainerDied","Data":"9f3142e6a356a06e85a723fe844009aa7172519dfe0722019555f2bb001193b3"} Feb 27 16:30:19 crc kubenswrapper[4830]: I0227 16:30:19.076269 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"8d4b5776-2c37-4d23-a1ef-4738230012db","Type":"ContainerDied","Data":"694e97d353e207e268f0e6313ce9897587342003cd51f80beb7356d64cd38135"} Feb 27 16:30:19 crc kubenswrapper[4830]: I0227 16:30:19.076283 4830 scope.go:117] "RemoveContainer" containerID="fef4597a9f24d5b1707762a89f295a161fc358fcd78e352499d8f554dadd2b38" Feb 27 16:30:19 crc kubenswrapper[4830]: I0227 16:30:19.076221 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 27 16:30:19 crc kubenswrapper[4830]: I0227 16:30:19.089597 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-8559c55d4f-z6hpf" podStartSLOduration=3.089583102 podStartE2EDuration="3.089583102s" podCreationTimestamp="2026-02-27 16:30:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:30:19.088804024 +0000 UTC m=+1415.178076517" watchObservedRunningTime="2026-02-27 16:30:19.089583102 +0000 UTC m=+1415.178855565" Feb 27 16:30:19 crc kubenswrapper[4830]: I0227 16:30:19.108869 4830 scope.go:117] "RemoveContainer" containerID="9f3142e6a356a06e85a723fe844009aa7172519dfe0722019555f2bb001193b3" Feb 27 16:30:19 crc kubenswrapper[4830]: I0227 16:30:19.122710 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 27 16:30:19 crc kubenswrapper[4830]: I0227 16:30:19.130152 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Feb 27 16:30:19 crc kubenswrapper[4830]: I0227 16:30:19.142690 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 27 16:30:19 crc kubenswrapper[4830]: E0227 16:30:19.143050 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d4b5776-2c37-4d23-a1ef-4738230012db" containerName="cinder-api" Feb 27 16:30:19 crc kubenswrapper[4830]: I0227 16:30:19.143065 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d4b5776-2c37-4d23-a1ef-4738230012db" containerName="cinder-api" Feb 27 16:30:19 crc kubenswrapper[4830]: E0227 16:30:19.143087 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d4b5776-2c37-4d23-a1ef-4738230012db" containerName="cinder-api-log" Feb 27 16:30:19 crc kubenswrapper[4830]: I0227 16:30:19.143095 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d4b5776-2c37-4d23-a1ef-4738230012db" 
containerName="cinder-api-log" Feb 27 16:30:19 crc kubenswrapper[4830]: I0227 16:30:19.143253 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d4b5776-2c37-4d23-a1ef-4738230012db" containerName="cinder-api-log" Feb 27 16:30:19 crc kubenswrapper[4830]: I0227 16:30:19.143281 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d4b5776-2c37-4d23-a1ef-4738230012db" containerName="cinder-api" Feb 27 16:30:19 crc kubenswrapper[4830]: I0227 16:30:19.145532 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 27 16:30:19 crc kubenswrapper[4830]: I0227 16:30:19.148200 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 27 16:30:19 crc kubenswrapper[4830]: I0227 16:30:19.148382 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Feb 27 16:30:19 crc kubenswrapper[4830]: I0227 16:30:19.148582 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Feb 27 16:30:19 crc kubenswrapper[4830]: I0227 16:30:19.160139 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 27 16:30:19 crc kubenswrapper[4830]: I0227 16:30:19.175101 4830 scope.go:117] "RemoveContainer" containerID="fef4597a9f24d5b1707762a89f295a161fc358fcd78e352499d8f554dadd2b38" Feb 27 16:30:19 crc kubenswrapper[4830]: E0227 16:30:19.186562 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fef4597a9f24d5b1707762a89f295a161fc358fcd78e352499d8f554dadd2b38\": container with ID starting with fef4597a9f24d5b1707762a89f295a161fc358fcd78e352499d8f554dadd2b38 not found: ID does not exist" containerID="fef4597a9f24d5b1707762a89f295a161fc358fcd78e352499d8f554dadd2b38" Feb 27 16:30:19 crc kubenswrapper[4830]: I0227 16:30:19.186609 4830 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fef4597a9f24d5b1707762a89f295a161fc358fcd78e352499d8f554dadd2b38"} err="failed to get container status \"fef4597a9f24d5b1707762a89f295a161fc358fcd78e352499d8f554dadd2b38\": rpc error: code = NotFound desc = could not find container \"fef4597a9f24d5b1707762a89f295a161fc358fcd78e352499d8f554dadd2b38\": container with ID starting with fef4597a9f24d5b1707762a89f295a161fc358fcd78e352499d8f554dadd2b38 not found: ID does not exist" Feb 27 16:30:19 crc kubenswrapper[4830]: I0227 16:30:19.186635 4830 scope.go:117] "RemoveContainer" containerID="9f3142e6a356a06e85a723fe844009aa7172519dfe0722019555f2bb001193b3" Feb 27 16:30:19 crc kubenswrapper[4830]: E0227 16:30:19.190426 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f3142e6a356a06e85a723fe844009aa7172519dfe0722019555f2bb001193b3\": container with ID starting with 9f3142e6a356a06e85a723fe844009aa7172519dfe0722019555f2bb001193b3 not found: ID does not exist" containerID="9f3142e6a356a06e85a723fe844009aa7172519dfe0722019555f2bb001193b3" Feb 27 16:30:19 crc kubenswrapper[4830]: I0227 16:30:19.190460 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f3142e6a356a06e85a723fe844009aa7172519dfe0722019555f2bb001193b3"} err="failed to get container status \"9f3142e6a356a06e85a723fe844009aa7172519dfe0722019555f2bb001193b3\": rpc error: code = NotFound desc = could not find container \"9f3142e6a356a06e85a723fe844009aa7172519dfe0722019555f2bb001193b3\": container with ID starting with 9f3142e6a356a06e85a723fe844009aa7172519dfe0722019555f2bb001193b3 not found: ID does not exist" Feb 27 16:30:19 crc kubenswrapper[4830]: I0227 16:30:19.190478 4830 scope.go:117] "RemoveContainer" containerID="fef4597a9f24d5b1707762a89f295a161fc358fcd78e352499d8f554dadd2b38" Feb 27 16:30:19 crc kubenswrapper[4830]: I0227 16:30:19.199101 4830 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fef4597a9f24d5b1707762a89f295a161fc358fcd78e352499d8f554dadd2b38"} err="failed to get container status \"fef4597a9f24d5b1707762a89f295a161fc358fcd78e352499d8f554dadd2b38\": rpc error: code = NotFound desc = could not find container \"fef4597a9f24d5b1707762a89f295a161fc358fcd78e352499d8f554dadd2b38\": container with ID starting with fef4597a9f24d5b1707762a89f295a161fc358fcd78e352499d8f554dadd2b38 not found: ID does not exist" Feb 27 16:30:19 crc kubenswrapper[4830]: I0227 16:30:19.199162 4830 scope.go:117] "RemoveContainer" containerID="9f3142e6a356a06e85a723fe844009aa7172519dfe0722019555f2bb001193b3" Feb 27 16:30:19 crc kubenswrapper[4830]: I0227 16:30:19.208109 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f3142e6a356a06e85a723fe844009aa7172519dfe0722019555f2bb001193b3"} err="failed to get container status \"9f3142e6a356a06e85a723fe844009aa7172519dfe0722019555f2bb001193b3\": rpc error: code = NotFound desc = could not find container \"9f3142e6a356a06e85a723fe844009aa7172519dfe0722019555f2bb001193b3\": container with ID starting with 9f3142e6a356a06e85a723fe844009aa7172519dfe0722019555f2bb001193b3 not found: ID does not exist" Feb 27 16:30:19 crc kubenswrapper[4830]: I0227 16:30:19.288751 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41fafe33-b43b-4dcb-9edd-b365d0749e10-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"41fafe33-b43b-4dcb-9edd-b365d0749e10\") " pod="openstack/cinder-api-0" Feb 27 16:30:19 crc kubenswrapper[4830]: I0227 16:30:19.288797 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/41fafe33-b43b-4dcb-9edd-b365d0749e10-scripts\") pod \"cinder-api-0\" (UID: 
\"41fafe33-b43b-4dcb-9edd-b365d0749e10\") " pod="openstack/cinder-api-0" Feb 27 16:30:19 crc kubenswrapper[4830]: I0227 16:30:19.288823 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/41fafe33-b43b-4dcb-9edd-b365d0749e10-etc-machine-id\") pod \"cinder-api-0\" (UID: \"41fafe33-b43b-4dcb-9edd-b365d0749e10\") " pod="openstack/cinder-api-0" Feb 27 16:30:19 crc kubenswrapper[4830]: I0227 16:30:19.288846 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/41fafe33-b43b-4dcb-9edd-b365d0749e10-config-data-custom\") pod \"cinder-api-0\" (UID: \"41fafe33-b43b-4dcb-9edd-b365d0749e10\") " pod="openstack/cinder-api-0" Feb 27 16:30:19 crc kubenswrapper[4830]: I0227 16:30:19.289020 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41fafe33-b43b-4dcb-9edd-b365d0749e10-config-data\") pod \"cinder-api-0\" (UID: \"41fafe33-b43b-4dcb-9edd-b365d0749e10\") " pod="openstack/cinder-api-0" Feb 27 16:30:19 crc kubenswrapper[4830]: I0227 16:30:19.289062 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/41fafe33-b43b-4dcb-9edd-b365d0749e10-logs\") pod \"cinder-api-0\" (UID: \"41fafe33-b43b-4dcb-9edd-b365d0749e10\") " pod="openstack/cinder-api-0" Feb 27 16:30:19 crc kubenswrapper[4830]: I0227 16:30:19.289307 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/41fafe33-b43b-4dcb-9edd-b365d0749e10-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"41fafe33-b43b-4dcb-9edd-b365d0749e10\") " pod="openstack/cinder-api-0" Feb 27 16:30:19 crc kubenswrapper[4830]: I0227 16:30:19.289369 
4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvmrx\" (UniqueName: \"kubernetes.io/projected/41fafe33-b43b-4dcb-9edd-b365d0749e10-kube-api-access-zvmrx\") pod \"cinder-api-0\" (UID: \"41fafe33-b43b-4dcb-9edd-b365d0749e10\") " pod="openstack/cinder-api-0" Feb 27 16:30:19 crc kubenswrapper[4830]: I0227 16:30:19.289597 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/41fafe33-b43b-4dcb-9edd-b365d0749e10-public-tls-certs\") pod \"cinder-api-0\" (UID: \"41fafe33-b43b-4dcb-9edd-b365d0749e10\") " pod="openstack/cinder-api-0" Feb 27 16:30:19 crc kubenswrapper[4830]: I0227 16:30:19.391685 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/41fafe33-b43b-4dcb-9edd-b365d0749e10-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"41fafe33-b43b-4dcb-9edd-b365d0749e10\") " pod="openstack/cinder-api-0" Feb 27 16:30:19 crc kubenswrapper[4830]: I0227 16:30:19.391738 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvmrx\" (UniqueName: \"kubernetes.io/projected/41fafe33-b43b-4dcb-9edd-b365d0749e10-kube-api-access-zvmrx\") pod \"cinder-api-0\" (UID: \"41fafe33-b43b-4dcb-9edd-b365d0749e10\") " pod="openstack/cinder-api-0" Feb 27 16:30:19 crc kubenswrapper[4830]: I0227 16:30:19.391780 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/41fafe33-b43b-4dcb-9edd-b365d0749e10-public-tls-certs\") pod \"cinder-api-0\" (UID: \"41fafe33-b43b-4dcb-9edd-b365d0749e10\") " pod="openstack/cinder-api-0" Feb 27 16:30:19 crc kubenswrapper[4830]: I0227 16:30:19.391837 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/41fafe33-b43b-4dcb-9edd-b365d0749e10-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"41fafe33-b43b-4dcb-9edd-b365d0749e10\") " pod="openstack/cinder-api-0" Feb 27 16:30:19 crc kubenswrapper[4830]: I0227 16:30:19.391893 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/41fafe33-b43b-4dcb-9edd-b365d0749e10-scripts\") pod \"cinder-api-0\" (UID: \"41fafe33-b43b-4dcb-9edd-b365d0749e10\") " pod="openstack/cinder-api-0" Feb 27 16:30:19 crc kubenswrapper[4830]: I0227 16:30:19.391911 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/41fafe33-b43b-4dcb-9edd-b365d0749e10-etc-machine-id\") pod \"cinder-api-0\" (UID: \"41fafe33-b43b-4dcb-9edd-b365d0749e10\") " pod="openstack/cinder-api-0" Feb 27 16:30:19 crc kubenswrapper[4830]: I0227 16:30:19.392338 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/41fafe33-b43b-4dcb-9edd-b365d0749e10-etc-machine-id\") pod \"cinder-api-0\" (UID: \"41fafe33-b43b-4dcb-9edd-b365d0749e10\") " pod="openstack/cinder-api-0" Feb 27 16:30:19 crc kubenswrapper[4830]: I0227 16:30:19.392781 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/41fafe33-b43b-4dcb-9edd-b365d0749e10-config-data-custom\") pod \"cinder-api-0\" (UID: \"41fafe33-b43b-4dcb-9edd-b365d0749e10\") " pod="openstack/cinder-api-0" Feb 27 16:30:19 crc kubenswrapper[4830]: I0227 16:30:19.392821 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41fafe33-b43b-4dcb-9edd-b365d0749e10-config-data\") pod \"cinder-api-0\" (UID: \"41fafe33-b43b-4dcb-9edd-b365d0749e10\") " pod="openstack/cinder-api-0" Feb 27 16:30:19 crc kubenswrapper[4830]: I0227 
16:30:19.392844 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/41fafe33-b43b-4dcb-9edd-b365d0749e10-logs\") pod \"cinder-api-0\" (UID: \"41fafe33-b43b-4dcb-9edd-b365d0749e10\") " pod="openstack/cinder-api-0" Feb 27 16:30:19 crc kubenswrapper[4830]: I0227 16:30:19.393248 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/41fafe33-b43b-4dcb-9edd-b365d0749e10-logs\") pod \"cinder-api-0\" (UID: \"41fafe33-b43b-4dcb-9edd-b365d0749e10\") " pod="openstack/cinder-api-0" Feb 27 16:30:19 crc kubenswrapper[4830]: I0227 16:30:19.399907 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/41fafe33-b43b-4dcb-9edd-b365d0749e10-public-tls-certs\") pod \"cinder-api-0\" (UID: \"41fafe33-b43b-4dcb-9edd-b365d0749e10\") " pod="openstack/cinder-api-0" Feb 27 16:30:19 crc kubenswrapper[4830]: I0227 16:30:19.405896 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41fafe33-b43b-4dcb-9edd-b365d0749e10-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"41fafe33-b43b-4dcb-9edd-b365d0749e10\") " pod="openstack/cinder-api-0" Feb 27 16:30:19 crc kubenswrapper[4830]: I0227 16:30:19.406147 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/41fafe33-b43b-4dcb-9edd-b365d0749e10-config-data-custom\") pod \"cinder-api-0\" (UID: \"41fafe33-b43b-4dcb-9edd-b365d0749e10\") " pod="openstack/cinder-api-0" Feb 27 16:30:19 crc kubenswrapper[4830]: I0227 16:30:19.408096 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41fafe33-b43b-4dcb-9edd-b365d0749e10-config-data\") pod \"cinder-api-0\" (UID: \"41fafe33-b43b-4dcb-9edd-b365d0749e10\") " 
pod="openstack/cinder-api-0" Feb 27 16:30:19 crc kubenswrapper[4830]: I0227 16:30:19.413464 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/41fafe33-b43b-4dcb-9edd-b365d0749e10-scripts\") pod \"cinder-api-0\" (UID: \"41fafe33-b43b-4dcb-9edd-b365d0749e10\") " pod="openstack/cinder-api-0" Feb 27 16:30:19 crc kubenswrapper[4830]: I0227 16:30:19.414637 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/41fafe33-b43b-4dcb-9edd-b365d0749e10-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"41fafe33-b43b-4dcb-9edd-b365d0749e10\") " pod="openstack/cinder-api-0" Feb 27 16:30:19 crc kubenswrapper[4830]: I0227 16:30:19.417577 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvmrx\" (UniqueName: \"kubernetes.io/projected/41fafe33-b43b-4dcb-9edd-b365d0749e10-kube-api-access-zvmrx\") pod \"cinder-api-0\" (UID: \"41fafe33-b43b-4dcb-9edd-b365d0749e10\") " pod="openstack/cinder-api-0" Feb 27 16:30:19 crc kubenswrapper[4830]: I0227 16:30:19.472352 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 27 16:30:19 crc kubenswrapper[4830]: W0227 16:30:19.939242 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod41fafe33_b43b_4dcb_9edd_b365d0749e10.slice/crio-c0d68aa16ecc6706ff17d105de79e65dbea1f9fef4f144b1b02f5ecb8a6a999e WatchSource:0}: Error finding container c0d68aa16ecc6706ff17d105de79e65dbea1f9fef4f144b1b02f5ecb8a6a999e: Status 404 returned error can't find the container with id c0d68aa16ecc6706ff17d105de79e65dbea1f9fef4f144b1b02f5ecb8a6a999e Feb 27 16:30:19 crc kubenswrapper[4830]: I0227 16:30:19.948685 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 27 16:30:20 crc kubenswrapper[4830]: I0227 16:30:20.099008 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"41fafe33-b43b-4dcb-9edd-b365d0749e10","Type":"ContainerStarted","Data":"c0d68aa16ecc6706ff17d105de79e65dbea1f9fef4f144b1b02f5ecb8a6a999e"} Feb 27 16:30:20 crc kubenswrapper[4830]: I0227 16:30:20.118048 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-5c55fdd8d8-tv8zp" Feb 27 16:30:20 crc kubenswrapper[4830]: I0227 16:30:20.232411 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-5c55fdd8d8-tv8zp" Feb 27 16:30:20 crc kubenswrapper[4830]: I0227 16:30:20.672277 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-58db7bd5dd-jr8zt" Feb 27 16:30:20 crc kubenswrapper[4830]: I0227 16:30:20.785524 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d4b5776-2c37-4d23-a1ef-4738230012db" path="/var/lib/kubelet/pods/8d4b5776-2c37-4d23-a1ef-4738230012db/volumes" Feb 27 16:30:20 crc kubenswrapper[4830]: I0227 16:30:20.913014 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-5d54db5966-xcg7l"] Feb 
27 16:30:20 crc kubenswrapper[4830]: I0227 16:30:20.914797 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5d54db5966-xcg7l" Feb 27 16:30:20 crc kubenswrapper[4830]: I0227 16:30:20.928743 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5d54db5966-xcg7l"] Feb 27 16:30:20 crc kubenswrapper[4830]: I0227 16:30:20.939722 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Feb 27 16:30:20 crc kubenswrapper[4830]: I0227 16:30:20.939846 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-6b747d769f-z82kl" Feb 27 16:30:20 crc kubenswrapper[4830]: I0227 16:30:20.939890 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Feb 27 16:30:20 crc kubenswrapper[4830]: I0227 16:30:20.940547 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8klgh\" (UniqueName: \"kubernetes.io/projected/a234743b-8983-4a60-bbb4-59ad823b83e2-kube-api-access-8klgh\") pod \"barbican-api-5d54db5966-xcg7l\" (UID: \"a234743b-8983-4a60-bbb4-59ad823b83e2\") " pod="openstack/barbican-api-5d54db5966-xcg7l" Feb 27 16:30:20 crc kubenswrapper[4830]: I0227 16:30:20.940599 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a234743b-8983-4a60-bbb4-59ad823b83e2-public-tls-certs\") pod \"barbican-api-5d54db5966-xcg7l\" (UID: \"a234743b-8983-4a60-bbb4-59ad823b83e2\") " pod="openstack/barbican-api-5d54db5966-xcg7l" Feb 27 16:30:20 crc kubenswrapper[4830]: I0227 16:30:20.940628 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a234743b-8983-4a60-bbb4-59ad823b83e2-combined-ca-bundle\") pod 
\"barbican-api-5d54db5966-xcg7l\" (UID: \"a234743b-8983-4a60-bbb4-59ad823b83e2\") " pod="openstack/barbican-api-5d54db5966-xcg7l" Feb 27 16:30:20 crc kubenswrapper[4830]: I0227 16:30:20.940656 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a234743b-8983-4a60-bbb4-59ad823b83e2-config-data\") pod \"barbican-api-5d54db5966-xcg7l\" (UID: \"a234743b-8983-4a60-bbb4-59ad823b83e2\") " pod="openstack/barbican-api-5d54db5966-xcg7l" Feb 27 16:30:20 crc kubenswrapper[4830]: I0227 16:30:20.940692 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a234743b-8983-4a60-bbb4-59ad823b83e2-internal-tls-certs\") pod \"barbican-api-5d54db5966-xcg7l\" (UID: \"a234743b-8983-4a60-bbb4-59ad823b83e2\") " pod="openstack/barbican-api-5d54db5966-xcg7l" Feb 27 16:30:20 crc kubenswrapper[4830]: I0227 16:30:20.940721 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a234743b-8983-4a60-bbb4-59ad823b83e2-config-data-custom\") pod \"barbican-api-5d54db5966-xcg7l\" (UID: \"a234743b-8983-4a60-bbb4-59ad823b83e2\") " pod="openstack/barbican-api-5d54db5966-xcg7l" Feb 27 16:30:20 crc kubenswrapper[4830]: I0227 16:30:20.940757 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a234743b-8983-4a60-bbb4-59ad823b83e2-logs\") pod \"barbican-api-5d54db5966-xcg7l\" (UID: \"a234743b-8983-4a60-bbb4-59ad823b83e2\") " pod="openstack/barbican-api-5d54db5966-xcg7l" Feb 27 16:30:21 crc kubenswrapper[4830]: I0227 16:30:21.044683 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/a234743b-8983-4a60-bbb4-59ad823b83e2-internal-tls-certs\") pod \"barbican-api-5d54db5966-xcg7l\" (UID: \"a234743b-8983-4a60-bbb4-59ad823b83e2\") " pod="openstack/barbican-api-5d54db5966-xcg7l" Feb 27 16:30:21 crc kubenswrapper[4830]: I0227 16:30:21.044752 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a234743b-8983-4a60-bbb4-59ad823b83e2-config-data-custom\") pod \"barbican-api-5d54db5966-xcg7l\" (UID: \"a234743b-8983-4a60-bbb4-59ad823b83e2\") " pod="openstack/barbican-api-5d54db5966-xcg7l" Feb 27 16:30:21 crc kubenswrapper[4830]: I0227 16:30:21.044804 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a234743b-8983-4a60-bbb4-59ad823b83e2-logs\") pod \"barbican-api-5d54db5966-xcg7l\" (UID: \"a234743b-8983-4a60-bbb4-59ad823b83e2\") " pod="openstack/barbican-api-5d54db5966-xcg7l" Feb 27 16:30:21 crc kubenswrapper[4830]: I0227 16:30:21.044911 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8klgh\" (UniqueName: \"kubernetes.io/projected/a234743b-8983-4a60-bbb4-59ad823b83e2-kube-api-access-8klgh\") pod \"barbican-api-5d54db5966-xcg7l\" (UID: \"a234743b-8983-4a60-bbb4-59ad823b83e2\") " pod="openstack/barbican-api-5d54db5966-xcg7l" Feb 27 16:30:21 crc kubenswrapper[4830]: I0227 16:30:21.044975 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a234743b-8983-4a60-bbb4-59ad823b83e2-public-tls-certs\") pod \"barbican-api-5d54db5966-xcg7l\" (UID: \"a234743b-8983-4a60-bbb4-59ad823b83e2\") " pod="openstack/barbican-api-5d54db5966-xcg7l" Feb 27 16:30:21 crc kubenswrapper[4830]: I0227 16:30:21.045013 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/a234743b-8983-4a60-bbb4-59ad823b83e2-combined-ca-bundle\") pod \"barbican-api-5d54db5966-xcg7l\" (UID: \"a234743b-8983-4a60-bbb4-59ad823b83e2\") " pod="openstack/barbican-api-5d54db5966-xcg7l" Feb 27 16:30:21 crc kubenswrapper[4830]: I0227 16:30:21.045046 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a234743b-8983-4a60-bbb4-59ad823b83e2-config-data\") pod \"barbican-api-5d54db5966-xcg7l\" (UID: \"a234743b-8983-4a60-bbb4-59ad823b83e2\") " pod="openstack/barbican-api-5d54db5966-xcg7l" Feb 27 16:30:21 crc kubenswrapper[4830]: I0227 16:30:21.049488 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a234743b-8983-4a60-bbb4-59ad823b83e2-logs\") pod \"barbican-api-5d54db5966-xcg7l\" (UID: \"a234743b-8983-4a60-bbb4-59ad823b83e2\") " pod="openstack/barbican-api-5d54db5966-xcg7l" Feb 27 16:30:21 crc kubenswrapper[4830]: I0227 16:30:21.052007 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a234743b-8983-4a60-bbb4-59ad823b83e2-config-data-custom\") pod \"barbican-api-5d54db5966-xcg7l\" (UID: \"a234743b-8983-4a60-bbb4-59ad823b83e2\") " pod="openstack/barbican-api-5d54db5966-xcg7l" Feb 27 16:30:21 crc kubenswrapper[4830]: I0227 16:30:21.052921 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a234743b-8983-4a60-bbb4-59ad823b83e2-config-data\") pod \"barbican-api-5d54db5966-xcg7l\" (UID: \"a234743b-8983-4a60-bbb4-59ad823b83e2\") " pod="openstack/barbican-api-5d54db5966-xcg7l" Feb 27 16:30:21 crc kubenswrapper[4830]: I0227 16:30:21.053319 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a234743b-8983-4a60-bbb4-59ad823b83e2-internal-tls-certs\") pod 
\"barbican-api-5d54db5966-xcg7l\" (UID: \"a234743b-8983-4a60-bbb4-59ad823b83e2\") " pod="openstack/barbican-api-5d54db5966-xcg7l" Feb 27 16:30:21 crc kubenswrapper[4830]: I0227 16:30:21.061648 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-58db7bd5dd-jr8zt" Feb 27 16:30:21 crc kubenswrapper[4830]: I0227 16:30:21.061910 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a234743b-8983-4a60-bbb4-59ad823b83e2-combined-ca-bundle\") pod \"barbican-api-5d54db5966-xcg7l\" (UID: \"a234743b-8983-4a60-bbb4-59ad823b83e2\") " pod="openstack/barbican-api-5d54db5966-xcg7l" Feb 27 16:30:21 crc kubenswrapper[4830]: I0227 16:30:21.062644 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a234743b-8983-4a60-bbb4-59ad823b83e2-public-tls-certs\") pod \"barbican-api-5d54db5966-xcg7l\" (UID: \"a234743b-8983-4a60-bbb4-59ad823b83e2\") " pod="openstack/barbican-api-5d54db5966-xcg7l" Feb 27 16:30:21 crc kubenswrapper[4830]: I0227 16:30:21.065465 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8klgh\" (UniqueName: \"kubernetes.io/projected/a234743b-8983-4a60-bbb4-59ad823b83e2-kube-api-access-8klgh\") pod \"barbican-api-5d54db5966-xcg7l\" (UID: \"a234743b-8983-4a60-bbb4-59ad823b83e2\") " pod="openstack/barbican-api-5d54db5966-xcg7l" Feb 27 16:30:21 crc kubenswrapper[4830]: I0227 16:30:21.138305 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"41fafe33-b43b-4dcb-9edd-b365d0749e10","Type":"ContainerStarted","Data":"40cab2835902cbbd7f2108f23209c5d896b2d0b912cf229a63563e0cdf02215b"} Feb 27 16:30:21 crc kubenswrapper[4830]: I0227 16:30:21.151777 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-5c55fdd8d8-tv8zp"] Feb 27 16:30:21 crc kubenswrapper[4830]: I0227 
16:30:21.311343 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5d54db5966-xcg7l" Feb 27 16:30:21 crc kubenswrapper[4830]: I0227 16:30:21.670957 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5d54db5966-xcg7l"] Feb 27 16:30:22 crc kubenswrapper[4830]: I0227 16:30:22.147193 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5d54db5966-xcg7l" event={"ID":"a234743b-8983-4a60-bbb4-59ad823b83e2","Type":"ContainerStarted","Data":"bcaad14a5dbb96adf7a18f1f57a6f9461056ab8d5981e03e5ed3e64de132d692"} Feb 27 16:30:22 crc kubenswrapper[4830]: I0227 16:30:22.147551 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5d54db5966-xcg7l" event={"ID":"a234743b-8983-4a60-bbb4-59ad823b83e2","Type":"ContainerStarted","Data":"60b1698b9bf51b951bd77870e5046fcfdcd7a8f538faf1f1732e6055788dfb74"} Feb 27 16:30:22 crc kubenswrapper[4830]: I0227 16:30:22.149112 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"41fafe33-b43b-4dcb-9edd-b365d0749e10","Type":"ContainerStarted","Data":"9f254100c8c027338b42ed369be0ddd72af937c9d87a9a808607f1dcc876c8ed"} Feb 27 16:30:22 crc kubenswrapper[4830]: I0227 16:30:22.149239 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-5c55fdd8d8-tv8zp" podUID="4da01425-1614-4383-810b-ff1a89832197" containerName="placement-log" containerID="cri-o://cdc9a3387155de07c2eb226e985c0ee25a06eea0632b208c44e4fcc19ec8c984" gracePeriod=30 Feb 27 16:30:22 crc kubenswrapper[4830]: I0227 16:30:22.149291 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 27 16:30:22 crc kubenswrapper[4830]: I0227 16:30:22.149380 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-5c55fdd8d8-tv8zp" podUID="4da01425-1614-4383-810b-ff1a89832197" 
containerName="placement-api" containerID="cri-o://e13c29a20be2e0d3d8fc761683b1442db2bdcb77f337bbbcc9f2f64f166246d2" gracePeriod=30 Feb 27 16:30:22 crc kubenswrapper[4830]: I0227 16:30:22.187824 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.187798931 podStartE2EDuration="3.187798931s" podCreationTimestamp="2026-02-27 16:30:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:30:22.176006417 +0000 UTC m=+1418.265278880" watchObservedRunningTime="2026-02-27 16:30:22.187798931 +0000 UTC m=+1418.277071394" Feb 27 16:30:22 crc kubenswrapper[4830]: I0227 16:30:22.480139 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6578955fd5-g94gr" Feb 27 16:30:22 crc kubenswrapper[4830]: I0227 16:30:22.513531 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-86c4877d94-j48gv" Feb 27 16:30:22 crc kubenswrapper[4830]: I0227 16:30:22.530540 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Feb 27 16:30:22 crc kubenswrapper[4830]: I0227 16:30:22.532477 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 27 16:30:22 crc kubenswrapper[4830]: I0227 16:30:22.545600 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-trrr2" Feb 27 16:30:22 crc kubenswrapper[4830]: I0227 16:30:22.551376 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Feb 27 16:30:22 crc kubenswrapper[4830]: I0227 16:30:22.551596 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Feb 27 16:30:22 crc kubenswrapper[4830]: I0227 16:30:22.572061 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 27 16:30:22 crc kubenswrapper[4830]: I0227 16:30:22.574766 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/3482e9fb-53ae-4908-87fc-4096c5b26b76-openstack-config\") pod \"openstackclient\" (UID: \"3482e9fb-53ae-4908-87fc-4096c5b26b76\") " pod="openstack/openstackclient" Feb 27 16:30:22 crc kubenswrapper[4830]: I0227 16:30:22.576095 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfhbc\" (UniqueName: \"kubernetes.io/projected/3482e9fb-53ae-4908-87fc-4096c5b26b76-kube-api-access-jfhbc\") pod \"openstackclient\" (UID: \"3482e9fb-53ae-4908-87fc-4096c5b26b76\") " pod="openstack/openstackclient" Feb 27 16:30:22 crc kubenswrapper[4830]: I0227 16:30:22.578221 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3482e9fb-53ae-4908-87fc-4096c5b26b76-combined-ca-bundle\") pod \"openstackclient\" (UID: \"3482e9fb-53ae-4908-87fc-4096c5b26b76\") " pod="openstack/openstackclient" Feb 27 16:30:22 crc kubenswrapper[4830]: I0227 16:30:22.578444 4830 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/3482e9fb-53ae-4908-87fc-4096c5b26b76-openstack-config-secret\") pod \"openstackclient\" (UID: \"3482e9fb-53ae-4908-87fc-4096c5b26b76\") " pod="openstack/openstackclient" Feb 27 16:30:22 crc kubenswrapper[4830]: I0227 16:30:22.579812 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-vqkl9"] Feb 27 16:30:22 crc kubenswrapper[4830]: I0227 16:30:22.580133 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-56df8fb6b7-vqkl9" podUID="b05d69f2-31a8-4212-ad9a-8f2bec833edd" containerName="dnsmasq-dns" containerID="cri-o://1f0e3dc4b4d829a2806ddef6c08bba3c9fbb78fe1be3327ca2005557f8ba019e" gracePeriod=10 Feb 27 16:30:22 crc kubenswrapper[4830]: I0227 16:30:22.691247 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jfhbc\" (UniqueName: \"kubernetes.io/projected/3482e9fb-53ae-4908-87fc-4096c5b26b76-kube-api-access-jfhbc\") pod \"openstackclient\" (UID: \"3482e9fb-53ae-4908-87fc-4096c5b26b76\") " pod="openstack/openstackclient" Feb 27 16:30:22 crc kubenswrapper[4830]: I0227 16:30:22.691349 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3482e9fb-53ae-4908-87fc-4096c5b26b76-combined-ca-bundle\") pod \"openstackclient\" (UID: \"3482e9fb-53ae-4908-87fc-4096c5b26b76\") " pod="openstack/openstackclient" Feb 27 16:30:22 crc kubenswrapper[4830]: I0227 16:30:22.691394 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/3482e9fb-53ae-4908-87fc-4096c5b26b76-openstack-config-secret\") pod \"openstackclient\" (UID: \"3482e9fb-53ae-4908-87fc-4096c5b26b76\") " pod="openstack/openstackclient" Feb 27 16:30:22 crc kubenswrapper[4830]: I0227 
16:30:22.691436 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/3482e9fb-53ae-4908-87fc-4096c5b26b76-openstack-config\") pod \"openstackclient\" (UID: \"3482e9fb-53ae-4908-87fc-4096c5b26b76\") " pod="openstack/openstackclient" Feb 27 16:30:22 crc kubenswrapper[4830]: I0227 16:30:22.692209 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/3482e9fb-53ae-4908-87fc-4096c5b26b76-openstack-config\") pod \"openstackclient\" (UID: \"3482e9fb-53ae-4908-87fc-4096c5b26b76\") " pod="openstack/openstackclient" Feb 27 16:30:22 crc kubenswrapper[4830]: I0227 16:30:22.699977 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/3482e9fb-53ae-4908-87fc-4096c5b26b76-openstack-config-secret\") pod \"openstackclient\" (UID: \"3482e9fb-53ae-4908-87fc-4096c5b26b76\") " pod="openstack/openstackclient" Feb 27 16:30:22 crc kubenswrapper[4830]: I0227 16:30:22.712382 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jfhbc\" (UniqueName: \"kubernetes.io/projected/3482e9fb-53ae-4908-87fc-4096c5b26b76-kube-api-access-jfhbc\") pod \"openstackclient\" (UID: \"3482e9fb-53ae-4908-87fc-4096c5b26b76\") " pod="openstack/openstackclient" Feb 27 16:30:22 crc kubenswrapper[4830]: I0227 16:30:22.715497 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3482e9fb-53ae-4908-87fc-4096c5b26b76-combined-ca-bundle\") pod \"openstackclient\" (UID: \"3482e9fb-53ae-4908-87fc-4096c5b26b76\") " pod="openstack/openstackclient" Feb 27 16:30:22 crc kubenswrapper[4830]: I0227 16:30:22.735771 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 27 16:30:22 crc kubenswrapper[4830]: I0227 
16:30:22.786641 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 27 16:30:22 crc kubenswrapper[4830]: I0227 16:30:22.861044 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-86c4877d94-j48gv" Feb 27 16:30:22 crc kubenswrapper[4830]: I0227 16:30:22.901066 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 27 16:30:23 crc kubenswrapper[4830]: I0227 16:30:23.156908 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-vqkl9" Feb 27 16:30:23 crc kubenswrapper[4830]: I0227 16:30:23.161831 4830 generic.go:334] "Generic (PLEG): container finished" podID="b05d69f2-31a8-4212-ad9a-8f2bec833edd" containerID="1f0e3dc4b4d829a2806ddef6c08bba3c9fbb78fe1be3327ca2005557f8ba019e" exitCode=0 Feb 27 16:30:23 crc kubenswrapper[4830]: I0227 16:30:23.161897 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-vqkl9" event={"ID":"b05d69f2-31a8-4212-ad9a-8f2bec833edd","Type":"ContainerDied","Data":"1f0e3dc4b4d829a2806ddef6c08bba3c9fbb78fe1be3327ca2005557f8ba019e"} Feb 27 16:30:23 crc kubenswrapper[4830]: I0227 16:30:23.161926 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-vqkl9" event={"ID":"b05d69f2-31a8-4212-ad9a-8f2bec833edd","Type":"ContainerDied","Data":"e1bfcb91a2322670780b165425fc25ad38100b057c266b270e43e01a14db7849"} Feb 27 16:30:23 crc kubenswrapper[4830]: I0227 16:30:23.161962 4830 scope.go:117] "RemoveContainer" containerID="1f0e3dc4b4d829a2806ddef6c08bba3c9fbb78fe1be3327ca2005557f8ba019e" Feb 27 16:30:23 crc kubenswrapper[4830]: I0227 16:30:23.162070 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-vqkl9" Feb 27 16:30:23 crc kubenswrapper[4830]: I0227 16:30:23.169868 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5d54db5966-xcg7l" event={"ID":"a234743b-8983-4a60-bbb4-59ad823b83e2","Type":"ContainerStarted","Data":"5d61bb0dcfd0af97605ea6793d0ccb409521660eb0cfce03c505ba533a6f52a4"} Feb 27 16:30:23 crc kubenswrapper[4830]: I0227 16:30:23.170892 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5d54db5966-xcg7l" Feb 27 16:30:23 crc kubenswrapper[4830]: I0227 16:30:23.170920 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5d54db5966-xcg7l" Feb 27 16:30:23 crc kubenswrapper[4830]: I0227 16:30:23.179882 4830 generic.go:334] "Generic (PLEG): container finished" podID="4da01425-1614-4383-810b-ff1a89832197" containerID="cdc9a3387155de07c2eb226e985c0ee25a06eea0632b208c44e4fcc19ec8c984" exitCode=143 Feb 27 16:30:23 crc kubenswrapper[4830]: I0227 16:30:23.180133 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5c55fdd8d8-tv8zp" event={"ID":"4da01425-1614-4383-810b-ff1a89832197","Type":"ContainerDied","Data":"cdc9a3387155de07c2eb226e985c0ee25a06eea0632b208c44e4fcc19ec8c984"} Feb 27 16:30:23 crc kubenswrapper[4830]: I0227 16:30:23.181083 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="da0f5d1f-1944-4347-8a18-fd946fb7ed6a" containerName="probe" containerID="cri-o://4784515ee13ec5f2e9c5dbdedc925b93fda50d5cc115163838e530a4050b1298" gracePeriod=30 Feb 27 16:30:23 crc kubenswrapper[4830]: I0227 16:30:23.181171 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="da0f5d1f-1944-4347-8a18-fd946fb7ed6a" containerName="cinder-scheduler" containerID="cri-o://6c582c45fe25a3893424ec2ce2705ebcf999c507acb0c5f092150ee067d084e7" 
gracePeriod=30 Feb 27 16:30:23 crc kubenswrapper[4830]: I0227 16:30:23.190091 4830 scope.go:117] "RemoveContainer" containerID="66691e78bbd70b07b2bdb539dd9a20b73d57e3ed0c6f37039c2c988e694d1d0e" Feb 27 16:30:23 crc kubenswrapper[4830]: I0227 16:30:23.222210 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b05d69f2-31a8-4212-ad9a-8f2bec833edd-config\") pod \"b05d69f2-31a8-4212-ad9a-8f2bec833edd\" (UID: \"b05d69f2-31a8-4212-ad9a-8f2bec833edd\") " Feb 27 16:30:23 crc kubenswrapper[4830]: I0227 16:30:23.222272 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b05d69f2-31a8-4212-ad9a-8f2bec833edd-dns-svc\") pod \"b05d69f2-31a8-4212-ad9a-8f2bec833edd\" (UID: \"b05d69f2-31a8-4212-ad9a-8f2bec833edd\") " Feb 27 16:30:23 crc kubenswrapper[4830]: I0227 16:30:23.222343 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bjs55\" (UniqueName: \"kubernetes.io/projected/b05d69f2-31a8-4212-ad9a-8f2bec833edd-kube-api-access-bjs55\") pod \"b05d69f2-31a8-4212-ad9a-8f2bec833edd\" (UID: \"b05d69f2-31a8-4212-ad9a-8f2bec833edd\") " Feb 27 16:30:23 crc kubenswrapper[4830]: I0227 16:30:23.224504 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b05d69f2-31a8-4212-ad9a-8f2bec833edd-ovsdbserver-sb\") pod \"b05d69f2-31a8-4212-ad9a-8f2bec833edd\" (UID: \"b05d69f2-31a8-4212-ad9a-8f2bec833edd\") " Feb 27 16:30:23 crc kubenswrapper[4830]: I0227 16:30:23.224539 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b05d69f2-31a8-4212-ad9a-8f2bec833edd-dns-swift-storage-0\") pod \"b05d69f2-31a8-4212-ad9a-8f2bec833edd\" (UID: \"b05d69f2-31a8-4212-ad9a-8f2bec833edd\") " Feb 27 16:30:23 crc kubenswrapper[4830]: 
I0227 16:30:23.227294 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b05d69f2-31a8-4212-ad9a-8f2bec833edd-ovsdbserver-nb\") pod \"b05d69f2-31a8-4212-ad9a-8f2bec833edd\" (UID: \"b05d69f2-31a8-4212-ad9a-8f2bec833edd\") " Feb 27 16:30:23 crc kubenswrapper[4830]: I0227 16:30:23.239340 4830 scope.go:117] "RemoveContainer" containerID="1f0e3dc4b4d829a2806ddef6c08bba3c9fbb78fe1be3327ca2005557f8ba019e" Feb 27 16:30:23 crc kubenswrapper[4830]: I0227 16:30:23.241085 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b05d69f2-31a8-4212-ad9a-8f2bec833edd-kube-api-access-bjs55" (OuterVolumeSpecName: "kube-api-access-bjs55") pod "b05d69f2-31a8-4212-ad9a-8f2bec833edd" (UID: "b05d69f2-31a8-4212-ad9a-8f2bec833edd"). InnerVolumeSpecName "kube-api-access-bjs55". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:30:23 crc kubenswrapper[4830]: E0227 16:30:23.243034 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f0e3dc4b4d829a2806ddef6c08bba3c9fbb78fe1be3327ca2005557f8ba019e\": container with ID starting with 1f0e3dc4b4d829a2806ddef6c08bba3c9fbb78fe1be3327ca2005557f8ba019e not found: ID does not exist" containerID="1f0e3dc4b4d829a2806ddef6c08bba3c9fbb78fe1be3327ca2005557f8ba019e" Feb 27 16:30:23 crc kubenswrapper[4830]: I0227 16:30:23.243084 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f0e3dc4b4d829a2806ddef6c08bba3c9fbb78fe1be3327ca2005557f8ba019e"} err="failed to get container status \"1f0e3dc4b4d829a2806ddef6c08bba3c9fbb78fe1be3327ca2005557f8ba019e\": rpc error: code = NotFound desc = could not find container \"1f0e3dc4b4d829a2806ddef6c08bba3c9fbb78fe1be3327ca2005557f8ba019e\": container with ID starting with 1f0e3dc4b4d829a2806ddef6c08bba3c9fbb78fe1be3327ca2005557f8ba019e not found: ID 
does not exist" Feb 27 16:30:23 crc kubenswrapper[4830]: I0227 16:30:23.243112 4830 scope.go:117] "RemoveContainer" containerID="66691e78bbd70b07b2bdb539dd9a20b73d57e3ed0c6f37039c2c988e694d1d0e" Feb 27 16:30:23 crc kubenswrapper[4830]: E0227 16:30:23.244073 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"66691e78bbd70b07b2bdb539dd9a20b73d57e3ed0c6f37039c2c988e694d1d0e\": container with ID starting with 66691e78bbd70b07b2bdb539dd9a20b73d57e3ed0c6f37039c2c988e694d1d0e not found: ID does not exist" containerID="66691e78bbd70b07b2bdb539dd9a20b73d57e3ed0c6f37039c2c988e694d1d0e" Feb 27 16:30:23 crc kubenswrapper[4830]: I0227 16:30:23.244092 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"66691e78bbd70b07b2bdb539dd9a20b73d57e3ed0c6f37039c2c988e694d1d0e"} err="failed to get container status \"66691e78bbd70b07b2bdb539dd9a20b73d57e3ed0c6f37039c2c988e694d1d0e\": rpc error: code = NotFound desc = could not find container \"66691e78bbd70b07b2bdb539dd9a20b73d57e3ed0c6f37039c2c988e694d1d0e\": container with ID starting with 66691e78bbd70b07b2bdb539dd9a20b73d57e3ed0c6f37039c2c988e694d1d0e not found: ID does not exist" Feb 27 16:30:23 crc kubenswrapper[4830]: I0227 16:30:23.267560 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-5d54db5966-xcg7l" podStartSLOduration=3.267532401 podStartE2EDuration="3.267532401s" podCreationTimestamp="2026-02-27 16:30:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:30:23.220689402 +0000 UTC m=+1419.309961865" watchObservedRunningTime="2026-02-27 16:30:23.267532401 +0000 UTC m=+1419.356804864" Feb 27 16:30:23 crc kubenswrapper[4830]: I0227 16:30:23.302645 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/b05d69f2-31a8-4212-ad9a-8f2bec833edd-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "b05d69f2-31a8-4212-ad9a-8f2bec833edd" (UID: "b05d69f2-31a8-4212-ad9a-8f2bec833edd"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:30:23 crc kubenswrapper[4830]: I0227 16:30:23.306503 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b05d69f2-31a8-4212-ad9a-8f2bec833edd-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b05d69f2-31a8-4212-ad9a-8f2bec833edd" (UID: "b05d69f2-31a8-4212-ad9a-8f2bec833edd"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:30:23 crc kubenswrapper[4830]: I0227 16:30:23.312401 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b05d69f2-31a8-4212-ad9a-8f2bec833edd-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b05d69f2-31a8-4212-ad9a-8f2bec833edd" (UID: "b05d69f2-31a8-4212-ad9a-8f2bec833edd"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:30:23 crc kubenswrapper[4830]: I0227 16:30:23.314482 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b05d69f2-31a8-4212-ad9a-8f2bec833edd-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b05d69f2-31a8-4212-ad9a-8f2bec833edd" (UID: "b05d69f2-31a8-4212-ad9a-8f2bec833edd"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:30:23 crc kubenswrapper[4830]: I0227 16:30:23.329466 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b05d69f2-31a8-4212-ad9a-8f2bec833edd-config" (OuterVolumeSpecName: "config") pod "b05d69f2-31a8-4212-ad9a-8f2bec833edd" (UID: "b05d69f2-31a8-4212-ad9a-8f2bec833edd"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:30:23 crc kubenswrapper[4830]: I0227 16:30:23.340193 4830 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b05d69f2-31a8-4212-ad9a-8f2bec833edd-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 27 16:30:23 crc kubenswrapper[4830]: I0227 16:30:23.340236 4830 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b05d69f2-31a8-4212-ad9a-8f2bec833edd-config\") on node \"crc\" DevicePath \"\"" Feb 27 16:30:23 crc kubenswrapper[4830]: I0227 16:30:23.340246 4830 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b05d69f2-31a8-4212-ad9a-8f2bec833edd-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 27 16:30:23 crc kubenswrapper[4830]: I0227 16:30:23.340257 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bjs55\" (UniqueName: \"kubernetes.io/projected/b05d69f2-31a8-4212-ad9a-8f2bec833edd-kube-api-access-bjs55\") on node \"crc\" DevicePath \"\"" Feb 27 16:30:23 crc kubenswrapper[4830]: I0227 16:30:23.340270 4830 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b05d69f2-31a8-4212-ad9a-8f2bec833edd-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 27 16:30:23 crc kubenswrapper[4830]: I0227 16:30:23.340282 4830 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b05d69f2-31a8-4212-ad9a-8f2bec833edd-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 27 16:30:23 crc kubenswrapper[4830]: I0227 16:30:23.399874 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 27 16:30:23 crc kubenswrapper[4830]: I0227 16:30:23.492853 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-vqkl9"] Feb 27 16:30:23 crc 
kubenswrapper[4830]: I0227 16:30:23.500113 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-vqkl9"] Feb 27 16:30:24 crc kubenswrapper[4830]: I0227 16:30:24.189131 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"3482e9fb-53ae-4908-87fc-4096c5b26b76","Type":"ContainerStarted","Data":"4e6ed1b1b6f598744a71afb756819829443f91ad62b90e441271d2f317ee411d"} Feb 27 16:30:24 crc kubenswrapper[4830]: I0227 16:30:24.194727 4830 generic.go:334] "Generic (PLEG): container finished" podID="da0f5d1f-1944-4347-8a18-fd946fb7ed6a" containerID="4784515ee13ec5f2e9c5dbdedc925b93fda50d5cc115163838e530a4050b1298" exitCode=0 Feb 27 16:30:24 crc kubenswrapper[4830]: I0227 16:30:24.194811 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"da0f5d1f-1944-4347-8a18-fd946fb7ed6a","Type":"ContainerDied","Data":"4784515ee13ec5f2e9c5dbdedc925b93fda50d5cc115163838e530a4050b1298"} Feb 27 16:30:24 crc kubenswrapper[4830]: I0227 16:30:24.770776 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b05d69f2-31a8-4212-ad9a-8f2bec833edd" path="/var/lib/kubelet/pods/b05d69f2-31a8-4212-ad9a-8f2bec833edd/volumes" Feb 27 16:30:25 crc kubenswrapper[4830]: I0227 16:30:25.761514 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-5c55fdd8d8-tv8zp" Feb 27 16:30:25 crc kubenswrapper[4830]: I0227 16:30:25.883464 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4da01425-1614-4383-810b-ff1a89832197-internal-tls-certs\") pod \"4da01425-1614-4383-810b-ff1a89832197\" (UID: \"4da01425-1614-4383-810b-ff1a89832197\") " Feb 27 16:30:25 crc kubenswrapper[4830]: I0227 16:30:25.883548 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4da01425-1614-4383-810b-ff1a89832197-public-tls-certs\") pod \"4da01425-1614-4383-810b-ff1a89832197\" (UID: \"4da01425-1614-4383-810b-ff1a89832197\") " Feb 27 16:30:25 crc kubenswrapper[4830]: I0227 16:30:25.883576 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4da01425-1614-4383-810b-ff1a89832197-config-data\") pod \"4da01425-1614-4383-810b-ff1a89832197\" (UID: \"4da01425-1614-4383-810b-ff1a89832197\") " Feb 27 16:30:25 crc kubenswrapper[4830]: I0227 16:30:25.883687 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4da01425-1614-4383-810b-ff1a89832197-combined-ca-bundle\") pod \"4da01425-1614-4383-810b-ff1a89832197\" (UID: \"4da01425-1614-4383-810b-ff1a89832197\") " Feb 27 16:30:25 crc kubenswrapper[4830]: I0227 16:30:25.883785 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4da01425-1614-4383-810b-ff1a89832197-logs\") pod \"4da01425-1614-4383-810b-ff1a89832197\" (UID: \"4da01425-1614-4383-810b-ff1a89832197\") " Feb 27 16:30:25 crc kubenswrapper[4830]: I0227 16:30:25.883829 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/4da01425-1614-4383-810b-ff1a89832197-scripts\") pod \"4da01425-1614-4383-810b-ff1a89832197\" (UID: \"4da01425-1614-4383-810b-ff1a89832197\") " Feb 27 16:30:25 crc kubenswrapper[4830]: I0227 16:30:25.883852 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jzhpt\" (UniqueName: \"kubernetes.io/projected/4da01425-1614-4383-810b-ff1a89832197-kube-api-access-jzhpt\") pod \"4da01425-1614-4383-810b-ff1a89832197\" (UID: \"4da01425-1614-4383-810b-ff1a89832197\") " Feb 27 16:30:25 crc kubenswrapper[4830]: I0227 16:30:25.884814 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4da01425-1614-4383-810b-ff1a89832197-logs" (OuterVolumeSpecName: "logs") pod "4da01425-1614-4383-810b-ff1a89832197" (UID: "4da01425-1614-4383-810b-ff1a89832197"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 16:30:25 crc kubenswrapper[4830]: I0227 16:30:25.887438 4830 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4da01425-1614-4383-810b-ff1a89832197-logs\") on node \"crc\" DevicePath \"\"" Feb 27 16:30:25 crc kubenswrapper[4830]: I0227 16:30:25.889284 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4da01425-1614-4383-810b-ff1a89832197-scripts" (OuterVolumeSpecName: "scripts") pod "4da01425-1614-4383-810b-ff1a89832197" (UID: "4da01425-1614-4383-810b-ff1a89832197"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:30:25 crc kubenswrapper[4830]: I0227 16:30:25.893087 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4da01425-1614-4383-810b-ff1a89832197-kube-api-access-jzhpt" (OuterVolumeSpecName: "kube-api-access-jzhpt") pod "4da01425-1614-4383-810b-ff1a89832197" (UID: "4da01425-1614-4383-810b-ff1a89832197"). 
InnerVolumeSpecName "kube-api-access-jzhpt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:30:25 crc kubenswrapper[4830]: I0227 16:30:25.936109 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4da01425-1614-4383-810b-ff1a89832197-config-data" (OuterVolumeSpecName: "config-data") pod "4da01425-1614-4383-810b-ff1a89832197" (UID: "4da01425-1614-4383-810b-ff1a89832197"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:30:25 crc kubenswrapper[4830]: I0227 16:30:25.947549 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4da01425-1614-4383-810b-ff1a89832197-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4da01425-1614-4383-810b-ff1a89832197" (UID: "4da01425-1614-4383-810b-ff1a89832197"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:30:25 crc kubenswrapper[4830]: I0227 16:30:25.976695 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4da01425-1614-4383-810b-ff1a89832197-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "4da01425-1614-4383-810b-ff1a89832197" (UID: "4da01425-1614-4383-810b-ff1a89832197"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:30:25 crc kubenswrapper[4830]: I0227 16:30:25.985429 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4da01425-1614-4383-810b-ff1a89832197-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "4da01425-1614-4383-810b-ff1a89832197" (UID: "4da01425-1614-4383-810b-ff1a89832197"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:30:25 crc kubenswrapper[4830]: I0227 16:30:25.989647 4830 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4da01425-1614-4383-810b-ff1a89832197-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 27 16:30:25 crc kubenswrapper[4830]: I0227 16:30:25.989670 4830 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4da01425-1614-4383-810b-ff1a89832197-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 27 16:30:25 crc kubenswrapper[4830]: I0227 16:30:25.989679 4830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4da01425-1614-4383-810b-ff1a89832197-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 16:30:25 crc kubenswrapper[4830]: I0227 16:30:25.989688 4830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4da01425-1614-4383-810b-ff1a89832197-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 16:30:25 crc kubenswrapper[4830]: I0227 16:30:25.989699 4830 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4da01425-1614-4383-810b-ff1a89832197-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 16:30:25 crc kubenswrapper[4830]: I0227 16:30:25.989707 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jzhpt\" (UniqueName: \"kubernetes.io/projected/4da01425-1614-4383-810b-ff1a89832197-kube-api-access-jzhpt\") on node \"crc\" DevicePath \"\"" Feb 27 16:30:26 crc kubenswrapper[4830]: I0227 16:30:26.224719 4830 generic.go:334] "Generic (PLEG): container finished" podID="4da01425-1614-4383-810b-ff1a89832197" containerID="e13c29a20be2e0d3d8fc761683b1442db2bdcb77f337bbbcc9f2f64f166246d2" exitCode=0 Feb 27 16:30:26 crc kubenswrapper[4830]: I0227 16:30:26.224816 4830 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5c55fdd8d8-tv8zp" event={"ID":"4da01425-1614-4383-810b-ff1a89832197","Type":"ContainerDied","Data":"e13c29a20be2e0d3d8fc761683b1442db2bdcb77f337bbbcc9f2f64f166246d2"} Feb 27 16:30:26 crc kubenswrapper[4830]: I0227 16:30:26.224823 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5c55fdd8d8-tv8zp" Feb 27 16:30:26 crc kubenswrapper[4830]: I0227 16:30:26.224848 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5c55fdd8d8-tv8zp" event={"ID":"4da01425-1614-4383-810b-ff1a89832197","Type":"ContainerDied","Data":"b233ee8776a60535dfe76755e1d36fbed27ebca58c1784af3bb02bec34cc6e3a"} Feb 27 16:30:26 crc kubenswrapper[4830]: I0227 16:30:26.224874 4830 scope.go:117] "RemoveContainer" containerID="e13c29a20be2e0d3d8fc761683b1442db2bdcb77f337bbbcc9f2f64f166246d2" Feb 27 16:30:26 crc kubenswrapper[4830]: I0227 16:30:26.232880 4830 generic.go:334] "Generic (PLEG): container finished" podID="da0f5d1f-1944-4347-8a18-fd946fb7ed6a" containerID="6c582c45fe25a3893424ec2ce2705ebcf999c507acb0c5f092150ee067d084e7" exitCode=0 Feb 27 16:30:26 crc kubenswrapper[4830]: I0227 16:30:26.232905 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"da0f5d1f-1944-4347-8a18-fd946fb7ed6a","Type":"ContainerDied","Data":"6c582c45fe25a3893424ec2ce2705ebcf999c507acb0c5f092150ee067d084e7"} Feb 27 16:30:26 crc kubenswrapper[4830]: I0227 16:30:26.259448 4830 scope.go:117] "RemoveContainer" containerID="cdc9a3387155de07c2eb226e985c0ee25a06eea0632b208c44e4fcc19ec8c984" Feb 27 16:30:26 crc kubenswrapper[4830]: I0227 16:30:26.287073 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-5c55fdd8d8-tv8zp"] Feb 27 16:30:26 crc kubenswrapper[4830]: I0227 16:30:26.291131 4830 scope.go:117] "RemoveContainer" containerID="e13c29a20be2e0d3d8fc761683b1442db2bdcb77f337bbbcc9f2f64f166246d2" 
Feb 27 16:30:26 crc kubenswrapper[4830]: E0227 16:30:26.302195 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e13c29a20be2e0d3d8fc761683b1442db2bdcb77f337bbbcc9f2f64f166246d2\": container with ID starting with e13c29a20be2e0d3d8fc761683b1442db2bdcb77f337bbbcc9f2f64f166246d2 not found: ID does not exist" containerID="e13c29a20be2e0d3d8fc761683b1442db2bdcb77f337bbbcc9f2f64f166246d2" Feb 27 16:30:26 crc kubenswrapper[4830]: I0227 16:30:26.302242 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e13c29a20be2e0d3d8fc761683b1442db2bdcb77f337bbbcc9f2f64f166246d2"} err="failed to get container status \"e13c29a20be2e0d3d8fc761683b1442db2bdcb77f337bbbcc9f2f64f166246d2\": rpc error: code = NotFound desc = could not find container \"e13c29a20be2e0d3d8fc761683b1442db2bdcb77f337bbbcc9f2f64f166246d2\": container with ID starting with e13c29a20be2e0d3d8fc761683b1442db2bdcb77f337bbbcc9f2f64f166246d2 not found: ID does not exist" Feb 27 16:30:26 crc kubenswrapper[4830]: I0227 16:30:26.302279 4830 scope.go:117] "RemoveContainer" containerID="cdc9a3387155de07c2eb226e985c0ee25a06eea0632b208c44e4fcc19ec8c984" Feb 27 16:30:26 crc kubenswrapper[4830]: E0227 16:30:26.302932 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cdc9a3387155de07c2eb226e985c0ee25a06eea0632b208c44e4fcc19ec8c984\": container with ID starting with cdc9a3387155de07c2eb226e985c0ee25a06eea0632b208c44e4fcc19ec8c984 not found: ID does not exist" containerID="cdc9a3387155de07c2eb226e985c0ee25a06eea0632b208c44e4fcc19ec8c984" Feb 27 16:30:26 crc kubenswrapper[4830]: I0227 16:30:26.303271 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cdc9a3387155de07c2eb226e985c0ee25a06eea0632b208c44e4fcc19ec8c984"} err="failed to get container status 
\"cdc9a3387155de07c2eb226e985c0ee25a06eea0632b208c44e4fcc19ec8c984\": rpc error: code = NotFound desc = could not find container \"cdc9a3387155de07c2eb226e985c0ee25a06eea0632b208c44e4fcc19ec8c984\": container with ID starting with cdc9a3387155de07c2eb226e985c0ee25a06eea0632b208c44e4fcc19ec8c984 not found: ID does not exist" Feb 27 16:30:26 crc kubenswrapper[4830]: I0227 16:30:26.303734 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-5c55fdd8d8-tv8zp"] Feb 27 16:30:26 crc kubenswrapper[4830]: I0227 16:30:26.458601 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 27 16:30:26 crc kubenswrapper[4830]: I0227 16:30:26.599811 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/da0f5d1f-1944-4347-8a18-fd946fb7ed6a-scripts\") pod \"da0f5d1f-1944-4347-8a18-fd946fb7ed6a\" (UID: \"da0f5d1f-1944-4347-8a18-fd946fb7ed6a\") " Feb 27 16:30:26 crc kubenswrapper[4830]: I0227 16:30:26.600185 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da0f5d1f-1944-4347-8a18-fd946fb7ed6a-combined-ca-bundle\") pod \"da0f5d1f-1944-4347-8a18-fd946fb7ed6a\" (UID: \"da0f5d1f-1944-4347-8a18-fd946fb7ed6a\") " Feb 27 16:30:26 crc kubenswrapper[4830]: I0227 16:30:26.600249 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cvrn7\" (UniqueName: \"kubernetes.io/projected/da0f5d1f-1944-4347-8a18-fd946fb7ed6a-kube-api-access-cvrn7\") pod \"da0f5d1f-1944-4347-8a18-fd946fb7ed6a\" (UID: \"da0f5d1f-1944-4347-8a18-fd946fb7ed6a\") " Feb 27 16:30:26 crc kubenswrapper[4830]: I0227 16:30:26.600366 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/da0f5d1f-1944-4347-8a18-fd946fb7ed6a-config-data-custom\") pod 
\"da0f5d1f-1944-4347-8a18-fd946fb7ed6a\" (UID: \"da0f5d1f-1944-4347-8a18-fd946fb7ed6a\") " Feb 27 16:30:26 crc kubenswrapper[4830]: I0227 16:30:26.600438 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da0f5d1f-1944-4347-8a18-fd946fb7ed6a-config-data\") pod \"da0f5d1f-1944-4347-8a18-fd946fb7ed6a\" (UID: \"da0f5d1f-1944-4347-8a18-fd946fb7ed6a\") " Feb 27 16:30:26 crc kubenswrapper[4830]: I0227 16:30:26.600465 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/da0f5d1f-1944-4347-8a18-fd946fb7ed6a-etc-machine-id\") pod \"da0f5d1f-1944-4347-8a18-fd946fb7ed6a\" (UID: \"da0f5d1f-1944-4347-8a18-fd946fb7ed6a\") " Feb 27 16:30:26 crc kubenswrapper[4830]: I0227 16:30:26.600825 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/da0f5d1f-1944-4347-8a18-fd946fb7ed6a-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "da0f5d1f-1944-4347-8a18-fd946fb7ed6a" (UID: "da0f5d1f-1944-4347-8a18-fd946fb7ed6a"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 16:30:26 crc kubenswrapper[4830]: I0227 16:30:26.619601 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da0f5d1f-1944-4347-8a18-fd946fb7ed6a-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "da0f5d1f-1944-4347-8a18-fd946fb7ed6a" (UID: "da0f5d1f-1944-4347-8a18-fd946fb7ed6a"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:30:26 crc kubenswrapper[4830]: I0227 16:30:26.619813 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da0f5d1f-1944-4347-8a18-fd946fb7ed6a-kube-api-access-cvrn7" (OuterVolumeSpecName: "kube-api-access-cvrn7") pod "da0f5d1f-1944-4347-8a18-fd946fb7ed6a" (UID: "da0f5d1f-1944-4347-8a18-fd946fb7ed6a"). InnerVolumeSpecName "kube-api-access-cvrn7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:30:26 crc kubenswrapper[4830]: I0227 16:30:26.621986 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da0f5d1f-1944-4347-8a18-fd946fb7ed6a-scripts" (OuterVolumeSpecName: "scripts") pod "da0f5d1f-1944-4347-8a18-fd946fb7ed6a" (UID: "da0f5d1f-1944-4347-8a18-fd946fb7ed6a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:30:26 crc kubenswrapper[4830]: I0227 16:30:26.671657 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da0f5d1f-1944-4347-8a18-fd946fb7ed6a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "da0f5d1f-1944-4347-8a18-fd946fb7ed6a" (UID: "da0f5d1f-1944-4347-8a18-fd946fb7ed6a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:30:26 crc kubenswrapper[4830]: I0227 16:30:26.703328 4830 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/da0f5d1f-1944-4347-8a18-fd946fb7ed6a-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 27 16:30:26 crc kubenswrapper[4830]: I0227 16:30:26.703374 4830 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/da0f5d1f-1944-4347-8a18-fd946fb7ed6a-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 27 16:30:26 crc kubenswrapper[4830]: I0227 16:30:26.703391 4830 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/da0f5d1f-1944-4347-8a18-fd946fb7ed6a-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 16:30:26 crc kubenswrapper[4830]: I0227 16:30:26.703405 4830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da0f5d1f-1944-4347-8a18-fd946fb7ed6a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 16:30:26 crc kubenswrapper[4830]: I0227 16:30:26.703420 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cvrn7\" (UniqueName: \"kubernetes.io/projected/da0f5d1f-1944-4347-8a18-fd946fb7ed6a-kube-api-access-cvrn7\") on node \"crc\" DevicePath \"\"" Feb 27 16:30:26 crc kubenswrapper[4830]: I0227 16:30:26.750770 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da0f5d1f-1944-4347-8a18-fd946fb7ed6a-config-data" (OuterVolumeSpecName: "config-data") pod "da0f5d1f-1944-4347-8a18-fd946fb7ed6a" (UID: "da0f5d1f-1944-4347-8a18-fd946fb7ed6a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:30:26 crc kubenswrapper[4830]: I0227 16:30:26.776239 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4da01425-1614-4383-810b-ff1a89832197" path="/var/lib/kubelet/pods/4da01425-1614-4383-810b-ff1a89832197/volumes" Feb 27 16:30:26 crc kubenswrapper[4830]: I0227 16:30:26.809684 4830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da0f5d1f-1944-4347-8a18-fd946fb7ed6a-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 16:30:26 crc kubenswrapper[4830]: I0227 16:30:26.876047 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-c6f44c475-twbzz"] Feb 27 16:30:26 crc kubenswrapper[4830]: E0227 16:30:26.876743 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da0f5d1f-1944-4347-8a18-fd946fb7ed6a" containerName="probe" Feb 27 16:30:26 crc kubenswrapper[4830]: I0227 16:30:26.876781 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="da0f5d1f-1944-4347-8a18-fd946fb7ed6a" containerName="probe" Feb 27 16:30:26 crc kubenswrapper[4830]: E0227 16:30:26.876795 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b05d69f2-31a8-4212-ad9a-8f2bec833edd" containerName="dnsmasq-dns" Feb 27 16:30:26 crc kubenswrapper[4830]: I0227 16:30:26.876828 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="b05d69f2-31a8-4212-ad9a-8f2bec833edd" containerName="dnsmasq-dns" Feb 27 16:30:26 crc kubenswrapper[4830]: E0227 16:30:26.876856 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4da01425-1614-4383-810b-ff1a89832197" containerName="placement-log" Feb 27 16:30:26 crc kubenswrapper[4830]: I0227 16:30:26.876862 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="4da01425-1614-4383-810b-ff1a89832197" containerName="placement-log" Feb 27 16:30:26 crc kubenswrapper[4830]: E0227 16:30:26.876894 4830 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="b05d69f2-31a8-4212-ad9a-8f2bec833edd" containerName="init" Feb 27 16:30:26 crc kubenswrapper[4830]: I0227 16:30:26.876901 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="b05d69f2-31a8-4212-ad9a-8f2bec833edd" containerName="init" Feb 27 16:30:26 crc kubenswrapper[4830]: E0227 16:30:26.876910 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da0f5d1f-1944-4347-8a18-fd946fb7ed6a" containerName="cinder-scheduler" Feb 27 16:30:26 crc kubenswrapper[4830]: I0227 16:30:26.876916 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="da0f5d1f-1944-4347-8a18-fd946fb7ed6a" containerName="cinder-scheduler" Feb 27 16:30:26 crc kubenswrapper[4830]: E0227 16:30:26.876933 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4da01425-1614-4383-810b-ff1a89832197" containerName="placement-api" Feb 27 16:30:26 crc kubenswrapper[4830]: I0227 16:30:26.876939 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="4da01425-1614-4383-810b-ff1a89832197" containerName="placement-api" Feb 27 16:30:26 crc kubenswrapper[4830]: I0227 16:30:26.877451 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="4da01425-1614-4383-810b-ff1a89832197" containerName="placement-api" Feb 27 16:30:26 crc kubenswrapper[4830]: I0227 16:30:26.877477 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="da0f5d1f-1944-4347-8a18-fd946fb7ed6a" containerName="probe" Feb 27 16:30:26 crc kubenswrapper[4830]: I0227 16:30:26.877491 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="4da01425-1614-4383-810b-ff1a89832197" containerName="placement-log" Feb 27 16:30:26 crc kubenswrapper[4830]: I0227 16:30:26.877507 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="b05d69f2-31a8-4212-ad9a-8f2bec833edd" containerName="dnsmasq-dns" Feb 27 16:30:26 crc kubenswrapper[4830]: I0227 16:30:26.877523 4830 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="da0f5d1f-1944-4347-8a18-fd946fb7ed6a" containerName="cinder-scheduler" Feb 27 16:30:26 crc kubenswrapper[4830]: I0227 16:30:26.878647 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-c6f44c475-twbzz" Feb 27 16:30:26 crc kubenswrapper[4830]: I0227 16:30:26.881320 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 27 16:30:26 crc kubenswrapper[4830]: I0227 16:30:26.881480 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Feb 27 16:30:26 crc kubenswrapper[4830]: I0227 16:30:26.881589 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Feb 27 16:30:26 crc kubenswrapper[4830]: I0227 16:30:26.897616 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-c6f44c475-twbzz"] Feb 27 16:30:27 crc kubenswrapper[4830]: I0227 16:30:27.015619 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38b57350-6ca0-4090-876b-7727c983cf52-config-data\") pod \"swift-proxy-c6f44c475-twbzz\" (UID: \"38b57350-6ca0-4090-876b-7727c983cf52\") " pod="openstack/swift-proxy-c6f44c475-twbzz" Feb 27 16:30:27 crc kubenswrapper[4830]: I0227 16:30:27.015770 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/38b57350-6ca0-4090-876b-7727c983cf52-log-httpd\") pod \"swift-proxy-c6f44c475-twbzz\" (UID: \"38b57350-6ca0-4090-876b-7727c983cf52\") " pod="openstack/swift-proxy-c6f44c475-twbzz" Feb 27 16:30:27 crc kubenswrapper[4830]: I0227 16:30:27.015792 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/38b57350-6ca0-4090-876b-7727c983cf52-etc-swift\") pod 
\"swift-proxy-c6f44c475-twbzz\" (UID: \"38b57350-6ca0-4090-876b-7727c983cf52\") " pod="openstack/swift-proxy-c6f44c475-twbzz" Feb 27 16:30:27 crc kubenswrapper[4830]: I0227 16:30:27.015916 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38b57350-6ca0-4090-876b-7727c983cf52-combined-ca-bundle\") pod \"swift-proxy-c6f44c475-twbzz\" (UID: \"38b57350-6ca0-4090-876b-7727c983cf52\") " pod="openstack/swift-proxy-c6f44c475-twbzz" Feb 27 16:30:27 crc kubenswrapper[4830]: I0227 16:30:27.016203 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/38b57350-6ca0-4090-876b-7727c983cf52-run-httpd\") pod \"swift-proxy-c6f44c475-twbzz\" (UID: \"38b57350-6ca0-4090-876b-7727c983cf52\") " pod="openstack/swift-proxy-c6f44c475-twbzz" Feb 27 16:30:27 crc kubenswrapper[4830]: I0227 16:30:27.016225 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/38b57350-6ca0-4090-876b-7727c983cf52-internal-tls-certs\") pod \"swift-proxy-c6f44c475-twbzz\" (UID: \"38b57350-6ca0-4090-876b-7727c983cf52\") " pod="openstack/swift-proxy-c6f44c475-twbzz" Feb 27 16:30:27 crc kubenswrapper[4830]: I0227 16:30:27.016248 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/38b57350-6ca0-4090-876b-7727c983cf52-public-tls-certs\") pod \"swift-proxy-c6f44c475-twbzz\" (UID: \"38b57350-6ca0-4090-876b-7727c983cf52\") " pod="openstack/swift-proxy-c6f44c475-twbzz" Feb 27 16:30:27 crc kubenswrapper[4830]: I0227 16:30:27.016437 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgkhr\" (UniqueName: 
\"kubernetes.io/projected/38b57350-6ca0-4090-876b-7727c983cf52-kube-api-access-qgkhr\") pod \"swift-proxy-c6f44c475-twbzz\" (UID: \"38b57350-6ca0-4090-876b-7727c983cf52\") " pod="openstack/swift-proxy-c6f44c475-twbzz" Feb 27 16:30:27 crc kubenswrapper[4830]: I0227 16:30:27.119281 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38b57350-6ca0-4090-876b-7727c983cf52-config-data\") pod \"swift-proxy-c6f44c475-twbzz\" (UID: \"38b57350-6ca0-4090-876b-7727c983cf52\") " pod="openstack/swift-proxy-c6f44c475-twbzz" Feb 27 16:30:27 crc kubenswrapper[4830]: I0227 16:30:27.119602 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/38b57350-6ca0-4090-876b-7727c983cf52-log-httpd\") pod \"swift-proxy-c6f44c475-twbzz\" (UID: \"38b57350-6ca0-4090-876b-7727c983cf52\") " pod="openstack/swift-proxy-c6f44c475-twbzz" Feb 27 16:30:27 crc kubenswrapper[4830]: I0227 16:30:27.119624 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/38b57350-6ca0-4090-876b-7727c983cf52-etc-swift\") pod \"swift-proxy-c6f44c475-twbzz\" (UID: \"38b57350-6ca0-4090-876b-7727c983cf52\") " pod="openstack/swift-proxy-c6f44c475-twbzz" Feb 27 16:30:27 crc kubenswrapper[4830]: I0227 16:30:27.119645 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38b57350-6ca0-4090-876b-7727c983cf52-combined-ca-bundle\") pod \"swift-proxy-c6f44c475-twbzz\" (UID: \"38b57350-6ca0-4090-876b-7727c983cf52\") " pod="openstack/swift-proxy-c6f44c475-twbzz" Feb 27 16:30:27 crc kubenswrapper[4830]: I0227 16:30:27.119670 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/38b57350-6ca0-4090-876b-7727c983cf52-run-httpd\") pod 
\"swift-proxy-c6f44c475-twbzz\" (UID: \"38b57350-6ca0-4090-876b-7727c983cf52\") " pod="openstack/swift-proxy-c6f44c475-twbzz" Feb 27 16:30:27 crc kubenswrapper[4830]: I0227 16:30:27.119684 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/38b57350-6ca0-4090-876b-7727c983cf52-internal-tls-certs\") pod \"swift-proxy-c6f44c475-twbzz\" (UID: \"38b57350-6ca0-4090-876b-7727c983cf52\") " pod="openstack/swift-proxy-c6f44c475-twbzz" Feb 27 16:30:27 crc kubenswrapper[4830]: I0227 16:30:27.119704 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/38b57350-6ca0-4090-876b-7727c983cf52-public-tls-certs\") pod \"swift-proxy-c6f44c475-twbzz\" (UID: \"38b57350-6ca0-4090-876b-7727c983cf52\") " pod="openstack/swift-proxy-c6f44c475-twbzz" Feb 27 16:30:27 crc kubenswrapper[4830]: I0227 16:30:27.119728 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qgkhr\" (UniqueName: \"kubernetes.io/projected/38b57350-6ca0-4090-876b-7727c983cf52-kube-api-access-qgkhr\") pod \"swift-proxy-c6f44c475-twbzz\" (UID: \"38b57350-6ca0-4090-876b-7727c983cf52\") " pod="openstack/swift-proxy-c6f44c475-twbzz" Feb 27 16:30:27 crc kubenswrapper[4830]: I0227 16:30:27.121461 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/38b57350-6ca0-4090-876b-7727c983cf52-run-httpd\") pod \"swift-proxy-c6f44c475-twbzz\" (UID: \"38b57350-6ca0-4090-876b-7727c983cf52\") " pod="openstack/swift-proxy-c6f44c475-twbzz" Feb 27 16:30:27 crc kubenswrapper[4830]: I0227 16:30:27.122210 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/38b57350-6ca0-4090-876b-7727c983cf52-log-httpd\") pod \"swift-proxy-c6f44c475-twbzz\" (UID: \"38b57350-6ca0-4090-876b-7727c983cf52\") 
" pod="openstack/swift-proxy-c6f44c475-twbzz" Feb 27 16:30:27 crc kubenswrapper[4830]: I0227 16:30:27.126848 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/38b57350-6ca0-4090-876b-7727c983cf52-public-tls-certs\") pod \"swift-proxy-c6f44c475-twbzz\" (UID: \"38b57350-6ca0-4090-876b-7727c983cf52\") " pod="openstack/swift-proxy-c6f44c475-twbzz" Feb 27 16:30:27 crc kubenswrapper[4830]: I0227 16:30:27.127126 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/38b57350-6ca0-4090-876b-7727c983cf52-etc-swift\") pod \"swift-proxy-c6f44c475-twbzz\" (UID: \"38b57350-6ca0-4090-876b-7727c983cf52\") " pod="openstack/swift-proxy-c6f44c475-twbzz" Feb 27 16:30:27 crc kubenswrapper[4830]: I0227 16:30:27.127195 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38b57350-6ca0-4090-876b-7727c983cf52-combined-ca-bundle\") pod \"swift-proxy-c6f44c475-twbzz\" (UID: \"38b57350-6ca0-4090-876b-7727c983cf52\") " pod="openstack/swift-proxy-c6f44c475-twbzz" Feb 27 16:30:27 crc kubenswrapper[4830]: I0227 16:30:27.127863 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38b57350-6ca0-4090-876b-7727c983cf52-config-data\") pod \"swift-proxy-c6f44c475-twbzz\" (UID: \"38b57350-6ca0-4090-876b-7727c983cf52\") " pod="openstack/swift-proxy-c6f44c475-twbzz" Feb 27 16:30:27 crc kubenswrapper[4830]: I0227 16:30:27.131786 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/38b57350-6ca0-4090-876b-7727c983cf52-internal-tls-certs\") pod \"swift-proxy-c6f44c475-twbzz\" (UID: \"38b57350-6ca0-4090-876b-7727c983cf52\") " pod="openstack/swift-proxy-c6f44c475-twbzz" Feb 27 16:30:27 crc kubenswrapper[4830]: I0227 16:30:27.141642 
4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qgkhr\" (UniqueName: \"kubernetes.io/projected/38b57350-6ca0-4090-876b-7727c983cf52-kube-api-access-qgkhr\") pod \"swift-proxy-c6f44c475-twbzz\" (UID: \"38b57350-6ca0-4090-876b-7727c983cf52\") " pod="openstack/swift-proxy-c6f44c475-twbzz" Feb 27 16:30:27 crc kubenswrapper[4830]: I0227 16:30:27.200653 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-c6f44c475-twbzz" Feb 27 16:30:27 crc kubenswrapper[4830]: I0227 16:30:27.266717 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"da0f5d1f-1944-4347-8a18-fd946fb7ed6a","Type":"ContainerDied","Data":"910da7804241c63309a92f1879853230e879cdaf110b2204498a53e07581f3cb"} Feb 27 16:30:27 crc kubenswrapper[4830]: I0227 16:30:27.266767 4830 scope.go:117] "RemoveContainer" containerID="4784515ee13ec5f2e9c5dbdedc925b93fda50d5cc115163838e530a4050b1298" Feb 27 16:30:27 crc kubenswrapper[4830]: I0227 16:30:27.266865 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 27 16:30:27 crc kubenswrapper[4830]: I0227 16:30:27.304555 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 27 16:30:27 crc kubenswrapper[4830]: I0227 16:30:27.317113 4830 scope.go:117] "RemoveContainer" containerID="6c582c45fe25a3893424ec2ce2705ebcf999c507acb0c5f092150ee067d084e7" Feb 27 16:30:27 crc kubenswrapper[4830]: I0227 16:30:27.318308 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 27 16:30:27 crc kubenswrapper[4830]: I0227 16:30:27.332997 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 27 16:30:27 crc kubenswrapper[4830]: I0227 16:30:27.334561 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 27 16:30:27 crc kubenswrapper[4830]: I0227 16:30:27.336620 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 27 16:30:27 crc kubenswrapper[4830]: I0227 16:30:27.338700 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 27 16:30:27 crc kubenswrapper[4830]: I0227 16:30:27.426803 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6d6ca92a-3e98-4628-8936-37032cf27463-scripts\") pod \"cinder-scheduler-0\" (UID: \"6d6ca92a-3e98-4628-8936-37032cf27463\") " pod="openstack/cinder-scheduler-0" Feb 27 16:30:27 crc kubenswrapper[4830]: I0227 16:30:27.426873 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6d6ca92a-3e98-4628-8936-37032cf27463-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"6d6ca92a-3e98-4628-8936-37032cf27463\") " pod="openstack/cinder-scheduler-0" Feb 27 16:30:27 crc kubenswrapper[4830]: I0227 16:30:27.426911 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d6ca92a-3e98-4628-8936-37032cf27463-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"6d6ca92a-3e98-4628-8936-37032cf27463\") " pod="openstack/cinder-scheduler-0" Feb 27 16:30:27 crc kubenswrapper[4830]: I0227 16:30:27.426967 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9khfg\" (UniqueName: \"kubernetes.io/projected/6d6ca92a-3e98-4628-8936-37032cf27463-kube-api-access-9khfg\") pod \"cinder-scheduler-0\" (UID: \"6d6ca92a-3e98-4628-8936-37032cf27463\") " pod="openstack/cinder-scheduler-0" Feb 27 16:30:27 crc kubenswrapper[4830]: I0227 
16:30:27.427091 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6d6ca92a-3e98-4628-8936-37032cf27463-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"6d6ca92a-3e98-4628-8936-37032cf27463\") " pod="openstack/cinder-scheduler-0" Feb 27 16:30:27 crc kubenswrapper[4830]: I0227 16:30:27.427332 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d6ca92a-3e98-4628-8936-37032cf27463-config-data\") pod \"cinder-scheduler-0\" (UID: \"6d6ca92a-3e98-4628-8936-37032cf27463\") " pod="openstack/cinder-scheduler-0" Feb 27 16:30:27 crc kubenswrapper[4830]: I0227 16:30:27.528830 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d6ca92a-3e98-4628-8936-37032cf27463-config-data\") pod \"cinder-scheduler-0\" (UID: \"6d6ca92a-3e98-4628-8936-37032cf27463\") " pod="openstack/cinder-scheduler-0" Feb 27 16:30:27 crc kubenswrapper[4830]: I0227 16:30:27.528873 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6d6ca92a-3e98-4628-8936-37032cf27463-scripts\") pod \"cinder-scheduler-0\" (UID: \"6d6ca92a-3e98-4628-8936-37032cf27463\") " pod="openstack/cinder-scheduler-0" Feb 27 16:30:27 crc kubenswrapper[4830]: I0227 16:30:27.528963 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6d6ca92a-3e98-4628-8936-37032cf27463-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"6d6ca92a-3e98-4628-8936-37032cf27463\") " pod="openstack/cinder-scheduler-0" Feb 27 16:30:27 crc kubenswrapper[4830]: I0227 16:30:27.528999 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/6d6ca92a-3e98-4628-8936-37032cf27463-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"6d6ca92a-3e98-4628-8936-37032cf27463\") " pod="openstack/cinder-scheduler-0" Feb 27 16:30:27 crc kubenswrapper[4830]: I0227 16:30:27.529017 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9khfg\" (UniqueName: \"kubernetes.io/projected/6d6ca92a-3e98-4628-8936-37032cf27463-kube-api-access-9khfg\") pod \"cinder-scheduler-0\" (UID: \"6d6ca92a-3e98-4628-8936-37032cf27463\") " pod="openstack/cinder-scheduler-0" Feb 27 16:30:27 crc kubenswrapper[4830]: I0227 16:30:27.529058 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6d6ca92a-3e98-4628-8936-37032cf27463-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"6d6ca92a-3e98-4628-8936-37032cf27463\") " pod="openstack/cinder-scheduler-0" Feb 27 16:30:27 crc kubenswrapper[4830]: I0227 16:30:27.529159 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6d6ca92a-3e98-4628-8936-37032cf27463-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"6d6ca92a-3e98-4628-8936-37032cf27463\") " pod="openstack/cinder-scheduler-0" Feb 27 16:30:27 crc kubenswrapper[4830]: I0227 16:30:27.548754 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d6ca92a-3e98-4628-8936-37032cf27463-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"6d6ca92a-3e98-4628-8936-37032cf27463\") " pod="openstack/cinder-scheduler-0" Feb 27 16:30:27 crc kubenswrapper[4830]: I0227 16:30:27.548922 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6d6ca92a-3e98-4628-8936-37032cf27463-config-data-custom\") pod \"cinder-scheduler-0\" (UID: 
\"6d6ca92a-3e98-4628-8936-37032cf27463\") " pod="openstack/cinder-scheduler-0" Feb 27 16:30:27 crc kubenswrapper[4830]: I0227 16:30:27.549474 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6d6ca92a-3e98-4628-8936-37032cf27463-scripts\") pod \"cinder-scheduler-0\" (UID: \"6d6ca92a-3e98-4628-8936-37032cf27463\") " pod="openstack/cinder-scheduler-0" Feb 27 16:30:27 crc kubenswrapper[4830]: I0227 16:30:27.554501 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9khfg\" (UniqueName: \"kubernetes.io/projected/6d6ca92a-3e98-4628-8936-37032cf27463-kube-api-access-9khfg\") pod \"cinder-scheduler-0\" (UID: \"6d6ca92a-3e98-4628-8936-37032cf27463\") " pod="openstack/cinder-scheduler-0" Feb 27 16:30:27 crc kubenswrapper[4830]: I0227 16:30:27.562844 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d6ca92a-3e98-4628-8936-37032cf27463-config-data\") pod \"cinder-scheduler-0\" (UID: \"6d6ca92a-3e98-4628-8936-37032cf27463\") " pod="openstack/cinder-scheduler-0" Feb 27 16:30:27 crc kubenswrapper[4830]: I0227 16:30:27.765437 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 27 16:30:27 crc kubenswrapper[4830]: I0227 16:30:27.766635 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5d54db5966-xcg7l" Feb 27 16:30:27 crc kubenswrapper[4830]: I0227 16:30:27.857781 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 27 16:30:27 crc kubenswrapper[4830]: I0227 16:30:27.858052 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="aa935bfb-ebfd-4aa9-abc3-84d118252abe" containerName="ceilometer-central-agent" containerID="cri-o://9d7903ee5c9c8d27d585b4910f200bfb80e3e1da5bdfb566e661396da94d6a68" gracePeriod=30 Feb 27 16:30:27 crc kubenswrapper[4830]: I0227 16:30:27.858178 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="aa935bfb-ebfd-4aa9-abc3-84d118252abe" containerName="proxy-httpd" containerID="cri-o://78fb851103bba5ab46085de10cfd0d141bab5e2bf3115eeb51d3305f719fb23b" gracePeriod=30 Feb 27 16:30:27 crc kubenswrapper[4830]: I0227 16:30:27.858213 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="aa935bfb-ebfd-4aa9-abc3-84d118252abe" containerName="sg-core" containerID="cri-o://10d669d0a72502efb3bd8086dffa6db237238897617d2d5aa7425d67a1a8b135" gracePeriod=30 Feb 27 16:30:27 crc kubenswrapper[4830]: I0227 16:30:27.858242 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="aa935bfb-ebfd-4aa9-abc3-84d118252abe" containerName="ceilometer-notification-agent" containerID="cri-o://f70612a9d7987bfbe011c9d173a99294117f3554823216cc62088541c06772f4" gracePeriod=30 Feb 27 16:30:27 crc kubenswrapper[4830]: I0227 16:30:27.868072 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="aa935bfb-ebfd-4aa9-abc3-84d118252abe" 
containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.162:3000/\": EOF" Feb 27 16:30:27 crc kubenswrapper[4830]: I0227 16:30:27.906575 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-c6f44c475-twbzz"] Feb 27 16:30:28 crc kubenswrapper[4830]: I0227 16:30:28.280731 4830 generic.go:334] "Generic (PLEG): container finished" podID="aa935bfb-ebfd-4aa9-abc3-84d118252abe" containerID="78fb851103bba5ab46085de10cfd0d141bab5e2bf3115eeb51d3305f719fb23b" exitCode=0 Feb 27 16:30:28 crc kubenswrapper[4830]: I0227 16:30:28.281033 4830 generic.go:334] "Generic (PLEG): container finished" podID="aa935bfb-ebfd-4aa9-abc3-84d118252abe" containerID="10d669d0a72502efb3bd8086dffa6db237238897617d2d5aa7425d67a1a8b135" exitCode=2 Feb 27 16:30:28 crc kubenswrapper[4830]: I0227 16:30:28.280820 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aa935bfb-ebfd-4aa9-abc3-84d118252abe","Type":"ContainerDied","Data":"78fb851103bba5ab46085de10cfd0d141bab5e2bf3115eeb51d3305f719fb23b"} Feb 27 16:30:28 crc kubenswrapper[4830]: I0227 16:30:28.281128 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aa935bfb-ebfd-4aa9-abc3-84d118252abe","Type":"ContainerDied","Data":"10d669d0a72502efb3bd8086dffa6db237238897617d2d5aa7425d67a1a8b135"} Feb 27 16:30:28 crc kubenswrapper[4830]: I0227 16:30:28.801189 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da0f5d1f-1944-4347-8a18-fd946fb7ed6a" path="/var/lib/kubelet/pods/da0f5d1f-1944-4347-8a18-fd946fb7ed6a/volumes" Feb 27 16:30:29 crc kubenswrapper[4830]: I0227 16:30:29.296239 4830 generic.go:334] "Generic (PLEG): container finished" podID="aa935bfb-ebfd-4aa9-abc3-84d118252abe" containerID="9d7903ee5c9c8d27d585b4910f200bfb80e3e1da5bdfb566e661396da94d6a68" exitCode=0 Feb 27 16:30:29 crc kubenswrapper[4830]: I0227 16:30:29.296285 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"aa935bfb-ebfd-4aa9-abc3-84d118252abe","Type":"ContainerDied","Data":"9d7903ee5c9c8d27d585b4910f200bfb80e3e1da5bdfb566e661396da94d6a68"} Feb 27 16:30:29 crc kubenswrapper[4830]: I0227 16:30:29.459654 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5d54db5966-xcg7l" Feb 27 16:30:29 crc kubenswrapper[4830]: I0227 16:30:29.520626 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-86c4877d94-j48gv"] Feb 27 16:30:29 crc kubenswrapper[4830]: I0227 16:30:29.529219 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-86c4877d94-j48gv" podUID="b6c68417-9771-4ad5-acfa-b25ddda70e33" containerName="barbican-api-log" containerID="cri-o://87fe0cf182a5dc688fa01ab19965899ca6f4035532e1b667dffb3b4e0f3cee8a" gracePeriod=30 Feb 27 16:30:29 crc kubenswrapper[4830]: I0227 16:30:29.529617 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-86c4877d94-j48gv" podUID="b6c68417-9771-4ad5-acfa-b25ddda70e33" containerName="barbican-api" containerID="cri-o://f6f9eacfd59446aa4e25953cd1e74800b15b5cb949be8f1c201f6b98ceddfaea" gracePeriod=30 Feb 27 16:30:30 crc kubenswrapper[4830]: I0227 16:30:30.321085 4830 generic.go:334] "Generic (PLEG): container finished" podID="b6c68417-9771-4ad5-acfa-b25ddda70e33" containerID="87fe0cf182a5dc688fa01ab19965899ca6f4035532e1b667dffb3b4e0f3cee8a" exitCode=143 Feb 27 16:30:30 crc kubenswrapper[4830]: I0227 16:30:30.321135 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-86c4877d94-j48gv" event={"ID":"b6c68417-9771-4ad5-acfa-b25ddda70e33","Type":"ContainerDied","Data":"87fe0cf182a5dc688fa01ab19965899ca6f4035532e1b667dffb3b4e0f3cee8a"} Feb 27 16:30:31 crc kubenswrapper[4830]: I0227 16:30:31.434067 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Feb 27 
16:30:32 crc kubenswrapper[4830]: I0227 16:30:32.344497 4830 generic.go:334] "Generic (PLEG): container finished" podID="aa935bfb-ebfd-4aa9-abc3-84d118252abe" containerID="f70612a9d7987bfbe011c9d173a99294117f3554823216cc62088541c06772f4" exitCode=0 Feb 27 16:30:32 crc kubenswrapper[4830]: I0227 16:30:32.344560 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aa935bfb-ebfd-4aa9-abc3-84d118252abe","Type":"ContainerDied","Data":"f70612a9d7987bfbe011c9d173a99294117f3554823216cc62088541c06772f4"} Feb 27 16:30:32 crc kubenswrapper[4830]: I0227 16:30:32.707749 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-86c4877d94-j48gv" podUID="b6c68417-9771-4ad5-acfa-b25ddda70e33" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.167:9311/healthcheck\": read tcp 10.217.0.2:49228->10.217.0.167:9311: read: connection reset by peer" Feb 27 16:30:32 crc kubenswrapper[4830]: I0227 16:30:32.707823 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-86c4877d94-j48gv" podUID="b6c68417-9771-4ad5-acfa-b25ddda70e33" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.167:9311/healthcheck\": read tcp 10.217.0.2:49214->10.217.0.167:9311: read: connection reset by peer" Feb 27 16:30:33 crc kubenswrapper[4830]: I0227 16:30:33.361833 4830 generic.go:334] "Generic (PLEG): container finished" podID="b6c68417-9771-4ad5-acfa-b25ddda70e33" containerID="f6f9eacfd59446aa4e25953cd1e74800b15b5cb949be8f1c201f6b98ceddfaea" exitCode=0 Feb 27 16:30:33 crc kubenswrapper[4830]: I0227 16:30:33.361878 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-86c4877d94-j48gv" event={"ID":"b6c68417-9771-4ad5-acfa-b25ddda70e33","Type":"ContainerDied","Data":"f6f9eacfd59446aa4e25953cd1e74800b15b5cb949be8f1c201f6b98ceddfaea"} Feb 27 16:30:34 crc kubenswrapper[4830]: I0227 16:30:34.346445 4830 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-86c4877d94-j48gv" Feb 27 16:30:34 crc kubenswrapper[4830]: I0227 16:30:34.380678 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-86c4877d94-j48gv" event={"ID":"b6c68417-9771-4ad5-acfa-b25ddda70e33","Type":"ContainerDied","Data":"e0a53804b4e17fb255d28d1f34ee08b9a66644b29b7ada9f6f591f074a17fa8c"} Feb 27 16:30:34 crc kubenswrapper[4830]: I0227 16:30:34.380776 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-86c4877d94-j48gv" Feb 27 16:30:34 crc kubenswrapper[4830]: I0227 16:30:34.381254 4830 scope.go:117] "RemoveContainer" containerID="f6f9eacfd59446aa4e25953cd1e74800b15b5cb949be8f1c201f6b98ceddfaea" Feb 27 16:30:34 crc kubenswrapper[4830]: I0227 16:30:34.382587 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"3482e9fb-53ae-4908-87fc-4096c5b26b76","Type":"ContainerStarted","Data":"ebe94bb0443ae2939345bc80a179e9644e55c467b0fc2c9d6043e5cff481e239"} Feb 27 16:30:34 crc kubenswrapper[4830]: I0227 16:30:34.390729 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-c6f44c475-twbzz" event={"ID":"38b57350-6ca0-4090-876b-7727c983cf52","Type":"ContainerStarted","Data":"4379a4562487a2f829fd847e713d7b48e4f30ff72dfa48612a5cee4351449110"} Feb 27 16:30:34 crc kubenswrapper[4830]: I0227 16:30:34.390776 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-c6f44c475-twbzz" event={"ID":"38b57350-6ca0-4090-876b-7727c983cf52","Type":"ContainerStarted","Data":"73b5f31020bdda84b1e0be41fcac15122bcecb86520bab6e99fc3e9b00a4627b"} Feb 27 16:30:34 crc kubenswrapper[4830]: I0227 16:30:34.409612 4830 scope.go:117] "RemoveContainer" containerID="87fe0cf182a5dc688fa01ab19965899ca6f4035532e1b667dffb3b4e0f3cee8a" Feb 27 16:30:34 crc kubenswrapper[4830]: I0227 16:30:34.415799 4830 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=1.774925508 podStartE2EDuration="12.415780712s" podCreationTimestamp="2026-02-27 16:30:22 +0000 UTC" firstStartedPulling="2026-02-27 16:30:23.40555573 +0000 UTC m=+1419.494828193" lastFinishedPulling="2026-02-27 16:30:34.046410934 +0000 UTC m=+1430.135683397" observedRunningTime="2026-02-27 16:30:34.406416096 +0000 UTC m=+1430.495688559" watchObservedRunningTime="2026-02-27 16:30:34.415780712 +0000 UTC m=+1430.505053175" Feb 27 16:30:34 crc kubenswrapper[4830]: I0227 16:30:34.453623 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 27 16:30:34 crc kubenswrapper[4830]: I0227 16:30:34.502708 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6c68417-9771-4ad5-acfa-b25ddda70e33-config-data\") pod \"b6c68417-9771-4ad5-acfa-b25ddda70e33\" (UID: \"b6c68417-9771-4ad5-acfa-b25ddda70e33\") " Feb 27 16:30:34 crc kubenswrapper[4830]: I0227 16:30:34.502870 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6c68417-9771-4ad5-acfa-b25ddda70e33-combined-ca-bundle\") pod \"b6c68417-9771-4ad5-acfa-b25ddda70e33\" (UID: \"b6c68417-9771-4ad5-acfa-b25ddda70e33\") " Feb 27 16:30:34 crc kubenswrapper[4830]: I0227 16:30:34.502971 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b6c68417-9771-4ad5-acfa-b25ddda70e33-logs\") pod \"b6c68417-9771-4ad5-acfa-b25ddda70e33\" (UID: \"b6c68417-9771-4ad5-acfa-b25ddda70e33\") " Feb 27 16:30:34 crc kubenswrapper[4830]: I0227 16:30:34.503008 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wdnmh\" (UniqueName: \"kubernetes.io/projected/b6c68417-9771-4ad5-acfa-b25ddda70e33-kube-api-access-wdnmh\") pod 
\"b6c68417-9771-4ad5-acfa-b25ddda70e33\" (UID: \"b6c68417-9771-4ad5-acfa-b25ddda70e33\") " Feb 27 16:30:34 crc kubenswrapper[4830]: I0227 16:30:34.503073 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b6c68417-9771-4ad5-acfa-b25ddda70e33-config-data-custom\") pod \"b6c68417-9771-4ad5-acfa-b25ddda70e33\" (UID: \"b6c68417-9771-4ad5-acfa-b25ddda70e33\") " Feb 27 16:30:34 crc kubenswrapper[4830]: I0227 16:30:34.503677 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b6c68417-9771-4ad5-acfa-b25ddda70e33-logs" (OuterVolumeSpecName: "logs") pod "b6c68417-9771-4ad5-acfa-b25ddda70e33" (UID: "b6c68417-9771-4ad5-acfa-b25ddda70e33"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 16:30:34 crc kubenswrapper[4830]: I0227 16:30:34.508420 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6c68417-9771-4ad5-acfa-b25ddda70e33-kube-api-access-wdnmh" (OuterVolumeSpecName: "kube-api-access-wdnmh") pod "b6c68417-9771-4ad5-acfa-b25ddda70e33" (UID: "b6c68417-9771-4ad5-acfa-b25ddda70e33"). InnerVolumeSpecName "kube-api-access-wdnmh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:30:34 crc kubenswrapper[4830]: I0227 16:30:34.509589 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6c68417-9771-4ad5-acfa-b25ddda70e33-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "b6c68417-9771-4ad5-acfa-b25ddda70e33" (UID: "b6c68417-9771-4ad5-acfa-b25ddda70e33"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:30:34 crc kubenswrapper[4830]: I0227 16:30:34.550504 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6c68417-9771-4ad5-acfa-b25ddda70e33-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b6c68417-9771-4ad5-acfa-b25ddda70e33" (UID: "b6c68417-9771-4ad5-acfa-b25ddda70e33"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:30:34 crc kubenswrapper[4830]: I0227 16:30:34.581436 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 27 16:30:34 crc kubenswrapper[4830]: I0227 16:30:34.607483 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa935bfb-ebfd-4aa9-abc3-84d118252abe-combined-ca-bundle\") pod \"aa935bfb-ebfd-4aa9-abc3-84d118252abe\" (UID: \"aa935bfb-ebfd-4aa9-abc3-84d118252abe\") " Feb 27 16:30:34 crc kubenswrapper[4830]: I0227 16:30:34.607726 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sdqcg\" (UniqueName: \"kubernetes.io/projected/aa935bfb-ebfd-4aa9-abc3-84d118252abe-kube-api-access-sdqcg\") pod \"aa935bfb-ebfd-4aa9-abc3-84d118252abe\" (UID: \"aa935bfb-ebfd-4aa9-abc3-84d118252abe\") " Feb 27 16:30:34 crc kubenswrapper[4830]: I0227 16:30:34.607758 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6c68417-9771-4ad5-acfa-b25ddda70e33-config-data" (OuterVolumeSpecName: "config-data") pod "b6c68417-9771-4ad5-acfa-b25ddda70e33" (UID: "b6c68417-9771-4ad5-acfa-b25ddda70e33"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:30:34 crc kubenswrapper[4830]: I0227 16:30:34.607873 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aa935bfb-ebfd-4aa9-abc3-84d118252abe-log-httpd\") pod \"aa935bfb-ebfd-4aa9-abc3-84d118252abe\" (UID: \"aa935bfb-ebfd-4aa9-abc3-84d118252abe\") " Feb 27 16:30:34 crc kubenswrapper[4830]: I0227 16:30:34.607905 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/aa935bfb-ebfd-4aa9-abc3-84d118252abe-sg-core-conf-yaml\") pod \"aa935bfb-ebfd-4aa9-abc3-84d118252abe\" (UID: \"aa935bfb-ebfd-4aa9-abc3-84d118252abe\") " Feb 27 16:30:34 crc kubenswrapper[4830]: I0227 16:30:34.607987 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6c68417-9771-4ad5-acfa-b25ddda70e33-config-data\") pod \"b6c68417-9771-4ad5-acfa-b25ddda70e33\" (UID: \"b6c68417-9771-4ad5-acfa-b25ddda70e33\") " Feb 27 16:30:34 crc kubenswrapper[4830]: I0227 16:30:34.608056 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aa935bfb-ebfd-4aa9-abc3-84d118252abe-run-httpd\") pod \"aa935bfb-ebfd-4aa9-abc3-84d118252abe\" (UID: \"aa935bfb-ebfd-4aa9-abc3-84d118252abe\") " Feb 27 16:30:34 crc kubenswrapper[4830]: I0227 16:30:34.608071 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa935bfb-ebfd-4aa9-abc3-84d118252abe-config-data\") pod \"aa935bfb-ebfd-4aa9-abc3-84d118252abe\" (UID: \"aa935bfb-ebfd-4aa9-abc3-84d118252abe\") " Feb 27 16:30:34 crc kubenswrapper[4830]: I0227 16:30:34.608103 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/aa935bfb-ebfd-4aa9-abc3-84d118252abe-scripts\") pod \"aa935bfb-ebfd-4aa9-abc3-84d118252abe\" (UID: \"aa935bfb-ebfd-4aa9-abc3-84d118252abe\") " Feb 27 16:30:34 crc kubenswrapper[4830]: I0227 16:30:34.609138 4830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6c68417-9771-4ad5-acfa-b25ddda70e33-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 16:30:34 crc kubenswrapper[4830]: I0227 16:30:34.609176 4830 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b6c68417-9771-4ad5-acfa-b25ddda70e33-logs\") on node \"crc\" DevicePath \"\"" Feb 27 16:30:34 crc kubenswrapper[4830]: I0227 16:30:34.609186 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wdnmh\" (UniqueName: \"kubernetes.io/projected/b6c68417-9771-4ad5-acfa-b25ddda70e33-kube-api-access-wdnmh\") on node \"crc\" DevicePath \"\"" Feb 27 16:30:34 crc kubenswrapper[4830]: I0227 16:30:34.609196 4830 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b6c68417-9771-4ad5-acfa-b25ddda70e33-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 27 16:30:34 crc kubenswrapper[4830]: W0227 16:30:34.609535 4830 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/b6c68417-9771-4ad5-acfa-b25ddda70e33/volumes/kubernetes.io~secret/config-data Feb 27 16:30:34 crc kubenswrapper[4830]: I0227 16:30:34.609553 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6c68417-9771-4ad5-acfa-b25ddda70e33-config-data" (OuterVolumeSpecName: "config-data") pod "b6c68417-9771-4ad5-acfa-b25ddda70e33" (UID: "b6c68417-9771-4ad5-acfa-b25ddda70e33"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:30:34 crc kubenswrapper[4830]: I0227 16:30:34.611107 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aa935bfb-ebfd-4aa9-abc3-84d118252abe-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "aa935bfb-ebfd-4aa9-abc3-84d118252abe" (UID: "aa935bfb-ebfd-4aa9-abc3-84d118252abe"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 16:30:34 crc kubenswrapper[4830]: I0227 16:30:34.611221 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aa935bfb-ebfd-4aa9-abc3-84d118252abe-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "aa935bfb-ebfd-4aa9-abc3-84d118252abe" (UID: "aa935bfb-ebfd-4aa9-abc3-84d118252abe"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 16:30:34 crc kubenswrapper[4830]: I0227 16:30:34.612835 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa935bfb-ebfd-4aa9-abc3-84d118252abe-kube-api-access-sdqcg" (OuterVolumeSpecName: "kube-api-access-sdqcg") pod "aa935bfb-ebfd-4aa9-abc3-84d118252abe" (UID: "aa935bfb-ebfd-4aa9-abc3-84d118252abe"). InnerVolumeSpecName "kube-api-access-sdqcg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:30:34 crc kubenswrapper[4830]: I0227 16:30:34.617080 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa935bfb-ebfd-4aa9-abc3-84d118252abe-scripts" (OuterVolumeSpecName: "scripts") pod "aa935bfb-ebfd-4aa9-abc3-84d118252abe" (UID: "aa935bfb-ebfd-4aa9-abc3-84d118252abe"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:30:34 crc kubenswrapper[4830]: I0227 16:30:34.675185 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa935bfb-ebfd-4aa9-abc3-84d118252abe-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "aa935bfb-ebfd-4aa9-abc3-84d118252abe" (UID: "aa935bfb-ebfd-4aa9-abc3-84d118252abe"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:30:34 crc kubenswrapper[4830]: I0227 16:30:34.712066 4830 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aa935bfb-ebfd-4aa9-abc3-84d118252abe-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 27 16:30:34 crc kubenswrapper[4830]: I0227 16:30:34.712096 4830 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aa935bfb-ebfd-4aa9-abc3-84d118252abe-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 16:30:34 crc kubenswrapper[4830]: I0227 16:30:34.712105 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sdqcg\" (UniqueName: \"kubernetes.io/projected/aa935bfb-ebfd-4aa9-abc3-84d118252abe-kube-api-access-sdqcg\") on node \"crc\" DevicePath \"\"" Feb 27 16:30:34 crc kubenswrapper[4830]: I0227 16:30:34.712116 4830 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aa935bfb-ebfd-4aa9-abc3-84d118252abe-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 27 16:30:34 crc kubenswrapper[4830]: I0227 16:30:34.712125 4830 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/aa935bfb-ebfd-4aa9-abc3-84d118252abe-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 27 16:30:34 crc kubenswrapper[4830]: I0227 16:30:34.712133 4830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/b6c68417-9771-4ad5-acfa-b25ddda70e33-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 16:30:34 crc kubenswrapper[4830]: I0227 16:30:34.745994 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-86c4877d94-j48gv"] Feb 27 16:30:34 crc kubenswrapper[4830]: I0227 16:30:34.758066 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa935bfb-ebfd-4aa9-abc3-84d118252abe-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "aa935bfb-ebfd-4aa9-abc3-84d118252abe" (UID: "aa935bfb-ebfd-4aa9-abc3-84d118252abe"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:30:34 crc kubenswrapper[4830]: I0227 16:30:34.760714 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-86c4877d94-j48gv"] Feb 27 16:30:34 crc kubenswrapper[4830]: I0227 16:30:34.776363 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6c68417-9771-4ad5-acfa-b25ddda70e33" path="/var/lib/kubelet/pods/b6c68417-9771-4ad5-acfa-b25ddda70e33/volumes" Feb 27 16:30:34 crc kubenswrapper[4830]: I0227 16:30:34.783026 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa935bfb-ebfd-4aa9-abc3-84d118252abe-config-data" (OuterVolumeSpecName: "config-data") pod "aa935bfb-ebfd-4aa9-abc3-84d118252abe" (UID: "aa935bfb-ebfd-4aa9-abc3-84d118252abe"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:30:34 crc kubenswrapper[4830]: I0227 16:30:34.817152 4830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa935bfb-ebfd-4aa9-abc3-84d118252abe-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 16:30:34 crc kubenswrapper[4830]: I0227 16:30:34.817359 4830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa935bfb-ebfd-4aa9-abc3-84d118252abe-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 16:30:35 crc kubenswrapper[4830]: I0227 16:30:35.187396 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 27 16:30:35 crc kubenswrapper[4830]: I0227 16:30:35.187624 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="bb4fe631-52f0-445f-9e4c-90f4137bdba6" containerName="glance-log" containerID="cri-o://1f34e6f642aea7d4125be18b29bd3b54dedeb193e281e665389dc545b3650026" gracePeriod=30 Feb 27 16:30:35 crc kubenswrapper[4830]: I0227 16:30:35.187993 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="bb4fe631-52f0-445f-9e4c-90f4137bdba6" containerName="glance-httpd" containerID="cri-o://3976783388fcdce522b0afa5b0ca99a1cf893c91a02d58f8d8a5a9a4a19a9296" gracePeriod=30 Feb 27 16:30:35 crc kubenswrapper[4830]: I0227 16:30:35.404374 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"6d6ca92a-3e98-4628-8936-37032cf27463","Type":"ContainerStarted","Data":"c6e289a18c1629684bcdb331c9033eb81b5cf53591f391b7c77955013ee8149f"} Feb 27 16:30:35 crc kubenswrapper[4830]: I0227 16:30:35.404643 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" 
event={"ID":"6d6ca92a-3e98-4628-8936-37032cf27463","Type":"ContainerStarted","Data":"4da009fb0492324153cae8f54222ba75d4387ebdc9243a5ad16174a4cceea6c4"} Feb 27 16:30:35 crc kubenswrapper[4830]: I0227 16:30:35.431414 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aa935bfb-ebfd-4aa9-abc3-84d118252abe","Type":"ContainerDied","Data":"b20b2a8db2343e30af76eeef218b63e3151bb756ad774c454a43318439550ffe"} Feb 27 16:30:35 crc kubenswrapper[4830]: I0227 16:30:35.431462 4830 scope.go:117] "RemoveContainer" containerID="78fb851103bba5ab46085de10cfd0d141bab5e2bf3115eeb51d3305f719fb23b" Feb 27 16:30:35 crc kubenswrapper[4830]: I0227 16:30:35.431551 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 27 16:30:35 crc kubenswrapper[4830]: I0227 16:30:35.438768 4830 generic.go:334] "Generic (PLEG): container finished" podID="bb4fe631-52f0-445f-9e4c-90f4137bdba6" containerID="1f34e6f642aea7d4125be18b29bd3b54dedeb193e281e665389dc545b3650026" exitCode=143 Feb 27 16:30:35 crc kubenswrapper[4830]: I0227 16:30:35.438823 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"bb4fe631-52f0-445f-9e4c-90f4137bdba6","Type":"ContainerDied","Data":"1f34e6f642aea7d4125be18b29bd3b54dedeb193e281e665389dc545b3650026"} Feb 27 16:30:35 crc kubenswrapper[4830]: I0227 16:30:35.441894 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-c6f44c475-twbzz" event={"ID":"38b57350-6ca0-4090-876b-7727c983cf52","Type":"ContainerStarted","Data":"7dad8ffa6283d569435591881ebf2eedf721235312643b6378985dffadc0a1cf"} Feb 27 16:30:35 crc kubenswrapper[4830]: I0227 16:30:35.441922 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-c6f44c475-twbzz" Feb 27 16:30:35 crc kubenswrapper[4830]: I0227 16:30:35.441932 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/swift-proxy-c6f44c475-twbzz" Feb 27 16:30:35 crc kubenswrapper[4830]: I0227 16:30:35.482102 4830 scope.go:117] "RemoveContainer" containerID="10d669d0a72502efb3bd8086dffa6db237238897617d2d5aa7425d67a1a8b135" Feb 27 16:30:35 crc kubenswrapper[4830]: I0227 16:30:35.482236 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-c6f44c475-twbzz" podStartSLOduration=9.48221816 podStartE2EDuration="9.48221816s" podCreationTimestamp="2026-02-27 16:30:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:30:35.481261528 +0000 UTC m=+1431.570533991" watchObservedRunningTime="2026-02-27 16:30:35.48221816 +0000 UTC m=+1431.571490623" Feb 27 16:30:35 crc kubenswrapper[4830]: I0227 16:30:35.539052 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 27 16:30:35 crc kubenswrapper[4830]: I0227 16:30:35.588881 4830 scope.go:117] "RemoveContainer" containerID="f70612a9d7987bfbe011c9d173a99294117f3554823216cc62088541c06772f4" Feb 27 16:30:35 crc kubenswrapper[4830]: I0227 16:30:35.607652 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 27 16:30:35 crc kubenswrapper[4830]: I0227 16:30:35.635113 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 27 16:30:35 crc kubenswrapper[4830]: E0227 16:30:35.635542 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6c68417-9771-4ad5-acfa-b25ddda70e33" containerName="barbican-api-log" Feb 27 16:30:35 crc kubenswrapper[4830]: I0227 16:30:35.635563 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6c68417-9771-4ad5-acfa-b25ddda70e33" containerName="barbican-api-log" Feb 27 16:30:35 crc kubenswrapper[4830]: E0227 16:30:35.635577 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa935bfb-ebfd-4aa9-abc3-84d118252abe" 
containerName="proxy-httpd" Feb 27 16:30:35 crc kubenswrapper[4830]: I0227 16:30:35.635584 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa935bfb-ebfd-4aa9-abc3-84d118252abe" containerName="proxy-httpd" Feb 27 16:30:35 crc kubenswrapper[4830]: E0227 16:30:35.635598 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa935bfb-ebfd-4aa9-abc3-84d118252abe" containerName="ceilometer-central-agent" Feb 27 16:30:35 crc kubenswrapper[4830]: I0227 16:30:35.635604 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa935bfb-ebfd-4aa9-abc3-84d118252abe" containerName="ceilometer-central-agent" Feb 27 16:30:35 crc kubenswrapper[4830]: E0227 16:30:35.635628 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa935bfb-ebfd-4aa9-abc3-84d118252abe" containerName="ceilometer-notification-agent" Feb 27 16:30:35 crc kubenswrapper[4830]: I0227 16:30:35.635634 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa935bfb-ebfd-4aa9-abc3-84d118252abe" containerName="ceilometer-notification-agent" Feb 27 16:30:35 crc kubenswrapper[4830]: E0227 16:30:35.635642 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6c68417-9771-4ad5-acfa-b25ddda70e33" containerName="barbican-api" Feb 27 16:30:35 crc kubenswrapper[4830]: I0227 16:30:35.635648 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6c68417-9771-4ad5-acfa-b25ddda70e33" containerName="barbican-api" Feb 27 16:30:35 crc kubenswrapper[4830]: E0227 16:30:35.635656 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa935bfb-ebfd-4aa9-abc3-84d118252abe" containerName="sg-core" Feb 27 16:30:35 crc kubenswrapper[4830]: I0227 16:30:35.635662 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa935bfb-ebfd-4aa9-abc3-84d118252abe" containerName="sg-core" Feb 27 16:30:35 crc kubenswrapper[4830]: I0227 16:30:35.636308 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6c68417-9771-4ad5-acfa-b25ddda70e33" 
containerName="barbican-api" Feb 27 16:30:35 crc kubenswrapper[4830]: I0227 16:30:35.636339 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa935bfb-ebfd-4aa9-abc3-84d118252abe" containerName="ceilometer-central-agent" Feb 27 16:30:35 crc kubenswrapper[4830]: I0227 16:30:35.636346 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6c68417-9771-4ad5-acfa-b25ddda70e33" containerName="barbican-api-log" Feb 27 16:30:35 crc kubenswrapper[4830]: I0227 16:30:35.636358 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa935bfb-ebfd-4aa9-abc3-84d118252abe" containerName="proxy-httpd" Feb 27 16:30:35 crc kubenswrapper[4830]: I0227 16:30:35.636368 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa935bfb-ebfd-4aa9-abc3-84d118252abe" containerName="ceilometer-notification-agent" Feb 27 16:30:35 crc kubenswrapper[4830]: I0227 16:30:35.636380 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa935bfb-ebfd-4aa9-abc3-84d118252abe" containerName="sg-core" Feb 27 16:30:35 crc kubenswrapper[4830]: I0227 16:30:35.638471 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 27 16:30:35 crc kubenswrapper[4830]: I0227 16:30:35.640526 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 27 16:30:35 crc kubenswrapper[4830]: I0227 16:30:35.642444 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 27 16:30:35 crc kubenswrapper[4830]: I0227 16:30:35.643058 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 27 16:30:35 crc kubenswrapper[4830]: I0227 16:30:35.652682 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-2jtqk"] Feb 27 16:30:35 crc kubenswrapper[4830]: I0227 16:30:35.653953 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-2jtqk" Feb 27 16:30:35 crc kubenswrapper[4830]: I0227 16:30:35.686320 4830 scope.go:117] "RemoveContainer" containerID="9d7903ee5c9c8d27d585b4910f200bfb80e3e1da5bdfb566e661396da94d6a68" Feb 27 16:30:35 crc kubenswrapper[4830]: I0227 16:30:35.718994 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-2jtqk"] Feb 27 16:30:35 crc kubenswrapper[4830]: I0227 16:30:35.738871 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/814a49d3-5ece-4609-97f7-745b3843d2e3-log-httpd\") pod \"ceilometer-0\" (UID: \"814a49d3-5ece-4609-97f7-745b3843d2e3\") " pod="openstack/ceilometer-0" Feb 27 16:30:35 crc kubenswrapper[4830]: I0227 16:30:35.738994 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/814a49d3-5ece-4609-97f7-745b3843d2e3-config-data\") pod \"ceilometer-0\" (UID: \"814a49d3-5ece-4609-97f7-745b3843d2e3\") " pod="openstack/ceilometer-0" Feb 27 16:30:35 crc kubenswrapper[4830]: I0227 16:30:35.739046 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/814a49d3-5ece-4609-97f7-745b3843d2e3-scripts\") pod \"ceilometer-0\" (UID: \"814a49d3-5ece-4609-97f7-745b3843d2e3\") " pod="openstack/ceilometer-0" Feb 27 16:30:35 crc kubenswrapper[4830]: I0227 16:30:35.739063 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7scl\" (UniqueName: \"kubernetes.io/projected/814a49d3-5ece-4609-97f7-745b3843d2e3-kube-api-access-d7scl\") pod \"ceilometer-0\" (UID: \"814a49d3-5ece-4609-97f7-745b3843d2e3\") " pod="openstack/ceilometer-0" Feb 27 16:30:35 crc kubenswrapper[4830]: I0227 16:30:35.739087 4830 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/814a49d3-5ece-4609-97f7-745b3843d2e3-run-httpd\") pod \"ceilometer-0\" (UID: \"814a49d3-5ece-4609-97f7-745b3843d2e3\") " pod="openstack/ceilometer-0" Feb 27 16:30:35 crc kubenswrapper[4830]: I0227 16:30:35.739125 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/814a49d3-5ece-4609-97f7-745b3843d2e3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"814a49d3-5ece-4609-97f7-745b3843d2e3\") " pod="openstack/ceilometer-0" Feb 27 16:30:35 crc kubenswrapper[4830]: I0227 16:30:35.739145 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/814a49d3-5ece-4609-97f7-745b3843d2e3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"814a49d3-5ece-4609-97f7-745b3843d2e3\") " pod="openstack/ceilometer-0" Feb 27 16:30:35 crc kubenswrapper[4830]: I0227 16:30:35.761821 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-drqxj"] Feb 27 16:30:35 crc kubenswrapper[4830]: I0227 16:30:35.763008 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-drqxj" Feb 27 16:30:35 crc kubenswrapper[4830]: I0227 16:30:35.774069 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-c219-account-create-update-zndsj"] Feb 27 16:30:35 crc kubenswrapper[4830]: I0227 16:30:35.780071 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-c219-account-create-update-zndsj" Feb 27 16:30:35 crc kubenswrapper[4830]: I0227 16:30:35.785775 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Feb 27 16:30:35 crc kubenswrapper[4830]: I0227 16:30:35.809601 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-drqxj"] Feb 27 16:30:35 crc kubenswrapper[4830]: I0227 16:30:35.821000 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-c219-account-create-update-zndsj"] Feb 27 16:30:35 crc kubenswrapper[4830]: I0227 16:30:35.844652 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/814a49d3-5ece-4609-97f7-745b3843d2e3-log-httpd\") pod \"ceilometer-0\" (UID: \"814a49d3-5ece-4609-97f7-745b3843d2e3\") " pod="openstack/ceilometer-0" Feb 27 16:30:35 crc kubenswrapper[4830]: I0227 16:30:35.844697 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e59291fe-6cc6-4fda-870b-d3842d9b65ee-operator-scripts\") pod \"nova-api-c219-account-create-update-zndsj\" (UID: \"e59291fe-6cc6-4fda-870b-d3842d9b65ee\") " pod="openstack/nova-api-c219-account-create-update-zndsj" Feb 27 16:30:35 crc kubenswrapper[4830]: I0227 16:30:35.844734 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbqhn\" (UniqueName: \"kubernetes.io/projected/77b4533c-3623-4d0c-834c-dc2329c0ffc8-kube-api-access-qbqhn\") pod \"nova-cell0-db-create-drqxj\" (UID: \"77b4533c-3623-4d0c-834c-dc2329c0ffc8\") " pod="openstack/nova-cell0-db-create-drqxj" Feb 27 16:30:35 crc kubenswrapper[4830]: I0227 16:30:35.844795 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/d5b1adc8-187b-4662-b11e-c6ad31564ebf-operator-scripts\") pod \"nova-api-db-create-2jtqk\" (UID: \"d5b1adc8-187b-4662-b11e-c6ad31564ebf\") " pod="openstack/nova-api-db-create-2jtqk" Feb 27 16:30:35 crc kubenswrapper[4830]: I0227 16:30:35.844812 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8dkx5\" (UniqueName: \"kubernetes.io/projected/e59291fe-6cc6-4fda-870b-d3842d9b65ee-kube-api-access-8dkx5\") pod \"nova-api-c219-account-create-update-zndsj\" (UID: \"e59291fe-6cc6-4fda-870b-d3842d9b65ee\") " pod="openstack/nova-api-c219-account-create-update-zndsj" Feb 27 16:30:35 crc kubenswrapper[4830]: I0227 16:30:35.844829 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/814a49d3-5ece-4609-97f7-745b3843d2e3-config-data\") pod \"ceilometer-0\" (UID: \"814a49d3-5ece-4609-97f7-745b3843d2e3\") " pod="openstack/ceilometer-0" Feb 27 16:30:35 crc kubenswrapper[4830]: I0227 16:30:35.844872 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/77b4533c-3623-4d0c-834c-dc2329c0ffc8-operator-scripts\") pod \"nova-cell0-db-create-drqxj\" (UID: \"77b4533c-3623-4d0c-834c-dc2329c0ffc8\") " pod="openstack/nova-cell0-db-create-drqxj" Feb 27 16:30:35 crc kubenswrapper[4830]: I0227 16:30:35.844893 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/814a49d3-5ece-4609-97f7-745b3843d2e3-scripts\") pod \"ceilometer-0\" (UID: \"814a49d3-5ece-4609-97f7-745b3843d2e3\") " pod="openstack/ceilometer-0" Feb 27 16:30:35 crc kubenswrapper[4830]: I0227 16:30:35.844911 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mwr9\" (UniqueName: 
\"kubernetes.io/projected/d5b1adc8-187b-4662-b11e-c6ad31564ebf-kube-api-access-2mwr9\") pod \"nova-api-db-create-2jtqk\" (UID: \"d5b1adc8-187b-4662-b11e-c6ad31564ebf\") " pod="openstack/nova-api-db-create-2jtqk" Feb 27 16:30:35 crc kubenswrapper[4830]: I0227 16:30:35.844928 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d7scl\" (UniqueName: \"kubernetes.io/projected/814a49d3-5ece-4609-97f7-745b3843d2e3-kube-api-access-d7scl\") pod \"ceilometer-0\" (UID: \"814a49d3-5ece-4609-97f7-745b3843d2e3\") " pod="openstack/ceilometer-0" Feb 27 16:30:35 crc kubenswrapper[4830]: I0227 16:30:35.844982 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/814a49d3-5ece-4609-97f7-745b3843d2e3-run-httpd\") pod \"ceilometer-0\" (UID: \"814a49d3-5ece-4609-97f7-745b3843d2e3\") " pod="openstack/ceilometer-0" Feb 27 16:30:35 crc kubenswrapper[4830]: I0227 16:30:35.845082 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/814a49d3-5ece-4609-97f7-745b3843d2e3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"814a49d3-5ece-4609-97f7-745b3843d2e3\") " pod="openstack/ceilometer-0" Feb 27 16:30:35 crc kubenswrapper[4830]: I0227 16:30:35.845342 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/814a49d3-5ece-4609-97f7-745b3843d2e3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"814a49d3-5ece-4609-97f7-745b3843d2e3\") " pod="openstack/ceilometer-0" Feb 27 16:30:35 crc kubenswrapper[4830]: I0227 16:30:35.845357 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/814a49d3-5ece-4609-97f7-745b3843d2e3-run-httpd\") pod \"ceilometer-0\" (UID: \"814a49d3-5ece-4609-97f7-745b3843d2e3\") " pod="openstack/ceilometer-0" Feb 27 
16:30:35 crc kubenswrapper[4830]: I0227 16:30:35.846105 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/814a49d3-5ece-4609-97f7-745b3843d2e3-log-httpd\") pod \"ceilometer-0\" (UID: \"814a49d3-5ece-4609-97f7-745b3843d2e3\") " pod="openstack/ceilometer-0" Feb 27 16:30:35 crc kubenswrapper[4830]: I0227 16:30:35.860777 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/814a49d3-5ece-4609-97f7-745b3843d2e3-scripts\") pod \"ceilometer-0\" (UID: \"814a49d3-5ece-4609-97f7-745b3843d2e3\") " pod="openstack/ceilometer-0" Feb 27 16:30:35 crc kubenswrapper[4830]: I0227 16:30:35.861385 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/814a49d3-5ece-4609-97f7-745b3843d2e3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"814a49d3-5ece-4609-97f7-745b3843d2e3\") " pod="openstack/ceilometer-0" Feb 27 16:30:35 crc kubenswrapper[4830]: I0227 16:30:35.876708 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/814a49d3-5ece-4609-97f7-745b3843d2e3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"814a49d3-5ece-4609-97f7-745b3843d2e3\") " pod="openstack/ceilometer-0" Feb 27 16:30:35 crc kubenswrapper[4830]: I0227 16:30:35.888647 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7scl\" (UniqueName: \"kubernetes.io/projected/814a49d3-5ece-4609-97f7-745b3843d2e3-kube-api-access-d7scl\") pod \"ceilometer-0\" (UID: \"814a49d3-5ece-4609-97f7-745b3843d2e3\") " pod="openstack/ceilometer-0" Feb 27 16:30:35 crc kubenswrapper[4830]: I0227 16:30:35.890894 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/814a49d3-5ece-4609-97f7-745b3843d2e3-config-data\") pod \"ceilometer-0\" (UID: 
\"814a49d3-5ece-4609-97f7-745b3843d2e3\") " pod="openstack/ceilometer-0" Feb 27 16:30:35 crc kubenswrapper[4830]: I0227 16:30:35.916898 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-sd6bv"] Feb 27 16:30:35 crc kubenswrapper[4830]: I0227 16:30:35.922939 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-sd6bv" Feb 27 16:30:35 crc kubenswrapper[4830]: I0227 16:30:35.953023 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-sd6bv"] Feb 27 16:30:35 crc kubenswrapper[4830]: I0227 16:30:35.953088 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-29fd-account-create-update-n79tl"] Feb 27 16:30:35 crc kubenswrapper[4830]: I0227 16:30:35.954284 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-29fd-account-create-update-n79tl" Feb 27 16:30:35 crc kubenswrapper[4830]: I0227 16:30:35.955568 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e59291fe-6cc6-4fda-870b-d3842d9b65ee-operator-scripts\") pod \"nova-api-c219-account-create-update-zndsj\" (UID: \"e59291fe-6cc6-4fda-870b-d3842d9b65ee\") " pod="openstack/nova-api-c219-account-create-update-zndsj" Feb 27 16:30:35 crc kubenswrapper[4830]: I0227 16:30:35.955624 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qbqhn\" (UniqueName: \"kubernetes.io/projected/77b4533c-3623-4d0c-834c-dc2329c0ffc8-kube-api-access-qbqhn\") pod \"nova-cell0-db-create-drqxj\" (UID: \"77b4533c-3623-4d0c-834c-dc2329c0ffc8\") " pod="openstack/nova-cell0-db-create-drqxj" Feb 27 16:30:35 crc kubenswrapper[4830]: I0227 16:30:35.955680 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/d5b1adc8-187b-4662-b11e-c6ad31564ebf-operator-scripts\") pod \"nova-api-db-create-2jtqk\" (UID: \"d5b1adc8-187b-4662-b11e-c6ad31564ebf\") " pod="openstack/nova-api-db-create-2jtqk" Feb 27 16:30:35 crc kubenswrapper[4830]: I0227 16:30:35.955696 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8dkx5\" (UniqueName: \"kubernetes.io/projected/e59291fe-6cc6-4fda-870b-d3842d9b65ee-kube-api-access-8dkx5\") pod \"nova-api-c219-account-create-update-zndsj\" (UID: \"e59291fe-6cc6-4fda-870b-d3842d9b65ee\") " pod="openstack/nova-api-c219-account-create-update-zndsj" Feb 27 16:30:35 crc kubenswrapper[4830]: I0227 16:30:35.955734 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7ghq\" (UniqueName: \"kubernetes.io/projected/f6d3ef08-a386-4c3a-aea1-7870a4192822-kube-api-access-s7ghq\") pod \"nova-cell0-29fd-account-create-update-n79tl\" (UID: \"f6d3ef08-a386-4c3a-aea1-7870a4192822\") " pod="openstack/nova-cell0-29fd-account-create-update-n79tl" Feb 27 16:30:35 crc kubenswrapper[4830]: I0227 16:30:35.955753 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkthj\" (UniqueName: \"kubernetes.io/projected/26e38f3b-1dee-4203-bd8c-5c4ce7d29b0d-kube-api-access-mkthj\") pod \"nova-cell1-db-create-sd6bv\" (UID: \"26e38f3b-1dee-4203-bd8c-5c4ce7d29b0d\") " pod="openstack/nova-cell1-db-create-sd6bv" Feb 27 16:30:35 crc kubenswrapper[4830]: I0227 16:30:35.955779 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/77b4533c-3623-4d0c-834c-dc2329c0ffc8-operator-scripts\") pod \"nova-cell0-db-create-drqxj\" (UID: \"77b4533c-3623-4d0c-834c-dc2329c0ffc8\") " pod="openstack/nova-cell0-db-create-drqxj" Feb 27 16:30:35 crc kubenswrapper[4830]: I0227 16:30:35.955807 4830 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-2mwr9\" (UniqueName: \"kubernetes.io/projected/d5b1adc8-187b-4662-b11e-c6ad31564ebf-kube-api-access-2mwr9\") pod \"nova-api-db-create-2jtqk\" (UID: \"d5b1adc8-187b-4662-b11e-c6ad31564ebf\") " pod="openstack/nova-api-db-create-2jtqk" Feb 27 16:30:35 crc kubenswrapper[4830]: I0227 16:30:35.955825 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26e38f3b-1dee-4203-bd8c-5c4ce7d29b0d-operator-scripts\") pod \"nova-cell1-db-create-sd6bv\" (UID: \"26e38f3b-1dee-4203-bd8c-5c4ce7d29b0d\") " pod="openstack/nova-cell1-db-create-sd6bv" Feb 27 16:30:35 crc kubenswrapper[4830]: I0227 16:30:35.955847 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f6d3ef08-a386-4c3a-aea1-7870a4192822-operator-scripts\") pod \"nova-cell0-29fd-account-create-update-n79tl\" (UID: \"f6d3ef08-a386-4c3a-aea1-7870a4192822\") " pod="openstack/nova-cell0-29fd-account-create-update-n79tl" Feb 27 16:30:35 crc kubenswrapper[4830]: I0227 16:30:35.956451 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e59291fe-6cc6-4fda-870b-d3842d9b65ee-operator-scripts\") pod \"nova-api-c219-account-create-update-zndsj\" (UID: \"e59291fe-6cc6-4fda-870b-d3842d9b65ee\") " pod="openstack/nova-api-c219-account-create-update-zndsj" Feb 27 16:30:35 crc kubenswrapper[4830]: I0227 16:30:35.957161 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d5b1adc8-187b-4662-b11e-c6ad31564ebf-operator-scripts\") pod \"nova-api-db-create-2jtqk\" (UID: \"d5b1adc8-187b-4662-b11e-c6ad31564ebf\") " pod="openstack/nova-api-db-create-2jtqk" Feb 27 16:30:35 crc kubenswrapper[4830]: I0227 
16:30:35.957754 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/77b4533c-3623-4d0c-834c-dc2329c0ffc8-operator-scripts\") pod \"nova-cell0-db-create-drqxj\" (UID: \"77b4533c-3623-4d0c-834c-dc2329c0ffc8\") " pod="openstack/nova-cell0-db-create-drqxj" Feb 27 16:30:35 crc kubenswrapper[4830]: I0227 16:30:35.965807 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-29fd-account-create-update-n79tl"] Feb 27 16:30:35 crc kubenswrapper[4830]: I0227 16:30:35.966151 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Feb 27 16:30:35 crc kubenswrapper[4830]: I0227 16:30:35.995528 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 27 16:30:36 crc kubenswrapper[4830]: I0227 16:30:36.009676 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2mwr9\" (UniqueName: \"kubernetes.io/projected/d5b1adc8-187b-4662-b11e-c6ad31564ebf-kube-api-access-2mwr9\") pod \"nova-api-db-create-2jtqk\" (UID: \"d5b1adc8-187b-4662-b11e-c6ad31564ebf\") " pod="openstack/nova-api-db-create-2jtqk" Feb 27 16:30:36 crc kubenswrapper[4830]: I0227 16:30:36.009741 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qbqhn\" (UniqueName: \"kubernetes.io/projected/77b4533c-3623-4d0c-834c-dc2329c0ffc8-kube-api-access-qbqhn\") pod \"nova-cell0-db-create-drqxj\" (UID: \"77b4533c-3623-4d0c-834c-dc2329c0ffc8\") " pod="openstack/nova-cell0-db-create-drqxj" Feb 27 16:30:36 crc kubenswrapper[4830]: I0227 16:30:36.025062 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8dkx5\" (UniqueName: \"kubernetes.io/projected/e59291fe-6cc6-4fda-870b-d3842d9b65ee-kube-api-access-8dkx5\") pod \"nova-api-c219-account-create-update-zndsj\" (UID: \"e59291fe-6cc6-4fda-870b-d3842d9b65ee\") " 
pod="openstack/nova-api-c219-account-create-update-zndsj" Feb 27 16:30:36 crc kubenswrapper[4830]: I0227 16:30:36.073062 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s7ghq\" (UniqueName: \"kubernetes.io/projected/f6d3ef08-a386-4c3a-aea1-7870a4192822-kube-api-access-s7ghq\") pod \"nova-cell0-29fd-account-create-update-n79tl\" (UID: \"f6d3ef08-a386-4c3a-aea1-7870a4192822\") " pod="openstack/nova-cell0-29fd-account-create-update-n79tl" Feb 27 16:30:36 crc kubenswrapper[4830]: I0227 16:30:36.073104 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mkthj\" (UniqueName: \"kubernetes.io/projected/26e38f3b-1dee-4203-bd8c-5c4ce7d29b0d-kube-api-access-mkthj\") pod \"nova-cell1-db-create-sd6bv\" (UID: \"26e38f3b-1dee-4203-bd8c-5c4ce7d29b0d\") " pod="openstack/nova-cell1-db-create-sd6bv" Feb 27 16:30:36 crc kubenswrapper[4830]: I0227 16:30:36.073135 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26e38f3b-1dee-4203-bd8c-5c4ce7d29b0d-operator-scripts\") pod \"nova-cell1-db-create-sd6bv\" (UID: \"26e38f3b-1dee-4203-bd8c-5c4ce7d29b0d\") " pod="openstack/nova-cell1-db-create-sd6bv" Feb 27 16:30:36 crc kubenswrapper[4830]: I0227 16:30:36.073157 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f6d3ef08-a386-4c3a-aea1-7870a4192822-operator-scripts\") pod \"nova-cell0-29fd-account-create-update-n79tl\" (UID: \"f6d3ef08-a386-4c3a-aea1-7870a4192822\") " pod="openstack/nova-cell0-29fd-account-create-update-n79tl" Feb 27 16:30:36 crc kubenswrapper[4830]: I0227 16:30:36.073888 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f6d3ef08-a386-4c3a-aea1-7870a4192822-operator-scripts\") pod 
\"nova-cell0-29fd-account-create-update-n79tl\" (UID: \"f6d3ef08-a386-4c3a-aea1-7870a4192822\") " pod="openstack/nova-cell0-29fd-account-create-update-n79tl" Feb 27 16:30:36 crc kubenswrapper[4830]: I0227 16:30:36.074832 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26e38f3b-1dee-4203-bd8c-5c4ce7d29b0d-operator-scripts\") pod \"nova-cell1-db-create-sd6bv\" (UID: \"26e38f3b-1dee-4203-bd8c-5c4ce7d29b0d\") " pod="openstack/nova-cell1-db-create-sd6bv" Feb 27 16:30:36 crc kubenswrapper[4830]: I0227 16:30:36.094478 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-drqxj" Feb 27 16:30:36 crc kubenswrapper[4830]: I0227 16:30:36.096581 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s7ghq\" (UniqueName: \"kubernetes.io/projected/f6d3ef08-a386-4c3a-aea1-7870a4192822-kube-api-access-s7ghq\") pod \"nova-cell0-29fd-account-create-update-n79tl\" (UID: \"f6d3ef08-a386-4c3a-aea1-7870a4192822\") " pod="openstack/nova-cell0-29fd-account-create-update-n79tl" Feb 27 16:30:36 crc kubenswrapper[4830]: I0227 16:30:36.096860 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mkthj\" (UniqueName: \"kubernetes.io/projected/26e38f3b-1dee-4203-bd8c-5c4ce7d29b0d-kube-api-access-mkthj\") pod \"nova-cell1-db-create-sd6bv\" (UID: \"26e38f3b-1dee-4203-bd8c-5c4ce7d29b0d\") " pod="openstack/nova-cell1-db-create-sd6bv" Feb 27 16:30:36 crc kubenswrapper[4830]: I0227 16:30:36.099879 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-5e39-account-create-update-hqzqb"] Feb 27 16:30:36 crc kubenswrapper[4830]: I0227 16:30:36.101080 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-5e39-account-create-update-hqzqb" Feb 27 16:30:36 crc kubenswrapper[4830]: I0227 16:30:36.101701 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-c219-account-create-update-zndsj" Feb 27 16:30:36 crc kubenswrapper[4830]: I0227 16:30:36.107980 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Feb 27 16:30:36 crc kubenswrapper[4830]: I0227 16:30:36.127097 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-5e39-account-create-update-hqzqb"] Feb 27 16:30:36 crc kubenswrapper[4830]: I0227 16:30:36.176533 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-sd6bv" Feb 27 16:30:36 crc kubenswrapper[4830]: I0227 16:30:36.176995 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmvkb\" (UniqueName: \"kubernetes.io/projected/7d45526f-ecc3-4132-bdd0-159572980ba7-kube-api-access-vmvkb\") pod \"nova-cell1-5e39-account-create-update-hqzqb\" (UID: \"7d45526f-ecc3-4132-bdd0-159572980ba7\") " pod="openstack/nova-cell1-5e39-account-create-update-hqzqb" Feb 27 16:30:36 crc kubenswrapper[4830]: I0227 16:30:36.177072 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7d45526f-ecc3-4132-bdd0-159572980ba7-operator-scripts\") pod \"nova-cell1-5e39-account-create-update-hqzqb\" (UID: \"7d45526f-ecc3-4132-bdd0-159572980ba7\") " pod="openstack/nova-cell1-5e39-account-create-update-hqzqb" Feb 27 16:30:36 crc kubenswrapper[4830]: I0227 16:30:36.201104 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-29fd-account-create-update-n79tl" Feb 27 16:30:36 crc kubenswrapper[4830]: I0227 16:30:36.279795 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vmvkb\" (UniqueName: \"kubernetes.io/projected/7d45526f-ecc3-4132-bdd0-159572980ba7-kube-api-access-vmvkb\") pod \"nova-cell1-5e39-account-create-update-hqzqb\" (UID: \"7d45526f-ecc3-4132-bdd0-159572980ba7\") " pod="openstack/nova-cell1-5e39-account-create-update-hqzqb" Feb 27 16:30:36 crc kubenswrapper[4830]: I0227 16:30:36.280138 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7d45526f-ecc3-4132-bdd0-159572980ba7-operator-scripts\") pod \"nova-cell1-5e39-account-create-update-hqzqb\" (UID: \"7d45526f-ecc3-4132-bdd0-159572980ba7\") " pod="openstack/nova-cell1-5e39-account-create-update-hqzqb" Feb 27 16:30:36 crc kubenswrapper[4830]: I0227 16:30:36.287458 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7d45526f-ecc3-4132-bdd0-159572980ba7-operator-scripts\") pod \"nova-cell1-5e39-account-create-update-hqzqb\" (UID: \"7d45526f-ecc3-4132-bdd0-159572980ba7\") " pod="openstack/nova-cell1-5e39-account-create-update-hqzqb" Feb 27 16:30:36 crc kubenswrapper[4830]: I0227 16:30:36.306344 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-2jtqk" Feb 27 16:30:36 crc kubenswrapper[4830]: I0227 16:30:36.312399 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmvkb\" (UniqueName: \"kubernetes.io/projected/7d45526f-ecc3-4132-bdd0-159572980ba7-kube-api-access-vmvkb\") pod \"nova-cell1-5e39-account-create-update-hqzqb\" (UID: \"7d45526f-ecc3-4132-bdd0-159572980ba7\") " pod="openstack/nova-cell1-5e39-account-create-update-hqzqb" Feb 27 16:30:36 crc kubenswrapper[4830]: I0227 16:30:36.473978 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"6d6ca92a-3e98-4628-8936-37032cf27463","Type":"ContainerStarted","Data":"08dae26c7de73c784a1c4cdf01a2ec48ed79b52c6c16691dcb728b190ce0bde0"} Feb 27 16:30:36 crc kubenswrapper[4830]: I0227 16:30:36.502627 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=9.50260891 podStartE2EDuration="9.50260891s" podCreationTimestamp="2026-02-27 16:30:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:30:36.501243217 +0000 UTC m=+1432.590515680" watchObservedRunningTime="2026-02-27 16:30:36.50260891 +0000 UTC m=+1432.591881373" Feb 27 16:30:36 crc kubenswrapper[4830]: I0227 16:30:36.511390 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-5e39-account-create-update-hqzqb" Feb 27 16:30:36 crc kubenswrapper[4830]: I0227 16:30:36.600593 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 27 16:30:36 crc kubenswrapper[4830]: I0227 16:30:36.600860 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="ab89052d-19a3-4bee-8e41-3fc364424b47" containerName="glance-log" containerID="cri-o://b45a73023f7c179a20bfe1c108b5dba26f62e40ce6291f0fc63775365ac5f555" gracePeriod=30 Feb 27 16:30:36 crc kubenswrapper[4830]: I0227 16:30:36.601183 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="ab89052d-19a3-4bee-8e41-3fc364424b47" containerName="glance-httpd" containerID="cri-o://d3f9f381fd837c86af349bc5a2ece3fe00a4310cf835db8ff2d27eaf11394c2f" gracePeriod=30 Feb 27 16:30:36 crc kubenswrapper[4830]: I0227 16:30:36.731614 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 27 16:30:36 crc kubenswrapper[4830]: W0227 16:30:36.735639 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod814a49d3_5ece_4609_97f7_745b3843d2e3.slice/crio-fa47ec1490c231cd5f88f6545ac05a572f61122e20150d7b5495ef627d216e86 WatchSource:0}: Error finding container fa47ec1490c231cd5f88f6545ac05a572f61122e20150d7b5495ef627d216e86: Status 404 returned error can't find the container with id fa47ec1490c231cd5f88f6545ac05a572f61122e20150d7b5495ef627d216e86 Feb 27 16:30:36 crc kubenswrapper[4830]: I0227 16:30:36.774963 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa935bfb-ebfd-4aa9-abc3-84d118252abe" path="/var/lib/kubelet/pods/aa935bfb-ebfd-4aa9-abc3-84d118252abe/volumes" Feb 27 16:30:36 crc kubenswrapper[4830]: I0227 16:30:36.873742 4830 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 27 16:30:36 crc kubenswrapper[4830]: I0227 16:30:36.906760 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-c219-account-create-update-zndsj"] Feb 27 16:30:36 crc kubenswrapper[4830]: I0227 16:30:36.918713 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-sd6bv"] Feb 27 16:30:36 crc kubenswrapper[4830]: I0227 16:30:36.930837 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-drqxj"] Feb 27 16:30:37 crc kubenswrapper[4830]: I0227 16:30:37.061767 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-2jtqk"] Feb 27 16:30:37 crc kubenswrapper[4830]: I0227 16:30:37.102057 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-29fd-account-create-update-n79tl"] Feb 27 16:30:37 crc kubenswrapper[4830]: I0227 16:30:37.250383 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-5e39-account-create-update-hqzqb"] Feb 27 16:30:37 crc kubenswrapper[4830]: I0227 16:30:37.481878 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-drqxj" event={"ID":"77b4533c-3623-4d0c-834c-dc2329c0ffc8","Type":"ContainerStarted","Data":"950d48e73b6efcd60895c954c30b438b4679dbaef80ec5b055875078164bbaed"} Feb 27 16:30:37 crc kubenswrapper[4830]: I0227 16:30:37.481919 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-drqxj" event={"ID":"77b4533c-3623-4d0c-834c-dc2329c0ffc8","Type":"ContainerStarted","Data":"9b30395b4c26d58756bd9b74088f3e292bf04c67dfa8be6c0e6e5b715cd49f01"} Feb 27 16:30:37 crc kubenswrapper[4830]: I0227 16:30:37.485921 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-5e39-account-create-update-hqzqb" 
event={"ID":"7d45526f-ecc3-4132-bdd0-159572980ba7","Type":"ContainerStarted","Data":"e9f5c3e023cd95041492158a368466cd55fb311d519a59bb0776c7d0e6ebc352"} Feb 27 16:30:37 crc kubenswrapper[4830]: I0227 16:30:37.485967 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-5e39-account-create-update-hqzqb" event={"ID":"7d45526f-ecc3-4132-bdd0-159572980ba7","Type":"ContainerStarted","Data":"4e66e151b56976b8af5c10290ae2f91a1f831b1d2bd4b38b4bcd427c9fb52c98"} Feb 27 16:30:37 crc kubenswrapper[4830]: I0227 16:30:37.487125 4830 generic.go:334] "Generic (PLEG): container finished" podID="26e38f3b-1dee-4203-bd8c-5c4ce7d29b0d" containerID="e0ebf55234da05702605efd47d9b98f871b639eba4fd4ec313dd14863324ce11" exitCode=0 Feb 27 16:30:37 crc kubenswrapper[4830]: I0227 16:30:37.487169 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-sd6bv" event={"ID":"26e38f3b-1dee-4203-bd8c-5c4ce7d29b0d","Type":"ContainerDied","Data":"e0ebf55234da05702605efd47d9b98f871b639eba4fd4ec313dd14863324ce11"} Feb 27 16:30:37 crc kubenswrapper[4830]: I0227 16:30:37.487186 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-sd6bv" event={"ID":"26e38f3b-1dee-4203-bd8c-5c4ce7d29b0d","Type":"ContainerStarted","Data":"23d702627bec0c957633d74de5139c8bdce69fddcea494d3e810702c805fd873"} Feb 27 16:30:37 crc kubenswrapper[4830]: I0227 16:30:37.488789 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-29fd-account-create-update-n79tl" event={"ID":"f6d3ef08-a386-4c3a-aea1-7870a4192822","Type":"ContainerStarted","Data":"b09f3432889e78f005fbd21fbbd94888d63605d1bfe41b4d25fbe78bb2a37a78"} Feb 27 16:30:37 crc kubenswrapper[4830]: I0227 16:30:37.488815 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-29fd-account-create-update-n79tl" 
event={"ID":"f6d3ef08-a386-4c3a-aea1-7870a4192822","Type":"ContainerStarted","Data":"6ebf6246f2697c4dac340bad991679c64dd584900807861a58dca067f8a139e5"} Feb 27 16:30:37 crc kubenswrapper[4830]: I0227 16:30:37.491804 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-c219-account-create-update-zndsj" event={"ID":"e59291fe-6cc6-4fda-870b-d3842d9b65ee","Type":"ContainerStarted","Data":"85d763a18db7b37b5aad502746d28ab199cdbba48317de720fcc8ea126e9dc74"} Feb 27 16:30:37 crc kubenswrapper[4830]: I0227 16:30:37.491841 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-c219-account-create-update-zndsj" event={"ID":"e59291fe-6cc6-4fda-870b-d3842d9b65ee","Type":"ContainerStarted","Data":"fce0ae0eb3a7f6a10482eb4069abe5a6bb1aa9480ff0da77263b13819cbb8095"} Feb 27 16:30:37 crc kubenswrapper[4830]: I0227 16:30:37.505990 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-drqxj" podStartSLOduration=2.505970489 podStartE2EDuration="2.505970489s" podCreationTimestamp="2026-02-27 16:30:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:30:37.494875421 +0000 UTC m=+1433.584147884" watchObservedRunningTime="2026-02-27 16:30:37.505970489 +0000 UTC m=+1433.595242952" Feb 27 16:30:37 crc kubenswrapper[4830]: I0227 16:30:37.512509 4830 generic.go:334] "Generic (PLEG): container finished" podID="ab89052d-19a3-4bee-8e41-3fc364424b47" containerID="b45a73023f7c179a20bfe1c108b5dba26f62e40ce6291f0fc63775365ac5f555" exitCode=143 Feb 27 16:30:37 crc kubenswrapper[4830]: I0227 16:30:37.512743 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ab89052d-19a3-4bee-8e41-3fc364424b47","Type":"ContainerDied","Data":"b45a73023f7c179a20bfe1c108b5dba26f62e40ce6291f0fc63775365ac5f555"} Feb 27 16:30:37 crc kubenswrapper[4830]: I0227 
16:30:37.518001 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"814a49d3-5ece-4609-97f7-745b3843d2e3","Type":"ContainerStarted","Data":"fa47ec1490c231cd5f88f6545ac05a572f61122e20150d7b5495ef627d216e86"} Feb 27 16:30:37 crc kubenswrapper[4830]: I0227 16:30:37.529247 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-2jtqk" event={"ID":"d5b1adc8-187b-4662-b11e-c6ad31564ebf","Type":"ContainerStarted","Data":"2806447b980a5bb9a3cd7703b0ad68eb92d2cfebdeadd41257a5e2d7279f3f4f"} Feb 27 16:30:37 crc kubenswrapper[4830]: I0227 16:30:37.529277 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-2jtqk" event={"ID":"d5b1adc8-187b-4662-b11e-c6ad31564ebf","Type":"ContainerStarted","Data":"954054ace9ad6d3b55aabcdf92a2f9ede535001dc756532b178cefb7c39e7814"} Feb 27 16:30:37 crc kubenswrapper[4830]: I0227 16:30:37.534963 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-c219-account-create-update-zndsj" podStartSLOduration=2.534930166 podStartE2EDuration="2.534930166s" podCreationTimestamp="2026-02-27 16:30:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:30:37.519271189 +0000 UTC m=+1433.608543652" watchObservedRunningTime="2026-02-27 16:30:37.534930166 +0000 UTC m=+1433.624202629" Feb 27 16:30:37 crc kubenswrapper[4830]: I0227 16:30:37.537850 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-5e39-account-create-update-hqzqb" podStartSLOduration=1.537843957 podStartE2EDuration="1.537843957s" podCreationTimestamp="2026-02-27 16:30:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:30:37.533348108 +0000 UTC m=+1433.622620571" watchObservedRunningTime="2026-02-27 
16:30:37.537843957 +0000 UTC m=+1433.627116420" Feb 27 16:30:37 crc kubenswrapper[4830]: I0227 16:30:37.559181 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-29fd-account-create-update-n79tl" podStartSLOduration=2.559162481 podStartE2EDuration="2.559162481s" podCreationTimestamp="2026-02-27 16:30:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:30:37.54916973 +0000 UTC m=+1433.638442193" watchObservedRunningTime="2026-02-27 16:30:37.559162481 +0000 UTC m=+1433.648434934" Feb 27 16:30:37 crc kubenswrapper[4830]: I0227 16:30:37.765967 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 27 16:30:38 crc kubenswrapper[4830]: I0227 16:30:38.545344 4830 generic.go:334] "Generic (PLEG): container finished" podID="f6d3ef08-a386-4c3a-aea1-7870a4192822" containerID="b09f3432889e78f005fbd21fbbd94888d63605d1bfe41b4d25fbe78bb2a37a78" exitCode=0 Feb 27 16:30:38 crc kubenswrapper[4830]: I0227 16:30:38.545731 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-29fd-account-create-update-n79tl" event={"ID":"f6d3ef08-a386-4c3a-aea1-7870a4192822","Type":"ContainerDied","Data":"b09f3432889e78f005fbd21fbbd94888d63605d1bfe41b4d25fbe78bb2a37a78"} Feb 27 16:30:38 crc kubenswrapper[4830]: I0227 16:30:38.548059 4830 generic.go:334] "Generic (PLEG): container finished" podID="e59291fe-6cc6-4fda-870b-d3842d9b65ee" containerID="85d763a18db7b37b5aad502746d28ab199cdbba48317de720fcc8ea126e9dc74" exitCode=0 Feb 27 16:30:38 crc kubenswrapper[4830]: I0227 16:30:38.548170 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-c219-account-create-update-zndsj" event={"ID":"e59291fe-6cc6-4fda-870b-d3842d9b65ee","Type":"ContainerDied","Data":"85d763a18db7b37b5aad502746d28ab199cdbba48317de720fcc8ea126e9dc74"} Feb 27 16:30:38 crc 
kubenswrapper[4830]: I0227 16:30:38.552219 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"814a49d3-5ece-4609-97f7-745b3843d2e3","Type":"ContainerStarted","Data":"9788c4d203c212b881b74765e66484cfa659abe121b4ea01bf45608a72848fda"} Feb 27 16:30:38 crc kubenswrapper[4830]: I0227 16:30:38.553880 4830 generic.go:334] "Generic (PLEG): container finished" podID="d5b1adc8-187b-4662-b11e-c6ad31564ebf" containerID="2806447b980a5bb9a3cd7703b0ad68eb92d2cfebdeadd41257a5e2d7279f3f4f" exitCode=0 Feb 27 16:30:38 crc kubenswrapper[4830]: I0227 16:30:38.553991 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-2jtqk" event={"ID":"d5b1adc8-187b-4662-b11e-c6ad31564ebf","Type":"ContainerDied","Data":"2806447b980a5bb9a3cd7703b0ad68eb92d2cfebdeadd41257a5e2d7279f3f4f"} Feb 27 16:30:38 crc kubenswrapper[4830]: I0227 16:30:38.556634 4830 generic.go:334] "Generic (PLEG): container finished" podID="77b4533c-3623-4d0c-834c-dc2329c0ffc8" containerID="950d48e73b6efcd60895c954c30b438b4679dbaef80ec5b055875078164bbaed" exitCode=0 Feb 27 16:30:38 crc kubenswrapper[4830]: I0227 16:30:38.556762 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-drqxj" event={"ID":"77b4533c-3623-4d0c-834c-dc2329c0ffc8","Type":"ContainerDied","Data":"950d48e73b6efcd60895c954c30b438b4679dbaef80ec5b055875078164bbaed"} Feb 27 16:30:38 crc kubenswrapper[4830]: I0227 16:30:38.559486 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-5e39-account-create-update-hqzqb" event={"ID":"7d45526f-ecc3-4132-bdd0-159572980ba7","Type":"ContainerDied","Data":"e9f5c3e023cd95041492158a368466cd55fb311d519a59bb0776c7d0e6ebc352"} Feb 27 16:30:38 crc kubenswrapper[4830]: I0227 16:30:38.559606 4830 generic.go:334] "Generic (PLEG): container finished" podID="7d45526f-ecc3-4132-bdd0-159572980ba7" containerID="e9f5c3e023cd95041492158a368466cd55fb311d519a59bb0776c7d0e6ebc352" exitCode=0 Feb 27 
16:30:38 crc kubenswrapper[4830]: I0227 16:30:38.581423 4830 generic.go:334] "Generic (PLEG): container finished" podID="bb4fe631-52f0-445f-9e4c-90f4137bdba6" containerID="3976783388fcdce522b0afa5b0ca99a1cf893c91a02d58f8d8a5a9a4a19a9296" exitCode=0 Feb 27 16:30:38 crc kubenswrapper[4830]: I0227 16:30:38.581622 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"bb4fe631-52f0-445f-9e4c-90f4137bdba6","Type":"ContainerDied","Data":"3976783388fcdce522b0afa5b0ca99a1cf893c91a02d58f8d8a5a9a4a19a9296"} Feb 27 16:30:38 crc kubenswrapper[4830]: I0227 16:30:38.904394 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-2jtqk" Feb 27 16:30:38 crc kubenswrapper[4830]: I0227 16:30:38.939192 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d5b1adc8-187b-4662-b11e-c6ad31564ebf-operator-scripts\") pod \"d5b1adc8-187b-4662-b11e-c6ad31564ebf\" (UID: \"d5b1adc8-187b-4662-b11e-c6ad31564ebf\") " Feb 27 16:30:38 crc kubenswrapper[4830]: I0227 16:30:38.939394 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2mwr9\" (UniqueName: \"kubernetes.io/projected/d5b1adc8-187b-4662-b11e-c6ad31564ebf-kube-api-access-2mwr9\") pod \"d5b1adc8-187b-4662-b11e-c6ad31564ebf\" (UID: \"d5b1adc8-187b-4662-b11e-c6ad31564ebf\") " Feb 27 16:30:38 crc kubenswrapper[4830]: I0227 16:30:38.941384 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5b1adc8-187b-4662-b11e-c6ad31564ebf-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d5b1adc8-187b-4662-b11e-c6ad31564ebf" (UID: "d5b1adc8-187b-4662-b11e-c6ad31564ebf"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:30:38 crc kubenswrapper[4830]: I0227 16:30:38.947071 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5b1adc8-187b-4662-b11e-c6ad31564ebf-kube-api-access-2mwr9" (OuterVolumeSpecName: "kube-api-access-2mwr9") pod "d5b1adc8-187b-4662-b11e-c6ad31564ebf" (UID: "d5b1adc8-187b-4662-b11e-c6ad31564ebf"). InnerVolumeSpecName "kube-api-access-2mwr9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:30:39 crc kubenswrapper[4830]: I0227 16:30:39.043237 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2mwr9\" (UniqueName: \"kubernetes.io/projected/d5b1adc8-187b-4662-b11e-c6ad31564ebf-kube-api-access-2mwr9\") on node \"crc\" DevicePath \"\"" Feb 27 16:30:39 crc kubenswrapper[4830]: I0227 16:30:39.043270 4830 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d5b1adc8-187b-4662-b11e-c6ad31564ebf-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 16:30:39 crc kubenswrapper[4830]: I0227 16:30:39.063193 4830 scope.go:117] "RemoveContainer" containerID="427efb10b90dced1a6f6d81475fe71ba7d102b5583d7add27988e759bbb7b566" Feb 27 16:30:39 crc kubenswrapper[4830]: I0227 16:30:39.149145 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-sd6bv" Feb 27 16:30:39 crc kubenswrapper[4830]: I0227 16:30:39.156890 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 27 16:30:39 crc kubenswrapper[4830]: I0227 16:30:39.248522 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-krhd5\" (UniqueName: \"kubernetes.io/projected/bb4fe631-52f0-445f-9e4c-90f4137bdba6-kube-api-access-krhd5\") pod \"bb4fe631-52f0-445f-9e4c-90f4137bdba6\" (UID: \"bb4fe631-52f0-445f-9e4c-90f4137bdba6\") " Feb 27 16:30:39 crc kubenswrapper[4830]: I0227 16:30:39.248589 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb4fe631-52f0-445f-9e4c-90f4137bdba6-scripts\") pod \"bb4fe631-52f0-445f-9e4c-90f4137bdba6\" (UID: \"bb4fe631-52f0-445f-9e4c-90f4137bdba6\") " Feb 27 16:30:39 crc kubenswrapper[4830]: I0227 16:30:39.248707 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26e38f3b-1dee-4203-bd8c-5c4ce7d29b0d-operator-scripts\") pod \"26e38f3b-1dee-4203-bd8c-5c4ce7d29b0d\" (UID: \"26e38f3b-1dee-4203-bd8c-5c4ce7d29b0d\") " Feb 27 16:30:39 crc kubenswrapper[4830]: I0227 16:30:39.248758 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb4fe631-52f0-445f-9e4c-90f4137bdba6-logs\") pod \"bb4fe631-52f0-445f-9e4c-90f4137bdba6\" (UID: \"bb4fe631-52f0-445f-9e4c-90f4137bdba6\") " Feb 27 16:30:39 crc kubenswrapper[4830]: I0227 16:30:39.248780 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb4fe631-52f0-445f-9e4c-90f4137bdba6-config-data\") pod \"bb4fe631-52f0-445f-9e4c-90f4137bdba6\" (UID: \"bb4fe631-52f0-445f-9e4c-90f4137bdba6\") " Feb 27 16:30:39 crc kubenswrapper[4830]: I0227 16:30:39.248849 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/bb4fe631-52f0-445f-9e4c-90f4137bdba6-public-tls-certs\") pod \"bb4fe631-52f0-445f-9e4c-90f4137bdba6\" (UID: \"bb4fe631-52f0-445f-9e4c-90f4137bdba6\") " Feb 27 16:30:39 crc kubenswrapper[4830]: I0227 16:30:39.248898 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb4fe631-52f0-445f-9e4c-90f4137bdba6-combined-ca-bundle\") pod \"bb4fe631-52f0-445f-9e4c-90f4137bdba6\" (UID: \"bb4fe631-52f0-445f-9e4c-90f4137bdba6\") " Feb 27 16:30:39 crc kubenswrapper[4830]: I0227 16:30:39.248955 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bb4fe631-52f0-445f-9e4c-90f4137bdba6-httpd-run\") pod \"bb4fe631-52f0-445f-9e4c-90f4137bdba6\" (UID: \"bb4fe631-52f0-445f-9e4c-90f4137bdba6\") " Feb 27 16:30:39 crc kubenswrapper[4830]: I0227 16:30:39.249016 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mkthj\" (UniqueName: \"kubernetes.io/projected/26e38f3b-1dee-4203-bd8c-5c4ce7d29b0d-kube-api-access-mkthj\") pod \"26e38f3b-1dee-4203-bd8c-5c4ce7d29b0d\" (UID: \"26e38f3b-1dee-4203-bd8c-5c4ce7d29b0d\") " Feb 27 16:30:39 crc kubenswrapper[4830]: I0227 16:30:39.249048 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"bb4fe631-52f0-445f-9e4c-90f4137bdba6\" (UID: \"bb4fe631-52f0-445f-9e4c-90f4137bdba6\") " Feb 27 16:30:39 crc kubenswrapper[4830]: I0227 16:30:39.251245 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26e38f3b-1dee-4203-bd8c-5c4ce7d29b0d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "26e38f3b-1dee-4203-bd8c-5c4ce7d29b0d" (UID: "26e38f3b-1dee-4203-bd8c-5c4ce7d29b0d"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:30:39 crc kubenswrapper[4830]: I0227 16:30:39.252191 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb4fe631-52f0-445f-9e4c-90f4137bdba6-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "bb4fe631-52f0-445f-9e4c-90f4137bdba6" (UID: "bb4fe631-52f0-445f-9e4c-90f4137bdba6"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 16:30:39 crc kubenswrapper[4830]: I0227 16:30:39.252204 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb4fe631-52f0-445f-9e4c-90f4137bdba6-logs" (OuterVolumeSpecName: "logs") pod "bb4fe631-52f0-445f-9e4c-90f4137bdba6" (UID: "bb4fe631-52f0-445f-9e4c-90f4137bdba6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 16:30:39 crc kubenswrapper[4830]: I0227 16:30:39.255076 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb4fe631-52f0-445f-9e4c-90f4137bdba6-scripts" (OuterVolumeSpecName: "scripts") pod "bb4fe631-52f0-445f-9e4c-90f4137bdba6" (UID: "bb4fe631-52f0-445f-9e4c-90f4137bdba6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:30:39 crc kubenswrapper[4830]: I0227 16:30:39.258089 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26e38f3b-1dee-4203-bd8c-5c4ce7d29b0d-kube-api-access-mkthj" (OuterVolumeSpecName: "kube-api-access-mkthj") pod "26e38f3b-1dee-4203-bd8c-5c4ce7d29b0d" (UID: "26e38f3b-1dee-4203-bd8c-5c4ce7d29b0d"). InnerVolumeSpecName "kube-api-access-mkthj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:30:39 crc kubenswrapper[4830]: I0227 16:30:39.258404 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb4fe631-52f0-445f-9e4c-90f4137bdba6-kube-api-access-krhd5" (OuterVolumeSpecName: "kube-api-access-krhd5") pod "bb4fe631-52f0-445f-9e4c-90f4137bdba6" (UID: "bb4fe631-52f0-445f-9e4c-90f4137bdba6"). InnerVolumeSpecName "kube-api-access-krhd5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:30:39 crc kubenswrapper[4830]: I0227 16:30:39.265009 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "glance") pod "bb4fe631-52f0-445f-9e4c-90f4137bdba6" (UID: "bb4fe631-52f0-445f-9e4c-90f4137bdba6"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 27 16:30:39 crc kubenswrapper[4830]: I0227 16:30:39.276252 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb4fe631-52f0-445f-9e4c-90f4137bdba6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bb4fe631-52f0-445f-9e4c-90f4137bdba6" (UID: "bb4fe631-52f0-445f-9e4c-90f4137bdba6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:30:39 crc kubenswrapper[4830]: I0227 16:30:39.320067 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb4fe631-52f0-445f-9e4c-90f4137bdba6-config-data" (OuterVolumeSpecName: "config-data") pod "bb4fe631-52f0-445f-9e4c-90f4137bdba6" (UID: "bb4fe631-52f0-445f-9e4c-90f4137bdba6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:30:39 crc kubenswrapper[4830]: I0227 16:30:39.339585 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb4fe631-52f0-445f-9e4c-90f4137bdba6-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "bb4fe631-52f0-445f-9e4c-90f4137bdba6" (UID: "bb4fe631-52f0-445f-9e4c-90f4137bdba6"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:30:39 crc kubenswrapper[4830]: I0227 16:30:39.351152 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mkthj\" (UniqueName: \"kubernetes.io/projected/26e38f3b-1dee-4203-bd8c-5c4ce7d29b0d-kube-api-access-mkthj\") on node \"crc\" DevicePath \"\"" Feb 27 16:30:39 crc kubenswrapper[4830]: I0227 16:30:39.351275 4830 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Feb 27 16:30:39 crc kubenswrapper[4830]: I0227 16:30:39.351369 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-krhd5\" (UniqueName: \"kubernetes.io/projected/bb4fe631-52f0-445f-9e4c-90f4137bdba6-kube-api-access-krhd5\") on node \"crc\" DevicePath \"\"" Feb 27 16:30:39 crc kubenswrapper[4830]: I0227 16:30:39.351438 4830 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb4fe631-52f0-445f-9e4c-90f4137bdba6-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 16:30:39 crc kubenswrapper[4830]: I0227 16:30:39.351500 4830 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26e38f3b-1dee-4203-bd8c-5c4ce7d29b0d-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 16:30:39 crc kubenswrapper[4830]: I0227 16:30:39.351556 4830 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/bb4fe631-52f0-445f-9e4c-90f4137bdba6-logs\") on node \"crc\" DevicePath \"\"" Feb 27 16:30:39 crc kubenswrapper[4830]: I0227 16:30:39.351609 4830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb4fe631-52f0-445f-9e4c-90f4137bdba6-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 16:30:39 crc kubenswrapper[4830]: I0227 16:30:39.351667 4830 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb4fe631-52f0-445f-9e4c-90f4137bdba6-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 27 16:30:39 crc kubenswrapper[4830]: I0227 16:30:39.351726 4830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb4fe631-52f0-445f-9e4c-90f4137bdba6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 16:30:39 crc kubenswrapper[4830]: I0227 16:30:39.351782 4830 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bb4fe631-52f0-445f-9e4c-90f4137bdba6-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 27 16:30:39 crc kubenswrapper[4830]: I0227 16:30:39.375878 4830 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Feb 27 16:30:39 crc kubenswrapper[4830]: I0227 16:30:39.456905 4830 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Feb 27 16:30:39 crc kubenswrapper[4830]: I0227 16:30:39.589687 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-2jtqk" Feb 27 16:30:39 crc kubenswrapper[4830]: I0227 16:30:39.589678 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-2jtqk" event={"ID":"d5b1adc8-187b-4662-b11e-c6ad31564ebf","Type":"ContainerDied","Data":"954054ace9ad6d3b55aabcdf92a2f9ede535001dc756532b178cefb7c39e7814"} Feb 27 16:30:39 crc kubenswrapper[4830]: I0227 16:30:39.589835 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="954054ace9ad6d3b55aabcdf92a2f9ede535001dc756532b178cefb7c39e7814" Feb 27 16:30:39 crc kubenswrapper[4830]: I0227 16:30:39.591507 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-sd6bv" event={"ID":"26e38f3b-1dee-4203-bd8c-5c4ce7d29b0d","Type":"ContainerDied","Data":"23d702627bec0c957633d74de5139c8bdce69fddcea494d3e810702c805fd873"} Feb 27 16:30:39 crc kubenswrapper[4830]: I0227 16:30:39.591564 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="23d702627bec0c957633d74de5139c8bdce69fddcea494d3e810702c805fd873" Feb 27 16:30:39 crc kubenswrapper[4830]: I0227 16:30:39.591526 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-sd6bv" Feb 27 16:30:39 crc kubenswrapper[4830]: I0227 16:30:39.593862 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"bb4fe631-52f0-445f-9e4c-90f4137bdba6","Type":"ContainerDied","Data":"4a59a842baf7a5998b965141f2b75707ae5e894067ec0fa43a6b5cf53db034ff"} Feb 27 16:30:39 crc kubenswrapper[4830]: I0227 16:30:39.593912 4830 scope.go:117] "RemoveContainer" containerID="3976783388fcdce522b0afa5b0ca99a1cf893c91a02d58f8d8a5a9a4a19a9296" Feb 27 16:30:39 crc kubenswrapper[4830]: I0227 16:30:39.594028 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 27 16:30:39 crc kubenswrapper[4830]: I0227 16:30:39.595644 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"814a49d3-5ece-4609-97f7-745b3843d2e3","Type":"ContainerStarted","Data":"efa64dfe4067a783fbcc157971a9465a93d065c612dbdb6cecefa3f2c9a3799c"} Feb 27 16:30:39 crc kubenswrapper[4830]: I0227 16:30:39.643904 4830 scope.go:117] "RemoveContainer" containerID="1f34e6f642aea7d4125be18b29bd3b54dedeb193e281e665389dc545b3650026" Feb 27 16:30:39 crc kubenswrapper[4830]: I0227 16:30:39.653608 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 27 16:30:39 crc kubenswrapper[4830]: I0227 16:30:39.665205 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 27 16:30:39 crc kubenswrapper[4830]: I0227 16:30:39.673328 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 27 16:30:39 crc kubenswrapper[4830]: E0227 16:30:39.673697 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5b1adc8-187b-4662-b11e-c6ad31564ebf" containerName="mariadb-database-create" Feb 27 16:30:39 crc kubenswrapper[4830]: I0227 16:30:39.673715 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5b1adc8-187b-4662-b11e-c6ad31564ebf" containerName="mariadb-database-create" Feb 27 16:30:39 crc kubenswrapper[4830]: E0227 16:30:39.673738 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb4fe631-52f0-445f-9e4c-90f4137bdba6" containerName="glance-httpd" Feb 27 16:30:39 crc kubenswrapper[4830]: I0227 16:30:39.673744 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb4fe631-52f0-445f-9e4c-90f4137bdba6" containerName="glance-httpd" Feb 27 16:30:39 crc kubenswrapper[4830]: E0227 16:30:39.673760 4830 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="26e38f3b-1dee-4203-bd8c-5c4ce7d29b0d" containerName="mariadb-database-create" Feb 27 16:30:39 crc kubenswrapper[4830]: I0227 16:30:39.673766 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="26e38f3b-1dee-4203-bd8c-5c4ce7d29b0d" containerName="mariadb-database-create" Feb 27 16:30:39 crc kubenswrapper[4830]: E0227 16:30:39.673780 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb4fe631-52f0-445f-9e4c-90f4137bdba6" containerName="glance-log" Feb 27 16:30:39 crc kubenswrapper[4830]: I0227 16:30:39.673785 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb4fe631-52f0-445f-9e4c-90f4137bdba6" containerName="glance-log" Feb 27 16:30:39 crc kubenswrapper[4830]: I0227 16:30:39.673955 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb4fe631-52f0-445f-9e4c-90f4137bdba6" containerName="glance-httpd" Feb 27 16:30:39 crc kubenswrapper[4830]: I0227 16:30:39.673965 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="26e38f3b-1dee-4203-bd8c-5c4ce7d29b0d" containerName="mariadb-database-create" Feb 27 16:30:39 crc kubenswrapper[4830]: I0227 16:30:39.673978 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb4fe631-52f0-445f-9e4c-90f4137bdba6" containerName="glance-log" Feb 27 16:30:39 crc kubenswrapper[4830]: I0227 16:30:39.673989 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5b1adc8-187b-4662-b11e-c6ad31564ebf" containerName="mariadb-database-create" Feb 27 16:30:39 crc kubenswrapper[4830]: I0227 16:30:39.675132 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 27 16:30:39 crc kubenswrapper[4830]: I0227 16:30:39.677803 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 27 16:30:39 crc kubenswrapper[4830]: I0227 16:30:39.679741 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 27 16:30:39 crc kubenswrapper[4830]: I0227 16:30:39.680808 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 27 16:30:39 crc kubenswrapper[4830]: I0227 16:30:39.865225 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8d4cd44-9972-445e-bac3-63441b6fa4cc-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"d8d4cd44-9972-445e-bac3-63441b6fa4cc\") " pod="openstack/glance-default-external-api-0" Feb 27 16:30:39 crc kubenswrapper[4830]: I0227 16:30:39.865593 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d8d4cd44-9972-445e-bac3-63441b6fa4cc-scripts\") pod \"glance-default-external-api-0\" (UID: \"d8d4cd44-9972-445e-bac3-63441b6fa4cc\") " pod="openstack/glance-default-external-api-0" Feb 27 16:30:39 crc kubenswrapper[4830]: I0227 16:30:39.865620 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d8d4cd44-9972-445e-bac3-63441b6fa4cc-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"d8d4cd44-9972-445e-bac3-63441b6fa4cc\") " pod="openstack/glance-default-external-api-0" Feb 27 16:30:39 crc kubenswrapper[4830]: I0227 16:30:39.865658 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/d8d4cd44-9972-445e-bac3-63441b6fa4cc-config-data\") pod \"glance-default-external-api-0\" (UID: \"d8d4cd44-9972-445e-bac3-63441b6fa4cc\") " pod="openstack/glance-default-external-api-0" Feb 27 16:30:39 crc kubenswrapper[4830]: I0227 16:30:39.865690 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtk2v\" (UniqueName: \"kubernetes.io/projected/d8d4cd44-9972-445e-bac3-63441b6fa4cc-kube-api-access-mtk2v\") pod \"glance-default-external-api-0\" (UID: \"d8d4cd44-9972-445e-bac3-63441b6fa4cc\") " pod="openstack/glance-default-external-api-0" Feb 27 16:30:39 crc kubenswrapper[4830]: I0227 16:30:39.865711 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8d4cd44-9972-445e-bac3-63441b6fa4cc-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"d8d4cd44-9972-445e-bac3-63441b6fa4cc\") " pod="openstack/glance-default-external-api-0" Feb 27 16:30:39 crc kubenswrapper[4830]: I0227 16:30:39.865748 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d8d4cd44-9972-445e-bac3-63441b6fa4cc-logs\") pod \"glance-default-external-api-0\" (UID: \"d8d4cd44-9972-445e-bac3-63441b6fa4cc\") " pod="openstack/glance-default-external-api-0" Feb 27 16:30:39 crc kubenswrapper[4830]: I0227 16:30:39.865766 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"d8d4cd44-9972-445e-bac3-63441b6fa4cc\") " pod="openstack/glance-default-external-api-0" Feb 27 16:30:39 crc kubenswrapper[4830]: I0227 16:30:39.974592 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-internal-api-0" 
podUID="ab89052d-19a3-4bee-8e41-3fc364424b47" containerName="glance-log" probeResult="failure" output="Get \"https://10.217.0.155:9292/healthcheck\": dial tcp 10.217.0.155:9292: connect: connection refused" Feb 27 16:30:39 crc kubenswrapper[4830]: I0227 16:30:39.974711 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="ab89052d-19a3-4bee-8e41-3fc364424b47" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.217.0.155:9292/healthcheck\": dial tcp 10.217.0.155:9292: connect: connection refused" Feb 27 16:30:39 crc kubenswrapper[4830]: I0227 16:30:39.975802 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8d4cd44-9972-445e-bac3-63441b6fa4cc-config-data\") pod \"glance-default-external-api-0\" (UID: \"d8d4cd44-9972-445e-bac3-63441b6fa4cc\") " pod="openstack/glance-default-external-api-0" Feb 27 16:30:39 crc kubenswrapper[4830]: I0227 16:30:39.975871 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mtk2v\" (UniqueName: \"kubernetes.io/projected/d8d4cd44-9972-445e-bac3-63441b6fa4cc-kube-api-access-mtk2v\") pod \"glance-default-external-api-0\" (UID: \"d8d4cd44-9972-445e-bac3-63441b6fa4cc\") " pod="openstack/glance-default-external-api-0" Feb 27 16:30:39 crc kubenswrapper[4830]: I0227 16:30:39.975894 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8d4cd44-9972-445e-bac3-63441b6fa4cc-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"d8d4cd44-9972-445e-bac3-63441b6fa4cc\") " pod="openstack/glance-default-external-api-0" Feb 27 16:30:39 crc kubenswrapper[4830]: I0227 16:30:39.975933 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/d8d4cd44-9972-445e-bac3-63441b6fa4cc-logs\") pod \"glance-default-external-api-0\" (UID: \"d8d4cd44-9972-445e-bac3-63441b6fa4cc\") " pod="openstack/glance-default-external-api-0" Feb 27 16:30:39 crc kubenswrapper[4830]: I0227 16:30:39.975969 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"d8d4cd44-9972-445e-bac3-63441b6fa4cc\") " pod="openstack/glance-default-external-api-0" Feb 27 16:30:39 crc kubenswrapper[4830]: I0227 16:30:39.976030 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8d4cd44-9972-445e-bac3-63441b6fa4cc-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"d8d4cd44-9972-445e-bac3-63441b6fa4cc\") " pod="openstack/glance-default-external-api-0" Feb 27 16:30:39 crc kubenswrapper[4830]: I0227 16:30:39.976071 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d8d4cd44-9972-445e-bac3-63441b6fa4cc-scripts\") pod \"glance-default-external-api-0\" (UID: \"d8d4cd44-9972-445e-bac3-63441b6fa4cc\") " pod="openstack/glance-default-external-api-0" Feb 27 16:30:39 crc kubenswrapper[4830]: I0227 16:30:39.976096 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d8d4cd44-9972-445e-bac3-63441b6fa4cc-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"d8d4cd44-9972-445e-bac3-63441b6fa4cc\") " pod="openstack/glance-default-external-api-0" Feb 27 16:30:39 crc kubenswrapper[4830]: I0227 16:30:39.976556 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d8d4cd44-9972-445e-bac3-63441b6fa4cc-httpd-run\") pod \"glance-default-external-api-0\" 
(UID: \"d8d4cd44-9972-445e-bac3-63441b6fa4cc\") " pod="openstack/glance-default-external-api-0" Feb 27 16:30:39 crc kubenswrapper[4830]: I0227 16:30:39.982166 4830 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"d8d4cd44-9972-445e-bac3-63441b6fa4cc\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/glance-default-external-api-0" Feb 27 16:30:39 crc kubenswrapper[4830]: I0227 16:30:39.982933 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d8d4cd44-9972-445e-bac3-63441b6fa4cc-logs\") pod \"glance-default-external-api-0\" (UID: \"d8d4cd44-9972-445e-bac3-63441b6fa4cc\") " pod="openstack/glance-default-external-api-0" Feb 27 16:30:39 crc kubenswrapper[4830]: I0227 16:30:39.987348 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d8d4cd44-9972-445e-bac3-63441b6fa4cc-scripts\") pod \"glance-default-external-api-0\" (UID: \"d8d4cd44-9972-445e-bac3-63441b6fa4cc\") " pod="openstack/glance-default-external-api-0" Feb 27 16:30:39 crc kubenswrapper[4830]: I0227 16:30:39.987348 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8d4cd44-9972-445e-bac3-63441b6fa4cc-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"d8d4cd44-9972-445e-bac3-63441b6fa4cc\") " pod="openstack/glance-default-external-api-0" Feb 27 16:30:39 crc kubenswrapper[4830]: I0227 16:30:39.987461 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8d4cd44-9972-445e-bac3-63441b6fa4cc-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"d8d4cd44-9972-445e-bac3-63441b6fa4cc\") " 
pod="openstack/glance-default-external-api-0" Feb 27 16:30:40 crc kubenswrapper[4830]: I0227 16:30:39.998193 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8d4cd44-9972-445e-bac3-63441b6fa4cc-config-data\") pod \"glance-default-external-api-0\" (UID: \"d8d4cd44-9972-445e-bac3-63441b6fa4cc\") " pod="openstack/glance-default-external-api-0" Feb 27 16:30:40 crc kubenswrapper[4830]: I0227 16:30:40.003297 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mtk2v\" (UniqueName: \"kubernetes.io/projected/d8d4cd44-9972-445e-bac3-63441b6fa4cc-kube-api-access-mtk2v\") pod \"glance-default-external-api-0\" (UID: \"d8d4cd44-9972-445e-bac3-63441b6fa4cc\") " pod="openstack/glance-default-external-api-0" Feb 27 16:30:40 crc kubenswrapper[4830]: I0227 16:30:40.036668 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"d8d4cd44-9972-445e-bac3-63441b6fa4cc\") " pod="openstack/glance-default-external-api-0" Feb 27 16:30:40 crc kubenswrapper[4830]: I0227 16:30:40.176347 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-29fd-account-create-update-n79tl" Feb 27 16:30:40 crc kubenswrapper[4830]: I0227 16:30:40.181560 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f6d3ef08-a386-4c3a-aea1-7870a4192822-operator-scripts\") pod \"f6d3ef08-a386-4c3a-aea1-7870a4192822\" (UID: \"f6d3ef08-a386-4c3a-aea1-7870a4192822\") " Feb 27 16:30:40 crc kubenswrapper[4830]: I0227 16:30:40.181643 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s7ghq\" (UniqueName: \"kubernetes.io/projected/f6d3ef08-a386-4c3a-aea1-7870a4192822-kube-api-access-s7ghq\") pod \"f6d3ef08-a386-4c3a-aea1-7870a4192822\" (UID: \"f6d3ef08-a386-4c3a-aea1-7870a4192822\") " Feb 27 16:30:40 crc kubenswrapper[4830]: I0227 16:30:40.185472 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6d3ef08-a386-4c3a-aea1-7870a4192822-kube-api-access-s7ghq" (OuterVolumeSpecName: "kube-api-access-s7ghq") pod "f6d3ef08-a386-4c3a-aea1-7870a4192822" (UID: "f6d3ef08-a386-4c3a-aea1-7870a4192822"). InnerVolumeSpecName "kube-api-access-s7ghq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:30:40 crc kubenswrapper[4830]: I0227 16:30:40.185798 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f6d3ef08-a386-4c3a-aea1-7870a4192822-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f6d3ef08-a386-4c3a-aea1-7870a4192822" (UID: "f6d3ef08-a386-4c3a-aea1-7870a4192822"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:30:40 crc kubenswrapper[4830]: I0227 16:30:40.210505 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-5e39-account-create-update-hqzqb" Feb 27 16:30:40 crc kubenswrapper[4830]: I0227 16:30:40.271056 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-c219-account-create-update-zndsj" Feb 27 16:30:40 crc kubenswrapper[4830]: I0227 16:30:40.271659 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-drqxj" Feb 27 16:30:40 crc kubenswrapper[4830]: I0227 16:30:40.282466 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8dkx5\" (UniqueName: \"kubernetes.io/projected/e59291fe-6cc6-4fda-870b-d3842d9b65ee-kube-api-access-8dkx5\") pod \"e59291fe-6cc6-4fda-870b-d3842d9b65ee\" (UID: \"e59291fe-6cc6-4fda-870b-d3842d9b65ee\") " Feb 27 16:30:40 crc kubenswrapper[4830]: I0227 16:30:40.282524 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7d45526f-ecc3-4132-bdd0-159572980ba7-operator-scripts\") pod \"7d45526f-ecc3-4132-bdd0-159572980ba7\" (UID: \"7d45526f-ecc3-4132-bdd0-159572980ba7\") " Feb 27 16:30:40 crc kubenswrapper[4830]: I0227 16:30:40.282546 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e59291fe-6cc6-4fda-870b-d3842d9b65ee-operator-scripts\") pod \"e59291fe-6cc6-4fda-870b-d3842d9b65ee\" (UID: \"e59291fe-6cc6-4fda-870b-d3842d9b65ee\") " Feb 27 16:30:40 crc kubenswrapper[4830]: I0227 16:30:40.282574 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/77b4533c-3623-4d0c-834c-dc2329c0ffc8-operator-scripts\") pod \"77b4533c-3623-4d0c-834c-dc2329c0ffc8\" (UID: \"77b4533c-3623-4d0c-834c-dc2329c0ffc8\") " Feb 27 16:30:40 crc kubenswrapper[4830]: I0227 16:30:40.282636 4830 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qbqhn\" (UniqueName: \"kubernetes.io/projected/77b4533c-3623-4d0c-834c-dc2329c0ffc8-kube-api-access-qbqhn\") pod \"77b4533c-3623-4d0c-834c-dc2329c0ffc8\" (UID: \"77b4533c-3623-4d0c-834c-dc2329c0ffc8\") " Feb 27 16:30:40 crc kubenswrapper[4830]: I0227 16:30:40.282669 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vmvkb\" (UniqueName: \"kubernetes.io/projected/7d45526f-ecc3-4132-bdd0-159572980ba7-kube-api-access-vmvkb\") pod \"7d45526f-ecc3-4132-bdd0-159572980ba7\" (UID: \"7d45526f-ecc3-4132-bdd0-159572980ba7\") " Feb 27 16:30:40 crc kubenswrapper[4830]: I0227 16:30:40.283187 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e59291fe-6cc6-4fda-870b-d3842d9b65ee-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e59291fe-6cc6-4fda-870b-d3842d9b65ee" (UID: "e59291fe-6cc6-4fda-870b-d3842d9b65ee"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:30:40 crc kubenswrapper[4830]: I0227 16:30:40.283274 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d45526f-ecc3-4132-bdd0-159572980ba7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7d45526f-ecc3-4132-bdd0-159572980ba7" (UID: "7d45526f-ecc3-4132-bdd0-159572980ba7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:30:40 crc kubenswrapper[4830]: I0227 16:30:40.283315 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/77b4533c-3623-4d0c-834c-dc2329c0ffc8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "77b4533c-3623-4d0c-834c-dc2329c0ffc8" (UID: "77b4533c-3623-4d0c-834c-dc2329c0ffc8"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:30:40 crc kubenswrapper[4830]: I0227 16:30:40.284804 4830 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f6d3ef08-a386-4c3a-aea1-7870a4192822-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 16:30:40 crc kubenswrapper[4830]: I0227 16:30:40.284822 4830 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7d45526f-ecc3-4132-bdd0-159572980ba7-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 16:30:40 crc kubenswrapper[4830]: I0227 16:30:40.284830 4830 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e59291fe-6cc6-4fda-870b-d3842d9b65ee-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 16:30:40 crc kubenswrapper[4830]: I0227 16:30:40.284839 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s7ghq\" (UniqueName: \"kubernetes.io/projected/f6d3ef08-a386-4c3a-aea1-7870a4192822-kube-api-access-s7ghq\") on node \"crc\" DevicePath \"\"" Feb 27 16:30:40 crc kubenswrapper[4830]: I0227 16:30:40.284848 4830 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/77b4533c-3623-4d0c-834c-dc2329c0ffc8-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 16:30:40 crc kubenswrapper[4830]: I0227 16:30:40.285975 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e59291fe-6cc6-4fda-870b-d3842d9b65ee-kube-api-access-8dkx5" (OuterVolumeSpecName: "kube-api-access-8dkx5") pod "e59291fe-6cc6-4fda-870b-d3842d9b65ee" (UID: "e59291fe-6cc6-4fda-870b-d3842d9b65ee"). InnerVolumeSpecName "kube-api-access-8dkx5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:30:40 crc kubenswrapper[4830]: I0227 16:30:40.286641 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d45526f-ecc3-4132-bdd0-159572980ba7-kube-api-access-vmvkb" (OuterVolumeSpecName: "kube-api-access-vmvkb") pod "7d45526f-ecc3-4132-bdd0-159572980ba7" (UID: "7d45526f-ecc3-4132-bdd0-159572980ba7"). InnerVolumeSpecName "kube-api-access-vmvkb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:30:40 crc kubenswrapper[4830]: I0227 16:30:40.287295 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77b4533c-3623-4d0c-834c-dc2329c0ffc8-kube-api-access-qbqhn" (OuterVolumeSpecName: "kube-api-access-qbqhn") pod "77b4533c-3623-4d0c-834c-dc2329c0ffc8" (UID: "77b4533c-3623-4d0c-834c-dc2329c0ffc8"). InnerVolumeSpecName "kube-api-access-qbqhn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:30:40 crc kubenswrapper[4830]: I0227 16:30:40.292043 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 27 16:30:40 crc kubenswrapper[4830]: I0227 16:30:40.386711 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qbqhn\" (UniqueName: \"kubernetes.io/projected/77b4533c-3623-4d0c-834c-dc2329c0ffc8-kube-api-access-qbqhn\") on node \"crc\" DevicePath \"\"" Feb 27 16:30:40 crc kubenswrapper[4830]: I0227 16:30:40.386745 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vmvkb\" (UniqueName: \"kubernetes.io/projected/7d45526f-ecc3-4132-bdd0-159572980ba7-kube-api-access-vmvkb\") on node \"crc\" DevicePath \"\"" Feb 27 16:30:40 crc kubenswrapper[4830]: I0227 16:30:40.386757 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8dkx5\" (UniqueName: \"kubernetes.io/projected/e59291fe-6cc6-4fda-870b-d3842d9b65ee-kube-api-access-8dkx5\") on node \"crc\" DevicePath \"\"" Feb 27 16:30:40 crc kubenswrapper[4830]: I0227 16:30:40.497012 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 27 16:30:40 crc kubenswrapper[4830]: I0227 16:30:40.589553 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab89052d-19a3-4bee-8e41-3fc364424b47-combined-ca-bundle\") pod \"ab89052d-19a3-4bee-8e41-3fc364424b47\" (UID: \"ab89052d-19a3-4bee-8e41-3fc364424b47\") " Feb 27 16:30:40 crc kubenswrapper[4830]: I0227 16:30:40.589976 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab89052d-19a3-4bee-8e41-3fc364424b47-logs\") pod \"ab89052d-19a3-4bee-8e41-3fc364424b47\" (UID: \"ab89052d-19a3-4bee-8e41-3fc364424b47\") " Feb 27 16:30:40 crc kubenswrapper[4830]: I0227 16:30:40.590015 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab89052d-19a3-4bee-8e41-3fc364424b47-internal-tls-certs\") pod \"ab89052d-19a3-4bee-8e41-3fc364424b47\" (UID: \"ab89052d-19a3-4bee-8e41-3fc364424b47\") " Feb 27 16:30:40 crc kubenswrapper[4830]: I0227 16:30:40.590073 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ab89052d-19a3-4bee-8e41-3fc364424b47\" (UID: \"ab89052d-19a3-4bee-8e41-3fc364424b47\") " Feb 27 16:30:40 crc kubenswrapper[4830]: I0227 16:30:40.590105 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bxg2h\" (UniqueName: \"kubernetes.io/projected/ab89052d-19a3-4bee-8e41-3fc364424b47-kube-api-access-bxg2h\") pod \"ab89052d-19a3-4bee-8e41-3fc364424b47\" (UID: \"ab89052d-19a3-4bee-8e41-3fc364424b47\") " Feb 27 16:30:40 crc kubenswrapper[4830]: I0227 16:30:40.590125 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/ab89052d-19a3-4bee-8e41-3fc364424b47-httpd-run\") pod \"ab89052d-19a3-4bee-8e41-3fc364424b47\" (UID: \"ab89052d-19a3-4bee-8e41-3fc364424b47\") " Feb 27 16:30:40 crc kubenswrapper[4830]: I0227 16:30:40.590144 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab89052d-19a3-4bee-8e41-3fc364424b47-scripts\") pod \"ab89052d-19a3-4bee-8e41-3fc364424b47\" (UID: \"ab89052d-19a3-4bee-8e41-3fc364424b47\") " Feb 27 16:30:40 crc kubenswrapper[4830]: I0227 16:30:40.590224 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab89052d-19a3-4bee-8e41-3fc364424b47-config-data\") pod \"ab89052d-19a3-4bee-8e41-3fc364424b47\" (UID: \"ab89052d-19a3-4bee-8e41-3fc364424b47\") " Feb 27 16:30:40 crc kubenswrapper[4830]: I0227 16:30:40.590514 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab89052d-19a3-4bee-8e41-3fc364424b47-logs" (OuterVolumeSpecName: "logs") pod "ab89052d-19a3-4bee-8e41-3fc364424b47" (UID: "ab89052d-19a3-4bee-8e41-3fc364424b47"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 16:30:40 crc kubenswrapper[4830]: I0227 16:30:40.590611 4830 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab89052d-19a3-4bee-8e41-3fc364424b47-logs\") on node \"crc\" DevicePath \"\"" Feb 27 16:30:40 crc kubenswrapper[4830]: I0227 16:30:40.590846 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab89052d-19a3-4bee-8e41-3fc364424b47-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "ab89052d-19a3-4bee-8e41-3fc364424b47" (UID: "ab89052d-19a3-4bee-8e41-3fc364424b47"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 16:30:40 crc kubenswrapper[4830]: I0227 16:30:40.594101 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab89052d-19a3-4bee-8e41-3fc364424b47-kube-api-access-bxg2h" (OuterVolumeSpecName: "kube-api-access-bxg2h") pod "ab89052d-19a3-4bee-8e41-3fc364424b47" (UID: "ab89052d-19a3-4bee-8e41-3fc364424b47"). InnerVolumeSpecName "kube-api-access-bxg2h". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:30:40 crc kubenswrapper[4830]: I0227 16:30:40.594125 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab89052d-19a3-4bee-8e41-3fc364424b47-scripts" (OuterVolumeSpecName: "scripts") pod "ab89052d-19a3-4bee-8e41-3fc364424b47" (UID: "ab89052d-19a3-4bee-8e41-3fc364424b47"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:30:40 crc kubenswrapper[4830]: I0227 16:30:40.600083 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "glance") pod "ab89052d-19a3-4bee-8e41-3fc364424b47" (UID: "ab89052d-19a3-4bee-8e41-3fc364424b47"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 27 16:30:40 crc kubenswrapper[4830]: I0227 16:30:40.628668 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-5e39-account-create-update-hqzqb" Feb 27 16:30:40 crc kubenswrapper[4830]: I0227 16:30:40.630794 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-5e39-account-create-update-hqzqb" event={"ID":"7d45526f-ecc3-4132-bdd0-159572980ba7","Type":"ContainerDied","Data":"4e66e151b56976b8af5c10290ae2f91a1f831b1d2bd4b38b4bcd427c9fb52c98"} Feb 27 16:30:40 crc kubenswrapper[4830]: I0227 16:30:40.630855 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4e66e151b56976b8af5c10290ae2f91a1f831b1d2bd4b38b4bcd427c9fb52c98" Feb 27 16:30:40 crc kubenswrapper[4830]: I0227 16:30:40.634853 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-29fd-account-create-update-n79tl" event={"ID":"f6d3ef08-a386-4c3a-aea1-7870a4192822","Type":"ContainerDied","Data":"6ebf6246f2697c4dac340bad991679c64dd584900807861a58dca067f8a139e5"} Feb 27 16:30:40 crc kubenswrapper[4830]: I0227 16:30:40.634900 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6ebf6246f2697c4dac340bad991679c64dd584900807861a58dca067f8a139e5" Feb 27 16:30:40 crc kubenswrapper[4830]: I0227 16:30:40.635009 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-29fd-account-create-update-n79tl" Feb 27 16:30:40 crc kubenswrapper[4830]: I0227 16:30:40.635402 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab89052d-19a3-4bee-8e41-3fc364424b47-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ab89052d-19a3-4bee-8e41-3fc364424b47" (UID: "ab89052d-19a3-4bee-8e41-3fc364424b47"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:30:40 crc kubenswrapper[4830]: I0227 16:30:40.638213 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-c219-account-create-update-zndsj" event={"ID":"e59291fe-6cc6-4fda-870b-d3842d9b65ee","Type":"ContainerDied","Data":"fce0ae0eb3a7f6a10482eb4069abe5a6bb1aa9480ff0da77263b13819cbb8095"} Feb 27 16:30:40 crc kubenswrapper[4830]: I0227 16:30:40.638242 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fce0ae0eb3a7f6a10482eb4069abe5a6bb1aa9480ff0da77263b13819cbb8095" Feb 27 16:30:40 crc kubenswrapper[4830]: I0227 16:30:40.638282 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-c219-account-create-update-zndsj" Feb 27 16:30:40 crc kubenswrapper[4830]: I0227 16:30:40.647422 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-c844968fb-vzqlt" Feb 27 16:30:40 crc kubenswrapper[4830]: I0227 16:30:40.651199 4830 generic.go:334] "Generic (PLEG): container finished" podID="ab89052d-19a3-4bee-8e41-3fc364424b47" containerID="d3f9f381fd837c86af349bc5a2ece3fe00a4310cf835db8ff2d27eaf11394c2f" exitCode=0 Feb 27 16:30:40 crc kubenswrapper[4830]: I0227 16:30:40.651308 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ab89052d-19a3-4bee-8e41-3fc364424b47","Type":"ContainerDied","Data":"d3f9f381fd837c86af349bc5a2ece3fe00a4310cf835db8ff2d27eaf11394c2f"} Feb 27 16:30:40 crc kubenswrapper[4830]: I0227 16:30:40.651341 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ab89052d-19a3-4bee-8e41-3fc364424b47","Type":"ContainerDied","Data":"685755162edab3a265bcc645b673533a44e0aacb72e710cd701790c8efb9a257"} Feb 27 16:30:40 crc kubenswrapper[4830]: I0227 16:30:40.651392 4830 scope.go:117] "RemoveContainer" 
containerID="d3f9f381fd837c86af349bc5a2ece3fe00a4310cf835db8ff2d27eaf11394c2f" Feb 27 16:30:40 crc kubenswrapper[4830]: I0227 16:30:40.652231 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 27 16:30:40 crc kubenswrapper[4830]: I0227 16:30:40.658488 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"814a49d3-5ece-4609-97f7-745b3843d2e3","Type":"ContainerStarted","Data":"4fddcdedfa5cede42a0d195f0456a21bf7ccf689dd2c45dc7aede513c11e4192"} Feb 27 16:30:40 crc kubenswrapper[4830]: I0227 16:30:40.669329 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab89052d-19a3-4bee-8e41-3fc364424b47-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "ab89052d-19a3-4bee-8e41-3fc364424b47" (UID: "ab89052d-19a3-4bee-8e41-3fc364424b47"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:30:40 crc kubenswrapper[4830]: I0227 16:30:40.677043 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab89052d-19a3-4bee-8e41-3fc364424b47-config-data" (OuterVolumeSpecName: "config-data") pod "ab89052d-19a3-4bee-8e41-3fc364424b47" (UID: "ab89052d-19a3-4bee-8e41-3fc364424b47"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:30:40 crc kubenswrapper[4830]: I0227 16:30:40.684611 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-drqxj" event={"ID":"77b4533c-3623-4d0c-834c-dc2329c0ffc8","Type":"ContainerDied","Data":"9b30395b4c26d58756bd9b74088f3e292bf04c67dfa8be6c0e6e5b715cd49f01"} Feb 27 16:30:40 crc kubenswrapper[4830]: I0227 16:30:40.684655 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9b30395b4c26d58756bd9b74088f3e292bf04c67dfa8be6c0e6e5b715cd49f01" Feb 27 16:30:40 crc kubenswrapper[4830]: I0227 16:30:40.684711 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-drqxj" Feb 27 16:30:40 crc kubenswrapper[4830]: I0227 16:30:40.685112 4830 scope.go:117] "RemoveContainer" containerID="b45a73023f7c179a20bfe1c108b5dba26f62e40ce6291f0fc63775365ac5f555" Feb 27 16:30:40 crc kubenswrapper[4830]: I0227 16:30:40.692254 4830 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab89052d-19a3-4bee-8e41-3fc364424b47-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 27 16:30:40 crc kubenswrapper[4830]: I0227 16:30:40.692292 4830 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Feb 27 16:30:40 crc kubenswrapper[4830]: I0227 16:30:40.692301 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bxg2h\" (UniqueName: \"kubernetes.io/projected/ab89052d-19a3-4bee-8e41-3fc364424b47-kube-api-access-bxg2h\") on node \"crc\" DevicePath \"\"" Feb 27 16:30:40 crc kubenswrapper[4830]: I0227 16:30:40.692311 4830 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ab89052d-19a3-4bee-8e41-3fc364424b47-httpd-run\") on node \"crc\" 
DevicePath \"\"" Feb 27 16:30:40 crc kubenswrapper[4830]: I0227 16:30:40.692325 4830 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab89052d-19a3-4bee-8e41-3fc364424b47-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 16:30:40 crc kubenswrapper[4830]: I0227 16:30:40.692333 4830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab89052d-19a3-4bee-8e41-3fc364424b47-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 16:30:40 crc kubenswrapper[4830]: I0227 16:30:40.692341 4830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab89052d-19a3-4bee-8e41-3fc364424b47-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 16:30:40 crc kubenswrapper[4830]: I0227 16:30:40.710372 4830 scope.go:117] "RemoveContainer" containerID="d3f9f381fd837c86af349bc5a2ece3fe00a4310cf835db8ff2d27eaf11394c2f" Feb 27 16:30:40 crc kubenswrapper[4830]: E0227 16:30:40.711282 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d3f9f381fd837c86af349bc5a2ece3fe00a4310cf835db8ff2d27eaf11394c2f\": container with ID starting with d3f9f381fd837c86af349bc5a2ece3fe00a4310cf835db8ff2d27eaf11394c2f not found: ID does not exist" containerID="d3f9f381fd837c86af349bc5a2ece3fe00a4310cf835db8ff2d27eaf11394c2f" Feb 27 16:30:40 crc kubenswrapper[4830]: I0227 16:30:40.711328 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3f9f381fd837c86af349bc5a2ece3fe00a4310cf835db8ff2d27eaf11394c2f"} err="failed to get container status \"d3f9f381fd837c86af349bc5a2ece3fe00a4310cf835db8ff2d27eaf11394c2f\": rpc error: code = NotFound desc = could not find container \"d3f9f381fd837c86af349bc5a2ece3fe00a4310cf835db8ff2d27eaf11394c2f\": container with ID starting with 
d3f9f381fd837c86af349bc5a2ece3fe00a4310cf835db8ff2d27eaf11394c2f not found: ID does not exist" Feb 27 16:30:40 crc kubenswrapper[4830]: I0227 16:30:40.711346 4830 scope.go:117] "RemoveContainer" containerID="b45a73023f7c179a20bfe1c108b5dba26f62e40ce6291f0fc63775365ac5f555" Feb 27 16:30:40 crc kubenswrapper[4830]: E0227 16:30:40.719756 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b45a73023f7c179a20bfe1c108b5dba26f62e40ce6291f0fc63775365ac5f555\": container with ID starting with b45a73023f7c179a20bfe1c108b5dba26f62e40ce6291f0fc63775365ac5f555 not found: ID does not exist" containerID="b45a73023f7c179a20bfe1c108b5dba26f62e40ce6291f0fc63775365ac5f555" Feb 27 16:30:40 crc kubenswrapper[4830]: I0227 16:30:40.719857 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b45a73023f7c179a20bfe1c108b5dba26f62e40ce6291f0fc63775365ac5f555"} err="failed to get container status \"b45a73023f7c179a20bfe1c108b5dba26f62e40ce6291f0fc63775365ac5f555\": rpc error: code = NotFound desc = could not find container \"b45a73023f7c179a20bfe1c108b5dba26f62e40ce6291f0fc63775365ac5f555\": container with ID starting with b45a73023f7c179a20bfe1c108b5dba26f62e40ce6291f0fc63775365ac5f555 not found: ID does not exist" Feb 27 16:30:40 crc kubenswrapper[4830]: I0227 16:30:40.724239 4830 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Feb 27 16:30:40 crc kubenswrapper[4830]: I0227 16:30:40.779131 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb4fe631-52f0-445f-9e4c-90f4137bdba6" path="/var/lib/kubelet/pods/bb4fe631-52f0-445f-9e4c-90f4137bdba6/volumes" Feb 27 16:30:40 crc kubenswrapper[4830]: I0227 16:30:40.794102 4830 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Feb 27 16:30:40 crc kubenswrapper[4830]: I0227 16:30:40.876350 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 27 16:30:41 crc kubenswrapper[4830]: I0227 16:30:41.042079 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 27 16:30:41 crc kubenswrapper[4830]: I0227 16:30:41.050379 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 27 16:30:41 crc kubenswrapper[4830]: I0227 16:30:41.062360 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 27 16:30:41 crc kubenswrapper[4830]: E0227 16:30:41.062712 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab89052d-19a3-4bee-8e41-3fc364424b47" containerName="glance-httpd" Feb 27 16:30:41 crc kubenswrapper[4830]: I0227 16:30:41.062728 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab89052d-19a3-4bee-8e41-3fc364424b47" containerName="glance-httpd" Feb 27 16:30:41 crc kubenswrapper[4830]: E0227 16:30:41.062742 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d45526f-ecc3-4132-bdd0-159572980ba7" containerName="mariadb-account-create-update" Feb 27 16:30:41 crc kubenswrapper[4830]: I0227 16:30:41.062748 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d45526f-ecc3-4132-bdd0-159572980ba7" containerName="mariadb-account-create-update" Feb 27 16:30:41 crc kubenswrapper[4830]: E0227 16:30:41.062767 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab89052d-19a3-4bee-8e41-3fc364424b47" containerName="glance-log" Feb 27 16:30:41 crc kubenswrapper[4830]: I0227 16:30:41.062773 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab89052d-19a3-4bee-8e41-3fc364424b47" containerName="glance-log" Feb 27 16:30:41 crc kubenswrapper[4830]: E0227 
16:30:41.062781 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e59291fe-6cc6-4fda-870b-d3842d9b65ee" containerName="mariadb-account-create-update" Feb 27 16:30:41 crc kubenswrapper[4830]: I0227 16:30:41.062788 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="e59291fe-6cc6-4fda-870b-d3842d9b65ee" containerName="mariadb-account-create-update" Feb 27 16:30:41 crc kubenswrapper[4830]: E0227 16:30:41.062811 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77b4533c-3623-4d0c-834c-dc2329c0ffc8" containerName="mariadb-database-create" Feb 27 16:30:41 crc kubenswrapper[4830]: I0227 16:30:41.062816 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="77b4533c-3623-4d0c-834c-dc2329c0ffc8" containerName="mariadb-database-create" Feb 27 16:30:41 crc kubenswrapper[4830]: E0227 16:30:41.062826 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6d3ef08-a386-4c3a-aea1-7870a4192822" containerName="mariadb-account-create-update" Feb 27 16:30:41 crc kubenswrapper[4830]: I0227 16:30:41.062832 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6d3ef08-a386-4c3a-aea1-7870a4192822" containerName="mariadb-account-create-update" Feb 27 16:30:41 crc kubenswrapper[4830]: I0227 16:30:41.062986 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d45526f-ecc3-4132-bdd0-159572980ba7" containerName="mariadb-account-create-update" Feb 27 16:30:41 crc kubenswrapper[4830]: I0227 16:30:41.062999 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab89052d-19a3-4bee-8e41-3fc364424b47" containerName="glance-httpd" Feb 27 16:30:41 crc kubenswrapper[4830]: I0227 16:30:41.063009 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="e59291fe-6cc6-4fda-870b-d3842d9b65ee" containerName="mariadb-account-create-update" Feb 27 16:30:41 crc kubenswrapper[4830]: I0227 16:30:41.063017 4830 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="ab89052d-19a3-4bee-8e41-3fc364424b47" containerName="glance-log" Feb 27 16:30:41 crc kubenswrapper[4830]: I0227 16:30:41.063022 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6d3ef08-a386-4c3a-aea1-7870a4192822" containerName="mariadb-account-create-update" Feb 27 16:30:41 crc kubenswrapper[4830]: I0227 16:30:41.063034 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="77b4533c-3623-4d0c-834c-dc2329c0ffc8" containerName="mariadb-database-create" Feb 27 16:30:41 crc kubenswrapper[4830]: I0227 16:30:41.064458 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 27 16:30:41 crc kubenswrapper[4830]: I0227 16:30:41.068076 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 27 16:30:41 crc kubenswrapper[4830]: I0227 16:30:41.071299 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 27 16:30:41 crc kubenswrapper[4830]: I0227 16:30:41.109888 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 27 16:30:41 crc kubenswrapper[4830]: I0227 16:30:41.213395 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73fa27e0-b59d-44b0-8648-7e696f71cd61-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"73fa27e0-b59d-44b0-8648-7e696f71cd61\") " pod="openstack/glance-default-internal-api-0" Feb 27 16:30:41 crc kubenswrapper[4830]: I0227 16:30:41.213441 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/73fa27e0-b59d-44b0-8648-7e696f71cd61-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"73fa27e0-b59d-44b0-8648-7e696f71cd61\") " 
pod="openstack/glance-default-internal-api-0" Feb 27 16:30:41 crc kubenswrapper[4830]: I0227 16:30:41.213464 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzf7p\" (UniqueName: \"kubernetes.io/projected/73fa27e0-b59d-44b0-8648-7e696f71cd61-kube-api-access-zzf7p\") pod \"glance-default-internal-api-0\" (UID: \"73fa27e0-b59d-44b0-8648-7e696f71cd61\") " pod="openstack/glance-default-internal-api-0" Feb 27 16:30:41 crc kubenswrapper[4830]: I0227 16:30:41.213524 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/73fa27e0-b59d-44b0-8648-7e696f71cd61-scripts\") pod \"glance-default-internal-api-0\" (UID: \"73fa27e0-b59d-44b0-8648-7e696f71cd61\") " pod="openstack/glance-default-internal-api-0" Feb 27 16:30:41 crc kubenswrapper[4830]: I0227 16:30:41.213578 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/73fa27e0-b59d-44b0-8648-7e696f71cd61-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"73fa27e0-b59d-44b0-8648-7e696f71cd61\") " pod="openstack/glance-default-internal-api-0" Feb 27 16:30:41 crc kubenswrapper[4830]: I0227 16:30:41.213613 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/73fa27e0-b59d-44b0-8648-7e696f71cd61-logs\") pod \"glance-default-internal-api-0\" (UID: \"73fa27e0-b59d-44b0-8648-7e696f71cd61\") " pod="openstack/glance-default-internal-api-0" Feb 27 16:30:41 crc kubenswrapper[4830]: I0227 16:30:41.213650 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"73fa27e0-b59d-44b0-8648-7e696f71cd61\") " 
pod="openstack/glance-default-internal-api-0" Feb 27 16:30:41 crc kubenswrapper[4830]: I0227 16:30:41.213667 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73fa27e0-b59d-44b0-8648-7e696f71cd61-config-data\") pod \"glance-default-internal-api-0\" (UID: \"73fa27e0-b59d-44b0-8648-7e696f71cd61\") " pod="openstack/glance-default-internal-api-0" Feb 27 16:30:41 crc kubenswrapper[4830]: I0227 16:30:41.314740 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"73fa27e0-b59d-44b0-8648-7e696f71cd61\") " pod="openstack/glance-default-internal-api-0" Feb 27 16:30:41 crc kubenswrapper[4830]: I0227 16:30:41.314781 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73fa27e0-b59d-44b0-8648-7e696f71cd61-config-data\") pod \"glance-default-internal-api-0\" (UID: \"73fa27e0-b59d-44b0-8648-7e696f71cd61\") " pod="openstack/glance-default-internal-api-0" Feb 27 16:30:41 crc kubenswrapper[4830]: I0227 16:30:41.314813 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73fa27e0-b59d-44b0-8648-7e696f71cd61-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"73fa27e0-b59d-44b0-8648-7e696f71cd61\") " pod="openstack/glance-default-internal-api-0" Feb 27 16:30:41 crc kubenswrapper[4830]: I0227 16:30:41.314837 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/73fa27e0-b59d-44b0-8648-7e696f71cd61-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"73fa27e0-b59d-44b0-8648-7e696f71cd61\") " pod="openstack/glance-default-internal-api-0" Feb 27 
16:30:41 crc kubenswrapper[4830]: I0227 16:30:41.314860 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zzf7p\" (UniqueName: \"kubernetes.io/projected/73fa27e0-b59d-44b0-8648-7e696f71cd61-kube-api-access-zzf7p\") pod \"glance-default-internal-api-0\" (UID: \"73fa27e0-b59d-44b0-8648-7e696f71cd61\") " pod="openstack/glance-default-internal-api-0" Feb 27 16:30:41 crc kubenswrapper[4830]: I0227 16:30:41.314922 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/73fa27e0-b59d-44b0-8648-7e696f71cd61-scripts\") pod \"glance-default-internal-api-0\" (UID: \"73fa27e0-b59d-44b0-8648-7e696f71cd61\") " pod="openstack/glance-default-internal-api-0" Feb 27 16:30:41 crc kubenswrapper[4830]: I0227 16:30:41.314988 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/73fa27e0-b59d-44b0-8648-7e696f71cd61-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"73fa27e0-b59d-44b0-8648-7e696f71cd61\") " pod="openstack/glance-default-internal-api-0" Feb 27 16:30:41 crc kubenswrapper[4830]: I0227 16:30:41.315020 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/73fa27e0-b59d-44b0-8648-7e696f71cd61-logs\") pod \"glance-default-internal-api-0\" (UID: \"73fa27e0-b59d-44b0-8648-7e696f71cd61\") " pod="openstack/glance-default-internal-api-0" Feb 27 16:30:41 crc kubenswrapper[4830]: I0227 16:30:41.314921 4830 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"73fa27e0-b59d-44b0-8648-7e696f71cd61\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/glance-default-internal-api-0" Feb 27 16:30:41 crc kubenswrapper[4830]: I0227 16:30:41.315472 4830 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/73fa27e0-b59d-44b0-8648-7e696f71cd61-logs\") pod \"glance-default-internal-api-0\" (UID: \"73fa27e0-b59d-44b0-8648-7e696f71cd61\") " pod="openstack/glance-default-internal-api-0" Feb 27 16:30:41 crc kubenswrapper[4830]: I0227 16:30:41.315533 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/73fa27e0-b59d-44b0-8648-7e696f71cd61-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"73fa27e0-b59d-44b0-8648-7e696f71cd61\") " pod="openstack/glance-default-internal-api-0" Feb 27 16:30:41 crc kubenswrapper[4830]: I0227 16:30:41.325320 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73fa27e0-b59d-44b0-8648-7e696f71cd61-config-data\") pod \"glance-default-internal-api-0\" (UID: \"73fa27e0-b59d-44b0-8648-7e696f71cd61\") " pod="openstack/glance-default-internal-api-0" Feb 27 16:30:41 crc kubenswrapper[4830]: I0227 16:30:41.325329 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/73fa27e0-b59d-44b0-8648-7e696f71cd61-scripts\") pod \"glance-default-internal-api-0\" (UID: \"73fa27e0-b59d-44b0-8648-7e696f71cd61\") " pod="openstack/glance-default-internal-api-0" Feb 27 16:30:41 crc kubenswrapper[4830]: I0227 16:30:41.331670 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/73fa27e0-b59d-44b0-8648-7e696f71cd61-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"73fa27e0-b59d-44b0-8648-7e696f71cd61\") " pod="openstack/glance-default-internal-api-0" Feb 27 16:30:41 crc kubenswrapper[4830]: I0227 16:30:41.336687 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/73fa27e0-b59d-44b0-8648-7e696f71cd61-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"73fa27e0-b59d-44b0-8648-7e696f71cd61\") " pod="openstack/glance-default-internal-api-0" Feb 27 16:30:41 crc kubenswrapper[4830]: I0227 16:30:41.387165 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zzf7p\" (UniqueName: \"kubernetes.io/projected/73fa27e0-b59d-44b0-8648-7e696f71cd61-kube-api-access-zzf7p\") pod \"glance-default-internal-api-0\" (UID: \"73fa27e0-b59d-44b0-8648-7e696f71cd61\") " pod="openstack/glance-default-internal-api-0" Feb 27 16:30:41 crc kubenswrapper[4830]: I0227 16:30:41.387457 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"73fa27e0-b59d-44b0-8648-7e696f71cd61\") " pod="openstack/glance-default-internal-api-0" Feb 27 16:30:41 crc kubenswrapper[4830]: I0227 16:30:41.474389 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 27 16:30:41 crc kubenswrapper[4830]: I0227 16:30:41.708922 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d8d4cd44-9972-445e-bac3-63441b6fa4cc","Type":"ContainerStarted","Data":"0e99db8779b62c9b60211a3a800d8786d6e5d19fd2046d962c492ef86848b48c"} Feb 27 16:30:41 crc kubenswrapper[4830]: I0227 16:30:41.709197 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d8d4cd44-9972-445e-bac3-63441b6fa4cc","Type":"ContainerStarted","Data":"f6b559b33a9c41bfd5e5daf5942e8e99f985853b1767f0c655d4bc26524a9085"} Feb 27 16:30:42 crc kubenswrapper[4830]: I0227 16:30:42.008893 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 27 16:30:42 crc kubenswrapper[4830]: I0227 16:30:42.214151 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-c6f44c475-twbzz" Feb 27 16:30:42 crc kubenswrapper[4830]: I0227 16:30:42.227918 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-c6f44c475-twbzz" Feb 27 16:30:42 crc kubenswrapper[4830]: I0227 16:30:42.733214 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"814a49d3-5ece-4609-97f7-745b3843d2e3","Type":"ContainerStarted","Data":"24d962bc6ef2486b362fb5bbcbcec25cf59beb4d276baf80822f83eaa8a09453"} Feb 27 16:30:42 crc kubenswrapper[4830]: I0227 16:30:42.733422 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="814a49d3-5ece-4609-97f7-745b3843d2e3" containerName="proxy-httpd" containerID="cri-o://24d962bc6ef2486b362fb5bbcbcec25cf59beb4d276baf80822f83eaa8a09453" gracePeriod=30 Feb 27 16:30:42 crc kubenswrapper[4830]: I0227 16:30:42.733486 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack/ceilometer-0" Feb 27 16:30:42 crc kubenswrapper[4830]: I0227 16:30:42.733427 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="814a49d3-5ece-4609-97f7-745b3843d2e3" containerName="sg-core" containerID="cri-o://4fddcdedfa5cede42a0d195f0456a21bf7ccf689dd2c45dc7aede513c11e4192" gracePeriod=30 Feb 27 16:30:42 crc kubenswrapper[4830]: I0227 16:30:42.733289 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="814a49d3-5ece-4609-97f7-745b3843d2e3" containerName="ceilometer-central-agent" containerID="cri-o://9788c4d203c212b881b74765e66484cfa659abe121b4ea01bf45608a72848fda" gracePeriod=30 Feb 27 16:30:42 crc kubenswrapper[4830]: I0227 16:30:42.733359 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="814a49d3-5ece-4609-97f7-745b3843d2e3" containerName="ceilometer-notification-agent" containerID="cri-o://efa64dfe4067a783fbcc157971a9465a93d065c612dbdb6cecefa3f2c9a3799c" gracePeriod=30 Feb 27 16:30:42 crc kubenswrapper[4830]: I0227 16:30:42.741525 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"73fa27e0-b59d-44b0-8648-7e696f71cd61","Type":"ContainerStarted","Data":"25a00b007e3e1a8c77c7bf619655cf9ead3a6eb2aa47a2c778cfc3371c33e4c5"} Feb 27 16:30:42 crc kubenswrapper[4830]: I0227 16:30:42.741570 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"73fa27e0-b59d-44b0-8648-7e696f71cd61","Type":"ContainerStarted","Data":"de43a7a66c7c10082a14de9a23a6b16f51cafd5a47a4318321033a2e89b70b49"} Feb 27 16:30:42 crc kubenswrapper[4830]: I0227 16:30:42.748423 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"d8d4cd44-9972-445e-bac3-63441b6fa4cc","Type":"ContainerStarted","Data":"7b743cc093d9cd3e5deb61678bf56225726f2ee5f6b916d24acb306d92c0ebc6"} Feb 27 16:30:42 crc kubenswrapper[4830]: I0227 16:30:42.754790 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.898201048 podStartE2EDuration="7.754770252s" podCreationTimestamp="2026-02-27 16:30:35 +0000 UTC" firstStartedPulling="2026-02-27 16:30:36.737424354 +0000 UTC m=+1432.826696817" lastFinishedPulling="2026-02-27 16:30:41.593993558 +0000 UTC m=+1437.683266021" observedRunningTime="2026-02-27 16:30:42.751379741 +0000 UTC m=+1438.840652204" watchObservedRunningTime="2026-02-27 16:30:42.754770252 +0000 UTC m=+1438.844042715" Feb 27 16:30:42 crc kubenswrapper[4830]: I0227 16:30:42.776644 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab89052d-19a3-4bee-8e41-3fc364424b47" path="/var/lib/kubelet/pods/ab89052d-19a3-4bee-8e41-3fc364424b47/volumes" Feb 27 16:30:42 crc kubenswrapper[4830]: I0227 16:30:42.958333 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 27 16:30:42 crc kubenswrapper[4830]: I0227 16:30:42.981705 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.981688045 podStartE2EDuration="3.981688045s" podCreationTimestamp="2026-02-27 16:30:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:30:42.777847359 +0000 UTC m=+1438.867119822" watchObservedRunningTime="2026-02-27 16:30:42.981688045 +0000 UTC m=+1439.070960508" Feb 27 16:30:43 crc kubenswrapper[4830]: I0227 16:30:43.509756 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 27 16:30:43 crc kubenswrapper[4830]: I0227 16:30:43.654750 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/814a49d3-5ece-4609-97f7-745b3843d2e3-log-httpd\") pod \"814a49d3-5ece-4609-97f7-745b3843d2e3\" (UID: \"814a49d3-5ece-4609-97f7-745b3843d2e3\") " Feb 27 16:30:43 crc kubenswrapper[4830]: I0227 16:30:43.654884 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/814a49d3-5ece-4609-97f7-745b3843d2e3-combined-ca-bundle\") pod \"814a49d3-5ece-4609-97f7-745b3843d2e3\" (UID: \"814a49d3-5ece-4609-97f7-745b3843d2e3\") " Feb 27 16:30:43 crc kubenswrapper[4830]: I0227 16:30:43.654956 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d7scl\" (UniqueName: \"kubernetes.io/projected/814a49d3-5ece-4609-97f7-745b3843d2e3-kube-api-access-d7scl\") pod \"814a49d3-5ece-4609-97f7-745b3843d2e3\" (UID: \"814a49d3-5ece-4609-97f7-745b3843d2e3\") " Feb 27 16:30:43 crc kubenswrapper[4830]: I0227 16:30:43.654998 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/814a49d3-5ece-4609-97f7-745b3843d2e3-run-httpd\") pod \"814a49d3-5ece-4609-97f7-745b3843d2e3\" (UID: \"814a49d3-5ece-4609-97f7-745b3843d2e3\") " Feb 27 16:30:43 crc kubenswrapper[4830]: I0227 16:30:43.655060 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/814a49d3-5ece-4609-97f7-745b3843d2e3-sg-core-conf-yaml\") pod \"814a49d3-5ece-4609-97f7-745b3843d2e3\" (UID: \"814a49d3-5ece-4609-97f7-745b3843d2e3\") " Feb 27 16:30:43 crc kubenswrapper[4830]: I0227 16:30:43.655081 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/814a49d3-5ece-4609-97f7-745b3843d2e3-scripts\") pod \"814a49d3-5ece-4609-97f7-745b3843d2e3\" (UID: \"814a49d3-5ece-4609-97f7-745b3843d2e3\") " Feb 27 16:30:43 crc kubenswrapper[4830]: I0227 16:30:43.655106 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/814a49d3-5ece-4609-97f7-745b3843d2e3-config-data\") pod \"814a49d3-5ece-4609-97f7-745b3843d2e3\" (UID: \"814a49d3-5ece-4609-97f7-745b3843d2e3\") " Feb 27 16:30:43 crc kubenswrapper[4830]: I0227 16:30:43.655537 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/814a49d3-5ece-4609-97f7-745b3843d2e3-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "814a49d3-5ece-4609-97f7-745b3843d2e3" (UID: "814a49d3-5ece-4609-97f7-745b3843d2e3"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 16:30:43 crc kubenswrapper[4830]: I0227 16:30:43.656236 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/814a49d3-5ece-4609-97f7-745b3843d2e3-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "814a49d3-5ece-4609-97f7-745b3843d2e3" (UID: "814a49d3-5ece-4609-97f7-745b3843d2e3"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 16:30:43 crc kubenswrapper[4830]: I0227 16:30:43.660214 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/814a49d3-5ece-4609-97f7-745b3843d2e3-kube-api-access-d7scl" (OuterVolumeSpecName: "kube-api-access-d7scl") pod "814a49d3-5ece-4609-97f7-745b3843d2e3" (UID: "814a49d3-5ece-4609-97f7-745b3843d2e3"). InnerVolumeSpecName "kube-api-access-d7scl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:30:43 crc kubenswrapper[4830]: I0227 16:30:43.660862 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/814a49d3-5ece-4609-97f7-745b3843d2e3-scripts" (OuterVolumeSpecName: "scripts") pod "814a49d3-5ece-4609-97f7-745b3843d2e3" (UID: "814a49d3-5ece-4609-97f7-745b3843d2e3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:30:43 crc kubenswrapper[4830]: I0227 16:30:43.694861 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/814a49d3-5ece-4609-97f7-745b3843d2e3-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "814a49d3-5ece-4609-97f7-745b3843d2e3" (UID: "814a49d3-5ece-4609-97f7-745b3843d2e3"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:30:43 crc kubenswrapper[4830]: I0227 16:30:43.756566 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d7scl\" (UniqueName: \"kubernetes.io/projected/814a49d3-5ece-4609-97f7-745b3843d2e3-kube-api-access-d7scl\") on node \"crc\" DevicePath \"\"" Feb 27 16:30:43 crc kubenswrapper[4830]: I0227 16:30:43.756600 4830 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/814a49d3-5ece-4609-97f7-745b3843d2e3-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 27 16:30:43 crc kubenswrapper[4830]: I0227 16:30:43.756613 4830 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/814a49d3-5ece-4609-97f7-745b3843d2e3-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 27 16:30:43 crc kubenswrapper[4830]: I0227 16:30:43.756625 4830 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/814a49d3-5ece-4609-97f7-745b3843d2e3-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 16:30:43 crc 
kubenswrapper[4830]: I0227 16:30:43.756638 4830 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/814a49d3-5ece-4609-97f7-745b3843d2e3-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 27 16:30:43 crc kubenswrapper[4830]: I0227 16:30:43.768612 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/814a49d3-5ece-4609-97f7-745b3843d2e3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "814a49d3-5ece-4609-97f7-745b3843d2e3" (UID: "814a49d3-5ece-4609-97f7-745b3843d2e3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:30:43 crc kubenswrapper[4830]: I0227 16:30:43.782261 4830 generic.go:334] "Generic (PLEG): container finished" podID="814a49d3-5ece-4609-97f7-745b3843d2e3" containerID="24d962bc6ef2486b362fb5bbcbcec25cf59beb4d276baf80822f83eaa8a09453" exitCode=0 Feb 27 16:30:43 crc kubenswrapper[4830]: I0227 16:30:43.782294 4830 generic.go:334] "Generic (PLEG): container finished" podID="814a49d3-5ece-4609-97f7-745b3843d2e3" containerID="4fddcdedfa5cede42a0d195f0456a21bf7ccf689dd2c45dc7aede513c11e4192" exitCode=2 Feb 27 16:30:43 crc kubenswrapper[4830]: I0227 16:30:43.782303 4830 generic.go:334] "Generic (PLEG): container finished" podID="814a49d3-5ece-4609-97f7-745b3843d2e3" containerID="efa64dfe4067a783fbcc157971a9465a93d065c612dbdb6cecefa3f2c9a3799c" exitCode=0 Feb 27 16:30:43 crc kubenswrapper[4830]: I0227 16:30:43.782309 4830 generic.go:334] "Generic (PLEG): container finished" podID="814a49d3-5ece-4609-97f7-745b3843d2e3" containerID="9788c4d203c212b881b74765e66484cfa659abe121b4ea01bf45608a72848fda" exitCode=0 Feb 27 16:30:43 crc kubenswrapper[4830]: I0227 16:30:43.782308 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"814a49d3-5ece-4609-97f7-745b3843d2e3","Type":"ContainerDied","Data":"24d962bc6ef2486b362fb5bbcbcec25cf59beb4d276baf80822f83eaa8a09453"} Feb 27 16:30:43 crc kubenswrapper[4830]: I0227 16:30:43.782326 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 27 16:30:43 crc kubenswrapper[4830]: I0227 16:30:43.782364 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"814a49d3-5ece-4609-97f7-745b3843d2e3","Type":"ContainerDied","Data":"4fddcdedfa5cede42a0d195f0456a21bf7ccf689dd2c45dc7aede513c11e4192"} Feb 27 16:30:43 crc kubenswrapper[4830]: I0227 16:30:43.782386 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"814a49d3-5ece-4609-97f7-745b3843d2e3","Type":"ContainerDied","Data":"efa64dfe4067a783fbcc157971a9465a93d065c612dbdb6cecefa3f2c9a3799c"} Feb 27 16:30:43 crc kubenswrapper[4830]: I0227 16:30:43.782404 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"814a49d3-5ece-4609-97f7-745b3843d2e3","Type":"ContainerDied","Data":"9788c4d203c212b881b74765e66484cfa659abe121b4ea01bf45608a72848fda"} Feb 27 16:30:43 crc kubenswrapper[4830]: I0227 16:30:43.782425 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"814a49d3-5ece-4609-97f7-745b3843d2e3","Type":"ContainerDied","Data":"fa47ec1490c231cd5f88f6545ac05a572f61122e20150d7b5495ef627d216e86"} Feb 27 16:30:43 crc kubenswrapper[4830]: I0227 16:30:43.782431 4830 scope.go:117] "RemoveContainer" containerID="24d962bc6ef2486b362fb5bbcbcec25cf59beb4d276baf80822f83eaa8a09453" Feb 27 16:30:43 crc kubenswrapper[4830]: I0227 16:30:43.785301 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"73fa27e0-b59d-44b0-8648-7e696f71cd61","Type":"ContainerStarted","Data":"a5137475aad41fb8eb7b0a7b72def6633e3820a0b964c9cad287965ce3680cca"} Feb 27 16:30:43 crc kubenswrapper[4830]: I0227 16:30:43.801162 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/814a49d3-5ece-4609-97f7-745b3843d2e3-config-data" (OuterVolumeSpecName: "config-data") pod "814a49d3-5ece-4609-97f7-745b3843d2e3" (UID: "814a49d3-5ece-4609-97f7-745b3843d2e3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:30:43 crc kubenswrapper[4830]: I0227 16:30:43.807837 4830 scope.go:117] "RemoveContainer" containerID="4fddcdedfa5cede42a0d195f0456a21bf7ccf689dd2c45dc7aede513c11e4192" Feb 27 16:30:43 crc kubenswrapper[4830]: I0227 16:30:43.824763 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=2.8247431069999998 podStartE2EDuration="2.824743107s" podCreationTimestamp="2026-02-27 16:30:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:30:43.820385731 +0000 UTC m=+1439.909658194" watchObservedRunningTime="2026-02-27 16:30:43.824743107 +0000 UTC m=+1439.914015570" Feb 27 16:30:43 crc kubenswrapper[4830]: I0227 16:30:43.830106 4830 scope.go:117] "RemoveContainer" containerID="efa64dfe4067a783fbcc157971a9465a93d065c612dbdb6cecefa3f2c9a3799c" Feb 27 16:30:43 crc kubenswrapper[4830]: I0227 16:30:43.848146 4830 scope.go:117] "RemoveContainer" containerID="9788c4d203c212b881b74765e66484cfa659abe121b4ea01bf45608a72848fda" Feb 27 16:30:43 crc kubenswrapper[4830]: I0227 16:30:43.860317 4830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/814a49d3-5ece-4609-97f7-745b3843d2e3-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 16:30:43 crc kubenswrapper[4830]: I0227 
16:30:43.860352 4830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/814a49d3-5ece-4609-97f7-745b3843d2e3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 16:30:43 crc kubenswrapper[4830]: I0227 16:30:43.878512 4830 scope.go:117] "RemoveContainer" containerID="24d962bc6ef2486b362fb5bbcbcec25cf59beb4d276baf80822f83eaa8a09453" Feb 27 16:30:43 crc kubenswrapper[4830]: E0227 16:30:43.879028 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"24d962bc6ef2486b362fb5bbcbcec25cf59beb4d276baf80822f83eaa8a09453\": container with ID starting with 24d962bc6ef2486b362fb5bbcbcec25cf59beb4d276baf80822f83eaa8a09453 not found: ID does not exist" containerID="24d962bc6ef2486b362fb5bbcbcec25cf59beb4d276baf80822f83eaa8a09453" Feb 27 16:30:43 crc kubenswrapper[4830]: I0227 16:30:43.879068 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24d962bc6ef2486b362fb5bbcbcec25cf59beb4d276baf80822f83eaa8a09453"} err="failed to get container status \"24d962bc6ef2486b362fb5bbcbcec25cf59beb4d276baf80822f83eaa8a09453\": rpc error: code = NotFound desc = could not find container \"24d962bc6ef2486b362fb5bbcbcec25cf59beb4d276baf80822f83eaa8a09453\": container with ID starting with 24d962bc6ef2486b362fb5bbcbcec25cf59beb4d276baf80822f83eaa8a09453 not found: ID does not exist" Feb 27 16:30:43 crc kubenswrapper[4830]: I0227 16:30:43.879092 4830 scope.go:117] "RemoveContainer" containerID="4fddcdedfa5cede42a0d195f0456a21bf7ccf689dd2c45dc7aede513c11e4192" Feb 27 16:30:43 crc kubenswrapper[4830]: E0227 16:30:43.879419 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4fddcdedfa5cede42a0d195f0456a21bf7ccf689dd2c45dc7aede513c11e4192\": container with ID starting with 4fddcdedfa5cede42a0d195f0456a21bf7ccf689dd2c45dc7aede513c11e4192 
not found: ID does not exist" containerID="4fddcdedfa5cede42a0d195f0456a21bf7ccf689dd2c45dc7aede513c11e4192" Feb 27 16:30:43 crc kubenswrapper[4830]: I0227 16:30:43.879454 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4fddcdedfa5cede42a0d195f0456a21bf7ccf689dd2c45dc7aede513c11e4192"} err="failed to get container status \"4fddcdedfa5cede42a0d195f0456a21bf7ccf689dd2c45dc7aede513c11e4192\": rpc error: code = NotFound desc = could not find container \"4fddcdedfa5cede42a0d195f0456a21bf7ccf689dd2c45dc7aede513c11e4192\": container with ID starting with 4fddcdedfa5cede42a0d195f0456a21bf7ccf689dd2c45dc7aede513c11e4192 not found: ID does not exist" Feb 27 16:30:43 crc kubenswrapper[4830]: I0227 16:30:43.879480 4830 scope.go:117] "RemoveContainer" containerID="efa64dfe4067a783fbcc157971a9465a93d065c612dbdb6cecefa3f2c9a3799c" Feb 27 16:30:43 crc kubenswrapper[4830]: E0227 16:30:43.879859 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"efa64dfe4067a783fbcc157971a9465a93d065c612dbdb6cecefa3f2c9a3799c\": container with ID starting with efa64dfe4067a783fbcc157971a9465a93d065c612dbdb6cecefa3f2c9a3799c not found: ID does not exist" containerID="efa64dfe4067a783fbcc157971a9465a93d065c612dbdb6cecefa3f2c9a3799c" Feb 27 16:30:43 crc kubenswrapper[4830]: I0227 16:30:43.879909 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"efa64dfe4067a783fbcc157971a9465a93d065c612dbdb6cecefa3f2c9a3799c"} err="failed to get container status \"efa64dfe4067a783fbcc157971a9465a93d065c612dbdb6cecefa3f2c9a3799c\": rpc error: code = NotFound desc = could not find container \"efa64dfe4067a783fbcc157971a9465a93d065c612dbdb6cecefa3f2c9a3799c\": container with ID starting with efa64dfe4067a783fbcc157971a9465a93d065c612dbdb6cecefa3f2c9a3799c not found: ID does not exist" Feb 27 16:30:43 crc kubenswrapper[4830]: I0227 
16:30:43.880267 4830 scope.go:117] "RemoveContainer" containerID="9788c4d203c212b881b74765e66484cfa659abe121b4ea01bf45608a72848fda" Feb 27 16:30:43 crc kubenswrapper[4830]: E0227 16:30:43.880603 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9788c4d203c212b881b74765e66484cfa659abe121b4ea01bf45608a72848fda\": container with ID starting with 9788c4d203c212b881b74765e66484cfa659abe121b4ea01bf45608a72848fda not found: ID does not exist" containerID="9788c4d203c212b881b74765e66484cfa659abe121b4ea01bf45608a72848fda" Feb 27 16:30:43 crc kubenswrapper[4830]: I0227 16:30:43.880637 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9788c4d203c212b881b74765e66484cfa659abe121b4ea01bf45608a72848fda"} err="failed to get container status \"9788c4d203c212b881b74765e66484cfa659abe121b4ea01bf45608a72848fda\": rpc error: code = NotFound desc = could not find container \"9788c4d203c212b881b74765e66484cfa659abe121b4ea01bf45608a72848fda\": container with ID starting with 9788c4d203c212b881b74765e66484cfa659abe121b4ea01bf45608a72848fda not found: ID does not exist" Feb 27 16:30:43 crc kubenswrapper[4830]: I0227 16:30:43.880655 4830 scope.go:117] "RemoveContainer" containerID="24d962bc6ef2486b362fb5bbcbcec25cf59beb4d276baf80822f83eaa8a09453" Feb 27 16:30:43 crc kubenswrapper[4830]: I0227 16:30:43.880931 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24d962bc6ef2486b362fb5bbcbcec25cf59beb4d276baf80822f83eaa8a09453"} err="failed to get container status \"24d962bc6ef2486b362fb5bbcbcec25cf59beb4d276baf80822f83eaa8a09453\": rpc error: code = NotFound desc = could not find container \"24d962bc6ef2486b362fb5bbcbcec25cf59beb4d276baf80822f83eaa8a09453\": container with ID starting with 24d962bc6ef2486b362fb5bbcbcec25cf59beb4d276baf80822f83eaa8a09453 not found: ID does not exist" Feb 27 16:30:43 crc 
kubenswrapper[4830]: I0227 16:30:43.880993 4830 scope.go:117] "RemoveContainer" containerID="4fddcdedfa5cede42a0d195f0456a21bf7ccf689dd2c45dc7aede513c11e4192" Feb 27 16:30:43 crc kubenswrapper[4830]: I0227 16:30:43.881356 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4fddcdedfa5cede42a0d195f0456a21bf7ccf689dd2c45dc7aede513c11e4192"} err="failed to get container status \"4fddcdedfa5cede42a0d195f0456a21bf7ccf689dd2c45dc7aede513c11e4192\": rpc error: code = NotFound desc = could not find container \"4fddcdedfa5cede42a0d195f0456a21bf7ccf689dd2c45dc7aede513c11e4192\": container with ID starting with 4fddcdedfa5cede42a0d195f0456a21bf7ccf689dd2c45dc7aede513c11e4192 not found: ID does not exist" Feb 27 16:30:43 crc kubenswrapper[4830]: I0227 16:30:43.881380 4830 scope.go:117] "RemoveContainer" containerID="efa64dfe4067a783fbcc157971a9465a93d065c612dbdb6cecefa3f2c9a3799c" Feb 27 16:30:43 crc kubenswrapper[4830]: I0227 16:30:43.881631 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"efa64dfe4067a783fbcc157971a9465a93d065c612dbdb6cecefa3f2c9a3799c"} err="failed to get container status \"efa64dfe4067a783fbcc157971a9465a93d065c612dbdb6cecefa3f2c9a3799c\": rpc error: code = NotFound desc = could not find container \"efa64dfe4067a783fbcc157971a9465a93d065c612dbdb6cecefa3f2c9a3799c\": container with ID starting with efa64dfe4067a783fbcc157971a9465a93d065c612dbdb6cecefa3f2c9a3799c not found: ID does not exist" Feb 27 16:30:43 crc kubenswrapper[4830]: I0227 16:30:43.881651 4830 scope.go:117] "RemoveContainer" containerID="9788c4d203c212b881b74765e66484cfa659abe121b4ea01bf45608a72848fda" Feb 27 16:30:43 crc kubenswrapper[4830]: I0227 16:30:43.881881 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9788c4d203c212b881b74765e66484cfa659abe121b4ea01bf45608a72848fda"} err="failed to get container status 
\"9788c4d203c212b881b74765e66484cfa659abe121b4ea01bf45608a72848fda\": rpc error: code = NotFound desc = could not find container \"9788c4d203c212b881b74765e66484cfa659abe121b4ea01bf45608a72848fda\": container with ID starting with 9788c4d203c212b881b74765e66484cfa659abe121b4ea01bf45608a72848fda not found: ID does not exist" Feb 27 16:30:43 crc kubenswrapper[4830]: I0227 16:30:43.881900 4830 scope.go:117] "RemoveContainer" containerID="24d962bc6ef2486b362fb5bbcbcec25cf59beb4d276baf80822f83eaa8a09453" Feb 27 16:30:43 crc kubenswrapper[4830]: I0227 16:30:43.882147 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24d962bc6ef2486b362fb5bbcbcec25cf59beb4d276baf80822f83eaa8a09453"} err="failed to get container status \"24d962bc6ef2486b362fb5bbcbcec25cf59beb4d276baf80822f83eaa8a09453\": rpc error: code = NotFound desc = could not find container \"24d962bc6ef2486b362fb5bbcbcec25cf59beb4d276baf80822f83eaa8a09453\": container with ID starting with 24d962bc6ef2486b362fb5bbcbcec25cf59beb4d276baf80822f83eaa8a09453 not found: ID does not exist" Feb 27 16:30:43 crc kubenswrapper[4830]: I0227 16:30:43.882166 4830 scope.go:117] "RemoveContainer" containerID="4fddcdedfa5cede42a0d195f0456a21bf7ccf689dd2c45dc7aede513c11e4192" Feb 27 16:30:43 crc kubenswrapper[4830]: I0227 16:30:43.882415 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4fddcdedfa5cede42a0d195f0456a21bf7ccf689dd2c45dc7aede513c11e4192"} err="failed to get container status \"4fddcdedfa5cede42a0d195f0456a21bf7ccf689dd2c45dc7aede513c11e4192\": rpc error: code = NotFound desc = could not find container \"4fddcdedfa5cede42a0d195f0456a21bf7ccf689dd2c45dc7aede513c11e4192\": container with ID starting with 4fddcdedfa5cede42a0d195f0456a21bf7ccf689dd2c45dc7aede513c11e4192 not found: ID does not exist" Feb 27 16:30:43 crc kubenswrapper[4830]: I0227 16:30:43.882434 4830 scope.go:117] "RemoveContainer" 
containerID="efa64dfe4067a783fbcc157971a9465a93d065c612dbdb6cecefa3f2c9a3799c" Feb 27 16:30:43 crc kubenswrapper[4830]: I0227 16:30:43.882665 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"efa64dfe4067a783fbcc157971a9465a93d065c612dbdb6cecefa3f2c9a3799c"} err="failed to get container status \"efa64dfe4067a783fbcc157971a9465a93d065c612dbdb6cecefa3f2c9a3799c\": rpc error: code = NotFound desc = could not find container \"efa64dfe4067a783fbcc157971a9465a93d065c612dbdb6cecefa3f2c9a3799c\": container with ID starting with efa64dfe4067a783fbcc157971a9465a93d065c612dbdb6cecefa3f2c9a3799c not found: ID does not exist" Feb 27 16:30:43 crc kubenswrapper[4830]: I0227 16:30:43.882690 4830 scope.go:117] "RemoveContainer" containerID="9788c4d203c212b881b74765e66484cfa659abe121b4ea01bf45608a72848fda" Feb 27 16:30:43 crc kubenswrapper[4830]: I0227 16:30:43.882941 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9788c4d203c212b881b74765e66484cfa659abe121b4ea01bf45608a72848fda"} err="failed to get container status \"9788c4d203c212b881b74765e66484cfa659abe121b4ea01bf45608a72848fda\": rpc error: code = NotFound desc = could not find container \"9788c4d203c212b881b74765e66484cfa659abe121b4ea01bf45608a72848fda\": container with ID starting with 9788c4d203c212b881b74765e66484cfa659abe121b4ea01bf45608a72848fda not found: ID does not exist" Feb 27 16:30:43 crc kubenswrapper[4830]: I0227 16:30:43.882982 4830 scope.go:117] "RemoveContainer" containerID="24d962bc6ef2486b362fb5bbcbcec25cf59beb4d276baf80822f83eaa8a09453" Feb 27 16:30:43 crc kubenswrapper[4830]: I0227 16:30:43.883216 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24d962bc6ef2486b362fb5bbcbcec25cf59beb4d276baf80822f83eaa8a09453"} err="failed to get container status \"24d962bc6ef2486b362fb5bbcbcec25cf59beb4d276baf80822f83eaa8a09453\": rpc error: code = NotFound desc = could 
not find container \"24d962bc6ef2486b362fb5bbcbcec25cf59beb4d276baf80822f83eaa8a09453\": container with ID starting with 24d962bc6ef2486b362fb5bbcbcec25cf59beb4d276baf80822f83eaa8a09453 not found: ID does not exist" Feb 27 16:30:43 crc kubenswrapper[4830]: I0227 16:30:43.883233 4830 scope.go:117] "RemoveContainer" containerID="4fddcdedfa5cede42a0d195f0456a21bf7ccf689dd2c45dc7aede513c11e4192" Feb 27 16:30:43 crc kubenswrapper[4830]: I0227 16:30:43.883749 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4fddcdedfa5cede42a0d195f0456a21bf7ccf689dd2c45dc7aede513c11e4192"} err="failed to get container status \"4fddcdedfa5cede42a0d195f0456a21bf7ccf689dd2c45dc7aede513c11e4192\": rpc error: code = NotFound desc = could not find container \"4fddcdedfa5cede42a0d195f0456a21bf7ccf689dd2c45dc7aede513c11e4192\": container with ID starting with 4fddcdedfa5cede42a0d195f0456a21bf7ccf689dd2c45dc7aede513c11e4192 not found: ID does not exist" Feb 27 16:30:43 crc kubenswrapper[4830]: I0227 16:30:43.883774 4830 scope.go:117] "RemoveContainer" containerID="efa64dfe4067a783fbcc157971a9465a93d065c612dbdb6cecefa3f2c9a3799c" Feb 27 16:30:43 crc kubenswrapper[4830]: I0227 16:30:43.884009 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"efa64dfe4067a783fbcc157971a9465a93d065c612dbdb6cecefa3f2c9a3799c"} err="failed to get container status \"efa64dfe4067a783fbcc157971a9465a93d065c612dbdb6cecefa3f2c9a3799c\": rpc error: code = NotFound desc = could not find container \"efa64dfe4067a783fbcc157971a9465a93d065c612dbdb6cecefa3f2c9a3799c\": container with ID starting with efa64dfe4067a783fbcc157971a9465a93d065c612dbdb6cecefa3f2c9a3799c not found: ID does not exist" Feb 27 16:30:43 crc kubenswrapper[4830]: I0227 16:30:43.884032 4830 scope.go:117] "RemoveContainer" containerID="9788c4d203c212b881b74765e66484cfa659abe121b4ea01bf45608a72848fda" Feb 27 16:30:43 crc kubenswrapper[4830]: I0227 
16:30:43.884247 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9788c4d203c212b881b74765e66484cfa659abe121b4ea01bf45608a72848fda"} err="failed to get container status \"9788c4d203c212b881b74765e66484cfa659abe121b4ea01bf45608a72848fda\": rpc error: code = NotFound desc = could not find container \"9788c4d203c212b881b74765e66484cfa659abe121b4ea01bf45608a72848fda\": container with ID starting with 9788c4d203c212b881b74765e66484cfa659abe121b4ea01bf45608a72848fda not found: ID does not exist" Feb 27 16:30:44 crc kubenswrapper[4830]: I0227 16:30:44.136459 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 27 16:30:44 crc kubenswrapper[4830]: I0227 16:30:44.149891 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 27 16:30:44 crc kubenswrapper[4830]: I0227 16:30:44.883634 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="814a49d3-5ece-4609-97f7-745b3843d2e3" path="/var/lib/kubelet/pods/814a49d3-5ece-4609-97f7-745b3843d2e3/volumes" Feb 27 16:30:44 crc kubenswrapper[4830]: I0227 16:30:44.888848 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 27 16:30:44 crc kubenswrapper[4830]: E0227 16:30:44.889119 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="814a49d3-5ece-4609-97f7-745b3843d2e3" containerName="ceilometer-central-agent" Feb 27 16:30:44 crc kubenswrapper[4830]: I0227 16:30:44.889135 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="814a49d3-5ece-4609-97f7-745b3843d2e3" containerName="ceilometer-central-agent" Feb 27 16:30:44 crc kubenswrapper[4830]: E0227 16:30:44.889150 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="814a49d3-5ece-4609-97f7-745b3843d2e3" containerName="ceilometer-notification-agent" Feb 27 16:30:44 crc kubenswrapper[4830]: I0227 16:30:44.889157 4830 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="814a49d3-5ece-4609-97f7-745b3843d2e3" containerName="ceilometer-notification-agent" Feb 27 16:30:44 crc kubenswrapper[4830]: E0227 16:30:44.889175 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="814a49d3-5ece-4609-97f7-745b3843d2e3" containerName="sg-core" Feb 27 16:30:44 crc kubenswrapper[4830]: I0227 16:30:44.889182 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="814a49d3-5ece-4609-97f7-745b3843d2e3" containerName="sg-core" Feb 27 16:30:44 crc kubenswrapper[4830]: E0227 16:30:44.889201 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="814a49d3-5ece-4609-97f7-745b3843d2e3" containerName="proxy-httpd" Feb 27 16:30:44 crc kubenswrapper[4830]: I0227 16:30:44.889207 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="814a49d3-5ece-4609-97f7-745b3843d2e3" containerName="proxy-httpd" Feb 27 16:30:44 crc kubenswrapper[4830]: I0227 16:30:44.889382 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="814a49d3-5ece-4609-97f7-745b3843d2e3" containerName="ceilometer-central-agent" Feb 27 16:30:44 crc kubenswrapper[4830]: I0227 16:30:44.889397 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="814a49d3-5ece-4609-97f7-745b3843d2e3" containerName="ceilometer-notification-agent" Feb 27 16:30:44 crc kubenswrapper[4830]: I0227 16:30:44.889413 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="814a49d3-5ece-4609-97f7-745b3843d2e3" containerName="proxy-httpd" Feb 27 16:30:44 crc kubenswrapper[4830]: I0227 16:30:44.889424 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="814a49d3-5ece-4609-97f7-745b3843d2e3" containerName="sg-core" Feb 27 16:30:44 crc kubenswrapper[4830]: I0227 16:30:44.890882 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 27 16:30:44 crc kubenswrapper[4830]: I0227 16:30:44.891332 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 27 16:30:44 crc kubenswrapper[4830]: I0227 16:30:44.893884 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 27 16:30:44 crc kubenswrapper[4830]: I0227 16:30:44.894589 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 27 16:30:44 crc kubenswrapper[4830]: I0227 16:30:44.989740 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/19288868-0c0f-4ded-98f3-80cd07b350c2-scripts\") pod \"ceilometer-0\" (UID: \"19288868-0c0f-4ded-98f3-80cd07b350c2\") " pod="openstack/ceilometer-0" Feb 27 16:30:44 crc kubenswrapper[4830]: I0227 16:30:44.989865 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/19288868-0c0f-4ded-98f3-80cd07b350c2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"19288868-0c0f-4ded-98f3-80cd07b350c2\") " pod="openstack/ceilometer-0" Feb 27 16:30:44 crc kubenswrapper[4830]: I0227 16:30:44.989888 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19288868-0c0f-4ded-98f3-80cd07b350c2-config-data\") pod \"ceilometer-0\" (UID: \"19288868-0c0f-4ded-98f3-80cd07b350c2\") " pod="openstack/ceilometer-0" Feb 27 16:30:44 crc kubenswrapper[4830]: I0227 16:30:44.989936 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/19288868-0c0f-4ded-98f3-80cd07b350c2-log-httpd\") pod \"ceilometer-0\" (UID: \"19288868-0c0f-4ded-98f3-80cd07b350c2\") " pod="openstack/ceilometer-0" Feb 27 16:30:44 crc kubenswrapper[4830]: I0227 16:30:44.989979 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19288868-0c0f-4ded-98f3-80cd07b350c2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"19288868-0c0f-4ded-98f3-80cd07b350c2\") " pod="openstack/ceilometer-0" Feb 27 16:30:44 crc kubenswrapper[4830]: I0227 16:30:44.990001 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/19288868-0c0f-4ded-98f3-80cd07b350c2-run-httpd\") pod \"ceilometer-0\" (UID: \"19288868-0c0f-4ded-98f3-80cd07b350c2\") " pod="openstack/ceilometer-0" Feb 27 16:30:44 crc kubenswrapper[4830]: I0227 16:30:44.990029 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5sk4g\" (UniqueName: \"kubernetes.io/projected/19288868-0c0f-4ded-98f3-80cd07b350c2-kube-api-access-5sk4g\") pod \"ceilometer-0\" (UID: \"19288868-0c0f-4ded-98f3-80cd07b350c2\") " pod="openstack/ceilometer-0" Feb 27 16:30:45 crc kubenswrapper[4830]: I0227 16:30:45.091978 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19288868-0c0f-4ded-98f3-80cd07b350c2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"19288868-0c0f-4ded-98f3-80cd07b350c2\") " pod="openstack/ceilometer-0" Feb 27 16:30:45 crc kubenswrapper[4830]: I0227 16:30:45.092318 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/19288868-0c0f-4ded-98f3-80cd07b350c2-run-httpd\") pod \"ceilometer-0\" (UID: \"19288868-0c0f-4ded-98f3-80cd07b350c2\") " pod="openstack/ceilometer-0" Feb 27 16:30:45 crc kubenswrapper[4830]: I0227 16:30:45.092444 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5sk4g\" (UniqueName: \"kubernetes.io/projected/19288868-0c0f-4ded-98f3-80cd07b350c2-kube-api-access-5sk4g\") pod \"ceilometer-0\" (UID: 
\"19288868-0c0f-4ded-98f3-80cd07b350c2\") " pod="openstack/ceilometer-0" Feb 27 16:30:45 crc kubenswrapper[4830]: I0227 16:30:45.093074 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/19288868-0c0f-4ded-98f3-80cd07b350c2-run-httpd\") pod \"ceilometer-0\" (UID: \"19288868-0c0f-4ded-98f3-80cd07b350c2\") " pod="openstack/ceilometer-0" Feb 27 16:30:45 crc kubenswrapper[4830]: I0227 16:30:45.093259 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/19288868-0c0f-4ded-98f3-80cd07b350c2-scripts\") pod \"ceilometer-0\" (UID: \"19288868-0c0f-4ded-98f3-80cd07b350c2\") " pod="openstack/ceilometer-0" Feb 27 16:30:45 crc kubenswrapper[4830]: I0227 16:30:45.094021 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/19288868-0c0f-4ded-98f3-80cd07b350c2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"19288868-0c0f-4ded-98f3-80cd07b350c2\") " pod="openstack/ceilometer-0" Feb 27 16:30:45 crc kubenswrapper[4830]: I0227 16:30:45.094174 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19288868-0c0f-4ded-98f3-80cd07b350c2-config-data\") pod \"ceilometer-0\" (UID: \"19288868-0c0f-4ded-98f3-80cd07b350c2\") " pod="openstack/ceilometer-0" Feb 27 16:30:45 crc kubenswrapper[4830]: I0227 16:30:45.094333 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/19288868-0c0f-4ded-98f3-80cd07b350c2-log-httpd\") pod \"ceilometer-0\" (UID: \"19288868-0c0f-4ded-98f3-80cd07b350c2\") " pod="openstack/ceilometer-0" Feb 27 16:30:45 crc kubenswrapper[4830]: I0227 16:30:45.094890 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/19288868-0c0f-4ded-98f3-80cd07b350c2-log-httpd\") pod \"ceilometer-0\" (UID: \"19288868-0c0f-4ded-98f3-80cd07b350c2\") " pod="openstack/ceilometer-0" Feb 27 16:30:45 crc kubenswrapper[4830]: I0227 16:30:45.097858 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19288868-0c0f-4ded-98f3-80cd07b350c2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"19288868-0c0f-4ded-98f3-80cd07b350c2\") " pod="openstack/ceilometer-0" Feb 27 16:30:45 crc kubenswrapper[4830]: I0227 16:30:45.098754 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/19288868-0c0f-4ded-98f3-80cd07b350c2-scripts\") pod \"ceilometer-0\" (UID: \"19288868-0c0f-4ded-98f3-80cd07b350c2\") " pod="openstack/ceilometer-0" Feb 27 16:30:45 crc kubenswrapper[4830]: I0227 16:30:45.099509 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/19288868-0c0f-4ded-98f3-80cd07b350c2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"19288868-0c0f-4ded-98f3-80cd07b350c2\") " pod="openstack/ceilometer-0" Feb 27 16:30:45 crc kubenswrapper[4830]: I0227 16:30:45.101174 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19288868-0c0f-4ded-98f3-80cd07b350c2-config-data\") pod \"ceilometer-0\" (UID: \"19288868-0c0f-4ded-98f3-80cd07b350c2\") " pod="openstack/ceilometer-0" Feb 27 16:30:45 crc kubenswrapper[4830]: I0227 16:30:45.118618 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5sk4g\" (UniqueName: \"kubernetes.io/projected/19288868-0c0f-4ded-98f3-80cd07b350c2-kube-api-access-5sk4g\") pod \"ceilometer-0\" (UID: \"19288868-0c0f-4ded-98f3-80cd07b350c2\") " pod="openstack/ceilometer-0" Feb 27 16:30:45 crc kubenswrapper[4830]: I0227 16:30:45.209889 4830 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 27 16:30:45 crc kubenswrapper[4830]: I0227 16:30:45.673306 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 27 16:30:45 crc kubenswrapper[4830]: W0227 16:30:45.682062 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod19288868_0c0f_4ded_98f3_80cd07b350c2.slice/crio-67eb07c6a65223bf5d5068cd63302aa4c76bbe7b10a48116cc7988dce8898a9a WatchSource:0}: Error finding container 67eb07c6a65223bf5d5068cd63302aa4c76bbe7b10a48116cc7988dce8898a9a: Status 404 returned error can't find the container with id 67eb07c6a65223bf5d5068cd63302aa4c76bbe7b10a48116cc7988dce8898a9a Feb 27 16:30:45 crc kubenswrapper[4830]: I0227 16:30:45.842289 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"19288868-0c0f-4ded-98f3-80cd07b350c2","Type":"ContainerStarted","Data":"67eb07c6a65223bf5d5068cd63302aa4c76bbe7b10a48116cc7988dce8898a9a"} Feb 27 16:30:46 crc kubenswrapper[4830]: I0227 16:30:46.119632 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-x7bbz"] Feb 27 16:30:46 crc kubenswrapper[4830]: I0227 16:30:46.121027 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-x7bbz" Feb 27 16:30:46 crc kubenswrapper[4830]: I0227 16:30:46.124007 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Feb 27 16:30:46 crc kubenswrapper[4830]: I0227 16:30:46.124390 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-ztm2w" Feb 27 16:30:46 crc kubenswrapper[4830]: I0227 16:30:46.129138 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-x7bbz"] Feb 27 16:30:46 crc kubenswrapper[4830]: I0227 16:30:46.152758 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 27 16:30:46 crc kubenswrapper[4830]: I0227 16:30:46.213113 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4902066e-ebd0-4ea5-8620-939e120b7862-config-data\") pod \"nova-cell0-conductor-db-sync-x7bbz\" (UID: \"4902066e-ebd0-4ea5-8620-939e120b7862\") " pod="openstack/nova-cell0-conductor-db-sync-x7bbz" Feb 27 16:30:46 crc kubenswrapper[4830]: I0227 16:30:46.213379 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4902066e-ebd0-4ea5-8620-939e120b7862-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-x7bbz\" (UID: \"4902066e-ebd0-4ea5-8620-939e120b7862\") " pod="openstack/nova-cell0-conductor-db-sync-x7bbz" Feb 27 16:30:46 crc kubenswrapper[4830]: I0227 16:30:46.213528 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4902066e-ebd0-4ea5-8620-939e120b7862-scripts\") pod \"nova-cell0-conductor-db-sync-x7bbz\" (UID: \"4902066e-ebd0-4ea5-8620-939e120b7862\") " 
pod="openstack/nova-cell0-conductor-db-sync-x7bbz" Feb 27 16:30:46 crc kubenswrapper[4830]: I0227 16:30:46.213627 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdgns\" (UniqueName: \"kubernetes.io/projected/4902066e-ebd0-4ea5-8620-939e120b7862-kube-api-access-jdgns\") pod \"nova-cell0-conductor-db-sync-x7bbz\" (UID: \"4902066e-ebd0-4ea5-8620-939e120b7862\") " pod="openstack/nova-cell0-conductor-db-sync-x7bbz" Feb 27 16:30:46 crc kubenswrapper[4830]: I0227 16:30:46.314800 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4902066e-ebd0-4ea5-8620-939e120b7862-scripts\") pod \"nova-cell0-conductor-db-sync-x7bbz\" (UID: \"4902066e-ebd0-4ea5-8620-939e120b7862\") " pod="openstack/nova-cell0-conductor-db-sync-x7bbz" Feb 27 16:30:46 crc kubenswrapper[4830]: I0227 16:30:46.314872 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jdgns\" (UniqueName: \"kubernetes.io/projected/4902066e-ebd0-4ea5-8620-939e120b7862-kube-api-access-jdgns\") pod \"nova-cell0-conductor-db-sync-x7bbz\" (UID: \"4902066e-ebd0-4ea5-8620-939e120b7862\") " pod="openstack/nova-cell0-conductor-db-sync-x7bbz" Feb 27 16:30:46 crc kubenswrapper[4830]: I0227 16:30:46.314969 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4902066e-ebd0-4ea5-8620-939e120b7862-config-data\") pod \"nova-cell0-conductor-db-sync-x7bbz\" (UID: \"4902066e-ebd0-4ea5-8620-939e120b7862\") " pod="openstack/nova-cell0-conductor-db-sync-x7bbz" Feb 27 16:30:46 crc kubenswrapper[4830]: I0227 16:30:46.315016 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4902066e-ebd0-4ea5-8620-939e120b7862-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-x7bbz\" (UID: 
\"4902066e-ebd0-4ea5-8620-939e120b7862\") " pod="openstack/nova-cell0-conductor-db-sync-x7bbz" Feb 27 16:30:46 crc kubenswrapper[4830]: I0227 16:30:46.320934 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4902066e-ebd0-4ea5-8620-939e120b7862-config-data\") pod \"nova-cell0-conductor-db-sync-x7bbz\" (UID: \"4902066e-ebd0-4ea5-8620-939e120b7862\") " pod="openstack/nova-cell0-conductor-db-sync-x7bbz" Feb 27 16:30:46 crc kubenswrapper[4830]: I0227 16:30:46.323586 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4902066e-ebd0-4ea5-8620-939e120b7862-scripts\") pod \"nova-cell0-conductor-db-sync-x7bbz\" (UID: \"4902066e-ebd0-4ea5-8620-939e120b7862\") " pod="openstack/nova-cell0-conductor-db-sync-x7bbz" Feb 27 16:30:46 crc kubenswrapper[4830]: I0227 16:30:46.332244 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4902066e-ebd0-4ea5-8620-939e120b7862-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-x7bbz\" (UID: \"4902066e-ebd0-4ea5-8620-939e120b7862\") " pod="openstack/nova-cell0-conductor-db-sync-x7bbz" Feb 27 16:30:46 crc kubenswrapper[4830]: I0227 16:30:46.338774 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jdgns\" (UniqueName: \"kubernetes.io/projected/4902066e-ebd0-4ea5-8620-939e120b7862-kube-api-access-jdgns\") pod \"nova-cell0-conductor-db-sync-x7bbz\" (UID: \"4902066e-ebd0-4ea5-8620-939e120b7862\") " pod="openstack/nova-cell0-conductor-db-sync-x7bbz" Feb 27 16:30:46 crc kubenswrapper[4830]: I0227 16:30:46.468092 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-x7bbz" Feb 27 16:30:46 crc kubenswrapper[4830]: I0227 16:30:46.851721 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"19288868-0c0f-4ded-98f3-80cd07b350c2","Type":"ContainerStarted","Data":"f8f34796ac91c21f0c695f92907c09775357969b6a31121699e96e8f2d086147"} Feb 27 16:30:46 crc kubenswrapper[4830]: I0227 16:30:46.936310 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-x7bbz"] Feb 27 16:30:47 crc kubenswrapper[4830]: I0227 16:30:47.018260 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-8559c55d4f-z6hpf" Feb 27 16:30:47 crc kubenswrapper[4830]: I0227 16:30:47.085835 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-c844968fb-vzqlt"] Feb 27 16:30:47 crc kubenswrapper[4830]: I0227 16:30:47.086497 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-c844968fb-vzqlt" podUID="92e3fe75-3936-4491-80ad-e2b738f023b2" containerName="neutron-httpd" containerID="cri-o://370cccbbf378833ab78c48ea79a72b415f5be5b63595a1d5c9da597419ac42f8" gracePeriod=30 Feb 27 16:30:47 crc kubenswrapper[4830]: I0227 16:30:47.086571 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-c844968fb-vzqlt" podUID="92e3fe75-3936-4491-80ad-e2b738f023b2" containerName="neutron-api" containerID="cri-o://a986bbda403364dd28f3ffc0954e8e1f8595a2d731d8bb3cf54223d09a324a21" gracePeriod=30 Feb 27 16:30:47 crc kubenswrapper[4830]: I0227 16:30:47.865090 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-x7bbz" event={"ID":"4902066e-ebd0-4ea5-8620-939e120b7862","Type":"ContainerStarted","Data":"d2ad27794c89cb64479c046dc8f008e32e7d43ab722503d8779cb789817fe98f"} Feb 27 16:30:47 crc kubenswrapper[4830]: I0227 16:30:47.867085 4830 generic.go:334] 
"Generic (PLEG): container finished" podID="92e3fe75-3936-4491-80ad-e2b738f023b2" containerID="370cccbbf378833ab78c48ea79a72b415f5be5b63595a1d5c9da597419ac42f8" exitCode=0 Feb 27 16:30:47 crc kubenswrapper[4830]: I0227 16:30:47.867170 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-c844968fb-vzqlt" event={"ID":"92e3fe75-3936-4491-80ad-e2b738f023b2","Type":"ContainerDied","Data":"370cccbbf378833ab78c48ea79a72b415f5be5b63595a1d5c9da597419ac42f8"} Feb 27 16:30:47 crc kubenswrapper[4830]: I0227 16:30:47.871779 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"19288868-0c0f-4ded-98f3-80cd07b350c2","Type":"ContainerStarted","Data":"98c4c8d56c7429d2d9520ab93e5ce3ee5be86799ca9c538051edf6e0b6ea6c3d"} Feb 27 16:30:48 crc kubenswrapper[4830]: I0227 16:30:48.883208 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"19288868-0c0f-4ded-98f3-80cd07b350c2","Type":"ContainerStarted","Data":"d5a7dd60a232991741b101e4a9891977b3f095d90be1312762610a6cc6b35dfd"} Feb 27 16:30:50 crc kubenswrapper[4830]: I0227 16:30:50.293382 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 27 16:30:50 crc kubenswrapper[4830]: I0227 16:30:50.293675 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 27 16:30:50 crc kubenswrapper[4830]: I0227 16:30:50.336870 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 27 16:30:50 crc kubenswrapper[4830]: I0227 16:30:50.338184 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 27 16:30:50 crc kubenswrapper[4830]: I0227 16:30:50.441117 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 27 16:30:50 crc 
kubenswrapper[4830]: I0227 16:30:50.899604 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 27 16:30:50 crc kubenswrapper[4830]: I0227 16:30:50.899837 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 27 16:30:51 crc kubenswrapper[4830]: I0227 16:30:51.474503 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 27 16:30:51 crc kubenswrapper[4830]: I0227 16:30:51.475211 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 27 16:30:51 crc kubenswrapper[4830]: I0227 16:30:51.517751 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 27 16:30:51 crc kubenswrapper[4830]: I0227 16:30:51.533084 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 27 16:30:51 crc kubenswrapper[4830]: I0227 16:30:51.907771 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 27 16:30:51 crc kubenswrapper[4830]: I0227 16:30:51.907824 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 27 16:30:52 crc kubenswrapper[4830]: I0227 16:30:52.916572 4830 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 27 16:30:52 crc kubenswrapper[4830]: I0227 16:30:52.916909 4830 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 27 16:30:52 crc kubenswrapper[4830]: I0227 16:30:52.941828 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 27 16:30:53 crc kubenswrapper[4830]: I0227 16:30:53.228438 4830 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 27 16:30:53 crc kubenswrapper[4830]: I0227 16:30:53.934731 4830 generic.go:334] "Generic (PLEG): container finished" podID="92e3fe75-3936-4491-80ad-e2b738f023b2" containerID="a986bbda403364dd28f3ffc0954e8e1f8595a2d731d8bb3cf54223d09a324a21" exitCode=0 Feb 27 16:30:53 crc kubenswrapper[4830]: I0227 16:30:53.934815 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-c844968fb-vzqlt" event={"ID":"92e3fe75-3936-4491-80ad-e2b738f023b2","Type":"ContainerDied","Data":"a986bbda403364dd28f3ffc0954e8e1f8595a2d731d8bb3cf54223d09a324a21"} Feb 27 16:30:53 crc kubenswrapper[4830]: I0227 16:30:53.965371 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 27 16:30:53 crc kubenswrapper[4830]: I0227 16:30:53.965625 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 27 16:30:57 crc kubenswrapper[4830]: I0227 16:30:57.935711 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-c844968fb-vzqlt" Feb 27 16:30:57 crc kubenswrapper[4830]: I0227 16:30:57.988636 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-c844968fb-vzqlt" event={"ID":"92e3fe75-3936-4491-80ad-e2b738f023b2","Type":"ContainerDied","Data":"6d503f9c2d8929099442767fef35a030b24a05a6f13adde4a763f40df6a0ba49"} Feb 27 16:30:57 crc kubenswrapper[4830]: I0227 16:30:57.988684 4830 scope.go:117] "RemoveContainer" containerID="370cccbbf378833ab78c48ea79a72b415f5be5b63595a1d5c9da597419ac42f8" Feb 27 16:30:57 crc kubenswrapper[4830]: I0227 16:30:57.988837 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-c844968fb-vzqlt" Feb 27 16:30:58 crc kubenswrapper[4830]: I0227 16:30:58.042138 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/92e3fe75-3936-4491-80ad-e2b738f023b2-config\") pod \"92e3fe75-3936-4491-80ad-e2b738f023b2\" (UID: \"92e3fe75-3936-4491-80ad-e2b738f023b2\") " Feb 27 16:30:58 crc kubenswrapper[4830]: I0227 16:30:58.042231 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jnrhg\" (UniqueName: \"kubernetes.io/projected/92e3fe75-3936-4491-80ad-e2b738f023b2-kube-api-access-jnrhg\") pod \"92e3fe75-3936-4491-80ad-e2b738f023b2\" (UID: \"92e3fe75-3936-4491-80ad-e2b738f023b2\") " Feb 27 16:30:58 crc kubenswrapper[4830]: I0227 16:30:58.042351 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/92e3fe75-3936-4491-80ad-e2b738f023b2-httpd-config\") pod \"92e3fe75-3936-4491-80ad-e2b738f023b2\" (UID: \"92e3fe75-3936-4491-80ad-e2b738f023b2\") " Feb 27 16:30:58 crc kubenswrapper[4830]: I0227 16:30:58.042403 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92e3fe75-3936-4491-80ad-e2b738f023b2-combined-ca-bundle\") pod \"92e3fe75-3936-4491-80ad-e2b738f023b2\" (UID: \"92e3fe75-3936-4491-80ad-e2b738f023b2\") " Feb 27 16:30:58 crc kubenswrapper[4830]: I0227 16:30:58.042435 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/92e3fe75-3936-4491-80ad-e2b738f023b2-ovndb-tls-certs\") pod \"92e3fe75-3936-4491-80ad-e2b738f023b2\" (UID: \"92e3fe75-3936-4491-80ad-e2b738f023b2\") " Feb 27 16:30:58 crc kubenswrapper[4830]: I0227 16:30:58.047372 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/92e3fe75-3936-4491-80ad-e2b738f023b2-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "92e3fe75-3936-4491-80ad-e2b738f023b2" (UID: "92e3fe75-3936-4491-80ad-e2b738f023b2"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:30:58 crc kubenswrapper[4830]: I0227 16:30:58.047448 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92e3fe75-3936-4491-80ad-e2b738f023b2-kube-api-access-jnrhg" (OuterVolumeSpecName: "kube-api-access-jnrhg") pod "92e3fe75-3936-4491-80ad-e2b738f023b2" (UID: "92e3fe75-3936-4491-80ad-e2b738f023b2"). InnerVolumeSpecName "kube-api-access-jnrhg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:30:58 crc kubenswrapper[4830]: I0227 16:30:58.088153 4830 scope.go:117] "RemoveContainer" containerID="a986bbda403364dd28f3ffc0954e8e1f8595a2d731d8bb3cf54223d09a324a21" Feb 27 16:30:58 crc kubenswrapper[4830]: I0227 16:30:58.091288 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92e3fe75-3936-4491-80ad-e2b738f023b2-config" (OuterVolumeSpecName: "config") pod "92e3fe75-3936-4491-80ad-e2b738f023b2" (UID: "92e3fe75-3936-4491-80ad-e2b738f023b2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:30:58 crc kubenswrapper[4830]: I0227 16:30:58.101437 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92e3fe75-3936-4491-80ad-e2b738f023b2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "92e3fe75-3936-4491-80ad-e2b738f023b2" (UID: "92e3fe75-3936-4491-80ad-e2b738f023b2"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:30:58 crc kubenswrapper[4830]: I0227 16:30:58.114578 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92e3fe75-3936-4491-80ad-e2b738f023b2-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "92e3fe75-3936-4491-80ad-e2b738f023b2" (UID: "92e3fe75-3936-4491-80ad-e2b738f023b2"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:30:58 crc kubenswrapper[4830]: I0227 16:30:58.151155 4830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92e3fe75-3936-4491-80ad-e2b738f023b2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 16:30:58 crc kubenswrapper[4830]: I0227 16:30:58.151180 4830 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/92e3fe75-3936-4491-80ad-e2b738f023b2-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 27 16:30:58 crc kubenswrapper[4830]: I0227 16:30:58.151190 4830 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/92e3fe75-3936-4491-80ad-e2b738f023b2-config\") on node \"crc\" DevicePath \"\"" Feb 27 16:30:58 crc kubenswrapper[4830]: I0227 16:30:58.151199 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jnrhg\" (UniqueName: \"kubernetes.io/projected/92e3fe75-3936-4491-80ad-e2b738f023b2-kube-api-access-jnrhg\") on node \"crc\" DevicePath \"\"" Feb 27 16:30:58 crc kubenswrapper[4830]: I0227 16:30:58.151208 4830 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/92e3fe75-3936-4491-80ad-e2b738f023b2-httpd-config\") on node \"crc\" DevicePath \"\"" Feb 27 16:30:58 crc kubenswrapper[4830]: I0227 16:30:58.320393 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-c844968fb-vzqlt"] Feb 27 16:30:58 
crc kubenswrapper[4830]: I0227 16:30:58.329634 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-c844968fb-vzqlt"] Feb 27 16:30:58 crc kubenswrapper[4830]: I0227 16:30:58.775530 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92e3fe75-3936-4491-80ad-e2b738f023b2" path="/var/lib/kubelet/pods/92e3fe75-3936-4491-80ad-e2b738f023b2/volumes" Feb 27 16:30:58 crc kubenswrapper[4830]: I0227 16:30:58.999712 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"19288868-0c0f-4ded-98f3-80cd07b350c2","Type":"ContainerStarted","Data":"0706f1a0759f33eb60e2fb30aec7479b6c7a940dfc76f25533bdda83b5ca913e"} Feb 27 16:30:59 crc kubenswrapper[4830]: I0227 16:30:58.999805 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="19288868-0c0f-4ded-98f3-80cd07b350c2" containerName="ceilometer-central-agent" containerID="cri-o://f8f34796ac91c21f0c695f92907c09775357969b6a31121699e96e8f2d086147" gracePeriod=30 Feb 27 16:30:59 crc kubenswrapper[4830]: I0227 16:30:59.000027 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 27 16:30:59 crc kubenswrapper[4830]: I0227 16:30:59.000068 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="19288868-0c0f-4ded-98f3-80cd07b350c2" containerName="proxy-httpd" containerID="cri-o://0706f1a0759f33eb60e2fb30aec7479b6c7a940dfc76f25533bdda83b5ca913e" gracePeriod=30 Feb 27 16:30:59 crc kubenswrapper[4830]: I0227 16:30:59.000118 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="19288868-0c0f-4ded-98f3-80cd07b350c2" containerName="sg-core" containerID="cri-o://d5a7dd60a232991741b101e4a9891977b3f095d90be1312762610a6cc6b35dfd" gracePeriod=30 Feb 27 16:30:59 crc kubenswrapper[4830]: I0227 16:30:59.000160 4830 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openstack/ceilometer-0" podUID="19288868-0c0f-4ded-98f3-80cd07b350c2" containerName="ceilometer-notification-agent" containerID="cri-o://98c4c8d56c7429d2d9520ab93e5ce3ee5be86799ca9c538051edf6e0b6ea6c3d" gracePeriod=30 Feb 27 16:30:59 crc kubenswrapper[4830]: I0227 16:30:59.004852 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-x7bbz" event={"ID":"4902066e-ebd0-4ea5-8620-939e120b7862","Type":"ContainerStarted","Data":"fab28b8a8cf858968ae516c93ad0ff86bedd83c0c7423732d17b0e07a14d18d2"} Feb 27 16:30:59 crc kubenswrapper[4830]: I0227 16:30:59.067491 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.8047144250000002 podStartE2EDuration="15.067471213s" podCreationTimestamp="2026-02-27 16:30:44 +0000 UTC" firstStartedPulling="2026-02-27 16:30:45.688356412 +0000 UTC m=+1441.777628895" lastFinishedPulling="2026-02-27 16:30:57.95111322 +0000 UTC m=+1454.040385683" observedRunningTime="2026-02-27 16:30:59.031402613 +0000 UTC m=+1455.120675076" watchObservedRunningTime="2026-02-27 16:30:59.067471213 +0000 UTC m=+1455.156743686" Feb 27 16:30:59 crc kubenswrapper[4830]: I0227 16:30:59.067777 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-x7bbz" podStartSLOduration=2.047951406 podStartE2EDuration="13.06777282s" podCreationTimestamp="2026-02-27 16:30:46 +0000 UTC" firstStartedPulling="2026-02-27 16:30:46.939461284 +0000 UTC m=+1443.028733747" lastFinishedPulling="2026-02-27 16:30:57.959282698 +0000 UTC m=+1454.048555161" observedRunningTime="2026-02-27 16:30:59.057688427 +0000 UTC m=+1455.146960900" watchObservedRunningTime="2026-02-27 16:30:59.06777282 +0000 UTC m=+1455.157045293" Feb 27 16:31:00 crc kubenswrapper[4830]: I0227 16:31:00.024066 4830 generic.go:334] "Generic (PLEG): container finished" podID="19288868-0c0f-4ded-98f3-80cd07b350c2" 
containerID="0706f1a0759f33eb60e2fb30aec7479b6c7a940dfc76f25533bdda83b5ca913e" exitCode=0 Feb 27 16:31:00 crc kubenswrapper[4830]: I0227 16:31:00.024743 4830 generic.go:334] "Generic (PLEG): container finished" podID="19288868-0c0f-4ded-98f3-80cd07b350c2" containerID="d5a7dd60a232991741b101e4a9891977b3f095d90be1312762610a6cc6b35dfd" exitCode=2 Feb 27 16:31:00 crc kubenswrapper[4830]: I0227 16:31:00.024753 4830 generic.go:334] "Generic (PLEG): container finished" podID="19288868-0c0f-4ded-98f3-80cd07b350c2" containerID="98c4c8d56c7429d2d9520ab93e5ce3ee5be86799ca9c538051edf6e0b6ea6c3d" exitCode=0 Feb 27 16:31:00 crc kubenswrapper[4830]: I0227 16:31:00.024760 4830 generic.go:334] "Generic (PLEG): container finished" podID="19288868-0c0f-4ded-98f3-80cd07b350c2" containerID="f8f34796ac91c21f0c695f92907c09775357969b6a31121699e96e8f2d086147" exitCode=0 Feb 27 16:31:00 crc kubenswrapper[4830]: I0227 16:31:00.024113 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"19288868-0c0f-4ded-98f3-80cd07b350c2","Type":"ContainerDied","Data":"0706f1a0759f33eb60e2fb30aec7479b6c7a940dfc76f25533bdda83b5ca913e"} Feb 27 16:31:00 crc kubenswrapper[4830]: I0227 16:31:00.024871 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"19288868-0c0f-4ded-98f3-80cd07b350c2","Type":"ContainerDied","Data":"d5a7dd60a232991741b101e4a9891977b3f095d90be1312762610a6cc6b35dfd"} Feb 27 16:31:00 crc kubenswrapper[4830]: I0227 16:31:00.024901 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"19288868-0c0f-4ded-98f3-80cd07b350c2","Type":"ContainerDied","Data":"98c4c8d56c7429d2d9520ab93e5ce3ee5be86799ca9c538051edf6e0b6ea6c3d"} Feb 27 16:31:00 crc kubenswrapper[4830]: I0227 16:31:00.024920 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"19288868-0c0f-4ded-98f3-80cd07b350c2","Type":"ContainerDied","Data":"f8f34796ac91c21f0c695f92907c09775357969b6a31121699e96e8f2d086147"} Feb 27 16:31:00 crc kubenswrapper[4830]: I0227 16:31:00.025024 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"19288868-0c0f-4ded-98f3-80cd07b350c2","Type":"ContainerDied","Data":"67eb07c6a65223bf5d5068cd63302aa4c76bbe7b10a48116cc7988dce8898a9a"} Feb 27 16:31:00 crc kubenswrapper[4830]: I0227 16:31:00.025053 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="67eb07c6a65223bf5d5068cd63302aa4c76bbe7b10a48116cc7988dce8898a9a" Feb 27 16:31:00 crc kubenswrapper[4830]: I0227 16:31:00.092693 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 27 16:31:00 crc kubenswrapper[4830]: I0227 16:31:00.192801 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5sk4g\" (UniqueName: \"kubernetes.io/projected/19288868-0c0f-4ded-98f3-80cd07b350c2-kube-api-access-5sk4g\") pod \"19288868-0c0f-4ded-98f3-80cd07b350c2\" (UID: \"19288868-0c0f-4ded-98f3-80cd07b350c2\") " Feb 27 16:31:00 crc kubenswrapper[4830]: I0227 16:31:00.192856 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/19288868-0c0f-4ded-98f3-80cd07b350c2-sg-core-conf-yaml\") pod \"19288868-0c0f-4ded-98f3-80cd07b350c2\" (UID: \"19288868-0c0f-4ded-98f3-80cd07b350c2\") " Feb 27 16:31:00 crc kubenswrapper[4830]: I0227 16:31:00.192898 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19288868-0c0f-4ded-98f3-80cd07b350c2-combined-ca-bundle\") pod \"19288868-0c0f-4ded-98f3-80cd07b350c2\" (UID: \"19288868-0c0f-4ded-98f3-80cd07b350c2\") " Feb 27 16:31:00 crc kubenswrapper[4830]: I0227 16:31:00.193005 4830 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/19288868-0c0f-4ded-98f3-80cd07b350c2-scripts\") pod \"19288868-0c0f-4ded-98f3-80cd07b350c2\" (UID: \"19288868-0c0f-4ded-98f3-80cd07b350c2\") " Feb 27 16:31:00 crc kubenswrapper[4830]: I0227 16:31:00.193059 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/19288868-0c0f-4ded-98f3-80cd07b350c2-run-httpd\") pod \"19288868-0c0f-4ded-98f3-80cd07b350c2\" (UID: \"19288868-0c0f-4ded-98f3-80cd07b350c2\") " Feb 27 16:31:00 crc kubenswrapper[4830]: I0227 16:31:00.193107 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/19288868-0c0f-4ded-98f3-80cd07b350c2-log-httpd\") pod \"19288868-0c0f-4ded-98f3-80cd07b350c2\" (UID: \"19288868-0c0f-4ded-98f3-80cd07b350c2\") " Feb 27 16:31:00 crc kubenswrapper[4830]: I0227 16:31:00.193272 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19288868-0c0f-4ded-98f3-80cd07b350c2-config-data\") pod \"19288868-0c0f-4ded-98f3-80cd07b350c2\" (UID: \"19288868-0c0f-4ded-98f3-80cd07b350c2\") " Feb 27 16:31:00 crc kubenswrapper[4830]: I0227 16:31:00.193654 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/19288868-0c0f-4ded-98f3-80cd07b350c2-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "19288868-0c0f-4ded-98f3-80cd07b350c2" (UID: "19288868-0c0f-4ded-98f3-80cd07b350c2"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 16:31:00 crc kubenswrapper[4830]: I0227 16:31:00.193783 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/19288868-0c0f-4ded-98f3-80cd07b350c2-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "19288868-0c0f-4ded-98f3-80cd07b350c2" (UID: "19288868-0c0f-4ded-98f3-80cd07b350c2"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 16:31:00 crc kubenswrapper[4830]: I0227 16:31:00.200137 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19288868-0c0f-4ded-98f3-80cd07b350c2-kube-api-access-5sk4g" (OuterVolumeSpecName: "kube-api-access-5sk4g") pod "19288868-0c0f-4ded-98f3-80cd07b350c2" (UID: "19288868-0c0f-4ded-98f3-80cd07b350c2"). InnerVolumeSpecName "kube-api-access-5sk4g". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:31:00 crc kubenswrapper[4830]: I0227 16:31:00.207288 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19288868-0c0f-4ded-98f3-80cd07b350c2-scripts" (OuterVolumeSpecName: "scripts") pod "19288868-0c0f-4ded-98f3-80cd07b350c2" (UID: "19288868-0c0f-4ded-98f3-80cd07b350c2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:31:00 crc kubenswrapper[4830]: I0227 16:31:00.231576 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19288868-0c0f-4ded-98f3-80cd07b350c2-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "19288868-0c0f-4ded-98f3-80cd07b350c2" (UID: "19288868-0c0f-4ded-98f3-80cd07b350c2"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:31:00 crc kubenswrapper[4830]: I0227 16:31:00.281644 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19288868-0c0f-4ded-98f3-80cd07b350c2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "19288868-0c0f-4ded-98f3-80cd07b350c2" (UID: "19288868-0c0f-4ded-98f3-80cd07b350c2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:31:00 crc kubenswrapper[4830]: I0227 16:31:00.295228 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5sk4g\" (UniqueName: \"kubernetes.io/projected/19288868-0c0f-4ded-98f3-80cd07b350c2-kube-api-access-5sk4g\") on node \"crc\" DevicePath \"\"" Feb 27 16:31:00 crc kubenswrapper[4830]: I0227 16:31:00.295508 4830 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/19288868-0c0f-4ded-98f3-80cd07b350c2-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 27 16:31:00 crc kubenswrapper[4830]: I0227 16:31:00.295518 4830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19288868-0c0f-4ded-98f3-80cd07b350c2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 16:31:00 crc kubenswrapper[4830]: I0227 16:31:00.295526 4830 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/19288868-0c0f-4ded-98f3-80cd07b350c2-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 16:31:00 crc kubenswrapper[4830]: I0227 16:31:00.295538 4830 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/19288868-0c0f-4ded-98f3-80cd07b350c2-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 27 16:31:00 crc kubenswrapper[4830]: I0227 16:31:00.295546 4830 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/19288868-0c0f-4ded-98f3-80cd07b350c2-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 27 16:31:00 crc kubenswrapper[4830]: I0227 16:31:00.319870 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19288868-0c0f-4ded-98f3-80cd07b350c2-config-data" (OuterVolumeSpecName: "config-data") pod "19288868-0c0f-4ded-98f3-80cd07b350c2" (UID: "19288868-0c0f-4ded-98f3-80cd07b350c2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:31:00 crc kubenswrapper[4830]: I0227 16:31:00.397400 4830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19288868-0c0f-4ded-98f3-80cd07b350c2-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 16:31:01 crc kubenswrapper[4830]: I0227 16:31:01.033736 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 27 16:31:01 crc kubenswrapper[4830]: I0227 16:31:01.062049 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 27 16:31:01 crc kubenswrapper[4830]: I0227 16:31:01.079914 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 27 16:31:01 crc kubenswrapper[4830]: I0227 16:31:01.097716 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 27 16:31:01 crc kubenswrapper[4830]: E0227 16:31:01.098317 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92e3fe75-3936-4491-80ad-e2b738f023b2" containerName="neutron-api" Feb 27 16:31:01 crc kubenswrapper[4830]: I0227 16:31:01.098346 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="92e3fe75-3936-4491-80ad-e2b738f023b2" containerName="neutron-api" Feb 27 16:31:01 crc kubenswrapper[4830]: E0227 16:31:01.098366 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19288868-0c0f-4ded-98f3-80cd07b350c2" 
containerName="ceilometer-central-agent" Feb 27 16:31:01 crc kubenswrapper[4830]: I0227 16:31:01.098378 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="19288868-0c0f-4ded-98f3-80cd07b350c2" containerName="ceilometer-central-agent" Feb 27 16:31:01 crc kubenswrapper[4830]: E0227 16:31:01.098409 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19288868-0c0f-4ded-98f3-80cd07b350c2" containerName="proxy-httpd" Feb 27 16:31:01 crc kubenswrapper[4830]: I0227 16:31:01.098420 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="19288868-0c0f-4ded-98f3-80cd07b350c2" containerName="proxy-httpd" Feb 27 16:31:01 crc kubenswrapper[4830]: E0227 16:31:01.098437 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92e3fe75-3936-4491-80ad-e2b738f023b2" containerName="neutron-httpd" Feb 27 16:31:01 crc kubenswrapper[4830]: I0227 16:31:01.098450 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="92e3fe75-3936-4491-80ad-e2b738f023b2" containerName="neutron-httpd" Feb 27 16:31:01 crc kubenswrapper[4830]: E0227 16:31:01.098472 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19288868-0c0f-4ded-98f3-80cd07b350c2" containerName="ceilometer-notification-agent" Feb 27 16:31:01 crc kubenswrapper[4830]: I0227 16:31:01.098486 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="19288868-0c0f-4ded-98f3-80cd07b350c2" containerName="ceilometer-notification-agent" Feb 27 16:31:01 crc kubenswrapper[4830]: E0227 16:31:01.098508 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19288868-0c0f-4ded-98f3-80cd07b350c2" containerName="sg-core" Feb 27 16:31:01 crc kubenswrapper[4830]: I0227 16:31:01.098518 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="19288868-0c0f-4ded-98f3-80cd07b350c2" containerName="sg-core" Feb 27 16:31:01 crc kubenswrapper[4830]: I0227 16:31:01.099082 4830 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="19288868-0c0f-4ded-98f3-80cd07b350c2" containerName="ceilometer-central-agent" Feb 27 16:31:01 crc kubenswrapper[4830]: I0227 16:31:01.099352 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="19288868-0c0f-4ded-98f3-80cd07b350c2" containerName="ceilometer-notification-agent" Feb 27 16:31:01 crc kubenswrapper[4830]: I0227 16:31:01.099366 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="92e3fe75-3936-4491-80ad-e2b738f023b2" containerName="neutron-api" Feb 27 16:31:01 crc kubenswrapper[4830]: I0227 16:31:01.099377 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="92e3fe75-3936-4491-80ad-e2b738f023b2" containerName="neutron-httpd" Feb 27 16:31:01 crc kubenswrapper[4830]: I0227 16:31:01.099392 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="19288868-0c0f-4ded-98f3-80cd07b350c2" containerName="sg-core" Feb 27 16:31:01 crc kubenswrapper[4830]: I0227 16:31:01.099405 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="19288868-0c0f-4ded-98f3-80cd07b350c2" containerName="proxy-httpd" Feb 27 16:31:01 crc kubenswrapper[4830]: I0227 16:31:01.100926 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 27 16:31:01 crc kubenswrapper[4830]: I0227 16:31:01.103646 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 27 16:31:01 crc kubenswrapper[4830]: I0227 16:31:01.103788 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 27 16:31:01 crc kubenswrapper[4830]: I0227 16:31:01.112877 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 27 16:31:01 crc kubenswrapper[4830]: I0227 16:31:01.216005 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76ad9c55-0d81-4a2b-8d91-486c19d80b98-scripts\") pod \"ceilometer-0\" (UID: \"76ad9c55-0d81-4a2b-8d91-486c19d80b98\") " pod="openstack/ceilometer-0" Feb 27 16:31:01 crc kubenswrapper[4830]: I0227 16:31:01.216090 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76ad9c55-0d81-4a2b-8d91-486c19d80b98-config-data\") pod \"ceilometer-0\" (UID: \"76ad9c55-0d81-4a2b-8d91-486c19d80b98\") " pod="openstack/ceilometer-0" Feb 27 16:31:01 crc kubenswrapper[4830]: I0227 16:31:01.216315 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ns5bx\" (UniqueName: \"kubernetes.io/projected/76ad9c55-0d81-4a2b-8d91-486c19d80b98-kube-api-access-ns5bx\") pod \"ceilometer-0\" (UID: \"76ad9c55-0d81-4a2b-8d91-486c19d80b98\") " pod="openstack/ceilometer-0" Feb 27 16:31:01 crc kubenswrapper[4830]: I0227 16:31:01.216508 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76ad9c55-0d81-4a2b-8d91-486c19d80b98-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"76ad9c55-0d81-4a2b-8d91-486c19d80b98\") " 
pod="openstack/ceilometer-0" Feb 27 16:31:01 crc kubenswrapper[4830]: I0227 16:31:01.216620 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/76ad9c55-0d81-4a2b-8d91-486c19d80b98-log-httpd\") pod \"ceilometer-0\" (UID: \"76ad9c55-0d81-4a2b-8d91-486c19d80b98\") " pod="openstack/ceilometer-0" Feb 27 16:31:01 crc kubenswrapper[4830]: I0227 16:31:01.216712 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/76ad9c55-0d81-4a2b-8d91-486c19d80b98-run-httpd\") pod \"ceilometer-0\" (UID: \"76ad9c55-0d81-4a2b-8d91-486c19d80b98\") " pod="openstack/ceilometer-0" Feb 27 16:31:01 crc kubenswrapper[4830]: I0227 16:31:01.216855 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/76ad9c55-0d81-4a2b-8d91-486c19d80b98-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"76ad9c55-0d81-4a2b-8d91-486c19d80b98\") " pod="openstack/ceilometer-0" Feb 27 16:31:01 crc kubenswrapper[4830]: I0227 16:31:01.318256 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/76ad9c55-0d81-4a2b-8d91-486c19d80b98-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"76ad9c55-0d81-4a2b-8d91-486c19d80b98\") " pod="openstack/ceilometer-0" Feb 27 16:31:01 crc kubenswrapper[4830]: I0227 16:31:01.318325 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76ad9c55-0d81-4a2b-8d91-486c19d80b98-scripts\") pod \"ceilometer-0\" (UID: \"76ad9c55-0d81-4a2b-8d91-486c19d80b98\") " pod="openstack/ceilometer-0" Feb 27 16:31:01 crc kubenswrapper[4830]: I0227 16:31:01.318360 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/76ad9c55-0d81-4a2b-8d91-486c19d80b98-config-data\") pod \"ceilometer-0\" (UID: \"76ad9c55-0d81-4a2b-8d91-486c19d80b98\") " pod="openstack/ceilometer-0" Feb 27 16:31:01 crc kubenswrapper[4830]: I0227 16:31:01.318488 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ns5bx\" (UniqueName: \"kubernetes.io/projected/76ad9c55-0d81-4a2b-8d91-486c19d80b98-kube-api-access-ns5bx\") pod \"ceilometer-0\" (UID: \"76ad9c55-0d81-4a2b-8d91-486c19d80b98\") " pod="openstack/ceilometer-0" Feb 27 16:31:01 crc kubenswrapper[4830]: I0227 16:31:01.318548 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76ad9c55-0d81-4a2b-8d91-486c19d80b98-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"76ad9c55-0d81-4a2b-8d91-486c19d80b98\") " pod="openstack/ceilometer-0" Feb 27 16:31:01 crc kubenswrapper[4830]: I0227 16:31:01.318590 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/76ad9c55-0d81-4a2b-8d91-486c19d80b98-log-httpd\") pod \"ceilometer-0\" (UID: \"76ad9c55-0d81-4a2b-8d91-486c19d80b98\") " pod="openstack/ceilometer-0" Feb 27 16:31:01 crc kubenswrapper[4830]: I0227 16:31:01.318641 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/76ad9c55-0d81-4a2b-8d91-486c19d80b98-run-httpd\") pod \"ceilometer-0\" (UID: \"76ad9c55-0d81-4a2b-8d91-486c19d80b98\") " pod="openstack/ceilometer-0" Feb 27 16:31:01 crc kubenswrapper[4830]: I0227 16:31:01.319336 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/76ad9c55-0d81-4a2b-8d91-486c19d80b98-run-httpd\") pod \"ceilometer-0\" (UID: \"76ad9c55-0d81-4a2b-8d91-486c19d80b98\") " pod="openstack/ceilometer-0" Feb 27 16:31:01 crc kubenswrapper[4830]: 
I0227 16:31:01.320918 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/76ad9c55-0d81-4a2b-8d91-486c19d80b98-log-httpd\") pod \"ceilometer-0\" (UID: \"76ad9c55-0d81-4a2b-8d91-486c19d80b98\") " pod="openstack/ceilometer-0" Feb 27 16:31:01 crc kubenswrapper[4830]: I0227 16:31:01.324565 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76ad9c55-0d81-4a2b-8d91-486c19d80b98-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"76ad9c55-0d81-4a2b-8d91-486c19d80b98\") " pod="openstack/ceilometer-0" Feb 27 16:31:01 crc kubenswrapper[4830]: I0227 16:31:01.333898 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/76ad9c55-0d81-4a2b-8d91-486c19d80b98-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"76ad9c55-0d81-4a2b-8d91-486c19d80b98\") " pod="openstack/ceilometer-0" Feb 27 16:31:01 crc kubenswrapper[4830]: I0227 16:31:01.334111 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76ad9c55-0d81-4a2b-8d91-486c19d80b98-scripts\") pod \"ceilometer-0\" (UID: \"76ad9c55-0d81-4a2b-8d91-486c19d80b98\") " pod="openstack/ceilometer-0" Feb 27 16:31:01 crc kubenswrapper[4830]: I0227 16:31:01.335166 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76ad9c55-0d81-4a2b-8d91-486c19d80b98-config-data\") pod \"ceilometer-0\" (UID: \"76ad9c55-0d81-4a2b-8d91-486c19d80b98\") " pod="openstack/ceilometer-0" Feb 27 16:31:01 crc kubenswrapper[4830]: I0227 16:31:01.338315 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ns5bx\" (UniqueName: \"kubernetes.io/projected/76ad9c55-0d81-4a2b-8d91-486c19d80b98-kube-api-access-ns5bx\") pod \"ceilometer-0\" (UID: 
\"76ad9c55-0d81-4a2b-8d91-486c19d80b98\") " pod="openstack/ceilometer-0" Feb 27 16:31:01 crc kubenswrapper[4830]: I0227 16:31:01.436549 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 27 16:31:01 crc kubenswrapper[4830]: I0227 16:31:01.937529 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 27 16:31:01 crc kubenswrapper[4830]: W0227 16:31:01.946444 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod76ad9c55_0d81_4a2b_8d91_486c19d80b98.slice/crio-00704f4a7e93b7cf77fe42a04206568d53292c13d382544a9626f15acca6ea6d WatchSource:0}: Error finding container 00704f4a7e93b7cf77fe42a04206568d53292c13d382544a9626f15acca6ea6d: Status 404 returned error can't find the container with id 00704f4a7e93b7cf77fe42a04206568d53292c13d382544a9626f15acca6ea6d Feb 27 16:31:02 crc kubenswrapper[4830]: I0227 16:31:02.046687 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"76ad9c55-0d81-4a2b-8d91-486c19d80b98","Type":"ContainerStarted","Data":"00704f4a7e93b7cf77fe42a04206568d53292c13d382544a9626f15acca6ea6d"} Feb 27 16:31:02 crc kubenswrapper[4830]: I0227 16:31:02.792708 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19288868-0c0f-4ded-98f3-80cd07b350c2" path="/var/lib/kubelet/pods/19288868-0c0f-4ded-98f3-80cd07b350c2/volumes" Feb 27 16:31:04 crc kubenswrapper[4830]: I0227 16:31:04.066089 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"76ad9c55-0d81-4a2b-8d91-486c19d80b98","Type":"ContainerStarted","Data":"e552f36126944b7e337ca983460d6896d461f4583fb8a603092c58de0b16f327"} Feb 27 16:31:04 crc kubenswrapper[4830]: I0227 16:31:04.066377 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"76ad9c55-0d81-4a2b-8d91-486c19d80b98","Type":"ContainerStarted","Data":"f61a4d1495ee95d206af98c462051c4abc76a9a448d4c09f4ba9571dd7550d3f"} Feb 27 16:31:04 crc kubenswrapper[4830]: I0227 16:31:04.687028 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 27 16:31:05 crc kubenswrapper[4830]: I0227 16:31:05.078337 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"76ad9c55-0d81-4a2b-8d91-486c19d80b98","Type":"ContainerStarted","Data":"2eb955199624369be6edc562cc1caaf56b86d7232b831b0a0d2a7906c4e4a458"} Feb 27 16:31:08 crc kubenswrapper[4830]: I0227 16:31:08.115206 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"76ad9c55-0d81-4a2b-8d91-486c19d80b98","Type":"ContainerStarted","Data":"012a655aadcb4bac701f0aebe2619b6abef41c22c727904f4186a5d0bdd5290f"} Feb 27 16:31:08 crc kubenswrapper[4830]: I0227 16:31:08.115869 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="76ad9c55-0d81-4a2b-8d91-486c19d80b98" containerName="ceilometer-central-agent" containerID="cri-o://f61a4d1495ee95d206af98c462051c4abc76a9a448d4c09f4ba9571dd7550d3f" gracePeriod=30 Feb 27 16:31:08 crc kubenswrapper[4830]: I0227 16:31:08.116168 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 27 16:31:08 crc kubenswrapper[4830]: I0227 16:31:08.116480 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="76ad9c55-0d81-4a2b-8d91-486c19d80b98" containerName="proxy-httpd" containerID="cri-o://012a655aadcb4bac701f0aebe2619b6abef41c22c727904f4186a5d0bdd5290f" gracePeriod=30 Feb 27 16:31:08 crc kubenswrapper[4830]: I0227 16:31:08.116511 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="76ad9c55-0d81-4a2b-8d91-486c19d80b98" 
containerName="ceilometer-notification-agent" containerID="cri-o://e552f36126944b7e337ca983460d6896d461f4583fb8a603092c58de0b16f327" gracePeriod=30 Feb 27 16:31:08 crc kubenswrapper[4830]: I0227 16:31:08.116569 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="76ad9c55-0d81-4a2b-8d91-486c19d80b98" containerName="sg-core" containerID="cri-o://2eb955199624369be6edc562cc1caaf56b86d7232b831b0a0d2a7906c4e4a458" gracePeriod=30 Feb 27 16:31:08 crc kubenswrapper[4830]: I0227 16:31:08.146976 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.545844311 podStartE2EDuration="7.146956732s" podCreationTimestamp="2026-02-27 16:31:01 +0000 UTC" firstStartedPulling="2026-02-27 16:31:01.949498719 +0000 UTC m=+1458.038771182" lastFinishedPulling="2026-02-27 16:31:07.5506111 +0000 UTC m=+1463.639883603" observedRunningTime="2026-02-27 16:31:08.138409356 +0000 UTC m=+1464.227681829" watchObservedRunningTime="2026-02-27 16:31:08.146956732 +0000 UTC m=+1464.236229195" Feb 27 16:31:09 crc kubenswrapper[4830]: I0227 16:31:09.131661 4830 generic.go:334] "Generic (PLEG): container finished" podID="4902066e-ebd0-4ea5-8620-939e120b7862" containerID="fab28b8a8cf858968ae516c93ad0ff86bedd83c0c7423732d17b0e07a14d18d2" exitCode=0 Feb 27 16:31:09 crc kubenswrapper[4830]: I0227 16:31:09.131723 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-x7bbz" event={"ID":"4902066e-ebd0-4ea5-8620-939e120b7862","Type":"ContainerDied","Data":"fab28b8a8cf858968ae516c93ad0ff86bedd83c0c7423732d17b0e07a14d18d2"} Feb 27 16:31:09 crc kubenswrapper[4830]: I0227 16:31:09.137510 4830 generic.go:334] "Generic (PLEG): container finished" podID="76ad9c55-0d81-4a2b-8d91-486c19d80b98" containerID="012a655aadcb4bac701f0aebe2619b6abef41c22c727904f4186a5d0bdd5290f" exitCode=0 Feb 27 16:31:09 crc kubenswrapper[4830]: I0227 16:31:09.137541 4830 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"76ad9c55-0d81-4a2b-8d91-486c19d80b98","Type":"ContainerDied","Data":"012a655aadcb4bac701f0aebe2619b6abef41c22c727904f4186a5d0bdd5290f"} Feb 27 16:31:09 crc kubenswrapper[4830]: I0227 16:31:09.137571 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"76ad9c55-0d81-4a2b-8d91-486c19d80b98","Type":"ContainerDied","Data":"2eb955199624369be6edc562cc1caaf56b86d7232b831b0a0d2a7906c4e4a458"} Feb 27 16:31:09 crc kubenswrapper[4830]: I0227 16:31:09.137549 4830 generic.go:334] "Generic (PLEG): container finished" podID="76ad9c55-0d81-4a2b-8d91-486c19d80b98" containerID="2eb955199624369be6edc562cc1caaf56b86d7232b831b0a0d2a7906c4e4a458" exitCode=2 Feb 27 16:31:09 crc kubenswrapper[4830]: I0227 16:31:09.137619 4830 generic.go:334] "Generic (PLEG): container finished" podID="76ad9c55-0d81-4a2b-8d91-486c19d80b98" containerID="e552f36126944b7e337ca983460d6896d461f4583fb8a603092c58de0b16f327" exitCode=0 Feb 27 16:31:09 crc kubenswrapper[4830]: I0227 16:31:09.137637 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"76ad9c55-0d81-4a2b-8d91-486c19d80b98","Type":"ContainerDied","Data":"e552f36126944b7e337ca983460d6896d461f4583fb8a603092c58de0b16f327"} Feb 27 16:31:10 crc kubenswrapper[4830]: I0227 16:31:10.565274 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-x7bbz" Feb 27 16:31:10 crc kubenswrapper[4830]: I0227 16:31:10.712889 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4902066e-ebd0-4ea5-8620-939e120b7862-combined-ca-bundle\") pod \"4902066e-ebd0-4ea5-8620-939e120b7862\" (UID: \"4902066e-ebd0-4ea5-8620-939e120b7862\") " Feb 27 16:31:10 crc kubenswrapper[4830]: I0227 16:31:10.712979 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jdgns\" (UniqueName: \"kubernetes.io/projected/4902066e-ebd0-4ea5-8620-939e120b7862-kube-api-access-jdgns\") pod \"4902066e-ebd0-4ea5-8620-939e120b7862\" (UID: \"4902066e-ebd0-4ea5-8620-939e120b7862\") " Feb 27 16:31:10 crc kubenswrapper[4830]: I0227 16:31:10.713125 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4902066e-ebd0-4ea5-8620-939e120b7862-config-data\") pod \"4902066e-ebd0-4ea5-8620-939e120b7862\" (UID: \"4902066e-ebd0-4ea5-8620-939e120b7862\") " Feb 27 16:31:10 crc kubenswrapper[4830]: I0227 16:31:10.713281 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4902066e-ebd0-4ea5-8620-939e120b7862-scripts\") pod \"4902066e-ebd0-4ea5-8620-939e120b7862\" (UID: \"4902066e-ebd0-4ea5-8620-939e120b7862\") " Feb 27 16:31:10 crc kubenswrapper[4830]: I0227 16:31:10.720119 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4902066e-ebd0-4ea5-8620-939e120b7862-scripts" (OuterVolumeSpecName: "scripts") pod "4902066e-ebd0-4ea5-8620-939e120b7862" (UID: "4902066e-ebd0-4ea5-8620-939e120b7862"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:31:10 crc kubenswrapper[4830]: I0227 16:31:10.724063 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4902066e-ebd0-4ea5-8620-939e120b7862-kube-api-access-jdgns" (OuterVolumeSpecName: "kube-api-access-jdgns") pod "4902066e-ebd0-4ea5-8620-939e120b7862" (UID: "4902066e-ebd0-4ea5-8620-939e120b7862"). InnerVolumeSpecName "kube-api-access-jdgns". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:31:10 crc kubenswrapper[4830]: I0227 16:31:10.745718 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4902066e-ebd0-4ea5-8620-939e120b7862-config-data" (OuterVolumeSpecName: "config-data") pod "4902066e-ebd0-4ea5-8620-939e120b7862" (UID: "4902066e-ebd0-4ea5-8620-939e120b7862"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:31:10 crc kubenswrapper[4830]: I0227 16:31:10.753029 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4902066e-ebd0-4ea5-8620-939e120b7862-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4902066e-ebd0-4ea5-8620-939e120b7862" (UID: "4902066e-ebd0-4ea5-8620-939e120b7862"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:31:10 crc kubenswrapper[4830]: I0227 16:31:10.815929 4830 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4902066e-ebd0-4ea5-8620-939e120b7862-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 16:31:10 crc kubenswrapper[4830]: I0227 16:31:10.816002 4830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4902066e-ebd0-4ea5-8620-939e120b7862-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 16:31:10 crc kubenswrapper[4830]: I0227 16:31:10.816026 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jdgns\" (UniqueName: \"kubernetes.io/projected/4902066e-ebd0-4ea5-8620-939e120b7862-kube-api-access-jdgns\") on node \"crc\" DevicePath \"\"" Feb 27 16:31:10 crc kubenswrapper[4830]: I0227 16:31:10.816044 4830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4902066e-ebd0-4ea5-8620-939e120b7862-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 16:31:11 crc kubenswrapper[4830]: I0227 16:31:11.165104 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-x7bbz" event={"ID":"4902066e-ebd0-4ea5-8620-939e120b7862","Type":"ContainerDied","Data":"d2ad27794c89cb64479c046dc8f008e32e7d43ab722503d8779cb789817fe98f"} Feb 27 16:31:11 crc kubenswrapper[4830]: I0227 16:31:11.165158 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d2ad27794c89cb64479c046dc8f008e32e7d43ab722503d8779cb789817fe98f" Feb 27 16:31:11 crc kubenswrapper[4830]: I0227 16:31:11.165578 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-x7bbz" Feb 27 16:31:11 crc kubenswrapper[4830]: I0227 16:31:11.295246 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 27 16:31:11 crc kubenswrapper[4830]: E0227 16:31:11.295935 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4902066e-ebd0-4ea5-8620-939e120b7862" containerName="nova-cell0-conductor-db-sync" Feb 27 16:31:11 crc kubenswrapper[4830]: I0227 16:31:11.295992 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="4902066e-ebd0-4ea5-8620-939e120b7862" containerName="nova-cell0-conductor-db-sync" Feb 27 16:31:11 crc kubenswrapper[4830]: I0227 16:31:11.296315 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="4902066e-ebd0-4ea5-8620-939e120b7862" containerName="nova-cell0-conductor-db-sync" Feb 27 16:31:11 crc kubenswrapper[4830]: I0227 16:31:11.297248 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 27 16:31:11 crc kubenswrapper[4830]: I0227 16:31:11.303753 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-ztm2w" Feb 27 16:31:11 crc kubenswrapper[4830]: I0227 16:31:11.303820 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 27 16:31:11 crc kubenswrapper[4830]: I0227 16:31:11.309880 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 27 16:31:11 crc kubenswrapper[4830]: I0227 16:31:11.430417 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mgbg\" (UniqueName: \"kubernetes.io/projected/0bee1ae7-32fb-484d-a81a-47fe31e25d70-kube-api-access-4mgbg\") pod \"nova-cell0-conductor-0\" (UID: \"0bee1ae7-32fb-484d-a81a-47fe31e25d70\") " pod="openstack/nova-cell0-conductor-0" Feb 27 16:31:11 crc 
kubenswrapper[4830]: I0227 16:31:11.431061 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0bee1ae7-32fb-484d-a81a-47fe31e25d70-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"0bee1ae7-32fb-484d-a81a-47fe31e25d70\") " pod="openstack/nova-cell0-conductor-0" Feb 27 16:31:11 crc kubenswrapper[4830]: I0227 16:31:11.431274 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bee1ae7-32fb-484d-a81a-47fe31e25d70-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"0bee1ae7-32fb-484d-a81a-47fe31e25d70\") " pod="openstack/nova-cell0-conductor-0" Feb 27 16:31:11 crc kubenswrapper[4830]: I0227 16:31:11.533653 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0bee1ae7-32fb-484d-a81a-47fe31e25d70-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"0bee1ae7-32fb-484d-a81a-47fe31e25d70\") " pod="openstack/nova-cell0-conductor-0" Feb 27 16:31:11 crc kubenswrapper[4830]: I0227 16:31:11.533765 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bee1ae7-32fb-484d-a81a-47fe31e25d70-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"0bee1ae7-32fb-484d-a81a-47fe31e25d70\") " pod="openstack/nova-cell0-conductor-0" Feb 27 16:31:11 crc kubenswrapper[4830]: I0227 16:31:11.533836 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4mgbg\" (UniqueName: \"kubernetes.io/projected/0bee1ae7-32fb-484d-a81a-47fe31e25d70-kube-api-access-4mgbg\") pod \"nova-cell0-conductor-0\" (UID: \"0bee1ae7-32fb-484d-a81a-47fe31e25d70\") " pod="openstack/nova-cell0-conductor-0" Feb 27 16:31:11 crc kubenswrapper[4830]: I0227 16:31:11.540119 4830 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0bee1ae7-32fb-484d-a81a-47fe31e25d70-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"0bee1ae7-32fb-484d-a81a-47fe31e25d70\") " pod="openstack/nova-cell0-conductor-0" Feb 27 16:31:11 crc kubenswrapper[4830]: I0227 16:31:11.546713 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bee1ae7-32fb-484d-a81a-47fe31e25d70-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"0bee1ae7-32fb-484d-a81a-47fe31e25d70\") " pod="openstack/nova-cell0-conductor-0" Feb 27 16:31:11 crc kubenswrapper[4830]: I0227 16:31:11.557546 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4mgbg\" (UniqueName: \"kubernetes.io/projected/0bee1ae7-32fb-484d-a81a-47fe31e25d70-kube-api-access-4mgbg\") pod \"nova-cell0-conductor-0\" (UID: \"0bee1ae7-32fb-484d-a81a-47fe31e25d70\") " pod="openstack/nova-cell0-conductor-0" Feb 27 16:31:11 crc kubenswrapper[4830]: I0227 16:31:11.628037 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 27 16:31:12 crc kubenswrapper[4830]: I0227 16:31:11.922413 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 27 16:31:12 crc kubenswrapper[4830]: W0227 16:31:11.933367 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0bee1ae7_32fb_484d_a81a_47fe31e25d70.slice/crio-3cb0de386802fcebf81aef3d8ec6687de2ac855669305853b68e07c352ad1bdc WatchSource:0}: Error finding container 3cb0de386802fcebf81aef3d8ec6687de2ac855669305853b68e07c352ad1bdc: Status 404 returned error can't find the container with id 3cb0de386802fcebf81aef3d8ec6687de2ac855669305853b68e07c352ad1bdc Feb 27 16:31:12 crc kubenswrapper[4830]: I0227 16:31:12.188522 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"0bee1ae7-32fb-484d-a81a-47fe31e25d70","Type":"ContainerStarted","Data":"3cb0de386802fcebf81aef3d8ec6687de2ac855669305853b68e07c352ad1bdc"} Feb 27 16:31:13 crc kubenswrapper[4830]: I0227 16:31:13.202853 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"0bee1ae7-32fb-484d-a81a-47fe31e25d70","Type":"ContainerStarted","Data":"c2905f95d9b1bd685977d7be7161ae0adaba055e9615f02fecc0602b6c991b5c"} Feb 27 16:31:13 crc kubenswrapper[4830]: I0227 16:31:13.203465 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Feb 27 16:31:13 crc kubenswrapper[4830]: I0227 16:31:13.238010 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.237973951 podStartE2EDuration="2.237973951s" podCreationTimestamp="2026-02-27 16:31:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 
16:31:13.225431309 +0000 UTC m=+1469.314703802" watchObservedRunningTime="2026-02-27 16:31:13.237973951 +0000 UTC m=+1469.327246454" Feb 27 16:31:15 crc kubenswrapper[4830]: I0227 16:31:15.226380 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 27 16:31:15 crc kubenswrapper[4830]: I0227 16:31:15.242205 4830 generic.go:334] "Generic (PLEG): container finished" podID="76ad9c55-0d81-4a2b-8d91-486c19d80b98" containerID="f61a4d1495ee95d206af98c462051c4abc76a9a448d4c09f4ba9571dd7550d3f" exitCode=0 Feb 27 16:31:15 crc kubenswrapper[4830]: I0227 16:31:15.242273 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"76ad9c55-0d81-4a2b-8d91-486c19d80b98","Type":"ContainerDied","Data":"f61a4d1495ee95d206af98c462051c4abc76a9a448d4c09f4ba9571dd7550d3f"} Feb 27 16:31:15 crc kubenswrapper[4830]: I0227 16:31:15.242320 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 27 16:31:15 crc kubenswrapper[4830]: I0227 16:31:15.242345 4830 scope.go:117] "RemoveContainer" containerID="012a655aadcb4bac701f0aebe2619b6abef41c22c727904f4186a5d0bdd5290f" Feb 27 16:31:15 crc kubenswrapper[4830]: I0227 16:31:15.242327 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"76ad9c55-0d81-4a2b-8d91-486c19d80b98","Type":"ContainerDied","Data":"00704f4a7e93b7cf77fe42a04206568d53292c13d382544a9626f15acca6ea6d"} Feb 27 16:31:15 crc kubenswrapper[4830]: I0227 16:31:15.278312 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76ad9c55-0d81-4a2b-8d91-486c19d80b98-combined-ca-bundle\") pod \"76ad9c55-0d81-4a2b-8d91-486c19d80b98\" (UID: \"76ad9c55-0d81-4a2b-8d91-486c19d80b98\") " Feb 27 16:31:15 crc kubenswrapper[4830]: I0227 16:31:15.278417 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76ad9c55-0d81-4a2b-8d91-486c19d80b98-config-data\") pod \"76ad9c55-0d81-4a2b-8d91-486c19d80b98\" (UID: \"76ad9c55-0d81-4a2b-8d91-486c19d80b98\") " Feb 27 16:31:15 crc kubenswrapper[4830]: I0227 16:31:15.278484 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76ad9c55-0d81-4a2b-8d91-486c19d80b98-scripts\") pod \"76ad9c55-0d81-4a2b-8d91-486c19d80b98\" (UID: \"76ad9c55-0d81-4a2b-8d91-486c19d80b98\") " Feb 27 16:31:15 crc kubenswrapper[4830]: I0227 16:31:15.278576 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/76ad9c55-0d81-4a2b-8d91-486c19d80b98-run-httpd\") pod \"76ad9c55-0d81-4a2b-8d91-486c19d80b98\" (UID: \"76ad9c55-0d81-4a2b-8d91-486c19d80b98\") " Feb 27 16:31:15 crc kubenswrapper[4830]: I0227 16:31:15.278674 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ns5bx\" (UniqueName: \"kubernetes.io/projected/76ad9c55-0d81-4a2b-8d91-486c19d80b98-kube-api-access-ns5bx\") pod \"76ad9c55-0d81-4a2b-8d91-486c19d80b98\" (UID: \"76ad9c55-0d81-4a2b-8d91-486c19d80b98\") " Feb 27 16:31:15 crc kubenswrapper[4830]: I0227 16:31:15.278934 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/76ad9c55-0d81-4a2b-8d91-486c19d80b98-log-httpd\") pod \"76ad9c55-0d81-4a2b-8d91-486c19d80b98\" (UID: \"76ad9c55-0d81-4a2b-8d91-486c19d80b98\") " Feb 27 16:31:15 crc kubenswrapper[4830]: I0227 16:31:15.279057 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/76ad9c55-0d81-4a2b-8d91-486c19d80b98-sg-core-conf-yaml\") pod \"76ad9c55-0d81-4a2b-8d91-486c19d80b98\" (UID: \"76ad9c55-0d81-4a2b-8d91-486c19d80b98\") " Feb 27 16:31:15 crc 
kubenswrapper[4830]: I0227 16:31:15.279316 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/76ad9c55-0d81-4a2b-8d91-486c19d80b98-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "76ad9c55-0d81-4a2b-8d91-486c19d80b98" (UID: "76ad9c55-0d81-4a2b-8d91-486c19d80b98"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 16:31:15 crc kubenswrapper[4830]: I0227 16:31:15.279589 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/76ad9c55-0d81-4a2b-8d91-486c19d80b98-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "76ad9c55-0d81-4a2b-8d91-486c19d80b98" (UID: "76ad9c55-0d81-4a2b-8d91-486c19d80b98"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 16:31:15 crc kubenswrapper[4830]: I0227 16:31:15.279682 4830 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/76ad9c55-0d81-4a2b-8d91-486c19d80b98-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 27 16:31:15 crc kubenswrapper[4830]: I0227 16:31:15.282578 4830 scope.go:117] "RemoveContainer" containerID="2eb955199624369be6edc562cc1caaf56b86d7232b831b0a0d2a7906c4e4a458" Feb 27 16:31:15 crc kubenswrapper[4830]: I0227 16:31:15.285638 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76ad9c55-0d81-4a2b-8d91-486c19d80b98-scripts" (OuterVolumeSpecName: "scripts") pod "76ad9c55-0d81-4a2b-8d91-486c19d80b98" (UID: "76ad9c55-0d81-4a2b-8d91-486c19d80b98"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:31:15 crc kubenswrapper[4830]: I0227 16:31:15.286280 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76ad9c55-0d81-4a2b-8d91-486c19d80b98-kube-api-access-ns5bx" (OuterVolumeSpecName: "kube-api-access-ns5bx") pod "76ad9c55-0d81-4a2b-8d91-486c19d80b98" (UID: "76ad9c55-0d81-4a2b-8d91-486c19d80b98"). InnerVolumeSpecName "kube-api-access-ns5bx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:31:15 crc kubenswrapper[4830]: I0227 16:31:15.336224 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76ad9c55-0d81-4a2b-8d91-486c19d80b98-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "76ad9c55-0d81-4a2b-8d91-486c19d80b98" (UID: "76ad9c55-0d81-4a2b-8d91-486c19d80b98"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:31:15 crc kubenswrapper[4830]: I0227 16:31:15.381806 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ns5bx\" (UniqueName: \"kubernetes.io/projected/76ad9c55-0d81-4a2b-8d91-486c19d80b98-kube-api-access-ns5bx\") on node \"crc\" DevicePath \"\"" Feb 27 16:31:15 crc kubenswrapper[4830]: I0227 16:31:15.381839 4830 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/76ad9c55-0d81-4a2b-8d91-486c19d80b98-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 27 16:31:15 crc kubenswrapper[4830]: I0227 16:31:15.381853 4830 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/76ad9c55-0d81-4a2b-8d91-486c19d80b98-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 27 16:31:15 crc kubenswrapper[4830]: I0227 16:31:15.381864 4830 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76ad9c55-0d81-4a2b-8d91-486c19d80b98-scripts\") on node 
\"crc\" DevicePath \"\"" Feb 27 16:31:15 crc kubenswrapper[4830]: I0227 16:31:15.381974 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76ad9c55-0d81-4a2b-8d91-486c19d80b98-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "76ad9c55-0d81-4a2b-8d91-486c19d80b98" (UID: "76ad9c55-0d81-4a2b-8d91-486c19d80b98"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:31:15 crc kubenswrapper[4830]: I0227 16:31:15.385396 4830 scope.go:117] "RemoveContainer" containerID="e552f36126944b7e337ca983460d6896d461f4583fb8a603092c58de0b16f327" Feb 27 16:31:15 crc kubenswrapper[4830]: I0227 16:31:15.401682 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76ad9c55-0d81-4a2b-8d91-486c19d80b98-config-data" (OuterVolumeSpecName: "config-data") pod "76ad9c55-0d81-4a2b-8d91-486c19d80b98" (UID: "76ad9c55-0d81-4a2b-8d91-486c19d80b98"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:31:15 crc kubenswrapper[4830]: I0227 16:31:15.415360 4830 scope.go:117] "RemoveContainer" containerID="f61a4d1495ee95d206af98c462051c4abc76a9a448d4c09f4ba9571dd7550d3f" Feb 27 16:31:15 crc kubenswrapper[4830]: I0227 16:31:15.449515 4830 scope.go:117] "RemoveContainer" containerID="012a655aadcb4bac701f0aebe2619b6abef41c22c727904f4186a5d0bdd5290f" Feb 27 16:31:15 crc kubenswrapper[4830]: E0227 16:31:15.455602 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"012a655aadcb4bac701f0aebe2619b6abef41c22c727904f4186a5d0bdd5290f\": container with ID starting with 012a655aadcb4bac701f0aebe2619b6abef41c22c727904f4186a5d0bdd5290f not found: ID does not exist" containerID="012a655aadcb4bac701f0aebe2619b6abef41c22c727904f4186a5d0bdd5290f" Feb 27 16:31:15 crc kubenswrapper[4830]: I0227 16:31:15.455653 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"012a655aadcb4bac701f0aebe2619b6abef41c22c727904f4186a5d0bdd5290f"} err="failed to get container status \"012a655aadcb4bac701f0aebe2619b6abef41c22c727904f4186a5d0bdd5290f\": rpc error: code = NotFound desc = could not find container \"012a655aadcb4bac701f0aebe2619b6abef41c22c727904f4186a5d0bdd5290f\": container with ID starting with 012a655aadcb4bac701f0aebe2619b6abef41c22c727904f4186a5d0bdd5290f not found: ID does not exist" Feb 27 16:31:15 crc kubenswrapper[4830]: I0227 16:31:15.455684 4830 scope.go:117] "RemoveContainer" containerID="2eb955199624369be6edc562cc1caaf56b86d7232b831b0a0d2a7906c4e4a458" Feb 27 16:31:15 crc kubenswrapper[4830]: E0227 16:31:15.456151 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2eb955199624369be6edc562cc1caaf56b86d7232b831b0a0d2a7906c4e4a458\": container with ID starting with 
2eb955199624369be6edc562cc1caaf56b86d7232b831b0a0d2a7906c4e4a458 not found: ID does not exist" containerID="2eb955199624369be6edc562cc1caaf56b86d7232b831b0a0d2a7906c4e4a458" Feb 27 16:31:15 crc kubenswrapper[4830]: I0227 16:31:15.456218 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2eb955199624369be6edc562cc1caaf56b86d7232b831b0a0d2a7906c4e4a458"} err="failed to get container status \"2eb955199624369be6edc562cc1caaf56b86d7232b831b0a0d2a7906c4e4a458\": rpc error: code = NotFound desc = could not find container \"2eb955199624369be6edc562cc1caaf56b86d7232b831b0a0d2a7906c4e4a458\": container with ID starting with 2eb955199624369be6edc562cc1caaf56b86d7232b831b0a0d2a7906c4e4a458 not found: ID does not exist" Feb 27 16:31:15 crc kubenswrapper[4830]: I0227 16:31:15.456263 4830 scope.go:117] "RemoveContainer" containerID="e552f36126944b7e337ca983460d6896d461f4583fb8a603092c58de0b16f327" Feb 27 16:31:15 crc kubenswrapper[4830]: E0227 16:31:15.456689 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e552f36126944b7e337ca983460d6896d461f4583fb8a603092c58de0b16f327\": container with ID starting with e552f36126944b7e337ca983460d6896d461f4583fb8a603092c58de0b16f327 not found: ID does not exist" containerID="e552f36126944b7e337ca983460d6896d461f4583fb8a603092c58de0b16f327" Feb 27 16:31:15 crc kubenswrapper[4830]: I0227 16:31:15.456721 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e552f36126944b7e337ca983460d6896d461f4583fb8a603092c58de0b16f327"} err="failed to get container status \"e552f36126944b7e337ca983460d6896d461f4583fb8a603092c58de0b16f327\": rpc error: code = NotFound desc = could not find container \"e552f36126944b7e337ca983460d6896d461f4583fb8a603092c58de0b16f327\": container with ID starting with e552f36126944b7e337ca983460d6896d461f4583fb8a603092c58de0b16f327 not found: ID does not 
exist" Feb 27 16:31:15 crc kubenswrapper[4830]: I0227 16:31:15.456741 4830 scope.go:117] "RemoveContainer" containerID="f61a4d1495ee95d206af98c462051c4abc76a9a448d4c09f4ba9571dd7550d3f" Feb 27 16:31:15 crc kubenswrapper[4830]: E0227 16:31:15.457074 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f61a4d1495ee95d206af98c462051c4abc76a9a448d4c09f4ba9571dd7550d3f\": container with ID starting with f61a4d1495ee95d206af98c462051c4abc76a9a448d4c09f4ba9571dd7550d3f not found: ID does not exist" containerID="f61a4d1495ee95d206af98c462051c4abc76a9a448d4c09f4ba9571dd7550d3f" Feb 27 16:31:15 crc kubenswrapper[4830]: I0227 16:31:15.457117 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f61a4d1495ee95d206af98c462051c4abc76a9a448d4c09f4ba9571dd7550d3f"} err="failed to get container status \"f61a4d1495ee95d206af98c462051c4abc76a9a448d4c09f4ba9571dd7550d3f\": rpc error: code = NotFound desc = could not find container \"f61a4d1495ee95d206af98c462051c4abc76a9a448d4c09f4ba9571dd7550d3f\": container with ID starting with f61a4d1495ee95d206af98c462051c4abc76a9a448d4c09f4ba9571dd7550d3f not found: ID does not exist" Feb 27 16:31:15 crc kubenswrapper[4830]: I0227 16:31:15.483265 4830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76ad9c55-0d81-4a2b-8d91-486c19d80b98-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 16:31:15 crc kubenswrapper[4830]: I0227 16:31:15.483299 4830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76ad9c55-0d81-4a2b-8d91-486c19d80b98-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 16:31:15 crc kubenswrapper[4830]: I0227 16:31:15.582964 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 27 16:31:15 crc kubenswrapper[4830]: I0227 16:31:15.597506 4830 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 27 16:31:15 crc kubenswrapper[4830]: I0227 16:31:15.614452 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 27 16:31:15 crc kubenswrapper[4830]: E0227 16:31:15.615092 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76ad9c55-0d81-4a2b-8d91-486c19d80b98" containerName="proxy-httpd" Feb 27 16:31:15 crc kubenswrapper[4830]: I0227 16:31:15.615123 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="76ad9c55-0d81-4a2b-8d91-486c19d80b98" containerName="proxy-httpd" Feb 27 16:31:15 crc kubenswrapper[4830]: E0227 16:31:15.615153 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76ad9c55-0d81-4a2b-8d91-486c19d80b98" containerName="sg-core" Feb 27 16:31:15 crc kubenswrapper[4830]: I0227 16:31:15.615162 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="76ad9c55-0d81-4a2b-8d91-486c19d80b98" containerName="sg-core" Feb 27 16:31:15 crc kubenswrapper[4830]: E0227 16:31:15.615194 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76ad9c55-0d81-4a2b-8d91-486c19d80b98" containerName="ceilometer-central-agent" Feb 27 16:31:15 crc kubenswrapper[4830]: I0227 16:31:15.615206 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="76ad9c55-0d81-4a2b-8d91-486c19d80b98" containerName="ceilometer-central-agent" Feb 27 16:31:15 crc kubenswrapper[4830]: E0227 16:31:15.615218 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76ad9c55-0d81-4a2b-8d91-486c19d80b98" containerName="ceilometer-notification-agent" Feb 27 16:31:15 crc kubenswrapper[4830]: I0227 16:31:15.615227 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="76ad9c55-0d81-4a2b-8d91-486c19d80b98" containerName="ceilometer-notification-agent" Feb 27 16:31:15 crc kubenswrapper[4830]: I0227 16:31:15.615473 4830 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="76ad9c55-0d81-4a2b-8d91-486c19d80b98" containerName="proxy-httpd" Feb 27 16:31:15 crc kubenswrapper[4830]: I0227 16:31:15.615497 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="76ad9c55-0d81-4a2b-8d91-486c19d80b98" containerName="ceilometer-notification-agent" Feb 27 16:31:15 crc kubenswrapper[4830]: I0227 16:31:15.615511 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="76ad9c55-0d81-4a2b-8d91-486c19d80b98" containerName="sg-core" Feb 27 16:31:15 crc kubenswrapper[4830]: I0227 16:31:15.615526 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="76ad9c55-0d81-4a2b-8d91-486c19d80b98" containerName="ceilometer-central-agent" Feb 27 16:31:15 crc kubenswrapper[4830]: I0227 16:31:15.617885 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 27 16:31:15 crc kubenswrapper[4830]: I0227 16:31:15.621196 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 27 16:31:15 crc kubenswrapper[4830]: I0227 16:31:15.621196 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 27 16:31:15 crc kubenswrapper[4830]: I0227 16:31:15.626128 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 27 16:31:15 crc kubenswrapper[4830]: I0227 16:31:15.689212 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/076dd25a-67a2-4121-84a9-4e994d1542ce-log-httpd\") pod \"ceilometer-0\" (UID: \"076dd25a-67a2-4121-84a9-4e994d1542ce\") " pod="openstack/ceilometer-0" Feb 27 16:31:15 crc kubenswrapper[4830]: I0227 16:31:15.689436 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxpqs\" (UniqueName: \"kubernetes.io/projected/076dd25a-67a2-4121-84a9-4e994d1542ce-kube-api-access-pxpqs\") 
pod \"ceilometer-0\" (UID: \"076dd25a-67a2-4121-84a9-4e994d1542ce\") " pod="openstack/ceilometer-0" Feb 27 16:31:15 crc kubenswrapper[4830]: I0227 16:31:15.689557 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/076dd25a-67a2-4121-84a9-4e994d1542ce-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"076dd25a-67a2-4121-84a9-4e994d1542ce\") " pod="openstack/ceilometer-0" Feb 27 16:31:15 crc kubenswrapper[4830]: I0227 16:31:15.689734 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/076dd25a-67a2-4121-84a9-4e994d1542ce-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"076dd25a-67a2-4121-84a9-4e994d1542ce\") " pod="openstack/ceilometer-0" Feb 27 16:31:15 crc kubenswrapper[4830]: I0227 16:31:15.690253 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/076dd25a-67a2-4121-84a9-4e994d1542ce-config-data\") pod \"ceilometer-0\" (UID: \"076dd25a-67a2-4121-84a9-4e994d1542ce\") " pod="openstack/ceilometer-0" Feb 27 16:31:15 crc kubenswrapper[4830]: I0227 16:31:15.690386 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/076dd25a-67a2-4121-84a9-4e994d1542ce-scripts\") pod \"ceilometer-0\" (UID: \"076dd25a-67a2-4121-84a9-4e994d1542ce\") " pod="openstack/ceilometer-0" Feb 27 16:31:15 crc kubenswrapper[4830]: I0227 16:31:15.690434 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/076dd25a-67a2-4121-84a9-4e994d1542ce-run-httpd\") pod \"ceilometer-0\" (UID: \"076dd25a-67a2-4121-84a9-4e994d1542ce\") " pod="openstack/ceilometer-0" Feb 27 16:31:15 crc kubenswrapper[4830]: I0227 
16:31:15.792412 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/076dd25a-67a2-4121-84a9-4e994d1542ce-config-data\") pod \"ceilometer-0\" (UID: \"076dd25a-67a2-4121-84a9-4e994d1542ce\") " pod="openstack/ceilometer-0" Feb 27 16:31:15 crc kubenswrapper[4830]: I0227 16:31:15.792471 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/076dd25a-67a2-4121-84a9-4e994d1542ce-scripts\") pod \"ceilometer-0\" (UID: \"076dd25a-67a2-4121-84a9-4e994d1542ce\") " pod="openstack/ceilometer-0" Feb 27 16:31:15 crc kubenswrapper[4830]: I0227 16:31:15.792496 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/076dd25a-67a2-4121-84a9-4e994d1542ce-run-httpd\") pod \"ceilometer-0\" (UID: \"076dd25a-67a2-4121-84a9-4e994d1542ce\") " pod="openstack/ceilometer-0" Feb 27 16:31:15 crc kubenswrapper[4830]: I0227 16:31:15.792541 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/076dd25a-67a2-4121-84a9-4e994d1542ce-log-httpd\") pod \"ceilometer-0\" (UID: \"076dd25a-67a2-4121-84a9-4e994d1542ce\") " pod="openstack/ceilometer-0" Feb 27 16:31:15 crc kubenswrapper[4830]: I0227 16:31:15.792583 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pxpqs\" (UniqueName: \"kubernetes.io/projected/076dd25a-67a2-4121-84a9-4e994d1542ce-kube-api-access-pxpqs\") pod \"ceilometer-0\" (UID: \"076dd25a-67a2-4121-84a9-4e994d1542ce\") " pod="openstack/ceilometer-0" Feb 27 16:31:15 crc kubenswrapper[4830]: I0227 16:31:15.792630 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/076dd25a-67a2-4121-84a9-4e994d1542ce-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"076dd25a-67a2-4121-84a9-4e994d1542ce\") " pod="openstack/ceilometer-0" Feb 27 16:31:15 crc kubenswrapper[4830]: I0227 16:31:15.792707 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/076dd25a-67a2-4121-84a9-4e994d1542ce-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"076dd25a-67a2-4121-84a9-4e994d1542ce\") " pod="openstack/ceilometer-0" Feb 27 16:31:15 crc kubenswrapper[4830]: I0227 16:31:15.793344 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/076dd25a-67a2-4121-84a9-4e994d1542ce-run-httpd\") pod \"ceilometer-0\" (UID: \"076dd25a-67a2-4121-84a9-4e994d1542ce\") " pod="openstack/ceilometer-0" Feb 27 16:31:15 crc kubenswrapper[4830]: I0227 16:31:15.793473 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/076dd25a-67a2-4121-84a9-4e994d1542ce-log-httpd\") pod \"ceilometer-0\" (UID: \"076dd25a-67a2-4121-84a9-4e994d1542ce\") " pod="openstack/ceilometer-0" Feb 27 16:31:15 crc kubenswrapper[4830]: I0227 16:31:15.800500 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/076dd25a-67a2-4121-84a9-4e994d1542ce-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"076dd25a-67a2-4121-84a9-4e994d1542ce\") " pod="openstack/ceilometer-0" Feb 27 16:31:15 crc kubenswrapper[4830]: I0227 16:31:15.800874 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/076dd25a-67a2-4121-84a9-4e994d1542ce-config-data\") pod \"ceilometer-0\" (UID: \"076dd25a-67a2-4121-84a9-4e994d1542ce\") " pod="openstack/ceilometer-0" Feb 27 16:31:15 crc kubenswrapper[4830]: I0227 16:31:15.801377 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/076dd25a-67a2-4121-84a9-4e994d1542ce-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"076dd25a-67a2-4121-84a9-4e994d1542ce\") " pod="openstack/ceilometer-0" Feb 27 16:31:15 crc kubenswrapper[4830]: I0227 16:31:15.802483 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/076dd25a-67a2-4121-84a9-4e994d1542ce-scripts\") pod \"ceilometer-0\" (UID: \"076dd25a-67a2-4121-84a9-4e994d1542ce\") " pod="openstack/ceilometer-0" Feb 27 16:31:15 crc kubenswrapper[4830]: I0227 16:31:15.822599 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pxpqs\" (UniqueName: \"kubernetes.io/projected/076dd25a-67a2-4121-84a9-4e994d1542ce-kube-api-access-pxpqs\") pod \"ceilometer-0\" (UID: \"076dd25a-67a2-4121-84a9-4e994d1542ce\") " pod="openstack/ceilometer-0" Feb 27 16:31:15 crc kubenswrapper[4830]: I0227 16:31:15.934669 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 27 16:31:16 crc kubenswrapper[4830]: I0227 16:31:16.427708 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 27 16:31:16 crc kubenswrapper[4830]: I0227 16:31:16.780001 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76ad9c55-0d81-4a2b-8d91-486c19d80b98" path="/var/lib/kubelet/pods/76ad9c55-0d81-4a2b-8d91-486c19d80b98/volumes" Feb 27 16:31:17 crc kubenswrapper[4830]: I0227 16:31:17.263898 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"076dd25a-67a2-4121-84a9-4e994d1542ce","Type":"ContainerStarted","Data":"22741c5d01cb17e65dcdbeffac4d0598253449e5bff515a084bb2ef319a9e758"} Feb 27 16:31:17 crc kubenswrapper[4830]: I0227 16:31:17.263976 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"076dd25a-67a2-4121-84a9-4e994d1542ce","Type":"ContainerStarted","Data":"493e75bff298a75d951a121b9e341b6a285c863941272f57bcb65dd611477c77"} Feb 27 16:31:18 crc kubenswrapper[4830]: I0227 16:31:18.275409 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"076dd25a-67a2-4121-84a9-4e994d1542ce","Type":"ContainerStarted","Data":"ffc7ddaa779b4943f7503c86cac6715511b054f8dada907f6fc1ad5026b8db9f"} Feb 27 16:31:19 crc kubenswrapper[4830]: I0227 16:31:19.287655 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"076dd25a-67a2-4121-84a9-4e994d1542ce","Type":"ContainerStarted","Data":"21717c499a9f7e20a1678a6597bd34eb02984002201d2d1d998eb40e3e49be34"} Feb 27 16:31:21 crc kubenswrapper[4830]: I0227 16:31:21.335396 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"076dd25a-67a2-4121-84a9-4e994d1542ce","Type":"ContainerStarted","Data":"0ea976097f7afb21fe4680f75bc7769e2ba1a39c27a7ebe2d079ca5e4a75d4db"} Feb 27 16:31:21 crc kubenswrapper[4830]: I0227 16:31:21.336636 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 27 16:31:21 crc kubenswrapper[4830]: I0227 16:31:21.385795 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.544575603 podStartE2EDuration="6.38576051s" podCreationTimestamp="2026-02-27 16:31:15 +0000 UTC" firstStartedPulling="2026-02-27 16:31:16.443875428 +0000 UTC m=+1472.533147891" lastFinishedPulling="2026-02-27 16:31:20.285060335 +0000 UTC m=+1476.374332798" observedRunningTime="2026-02-27 16:31:21.37951581 +0000 UTC m=+1477.468788303" watchObservedRunningTime="2026-02-27 16:31:21.38576051 +0000 UTC m=+1477.475033013" Feb 27 16:31:21 crc kubenswrapper[4830]: I0227 16:31:21.674919 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Feb 27 
16:31:22 crc kubenswrapper[4830]: I0227 16:31:22.352909 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-8mnsh"] Feb 27 16:31:22 crc kubenswrapper[4830]: I0227 16:31:22.354929 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-8mnsh" Feb 27 16:31:22 crc kubenswrapper[4830]: I0227 16:31:22.357690 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Feb 27 16:31:22 crc kubenswrapper[4830]: I0227 16:31:22.367175 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-8mnsh"] Feb 27 16:31:22 crc kubenswrapper[4830]: I0227 16:31:22.373752 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Feb 27 16:31:22 crc kubenswrapper[4830]: I0227 16:31:22.447752 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zb56\" (UniqueName: \"kubernetes.io/projected/5cf85768-fd08-43b7-a8bf-a2738e493b22-kube-api-access-8zb56\") pod \"nova-cell0-cell-mapping-8mnsh\" (UID: \"5cf85768-fd08-43b7-a8bf-a2738e493b22\") " pod="openstack/nova-cell0-cell-mapping-8mnsh" Feb 27 16:31:22 crc kubenswrapper[4830]: I0227 16:31:22.448261 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cf85768-fd08-43b7-a8bf-a2738e493b22-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-8mnsh\" (UID: \"5cf85768-fd08-43b7-a8bf-a2738e493b22\") " pod="openstack/nova-cell0-cell-mapping-8mnsh" Feb 27 16:31:22 crc kubenswrapper[4830]: I0227 16:31:22.448438 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5cf85768-fd08-43b7-a8bf-a2738e493b22-scripts\") pod \"nova-cell0-cell-mapping-8mnsh\" (UID: 
\"5cf85768-fd08-43b7-a8bf-a2738e493b22\") " pod="openstack/nova-cell0-cell-mapping-8mnsh" Feb 27 16:31:22 crc kubenswrapper[4830]: I0227 16:31:22.448489 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5cf85768-fd08-43b7-a8bf-a2738e493b22-config-data\") pod \"nova-cell0-cell-mapping-8mnsh\" (UID: \"5cf85768-fd08-43b7-a8bf-a2738e493b22\") " pod="openstack/nova-cell0-cell-mapping-8mnsh" Feb 27 16:31:22 crc kubenswrapper[4830]: I0227 16:31:22.502024 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 27 16:31:22 crc kubenswrapper[4830]: I0227 16:31:22.504012 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 27 16:31:22 crc kubenswrapper[4830]: I0227 16:31:22.507252 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 27 16:31:22 crc kubenswrapper[4830]: I0227 16:31:22.529426 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 27 16:31:22 crc kubenswrapper[4830]: I0227 16:31:22.551042 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5cf85768-fd08-43b7-a8bf-a2738e493b22-scripts\") pod \"nova-cell0-cell-mapping-8mnsh\" (UID: \"5cf85768-fd08-43b7-a8bf-a2738e493b22\") " pod="openstack/nova-cell0-cell-mapping-8mnsh" Feb 27 16:31:22 crc kubenswrapper[4830]: I0227 16:31:22.551086 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5cf85768-fd08-43b7-a8bf-a2738e493b22-config-data\") pod \"nova-cell0-cell-mapping-8mnsh\" (UID: \"5cf85768-fd08-43b7-a8bf-a2738e493b22\") " pod="openstack/nova-cell0-cell-mapping-8mnsh" Feb 27 16:31:22 crc kubenswrapper[4830]: I0227 16:31:22.551149 4830 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/050ac6bf-ac1c-406d-af59-2259ceb05ff8-config-data\") pod \"nova-api-0\" (UID: \"050ac6bf-ac1c-406d-af59-2259ceb05ff8\") " pod="openstack/nova-api-0" Feb 27 16:31:22 crc kubenswrapper[4830]: I0227 16:31:22.551235 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/050ac6bf-ac1c-406d-af59-2259ceb05ff8-logs\") pod \"nova-api-0\" (UID: \"050ac6bf-ac1c-406d-af59-2259ceb05ff8\") " pod="openstack/nova-api-0" Feb 27 16:31:22 crc kubenswrapper[4830]: I0227 16:31:22.551258 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8zb56\" (UniqueName: \"kubernetes.io/projected/5cf85768-fd08-43b7-a8bf-a2738e493b22-kube-api-access-8zb56\") pod \"nova-cell0-cell-mapping-8mnsh\" (UID: \"5cf85768-fd08-43b7-a8bf-a2738e493b22\") " pod="openstack/nova-cell0-cell-mapping-8mnsh" Feb 27 16:31:22 crc kubenswrapper[4830]: I0227 16:31:22.551280 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/050ac6bf-ac1c-406d-af59-2259ceb05ff8-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"050ac6bf-ac1c-406d-af59-2259ceb05ff8\") " pod="openstack/nova-api-0" Feb 27 16:31:22 crc kubenswrapper[4830]: I0227 16:31:22.551307 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cf85768-fd08-43b7-a8bf-a2738e493b22-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-8mnsh\" (UID: \"5cf85768-fd08-43b7-a8bf-a2738e493b22\") " pod="openstack/nova-cell0-cell-mapping-8mnsh" Feb 27 16:31:22 crc kubenswrapper[4830]: I0227 16:31:22.551325 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nj288\" 
(UniqueName: \"kubernetes.io/projected/050ac6bf-ac1c-406d-af59-2259ceb05ff8-kube-api-access-nj288\") pod \"nova-api-0\" (UID: \"050ac6bf-ac1c-406d-af59-2259ceb05ff8\") " pod="openstack/nova-api-0" Feb 27 16:31:22 crc kubenswrapper[4830]: I0227 16:31:22.572990 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5cf85768-fd08-43b7-a8bf-a2738e493b22-scripts\") pod \"nova-cell0-cell-mapping-8mnsh\" (UID: \"5cf85768-fd08-43b7-a8bf-a2738e493b22\") " pod="openstack/nova-cell0-cell-mapping-8mnsh" Feb 27 16:31:22 crc kubenswrapper[4830]: I0227 16:31:22.582718 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cf85768-fd08-43b7-a8bf-a2738e493b22-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-8mnsh\" (UID: \"5cf85768-fd08-43b7-a8bf-a2738e493b22\") " pod="openstack/nova-cell0-cell-mapping-8mnsh" Feb 27 16:31:22 crc kubenswrapper[4830]: I0227 16:31:22.585882 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5cf85768-fd08-43b7-a8bf-a2738e493b22-config-data\") pod \"nova-cell0-cell-mapping-8mnsh\" (UID: \"5cf85768-fd08-43b7-a8bf-a2738e493b22\") " pod="openstack/nova-cell0-cell-mapping-8mnsh" Feb 27 16:31:22 crc kubenswrapper[4830]: I0227 16:31:22.608872 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8zb56\" (UniqueName: \"kubernetes.io/projected/5cf85768-fd08-43b7-a8bf-a2738e493b22-kube-api-access-8zb56\") pod \"nova-cell0-cell-mapping-8mnsh\" (UID: \"5cf85768-fd08-43b7-a8bf-a2738e493b22\") " pod="openstack/nova-cell0-cell-mapping-8mnsh" Feb 27 16:31:22 crc kubenswrapper[4830]: I0227 16:31:22.608965 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 27 16:31:22 crc kubenswrapper[4830]: I0227 16:31:22.610182 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 27 16:31:22 crc kubenswrapper[4830]: I0227 16:31:22.622750 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 27 16:31:22 crc kubenswrapper[4830]: I0227 16:31:22.652933 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nj288\" (UniqueName: \"kubernetes.io/projected/050ac6bf-ac1c-406d-af59-2259ceb05ff8-kube-api-access-nj288\") pod \"nova-api-0\" (UID: \"050ac6bf-ac1c-406d-af59-2259ceb05ff8\") " pod="openstack/nova-api-0" Feb 27 16:31:22 crc kubenswrapper[4830]: I0227 16:31:22.653026 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50ea60ad-3437-435c-ba9c-462adae597a2-config-data\") pod \"nova-scheduler-0\" (UID: \"50ea60ad-3437-435c-ba9c-462adae597a2\") " pod="openstack/nova-scheduler-0" Feb 27 16:31:22 crc kubenswrapper[4830]: I0227 16:31:22.653048 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwxq2\" (UniqueName: \"kubernetes.io/projected/50ea60ad-3437-435c-ba9c-462adae597a2-kube-api-access-dwxq2\") pod \"nova-scheduler-0\" (UID: \"50ea60ad-3437-435c-ba9c-462adae597a2\") " pod="openstack/nova-scheduler-0" Feb 27 16:31:22 crc kubenswrapper[4830]: I0227 16:31:22.653081 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/050ac6bf-ac1c-406d-af59-2259ceb05ff8-config-data\") pod \"nova-api-0\" (UID: \"050ac6bf-ac1c-406d-af59-2259ceb05ff8\") " pod="openstack/nova-api-0" Feb 27 16:31:22 crc kubenswrapper[4830]: I0227 16:31:22.653115 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50ea60ad-3437-435c-ba9c-462adae597a2-combined-ca-bundle\") pod 
\"nova-scheduler-0\" (UID: \"50ea60ad-3437-435c-ba9c-462adae597a2\") " pod="openstack/nova-scheduler-0" Feb 27 16:31:22 crc kubenswrapper[4830]: I0227 16:31:22.653181 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/050ac6bf-ac1c-406d-af59-2259ceb05ff8-logs\") pod \"nova-api-0\" (UID: \"050ac6bf-ac1c-406d-af59-2259ceb05ff8\") " pod="openstack/nova-api-0" Feb 27 16:31:22 crc kubenswrapper[4830]: I0227 16:31:22.653207 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/050ac6bf-ac1c-406d-af59-2259ceb05ff8-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"050ac6bf-ac1c-406d-af59-2259ceb05ff8\") " pod="openstack/nova-api-0" Feb 27 16:31:22 crc kubenswrapper[4830]: I0227 16:31:22.654812 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/050ac6bf-ac1c-406d-af59-2259ceb05ff8-logs\") pod \"nova-api-0\" (UID: \"050ac6bf-ac1c-406d-af59-2259ceb05ff8\") " pod="openstack/nova-api-0" Feb 27 16:31:22 crc kubenswrapper[4830]: I0227 16:31:22.659778 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 27 16:31:22 crc kubenswrapper[4830]: I0227 16:31:22.661331 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 27 16:31:22 crc kubenswrapper[4830]: I0227 16:31:22.667622 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/050ac6bf-ac1c-406d-af59-2259ceb05ff8-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"050ac6bf-ac1c-406d-af59-2259ceb05ff8\") " pod="openstack/nova-api-0" Feb 27 16:31:22 crc kubenswrapper[4830]: I0227 16:31:22.674439 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 27 16:31:22 crc kubenswrapper[4830]: I0227 16:31:22.677442 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/050ac6bf-ac1c-406d-af59-2259ceb05ff8-config-data\") pod \"nova-api-0\" (UID: \"050ac6bf-ac1c-406d-af59-2259ceb05ff8\") " pod="openstack/nova-api-0" Feb 27 16:31:22 crc kubenswrapper[4830]: I0227 16:31:22.688205 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-8mnsh" Feb 27 16:31:22 crc kubenswrapper[4830]: I0227 16:31:22.692892 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nj288\" (UniqueName: \"kubernetes.io/projected/050ac6bf-ac1c-406d-af59-2259ceb05ff8-kube-api-access-nj288\") pod \"nova-api-0\" (UID: \"050ac6bf-ac1c-406d-af59-2259ceb05ff8\") " pod="openstack/nova-api-0" Feb 27 16:31:22 crc kubenswrapper[4830]: I0227 16:31:22.730570 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 27 16:31:22 crc kubenswrapper[4830]: I0227 16:31:22.758010 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c04f58d0-dbc3-46f2-bea2-96a29fc38dd1-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c04f58d0-dbc3-46f2-bea2-96a29fc38dd1\") " pod="openstack/nova-metadata-0" Feb 27 16:31:22 crc kubenswrapper[4830]: I0227 16:31:22.758053 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50ea60ad-3437-435c-ba9c-462adae597a2-config-data\") pod \"nova-scheduler-0\" (UID: \"50ea60ad-3437-435c-ba9c-462adae597a2\") " pod="openstack/nova-scheduler-0" Feb 27 16:31:22 crc kubenswrapper[4830]: I0227 16:31:22.758086 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c04f58d0-dbc3-46f2-bea2-96a29fc38dd1-logs\") pod \"nova-metadata-0\" (UID: \"c04f58d0-dbc3-46f2-bea2-96a29fc38dd1\") " pod="openstack/nova-metadata-0" Feb 27 16:31:22 crc kubenswrapper[4830]: I0227 16:31:22.758108 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwxq2\" (UniqueName: \"kubernetes.io/projected/50ea60ad-3437-435c-ba9c-462adae597a2-kube-api-access-dwxq2\") pod \"nova-scheduler-0\" (UID: 
\"50ea60ad-3437-435c-ba9c-462adae597a2\") " pod="openstack/nova-scheduler-0" Feb 27 16:31:22 crc kubenswrapper[4830]: I0227 16:31:22.758158 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50ea60ad-3437-435c-ba9c-462adae597a2-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"50ea60ad-3437-435c-ba9c-462adae597a2\") " pod="openstack/nova-scheduler-0" Feb 27 16:31:22 crc kubenswrapper[4830]: I0227 16:31:22.758194 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c04f58d0-dbc3-46f2-bea2-96a29fc38dd1-config-data\") pod \"nova-metadata-0\" (UID: \"c04f58d0-dbc3-46f2-bea2-96a29fc38dd1\") " pod="openstack/nova-metadata-0" Feb 27 16:31:22 crc kubenswrapper[4830]: I0227 16:31:22.758244 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7nx8\" (UniqueName: \"kubernetes.io/projected/c04f58d0-dbc3-46f2-bea2-96a29fc38dd1-kube-api-access-z7nx8\") pod \"nova-metadata-0\" (UID: \"c04f58d0-dbc3-46f2-bea2-96a29fc38dd1\") " pod="openstack/nova-metadata-0" Feb 27 16:31:22 crc kubenswrapper[4830]: I0227 16:31:22.769763 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50ea60ad-3437-435c-ba9c-462adae597a2-config-data\") pod \"nova-scheduler-0\" (UID: \"50ea60ad-3437-435c-ba9c-462adae597a2\") " pod="openstack/nova-scheduler-0" Feb 27 16:31:22 crc kubenswrapper[4830]: I0227 16:31:22.807397 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 27 16:31:22 crc kubenswrapper[4830]: I0227 16:31:22.813670 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50ea60ad-3437-435c-ba9c-462adae597a2-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: 
\"50ea60ad-3437-435c-ba9c-462adae597a2\") " pod="openstack/nova-scheduler-0" Feb 27 16:31:22 crc kubenswrapper[4830]: I0227 16:31:22.822763 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 27 16:31:22 crc kubenswrapper[4830]: I0227 16:31:22.835282 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwxq2\" (UniqueName: \"kubernetes.io/projected/50ea60ad-3437-435c-ba9c-462adae597a2-kube-api-access-dwxq2\") pod \"nova-scheduler-0\" (UID: \"50ea60ad-3437-435c-ba9c-462adae597a2\") " pod="openstack/nova-scheduler-0" Feb 27 16:31:22 crc kubenswrapper[4830]: I0227 16:31:22.863332 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c04f58d0-dbc3-46f2-bea2-96a29fc38dd1-config-data\") pod \"nova-metadata-0\" (UID: \"c04f58d0-dbc3-46f2-bea2-96a29fc38dd1\") " pod="openstack/nova-metadata-0" Feb 27 16:31:22 crc kubenswrapper[4830]: I0227 16:31:22.863785 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z7nx8\" (UniqueName: \"kubernetes.io/projected/c04f58d0-dbc3-46f2-bea2-96a29fc38dd1-kube-api-access-z7nx8\") pod \"nova-metadata-0\" (UID: \"c04f58d0-dbc3-46f2-bea2-96a29fc38dd1\") " pod="openstack/nova-metadata-0" Feb 27 16:31:22 crc kubenswrapper[4830]: I0227 16:31:22.863843 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c04f58d0-dbc3-46f2-bea2-96a29fc38dd1-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c04f58d0-dbc3-46f2-bea2-96a29fc38dd1\") " pod="openstack/nova-metadata-0" Feb 27 16:31:22 crc kubenswrapper[4830]: I0227 16:31:22.863866 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c04f58d0-dbc3-46f2-bea2-96a29fc38dd1-logs\") pod \"nova-metadata-0\" (UID: 
\"c04f58d0-dbc3-46f2-bea2-96a29fc38dd1\") " pod="openstack/nova-metadata-0" Feb 27 16:31:22 crc kubenswrapper[4830]: I0227 16:31:22.876805 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 27 16:31:22 crc kubenswrapper[4830]: I0227 16:31:22.892720 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c04f58d0-dbc3-46f2-bea2-96a29fc38dd1-logs\") pod \"nova-metadata-0\" (UID: \"c04f58d0-dbc3-46f2-bea2-96a29fc38dd1\") " pod="openstack/nova-metadata-0" Feb 27 16:31:22 crc kubenswrapper[4830]: I0227 16:31:22.895836 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c04f58d0-dbc3-46f2-bea2-96a29fc38dd1-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c04f58d0-dbc3-46f2-bea2-96a29fc38dd1\") " pod="openstack/nova-metadata-0" Feb 27 16:31:22 crc kubenswrapper[4830]: I0227 16:31:22.898820 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c04f58d0-dbc3-46f2-bea2-96a29fc38dd1-config-data\") pod \"nova-metadata-0\" (UID: \"c04f58d0-dbc3-46f2-bea2-96a29fc38dd1\") " pod="openstack/nova-metadata-0" Feb 27 16:31:22 crc kubenswrapper[4830]: I0227 16:31:22.934153 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7nx8\" (UniqueName: \"kubernetes.io/projected/c04f58d0-dbc3-46f2-bea2-96a29fc38dd1-kube-api-access-z7nx8\") pod \"nova-metadata-0\" (UID: \"c04f58d0-dbc3-46f2-bea2-96a29fc38dd1\") " pod="openstack/nova-metadata-0" Feb 27 16:31:22 crc kubenswrapper[4830]: I0227 16:31:22.960758 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-p8gmd"] Feb 27 16:31:22 crc kubenswrapper[4830]: I0227 16:31:22.962673 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-p8gmd" Feb 27 16:31:22 crc kubenswrapper[4830]: I0227 16:31:22.995578 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 27 16:31:22 crc kubenswrapper[4830]: I0227 16:31:22.996762 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 27 16:31:22 crc kubenswrapper[4830]: I0227 16:31:22.998295 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 27 16:31:23 crc kubenswrapper[4830]: I0227 16:31:23.040240 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-p8gmd"] Feb 27 16:31:23 crc kubenswrapper[4830]: I0227 16:31:23.095472 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03f3ea66-a50c-42c4-a54b-5ea85ac2973f-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"03f3ea66-a50c-42c4-a54b-5ea85ac2973f\") " pod="openstack/nova-cell1-novncproxy-0" Feb 27 16:31:23 crc kubenswrapper[4830]: I0227 16:31:23.095565 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03f3ea66-a50c-42c4-a54b-5ea85ac2973f-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"03f3ea66-a50c-42c4-a54b-5ea85ac2973f\") " pod="openstack/nova-cell1-novncproxy-0" Feb 27 16:31:23 crc kubenswrapper[4830]: I0227 16:31:23.095589 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fa63e972-7d02-4b84-8f48-c4126c0e6b06-dns-swift-storage-0\") pod \"dnsmasq-dns-bccf8f775-p8gmd\" (UID: \"fa63e972-7d02-4b84-8f48-c4126c0e6b06\") " pod="openstack/dnsmasq-dns-bccf8f775-p8gmd" Feb 27 16:31:23 crc kubenswrapper[4830]: I0227 
16:31:23.095633 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgbzl\" (UniqueName: \"kubernetes.io/projected/fa63e972-7d02-4b84-8f48-c4126c0e6b06-kube-api-access-hgbzl\") pod \"dnsmasq-dns-bccf8f775-p8gmd\" (UID: \"fa63e972-7d02-4b84-8f48-c4126c0e6b06\") " pod="openstack/dnsmasq-dns-bccf8f775-p8gmd" Feb 27 16:31:23 crc kubenswrapper[4830]: I0227 16:31:23.095691 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fa63e972-7d02-4b84-8f48-c4126c0e6b06-ovsdbserver-sb\") pod \"dnsmasq-dns-bccf8f775-p8gmd\" (UID: \"fa63e972-7d02-4b84-8f48-c4126c0e6b06\") " pod="openstack/dnsmasq-dns-bccf8f775-p8gmd" Feb 27 16:31:23 crc kubenswrapper[4830]: I0227 16:31:23.095765 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdrtv\" (UniqueName: \"kubernetes.io/projected/03f3ea66-a50c-42c4-a54b-5ea85ac2973f-kube-api-access-jdrtv\") pod \"nova-cell1-novncproxy-0\" (UID: \"03f3ea66-a50c-42c4-a54b-5ea85ac2973f\") " pod="openstack/nova-cell1-novncproxy-0" Feb 27 16:31:23 crc kubenswrapper[4830]: I0227 16:31:23.095865 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa63e972-7d02-4b84-8f48-c4126c0e6b06-config\") pod \"dnsmasq-dns-bccf8f775-p8gmd\" (UID: \"fa63e972-7d02-4b84-8f48-c4126c0e6b06\") " pod="openstack/dnsmasq-dns-bccf8f775-p8gmd" Feb 27 16:31:23 crc kubenswrapper[4830]: I0227 16:31:23.095987 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fa63e972-7d02-4b84-8f48-c4126c0e6b06-dns-svc\") pod \"dnsmasq-dns-bccf8f775-p8gmd\" (UID: \"fa63e972-7d02-4b84-8f48-c4126c0e6b06\") " pod="openstack/dnsmasq-dns-bccf8f775-p8gmd" Feb 27 16:31:23 crc 
kubenswrapper[4830]: I0227 16:31:23.096005 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fa63e972-7d02-4b84-8f48-c4126c0e6b06-ovsdbserver-nb\") pod \"dnsmasq-dns-bccf8f775-p8gmd\" (UID: \"fa63e972-7d02-4b84-8f48-c4126c0e6b06\") " pod="openstack/dnsmasq-dns-bccf8f775-p8gmd" Feb 27 16:31:23 crc kubenswrapper[4830]: I0227 16:31:23.117338 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 27 16:31:23 crc kubenswrapper[4830]: I0227 16:31:23.202409 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03f3ea66-a50c-42c4-a54b-5ea85ac2973f-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"03f3ea66-a50c-42c4-a54b-5ea85ac2973f\") " pod="openstack/nova-cell1-novncproxy-0" Feb 27 16:31:23 crc kubenswrapper[4830]: I0227 16:31:23.202458 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03f3ea66-a50c-42c4-a54b-5ea85ac2973f-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"03f3ea66-a50c-42c4-a54b-5ea85ac2973f\") " pod="openstack/nova-cell1-novncproxy-0" Feb 27 16:31:23 crc kubenswrapper[4830]: I0227 16:31:23.202480 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fa63e972-7d02-4b84-8f48-c4126c0e6b06-dns-swift-storage-0\") pod \"dnsmasq-dns-bccf8f775-p8gmd\" (UID: \"fa63e972-7d02-4b84-8f48-c4126c0e6b06\") " pod="openstack/dnsmasq-dns-bccf8f775-p8gmd" Feb 27 16:31:23 crc kubenswrapper[4830]: I0227 16:31:23.202505 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hgbzl\" (UniqueName: \"kubernetes.io/projected/fa63e972-7d02-4b84-8f48-c4126c0e6b06-kube-api-access-hgbzl\") pod 
\"dnsmasq-dns-bccf8f775-p8gmd\" (UID: \"fa63e972-7d02-4b84-8f48-c4126c0e6b06\") " pod="openstack/dnsmasq-dns-bccf8f775-p8gmd" Feb 27 16:31:23 crc kubenswrapper[4830]: I0227 16:31:23.202536 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fa63e972-7d02-4b84-8f48-c4126c0e6b06-ovsdbserver-sb\") pod \"dnsmasq-dns-bccf8f775-p8gmd\" (UID: \"fa63e972-7d02-4b84-8f48-c4126c0e6b06\") " pod="openstack/dnsmasq-dns-bccf8f775-p8gmd" Feb 27 16:31:23 crc kubenswrapper[4830]: I0227 16:31:23.202580 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jdrtv\" (UniqueName: \"kubernetes.io/projected/03f3ea66-a50c-42c4-a54b-5ea85ac2973f-kube-api-access-jdrtv\") pod \"nova-cell1-novncproxy-0\" (UID: \"03f3ea66-a50c-42c4-a54b-5ea85ac2973f\") " pod="openstack/nova-cell1-novncproxy-0" Feb 27 16:31:23 crc kubenswrapper[4830]: I0227 16:31:23.202637 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa63e972-7d02-4b84-8f48-c4126c0e6b06-config\") pod \"dnsmasq-dns-bccf8f775-p8gmd\" (UID: \"fa63e972-7d02-4b84-8f48-c4126c0e6b06\") " pod="openstack/dnsmasq-dns-bccf8f775-p8gmd" Feb 27 16:31:23 crc kubenswrapper[4830]: I0227 16:31:23.202692 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fa63e972-7d02-4b84-8f48-c4126c0e6b06-dns-svc\") pod \"dnsmasq-dns-bccf8f775-p8gmd\" (UID: \"fa63e972-7d02-4b84-8f48-c4126c0e6b06\") " pod="openstack/dnsmasq-dns-bccf8f775-p8gmd" Feb 27 16:31:23 crc kubenswrapper[4830]: I0227 16:31:23.202711 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fa63e972-7d02-4b84-8f48-c4126c0e6b06-ovsdbserver-nb\") pod \"dnsmasq-dns-bccf8f775-p8gmd\" (UID: \"fa63e972-7d02-4b84-8f48-c4126c0e6b06\") " 
pod="openstack/dnsmasq-dns-bccf8f775-p8gmd" Feb 27 16:31:23 crc kubenswrapper[4830]: I0227 16:31:23.204567 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fa63e972-7d02-4b84-8f48-c4126c0e6b06-ovsdbserver-sb\") pod \"dnsmasq-dns-bccf8f775-p8gmd\" (UID: \"fa63e972-7d02-4b84-8f48-c4126c0e6b06\") " pod="openstack/dnsmasq-dns-bccf8f775-p8gmd" Feb 27 16:31:23 crc kubenswrapper[4830]: I0227 16:31:23.204681 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa63e972-7d02-4b84-8f48-c4126c0e6b06-config\") pod \"dnsmasq-dns-bccf8f775-p8gmd\" (UID: \"fa63e972-7d02-4b84-8f48-c4126c0e6b06\") " pod="openstack/dnsmasq-dns-bccf8f775-p8gmd" Feb 27 16:31:23 crc kubenswrapper[4830]: I0227 16:31:23.204763 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fa63e972-7d02-4b84-8f48-c4126c0e6b06-ovsdbserver-nb\") pod \"dnsmasq-dns-bccf8f775-p8gmd\" (UID: \"fa63e972-7d02-4b84-8f48-c4126c0e6b06\") " pod="openstack/dnsmasq-dns-bccf8f775-p8gmd" Feb 27 16:31:23 crc kubenswrapper[4830]: I0227 16:31:23.205350 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fa63e972-7d02-4b84-8f48-c4126c0e6b06-dns-svc\") pod \"dnsmasq-dns-bccf8f775-p8gmd\" (UID: \"fa63e972-7d02-4b84-8f48-c4126c0e6b06\") " pod="openstack/dnsmasq-dns-bccf8f775-p8gmd" Feb 27 16:31:23 crc kubenswrapper[4830]: I0227 16:31:23.205872 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fa63e972-7d02-4b84-8f48-c4126c0e6b06-dns-swift-storage-0\") pod \"dnsmasq-dns-bccf8f775-p8gmd\" (UID: \"fa63e972-7d02-4b84-8f48-c4126c0e6b06\") " pod="openstack/dnsmasq-dns-bccf8f775-p8gmd" Feb 27 16:31:23 crc kubenswrapper[4830]: I0227 16:31:23.210547 4830 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 27 16:31:23 crc kubenswrapper[4830]: I0227 16:31:23.226593 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03f3ea66-a50c-42c4-a54b-5ea85ac2973f-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"03f3ea66-a50c-42c4-a54b-5ea85ac2973f\") " pod="openstack/nova-cell1-novncproxy-0" Feb 27 16:31:23 crc kubenswrapper[4830]: I0227 16:31:23.230011 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03f3ea66-a50c-42c4-a54b-5ea85ac2973f-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"03f3ea66-a50c-42c4-a54b-5ea85ac2973f\") " pod="openstack/nova-cell1-novncproxy-0" Feb 27 16:31:23 crc kubenswrapper[4830]: I0227 16:31:23.230330 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jdrtv\" (UniqueName: \"kubernetes.io/projected/03f3ea66-a50c-42c4-a54b-5ea85ac2973f-kube-api-access-jdrtv\") pod \"nova-cell1-novncproxy-0\" (UID: \"03f3ea66-a50c-42c4-a54b-5ea85ac2973f\") " pod="openstack/nova-cell1-novncproxy-0" Feb 27 16:31:23 crc kubenswrapper[4830]: I0227 16:31:23.234683 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hgbzl\" (UniqueName: \"kubernetes.io/projected/fa63e972-7d02-4b84-8f48-c4126c0e6b06-kube-api-access-hgbzl\") pod \"dnsmasq-dns-bccf8f775-p8gmd\" (UID: \"fa63e972-7d02-4b84-8f48-c4126c0e6b06\") " pod="openstack/dnsmasq-dns-bccf8f775-p8gmd" Feb 27 16:31:23 crc kubenswrapper[4830]: I0227 16:31:23.329244 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-p8gmd" Feb 27 16:31:23 crc kubenswrapper[4830]: I0227 16:31:23.340930 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 27 16:31:23 crc kubenswrapper[4830]: I0227 16:31:23.633882 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-8mnsh"] Feb 27 16:31:23 crc kubenswrapper[4830]: I0227 16:31:23.710875 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 27 16:31:23 crc kubenswrapper[4830]: I0227 16:31:23.747406 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 27 16:31:23 crc kubenswrapper[4830]: I0227 16:31:23.836598 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 27 16:31:23 crc kubenswrapper[4830]: I0227 16:31:23.931426 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-b8tph"] Feb 27 16:31:23 crc kubenswrapper[4830]: I0227 16:31:23.932657 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-b8tph" Feb 27 16:31:23 crc kubenswrapper[4830]: I0227 16:31:23.934674 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Feb 27 16:31:23 crc kubenswrapper[4830]: I0227 16:31:23.934844 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 27 16:31:23 crc kubenswrapper[4830]: I0227 16:31:23.945406 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-b8tph"] Feb 27 16:31:23 crc kubenswrapper[4830]: I0227 16:31:23.983304 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-p8gmd"] Feb 27 16:31:23 crc kubenswrapper[4830]: W0227 16:31:23.990722 4830 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod03f3ea66_a50c_42c4_a54b_5ea85ac2973f.slice/crio-17beda40d5b478cf740af71e1e8fed09b8a80861c72240c13aa9d29e0fb268f4 WatchSource:0}: Error finding container 17beda40d5b478cf740af71e1e8fed09b8a80861c72240c13aa9d29e0fb268f4: Status 404 returned error can't find the container with id 17beda40d5b478cf740af71e1e8fed09b8a80861c72240c13aa9d29e0fb268f4 Feb 27 16:31:23 crc kubenswrapper[4830]: I0227 16:31:23.995620 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 27 16:31:24 crc kubenswrapper[4830]: I0227 16:31:24.023078 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/43877352-b9c6-4179-82a0-3b194a870e8a-scripts\") pod \"nova-cell1-conductor-db-sync-b8tph\" (UID: \"43877352-b9c6-4179-82a0-3b194a870e8a\") " pod="openstack/nova-cell1-conductor-db-sync-b8tph" Feb 27 16:31:24 crc kubenswrapper[4830]: I0227 16:31:24.023169 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43877352-b9c6-4179-82a0-3b194a870e8a-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-b8tph\" (UID: \"43877352-b9c6-4179-82a0-3b194a870e8a\") " pod="openstack/nova-cell1-conductor-db-sync-b8tph" Feb 27 16:31:24 crc kubenswrapper[4830]: I0227 16:31:24.023196 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spp26\" (UniqueName: \"kubernetes.io/projected/43877352-b9c6-4179-82a0-3b194a870e8a-kube-api-access-spp26\") pod \"nova-cell1-conductor-db-sync-b8tph\" (UID: \"43877352-b9c6-4179-82a0-3b194a870e8a\") " pod="openstack/nova-cell1-conductor-db-sync-b8tph" Feb 27 16:31:24 crc kubenswrapper[4830]: I0227 16:31:24.023255 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/43877352-b9c6-4179-82a0-3b194a870e8a-config-data\") pod \"nova-cell1-conductor-db-sync-b8tph\" (UID: \"43877352-b9c6-4179-82a0-3b194a870e8a\") " pod="openstack/nova-cell1-conductor-db-sync-b8tph" Feb 27 16:31:24 crc kubenswrapper[4830]: I0227 16:31:24.125092 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-spp26\" (UniqueName: \"kubernetes.io/projected/43877352-b9c6-4179-82a0-3b194a870e8a-kube-api-access-spp26\") pod \"nova-cell1-conductor-db-sync-b8tph\" (UID: \"43877352-b9c6-4179-82a0-3b194a870e8a\") " pod="openstack/nova-cell1-conductor-db-sync-b8tph" Feb 27 16:31:24 crc kubenswrapper[4830]: I0227 16:31:24.125204 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43877352-b9c6-4179-82a0-3b194a870e8a-config-data\") pod \"nova-cell1-conductor-db-sync-b8tph\" (UID: \"43877352-b9c6-4179-82a0-3b194a870e8a\") " pod="openstack/nova-cell1-conductor-db-sync-b8tph" Feb 27 16:31:24 crc kubenswrapper[4830]: I0227 16:31:24.125274 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/43877352-b9c6-4179-82a0-3b194a870e8a-scripts\") pod \"nova-cell1-conductor-db-sync-b8tph\" (UID: \"43877352-b9c6-4179-82a0-3b194a870e8a\") " pod="openstack/nova-cell1-conductor-db-sync-b8tph" Feb 27 16:31:24 crc kubenswrapper[4830]: I0227 16:31:24.125333 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43877352-b9c6-4179-82a0-3b194a870e8a-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-b8tph\" (UID: \"43877352-b9c6-4179-82a0-3b194a870e8a\") " pod="openstack/nova-cell1-conductor-db-sync-b8tph" Feb 27 16:31:24 crc kubenswrapper[4830]: I0227 16:31:24.131393 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43877352-b9c6-4179-82a0-3b194a870e8a-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-b8tph\" (UID: \"43877352-b9c6-4179-82a0-3b194a870e8a\") " pod="openstack/nova-cell1-conductor-db-sync-b8tph" Feb 27 16:31:24 crc kubenswrapper[4830]: I0227 16:31:24.131580 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43877352-b9c6-4179-82a0-3b194a870e8a-config-data\") pod \"nova-cell1-conductor-db-sync-b8tph\" (UID: \"43877352-b9c6-4179-82a0-3b194a870e8a\") " pod="openstack/nova-cell1-conductor-db-sync-b8tph" Feb 27 16:31:24 crc kubenswrapper[4830]: I0227 16:31:24.132273 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/43877352-b9c6-4179-82a0-3b194a870e8a-scripts\") pod \"nova-cell1-conductor-db-sync-b8tph\" (UID: \"43877352-b9c6-4179-82a0-3b194a870e8a\") " pod="openstack/nova-cell1-conductor-db-sync-b8tph" Feb 27 16:31:24 crc kubenswrapper[4830]: I0227 16:31:24.143486 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-spp26\" (UniqueName: \"kubernetes.io/projected/43877352-b9c6-4179-82a0-3b194a870e8a-kube-api-access-spp26\") pod \"nova-cell1-conductor-db-sync-b8tph\" (UID: \"43877352-b9c6-4179-82a0-3b194a870e8a\") " pod="openstack/nova-cell1-conductor-db-sync-b8tph" Feb 27 16:31:24 crc kubenswrapper[4830]: I0227 16:31:24.252463 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-b8tph" Feb 27 16:31:24 crc kubenswrapper[4830]: I0227 16:31:24.418958 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"03f3ea66-a50c-42c4-a54b-5ea85ac2973f","Type":"ContainerStarted","Data":"17beda40d5b478cf740af71e1e8fed09b8a80861c72240c13aa9d29e0fb268f4"} Feb 27 16:31:24 crc kubenswrapper[4830]: I0227 16:31:24.426391 4830 generic.go:334] "Generic (PLEG): container finished" podID="fa63e972-7d02-4b84-8f48-c4126c0e6b06" containerID="c628a74f0963b41f934fe48342ac8ac62afeee0bc6d1e12b9006b8133b207093" exitCode=0 Feb 27 16:31:24 crc kubenswrapper[4830]: I0227 16:31:24.426987 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-p8gmd" event={"ID":"fa63e972-7d02-4b84-8f48-c4126c0e6b06","Type":"ContainerDied","Data":"c628a74f0963b41f934fe48342ac8ac62afeee0bc6d1e12b9006b8133b207093"} Feb 27 16:31:24 crc kubenswrapper[4830]: I0227 16:31:24.427020 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-p8gmd" event={"ID":"fa63e972-7d02-4b84-8f48-c4126c0e6b06","Type":"ContainerStarted","Data":"58eba474e106c61c1feaa3b8b7712eef0faef5992f6a2b410b738b0088c6ccb7"} Feb 27 16:31:24 crc kubenswrapper[4830]: I0227 16:31:24.431972 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"50ea60ad-3437-435c-ba9c-462adae597a2","Type":"ContainerStarted","Data":"5bcf51d1d6dd08738ec4ebeb8c00b20f52b955747184e88d86384ce6321dae3a"} Feb 27 16:31:24 crc kubenswrapper[4830]: I0227 16:31:24.437729 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c04f58d0-dbc3-46f2-bea2-96a29fc38dd1","Type":"ContainerStarted","Data":"6c0244baad06e56a0dccdadda73d5cb52c60a92f4690d55d929ef7d6480c30d9"} Feb 27 16:31:24 crc kubenswrapper[4830]: I0227 16:31:24.457873 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell0-cell-mapping-8mnsh" event={"ID":"5cf85768-fd08-43b7-a8bf-a2738e493b22","Type":"ContainerStarted","Data":"aaca4e638aa616674edc02748979015b7798beec0a50cf331a81661c6f522394"} Feb 27 16:31:24 crc kubenswrapper[4830]: I0227 16:31:24.457927 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-8mnsh" event={"ID":"5cf85768-fd08-43b7-a8bf-a2738e493b22","Type":"ContainerStarted","Data":"9bc87efe4a6998e054a1cb73f7acdf00899c3787f6bbab5bcb911c5f203e54fb"} Feb 27 16:31:24 crc kubenswrapper[4830]: I0227 16:31:24.466099 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"050ac6bf-ac1c-406d-af59-2259ceb05ff8","Type":"ContainerStarted","Data":"d5751dec71d6142a3e503c7f94f5e2e7059ae53aaab367d25e2d40bee6bad587"} Feb 27 16:31:24 crc kubenswrapper[4830]: I0227 16:31:24.498147 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-8mnsh" podStartSLOduration=2.498130881 podStartE2EDuration="2.498130881s" podCreationTimestamp="2026-02-27 16:31:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:31:24.486110302 +0000 UTC m=+1480.575382765" watchObservedRunningTime="2026-02-27 16:31:24.498130881 +0000 UTC m=+1480.587403344" Feb 27 16:31:24 crc kubenswrapper[4830]: I0227 16:31:24.743602 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-b8tph"] Feb 27 16:31:25 crc kubenswrapper[4830]: I0227 16:31:25.494105 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-b8tph" event={"ID":"43877352-b9c6-4179-82a0-3b194a870e8a","Type":"ContainerStarted","Data":"e8228851dc153740caa4991add05b87921eb8d07bae6164bd7ec594683dd08a2"} Feb 27 16:31:25 crc kubenswrapper[4830]: I0227 16:31:25.494681 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell1-conductor-db-sync-b8tph" event={"ID":"43877352-b9c6-4179-82a0-3b194a870e8a","Type":"ContainerStarted","Data":"da6b5581921b79491915387ead00a363d139fbf422b269d22e3d2829aa4943e8"} Feb 27 16:31:25 crc kubenswrapper[4830]: I0227 16:31:25.497996 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-p8gmd" event={"ID":"fa63e972-7d02-4b84-8f48-c4126c0e6b06","Type":"ContainerStarted","Data":"84e5aebe73093b2a7e9c000daf0f6b003429ded822d6e5698a1d1f248efe4601"} Feb 27 16:31:25 crc kubenswrapper[4830]: I0227 16:31:25.498758 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-bccf8f775-p8gmd" Feb 27 16:31:25 crc kubenswrapper[4830]: I0227 16:31:25.518159 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-b8tph" podStartSLOduration=2.518139111 podStartE2EDuration="2.518139111s" podCreationTimestamp="2026-02-27 16:31:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:31:25.512518865 +0000 UTC m=+1481.601791328" watchObservedRunningTime="2026-02-27 16:31:25.518139111 +0000 UTC m=+1481.607411564" Feb 27 16:31:25 crc kubenswrapper[4830]: I0227 16:31:25.532504 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-bccf8f775-p8gmd" podStartSLOduration=3.532491947 podStartE2EDuration="3.532491947s" podCreationTimestamp="2026-02-27 16:31:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:31:25.531572454 +0000 UTC m=+1481.620844917" watchObservedRunningTime="2026-02-27 16:31:25.532491947 +0000 UTC m=+1481.621764400" Feb 27 16:31:26 crc kubenswrapper[4830]: I0227 16:31:26.681959 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 27 
16:31:26 crc kubenswrapper[4830]: I0227 16:31:26.721209 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 27 16:31:27 crc kubenswrapper[4830]: I0227 16:31:27.514894 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"50ea60ad-3437-435c-ba9c-462adae597a2","Type":"ContainerStarted","Data":"309f6573e73b8d560ad1c1da3fcdeddf2c652c2005f376d7dfe4b638529e7068"} Feb 27 16:31:27 crc kubenswrapper[4830]: I0227 16:31:27.520362 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c04f58d0-dbc3-46f2-bea2-96a29fc38dd1","Type":"ContainerStarted","Data":"36a109282809e0a53e8d34f19e626157b45b5e89602e283c6c155411d043c14c"} Feb 27 16:31:27 crc kubenswrapper[4830]: I0227 16:31:27.522151 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"050ac6bf-ac1c-406d-af59-2259ceb05ff8","Type":"ContainerStarted","Data":"00d0e6466d0ca2034b4b6ada51d1f529a9778eb8658e79f7cb55a956e8889ff0"} Feb 27 16:31:27 crc kubenswrapper[4830]: I0227 16:31:27.532018 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"03f3ea66-a50c-42c4-a54b-5ea85ac2973f","Type":"ContainerStarted","Data":"c295229ad63020dc74e32f395e0052762d2b710c258cbe248a93f4458f7f6cdf"} Feb 27 16:31:27 crc kubenswrapper[4830]: I0227 16:31:27.532377 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="03f3ea66-a50c-42c4-a54b-5ea85ac2973f" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://c295229ad63020dc74e32f395e0052762d2b710c258cbe248a93f4458f7f6cdf" gracePeriod=30 Feb 27 16:31:27 crc kubenswrapper[4830]: I0227 16:31:27.537982 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.146388938 podStartE2EDuration="5.537964192s" 
podCreationTimestamp="2026-02-27 16:31:22 +0000 UTC" firstStartedPulling="2026-02-27 16:31:23.739083745 +0000 UTC m=+1479.828356208" lastFinishedPulling="2026-02-27 16:31:27.130658999 +0000 UTC m=+1483.219931462" observedRunningTime="2026-02-27 16:31:27.533009253 +0000 UTC m=+1483.622281716" watchObservedRunningTime="2026-02-27 16:31:27.537964192 +0000 UTC m=+1483.627236655" Feb 27 16:31:27 crc kubenswrapper[4830]: I0227 16:31:27.553384 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.41682502 podStartE2EDuration="5.553364643s" podCreationTimestamp="2026-02-27 16:31:22 +0000 UTC" firstStartedPulling="2026-02-27 16:31:23.994117586 +0000 UTC m=+1480.083390049" lastFinishedPulling="2026-02-27 16:31:27.130657209 +0000 UTC m=+1483.219929672" observedRunningTime="2026-02-27 16:31:27.549561011 +0000 UTC m=+1483.638833484" watchObservedRunningTime="2026-02-27 16:31:27.553364643 +0000 UTC m=+1483.642637116" Feb 27 16:31:27 crc kubenswrapper[4830]: I0227 16:31:27.880754 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 27 16:31:28 crc kubenswrapper[4830]: I0227 16:31:28.341649 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 27 16:31:28 crc kubenswrapper[4830]: I0227 16:31:28.551653 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c04f58d0-dbc3-46f2-bea2-96a29fc38dd1","Type":"ContainerStarted","Data":"134c014e2e6a2fc2f31f951c05bf58a5056e2c3d94228581fb80cc856d0ca673"} Feb 27 16:31:28 crc kubenswrapper[4830]: I0227 16:31:28.551753 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="c04f58d0-dbc3-46f2-bea2-96a29fc38dd1" containerName="nova-metadata-log" containerID="cri-o://36a109282809e0a53e8d34f19e626157b45b5e89602e283c6c155411d043c14c" gracePeriod=30 Feb 27 
16:31:28 crc kubenswrapper[4830]: I0227 16:31:28.552005 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="c04f58d0-dbc3-46f2-bea2-96a29fc38dd1" containerName="nova-metadata-metadata" containerID="cri-o://134c014e2e6a2fc2f31f951c05bf58a5056e2c3d94228581fb80cc856d0ca673" gracePeriod=30 Feb 27 16:31:28 crc kubenswrapper[4830]: I0227 16:31:28.564383 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"050ac6bf-ac1c-406d-af59-2259ceb05ff8","Type":"ContainerStarted","Data":"2471a08c8f1dbd502aa24cfd61bccc96c9b0d247bc7a1f8681485d3df12f3999"} Feb 27 16:31:28 crc kubenswrapper[4830]: I0227 16:31:28.592309 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.309394386 podStartE2EDuration="6.592293369s" podCreationTimestamp="2026-02-27 16:31:22 +0000 UTC" firstStartedPulling="2026-02-27 16:31:23.851452705 +0000 UTC m=+1479.940725168" lastFinishedPulling="2026-02-27 16:31:27.134351688 +0000 UTC m=+1483.223624151" observedRunningTime="2026-02-27 16:31:28.589222045 +0000 UTC m=+1484.678494508" watchObservedRunningTime="2026-02-27 16:31:28.592293369 +0000 UTC m=+1484.681565832" Feb 27 16:31:28 crc kubenswrapper[4830]: I0227 16:31:28.612099 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.288804349 podStartE2EDuration="6.612084726s" podCreationTimestamp="2026-02-27 16:31:22 +0000 UTC" firstStartedPulling="2026-02-27 16:31:23.807805943 +0000 UTC m=+1479.897078406" lastFinishedPulling="2026-02-27 16:31:27.13108632 +0000 UTC m=+1483.220358783" observedRunningTime="2026-02-27 16:31:28.608053159 +0000 UTC m=+1484.697325622" watchObservedRunningTime="2026-02-27 16:31:28.612084726 +0000 UTC m=+1484.701357189" Feb 27 16:31:29 crc kubenswrapper[4830]: I0227 16:31:29.145268 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 27 16:31:29 crc kubenswrapper[4830]: I0227 16:31:29.243233 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c04f58d0-dbc3-46f2-bea2-96a29fc38dd1-config-data\") pod \"c04f58d0-dbc3-46f2-bea2-96a29fc38dd1\" (UID: \"c04f58d0-dbc3-46f2-bea2-96a29fc38dd1\") " Feb 27 16:31:29 crc kubenswrapper[4830]: I0227 16:31:29.243282 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c04f58d0-dbc3-46f2-bea2-96a29fc38dd1-combined-ca-bundle\") pod \"c04f58d0-dbc3-46f2-bea2-96a29fc38dd1\" (UID: \"c04f58d0-dbc3-46f2-bea2-96a29fc38dd1\") " Feb 27 16:31:29 crc kubenswrapper[4830]: I0227 16:31:29.243639 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c04f58d0-dbc3-46f2-bea2-96a29fc38dd1-logs\") pod \"c04f58d0-dbc3-46f2-bea2-96a29fc38dd1\" (UID: \"c04f58d0-dbc3-46f2-bea2-96a29fc38dd1\") " Feb 27 16:31:29 crc kubenswrapper[4830]: I0227 16:31:29.243688 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z7nx8\" (UniqueName: \"kubernetes.io/projected/c04f58d0-dbc3-46f2-bea2-96a29fc38dd1-kube-api-access-z7nx8\") pod \"c04f58d0-dbc3-46f2-bea2-96a29fc38dd1\" (UID: \"c04f58d0-dbc3-46f2-bea2-96a29fc38dd1\") " Feb 27 16:31:29 crc kubenswrapper[4830]: I0227 16:31:29.245684 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c04f58d0-dbc3-46f2-bea2-96a29fc38dd1-logs" (OuterVolumeSpecName: "logs") pod "c04f58d0-dbc3-46f2-bea2-96a29fc38dd1" (UID: "c04f58d0-dbc3-46f2-bea2-96a29fc38dd1"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 16:31:29 crc kubenswrapper[4830]: I0227 16:31:29.252362 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c04f58d0-dbc3-46f2-bea2-96a29fc38dd1-kube-api-access-z7nx8" (OuterVolumeSpecName: "kube-api-access-z7nx8") pod "c04f58d0-dbc3-46f2-bea2-96a29fc38dd1" (UID: "c04f58d0-dbc3-46f2-bea2-96a29fc38dd1"). InnerVolumeSpecName "kube-api-access-z7nx8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:31:29 crc kubenswrapper[4830]: I0227 16:31:29.277769 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c04f58d0-dbc3-46f2-bea2-96a29fc38dd1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c04f58d0-dbc3-46f2-bea2-96a29fc38dd1" (UID: "c04f58d0-dbc3-46f2-bea2-96a29fc38dd1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:31:29 crc kubenswrapper[4830]: I0227 16:31:29.303306 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c04f58d0-dbc3-46f2-bea2-96a29fc38dd1-config-data" (OuterVolumeSpecName: "config-data") pod "c04f58d0-dbc3-46f2-bea2-96a29fc38dd1" (UID: "c04f58d0-dbc3-46f2-bea2-96a29fc38dd1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:31:29 crc kubenswrapper[4830]: I0227 16:31:29.346267 4830 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c04f58d0-dbc3-46f2-bea2-96a29fc38dd1-logs\") on node \"crc\" DevicePath \"\"" Feb 27 16:31:29 crc kubenswrapper[4830]: I0227 16:31:29.346306 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z7nx8\" (UniqueName: \"kubernetes.io/projected/c04f58d0-dbc3-46f2-bea2-96a29fc38dd1-kube-api-access-z7nx8\") on node \"crc\" DevicePath \"\"" Feb 27 16:31:29 crc kubenswrapper[4830]: I0227 16:31:29.346323 4830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c04f58d0-dbc3-46f2-bea2-96a29fc38dd1-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 16:31:29 crc kubenswrapper[4830]: I0227 16:31:29.346334 4830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c04f58d0-dbc3-46f2-bea2-96a29fc38dd1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 16:31:29 crc kubenswrapper[4830]: I0227 16:31:29.578197 4830 generic.go:334] "Generic (PLEG): container finished" podID="c04f58d0-dbc3-46f2-bea2-96a29fc38dd1" containerID="134c014e2e6a2fc2f31f951c05bf58a5056e2c3d94228581fb80cc856d0ca673" exitCode=0 Feb 27 16:31:29 crc kubenswrapper[4830]: I0227 16:31:29.578622 4830 generic.go:334] "Generic (PLEG): container finished" podID="c04f58d0-dbc3-46f2-bea2-96a29fc38dd1" containerID="36a109282809e0a53e8d34f19e626157b45b5e89602e283c6c155411d043c14c" exitCode=143 Feb 27 16:31:29 crc kubenswrapper[4830]: I0227 16:31:29.578272 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c04f58d0-dbc3-46f2-bea2-96a29fc38dd1","Type":"ContainerDied","Data":"134c014e2e6a2fc2f31f951c05bf58a5056e2c3d94228581fb80cc856d0ca673"} Feb 27 16:31:29 crc kubenswrapper[4830]: I0227 16:31:29.578257 4830 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 27 16:31:29 crc kubenswrapper[4830]: I0227 16:31:29.578740 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c04f58d0-dbc3-46f2-bea2-96a29fc38dd1","Type":"ContainerDied","Data":"36a109282809e0a53e8d34f19e626157b45b5e89602e283c6c155411d043c14c"} Feb 27 16:31:29 crc kubenswrapper[4830]: I0227 16:31:29.578758 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c04f58d0-dbc3-46f2-bea2-96a29fc38dd1","Type":"ContainerDied","Data":"6c0244baad06e56a0dccdadda73d5cb52c60a92f4690d55d929ef7d6480c30d9"} Feb 27 16:31:29 crc kubenswrapper[4830]: I0227 16:31:29.578779 4830 scope.go:117] "RemoveContainer" containerID="134c014e2e6a2fc2f31f951c05bf58a5056e2c3d94228581fb80cc856d0ca673" Feb 27 16:31:29 crc kubenswrapper[4830]: I0227 16:31:29.628061 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 27 16:31:29 crc kubenswrapper[4830]: I0227 16:31:29.640265 4830 scope.go:117] "RemoveContainer" containerID="36a109282809e0a53e8d34f19e626157b45b5e89602e283c6c155411d043c14c" Feb 27 16:31:29 crc kubenswrapper[4830]: I0227 16:31:29.640561 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 27 16:31:29 crc kubenswrapper[4830]: I0227 16:31:29.652263 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 27 16:31:29 crc kubenswrapper[4830]: E0227 16:31:29.652836 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c04f58d0-dbc3-46f2-bea2-96a29fc38dd1" containerName="nova-metadata-log" Feb 27 16:31:29 crc kubenswrapper[4830]: I0227 16:31:29.652863 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="c04f58d0-dbc3-46f2-bea2-96a29fc38dd1" containerName="nova-metadata-log" Feb 27 16:31:29 crc kubenswrapper[4830]: E0227 16:31:29.652894 4830 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c04f58d0-dbc3-46f2-bea2-96a29fc38dd1" containerName="nova-metadata-metadata" Feb 27 16:31:29 crc kubenswrapper[4830]: I0227 16:31:29.652903 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="c04f58d0-dbc3-46f2-bea2-96a29fc38dd1" containerName="nova-metadata-metadata" Feb 27 16:31:29 crc kubenswrapper[4830]: I0227 16:31:29.653261 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="c04f58d0-dbc3-46f2-bea2-96a29fc38dd1" containerName="nova-metadata-log" Feb 27 16:31:29 crc kubenswrapper[4830]: I0227 16:31:29.653299 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="c04f58d0-dbc3-46f2-bea2-96a29fc38dd1" containerName="nova-metadata-metadata" Feb 27 16:31:29 crc kubenswrapper[4830]: I0227 16:31:29.654576 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 27 16:31:29 crc kubenswrapper[4830]: I0227 16:31:29.661209 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 27 16:31:29 crc kubenswrapper[4830]: I0227 16:31:29.665360 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 27 16:31:29 crc kubenswrapper[4830]: I0227 16:31:29.665672 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 27 16:31:29 crc kubenswrapper[4830]: I0227 16:31:29.696514 4830 scope.go:117] "RemoveContainer" containerID="134c014e2e6a2fc2f31f951c05bf58a5056e2c3d94228581fb80cc856d0ca673" Feb 27 16:31:29 crc kubenswrapper[4830]: E0227 16:31:29.697027 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"134c014e2e6a2fc2f31f951c05bf58a5056e2c3d94228581fb80cc856d0ca673\": container with ID starting with 134c014e2e6a2fc2f31f951c05bf58a5056e2c3d94228581fb80cc856d0ca673 not found: ID does not exist" 
containerID="134c014e2e6a2fc2f31f951c05bf58a5056e2c3d94228581fb80cc856d0ca673" Feb 27 16:31:29 crc kubenswrapper[4830]: I0227 16:31:29.697068 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"134c014e2e6a2fc2f31f951c05bf58a5056e2c3d94228581fb80cc856d0ca673"} err="failed to get container status \"134c014e2e6a2fc2f31f951c05bf58a5056e2c3d94228581fb80cc856d0ca673\": rpc error: code = NotFound desc = could not find container \"134c014e2e6a2fc2f31f951c05bf58a5056e2c3d94228581fb80cc856d0ca673\": container with ID starting with 134c014e2e6a2fc2f31f951c05bf58a5056e2c3d94228581fb80cc856d0ca673 not found: ID does not exist" Feb 27 16:31:29 crc kubenswrapper[4830]: I0227 16:31:29.697094 4830 scope.go:117] "RemoveContainer" containerID="36a109282809e0a53e8d34f19e626157b45b5e89602e283c6c155411d043c14c" Feb 27 16:31:29 crc kubenswrapper[4830]: E0227 16:31:29.697392 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"36a109282809e0a53e8d34f19e626157b45b5e89602e283c6c155411d043c14c\": container with ID starting with 36a109282809e0a53e8d34f19e626157b45b5e89602e283c6c155411d043c14c not found: ID does not exist" containerID="36a109282809e0a53e8d34f19e626157b45b5e89602e283c6c155411d043c14c" Feb 27 16:31:29 crc kubenswrapper[4830]: I0227 16:31:29.697418 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36a109282809e0a53e8d34f19e626157b45b5e89602e283c6c155411d043c14c"} err="failed to get container status \"36a109282809e0a53e8d34f19e626157b45b5e89602e283c6c155411d043c14c\": rpc error: code = NotFound desc = could not find container \"36a109282809e0a53e8d34f19e626157b45b5e89602e283c6c155411d043c14c\": container with ID starting with 36a109282809e0a53e8d34f19e626157b45b5e89602e283c6c155411d043c14c not found: ID does not exist" Feb 27 16:31:29 crc kubenswrapper[4830]: I0227 16:31:29.697436 4830 scope.go:117] 
"RemoveContainer" containerID="134c014e2e6a2fc2f31f951c05bf58a5056e2c3d94228581fb80cc856d0ca673" Feb 27 16:31:29 crc kubenswrapper[4830]: I0227 16:31:29.697692 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"134c014e2e6a2fc2f31f951c05bf58a5056e2c3d94228581fb80cc856d0ca673"} err="failed to get container status \"134c014e2e6a2fc2f31f951c05bf58a5056e2c3d94228581fb80cc856d0ca673\": rpc error: code = NotFound desc = could not find container \"134c014e2e6a2fc2f31f951c05bf58a5056e2c3d94228581fb80cc856d0ca673\": container with ID starting with 134c014e2e6a2fc2f31f951c05bf58a5056e2c3d94228581fb80cc856d0ca673 not found: ID does not exist" Feb 27 16:31:29 crc kubenswrapper[4830]: I0227 16:31:29.697713 4830 scope.go:117] "RemoveContainer" containerID="36a109282809e0a53e8d34f19e626157b45b5e89602e283c6c155411d043c14c" Feb 27 16:31:29 crc kubenswrapper[4830]: I0227 16:31:29.698182 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36a109282809e0a53e8d34f19e626157b45b5e89602e283c6c155411d043c14c"} err="failed to get container status \"36a109282809e0a53e8d34f19e626157b45b5e89602e283c6c155411d043c14c\": rpc error: code = NotFound desc = could not find container \"36a109282809e0a53e8d34f19e626157b45b5e89602e283c6c155411d043c14c\": container with ID starting with 36a109282809e0a53e8d34f19e626157b45b5e89602e283c6c155411d043c14c not found: ID does not exist" Feb 27 16:31:29 crc kubenswrapper[4830]: I0227 16:31:29.756037 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/76c31a2b-7df1-4d67-b7c9-71bbb2536891-logs\") pod \"nova-metadata-0\" (UID: \"76c31a2b-7df1-4d67-b7c9-71bbb2536891\") " pod="openstack/nova-metadata-0" Feb 27 16:31:29 crc kubenswrapper[4830]: I0227 16:31:29.756108 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-d2mkl\" (UniqueName: \"kubernetes.io/projected/76c31a2b-7df1-4d67-b7c9-71bbb2536891-kube-api-access-d2mkl\") pod \"nova-metadata-0\" (UID: \"76c31a2b-7df1-4d67-b7c9-71bbb2536891\") " pod="openstack/nova-metadata-0" Feb 27 16:31:29 crc kubenswrapper[4830]: I0227 16:31:29.756161 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76c31a2b-7df1-4d67-b7c9-71bbb2536891-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"76c31a2b-7df1-4d67-b7c9-71bbb2536891\") " pod="openstack/nova-metadata-0" Feb 27 16:31:29 crc kubenswrapper[4830]: I0227 16:31:29.756213 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76c31a2b-7df1-4d67-b7c9-71bbb2536891-config-data\") pod \"nova-metadata-0\" (UID: \"76c31a2b-7df1-4d67-b7c9-71bbb2536891\") " pod="openstack/nova-metadata-0" Feb 27 16:31:29 crc kubenswrapper[4830]: I0227 16:31:29.756297 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/76c31a2b-7df1-4d67-b7c9-71bbb2536891-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"76c31a2b-7df1-4d67-b7c9-71bbb2536891\") " pod="openstack/nova-metadata-0" Feb 27 16:31:29 crc kubenswrapper[4830]: I0227 16:31:29.858825 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/76c31a2b-7df1-4d67-b7c9-71bbb2536891-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"76c31a2b-7df1-4d67-b7c9-71bbb2536891\") " pod="openstack/nova-metadata-0" Feb 27 16:31:29 crc kubenswrapper[4830]: I0227 16:31:29.858978 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/76c31a2b-7df1-4d67-b7c9-71bbb2536891-logs\") pod \"nova-metadata-0\" (UID: \"76c31a2b-7df1-4d67-b7c9-71bbb2536891\") " pod="openstack/nova-metadata-0" Feb 27 16:31:29 crc kubenswrapper[4830]: I0227 16:31:29.859031 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2mkl\" (UniqueName: \"kubernetes.io/projected/76c31a2b-7df1-4d67-b7c9-71bbb2536891-kube-api-access-d2mkl\") pod \"nova-metadata-0\" (UID: \"76c31a2b-7df1-4d67-b7c9-71bbb2536891\") " pod="openstack/nova-metadata-0" Feb 27 16:31:29 crc kubenswrapper[4830]: I0227 16:31:29.859094 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76c31a2b-7df1-4d67-b7c9-71bbb2536891-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"76c31a2b-7df1-4d67-b7c9-71bbb2536891\") " pod="openstack/nova-metadata-0" Feb 27 16:31:29 crc kubenswrapper[4830]: I0227 16:31:29.859152 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76c31a2b-7df1-4d67-b7c9-71bbb2536891-config-data\") pod \"nova-metadata-0\" (UID: \"76c31a2b-7df1-4d67-b7c9-71bbb2536891\") " pod="openstack/nova-metadata-0" Feb 27 16:31:29 crc kubenswrapper[4830]: I0227 16:31:29.861108 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/76c31a2b-7df1-4d67-b7c9-71bbb2536891-logs\") pod \"nova-metadata-0\" (UID: \"76c31a2b-7df1-4d67-b7c9-71bbb2536891\") " pod="openstack/nova-metadata-0" Feb 27 16:31:29 crc kubenswrapper[4830]: I0227 16:31:29.863882 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/76c31a2b-7df1-4d67-b7c9-71bbb2536891-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"76c31a2b-7df1-4d67-b7c9-71bbb2536891\") " pod="openstack/nova-metadata-0" Feb 27 16:31:29 
crc kubenswrapper[4830]: I0227 16:31:29.866113 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76c31a2b-7df1-4d67-b7c9-71bbb2536891-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"76c31a2b-7df1-4d67-b7c9-71bbb2536891\") " pod="openstack/nova-metadata-0" Feb 27 16:31:29 crc kubenswrapper[4830]: I0227 16:31:29.872209 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76c31a2b-7df1-4d67-b7c9-71bbb2536891-config-data\") pod \"nova-metadata-0\" (UID: \"76c31a2b-7df1-4d67-b7c9-71bbb2536891\") " pod="openstack/nova-metadata-0" Feb 27 16:31:29 crc kubenswrapper[4830]: I0227 16:31:29.878998 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2mkl\" (UniqueName: \"kubernetes.io/projected/76c31a2b-7df1-4d67-b7c9-71bbb2536891-kube-api-access-d2mkl\") pod \"nova-metadata-0\" (UID: \"76c31a2b-7df1-4d67-b7c9-71bbb2536891\") " pod="openstack/nova-metadata-0" Feb 27 16:31:29 crc kubenswrapper[4830]: I0227 16:31:29.977991 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 27 16:31:30 crc kubenswrapper[4830]: I0227 16:31:30.512906 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 27 16:31:30 crc kubenswrapper[4830]: W0227 16:31:30.521286 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod76c31a2b_7df1_4d67_b7c9_71bbb2536891.slice/crio-b4e868567a1c3c02b8bbbc9effd17618fb777d1865ea6cc995ba4e8003d50c25 WatchSource:0}: Error finding container b4e868567a1c3c02b8bbbc9effd17618fb777d1865ea6cc995ba4e8003d50c25: Status 404 returned error can't find the container with id b4e868567a1c3c02b8bbbc9effd17618fb777d1865ea6cc995ba4e8003d50c25 Feb 27 16:31:30 crc kubenswrapper[4830]: I0227 16:31:30.592109 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"76c31a2b-7df1-4d67-b7c9-71bbb2536891","Type":"ContainerStarted","Data":"b4e868567a1c3c02b8bbbc9effd17618fb777d1865ea6cc995ba4e8003d50c25"} Feb 27 16:31:30 crc kubenswrapper[4830]: I0227 16:31:30.776156 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c04f58d0-dbc3-46f2-bea2-96a29fc38dd1" path="/var/lib/kubelet/pods/c04f58d0-dbc3-46f2-bea2-96a29fc38dd1/volumes" Feb 27 16:31:31 crc kubenswrapper[4830]: I0227 16:31:31.606752 4830 generic.go:334] "Generic (PLEG): container finished" podID="5cf85768-fd08-43b7-a8bf-a2738e493b22" containerID="aaca4e638aa616674edc02748979015b7798beec0a50cf331a81661c6f522394" exitCode=0 Feb 27 16:31:31 crc kubenswrapper[4830]: I0227 16:31:31.606891 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-8mnsh" event={"ID":"5cf85768-fd08-43b7-a8bf-a2738e493b22","Type":"ContainerDied","Data":"aaca4e638aa616674edc02748979015b7798beec0a50cf331a81661c6f522394"} Feb 27 16:31:31 crc kubenswrapper[4830]: I0227 16:31:31.611159 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-metadata-0" event={"ID":"76c31a2b-7df1-4d67-b7c9-71bbb2536891","Type":"ContainerStarted","Data":"9830c8224ec287023f2ca25b198b4130742c807151da9ca3714819bfc8fdcd16"} Feb 27 16:31:31 crc kubenswrapper[4830]: I0227 16:31:31.611222 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"76c31a2b-7df1-4d67-b7c9-71bbb2536891","Type":"ContainerStarted","Data":"3fbcef26336cdd6474955ee97c16bb97a54b9c672d754410553a5973a0f5a054"} Feb 27 16:31:31 crc kubenswrapper[4830]: I0227 16:31:31.666348 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.666321695 podStartE2EDuration="2.666321695s" podCreationTimestamp="2026-02-27 16:31:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:31:31.65365604 +0000 UTC m=+1487.742928543" watchObservedRunningTime="2026-02-27 16:31:31.666321695 +0000 UTC m=+1487.755594188" Feb 27 16:31:32 crc kubenswrapper[4830]: I0227 16:31:32.825831 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 27 16:31:32 crc kubenswrapper[4830]: I0227 16:31:32.826189 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 27 16:31:32 crc kubenswrapper[4830]: I0227 16:31:32.884082 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 27 16:31:32 crc kubenswrapper[4830]: I0227 16:31:32.916641 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 27 16:31:33 crc kubenswrapper[4830]: I0227 16:31:33.117935 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-8mnsh" Feb 27 16:31:33 crc kubenswrapper[4830]: I0227 16:31:33.235334 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8zb56\" (UniqueName: \"kubernetes.io/projected/5cf85768-fd08-43b7-a8bf-a2738e493b22-kube-api-access-8zb56\") pod \"5cf85768-fd08-43b7-a8bf-a2738e493b22\" (UID: \"5cf85768-fd08-43b7-a8bf-a2738e493b22\") " Feb 27 16:31:33 crc kubenswrapper[4830]: I0227 16:31:33.235524 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cf85768-fd08-43b7-a8bf-a2738e493b22-combined-ca-bundle\") pod \"5cf85768-fd08-43b7-a8bf-a2738e493b22\" (UID: \"5cf85768-fd08-43b7-a8bf-a2738e493b22\") " Feb 27 16:31:33 crc kubenswrapper[4830]: I0227 16:31:33.235767 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5cf85768-fd08-43b7-a8bf-a2738e493b22-config-data\") pod \"5cf85768-fd08-43b7-a8bf-a2738e493b22\" (UID: \"5cf85768-fd08-43b7-a8bf-a2738e493b22\") " Feb 27 16:31:33 crc kubenswrapper[4830]: I0227 16:31:33.236717 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5cf85768-fd08-43b7-a8bf-a2738e493b22-scripts\") pod \"5cf85768-fd08-43b7-a8bf-a2738e493b22\" (UID: \"5cf85768-fd08-43b7-a8bf-a2738e493b22\") " Feb 27 16:31:33 crc kubenswrapper[4830]: I0227 16:31:33.245606 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5cf85768-fd08-43b7-a8bf-a2738e493b22-scripts" (OuterVolumeSpecName: "scripts") pod "5cf85768-fd08-43b7-a8bf-a2738e493b22" (UID: "5cf85768-fd08-43b7-a8bf-a2738e493b22"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:31:33 crc kubenswrapper[4830]: I0227 16:31:33.246077 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5cf85768-fd08-43b7-a8bf-a2738e493b22-kube-api-access-8zb56" (OuterVolumeSpecName: "kube-api-access-8zb56") pod "5cf85768-fd08-43b7-a8bf-a2738e493b22" (UID: "5cf85768-fd08-43b7-a8bf-a2738e493b22"). InnerVolumeSpecName "kube-api-access-8zb56". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:31:33 crc kubenswrapper[4830]: I0227 16:31:33.268082 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5cf85768-fd08-43b7-a8bf-a2738e493b22-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5cf85768-fd08-43b7-a8bf-a2738e493b22" (UID: "5cf85768-fd08-43b7-a8bf-a2738e493b22"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:31:33 crc kubenswrapper[4830]: I0227 16:31:33.278757 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5cf85768-fd08-43b7-a8bf-a2738e493b22-config-data" (OuterVolumeSpecName: "config-data") pod "5cf85768-fd08-43b7-a8bf-a2738e493b22" (UID: "5cf85768-fd08-43b7-a8bf-a2738e493b22"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:31:33 crc kubenswrapper[4830]: I0227 16:31:33.331226 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-bccf8f775-p8gmd" Feb 27 16:31:33 crc kubenswrapper[4830]: I0227 16:31:33.338916 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8zb56\" (UniqueName: \"kubernetes.io/projected/5cf85768-fd08-43b7-a8bf-a2738e493b22-kube-api-access-8zb56\") on node \"crc\" DevicePath \"\"" Feb 27 16:31:33 crc kubenswrapper[4830]: I0227 16:31:33.339245 4830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cf85768-fd08-43b7-a8bf-a2738e493b22-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 16:31:33 crc kubenswrapper[4830]: I0227 16:31:33.339276 4830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5cf85768-fd08-43b7-a8bf-a2738e493b22-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 16:31:33 crc kubenswrapper[4830]: I0227 16:31:33.339294 4830 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5cf85768-fd08-43b7-a8bf-a2738e493b22-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 16:31:33 crc kubenswrapper[4830]: I0227 16:31:33.426110 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-g94gr"] Feb 27 16:31:33 crc kubenswrapper[4830]: I0227 16:31:33.426424 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6578955fd5-g94gr" podUID="eeb02ef6-6b7f-4e31-8446-f2376b49d69a" containerName="dnsmasq-dns" containerID="cri-o://8cb22e02dc7c56d9a73491851f8034a163c7f8516c7abd172d22f31cec725929" gracePeriod=10 Feb 27 16:31:33 crc kubenswrapper[4830]: I0227 16:31:33.637646 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-8mnsh" 
event={"ID":"5cf85768-fd08-43b7-a8bf-a2738e493b22","Type":"ContainerDied","Data":"9bc87efe4a6998e054a1cb73f7acdf00899c3787f6bbab5bcb911c5f203e54fb"} Feb 27 16:31:33 crc kubenswrapper[4830]: I0227 16:31:33.637699 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9bc87efe4a6998e054a1cb73f7acdf00899c3787f6bbab5bcb911c5f203e54fb" Feb 27 16:31:33 crc kubenswrapper[4830]: I0227 16:31:33.637660 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-8mnsh" Feb 27 16:31:33 crc kubenswrapper[4830]: I0227 16:31:33.640740 4830 generic.go:334] "Generic (PLEG): container finished" podID="43877352-b9c6-4179-82a0-3b194a870e8a" containerID="e8228851dc153740caa4991add05b87921eb8d07bae6164bd7ec594683dd08a2" exitCode=0 Feb 27 16:31:33 crc kubenswrapper[4830]: I0227 16:31:33.640806 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-b8tph" event={"ID":"43877352-b9c6-4179-82a0-3b194a870e8a","Type":"ContainerDied","Data":"e8228851dc153740caa4991add05b87921eb8d07bae6164bd7ec594683dd08a2"} Feb 27 16:31:33 crc kubenswrapper[4830]: I0227 16:31:33.644402 4830 generic.go:334] "Generic (PLEG): container finished" podID="eeb02ef6-6b7f-4e31-8446-f2376b49d69a" containerID="8cb22e02dc7c56d9a73491851f8034a163c7f8516c7abd172d22f31cec725929" exitCode=0 Feb 27 16:31:33 crc kubenswrapper[4830]: I0227 16:31:33.646104 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-g94gr" event={"ID":"eeb02ef6-6b7f-4e31-8446-f2376b49d69a","Type":"ContainerDied","Data":"8cb22e02dc7c56d9a73491851f8034a163c7f8516c7abd172d22f31cec725929"} Feb 27 16:31:33 crc kubenswrapper[4830]: I0227 16:31:33.688790 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 27 16:31:33 crc kubenswrapper[4830]: I0227 16:31:33.761570 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/nova-api-0"] Feb 27 16:31:33 crc kubenswrapper[4830]: I0227 16:31:33.761925 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="050ac6bf-ac1c-406d-af59-2259ceb05ff8" containerName="nova-api-log" containerID="cri-o://00d0e6466d0ca2034b4b6ada51d1f529a9778eb8658e79f7cb55a956e8889ff0" gracePeriod=30 Feb 27 16:31:33 crc kubenswrapper[4830]: I0227 16:31:33.762062 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="050ac6bf-ac1c-406d-af59-2259ceb05ff8" containerName="nova-api-api" containerID="cri-o://2471a08c8f1dbd502aa24cfd61bccc96c9b0d247bc7a1f8681485d3df12f3999" gracePeriod=30 Feb 27 16:31:33 crc kubenswrapper[4830]: I0227 16:31:33.769116 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="050ac6bf-ac1c-406d-af59-2259ceb05ff8" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.193:8774/\": EOF" Feb 27 16:31:33 crc kubenswrapper[4830]: I0227 16:31:33.769144 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="050ac6bf-ac1c-406d-af59-2259ceb05ff8" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.193:8774/\": EOF" Feb 27 16:31:33 crc kubenswrapper[4830]: I0227 16:31:33.825490 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 27 16:31:33 crc kubenswrapper[4830]: I0227 16:31:33.825687 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="76c31a2b-7df1-4d67-b7c9-71bbb2536891" containerName="nova-metadata-log" containerID="cri-o://3fbcef26336cdd6474955ee97c16bb97a54b9c672d754410553a5973a0f5a054" gracePeriod=30 Feb 27 16:31:33 crc kubenswrapper[4830]: I0227 16:31:33.826075 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" 
podUID="76c31a2b-7df1-4d67-b7c9-71bbb2536891" containerName="nova-metadata-metadata" containerID="cri-o://9830c8224ec287023f2ca25b198b4130742c807151da9ca3714819bfc8fdcd16" gracePeriod=30 Feb 27 16:31:33 crc kubenswrapper[4830]: I0227 16:31:33.858185 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-g94gr" Feb 27 16:31:33 crc kubenswrapper[4830]: I0227 16:31:33.951052 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ss7wg\" (UniqueName: \"kubernetes.io/projected/eeb02ef6-6b7f-4e31-8446-f2376b49d69a-kube-api-access-ss7wg\") pod \"eeb02ef6-6b7f-4e31-8446-f2376b49d69a\" (UID: \"eeb02ef6-6b7f-4e31-8446-f2376b49d69a\") " Feb 27 16:31:33 crc kubenswrapper[4830]: I0227 16:31:33.951090 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eeb02ef6-6b7f-4e31-8446-f2376b49d69a-ovsdbserver-sb\") pod \"eeb02ef6-6b7f-4e31-8446-f2376b49d69a\" (UID: \"eeb02ef6-6b7f-4e31-8446-f2376b49d69a\") " Feb 27 16:31:33 crc kubenswrapper[4830]: I0227 16:31:33.951134 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/eeb02ef6-6b7f-4e31-8446-f2376b49d69a-dns-swift-storage-0\") pod \"eeb02ef6-6b7f-4e31-8446-f2376b49d69a\" (UID: \"eeb02ef6-6b7f-4e31-8446-f2376b49d69a\") " Feb 27 16:31:33 crc kubenswrapper[4830]: I0227 16:31:33.951157 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eeb02ef6-6b7f-4e31-8446-f2376b49d69a-ovsdbserver-nb\") pod \"eeb02ef6-6b7f-4e31-8446-f2376b49d69a\" (UID: \"eeb02ef6-6b7f-4e31-8446-f2376b49d69a\") " Feb 27 16:31:33 crc kubenswrapper[4830]: I0227 16:31:33.951343 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/eeb02ef6-6b7f-4e31-8446-f2376b49d69a-dns-svc\") pod \"eeb02ef6-6b7f-4e31-8446-f2376b49d69a\" (UID: \"eeb02ef6-6b7f-4e31-8446-f2376b49d69a\") " Feb 27 16:31:33 crc kubenswrapper[4830]: I0227 16:31:33.951404 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eeb02ef6-6b7f-4e31-8446-f2376b49d69a-config\") pod \"eeb02ef6-6b7f-4e31-8446-f2376b49d69a\" (UID: \"eeb02ef6-6b7f-4e31-8446-f2376b49d69a\") " Feb 27 16:31:33 crc kubenswrapper[4830]: I0227 16:31:33.954879 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eeb02ef6-6b7f-4e31-8446-f2376b49d69a-kube-api-access-ss7wg" (OuterVolumeSpecName: "kube-api-access-ss7wg") pod "eeb02ef6-6b7f-4e31-8446-f2376b49d69a" (UID: "eeb02ef6-6b7f-4e31-8446-f2376b49d69a"). InnerVolumeSpecName "kube-api-access-ss7wg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:31:33 crc kubenswrapper[4830]: I0227 16:31:33.992006 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eeb02ef6-6b7f-4e31-8446-f2376b49d69a-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "eeb02ef6-6b7f-4e31-8446-f2376b49d69a" (UID: "eeb02ef6-6b7f-4e31-8446-f2376b49d69a"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:31:33 crc kubenswrapper[4830]: I0227 16:31:33.992839 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eeb02ef6-6b7f-4e31-8446-f2376b49d69a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "eeb02ef6-6b7f-4e31-8446-f2376b49d69a" (UID: "eeb02ef6-6b7f-4e31-8446-f2376b49d69a"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:31:33 crc kubenswrapper[4830]: I0227 16:31:33.995447 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eeb02ef6-6b7f-4e31-8446-f2376b49d69a-config" (OuterVolumeSpecName: "config") pod "eeb02ef6-6b7f-4e31-8446-f2376b49d69a" (UID: "eeb02ef6-6b7f-4e31-8446-f2376b49d69a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:31:34 crc kubenswrapper[4830]: I0227 16:31:34.001829 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eeb02ef6-6b7f-4e31-8446-f2376b49d69a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "eeb02ef6-6b7f-4e31-8446-f2376b49d69a" (UID: "eeb02ef6-6b7f-4e31-8446-f2376b49d69a"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:31:34 crc kubenswrapper[4830]: I0227 16:31:34.006252 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eeb02ef6-6b7f-4e31-8446-f2376b49d69a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "eeb02ef6-6b7f-4e31-8446-f2376b49d69a" (UID: "eeb02ef6-6b7f-4e31-8446-f2376b49d69a"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:31:34 crc kubenswrapper[4830]: I0227 16:31:34.053364 4830 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eeb02ef6-6b7f-4e31-8446-f2376b49d69a-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 27 16:31:34 crc kubenswrapper[4830]: I0227 16:31:34.053399 4830 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eeb02ef6-6b7f-4e31-8446-f2376b49d69a-config\") on node \"crc\" DevicePath \"\"" Feb 27 16:31:34 crc kubenswrapper[4830]: I0227 16:31:34.053415 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ss7wg\" (UniqueName: \"kubernetes.io/projected/eeb02ef6-6b7f-4e31-8446-f2376b49d69a-kube-api-access-ss7wg\") on node \"crc\" DevicePath \"\"" Feb 27 16:31:34 crc kubenswrapper[4830]: I0227 16:31:34.053429 4830 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eeb02ef6-6b7f-4e31-8446-f2376b49d69a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 27 16:31:34 crc kubenswrapper[4830]: I0227 16:31:34.053443 4830 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/eeb02ef6-6b7f-4e31-8446-f2376b49d69a-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 27 16:31:34 crc kubenswrapper[4830]: I0227 16:31:34.053454 4830 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eeb02ef6-6b7f-4e31-8446-f2376b49d69a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 27 16:31:34 crc kubenswrapper[4830]: I0227 16:31:34.205540 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 27 16:31:34 crc kubenswrapper[4830]: I0227 16:31:34.382180 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 27 16:31:34 crc kubenswrapper[4830]: I0227 16:31:34.571286 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/76c31a2b-7df1-4d67-b7c9-71bbb2536891-nova-metadata-tls-certs\") pod \"76c31a2b-7df1-4d67-b7c9-71bbb2536891\" (UID: \"76c31a2b-7df1-4d67-b7c9-71bbb2536891\") " Feb 27 16:31:34 crc kubenswrapper[4830]: I0227 16:31:34.571388 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/76c31a2b-7df1-4d67-b7c9-71bbb2536891-logs\") pod \"76c31a2b-7df1-4d67-b7c9-71bbb2536891\" (UID: \"76c31a2b-7df1-4d67-b7c9-71bbb2536891\") " Feb 27 16:31:34 crc kubenswrapper[4830]: I0227 16:31:34.571485 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76c31a2b-7df1-4d67-b7c9-71bbb2536891-config-data\") pod \"76c31a2b-7df1-4d67-b7c9-71bbb2536891\" (UID: \"76c31a2b-7df1-4d67-b7c9-71bbb2536891\") " Feb 27 16:31:34 crc kubenswrapper[4830]: I0227 16:31:34.571508 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d2mkl\" (UniqueName: \"kubernetes.io/projected/76c31a2b-7df1-4d67-b7c9-71bbb2536891-kube-api-access-d2mkl\") pod \"76c31a2b-7df1-4d67-b7c9-71bbb2536891\" (UID: \"76c31a2b-7df1-4d67-b7c9-71bbb2536891\") " Feb 27 16:31:34 crc kubenswrapper[4830]: I0227 16:31:34.571571 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76c31a2b-7df1-4d67-b7c9-71bbb2536891-combined-ca-bundle\") pod \"76c31a2b-7df1-4d67-b7c9-71bbb2536891\" (UID: \"76c31a2b-7df1-4d67-b7c9-71bbb2536891\") " Feb 27 16:31:34 crc kubenswrapper[4830]: I0227 16:31:34.572506 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/76c31a2b-7df1-4d67-b7c9-71bbb2536891-logs" (OuterVolumeSpecName: "logs") pod "76c31a2b-7df1-4d67-b7c9-71bbb2536891" (UID: "76c31a2b-7df1-4d67-b7c9-71bbb2536891"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 16:31:34 crc kubenswrapper[4830]: I0227 16:31:34.579181 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76c31a2b-7df1-4d67-b7c9-71bbb2536891-kube-api-access-d2mkl" (OuterVolumeSpecName: "kube-api-access-d2mkl") pod "76c31a2b-7df1-4d67-b7c9-71bbb2536891" (UID: "76c31a2b-7df1-4d67-b7c9-71bbb2536891"). InnerVolumeSpecName "kube-api-access-d2mkl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:31:34 crc kubenswrapper[4830]: I0227 16:31:34.599671 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76c31a2b-7df1-4d67-b7c9-71bbb2536891-config-data" (OuterVolumeSpecName: "config-data") pod "76c31a2b-7df1-4d67-b7c9-71bbb2536891" (UID: "76c31a2b-7df1-4d67-b7c9-71bbb2536891"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:31:34 crc kubenswrapper[4830]: I0227 16:31:34.620460 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76c31a2b-7df1-4d67-b7c9-71bbb2536891-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "76c31a2b-7df1-4d67-b7c9-71bbb2536891" (UID: "76c31a2b-7df1-4d67-b7c9-71bbb2536891"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:31:34 crc kubenswrapper[4830]: I0227 16:31:34.640191 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76c31a2b-7df1-4d67-b7c9-71bbb2536891-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "76c31a2b-7df1-4d67-b7c9-71bbb2536891" (UID: "76c31a2b-7df1-4d67-b7c9-71bbb2536891"). 
InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:31:34 crc kubenswrapper[4830]: I0227 16:31:34.658773 4830 generic.go:334] "Generic (PLEG): container finished" podID="050ac6bf-ac1c-406d-af59-2259ceb05ff8" containerID="00d0e6466d0ca2034b4b6ada51d1f529a9778eb8658e79f7cb55a956e8889ff0" exitCode=143 Feb 27 16:31:34 crc kubenswrapper[4830]: I0227 16:31:34.659096 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"050ac6bf-ac1c-406d-af59-2259ceb05ff8","Type":"ContainerDied","Data":"00d0e6466d0ca2034b4b6ada51d1f529a9778eb8658e79f7cb55a956e8889ff0"} Feb 27 16:31:34 crc kubenswrapper[4830]: I0227 16:31:34.661294 4830 generic.go:334] "Generic (PLEG): container finished" podID="76c31a2b-7df1-4d67-b7c9-71bbb2536891" containerID="9830c8224ec287023f2ca25b198b4130742c807151da9ca3714819bfc8fdcd16" exitCode=0 Feb 27 16:31:34 crc kubenswrapper[4830]: I0227 16:31:34.661313 4830 generic.go:334] "Generic (PLEG): container finished" podID="76c31a2b-7df1-4d67-b7c9-71bbb2536891" containerID="3fbcef26336cdd6474955ee97c16bb97a54b9c672d754410553a5973a0f5a054" exitCode=143 Feb 27 16:31:34 crc kubenswrapper[4830]: I0227 16:31:34.661371 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 27 16:31:34 crc kubenswrapper[4830]: I0227 16:31:34.661388 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"76c31a2b-7df1-4d67-b7c9-71bbb2536891","Type":"ContainerDied","Data":"9830c8224ec287023f2ca25b198b4130742c807151da9ca3714819bfc8fdcd16"} Feb 27 16:31:34 crc kubenswrapper[4830]: I0227 16:31:34.661431 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"76c31a2b-7df1-4d67-b7c9-71bbb2536891","Type":"ContainerDied","Data":"3fbcef26336cdd6474955ee97c16bb97a54b9c672d754410553a5973a0f5a054"} Feb 27 16:31:34 crc kubenswrapper[4830]: I0227 16:31:34.661442 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"76c31a2b-7df1-4d67-b7c9-71bbb2536891","Type":"ContainerDied","Data":"b4e868567a1c3c02b8bbbc9effd17618fb777d1865ea6cc995ba4e8003d50c25"} Feb 27 16:31:34 crc kubenswrapper[4830]: I0227 16:31:34.661457 4830 scope.go:117] "RemoveContainer" containerID="9830c8224ec287023f2ca25b198b4130742c807151da9ca3714819bfc8fdcd16" Feb 27 16:31:34 crc kubenswrapper[4830]: I0227 16:31:34.663771 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-g94gr" event={"ID":"eeb02ef6-6b7f-4e31-8446-f2376b49d69a","Type":"ContainerDied","Data":"92ee00367a3de4046aa627874cec439510e16c033fb5062bbff130ecb2d11c30"} Feb 27 16:31:34 crc kubenswrapper[4830]: I0227 16:31:34.664031 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-g94gr" Feb 27 16:31:34 crc kubenswrapper[4830]: I0227 16:31:34.674087 4830 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/76c31a2b-7df1-4d67-b7c9-71bbb2536891-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 27 16:31:34 crc kubenswrapper[4830]: I0227 16:31:34.674124 4830 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/76c31a2b-7df1-4d67-b7c9-71bbb2536891-logs\") on node \"crc\" DevicePath \"\"" Feb 27 16:31:34 crc kubenswrapper[4830]: I0227 16:31:34.674137 4830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76c31a2b-7df1-4d67-b7c9-71bbb2536891-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 16:31:34 crc kubenswrapper[4830]: I0227 16:31:34.674150 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d2mkl\" (UniqueName: \"kubernetes.io/projected/76c31a2b-7df1-4d67-b7c9-71bbb2536891-kube-api-access-d2mkl\") on node \"crc\" DevicePath \"\"" Feb 27 16:31:34 crc kubenswrapper[4830]: I0227 16:31:34.674162 4830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76c31a2b-7df1-4d67-b7c9-71bbb2536891-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 16:31:34 crc kubenswrapper[4830]: I0227 16:31:34.794676 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-g94gr"] Feb 27 16:31:34 crc kubenswrapper[4830]: I0227 16:31:34.795801 4830 scope.go:117] "RemoveContainer" containerID="3fbcef26336cdd6474955ee97c16bb97a54b9c672d754410553a5973a0f5a054" Feb 27 16:31:34 crc kubenswrapper[4830]: I0227 16:31:34.797513 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-g94gr"] Feb 27 16:31:34 crc kubenswrapper[4830]: I0227 16:31:34.809059 4830 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 27 16:31:34 crc kubenswrapper[4830]: I0227 16:31:34.819979 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 27 16:31:34 crc kubenswrapper[4830]: I0227 16:31:34.842002 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 27 16:31:34 crc kubenswrapper[4830]: E0227 16:31:34.842442 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eeb02ef6-6b7f-4e31-8446-f2376b49d69a" containerName="dnsmasq-dns" Feb 27 16:31:34 crc kubenswrapper[4830]: I0227 16:31:34.842459 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="eeb02ef6-6b7f-4e31-8446-f2376b49d69a" containerName="dnsmasq-dns" Feb 27 16:31:34 crc kubenswrapper[4830]: E0227 16:31:34.842485 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76c31a2b-7df1-4d67-b7c9-71bbb2536891" containerName="nova-metadata-log" Feb 27 16:31:34 crc kubenswrapper[4830]: I0227 16:31:34.842491 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="76c31a2b-7df1-4d67-b7c9-71bbb2536891" containerName="nova-metadata-log" Feb 27 16:31:34 crc kubenswrapper[4830]: E0227 16:31:34.842514 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76c31a2b-7df1-4d67-b7c9-71bbb2536891" containerName="nova-metadata-metadata" Feb 27 16:31:34 crc kubenswrapper[4830]: I0227 16:31:34.842520 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="76c31a2b-7df1-4d67-b7c9-71bbb2536891" containerName="nova-metadata-metadata" Feb 27 16:31:34 crc kubenswrapper[4830]: E0227 16:31:34.842531 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5cf85768-fd08-43b7-a8bf-a2738e493b22" containerName="nova-manage" Feb 27 16:31:34 crc kubenswrapper[4830]: I0227 16:31:34.842537 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="5cf85768-fd08-43b7-a8bf-a2738e493b22" containerName="nova-manage" Feb 27 16:31:34 crc 
kubenswrapper[4830]: E0227 16:31:34.842548 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eeb02ef6-6b7f-4e31-8446-f2376b49d69a" containerName="init" Feb 27 16:31:34 crc kubenswrapper[4830]: I0227 16:31:34.842554 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="eeb02ef6-6b7f-4e31-8446-f2376b49d69a" containerName="init" Feb 27 16:31:34 crc kubenswrapper[4830]: I0227 16:31:34.842718 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="eeb02ef6-6b7f-4e31-8446-f2376b49d69a" containerName="dnsmasq-dns" Feb 27 16:31:34 crc kubenswrapper[4830]: I0227 16:31:34.842746 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="76c31a2b-7df1-4d67-b7c9-71bbb2536891" containerName="nova-metadata-log" Feb 27 16:31:34 crc kubenswrapper[4830]: I0227 16:31:34.842758 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="5cf85768-fd08-43b7-a8bf-a2738e493b22" containerName="nova-manage" Feb 27 16:31:34 crc kubenswrapper[4830]: I0227 16:31:34.842766 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="76c31a2b-7df1-4d67-b7c9-71bbb2536891" containerName="nova-metadata-metadata" Feb 27 16:31:34 crc kubenswrapper[4830]: I0227 16:31:34.843724 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 27 16:31:34 crc kubenswrapper[4830]: I0227 16:31:34.859304 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 27 16:31:34 crc kubenswrapper[4830]: I0227 16:31:34.859574 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 27 16:31:34 crc kubenswrapper[4830]: I0227 16:31:34.862450 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 27 16:31:34 crc kubenswrapper[4830]: I0227 16:31:34.870854 4830 scope.go:117] "RemoveContainer" containerID="9830c8224ec287023f2ca25b198b4130742c807151da9ca3714819bfc8fdcd16" Feb 27 16:31:34 crc kubenswrapper[4830]: E0227 16:31:34.878267 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9830c8224ec287023f2ca25b198b4130742c807151da9ca3714819bfc8fdcd16\": container with ID starting with 9830c8224ec287023f2ca25b198b4130742c807151da9ca3714819bfc8fdcd16 not found: ID does not exist" containerID="9830c8224ec287023f2ca25b198b4130742c807151da9ca3714819bfc8fdcd16" Feb 27 16:31:34 crc kubenswrapper[4830]: I0227 16:31:34.878307 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9830c8224ec287023f2ca25b198b4130742c807151da9ca3714819bfc8fdcd16"} err="failed to get container status \"9830c8224ec287023f2ca25b198b4130742c807151da9ca3714819bfc8fdcd16\": rpc error: code = NotFound desc = could not find container \"9830c8224ec287023f2ca25b198b4130742c807151da9ca3714819bfc8fdcd16\": container with ID starting with 9830c8224ec287023f2ca25b198b4130742c807151da9ca3714819bfc8fdcd16 not found: ID does not exist" Feb 27 16:31:34 crc kubenswrapper[4830]: I0227 16:31:34.878331 4830 scope.go:117] "RemoveContainer" containerID="3fbcef26336cdd6474955ee97c16bb97a54b9c672d754410553a5973a0f5a054" Feb 27 16:31:34 crc 
kubenswrapper[4830]: E0227 16:31:34.881086 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3fbcef26336cdd6474955ee97c16bb97a54b9c672d754410553a5973a0f5a054\": container with ID starting with 3fbcef26336cdd6474955ee97c16bb97a54b9c672d754410553a5973a0f5a054 not found: ID does not exist" containerID="3fbcef26336cdd6474955ee97c16bb97a54b9c672d754410553a5973a0f5a054" Feb 27 16:31:34 crc kubenswrapper[4830]: I0227 16:31:34.881110 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3fbcef26336cdd6474955ee97c16bb97a54b9c672d754410553a5973a0f5a054"} err="failed to get container status \"3fbcef26336cdd6474955ee97c16bb97a54b9c672d754410553a5973a0f5a054\": rpc error: code = NotFound desc = could not find container \"3fbcef26336cdd6474955ee97c16bb97a54b9c672d754410553a5973a0f5a054\": container with ID starting with 3fbcef26336cdd6474955ee97c16bb97a54b9c672d754410553a5973a0f5a054 not found: ID does not exist" Feb 27 16:31:34 crc kubenswrapper[4830]: I0227 16:31:34.881142 4830 scope.go:117] "RemoveContainer" containerID="9830c8224ec287023f2ca25b198b4130742c807151da9ca3714819bfc8fdcd16" Feb 27 16:31:34 crc kubenswrapper[4830]: I0227 16:31:34.882902 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9830c8224ec287023f2ca25b198b4130742c807151da9ca3714819bfc8fdcd16"} err="failed to get container status \"9830c8224ec287023f2ca25b198b4130742c807151da9ca3714819bfc8fdcd16\": rpc error: code = NotFound desc = could not find container \"9830c8224ec287023f2ca25b198b4130742c807151da9ca3714819bfc8fdcd16\": container with ID starting with 9830c8224ec287023f2ca25b198b4130742c807151da9ca3714819bfc8fdcd16 not found: ID does not exist" Feb 27 16:31:34 crc kubenswrapper[4830]: I0227 16:31:34.882968 4830 scope.go:117] "RemoveContainer" containerID="3fbcef26336cdd6474955ee97c16bb97a54b9c672d754410553a5973a0f5a054" Feb 27 
16:31:34 crc kubenswrapper[4830]: I0227 16:31:34.883291 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3fbcef26336cdd6474955ee97c16bb97a54b9c672d754410553a5973a0f5a054"} err="failed to get container status \"3fbcef26336cdd6474955ee97c16bb97a54b9c672d754410553a5973a0f5a054\": rpc error: code = NotFound desc = could not find container \"3fbcef26336cdd6474955ee97c16bb97a54b9c672d754410553a5973a0f5a054\": container with ID starting with 3fbcef26336cdd6474955ee97c16bb97a54b9c672d754410553a5973a0f5a054 not found: ID does not exist" Feb 27 16:31:34 crc kubenswrapper[4830]: I0227 16:31:34.883310 4830 scope.go:117] "RemoveContainer" containerID="8cb22e02dc7c56d9a73491851f8034a163c7f8516c7abd172d22f31cec725929" Feb 27 16:31:34 crc kubenswrapper[4830]: I0227 16:31:34.952242 4830 scope.go:117] "RemoveContainer" containerID="533b31f6d0ca4f32c6256537889ca87b10608b92bd1415efbb3780a2f2b99d4c" Feb 27 16:31:34 crc kubenswrapper[4830]: I0227 16:31:34.983694 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34e10b21-9e53-464a-a707-cb587ab15199-config-data\") pod \"nova-metadata-0\" (UID: \"34e10b21-9e53-464a-a707-cb587ab15199\") " pod="openstack/nova-metadata-0" Feb 27 16:31:34 crc kubenswrapper[4830]: I0227 16:31:34.983767 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4klkj\" (UniqueName: \"kubernetes.io/projected/34e10b21-9e53-464a-a707-cb587ab15199-kube-api-access-4klkj\") pod \"nova-metadata-0\" (UID: \"34e10b21-9e53-464a-a707-cb587ab15199\") " pod="openstack/nova-metadata-0" Feb 27 16:31:34 crc kubenswrapper[4830]: I0227 16:31:34.983816 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/34e10b21-9e53-464a-a707-cb587ab15199-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"34e10b21-9e53-464a-a707-cb587ab15199\") " pod="openstack/nova-metadata-0" Feb 27 16:31:34 crc kubenswrapper[4830]: I0227 16:31:34.983895 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/34e10b21-9e53-464a-a707-cb587ab15199-logs\") pod \"nova-metadata-0\" (UID: \"34e10b21-9e53-464a-a707-cb587ab15199\") " pod="openstack/nova-metadata-0" Feb 27 16:31:34 crc kubenswrapper[4830]: I0227 16:31:34.983961 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34e10b21-9e53-464a-a707-cb587ab15199-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"34e10b21-9e53-464a-a707-cb587ab15199\") " pod="openstack/nova-metadata-0" Feb 27 16:31:35 crc kubenswrapper[4830]: I0227 16:31:35.085268 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/34e10b21-9e53-464a-a707-cb587ab15199-logs\") pod \"nova-metadata-0\" (UID: \"34e10b21-9e53-464a-a707-cb587ab15199\") " pod="openstack/nova-metadata-0" Feb 27 16:31:35 crc kubenswrapper[4830]: I0227 16:31:35.085573 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34e10b21-9e53-464a-a707-cb587ab15199-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"34e10b21-9e53-464a-a707-cb587ab15199\") " pod="openstack/nova-metadata-0" Feb 27 16:31:35 crc kubenswrapper[4830]: I0227 16:31:35.085643 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34e10b21-9e53-464a-a707-cb587ab15199-config-data\") pod \"nova-metadata-0\" (UID: \"34e10b21-9e53-464a-a707-cb587ab15199\") " 
pod="openstack/nova-metadata-0" Feb 27 16:31:35 crc kubenswrapper[4830]: I0227 16:31:35.085686 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4klkj\" (UniqueName: \"kubernetes.io/projected/34e10b21-9e53-464a-a707-cb587ab15199-kube-api-access-4klkj\") pod \"nova-metadata-0\" (UID: \"34e10b21-9e53-464a-a707-cb587ab15199\") " pod="openstack/nova-metadata-0" Feb 27 16:31:35 crc kubenswrapper[4830]: I0227 16:31:35.085715 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/34e10b21-9e53-464a-a707-cb587ab15199-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"34e10b21-9e53-464a-a707-cb587ab15199\") " pod="openstack/nova-metadata-0" Feb 27 16:31:35 crc kubenswrapper[4830]: I0227 16:31:35.085744 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/34e10b21-9e53-464a-a707-cb587ab15199-logs\") pod \"nova-metadata-0\" (UID: \"34e10b21-9e53-464a-a707-cb587ab15199\") " pod="openstack/nova-metadata-0" Feb 27 16:31:35 crc kubenswrapper[4830]: I0227 16:31:35.093472 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/34e10b21-9e53-464a-a707-cb587ab15199-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"34e10b21-9e53-464a-a707-cb587ab15199\") " pod="openstack/nova-metadata-0" Feb 27 16:31:35 crc kubenswrapper[4830]: I0227 16:31:35.093577 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34e10b21-9e53-464a-a707-cb587ab15199-config-data\") pod \"nova-metadata-0\" (UID: \"34e10b21-9e53-464a-a707-cb587ab15199\") " pod="openstack/nova-metadata-0" Feb 27 16:31:35 crc kubenswrapper[4830]: I0227 16:31:35.094549 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/34e10b21-9e53-464a-a707-cb587ab15199-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"34e10b21-9e53-464a-a707-cb587ab15199\") " pod="openstack/nova-metadata-0" Feb 27 16:31:35 crc kubenswrapper[4830]: I0227 16:31:35.108060 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4klkj\" (UniqueName: \"kubernetes.io/projected/34e10b21-9e53-464a-a707-cb587ab15199-kube-api-access-4klkj\") pod \"nova-metadata-0\" (UID: \"34e10b21-9e53-464a-a707-cb587ab15199\") " pod="openstack/nova-metadata-0" Feb 27 16:31:35 crc kubenswrapper[4830]: I0227 16:31:35.181028 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-b8tph" Feb 27 16:31:35 crc kubenswrapper[4830]: I0227 16:31:35.195363 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 27 16:31:35 crc kubenswrapper[4830]: I0227 16:31:35.288609 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/43877352-b9c6-4179-82a0-3b194a870e8a-scripts\") pod \"43877352-b9c6-4179-82a0-3b194a870e8a\" (UID: \"43877352-b9c6-4179-82a0-3b194a870e8a\") " Feb 27 16:31:35 crc kubenswrapper[4830]: I0227 16:31:35.288685 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43877352-b9c6-4179-82a0-3b194a870e8a-config-data\") pod \"43877352-b9c6-4179-82a0-3b194a870e8a\" (UID: \"43877352-b9c6-4179-82a0-3b194a870e8a\") " Feb 27 16:31:35 crc kubenswrapper[4830]: I0227 16:31:35.288762 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43877352-b9c6-4179-82a0-3b194a870e8a-combined-ca-bundle\") pod \"43877352-b9c6-4179-82a0-3b194a870e8a\" (UID: \"43877352-b9c6-4179-82a0-3b194a870e8a\") " Feb 27 16:31:35 crc 
kubenswrapper[4830]: I0227 16:31:35.288792 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-spp26\" (UniqueName: \"kubernetes.io/projected/43877352-b9c6-4179-82a0-3b194a870e8a-kube-api-access-spp26\") pod \"43877352-b9c6-4179-82a0-3b194a870e8a\" (UID: \"43877352-b9c6-4179-82a0-3b194a870e8a\") "
Feb 27 16:31:35 crc kubenswrapper[4830]: I0227 16:31:35.293117 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43877352-b9c6-4179-82a0-3b194a870e8a-scripts" (OuterVolumeSpecName: "scripts") pod "43877352-b9c6-4179-82a0-3b194a870e8a" (UID: "43877352-b9c6-4179-82a0-3b194a870e8a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 16:31:35 crc kubenswrapper[4830]: I0227 16:31:35.297445 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43877352-b9c6-4179-82a0-3b194a870e8a-kube-api-access-spp26" (OuterVolumeSpecName: "kube-api-access-spp26") pod "43877352-b9c6-4179-82a0-3b194a870e8a" (UID: "43877352-b9c6-4179-82a0-3b194a870e8a"). InnerVolumeSpecName "kube-api-access-spp26". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 16:31:35 crc kubenswrapper[4830]: I0227 16:31:35.317158 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43877352-b9c6-4179-82a0-3b194a870e8a-config-data" (OuterVolumeSpecName: "config-data") pod "43877352-b9c6-4179-82a0-3b194a870e8a" (UID: "43877352-b9c6-4179-82a0-3b194a870e8a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 16:31:35 crc kubenswrapper[4830]: I0227 16:31:35.331625 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43877352-b9c6-4179-82a0-3b194a870e8a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "43877352-b9c6-4179-82a0-3b194a870e8a" (UID: "43877352-b9c6-4179-82a0-3b194a870e8a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 16:31:35 crc kubenswrapper[4830]: I0227 16:31:35.391126 4830 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/43877352-b9c6-4179-82a0-3b194a870e8a-scripts\") on node \"crc\" DevicePath \"\""
Feb 27 16:31:35 crc kubenswrapper[4830]: I0227 16:31:35.391150 4830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43877352-b9c6-4179-82a0-3b194a870e8a-config-data\") on node \"crc\" DevicePath \"\""
Feb 27 16:31:35 crc kubenswrapper[4830]: I0227 16:31:35.391161 4830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43877352-b9c6-4179-82a0-3b194a870e8a-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 27 16:31:35 crc kubenswrapper[4830]: I0227 16:31:35.391172 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-spp26\" (UniqueName: \"kubernetes.io/projected/43877352-b9c6-4179-82a0-3b194a870e8a-kube-api-access-spp26\") on node \"crc\" DevicePath \"\""
Feb 27 16:31:35 crc kubenswrapper[4830]: I0227 16:31:35.687577 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="50ea60ad-3437-435c-ba9c-462adae597a2" containerName="nova-scheduler-scheduler" containerID="cri-o://309f6573e73b8d560ad1c1da3fcdeddf2c652c2005f376d7dfe4b638529e7068" gracePeriod=30
Feb 27 16:31:35 crc kubenswrapper[4830]: I0227 16:31:35.687835 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-b8tph"
Feb 27 16:31:35 crc kubenswrapper[4830]: I0227 16:31:35.690588 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-b8tph" event={"ID":"43877352-b9c6-4179-82a0-3b194a870e8a","Type":"ContainerDied","Data":"da6b5581921b79491915387ead00a363d139fbf422b269d22e3d2829aa4943e8"}
Feb 27 16:31:35 crc kubenswrapper[4830]: I0227 16:31:35.690663 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="da6b5581921b79491915387ead00a363d139fbf422b269d22e3d2829aa4943e8"
Feb 27 16:31:35 crc kubenswrapper[4830]: I0227 16:31:35.729478 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Feb 27 16:31:35 crc kubenswrapper[4830]: W0227 16:31:35.731769 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod34e10b21_9e53_464a_a707_cb587ab15199.slice/crio-d4174918fb6c20d990c1995356845eb5e906d733e7b0ba614eec5de386d4c062 WatchSource:0}: Error finding container d4174918fb6c20d990c1995356845eb5e906d733e7b0ba614eec5de386d4c062: Status 404 returned error can't find the container with id d4174918fb6c20d990c1995356845eb5e906d733e7b0ba614eec5de386d4c062
Feb 27 16:31:35 crc kubenswrapper[4830]: I0227 16:31:35.757564 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"]
Feb 27 16:31:35 crc kubenswrapper[4830]: E0227 16:31:35.758048 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43877352-b9c6-4179-82a0-3b194a870e8a" containerName="nova-cell1-conductor-db-sync"
Feb 27 16:31:35 crc kubenswrapper[4830]: I0227 16:31:35.758061 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="43877352-b9c6-4179-82a0-3b194a870e8a" containerName="nova-cell1-conductor-db-sync"
Feb 27 16:31:35 crc kubenswrapper[4830]: I0227 16:31:35.758228 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="43877352-b9c6-4179-82a0-3b194a870e8a" containerName="nova-cell1-conductor-db-sync"
Feb 27 16:31:35 crc kubenswrapper[4830]: I0227 16:31:35.758833 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0"
Feb 27 16:31:35 crc kubenswrapper[4830]: I0227 16:31:35.760677 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data"
Feb 27 16:31:35 crc kubenswrapper[4830]: I0227 16:31:35.778431 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Feb 27 16:31:35 crc kubenswrapper[4830]: I0227 16:31:35.798601 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a989aa76-9246-46b2-9f1e-7900cfecedc2-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"a989aa76-9246-46b2-9f1e-7900cfecedc2\") " pod="openstack/nova-cell1-conductor-0"
Feb 27 16:31:35 crc kubenswrapper[4830]: I0227 16:31:35.800079 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a989aa76-9246-46b2-9f1e-7900cfecedc2-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"a989aa76-9246-46b2-9f1e-7900cfecedc2\") " pod="openstack/nova-cell1-conductor-0"
Feb 27 16:31:35 crc kubenswrapper[4830]: I0227 16:31:35.800129 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcn4p\" (UniqueName: \"kubernetes.io/projected/a989aa76-9246-46b2-9f1e-7900cfecedc2-kube-api-access-rcn4p\") pod \"nova-cell1-conductor-0\" (UID: \"a989aa76-9246-46b2-9f1e-7900cfecedc2\") " pod="openstack/nova-cell1-conductor-0"
Feb 27 16:31:35 crc kubenswrapper[4830]: I0227 16:31:35.902802 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a989aa76-9246-46b2-9f1e-7900cfecedc2-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"a989aa76-9246-46b2-9f1e-7900cfecedc2\") " pod="openstack/nova-cell1-conductor-0"
Feb 27 16:31:35 crc kubenswrapper[4830]: I0227 16:31:35.903019 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a989aa76-9246-46b2-9f1e-7900cfecedc2-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"a989aa76-9246-46b2-9f1e-7900cfecedc2\") " pod="openstack/nova-cell1-conductor-0"
Feb 27 16:31:35 crc kubenswrapper[4830]: I0227 16:31:35.903059 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rcn4p\" (UniqueName: \"kubernetes.io/projected/a989aa76-9246-46b2-9f1e-7900cfecedc2-kube-api-access-rcn4p\") pod \"nova-cell1-conductor-0\" (UID: \"a989aa76-9246-46b2-9f1e-7900cfecedc2\") " pod="openstack/nova-cell1-conductor-0"
Feb 27 16:31:35 crc kubenswrapper[4830]: I0227 16:31:35.907925 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a989aa76-9246-46b2-9f1e-7900cfecedc2-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"a989aa76-9246-46b2-9f1e-7900cfecedc2\") " pod="openstack/nova-cell1-conductor-0"
Feb 27 16:31:35 crc kubenswrapper[4830]: I0227 16:31:35.911479 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a989aa76-9246-46b2-9f1e-7900cfecedc2-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"a989aa76-9246-46b2-9f1e-7900cfecedc2\") " pod="openstack/nova-cell1-conductor-0"
Feb 27 16:31:35 crc kubenswrapper[4830]: I0227 16:31:35.920641 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rcn4p\" (UniqueName: \"kubernetes.io/projected/a989aa76-9246-46b2-9f1e-7900cfecedc2-kube-api-access-rcn4p\") pod \"nova-cell1-conductor-0\" (UID: \"a989aa76-9246-46b2-9f1e-7900cfecedc2\") " pod="openstack/nova-cell1-conductor-0"
Feb 27 16:31:36 crc kubenswrapper[4830]: I0227 16:31:36.080594 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0"
Feb 27 16:31:36 crc kubenswrapper[4830]: I0227 16:31:36.586390 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Feb 27 16:31:36 crc kubenswrapper[4830]: I0227 16:31:36.704785 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"34e10b21-9e53-464a-a707-cb587ab15199","Type":"ContainerStarted","Data":"ed489a7ac08882b9d78c835ff27e520076b06b9bccf96912640b432acd899e9a"}
Feb 27 16:31:36 crc kubenswrapper[4830]: I0227 16:31:36.705470 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"34e10b21-9e53-464a-a707-cb587ab15199","Type":"ContainerStarted","Data":"88b9e53a42c4a09fc14bc5ea02179f51920e7ad14dd3c90d36fd4745296b055e"}
Feb 27 16:31:36 crc kubenswrapper[4830]: I0227 16:31:36.705495 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"34e10b21-9e53-464a-a707-cb587ab15199","Type":"ContainerStarted","Data":"d4174918fb6c20d990c1995356845eb5e906d733e7b0ba614eec5de386d4c062"}
Feb 27 16:31:36 crc kubenswrapper[4830]: I0227 16:31:36.707309 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"a989aa76-9246-46b2-9f1e-7900cfecedc2","Type":"ContainerStarted","Data":"0cdeaecb8f58ab83bb70e3c942e1583e6c782dcc702e86c59532ba7ea8a3d3a3"}
Feb 27 16:31:36 crc kubenswrapper[4830]: I0227 16:31:36.745371 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.745334515 podStartE2EDuration="2.745334515s" podCreationTimestamp="2026-02-27 16:31:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:31:36.729840881 +0000 UTC m=+1492.819113384" watchObservedRunningTime="2026-02-27 16:31:36.745334515 +0000 UTC m=+1492.834607008"
Feb 27 16:31:36 crc kubenswrapper[4830]: I0227 16:31:36.778716 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76c31a2b-7df1-4d67-b7c9-71bbb2536891" path="/var/lib/kubelet/pods/76c31a2b-7df1-4d67-b7c9-71bbb2536891/volumes"
Feb 27 16:31:36 crc kubenswrapper[4830]: I0227 16:31:36.779529 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eeb02ef6-6b7f-4e31-8446-f2376b49d69a" path="/var/lib/kubelet/pods/eeb02ef6-6b7f-4e31-8446-f2376b49d69a/volumes"
Feb 27 16:31:37 crc kubenswrapper[4830]: I0227 16:31:37.725194 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"a989aa76-9246-46b2-9f1e-7900cfecedc2","Type":"ContainerStarted","Data":"0177eede3f4945d97bcd0d90fed75c1aa58d1276a7fd71e80b0683515562f9b1"}
Feb 27 16:31:37 crc kubenswrapper[4830]: I0227 16:31:37.725407 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0"
Feb 27 16:31:37 crc kubenswrapper[4830]: E0227 16:31:37.882823 4830 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="309f6573e73b8d560ad1c1da3fcdeddf2c652c2005f376d7dfe4b638529e7068" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Feb 27 16:31:37 crc kubenswrapper[4830]: E0227 16:31:37.885499 4830 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="309f6573e73b8d560ad1c1da3fcdeddf2c652c2005f376d7dfe4b638529e7068" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Feb 27 16:31:37 crc kubenswrapper[4830]: E0227 16:31:37.889836 4830 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="309f6573e73b8d560ad1c1da3fcdeddf2c652c2005f376d7dfe4b638529e7068" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Feb 27 16:31:37 crc kubenswrapper[4830]: E0227 16:31:37.889900 4830 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="50ea60ad-3437-435c-ba9c-462adae597a2" containerName="nova-scheduler-scheduler"
Feb 27 16:31:38 crc kubenswrapper[4830]: E0227 16:31:38.818737 4830 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod50ea60ad_3437_435c_ba9c_462adae597a2.slice/crio-309f6573e73b8d560ad1c1da3fcdeddf2c652c2005f376d7dfe4b638529e7068.scope\": RecentStats: unable to find data in memory cache]"
Feb 27 16:31:39 crc kubenswrapper[4830]: I0227 16:31:39.323070 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 27 16:31:39 crc kubenswrapper[4830]: I0227 16:31:39.344209 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=4.34418304 podStartE2EDuration="4.34418304s" podCreationTimestamp="2026-02-27 16:31:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:31:37.755330863 +0000 UTC m=+1493.844603366" watchObservedRunningTime="2026-02-27 16:31:39.34418304 +0000 UTC m=+1495.433455543"
Feb 27 16:31:39 crc kubenswrapper[4830]: I0227 16:31:39.493120 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50ea60ad-3437-435c-ba9c-462adae597a2-combined-ca-bundle\") pod \"50ea60ad-3437-435c-ba9c-462adae597a2\" (UID: \"50ea60ad-3437-435c-ba9c-462adae597a2\") "
Feb 27 16:31:39 crc kubenswrapper[4830]: I0227 16:31:39.493241 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dwxq2\" (UniqueName: \"kubernetes.io/projected/50ea60ad-3437-435c-ba9c-462adae597a2-kube-api-access-dwxq2\") pod \"50ea60ad-3437-435c-ba9c-462adae597a2\" (UID: \"50ea60ad-3437-435c-ba9c-462adae597a2\") "
Feb 27 16:31:39 crc kubenswrapper[4830]: I0227 16:31:39.493330 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50ea60ad-3437-435c-ba9c-462adae597a2-config-data\") pod \"50ea60ad-3437-435c-ba9c-462adae597a2\" (UID: \"50ea60ad-3437-435c-ba9c-462adae597a2\") "
Feb 27 16:31:39 crc kubenswrapper[4830]: I0227 16:31:39.500570 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50ea60ad-3437-435c-ba9c-462adae597a2-kube-api-access-dwxq2" (OuterVolumeSpecName: "kube-api-access-dwxq2") pod "50ea60ad-3437-435c-ba9c-462adae597a2" (UID: "50ea60ad-3437-435c-ba9c-462adae597a2"). InnerVolumeSpecName "kube-api-access-dwxq2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 16:31:39 crc kubenswrapper[4830]: I0227 16:31:39.535816 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50ea60ad-3437-435c-ba9c-462adae597a2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "50ea60ad-3437-435c-ba9c-462adae597a2" (UID: "50ea60ad-3437-435c-ba9c-462adae597a2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 16:31:39 crc kubenswrapper[4830]: I0227 16:31:39.565825 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50ea60ad-3437-435c-ba9c-462adae597a2-config-data" (OuterVolumeSpecName: "config-data") pod "50ea60ad-3437-435c-ba9c-462adae597a2" (UID: "50ea60ad-3437-435c-ba9c-462adae597a2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 16:31:39 crc kubenswrapper[4830]: I0227 16:31:39.603697 4830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50ea60ad-3437-435c-ba9c-462adae597a2-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 27 16:31:39 crc kubenswrapper[4830]: I0227 16:31:39.603750 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dwxq2\" (UniqueName: \"kubernetes.io/projected/50ea60ad-3437-435c-ba9c-462adae597a2-kube-api-access-dwxq2\") on node \"crc\" DevicePath \"\""
Feb 27 16:31:39 crc kubenswrapper[4830]: I0227 16:31:39.603763 4830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50ea60ad-3437-435c-ba9c-462adae597a2-config-data\") on node \"crc\" DevicePath \"\""
Feb 27 16:31:39 crc kubenswrapper[4830]: I0227 16:31:39.605764 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 27 16:31:39 crc kubenswrapper[4830]: I0227 16:31:39.705333 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nj288\" (UniqueName: \"kubernetes.io/projected/050ac6bf-ac1c-406d-af59-2259ceb05ff8-kube-api-access-nj288\") pod \"050ac6bf-ac1c-406d-af59-2259ceb05ff8\" (UID: \"050ac6bf-ac1c-406d-af59-2259ceb05ff8\") "
Feb 27 16:31:39 crc kubenswrapper[4830]: I0227 16:31:39.705479 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/050ac6bf-ac1c-406d-af59-2259ceb05ff8-config-data\") pod \"050ac6bf-ac1c-406d-af59-2259ceb05ff8\" (UID: \"050ac6bf-ac1c-406d-af59-2259ceb05ff8\") "
Feb 27 16:31:39 crc kubenswrapper[4830]: I0227 16:31:39.705614 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/050ac6bf-ac1c-406d-af59-2259ceb05ff8-logs\") pod \"050ac6bf-ac1c-406d-af59-2259ceb05ff8\" (UID: \"050ac6bf-ac1c-406d-af59-2259ceb05ff8\") "
Feb 27 16:31:39 crc kubenswrapper[4830]: I0227 16:31:39.705663 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/050ac6bf-ac1c-406d-af59-2259ceb05ff8-combined-ca-bundle\") pod \"050ac6bf-ac1c-406d-af59-2259ceb05ff8\" (UID: \"050ac6bf-ac1c-406d-af59-2259ceb05ff8\") "
Feb 27 16:31:39 crc kubenswrapper[4830]: I0227 16:31:39.706146 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/050ac6bf-ac1c-406d-af59-2259ceb05ff8-logs" (OuterVolumeSpecName: "logs") pod "050ac6bf-ac1c-406d-af59-2259ceb05ff8" (UID: "050ac6bf-ac1c-406d-af59-2259ceb05ff8"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 27 16:31:39 crc kubenswrapper[4830]: I0227 16:31:39.708540 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/050ac6bf-ac1c-406d-af59-2259ceb05ff8-kube-api-access-nj288" (OuterVolumeSpecName: "kube-api-access-nj288") pod "050ac6bf-ac1c-406d-af59-2259ceb05ff8" (UID: "050ac6bf-ac1c-406d-af59-2259ceb05ff8"). InnerVolumeSpecName "kube-api-access-nj288". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 16:31:39 crc kubenswrapper[4830]: I0227 16:31:39.729923 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/050ac6bf-ac1c-406d-af59-2259ceb05ff8-config-data" (OuterVolumeSpecName: "config-data") pod "050ac6bf-ac1c-406d-af59-2259ceb05ff8" (UID: "050ac6bf-ac1c-406d-af59-2259ceb05ff8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 16:31:39 crc kubenswrapper[4830]: I0227 16:31:39.729987 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/050ac6bf-ac1c-406d-af59-2259ceb05ff8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "050ac6bf-ac1c-406d-af59-2259ceb05ff8" (UID: "050ac6bf-ac1c-406d-af59-2259ceb05ff8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 16:31:39 crc kubenswrapper[4830]: I0227 16:31:39.756025 4830 generic.go:334] "Generic (PLEG): container finished" podID="050ac6bf-ac1c-406d-af59-2259ceb05ff8" containerID="2471a08c8f1dbd502aa24cfd61bccc96c9b0d247bc7a1f8681485d3df12f3999" exitCode=0
Feb 27 16:31:39 crc kubenswrapper[4830]: I0227 16:31:39.756084 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"050ac6bf-ac1c-406d-af59-2259ceb05ff8","Type":"ContainerDied","Data":"2471a08c8f1dbd502aa24cfd61bccc96c9b0d247bc7a1f8681485d3df12f3999"}
Feb 27 16:31:39 crc kubenswrapper[4830]: I0227 16:31:39.756107 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"050ac6bf-ac1c-406d-af59-2259ceb05ff8","Type":"ContainerDied","Data":"d5751dec71d6142a3e503c7f94f5e2e7059ae53aaab367d25e2d40bee6bad587"}
Feb 27 16:31:39 crc kubenswrapper[4830]: I0227 16:31:39.756127 4830 scope.go:117] "RemoveContainer" containerID="2471a08c8f1dbd502aa24cfd61bccc96c9b0d247bc7a1f8681485d3df12f3999"
Feb 27 16:31:39 crc kubenswrapper[4830]: I0227 16:31:39.756224 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 27 16:31:39 crc kubenswrapper[4830]: I0227 16:31:39.759120 4830 generic.go:334] "Generic (PLEG): container finished" podID="50ea60ad-3437-435c-ba9c-462adae597a2" containerID="309f6573e73b8d560ad1c1da3fcdeddf2c652c2005f376d7dfe4b638529e7068" exitCode=0
Feb 27 16:31:39 crc kubenswrapper[4830]: I0227 16:31:39.759145 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"50ea60ad-3437-435c-ba9c-462adae597a2","Type":"ContainerDied","Data":"309f6573e73b8d560ad1c1da3fcdeddf2c652c2005f376d7dfe4b638529e7068"}
Feb 27 16:31:39 crc kubenswrapper[4830]: I0227 16:31:39.759170 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"50ea60ad-3437-435c-ba9c-462adae597a2","Type":"ContainerDied","Data":"5bcf51d1d6dd08738ec4ebeb8c00b20f52b955747184e88d86384ce6321dae3a"}
Feb 27 16:31:39 crc kubenswrapper[4830]: I0227 16:31:39.759200 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 27 16:31:39 crc kubenswrapper[4830]: I0227 16:31:39.781447 4830 scope.go:117] "RemoveContainer" containerID="00d0e6466d0ca2034b4b6ada51d1f529a9778eb8658e79f7cb55a956e8889ff0"
Feb 27 16:31:39 crc kubenswrapper[4830]: I0227 16:31:39.793862 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Feb 27 16:31:39 crc kubenswrapper[4830]: I0227 16:31:39.807655 4830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/050ac6bf-ac1c-406d-af59-2259ceb05ff8-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 27 16:31:39 crc kubenswrapper[4830]: I0227 16:31:39.807683 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nj288\" (UniqueName: \"kubernetes.io/projected/050ac6bf-ac1c-406d-af59-2259ceb05ff8-kube-api-access-nj288\") on node \"crc\" DevicePath \"\""
Feb 27 16:31:39 crc kubenswrapper[4830]: I0227 16:31:39.807695 4830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/050ac6bf-ac1c-406d-af59-2259ceb05ff8-config-data\") on node \"crc\" DevicePath \"\""
Feb 27 16:31:39 crc kubenswrapper[4830]: I0227 16:31:39.807705 4830 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/050ac6bf-ac1c-406d-af59-2259ceb05ff8-logs\") on node \"crc\" DevicePath \"\""
Feb 27 16:31:39 crc kubenswrapper[4830]: I0227 16:31:39.812019 4830 scope.go:117] "RemoveContainer" containerID="2471a08c8f1dbd502aa24cfd61bccc96c9b0d247bc7a1f8681485d3df12f3999"
Feb 27 16:31:39 crc kubenswrapper[4830]: E0227 16:31:39.823369 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2471a08c8f1dbd502aa24cfd61bccc96c9b0d247bc7a1f8681485d3df12f3999\": container with ID starting with 2471a08c8f1dbd502aa24cfd61bccc96c9b0d247bc7a1f8681485d3df12f3999 not found: ID does not exist" containerID="2471a08c8f1dbd502aa24cfd61bccc96c9b0d247bc7a1f8681485d3df12f3999"
Feb 27 16:31:39 crc kubenswrapper[4830]: I0227 16:31:39.823425 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2471a08c8f1dbd502aa24cfd61bccc96c9b0d247bc7a1f8681485d3df12f3999"} err="failed to get container status \"2471a08c8f1dbd502aa24cfd61bccc96c9b0d247bc7a1f8681485d3df12f3999\": rpc error: code = NotFound desc = could not find container \"2471a08c8f1dbd502aa24cfd61bccc96c9b0d247bc7a1f8681485d3df12f3999\": container with ID starting with 2471a08c8f1dbd502aa24cfd61bccc96c9b0d247bc7a1f8681485d3df12f3999 not found: ID does not exist"
Feb 27 16:31:39 crc kubenswrapper[4830]: I0227 16:31:39.823458 4830 scope.go:117] "RemoveContainer" containerID="00d0e6466d0ca2034b4b6ada51d1f529a9778eb8658e79f7cb55a956e8889ff0"
Feb 27 16:31:39 crc kubenswrapper[4830]: E0227 16:31:39.824567 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"00d0e6466d0ca2034b4b6ada51d1f529a9778eb8658e79f7cb55a956e8889ff0\": container with ID starting with 00d0e6466d0ca2034b4b6ada51d1f529a9778eb8658e79f7cb55a956e8889ff0 not found: ID does not exist" containerID="00d0e6466d0ca2034b4b6ada51d1f529a9778eb8658e79f7cb55a956e8889ff0"
Feb 27 16:31:39 crc kubenswrapper[4830]: I0227 16:31:39.824593 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00d0e6466d0ca2034b4b6ada51d1f529a9778eb8658e79f7cb55a956e8889ff0"} err="failed to get container status \"00d0e6466d0ca2034b4b6ada51d1f529a9778eb8658e79f7cb55a956e8889ff0\": rpc error: code = NotFound desc = could not find container \"00d0e6466d0ca2034b4b6ada51d1f529a9778eb8658e79f7cb55a956e8889ff0\": container with ID starting with 00d0e6466d0ca2034b4b6ada51d1f529a9778eb8658e79f7cb55a956e8889ff0 not found: ID does not exist"
Feb 27 16:31:39 crc kubenswrapper[4830]: I0227 16:31:39.824613 4830 scope.go:117] "RemoveContainer" containerID="309f6573e73b8d560ad1c1da3fcdeddf2c652c2005f376d7dfe4b638529e7068"
Feb 27 16:31:39 crc kubenswrapper[4830]: I0227 16:31:39.826824 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Feb 27 16:31:39 crc kubenswrapper[4830]: I0227 16:31:39.843165 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 27 16:31:39 crc kubenswrapper[4830]: I0227 16:31:39.858151 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 27 16:31:39 crc kubenswrapper[4830]: I0227 16:31:39.862790 4830 scope.go:117] "RemoveContainer" containerID="309f6573e73b8d560ad1c1da3fcdeddf2c652c2005f376d7dfe4b638529e7068"
Feb 27 16:31:39 crc kubenswrapper[4830]: E0227 16:31:39.864774 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"309f6573e73b8d560ad1c1da3fcdeddf2c652c2005f376d7dfe4b638529e7068\": container with ID starting with 309f6573e73b8d560ad1c1da3fcdeddf2c652c2005f376d7dfe4b638529e7068 not found: ID does not exist" containerID="309f6573e73b8d560ad1c1da3fcdeddf2c652c2005f376d7dfe4b638529e7068"
Feb 27 16:31:39 crc kubenswrapper[4830]: I0227 16:31:39.864813 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"309f6573e73b8d560ad1c1da3fcdeddf2c652c2005f376d7dfe4b638529e7068"} err="failed to get container status \"309f6573e73b8d560ad1c1da3fcdeddf2c652c2005f376d7dfe4b638529e7068\": rpc error: code = NotFound desc = could not find container \"309f6573e73b8d560ad1c1da3fcdeddf2c652c2005f376d7dfe4b638529e7068\": container with ID starting with 309f6573e73b8d560ad1c1da3fcdeddf2c652c2005f376d7dfe4b638529e7068 not found: ID does not exist"
Feb 27 16:31:39 crc kubenswrapper[4830]: I0227 16:31:39.866352 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Feb 27 16:31:39 crc kubenswrapper[4830]: E0227 16:31:39.866838 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="050ac6bf-ac1c-406d-af59-2259ceb05ff8" containerName="nova-api-api"
Feb 27 16:31:39 crc kubenswrapper[4830]: I0227 16:31:39.866853 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="050ac6bf-ac1c-406d-af59-2259ceb05ff8" containerName="nova-api-api"
Feb 27 16:31:39 crc kubenswrapper[4830]: E0227 16:31:39.866878 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="050ac6bf-ac1c-406d-af59-2259ceb05ff8" containerName="nova-api-log"
Feb 27 16:31:39 crc kubenswrapper[4830]: I0227 16:31:39.866886 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="050ac6bf-ac1c-406d-af59-2259ceb05ff8" containerName="nova-api-log"
Feb 27 16:31:39 crc kubenswrapper[4830]: E0227 16:31:39.866914 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50ea60ad-3437-435c-ba9c-462adae597a2" containerName="nova-scheduler-scheduler"
Feb 27 16:31:39 crc kubenswrapper[4830]: I0227 16:31:39.866923 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="50ea60ad-3437-435c-ba9c-462adae597a2" containerName="nova-scheduler-scheduler"
Feb 27 16:31:39 crc kubenswrapper[4830]: I0227 16:31:39.867156 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="050ac6bf-ac1c-406d-af59-2259ceb05ff8" containerName="nova-api-api"
Feb 27 16:31:39 crc kubenswrapper[4830]: I0227 16:31:39.867177 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="50ea60ad-3437-435c-ba9c-462adae597a2" containerName="nova-scheduler-scheduler"
Feb 27 16:31:39 crc kubenswrapper[4830]: I0227 16:31:39.867197 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="050ac6bf-ac1c-406d-af59-2259ceb05ff8" containerName="nova-api-log"
Feb 27 16:31:39 crc kubenswrapper[4830]: I0227 16:31:39.868505 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 27 16:31:39 crc kubenswrapper[4830]: I0227 16:31:39.870341 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Feb 27 16:31:39 crc kubenswrapper[4830]: I0227 16:31:39.875442 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Feb 27 16:31:39 crc kubenswrapper[4830]: I0227 16:31:39.876905 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 27 16:31:39 crc kubenswrapper[4830]: I0227 16:31:39.879373 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Feb 27 16:31:39 crc kubenswrapper[4830]: I0227 16:31:39.885856 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Feb 27 16:31:39 crc kubenswrapper[4830]: I0227 16:31:39.900049 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 27 16:31:39 crc kubenswrapper[4830]: I0227 16:31:39.909441 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/42523523-0867-4520-8f31-f11949fb08f4-logs\") pod \"nova-api-0\" (UID: \"42523523-0867-4520-8f31-f11949fb08f4\") " pod="openstack/nova-api-0"
Feb 27 16:31:39 crc kubenswrapper[4830]: I0227 16:31:39.909501 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac57b71a-c649-451a-8cd8-a71f13e1387d-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"ac57b71a-c649-451a-8cd8-a71f13e1387d\") " pod="openstack/nova-scheduler-0"
Feb 27 16:31:39 crc kubenswrapper[4830]: I0227 16:31:39.909659 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42523523-0867-4520-8f31-f11949fb08f4-config-data\") pod \"nova-api-0\" (UID: \"42523523-0867-4520-8f31-f11949fb08f4\") " pod="openstack/nova-api-0"
Feb 27 16:31:39 crc kubenswrapper[4830]: I0227 16:31:39.909697 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42523523-0867-4520-8f31-f11949fb08f4-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"42523523-0867-4520-8f31-f11949fb08f4\") " pod="openstack/nova-api-0"
Feb 27 16:31:39 crc kubenswrapper[4830]: I0227 16:31:39.909788 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sp7kr\" (UniqueName: \"kubernetes.io/projected/42523523-0867-4520-8f31-f11949fb08f4-kube-api-access-sp7kr\") pod \"nova-api-0\" (UID: \"42523523-0867-4520-8f31-f11949fb08f4\") " pod="openstack/nova-api-0"
Feb 27 16:31:39 crc kubenswrapper[4830]: I0227 16:31:39.909971 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhp6l\" (UniqueName: \"kubernetes.io/projected/ac57b71a-c649-451a-8cd8-a71f13e1387d-kube-api-access-bhp6l\") pod \"nova-scheduler-0\" (UID: \"ac57b71a-c649-451a-8cd8-a71f13e1387d\") " pod="openstack/nova-scheduler-0"
Feb 27 16:31:39 crc kubenswrapper[4830]: I0227 16:31:39.910062 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac57b71a-c649-451a-8cd8-a71f13e1387d-config-data\") pod \"nova-scheduler-0\" (UID: \"ac57b71a-c649-451a-8cd8-a71f13e1387d\") " pod="openstack/nova-scheduler-0"
Feb 27 16:31:40 crc kubenswrapper[4830]: I0227 16:31:40.012846 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bhp6l\" (UniqueName: \"kubernetes.io/projected/ac57b71a-c649-451a-8cd8-a71f13e1387d-kube-api-access-bhp6l\") pod \"nova-scheduler-0\" (UID: \"ac57b71a-c649-451a-8cd8-a71f13e1387d\") " pod="openstack/nova-scheduler-0"
Feb 27 16:31:40 crc kubenswrapper[4830]: I0227 16:31:40.012985 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac57b71a-c649-451a-8cd8-a71f13e1387d-config-data\") pod \"nova-scheduler-0\" (UID: \"ac57b71a-c649-451a-8cd8-a71f13e1387d\") " pod="openstack/nova-scheduler-0"
Feb 27 16:31:40 crc kubenswrapper[4830]: I0227 16:31:40.013046 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/42523523-0867-4520-8f31-f11949fb08f4-logs\") pod \"nova-api-0\" (UID: \"42523523-0867-4520-8f31-f11949fb08f4\") " pod="openstack/nova-api-0"
Feb 27 16:31:40 crc kubenswrapper[4830]: I0227 16:31:40.013117 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac57b71a-c649-451a-8cd8-a71f13e1387d-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"ac57b71a-c649-451a-8cd8-a71f13e1387d\") " pod="openstack/nova-scheduler-0"
Feb 27 16:31:40 crc kubenswrapper[4830]: I0227 16:31:40.013220 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42523523-0867-4520-8f31-f11949fb08f4-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"42523523-0867-4520-8f31-f11949fb08f4\") " pod="openstack/nova-api-0"
Feb 27 16:31:40 crc kubenswrapper[4830]: I0227 16:31:40.013252 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42523523-0867-4520-8f31-f11949fb08f4-config-data\") pod \"nova-api-0\" (UID: \"42523523-0867-4520-8f31-f11949fb08f4\") " pod="openstack/nova-api-0"
Feb 27 16:31:40 crc kubenswrapper[4830]: I0227 16:31:40.013319 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sp7kr\" (UniqueName: \"kubernetes.io/projected/42523523-0867-4520-8f31-f11949fb08f4-kube-api-access-sp7kr\") pod \"nova-api-0\" (UID: \"42523523-0867-4520-8f31-f11949fb08f4\") " pod="openstack/nova-api-0"
Feb 27 16:31:40 crc kubenswrapper[4830]: I0227 16:31:40.014168 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/42523523-0867-4520-8f31-f11949fb08f4-logs\") pod \"nova-api-0\" (UID: \"42523523-0867-4520-8f31-f11949fb08f4\") " pod="openstack/nova-api-0"
Feb 27 16:31:40 crc kubenswrapper[4830]: I0227 16:31:40.017214 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42523523-0867-4520-8f31-f11949fb08f4-config-data\") pod \"nova-api-0\" (UID: \"42523523-0867-4520-8f31-f11949fb08f4\") " pod="openstack/nova-api-0"
Feb 27 16:31:40 crc kubenswrapper[4830]: I0227 16:31:40.017437 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac57b71a-c649-451a-8cd8-a71f13e1387d-config-data\") pod \"nova-scheduler-0\" (UID: \"ac57b71a-c649-451a-8cd8-a71f13e1387d\") " pod="openstack/nova-scheduler-0"
Feb 27 16:31:40 crc kubenswrapper[4830]: I0227 16:31:40.018089 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac57b71a-c649-451a-8cd8-a71f13e1387d-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"ac57b71a-c649-451a-8cd8-a71f13e1387d\") " pod="openstack/nova-scheduler-0"
Feb 27 16:31:40 crc kubenswrapper[4830]: I0227 16:31:40.024913 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42523523-0867-4520-8f31-f11949fb08f4-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"42523523-0867-4520-8f31-f11949fb08f4\") " pod="openstack/nova-api-0"
Feb 27 16:31:40 crc kubenswrapper[4830]: 
I0227 16:31:40.029631 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sp7kr\" (UniqueName: \"kubernetes.io/projected/42523523-0867-4520-8f31-f11949fb08f4-kube-api-access-sp7kr\") pod \"nova-api-0\" (UID: \"42523523-0867-4520-8f31-f11949fb08f4\") " pod="openstack/nova-api-0" Feb 27 16:31:40 crc kubenswrapper[4830]: I0227 16:31:40.036205 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bhp6l\" (UniqueName: \"kubernetes.io/projected/ac57b71a-c649-451a-8cd8-a71f13e1387d-kube-api-access-bhp6l\") pod \"nova-scheduler-0\" (UID: \"ac57b71a-c649-451a-8cd8-a71f13e1387d\") " pod="openstack/nova-scheduler-0" Feb 27 16:31:40 crc kubenswrapper[4830]: I0227 16:31:40.184581 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 27 16:31:40 crc kubenswrapper[4830]: I0227 16:31:40.195609 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 27 16:31:40 crc kubenswrapper[4830]: I0227 16:31:40.196205 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 27 16:31:40 crc kubenswrapper[4830]: I0227 16:31:40.197024 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 27 16:31:40 crc kubenswrapper[4830]: I0227 16:31:40.783284 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="050ac6bf-ac1c-406d-af59-2259ceb05ff8" path="/var/lib/kubelet/pods/050ac6bf-ac1c-406d-af59-2259ceb05ff8/volumes" Feb 27 16:31:40 crc kubenswrapper[4830]: I0227 16:31:40.784881 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50ea60ad-3437-435c-ba9c-462adae597a2" path="/var/lib/kubelet/pods/50ea60ad-3437-435c-ba9c-462adae597a2/volumes" Feb 27 16:31:40 crc kubenswrapper[4830]: I0227 16:31:40.787782 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/nova-scheduler-0"] Feb 27 16:31:40 crc kubenswrapper[4830]: W0227 16:31:40.790550 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podac57b71a_c649_451a_8cd8_a71f13e1387d.slice/crio-f2b9cc10cba138b3fef51c45a1d8d3056239bfd96370b191cfda84adc73c5df0 WatchSource:0}: Error finding container f2b9cc10cba138b3fef51c45a1d8d3056239bfd96370b191cfda84adc73c5df0: Status 404 returned error can't find the container with id f2b9cc10cba138b3fef51c45a1d8d3056239bfd96370b191cfda84adc73c5df0 Feb 27 16:31:40 crc kubenswrapper[4830]: I0227 16:31:40.858040 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 27 16:31:40 crc kubenswrapper[4830]: W0227 16:31:40.862327 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod42523523_0867_4520_8f31_f11949fb08f4.slice/crio-af1324a2f66fe97068081a8714dbd5f48bdfe63649824b5ab78e0e7001c99424 WatchSource:0}: Error finding container af1324a2f66fe97068081a8714dbd5f48bdfe63649824b5ab78e0e7001c99424: Status 404 returned error can't find the container with id af1324a2f66fe97068081a8714dbd5f48bdfe63649824b5ab78e0e7001c99424 Feb 27 16:31:41 crc kubenswrapper[4830]: I0227 16:31:41.112341 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Feb 27 16:31:41 crc kubenswrapper[4830]: I0227 16:31:41.782964 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"42523523-0867-4520-8f31-f11949fb08f4","Type":"ContainerStarted","Data":"1d6559428ad57ae73914e9bc0f3d82e639a8f07404438b74f346af3906edf948"} Feb 27 16:31:41 crc kubenswrapper[4830]: I0227 16:31:41.783232 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"42523523-0867-4520-8f31-f11949fb08f4","Type":"ContainerStarted","Data":"1382af5fe49790df2c419c664aad5930fef37ad780b2c578a935c9462e7352bb"} Feb 27 16:31:41 crc kubenswrapper[4830]: I0227 16:31:41.783252 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"42523523-0867-4520-8f31-f11949fb08f4","Type":"ContainerStarted","Data":"af1324a2f66fe97068081a8714dbd5f48bdfe63649824b5ab78e0e7001c99424"} Feb 27 16:31:41 crc kubenswrapper[4830]: I0227 16:31:41.814726 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ac57b71a-c649-451a-8cd8-a71f13e1387d","Type":"ContainerStarted","Data":"fa47c79e0160bd3ec8239f111a947e8a403a1646d83d1d09c44c1606c5678dd6"} Feb 27 16:31:41 crc kubenswrapper[4830]: I0227 16:31:41.814812 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ac57b71a-c649-451a-8cd8-a71f13e1387d","Type":"ContainerStarted","Data":"f2b9cc10cba138b3fef51c45a1d8d3056239bfd96370b191cfda84adc73c5df0"} Feb 27 16:31:41 crc kubenswrapper[4830]: I0227 16:31:41.837880 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.83785695 podStartE2EDuration="2.83785695s" podCreationTimestamp="2026-02-27 16:31:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:31:41.830102993 +0000 UTC m=+1497.919375486" watchObservedRunningTime="2026-02-27 16:31:41.83785695 +0000 UTC m=+1497.927129453" Feb 27 16:31:41 crc kubenswrapper[4830]: I0227 16:31:41.868933 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.86891669 podStartE2EDuration="2.86891669s" podCreationTimestamp="2026-02-27 16:31:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-27 16:31:41.855632899 +0000 UTC m=+1497.944905372" watchObservedRunningTime="2026-02-27 16:31:41.86891669 +0000 UTC m=+1497.958189163" Feb 27 16:31:45 crc kubenswrapper[4830]: I0227 16:31:45.196257 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 27 16:31:45 crc kubenswrapper[4830]: I0227 16:31:45.196761 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 27 16:31:45 crc kubenswrapper[4830]: I0227 16:31:45.196779 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 27 16:31:45 crc kubenswrapper[4830]: I0227 16:31:45.958839 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 27 16:31:46 crc kubenswrapper[4830]: I0227 16:31:46.214102 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="34e10b21-9e53-464a-a707-cb587ab15199" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.200:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 27 16:31:46 crc kubenswrapper[4830]: I0227 16:31:46.214110 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="34e10b21-9e53-464a-a707-cb587ab15199" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.200:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 27 16:31:49 crc kubenswrapper[4830]: I0227 16:31:49.839070 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 27 16:31:49 crc kubenswrapper[4830]: I0227 16:31:49.839794 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="4627a6ad-d0c1-4e72-9090-3ed47a060c24" containerName="kube-state-metrics" 
containerID="cri-o://ffd8687317c4fccae792a8d33e62b6e1f2b8b993f9c6bb0e24dc3a98f16c4c50" gracePeriod=30 Feb 27 16:31:50 crc kubenswrapper[4830]: I0227 16:31:50.186053 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 27 16:31:50 crc kubenswrapper[4830]: I0227 16:31:50.186803 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 27 16:31:50 crc kubenswrapper[4830]: I0227 16:31:50.196790 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 27 16:31:50 crc kubenswrapper[4830]: I0227 16:31:50.270221 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 27 16:31:50 crc kubenswrapper[4830]: I0227 16:31:50.415209 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 27 16:31:50 crc kubenswrapper[4830]: I0227 16:31:50.555173 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w6965\" (UniqueName: \"kubernetes.io/projected/4627a6ad-d0c1-4e72-9090-3ed47a060c24-kube-api-access-w6965\") pod \"4627a6ad-d0c1-4e72-9090-3ed47a060c24\" (UID: \"4627a6ad-d0c1-4e72-9090-3ed47a060c24\") " Feb 27 16:31:50 crc kubenswrapper[4830]: I0227 16:31:50.561087 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4627a6ad-d0c1-4e72-9090-3ed47a060c24-kube-api-access-w6965" (OuterVolumeSpecName: "kube-api-access-w6965") pod "4627a6ad-d0c1-4e72-9090-3ed47a060c24" (UID: "4627a6ad-d0c1-4e72-9090-3ed47a060c24"). InnerVolumeSpecName "kube-api-access-w6965". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:31:50 crc kubenswrapper[4830]: I0227 16:31:50.657457 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w6965\" (UniqueName: \"kubernetes.io/projected/4627a6ad-d0c1-4e72-9090-3ed47a060c24-kube-api-access-w6965\") on node \"crc\" DevicePath \"\"" Feb 27 16:31:50 crc kubenswrapper[4830]: I0227 16:31:50.906138 4830 generic.go:334] "Generic (PLEG): container finished" podID="4627a6ad-d0c1-4e72-9090-3ed47a060c24" containerID="ffd8687317c4fccae792a8d33e62b6e1f2b8b993f9c6bb0e24dc3a98f16c4c50" exitCode=2 Feb 27 16:31:50 crc kubenswrapper[4830]: I0227 16:31:50.906191 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 27 16:31:50 crc kubenswrapper[4830]: I0227 16:31:50.906209 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"4627a6ad-d0c1-4e72-9090-3ed47a060c24","Type":"ContainerDied","Data":"ffd8687317c4fccae792a8d33e62b6e1f2b8b993f9c6bb0e24dc3a98f16c4c50"} Feb 27 16:31:50 crc kubenswrapper[4830]: I0227 16:31:50.906516 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"4627a6ad-d0c1-4e72-9090-3ed47a060c24","Type":"ContainerDied","Data":"75df5a7f07d1ff3fee1c155c7a5ec6bec4d204132d3f0a4ac9f4c73374d43908"} Feb 27 16:31:50 crc kubenswrapper[4830]: I0227 16:31:50.906550 4830 scope.go:117] "RemoveContainer" containerID="ffd8687317c4fccae792a8d33e62b6e1f2b8b993f9c6bb0e24dc3a98f16c4c50" Feb 27 16:31:50 crc kubenswrapper[4830]: I0227 16:31:50.936362 4830 scope.go:117] "RemoveContainer" containerID="ffd8687317c4fccae792a8d33e62b6e1f2b8b993f9c6bb0e24dc3a98f16c4c50" Feb 27 16:31:50 crc kubenswrapper[4830]: E0227 16:31:50.938621 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"ffd8687317c4fccae792a8d33e62b6e1f2b8b993f9c6bb0e24dc3a98f16c4c50\": container with ID starting with ffd8687317c4fccae792a8d33e62b6e1f2b8b993f9c6bb0e24dc3a98f16c4c50 not found: ID does not exist" containerID="ffd8687317c4fccae792a8d33e62b6e1f2b8b993f9c6bb0e24dc3a98f16c4c50" Feb 27 16:31:50 crc kubenswrapper[4830]: I0227 16:31:50.938658 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ffd8687317c4fccae792a8d33e62b6e1f2b8b993f9c6bb0e24dc3a98f16c4c50"} err="failed to get container status \"ffd8687317c4fccae792a8d33e62b6e1f2b8b993f9c6bb0e24dc3a98f16c4c50\": rpc error: code = NotFound desc = could not find container \"ffd8687317c4fccae792a8d33e62b6e1f2b8b993f9c6bb0e24dc3a98f16c4c50\": container with ID starting with ffd8687317c4fccae792a8d33e62b6e1f2b8b993f9c6bb0e24dc3a98f16c4c50 not found: ID does not exist" Feb 27 16:31:50 crc kubenswrapper[4830]: I0227 16:31:50.943019 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 27 16:31:50 crc kubenswrapper[4830]: I0227 16:31:50.964003 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 27 16:31:50 crc kubenswrapper[4830]: I0227 16:31:50.976004 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 27 16:31:50 crc kubenswrapper[4830]: E0227 16:31:50.976380 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4627a6ad-d0c1-4e72-9090-3ed47a060c24" containerName="kube-state-metrics" Feb 27 16:31:50 crc kubenswrapper[4830]: I0227 16:31:50.976395 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="4627a6ad-d0c1-4e72-9090-3ed47a060c24" containerName="kube-state-metrics" Feb 27 16:31:50 crc kubenswrapper[4830]: I0227 16:31:50.976592 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="4627a6ad-d0c1-4e72-9090-3ed47a060c24" containerName="kube-state-metrics" Feb 27 16:31:50 crc kubenswrapper[4830]: I0227 
16:31:50.985651 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 27 16:31:50 crc kubenswrapper[4830]: I0227 16:31:50.988287 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Feb 27 16:31:50 crc kubenswrapper[4830]: I0227 16:31:50.992281 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 27 16:31:50 crc kubenswrapper[4830]: I0227 16:31:50.996253 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Feb 27 16:31:51 crc kubenswrapper[4830]: I0227 16:31:51.003998 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 27 16:31:51 crc kubenswrapper[4830]: I0227 16:31:51.064465 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aef23409-e12b-4ef3-a968-f666e5a127ae-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"aef23409-e12b-4ef3-a968-f666e5a127ae\") " pod="openstack/kube-state-metrics-0" Feb 27 16:31:51 crc kubenswrapper[4830]: I0227 16:31:51.064619 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/aef23409-e12b-4ef3-a968-f666e5a127ae-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"aef23409-e12b-4ef3-a968-f666e5a127ae\") " pod="openstack/kube-state-metrics-0" Feb 27 16:31:51 crc kubenswrapper[4830]: I0227 16:31:51.064659 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/aef23409-e12b-4ef3-a968-f666e5a127ae-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"aef23409-e12b-4ef3-a968-f666e5a127ae\") " 
pod="openstack/kube-state-metrics-0" Feb 27 16:31:51 crc kubenswrapper[4830]: I0227 16:31:51.064693 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bdrk\" (UniqueName: \"kubernetes.io/projected/aef23409-e12b-4ef3-a968-f666e5a127ae-kube-api-access-7bdrk\") pod \"kube-state-metrics-0\" (UID: \"aef23409-e12b-4ef3-a968-f666e5a127ae\") " pod="openstack/kube-state-metrics-0" Feb 27 16:31:51 crc kubenswrapper[4830]: I0227 16:31:51.166223 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/aef23409-e12b-4ef3-a968-f666e5a127ae-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"aef23409-e12b-4ef3-a968-f666e5a127ae\") " pod="openstack/kube-state-metrics-0" Feb 27 16:31:51 crc kubenswrapper[4830]: I0227 16:31:51.166275 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/aef23409-e12b-4ef3-a968-f666e5a127ae-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"aef23409-e12b-4ef3-a968-f666e5a127ae\") " pod="openstack/kube-state-metrics-0" Feb 27 16:31:51 crc kubenswrapper[4830]: I0227 16:31:51.166304 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7bdrk\" (UniqueName: \"kubernetes.io/projected/aef23409-e12b-4ef3-a968-f666e5a127ae-kube-api-access-7bdrk\") pod \"kube-state-metrics-0\" (UID: \"aef23409-e12b-4ef3-a968-f666e5a127ae\") " pod="openstack/kube-state-metrics-0" Feb 27 16:31:51 crc kubenswrapper[4830]: I0227 16:31:51.166386 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aef23409-e12b-4ef3-a968-f666e5a127ae-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"aef23409-e12b-4ef3-a968-f666e5a127ae\") " 
pod="openstack/kube-state-metrics-0" Feb 27 16:31:51 crc kubenswrapper[4830]: I0227 16:31:51.170753 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/aef23409-e12b-4ef3-a968-f666e5a127ae-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"aef23409-e12b-4ef3-a968-f666e5a127ae\") " pod="openstack/kube-state-metrics-0" Feb 27 16:31:51 crc kubenswrapper[4830]: I0227 16:31:51.171732 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/aef23409-e12b-4ef3-a968-f666e5a127ae-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"aef23409-e12b-4ef3-a968-f666e5a127ae\") " pod="openstack/kube-state-metrics-0" Feb 27 16:31:51 crc kubenswrapper[4830]: I0227 16:31:51.171850 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aef23409-e12b-4ef3-a968-f666e5a127ae-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"aef23409-e12b-4ef3-a968-f666e5a127ae\") " pod="openstack/kube-state-metrics-0" Feb 27 16:31:51 crc kubenswrapper[4830]: I0227 16:31:51.187757 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7bdrk\" (UniqueName: \"kubernetes.io/projected/aef23409-e12b-4ef3-a968-f666e5a127ae-kube-api-access-7bdrk\") pod \"kube-state-metrics-0\" (UID: \"aef23409-e12b-4ef3-a968-f666e5a127ae\") " pod="openstack/kube-state-metrics-0" Feb 27 16:31:51 crc kubenswrapper[4830]: I0227 16:31:51.268296 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="42523523-0867-4520-8f31-f11949fb08f4" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.202:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 27 16:31:51 crc kubenswrapper[4830]: I0227 16:31:51.268356 4830 
prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="42523523-0867-4520-8f31-f11949fb08f4" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.202:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 27 16:31:51 crc kubenswrapper[4830]: I0227 16:31:51.303107 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 27 16:31:51 crc kubenswrapper[4830]: I0227 16:31:51.754037 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 27 16:31:51 crc kubenswrapper[4830]: I0227 16:31:51.918078 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"aef23409-e12b-4ef3-a968-f666e5a127ae","Type":"ContainerStarted","Data":"936ad490bb55603d661c0e2ce4fe785a6cf5df1c8aaad0883b862facf2e9c797"} Feb 27 16:31:51 crc kubenswrapper[4830]: I0227 16:31:51.996499 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 27 16:31:51 crc kubenswrapper[4830]: I0227 16:31:51.997076 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="076dd25a-67a2-4121-84a9-4e994d1542ce" containerName="ceilometer-central-agent" containerID="cri-o://22741c5d01cb17e65dcdbeffac4d0598253449e5bff515a084bb2ef319a9e758" gracePeriod=30 Feb 27 16:31:51 crc kubenswrapper[4830]: I0227 16:31:51.997328 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="076dd25a-67a2-4121-84a9-4e994d1542ce" containerName="proxy-httpd" containerID="cri-o://0ea976097f7afb21fe4680f75bc7769e2ba1a39c27a7ebe2d079ca5e4a75d4db" gracePeriod=30 Feb 27 16:31:51 crc kubenswrapper[4830]: I0227 16:31:51.997456 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="076dd25a-67a2-4121-84a9-4e994d1542ce" 
containerName="sg-core" containerID="cri-o://21717c499a9f7e20a1678a6597bd34eb02984002201d2d1d998eb40e3e49be34" gracePeriod=30 Feb 27 16:31:51 crc kubenswrapper[4830]: I0227 16:31:51.997528 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="076dd25a-67a2-4121-84a9-4e994d1542ce" containerName="ceilometer-notification-agent" containerID="cri-o://ffc7ddaa779b4943f7503c86cac6715511b054f8dada907f6fc1ad5026b8db9f" gracePeriod=30 Feb 27 16:31:52 crc kubenswrapper[4830]: I0227 16:31:52.778363 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4627a6ad-d0c1-4e72-9090-3ed47a060c24" path="/var/lib/kubelet/pods/4627a6ad-d0c1-4e72-9090-3ed47a060c24/volumes" Feb 27 16:31:52 crc kubenswrapper[4830]: I0227 16:31:52.931088 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"aef23409-e12b-4ef3-a968-f666e5a127ae","Type":"ContainerStarted","Data":"1954751f889385192cc38a0ea54da4d4fbf33340070fa0346fa385af89879ac7"} Feb 27 16:31:52 crc kubenswrapper[4830]: I0227 16:31:52.931444 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 27 16:31:52 crc kubenswrapper[4830]: I0227 16:31:52.933852 4830 generic.go:334] "Generic (PLEG): container finished" podID="076dd25a-67a2-4121-84a9-4e994d1542ce" containerID="0ea976097f7afb21fe4680f75bc7769e2ba1a39c27a7ebe2d079ca5e4a75d4db" exitCode=0 Feb 27 16:31:52 crc kubenswrapper[4830]: I0227 16:31:52.933878 4830 generic.go:334] "Generic (PLEG): container finished" podID="076dd25a-67a2-4121-84a9-4e994d1542ce" containerID="21717c499a9f7e20a1678a6597bd34eb02984002201d2d1d998eb40e3e49be34" exitCode=2 Feb 27 16:31:52 crc kubenswrapper[4830]: I0227 16:31:52.933886 4830 generic.go:334] "Generic (PLEG): container finished" podID="076dd25a-67a2-4121-84a9-4e994d1542ce" containerID="22741c5d01cb17e65dcdbeffac4d0598253449e5bff515a084bb2ef319a9e758" exitCode=0 Feb 27 16:31:52 
crc kubenswrapper[4830]: I0227 16:31:52.933906 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"076dd25a-67a2-4121-84a9-4e994d1542ce","Type":"ContainerDied","Data":"0ea976097f7afb21fe4680f75bc7769e2ba1a39c27a7ebe2d079ca5e4a75d4db"} Feb 27 16:31:52 crc kubenswrapper[4830]: I0227 16:31:52.933926 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"076dd25a-67a2-4121-84a9-4e994d1542ce","Type":"ContainerDied","Data":"21717c499a9f7e20a1678a6597bd34eb02984002201d2d1d998eb40e3e49be34"} Feb 27 16:31:52 crc kubenswrapper[4830]: I0227 16:31:52.933936 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"076dd25a-67a2-4121-84a9-4e994d1542ce","Type":"ContainerDied","Data":"22741c5d01cb17e65dcdbeffac4d0598253449e5bff515a084bb2ef319a9e758"} Feb 27 16:31:52 crc kubenswrapper[4830]: I0227 16:31:52.956518 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.556166651 podStartE2EDuration="2.956499276s" podCreationTimestamp="2026-02-27 16:31:50 +0000 UTC" firstStartedPulling="2026-02-27 16:31:51.75705359 +0000 UTC m=+1507.846326043" lastFinishedPulling="2026-02-27 16:31:52.157386195 +0000 UTC m=+1508.246658668" observedRunningTime="2026-02-27 16:31:52.951581908 +0000 UTC m=+1509.040854371" watchObservedRunningTime="2026-02-27 16:31:52.956499276 +0000 UTC m=+1509.045771739" Feb 27 16:31:54 crc kubenswrapper[4830]: I0227 16:31:54.964995 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 27 16:31:54 crc kubenswrapper[4830]: I0227 16:31:54.979385 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 27 16:31:54 crc kubenswrapper[4830]: I0227 16:31:54.979545 4830 generic.go:334] "Generic (PLEG): container finished" podID="076dd25a-67a2-4121-84a9-4e994d1542ce" containerID="ffc7ddaa779b4943f7503c86cac6715511b054f8dada907f6fc1ad5026b8db9f" exitCode=0 Feb 27 16:31:54 crc kubenswrapper[4830]: I0227 16:31:54.979599 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"076dd25a-67a2-4121-84a9-4e994d1542ce","Type":"ContainerDied","Data":"ffc7ddaa779b4943f7503c86cac6715511b054f8dada907f6fc1ad5026b8db9f"} Feb 27 16:31:54 crc kubenswrapper[4830]: I0227 16:31:54.979877 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"076dd25a-67a2-4121-84a9-4e994d1542ce","Type":"ContainerDied","Data":"493e75bff298a75d951a121b9e341b6a285c863941272f57bcb65dd611477c77"} Feb 27 16:31:54 crc kubenswrapper[4830]: I0227 16:31:54.979918 4830 scope.go:117] "RemoveContainer" containerID="0ea976097f7afb21fe4680f75bc7769e2ba1a39c27a7ebe2d079ca5e4a75d4db" Feb 27 16:31:55 crc kubenswrapper[4830]: I0227 16:31:55.030214 4830 scope.go:117] "RemoveContainer" containerID="21717c499a9f7e20a1678a6597bd34eb02984002201d2d1d998eb40e3e49be34" Feb 27 16:31:55 crc kubenswrapper[4830]: I0227 16:31:55.069299 4830 scope.go:117] "RemoveContainer" containerID="ffc7ddaa779b4943f7503c86cac6715511b054f8dada907f6fc1ad5026b8db9f" Feb 27 16:31:55 crc kubenswrapper[4830]: I0227 16:31:55.081313 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/076dd25a-67a2-4121-84a9-4e994d1542ce-log-httpd\") pod \"076dd25a-67a2-4121-84a9-4e994d1542ce\" (UID: \"076dd25a-67a2-4121-84a9-4e994d1542ce\") " Feb 27 16:31:55 crc kubenswrapper[4830]: I0227 16:31:55.081564 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pxpqs\" (UniqueName: 
\"kubernetes.io/projected/076dd25a-67a2-4121-84a9-4e994d1542ce-kube-api-access-pxpqs\") pod \"076dd25a-67a2-4121-84a9-4e994d1542ce\" (UID: \"076dd25a-67a2-4121-84a9-4e994d1542ce\") " Feb 27 16:31:55 crc kubenswrapper[4830]: I0227 16:31:55.081709 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/076dd25a-67a2-4121-84a9-4e994d1542ce-combined-ca-bundle\") pod \"076dd25a-67a2-4121-84a9-4e994d1542ce\" (UID: \"076dd25a-67a2-4121-84a9-4e994d1542ce\") " Feb 27 16:31:55 crc kubenswrapper[4830]: I0227 16:31:55.081755 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/076dd25a-67a2-4121-84a9-4e994d1542ce-run-httpd\") pod \"076dd25a-67a2-4121-84a9-4e994d1542ce\" (UID: \"076dd25a-67a2-4121-84a9-4e994d1542ce\") " Feb 27 16:31:55 crc kubenswrapper[4830]: I0227 16:31:55.081797 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/076dd25a-67a2-4121-84a9-4e994d1542ce-config-data\") pod \"076dd25a-67a2-4121-84a9-4e994d1542ce\" (UID: \"076dd25a-67a2-4121-84a9-4e994d1542ce\") " Feb 27 16:31:55 crc kubenswrapper[4830]: I0227 16:31:55.081858 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/076dd25a-67a2-4121-84a9-4e994d1542ce-scripts\") pod \"076dd25a-67a2-4121-84a9-4e994d1542ce\" (UID: \"076dd25a-67a2-4121-84a9-4e994d1542ce\") " Feb 27 16:31:55 crc kubenswrapper[4830]: I0227 16:31:55.081908 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/076dd25a-67a2-4121-84a9-4e994d1542ce-sg-core-conf-yaml\") pod \"076dd25a-67a2-4121-84a9-4e994d1542ce\" (UID: \"076dd25a-67a2-4121-84a9-4e994d1542ce\") " Feb 27 16:31:55 crc kubenswrapper[4830]: I0227 16:31:55.082080 4830 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/076dd25a-67a2-4121-84a9-4e994d1542ce-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "076dd25a-67a2-4121-84a9-4e994d1542ce" (UID: "076dd25a-67a2-4121-84a9-4e994d1542ce"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 16:31:55 crc kubenswrapper[4830]: I0227 16:31:55.082209 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/076dd25a-67a2-4121-84a9-4e994d1542ce-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "076dd25a-67a2-4121-84a9-4e994d1542ce" (UID: "076dd25a-67a2-4121-84a9-4e994d1542ce"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 16:31:55 crc kubenswrapper[4830]: I0227 16:31:55.082638 4830 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/076dd25a-67a2-4121-84a9-4e994d1542ce-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 27 16:31:55 crc kubenswrapper[4830]: I0227 16:31:55.082665 4830 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/076dd25a-67a2-4121-84a9-4e994d1542ce-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 27 16:31:55 crc kubenswrapper[4830]: I0227 16:31:55.111009 4830 scope.go:117] "RemoveContainer" containerID="22741c5d01cb17e65dcdbeffac4d0598253449e5bff515a084bb2ef319a9e758" Feb 27 16:31:55 crc kubenswrapper[4830]: I0227 16:31:55.114381 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/076dd25a-67a2-4121-84a9-4e994d1542ce-kube-api-access-pxpqs" (OuterVolumeSpecName: "kube-api-access-pxpqs") pod "076dd25a-67a2-4121-84a9-4e994d1542ce" (UID: "076dd25a-67a2-4121-84a9-4e994d1542ce"). InnerVolumeSpecName "kube-api-access-pxpqs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:31:55 crc kubenswrapper[4830]: I0227 16:31:55.115004 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/076dd25a-67a2-4121-84a9-4e994d1542ce-scripts" (OuterVolumeSpecName: "scripts") pod "076dd25a-67a2-4121-84a9-4e994d1542ce" (UID: "076dd25a-67a2-4121-84a9-4e994d1542ce"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:31:55 crc kubenswrapper[4830]: I0227 16:31:55.150741 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/076dd25a-67a2-4121-84a9-4e994d1542ce-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "076dd25a-67a2-4121-84a9-4e994d1542ce" (UID: "076dd25a-67a2-4121-84a9-4e994d1542ce"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:31:55 crc kubenswrapper[4830]: I0227 16:31:55.184839 4830 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/076dd25a-67a2-4121-84a9-4e994d1542ce-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 16:31:55 crc kubenswrapper[4830]: I0227 16:31:55.184889 4830 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/076dd25a-67a2-4121-84a9-4e994d1542ce-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 27 16:31:55 crc kubenswrapper[4830]: I0227 16:31:55.184912 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pxpqs\" (UniqueName: \"kubernetes.io/projected/076dd25a-67a2-4121-84a9-4e994d1542ce-kube-api-access-pxpqs\") on node \"crc\" DevicePath \"\"" Feb 27 16:31:55 crc kubenswrapper[4830]: I0227 16:31:55.196171 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/076dd25a-67a2-4121-84a9-4e994d1542ce-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod 
"076dd25a-67a2-4121-84a9-4e994d1542ce" (UID: "076dd25a-67a2-4121-84a9-4e994d1542ce"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:31:55 crc kubenswrapper[4830]: I0227 16:31:55.203904 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 27 16:31:55 crc kubenswrapper[4830]: I0227 16:31:55.206078 4830 scope.go:117] "RemoveContainer" containerID="0ea976097f7afb21fe4680f75bc7769e2ba1a39c27a7ebe2d079ca5e4a75d4db" Feb 27 16:31:55 crc kubenswrapper[4830]: E0227 16:31:55.206623 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0ea976097f7afb21fe4680f75bc7769e2ba1a39c27a7ebe2d079ca5e4a75d4db\": container with ID starting with 0ea976097f7afb21fe4680f75bc7769e2ba1a39c27a7ebe2d079ca5e4a75d4db not found: ID does not exist" containerID="0ea976097f7afb21fe4680f75bc7769e2ba1a39c27a7ebe2d079ca5e4a75d4db" Feb 27 16:31:55 crc kubenswrapper[4830]: I0227 16:31:55.206677 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ea976097f7afb21fe4680f75bc7769e2ba1a39c27a7ebe2d079ca5e4a75d4db"} err="failed to get container status \"0ea976097f7afb21fe4680f75bc7769e2ba1a39c27a7ebe2d079ca5e4a75d4db\": rpc error: code = NotFound desc = could not find container \"0ea976097f7afb21fe4680f75bc7769e2ba1a39c27a7ebe2d079ca5e4a75d4db\": container with ID starting with 0ea976097f7afb21fe4680f75bc7769e2ba1a39c27a7ebe2d079ca5e4a75d4db not found: ID does not exist" Feb 27 16:31:55 crc kubenswrapper[4830]: I0227 16:31:55.206712 4830 scope.go:117] "RemoveContainer" containerID="21717c499a9f7e20a1678a6597bd34eb02984002201d2d1d998eb40e3e49be34" Feb 27 16:31:55 crc kubenswrapper[4830]: E0227 16:31:55.207100 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"21717c499a9f7e20a1678a6597bd34eb02984002201d2d1d998eb40e3e49be34\": container with ID starting with 21717c499a9f7e20a1678a6597bd34eb02984002201d2d1d998eb40e3e49be34 not found: ID does not exist" containerID="21717c499a9f7e20a1678a6597bd34eb02984002201d2d1d998eb40e3e49be34" Feb 27 16:31:55 crc kubenswrapper[4830]: I0227 16:31:55.207141 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21717c499a9f7e20a1678a6597bd34eb02984002201d2d1d998eb40e3e49be34"} err="failed to get container status \"21717c499a9f7e20a1678a6597bd34eb02984002201d2d1d998eb40e3e49be34\": rpc error: code = NotFound desc = could not find container \"21717c499a9f7e20a1678a6597bd34eb02984002201d2d1d998eb40e3e49be34\": container with ID starting with 21717c499a9f7e20a1678a6597bd34eb02984002201d2d1d998eb40e3e49be34 not found: ID does not exist" Feb 27 16:31:55 crc kubenswrapper[4830]: I0227 16:31:55.207169 4830 scope.go:117] "RemoveContainer" containerID="ffc7ddaa779b4943f7503c86cac6715511b054f8dada907f6fc1ad5026b8db9f" Feb 27 16:31:55 crc kubenswrapper[4830]: E0227 16:31:55.207554 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ffc7ddaa779b4943f7503c86cac6715511b054f8dada907f6fc1ad5026b8db9f\": container with ID starting with ffc7ddaa779b4943f7503c86cac6715511b054f8dada907f6fc1ad5026b8db9f not found: ID does not exist" containerID="ffc7ddaa779b4943f7503c86cac6715511b054f8dada907f6fc1ad5026b8db9f" Feb 27 16:31:55 crc kubenswrapper[4830]: I0227 16:31:55.207600 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ffc7ddaa779b4943f7503c86cac6715511b054f8dada907f6fc1ad5026b8db9f"} err="failed to get container status \"ffc7ddaa779b4943f7503c86cac6715511b054f8dada907f6fc1ad5026b8db9f\": rpc error: code = NotFound desc = could not find container \"ffc7ddaa779b4943f7503c86cac6715511b054f8dada907f6fc1ad5026b8db9f\": container with ID 
starting with ffc7ddaa779b4943f7503c86cac6715511b054f8dada907f6fc1ad5026b8db9f not found: ID does not exist" Feb 27 16:31:55 crc kubenswrapper[4830]: I0227 16:31:55.207628 4830 scope.go:117] "RemoveContainer" containerID="22741c5d01cb17e65dcdbeffac4d0598253449e5bff515a084bb2ef319a9e758" Feb 27 16:31:55 crc kubenswrapper[4830]: E0227 16:31:55.207928 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"22741c5d01cb17e65dcdbeffac4d0598253449e5bff515a084bb2ef319a9e758\": container with ID starting with 22741c5d01cb17e65dcdbeffac4d0598253449e5bff515a084bb2ef319a9e758 not found: ID does not exist" containerID="22741c5d01cb17e65dcdbeffac4d0598253449e5bff515a084bb2ef319a9e758" Feb 27 16:31:55 crc kubenswrapper[4830]: I0227 16:31:55.207963 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22741c5d01cb17e65dcdbeffac4d0598253449e5bff515a084bb2ef319a9e758"} err="failed to get container status \"22741c5d01cb17e65dcdbeffac4d0598253449e5bff515a084bb2ef319a9e758\": rpc error: code = NotFound desc = could not find container \"22741c5d01cb17e65dcdbeffac4d0598253449e5bff515a084bb2ef319a9e758\": container with ID starting with 22741c5d01cb17e65dcdbeffac4d0598253449e5bff515a084bb2ef319a9e758 not found: ID does not exist" Feb 27 16:31:55 crc kubenswrapper[4830]: I0227 16:31:55.212507 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 27 16:31:55 crc kubenswrapper[4830]: I0227 16:31:55.216356 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 27 16:31:55 crc kubenswrapper[4830]: I0227 16:31:55.239845 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/076dd25a-67a2-4121-84a9-4e994d1542ce-config-data" (OuterVolumeSpecName: "config-data") pod "076dd25a-67a2-4121-84a9-4e994d1542ce" (UID: 
"076dd25a-67a2-4121-84a9-4e994d1542ce"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:31:55 crc kubenswrapper[4830]: I0227 16:31:55.286998 4830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/076dd25a-67a2-4121-84a9-4e994d1542ce-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 16:31:55 crc kubenswrapper[4830]: I0227 16:31:55.287032 4830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/076dd25a-67a2-4121-84a9-4e994d1542ce-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 16:31:55 crc kubenswrapper[4830]: I0227 16:31:55.314569 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 27 16:31:55 crc kubenswrapper[4830]: I0227 16:31:55.321880 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 27 16:31:55 crc kubenswrapper[4830]: I0227 16:31:55.345265 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 27 16:31:55 crc kubenswrapper[4830]: E0227 16:31:55.345669 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="076dd25a-67a2-4121-84a9-4e994d1542ce" containerName="sg-core" Feb 27 16:31:55 crc kubenswrapper[4830]: I0227 16:31:55.345691 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="076dd25a-67a2-4121-84a9-4e994d1542ce" containerName="sg-core" Feb 27 16:31:55 crc kubenswrapper[4830]: E0227 16:31:55.345716 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="076dd25a-67a2-4121-84a9-4e994d1542ce" containerName="proxy-httpd" Feb 27 16:31:55 crc kubenswrapper[4830]: I0227 16:31:55.345727 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="076dd25a-67a2-4121-84a9-4e994d1542ce" containerName="proxy-httpd" Feb 27 16:31:55 crc kubenswrapper[4830]: E0227 16:31:55.345746 4830 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="076dd25a-67a2-4121-84a9-4e994d1542ce" containerName="ceilometer-notification-agent" Feb 27 16:31:55 crc kubenswrapper[4830]: I0227 16:31:55.345754 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="076dd25a-67a2-4121-84a9-4e994d1542ce" containerName="ceilometer-notification-agent" Feb 27 16:31:55 crc kubenswrapper[4830]: E0227 16:31:55.345779 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="076dd25a-67a2-4121-84a9-4e994d1542ce" containerName="ceilometer-central-agent" Feb 27 16:31:55 crc kubenswrapper[4830]: I0227 16:31:55.345789 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="076dd25a-67a2-4121-84a9-4e994d1542ce" containerName="ceilometer-central-agent" Feb 27 16:31:55 crc kubenswrapper[4830]: I0227 16:31:55.346046 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="076dd25a-67a2-4121-84a9-4e994d1542ce" containerName="ceilometer-central-agent" Feb 27 16:31:55 crc kubenswrapper[4830]: I0227 16:31:55.346067 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="076dd25a-67a2-4121-84a9-4e994d1542ce" containerName="proxy-httpd" Feb 27 16:31:55 crc kubenswrapper[4830]: I0227 16:31:55.346107 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="076dd25a-67a2-4121-84a9-4e994d1542ce" containerName="sg-core" Feb 27 16:31:55 crc kubenswrapper[4830]: I0227 16:31:55.346121 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="076dd25a-67a2-4121-84a9-4e994d1542ce" containerName="ceilometer-notification-agent" Feb 27 16:31:55 crc kubenswrapper[4830]: I0227 16:31:55.349995 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 27 16:31:55 crc kubenswrapper[4830]: I0227 16:31:55.352284 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 27 16:31:55 crc kubenswrapper[4830]: I0227 16:31:55.352676 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 27 16:31:55 crc kubenswrapper[4830]: I0227 16:31:55.355108 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 27 16:31:55 crc kubenswrapper[4830]: I0227 16:31:55.355707 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 27 16:31:55 crc kubenswrapper[4830]: I0227 16:31:55.492037 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/011b41ca-3a73-4e05-a626-fd630fe10bd5-log-httpd\") pod \"ceilometer-0\" (UID: \"011b41ca-3a73-4e05-a626-fd630fe10bd5\") " pod="openstack/ceilometer-0" Feb 27 16:31:55 crc kubenswrapper[4830]: I0227 16:31:55.492083 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/011b41ca-3a73-4e05-a626-fd630fe10bd5-scripts\") pod \"ceilometer-0\" (UID: \"011b41ca-3a73-4e05-a626-fd630fe10bd5\") " pod="openstack/ceilometer-0" Feb 27 16:31:55 crc kubenswrapper[4830]: I0227 16:31:55.492348 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/011b41ca-3a73-4e05-a626-fd630fe10bd5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"011b41ca-3a73-4e05-a626-fd630fe10bd5\") " pod="openstack/ceilometer-0" Feb 27 16:31:55 crc kubenswrapper[4830]: I0227 16:31:55.492388 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/011b41ca-3a73-4e05-a626-fd630fe10bd5-config-data\") pod \"ceilometer-0\" (UID: \"011b41ca-3a73-4e05-a626-fd630fe10bd5\") " pod="openstack/ceilometer-0" Feb 27 16:31:55 crc kubenswrapper[4830]: I0227 16:31:55.492541 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5tvxb\" (UniqueName: \"kubernetes.io/projected/011b41ca-3a73-4e05-a626-fd630fe10bd5-kube-api-access-5tvxb\") pod \"ceilometer-0\" (UID: \"011b41ca-3a73-4e05-a626-fd630fe10bd5\") " pod="openstack/ceilometer-0" Feb 27 16:31:55 crc kubenswrapper[4830]: I0227 16:31:55.492634 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/011b41ca-3a73-4e05-a626-fd630fe10bd5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"011b41ca-3a73-4e05-a626-fd630fe10bd5\") " pod="openstack/ceilometer-0" Feb 27 16:31:55 crc kubenswrapper[4830]: I0227 16:31:55.492701 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/011b41ca-3a73-4e05-a626-fd630fe10bd5-run-httpd\") pod \"ceilometer-0\" (UID: \"011b41ca-3a73-4e05-a626-fd630fe10bd5\") " pod="openstack/ceilometer-0" Feb 27 16:31:55 crc kubenswrapper[4830]: I0227 16:31:55.492736 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/011b41ca-3a73-4e05-a626-fd630fe10bd5-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"011b41ca-3a73-4e05-a626-fd630fe10bd5\") " pod="openstack/ceilometer-0" Feb 27 16:31:55 crc kubenswrapper[4830]: I0227 16:31:55.593992 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/011b41ca-3a73-4e05-a626-fd630fe10bd5-config-data\") pod \"ceilometer-0\" (UID: 
\"011b41ca-3a73-4e05-a626-fd630fe10bd5\") " pod="openstack/ceilometer-0" Feb 27 16:31:55 crc kubenswrapper[4830]: I0227 16:31:55.594030 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/011b41ca-3a73-4e05-a626-fd630fe10bd5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"011b41ca-3a73-4e05-a626-fd630fe10bd5\") " pod="openstack/ceilometer-0" Feb 27 16:31:55 crc kubenswrapper[4830]: I0227 16:31:55.594087 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5tvxb\" (UniqueName: \"kubernetes.io/projected/011b41ca-3a73-4e05-a626-fd630fe10bd5-kube-api-access-5tvxb\") pod \"ceilometer-0\" (UID: \"011b41ca-3a73-4e05-a626-fd630fe10bd5\") " pod="openstack/ceilometer-0" Feb 27 16:31:55 crc kubenswrapper[4830]: I0227 16:31:55.594126 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/011b41ca-3a73-4e05-a626-fd630fe10bd5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"011b41ca-3a73-4e05-a626-fd630fe10bd5\") " pod="openstack/ceilometer-0" Feb 27 16:31:55 crc kubenswrapper[4830]: I0227 16:31:55.594165 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/011b41ca-3a73-4e05-a626-fd630fe10bd5-run-httpd\") pod \"ceilometer-0\" (UID: \"011b41ca-3a73-4e05-a626-fd630fe10bd5\") " pod="openstack/ceilometer-0" Feb 27 16:31:55 crc kubenswrapper[4830]: I0227 16:31:55.594184 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/011b41ca-3a73-4e05-a626-fd630fe10bd5-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"011b41ca-3a73-4e05-a626-fd630fe10bd5\") " pod="openstack/ceilometer-0" Feb 27 16:31:55 crc kubenswrapper[4830]: I0227 16:31:55.594208 4830 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/011b41ca-3a73-4e05-a626-fd630fe10bd5-log-httpd\") pod \"ceilometer-0\" (UID: \"011b41ca-3a73-4e05-a626-fd630fe10bd5\") " pod="openstack/ceilometer-0" Feb 27 16:31:55 crc kubenswrapper[4830]: I0227 16:31:55.594226 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/011b41ca-3a73-4e05-a626-fd630fe10bd5-scripts\") pod \"ceilometer-0\" (UID: \"011b41ca-3a73-4e05-a626-fd630fe10bd5\") " pod="openstack/ceilometer-0" Feb 27 16:31:55 crc kubenswrapper[4830]: I0227 16:31:55.594872 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/011b41ca-3a73-4e05-a626-fd630fe10bd5-log-httpd\") pod \"ceilometer-0\" (UID: \"011b41ca-3a73-4e05-a626-fd630fe10bd5\") " pod="openstack/ceilometer-0" Feb 27 16:31:55 crc kubenswrapper[4830]: I0227 16:31:55.595225 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/011b41ca-3a73-4e05-a626-fd630fe10bd5-run-httpd\") pod \"ceilometer-0\" (UID: \"011b41ca-3a73-4e05-a626-fd630fe10bd5\") " pod="openstack/ceilometer-0" Feb 27 16:31:55 crc kubenswrapper[4830]: I0227 16:31:55.598777 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/011b41ca-3a73-4e05-a626-fd630fe10bd5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"011b41ca-3a73-4e05-a626-fd630fe10bd5\") " pod="openstack/ceilometer-0" Feb 27 16:31:55 crc kubenswrapper[4830]: I0227 16:31:55.599001 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/011b41ca-3a73-4e05-a626-fd630fe10bd5-scripts\") pod \"ceilometer-0\" (UID: \"011b41ca-3a73-4e05-a626-fd630fe10bd5\") " pod="openstack/ceilometer-0" Feb 27 16:31:55 crc kubenswrapper[4830]: 
I0227 16:31:55.609615 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/011b41ca-3a73-4e05-a626-fd630fe10bd5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"011b41ca-3a73-4e05-a626-fd630fe10bd5\") " pod="openstack/ceilometer-0" Feb 27 16:31:55 crc kubenswrapper[4830]: I0227 16:31:55.610264 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/011b41ca-3a73-4e05-a626-fd630fe10bd5-config-data\") pod \"ceilometer-0\" (UID: \"011b41ca-3a73-4e05-a626-fd630fe10bd5\") " pod="openstack/ceilometer-0" Feb 27 16:31:55 crc kubenswrapper[4830]: I0227 16:31:55.610347 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/011b41ca-3a73-4e05-a626-fd630fe10bd5-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"011b41ca-3a73-4e05-a626-fd630fe10bd5\") " pod="openstack/ceilometer-0" Feb 27 16:31:55 crc kubenswrapper[4830]: I0227 16:31:55.614297 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5tvxb\" (UniqueName: \"kubernetes.io/projected/011b41ca-3a73-4e05-a626-fd630fe10bd5-kube-api-access-5tvxb\") pod \"ceilometer-0\" (UID: \"011b41ca-3a73-4e05-a626-fd630fe10bd5\") " pod="openstack/ceilometer-0" Feb 27 16:31:55 crc kubenswrapper[4830]: I0227 16:31:55.667895 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 27 16:31:56 crc kubenswrapper[4830]: I0227 16:31:56.003909 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 27 16:31:56 crc kubenswrapper[4830]: I0227 16:31:56.168410 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 27 16:31:56 crc kubenswrapper[4830]: I0227 16:31:56.781675 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="076dd25a-67a2-4121-84a9-4e994d1542ce" path="/var/lib/kubelet/pods/076dd25a-67a2-4121-84a9-4e994d1542ce/volumes" Feb 27 16:31:57 crc kubenswrapper[4830]: I0227 16:31:57.013745 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"011b41ca-3a73-4e05-a626-fd630fe10bd5","Type":"ContainerStarted","Data":"886a0228ee148ff9da749037f7f01afafdd051b62ca863aa6c6b5daca54e39d8"} Feb 27 16:31:57 crc kubenswrapper[4830]: I0227 16:31:57.013833 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"011b41ca-3a73-4e05-a626-fd630fe10bd5","Type":"ContainerStarted","Data":"3009c6a73dde29fca43c735e15125fab15ad914c49dd2741b2db5ab41b894c67"} Feb 27 16:31:57 crc kubenswrapper[4830]: E0227 16:31:57.834733 4830 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod03f3ea66_a50c_42c4_a54b_5ea85ac2973f.slice/crio-conmon-c295229ad63020dc74e32f395e0052762d2b710c258cbe248a93f4458f7f6cdf.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod03f3ea66_a50c_42c4_a54b_5ea85ac2973f.slice/crio-c295229ad63020dc74e32f395e0052762d2b710c258cbe248a93f4458f7f6cdf.scope\": RecentStats: unable to find data in memory cache]" Feb 27 16:31:57 crc kubenswrapper[4830]: I0227 16:31:57.992822 4830 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 27 16:31:58 crc kubenswrapper[4830]: I0227 16:31:58.037641 4830 generic.go:334] "Generic (PLEG): container finished" podID="03f3ea66-a50c-42c4-a54b-5ea85ac2973f" containerID="c295229ad63020dc74e32f395e0052762d2b710c258cbe248a93f4458f7f6cdf" exitCode=137 Feb 27 16:31:58 crc kubenswrapper[4830]: I0227 16:31:58.037766 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"03f3ea66-a50c-42c4-a54b-5ea85ac2973f","Type":"ContainerDied","Data":"c295229ad63020dc74e32f395e0052762d2b710c258cbe248a93f4458f7f6cdf"} Feb 27 16:31:58 crc kubenswrapper[4830]: I0227 16:31:58.037806 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"03f3ea66-a50c-42c4-a54b-5ea85ac2973f","Type":"ContainerDied","Data":"17beda40d5b478cf740af71e1e8fed09b8a80861c72240c13aa9d29e0fb268f4"} Feb 27 16:31:58 crc kubenswrapper[4830]: I0227 16:31:58.037834 4830 scope.go:117] "RemoveContainer" containerID="c295229ad63020dc74e32f395e0052762d2b710c258cbe248a93f4458f7f6cdf" Feb 27 16:31:58 crc kubenswrapper[4830]: I0227 16:31:58.038069 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 27 16:31:58 crc kubenswrapper[4830]: I0227 16:31:58.043448 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"011b41ca-3a73-4e05-a626-fd630fe10bd5","Type":"ContainerStarted","Data":"27f08ba3ee7493e2750f049ee399156717aab13c7564c7e0f4e1cbc42612183d"} Feb 27 16:31:58 crc kubenswrapper[4830]: I0227 16:31:58.062279 4830 scope.go:117] "RemoveContainer" containerID="c295229ad63020dc74e32f395e0052762d2b710c258cbe248a93f4458f7f6cdf" Feb 27 16:31:58 crc kubenswrapper[4830]: E0227 16:31:58.062757 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c295229ad63020dc74e32f395e0052762d2b710c258cbe248a93f4458f7f6cdf\": container with ID starting with c295229ad63020dc74e32f395e0052762d2b710c258cbe248a93f4458f7f6cdf not found: ID does not exist" containerID="c295229ad63020dc74e32f395e0052762d2b710c258cbe248a93f4458f7f6cdf" Feb 27 16:31:58 crc kubenswrapper[4830]: I0227 16:31:58.062933 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c295229ad63020dc74e32f395e0052762d2b710c258cbe248a93f4458f7f6cdf"} err="failed to get container status \"c295229ad63020dc74e32f395e0052762d2b710c258cbe248a93f4458f7f6cdf\": rpc error: code = NotFound desc = could not find container \"c295229ad63020dc74e32f395e0052762d2b710c258cbe248a93f4458f7f6cdf\": container with ID starting with c295229ad63020dc74e32f395e0052762d2b710c258cbe248a93f4458f7f6cdf not found: ID does not exist" Feb 27 16:31:58 crc kubenswrapper[4830]: I0227 16:31:58.145059 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03f3ea66-a50c-42c4-a54b-5ea85ac2973f-combined-ca-bundle\") pod \"03f3ea66-a50c-42c4-a54b-5ea85ac2973f\" (UID: \"03f3ea66-a50c-42c4-a54b-5ea85ac2973f\") " Feb 27 16:31:58 crc 
kubenswrapper[4830]: I0227 16:31:58.145285 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jdrtv\" (UniqueName: \"kubernetes.io/projected/03f3ea66-a50c-42c4-a54b-5ea85ac2973f-kube-api-access-jdrtv\") pod \"03f3ea66-a50c-42c4-a54b-5ea85ac2973f\" (UID: \"03f3ea66-a50c-42c4-a54b-5ea85ac2973f\") " Feb 27 16:31:58 crc kubenswrapper[4830]: I0227 16:31:58.145316 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03f3ea66-a50c-42c4-a54b-5ea85ac2973f-config-data\") pod \"03f3ea66-a50c-42c4-a54b-5ea85ac2973f\" (UID: \"03f3ea66-a50c-42c4-a54b-5ea85ac2973f\") " Feb 27 16:31:58 crc kubenswrapper[4830]: I0227 16:31:58.150262 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03f3ea66-a50c-42c4-a54b-5ea85ac2973f-kube-api-access-jdrtv" (OuterVolumeSpecName: "kube-api-access-jdrtv") pod "03f3ea66-a50c-42c4-a54b-5ea85ac2973f" (UID: "03f3ea66-a50c-42c4-a54b-5ea85ac2973f"). InnerVolumeSpecName "kube-api-access-jdrtv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:31:58 crc kubenswrapper[4830]: I0227 16:31:58.192062 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03f3ea66-a50c-42c4-a54b-5ea85ac2973f-config-data" (OuterVolumeSpecName: "config-data") pod "03f3ea66-a50c-42c4-a54b-5ea85ac2973f" (UID: "03f3ea66-a50c-42c4-a54b-5ea85ac2973f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:31:58 crc kubenswrapper[4830]: I0227 16:31:58.197066 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03f3ea66-a50c-42c4-a54b-5ea85ac2973f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "03f3ea66-a50c-42c4-a54b-5ea85ac2973f" (UID: "03f3ea66-a50c-42c4-a54b-5ea85ac2973f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:31:58 crc kubenswrapper[4830]: I0227 16:31:58.247503 4830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03f3ea66-a50c-42c4-a54b-5ea85ac2973f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 16:31:58 crc kubenswrapper[4830]: I0227 16:31:58.247538 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jdrtv\" (UniqueName: \"kubernetes.io/projected/03f3ea66-a50c-42c4-a54b-5ea85ac2973f-kube-api-access-jdrtv\") on node \"crc\" DevicePath \"\"" Feb 27 16:31:58 crc kubenswrapper[4830]: I0227 16:31:58.247550 4830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03f3ea66-a50c-42c4-a54b-5ea85ac2973f-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 16:31:58 crc kubenswrapper[4830]: I0227 16:31:58.379701 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 27 16:31:58 crc kubenswrapper[4830]: I0227 16:31:58.392151 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 27 16:31:58 crc kubenswrapper[4830]: I0227 16:31:58.404275 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 27 16:31:58 crc kubenswrapper[4830]: E0227 16:31:58.404685 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03f3ea66-a50c-42c4-a54b-5ea85ac2973f" containerName="nova-cell1-novncproxy-novncproxy" Feb 27 16:31:58 crc kubenswrapper[4830]: I0227 16:31:58.404703 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="03f3ea66-a50c-42c4-a54b-5ea85ac2973f" containerName="nova-cell1-novncproxy-novncproxy" Feb 27 16:31:58 crc kubenswrapper[4830]: I0227 16:31:58.404905 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="03f3ea66-a50c-42c4-a54b-5ea85ac2973f" containerName="nova-cell1-novncproxy-novncproxy" Feb 27 
16:31:58 crc kubenswrapper[4830]: I0227 16:31:58.405524 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 27 16:31:58 crc kubenswrapper[4830]: I0227 16:31:58.409873 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Feb 27 16:31:58 crc kubenswrapper[4830]: I0227 16:31:58.410086 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Feb 27 16:31:58 crc kubenswrapper[4830]: I0227 16:31:58.411548 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 27 16:31:58 crc kubenswrapper[4830]: I0227 16:31:58.421396 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 27 16:31:58 crc kubenswrapper[4830]: I0227 16:31:58.552740 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21656f50-51b8-4761-8b9e-c2b823dace13-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"21656f50-51b8-4761-8b9e-c2b823dace13\") " pod="openstack/nova-cell1-novncproxy-0" Feb 27 16:31:58 crc kubenswrapper[4830]: I0227 16:31:58.552790 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/21656f50-51b8-4761-8b9e-c2b823dace13-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"21656f50-51b8-4761-8b9e-c2b823dace13\") " pod="openstack/nova-cell1-novncproxy-0" Feb 27 16:31:58 crc kubenswrapper[4830]: I0227 16:31:58.552924 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21656f50-51b8-4761-8b9e-c2b823dace13-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: 
\"21656f50-51b8-4761-8b9e-c2b823dace13\") " pod="openstack/nova-cell1-novncproxy-0" Feb 27 16:31:58 crc kubenswrapper[4830]: I0227 16:31:58.553238 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/21656f50-51b8-4761-8b9e-c2b823dace13-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"21656f50-51b8-4761-8b9e-c2b823dace13\") " pod="openstack/nova-cell1-novncproxy-0" Feb 27 16:31:58 crc kubenswrapper[4830]: I0227 16:31:58.553352 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdlml\" (UniqueName: \"kubernetes.io/projected/21656f50-51b8-4761-8b9e-c2b823dace13-kube-api-access-rdlml\") pod \"nova-cell1-novncproxy-0\" (UID: \"21656f50-51b8-4761-8b9e-c2b823dace13\") " pod="openstack/nova-cell1-novncproxy-0" Feb 27 16:31:58 crc kubenswrapper[4830]: I0227 16:31:58.655122 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21656f50-51b8-4761-8b9e-c2b823dace13-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"21656f50-51b8-4761-8b9e-c2b823dace13\") " pod="openstack/nova-cell1-novncproxy-0" Feb 27 16:31:58 crc kubenswrapper[4830]: I0227 16:31:58.655175 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21656f50-51b8-4761-8b9e-c2b823dace13-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"21656f50-51b8-4761-8b9e-c2b823dace13\") " pod="openstack/nova-cell1-novncproxy-0" Feb 27 16:31:58 crc kubenswrapper[4830]: I0227 16:31:58.655193 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/21656f50-51b8-4761-8b9e-c2b823dace13-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: 
\"21656f50-51b8-4761-8b9e-c2b823dace13\") " pod="openstack/nova-cell1-novncproxy-0" Feb 27 16:31:58 crc kubenswrapper[4830]: I0227 16:31:58.655259 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/21656f50-51b8-4761-8b9e-c2b823dace13-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"21656f50-51b8-4761-8b9e-c2b823dace13\") " pod="openstack/nova-cell1-novncproxy-0" Feb 27 16:31:58 crc kubenswrapper[4830]: I0227 16:31:58.655298 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdlml\" (UniqueName: \"kubernetes.io/projected/21656f50-51b8-4761-8b9e-c2b823dace13-kube-api-access-rdlml\") pod \"nova-cell1-novncproxy-0\" (UID: \"21656f50-51b8-4761-8b9e-c2b823dace13\") " pod="openstack/nova-cell1-novncproxy-0" Feb 27 16:31:58 crc kubenswrapper[4830]: I0227 16:31:58.659740 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/21656f50-51b8-4761-8b9e-c2b823dace13-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"21656f50-51b8-4761-8b9e-c2b823dace13\") " pod="openstack/nova-cell1-novncproxy-0" Feb 27 16:31:58 crc kubenswrapper[4830]: I0227 16:31:58.660607 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21656f50-51b8-4761-8b9e-c2b823dace13-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"21656f50-51b8-4761-8b9e-c2b823dace13\") " pod="openstack/nova-cell1-novncproxy-0" Feb 27 16:31:58 crc kubenswrapper[4830]: I0227 16:31:58.660725 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/21656f50-51b8-4761-8b9e-c2b823dace13-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"21656f50-51b8-4761-8b9e-c2b823dace13\") " 
pod="openstack/nova-cell1-novncproxy-0" Feb 27 16:31:58 crc kubenswrapper[4830]: I0227 16:31:58.661324 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21656f50-51b8-4761-8b9e-c2b823dace13-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"21656f50-51b8-4761-8b9e-c2b823dace13\") " pod="openstack/nova-cell1-novncproxy-0" Feb 27 16:31:58 crc kubenswrapper[4830]: I0227 16:31:58.676828 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdlml\" (UniqueName: \"kubernetes.io/projected/21656f50-51b8-4761-8b9e-c2b823dace13-kube-api-access-rdlml\") pod \"nova-cell1-novncproxy-0\" (UID: \"21656f50-51b8-4761-8b9e-c2b823dace13\") " pod="openstack/nova-cell1-novncproxy-0" Feb 27 16:31:58 crc kubenswrapper[4830]: I0227 16:31:58.759567 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 27 16:31:58 crc kubenswrapper[4830]: I0227 16:31:58.786347 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03f3ea66-a50c-42c4-a54b-5ea85ac2973f" path="/var/lib/kubelet/pods/03f3ea66-a50c-42c4-a54b-5ea85ac2973f/volumes" Feb 27 16:31:59 crc kubenswrapper[4830]: I0227 16:31:59.071261 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"011b41ca-3a73-4e05-a626-fd630fe10bd5","Type":"ContainerStarted","Data":"c76d965da2c975a0273aeb23d58784ca41a628b70191cb5a53634f0746ebd045"} Feb 27 16:31:59 crc kubenswrapper[4830]: W0227 16:31:59.248901 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod21656f50_51b8_4761_8b9e_c2b823dace13.slice/crio-d17a62450a3a94180b3ce51f2368de76aa3ea9b22a04ed67e84a909447fa119c WatchSource:0}: Error finding container d17a62450a3a94180b3ce51f2368de76aa3ea9b22a04ed67e84a909447fa119c: Status 404 returned error can't find the container with id 
d17a62450a3a94180b3ce51f2368de76aa3ea9b22a04ed67e84a909447fa119c Feb 27 16:31:59 crc kubenswrapper[4830]: I0227 16:31:59.251524 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 27 16:32:00 crc kubenswrapper[4830]: I0227 16:32:00.084274 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"21656f50-51b8-4761-8b9e-c2b823dace13","Type":"ContainerStarted","Data":"a3e19fe9784a7e84ad00ba5db518baa23ac731605584cf84a3a6192b109fa71e"} Feb 27 16:32:00 crc kubenswrapper[4830]: I0227 16:32:00.084856 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"21656f50-51b8-4761-8b9e-c2b823dace13","Type":"ContainerStarted","Data":"d17a62450a3a94180b3ce51f2368de76aa3ea9b22a04ed67e84a909447fa119c"} Feb 27 16:32:00 crc kubenswrapper[4830]: I0227 16:32:00.102484 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.102470135 podStartE2EDuration="2.102470135s" podCreationTimestamp="2026-02-27 16:31:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:32:00.098458878 +0000 UTC m=+1516.187731331" watchObservedRunningTime="2026-02-27 16:32:00.102470135 +0000 UTC m=+1516.191742598" Feb 27 16:32:00 crc kubenswrapper[4830]: I0227 16:32:00.139701 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536832-nwxgv"] Feb 27 16:32:00 crc kubenswrapper[4830]: I0227 16:32:00.140974 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536832-nwxgv" Feb 27 16:32:00 crc kubenswrapper[4830]: I0227 16:32:00.146622 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 16:32:00 crc kubenswrapper[4830]: I0227 16:32:00.146838 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lmbtm" Feb 27 16:32:00 crc kubenswrapper[4830]: I0227 16:32:00.146903 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 16:32:00 crc kubenswrapper[4830]: I0227 16:32:00.154672 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536832-nwxgv"] Feb 27 16:32:00 crc kubenswrapper[4830]: I0227 16:32:00.191371 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 27 16:32:00 crc kubenswrapper[4830]: I0227 16:32:00.192156 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 27 16:32:00 crc kubenswrapper[4830]: I0227 16:32:00.192772 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 27 16:32:00 crc kubenswrapper[4830]: I0227 16:32:00.202995 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 27 16:32:00 crc kubenswrapper[4830]: I0227 16:32:00.290812 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sz5bk\" (UniqueName: \"kubernetes.io/projected/3706b00e-6257-4879-b0bb-066b912637da-kube-api-access-sz5bk\") pod \"auto-csr-approver-29536832-nwxgv\" (UID: \"3706b00e-6257-4879-b0bb-066b912637da\") " pod="openshift-infra/auto-csr-approver-29536832-nwxgv" Feb 27 16:32:00 crc kubenswrapper[4830]: I0227 16:32:00.392603 4830 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-sz5bk\" (UniqueName: \"kubernetes.io/projected/3706b00e-6257-4879-b0bb-066b912637da-kube-api-access-sz5bk\") pod \"auto-csr-approver-29536832-nwxgv\" (UID: \"3706b00e-6257-4879-b0bb-066b912637da\") " pod="openshift-infra/auto-csr-approver-29536832-nwxgv" Feb 27 16:32:00 crc kubenswrapper[4830]: I0227 16:32:00.414349 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sz5bk\" (UniqueName: \"kubernetes.io/projected/3706b00e-6257-4879-b0bb-066b912637da-kube-api-access-sz5bk\") pod \"auto-csr-approver-29536832-nwxgv\" (UID: \"3706b00e-6257-4879-b0bb-066b912637da\") " pod="openshift-infra/auto-csr-approver-29536832-nwxgv" Feb 27 16:32:00 crc kubenswrapper[4830]: I0227 16:32:00.462138 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536832-nwxgv" Feb 27 16:32:00 crc kubenswrapper[4830]: I0227 16:32:00.946759 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536832-nwxgv"] Feb 27 16:32:00 crc kubenswrapper[4830]: I0227 16:32:00.959116 4830 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 27 16:32:01 crc kubenswrapper[4830]: I0227 16:32:01.100560 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"011b41ca-3a73-4e05-a626-fd630fe10bd5","Type":"ContainerStarted","Data":"69c1844d408b9d354e0304a471b14402f998109d3d20675bd6d2a50bbacaf35f"} Feb 27 16:32:01 crc kubenswrapper[4830]: I0227 16:32:01.101052 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 27 16:32:01 crc kubenswrapper[4830]: I0227 16:32:01.103351 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536832-nwxgv" 
event={"ID":"3706b00e-6257-4879-b0bb-066b912637da","Type":"ContainerStarted","Data":"c90978768ba2290da94bdf86b62fc72d7f43c4ddf234110a45aec59186096029"} Feb 27 16:32:01 crc kubenswrapper[4830]: I0227 16:32:01.103892 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 27 16:32:01 crc kubenswrapper[4830]: I0227 16:32:01.111743 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 27 16:32:01 crc kubenswrapper[4830]: I0227 16:32:01.139855 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.378317927 podStartE2EDuration="6.139833023s" podCreationTimestamp="2026-02-27 16:31:55 +0000 UTC" firstStartedPulling="2026-02-27 16:31:56.17223698 +0000 UTC m=+1512.261509443" lastFinishedPulling="2026-02-27 16:31:59.933752066 +0000 UTC m=+1516.023024539" observedRunningTime="2026-02-27 16:32:01.13805517 +0000 UTC m=+1517.227327673" watchObservedRunningTime="2026-02-27 16:32:01.139833023 +0000 UTC m=+1517.229105506" Feb 27 16:32:01 crc kubenswrapper[4830]: I0227 16:32:01.317145 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-dmhcp"] Feb 27 16:32:01 crc kubenswrapper[4830]: I0227 16:32:01.318884 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-cd5cbd7b9-dmhcp" Feb 27 16:32:01 crc kubenswrapper[4830]: I0227 16:32:01.333236 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-dmhcp"] Feb 27 16:32:01 crc kubenswrapper[4830]: I0227 16:32:01.381212 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 27 16:32:01 crc kubenswrapper[4830]: I0227 16:32:01.423843 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6v9sf\" (UniqueName: \"kubernetes.io/projected/23db3cbd-39ac-4137-8a7e-0533af96e5b1-kube-api-access-6v9sf\") pod \"dnsmasq-dns-cd5cbd7b9-dmhcp\" (UID: \"23db3cbd-39ac-4137-8a7e-0533af96e5b1\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-dmhcp" Feb 27 16:32:01 crc kubenswrapper[4830]: I0227 16:32:01.423913 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/23db3cbd-39ac-4137-8a7e-0533af96e5b1-dns-svc\") pod \"dnsmasq-dns-cd5cbd7b9-dmhcp\" (UID: \"23db3cbd-39ac-4137-8a7e-0533af96e5b1\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-dmhcp" Feb 27 16:32:01 crc kubenswrapper[4830]: I0227 16:32:01.423953 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/23db3cbd-39ac-4137-8a7e-0533af96e5b1-ovsdbserver-nb\") pod \"dnsmasq-dns-cd5cbd7b9-dmhcp\" (UID: \"23db3cbd-39ac-4137-8a7e-0533af96e5b1\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-dmhcp" Feb 27 16:32:01 crc kubenswrapper[4830]: I0227 16:32:01.423979 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/23db3cbd-39ac-4137-8a7e-0533af96e5b1-dns-swift-storage-0\") pod \"dnsmasq-dns-cd5cbd7b9-dmhcp\" (UID: \"23db3cbd-39ac-4137-8a7e-0533af96e5b1\") " 
pod="openstack/dnsmasq-dns-cd5cbd7b9-dmhcp" Feb 27 16:32:01 crc kubenswrapper[4830]: I0227 16:32:01.424066 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/23db3cbd-39ac-4137-8a7e-0533af96e5b1-ovsdbserver-sb\") pod \"dnsmasq-dns-cd5cbd7b9-dmhcp\" (UID: \"23db3cbd-39ac-4137-8a7e-0533af96e5b1\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-dmhcp" Feb 27 16:32:01 crc kubenswrapper[4830]: I0227 16:32:01.424256 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23db3cbd-39ac-4137-8a7e-0533af96e5b1-config\") pod \"dnsmasq-dns-cd5cbd7b9-dmhcp\" (UID: \"23db3cbd-39ac-4137-8a7e-0533af96e5b1\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-dmhcp" Feb 27 16:32:01 crc kubenswrapper[4830]: I0227 16:32:01.526525 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6v9sf\" (UniqueName: \"kubernetes.io/projected/23db3cbd-39ac-4137-8a7e-0533af96e5b1-kube-api-access-6v9sf\") pod \"dnsmasq-dns-cd5cbd7b9-dmhcp\" (UID: \"23db3cbd-39ac-4137-8a7e-0533af96e5b1\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-dmhcp" Feb 27 16:32:01 crc kubenswrapper[4830]: I0227 16:32:01.526611 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/23db3cbd-39ac-4137-8a7e-0533af96e5b1-dns-svc\") pod \"dnsmasq-dns-cd5cbd7b9-dmhcp\" (UID: \"23db3cbd-39ac-4137-8a7e-0533af96e5b1\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-dmhcp" Feb 27 16:32:01 crc kubenswrapper[4830]: I0227 16:32:01.526639 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/23db3cbd-39ac-4137-8a7e-0533af96e5b1-ovsdbserver-nb\") pod \"dnsmasq-dns-cd5cbd7b9-dmhcp\" (UID: \"23db3cbd-39ac-4137-8a7e-0533af96e5b1\") " 
pod="openstack/dnsmasq-dns-cd5cbd7b9-dmhcp" Feb 27 16:32:01 crc kubenswrapper[4830]: I0227 16:32:01.526660 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/23db3cbd-39ac-4137-8a7e-0533af96e5b1-dns-swift-storage-0\") pod \"dnsmasq-dns-cd5cbd7b9-dmhcp\" (UID: \"23db3cbd-39ac-4137-8a7e-0533af96e5b1\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-dmhcp" Feb 27 16:32:01 crc kubenswrapper[4830]: I0227 16:32:01.526697 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/23db3cbd-39ac-4137-8a7e-0533af96e5b1-ovsdbserver-sb\") pod \"dnsmasq-dns-cd5cbd7b9-dmhcp\" (UID: \"23db3cbd-39ac-4137-8a7e-0533af96e5b1\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-dmhcp" Feb 27 16:32:01 crc kubenswrapper[4830]: I0227 16:32:01.526740 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23db3cbd-39ac-4137-8a7e-0533af96e5b1-config\") pod \"dnsmasq-dns-cd5cbd7b9-dmhcp\" (UID: \"23db3cbd-39ac-4137-8a7e-0533af96e5b1\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-dmhcp" Feb 27 16:32:01 crc kubenswrapper[4830]: I0227 16:32:01.527709 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/23db3cbd-39ac-4137-8a7e-0533af96e5b1-dns-swift-storage-0\") pod \"dnsmasq-dns-cd5cbd7b9-dmhcp\" (UID: \"23db3cbd-39ac-4137-8a7e-0533af96e5b1\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-dmhcp" Feb 27 16:32:01 crc kubenswrapper[4830]: I0227 16:32:01.527752 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/23db3cbd-39ac-4137-8a7e-0533af96e5b1-ovsdbserver-nb\") pod \"dnsmasq-dns-cd5cbd7b9-dmhcp\" (UID: \"23db3cbd-39ac-4137-8a7e-0533af96e5b1\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-dmhcp" Feb 27 16:32:01 crc 
kubenswrapper[4830]: I0227 16:32:01.528269 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/23db3cbd-39ac-4137-8a7e-0533af96e5b1-ovsdbserver-sb\") pod \"dnsmasq-dns-cd5cbd7b9-dmhcp\" (UID: \"23db3cbd-39ac-4137-8a7e-0533af96e5b1\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-dmhcp" Feb 27 16:32:01 crc kubenswrapper[4830]: I0227 16:32:01.528419 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/23db3cbd-39ac-4137-8a7e-0533af96e5b1-dns-svc\") pod \"dnsmasq-dns-cd5cbd7b9-dmhcp\" (UID: \"23db3cbd-39ac-4137-8a7e-0533af96e5b1\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-dmhcp" Feb 27 16:32:01 crc kubenswrapper[4830]: I0227 16:32:01.528604 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23db3cbd-39ac-4137-8a7e-0533af96e5b1-config\") pod \"dnsmasq-dns-cd5cbd7b9-dmhcp\" (UID: \"23db3cbd-39ac-4137-8a7e-0533af96e5b1\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-dmhcp" Feb 27 16:32:01 crc kubenswrapper[4830]: I0227 16:32:01.559169 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6v9sf\" (UniqueName: \"kubernetes.io/projected/23db3cbd-39ac-4137-8a7e-0533af96e5b1-kube-api-access-6v9sf\") pod \"dnsmasq-dns-cd5cbd7b9-dmhcp\" (UID: \"23db3cbd-39ac-4137-8a7e-0533af96e5b1\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-dmhcp" Feb 27 16:32:01 crc kubenswrapper[4830]: I0227 16:32:01.655347 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-cd5cbd7b9-dmhcp" Feb 27 16:32:02 crc kubenswrapper[4830]: I0227 16:32:02.177042 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-dmhcp"] Feb 27 16:32:03 crc kubenswrapper[4830]: I0227 16:32:03.137871 4830 generic.go:334] "Generic (PLEG): container finished" podID="23db3cbd-39ac-4137-8a7e-0533af96e5b1" containerID="bde345255725008534174e08aa3bfd1e9e5abd79b8d35b0ffbbec8fdecf1e21f" exitCode=0 Feb 27 16:32:03 crc kubenswrapper[4830]: I0227 16:32:03.138175 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-dmhcp" event={"ID":"23db3cbd-39ac-4137-8a7e-0533af96e5b1","Type":"ContainerDied","Data":"bde345255725008534174e08aa3bfd1e9e5abd79b8d35b0ffbbec8fdecf1e21f"} Feb 27 16:32:03 crc kubenswrapper[4830]: I0227 16:32:03.138199 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-dmhcp" event={"ID":"23db3cbd-39ac-4137-8a7e-0533af96e5b1","Type":"ContainerStarted","Data":"37060d261048bdd878ea526cb1f8c5e1bdf8de7dfa50b5e84e600756b107d840"} Feb 27 16:32:03 crc kubenswrapper[4830]: I0227 16:32:03.141651 4830 generic.go:334] "Generic (PLEG): container finished" podID="3706b00e-6257-4879-b0bb-066b912637da" containerID="6e897d68c31265e9f5fea3191c220fdd3f653e9c14499ea7470715d9f71ca8e2" exitCode=0 Feb 27 16:32:03 crc kubenswrapper[4830]: I0227 16:32:03.142128 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536832-nwxgv" event={"ID":"3706b00e-6257-4879-b0bb-066b912637da","Type":"ContainerDied","Data":"6e897d68c31265e9f5fea3191c220fdd3f653e9c14499ea7470715d9f71ca8e2"} Feb 27 16:32:03 crc kubenswrapper[4830]: I0227 16:32:03.165433 4830 patch_prober.go:28] interesting pod/machine-config-daemon-2tv5v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 16:32:03 crc kubenswrapper[4830]: I0227 16:32:03.165650 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 16:32:03 crc kubenswrapper[4830]: I0227 16:32:03.434210 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 27 16:32:03 crc kubenswrapper[4830]: I0227 16:32:03.434696 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="011b41ca-3a73-4e05-a626-fd630fe10bd5" containerName="ceilometer-central-agent" containerID="cri-o://886a0228ee148ff9da749037f7f01afafdd051b62ca863aa6c6b5daca54e39d8" gracePeriod=30 Feb 27 16:32:03 crc kubenswrapper[4830]: I0227 16:32:03.434811 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="011b41ca-3a73-4e05-a626-fd630fe10bd5" containerName="proxy-httpd" containerID="cri-o://69c1844d408b9d354e0304a471b14402f998109d3d20675bd6d2a50bbacaf35f" gracePeriod=30 Feb 27 16:32:03 crc kubenswrapper[4830]: I0227 16:32:03.434852 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="011b41ca-3a73-4e05-a626-fd630fe10bd5" containerName="sg-core" containerID="cri-o://c76d965da2c975a0273aeb23d58784ca41a628b70191cb5a53634f0746ebd045" gracePeriod=30 Feb 27 16:32:03 crc kubenswrapper[4830]: I0227 16:32:03.434922 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="011b41ca-3a73-4e05-a626-fd630fe10bd5" containerName="ceilometer-notification-agent" containerID="cri-o://27f08ba3ee7493e2750f049ee399156717aab13c7564c7e0f4e1cbc42612183d" gracePeriod=30 
Feb 27 16:32:03 crc kubenswrapper[4830]: I0227 16:32:03.760020 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 27 16:32:03 crc kubenswrapper[4830]: I0227 16:32:03.859255 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 27 16:32:04 crc kubenswrapper[4830]: I0227 16:32:04.155280 4830 generic.go:334] "Generic (PLEG): container finished" podID="011b41ca-3a73-4e05-a626-fd630fe10bd5" containerID="69c1844d408b9d354e0304a471b14402f998109d3d20675bd6d2a50bbacaf35f" exitCode=0 Feb 27 16:32:04 crc kubenswrapper[4830]: I0227 16:32:04.155334 4830 generic.go:334] "Generic (PLEG): container finished" podID="011b41ca-3a73-4e05-a626-fd630fe10bd5" containerID="c76d965da2c975a0273aeb23d58784ca41a628b70191cb5a53634f0746ebd045" exitCode=2 Feb 27 16:32:04 crc kubenswrapper[4830]: I0227 16:32:04.155352 4830 generic.go:334] "Generic (PLEG): container finished" podID="011b41ca-3a73-4e05-a626-fd630fe10bd5" containerID="27f08ba3ee7493e2750f049ee399156717aab13c7564c7e0f4e1cbc42612183d" exitCode=0 Feb 27 16:32:04 crc kubenswrapper[4830]: I0227 16:32:04.155412 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"011b41ca-3a73-4e05-a626-fd630fe10bd5","Type":"ContainerDied","Data":"69c1844d408b9d354e0304a471b14402f998109d3d20675bd6d2a50bbacaf35f"} Feb 27 16:32:04 crc kubenswrapper[4830]: I0227 16:32:04.155451 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"011b41ca-3a73-4e05-a626-fd630fe10bd5","Type":"ContainerDied","Data":"c76d965da2c975a0273aeb23d58784ca41a628b70191cb5a53634f0746ebd045"} Feb 27 16:32:04 crc kubenswrapper[4830]: I0227 16:32:04.155470 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"011b41ca-3a73-4e05-a626-fd630fe10bd5","Type":"ContainerDied","Data":"27f08ba3ee7493e2750f049ee399156717aab13c7564c7e0f4e1cbc42612183d"} Feb 27 16:32:04 crc 
kubenswrapper[4830]: I0227 16:32:04.157627 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="42523523-0867-4520-8f31-f11949fb08f4" containerName="nova-api-log" containerID="cri-o://1382af5fe49790df2c419c664aad5930fef37ad780b2c578a935c9462e7352bb" gracePeriod=30 Feb 27 16:32:04 crc kubenswrapper[4830]: I0227 16:32:04.160513 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-dmhcp" event={"ID":"23db3cbd-39ac-4137-8a7e-0533af96e5b1","Type":"ContainerStarted","Data":"5e4b95ff9e120a4e75ce39c775be2aee2b80b55e4a33fe61a9e413a3ae463cf6"} Feb 27 16:32:04 crc kubenswrapper[4830]: I0227 16:32:04.160651 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-cd5cbd7b9-dmhcp" Feb 27 16:32:04 crc kubenswrapper[4830]: I0227 16:32:04.161367 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="42523523-0867-4520-8f31-f11949fb08f4" containerName="nova-api-api" containerID="cri-o://1d6559428ad57ae73914e9bc0f3d82e639a8f07404438b74f346af3906edf948" gracePeriod=30 Feb 27 16:32:04 crc kubenswrapper[4830]: I0227 16:32:04.183603 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-cd5cbd7b9-dmhcp" podStartSLOduration=3.183581598 podStartE2EDuration="3.183581598s" podCreationTimestamp="2026-02-27 16:32:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:32:04.181618921 +0000 UTC m=+1520.270891405" watchObservedRunningTime="2026-02-27 16:32:04.183581598 +0000 UTC m=+1520.272854071" Feb 27 16:32:04 crc kubenswrapper[4830]: I0227 16:32:04.590459 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536832-nwxgv" Feb 27 16:32:04 crc kubenswrapper[4830]: I0227 16:32:04.701247 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sz5bk\" (UniqueName: \"kubernetes.io/projected/3706b00e-6257-4879-b0bb-066b912637da-kube-api-access-sz5bk\") pod \"3706b00e-6257-4879-b0bb-066b912637da\" (UID: \"3706b00e-6257-4879-b0bb-066b912637da\") " Feb 27 16:32:04 crc kubenswrapper[4830]: I0227 16:32:04.714164 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3706b00e-6257-4879-b0bb-066b912637da-kube-api-access-sz5bk" (OuterVolumeSpecName: "kube-api-access-sz5bk") pod "3706b00e-6257-4879-b0bb-066b912637da" (UID: "3706b00e-6257-4879-b0bb-066b912637da"). InnerVolumeSpecName "kube-api-access-sz5bk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:32:04 crc kubenswrapper[4830]: I0227 16:32:04.803763 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sz5bk\" (UniqueName: \"kubernetes.io/projected/3706b00e-6257-4879-b0bb-066b912637da-kube-api-access-sz5bk\") on node \"crc\" DevicePath \"\"" Feb 27 16:32:05 crc kubenswrapper[4830]: I0227 16:32:05.021003 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 27 16:32:05 crc kubenswrapper[4830]: I0227 16:32:05.107499 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5tvxb\" (UniqueName: \"kubernetes.io/projected/011b41ca-3a73-4e05-a626-fd630fe10bd5-kube-api-access-5tvxb\") pod \"011b41ca-3a73-4e05-a626-fd630fe10bd5\" (UID: \"011b41ca-3a73-4e05-a626-fd630fe10bd5\") " Feb 27 16:32:05 crc kubenswrapper[4830]: I0227 16:32:05.107677 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/011b41ca-3a73-4e05-a626-fd630fe10bd5-log-httpd\") pod \"011b41ca-3a73-4e05-a626-fd630fe10bd5\" (UID: \"011b41ca-3a73-4e05-a626-fd630fe10bd5\") " Feb 27 16:32:05 crc kubenswrapper[4830]: I0227 16:32:05.107779 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/011b41ca-3a73-4e05-a626-fd630fe10bd5-run-httpd\") pod \"011b41ca-3a73-4e05-a626-fd630fe10bd5\" (UID: \"011b41ca-3a73-4e05-a626-fd630fe10bd5\") " Feb 27 16:32:05 crc kubenswrapper[4830]: I0227 16:32:05.107832 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/011b41ca-3a73-4e05-a626-fd630fe10bd5-sg-core-conf-yaml\") pod \"011b41ca-3a73-4e05-a626-fd630fe10bd5\" (UID: \"011b41ca-3a73-4e05-a626-fd630fe10bd5\") " Feb 27 16:32:05 crc kubenswrapper[4830]: I0227 16:32:05.107900 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/011b41ca-3a73-4e05-a626-fd630fe10bd5-combined-ca-bundle\") pod \"011b41ca-3a73-4e05-a626-fd630fe10bd5\" (UID: \"011b41ca-3a73-4e05-a626-fd630fe10bd5\") " Feb 27 16:32:05 crc kubenswrapper[4830]: I0227 16:32:05.107936 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/011b41ca-3a73-4e05-a626-fd630fe10bd5-config-data\") pod \"011b41ca-3a73-4e05-a626-fd630fe10bd5\" (UID: \"011b41ca-3a73-4e05-a626-fd630fe10bd5\") " Feb 27 16:32:05 crc kubenswrapper[4830]: I0227 16:32:05.108056 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/011b41ca-3a73-4e05-a626-fd630fe10bd5-ceilometer-tls-certs\") pod \"011b41ca-3a73-4e05-a626-fd630fe10bd5\" (UID: \"011b41ca-3a73-4e05-a626-fd630fe10bd5\") " Feb 27 16:32:05 crc kubenswrapper[4830]: I0227 16:32:05.108081 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/011b41ca-3a73-4e05-a626-fd630fe10bd5-scripts\") pod \"011b41ca-3a73-4e05-a626-fd630fe10bd5\" (UID: \"011b41ca-3a73-4e05-a626-fd630fe10bd5\") " Feb 27 16:32:05 crc kubenswrapper[4830]: I0227 16:32:05.110767 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/011b41ca-3a73-4e05-a626-fd630fe10bd5-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "011b41ca-3a73-4e05-a626-fd630fe10bd5" (UID: "011b41ca-3a73-4e05-a626-fd630fe10bd5"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 16:32:05 crc kubenswrapper[4830]: I0227 16:32:05.114065 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/011b41ca-3a73-4e05-a626-fd630fe10bd5-kube-api-access-5tvxb" (OuterVolumeSpecName: "kube-api-access-5tvxb") pod "011b41ca-3a73-4e05-a626-fd630fe10bd5" (UID: "011b41ca-3a73-4e05-a626-fd630fe10bd5"). InnerVolumeSpecName "kube-api-access-5tvxb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:32:05 crc kubenswrapper[4830]: I0227 16:32:05.114398 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/011b41ca-3a73-4e05-a626-fd630fe10bd5-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "011b41ca-3a73-4e05-a626-fd630fe10bd5" (UID: "011b41ca-3a73-4e05-a626-fd630fe10bd5"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 16:32:05 crc kubenswrapper[4830]: I0227 16:32:05.124000 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/011b41ca-3a73-4e05-a626-fd630fe10bd5-scripts" (OuterVolumeSpecName: "scripts") pod "011b41ca-3a73-4e05-a626-fd630fe10bd5" (UID: "011b41ca-3a73-4e05-a626-fd630fe10bd5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:32:05 crc kubenswrapper[4830]: I0227 16:32:05.140079 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/011b41ca-3a73-4e05-a626-fd630fe10bd5-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "011b41ca-3a73-4e05-a626-fd630fe10bd5" (UID: "011b41ca-3a73-4e05-a626-fd630fe10bd5"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:32:05 crc kubenswrapper[4830]: I0227 16:32:05.172435 4830 generic.go:334] "Generic (PLEG): container finished" podID="011b41ca-3a73-4e05-a626-fd630fe10bd5" containerID="886a0228ee148ff9da749037f7f01afafdd051b62ca863aa6c6b5daca54e39d8" exitCode=0 Feb 27 16:32:05 crc kubenswrapper[4830]: I0227 16:32:05.172544 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"011b41ca-3a73-4e05-a626-fd630fe10bd5","Type":"ContainerDied","Data":"886a0228ee148ff9da749037f7f01afafdd051b62ca863aa6c6b5daca54e39d8"} Feb 27 16:32:05 crc kubenswrapper[4830]: I0227 16:32:05.172629 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"011b41ca-3a73-4e05-a626-fd630fe10bd5","Type":"ContainerDied","Data":"3009c6a73dde29fca43c735e15125fab15ad914c49dd2741b2db5ab41b894c67"} Feb 27 16:32:05 crc kubenswrapper[4830]: I0227 16:32:05.172687 4830 scope.go:117] "RemoveContainer" containerID="69c1844d408b9d354e0304a471b14402f998109d3d20675bd6d2a50bbacaf35f" Feb 27 16:32:05 crc kubenswrapper[4830]: I0227 16:32:05.172791 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 27 16:32:05 crc kubenswrapper[4830]: I0227 16:32:05.180056 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/011b41ca-3a73-4e05-a626-fd630fe10bd5-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "011b41ca-3a73-4e05-a626-fd630fe10bd5" (UID: "011b41ca-3a73-4e05-a626-fd630fe10bd5"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:32:05 crc kubenswrapper[4830]: I0227 16:32:05.180454 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536832-nwxgv" Feb 27 16:32:05 crc kubenswrapper[4830]: I0227 16:32:05.180455 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536832-nwxgv" event={"ID":"3706b00e-6257-4879-b0bb-066b912637da","Type":"ContainerDied","Data":"c90978768ba2290da94bdf86b62fc72d7f43c4ddf234110a45aec59186096029"} Feb 27 16:32:05 crc kubenswrapper[4830]: I0227 16:32:05.180603 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c90978768ba2290da94bdf86b62fc72d7f43c4ddf234110a45aec59186096029" Feb 27 16:32:05 crc kubenswrapper[4830]: I0227 16:32:05.182461 4830 generic.go:334] "Generic (PLEG): container finished" podID="42523523-0867-4520-8f31-f11949fb08f4" containerID="1382af5fe49790df2c419c664aad5930fef37ad780b2c578a935c9462e7352bb" exitCode=143 Feb 27 16:32:05 crc kubenswrapper[4830]: I0227 16:32:05.183308 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"42523523-0867-4520-8f31-f11949fb08f4","Type":"ContainerDied","Data":"1382af5fe49790df2c419c664aad5930fef37ad780b2c578a935c9462e7352bb"} Feb 27 16:32:05 crc kubenswrapper[4830]: I0227 16:32:05.191591 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/011b41ca-3a73-4e05-a626-fd630fe10bd5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "011b41ca-3a73-4e05-a626-fd630fe10bd5" (UID: "011b41ca-3a73-4e05-a626-fd630fe10bd5"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:32:05 crc kubenswrapper[4830]: I0227 16:32:05.207537 4830 scope.go:117] "RemoveContainer" containerID="c76d965da2c975a0273aeb23d58784ca41a628b70191cb5a53634f0746ebd045" Feb 27 16:32:05 crc kubenswrapper[4830]: I0227 16:32:05.210460 4830 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/011b41ca-3a73-4e05-a626-fd630fe10bd5-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 27 16:32:05 crc kubenswrapper[4830]: I0227 16:32:05.210484 4830 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/011b41ca-3a73-4e05-a626-fd630fe10bd5-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 27 16:32:05 crc kubenswrapper[4830]: I0227 16:32:05.210492 4830 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/011b41ca-3a73-4e05-a626-fd630fe10bd5-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 27 16:32:05 crc kubenswrapper[4830]: I0227 16:32:05.210501 4830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/011b41ca-3a73-4e05-a626-fd630fe10bd5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 16:32:05 crc kubenswrapper[4830]: I0227 16:32:05.210528 4830 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/011b41ca-3a73-4e05-a626-fd630fe10bd5-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 27 16:32:05 crc kubenswrapper[4830]: I0227 16:32:05.210537 4830 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/011b41ca-3a73-4e05-a626-fd630fe10bd5-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 16:32:05 crc kubenswrapper[4830]: I0227 16:32:05.210545 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5tvxb\" (UniqueName: 
\"kubernetes.io/projected/011b41ca-3a73-4e05-a626-fd630fe10bd5-kube-api-access-5tvxb\") on node \"crc\" DevicePath \"\"" Feb 27 16:32:05 crc kubenswrapper[4830]: I0227 16:32:05.213974 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/011b41ca-3a73-4e05-a626-fd630fe10bd5-config-data" (OuterVolumeSpecName: "config-data") pod "011b41ca-3a73-4e05-a626-fd630fe10bd5" (UID: "011b41ca-3a73-4e05-a626-fd630fe10bd5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:32:05 crc kubenswrapper[4830]: I0227 16:32:05.228803 4830 scope.go:117] "RemoveContainer" containerID="27f08ba3ee7493e2750f049ee399156717aab13c7564c7e0f4e1cbc42612183d" Feb 27 16:32:05 crc kubenswrapper[4830]: I0227 16:32:05.259320 4830 scope.go:117] "RemoveContainer" containerID="886a0228ee148ff9da749037f7f01afafdd051b62ca863aa6c6b5daca54e39d8" Feb 27 16:32:05 crc kubenswrapper[4830]: I0227 16:32:05.279747 4830 scope.go:117] "RemoveContainer" containerID="69c1844d408b9d354e0304a471b14402f998109d3d20675bd6d2a50bbacaf35f" Feb 27 16:32:05 crc kubenswrapper[4830]: E0227 16:32:05.280195 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"69c1844d408b9d354e0304a471b14402f998109d3d20675bd6d2a50bbacaf35f\": container with ID starting with 69c1844d408b9d354e0304a471b14402f998109d3d20675bd6d2a50bbacaf35f not found: ID does not exist" containerID="69c1844d408b9d354e0304a471b14402f998109d3d20675bd6d2a50bbacaf35f" Feb 27 16:32:05 crc kubenswrapper[4830]: I0227 16:32:05.280225 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"69c1844d408b9d354e0304a471b14402f998109d3d20675bd6d2a50bbacaf35f"} err="failed to get container status \"69c1844d408b9d354e0304a471b14402f998109d3d20675bd6d2a50bbacaf35f\": rpc error: code = NotFound desc = could not find container 
\"69c1844d408b9d354e0304a471b14402f998109d3d20675bd6d2a50bbacaf35f\": container with ID starting with 69c1844d408b9d354e0304a471b14402f998109d3d20675bd6d2a50bbacaf35f not found: ID does not exist" Feb 27 16:32:05 crc kubenswrapper[4830]: I0227 16:32:05.280244 4830 scope.go:117] "RemoveContainer" containerID="c76d965da2c975a0273aeb23d58784ca41a628b70191cb5a53634f0746ebd045" Feb 27 16:32:05 crc kubenswrapper[4830]: E0227 16:32:05.280584 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c76d965da2c975a0273aeb23d58784ca41a628b70191cb5a53634f0746ebd045\": container with ID starting with c76d965da2c975a0273aeb23d58784ca41a628b70191cb5a53634f0746ebd045 not found: ID does not exist" containerID="c76d965da2c975a0273aeb23d58784ca41a628b70191cb5a53634f0746ebd045" Feb 27 16:32:05 crc kubenswrapper[4830]: I0227 16:32:05.280625 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c76d965da2c975a0273aeb23d58784ca41a628b70191cb5a53634f0746ebd045"} err="failed to get container status \"c76d965da2c975a0273aeb23d58784ca41a628b70191cb5a53634f0746ebd045\": rpc error: code = NotFound desc = could not find container \"c76d965da2c975a0273aeb23d58784ca41a628b70191cb5a53634f0746ebd045\": container with ID starting with c76d965da2c975a0273aeb23d58784ca41a628b70191cb5a53634f0746ebd045 not found: ID does not exist" Feb 27 16:32:05 crc kubenswrapper[4830]: I0227 16:32:05.280640 4830 scope.go:117] "RemoveContainer" containerID="27f08ba3ee7493e2750f049ee399156717aab13c7564c7e0f4e1cbc42612183d" Feb 27 16:32:05 crc kubenswrapper[4830]: E0227 16:32:05.281000 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"27f08ba3ee7493e2750f049ee399156717aab13c7564c7e0f4e1cbc42612183d\": container with ID starting with 27f08ba3ee7493e2750f049ee399156717aab13c7564c7e0f4e1cbc42612183d not found: ID does not exist" 
containerID="27f08ba3ee7493e2750f049ee399156717aab13c7564c7e0f4e1cbc42612183d" Feb 27 16:32:05 crc kubenswrapper[4830]: I0227 16:32:05.281039 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"27f08ba3ee7493e2750f049ee399156717aab13c7564c7e0f4e1cbc42612183d"} err="failed to get container status \"27f08ba3ee7493e2750f049ee399156717aab13c7564c7e0f4e1cbc42612183d\": rpc error: code = NotFound desc = could not find container \"27f08ba3ee7493e2750f049ee399156717aab13c7564c7e0f4e1cbc42612183d\": container with ID starting with 27f08ba3ee7493e2750f049ee399156717aab13c7564c7e0f4e1cbc42612183d not found: ID does not exist" Feb 27 16:32:05 crc kubenswrapper[4830]: I0227 16:32:05.281063 4830 scope.go:117] "RemoveContainer" containerID="886a0228ee148ff9da749037f7f01afafdd051b62ca863aa6c6b5daca54e39d8" Feb 27 16:32:05 crc kubenswrapper[4830]: E0227 16:32:05.281613 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"886a0228ee148ff9da749037f7f01afafdd051b62ca863aa6c6b5daca54e39d8\": container with ID starting with 886a0228ee148ff9da749037f7f01afafdd051b62ca863aa6c6b5daca54e39d8 not found: ID does not exist" containerID="886a0228ee148ff9da749037f7f01afafdd051b62ca863aa6c6b5daca54e39d8" Feb 27 16:32:05 crc kubenswrapper[4830]: I0227 16:32:05.281723 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"886a0228ee148ff9da749037f7f01afafdd051b62ca863aa6c6b5daca54e39d8"} err="failed to get container status \"886a0228ee148ff9da749037f7f01afafdd051b62ca863aa6c6b5daca54e39d8\": rpc error: code = NotFound desc = could not find container \"886a0228ee148ff9da749037f7f01afafdd051b62ca863aa6c6b5daca54e39d8\": container with ID starting with 886a0228ee148ff9da749037f7f01afafdd051b62ca863aa6c6b5daca54e39d8 not found: ID does not exist" Feb 27 16:32:05 crc kubenswrapper[4830]: I0227 16:32:05.311988 4830 
reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/011b41ca-3a73-4e05-a626-fd630fe10bd5-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 16:32:05 crc kubenswrapper[4830]: I0227 16:32:05.572572 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 27 16:32:05 crc kubenswrapper[4830]: I0227 16:32:05.582131 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 27 16:32:05 crc kubenswrapper[4830]: I0227 16:32:05.589582 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 27 16:32:05 crc kubenswrapper[4830]: E0227 16:32:05.589954 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="011b41ca-3a73-4e05-a626-fd630fe10bd5" containerName="ceilometer-notification-agent" Feb 27 16:32:05 crc kubenswrapper[4830]: I0227 16:32:05.589971 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="011b41ca-3a73-4e05-a626-fd630fe10bd5" containerName="ceilometer-notification-agent" Feb 27 16:32:05 crc kubenswrapper[4830]: E0227 16:32:05.589987 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="011b41ca-3a73-4e05-a626-fd630fe10bd5" containerName="ceilometer-central-agent" Feb 27 16:32:05 crc kubenswrapper[4830]: I0227 16:32:05.589993 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="011b41ca-3a73-4e05-a626-fd630fe10bd5" containerName="ceilometer-central-agent" Feb 27 16:32:05 crc kubenswrapper[4830]: E0227 16:32:05.590008 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="011b41ca-3a73-4e05-a626-fd630fe10bd5" containerName="sg-core" Feb 27 16:32:05 crc kubenswrapper[4830]: I0227 16:32:05.590013 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="011b41ca-3a73-4e05-a626-fd630fe10bd5" containerName="sg-core" Feb 27 16:32:05 crc kubenswrapper[4830]: E0227 16:32:05.590023 4830 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="011b41ca-3a73-4e05-a626-fd630fe10bd5" containerName="proxy-httpd" Feb 27 16:32:05 crc kubenswrapper[4830]: I0227 16:32:05.590029 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="011b41ca-3a73-4e05-a626-fd630fe10bd5" containerName="proxy-httpd" Feb 27 16:32:05 crc kubenswrapper[4830]: E0227 16:32:05.590040 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3706b00e-6257-4879-b0bb-066b912637da" containerName="oc" Feb 27 16:32:05 crc kubenswrapper[4830]: I0227 16:32:05.590045 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="3706b00e-6257-4879-b0bb-066b912637da" containerName="oc" Feb 27 16:32:05 crc kubenswrapper[4830]: I0227 16:32:05.590218 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="011b41ca-3a73-4e05-a626-fd630fe10bd5" containerName="ceilometer-notification-agent" Feb 27 16:32:05 crc kubenswrapper[4830]: I0227 16:32:05.590232 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="011b41ca-3a73-4e05-a626-fd630fe10bd5" containerName="ceilometer-central-agent" Feb 27 16:32:05 crc kubenswrapper[4830]: I0227 16:32:05.590240 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="3706b00e-6257-4879-b0bb-066b912637da" containerName="oc" Feb 27 16:32:05 crc kubenswrapper[4830]: I0227 16:32:05.590255 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="011b41ca-3a73-4e05-a626-fd630fe10bd5" containerName="sg-core" Feb 27 16:32:05 crc kubenswrapper[4830]: I0227 16:32:05.590271 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="011b41ca-3a73-4e05-a626-fd630fe10bd5" containerName="proxy-httpd" Feb 27 16:32:05 crc kubenswrapper[4830]: I0227 16:32:05.592400 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 27 16:32:05 crc kubenswrapper[4830]: I0227 16:32:05.595070 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 27 16:32:05 crc kubenswrapper[4830]: I0227 16:32:05.595124 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 27 16:32:05 crc kubenswrapper[4830]: I0227 16:32:05.595195 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 27 16:32:05 crc kubenswrapper[4830]: I0227 16:32:05.598401 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 27 16:32:05 crc kubenswrapper[4830]: I0227 16:32:05.651517 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536826-jgfgr"] Feb 27 16:32:05 crc kubenswrapper[4830]: I0227 16:32:05.660037 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536826-jgfgr"] Feb 27 16:32:05 crc kubenswrapper[4830]: I0227 16:32:05.725372 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b2fe2ad2-a0de-49aa-95fd-ef5f15032676-log-httpd\") pod \"ceilometer-0\" (UID: \"b2fe2ad2-a0de-49aa-95fd-ef5f15032676\") " pod="openstack/ceilometer-0" Feb 27 16:32:05 crc kubenswrapper[4830]: I0227 16:32:05.725425 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lc89c\" (UniqueName: \"kubernetes.io/projected/b2fe2ad2-a0de-49aa-95fd-ef5f15032676-kube-api-access-lc89c\") pod \"ceilometer-0\" (UID: \"b2fe2ad2-a0de-49aa-95fd-ef5f15032676\") " pod="openstack/ceilometer-0" Feb 27 16:32:05 crc kubenswrapper[4830]: I0227 16:32:05.725696 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/b2fe2ad2-a0de-49aa-95fd-ef5f15032676-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b2fe2ad2-a0de-49aa-95fd-ef5f15032676\") " pod="openstack/ceilometer-0" Feb 27 16:32:05 crc kubenswrapper[4830]: I0227 16:32:05.725751 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b2fe2ad2-a0de-49aa-95fd-ef5f15032676-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b2fe2ad2-a0de-49aa-95fd-ef5f15032676\") " pod="openstack/ceilometer-0" Feb 27 16:32:05 crc kubenswrapper[4830]: I0227 16:32:05.725790 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2fe2ad2-a0de-49aa-95fd-ef5f15032676-scripts\") pod \"ceilometer-0\" (UID: \"b2fe2ad2-a0de-49aa-95fd-ef5f15032676\") " pod="openstack/ceilometer-0" Feb 27 16:32:05 crc kubenswrapper[4830]: I0227 16:32:05.725846 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b2fe2ad2-a0de-49aa-95fd-ef5f15032676-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"b2fe2ad2-a0de-49aa-95fd-ef5f15032676\") " pod="openstack/ceilometer-0" Feb 27 16:32:05 crc kubenswrapper[4830]: I0227 16:32:05.725885 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b2fe2ad2-a0de-49aa-95fd-ef5f15032676-run-httpd\") pod \"ceilometer-0\" (UID: \"b2fe2ad2-a0de-49aa-95fd-ef5f15032676\") " pod="openstack/ceilometer-0" Feb 27 16:32:05 crc kubenswrapper[4830]: I0227 16:32:05.725955 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2fe2ad2-a0de-49aa-95fd-ef5f15032676-config-data\") pod \"ceilometer-0\" (UID: 
\"b2fe2ad2-a0de-49aa-95fd-ef5f15032676\") " pod="openstack/ceilometer-0" Feb 27 16:32:05 crc kubenswrapper[4830]: I0227 16:32:05.827475 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b2fe2ad2-a0de-49aa-95fd-ef5f15032676-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"b2fe2ad2-a0de-49aa-95fd-ef5f15032676\") " pod="openstack/ceilometer-0" Feb 27 16:32:05 crc kubenswrapper[4830]: I0227 16:32:05.827547 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b2fe2ad2-a0de-49aa-95fd-ef5f15032676-run-httpd\") pod \"ceilometer-0\" (UID: \"b2fe2ad2-a0de-49aa-95fd-ef5f15032676\") " pod="openstack/ceilometer-0" Feb 27 16:32:05 crc kubenswrapper[4830]: I0227 16:32:05.828025 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b2fe2ad2-a0de-49aa-95fd-ef5f15032676-run-httpd\") pod \"ceilometer-0\" (UID: \"b2fe2ad2-a0de-49aa-95fd-ef5f15032676\") " pod="openstack/ceilometer-0" Feb 27 16:32:05 crc kubenswrapper[4830]: I0227 16:32:05.828062 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2fe2ad2-a0de-49aa-95fd-ef5f15032676-config-data\") pod \"ceilometer-0\" (UID: \"b2fe2ad2-a0de-49aa-95fd-ef5f15032676\") " pod="openstack/ceilometer-0" Feb 27 16:32:05 crc kubenswrapper[4830]: I0227 16:32:05.828134 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b2fe2ad2-a0de-49aa-95fd-ef5f15032676-log-httpd\") pod \"ceilometer-0\" (UID: \"b2fe2ad2-a0de-49aa-95fd-ef5f15032676\") " pod="openstack/ceilometer-0" Feb 27 16:32:05 crc kubenswrapper[4830]: I0227 16:32:05.828165 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lc89c\" 
(UniqueName: \"kubernetes.io/projected/b2fe2ad2-a0de-49aa-95fd-ef5f15032676-kube-api-access-lc89c\") pod \"ceilometer-0\" (UID: \"b2fe2ad2-a0de-49aa-95fd-ef5f15032676\") " pod="openstack/ceilometer-0" Feb 27 16:32:05 crc kubenswrapper[4830]: I0227 16:32:05.828491 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b2fe2ad2-a0de-49aa-95fd-ef5f15032676-log-httpd\") pod \"ceilometer-0\" (UID: \"b2fe2ad2-a0de-49aa-95fd-ef5f15032676\") " pod="openstack/ceilometer-0" Feb 27 16:32:05 crc kubenswrapper[4830]: I0227 16:32:05.828587 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2fe2ad2-a0de-49aa-95fd-ef5f15032676-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b2fe2ad2-a0de-49aa-95fd-ef5f15032676\") " pod="openstack/ceilometer-0" Feb 27 16:32:05 crc kubenswrapper[4830]: I0227 16:32:05.828607 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b2fe2ad2-a0de-49aa-95fd-ef5f15032676-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b2fe2ad2-a0de-49aa-95fd-ef5f15032676\") " pod="openstack/ceilometer-0" Feb 27 16:32:05 crc kubenswrapper[4830]: I0227 16:32:05.828626 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2fe2ad2-a0de-49aa-95fd-ef5f15032676-scripts\") pod \"ceilometer-0\" (UID: \"b2fe2ad2-a0de-49aa-95fd-ef5f15032676\") " pod="openstack/ceilometer-0" Feb 27 16:32:05 crc kubenswrapper[4830]: I0227 16:32:05.833597 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b2fe2ad2-a0de-49aa-95fd-ef5f15032676-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"b2fe2ad2-a0de-49aa-95fd-ef5f15032676\") " pod="openstack/ceilometer-0" Feb 27 16:32:05 crc 
kubenswrapper[4830]: I0227 16:32:05.833838 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2fe2ad2-a0de-49aa-95fd-ef5f15032676-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b2fe2ad2-a0de-49aa-95fd-ef5f15032676\") " pod="openstack/ceilometer-0" Feb 27 16:32:05 crc kubenswrapper[4830]: I0227 16:32:05.834195 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b2fe2ad2-a0de-49aa-95fd-ef5f15032676-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b2fe2ad2-a0de-49aa-95fd-ef5f15032676\") " pod="openstack/ceilometer-0" Feb 27 16:32:05 crc kubenswrapper[4830]: I0227 16:32:05.835770 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2fe2ad2-a0de-49aa-95fd-ef5f15032676-config-data\") pod \"ceilometer-0\" (UID: \"b2fe2ad2-a0de-49aa-95fd-ef5f15032676\") " pod="openstack/ceilometer-0" Feb 27 16:32:05 crc kubenswrapper[4830]: I0227 16:32:05.836342 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2fe2ad2-a0de-49aa-95fd-ef5f15032676-scripts\") pod \"ceilometer-0\" (UID: \"b2fe2ad2-a0de-49aa-95fd-ef5f15032676\") " pod="openstack/ceilometer-0" Feb 27 16:32:05 crc kubenswrapper[4830]: I0227 16:32:05.844544 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lc89c\" (UniqueName: \"kubernetes.io/projected/b2fe2ad2-a0de-49aa-95fd-ef5f15032676-kube-api-access-lc89c\") pod \"ceilometer-0\" (UID: \"b2fe2ad2-a0de-49aa-95fd-ef5f15032676\") " pod="openstack/ceilometer-0" Feb 27 16:32:05 crc kubenswrapper[4830]: I0227 16:32:05.912525 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 27 16:32:06 crc kubenswrapper[4830]: W0227 16:32:06.378069 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb2fe2ad2_a0de_49aa_95fd_ef5f15032676.slice/crio-51dd486163d05319c102306b662f11e5d7f037407786a09d627e9ddb61b01f59 WatchSource:0}: Error finding container 51dd486163d05319c102306b662f11e5d7f037407786a09d627e9ddb61b01f59: Status 404 returned error can't find the container with id 51dd486163d05319c102306b662f11e5d7f037407786a09d627e9ddb61b01f59 Feb 27 16:32:06 crc kubenswrapper[4830]: I0227 16:32:06.378757 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 27 16:32:06 crc kubenswrapper[4830]: I0227 16:32:06.781094 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="011b41ca-3a73-4e05-a626-fd630fe10bd5" path="/var/lib/kubelet/pods/011b41ca-3a73-4e05-a626-fd630fe10bd5/volumes" Feb 27 16:32:06 crc kubenswrapper[4830]: I0227 16:32:06.782659 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1192f6ae-a29b-4553-a293-6f4e41814652" path="/var/lib/kubelet/pods/1192f6ae-a29b-4553-a293-6f4e41814652/volumes" Feb 27 16:32:07 crc kubenswrapper[4830]: I0227 16:32:07.206132 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b2fe2ad2-a0de-49aa-95fd-ef5f15032676","Type":"ContainerStarted","Data":"51dd486163d05319c102306b662f11e5d7f037407786a09d627e9ddb61b01f59"} Feb 27 16:32:07 crc kubenswrapper[4830]: I0227 16:32:07.884899 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 27 16:32:07 crc kubenswrapper[4830]: I0227 16:32:07.970681 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/42523523-0867-4520-8f31-f11949fb08f4-logs\") pod \"42523523-0867-4520-8f31-f11949fb08f4\" (UID: \"42523523-0867-4520-8f31-f11949fb08f4\") " Feb 27 16:32:07 crc kubenswrapper[4830]: I0227 16:32:07.970774 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42523523-0867-4520-8f31-f11949fb08f4-config-data\") pod \"42523523-0867-4520-8f31-f11949fb08f4\" (UID: \"42523523-0867-4520-8f31-f11949fb08f4\") " Feb 27 16:32:07 crc kubenswrapper[4830]: I0227 16:32:07.970891 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42523523-0867-4520-8f31-f11949fb08f4-combined-ca-bundle\") pod \"42523523-0867-4520-8f31-f11949fb08f4\" (UID: \"42523523-0867-4520-8f31-f11949fb08f4\") " Feb 27 16:32:07 crc kubenswrapper[4830]: I0227 16:32:07.970931 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sp7kr\" (UniqueName: \"kubernetes.io/projected/42523523-0867-4520-8f31-f11949fb08f4-kube-api-access-sp7kr\") pod \"42523523-0867-4520-8f31-f11949fb08f4\" (UID: \"42523523-0867-4520-8f31-f11949fb08f4\") " Feb 27 16:32:07 crc kubenswrapper[4830]: I0227 16:32:07.975785 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42523523-0867-4520-8f31-f11949fb08f4-kube-api-access-sp7kr" (OuterVolumeSpecName: "kube-api-access-sp7kr") pod "42523523-0867-4520-8f31-f11949fb08f4" (UID: "42523523-0867-4520-8f31-f11949fb08f4"). InnerVolumeSpecName "kube-api-access-sp7kr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:32:07 crc kubenswrapper[4830]: I0227 16:32:07.976102 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/42523523-0867-4520-8f31-f11949fb08f4-logs" (OuterVolumeSpecName: "logs") pod "42523523-0867-4520-8f31-f11949fb08f4" (UID: "42523523-0867-4520-8f31-f11949fb08f4"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 16:32:08 crc kubenswrapper[4830]: I0227 16:32:08.016762 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42523523-0867-4520-8f31-f11949fb08f4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "42523523-0867-4520-8f31-f11949fb08f4" (UID: "42523523-0867-4520-8f31-f11949fb08f4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:32:08 crc kubenswrapper[4830]: I0227 16:32:08.025001 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42523523-0867-4520-8f31-f11949fb08f4-config-data" (OuterVolumeSpecName: "config-data") pod "42523523-0867-4520-8f31-f11949fb08f4" (UID: "42523523-0867-4520-8f31-f11949fb08f4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:32:08 crc kubenswrapper[4830]: I0227 16:32:08.072474 4830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42523523-0867-4520-8f31-f11949fb08f4-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 16:32:08 crc kubenswrapper[4830]: I0227 16:32:08.072505 4830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42523523-0867-4520-8f31-f11949fb08f4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 16:32:08 crc kubenswrapper[4830]: I0227 16:32:08.072517 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sp7kr\" (UniqueName: \"kubernetes.io/projected/42523523-0867-4520-8f31-f11949fb08f4-kube-api-access-sp7kr\") on node \"crc\" DevicePath \"\"" Feb 27 16:32:08 crc kubenswrapper[4830]: I0227 16:32:08.072528 4830 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/42523523-0867-4520-8f31-f11949fb08f4-logs\") on node \"crc\" DevicePath \"\"" Feb 27 16:32:08 crc kubenswrapper[4830]: I0227 16:32:08.217565 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b2fe2ad2-a0de-49aa-95fd-ef5f15032676","Type":"ContainerStarted","Data":"17c416fd77703fb7feb38dfb7c6e7aef3b647f80b42763e1c40e7ca828662e25"} Feb 27 16:32:08 crc kubenswrapper[4830]: I0227 16:32:08.218963 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"42523523-0867-4520-8f31-f11949fb08f4","Type":"ContainerDied","Data":"1d6559428ad57ae73914e9bc0f3d82e639a8f07404438b74f346af3906edf948"} Feb 27 16:32:08 crc kubenswrapper[4830]: I0227 16:32:08.218993 4830 scope.go:117] "RemoveContainer" containerID="1d6559428ad57ae73914e9bc0f3d82e639a8f07404438b74f346af3906edf948" Feb 27 16:32:08 crc kubenswrapper[4830]: I0227 16:32:08.219064 4830 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openstack/nova-api-0" Feb 27 16:32:08 crc kubenswrapper[4830]: I0227 16:32:08.218938 4830 generic.go:334] "Generic (PLEG): container finished" podID="42523523-0867-4520-8f31-f11949fb08f4" containerID="1d6559428ad57ae73914e9bc0f3d82e639a8f07404438b74f346af3906edf948" exitCode=0 Feb 27 16:32:08 crc kubenswrapper[4830]: I0227 16:32:08.219093 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"42523523-0867-4520-8f31-f11949fb08f4","Type":"ContainerDied","Data":"af1324a2f66fe97068081a8714dbd5f48bdfe63649824b5ab78e0e7001c99424"} Feb 27 16:32:08 crc kubenswrapper[4830]: I0227 16:32:08.279083 4830 scope.go:117] "RemoveContainer" containerID="1382af5fe49790df2c419c664aad5930fef37ad780b2c578a935c9462e7352bb" Feb 27 16:32:08 crc kubenswrapper[4830]: I0227 16:32:08.301763 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 27 16:32:08 crc kubenswrapper[4830]: I0227 16:32:08.311539 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 27 16:32:08 crc kubenswrapper[4830]: I0227 16:32:08.314971 4830 scope.go:117] "RemoveContainer" containerID="1d6559428ad57ae73914e9bc0f3d82e639a8f07404438b74f346af3906edf948" Feb 27 16:32:08 crc kubenswrapper[4830]: E0227 16:32:08.315471 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d6559428ad57ae73914e9bc0f3d82e639a8f07404438b74f346af3906edf948\": container with ID starting with 1d6559428ad57ae73914e9bc0f3d82e639a8f07404438b74f346af3906edf948 not found: ID does not exist" containerID="1d6559428ad57ae73914e9bc0f3d82e639a8f07404438b74f346af3906edf948" Feb 27 16:32:08 crc kubenswrapper[4830]: I0227 16:32:08.315514 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d6559428ad57ae73914e9bc0f3d82e639a8f07404438b74f346af3906edf948"} err="failed to get container status 
\"1d6559428ad57ae73914e9bc0f3d82e639a8f07404438b74f346af3906edf948\": rpc error: code = NotFound desc = could not find container \"1d6559428ad57ae73914e9bc0f3d82e639a8f07404438b74f346af3906edf948\": container with ID starting with 1d6559428ad57ae73914e9bc0f3d82e639a8f07404438b74f346af3906edf948 not found: ID does not exist" Feb 27 16:32:08 crc kubenswrapper[4830]: I0227 16:32:08.315540 4830 scope.go:117] "RemoveContainer" containerID="1382af5fe49790df2c419c664aad5930fef37ad780b2c578a935c9462e7352bb" Feb 27 16:32:08 crc kubenswrapper[4830]: E0227 16:32:08.315922 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1382af5fe49790df2c419c664aad5930fef37ad780b2c578a935c9462e7352bb\": container with ID starting with 1382af5fe49790df2c419c664aad5930fef37ad780b2c578a935c9462e7352bb not found: ID does not exist" containerID="1382af5fe49790df2c419c664aad5930fef37ad780b2c578a935c9462e7352bb" Feb 27 16:32:08 crc kubenswrapper[4830]: I0227 16:32:08.315994 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1382af5fe49790df2c419c664aad5930fef37ad780b2c578a935c9462e7352bb"} err="failed to get container status \"1382af5fe49790df2c419c664aad5930fef37ad780b2c578a935c9462e7352bb\": rpc error: code = NotFound desc = could not find container \"1382af5fe49790df2c419c664aad5930fef37ad780b2c578a935c9462e7352bb\": container with ID starting with 1382af5fe49790df2c419c664aad5930fef37ad780b2c578a935c9462e7352bb not found: ID does not exist" Feb 27 16:32:08 crc kubenswrapper[4830]: I0227 16:32:08.318239 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 27 16:32:08 crc kubenswrapper[4830]: E0227 16:32:08.318757 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42523523-0867-4520-8f31-f11949fb08f4" containerName="nova-api-log" Feb 27 16:32:08 crc kubenswrapper[4830]: I0227 16:32:08.318772 4830 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="42523523-0867-4520-8f31-f11949fb08f4" containerName="nova-api-log" Feb 27 16:32:08 crc kubenswrapper[4830]: E0227 16:32:08.318796 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42523523-0867-4520-8f31-f11949fb08f4" containerName="nova-api-api" Feb 27 16:32:08 crc kubenswrapper[4830]: I0227 16:32:08.318804 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="42523523-0867-4520-8f31-f11949fb08f4" containerName="nova-api-api" Feb 27 16:32:08 crc kubenswrapper[4830]: I0227 16:32:08.319035 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="42523523-0867-4520-8f31-f11949fb08f4" containerName="nova-api-api" Feb 27 16:32:08 crc kubenswrapper[4830]: I0227 16:32:08.319062 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="42523523-0867-4520-8f31-f11949fb08f4" containerName="nova-api-log" Feb 27 16:32:08 crc kubenswrapper[4830]: I0227 16:32:08.320288 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 27 16:32:08 crc kubenswrapper[4830]: I0227 16:32:08.323548 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 27 16:32:08 crc kubenswrapper[4830]: I0227 16:32:08.323560 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 27 16:32:08 crc kubenswrapper[4830]: I0227 16:32:08.324431 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 27 16:32:08 crc kubenswrapper[4830]: I0227 16:32:08.327135 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 27 16:32:08 crc kubenswrapper[4830]: I0227 16:32:08.377503 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4a9d2bd-da61-4a9f-b2a5-030dd24eedd0-config-data\") pod \"nova-api-0\" (UID: \"b4a9d2bd-da61-4a9f-b2a5-030dd24eedd0\") " pod="openstack/nova-api-0" Feb 27 16:32:08 crc kubenswrapper[4830]: I0227 16:32:08.377555 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4a9d2bd-da61-4a9f-b2a5-030dd24eedd0-public-tls-certs\") pod \"nova-api-0\" (UID: \"b4a9d2bd-da61-4a9f-b2a5-030dd24eedd0\") " pod="openstack/nova-api-0" Feb 27 16:32:08 crc kubenswrapper[4830]: I0227 16:32:08.377578 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b4a9d2bd-da61-4a9f-b2a5-030dd24eedd0-logs\") pod \"nova-api-0\" (UID: \"b4a9d2bd-da61-4a9f-b2a5-030dd24eedd0\") " pod="openstack/nova-api-0" Feb 27 16:32:08 crc kubenswrapper[4830]: I0227 16:32:08.377598 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/b4a9d2bd-da61-4a9f-b2a5-030dd24eedd0-internal-tls-certs\") pod \"nova-api-0\" (UID: \"b4a9d2bd-da61-4a9f-b2a5-030dd24eedd0\") " pod="openstack/nova-api-0" Feb 27 16:32:08 crc kubenswrapper[4830]: I0227 16:32:08.377640 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4a9d2bd-da61-4a9f-b2a5-030dd24eedd0-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b4a9d2bd-da61-4a9f-b2a5-030dd24eedd0\") " pod="openstack/nova-api-0" Feb 27 16:32:08 crc kubenswrapper[4830]: I0227 16:32:08.377669 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4m9wn\" (UniqueName: \"kubernetes.io/projected/b4a9d2bd-da61-4a9f-b2a5-030dd24eedd0-kube-api-access-4m9wn\") pod \"nova-api-0\" (UID: \"b4a9d2bd-da61-4a9f-b2a5-030dd24eedd0\") " pod="openstack/nova-api-0" Feb 27 16:32:08 crc kubenswrapper[4830]: I0227 16:32:08.479184 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4a9d2bd-da61-4a9f-b2a5-030dd24eedd0-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b4a9d2bd-da61-4a9f-b2a5-030dd24eedd0\") " pod="openstack/nova-api-0" Feb 27 16:32:08 crc kubenswrapper[4830]: I0227 16:32:08.479240 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4m9wn\" (UniqueName: \"kubernetes.io/projected/b4a9d2bd-da61-4a9f-b2a5-030dd24eedd0-kube-api-access-4m9wn\") pod \"nova-api-0\" (UID: \"b4a9d2bd-da61-4a9f-b2a5-030dd24eedd0\") " pod="openstack/nova-api-0" Feb 27 16:32:08 crc kubenswrapper[4830]: I0227 16:32:08.479340 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4a9d2bd-da61-4a9f-b2a5-030dd24eedd0-config-data\") pod \"nova-api-0\" (UID: \"b4a9d2bd-da61-4a9f-b2a5-030dd24eedd0\") " 
pod="openstack/nova-api-0" Feb 27 16:32:08 crc kubenswrapper[4830]: I0227 16:32:08.479375 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4a9d2bd-da61-4a9f-b2a5-030dd24eedd0-public-tls-certs\") pod \"nova-api-0\" (UID: \"b4a9d2bd-da61-4a9f-b2a5-030dd24eedd0\") " pod="openstack/nova-api-0" Feb 27 16:32:08 crc kubenswrapper[4830]: I0227 16:32:08.479394 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b4a9d2bd-da61-4a9f-b2a5-030dd24eedd0-logs\") pod \"nova-api-0\" (UID: \"b4a9d2bd-da61-4a9f-b2a5-030dd24eedd0\") " pod="openstack/nova-api-0" Feb 27 16:32:08 crc kubenswrapper[4830]: I0227 16:32:08.479414 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4a9d2bd-da61-4a9f-b2a5-030dd24eedd0-internal-tls-certs\") pod \"nova-api-0\" (UID: \"b4a9d2bd-da61-4a9f-b2a5-030dd24eedd0\") " pod="openstack/nova-api-0" Feb 27 16:32:08 crc kubenswrapper[4830]: I0227 16:32:08.480531 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b4a9d2bd-da61-4a9f-b2a5-030dd24eedd0-logs\") pod \"nova-api-0\" (UID: \"b4a9d2bd-da61-4a9f-b2a5-030dd24eedd0\") " pod="openstack/nova-api-0" Feb 27 16:32:08 crc kubenswrapper[4830]: I0227 16:32:08.483578 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4a9d2bd-da61-4a9f-b2a5-030dd24eedd0-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b4a9d2bd-da61-4a9f-b2a5-030dd24eedd0\") " pod="openstack/nova-api-0" Feb 27 16:32:08 crc kubenswrapper[4830]: I0227 16:32:08.484337 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/b4a9d2bd-da61-4a9f-b2a5-030dd24eedd0-internal-tls-certs\") pod \"nova-api-0\" (UID: \"b4a9d2bd-da61-4a9f-b2a5-030dd24eedd0\") " pod="openstack/nova-api-0" Feb 27 16:32:08 crc kubenswrapper[4830]: I0227 16:32:08.484711 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4a9d2bd-da61-4a9f-b2a5-030dd24eedd0-config-data\") pod \"nova-api-0\" (UID: \"b4a9d2bd-da61-4a9f-b2a5-030dd24eedd0\") " pod="openstack/nova-api-0" Feb 27 16:32:08 crc kubenswrapper[4830]: I0227 16:32:08.486315 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4a9d2bd-da61-4a9f-b2a5-030dd24eedd0-public-tls-certs\") pod \"nova-api-0\" (UID: \"b4a9d2bd-da61-4a9f-b2a5-030dd24eedd0\") " pod="openstack/nova-api-0" Feb 27 16:32:08 crc kubenswrapper[4830]: I0227 16:32:08.496084 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4m9wn\" (UniqueName: \"kubernetes.io/projected/b4a9d2bd-da61-4a9f-b2a5-030dd24eedd0-kube-api-access-4m9wn\") pod \"nova-api-0\" (UID: \"b4a9d2bd-da61-4a9f-b2a5-030dd24eedd0\") " pod="openstack/nova-api-0" Feb 27 16:32:08 crc kubenswrapper[4830]: I0227 16:32:08.636193 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 27 16:32:08 crc kubenswrapper[4830]: I0227 16:32:08.763333 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Feb 27 16:32:08 crc kubenswrapper[4830]: I0227 16:32:08.780806 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42523523-0867-4520-8f31-f11949fb08f4" path="/var/lib/kubelet/pods/42523523-0867-4520-8f31-f11949fb08f4/volumes" Feb 27 16:32:08 crc kubenswrapper[4830]: I0227 16:32:08.785148 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Feb 27 16:32:09 crc kubenswrapper[4830]: I0227 16:32:09.131266 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 27 16:32:09 crc kubenswrapper[4830]: W0227 16:32:09.146882 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb4a9d2bd_da61_4a9f_b2a5_030dd24eedd0.slice/crio-568b83f9d0e2aac1add6fa14ba406ece747f698e1589e2bea692287fb89b60a6 WatchSource:0}: Error finding container 568b83f9d0e2aac1add6fa14ba406ece747f698e1589e2bea692287fb89b60a6: Status 404 returned error can't find the container with id 568b83f9d0e2aac1add6fa14ba406ece747f698e1589e2bea692287fb89b60a6 Feb 27 16:32:09 crc kubenswrapper[4830]: I0227 16:32:09.229593 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b2fe2ad2-a0de-49aa-95fd-ef5f15032676","Type":"ContainerStarted","Data":"72e38d1c2009b64b0066ca1c11420f6777aab9186b8f6d7357f2184e318a87ad"} Feb 27 16:32:09 crc kubenswrapper[4830]: I0227 16:32:09.231235 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b4a9d2bd-da61-4a9f-b2a5-030dd24eedd0","Type":"ContainerStarted","Data":"568b83f9d0e2aac1add6fa14ba406ece747f698e1589e2bea692287fb89b60a6"} Feb 27 16:32:09 crc kubenswrapper[4830]: I0227 
16:32:09.253063 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Feb 27 16:32:09 crc kubenswrapper[4830]: I0227 16:32:09.425871 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-86c7h"] Feb 27 16:32:09 crc kubenswrapper[4830]: I0227 16:32:09.427354 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-86c7h" Feb 27 16:32:09 crc kubenswrapper[4830]: I0227 16:32:09.429498 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Feb 27 16:32:09 crc kubenswrapper[4830]: I0227 16:32:09.429666 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Feb 27 16:32:09 crc kubenswrapper[4830]: I0227 16:32:09.454162 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-86c7h"] Feb 27 16:32:09 crc kubenswrapper[4830]: I0227 16:32:09.505984 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f35673a0-3e6b-4cd6-b378-5baf313756c7-scripts\") pod \"nova-cell1-cell-mapping-86c7h\" (UID: \"f35673a0-3e6b-4cd6-b378-5baf313756c7\") " pod="openstack/nova-cell1-cell-mapping-86c7h" Feb 27 16:32:09 crc kubenswrapper[4830]: I0227 16:32:09.506617 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f35673a0-3e6b-4cd6-b378-5baf313756c7-config-data\") pod \"nova-cell1-cell-mapping-86c7h\" (UID: \"f35673a0-3e6b-4cd6-b378-5baf313756c7\") " pod="openstack/nova-cell1-cell-mapping-86c7h" Feb 27 16:32:09 crc kubenswrapper[4830]: I0227 16:32:09.506654 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4cg48\" (UniqueName: 
\"kubernetes.io/projected/f35673a0-3e6b-4cd6-b378-5baf313756c7-kube-api-access-4cg48\") pod \"nova-cell1-cell-mapping-86c7h\" (UID: \"f35673a0-3e6b-4cd6-b378-5baf313756c7\") " pod="openstack/nova-cell1-cell-mapping-86c7h" Feb 27 16:32:09 crc kubenswrapper[4830]: I0227 16:32:09.506712 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f35673a0-3e6b-4cd6-b378-5baf313756c7-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-86c7h\" (UID: \"f35673a0-3e6b-4cd6-b378-5baf313756c7\") " pod="openstack/nova-cell1-cell-mapping-86c7h" Feb 27 16:32:09 crc kubenswrapper[4830]: I0227 16:32:09.608372 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f35673a0-3e6b-4cd6-b378-5baf313756c7-scripts\") pod \"nova-cell1-cell-mapping-86c7h\" (UID: \"f35673a0-3e6b-4cd6-b378-5baf313756c7\") " pod="openstack/nova-cell1-cell-mapping-86c7h" Feb 27 16:32:09 crc kubenswrapper[4830]: I0227 16:32:09.608416 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f35673a0-3e6b-4cd6-b378-5baf313756c7-config-data\") pod \"nova-cell1-cell-mapping-86c7h\" (UID: \"f35673a0-3e6b-4cd6-b378-5baf313756c7\") " pod="openstack/nova-cell1-cell-mapping-86c7h" Feb 27 16:32:09 crc kubenswrapper[4830]: I0227 16:32:09.608440 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4cg48\" (UniqueName: \"kubernetes.io/projected/f35673a0-3e6b-4cd6-b378-5baf313756c7-kube-api-access-4cg48\") pod \"nova-cell1-cell-mapping-86c7h\" (UID: \"f35673a0-3e6b-4cd6-b378-5baf313756c7\") " pod="openstack/nova-cell1-cell-mapping-86c7h" Feb 27 16:32:09 crc kubenswrapper[4830]: I0227 16:32:09.608484 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f35673a0-3e6b-4cd6-b378-5baf313756c7-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-86c7h\" (UID: \"f35673a0-3e6b-4cd6-b378-5baf313756c7\") " pod="openstack/nova-cell1-cell-mapping-86c7h" Feb 27 16:32:09 crc kubenswrapper[4830]: I0227 16:32:09.612282 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f35673a0-3e6b-4cd6-b378-5baf313756c7-config-data\") pod \"nova-cell1-cell-mapping-86c7h\" (UID: \"f35673a0-3e6b-4cd6-b378-5baf313756c7\") " pod="openstack/nova-cell1-cell-mapping-86c7h" Feb 27 16:32:09 crc kubenswrapper[4830]: I0227 16:32:09.612294 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f35673a0-3e6b-4cd6-b378-5baf313756c7-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-86c7h\" (UID: \"f35673a0-3e6b-4cd6-b378-5baf313756c7\") " pod="openstack/nova-cell1-cell-mapping-86c7h" Feb 27 16:32:09 crc kubenswrapper[4830]: I0227 16:32:09.613018 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f35673a0-3e6b-4cd6-b378-5baf313756c7-scripts\") pod \"nova-cell1-cell-mapping-86c7h\" (UID: \"f35673a0-3e6b-4cd6-b378-5baf313756c7\") " pod="openstack/nova-cell1-cell-mapping-86c7h" Feb 27 16:32:09 crc kubenswrapper[4830]: I0227 16:32:09.622795 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4cg48\" (UniqueName: \"kubernetes.io/projected/f35673a0-3e6b-4cd6-b378-5baf313756c7-kube-api-access-4cg48\") pod \"nova-cell1-cell-mapping-86c7h\" (UID: \"f35673a0-3e6b-4cd6-b378-5baf313756c7\") " pod="openstack/nova-cell1-cell-mapping-86c7h" Feb 27 16:32:09 crc kubenswrapper[4830]: I0227 16:32:09.750865 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-86c7h" Feb 27 16:32:10 crc kubenswrapper[4830]: I0227 16:32:10.246227 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b2fe2ad2-a0de-49aa-95fd-ef5f15032676","Type":"ContainerStarted","Data":"efb022c64f6ae8ffd2fec27339e107e45b38a12b6d4a8d2858182ad516e6d9f9"} Feb 27 16:32:10 crc kubenswrapper[4830]: I0227 16:32:10.249881 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b4a9d2bd-da61-4a9f-b2a5-030dd24eedd0","Type":"ContainerStarted","Data":"813dd162c7b10d38280c7a7150448714572d48c51e6c1ae70728e1deaf7d61a7"} Feb 27 16:32:10 crc kubenswrapper[4830]: I0227 16:32:10.249921 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b4a9d2bd-da61-4a9f-b2a5-030dd24eedd0","Type":"ContainerStarted","Data":"6e559975b32a9c25c02067fbf07d4cb6e124fdc661cd5b27630e91997b8d0b03"} Feb 27 16:32:10 crc kubenswrapper[4830]: I0227 16:32:10.272709 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.272694719 podStartE2EDuration="2.272694719s" podCreationTimestamp="2026-02-27 16:32:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:32:10.271413458 +0000 UTC m=+1526.360685921" watchObservedRunningTime="2026-02-27 16:32:10.272694719 +0000 UTC m=+1526.361967182" Feb 27 16:32:10 crc kubenswrapper[4830]: W0227 16:32:10.286495 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf35673a0_3e6b_4cd6_b378_5baf313756c7.slice/crio-4b4ae80685302c721edc9945e009e67403a400845635e34403067dae8ee28983 WatchSource:0}: Error finding container 4b4ae80685302c721edc9945e009e67403a400845635e34403067dae8ee28983: Status 404 returned error can't find the container with id 
4b4ae80685302c721edc9945e009e67403a400845635e34403067dae8ee28983 Feb 27 16:32:10 crc kubenswrapper[4830]: I0227 16:32:10.309806 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-86c7h"] Feb 27 16:32:11 crc kubenswrapper[4830]: I0227 16:32:11.268781 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-86c7h" event={"ID":"f35673a0-3e6b-4cd6-b378-5baf313756c7","Type":"ContainerStarted","Data":"d42b710ef87298f2e0a2e01a47fd2d62e290785d9674d8573992395513f85975"} Feb 27 16:32:11 crc kubenswrapper[4830]: I0227 16:32:11.269311 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-86c7h" event={"ID":"f35673a0-3e6b-4cd6-b378-5baf313756c7","Type":"ContainerStarted","Data":"4b4ae80685302c721edc9945e009e67403a400845635e34403067dae8ee28983"} Feb 27 16:32:11 crc kubenswrapper[4830]: I0227 16:32:11.316176 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-86c7h" podStartSLOduration=2.316151613 podStartE2EDuration="2.316151613s" podCreationTimestamp="2026-02-27 16:32:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:32:11.301795447 +0000 UTC m=+1527.391067940" watchObservedRunningTime="2026-02-27 16:32:11.316151613 +0000 UTC m=+1527.405424106" Feb 27 16:32:11 crc kubenswrapper[4830]: I0227 16:32:11.658809 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-cd5cbd7b9-dmhcp" Feb 27 16:32:11 crc kubenswrapper[4830]: I0227 16:32:11.759260 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-p8gmd"] Feb 27 16:32:11 crc kubenswrapper[4830]: I0227 16:32:11.759545 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-bccf8f775-p8gmd" 
podUID="fa63e972-7d02-4b84-8f48-c4126c0e6b06" containerName="dnsmasq-dns" containerID="cri-o://84e5aebe73093b2a7e9c000daf0f6b003429ded822d6e5698a1d1f248efe4601" gracePeriod=10 Feb 27 16:32:12 crc kubenswrapper[4830]: I0227 16:32:12.252324 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-p8gmd" Feb 27 16:32:12 crc kubenswrapper[4830]: I0227 16:32:12.279327 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b2fe2ad2-a0de-49aa-95fd-ef5f15032676","Type":"ContainerStarted","Data":"e377c9fe2c2c4014633d618a399228bda3185620f06415bda5d22e2216dcccee"} Feb 27 16:32:12 crc kubenswrapper[4830]: I0227 16:32:12.279474 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 27 16:32:12 crc kubenswrapper[4830]: I0227 16:32:12.282677 4830 generic.go:334] "Generic (PLEG): container finished" podID="fa63e972-7d02-4b84-8f48-c4126c0e6b06" containerID="84e5aebe73093b2a7e9c000daf0f6b003429ded822d6e5698a1d1f248efe4601" exitCode=0 Feb 27 16:32:12 crc kubenswrapper[4830]: I0227 16:32:12.282714 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-p8gmd" event={"ID":"fa63e972-7d02-4b84-8f48-c4126c0e6b06","Type":"ContainerDied","Data":"84e5aebe73093b2a7e9c000daf0f6b003429ded822d6e5698a1d1f248efe4601"} Feb 27 16:32:12 crc kubenswrapper[4830]: I0227 16:32:12.282749 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-p8gmd" event={"ID":"fa63e972-7d02-4b84-8f48-c4126c0e6b06","Type":"ContainerDied","Data":"58eba474e106c61c1feaa3b8b7712eef0faef5992f6a2b410b738b0088c6ccb7"} Feb 27 16:32:12 crc kubenswrapper[4830]: I0227 16:32:12.282771 4830 scope.go:117] "RemoveContainer" containerID="84e5aebe73093b2a7e9c000daf0f6b003429ded822d6e5698a1d1f248efe4601" Feb 27 16:32:12 crc kubenswrapper[4830]: I0227 16:32:12.282754 4830 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-p8gmd" Feb 27 16:32:12 crc kubenswrapper[4830]: I0227 16:32:12.305856 4830 scope.go:117] "RemoveContainer" containerID="c628a74f0963b41f934fe48342ac8ac62afeee0bc6d1e12b9006b8133b207093" Feb 27 16:32:12 crc kubenswrapper[4830]: I0227 16:32:12.327934 4830 scope.go:117] "RemoveContainer" containerID="84e5aebe73093b2a7e9c000daf0f6b003429ded822d6e5698a1d1f248efe4601" Feb 27 16:32:12 crc kubenswrapper[4830]: E0227 16:32:12.328256 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84e5aebe73093b2a7e9c000daf0f6b003429ded822d6e5698a1d1f248efe4601\": container with ID starting with 84e5aebe73093b2a7e9c000daf0f6b003429ded822d6e5698a1d1f248efe4601 not found: ID does not exist" containerID="84e5aebe73093b2a7e9c000daf0f6b003429ded822d6e5698a1d1f248efe4601" Feb 27 16:32:12 crc kubenswrapper[4830]: I0227 16:32:12.328290 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84e5aebe73093b2a7e9c000daf0f6b003429ded822d6e5698a1d1f248efe4601"} err="failed to get container status \"84e5aebe73093b2a7e9c000daf0f6b003429ded822d6e5698a1d1f248efe4601\": rpc error: code = NotFound desc = could not find container \"84e5aebe73093b2a7e9c000daf0f6b003429ded822d6e5698a1d1f248efe4601\": container with ID starting with 84e5aebe73093b2a7e9c000daf0f6b003429ded822d6e5698a1d1f248efe4601 not found: ID does not exist" Feb 27 16:32:12 crc kubenswrapper[4830]: I0227 16:32:12.328308 4830 scope.go:117] "RemoveContainer" containerID="c628a74f0963b41f934fe48342ac8ac62afeee0bc6d1e12b9006b8133b207093" Feb 27 16:32:12 crc kubenswrapper[4830]: E0227 16:32:12.328497 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c628a74f0963b41f934fe48342ac8ac62afeee0bc6d1e12b9006b8133b207093\": container with ID starting with 
c628a74f0963b41f934fe48342ac8ac62afeee0bc6d1e12b9006b8133b207093 not found: ID does not exist" containerID="c628a74f0963b41f934fe48342ac8ac62afeee0bc6d1e12b9006b8133b207093" Feb 27 16:32:12 crc kubenswrapper[4830]: I0227 16:32:12.328519 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c628a74f0963b41f934fe48342ac8ac62afeee0bc6d1e12b9006b8133b207093"} err="failed to get container status \"c628a74f0963b41f934fe48342ac8ac62afeee0bc6d1e12b9006b8133b207093\": rpc error: code = NotFound desc = could not find container \"c628a74f0963b41f934fe48342ac8ac62afeee0bc6d1e12b9006b8133b207093\": container with ID starting with c628a74f0963b41f934fe48342ac8ac62afeee0bc6d1e12b9006b8133b207093 not found: ID does not exist" Feb 27 16:32:12 crc kubenswrapper[4830]: I0227 16:32:12.361524 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fa63e972-7d02-4b84-8f48-c4126c0e6b06-ovsdbserver-nb\") pod \"fa63e972-7d02-4b84-8f48-c4126c0e6b06\" (UID: \"fa63e972-7d02-4b84-8f48-c4126c0e6b06\") " Feb 27 16:32:12 crc kubenswrapper[4830]: I0227 16:32:12.361647 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hgbzl\" (UniqueName: \"kubernetes.io/projected/fa63e972-7d02-4b84-8f48-c4126c0e6b06-kube-api-access-hgbzl\") pod \"fa63e972-7d02-4b84-8f48-c4126c0e6b06\" (UID: \"fa63e972-7d02-4b84-8f48-c4126c0e6b06\") " Feb 27 16:32:12 crc kubenswrapper[4830]: I0227 16:32:12.361672 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fa63e972-7d02-4b84-8f48-c4126c0e6b06-ovsdbserver-sb\") pod \"fa63e972-7d02-4b84-8f48-c4126c0e6b06\" (UID: \"fa63e972-7d02-4b84-8f48-c4126c0e6b06\") " Feb 27 16:32:12 crc kubenswrapper[4830]: I0227 16:32:12.361717 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fa63e972-7d02-4b84-8f48-c4126c0e6b06-dns-svc\") pod \"fa63e972-7d02-4b84-8f48-c4126c0e6b06\" (UID: \"fa63e972-7d02-4b84-8f48-c4126c0e6b06\") " Feb 27 16:32:12 crc kubenswrapper[4830]: I0227 16:32:12.361855 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa63e972-7d02-4b84-8f48-c4126c0e6b06-config\") pod \"fa63e972-7d02-4b84-8f48-c4126c0e6b06\" (UID: \"fa63e972-7d02-4b84-8f48-c4126c0e6b06\") " Feb 27 16:32:12 crc kubenswrapper[4830]: I0227 16:32:12.361916 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fa63e972-7d02-4b84-8f48-c4126c0e6b06-dns-swift-storage-0\") pod \"fa63e972-7d02-4b84-8f48-c4126c0e6b06\" (UID: \"fa63e972-7d02-4b84-8f48-c4126c0e6b06\") " Feb 27 16:32:12 crc kubenswrapper[4830]: I0227 16:32:12.369068 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa63e972-7d02-4b84-8f48-c4126c0e6b06-kube-api-access-hgbzl" (OuterVolumeSpecName: "kube-api-access-hgbzl") pod "fa63e972-7d02-4b84-8f48-c4126c0e6b06" (UID: "fa63e972-7d02-4b84-8f48-c4126c0e6b06"). InnerVolumeSpecName "kube-api-access-hgbzl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:32:12 crc kubenswrapper[4830]: I0227 16:32:12.409890 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa63e972-7d02-4b84-8f48-c4126c0e6b06-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "fa63e972-7d02-4b84-8f48-c4126c0e6b06" (UID: "fa63e972-7d02-4b84-8f48-c4126c0e6b06"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:32:12 crc kubenswrapper[4830]: I0227 16:32:12.420396 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa63e972-7d02-4b84-8f48-c4126c0e6b06-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "fa63e972-7d02-4b84-8f48-c4126c0e6b06" (UID: "fa63e972-7d02-4b84-8f48-c4126c0e6b06"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:32:12 crc kubenswrapper[4830]: I0227 16:32:12.420424 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa63e972-7d02-4b84-8f48-c4126c0e6b06-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "fa63e972-7d02-4b84-8f48-c4126c0e6b06" (UID: "fa63e972-7d02-4b84-8f48-c4126c0e6b06"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:32:12 crc kubenswrapper[4830]: I0227 16:32:12.428844 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa63e972-7d02-4b84-8f48-c4126c0e6b06-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "fa63e972-7d02-4b84-8f48-c4126c0e6b06" (UID: "fa63e972-7d02-4b84-8f48-c4126c0e6b06"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:32:12 crc kubenswrapper[4830]: I0227 16:32:12.450056 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa63e972-7d02-4b84-8f48-c4126c0e6b06-config" (OuterVolumeSpecName: "config") pod "fa63e972-7d02-4b84-8f48-c4126c0e6b06" (UID: "fa63e972-7d02-4b84-8f48-c4126c0e6b06"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:32:12 crc kubenswrapper[4830]: I0227 16:32:12.466897 4830 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa63e972-7d02-4b84-8f48-c4126c0e6b06-config\") on node \"crc\" DevicePath \"\"" Feb 27 16:32:12 crc kubenswrapper[4830]: I0227 16:32:12.466928 4830 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fa63e972-7d02-4b84-8f48-c4126c0e6b06-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 27 16:32:12 crc kubenswrapper[4830]: I0227 16:32:12.466937 4830 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fa63e972-7d02-4b84-8f48-c4126c0e6b06-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 27 16:32:12 crc kubenswrapper[4830]: I0227 16:32:12.466960 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hgbzl\" (UniqueName: \"kubernetes.io/projected/fa63e972-7d02-4b84-8f48-c4126c0e6b06-kube-api-access-hgbzl\") on node \"crc\" DevicePath \"\"" Feb 27 16:32:12 crc kubenswrapper[4830]: I0227 16:32:12.466968 4830 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fa63e972-7d02-4b84-8f48-c4126c0e6b06-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 27 16:32:12 crc kubenswrapper[4830]: I0227 16:32:12.466979 4830 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fa63e972-7d02-4b84-8f48-c4126c0e6b06-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 27 16:32:12 crc kubenswrapper[4830]: I0227 16:32:12.624182 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.03814031 podStartE2EDuration="7.624164058s" podCreationTimestamp="2026-02-27 16:32:05 +0000 UTC" firstStartedPulling="2026-02-27 16:32:06.380592103 +0000 
UTC m=+1522.469864556" lastFinishedPulling="2026-02-27 16:32:11.966615841 +0000 UTC m=+1528.055888304" observedRunningTime="2026-02-27 16:32:12.30409625 +0000 UTC m=+1528.393368713" watchObservedRunningTime="2026-02-27 16:32:12.624164058 +0000 UTC m=+1528.713436521" Feb 27 16:32:12 crc kubenswrapper[4830]: I0227 16:32:12.624983 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-p8gmd"] Feb 27 16:32:12 crc kubenswrapper[4830]: I0227 16:32:12.633134 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-p8gmd"] Feb 27 16:32:12 crc kubenswrapper[4830]: I0227 16:32:12.776688 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa63e972-7d02-4b84-8f48-c4126c0e6b06" path="/var/lib/kubelet/pods/fa63e972-7d02-4b84-8f48-c4126c0e6b06/volumes" Feb 27 16:32:15 crc kubenswrapper[4830]: I0227 16:32:15.332793 4830 generic.go:334] "Generic (PLEG): container finished" podID="f35673a0-3e6b-4cd6-b378-5baf313756c7" containerID="d42b710ef87298f2e0a2e01a47fd2d62e290785d9674d8573992395513f85975" exitCode=0 Feb 27 16:32:15 crc kubenswrapper[4830]: I0227 16:32:15.332879 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-86c7h" event={"ID":"f35673a0-3e6b-4cd6-b378-5baf313756c7","Type":"ContainerDied","Data":"d42b710ef87298f2e0a2e01a47fd2d62e290785d9674d8573992395513f85975"} Feb 27 16:32:16 crc kubenswrapper[4830]: I0227 16:32:16.824083 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-86c7h" Feb 27 16:32:16 crc kubenswrapper[4830]: I0227 16:32:16.974785 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f35673a0-3e6b-4cd6-b378-5baf313756c7-combined-ca-bundle\") pod \"f35673a0-3e6b-4cd6-b378-5baf313756c7\" (UID: \"f35673a0-3e6b-4cd6-b378-5baf313756c7\") " Feb 27 16:32:16 crc kubenswrapper[4830]: I0227 16:32:16.974863 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f35673a0-3e6b-4cd6-b378-5baf313756c7-scripts\") pod \"f35673a0-3e6b-4cd6-b378-5baf313756c7\" (UID: \"f35673a0-3e6b-4cd6-b378-5baf313756c7\") " Feb 27 16:32:16 crc kubenswrapper[4830]: I0227 16:32:16.975104 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f35673a0-3e6b-4cd6-b378-5baf313756c7-config-data\") pod \"f35673a0-3e6b-4cd6-b378-5baf313756c7\" (UID: \"f35673a0-3e6b-4cd6-b378-5baf313756c7\") " Feb 27 16:32:16 crc kubenswrapper[4830]: I0227 16:32:16.975229 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4cg48\" (UniqueName: \"kubernetes.io/projected/f35673a0-3e6b-4cd6-b378-5baf313756c7-kube-api-access-4cg48\") pod \"f35673a0-3e6b-4cd6-b378-5baf313756c7\" (UID: \"f35673a0-3e6b-4cd6-b378-5baf313756c7\") " Feb 27 16:32:16 crc kubenswrapper[4830]: I0227 16:32:16.981520 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f35673a0-3e6b-4cd6-b378-5baf313756c7-kube-api-access-4cg48" (OuterVolumeSpecName: "kube-api-access-4cg48") pod "f35673a0-3e6b-4cd6-b378-5baf313756c7" (UID: "f35673a0-3e6b-4cd6-b378-5baf313756c7"). InnerVolumeSpecName "kube-api-access-4cg48". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:32:16 crc kubenswrapper[4830]: I0227 16:32:16.982222 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f35673a0-3e6b-4cd6-b378-5baf313756c7-scripts" (OuterVolumeSpecName: "scripts") pod "f35673a0-3e6b-4cd6-b378-5baf313756c7" (UID: "f35673a0-3e6b-4cd6-b378-5baf313756c7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:32:17 crc kubenswrapper[4830]: I0227 16:32:17.002017 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f35673a0-3e6b-4cd6-b378-5baf313756c7-config-data" (OuterVolumeSpecName: "config-data") pod "f35673a0-3e6b-4cd6-b378-5baf313756c7" (UID: "f35673a0-3e6b-4cd6-b378-5baf313756c7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:32:17 crc kubenswrapper[4830]: I0227 16:32:17.019590 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f35673a0-3e6b-4cd6-b378-5baf313756c7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f35673a0-3e6b-4cd6-b378-5baf313756c7" (UID: "f35673a0-3e6b-4cd6-b378-5baf313756c7"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:32:17 crc kubenswrapper[4830]: I0227 16:32:17.077031 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4cg48\" (UniqueName: \"kubernetes.io/projected/f35673a0-3e6b-4cd6-b378-5baf313756c7-kube-api-access-4cg48\") on node \"crc\" DevicePath \"\"" Feb 27 16:32:17 crc kubenswrapper[4830]: I0227 16:32:17.077063 4830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f35673a0-3e6b-4cd6-b378-5baf313756c7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 16:32:17 crc kubenswrapper[4830]: I0227 16:32:17.077072 4830 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f35673a0-3e6b-4cd6-b378-5baf313756c7-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 16:32:17 crc kubenswrapper[4830]: I0227 16:32:17.077081 4830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f35673a0-3e6b-4cd6-b378-5baf313756c7-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 16:32:17 crc kubenswrapper[4830]: I0227 16:32:17.360618 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-86c7h" event={"ID":"f35673a0-3e6b-4cd6-b378-5baf313756c7","Type":"ContainerDied","Data":"4b4ae80685302c721edc9945e009e67403a400845635e34403067dae8ee28983"} Feb 27 16:32:17 crc kubenswrapper[4830]: I0227 16:32:17.360662 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4b4ae80685302c721edc9945e009e67403a400845635e34403067dae8ee28983" Feb 27 16:32:17 crc kubenswrapper[4830]: I0227 16:32:17.360679 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-86c7h" Feb 27 16:32:17 crc kubenswrapper[4830]: I0227 16:32:17.543226 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 27 16:32:17 crc kubenswrapper[4830]: I0227 16:32:17.543499 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="ac57b71a-c649-451a-8cd8-a71f13e1387d" containerName="nova-scheduler-scheduler" containerID="cri-o://fa47c79e0160bd3ec8239f111a947e8a403a1646d83d1d09c44c1606c5678dd6" gracePeriod=30 Feb 27 16:32:17 crc kubenswrapper[4830]: I0227 16:32:17.555627 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 27 16:32:17 crc kubenswrapper[4830]: I0227 16:32:17.555836 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="b4a9d2bd-da61-4a9f-b2a5-030dd24eedd0" containerName="nova-api-log" containerID="cri-o://6e559975b32a9c25c02067fbf07d4cb6e124fdc661cd5b27630e91997b8d0b03" gracePeriod=30 Feb 27 16:32:17 crc kubenswrapper[4830]: I0227 16:32:17.555934 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="b4a9d2bd-da61-4a9f-b2a5-030dd24eedd0" containerName="nova-api-api" containerID="cri-o://813dd162c7b10d38280c7a7150448714572d48c51e6c1ae70728e1deaf7d61a7" gracePeriod=30 Feb 27 16:32:17 crc kubenswrapper[4830]: I0227 16:32:17.576177 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 27 16:32:17 crc kubenswrapper[4830]: I0227 16:32:17.576405 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="34e10b21-9e53-464a-a707-cb587ab15199" containerName="nova-metadata-log" containerID="cri-o://88b9e53a42c4a09fc14bc5ea02179f51920e7ad14dd3c90d36fd4745296b055e" gracePeriod=30 Feb 27 16:32:17 crc kubenswrapper[4830]: I0227 16:32:17.576548 4830 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="34e10b21-9e53-464a-a707-cb587ab15199" containerName="nova-metadata-metadata" containerID="cri-o://ed489a7ac08882b9d78c835ff27e520076b06b9bccf96912640b432acd899e9a" gracePeriod=30 Feb 27 16:32:18 crc kubenswrapper[4830]: I0227 16:32:18.111675 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 27 16:32:18 crc kubenswrapper[4830]: I0227 16:32:18.199409 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4a9d2bd-da61-4a9f-b2a5-030dd24eedd0-public-tls-certs\") pod \"b4a9d2bd-da61-4a9f-b2a5-030dd24eedd0\" (UID: \"b4a9d2bd-da61-4a9f-b2a5-030dd24eedd0\") " Feb 27 16:32:18 crc kubenswrapper[4830]: I0227 16:32:18.199450 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4a9d2bd-da61-4a9f-b2a5-030dd24eedd0-config-data\") pod \"b4a9d2bd-da61-4a9f-b2a5-030dd24eedd0\" (UID: \"b4a9d2bd-da61-4a9f-b2a5-030dd24eedd0\") " Feb 27 16:32:18 crc kubenswrapper[4830]: I0227 16:32:18.199577 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4a9d2bd-da61-4a9f-b2a5-030dd24eedd0-combined-ca-bundle\") pod \"b4a9d2bd-da61-4a9f-b2a5-030dd24eedd0\" (UID: \"b4a9d2bd-da61-4a9f-b2a5-030dd24eedd0\") " Feb 27 16:32:18 crc kubenswrapper[4830]: I0227 16:32:18.199628 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b4a9d2bd-da61-4a9f-b2a5-030dd24eedd0-logs\") pod \"b4a9d2bd-da61-4a9f-b2a5-030dd24eedd0\" (UID: \"b4a9d2bd-da61-4a9f-b2a5-030dd24eedd0\") " Feb 27 16:32:18 crc kubenswrapper[4830]: I0227 16:32:18.199665 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-4m9wn\" (UniqueName: \"kubernetes.io/projected/b4a9d2bd-da61-4a9f-b2a5-030dd24eedd0-kube-api-access-4m9wn\") pod \"b4a9d2bd-da61-4a9f-b2a5-030dd24eedd0\" (UID: \"b4a9d2bd-da61-4a9f-b2a5-030dd24eedd0\") " Feb 27 16:32:18 crc kubenswrapper[4830]: I0227 16:32:18.199722 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4a9d2bd-da61-4a9f-b2a5-030dd24eedd0-internal-tls-certs\") pod \"b4a9d2bd-da61-4a9f-b2a5-030dd24eedd0\" (UID: \"b4a9d2bd-da61-4a9f-b2a5-030dd24eedd0\") " Feb 27 16:32:18 crc kubenswrapper[4830]: I0227 16:32:18.200151 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b4a9d2bd-da61-4a9f-b2a5-030dd24eedd0-logs" (OuterVolumeSpecName: "logs") pod "b4a9d2bd-da61-4a9f-b2a5-030dd24eedd0" (UID: "b4a9d2bd-da61-4a9f-b2a5-030dd24eedd0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 16:32:18 crc kubenswrapper[4830]: I0227 16:32:18.204319 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4a9d2bd-da61-4a9f-b2a5-030dd24eedd0-kube-api-access-4m9wn" (OuterVolumeSpecName: "kube-api-access-4m9wn") pod "b4a9d2bd-da61-4a9f-b2a5-030dd24eedd0" (UID: "b4a9d2bd-da61-4a9f-b2a5-030dd24eedd0"). InnerVolumeSpecName "kube-api-access-4m9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:32:18 crc kubenswrapper[4830]: I0227 16:32:18.234345 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4a9d2bd-da61-4a9f-b2a5-030dd24eedd0-config-data" (OuterVolumeSpecName: "config-data") pod "b4a9d2bd-da61-4a9f-b2a5-030dd24eedd0" (UID: "b4a9d2bd-da61-4a9f-b2a5-030dd24eedd0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:32:18 crc kubenswrapper[4830]: I0227 16:32:18.240091 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4a9d2bd-da61-4a9f-b2a5-030dd24eedd0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b4a9d2bd-da61-4a9f-b2a5-030dd24eedd0" (UID: "b4a9d2bd-da61-4a9f-b2a5-030dd24eedd0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:32:18 crc kubenswrapper[4830]: I0227 16:32:18.252076 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4a9d2bd-da61-4a9f-b2a5-030dd24eedd0-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "b4a9d2bd-da61-4a9f-b2a5-030dd24eedd0" (UID: "b4a9d2bd-da61-4a9f-b2a5-030dd24eedd0"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:32:18 crc kubenswrapper[4830]: I0227 16:32:18.282021 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4a9d2bd-da61-4a9f-b2a5-030dd24eedd0-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "b4a9d2bd-da61-4a9f-b2a5-030dd24eedd0" (UID: "b4a9d2bd-da61-4a9f-b2a5-030dd24eedd0"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:32:18 crc kubenswrapper[4830]: I0227 16:32:18.301435 4830 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4a9d2bd-da61-4a9f-b2a5-030dd24eedd0-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 27 16:32:18 crc kubenswrapper[4830]: I0227 16:32:18.301478 4830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4a9d2bd-da61-4a9f-b2a5-030dd24eedd0-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 16:32:18 crc kubenswrapper[4830]: I0227 16:32:18.301488 4830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4a9d2bd-da61-4a9f-b2a5-030dd24eedd0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 16:32:18 crc kubenswrapper[4830]: I0227 16:32:18.301498 4830 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b4a9d2bd-da61-4a9f-b2a5-030dd24eedd0-logs\") on node \"crc\" DevicePath \"\"" Feb 27 16:32:18 crc kubenswrapper[4830]: I0227 16:32:18.301509 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4m9wn\" (UniqueName: \"kubernetes.io/projected/b4a9d2bd-da61-4a9f-b2a5-030dd24eedd0-kube-api-access-4m9wn\") on node \"crc\" DevicePath \"\"" Feb 27 16:32:18 crc kubenswrapper[4830]: I0227 16:32:18.301521 4830 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4a9d2bd-da61-4a9f-b2a5-030dd24eedd0-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 27 16:32:18 crc kubenswrapper[4830]: I0227 16:32:18.371668 4830 generic.go:334] "Generic (PLEG): container finished" podID="b4a9d2bd-da61-4a9f-b2a5-030dd24eedd0" containerID="813dd162c7b10d38280c7a7150448714572d48c51e6c1ae70728e1deaf7d61a7" exitCode=0 Feb 27 16:32:18 crc kubenswrapper[4830]: I0227 16:32:18.371699 4830 generic.go:334] 
"Generic (PLEG): container finished" podID="b4a9d2bd-da61-4a9f-b2a5-030dd24eedd0" containerID="6e559975b32a9c25c02067fbf07d4cb6e124fdc661cd5b27630e91997b8d0b03" exitCode=143 Feb 27 16:32:18 crc kubenswrapper[4830]: I0227 16:32:18.371719 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 27 16:32:18 crc kubenswrapper[4830]: I0227 16:32:18.371740 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b4a9d2bd-da61-4a9f-b2a5-030dd24eedd0","Type":"ContainerDied","Data":"813dd162c7b10d38280c7a7150448714572d48c51e6c1ae70728e1deaf7d61a7"} Feb 27 16:32:18 crc kubenswrapper[4830]: I0227 16:32:18.371762 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b4a9d2bd-da61-4a9f-b2a5-030dd24eedd0","Type":"ContainerDied","Data":"6e559975b32a9c25c02067fbf07d4cb6e124fdc661cd5b27630e91997b8d0b03"} Feb 27 16:32:18 crc kubenswrapper[4830]: I0227 16:32:18.371772 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b4a9d2bd-da61-4a9f-b2a5-030dd24eedd0","Type":"ContainerDied","Data":"568b83f9d0e2aac1add6fa14ba406ece747f698e1589e2bea692287fb89b60a6"} Feb 27 16:32:18 crc kubenswrapper[4830]: I0227 16:32:18.371800 4830 scope.go:117] "RemoveContainer" containerID="813dd162c7b10d38280c7a7150448714572d48c51e6c1ae70728e1deaf7d61a7" Feb 27 16:32:18 crc kubenswrapper[4830]: I0227 16:32:18.373504 4830 generic.go:334] "Generic (PLEG): container finished" podID="34e10b21-9e53-464a-a707-cb587ab15199" containerID="88b9e53a42c4a09fc14bc5ea02179f51920e7ad14dd3c90d36fd4745296b055e" exitCode=143 Feb 27 16:32:18 crc kubenswrapper[4830]: I0227 16:32:18.373545 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"34e10b21-9e53-464a-a707-cb587ab15199","Type":"ContainerDied","Data":"88b9e53a42c4a09fc14bc5ea02179f51920e7ad14dd3c90d36fd4745296b055e"} Feb 27 16:32:18 crc 
kubenswrapper[4830]: I0227 16:32:18.394289 4830 scope.go:117] "RemoveContainer" containerID="6e559975b32a9c25c02067fbf07d4cb6e124fdc661cd5b27630e91997b8d0b03" Feb 27 16:32:18 crc kubenswrapper[4830]: I0227 16:32:18.415816 4830 scope.go:117] "RemoveContainer" containerID="813dd162c7b10d38280c7a7150448714572d48c51e6c1ae70728e1deaf7d61a7" Feb 27 16:32:18 crc kubenswrapper[4830]: E0227 16:32:18.416408 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"813dd162c7b10d38280c7a7150448714572d48c51e6c1ae70728e1deaf7d61a7\": container with ID starting with 813dd162c7b10d38280c7a7150448714572d48c51e6c1ae70728e1deaf7d61a7 not found: ID does not exist" containerID="813dd162c7b10d38280c7a7150448714572d48c51e6c1ae70728e1deaf7d61a7" Feb 27 16:32:18 crc kubenswrapper[4830]: I0227 16:32:18.416458 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"813dd162c7b10d38280c7a7150448714572d48c51e6c1ae70728e1deaf7d61a7"} err="failed to get container status \"813dd162c7b10d38280c7a7150448714572d48c51e6c1ae70728e1deaf7d61a7\": rpc error: code = NotFound desc = could not find container \"813dd162c7b10d38280c7a7150448714572d48c51e6c1ae70728e1deaf7d61a7\": container with ID starting with 813dd162c7b10d38280c7a7150448714572d48c51e6c1ae70728e1deaf7d61a7 not found: ID does not exist" Feb 27 16:32:18 crc kubenswrapper[4830]: I0227 16:32:18.416487 4830 scope.go:117] "RemoveContainer" containerID="6e559975b32a9c25c02067fbf07d4cb6e124fdc661cd5b27630e91997b8d0b03" Feb 27 16:32:18 crc kubenswrapper[4830]: E0227 16:32:18.416905 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6e559975b32a9c25c02067fbf07d4cb6e124fdc661cd5b27630e91997b8d0b03\": container with ID starting with 6e559975b32a9c25c02067fbf07d4cb6e124fdc661cd5b27630e91997b8d0b03 not found: ID does not exist" 
containerID="6e559975b32a9c25c02067fbf07d4cb6e124fdc661cd5b27630e91997b8d0b03" Feb 27 16:32:18 crc kubenswrapper[4830]: I0227 16:32:18.416938 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e559975b32a9c25c02067fbf07d4cb6e124fdc661cd5b27630e91997b8d0b03"} err="failed to get container status \"6e559975b32a9c25c02067fbf07d4cb6e124fdc661cd5b27630e91997b8d0b03\": rpc error: code = NotFound desc = could not find container \"6e559975b32a9c25c02067fbf07d4cb6e124fdc661cd5b27630e91997b8d0b03\": container with ID starting with 6e559975b32a9c25c02067fbf07d4cb6e124fdc661cd5b27630e91997b8d0b03 not found: ID does not exist" Feb 27 16:32:18 crc kubenswrapper[4830]: I0227 16:32:18.417027 4830 scope.go:117] "RemoveContainer" containerID="813dd162c7b10d38280c7a7150448714572d48c51e6c1ae70728e1deaf7d61a7" Feb 27 16:32:18 crc kubenswrapper[4830]: I0227 16:32:18.417294 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"813dd162c7b10d38280c7a7150448714572d48c51e6c1ae70728e1deaf7d61a7"} err="failed to get container status \"813dd162c7b10d38280c7a7150448714572d48c51e6c1ae70728e1deaf7d61a7\": rpc error: code = NotFound desc = could not find container \"813dd162c7b10d38280c7a7150448714572d48c51e6c1ae70728e1deaf7d61a7\": container with ID starting with 813dd162c7b10d38280c7a7150448714572d48c51e6c1ae70728e1deaf7d61a7 not found: ID does not exist" Feb 27 16:32:18 crc kubenswrapper[4830]: I0227 16:32:18.417378 4830 scope.go:117] "RemoveContainer" containerID="6e559975b32a9c25c02067fbf07d4cb6e124fdc661cd5b27630e91997b8d0b03" Feb 27 16:32:18 crc kubenswrapper[4830]: I0227 16:32:18.417677 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e559975b32a9c25c02067fbf07d4cb6e124fdc661cd5b27630e91997b8d0b03"} err="failed to get container status \"6e559975b32a9c25c02067fbf07d4cb6e124fdc661cd5b27630e91997b8d0b03\": rpc error: code = NotFound desc = could 
not find container \"6e559975b32a9c25c02067fbf07d4cb6e124fdc661cd5b27630e91997b8d0b03\": container with ID starting with 6e559975b32a9c25c02067fbf07d4cb6e124fdc661cd5b27630e91997b8d0b03 not found: ID does not exist" Feb 27 16:32:18 crc kubenswrapper[4830]: I0227 16:32:18.422344 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 27 16:32:18 crc kubenswrapper[4830]: I0227 16:32:18.436970 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 27 16:32:18 crc kubenswrapper[4830]: I0227 16:32:18.448467 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 27 16:32:18 crc kubenswrapper[4830]: E0227 16:32:18.448904 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa63e972-7d02-4b84-8f48-c4126c0e6b06" containerName="dnsmasq-dns" Feb 27 16:32:18 crc kubenswrapper[4830]: I0227 16:32:18.448921 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa63e972-7d02-4b84-8f48-c4126c0e6b06" containerName="dnsmasq-dns" Feb 27 16:32:18 crc kubenswrapper[4830]: E0227 16:32:18.448932 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa63e972-7d02-4b84-8f48-c4126c0e6b06" containerName="init" Feb 27 16:32:18 crc kubenswrapper[4830]: I0227 16:32:18.448938 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa63e972-7d02-4b84-8f48-c4126c0e6b06" containerName="init" Feb 27 16:32:18 crc kubenswrapper[4830]: E0227 16:32:18.448971 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4a9d2bd-da61-4a9f-b2a5-030dd24eedd0" containerName="nova-api-log" Feb 27 16:32:18 crc kubenswrapper[4830]: I0227 16:32:18.448978 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4a9d2bd-da61-4a9f-b2a5-030dd24eedd0" containerName="nova-api-log" Feb 27 16:32:18 crc kubenswrapper[4830]: E0227 16:32:18.448990 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f35673a0-3e6b-4cd6-b378-5baf313756c7" 
containerName="nova-manage" Feb 27 16:32:18 crc kubenswrapper[4830]: I0227 16:32:18.448997 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="f35673a0-3e6b-4cd6-b378-5baf313756c7" containerName="nova-manage" Feb 27 16:32:18 crc kubenswrapper[4830]: E0227 16:32:18.449019 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4a9d2bd-da61-4a9f-b2a5-030dd24eedd0" containerName="nova-api-api" Feb 27 16:32:18 crc kubenswrapper[4830]: I0227 16:32:18.449026 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4a9d2bd-da61-4a9f-b2a5-030dd24eedd0" containerName="nova-api-api" Feb 27 16:32:18 crc kubenswrapper[4830]: I0227 16:32:18.449195 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="f35673a0-3e6b-4cd6-b378-5baf313756c7" containerName="nova-manage" Feb 27 16:32:18 crc kubenswrapper[4830]: I0227 16:32:18.449209 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa63e972-7d02-4b84-8f48-c4126c0e6b06" containerName="dnsmasq-dns" Feb 27 16:32:18 crc kubenswrapper[4830]: I0227 16:32:18.449217 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4a9d2bd-da61-4a9f-b2a5-030dd24eedd0" containerName="nova-api-api" Feb 27 16:32:18 crc kubenswrapper[4830]: I0227 16:32:18.449235 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4a9d2bd-da61-4a9f-b2a5-030dd24eedd0" containerName="nova-api-log" Feb 27 16:32:18 crc kubenswrapper[4830]: I0227 16:32:18.450184 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 27 16:32:18 crc kubenswrapper[4830]: I0227 16:32:18.457475 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 27 16:32:18 crc kubenswrapper[4830]: I0227 16:32:18.457827 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 27 16:32:18 crc kubenswrapper[4830]: I0227 16:32:18.458684 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 27 16:32:18 crc kubenswrapper[4830]: I0227 16:32:18.463857 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 27 16:32:18 crc kubenswrapper[4830]: I0227 16:32:18.504189 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91362817-2bc3-48d8-a4ae-8ba5cb8f2b4c-config-data\") pod \"nova-api-0\" (UID: \"91362817-2bc3-48d8-a4ae-8ba5cb8f2b4c\") " pod="openstack/nova-api-0" Feb 27 16:32:18 crc kubenswrapper[4830]: I0227 16:32:18.504229 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/91362817-2bc3-48d8-a4ae-8ba5cb8f2b4c-internal-tls-certs\") pod \"nova-api-0\" (UID: \"91362817-2bc3-48d8-a4ae-8ba5cb8f2b4c\") " pod="openstack/nova-api-0" Feb 27 16:32:18 crc kubenswrapper[4830]: I0227 16:32:18.504296 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/91362817-2bc3-48d8-a4ae-8ba5cb8f2b4c-public-tls-certs\") pod \"nova-api-0\" (UID: \"91362817-2bc3-48d8-a4ae-8ba5cb8f2b4c\") " pod="openstack/nova-api-0" Feb 27 16:32:18 crc kubenswrapper[4830]: I0227 16:32:18.504335 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/91362817-2bc3-48d8-a4ae-8ba5cb8f2b4c-logs\") pod \"nova-api-0\" (UID: \"91362817-2bc3-48d8-a4ae-8ba5cb8f2b4c\") " pod="openstack/nova-api-0" Feb 27 16:32:18 crc kubenswrapper[4830]: I0227 16:32:18.504360 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91362817-2bc3-48d8-a4ae-8ba5cb8f2b4c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"91362817-2bc3-48d8-a4ae-8ba5cb8f2b4c\") " pod="openstack/nova-api-0" Feb 27 16:32:18 crc kubenswrapper[4830]: I0227 16:32:18.504376 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lx6w\" (UniqueName: \"kubernetes.io/projected/91362817-2bc3-48d8-a4ae-8ba5cb8f2b4c-kube-api-access-4lx6w\") pod \"nova-api-0\" (UID: \"91362817-2bc3-48d8-a4ae-8ba5cb8f2b4c\") " pod="openstack/nova-api-0" Feb 27 16:32:18 crc kubenswrapper[4830]: I0227 16:32:18.604910 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/91362817-2bc3-48d8-a4ae-8ba5cb8f2b4c-public-tls-certs\") pod \"nova-api-0\" (UID: \"91362817-2bc3-48d8-a4ae-8ba5cb8f2b4c\") " pod="openstack/nova-api-0" Feb 27 16:32:18 crc kubenswrapper[4830]: I0227 16:32:18.604984 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/91362817-2bc3-48d8-a4ae-8ba5cb8f2b4c-logs\") pod \"nova-api-0\" (UID: \"91362817-2bc3-48d8-a4ae-8ba5cb8f2b4c\") " pod="openstack/nova-api-0" Feb 27 16:32:18 crc kubenswrapper[4830]: I0227 16:32:18.605016 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91362817-2bc3-48d8-a4ae-8ba5cb8f2b4c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"91362817-2bc3-48d8-a4ae-8ba5cb8f2b4c\") " pod="openstack/nova-api-0" Feb 27 16:32:18 crc 
kubenswrapper[4830]: I0227 16:32:18.605032 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4lx6w\" (UniqueName: \"kubernetes.io/projected/91362817-2bc3-48d8-a4ae-8ba5cb8f2b4c-kube-api-access-4lx6w\") pod \"nova-api-0\" (UID: \"91362817-2bc3-48d8-a4ae-8ba5cb8f2b4c\") " pod="openstack/nova-api-0" Feb 27 16:32:18 crc kubenswrapper[4830]: I0227 16:32:18.605090 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91362817-2bc3-48d8-a4ae-8ba5cb8f2b4c-config-data\") pod \"nova-api-0\" (UID: \"91362817-2bc3-48d8-a4ae-8ba5cb8f2b4c\") " pod="openstack/nova-api-0" Feb 27 16:32:18 crc kubenswrapper[4830]: I0227 16:32:18.605109 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/91362817-2bc3-48d8-a4ae-8ba5cb8f2b4c-internal-tls-certs\") pod \"nova-api-0\" (UID: \"91362817-2bc3-48d8-a4ae-8ba5cb8f2b4c\") " pod="openstack/nova-api-0" Feb 27 16:32:18 crc kubenswrapper[4830]: I0227 16:32:18.605587 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/91362817-2bc3-48d8-a4ae-8ba5cb8f2b4c-logs\") pod \"nova-api-0\" (UID: \"91362817-2bc3-48d8-a4ae-8ba5cb8f2b4c\") " pod="openstack/nova-api-0" Feb 27 16:32:18 crc kubenswrapper[4830]: I0227 16:32:18.608661 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/91362817-2bc3-48d8-a4ae-8ba5cb8f2b4c-public-tls-certs\") pod \"nova-api-0\" (UID: \"91362817-2bc3-48d8-a4ae-8ba5cb8f2b4c\") " pod="openstack/nova-api-0" Feb 27 16:32:18 crc kubenswrapper[4830]: I0227 16:32:18.608713 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91362817-2bc3-48d8-a4ae-8ba5cb8f2b4c-config-data\") pod \"nova-api-0\" (UID: 
\"91362817-2bc3-48d8-a4ae-8ba5cb8f2b4c\") " pod="openstack/nova-api-0" Feb 27 16:32:18 crc kubenswrapper[4830]: I0227 16:32:18.610629 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/91362817-2bc3-48d8-a4ae-8ba5cb8f2b4c-internal-tls-certs\") pod \"nova-api-0\" (UID: \"91362817-2bc3-48d8-a4ae-8ba5cb8f2b4c\") " pod="openstack/nova-api-0" Feb 27 16:32:18 crc kubenswrapper[4830]: I0227 16:32:18.610746 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91362817-2bc3-48d8-a4ae-8ba5cb8f2b4c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"91362817-2bc3-48d8-a4ae-8ba5cb8f2b4c\") " pod="openstack/nova-api-0" Feb 27 16:32:18 crc kubenswrapper[4830]: I0227 16:32:18.624379 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4lx6w\" (UniqueName: \"kubernetes.io/projected/91362817-2bc3-48d8-a4ae-8ba5cb8f2b4c-kube-api-access-4lx6w\") pod \"nova-api-0\" (UID: \"91362817-2bc3-48d8-a4ae-8ba5cb8f2b4c\") " pod="openstack/nova-api-0" Feb 27 16:32:18 crc kubenswrapper[4830]: I0227 16:32:18.776077 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4a9d2bd-da61-4a9f-b2a5-030dd24eedd0" path="/var/lib/kubelet/pods/b4a9d2bd-da61-4a9f-b2a5-030dd24eedd0/volumes" Feb 27 16:32:18 crc kubenswrapper[4830]: I0227 16:32:18.776212 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 27 16:32:19 crc kubenswrapper[4830]: I0227 16:32:19.281431 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 27 16:32:19 crc kubenswrapper[4830]: I0227 16:32:19.383544 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"91362817-2bc3-48d8-a4ae-8ba5cb8f2b4c","Type":"ContainerStarted","Data":"7e7bd33e89c122ff26f31646057c366aa0a0c10749f0a8963144d1ea36341568"} Feb 27 16:32:19 crc kubenswrapper[4830]: I0227 16:32:19.384080 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 27 16:32:19 crc kubenswrapper[4830]: I0227 16:32:19.385905 4830 generic.go:334] "Generic (PLEG): container finished" podID="ac57b71a-c649-451a-8cd8-a71f13e1387d" containerID="fa47c79e0160bd3ec8239f111a947e8a403a1646d83d1d09c44c1606c5678dd6" exitCode=0 Feb 27 16:32:19 crc kubenswrapper[4830]: I0227 16:32:19.385985 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ac57b71a-c649-451a-8cd8-a71f13e1387d","Type":"ContainerDied","Data":"fa47c79e0160bd3ec8239f111a947e8a403a1646d83d1d09c44c1606c5678dd6"} Feb 27 16:32:19 crc kubenswrapper[4830]: I0227 16:32:19.386011 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ac57b71a-c649-451a-8cd8-a71f13e1387d","Type":"ContainerDied","Data":"f2b9cc10cba138b3fef51c45a1d8d3056239bfd96370b191cfda84adc73c5df0"} Feb 27 16:32:19 crc kubenswrapper[4830]: I0227 16:32:19.386032 4830 scope.go:117] "RemoveContainer" containerID="fa47c79e0160bd3ec8239f111a947e8a403a1646d83d1d09c44c1606c5678dd6" Feb 27 16:32:19 crc kubenswrapper[4830]: I0227 16:32:19.449148 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bhp6l\" (UniqueName: \"kubernetes.io/projected/ac57b71a-c649-451a-8cd8-a71f13e1387d-kube-api-access-bhp6l\") pod 
\"ac57b71a-c649-451a-8cd8-a71f13e1387d\" (UID: \"ac57b71a-c649-451a-8cd8-a71f13e1387d\") " Feb 27 16:32:19 crc kubenswrapper[4830]: I0227 16:32:19.449234 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac57b71a-c649-451a-8cd8-a71f13e1387d-combined-ca-bundle\") pod \"ac57b71a-c649-451a-8cd8-a71f13e1387d\" (UID: \"ac57b71a-c649-451a-8cd8-a71f13e1387d\") " Feb 27 16:32:19 crc kubenswrapper[4830]: I0227 16:32:19.449408 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac57b71a-c649-451a-8cd8-a71f13e1387d-config-data\") pod \"ac57b71a-c649-451a-8cd8-a71f13e1387d\" (UID: \"ac57b71a-c649-451a-8cd8-a71f13e1387d\") " Feb 27 16:32:19 crc kubenswrapper[4830]: I0227 16:32:19.455131 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac57b71a-c649-451a-8cd8-a71f13e1387d-kube-api-access-bhp6l" (OuterVolumeSpecName: "kube-api-access-bhp6l") pod "ac57b71a-c649-451a-8cd8-a71f13e1387d" (UID: "ac57b71a-c649-451a-8cd8-a71f13e1387d"). InnerVolumeSpecName "kube-api-access-bhp6l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:32:19 crc kubenswrapper[4830]: I0227 16:32:19.471218 4830 scope.go:117] "RemoveContainer" containerID="fa47c79e0160bd3ec8239f111a947e8a403a1646d83d1d09c44c1606c5678dd6" Feb 27 16:32:19 crc kubenswrapper[4830]: E0227 16:32:19.472434 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa47c79e0160bd3ec8239f111a947e8a403a1646d83d1d09c44c1606c5678dd6\": container with ID starting with fa47c79e0160bd3ec8239f111a947e8a403a1646d83d1d09c44c1606c5678dd6 not found: ID does not exist" containerID="fa47c79e0160bd3ec8239f111a947e8a403a1646d83d1d09c44c1606c5678dd6" Feb 27 16:32:19 crc kubenswrapper[4830]: I0227 16:32:19.472472 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa47c79e0160bd3ec8239f111a947e8a403a1646d83d1d09c44c1606c5678dd6"} err="failed to get container status \"fa47c79e0160bd3ec8239f111a947e8a403a1646d83d1d09c44c1606c5678dd6\": rpc error: code = NotFound desc = could not find container \"fa47c79e0160bd3ec8239f111a947e8a403a1646d83d1d09c44c1606c5678dd6\": container with ID starting with fa47c79e0160bd3ec8239f111a947e8a403a1646d83d1d09c44c1606c5678dd6 not found: ID does not exist" Feb 27 16:32:19 crc kubenswrapper[4830]: I0227 16:32:19.479404 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac57b71a-c649-451a-8cd8-a71f13e1387d-config-data" (OuterVolumeSpecName: "config-data") pod "ac57b71a-c649-451a-8cd8-a71f13e1387d" (UID: "ac57b71a-c649-451a-8cd8-a71f13e1387d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:32:19 crc kubenswrapper[4830]: I0227 16:32:19.498364 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac57b71a-c649-451a-8cd8-a71f13e1387d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ac57b71a-c649-451a-8cd8-a71f13e1387d" (UID: "ac57b71a-c649-451a-8cd8-a71f13e1387d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:32:19 crc kubenswrapper[4830]: I0227 16:32:19.551437 4830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac57b71a-c649-451a-8cd8-a71f13e1387d-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 16:32:19 crc kubenswrapper[4830]: I0227 16:32:19.551478 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bhp6l\" (UniqueName: \"kubernetes.io/projected/ac57b71a-c649-451a-8cd8-a71f13e1387d-kube-api-access-bhp6l\") on node \"crc\" DevicePath \"\"" Feb 27 16:32:19 crc kubenswrapper[4830]: I0227 16:32:19.551495 4830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac57b71a-c649-451a-8cd8-a71f13e1387d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 16:32:20 crc kubenswrapper[4830]: I0227 16:32:20.406744 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 27 16:32:20 crc kubenswrapper[4830]: I0227 16:32:20.409662 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"91362817-2bc3-48d8-a4ae-8ba5cb8f2b4c","Type":"ContainerStarted","Data":"71f9a2d35a123a7c42bc68cc143760e467aedb724086c36e562efbf095e0c426"} Feb 27 16:32:20 crc kubenswrapper[4830]: I0227 16:32:20.409722 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"91362817-2bc3-48d8-a4ae-8ba5cb8f2b4c","Type":"ContainerStarted","Data":"144b29fbee6ca22072cb52d8025180f33aea96191753e1a5038399c82ac702fc"} Feb 27 16:32:20 crc kubenswrapper[4830]: I0227 16:32:20.451220 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.451184842 podStartE2EDuration="2.451184842s" podCreationTimestamp="2026-02-27 16:32:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:32:20.448356374 +0000 UTC m=+1536.537628877" watchObservedRunningTime="2026-02-27 16:32:20.451184842 +0000 UTC m=+1536.540457345" Feb 27 16:32:20 crc kubenswrapper[4830]: I0227 16:32:20.486671 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 27 16:32:20 crc kubenswrapper[4830]: I0227 16:32:20.502642 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 27 16:32:20 crc kubenswrapper[4830]: I0227 16:32:20.520141 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 27 16:32:20 crc kubenswrapper[4830]: E0227 16:32:20.520667 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac57b71a-c649-451a-8cd8-a71f13e1387d" containerName="nova-scheduler-scheduler" Feb 27 16:32:20 crc kubenswrapper[4830]: I0227 16:32:20.520688 4830 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="ac57b71a-c649-451a-8cd8-a71f13e1387d" containerName="nova-scheduler-scheduler" Feb 27 16:32:20 crc kubenswrapper[4830]: I0227 16:32:20.520941 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac57b71a-c649-451a-8cd8-a71f13e1387d" containerName="nova-scheduler-scheduler" Feb 27 16:32:20 crc kubenswrapper[4830]: I0227 16:32:20.521668 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 27 16:32:20 crc kubenswrapper[4830]: I0227 16:32:20.535167 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 27 16:32:20 crc kubenswrapper[4830]: I0227 16:32:20.539150 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 27 16:32:20 crc kubenswrapper[4830]: I0227 16:32:20.573073 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e9ca76d-ae40-4258-8f4d-09e15a0d8cd6-config-data\") pod \"nova-scheduler-0\" (UID: \"9e9ca76d-ae40-4258-8f4d-09e15a0d8cd6\") " pod="openstack/nova-scheduler-0" Feb 27 16:32:20 crc kubenswrapper[4830]: I0227 16:32:20.573356 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e9ca76d-ae40-4258-8f4d-09e15a0d8cd6-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"9e9ca76d-ae40-4258-8f4d-09e15a0d8cd6\") " pod="openstack/nova-scheduler-0" Feb 27 16:32:20 crc kubenswrapper[4830]: I0227 16:32:20.573500 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6rg2\" (UniqueName: \"kubernetes.io/projected/9e9ca76d-ae40-4258-8f4d-09e15a0d8cd6-kube-api-access-c6rg2\") pod \"nova-scheduler-0\" (UID: \"9e9ca76d-ae40-4258-8f4d-09e15a0d8cd6\") " pod="openstack/nova-scheduler-0" Feb 27 16:32:20 crc kubenswrapper[4830]: 
I0227 16:32:20.675759 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e9ca76d-ae40-4258-8f4d-09e15a0d8cd6-config-data\") pod \"nova-scheduler-0\" (UID: \"9e9ca76d-ae40-4258-8f4d-09e15a0d8cd6\") " pod="openstack/nova-scheduler-0" Feb 27 16:32:20 crc kubenswrapper[4830]: I0227 16:32:20.676159 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e9ca76d-ae40-4258-8f4d-09e15a0d8cd6-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"9e9ca76d-ae40-4258-8f4d-09e15a0d8cd6\") " pod="openstack/nova-scheduler-0" Feb 27 16:32:20 crc kubenswrapper[4830]: I0227 16:32:20.676470 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c6rg2\" (UniqueName: \"kubernetes.io/projected/9e9ca76d-ae40-4258-8f4d-09e15a0d8cd6-kube-api-access-c6rg2\") pod \"nova-scheduler-0\" (UID: \"9e9ca76d-ae40-4258-8f4d-09e15a0d8cd6\") " pod="openstack/nova-scheduler-0" Feb 27 16:32:20 crc kubenswrapper[4830]: I0227 16:32:20.683431 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e9ca76d-ae40-4258-8f4d-09e15a0d8cd6-config-data\") pod \"nova-scheduler-0\" (UID: \"9e9ca76d-ae40-4258-8f4d-09e15a0d8cd6\") " pod="openstack/nova-scheduler-0" Feb 27 16:32:20 crc kubenswrapper[4830]: I0227 16:32:20.683469 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e9ca76d-ae40-4258-8f4d-09e15a0d8cd6-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"9e9ca76d-ae40-4258-8f4d-09e15a0d8cd6\") " pod="openstack/nova-scheduler-0" Feb 27 16:32:20 crc kubenswrapper[4830]: I0227 16:32:20.702870 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c6rg2\" (UniqueName: 
\"kubernetes.io/projected/9e9ca76d-ae40-4258-8f4d-09e15a0d8cd6-kube-api-access-c6rg2\") pod \"nova-scheduler-0\" (UID: \"9e9ca76d-ae40-4258-8f4d-09e15a0d8cd6\") " pod="openstack/nova-scheduler-0" Feb 27 16:32:20 crc kubenswrapper[4830]: I0227 16:32:20.719234 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="34e10b21-9e53-464a-a707-cb587ab15199" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.200:8775/\": read tcp 10.217.0.2:36786->10.217.0.200:8775: read: connection reset by peer" Feb 27 16:32:20 crc kubenswrapper[4830]: I0227 16:32:20.719237 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="34e10b21-9e53-464a-a707-cb587ab15199" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.200:8775/\": read tcp 10.217.0.2:36782->10.217.0.200:8775: read: connection reset by peer" Feb 27 16:32:20 crc kubenswrapper[4830]: I0227 16:32:20.805146 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac57b71a-c649-451a-8cd8-a71f13e1387d" path="/var/lib/kubelet/pods/ac57b71a-c649-451a-8cd8-a71f13e1387d/volumes" Feb 27 16:32:20 crc kubenswrapper[4830]: I0227 16:32:20.849423 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 27 16:32:21 crc kubenswrapper[4830]: I0227 16:32:21.193328 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 27 16:32:21 crc kubenswrapper[4830]: I0227 16:32:21.298152 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34e10b21-9e53-464a-a707-cb587ab15199-combined-ca-bundle\") pod \"34e10b21-9e53-464a-a707-cb587ab15199\" (UID: \"34e10b21-9e53-464a-a707-cb587ab15199\") " Feb 27 16:32:21 crc kubenswrapper[4830]: I0227 16:32:21.298483 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4klkj\" (UniqueName: \"kubernetes.io/projected/34e10b21-9e53-464a-a707-cb587ab15199-kube-api-access-4klkj\") pod \"34e10b21-9e53-464a-a707-cb587ab15199\" (UID: \"34e10b21-9e53-464a-a707-cb587ab15199\") " Feb 27 16:32:21 crc kubenswrapper[4830]: I0227 16:32:21.298550 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/34e10b21-9e53-464a-a707-cb587ab15199-logs\") pod \"34e10b21-9e53-464a-a707-cb587ab15199\" (UID: \"34e10b21-9e53-464a-a707-cb587ab15199\") " Feb 27 16:32:21 crc kubenswrapper[4830]: I0227 16:32:21.298643 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/34e10b21-9e53-464a-a707-cb587ab15199-nova-metadata-tls-certs\") pod \"34e10b21-9e53-464a-a707-cb587ab15199\" (UID: \"34e10b21-9e53-464a-a707-cb587ab15199\") " Feb 27 16:32:21 crc kubenswrapper[4830]: I0227 16:32:21.298683 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34e10b21-9e53-464a-a707-cb587ab15199-config-data\") pod \"34e10b21-9e53-464a-a707-cb587ab15199\" (UID: \"34e10b21-9e53-464a-a707-cb587ab15199\") " Feb 27 16:32:21 crc kubenswrapper[4830]: I0227 16:32:21.299157 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/34e10b21-9e53-464a-a707-cb587ab15199-logs" (OuterVolumeSpecName: "logs") pod "34e10b21-9e53-464a-a707-cb587ab15199" (UID: "34e10b21-9e53-464a-a707-cb587ab15199"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 16:32:21 crc kubenswrapper[4830]: I0227 16:32:21.305188 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34e10b21-9e53-464a-a707-cb587ab15199-kube-api-access-4klkj" (OuterVolumeSpecName: "kube-api-access-4klkj") pod "34e10b21-9e53-464a-a707-cb587ab15199" (UID: "34e10b21-9e53-464a-a707-cb587ab15199"). InnerVolumeSpecName "kube-api-access-4klkj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:32:21 crc kubenswrapper[4830]: I0227 16:32:21.357745 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-24w5s"] Feb 27 16:32:21 crc kubenswrapper[4830]: E0227 16:32:21.371240 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34e10b21-9e53-464a-a707-cb587ab15199" containerName="nova-metadata-metadata" Feb 27 16:32:21 crc kubenswrapper[4830]: I0227 16:32:21.371279 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="34e10b21-9e53-464a-a707-cb587ab15199" containerName="nova-metadata-metadata" Feb 27 16:32:21 crc kubenswrapper[4830]: E0227 16:32:21.371297 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34e10b21-9e53-464a-a707-cb587ab15199" containerName="nova-metadata-log" Feb 27 16:32:21 crc kubenswrapper[4830]: I0227 16:32:21.371304 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="34e10b21-9e53-464a-a707-cb587ab15199" containerName="nova-metadata-log" Feb 27 16:32:21 crc kubenswrapper[4830]: I0227 16:32:21.371535 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="34e10b21-9e53-464a-a707-cb587ab15199" containerName="nova-metadata-metadata" Feb 27 16:32:21 crc kubenswrapper[4830]: I0227 16:32:21.371555 4830 
memory_manager.go:354] "RemoveStaleState removing state" podUID="34e10b21-9e53-464a-a707-cb587ab15199" containerName="nova-metadata-log" Feb 27 16:32:21 crc kubenswrapper[4830]: I0227 16:32:21.372728 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-24w5s" Feb 27 16:32:21 crc kubenswrapper[4830]: I0227 16:32:21.396738 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34e10b21-9e53-464a-a707-cb587ab15199-config-data" (OuterVolumeSpecName: "config-data") pod "34e10b21-9e53-464a-a707-cb587ab15199" (UID: "34e10b21-9e53-464a-a707-cb587ab15199"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:32:21 crc kubenswrapper[4830]: I0227 16:32:21.398814 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-24w5s"] Feb 27 16:32:21 crc kubenswrapper[4830]: I0227 16:32:21.403415 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4klkj\" (UniqueName: \"kubernetes.io/projected/34e10b21-9e53-464a-a707-cb587ab15199-kube-api-access-4klkj\") on node \"crc\" DevicePath \"\"" Feb 27 16:32:21 crc kubenswrapper[4830]: I0227 16:32:21.403434 4830 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/34e10b21-9e53-464a-a707-cb587ab15199-logs\") on node \"crc\" DevicePath \"\"" Feb 27 16:32:21 crc kubenswrapper[4830]: I0227 16:32:21.403444 4830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34e10b21-9e53-464a-a707-cb587ab15199-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 16:32:21 crc kubenswrapper[4830]: I0227 16:32:21.417732 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 27 16:32:21 crc kubenswrapper[4830]: I0227 16:32:21.419084 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/secret/34e10b21-9e53-464a-a707-cb587ab15199-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "34e10b21-9e53-464a-a707-cb587ab15199" (UID: "34e10b21-9e53-464a-a707-cb587ab15199"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:32:21 crc kubenswrapper[4830]: I0227 16:32:21.426549 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34e10b21-9e53-464a-a707-cb587ab15199-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "34e10b21-9e53-464a-a707-cb587ab15199" (UID: "34e10b21-9e53-464a-a707-cb587ab15199"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:32:21 crc kubenswrapper[4830]: I0227 16:32:21.462446 4830 generic.go:334] "Generic (PLEG): container finished" podID="34e10b21-9e53-464a-a707-cb587ab15199" containerID="ed489a7ac08882b9d78c835ff27e520076b06b9bccf96912640b432acd899e9a" exitCode=0 Feb 27 16:32:21 crc kubenswrapper[4830]: I0227 16:32:21.462802 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"34e10b21-9e53-464a-a707-cb587ab15199","Type":"ContainerDied","Data":"ed489a7ac08882b9d78c835ff27e520076b06b9bccf96912640b432acd899e9a"} Feb 27 16:32:21 crc kubenswrapper[4830]: I0227 16:32:21.462835 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"34e10b21-9e53-464a-a707-cb587ab15199","Type":"ContainerDied","Data":"d4174918fb6c20d990c1995356845eb5e906d733e7b0ba614eec5de386d4c062"} Feb 27 16:32:21 crc kubenswrapper[4830]: I0227 16:32:21.462852 4830 scope.go:117] "RemoveContainer" containerID="ed489a7ac08882b9d78c835ff27e520076b06b9bccf96912640b432acd899e9a" Feb 27 16:32:21 crc kubenswrapper[4830]: I0227 16:32:21.463031 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 27 16:32:21 crc kubenswrapper[4830]: I0227 16:32:21.477638 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9e9ca76d-ae40-4258-8f4d-09e15a0d8cd6","Type":"ContainerStarted","Data":"c29626b40606fd93d793caacbd2f1f3be72535bb9cd73efe02a55861642ccc13"} Feb 27 16:32:21 crc kubenswrapper[4830]: I0227 16:32:21.510989 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d5ffa89-a299-442c-9744-e8c35a5f4551-utilities\") pod \"redhat-operators-24w5s\" (UID: \"4d5ffa89-a299-442c-9744-e8c35a5f4551\") " pod="openshift-marketplace/redhat-operators-24w5s" Feb 27 16:32:21 crc kubenswrapper[4830]: I0227 16:32:21.511073 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d5ffa89-a299-442c-9744-e8c35a5f4551-catalog-content\") pod \"redhat-operators-24w5s\" (UID: \"4d5ffa89-a299-442c-9744-e8c35a5f4551\") " pod="openshift-marketplace/redhat-operators-24w5s" Feb 27 16:32:21 crc kubenswrapper[4830]: I0227 16:32:21.511301 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gvl2\" (UniqueName: \"kubernetes.io/projected/4d5ffa89-a299-442c-9744-e8c35a5f4551-kube-api-access-2gvl2\") pod \"redhat-operators-24w5s\" (UID: \"4d5ffa89-a299-442c-9744-e8c35a5f4551\") " pod="openshift-marketplace/redhat-operators-24w5s" Feb 27 16:32:21 crc kubenswrapper[4830]: I0227 16:32:21.511435 4830 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/34e10b21-9e53-464a-a707-cb587ab15199-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 27 16:32:21 crc kubenswrapper[4830]: I0227 16:32:21.511446 4830 reconciler_common.go:293] "Volume detached for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34e10b21-9e53-464a-a707-cb587ab15199-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 16:32:21 crc kubenswrapper[4830]: I0227 16:32:21.528281 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 27 16:32:21 crc kubenswrapper[4830]: I0227 16:32:21.571247 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 27 16:32:21 crc kubenswrapper[4830]: I0227 16:32:21.573600 4830 scope.go:117] "RemoveContainer" containerID="88b9e53a42c4a09fc14bc5ea02179f51920e7ad14dd3c90d36fd4745296b055e" Feb 27 16:32:21 crc kubenswrapper[4830]: I0227 16:32:21.581490 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 27 16:32:21 crc kubenswrapper[4830]: I0227 16:32:21.583105 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 27 16:32:21 crc kubenswrapper[4830]: I0227 16:32:21.590063 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 27 16:32:21 crc kubenswrapper[4830]: I0227 16:32:21.596530 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 27 16:32:21 crc kubenswrapper[4830]: I0227 16:32:21.596564 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 27 16:32:21 crc kubenswrapper[4830]: I0227 16:32:21.613415 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d5ffa89-a299-442c-9744-e8c35a5f4551-utilities\") pod \"redhat-operators-24w5s\" (UID: \"4d5ffa89-a299-442c-9744-e8c35a5f4551\") " pod="openshift-marketplace/redhat-operators-24w5s" Feb 27 16:32:21 crc kubenswrapper[4830]: I0227 16:32:21.613489 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/4d5ffa89-a299-442c-9744-e8c35a5f4551-catalog-content\") pod \"redhat-operators-24w5s\" (UID: \"4d5ffa89-a299-442c-9744-e8c35a5f4551\") " pod="openshift-marketplace/redhat-operators-24w5s" Feb 27 16:32:21 crc kubenswrapper[4830]: I0227 16:32:21.613512 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2gvl2\" (UniqueName: \"kubernetes.io/projected/4d5ffa89-a299-442c-9744-e8c35a5f4551-kube-api-access-2gvl2\") pod \"redhat-operators-24w5s\" (UID: \"4d5ffa89-a299-442c-9744-e8c35a5f4551\") " pod="openshift-marketplace/redhat-operators-24w5s" Feb 27 16:32:21 crc kubenswrapper[4830]: I0227 16:32:21.614133 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d5ffa89-a299-442c-9744-e8c35a5f4551-utilities\") pod \"redhat-operators-24w5s\" (UID: \"4d5ffa89-a299-442c-9744-e8c35a5f4551\") " pod="openshift-marketplace/redhat-operators-24w5s" Feb 27 16:32:21 crc kubenswrapper[4830]: I0227 16:32:21.614242 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d5ffa89-a299-442c-9744-e8c35a5f4551-catalog-content\") pod \"redhat-operators-24w5s\" (UID: \"4d5ffa89-a299-442c-9744-e8c35a5f4551\") " pod="openshift-marketplace/redhat-operators-24w5s" Feb 27 16:32:21 crc kubenswrapper[4830]: I0227 16:32:21.631290 4830 scope.go:117] "RemoveContainer" containerID="ed489a7ac08882b9d78c835ff27e520076b06b9bccf96912640b432acd899e9a" Feb 27 16:32:21 crc kubenswrapper[4830]: E0227 16:32:21.631654 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed489a7ac08882b9d78c835ff27e520076b06b9bccf96912640b432acd899e9a\": container with ID starting with ed489a7ac08882b9d78c835ff27e520076b06b9bccf96912640b432acd899e9a not found: ID does not exist" 
containerID="ed489a7ac08882b9d78c835ff27e520076b06b9bccf96912640b432acd899e9a" Feb 27 16:32:21 crc kubenswrapper[4830]: I0227 16:32:21.631691 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed489a7ac08882b9d78c835ff27e520076b06b9bccf96912640b432acd899e9a"} err="failed to get container status \"ed489a7ac08882b9d78c835ff27e520076b06b9bccf96912640b432acd899e9a\": rpc error: code = NotFound desc = could not find container \"ed489a7ac08882b9d78c835ff27e520076b06b9bccf96912640b432acd899e9a\": container with ID starting with ed489a7ac08882b9d78c835ff27e520076b06b9bccf96912640b432acd899e9a not found: ID does not exist" Feb 27 16:32:21 crc kubenswrapper[4830]: I0227 16:32:21.631716 4830 scope.go:117] "RemoveContainer" containerID="88b9e53a42c4a09fc14bc5ea02179f51920e7ad14dd3c90d36fd4745296b055e" Feb 27 16:32:21 crc kubenswrapper[4830]: I0227 16:32:21.632204 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2gvl2\" (UniqueName: \"kubernetes.io/projected/4d5ffa89-a299-442c-9744-e8c35a5f4551-kube-api-access-2gvl2\") pod \"redhat-operators-24w5s\" (UID: \"4d5ffa89-a299-442c-9744-e8c35a5f4551\") " pod="openshift-marketplace/redhat-operators-24w5s" Feb 27 16:32:21 crc kubenswrapper[4830]: E0227 16:32:21.632421 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88b9e53a42c4a09fc14bc5ea02179f51920e7ad14dd3c90d36fd4745296b055e\": container with ID starting with 88b9e53a42c4a09fc14bc5ea02179f51920e7ad14dd3c90d36fd4745296b055e not found: ID does not exist" containerID="88b9e53a42c4a09fc14bc5ea02179f51920e7ad14dd3c90d36fd4745296b055e" Feb 27 16:32:21 crc kubenswrapper[4830]: I0227 16:32:21.632443 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88b9e53a42c4a09fc14bc5ea02179f51920e7ad14dd3c90d36fd4745296b055e"} err="failed to get container status 
\"88b9e53a42c4a09fc14bc5ea02179f51920e7ad14dd3c90d36fd4745296b055e\": rpc error: code = NotFound desc = could not find container \"88b9e53a42c4a09fc14bc5ea02179f51920e7ad14dd3c90d36fd4745296b055e\": container with ID starting with 88b9e53a42c4a09fc14bc5ea02179f51920e7ad14dd3c90d36fd4745296b055e not found: ID does not exist" Feb 27 16:32:21 crc kubenswrapper[4830]: I0227 16:32:21.714671 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f4280aaf-817d-41e1-9867-715359ae322e-logs\") pod \"nova-metadata-0\" (UID: \"f4280aaf-817d-41e1-9867-715359ae322e\") " pod="openstack/nova-metadata-0" Feb 27 16:32:21 crc kubenswrapper[4830]: I0227 16:32:21.714783 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4280aaf-817d-41e1-9867-715359ae322e-config-data\") pod \"nova-metadata-0\" (UID: \"f4280aaf-817d-41e1-9867-715359ae322e\") " pod="openstack/nova-metadata-0" Feb 27 16:32:21 crc kubenswrapper[4830]: I0227 16:32:21.714825 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gf85v\" (UniqueName: \"kubernetes.io/projected/f4280aaf-817d-41e1-9867-715359ae322e-kube-api-access-gf85v\") pod \"nova-metadata-0\" (UID: \"f4280aaf-817d-41e1-9867-715359ae322e\") " pod="openstack/nova-metadata-0" Feb 27 16:32:21 crc kubenswrapper[4830]: I0227 16:32:21.714916 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4280aaf-817d-41e1-9867-715359ae322e-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"f4280aaf-817d-41e1-9867-715359ae322e\") " pod="openstack/nova-metadata-0" Feb 27 16:32:21 crc kubenswrapper[4830]: I0227 16:32:21.714963 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4280aaf-817d-41e1-9867-715359ae322e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f4280aaf-817d-41e1-9867-715359ae322e\") " pod="openstack/nova-metadata-0" Feb 27 16:32:21 crc kubenswrapper[4830]: I0227 16:32:21.723684 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-24w5s" Feb 27 16:32:21 crc kubenswrapper[4830]: I0227 16:32:21.816991 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4280aaf-817d-41e1-9867-715359ae322e-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"f4280aaf-817d-41e1-9867-715359ae322e\") " pod="openstack/nova-metadata-0" Feb 27 16:32:21 crc kubenswrapper[4830]: I0227 16:32:21.817037 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4280aaf-817d-41e1-9867-715359ae322e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f4280aaf-817d-41e1-9867-715359ae322e\") " pod="openstack/nova-metadata-0" Feb 27 16:32:21 crc kubenswrapper[4830]: I0227 16:32:21.817105 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f4280aaf-817d-41e1-9867-715359ae322e-logs\") pod \"nova-metadata-0\" (UID: \"f4280aaf-817d-41e1-9867-715359ae322e\") " pod="openstack/nova-metadata-0" Feb 27 16:32:21 crc kubenswrapper[4830]: I0227 16:32:21.817150 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4280aaf-817d-41e1-9867-715359ae322e-config-data\") pod \"nova-metadata-0\" (UID: \"f4280aaf-817d-41e1-9867-715359ae322e\") " pod="openstack/nova-metadata-0" Feb 27 16:32:21 crc kubenswrapper[4830]: I0227 16:32:21.817181 4830 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-gf85v\" (UniqueName: \"kubernetes.io/projected/f4280aaf-817d-41e1-9867-715359ae322e-kube-api-access-gf85v\") pod \"nova-metadata-0\" (UID: \"f4280aaf-817d-41e1-9867-715359ae322e\") " pod="openstack/nova-metadata-0" Feb 27 16:32:21 crc kubenswrapper[4830]: I0227 16:32:21.818136 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f4280aaf-817d-41e1-9867-715359ae322e-logs\") pod \"nova-metadata-0\" (UID: \"f4280aaf-817d-41e1-9867-715359ae322e\") " pod="openstack/nova-metadata-0" Feb 27 16:32:21 crc kubenswrapper[4830]: I0227 16:32:21.821442 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4280aaf-817d-41e1-9867-715359ae322e-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"f4280aaf-817d-41e1-9867-715359ae322e\") " pod="openstack/nova-metadata-0" Feb 27 16:32:21 crc kubenswrapper[4830]: I0227 16:32:21.821910 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4280aaf-817d-41e1-9867-715359ae322e-config-data\") pod \"nova-metadata-0\" (UID: \"f4280aaf-817d-41e1-9867-715359ae322e\") " pod="openstack/nova-metadata-0" Feb 27 16:32:21 crc kubenswrapper[4830]: I0227 16:32:21.825710 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4280aaf-817d-41e1-9867-715359ae322e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f4280aaf-817d-41e1-9867-715359ae322e\") " pod="openstack/nova-metadata-0" Feb 27 16:32:21 crc kubenswrapper[4830]: I0227 16:32:21.840318 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gf85v\" (UniqueName: \"kubernetes.io/projected/f4280aaf-817d-41e1-9867-715359ae322e-kube-api-access-gf85v\") pod \"nova-metadata-0\" (UID: 
\"f4280aaf-817d-41e1-9867-715359ae322e\") " pod="openstack/nova-metadata-0" Feb 27 16:32:21 crc kubenswrapper[4830]: I0227 16:32:21.935736 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 27 16:32:22 crc kubenswrapper[4830]: I0227 16:32:22.205659 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-24w5s"] Feb 27 16:32:22 crc kubenswrapper[4830]: I0227 16:32:22.431094 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 27 16:32:22 crc kubenswrapper[4830]: W0227 16:32:22.481459 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf4280aaf_817d_41e1_9867_715359ae322e.slice/crio-b40b50dc0c3eb8a1f90824340053269054b596dd3826d38c5c351f59aca76b6f WatchSource:0}: Error finding container b40b50dc0c3eb8a1f90824340053269054b596dd3826d38c5c351f59aca76b6f: Status 404 returned error can't find the container with id b40b50dc0c3eb8a1f90824340053269054b596dd3826d38c5c351f59aca76b6f Feb 27 16:32:22 crc kubenswrapper[4830]: I0227 16:32:22.492835 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9e9ca76d-ae40-4258-8f4d-09e15a0d8cd6","Type":"ContainerStarted","Data":"4bd0cecd4c639c19d6288ae6763e874f65a458b96d3aae8d391e7b853fd3836b"} Feb 27 16:32:22 crc kubenswrapper[4830]: I0227 16:32:22.495756 4830 generic.go:334] "Generic (PLEG): container finished" podID="4d5ffa89-a299-442c-9744-e8c35a5f4551" containerID="65bd4c8c7a3fe04d12a49336ff4511ad0e4c23cd97d78079a10c343cdbc2ac74" exitCode=0 Feb 27 16:32:22 crc kubenswrapper[4830]: I0227 16:32:22.495806 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-24w5s" event={"ID":"4d5ffa89-a299-442c-9744-e8c35a5f4551","Type":"ContainerDied","Data":"65bd4c8c7a3fe04d12a49336ff4511ad0e4c23cd97d78079a10c343cdbc2ac74"} Feb 27 
16:32:22 crc kubenswrapper[4830]: I0227 16:32:22.495831 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-24w5s" event={"ID":"4d5ffa89-a299-442c-9744-e8c35a5f4551","Type":"ContainerStarted","Data":"8434bb4941bb2112d22c27ced4cd14650b1b81a200089ba506d3cf93b420ee57"} Feb 27 16:32:22 crc kubenswrapper[4830]: I0227 16:32:22.520884 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.520864436 podStartE2EDuration="2.520864436s" podCreationTimestamp="2026-02-27 16:32:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:32:22.512368531 +0000 UTC m=+1538.601641004" watchObservedRunningTime="2026-02-27 16:32:22.520864436 +0000 UTC m=+1538.610136899" Feb 27 16:32:22 crc kubenswrapper[4830]: I0227 16:32:22.777474 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34e10b21-9e53-464a-a707-cb587ab15199" path="/var/lib/kubelet/pods/34e10b21-9e53-464a-a707-cb587ab15199/volumes" Feb 27 16:32:23 crc kubenswrapper[4830]: I0227 16:32:23.510794 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-24w5s" event={"ID":"4d5ffa89-a299-442c-9744-e8c35a5f4551","Type":"ContainerStarted","Data":"ea0a9f279ac004dd3b231d6040eeabf30f00ec08976a6bba28466775b736ae0e"} Feb 27 16:32:23 crc kubenswrapper[4830]: I0227 16:32:23.514426 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f4280aaf-817d-41e1-9867-715359ae322e","Type":"ContainerStarted","Data":"67f705d66ad4d26d1a66a751f763fac473304bb8b591b54c2c0c497cc8ee46c6"} Feb 27 16:32:23 crc kubenswrapper[4830]: I0227 16:32:23.514497 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"f4280aaf-817d-41e1-9867-715359ae322e","Type":"ContainerStarted","Data":"53a40c635318ff11c80f75f6211616278bbd9c179f11fec9265e63a26e70b0ac"} Feb 27 16:32:23 crc kubenswrapper[4830]: I0227 16:32:23.514518 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f4280aaf-817d-41e1-9867-715359ae322e","Type":"ContainerStarted","Data":"b40b50dc0c3eb8a1f90824340053269054b596dd3826d38c5c351f59aca76b6f"} Feb 27 16:32:23 crc kubenswrapper[4830]: I0227 16:32:23.589743 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.589715043 podStartE2EDuration="2.589715043s" podCreationTimestamp="2026-02-27 16:32:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:32:23.578783839 +0000 UTC m=+1539.668056312" watchObservedRunningTime="2026-02-27 16:32:23.589715043 +0000 UTC m=+1539.678987546" Feb 27 16:32:25 crc kubenswrapper[4830]: I0227 16:32:25.850565 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 27 16:32:26 crc kubenswrapper[4830]: I0227 16:32:26.558447 4830 generic.go:334] "Generic (PLEG): container finished" podID="4d5ffa89-a299-442c-9744-e8c35a5f4551" containerID="ea0a9f279ac004dd3b231d6040eeabf30f00ec08976a6bba28466775b736ae0e" exitCode=0 Feb 27 16:32:26 crc kubenswrapper[4830]: I0227 16:32:26.558529 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-24w5s" event={"ID":"4d5ffa89-a299-442c-9744-e8c35a5f4551","Type":"ContainerDied","Data":"ea0a9f279ac004dd3b231d6040eeabf30f00ec08976a6bba28466775b736ae0e"} Feb 27 16:32:26 crc kubenswrapper[4830]: I0227 16:32:26.936338 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 27 16:32:26 crc kubenswrapper[4830]: I0227 16:32:26.936835 4830 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 27 16:32:27 crc kubenswrapper[4830]: I0227 16:32:27.572965 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-24w5s" event={"ID":"4d5ffa89-a299-442c-9744-e8c35a5f4551","Type":"ContainerStarted","Data":"c272b7abf71d3c9df05948cb7260642dfba9e5b0ceb985b551e55c3e2cf6c141"} Feb 27 16:32:27 crc kubenswrapper[4830]: I0227 16:32:27.615250 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-24w5s" podStartSLOduration=1.858881198 podStartE2EDuration="6.615223036s" podCreationTimestamp="2026-02-27 16:32:21 +0000 UTC" firstStartedPulling="2026-02-27 16:32:22.507933164 +0000 UTC m=+1538.597205627" lastFinishedPulling="2026-02-27 16:32:27.264274962 +0000 UTC m=+1543.353547465" observedRunningTime="2026-02-27 16:32:27.59753715 +0000 UTC m=+1543.686809653" watchObservedRunningTime="2026-02-27 16:32:27.615223036 +0000 UTC m=+1543.704495529" Feb 27 16:32:28 crc kubenswrapper[4830]: I0227 16:32:28.782283 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 27 16:32:28 crc kubenswrapper[4830]: I0227 16:32:28.782627 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 27 16:32:29 crc kubenswrapper[4830]: I0227 16:32:29.799210 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="91362817-2bc3-48d8-a4ae-8ba5cb8f2b4c" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.212:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 27 16:32:29 crc kubenswrapper[4830]: I0227 16:32:29.799287 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="91362817-2bc3-48d8-a4ae-8ba5cb8f2b4c" containerName="nova-api-log" probeResult="failure" output="Get 
\"https://10.217.0.212:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 27 16:32:30 crc kubenswrapper[4830]: I0227 16:32:30.849698 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 27 16:32:30 crc kubenswrapper[4830]: I0227 16:32:30.881921 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 27 16:32:31 crc kubenswrapper[4830]: I0227 16:32:31.672286 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 27 16:32:31 crc kubenswrapper[4830]: I0227 16:32:31.724841 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-24w5s" Feb 27 16:32:31 crc kubenswrapper[4830]: I0227 16:32:31.724905 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-24w5s" Feb 27 16:32:31 crc kubenswrapper[4830]: I0227 16:32:31.937786 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 27 16:32:31 crc kubenswrapper[4830]: I0227 16:32:31.938271 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 27 16:32:32 crc kubenswrapper[4830]: I0227 16:32:32.801752 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-24w5s" podUID="4d5ffa89-a299-442c-9744-e8c35a5f4551" containerName="registry-server" probeResult="failure" output=< Feb 27 16:32:32 crc kubenswrapper[4830]: timeout: failed to connect service ":50051" within 1s Feb 27 16:32:32 crc kubenswrapper[4830]: > Feb 27 16:32:32 crc kubenswrapper[4830]: I0227 16:32:32.953087 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="f4280aaf-817d-41e1-9867-715359ae322e" containerName="nova-metadata-metadata" 
probeResult="failure" output="Get \"https://10.217.0.215:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 27 16:32:32 crc kubenswrapper[4830]: I0227 16:32:32.953145 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="f4280aaf-817d-41e1-9867-715359ae322e" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.215:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 27 16:32:33 crc kubenswrapper[4830]: I0227 16:32:33.160110 4830 patch_prober.go:28] interesting pod/machine-config-daemon-2tv5v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 16:32:33 crc kubenswrapper[4830]: I0227 16:32:33.160187 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 16:32:35 crc kubenswrapper[4830]: I0227 16:32:35.928774 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 27 16:32:38 crc kubenswrapper[4830]: I0227 16:32:38.786882 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 27 16:32:38 crc kubenswrapper[4830]: I0227 16:32:38.787597 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 27 16:32:38 crc kubenswrapper[4830]: I0227 16:32:38.793369 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 27 16:32:38 crc kubenswrapper[4830]: I0227 16:32:38.794776 4830 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 27 16:32:39 crc kubenswrapper[4830]: I0227 16:32:39.635475 4830 scope.go:117] "RemoveContainer" containerID="578837b4acd572cc743f96ab1ca35beed66af3c7803c66e45ec9a5459c53e247" Feb 27 16:32:39 crc kubenswrapper[4830]: I0227 16:32:39.707073 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 27 16:32:39 crc kubenswrapper[4830]: I0227 16:32:39.713141 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 27 16:32:41 crc kubenswrapper[4830]: I0227 16:32:41.806731 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-24w5s" Feb 27 16:32:41 crc kubenswrapper[4830]: I0227 16:32:41.886215 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-24w5s" Feb 27 16:32:41 crc kubenswrapper[4830]: I0227 16:32:41.949701 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 27 16:32:41 crc kubenswrapper[4830]: I0227 16:32:41.949788 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 27 16:32:41 crc kubenswrapper[4830]: I0227 16:32:41.956994 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 27 16:32:41 crc kubenswrapper[4830]: I0227 16:32:41.961146 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 27 16:32:42 crc kubenswrapper[4830]: I0227 16:32:42.066996 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-24w5s"] Feb 27 16:32:43 crc kubenswrapper[4830]: I0227 16:32:43.759774 4830 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/redhat-operators-24w5s" podUID="4d5ffa89-a299-442c-9744-e8c35a5f4551" containerName="registry-server" containerID="cri-o://c272b7abf71d3c9df05948cb7260642dfba9e5b0ceb985b551e55c3e2cf6c141" gracePeriod=2 Feb 27 16:32:44 crc kubenswrapper[4830]: I0227 16:32:44.519268 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-24w5s" Feb 27 16:32:44 crc kubenswrapper[4830]: I0227 16:32:44.651227 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2gvl2\" (UniqueName: \"kubernetes.io/projected/4d5ffa89-a299-442c-9744-e8c35a5f4551-kube-api-access-2gvl2\") pod \"4d5ffa89-a299-442c-9744-e8c35a5f4551\" (UID: \"4d5ffa89-a299-442c-9744-e8c35a5f4551\") " Feb 27 16:32:44 crc kubenswrapper[4830]: I0227 16:32:44.651455 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d5ffa89-a299-442c-9744-e8c35a5f4551-utilities\") pod \"4d5ffa89-a299-442c-9744-e8c35a5f4551\" (UID: \"4d5ffa89-a299-442c-9744-e8c35a5f4551\") " Feb 27 16:32:44 crc kubenswrapper[4830]: I0227 16:32:44.651504 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d5ffa89-a299-442c-9744-e8c35a5f4551-catalog-content\") pod \"4d5ffa89-a299-442c-9744-e8c35a5f4551\" (UID: \"4d5ffa89-a299-442c-9744-e8c35a5f4551\") " Feb 27 16:32:44 crc kubenswrapper[4830]: I0227 16:32:44.652329 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d5ffa89-a299-442c-9744-e8c35a5f4551-utilities" (OuterVolumeSpecName: "utilities") pod "4d5ffa89-a299-442c-9744-e8c35a5f4551" (UID: "4d5ffa89-a299-442c-9744-e8c35a5f4551"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 16:32:44 crc kubenswrapper[4830]: I0227 16:32:44.662258 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d5ffa89-a299-442c-9744-e8c35a5f4551-kube-api-access-2gvl2" (OuterVolumeSpecName: "kube-api-access-2gvl2") pod "4d5ffa89-a299-442c-9744-e8c35a5f4551" (UID: "4d5ffa89-a299-442c-9744-e8c35a5f4551"). InnerVolumeSpecName "kube-api-access-2gvl2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:32:44 crc kubenswrapper[4830]: I0227 16:32:44.753671 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2gvl2\" (UniqueName: \"kubernetes.io/projected/4d5ffa89-a299-442c-9744-e8c35a5f4551-kube-api-access-2gvl2\") on node \"crc\" DevicePath \"\"" Feb 27 16:32:44 crc kubenswrapper[4830]: I0227 16:32:44.753736 4830 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d5ffa89-a299-442c-9744-e8c35a5f4551-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 16:32:44 crc kubenswrapper[4830]: I0227 16:32:44.776032 4830 generic.go:334] "Generic (PLEG): container finished" podID="4d5ffa89-a299-442c-9744-e8c35a5f4551" containerID="c272b7abf71d3c9df05948cb7260642dfba9e5b0ceb985b551e55c3e2cf6c141" exitCode=0 Feb 27 16:32:44 crc kubenswrapper[4830]: I0227 16:32:44.776190 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-24w5s" Feb 27 16:32:44 crc kubenswrapper[4830]: I0227 16:32:44.787678 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-24w5s" event={"ID":"4d5ffa89-a299-442c-9744-e8c35a5f4551","Type":"ContainerDied","Data":"c272b7abf71d3c9df05948cb7260642dfba9e5b0ceb985b551e55c3e2cf6c141"} Feb 27 16:32:44 crc kubenswrapper[4830]: I0227 16:32:44.787747 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-24w5s" event={"ID":"4d5ffa89-a299-442c-9744-e8c35a5f4551","Type":"ContainerDied","Data":"8434bb4941bb2112d22c27ced4cd14650b1b81a200089ba506d3cf93b420ee57"} Feb 27 16:32:44 crc kubenswrapper[4830]: I0227 16:32:44.787783 4830 scope.go:117] "RemoveContainer" containerID="c272b7abf71d3c9df05948cb7260642dfba9e5b0ceb985b551e55c3e2cf6c141" Feb 27 16:32:44 crc kubenswrapper[4830]: I0227 16:32:44.819763 4830 scope.go:117] "RemoveContainer" containerID="ea0a9f279ac004dd3b231d6040eeabf30f00ec08976a6bba28466775b736ae0e" Feb 27 16:32:44 crc kubenswrapper[4830]: I0227 16:32:44.831974 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d5ffa89-a299-442c-9744-e8c35a5f4551-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4d5ffa89-a299-442c-9744-e8c35a5f4551" (UID: "4d5ffa89-a299-442c-9744-e8c35a5f4551"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 16:32:44 crc kubenswrapper[4830]: I0227 16:32:44.855936 4830 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d5ffa89-a299-442c-9744-e8c35a5f4551-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 16:32:44 crc kubenswrapper[4830]: I0227 16:32:44.860024 4830 scope.go:117] "RemoveContainer" containerID="65bd4c8c7a3fe04d12a49336ff4511ad0e4c23cd97d78079a10c343cdbc2ac74" Feb 27 16:32:44 crc kubenswrapper[4830]: I0227 16:32:44.917750 4830 scope.go:117] "RemoveContainer" containerID="c272b7abf71d3c9df05948cb7260642dfba9e5b0ceb985b551e55c3e2cf6c141" Feb 27 16:32:44 crc kubenswrapper[4830]: E0227 16:32:44.918410 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c272b7abf71d3c9df05948cb7260642dfba9e5b0ceb985b551e55c3e2cf6c141\": container with ID starting with c272b7abf71d3c9df05948cb7260642dfba9e5b0ceb985b551e55c3e2cf6c141 not found: ID does not exist" containerID="c272b7abf71d3c9df05948cb7260642dfba9e5b0ceb985b551e55c3e2cf6c141" Feb 27 16:32:44 crc kubenswrapper[4830]: I0227 16:32:44.918496 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c272b7abf71d3c9df05948cb7260642dfba9e5b0ceb985b551e55c3e2cf6c141"} err="failed to get container status \"c272b7abf71d3c9df05948cb7260642dfba9e5b0ceb985b551e55c3e2cf6c141\": rpc error: code = NotFound desc = could not find container \"c272b7abf71d3c9df05948cb7260642dfba9e5b0ceb985b551e55c3e2cf6c141\": container with ID starting with c272b7abf71d3c9df05948cb7260642dfba9e5b0ceb985b551e55c3e2cf6c141 not found: ID does not exist" Feb 27 16:32:44 crc kubenswrapper[4830]: I0227 16:32:44.918551 4830 scope.go:117] "RemoveContainer" containerID="ea0a9f279ac004dd3b231d6040eeabf30f00ec08976a6bba28466775b736ae0e" Feb 27 16:32:44 crc kubenswrapper[4830]: E0227 16:32:44.919036 4830 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea0a9f279ac004dd3b231d6040eeabf30f00ec08976a6bba28466775b736ae0e\": container with ID starting with ea0a9f279ac004dd3b231d6040eeabf30f00ec08976a6bba28466775b736ae0e not found: ID does not exist" containerID="ea0a9f279ac004dd3b231d6040eeabf30f00ec08976a6bba28466775b736ae0e" Feb 27 16:32:44 crc kubenswrapper[4830]: I0227 16:32:44.919077 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea0a9f279ac004dd3b231d6040eeabf30f00ec08976a6bba28466775b736ae0e"} err="failed to get container status \"ea0a9f279ac004dd3b231d6040eeabf30f00ec08976a6bba28466775b736ae0e\": rpc error: code = NotFound desc = could not find container \"ea0a9f279ac004dd3b231d6040eeabf30f00ec08976a6bba28466775b736ae0e\": container with ID starting with ea0a9f279ac004dd3b231d6040eeabf30f00ec08976a6bba28466775b736ae0e not found: ID does not exist" Feb 27 16:32:44 crc kubenswrapper[4830]: I0227 16:32:44.919102 4830 scope.go:117] "RemoveContainer" containerID="65bd4c8c7a3fe04d12a49336ff4511ad0e4c23cd97d78079a10c343cdbc2ac74" Feb 27 16:32:44 crc kubenswrapper[4830]: E0227 16:32:44.919554 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"65bd4c8c7a3fe04d12a49336ff4511ad0e4c23cd97d78079a10c343cdbc2ac74\": container with ID starting with 65bd4c8c7a3fe04d12a49336ff4511ad0e4c23cd97d78079a10c343cdbc2ac74 not found: ID does not exist" containerID="65bd4c8c7a3fe04d12a49336ff4511ad0e4c23cd97d78079a10c343cdbc2ac74" Feb 27 16:32:44 crc kubenswrapper[4830]: I0227 16:32:44.919628 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65bd4c8c7a3fe04d12a49336ff4511ad0e4c23cd97d78079a10c343cdbc2ac74"} err="failed to get container status \"65bd4c8c7a3fe04d12a49336ff4511ad0e4c23cd97d78079a10c343cdbc2ac74\": rpc error: code = NotFound desc = could 
not find container \"65bd4c8c7a3fe04d12a49336ff4511ad0e4c23cd97d78079a10c343cdbc2ac74\": container with ID starting with 65bd4c8c7a3fe04d12a49336ff4511ad0e4c23cd97d78079a10c343cdbc2ac74 not found: ID does not exist" Feb 27 16:32:45 crc kubenswrapper[4830]: I0227 16:32:45.121398 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-24w5s"] Feb 27 16:32:45 crc kubenswrapper[4830]: I0227 16:32:45.130201 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-24w5s"] Feb 27 16:32:46 crc kubenswrapper[4830]: I0227 16:32:46.805404 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d5ffa89-a299-442c-9744-e8c35a5f4551" path="/var/lib/kubelet/pods/4d5ffa89-a299-442c-9744-e8c35a5f4551/volumes" Feb 27 16:33:01 crc kubenswrapper[4830]: I0227 16:33:01.333052 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Feb 27 16:33:01 crc kubenswrapper[4830]: I0227 16:33:01.334307 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstackclient" podUID="3482e9fb-53ae-4908-87fc-4096c5b26b76" containerName="openstackclient" containerID="cri-o://ebe94bb0443ae2939345bc80a179e9644e55c467b0fc2c9d6043e5cff481e239" gracePeriod=2 Feb 27 16:33:01 crc kubenswrapper[4830]: I0227 16:33:01.374672 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Feb 27 16:33:01 crc kubenswrapper[4830]: I0227 16:33:01.547068 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 27 16:33:01 crc kubenswrapper[4830]: I0227 16:33:01.629726 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-lx5sm"] Feb 27 16:33:01 crc kubenswrapper[4830]: E0227 16:33:01.630366 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d5ffa89-a299-442c-9744-e8c35a5f4551" containerName="extract-utilities" Feb 27 16:33:01 
crc kubenswrapper[4830]: I0227 16:33:01.630434 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d5ffa89-a299-442c-9744-e8c35a5f4551" containerName="extract-utilities" Feb 27 16:33:01 crc kubenswrapper[4830]: E0227 16:33:01.631476 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d5ffa89-a299-442c-9744-e8c35a5f4551" containerName="registry-server" Feb 27 16:33:01 crc kubenswrapper[4830]: I0227 16:33:01.631545 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d5ffa89-a299-442c-9744-e8c35a5f4551" containerName="registry-server" Feb 27 16:33:01 crc kubenswrapper[4830]: E0227 16:33:01.631612 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d5ffa89-a299-442c-9744-e8c35a5f4551" containerName="extract-content" Feb 27 16:33:01 crc kubenswrapper[4830]: I0227 16:33:01.631671 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d5ffa89-a299-442c-9744-e8c35a5f4551" containerName="extract-content" Feb 27 16:33:01 crc kubenswrapper[4830]: E0227 16:33:01.631733 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3482e9fb-53ae-4908-87fc-4096c5b26b76" containerName="openstackclient" Feb 27 16:33:01 crc kubenswrapper[4830]: I0227 16:33:01.631788 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="3482e9fb-53ae-4908-87fc-4096c5b26b76" containerName="openstackclient" Feb 27 16:33:01 crc kubenswrapper[4830]: I0227 16:33:01.636169 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="3482e9fb-53ae-4908-87fc-4096c5b26b76" containerName="openstackclient" Feb 27 16:33:01 crc kubenswrapper[4830]: I0227 16:33:01.636277 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d5ffa89-a299-442c-9744-e8c35a5f4551" containerName="registry-server" Feb 27 16:33:01 crc kubenswrapper[4830]: I0227 16:33:01.636969 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-lx5sm" Feb 27 16:33:01 crc kubenswrapper[4830]: I0227 16:33:01.641168 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Feb 27 16:33:01 crc kubenswrapper[4830]: I0227 16:33:01.695240 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-lx5sm"] Feb 27 16:33:01 crc kubenswrapper[4830]: I0227 16:33:01.734789 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-776e-account-create-update-kg8tx"] Feb 27 16:33:01 crc kubenswrapper[4830]: I0227 16:33:01.736106 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-776e-account-create-update-kg8tx" Feb 27 16:33:01 crc kubenswrapper[4830]: I0227 16:33:01.756369 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Feb 27 16:33:01 crc kubenswrapper[4830]: I0227 16:33:01.782502 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-ovs-qt6mr"] Feb 27 16:33:01 crc kubenswrapper[4830]: I0227 16:33:01.801577 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-7668-account-create-update-6wj4n"] Feb 27 16:33:01 crc kubenswrapper[4830]: I0227 16:33:01.802780 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-7668-account-create-update-6wj4n" Feb 27 16:33:01 crc kubenswrapper[4830]: I0227 16:33:01.815505 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-776e-account-create-update-kg8tx"] Feb 27 16:33:01 crc kubenswrapper[4830]: I0227 16:33:01.830009 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-metrics-mtj7r"] Feb 27 16:33:01 crc kubenswrapper[4830]: I0227 16:33:01.830216 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-metrics-mtj7r" podUID="b64de41e-9e05-48b2-87e5-387aad57532a" containerName="openstack-network-exporter" containerID="cri-o://68e148d9c338e25590dbfaf5b9ed31c09c1d25b0cdfd43f35a0878475443aaf7" gracePeriod=30 Feb 27 16:33:01 crc kubenswrapper[4830]: I0227 16:33:01.837347 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Feb 27 16:33:01 crc kubenswrapper[4830]: I0227 16:33:01.852117 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pncxt\" (UniqueName: \"kubernetes.io/projected/3bf3e284-86ae-43b5-9259-6e9e34164de2-kube-api-access-pncxt\") pod \"placement-776e-account-create-update-kg8tx\" (UID: \"3bf3e284-86ae-43b5-9259-6e9e34164de2\") " pod="openstack/placement-776e-account-create-update-kg8tx" Feb 27 16:33:01 crc kubenswrapper[4830]: I0227 16:33:01.852170 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t98xs\" (UniqueName: \"kubernetes.io/projected/09849d6c-7457-4130-9074-73154d22af1f-kube-api-access-t98xs\") pod \"root-account-create-update-lx5sm\" (UID: \"09849d6c-7457-4130-9074-73154d22af1f\") " pod="openstack/root-account-create-update-lx5sm" Feb 27 16:33:01 crc kubenswrapper[4830]: I0227 16:33:01.852240 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3bf3e284-86ae-43b5-9259-6e9e34164de2-operator-scripts\") pod \"placement-776e-account-create-update-kg8tx\" (UID: \"3bf3e284-86ae-43b5-9259-6e9e34164de2\") " pod="openstack/placement-776e-account-create-update-kg8tx" Feb 27 16:33:01 crc kubenswrapper[4830]: I0227 16:33:01.852309 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/09849d6c-7457-4130-9074-73154d22af1f-operator-scripts\") pod \"root-account-create-update-lx5sm\" (UID: \"09849d6c-7457-4130-9074-73154d22af1f\") " pod="openstack/root-account-create-update-lx5sm" Feb 27 16:33:01 crc kubenswrapper[4830]: I0227 16:33:01.888030 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-mncqx"] Feb 27 16:33:01 crc kubenswrapper[4830]: I0227 16:33:01.923733 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-7668-account-create-update-6wj4n"] Feb 27 16:33:01 crc kubenswrapper[4830]: I0227 16:33:01.957793 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pz2dd\" (UniqueName: \"kubernetes.io/projected/baefaedf-2591-42f2-a383-5c92ae714ab5-kube-api-access-pz2dd\") pod \"glance-7668-account-create-update-6wj4n\" (UID: \"baefaedf-2591-42f2-a383-5c92ae714ab5\") " pod="openstack/glance-7668-account-create-update-6wj4n" Feb 27 16:33:01 crc kubenswrapper[4830]: I0227 16:33:01.957844 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pncxt\" (UniqueName: \"kubernetes.io/projected/3bf3e284-86ae-43b5-9259-6e9e34164de2-kube-api-access-pncxt\") pod \"placement-776e-account-create-update-kg8tx\" (UID: \"3bf3e284-86ae-43b5-9259-6e9e34164de2\") " pod="openstack/placement-776e-account-create-update-kg8tx" Feb 27 16:33:01 crc kubenswrapper[4830]: I0227 16:33:01.957877 4830 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t98xs\" (UniqueName: \"kubernetes.io/projected/09849d6c-7457-4130-9074-73154d22af1f-kube-api-access-t98xs\") pod \"root-account-create-update-lx5sm\" (UID: \"09849d6c-7457-4130-9074-73154d22af1f\") " pod="openstack/root-account-create-update-lx5sm" Feb 27 16:33:01 crc kubenswrapper[4830]: I0227 16:33:01.957924 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/baefaedf-2591-42f2-a383-5c92ae714ab5-operator-scripts\") pod \"glance-7668-account-create-update-6wj4n\" (UID: \"baefaedf-2591-42f2-a383-5c92ae714ab5\") " pod="openstack/glance-7668-account-create-update-6wj4n" Feb 27 16:33:01 crc kubenswrapper[4830]: I0227 16:33:01.957960 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3bf3e284-86ae-43b5-9259-6e9e34164de2-operator-scripts\") pod \"placement-776e-account-create-update-kg8tx\" (UID: \"3bf3e284-86ae-43b5-9259-6e9e34164de2\") " pod="openstack/placement-776e-account-create-update-kg8tx" Feb 27 16:33:01 crc kubenswrapper[4830]: I0227 16:33:01.957989 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/09849d6c-7457-4130-9074-73154d22af1f-operator-scripts\") pod \"root-account-create-update-lx5sm\" (UID: \"09849d6c-7457-4130-9074-73154d22af1f\") " pod="openstack/root-account-create-update-lx5sm" Feb 27 16:33:01 crc kubenswrapper[4830]: I0227 16:33:01.958652 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/09849d6c-7457-4130-9074-73154d22af1f-operator-scripts\") pod \"root-account-create-update-lx5sm\" (UID: \"09849d6c-7457-4130-9074-73154d22af1f\") " pod="openstack/root-account-create-update-lx5sm" Feb 27 
16:33:01 crc kubenswrapper[4830]: I0227 16:33:01.959465 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3bf3e284-86ae-43b5-9259-6e9e34164de2-operator-scripts\") pod \"placement-776e-account-create-update-kg8tx\" (UID: \"3bf3e284-86ae-43b5-9259-6e9e34164de2\") " pod="openstack/placement-776e-account-create-update-kg8tx" Feb 27 16:33:02 crc kubenswrapper[4830]: I0227 16:33:02.010912 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t98xs\" (UniqueName: \"kubernetes.io/projected/09849d6c-7457-4130-9074-73154d22af1f-kube-api-access-t98xs\") pod \"root-account-create-update-lx5sm\" (UID: \"09849d6c-7457-4130-9074-73154d22af1f\") " pod="openstack/root-account-create-update-lx5sm" Feb 27 16:33:02 crc kubenswrapper[4830]: I0227 16:33:02.030361 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pncxt\" (UniqueName: \"kubernetes.io/projected/3bf3e284-86ae-43b5-9259-6e9e34164de2-kube-api-access-pncxt\") pod \"placement-776e-account-create-update-kg8tx\" (UID: \"3bf3e284-86ae-43b5-9259-6e9e34164de2\") " pod="openstack/placement-776e-account-create-update-kg8tx" Feb 27 16:33:02 crc kubenswrapper[4830]: I0227 16:33:02.033774 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-vd8js"] Feb 27 16:33:02 crc kubenswrapper[4830]: I0227 16:33:02.064547 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-776e-account-create-update-kg8tx" Feb 27 16:33:02 crc kubenswrapper[4830]: I0227 16:33:02.070013 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-vd8js"] Feb 27 16:33:02 crc kubenswrapper[4830]: I0227 16:33:02.099008 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/baefaedf-2591-42f2-a383-5c92ae714ab5-operator-scripts\") pod \"glance-7668-account-create-update-6wj4n\" (UID: \"baefaedf-2591-42f2-a383-5c92ae714ab5\") " pod="openstack/glance-7668-account-create-update-6wj4n" Feb 27 16:33:02 crc kubenswrapper[4830]: I0227 16:33:02.099316 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pz2dd\" (UniqueName: \"kubernetes.io/projected/baefaedf-2591-42f2-a383-5c92ae714ab5-kube-api-access-pz2dd\") pod \"glance-7668-account-create-update-6wj4n\" (UID: \"baefaedf-2591-42f2-a383-5c92ae714ab5\") " pod="openstack/glance-7668-account-create-update-6wj4n" Feb 27 16:33:02 crc kubenswrapper[4830]: I0227 16:33:02.128997 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 27 16:33:02 crc kubenswrapper[4830]: I0227 16:33:02.140586 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/baefaedf-2591-42f2-a383-5c92ae714ab5-operator-scripts\") pod \"glance-7668-account-create-update-6wj4n\" (UID: \"baefaedf-2591-42f2-a383-5c92ae714ab5\") " pod="openstack/glance-7668-account-create-update-6wj4n" Feb 27 16:33:02 crc kubenswrapper[4830]: I0227 16:33:02.145327 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pz2dd\" (UniqueName: \"kubernetes.io/projected/baefaedf-2591-42f2-a383-5c92ae714ab5-kube-api-access-pz2dd\") pod \"glance-7668-account-create-update-6wj4n\" (UID: 
\"baefaedf-2591-42f2-a383-5c92ae714ab5\") " pod="openstack/glance-7668-account-create-update-6wj4n" Feb 27 16:33:02 crc kubenswrapper[4830]: I0227 16:33:02.164346 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-776e-account-create-update-dkfsh"] Feb 27 16:33:02 crc kubenswrapper[4830]: I0227 16:33:02.200811 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-7668-account-create-update-fvfp5"] Feb 27 16:33:02 crc kubenswrapper[4830]: I0227 16:33:02.260476 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-northd-0"] Feb 27 16:33:02 crc kubenswrapper[4830]: I0227 16:33:02.260987 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-northd-0" podUID="7c017daa-cb8f-4629-80e6-a671a8455149" containerName="ovn-northd" containerID="cri-o://3a1363ee7dd262f992c78bd580ddc30893c1d6cb37be4314ebecd1d79f83e351" gracePeriod=30 Feb 27 16:33:02 crc kubenswrapper[4830]: I0227 16:33:02.287711 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-northd-0" podUID="7c017daa-cb8f-4629-80e6-a671a8455149" containerName="openstack-network-exporter" containerID="cri-o://2bcb11d594f79ad72480623964189734a371dbef467c7efb79b03ffa6975a1e6" gracePeriod=30 Feb 27 16:33:02 crc kubenswrapper[4830]: I0227 16:33:02.289243 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-7668-account-create-update-6wj4n" Feb 27 16:33:02 crc kubenswrapper[4830]: I0227 16:33:02.318011 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-lx5sm" Feb 27 16:33:02 crc kubenswrapper[4830]: I0227 16:33:02.371082 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-776e-account-create-update-dkfsh"] Feb 27 16:33:02 crc kubenswrapper[4830]: I0227 16:33:02.402854 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-7668-account-create-update-fvfp5"] Feb 27 16:33:02 crc kubenswrapper[4830]: I0227 16:33:02.472863 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-vrjmz"] Feb 27 16:33:02 crc kubenswrapper[4830]: I0227 16:33:02.489776 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-c219-account-create-update-w82r8"] Feb 27 16:33:02 crc kubenswrapper[4830]: I0227 16:33:02.497242 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-c219-account-create-update-w82r8" Feb 27 16:33:02 crc kubenswrapper[4830]: E0227 16:33:02.493189 4830 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Feb 27 16:33:02 crc kubenswrapper[4830]: E0227 16:33:02.500251 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/47514135-95a6-4b77-815a-ebf23a3cab82-config-data podName:47514135-95a6-4b77-815a-ebf23a3cab82 nodeName:}" failed. No retries permitted until 2026-02-27 16:33:02.997926189 +0000 UTC m=+1579.087198652 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/47514135-95a6-4b77-815a-ebf23a3cab82-config-data") pod "rabbitmq-cell1-server-0" (UID: "47514135-95a6-4b77-815a-ebf23a3cab82") : configmap "rabbitmq-cell1-config-data" not found Feb 27 16:33:02 crc kubenswrapper[4830]: I0227 16:33:02.504347 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Feb 27 16:33:02 crc kubenswrapper[4830]: E0227 16:33:02.535139 4830 handlers.go:78] "Exec lifecycle hook for Container in Pod failed" err="command '/usr/share/ovn/scripts/ovn-ctl stop_controller' exited with 137: " execCommand=["/usr/share/ovn/scripts/ovn-ctl","stop_controller"] containerName="ovn-controller" pod="openstack/ovn-controller-mncqx" message="Exiting ovn-controller (1) " Feb 27 16:33:02 crc kubenswrapper[4830]: E0227 16:33:02.535164 4830 kuberuntime_container.go:691] "PreStop hook failed" err="command '/usr/share/ovn/scripts/ovn-ctl stop_controller' exited with 137: " pod="openstack/ovn-controller-mncqx" podUID="2eacdbe3-1c63-4811-a7ea-5dc6fd8dce60" containerName="ovn-controller" containerID="cri-o://37bbcf553bf7957f279c1d2e295e8937f67d4c1dc6186d8ebb25e160632ad917" Feb 27 16:33:02 crc kubenswrapper[4830]: I0227 16:33:02.535194 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-mncqx" podUID="2eacdbe3-1c63-4811-a7ea-5dc6fd8dce60" containerName="ovn-controller" containerID="cri-o://37bbcf553bf7957f279c1d2e295e8937f67d4c1dc6186d8ebb25e160632ad917" gracePeriod=30 Feb 27 16:33:02 crc kubenswrapper[4830]: I0227 16:33:02.535760 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-vrjmz"] Feb 27 16:33:02 crc kubenswrapper[4830]: I0227 16:33:02.575219 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-c219-account-create-update-w82r8"] Feb 27 16:33:02 crc kubenswrapper[4830]: I0227 16:33:02.594306 4830 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26018553-1865-499d-9c9b-932807fce26c-operator-scripts\") pod \"nova-api-c219-account-create-update-w82r8\" (UID: \"26018553-1865-499d-9c9b-932807fce26c\") " pod="openstack/nova-api-c219-account-create-update-w82r8" Feb 27 16:33:02 crc kubenswrapper[4830]: I0227 16:33:02.594472 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wl6m8\" (UniqueName: \"kubernetes.io/projected/26018553-1865-499d-9c9b-932807fce26c-kube-api-access-wl6m8\") pod \"nova-api-c219-account-create-update-w82r8\" (UID: \"26018553-1865-499d-9c9b-932807fce26c\") " pod="openstack/nova-api-c219-account-create-update-w82r8" Feb 27 16:33:02 crc kubenswrapper[4830]: I0227 16:33:02.614526 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-4d9ld"] Feb 27 16:33:02 crc kubenswrapper[4830]: E0227 16:33:02.623967 4830 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 37bbcf553bf7957f279c1d2e295e8937f67d4c1dc6186d8ebb25e160632ad917 is running failed: container process not found" containerID="37bbcf553bf7957f279c1d2e295e8937f67d4c1dc6186d8ebb25e160632ad917" cmd=["/usr/local/bin/container-scripts/ovn_controller_readiness.sh"] Feb 27 16:33:02 crc kubenswrapper[4830]: E0227 16:33:02.629182 4830 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 37bbcf553bf7957f279c1d2e295e8937f67d4c1dc6186d8ebb25e160632ad917 is running failed: container process not found" containerID="37bbcf553bf7957f279c1d2e295e8937f67d4c1dc6186d8ebb25e160632ad917" cmd=["/usr/local/bin/container-scripts/ovn_controller_readiness.sh"] Feb 27 16:33:02 crc kubenswrapper[4830]: E0227 16:33:02.649475 4830 
log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 37bbcf553bf7957f279c1d2e295e8937f67d4c1dc6186d8ebb25e160632ad917 is running failed: container process not found" containerID="37bbcf553bf7957f279c1d2e295e8937f67d4c1dc6186d8ebb25e160632ad917" cmd=["/usr/local/bin/container-scripts/ovn_controller_readiness.sh"] Feb 27 16:33:02 crc kubenswrapper[4830]: E0227 16:33:02.649552 4830 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 37bbcf553bf7957f279c1d2e295e8937f67d4c1dc6186d8ebb25e160632ad917 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-mncqx" podUID="2eacdbe3-1c63-4811-a7ea-5dc6fd8dce60" containerName="ovn-controller" Feb 27 16:33:02 crc kubenswrapper[4830]: I0227 16:33:02.655676 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-4d9ld"] Feb 27 16:33:02 crc kubenswrapper[4830]: I0227 16:33:02.675344 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-29fd-account-create-update-st6rb"] Feb 27 16:33:02 crc kubenswrapper[4830]: I0227 16:33:02.676639 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-29fd-account-create-update-st6rb" Feb 27 16:33:02 crc kubenswrapper[4830]: I0227 16:33:02.682910 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-29fd-account-create-update-st6rb"] Feb 27 16:33:02 crc kubenswrapper[4830]: I0227 16:33:02.692819 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Feb 27 16:33:02 crc kubenswrapper[4830]: I0227 16:33:02.699037 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26018553-1865-499d-9c9b-932807fce26c-operator-scripts\") pod \"nova-api-c219-account-create-update-w82r8\" (UID: \"26018553-1865-499d-9c9b-932807fce26c\") " pod="openstack/nova-api-c219-account-create-update-w82r8" Feb 27 16:33:02 crc kubenswrapper[4830]: I0227 16:33:02.699199 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wl6m8\" (UniqueName: \"kubernetes.io/projected/26018553-1865-499d-9c9b-932807fce26c-kube-api-access-wl6m8\") pod \"nova-api-c219-account-create-update-w82r8\" (UID: \"26018553-1865-499d-9c9b-932807fce26c\") " pod="openstack/nova-api-c219-account-create-update-w82r8" Feb 27 16:33:02 crc kubenswrapper[4830]: I0227 16:33:02.700111 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26018553-1865-499d-9c9b-932807fce26c-operator-scripts\") pod \"nova-api-c219-account-create-update-w82r8\" (UID: \"26018553-1865-499d-9c9b-932807fce26c\") " pod="openstack/nova-api-c219-account-create-update-w82r8" Feb 27 16:33:02 crc kubenswrapper[4830]: I0227 16:33:02.716747 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-c219-account-create-update-zndsj"] Feb 27 16:33:02 crc kubenswrapper[4830]: I0227 16:33:02.724545 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/nova-api-c219-account-create-update-zndsj"] Feb 27 16:33:02 crc kubenswrapper[4830]: I0227 16:33:02.736990 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-b9fgg"] Feb 27 16:33:02 crc kubenswrapper[4830]: I0227 16:33:02.746500 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-b9fgg"] Feb 27 16:33:02 crc kubenswrapper[4830]: I0227 16:33:02.758005 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-5e39-account-create-update-r88l6"] Feb 27 16:33:02 crc kubenswrapper[4830]: I0227 16:33:02.759175 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-5e39-account-create-update-r88l6" Feb 27 16:33:02 crc kubenswrapper[4830]: I0227 16:33:02.768508 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Feb 27 16:33:02 crc kubenswrapper[4830]: I0227 16:33:02.769058 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wl6m8\" (UniqueName: \"kubernetes.io/projected/26018553-1865-499d-9c9b-932807fce26c-kube-api-access-wl6m8\") pod \"nova-api-c219-account-create-update-w82r8\" (UID: \"26018553-1865-499d-9c9b-932807fce26c\") " pod="openstack/nova-api-c219-account-create-update-w82r8" Feb 27 16:33:02 crc kubenswrapper[4830]: I0227 16:33:02.801165 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6rx9\" (UniqueName: \"kubernetes.io/projected/02d5a77c-198f-43aa-96ab-2ac2d76c7743-kube-api-access-r6rx9\") pod \"nova-cell0-29fd-account-create-update-st6rb\" (UID: \"02d5a77c-198f-43aa-96ab-2ac2d76c7743\") " pod="openstack/nova-cell0-29fd-account-create-update-st6rb" Feb 27 16:33:02 crc kubenswrapper[4830]: I0227 16:33:02.801205 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/02d5a77c-198f-43aa-96ab-2ac2d76c7743-operator-scripts\") pod \"nova-cell0-29fd-account-create-update-st6rb\" (UID: \"02d5a77c-198f-43aa-96ab-2ac2d76c7743\") " pod="openstack/nova-cell0-29fd-account-create-update-st6rb" Feb 27 16:33:02 crc kubenswrapper[4830]: I0227 16:33:02.808726 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52d332d0-98e5-4cff-8486-151b6593c94f" path="/var/lib/kubelet/pods/52d332d0-98e5-4cff-8486-151b6593c94f/volumes" Feb 27 16:33:02 crc kubenswrapper[4830]: I0227 16:33:02.811496 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="624a4c06-2a5c-480c-89f1-addc261412f0" path="/var/lib/kubelet/pods/624a4c06-2a5c-480c-89f1-addc261412f0/volumes" Feb 27 16:33:02 crc kubenswrapper[4830]: I0227 16:33:02.812075 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67a7f858-b1fb-4547-9880-8f496d704f48" path="/var/lib/kubelet/pods/67a7f858-b1fb-4547-9880-8f496d704f48/volumes" Feb 27 16:33:02 crc kubenswrapper[4830]: I0227 16:33:02.826461 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a69bc2ed-ce70-4828-af02-ccac1c3f0c10" path="/var/lib/kubelet/pods/a69bc2ed-ce70-4828-af02-ccac1c3f0c10/volumes" Feb 27 16:33:02 crc kubenswrapper[4830]: I0227 16:33:02.827193 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b44b6447-25d6-4a6a-986d-b49fc2729061" path="/var/lib/kubelet/pods/b44b6447-25d6-4a6a-986d-b49fc2729061/volumes" Feb 27 16:33:02 crc kubenswrapper[4830]: I0227 16:33:02.827841 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1998358-5e92-4f90-8163-1705c1614197" path="/var/lib/kubelet/pods/e1998358-5e92-4f90-8163-1705c1614197/volumes" Feb 27 16:33:02 crc kubenswrapper[4830]: I0227 16:33:02.830239 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e59291fe-6cc6-4fda-870b-d3842d9b65ee" path="/var/lib/kubelet/pods/e59291fe-6cc6-4fda-870b-d3842d9b65ee/volumes" Feb 
27 16:33:02 crc kubenswrapper[4830]: I0227 16:33:02.833518 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-5e39-account-create-update-r88l6"] Feb 27 16:33:02 crc kubenswrapper[4830]: I0227 16:33:02.833553 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-29fd-account-create-update-n79tl"] Feb 27 16:33:02 crc kubenswrapper[4830]: I0227 16:33:02.833567 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-29fd-account-create-update-n79tl"] Feb 27 16:33:02 crc kubenswrapper[4830]: I0227 16:33:02.833581 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-jhwfg"] Feb 27 16:33:02 crc kubenswrapper[4830]: I0227 16:33:02.845818 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-jhwfg"] Feb 27 16:33:02 crc kubenswrapper[4830]: I0227 16:33:02.856749 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-c219-account-create-update-w82r8" Feb 27 16:33:02 crc kubenswrapper[4830]: I0227 16:33:02.907413 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8z6k\" (UniqueName: \"kubernetes.io/projected/0ea4ce89-3e8b-4521-9398-3406c6bf0166-kube-api-access-b8z6k\") pod \"nova-cell1-5e39-account-create-update-r88l6\" (UID: \"0ea4ce89-3e8b-4521-9398-3406c6bf0166\") " pod="openstack/nova-cell1-5e39-account-create-update-r88l6" Feb 27 16:33:02 crc kubenswrapper[4830]: I0227 16:33:02.907455 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ea4ce89-3e8b-4521-9398-3406c6bf0166-operator-scripts\") pod \"nova-cell1-5e39-account-create-update-r88l6\" (UID: \"0ea4ce89-3e8b-4521-9398-3406c6bf0166\") " pod="openstack/nova-cell1-5e39-account-create-update-r88l6" Feb 27 16:33:02 crc kubenswrapper[4830]: I0227 16:33:02.907515 
4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6rx9\" (UniqueName: \"kubernetes.io/projected/02d5a77c-198f-43aa-96ab-2ac2d76c7743-kube-api-access-r6rx9\") pod \"nova-cell0-29fd-account-create-update-st6rb\" (UID: \"02d5a77c-198f-43aa-96ab-2ac2d76c7743\") " pod="openstack/nova-cell0-29fd-account-create-update-st6rb" Feb 27 16:33:02 crc kubenswrapper[4830]: I0227 16:33:02.907539 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/02d5a77c-198f-43aa-96ab-2ac2d76c7743-operator-scripts\") pod \"nova-cell0-29fd-account-create-update-st6rb\" (UID: \"02d5a77c-198f-43aa-96ab-2ac2d76c7743\") " pod="openstack/nova-cell0-29fd-account-create-update-st6rb" Feb 27 16:33:02 crc kubenswrapper[4830]: I0227 16:33:02.911472 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/02d5a77c-198f-43aa-96ab-2ac2d76c7743-operator-scripts\") pod \"nova-cell0-29fd-account-create-update-st6rb\" (UID: \"02d5a77c-198f-43aa-96ab-2ac2d76c7743\") " pod="openstack/nova-cell0-29fd-account-create-update-st6rb" Feb 27 16:33:02 crc kubenswrapper[4830]: I0227 16:33:02.930655 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-dcxkj"] Feb 27 16:33:02 crc kubenswrapper[4830]: I0227 16:33:02.941802 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-dcxkj"] Feb 27 16:33:02 crc kubenswrapper[4830]: I0227 16:33:02.944093 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6rx9\" (UniqueName: \"kubernetes.io/projected/02d5a77c-198f-43aa-96ab-2ac2d76c7743-kube-api-access-r6rx9\") pod \"nova-cell0-29fd-account-create-update-st6rb\" (UID: \"02d5a77c-198f-43aa-96ab-2ac2d76c7743\") " pod="openstack/nova-cell0-29fd-account-create-update-st6rb" Feb 27 16:33:02 crc kubenswrapper[4830]: E0227 
16:33:02.947346 4830 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 16:33:02 crc kubenswrapper[4830]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[/bin/sh -c #!/bin/bash Feb 27 16:33:02 crc kubenswrapper[4830]: Feb 27 16:33:02 crc kubenswrapper[4830]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Feb 27 16:33:02 crc kubenswrapper[4830]: Feb 27 16:33:02 crc kubenswrapper[4830]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Feb 27 16:33:02 crc kubenswrapper[4830]: Feb 27 16:33:02 crc kubenswrapper[4830]: MYSQL_CMD="mysql -h -u root -P 3306" Feb 27 16:33:02 crc kubenswrapper[4830]: Feb 27 16:33:02 crc kubenswrapper[4830]: if [ -n "placement" ]; then Feb 27 16:33:02 crc kubenswrapper[4830]: GRANT_DATABASE="placement" Feb 27 16:33:02 crc kubenswrapper[4830]: else Feb 27 16:33:02 crc kubenswrapper[4830]: GRANT_DATABASE="*" Feb 27 16:33:02 crc kubenswrapper[4830]: fi Feb 27 16:33:02 crc kubenswrapper[4830]: Feb 27 16:33:02 crc kubenswrapper[4830]: # going for maximum compatibility here: Feb 27 16:33:02 crc kubenswrapper[4830]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Feb 27 16:33:02 crc kubenswrapper[4830]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Feb 27 16:33:02 crc kubenswrapper[4830]: # 3. 
create user with CREATE but then do all password and TLS with ALTER to Feb 27 16:33:02 crc kubenswrapper[4830]: # support updates Feb 27 16:33:02 crc kubenswrapper[4830]: Feb 27 16:33:02 crc kubenswrapper[4830]: $MYSQL_CMD < logger="UnhandledError" Feb 27 16:33:02 crc kubenswrapper[4830]: E0227 16:33:02.949395 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"placement-db-secret\\\" not found\"" pod="openstack/placement-776e-account-create-update-kg8tx" podUID="3bf3e284-86ae-43b5-9259-6e9e34164de2" Feb 27 16:33:02 crc kubenswrapper[4830]: I0227 16:33:02.965131 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 27 16:33:02 crc kubenswrapper[4830]: I0227 16:33:02.965467 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-sb-0" podUID="7285a360-7ff1-4e35-b91a-d472a0ee591b" containerName="openstack-network-exporter" containerID="cri-o://03fae1fb8e9a6d2c747afacdabeb6fc5b1752527700bbfdf259b9f15c3429baa" gracePeriod=300 Feb 27 16:33:02 crc kubenswrapper[4830]: I0227 16:33:02.978223 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-5e39-account-create-update-hqzqb"] Feb 27 16:33:02 crc kubenswrapper[4830]: I0227 16:33:02.985991 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-5e39-account-create-update-hqzqb"] Feb 27 16:33:03 crc kubenswrapper[4830]: I0227 16:33:02.992693 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-dmhcp"] Feb 27 16:33:03 crc kubenswrapper[4830]: I0227 16:33:02.992939 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-cd5cbd7b9-dmhcp" podUID="23db3cbd-39ac-4137-8a7e-0533af96e5b1" containerName="dnsmasq-dns" 
containerID="cri-o://5e4b95ff9e120a4e75ce39c775be2aee2b80b55e4a33fe61a9e413a3ae463cf6" gracePeriod=10 Feb 27 16:33:03 crc kubenswrapper[4830]: I0227 16:33:03.003405 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-776e-account-create-update-kg8tx"] Feb 27 16:33:03 crc kubenswrapper[4830]: I0227 16:33:03.009498 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b8z6k\" (UniqueName: \"kubernetes.io/projected/0ea4ce89-3e8b-4521-9398-3406c6bf0166-kube-api-access-b8z6k\") pod \"nova-cell1-5e39-account-create-update-r88l6\" (UID: \"0ea4ce89-3e8b-4521-9398-3406c6bf0166\") " pod="openstack/nova-cell1-5e39-account-create-update-r88l6" Feb 27 16:33:03 crc kubenswrapper[4830]: I0227 16:33:03.009543 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ea4ce89-3e8b-4521-9398-3406c6bf0166-operator-scripts\") pod \"nova-cell1-5e39-account-create-update-r88l6\" (UID: \"0ea4ce89-3e8b-4521-9398-3406c6bf0166\") " pod="openstack/nova-cell1-5e39-account-create-update-r88l6" Feb 27 16:33:03 crc kubenswrapper[4830]: E0227 16:33:03.009743 4830 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Feb 27 16:33:03 crc kubenswrapper[4830]: E0227 16:33:03.009796 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/47514135-95a6-4b77-815a-ebf23a3cab82-config-data podName:47514135-95a6-4b77-815a-ebf23a3cab82 nodeName:}" failed. No retries permitted until 2026-02-27 16:33:04.009780604 +0000 UTC m=+1580.099053067 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/47514135-95a6-4b77-815a-ebf23a3cab82-config-data") pod "rabbitmq-cell1-server-0" (UID: "47514135-95a6-4b77-815a-ebf23a3cab82") : configmap "rabbitmq-cell1-config-data" not found Feb 27 16:33:03 crc kubenswrapper[4830]: I0227 16:33:03.010600 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ea4ce89-3e8b-4521-9398-3406c6bf0166-operator-scripts\") pod \"nova-cell1-5e39-account-create-update-r88l6\" (UID: \"0ea4ce89-3e8b-4521-9398-3406c6bf0166\") " pod="openstack/nova-cell1-5e39-account-create-update-r88l6" Feb 27 16:33:03 crc kubenswrapper[4830]: I0227 16:33:03.014399 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 27 16:33:03 crc kubenswrapper[4830]: I0227 16:33:03.014718 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-nb-0" podUID="9f17706c-2060-4191-b63a-df7dea2c4c95" containerName="openstack-network-exporter" containerID="cri-o://aef48ea8d72edf5f1504d9101a6b5d6f742a96bb0bdea5a1647ced04e0be6ed1" gracePeriod=300 Feb 27 16:33:03 crc kubenswrapper[4830]: I0227 16:33:03.023758 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-v5xs2"] Feb 27 16:33:03 crc kubenswrapper[4830]: I0227 16:33:03.030758 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-29fd-account-create-update-st6rb" Feb 27 16:33:03 crc kubenswrapper[4830]: I0227 16:33:03.033925 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-ring-rebalance-v5xs2"] Feb 27 16:33:03 crc kubenswrapper[4830]: I0227 16:33:03.059927 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b8z6k\" (UniqueName: \"kubernetes.io/projected/0ea4ce89-3e8b-4521-9398-3406c6bf0166-kube-api-access-b8z6k\") pod \"nova-cell1-5e39-account-create-update-r88l6\" (UID: \"0ea4ce89-3e8b-4521-9398-3406c6bf0166\") " pod="openstack/nova-cell1-5e39-account-create-update-r88l6" Feb 27 16:33:03 crc kubenswrapper[4830]: I0227 16:33:03.097306 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-776e-account-create-update-kg8tx" event={"ID":"3bf3e284-86ae-43b5-9259-6e9e34164de2","Type":"ContainerStarted","Data":"80f4f51520c519c3de9df8d87842927e1bd643af1040ef8f9b7a66b5dbb693dd"} Feb 27 16:33:03 crc kubenswrapper[4830]: E0227 16:33:03.112423 4830 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 16:33:03 crc kubenswrapper[4830]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[/bin/sh -c #!/bin/bash Feb 27 16:33:03 crc kubenswrapper[4830]: Feb 27 16:33:03 crc kubenswrapper[4830]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Feb 27 16:33:03 crc kubenswrapper[4830]: Feb 27 16:33:03 crc kubenswrapper[4830]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Feb 27 16:33:03 crc kubenswrapper[4830]: Feb 27 16:33:03 crc kubenswrapper[4830]: MYSQL_CMD="mysql -h -u root -P 3306" Feb 27 16:33:03 crc kubenswrapper[4830]: Feb 27 16:33:03 crc kubenswrapper[4830]: if [ -n "placement" ]; then Feb 27 16:33:03 crc kubenswrapper[4830]: GRANT_DATABASE="placement" Feb 27 16:33:03 crc kubenswrapper[4830]: else 
Feb 27 16:33:03 crc kubenswrapper[4830]: GRANT_DATABASE="*" Feb 27 16:33:03 crc kubenswrapper[4830]: fi Feb 27 16:33:03 crc kubenswrapper[4830]: Feb 27 16:33:03 crc kubenswrapper[4830]: # going for maximum compatibility here: Feb 27 16:33:03 crc kubenswrapper[4830]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Feb 27 16:33:03 crc kubenswrapper[4830]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Feb 27 16:33:03 crc kubenswrapper[4830]: # 3. create user with CREATE but then do all password and TLS with ALTER to Feb 27 16:33:03 crc kubenswrapper[4830]: # support updates Feb 27 16:33:03 crc kubenswrapper[4830]: Feb 27 16:33:03 crc kubenswrapper[4830]: $MYSQL_CMD < logger="UnhandledError" Feb 27 16:33:03 crc kubenswrapper[4830]: I0227 16:33:03.114023 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-5e39-account-create-update-r88l6" Feb 27 16:33:03 crc kubenswrapper[4830]: E0227 16:33:03.114384 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"placement-db-secret\\\" not found\"" pod="openstack/placement-776e-account-create-update-kg8tx" podUID="3bf3e284-86ae-43b5-9259-6e9e34164de2" Feb 27 16:33:03 crc kubenswrapper[4830]: I0227 16:33:03.122185 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-mtj7r_b64de41e-9e05-48b2-87e5-387aad57532a/openstack-network-exporter/0.log" Feb 27 16:33:03 crc kubenswrapper[4830]: I0227 16:33:03.122229 4830 generic.go:334] "Generic (PLEG): container finished" podID="b64de41e-9e05-48b2-87e5-387aad57532a" containerID="68e148d9c338e25590dbfaf5b9ed31c09c1d25b0cdfd43f35a0878475443aaf7" exitCode=2 Feb 27 16:33:03 crc kubenswrapper[4830]: I0227 16:33:03.122338 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-storage-0"] Feb 27 16:33:03 crc kubenswrapper[4830]: I0227 
16:33:03.122367 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-mtj7r" event={"ID":"b64de41e-9e05-48b2-87e5-387aad57532a","Type":"ContainerDied","Data":"68e148d9c338e25590dbfaf5b9ed31c09c1d25b0cdfd43f35a0878475443aaf7"} Feb 27 16:33:03 crc kubenswrapper[4830]: I0227 16:33:03.124749 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f" containerName="account-server" containerID="cri-o://d31525bce81210150593ba3db8f8611a5b2d43ff82b2e5c7435f34ad45248c17" gracePeriod=30 Feb 27 16:33:03 crc kubenswrapper[4830]: I0227 16:33:03.124802 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f" containerName="object-server" containerID="cri-o://2b750caa248530febbfbd4731fc41f64ef7a9129eab2a66780052a81ccfecb65" gracePeriod=30 Feb 27 16:33:03 crc kubenswrapper[4830]: I0227 16:33:03.124839 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f" containerName="container-updater" containerID="cri-o://63b86b7398c02b758efbf23ee7393a15e9d70cbae4e28af8dae65670306da7a0" gracePeriod=30 Feb 27 16:33:03 crc kubenswrapper[4830]: I0227 16:33:03.124803 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f" containerName="container-server" containerID="cri-o://abb82842a2a5f9faa42c2a6d73afbddfe73443d7841d35f06ec15c1730975fed" gracePeriod=30 Feb 27 16:33:03 crc kubenswrapper[4830]: I0227 16:33:03.124937 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f" containerName="container-auditor" 
containerID="cri-o://fddbdac256b4a79af48834ea268b02e9852631ab71cc27740d8344fa2927b417" gracePeriod=30 Feb 27 16:33:03 crc kubenswrapper[4830]: I0227 16:33:03.124997 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f" containerName="container-replicator" containerID="cri-o://fe39e07eaf48b0f3b6310a52d48a7901fe69c67e61f2bc86fcae68e60845e160" gracePeriod=30 Feb 27 16:33:03 crc kubenswrapper[4830]: I0227 16:33:03.125048 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f" containerName="account-auditor" containerID="cri-o://09edcd425fc07104a2a290237930b325e8877e8ef116e51111ef81ba1b7710e2" gracePeriod=30 Feb 27 16:33:03 crc kubenswrapper[4830]: I0227 16:33:03.125085 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f" containerName="account-reaper" containerID="cri-o://b54307be9a881794a66b55a9bca85b4703855db739e2c59f98b8842a64710ed1" gracePeriod=30 Feb 27 16:33:03 crc kubenswrapper[4830]: I0227 16:33:03.125074 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f" containerName="object-expirer" containerID="cri-o://7cfd581745eb62c04447e2179fa4d6397a6ffb2801133df8571673fd2fc8908e" gracePeriod=30 Feb 27 16:33:03 crc kubenswrapper[4830]: I0227 16:33:03.125125 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f" containerName="object-auditor" containerID="cri-o://2111c96223f006387077459f4429b67f715648783b2df873c937a40d47be2181" gracePeriod=30 Feb 27 16:33:03 crc kubenswrapper[4830]: I0227 16:33:03.125163 4830 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/swift-storage-0" podUID="f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f" containerName="object-updater" containerID="cri-o://d7c3c63f60fa6c0faabdef005cd6435637f7aa45e44077b6d1579dbcfce2ffa5" gracePeriod=30 Feb 27 16:33:03 crc kubenswrapper[4830]: I0227 16:33:03.125203 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f" containerName="swift-recon-cron" containerID="cri-o://bd8b53933ff6dda1af3029d46d29a1b791028b8a3ae0508dffa6e043e33ce932" gracePeriod=30 Feb 27 16:33:03 crc kubenswrapper[4830]: I0227 16:33:03.125222 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f" containerName="rsync" containerID="cri-o://2ecea93ad489597ba408891f7afe44675c8c3d67fbcc4edfbe9a3debbac6c3a1" gracePeriod=30 Feb 27 16:33:03 crc kubenswrapper[4830]: I0227 16:33:03.125247 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f" containerName="object-replicator" containerID="cri-o://ee0b677352a33d7fbcb2e9fab57bf5d672b03867dad9240c6c1fbd8e2b1f0b37" gracePeriod=30 Feb 27 16:33:03 crc kubenswrapper[4830]: I0227 16:33:03.125282 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f" containerName="account-replicator" containerID="cri-o://a6f8e6e02ca541ffa4fab936a485162a21cf976d73c728274bb3fd83cc01abb4" gracePeriod=30 Feb 27 16:33:03 crc kubenswrapper[4830]: I0227 16:33:03.146060 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-sb-0" podUID="7285a360-7ff1-4e35-b91a-d472a0ee591b" containerName="ovsdbserver-sb" containerID="cri-o://5618df31dec13a8fa8c264acbc16b8fc53b1c9f9523f6216c8bce6be25fbacb1" gracePeriod=300 Feb 27 16:33:03 crc kubenswrapper[4830]: 
I0227 16:33:03.160353 4830 patch_prober.go:28] interesting pod/machine-config-daemon-2tv5v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 16:33:03 crc kubenswrapper[4830]: I0227 16:33:03.160410 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 16:33:03 crc kubenswrapper[4830]: I0227 16:33:03.160453 4830 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" Feb 27 16:33:03 crc kubenswrapper[4830]: I0227 16:33:03.161813 4830 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ca4233a6fa911a1d6b959075103b585a592935c9e9a6d178d17fef41a2fea048"} pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 27 16:33:03 crc kubenswrapper[4830]: I0227 16:33:03.161886 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" containerName="machine-config-daemon" containerID="cri-o://ca4233a6fa911a1d6b959075103b585a592935c9e9a6d178d17fef41a2fea048" gracePeriod=600 Feb 27 16:33:03 crc kubenswrapper[4830]: I0227 16:33:03.162470 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 27 16:33:03 crc kubenswrapper[4830]: I0227 16:33:03.198280 4830 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="6d6ca92a-3e98-4628-8936-37032cf27463" containerName="probe" containerID="cri-o://08dae26c7de73c784a1c4cdf01a2ec48ed79b52c6c16691dcb728b190ce0bde0" gracePeriod=30 Feb 27 16:33:03 crc kubenswrapper[4830]: I0227 16:33:03.162941 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="6d6ca92a-3e98-4628-8936-37032cf27463" containerName="cinder-scheduler" containerID="cri-o://c6e289a18c1629684bcdb331c9033eb81b5cf53591f391b7c77955013ee8149f" gracePeriod=30 Feb 27 16:33:03 crc kubenswrapper[4830]: I0227 16:33:03.199351 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-ovs-qt6mr" podUID="bc737ee4-d87c-4276-a6d1-6f3144879542" containerName="ovs-vswitchd" probeResult="failure" output=< Feb 27 16:33:03 crc kubenswrapper[4830]: cat: /var/run/openvswitch/ovs-vswitchd.pid: No such file or directory Feb 27 16:33:03 crc kubenswrapper[4830]: ERROR - Failed to get pid for ovs-vswitchd, exit status: 0 Feb 27 16:33:03 crc kubenswrapper[4830]: > Feb 27 16:33:03 crc kubenswrapper[4830]: I0227 16:33:03.199404 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-ovs-qt6mr" podUID="bc737ee4-d87c-4276-a6d1-6f3144879542" containerName="ovs-vswitchd" containerID="cri-o://4a66491103ecf784427f1721d3379810efd654720c7344f0ebbf5e10bc7a1b3d" gracePeriod=29 Feb 27 16:33:03 crc kubenswrapper[4830]: E0227 16:33:03.200670 4830 handlers.go:78] "Exec lifecycle hook for Container in Pod failed" err=< Feb 27 16:33:03 crc kubenswrapper[4830]: command '/usr/local/bin/container-scripts/stop-ovsdb-server.sh' exited with 137: ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh Feb 27 16:33:03 crc kubenswrapper[4830]: + source /usr/local/bin/container-scripts/functions Feb 27 16:33:03 crc kubenswrapper[4830]: ++ OVNBridge=br-int Feb 27 16:33:03 crc kubenswrapper[4830]: 
++ OVNRemote=tcp:localhost:6642 Feb 27 16:33:03 crc kubenswrapper[4830]: ++ OVNEncapType=geneve Feb 27 16:33:03 crc kubenswrapper[4830]: ++ OVNAvailabilityZones= Feb 27 16:33:03 crc kubenswrapper[4830]: ++ EnableChassisAsGateway=true Feb 27 16:33:03 crc kubenswrapper[4830]: ++ PhysicalNetworks= Feb 27 16:33:03 crc kubenswrapper[4830]: ++ OVNHostName= Feb 27 16:33:03 crc kubenswrapper[4830]: ++ DB_FILE=/etc/openvswitch/conf.db Feb 27 16:33:03 crc kubenswrapper[4830]: ++ ovs_dir=/var/lib/openvswitch Feb 27 16:33:03 crc kubenswrapper[4830]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script Feb 27 16:33:03 crc kubenswrapper[4830]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows Feb 27 16:33:03 crc kubenswrapper[4830]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server Feb 27 16:33:03 crc kubenswrapper[4830]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Feb 27 16:33:03 crc kubenswrapper[4830]: + sleep 0.5 Feb 27 16:33:03 crc kubenswrapper[4830]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Feb 27 16:33:03 crc kubenswrapper[4830]: + sleep 0.5 Feb 27 16:33:03 crc kubenswrapper[4830]: + '[' '!' 
-f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Feb 27 16:33:03 crc kubenswrapper[4830]: + cleanup_ovsdb_server_semaphore Feb 27 16:33:03 crc kubenswrapper[4830]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server Feb 27 16:33:03 crc kubenswrapper[4830]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd Feb 27 16:33:03 crc kubenswrapper[4830]: > execCommand=["/usr/local/bin/container-scripts/stop-ovsdb-server.sh"] containerName="ovsdb-server" pod="openstack/ovn-controller-ovs-qt6mr" message=< Feb 27 16:33:03 crc kubenswrapper[4830]: Exiting ovsdb-server (5) [ OK ] Feb 27 16:33:03 crc kubenswrapper[4830]: ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh Feb 27 16:33:03 crc kubenswrapper[4830]: + source /usr/local/bin/container-scripts/functions Feb 27 16:33:03 crc kubenswrapper[4830]: ++ OVNBridge=br-int Feb 27 16:33:03 crc kubenswrapper[4830]: ++ OVNRemote=tcp:localhost:6642 Feb 27 16:33:03 crc kubenswrapper[4830]: ++ OVNEncapType=geneve Feb 27 16:33:03 crc kubenswrapper[4830]: ++ OVNAvailabilityZones= Feb 27 16:33:03 crc kubenswrapper[4830]: ++ EnableChassisAsGateway=true Feb 27 16:33:03 crc kubenswrapper[4830]: ++ PhysicalNetworks= Feb 27 16:33:03 crc kubenswrapper[4830]: ++ OVNHostName= Feb 27 16:33:03 crc kubenswrapper[4830]: ++ DB_FILE=/etc/openvswitch/conf.db Feb 27 16:33:03 crc kubenswrapper[4830]: ++ ovs_dir=/var/lib/openvswitch Feb 27 16:33:03 crc kubenswrapper[4830]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script Feb 27 16:33:03 crc kubenswrapper[4830]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows Feb 27 16:33:03 crc kubenswrapper[4830]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server Feb 27 16:33:03 crc kubenswrapper[4830]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Feb 27 16:33:03 crc kubenswrapper[4830]: + sleep 0.5 Feb 27 16:33:03 crc kubenswrapper[4830]: + '[' '!' 
-f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Feb 27 16:33:03 crc kubenswrapper[4830]: + sleep 0.5 Feb 27 16:33:03 crc kubenswrapper[4830]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Feb 27 16:33:03 crc kubenswrapper[4830]: + cleanup_ovsdb_server_semaphore Feb 27 16:33:03 crc kubenswrapper[4830]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server Feb 27 16:33:03 crc kubenswrapper[4830]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd Feb 27 16:33:03 crc kubenswrapper[4830]: > Feb 27 16:33:03 crc kubenswrapper[4830]: E0227 16:33:03.200691 4830 kuberuntime_container.go:691] "PreStop hook failed" err=< Feb 27 16:33:03 crc kubenswrapper[4830]: command '/usr/local/bin/container-scripts/stop-ovsdb-server.sh' exited with 137: ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh Feb 27 16:33:03 crc kubenswrapper[4830]: + source /usr/local/bin/container-scripts/functions Feb 27 16:33:03 crc kubenswrapper[4830]: ++ OVNBridge=br-int Feb 27 16:33:03 crc kubenswrapper[4830]: ++ OVNRemote=tcp:localhost:6642 Feb 27 16:33:03 crc kubenswrapper[4830]: ++ OVNEncapType=geneve Feb 27 16:33:03 crc kubenswrapper[4830]: ++ OVNAvailabilityZones= Feb 27 16:33:03 crc kubenswrapper[4830]: ++ EnableChassisAsGateway=true Feb 27 16:33:03 crc kubenswrapper[4830]: ++ PhysicalNetworks= Feb 27 16:33:03 crc kubenswrapper[4830]: ++ OVNHostName= Feb 27 16:33:03 crc kubenswrapper[4830]: ++ DB_FILE=/etc/openvswitch/conf.db Feb 27 16:33:03 crc kubenswrapper[4830]: ++ ovs_dir=/var/lib/openvswitch Feb 27 16:33:03 crc kubenswrapper[4830]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script Feb 27 16:33:03 crc kubenswrapper[4830]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows Feb 27 16:33:03 crc kubenswrapper[4830]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server Feb 27 16:33:03 crc kubenswrapper[4830]: + '[' '!' 
-f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Feb 27 16:33:03 crc kubenswrapper[4830]: + sleep 0.5 Feb 27 16:33:03 crc kubenswrapper[4830]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Feb 27 16:33:03 crc kubenswrapper[4830]: + sleep 0.5 Feb 27 16:33:03 crc kubenswrapper[4830]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Feb 27 16:33:03 crc kubenswrapper[4830]: + cleanup_ovsdb_server_semaphore Feb 27 16:33:03 crc kubenswrapper[4830]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server Feb 27 16:33:03 crc kubenswrapper[4830]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd Feb 27 16:33:03 crc kubenswrapper[4830]: > pod="openstack/ovn-controller-ovs-qt6mr" podUID="bc737ee4-d87c-4276-a6d1-6f3144879542" containerName="ovsdb-server" containerID="cri-o://6dd6bce0125fbdcdd86f5eb9074b21ac3c2ba5ef9063bcabe41f111d338472f5" Feb 27 16:33:03 crc kubenswrapper[4830]: I0227 16:33:03.200710 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-ovs-qt6mr" podUID="bc737ee4-d87c-4276-a6d1-6f3144879542" containerName="ovsdb-server" containerID="cri-o://6dd6bce0125fbdcdd86f5eb9074b21ac3c2ba5ef9063bcabe41f111d338472f5" gracePeriod=29 Feb 27 16:33:03 crc kubenswrapper[4830]: I0227 16:33:03.220315 4830 generic.go:334] "Generic (PLEG): container finished" podID="7c017daa-cb8f-4629-80e6-a671a8455149" containerID="2bcb11d594f79ad72480623964189734a371dbef467c7efb79b03ffa6975a1e6" exitCode=2 Feb 27 16:33:03 crc kubenswrapper[4830]: I0227 16:33:03.220397 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"7c017daa-cb8f-4629-80e6-a671a8455149","Type":"ContainerDied","Data":"2bcb11d594f79ad72480623964189734a371dbef467c7efb79b03ffa6975a1e6"} Feb 27 16:33:03 crc kubenswrapper[4830]: I0227 16:33:03.229328 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-8559c55d4f-z6hpf"] Feb 27 16:33:03 crc 
kubenswrapper[4830]: I0227 16:33:03.229585 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-8559c55d4f-z6hpf" podUID="acdbf1f3-efd7-4181-b99c-a0697c465c4b" containerName="neutron-api" containerID="cri-o://a56e16403fc2d569470e79c24225b344a16dacbbe2255d02caeb6351695ce986" gracePeriod=30 Feb 27 16:33:03 crc kubenswrapper[4830]: I0227 16:33:03.229676 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-8559c55d4f-z6hpf" podUID="acdbf1f3-efd7-4181-b99c-a0697c465c4b" containerName="neutron-httpd" containerID="cri-o://825cde15be9549d56742ccbdc2f57b6324396f78c69861f72b851d87071dd387" gracePeriod=30 Feb 27 16:33:03 crc kubenswrapper[4830]: I0227 16:33:03.248588 4830 generic.go:334] "Generic (PLEG): container finished" podID="2eacdbe3-1c63-4811-a7ea-5dc6fd8dce60" containerID="37bbcf553bf7957f279c1d2e295e8937f67d4c1dc6186d8ebb25e160632ad917" exitCode=0 Feb 27 16:33:03 crc kubenswrapper[4830]: I0227 16:33:03.248629 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-mncqx" event={"ID":"2eacdbe3-1c63-4811-a7ea-5dc6fd8dce60","Type":"ContainerDied","Data":"37bbcf553bf7957f279c1d2e295e8937f67d4c1dc6186d8ebb25e160632ad917"} Feb 27 16:33:03 crc kubenswrapper[4830]: I0227 16:33:03.256479 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-58db7bd5dd-jr8zt"] Feb 27 16:33:03 crc kubenswrapper[4830]: I0227 16:33:03.256780 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-58db7bd5dd-jr8zt" podUID="bf57a5ff-eb3d-4f4b-8ac0-0aac8013fbbf" containerName="placement-log" containerID="cri-o://4ad340ff7e5d3dcbe59313ae7a759101ba1b8edf59a86c29f287b2cb3edf2de6" gracePeriod=30 Feb 27 16:33:03 crc kubenswrapper[4830]: I0227 16:33:03.256905 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-58db7bd5dd-jr8zt" podUID="bf57a5ff-eb3d-4f4b-8ac0-0aac8013fbbf" 
containerName="placement-api" containerID="cri-o://b4c2a77141370e51625fa6bf385bb1eb77fc6e2be81322189a2da160e42e03d0" gracePeriod=30 Feb 27 16:33:03 crc kubenswrapper[4830]: I0227 16:33:03.266884 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 27 16:33:03 crc kubenswrapper[4830]: I0227 16:33:03.268154 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="41fafe33-b43b-4dcb-9edd-b365d0749e10" containerName="cinder-api-log" containerID="cri-o://40cab2835902cbbd7f2108f23209c5d896b2d0b912cf229a63563e0cdf02215b" gracePeriod=30 Feb 27 16:33:03 crc kubenswrapper[4830]: I0227 16:33:03.268584 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="41fafe33-b43b-4dcb-9edd-b365d0749e10" containerName="cinder-api" containerID="cri-o://9f254100c8c027338b42ed369be0ddd72af937c9d87a9a808607f1dcc876c8ed" gracePeriod=30 Feb 27 16:33:03 crc kubenswrapper[4830]: I0227 16:33:03.299068 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-86c7h"] Feb 27 16:33:03 crc kubenswrapper[4830]: I0227 16:33:03.308236 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-86c7h"] Feb 27 16:33:03 crc kubenswrapper[4830]: I0227 16:33:03.319057 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-8mnsh"] Feb 27 16:33:03 crc kubenswrapper[4830]: I0227 16:33:03.343044 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-8mnsh"] Feb 27 16:33:03 crc kubenswrapper[4830]: I0227 16:33:03.379826 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-nb-0" podUID="9f17706c-2060-4191-b63a-df7dea2c4c95" containerName="ovsdbserver-nb" containerID="cri-o://6ec8f1e6a925dda75bf2b25d6d091880ed805d81e677fbee45551ce4d31bc846" gracePeriod=300 Feb 27 16:33:03 crc 
kubenswrapper[4830]: I0227 16:33:03.443951 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 27 16:33:03 crc kubenswrapper[4830]: I0227 16:33:03.467290 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 27 16:33:03 crc kubenswrapper[4830]: I0227 16:33:03.467544 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="73fa27e0-b59d-44b0-8648-7e696f71cd61" containerName="glance-log" containerID="cri-o://25a00b007e3e1a8c77c7bf619655cf9ead3a6eb2aa47a2c778cfc3371c33e4c5" gracePeriod=30 Feb 27 16:33:03 crc kubenswrapper[4830]: I0227 16:33:03.467977 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="73fa27e0-b59d-44b0-8648-7e696f71cd61" containerName="glance-httpd" containerID="cri-o://a5137475aad41fb8eb7b0a7b72def6633e3820a0b964c9cad287965ce3680cca" gracePeriod=30 Feb 27 16:33:03 crc kubenswrapper[4830]: I0227 16:33:03.493010 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-jhhnm"] Feb 27 16:33:03 crc kubenswrapper[4830]: I0227 16:33:03.519999 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-jhhnm"] Feb 27 16:33:03 crc kubenswrapper[4830]: I0227 16:33:03.576294 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-2d81-account-create-update-6xn6z"] Feb 27 16:33:03 crc kubenswrapper[4830]: I0227 16:33:03.593289 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-2d81-account-create-update-6xn6z"] Feb 27 16:33:03 crc kubenswrapper[4830]: I0227 16:33:03.602437 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-h7wpk"] Feb 27 16:33:03 crc kubenswrapper[4830]: I0227 16:33:03.615643 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/cinder-db-create-h7wpk"]
Feb 27 16:33:03 crc kubenswrapper[4830]: I0227 16:33:03.625303 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 27 16:33:03 crc kubenswrapper[4830]: I0227 16:33:03.625512 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="d8d4cd44-9972-445e-bac3-63441b6fa4cc" containerName="glance-log" containerID="cri-o://0e99db8779b62c9b60211a3a800d8786d6e5d19fd2046d962c492ef86848b48c" gracePeriod=30
Feb 27 16:33:03 crc kubenswrapper[4830]: I0227 16:33:03.626027 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="d8d4cd44-9972-445e-bac3-63441b6fa4cc" containerName="glance-httpd" containerID="cri-o://7b743cc093d9cd3e5deb61678bf56225726f2ee5f6b916d24acb306d92c0ebc6" gracePeriod=30
Feb 27 16:33:03 crc kubenswrapper[4830]: E0227 16:33:03.641879 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25"
Feb 27 16:33:03 crc kubenswrapper[4830]: I0227 16:33:03.665275 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-37dc-account-create-update-j859r"]
Feb 27 16:33:03 crc kubenswrapper[4830]: I0227 16:33:03.694073 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-37dc-account-create-update-j859r"]
Feb 27 16:33:03 crc kubenswrapper[4830]: I0227 16:33:03.731764 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-2jzwm"]
Feb 27 16:33:03 crc kubenswrapper[4830]: I0227 16:33:03.737416 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="aa5b7bdd-50bb-4123-a32a-0c7e97035a3f" containerName="rabbitmq" containerID="cri-o://60b83b906afc06b23e5e1362e3117ceeff1474cd84090478f13efba3e31b7cf5" gracePeriod=604800
Feb 27 16:33:03 crc kubenswrapper[4830]: I0227 16:33:03.743390 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-2jzwm"]
Feb 27 16:33:03 crc kubenswrapper[4830]: I0227 16:33:03.756542 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-776e-account-create-update-kg8tx"]
Feb 27 16:33:03 crc kubenswrapper[4830]: I0227 16:33:03.776037 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-c6f44c475-twbzz"]
Feb 27 16:33:03 crc kubenswrapper[4830]: I0227 16:33:03.776309 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-c6f44c475-twbzz" podUID="38b57350-6ca0-4090-876b-7727c983cf52" containerName="proxy-httpd" containerID="cri-o://4379a4562487a2f829fd847e713d7b48e4f30ff72dfa48612a5cee4351449110" gracePeriod=30
Feb 27 16:33:03 crc kubenswrapper[4830]: I0227 16:33:03.776653 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-c6f44c475-twbzz" podUID="38b57350-6ca0-4090-876b-7727c983cf52" containerName="proxy-server" containerID="cri-o://7dad8ffa6283d569435591881ebf2eedf721235312643b6378985dffadc0a1cf" gracePeriod=30
Feb 27 16:33:03 crc kubenswrapper[4830]: I0227 16:33:03.792990 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-mz689"]
Feb 27 16:33:03 crc kubenswrapper[4830]: I0227 16:33:03.801614 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-mz689"]
Feb 27 16:33:03 crc kubenswrapper[4830]: I0227 16:33:03.820069 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-7668-account-create-update-6wj4n"]
Feb 27 16:33:03 crc kubenswrapper[4830]: I0227 16:33:03.899430 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-v5pmq"]
Feb 27 16:33:03 crc kubenswrapper[4830]: I0227 16:33:03.910909 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-v5pmq"]
Feb 27 16:33:03 crc kubenswrapper[4830]: I0227 16:33:03.933210 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-d4d2-account-create-update-qbmct"]
Feb 27 16:33:03 crc kubenswrapper[4830]: I0227 16:33:03.946208 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-d4d2-account-create-update-qbmct"]
Feb 27 16:33:03 crc kubenswrapper[4830]: I0227 16:33:03.949009 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-cell1-galera-0"]
Feb 27 16:33:03 crc kubenswrapper[4830]: I0227 16:33:03.955990 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Feb 27 16:33:03 crc kubenswrapper[4830]: I0227 16:33:03.956441 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="91362817-2bc3-48d8-a4ae-8ba5cb8f2b4c" containerName="nova-api-log" containerID="cri-o://144b29fbee6ca22072cb52d8025180f33aea96191753e1a5038399c82ac702fc" gracePeriod=30
Feb 27 16:33:03 crc kubenswrapper[4830]: I0227 16:33:03.956778 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="91362817-2bc3-48d8-a4ae-8ba5cb8f2b4c" containerName="nova-api-api" containerID="cri-o://71f9a2d35a123a7c42bc68cc143760e467aedb724086c36e562efbf095e0c426" gracePeriod=30
Feb 27 16:33:03 crc kubenswrapper[4830]: I0227 16:33:03.972016 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-sd6bv"]
Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:03.997958 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-sd6bv"]
Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.012993 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-5e39-account-create-update-r88l6"]
Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.021085 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-2jtqk"]
Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.028021 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-c219-account-create-update-w82r8"]
Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.052898 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.053178 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="f4280aaf-817d-41e1-9867-715359ae322e" containerName="nova-metadata-log" containerID="cri-o://53a40c635318ff11c80f75f6211616278bbd9c179f11fec9265e63a26e70b0ac" gracePeriod=30
Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.053298 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="f4280aaf-817d-41e1-9867-715359ae322e" containerName="nova-metadata-metadata" containerID="cri-o://67f705d66ad4d26d1a66a751f763fac473304bb8b591b54c2c0c497cc8ee46c6" gracePeriod=30
Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.064217 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-2jtqk"]
Feb 27 16:33:04 crc kubenswrapper[4830]: E0227 16:33:04.065077 4830 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found
Feb 27 16:33:04 crc kubenswrapper[4830]: E0227 16:33:04.077445 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/47514135-95a6-4b77-815a-ebf23a3cab82-config-data podName:47514135-95a6-4b77-815a-ebf23a3cab82 nodeName:}" failed. No retries permitted until 2026-02-27 16:33:06.077410811 +0000 UTC m=+1582.166683274 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/47514135-95a6-4b77-815a-ebf23a3cab82-config-data") pod "rabbitmq-cell1-server-0" (UID: "47514135-95a6-4b77-815a-ebf23a3cab82") : configmap "rabbitmq-cell1-config-data" not found
Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.157682 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-drqxj"]
Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.166276 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-drqxj"]
Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.223848 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-29fd-account-create-update-st6rb"]
Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.231116 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.275840 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-58c49587-cz4f5"]
Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.276090 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-58c49587-cz4f5" podUID="f4baf4d8-24c9-4aa8-b72e-9d6d9cdd5f32" containerName="barbican-worker-log" containerID="cri-o://d25e9e29213d4dd9d13dc6e8f8443d64cbecee22307bae547934dfd69a24c51a" gracePeriod=30
Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.276414 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-58c49587-cz4f5" podUID="f4baf4d8-24c9-4aa8-b72e-9d6d9cdd5f32" containerName="barbican-worker" containerID="cri-o://3bd476206784383c2fbe0db210deee00da003f513b1f05dcbc55ea33c264c212" gracePeriod=30
Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.289918 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-948fdb9cd-ncm6f"]
Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.290125 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-948fdb9cd-ncm6f" podUID="22232c9c-ecf7-443e-834f-ad39b37735b2" containerName="barbican-keystone-listener-log" containerID="cri-o://6cf3d9b94980e2ca5aa0032ef28c8b51ac4ff272ea01954cb10fbe1ad64d9f4b" gracePeriod=30
Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.290547 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-948fdb9cd-ncm6f" podUID="22232c9c-ecf7-443e-834f-ad39b37735b2" containerName="barbican-keystone-listener" containerID="cri-o://91059dd00f11fc333eace4b793fe5a4f3fca466216720380e52c9fb9f6ce33ff" gracePeriod=30
Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.298908 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-mncqx" event={"ID":"2eacdbe3-1c63-4811-a7ea-5dc6fd8dce60","Type":"ContainerDied","Data":"76bb760f76d65ac29dbbac945a7c3f50503139f52918e3dcc5f430bb0fd782bc"}
Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.298954 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="76bb760f76d65ac29dbbac945a7c3f50503139f52918e3dcc5f430bb0fd782bc"
Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.301538 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-5d54db5966-xcg7l"]
Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.301774 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-5d54db5966-xcg7l" podUID="a234743b-8983-4a60-bbb4-59ad823b83e2" containerName="barbican-api-log" containerID="cri-o://bcaad14a5dbb96adf7a18f1f57a6f9461056ab8d5981e03e5ed3e64de132d692" gracePeriod=30
Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.302325 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-5d54db5966-xcg7l" podUID="a234743b-8983-4a60-bbb4-59ad823b83e2" containerName="barbican-api" containerID="cri-o://5d61bb0dcfd0af97605ea6793d0ccb409521660eb0cfce03c505ba533a6f52a4" gracePeriod=30
Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.303808 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-mtj7r_b64de41e-9e05-48b2-87e5-387aad57532a/openstack-network-exporter/0.log"
Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.304902 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-mtj7r" event={"ID":"b64de41e-9e05-48b2-87e5-387aad57532a","Type":"ContainerDied","Data":"6474c9f1bef2ad51145b280febe52f680adfa6000d6ca748d69a65cd5b075580"}
Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.304928 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6474c9f1bef2ad51145b280febe52f680adfa6000d6ca748d69a65cd5b075580"
Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.328802 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.329271 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="21656f50-51b8-4761-8b9e-c2b823dace13" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://a3e19fe9784a7e84ad00ba5db518baa23ac731605584cf84a3a6192b109fa71e" gracePeriod=30
Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.331928 4830 generic.go:334] "Generic (PLEG): container finished" podID="91362817-2bc3-48d8-a4ae-8ba5cb8f2b4c" containerID="144b29fbee6ca22072cb52d8025180f33aea96191753e1a5038399c82ac702fc" exitCode=143
Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.332090 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"91362817-2bc3-48d8-a4ae-8ba5cb8f2b4c","Type":"ContainerDied","Data":"144b29fbee6ca22072cb52d8025180f33aea96191753e1a5038399c82ac702fc"}
Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.333616 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.333781 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="9e9ca76d-ae40-4258-8f4d-09e15a0d8cd6" containerName="nova-scheduler-scheduler" containerID="cri-o://4bd0cecd4c639c19d6288ae6763e874f65a458b96d3aae8d391e7b853fd3836b" gracePeriod=30
Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.346126 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-cell1-galera-0" podUID="b63af300-2b1c-47a7-ae1d-1334deeefdb1" containerName="galera" containerID="cri-o://58b3931eed123fb0912adbb48ae5835fb65012c51cabfe8279f65b2fb158c0e1" gracePeriod=30
Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.346236 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-lx5sm"]
Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.347282 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="47514135-95a6-4b77-815a-ebf23a3cab82" containerName="rabbitmq" containerID="cri-o://bfe6a195e3ce6d71c1ea474fd65d55d683efc1b57f94f1cbfda130d8058970ed" gracePeriod=604800
Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.375800 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-7668-account-create-update-6wj4n"]
Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.384148 4830 generic.go:334] "Generic (PLEG): container finished" podID="acdbf1f3-efd7-4181-b99c-a0697c465c4b" containerID="825cde15be9549d56742ccbdc2f57b6324396f78c69861f72b851d87071dd387" exitCode=0
Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.384249 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8559c55d4f-z6hpf" event={"ID":"acdbf1f3-efd7-4181-b99c-a0697c465c4b","Type":"ContainerDied","Data":"825cde15be9549d56742ccbdc2f57b6324396f78c69861f72b851d87071dd387"}
Feb 27 16:33:04 crc kubenswrapper[4830]: E0227 16:33:04.384486 4830 kuberuntime_manager.go:1274] "Unhandled Error" err=<
Feb 27 16:33:04 crc kubenswrapper[4830]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[/bin/sh -c #!/bin/bash
Feb 27 16:33:04 crc kubenswrapper[4830]: 
Feb 27 16:33:04 crc kubenswrapper[4830]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh
Feb 27 16:33:04 crc kubenswrapper[4830]: 
Feb 27 16:33:04 crc kubenswrapper[4830]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."}
Feb 27 16:33:04 crc kubenswrapper[4830]: 
Feb 27 16:33:04 crc kubenswrapper[4830]: MYSQL_CMD="mysql -h -u root -P 3306"
Feb 27 16:33:04 crc kubenswrapper[4830]: 
Feb 27 16:33:04 crc kubenswrapper[4830]: if [ -n "glance" ]; then
Feb 27 16:33:04 crc kubenswrapper[4830]: GRANT_DATABASE="glance"
Feb 27 16:33:04 crc kubenswrapper[4830]: else
Feb 27 16:33:04 crc kubenswrapper[4830]: GRANT_DATABASE="*"
Feb 27 16:33:04 crc kubenswrapper[4830]: fi
Feb 27 16:33:04 crc kubenswrapper[4830]: 
Feb 27 16:33:04 crc kubenswrapper[4830]: # going for maximum compatibility here:
Feb 27 16:33:04 crc kubenswrapper[4830]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used
Feb 27 16:33:04 crc kubenswrapper[4830]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not
Feb 27 16:33:04 crc kubenswrapper[4830]: # 3. create user with CREATE but then do all password and TLS with ALTER to
Feb 27 16:33:04 crc kubenswrapper[4830]: # support updates
Feb 27 16:33:04 crc kubenswrapper[4830]: 
Feb 27 16:33:04 crc kubenswrapper[4830]: $MYSQL_CMD < logger="UnhandledError"
Feb 27 16:33:04 crc kubenswrapper[4830]: E0227 16:33:04.386388 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"glance-db-secret\\\" not found\"" pod="openstack/glance-7668-account-create-update-6wj4n" podUID="baefaedf-2591-42f2-a383-5c92ae714ab5"
Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.389188 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.389341 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-conductor-0" podUID="a989aa76-9246-46b2-9f1e-7900cfecedc2" containerName="nova-cell1-conductor-conductor" containerID="cri-o://0177eede3f4945d97bcd0d90fed75c1aa58d1276a7fd71e80b0683515562f9b1" gracePeriod=30
Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.403203 4830 generic.go:334] "Generic (PLEG): container finished" podID="3482e9fb-53ae-4908-87fc-4096c5b26b76" containerID="ebe94bb0443ae2939345bc80a179e9644e55c467b0fc2c9d6043e5cff481e239" exitCode=137
Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.412462 4830 generic.go:334] "Generic (PLEG): container finished" podID="73fa27e0-b59d-44b0-8648-7e696f71cd61" containerID="25a00b007e3e1a8c77c7bf619655cf9ead3a6eb2aa47a2c778cfc3371c33e4c5" exitCode=143
Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.412537 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-b8tph"]
Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.412560 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"73fa27e0-b59d-44b0-8648-7e696f71cd61","Type":"ContainerDied","Data":"25a00b007e3e1a8c77c7bf619655cf9ead3a6eb2aa47a2c778cfc3371c33e4c5"}
Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.428382 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-b8tph"]
Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.432186 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_7285a360-7ff1-4e35-b91a-d472a0ee591b/ovsdbserver-sb/0.log"
Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.432228 4830 generic.go:334] "Generic (PLEG): container finished" podID="7285a360-7ff1-4e35-b91a-d472a0ee591b" containerID="03fae1fb8e9a6d2c747afacdabeb6fc5b1752527700bbfdf259b9f15c3429baa" exitCode=2
Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.432244 4830 generic.go:334] "Generic (PLEG): container finished" podID="7285a360-7ff1-4e35-b91a-d472a0ee591b" containerID="5618df31dec13a8fa8c264acbc16b8fc53b1c9f9523f6216c8bce6be25fbacb1" exitCode=143
Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.432305 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"7285a360-7ff1-4e35-b91a-d472a0ee591b","Type":"ContainerDied","Data":"03fae1fb8e9a6d2c747afacdabeb6fc5b1752527700bbfdf259b9f15c3429baa"}
Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.432330 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"7285a360-7ff1-4e35-b91a-d472a0ee591b","Type":"ContainerDied","Data":"5618df31dec13a8fa8c264acbc16b8fc53b1c9f9523f6216c8bce6be25fbacb1"}
Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.432340 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"7285a360-7ff1-4e35-b91a-d472a0ee591b","Type":"ContainerDied","Data":"49bf1f87f98ae2644a84087142c3c92892d9fad6b91ed15bc982a4c0b71e5a49"}
Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.432349 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="49bf1f87f98ae2644a84087142c3c92892d9fad6b91ed15bc982a4c0b71e5a49"
Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.439304 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.439546 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell0-conductor-0" podUID="0bee1ae7-32fb-484d-a81a-47fe31e25d70" containerName="nova-cell0-conductor-conductor" containerID="cri-o://c2905f95d9b1bd685977d7be7161ae0adaba055e9615f02fecc0602b6c991b5c" gracePeriod=30
Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.453999 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-x7bbz"]
Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.461301 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-x7bbz"]
Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.491172 4830 generic.go:334] "Generic (PLEG): container finished" podID="00d6b7ce-4757-4275-8345-60c1b546ce25" containerID="ca4233a6fa911a1d6b959075103b585a592935c9e9a6d178d17fef41a2fea048" exitCode=0
Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.491227 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" event={"ID":"00d6b7ce-4757-4275-8345-60c1b546ce25","Type":"ContainerDied","Data":"ca4233a6fa911a1d6b959075103b585a592935c9e9a6d178d17fef41a2fea048"}
Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.491257 4830 scope.go:117] "RemoveContainer" containerID="4451f44bd5a230af740184dd479b8e8cef56c8f4c478f47a91288db9cb943456"
Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.491649 4830 scope.go:117] "RemoveContainer" containerID="ca4233a6fa911a1d6b959075103b585a592935c9e9a6d178d17fef41a2fea048"
Feb 27 16:33:04 crc kubenswrapper[4830]: E0227 16:33:04.491854 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25"
Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.533426 4830 generic.go:334] "Generic (PLEG): container finished" podID="d8d4cd44-9972-445e-bac3-63441b6fa4cc" containerID="0e99db8779b62c9b60211a3a800d8786d6e5d19fd2046d962c492ef86848b48c" exitCode=143
Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.533521 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d8d4cd44-9972-445e-bac3-63441b6fa4cc","Type":"ContainerDied","Data":"0e99db8779b62c9b60211a3a800d8786d6e5d19fd2046d962c492ef86848b48c"}
Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.548546 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_9f17706c-2060-4191-b63a-df7dea2c4c95/ovsdbserver-nb/0.log"
Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.548732 4830 generic.go:334] "Generic (PLEG): container finished" podID="9f17706c-2060-4191-b63a-df7dea2c4c95" containerID="aef48ea8d72edf5f1504d9101a6b5d6f742a96bb0bdea5a1647ced04e0be6ed1" exitCode=2
Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.548797 4830 generic.go:334] "Generic (PLEG): container finished" podID="9f17706c-2060-4191-b63a-df7dea2c4c95" containerID="6ec8f1e6a925dda75bf2b25d6d091880ed805d81e677fbee45551ce4d31bc846" exitCode=143
Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.548981 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"9f17706c-2060-4191-b63a-df7dea2c4c95","Type":"ContainerDied","Data":"aef48ea8d72edf5f1504d9101a6b5d6f742a96bb0bdea5a1647ced04e0be6ed1"}
Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.549077 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"9f17706c-2060-4191-b63a-df7dea2c4c95","Type":"ContainerDied","Data":"6ec8f1e6a925dda75bf2b25d6d091880ed805d81e677fbee45551ce4d31bc846"}
Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.593483 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-c219-account-create-update-w82r8"]
Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.606555 4830 generic.go:334] "Generic (PLEG): container finished" podID="f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f" containerID="2ecea93ad489597ba408891f7afe44675c8c3d67fbcc4edfbe9a3debbac6c3a1" exitCode=0
Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.606584 4830 generic.go:334] "Generic (PLEG): container finished" podID="f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f" containerID="7cfd581745eb62c04447e2179fa4d6397a6ffb2801133df8571673fd2fc8908e" exitCode=0
Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.606593 4830 generic.go:334] "Generic (PLEG): container finished" podID="f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f" containerID="d7c3c63f60fa6c0faabdef005cd6435637f7aa45e44077b6d1579dbcfce2ffa5" exitCode=0
Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.606600 4830 generic.go:334] "Generic (PLEG): container finished" podID="f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f" containerID="2111c96223f006387077459f4429b67f715648783b2df873c937a40d47be2181" exitCode=0
Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.606609 4830 generic.go:334] "Generic (PLEG): container finished" podID="f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f" containerID="ee0b677352a33d7fbcb2e9fab57bf5d672b03867dad9240c6c1fbd8e2b1f0b37" exitCode=0
Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.606616 4830 generic.go:334] "Generic (PLEG): container finished" podID="f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f" containerID="2b750caa248530febbfbd4731fc41f64ef7a9129eab2a66780052a81ccfecb65" exitCode=0
Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.606622 4830 generic.go:334] "Generic (PLEG): container finished" podID="f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f" containerID="63b86b7398c02b758efbf23ee7393a15e9d70cbae4e28af8dae65670306da7a0" exitCode=0
Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.606630 4830 generic.go:334] "Generic (PLEG): container finished" podID="f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f" containerID="fddbdac256b4a79af48834ea268b02e9852631ab71cc27740d8344fa2927b417" exitCode=0
Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.606638 4830 generic.go:334] "Generic (PLEG): container finished" podID="f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f" containerID="fe39e07eaf48b0f3b6310a52d48a7901fe69c67e61f2bc86fcae68e60845e160" exitCode=0
Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.606644 4830 generic.go:334] "Generic (PLEG): container finished" podID="f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f" containerID="abb82842a2a5f9faa42c2a6d73afbddfe73443d7841d35f06ec15c1730975fed" exitCode=0
Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.606650 4830 generic.go:334] "Generic (PLEG): container finished" podID="f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f" containerID="b54307be9a881794a66b55a9bca85b4703855db739e2c59f98b8842a64710ed1" exitCode=0
Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.606656 4830 generic.go:334] "Generic (PLEG): container finished" podID="f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f" containerID="09edcd425fc07104a2a290237930b325e8877e8ef116e51111ef81ba1b7710e2" exitCode=0
Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.606663 4830 generic.go:334] "Generic (PLEG): container finished" podID="f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f" containerID="a6f8e6e02ca541ffa4fab936a485162a21cf976d73c728274bb3fd83cc01abb4" exitCode=0
Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.606669 4830 generic.go:334] "Generic (PLEG): container finished" podID="f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f" containerID="d31525bce81210150593ba3db8f8611a5b2d43ff82b2e5c7435f34ad45248c17" exitCode=0
Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.606742 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f","Type":"ContainerDied","Data":"2ecea93ad489597ba408891f7afe44675c8c3d67fbcc4edfbe9a3debbac6c3a1"}
Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.606768 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f","Type":"ContainerDied","Data":"7cfd581745eb62c04447e2179fa4d6397a6ffb2801133df8571673fd2fc8908e"}
Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.606781 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f","Type":"ContainerDied","Data":"d7c3c63f60fa6c0faabdef005cd6435637f7aa45e44077b6d1579dbcfce2ffa5"}
Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.606791 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f","Type":"ContainerDied","Data":"2111c96223f006387077459f4429b67f715648783b2df873c937a40d47be2181"}
Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.606801 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f","Type":"ContainerDied","Data":"ee0b677352a33d7fbcb2e9fab57bf5d672b03867dad9240c6c1fbd8e2b1f0b37"}
Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.606811 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f","Type":"ContainerDied","Data":"2b750caa248530febbfbd4731fc41f64ef7a9129eab2a66780052a81ccfecb65"}
Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.606820 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f","Type":"ContainerDied","Data":"63b86b7398c02b758efbf23ee7393a15e9d70cbae4e28af8dae65670306da7a0"}
Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.606830 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f","Type":"ContainerDied","Data":"fddbdac256b4a79af48834ea268b02e9852631ab71cc27740d8344fa2927b417"}
Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.606843 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f","Type":"ContainerDied","Data":"fe39e07eaf48b0f3b6310a52d48a7901fe69c67e61f2bc86fcae68e60845e160"}
Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.606850 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f","Type":"ContainerDied","Data":"abb82842a2a5f9faa42c2a6d73afbddfe73443d7841d35f06ec15c1730975fed"}
Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.606859 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f","Type":"ContainerDied","Data":"b54307be9a881794a66b55a9bca85b4703855db739e2c59f98b8842a64710ed1"}
Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.606867 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f","Type":"ContainerDied","Data":"09edcd425fc07104a2a290237930b325e8877e8ef116e51111ef81ba1b7710e2"}
Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.606877 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f","Type":"ContainerDied","Data":"a6f8e6e02ca541ffa4fab936a485162a21cf976d73c728274bb3fd83cc01abb4"}
Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.606887 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f","Type":"ContainerDied","Data":"d31525bce81210150593ba3db8f8611a5b2d43ff82b2e5c7435f34ad45248c17"}
Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.609363 4830 generic.go:334] "Generic (PLEG): container finished" podID="38b57350-6ca0-4090-876b-7727c983cf52" containerID="7dad8ffa6283d569435591881ebf2eedf721235312643b6378985dffadc0a1cf" exitCode=0
Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.609390 4830 generic.go:334] "Generic (PLEG): container finished" podID="38b57350-6ca0-4090-876b-7727c983cf52" containerID="4379a4562487a2f829fd847e713d7b48e4f30ff72dfa48612a5cee4351449110" exitCode=0
Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.609436 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-c6f44c475-twbzz" event={"ID":"38b57350-6ca0-4090-876b-7727c983cf52","Type":"ContainerDied","Data":"7dad8ffa6283d569435591881ebf2eedf721235312643b6378985dffadc0a1cf"}
Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.609468 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-c6f44c475-twbzz" event={"ID":"38b57350-6ca0-4090-876b-7727c983cf52","Type":"ContainerDied","Data":"4379a4562487a2f829fd847e713d7b48e4f30ff72dfa48612a5cee4351449110"}
Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.610719 4830 generic.go:334] "Generic (PLEG): container finished" podID="23db3cbd-39ac-4137-8a7e-0533af96e5b1" containerID="5e4b95ff9e120a4e75ce39c775be2aee2b80b55e4a33fe61a9e413a3ae463cf6" exitCode=0
Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.610770 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-dmhcp" event={"ID":"23db3cbd-39ac-4137-8a7e-0533af96e5b1","Type":"ContainerDied","Data":"5e4b95ff9e120a4e75ce39c775be2aee2b80b55e4a33fe61a9e413a3ae463cf6"}
Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.610793 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-dmhcp" event={"ID":"23db3cbd-39ac-4137-8a7e-0533af96e5b1","Type":"ContainerDied","Data":"37060d261048bdd878ea526cb1f8c5e1bdf8de7dfa50b5e84e600756b107d840"}
Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.610803 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="37060d261048bdd878ea526cb1f8c5e1bdf8de7dfa50b5e84e600756b107d840"
Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.612348 4830 generic.go:334] "Generic (PLEG): container finished" podID="bc737ee4-d87c-4276-a6d1-6f3144879542" containerID="6dd6bce0125fbdcdd86f5eb9074b21ac3c2ba5ef9063bcabe41f111d338472f5" exitCode=0
Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.612452 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-qt6mr" event={"ID":"bc737ee4-d87c-4276-a6d1-6f3144879542","Type":"ContainerDied","Data":"6dd6bce0125fbdcdd86f5eb9074b21ac3c2ba5ef9063bcabe41f111d338472f5"}
Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.613806 4830 generic.go:334] "Generic (PLEG): container finished" podID="41fafe33-b43b-4dcb-9edd-b365d0749e10" containerID="40cab2835902cbbd7f2108f23209c5d896b2d0b912cf229a63563e0cdf02215b" exitCode=143
Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.613899 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"41fafe33-b43b-4dcb-9edd-b365d0749e10","Type":"ContainerDied","Data":"40cab2835902cbbd7f2108f23209c5d896b2d0b912cf229a63563e0cdf02215b"}
Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.620306 4830 generic.go:334] "Generic (PLEG): container finished" podID="bf57a5ff-eb3d-4f4b-8ac0-0aac8013fbbf" containerID="4ad340ff7e5d3dcbe59313ae7a759101ba1b8edf59a86c29f287b2cb3edf2de6" exitCode=143
Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.621050 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-58db7bd5dd-jr8zt" event={"ID":"bf57a5ff-eb3d-4f4b-8ac0-0aac8013fbbf","Type":"ContainerDied","Data":"4ad340ff7e5d3dcbe59313ae7a759101ba1b8edf59a86c29f287b2cb3edf2de6"}
Feb 27 16:33:04 crc kubenswrapper[4830]: E0227 16:33:04.622764 4830 kuberuntime_manager.go:1274] "Unhandled Error" err=<
Feb 27 16:33:04 crc kubenswrapper[4830]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[/bin/sh -c #!/bin/bash
Feb 27 16:33:04 crc kubenswrapper[4830]: 
Feb 27 16:33:04 crc kubenswrapper[4830]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh
Feb 27 16:33:04 crc kubenswrapper[4830]: 
Feb 27 16:33:04 crc kubenswrapper[4830]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."}
Feb 27 16:33:04 crc kubenswrapper[4830]: 
Feb 27 16:33:04 crc kubenswrapper[4830]: MYSQL_CMD="mysql -h -u root -P 3306"
Feb 27 16:33:04 crc kubenswrapper[4830]: 
Feb 27 16:33:04 crc kubenswrapper[4830]: if [ -n "placement" ]; then
Feb 27 16:33:04 crc kubenswrapper[4830]: GRANT_DATABASE="placement"
Feb 27 16:33:04 crc kubenswrapper[4830]: else
Feb 27 16:33:04 crc kubenswrapper[4830]: GRANT_DATABASE="*"
Feb 27 16:33:04 crc kubenswrapper[4830]: fi
Feb 27 16:33:04 crc kubenswrapper[4830]: 
Feb 27 16:33:04 crc kubenswrapper[4830]: # going for maximum compatibility here:
Feb 27 16:33:04 crc kubenswrapper[4830]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used
Feb 27 16:33:04 crc kubenswrapper[4830]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not
Feb 27 16:33:04 crc kubenswrapper[4830]: # 3.
create user with CREATE but then do all password and TLS with ALTER to Feb 27 16:33:04 crc kubenswrapper[4830]: # support updates Feb 27 16:33:04 crc kubenswrapper[4830]: Feb 27 16:33:04 crc kubenswrapper[4830]: $MYSQL_CMD < logger="UnhandledError" Feb 27 16:33:04 crc kubenswrapper[4830]: E0227 16:33:04.623923 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"placement-db-secret\\\" not found\"" pod="openstack/placement-776e-account-create-update-kg8tx" podUID="3bf3e284-86ae-43b5-9259-6e9e34164de2" Feb 27 16:33:04 crc kubenswrapper[4830]: E0227 16:33:04.640597 4830 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 16:33:04 crc kubenswrapper[4830]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[/bin/sh -c #!/bin/bash Feb 27 16:33:04 crc kubenswrapper[4830]: Feb 27 16:33:04 crc kubenswrapper[4830]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Feb 27 16:33:04 crc kubenswrapper[4830]: Feb 27 16:33:04 crc kubenswrapper[4830]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Feb 27 16:33:04 crc kubenswrapper[4830]: Feb 27 16:33:04 crc kubenswrapper[4830]: MYSQL_CMD="mysql -h -u root -P 3306" Feb 27 16:33:04 crc kubenswrapper[4830]: Feb 27 16:33:04 crc kubenswrapper[4830]: if [ -n "nova_api" ]; then Feb 27 16:33:04 crc kubenswrapper[4830]: GRANT_DATABASE="nova_api" Feb 27 16:33:04 crc kubenswrapper[4830]: else Feb 27 16:33:04 crc kubenswrapper[4830]: GRANT_DATABASE="*" Feb 27 16:33:04 crc kubenswrapper[4830]: fi Feb 27 16:33:04 crc kubenswrapper[4830]: Feb 27 16:33:04 crc kubenswrapper[4830]: # going for maximum compatibility here: Feb 27 16:33:04 crc kubenswrapper[4830]: # 1. 
MySQL 8 no longer allows implicit create user when GRANT is used Feb 27 16:33:04 crc kubenswrapper[4830]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Feb 27 16:33:04 crc kubenswrapper[4830]: # 3. create user with CREATE but then do all password and TLS with ALTER to Feb 27 16:33:04 crc kubenswrapper[4830]: # support updates Feb 27 16:33:04 crc kubenswrapper[4830]: Feb 27 16:33:04 crc kubenswrapper[4830]: $MYSQL_CMD < logger="UnhandledError" Feb 27 16:33:04 crc kubenswrapper[4830]: E0227 16:33:04.641767 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"nova-api-db-secret\\\" not found\"" pod="openstack/nova-api-c219-account-create-update-w82r8" podUID="26018553-1865-499d-9c9b-932807fce26c" Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.642271 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-mncqx" Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.652835 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-mtj7r_b64de41e-9e05-48b2-87e5-387aad57532a/openstack-network-exporter/0.log" Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.652893 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-mtj7r" Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.683713 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-cd5cbd7b9-dmhcp" Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.694778 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b64de41e-9e05-48b2-87e5-387aad57532a-config\") pod \"b64de41e-9e05-48b2-87e5-387aad57532a\" (UID: \"b64de41e-9e05-48b2-87e5-387aad57532a\") " Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.694866 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b64de41e-9e05-48b2-87e5-387aad57532a-metrics-certs-tls-certs\") pod \"b64de41e-9e05-48b2-87e5-387aad57532a\" (UID: \"b64de41e-9e05-48b2-87e5-387aad57532a\") " Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.694892 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/b64de41e-9e05-48b2-87e5-387aad57532a-ovn-rundir\") pod \"b64de41e-9e05-48b2-87e5-387aad57532a\" (UID: \"b64de41e-9e05-48b2-87e5-387aad57532a\") " Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.694916 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2eacdbe3-1c63-4811-a7ea-5dc6fd8dce60-scripts\") pod \"2eacdbe3-1c63-4811-a7ea-5dc6fd8dce60\" (UID: \"2eacdbe3-1c63-4811-a7ea-5dc6fd8dce60\") " Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.694956 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/b64de41e-9e05-48b2-87e5-387aad57532a-ovs-rundir\") pod \"b64de41e-9e05-48b2-87e5-387aad57532a\" (UID: \"b64de41e-9e05-48b2-87e5-387aad57532a\") " Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.694974 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/b64de41e-9e05-48b2-87e5-387aad57532a-combined-ca-bundle\") pod \"b64de41e-9e05-48b2-87e5-387aad57532a\" (UID: \"b64de41e-9e05-48b2-87e5-387aad57532a\") " Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.695011 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lffj5\" (UniqueName: \"kubernetes.io/projected/2eacdbe3-1c63-4811-a7ea-5dc6fd8dce60-kube-api-access-lffj5\") pod \"2eacdbe3-1c63-4811-a7ea-5dc6fd8dce60\" (UID: \"2eacdbe3-1c63-4811-a7ea-5dc6fd8dce60\") " Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.695039 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sqgvk\" (UniqueName: \"kubernetes.io/projected/b64de41e-9e05-48b2-87e5-387aad57532a-kube-api-access-sqgvk\") pod \"b64de41e-9e05-48b2-87e5-387aad57532a\" (UID: \"b64de41e-9e05-48b2-87e5-387aad57532a\") " Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.695058 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2eacdbe3-1c63-4811-a7ea-5dc6fd8dce60-var-log-ovn\") pod \"2eacdbe3-1c63-4811-a7ea-5dc6fd8dce60\" (UID: \"2eacdbe3-1c63-4811-a7ea-5dc6fd8dce60\") " Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.695099 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2eacdbe3-1c63-4811-a7ea-5dc6fd8dce60-var-run-ovn\") pod \"2eacdbe3-1c63-4811-a7ea-5dc6fd8dce60\" (UID: \"2eacdbe3-1c63-4811-a7ea-5dc6fd8dce60\") " Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.695159 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2eacdbe3-1c63-4811-a7ea-5dc6fd8dce60-combined-ca-bundle\") pod \"2eacdbe3-1c63-4811-a7ea-5dc6fd8dce60\" (UID: \"2eacdbe3-1c63-4811-a7ea-5dc6fd8dce60\") " Feb 27 16:33:04 crc 
kubenswrapper[4830]: I0227 16:33:04.695181 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/2eacdbe3-1c63-4811-a7ea-5dc6fd8dce60-ovn-controller-tls-certs\") pod \"2eacdbe3-1c63-4811-a7ea-5dc6fd8dce60\" (UID: \"2eacdbe3-1c63-4811-a7ea-5dc6fd8dce60\") " Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.695213 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2eacdbe3-1c63-4811-a7ea-5dc6fd8dce60-var-run\") pod \"2eacdbe3-1c63-4811-a7ea-5dc6fd8dce60\" (UID: \"2eacdbe3-1c63-4811-a7ea-5dc6fd8dce60\") " Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.697002 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2eacdbe3-1c63-4811-a7ea-5dc6fd8dce60-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "2eacdbe3-1c63-4811-a7ea-5dc6fd8dce60" (UID: "2eacdbe3-1c63-4811-a7ea-5dc6fd8dce60"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.697052 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2eacdbe3-1c63-4811-a7ea-5dc6fd8dce60-var-run" (OuterVolumeSpecName: "var-run") pod "2eacdbe3-1c63-4811-a7ea-5dc6fd8dce60" (UID: "2eacdbe3-1c63-4811-a7ea-5dc6fd8dce60"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.697228 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b64de41e-9e05-48b2-87e5-387aad57532a-ovn-rundir" (OuterVolumeSpecName: "ovn-rundir") pod "b64de41e-9e05-48b2-87e5-387aad57532a" (UID: "b64de41e-9e05-48b2-87e5-387aad57532a"). InnerVolumeSpecName "ovn-rundir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.697542 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_7285a360-7ff1-4e35-b91a-d472a0ee591b/ovsdbserver-sb/0.log" Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.697597 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.697935 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b64de41e-9e05-48b2-87e5-387aad57532a-config" (OuterVolumeSpecName: "config") pod "b64de41e-9e05-48b2-87e5-387aad57532a" (UID: "b64de41e-9e05-48b2-87e5-387aad57532a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.698031 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2eacdbe3-1c63-4811-a7ea-5dc6fd8dce60-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "2eacdbe3-1c63-4811-a7ea-5dc6fd8dce60" (UID: "2eacdbe3-1c63-4811-a7ea-5dc6fd8dce60"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.698055 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b64de41e-9e05-48b2-87e5-387aad57532a-ovs-rundir" (OuterVolumeSpecName: "ovs-rundir") pod "b64de41e-9e05-48b2-87e5-387aad57532a" (UID: "b64de41e-9e05-48b2-87e5-387aad57532a"). InnerVolumeSpecName "ovs-rundir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.698911 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2eacdbe3-1c63-4811-a7ea-5dc6fd8dce60-scripts" (OuterVolumeSpecName: "scripts") pod "2eacdbe3-1c63-4811-a7ea-5dc6fd8dce60" (UID: "2eacdbe3-1c63-4811-a7ea-5dc6fd8dce60"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.718248 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b64de41e-9e05-48b2-87e5-387aad57532a-kube-api-access-sqgvk" (OuterVolumeSpecName: "kube-api-access-sqgvk") pod "b64de41e-9e05-48b2-87e5-387aad57532a" (UID: "b64de41e-9e05-48b2-87e5-387aad57532a"). InnerVolumeSpecName "kube-api-access-sqgvk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.723895 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2eacdbe3-1c63-4811-a7ea-5dc6fd8dce60-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2eacdbe3-1c63-4811-a7ea-5dc6fd8dce60" (UID: "2eacdbe3-1c63-4811-a7ea-5dc6fd8dce60"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.719380 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.733910 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_9f17706c-2060-4191-b63a-df7dea2c4c95/ovsdbserver-nb/0.log" Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.733995 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.737301 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2eacdbe3-1c63-4811-a7ea-5dc6fd8dce60-kube-api-access-lffj5" (OuterVolumeSpecName: "kube-api-access-lffj5") pod "2eacdbe3-1c63-4811-a7ea-5dc6fd8dce60" (UID: "2eacdbe3-1c63-4811-a7ea-5dc6fd8dce60"). InnerVolumeSpecName "kube-api-access-lffj5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.761359 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b64de41e-9e05-48b2-87e5-387aad57532a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b64de41e-9e05-48b2-87e5-387aad57532a" (UID: "b64de41e-9e05-48b2-87e5-387aad57532a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.798366 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7285a360-7ff1-4e35-b91a-d472a0ee591b-config\") pod \"7285a360-7ff1-4e35-b91a-d472a0ee591b\" (UID: \"7285a360-7ff1-4e35-b91a-d472a0ee591b\") " Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.798401 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/23db3cbd-39ac-4137-8a7e-0533af96e5b1-ovsdbserver-nb\") pod \"23db3cbd-39ac-4137-8a7e-0533af96e5b1\" (UID: \"23db3cbd-39ac-4137-8a7e-0533af96e5b1\") " Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.798440 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23db3cbd-39ac-4137-8a7e-0533af96e5b1-config\") pod \"23db3cbd-39ac-4137-8a7e-0533af96e5b1\" (UID: \"23db3cbd-39ac-4137-8a7e-0533af96e5b1\") " 
Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.798469 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jfhbc\" (UniqueName: \"kubernetes.io/projected/3482e9fb-53ae-4908-87fc-4096c5b26b76-kube-api-access-jfhbc\") pod \"3482e9fb-53ae-4908-87fc-4096c5b26b76\" (UID: \"3482e9fb-53ae-4908-87fc-4096c5b26b76\") " Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.798515 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7285a360-7ff1-4e35-b91a-d472a0ee591b-ovsdbserver-sb-tls-certs\") pod \"7285a360-7ff1-4e35-b91a-d472a0ee591b\" (UID: \"7285a360-7ff1-4e35-b91a-d472a0ee591b\") " Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.798548 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/23db3cbd-39ac-4137-8a7e-0533af96e5b1-dns-svc\") pod \"23db3cbd-39ac-4137-8a7e-0533af96e5b1\" (UID: \"23db3cbd-39ac-4137-8a7e-0533af96e5b1\") " Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.798605 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/23db3cbd-39ac-4137-8a7e-0533af96e5b1-dns-swift-storage-0\") pod \"23db3cbd-39ac-4137-8a7e-0533af96e5b1\" (UID: \"23db3cbd-39ac-4137-8a7e-0533af96e5b1\") " Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.798680 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3482e9fb-53ae-4908-87fc-4096c5b26b76-combined-ca-bundle\") pod \"3482e9fb-53ae-4908-87fc-4096c5b26b76\" (UID: \"3482e9fb-53ae-4908-87fc-4096c5b26b76\") " Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.798707 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/9f17706c-2060-4191-b63a-df7dea2c4c95-scripts\") pod \"9f17706c-2060-4191-b63a-df7dea2c4c95\" (UID: \"9f17706c-2060-4191-b63a-df7dea2c4c95\") " Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.798791 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f17706c-2060-4191-b63a-df7dea2c4c95-ovsdbserver-nb-tls-certs\") pod \"9f17706c-2060-4191-b63a-df7dea2c4c95\" (UID: \"9f17706c-2060-4191-b63a-df7dea2c4c95\") " Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.798846 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7285a360-7ff1-4e35-b91a-d472a0ee591b-combined-ca-bundle\") pod \"7285a360-7ff1-4e35-b91a-d472a0ee591b\" (UID: \"7285a360-7ff1-4e35-b91a-d472a0ee591b\") " Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.798917 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wn479\" (UniqueName: \"kubernetes.io/projected/7285a360-7ff1-4e35-b91a-d472a0ee591b-kube-api-access-wn479\") pod \"7285a360-7ff1-4e35-b91a-d472a0ee591b\" (UID: \"7285a360-7ff1-4e35-b91a-d472a0ee591b\") " Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.798975 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/3482e9fb-53ae-4908-87fc-4096c5b26b76-openstack-config-secret\") pod \"3482e9fb-53ae-4908-87fc-4096c5b26b76\" (UID: \"3482e9fb-53ae-4908-87fc-4096c5b26b76\") " Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.799002 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7285a360-7ff1-4e35-b91a-d472a0ee591b-scripts\") pod \"7285a360-7ff1-4e35-b91a-d472a0ee591b\" (UID: \"7285a360-7ff1-4e35-b91a-d472a0ee591b\") " Feb 27 16:33:04 crc 
kubenswrapper[4830]: I0227 16:33:04.799016 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6v9sf\" (UniqueName: \"kubernetes.io/projected/23db3cbd-39ac-4137-8a7e-0533af96e5b1-kube-api-access-6v9sf\") pod \"23db3cbd-39ac-4137-8a7e-0533af96e5b1\" (UID: \"23db3cbd-39ac-4137-8a7e-0533af96e5b1\") " Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.799066 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f17706c-2060-4191-b63a-df7dea2c4c95-metrics-certs-tls-certs\") pod \"9f17706c-2060-4191-b63a-df7dea2c4c95\" (UID: \"9f17706c-2060-4191-b63a-df7dea2c4c95\") " Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.799087 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/9f17706c-2060-4191-b63a-df7dea2c4c95-ovsdb-rundir\") pod \"9f17706c-2060-4191-b63a-df7dea2c4c95\" (UID: \"9f17706c-2060-4191-b63a-df7dea2c4c95\") " Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.799104 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndbcluster-nb-etc-ovn\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"9f17706c-2060-4191-b63a-df7dea2c4c95\" (UID: \"9f17706c-2060-4191-b63a-df7dea2c4c95\") " Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.799142 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f17706c-2060-4191-b63a-df7dea2c4c95-config\") pod \"9f17706c-2060-4191-b63a-df7dea2c4c95\" (UID: \"9f17706c-2060-4191-b63a-df7dea2c4c95\") " Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.799171 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/23db3cbd-39ac-4137-8a7e-0533af96e5b1-ovsdbserver-sb\") 
pod \"23db3cbd-39ac-4137-8a7e-0533af96e5b1\" (UID: \"23db3cbd-39ac-4137-8a7e-0533af96e5b1\") " Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.799233 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/7285a360-7ff1-4e35-b91a-d472a0ee591b-ovsdb-rundir\") pod \"7285a360-7ff1-4e35-b91a-d472a0ee591b\" (UID: \"7285a360-7ff1-4e35-b91a-d472a0ee591b\") " Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.799288 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f17706c-2060-4191-b63a-df7dea2c4c95-combined-ca-bundle\") pod \"9f17706c-2060-4191-b63a-df7dea2c4c95\" (UID: \"9f17706c-2060-4191-b63a-df7dea2c4c95\") " Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.799304 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7285a360-7ff1-4e35-b91a-d472a0ee591b-metrics-certs-tls-certs\") pod \"7285a360-7ff1-4e35-b91a-d472a0ee591b\" (UID: \"7285a360-7ff1-4e35-b91a-d472a0ee591b\") " Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.799322 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/3482e9fb-53ae-4908-87fc-4096c5b26b76-openstack-config\") pod \"3482e9fb-53ae-4908-87fc-4096c5b26b76\" (UID: \"3482e9fb-53ae-4908-87fc-4096c5b26b76\") " Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.799377 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndbcluster-sb-etc-ovn\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"7285a360-7ff1-4e35-b91a-d472a0ee591b\" (UID: \"7285a360-7ff1-4e35-b91a-d472a0ee591b\") " Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.799412 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"kube-api-access-clnqr\" (UniqueName: \"kubernetes.io/projected/9f17706c-2060-4191-b63a-df7dea2c4c95-kube-api-access-clnqr\") pod \"9f17706c-2060-4191-b63a-df7dea2c4c95\" (UID: \"9f17706c-2060-4191-b63a-df7dea2c4c95\") " Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.800781 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f17706c-2060-4191-b63a-df7dea2c4c95-config" (OuterVolumeSpecName: "config") pod "9f17706c-2060-4191-b63a-df7dea2c4c95" (UID: "9f17706c-2060-4191-b63a-df7dea2c4c95"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.801296 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="034a69b5-6540-4b46-b0d5-55098d2f6467" path="/var/lib/kubelet/pods/034a69b5-6540-4b46-b0d5-55098d2f6467/volumes" Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.802009 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f17706c-2060-4191-b63a-df7dea2c4c95-scripts" (OuterVolumeSpecName: "scripts") pod "9f17706c-2060-4191-b63a-df7dea2c4c95" (UID: "9f17706c-2060-4191-b63a-df7dea2c4c95"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.802091 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f17706c-2060-4191-b63a-df7dea2c4c95-ovsdb-rundir" (OuterVolumeSpecName: "ovsdb-rundir") pod "9f17706c-2060-4191-b63a-df7dea2c4c95" (UID: "9f17706c-2060-4191-b63a-df7dea2c4c95"). InnerVolumeSpecName "ovsdb-rundir". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.802264 4830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2eacdbe3-1c63-4811-a7ea-5dc6fd8dce60-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.802962 4830 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2eacdbe3-1c63-4811-a7ea-5dc6fd8dce60-var-run\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.802993 4830 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b64de41e-9e05-48b2-87e5-387aad57532a-config\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.803004 4830 reconciler_common.go:293] "Volume detached for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/b64de41e-9e05-48b2-87e5-387aad57532a-ovn-rundir\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.803017 4830 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2eacdbe3-1c63-4811-a7ea-5dc6fd8dce60-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.803026 4830 reconciler_common.go:293] "Volume detached for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/b64de41e-9e05-48b2-87e5-387aad57532a-ovs-rundir\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.803035 4830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b64de41e-9e05-48b2-87e5-387aad57532a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.803050 4830 reconciler_common.go:293] "Volume detached for 
volume \"kube-api-access-lffj5\" (UniqueName: \"kubernetes.io/projected/2eacdbe3-1c63-4811-a7ea-5dc6fd8dce60-kube-api-access-lffj5\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.803061 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sqgvk\" (UniqueName: \"kubernetes.io/projected/b64de41e-9e05-48b2-87e5-387aad57532a-kube-api-access-sqgvk\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.803069 4830 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2eacdbe3-1c63-4811-a7ea-5dc6fd8dce60-var-log-ovn\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.803079 4830 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2eacdbe3-1c63-4811-a7ea-5dc6fd8dce60-var-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.806702 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="120619c7-5358-455a-bf71-e3d60389fb05" path="/var/lib/kubelet/pods/120619c7-5358-455a-bf71-e3d60389fb05/volumes" Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.809554 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2687dd0d-1fea-48d6-a53a-b10ccfa7d223" path="/var/lib/kubelet/pods/2687dd0d-1fea-48d6-a53a-b10ccfa7d223/volumes" Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.810612 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26e38f3b-1dee-4203-bd8c-5c4ce7d29b0d" path="/var/lib/kubelet/pods/26e38f3b-1dee-4203-bd8c-5c4ce7d29b0d/volumes" Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.811376 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43877352-b9c6-4179-82a0-3b194a870e8a" path="/var/lib/kubelet/pods/43877352-b9c6-4179-82a0-3b194a870e8a/volumes" Feb 27 16:33:04 crc 
kubenswrapper[4830]: I0227 16:33:04.812662 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="459173e8-7571-47b7-9af8-3bd2d24d4e21" path="/var/lib/kubelet/pods/459173e8-7571-47b7-9af8-3bd2d24d4e21/volumes" Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.814635 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4902066e-ebd0-4ea5-8620-939e120b7862" path="/var/lib/kubelet/pods/4902066e-ebd0-4ea5-8620-939e120b7862/volumes" Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.815593 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4989a1bf-9609-47ae-99c3-561023cff325" path="/var/lib/kubelet/pods/4989a1bf-9609-47ae-99c3-561023cff325/volumes" Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.816131 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5cf85768-fd08-43b7-a8bf-a2738e493b22" path="/var/lib/kubelet/pods/5cf85768-fd08-43b7-a8bf-a2738e493b22/volumes" Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.816675 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6af8e619-e07a-4702-ac64-7fcf5077aef8" path="/var/lib/kubelet/pods/6af8e619-e07a-4702-ac64-7fcf5077aef8/volumes" Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.817836 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="77b4533c-3623-4d0c-834c-dc2329c0ffc8" path="/var/lib/kubelet/pods/77b4533c-3623-4d0c-834c-dc2329c0ffc8/volumes" Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.818343 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d45526f-ecc3-4132-bdd0-159572980ba7" path="/var/lib/kubelet/pods/7d45526f-ecc3-4132-bdd0-159572980ba7/volumes" Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.818858 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7285a360-7ff1-4e35-b91a-d472a0ee591b-scripts" (OuterVolumeSpecName: "scripts") pod 
"7285a360-7ff1-4e35-b91a-d472a0ee591b" (UID: "7285a360-7ff1-4e35-b91a-d472a0ee591b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.823811 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7285a360-7ff1-4e35-b91a-d472a0ee591b-ovsdb-rundir" (OuterVolumeSpecName: "ovsdb-rundir") pod "7285a360-7ff1-4e35-b91a-d472a0ee591b" (UID: "7285a360-7ff1-4e35-b91a-d472a0ee591b"). InnerVolumeSpecName "ovsdb-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.824599 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7285a360-7ff1-4e35-b91a-d472a0ee591b-config" (OuterVolumeSpecName: "config") pod "7285a360-7ff1-4e35-b91a-d472a0ee591b" (UID: "7285a360-7ff1-4e35-b91a-d472a0ee591b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.825258 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce3b7271-1b27-437c-a5a3-7a2f2511d3de" path="/var/lib/kubelet/pods/ce3b7271-1b27-437c-a5a3-7a2f2511d3de/volumes" Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.829424 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d0d104e4-315f-406d-ac89-21878f96a166" path="/var/lib/kubelet/pods/d0d104e4-315f-406d-ac89-21878f96a166/volumes" Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.834270 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5b1adc8-187b-4662-b11e-c6ad31564ebf" path="/var/lib/kubelet/pods/d5b1adc8-187b-4662-b11e-c6ad31564ebf/volumes" Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.835206 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df197887-2b7c-4c2c-b482-d411aad7f89d" 
path="/var/lib/kubelet/pods/df197887-2b7c-4c2c-b482-d411aad7f89d/volumes" Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.836497 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e94cb22b-b51c-4f6d-8cdd-45d6180f8462" path="/var/lib/kubelet/pods/e94cb22b-b51c-4f6d-8cdd-45d6180f8462/volumes" Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.838188 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f35673a0-3e6b-4cd6-b378-5baf313756c7" path="/var/lib/kubelet/pods/f35673a0-3e6b-4cd6-b378-5baf313756c7/volumes" Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.838892 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6d3ef08-a386-4c3a-aea1-7870a4192822" path="/var/lib/kubelet/pods/f6d3ef08-a386-4c3a-aea1-7870a4192822/volumes" Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.839550 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc8f3bdd-7355-46ce-8ac6-75cb6a21f325" path="/var/lib/kubelet/pods/fc8f3bdd-7355-46ce-8ac6-75cb6a21f325/volumes" Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.851160 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3482e9fb-53ae-4908-87fc-4096c5b26b76-kube-api-access-jfhbc" (OuterVolumeSpecName: "kube-api-access-jfhbc") pod "3482e9fb-53ae-4908-87fc-4096c5b26b76" (UID: "3482e9fb-53ae-4908-87fc-4096c5b26b76"). InnerVolumeSpecName "kube-api-access-jfhbc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.860382 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23db3cbd-39ac-4137-8a7e-0533af96e5b1-kube-api-access-6v9sf" (OuterVolumeSpecName: "kube-api-access-6v9sf") pod "23db3cbd-39ac-4137-8a7e-0533af96e5b1" (UID: "23db3cbd-39ac-4137-8a7e-0533af96e5b1"). InnerVolumeSpecName "kube-api-access-6v9sf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.860612 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7285a360-7ff1-4e35-b91a-d472a0ee591b-kube-api-access-wn479" (OuterVolumeSpecName: "kube-api-access-wn479") pod "7285a360-7ff1-4e35-b91a-d472a0ee591b" (UID: "7285a360-7ff1-4e35-b91a-d472a0ee591b"). InnerVolumeSpecName "kube-api-access-wn479". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.860662 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "ovndbcluster-sb-etc-ovn") pod "7285a360-7ff1-4e35-b91a-d472a0ee591b" (UID: "7285a360-7ff1-4e35-b91a-d472a0ee591b"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.860918 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "ovndbcluster-nb-etc-ovn") pod "9f17706c-2060-4191-b63a-df7dea2c4c95" (UID: "9f17706c-2060-4191-b63a-df7dea2c4c95"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.883646 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f17706c-2060-4191-b63a-df7dea2c4c95-kube-api-access-clnqr" (OuterVolumeSpecName: "kube-api-access-clnqr") pod "9f17706c-2060-4191-b63a-df7dea2c4c95" (UID: "9f17706c-2060-4191-b63a-df7dea2c4c95"). InnerVolumeSpecName "kube-api-access-clnqr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.910344 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wn479\" (UniqueName: \"kubernetes.io/projected/7285a360-7ff1-4e35-b91a-d472a0ee591b-kube-api-access-wn479\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.910363 4830 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7285a360-7ff1-4e35-b91a-d472a0ee591b-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.910372 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6v9sf\" (UniqueName: \"kubernetes.io/projected/23db3cbd-39ac-4137-8a7e-0533af96e5b1-kube-api-access-6v9sf\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.910381 4830 reconciler_common.go:293] "Volume detached for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/9f17706c-2060-4191-b63a-df7dea2c4c95-ovsdb-rundir\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.910399 4830 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.910408 4830 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f17706c-2060-4191-b63a-df7dea2c4c95-config\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.910417 4830 reconciler_common.go:293] "Volume detached for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/7285a360-7ff1-4e35-b91a-d472a0ee591b-ovsdb-rundir\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.910429 4830 
reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.910438 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-clnqr\" (UniqueName: \"kubernetes.io/projected/9f17706c-2060-4191-b63a-df7dea2c4c95-kube-api-access-clnqr\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.910447 4830 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7285a360-7ff1-4e35-b91a-d472a0ee591b-config\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.910456 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jfhbc\" (UniqueName: \"kubernetes.io/projected/3482e9fb-53ae-4908-87fc-4096c5b26b76-kube-api-access-jfhbc\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:04 crc kubenswrapper[4830]: I0227 16:33:04.910464 4830 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9f17706c-2060-4191-b63a-df7dea2c4c95-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:05 crc kubenswrapper[4830]: I0227 16:33:05.005691 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3482e9fb-53ae-4908-87fc-4096c5b26b76-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3482e9fb-53ae-4908-87fc-4096c5b26b76" (UID: "3482e9fb-53ae-4908-87fc-4096c5b26b76"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:33:05 crc kubenswrapper[4830]: I0227 16:33:05.014183 4830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3482e9fb-53ae-4908-87fc-4096c5b26b76-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:05 crc kubenswrapper[4830]: I0227 16:33:05.038268 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-5e39-account-create-update-r88l6"] Feb 27 16:33:05 crc kubenswrapper[4830]: I0227 16:33:05.049859 4830 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Feb 27 16:33:05 crc kubenswrapper[4830]: I0227 16:33:05.079160 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-29fd-account-create-update-st6rb"] Feb 27 16:33:05 crc kubenswrapper[4830]: I0227 16:33:05.152817 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7285a360-7ff1-4e35-b91a-d472a0ee591b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7285a360-7ff1-4e35-b91a-d472a0ee591b" (UID: "7285a360-7ff1-4e35-b91a-d472a0ee591b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:33:05 crc kubenswrapper[4830]: I0227 16:33:05.156855 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7285a360-7ff1-4e35-b91a-d472a0ee591b-combined-ca-bundle\") pod \"7285a360-7ff1-4e35-b91a-d472a0ee591b\" (UID: \"7285a360-7ff1-4e35-b91a-d472a0ee591b\") " Feb 27 16:33:05 crc kubenswrapper[4830]: I0227 16:33:05.157904 4830 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:05 crc kubenswrapper[4830]: W0227 16:33:05.158048 4830 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/7285a360-7ff1-4e35-b91a-d472a0ee591b/volumes/kubernetes.io~secret/combined-ca-bundle Feb 27 16:33:05 crc kubenswrapper[4830]: I0227 16:33:05.158067 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7285a360-7ff1-4e35-b91a-d472a0ee591b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7285a360-7ff1-4e35-b91a-d472a0ee591b" (UID: "7285a360-7ff1-4e35-b91a-d472a0ee591b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:33:05 crc kubenswrapper[4830]: E0227 16:33:05.187240 4830 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 16:33:05 crc kubenswrapper[4830]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[/bin/sh -c #!/bin/bash Feb 27 16:33:05 crc kubenswrapper[4830]: Feb 27 16:33:05 crc kubenswrapper[4830]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Feb 27 16:33:05 crc kubenswrapper[4830]: Feb 27 16:33:05 crc kubenswrapper[4830]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Feb 27 16:33:05 crc kubenswrapper[4830]: Feb 27 16:33:05 crc kubenswrapper[4830]: MYSQL_CMD="mysql -h -u root -P 3306" Feb 27 16:33:05 crc kubenswrapper[4830]: Feb 27 16:33:05 crc kubenswrapper[4830]: if [ -n "nova_cell1" ]; then Feb 27 16:33:05 crc kubenswrapper[4830]: GRANT_DATABASE="nova_cell1" Feb 27 16:33:05 crc kubenswrapper[4830]: else Feb 27 16:33:05 crc kubenswrapper[4830]: GRANT_DATABASE="*" Feb 27 16:33:05 crc kubenswrapper[4830]: fi Feb 27 16:33:05 crc kubenswrapper[4830]: Feb 27 16:33:05 crc kubenswrapper[4830]: # going for maximum compatibility here: Feb 27 16:33:05 crc kubenswrapper[4830]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Feb 27 16:33:05 crc kubenswrapper[4830]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Feb 27 16:33:05 crc kubenswrapper[4830]: # 3. 
create user with CREATE but then do all password and TLS with ALTER to Feb 27 16:33:05 crc kubenswrapper[4830]: # support updates Feb 27 16:33:05 crc kubenswrapper[4830]: Feb 27 16:33:05 crc kubenswrapper[4830]: $MYSQL_CMD < logger="UnhandledError" Feb 27 16:33:05 crc kubenswrapper[4830]: E0227 16:33:05.188116 4830 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 16:33:05 crc kubenswrapper[4830]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[/bin/sh -c #!/bin/bash Feb 27 16:33:05 crc kubenswrapper[4830]: Feb 27 16:33:05 crc kubenswrapper[4830]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Feb 27 16:33:05 crc kubenswrapper[4830]: Feb 27 16:33:05 crc kubenswrapper[4830]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Feb 27 16:33:05 crc kubenswrapper[4830]: Feb 27 16:33:05 crc kubenswrapper[4830]: MYSQL_CMD="mysql -h -u root -P 3306" Feb 27 16:33:05 crc kubenswrapper[4830]: Feb 27 16:33:05 crc kubenswrapper[4830]: if [ -n "nova_cell0" ]; then Feb 27 16:33:05 crc kubenswrapper[4830]: GRANT_DATABASE="nova_cell0" Feb 27 16:33:05 crc kubenswrapper[4830]: else Feb 27 16:33:05 crc kubenswrapper[4830]: GRANT_DATABASE="*" Feb 27 16:33:05 crc kubenswrapper[4830]: fi Feb 27 16:33:05 crc kubenswrapper[4830]: Feb 27 16:33:05 crc kubenswrapper[4830]: # going for maximum compatibility here: Feb 27 16:33:05 crc kubenswrapper[4830]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Feb 27 16:33:05 crc kubenswrapper[4830]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Feb 27 16:33:05 crc kubenswrapper[4830]: # 3. 
create user with CREATE but then do all password and TLS with ALTER to Feb 27 16:33:05 crc kubenswrapper[4830]: # support updates Feb 27 16:33:05 crc kubenswrapper[4830]: Feb 27 16:33:05 crc kubenswrapper[4830]: $MYSQL_CMD < logger="UnhandledError" Feb 27 16:33:05 crc kubenswrapper[4830]: E0227 16:33:05.189998 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"nova-cell0-db-secret\\\" not found\"" pod="openstack/nova-cell0-29fd-account-create-update-st6rb" podUID="02d5a77c-198f-43aa-96ab-2ac2d76c7743" Feb 27 16:33:05 crc kubenswrapper[4830]: E0227 16:33:05.188366 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"nova-cell1-db-secret\\\" not found\"" pod="openstack/nova-cell1-5e39-account-create-update-r88l6" podUID="0ea4ce89-3e8b-4521-9398-3406c6bf0166" Feb 27 16:33:05 crc kubenswrapper[4830]: I0227 16:33:05.197270 4830 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Feb 27 16:33:05 crc kubenswrapper[4830]: I0227 16:33:05.205812 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3482e9fb-53ae-4908-87fc-4096c5b26b76-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "3482e9fb-53ae-4908-87fc-4096c5b26b76" (UID: "3482e9fb-53ae-4908-87fc-4096c5b26b76"). InnerVolumeSpecName "openstack-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:33:05 crc kubenswrapper[4830]: I0227 16:33:05.250157 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f17706c-2060-4191-b63a-df7dea2c4c95-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9f17706c-2060-4191-b63a-df7dea2c4c95" (UID: "9f17706c-2060-4191-b63a-df7dea2c4c95"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:33:05 crc kubenswrapper[4830]: I0227 16:33:05.253027 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b64de41e-9e05-48b2-87e5-387aad57532a-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "b64de41e-9e05-48b2-87e5-387aad57532a" (UID: "b64de41e-9e05-48b2-87e5-387aad57532a"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:33:05 crc kubenswrapper[4830]: I0227 16:33:05.261146 4830 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b64de41e-9e05-48b2-87e5-387aad57532a-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:05 crc kubenswrapper[4830]: I0227 16:33:05.261177 4830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7285a360-7ff1-4e35-b91a-d472a0ee591b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:05 crc kubenswrapper[4830]: I0227 16:33:05.261186 4830 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:05 crc kubenswrapper[4830]: I0227 16:33:05.261196 4830 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: 
\"kubernetes.io/configmap/3482e9fb-53ae-4908-87fc-4096c5b26b76-openstack-config\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:05 crc kubenswrapper[4830]: I0227 16:33:05.261204 4830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f17706c-2060-4191-b63a-df7dea2c4c95-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:05 crc kubenswrapper[4830]: I0227 16:33:05.331208 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2eacdbe3-1c63-4811-a7ea-5dc6fd8dce60-ovn-controller-tls-certs" (OuterVolumeSpecName: "ovn-controller-tls-certs") pod "2eacdbe3-1c63-4811-a7ea-5dc6fd8dce60" (UID: "2eacdbe3-1c63-4811-a7ea-5dc6fd8dce60"). InnerVolumeSpecName "ovn-controller-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:33:05 crc kubenswrapper[4830]: I0227 16:33:05.345788 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23db3cbd-39ac-4137-8a7e-0533af96e5b1-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "23db3cbd-39ac-4137-8a7e-0533af96e5b1" (UID: "23db3cbd-39ac-4137-8a7e-0533af96e5b1"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:33:05 crc kubenswrapper[4830]: I0227 16:33:05.351063 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23db3cbd-39ac-4137-8a7e-0533af96e5b1-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "23db3cbd-39ac-4137-8a7e-0533af96e5b1" (UID: "23db3cbd-39ac-4137-8a7e-0533af96e5b1"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:33:05 crc kubenswrapper[4830]: I0227 16:33:05.363237 4830 reconciler_common.go:293] "Volume detached for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/2eacdbe3-1c63-4811-a7ea-5dc6fd8dce60-ovn-controller-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:05 crc kubenswrapper[4830]: I0227 16:33:05.363268 4830 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/23db3cbd-39ac-4137-8a7e-0533af96e5b1-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:05 crc kubenswrapper[4830]: I0227 16:33:05.363278 4830 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/23db3cbd-39ac-4137-8a7e-0533af96e5b1-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:05 crc kubenswrapper[4830]: I0227 16:33:05.372978 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f17706c-2060-4191-b63a-df7dea2c4c95-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "9f17706c-2060-4191-b63a-df7dea2c4c95" (UID: "9f17706c-2060-4191-b63a-df7dea2c4c95"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:33:05 crc kubenswrapper[4830]: I0227 16:33:05.375115 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23db3cbd-39ac-4137-8a7e-0533af96e5b1-config" (OuterVolumeSpecName: "config") pod "23db3cbd-39ac-4137-8a7e-0533af96e5b1" (UID: "23db3cbd-39ac-4137-8a7e-0533af96e5b1"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:33:05 crc kubenswrapper[4830]: I0227 16:33:05.386202 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23db3cbd-39ac-4137-8a7e-0533af96e5b1-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "23db3cbd-39ac-4137-8a7e-0533af96e5b1" (UID: "23db3cbd-39ac-4137-8a7e-0533af96e5b1"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:33:05 crc kubenswrapper[4830]: I0227 16:33:05.388555 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23db3cbd-39ac-4137-8a7e-0533af96e5b1-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "23db3cbd-39ac-4137-8a7e-0533af96e5b1" (UID: "23db3cbd-39ac-4137-8a7e-0533af96e5b1"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:33:05 crc kubenswrapper[4830]: I0227 16:33:05.395099 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3482e9fb-53ae-4908-87fc-4096c5b26b76-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "3482e9fb-53ae-4908-87fc-4096c5b26b76" (UID: "3482e9fb-53ae-4908-87fc-4096c5b26b76"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:33:05 crc kubenswrapper[4830]: I0227 16:33:05.418678 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7285a360-7ff1-4e35-b91a-d472a0ee591b-ovsdbserver-sb-tls-certs" (OuterVolumeSpecName: "ovsdbserver-sb-tls-certs") pod "7285a360-7ff1-4e35-b91a-d472a0ee591b" (UID: "7285a360-7ff1-4e35-b91a-d472a0ee591b"). InnerVolumeSpecName "ovsdbserver-sb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:33:05 crc kubenswrapper[4830]: I0227 16:33:05.425397 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7285a360-7ff1-4e35-b91a-d472a0ee591b-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "7285a360-7ff1-4e35-b91a-d472a0ee591b" (UID: "7285a360-7ff1-4e35-b91a-d472a0ee591b"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:33:05 crc kubenswrapper[4830]: I0227 16:33:05.451681 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f17706c-2060-4191-b63a-df7dea2c4c95-ovsdbserver-nb-tls-certs" (OuterVolumeSpecName: "ovsdbserver-nb-tls-certs") pod "9f17706c-2060-4191-b63a-df7dea2c4c95" (UID: "9f17706c-2060-4191-b63a-df7dea2c4c95"). InnerVolumeSpecName "ovsdbserver-nb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:33:05 crc kubenswrapper[4830]: I0227 16:33:05.468249 4830 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23db3cbd-39ac-4137-8a7e-0533af96e5b1-config\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:05 crc kubenswrapper[4830]: I0227 16:33:05.468279 4830 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7285a360-7ff1-4e35-b91a-d472a0ee591b-ovsdbserver-sb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:05 crc kubenswrapper[4830]: I0227 16:33:05.468290 4830 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/23db3cbd-39ac-4137-8a7e-0533af96e5b1-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:05 crc kubenswrapper[4830]: I0227 16:33:05.468299 4830 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/9f17706c-2060-4191-b63a-df7dea2c4c95-ovsdbserver-nb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:05 crc kubenswrapper[4830]: I0227 16:33:05.468309 4830 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/3482e9fb-53ae-4908-87fc-4096c5b26b76-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:05 crc kubenswrapper[4830]: I0227 16:33:05.468317 4830 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f17706c-2060-4191-b63a-df7dea2c4c95-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:05 crc kubenswrapper[4830]: I0227 16:33:05.468324 4830 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/23db3cbd-39ac-4137-8a7e-0533af96e5b1-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:05 crc kubenswrapper[4830]: I0227 16:33:05.468333 4830 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7285a360-7ff1-4e35-b91a-d472a0ee591b-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:05 crc kubenswrapper[4830]: I0227 16:33:05.635446 4830 generic.go:334] "Generic (PLEG): container finished" podID="b63af300-2b1c-47a7-ae1d-1334deeefdb1" containerID="58b3931eed123fb0912adbb48ae5835fb65012c51cabfe8279f65b2fb158c0e1" exitCode=0 Feb 27 16:33:05 crc kubenswrapper[4830]: I0227 16:33:05.635489 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"b63af300-2b1c-47a7-ae1d-1334deeefdb1","Type":"ContainerDied","Data":"58b3931eed123fb0912adbb48ae5835fb65012c51cabfe8279f65b2fb158c0e1"} Feb 27 16:33:05 crc kubenswrapper[4830]: I0227 16:33:05.638562 4830 generic.go:334] "Generic (PLEG): container finished" podID="6d6ca92a-3e98-4628-8936-37032cf27463" 
containerID="08dae26c7de73c784a1c4cdf01a2ec48ed79b52c6c16691dcb728b190ce0bde0" exitCode=0 Feb 27 16:33:05 crc kubenswrapper[4830]: I0227 16:33:05.638582 4830 generic.go:334] "Generic (PLEG): container finished" podID="6d6ca92a-3e98-4628-8936-37032cf27463" containerID="c6e289a18c1629684bcdb331c9033eb81b5cf53591f391b7c77955013ee8149f" exitCode=0 Feb 27 16:33:05 crc kubenswrapper[4830]: I0227 16:33:05.638615 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"6d6ca92a-3e98-4628-8936-37032cf27463","Type":"ContainerDied","Data":"08dae26c7de73c784a1c4cdf01a2ec48ed79b52c6c16691dcb728b190ce0bde0"} Feb 27 16:33:05 crc kubenswrapper[4830]: I0227 16:33:05.638639 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"6d6ca92a-3e98-4628-8936-37032cf27463","Type":"ContainerDied","Data":"c6e289a18c1629684bcdb331c9033eb81b5cf53591f391b7c77955013ee8149f"} Feb 27 16:33:05 crc kubenswrapper[4830]: I0227 16:33:05.639933 4830 generic.go:334] "Generic (PLEG): container finished" podID="a234743b-8983-4a60-bbb4-59ad823b83e2" containerID="bcaad14a5dbb96adf7a18f1f57a6f9461056ab8d5981e03e5ed3e64de132d692" exitCode=143 Feb 27 16:33:05 crc kubenswrapper[4830]: I0227 16:33:05.639981 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5d54db5966-xcg7l" event={"ID":"a234743b-8983-4a60-bbb4-59ad823b83e2","Type":"ContainerDied","Data":"bcaad14a5dbb96adf7a18f1f57a6f9461056ab8d5981e03e5ed3e64de132d692"} Feb 27 16:33:05 crc kubenswrapper[4830]: I0227 16:33:05.641598 4830 generic.go:334] "Generic (PLEG): container finished" podID="22232c9c-ecf7-443e-834f-ad39b37735b2" containerID="91059dd00f11fc333eace4b793fe5a4f3fca466216720380e52c9fb9f6ce33ff" exitCode=0 Feb 27 16:33:05 crc kubenswrapper[4830]: I0227 16:33:05.641614 4830 generic.go:334] "Generic (PLEG): container finished" podID="22232c9c-ecf7-443e-834f-ad39b37735b2" 
containerID="6cf3d9b94980e2ca5aa0032ef28c8b51ac4ff272ea01954cb10fbe1ad64d9f4b" exitCode=143 Feb 27 16:33:05 crc kubenswrapper[4830]: I0227 16:33:05.641641 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-948fdb9cd-ncm6f" event={"ID":"22232c9c-ecf7-443e-834f-ad39b37735b2","Type":"ContainerDied","Data":"91059dd00f11fc333eace4b793fe5a4f3fca466216720380e52c9fb9f6ce33ff"} Feb 27 16:33:05 crc kubenswrapper[4830]: I0227 16:33:05.641655 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-948fdb9cd-ncm6f" event={"ID":"22232c9c-ecf7-443e-834f-ad39b37735b2","Type":"ContainerDied","Data":"6cf3d9b94980e2ca5aa0032ef28c8b51ac4ff272ea01954cb10fbe1ad64d9f4b"} Feb 27 16:33:05 crc kubenswrapper[4830]: I0227 16:33:05.641664 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-948fdb9cd-ncm6f" event={"ID":"22232c9c-ecf7-443e-834f-ad39b37735b2","Type":"ContainerDied","Data":"f107c931d523968950e2e2557e2e0c71d1906d8784ee47e8cdbc627751b3a65f"} Feb 27 16:33:05 crc kubenswrapper[4830]: I0227 16:33:05.641673 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f107c931d523968950e2e2557e2e0c71d1906d8784ee47e8cdbc627751b3a65f" Feb 27 16:33:05 crc kubenswrapper[4830]: I0227 16:33:05.642844 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-5e39-account-create-update-r88l6" event={"ID":"0ea4ce89-3e8b-4521-9398-3406c6bf0166","Type":"ContainerStarted","Data":"00714c25a3ee5e8c8c745d06eedcdf99fc1e1beb99a405aeea022738ca2f8051"} Feb 27 16:33:05 crc kubenswrapper[4830]: I0227 16:33:05.646037 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-7668-account-create-update-6wj4n" event={"ID":"baefaedf-2591-42f2-a383-5c92ae714ab5","Type":"ContainerStarted","Data":"ec3d7793089e10f729c7992cd209dd73a5c94a3645fa8c7225222d7a1b49296c"} Feb 27 16:33:05 crc kubenswrapper[4830]: I0227 
16:33:05.650671 4830 generic.go:334] "Generic (PLEG): container finished" podID="09849d6c-7457-4130-9074-73154d22af1f" containerID="b76e4dfe38f967f37bb6025c4aa38ca81c5cf520e22fe035f96df51e28145466" exitCode=1 Feb 27 16:33:05 crc kubenswrapper[4830]: I0227 16:33:05.650715 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-lx5sm" event={"ID":"09849d6c-7457-4130-9074-73154d22af1f","Type":"ContainerDied","Data":"b76e4dfe38f967f37bb6025c4aa38ca81c5cf520e22fe035f96df51e28145466"} Feb 27 16:33:05 crc kubenswrapper[4830]: I0227 16:33:05.650732 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-lx5sm" event={"ID":"09849d6c-7457-4130-9074-73154d22af1f","Type":"ContainerStarted","Data":"86b15d76da0cc80d79a54876e95096e018daf6373a2151ef62d4412ba2710fe1"} Feb 27 16:33:05 crc kubenswrapper[4830]: I0227 16:33:05.651440 4830 scope.go:117] "RemoveContainer" containerID="b76e4dfe38f967f37bb6025c4aa38ca81c5cf520e22fe035f96df51e28145466" Feb 27 16:33:05 crc kubenswrapper[4830]: I0227 16:33:05.664611 4830 generic.go:334] "Generic (PLEG): container finished" podID="21656f50-51b8-4761-8b9e-c2b823dace13" containerID="a3e19fe9784a7e84ad00ba5db518baa23ac731605584cf84a3a6192b109fa71e" exitCode=0 Feb 27 16:33:05 crc kubenswrapper[4830]: I0227 16:33:05.664835 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"21656f50-51b8-4761-8b9e-c2b823dace13","Type":"ContainerDied","Data":"a3e19fe9784a7e84ad00ba5db518baa23ac731605584cf84a3a6192b109fa71e"} Feb 27 16:33:05 crc kubenswrapper[4830]: I0227 16:33:05.664858 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"21656f50-51b8-4761-8b9e-c2b823dace13","Type":"ContainerDied","Data":"d17a62450a3a94180b3ce51f2368de76aa3ea9b22a04ed67e84a909447fa119c"} Feb 27 16:33:05 crc kubenswrapper[4830]: I0227 16:33:05.664869 4830 pod_container_deletor.go:80] 
"Container not found in pod's containers" containerID="d17a62450a3a94180b3ce51f2368de76aa3ea9b22a04ed67e84a909447fa119c" Feb 27 16:33:05 crc kubenswrapper[4830]: I0227 16:33:05.679519 4830 generic.go:334] "Generic (PLEG): container finished" podID="f4280aaf-817d-41e1-9867-715359ae322e" containerID="53a40c635318ff11c80f75f6211616278bbd9c179f11fec9265e63a26e70b0ac" exitCode=143 Feb 27 16:33:05 crc kubenswrapper[4830]: I0227 16:33:05.679579 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f4280aaf-817d-41e1-9867-715359ae322e","Type":"ContainerDied","Data":"53a40c635318ff11c80f75f6211616278bbd9c179f11fec9265e63a26e70b0ac"} Feb 27 16:33:05 crc kubenswrapper[4830]: I0227 16:33:05.681022 4830 generic.go:334] "Generic (PLEG): container finished" podID="f4baf4d8-24c9-4aa8-b72e-9d6d9cdd5f32" containerID="d25e9e29213d4dd9d13dc6e8f8443d64cbecee22307bae547934dfd69a24c51a" exitCode=143 Feb 27 16:33:05 crc kubenswrapper[4830]: I0227 16:33:05.681075 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-58c49587-cz4f5" event={"ID":"f4baf4d8-24c9-4aa8-b72e-9d6d9cdd5f32","Type":"ContainerDied","Data":"d25e9e29213d4dd9d13dc6e8f8443d64cbecee22307bae547934dfd69a24c51a"} Feb 27 16:33:05 crc kubenswrapper[4830]: I0227 16:33:05.690433 4830 scope.go:117] "RemoveContainer" containerID="ebe94bb0443ae2939345bc80a179e9644e55c467b0fc2c9d6043e5cff481e239" Feb 27 16:33:05 crc kubenswrapper[4830]: I0227 16:33:05.690583 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 27 16:33:05 crc kubenswrapper[4830]: I0227 16:33:05.695862 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_9f17706c-2060-4191-b63a-df7dea2c4c95/ovsdbserver-nb/0.log" Feb 27 16:33:05 crc kubenswrapper[4830]: I0227 16:33:05.696084 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"9f17706c-2060-4191-b63a-df7dea2c4c95","Type":"ContainerDied","Data":"00f79ecb78dd4a17bddeadf9a166b9472a51ed8ecdd2c84a404a74f15cdc18f4"} Feb 27 16:33:05 crc kubenswrapper[4830]: I0227 16:33:05.696161 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 27 16:33:05 crc kubenswrapper[4830]: I0227 16:33:05.730786 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-c6f44c475-twbzz" event={"ID":"38b57350-6ca0-4090-876b-7727c983cf52","Type":"ContainerDied","Data":"73b5f31020bdda84b1e0be41fcac15122bcecb86520bab6e99fc3e9b00a4627b"} Feb 27 16:33:05 crc kubenswrapper[4830]: I0227 16:33:05.730824 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="73b5f31020bdda84b1e0be41fcac15122bcecb86520bab6e99fc3e9b00a4627b" Feb 27 16:33:05 crc kubenswrapper[4830]: I0227 16:33:05.737023 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-c6f44c475-twbzz" Feb 27 16:33:05 crc kubenswrapper[4830]: I0227 16:33:05.737846 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-29fd-account-create-update-st6rb" event={"ID":"02d5a77c-198f-43aa-96ab-2ac2d76c7743","Type":"ContainerStarted","Data":"2a78a24aca3c495b88132f5ada2bcab911a1923c10d4ae73e4167ccf4db89ab5"} Feb 27 16:33:05 crc kubenswrapper[4830]: I0227 16:33:05.755754 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 27 16:33:05 crc kubenswrapper[4830]: I0227 16:33:05.756993 4830 generic.go:334] "Generic (PLEG): container finished" podID="9e9ca76d-ae40-4258-8f4d-09e15a0d8cd6" containerID="4bd0cecd4c639c19d6288ae6763e874f65a458b96d3aae8d391e7b853fd3836b" exitCode=0 Feb 27 16:33:05 crc kubenswrapper[4830]: I0227 16:33:05.757289 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 27 16:33:05 crc kubenswrapper[4830]: I0227 16:33:05.757331 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9e9ca76d-ae40-4258-8f4d-09e15a0d8cd6","Type":"ContainerDied","Data":"4bd0cecd4c639c19d6288ae6763e874f65a458b96d3aae8d391e7b853fd3836b"} Feb 27 16:33:05 crc kubenswrapper[4830]: I0227 16:33:05.760542 4830 scope.go:117] "RemoveContainer" containerID="aef48ea8d72edf5f1504d9101a6b5d6f742a96bb0bdea5a1647ced04e0be6ed1" Feb 27 16:33:05 crc kubenswrapper[4830]: I0227 16:33:05.763196 4830 generic.go:334] "Generic (PLEG): container finished" podID="0bee1ae7-32fb-484d-a81a-47fe31e25d70" containerID="c2905f95d9b1bd685977d7be7161ae0adaba055e9615f02fecc0602b6c991b5c" exitCode=0 Feb 27 16:33:05 crc kubenswrapper[4830]: I0227 16:33:05.763402 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"0bee1ae7-32fb-484d-a81a-47fe31e25d70","Type":"ContainerDied","Data":"c2905f95d9b1bd685977d7be7161ae0adaba055e9615f02fecc0602b6c991b5c"} Feb 27 16:33:05 crc kubenswrapper[4830]: I0227 16:33:05.763430 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"0bee1ae7-32fb-484d-a81a-47fe31e25d70","Type":"ContainerDied","Data":"3cb0de386802fcebf81aef3d8ec6687de2ac855669305853b68e07c352ad1bdc"} Feb 27 16:33:05 crc kubenswrapper[4830]: I0227 16:33:05.763443 4830 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="3cb0de386802fcebf81aef3d8ec6687de2ac855669305853b68e07c352ad1bdc" Feb 27 16:33:05 crc kubenswrapper[4830]: I0227 16:33:05.775056 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 27 16:33:05 crc kubenswrapper[4830]: I0227 16:33:05.785762 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-mtj7r" Feb 27 16:33:05 crc kubenswrapper[4830]: I0227 16:33:05.787398 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-c219-account-create-update-w82r8" event={"ID":"26018553-1865-499d-9c9b-932807fce26c","Type":"ContainerStarted","Data":"76302f851e819e104625ea773eb0ded4d208f9f8a737d1b60b64998f743c7510"} Feb 27 16:33:05 crc kubenswrapper[4830]: I0227 16:33:05.787481 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cd5cbd7b9-dmhcp" Feb 27 16:33:05 crc kubenswrapper[4830]: I0227 16:33:05.789762 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-mncqx" Feb 27 16:33:05 crc kubenswrapper[4830]: I0227 16:33:05.799445 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 27 16:33:05 crc kubenswrapper[4830]: I0227 16:33:05.812866 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 27 16:33:05 crc kubenswrapper[4830]: I0227 16:33:05.813026 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-948fdb9cd-ncm6f" Feb 27 16:33:05 crc kubenswrapper[4830]: I0227 16:33:05.816605 4830 scope.go:117] "RemoveContainer" containerID="6ec8f1e6a925dda75bf2b25d6d091880ed805d81e677fbee45551ce4d31bc846" Feb 27 16:33:05 crc kubenswrapper[4830]: I0227 16:33:05.824995 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 27 16:33:05 crc kubenswrapper[4830]: E0227 16:33:05.853097 4830 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 4bd0cecd4c639c19d6288ae6763e874f65a458b96d3aae8d391e7b853fd3836b is running failed: container process not found" containerID="4bd0cecd4c639c19d6288ae6763e874f65a458b96d3aae8d391e7b853fd3836b" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 27 16:33:05 crc kubenswrapper[4830]: E0227 16:33:05.853903 4830 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 4bd0cecd4c639c19d6288ae6763e874f65a458b96d3aae8d391e7b853fd3836b is running failed: container process not found" containerID="4bd0cecd4c639c19d6288ae6763e874f65a458b96d3aae8d391e7b853fd3836b" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 27 16:33:05 crc kubenswrapper[4830]: E0227 16:33:05.854819 4830 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 4bd0cecd4c639c19d6288ae6763e874f65a458b96d3aae8d391e7b853fd3836b is running failed: container process not found" containerID="4bd0cecd4c639c19d6288ae6763e874f65a458b96d3aae8d391e7b853fd3836b" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 27 16:33:05 crc kubenswrapper[4830]: E0227 16:33:05.854909 4830 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 4bd0cecd4c639c19d6288ae6763e874f65a458b96d3aae8d391e7b853fd3836b is running failed: container process not found" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="9e9ca76d-ae40-4258-8f4d-09e15a0d8cd6" containerName="nova-scheduler-scheduler" Feb 27 16:33:05 crc kubenswrapper[4830]: I0227 16:33:05.876376 4830 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/38b57350-6ca0-4090-876b-7727c983cf52-log-httpd\") pod \"38b57350-6ca0-4090-876b-7727c983cf52\" (UID: \"38b57350-6ca0-4090-876b-7727c983cf52\") " Feb 27 16:33:05 crc kubenswrapper[4830]: I0227 16:33:05.876419 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38b57350-6ca0-4090-876b-7727c983cf52-combined-ca-bundle\") pod \"38b57350-6ca0-4090-876b-7727c983cf52\" (UID: \"38b57350-6ca0-4090-876b-7727c983cf52\") " Feb 27 16:33:05 crc kubenswrapper[4830]: I0227 16:33:05.876471 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/38b57350-6ca0-4090-876b-7727c983cf52-run-httpd\") pod \"38b57350-6ca0-4090-876b-7727c983cf52\" (UID: \"38b57350-6ca0-4090-876b-7727c983cf52\") " Feb 27 16:33:05 crc kubenswrapper[4830]: I0227 16:33:05.876503 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21656f50-51b8-4761-8b9e-c2b823dace13-combined-ca-bundle\") pod \"21656f50-51b8-4761-8b9e-c2b823dace13\" (UID: \"21656f50-51b8-4761-8b9e-c2b823dace13\") " Feb 27 16:33:05 crc kubenswrapper[4830]: I0227 16:33:05.876559 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rdlml\" (UniqueName: \"kubernetes.io/projected/21656f50-51b8-4761-8b9e-c2b823dace13-kube-api-access-rdlml\") pod \"21656f50-51b8-4761-8b9e-c2b823dace13\" (UID: \"21656f50-51b8-4761-8b9e-c2b823dace13\") " Feb 27 16:33:05 crc kubenswrapper[4830]: I0227 16:33:05.876585 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/21656f50-51b8-4761-8b9e-c2b823dace13-vencrypt-tls-certs\") pod \"21656f50-51b8-4761-8b9e-c2b823dace13\" (UID: 
\"21656f50-51b8-4761-8b9e-c2b823dace13\") " Feb 27 16:33:05 crc kubenswrapper[4830]: I0227 16:33:05.876624 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/38b57350-6ca0-4090-876b-7727c983cf52-internal-tls-certs\") pod \"38b57350-6ca0-4090-876b-7727c983cf52\" (UID: \"38b57350-6ca0-4090-876b-7727c983cf52\") " Feb 27 16:33:05 crc kubenswrapper[4830]: I0227 16:33:05.876676 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/38b57350-6ca0-4090-876b-7727c983cf52-etc-swift\") pod \"38b57350-6ca0-4090-876b-7727c983cf52\" (UID: \"38b57350-6ca0-4090-876b-7727c983cf52\") " Feb 27 16:33:05 crc kubenswrapper[4830]: I0227 16:33:05.876700 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21656f50-51b8-4761-8b9e-c2b823dace13-config-data\") pod \"21656f50-51b8-4761-8b9e-c2b823dace13\" (UID: \"21656f50-51b8-4761-8b9e-c2b823dace13\") " Feb 27 16:33:05 crc kubenswrapper[4830]: I0227 16:33:05.876728 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38b57350-6ca0-4090-876b-7727c983cf52-config-data\") pod \"38b57350-6ca0-4090-876b-7727c983cf52\" (UID: \"38b57350-6ca0-4090-876b-7727c983cf52\") " Feb 27 16:33:05 crc kubenswrapper[4830]: I0227 16:33:05.876744 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/38b57350-6ca0-4090-876b-7727c983cf52-public-tls-certs\") pod \"38b57350-6ca0-4090-876b-7727c983cf52\" (UID: \"38b57350-6ca0-4090-876b-7727c983cf52\") " Feb 27 16:33:05 crc kubenswrapper[4830]: I0227 16:33:05.876765 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/21656f50-51b8-4761-8b9e-c2b823dace13-nova-novncproxy-tls-certs\") pod \"21656f50-51b8-4761-8b9e-c2b823dace13\" (UID: \"21656f50-51b8-4761-8b9e-c2b823dace13\") " Feb 27 16:33:05 crc kubenswrapper[4830]: I0227 16:33:05.876924 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qgkhr\" (UniqueName: \"kubernetes.io/projected/38b57350-6ca0-4090-876b-7727c983cf52-kube-api-access-qgkhr\") pod \"38b57350-6ca0-4090-876b-7727c983cf52\" (UID: \"38b57350-6ca0-4090-876b-7727c983cf52\") " Feb 27 16:33:05 crc kubenswrapper[4830]: I0227 16:33:05.878614 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/38b57350-6ca0-4090-876b-7727c983cf52-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "38b57350-6ca0-4090-876b-7727c983cf52" (UID: "38b57350-6ca0-4090-876b-7727c983cf52"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 16:33:05 crc kubenswrapper[4830]: I0227 16:33:05.881283 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/38b57350-6ca0-4090-876b-7727c983cf52-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "38b57350-6ca0-4090-876b-7727c983cf52" (UID: "38b57350-6ca0-4090-876b-7727c983cf52"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 16:33:05 crc kubenswrapper[4830]: I0227 16:33:05.901094 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38b57350-6ca0-4090-876b-7727c983cf52-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "38b57350-6ca0-4090-876b-7727c983cf52" (UID: "38b57350-6ca0-4090-876b-7727c983cf52"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:33:05 crc kubenswrapper[4830]: I0227 16:33:05.901400 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21656f50-51b8-4761-8b9e-c2b823dace13-kube-api-access-rdlml" (OuterVolumeSpecName: "kube-api-access-rdlml") pod "21656f50-51b8-4761-8b9e-c2b823dace13" (UID: "21656f50-51b8-4761-8b9e-c2b823dace13"). InnerVolumeSpecName "kube-api-access-rdlml". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:33:05 crc kubenswrapper[4830]: I0227 16:33:05.901618 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38b57350-6ca0-4090-876b-7727c983cf52-kube-api-access-qgkhr" (OuterVolumeSpecName: "kube-api-access-qgkhr") pod "38b57350-6ca0-4090-876b-7727c983cf52" (UID: "38b57350-6ca0-4090-876b-7727c983cf52"). InnerVolumeSpecName "kube-api-access-qgkhr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:33:05 crc kubenswrapper[4830]: I0227 16:33:05.905879 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-metrics-mtj7r"] Feb 27 16:33:05 crc kubenswrapper[4830]: I0227 16:33:05.911973 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-metrics-mtj7r"] Feb 27 16:33:05 crc kubenswrapper[4830]: I0227 16:33:05.959268 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 27 16:33:05 crc kubenswrapper[4830]: I0227 16:33:05.959843 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 27 16:33:05 crc kubenswrapper[4830]: I0227 16:33:05.978494 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"b63af300-2b1c-47a7-ae1d-1334deeefdb1\" (UID: \"b63af300-2b1c-47a7-ae1d-1334deeefdb1\") " Feb 27 16:33:05 crc kubenswrapper[4830]: I0227 16:33:05.992956 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22232c9c-ecf7-443e-834f-ad39b37735b2-config-data\") pod \"22232c9c-ecf7-443e-834f-ad39b37735b2\" (UID: \"22232c9c-ecf7-443e-834f-ad39b37735b2\") " Feb 27 16:33:05 crc kubenswrapper[4830]: I0227 16:33:05.997864 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22232c9c-ecf7-443e-834f-ad39b37735b2-combined-ca-bundle\") pod \"22232c9c-ecf7-443e-834f-ad39b37735b2\" (UID: \"22232c9c-ecf7-443e-834f-ad39b37735b2\") " Feb 27 16:33:05 crc kubenswrapper[4830]: I0227 16:33:05.998444 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b63af300-2b1c-47a7-ae1d-1334deeefdb1-kolla-config\") pod \"b63af300-2b1c-47a7-ae1d-1334deeefdb1\" (UID: \"b63af300-2b1c-47a7-ae1d-1334deeefdb1\") " Feb 27 16:33:06 crc kubenswrapper[4830]: I0227 16:33:06.004121 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bee1ae7-32fb-484d-a81a-47fe31e25d70-combined-ca-bundle\") pod \"0bee1ae7-32fb-484d-a81a-47fe31e25d70\" (UID: \"0bee1ae7-32fb-484d-a81a-47fe31e25d70\") " Feb 27 16:33:06 crc kubenswrapper[4830]: I0227 16:33:06.004283 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/b63af300-2b1c-47a7-ae1d-1334deeefdb1-operator-scripts\") pod \"b63af300-2b1c-47a7-ae1d-1334deeefdb1\" (UID: \"b63af300-2b1c-47a7-ae1d-1334deeefdb1\") " Feb 27 16:33:06 crc kubenswrapper[4830]: I0227 16:33:05.998667 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-mncqx"] Feb 27 16:33:06 crc kubenswrapper[4830]: I0227 16:33:06.001057 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b63af300-2b1c-47a7-ae1d-1334deeefdb1-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "b63af300-2b1c-47a7-ae1d-1334deeefdb1" (UID: "b63af300-2b1c-47a7-ae1d-1334deeefdb1"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:33:06 crc kubenswrapper[4830]: I0227 16:33:06.004436 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/b63af300-2b1c-47a7-ae1d-1334deeefdb1-config-data-generated\") pod \"b63af300-2b1c-47a7-ae1d-1334deeefdb1\" (UID: \"b63af300-2b1c-47a7-ae1d-1334deeefdb1\") " Feb 27 16:33:06 crc kubenswrapper[4830]: I0227 16:33:06.004654 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0bee1ae7-32fb-484d-a81a-47fe31e25d70-config-data\") pod \"0bee1ae7-32fb-484d-a81a-47fe31e25d70\" (UID: \"0bee1ae7-32fb-484d-a81a-47fe31e25d70\") " Feb 27 16:33:06 crc kubenswrapper[4830]: I0227 16:33:06.004767 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/22232c9c-ecf7-443e-834f-ad39b37735b2-logs\") pod \"22232c9c-ecf7-443e-834f-ad39b37735b2\" (UID: \"22232c9c-ecf7-443e-834f-ad39b37735b2\") " Feb 27 16:33:06 crc kubenswrapper[4830]: I0227 16:33:06.004856 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4mgbg\" 
(UniqueName: \"kubernetes.io/projected/0bee1ae7-32fb-484d-a81a-47fe31e25d70-kube-api-access-4mgbg\") pod \"0bee1ae7-32fb-484d-a81a-47fe31e25d70\" (UID: \"0bee1ae7-32fb-484d-a81a-47fe31e25d70\") " Feb 27 16:33:06 crc kubenswrapper[4830]: I0227 16:33:06.004935 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/b63af300-2b1c-47a7-ae1d-1334deeefdb1-galera-tls-certs\") pod \"b63af300-2b1c-47a7-ae1d-1334deeefdb1\" (UID: \"b63af300-2b1c-47a7-ae1d-1334deeefdb1\") " Feb 27 16:33:06 crc kubenswrapper[4830]: I0227 16:33:06.005026 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/b63af300-2b1c-47a7-ae1d-1334deeefdb1-config-data-default\") pod \"b63af300-2b1c-47a7-ae1d-1334deeefdb1\" (UID: \"b63af300-2b1c-47a7-ae1d-1334deeefdb1\") " Feb 27 16:33:06 crc kubenswrapper[4830]: I0227 16:33:06.005107 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b63af300-2b1c-47a7-ae1d-1334deeefdb1-combined-ca-bundle\") pod \"b63af300-2b1c-47a7-ae1d-1334deeefdb1\" (UID: \"b63af300-2b1c-47a7-ae1d-1334deeefdb1\") " Feb 27 16:33:06 crc kubenswrapper[4830]: I0227 16:33:06.005200 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p76kb\" (UniqueName: \"kubernetes.io/projected/b63af300-2b1c-47a7-ae1d-1334deeefdb1-kube-api-access-p76kb\") pod \"b63af300-2b1c-47a7-ae1d-1334deeefdb1\" (UID: \"b63af300-2b1c-47a7-ae1d-1334deeefdb1\") " Feb 27 16:33:06 crc kubenswrapper[4830]: I0227 16:33:06.005365 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/22232c9c-ecf7-443e-834f-ad39b37735b2-config-data-custom\") pod \"22232c9c-ecf7-443e-834f-ad39b37735b2\" (UID: \"22232c9c-ecf7-443e-834f-ad39b37735b2\") 
" Feb 27 16:33:06 crc kubenswrapper[4830]: I0227 16:33:06.005447 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tt8qf\" (UniqueName: \"kubernetes.io/projected/22232c9c-ecf7-443e-834f-ad39b37735b2-kube-api-access-tt8qf\") pod \"22232c9c-ecf7-443e-834f-ad39b37735b2\" (UID: \"22232c9c-ecf7-443e-834f-ad39b37735b2\") " Feb 27 16:33:06 crc kubenswrapper[4830]: I0227 16:33:06.007854 4830 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/38b57350-6ca0-4090-876b-7727c983cf52-etc-swift\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:06 crc kubenswrapper[4830]: I0227 16:33:06.007995 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qgkhr\" (UniqueName: \"kubernetes.io/projected/38b57350-6ca0-4090-876b-7727c983cf52-kube-api-access-qgkhr\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:06 crc kubenswrapper[4830]: I0227 16:33:06.008076 4830 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/38b57350-6ca0-4090-876b-7727c983cf52-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:06 crc kubenswrapper[4830]: I0227 16:33:06.008164 4830 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/38b57350-6ca0-4090-876b-7727c983cf52-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:06 crc kubenswrapper[4830]: I0227 16:33:06.008239 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rdlml\" (UniqueName: \"kubernetes.io/projected/21656f50-51b8-4761-8b9e-c2b823dace13-kube-api-access-rdlml\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:06 crc kubenswrapper[4830]: I0227 16:33:06.008320 4830 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b63af300-2b1c-47a7-ae1d-1334deeefdb1-kolla-config\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:06 crc 
kubenswrapper[4830]: I0227 16:33:06.008572 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b63af300-2b1c-47a7-ae1d-1334deeefdb1-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "b63af300-2b1c-47a7-ae1d-1334deeefdb1" (UID: "b63af300-2b1c-47a7-ae1d-1334deeefdb1"). InnerVolumeSpecName "config-data-generated". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 16:33:06 crc kubenswrapper[4830]: I0227 16:33:06.008741 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b63af300-2b1c-47a7-ae1d-1334deeefdb1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b63af300-2b1c-47a7-ae1d-1334deeefdb1" (UID: "b63af300-2b1c-47a7-ae1d-1334deeefdb1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:33:06 crc kubenswrapper[4830]: I0227 16:33:06.009261 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/22232c9c-ecf7-443e-834f-ad39b37735b2-logs" (OuterVolumeSpecName: "logs") pod "22232c9c-ecf7-443e-834f-ad39b37735b2" (UID: "22232c9c-ecf7-443e-834f-ad39b37735b2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 16:33:06 crc kubenswrapper[4830]: I0227 16:33:06.010707 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b63af300-2b1c-47a7-ae1d-1334deeefdb1-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "b63af300-2b1c-47a7-ae1d-1334deeefdb1" (UID: "b63af300-2b1c-47a7-ae1d-1334deeefdb1"). InnerVolumeSpecName "config-data-default". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:33:06 crc kubenswrapper[4830]: I0227 16:33:06.019447 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-mncqx"] Feb 27 16:33:06 crc kubenswrapper[4830]: I0227 16:33:06.019802 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-dmhcp"] Feb 27 16:33:06 crc kubenswrapper[4830]: I0227 16:33:06.020298 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21656f50-51b8-4761-8b9e-c2b823dace13-config-data" (OuterVolumeSpecName: "config-data") pod "21656f50-51b8-4761-8b9e-c2b823dace13" (UID: "21656f50-51b8-4761-8b9e-c2b823dace13"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:33:06 crc kubenswrapper[4830]: I0227 16:33:06.030029 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-dmhcp"] Feb 27 16:33:06 crc kubenswrapper[4830]: E0227 16:33:06.031239 4830 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3a1363ee7dd262f992c78bd580ddc30893c1d6cb37be4314ebecd1d79f83e351" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Feb 27 16:33:06 crc kubenswrapper[4830]: E0227 16:33:06.033681 4830 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3a1363ee7dd262f992c78bd580ddc30893c1d6cb37be4314ebecd1d79f83e351" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Feb 27 16:33:06 crc kubenswrapper[4830]: I0227 16:33:06.035802 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 27 16:33:06 crc kubenswrapper[4830]: E0227 16:33:06.037025 4830 log.go:32] "ExecSync 
cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3a1363ee7dd262f992c78bd580ddc30893c1d6cb37be4314ebecd1d79f83e351" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Feb 27 16:33:06 crc kubenswrapper[4830]: E0227 16:33:06.037077 4830 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-northd-0" podUID="7c017daa-cb8f-4629-80e6-a671a8455149" containerName="ovn-northd" Feb 27 16:33:06 crc kubenswrapper[4830]: I0227 16:33:06.041225 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0bee1ae7-32fb-484d-a81a-47fe31e25d70-kube-api-access-4mgbg" (OuterVolumeSpecName: "kube-api-access-4mgbg") pod "0bee1ae7-32fb-484d-a81a-47fe31e25d70" (UID: "0bee1ae7-32fb-484d-a81a-47fe31e25d70"). InnerVolumeSpecName "kube-api-access-4mgbg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:33:06 crc kubenswrapper[4830]: I0227 16:33:06.044256 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22232c9c-ecf7-443e-834f-ad39b37735b2-kube-api-access-tt8qf" (OuterVolumeSpecName: "kube-api-access-tt8qf") pod "22232c9c-ecf7-443e-834f-ad39b37735b2" (UID: "22232c9c-ecf7-443e-834f-ad39b37735b2"). InnerVolumeSpecName "kube-api-access-tt8qf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:33:06 crc kubenswrapper[4830]: I0227 16:33:06.045564 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 27 16:33:06 crc kubenswrapper[4830]: I0227 16:33:06.057271 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b63af300-2b1c-47a7-ae1d-1334deeefdb1-kube-api-access-p76kb" (OuterVolumeSpecName: "kube-api-access-p76kb") pod "b63af300-2b1c-47a7-ae1d-1334deeefdb1" (UID: "b63af300-2b1c-47a7-ae1d-1334deeefdb1"). InnerVolumeSpecName "kube-api-access-p76kb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:33:06 crc kubenswrapper[4830]: I0227 16:33:06.079417 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22232c9c-ecf7-443e-834f-ad39b37735b2-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "22232c9c-ecf7-443e-834f-ad39b37735b2" (UID: "22232c9c-ecf7-443e-834f-ad39b37735b2"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:33:06 crc kubenswrapper[4830]: E0227 16:33:06.083759 4830 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0177eede3f4945d97bcd0d90fed75c1aa58d1276a7fd71e80b0683515562f9b1" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 27 16:33:06 crc kubenswrapper[4830]: E0227 16:33:06.085765 4830 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0177eede3f4945d97bcd0d90fed75c1aa58d1276a7fd71e80b0683515562f9b1" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 27 16:33:06 crc kubenswrapper[4830]: E0227 16:33:06.087140 4830 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0177eede3f4945d97bcd0d90fed75c1aa58d1276a7fd71e80b0683515562f9b1" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 27 16:33:06 crc kubenswrapper[4830]: E0227 16:33:06.087210 4830 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell1-conductor-0" podUID="a989aa76-9246-46b2-9f1e-7900cfecedc2" containerName="nova-cell1-conductor-conductor" Feb 27 16:33:06 crc kubenswrapper[4830]: I0227 16:33:06.090123 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "mysql-db") pod "b63af300-2b1c-47a7-ae1d-1334deeefdb1" (UID: "b63af300-2b1c-47a7-ae1d-1334deeefdb1"). InnerVolumeSpecName "local-storage08-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 27 16:33:06 crc kubenswrapper[4830]: I0227 16:33:06.142483 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6d6ca92a-3e98-4628-8936-37032cf27463-scripts\") pod \"6d6ca92a-3e98-4628-8936-37032cf27463\" (UID: \"6d6ca92a-3e98-4628-8936-37032cf27463\") " Feb 27 16:33:06 crc kubenswrapper[4830]: I0227 16:33:06.142636 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d6ca92a-3e98-4628-8936-37032cf27463-config-data\") pod \"6d6ca92a-3e98-4628-8936-37032cf27463\" (UID: \"6d6ca92a-3e98-4628-8936-37032cf27463\") " Feb 27 16:33:06 crc kubenswrapper[4830]: I0227 16:33:06.142668 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e9ca76d-ae40-4258-8f4d-09e15a0d8cd6-combined-ca-bundle\") pod \"9e9ca76d-ae40-4258-8f4d-09e15a0d8cd6\" (UID: \"9e9ca76d-ae40-4258-8f4d-09e15a0d8cd6\") " Feb 27 16:33:06 crc kubenswrapper[4830]: I0227 16:33:06.142717 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c6rg2\" (UniqueName: \"kubernetes.io/projected/9e9ca76d-ae40-4258-8f4d-09e15a0d8cd6-kube-api-access-c6rg2\") pod \"9e9ca76d-ae40-4258-8f4d-09e15a0d8cd6\" (UID: \"9e9ca76d-ae40-4258-8f4d-09e15a0d8cd6\") " Feb 27 16:33:06 crc kubenswrapper[4830]: I0227 16:33:06.142766 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6d6ca92a-3e98-4628-8936-37032cf27463-config-data-custom\") pod \"6d6ca92a-3e98-4628-8936-37032cf27463\" (UID: \"6d6ca92a-3e98-4628-8936-37032cf27463\") " Feb 27 16:33:06 crc kubenswrapper[4830]: I0227 16:33:06.142804 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" 
(UniqueName: \"kubernetes.io/host-path/6d6ca92a-3e98-4628-8936-37032cf27463-etc-machine-id\") pod \"6d6ca92a-3e98-4628-8936-37032cf27463\" (UID: \"6d6ca92a-3e98-4628-8936-37032cf27463\") " Feb 27 16:33:06 crc kubenswrapper[4830]: I0227 16:33:06.142855 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9khfg\" (UniqueName: \"kubernetes.io/projected/6d6ca92a-3e98-4628-8936-37032cf27463-kube-api-access-9khfg\") pod \"6d6ca92a-3e98-4628-8936-37032cf27463\" (UID: \"6d6ca92a-3e98-4628-8936-37032cf27463\") " Feb 27 16:33:06 crc kubenswrapper[4830]: I0227 16:33:06.143012 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e9ca76d-ae40-4258-8f4d-09e15a0d8cd6-config-data\") pod \"9e9ca76d-ae40-4258-8f4d-09e15a0d8cd6\" (UID: \"9e9ca76d-ae40-4258-8f4d-09e15a0d8cd6\") " Feb 27 16:33:06 crc kubenswrapper[4830]: I0227 16:33:06.143079 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d6ca92a-3e98-4628-8936-37032cf27463-combined-ca-bundle\") pod \"6d6ca92a-3e98-4628-8936-37032cf27463\" (UID: \"6d6ca92a-3e98-4628-8936-37032cf27463\") " Feb 27 16:33:06 crc kubenswrapper[4830]: I0227 16:33:06.145153 4830 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/b63af300-2b1c-47a7-ae1d-1334deeefdb1-config-data-generated\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:06 crc kubenswrapper[4830]: I0227 16:33:06.145250 4830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21656f50-51b8-4761-8b9e-c2b823dace13-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:06 crc kubenswrapper[4830]: I0227 16:33:06.145311 4830 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/22232c9c-ecf7-443e-834f-ad39b37735b2-logs\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:06 crc kubenswrapper[4830]: I0227 16:33:06.145368 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4mgbg\" (UniqueName: \"kubernetes.io/projected/0bee1ae7-32fb-484d-a81a-47fe31e25d70-kube-api-access-4mgbg\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:06 crc kubenswrapper[4830]: I0227 16:33:06.145428 4830 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/b63af300-2b1c-47a7-ae1d-1334deeefdb1-config-data-default\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:06 crc kubenswrapper[4830]: I0227 16:33:06.145481 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p76kb\" (UniqueName: \"kubernetes.io/projected/b63af300-2b1c-47a7-ae1d-1334deeefdb1-kube-api-access-p76kb\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:06 crc kubenswrapper[4830]: I0227 16:33:06.145533 4830 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/22232c9c-ecf7-443e-834f-ad39b37735b2-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:06 crc kubenswrapper[4830]: I0227 16:33:06.145590 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tt8qf\" (UniqueName: \"kubernetes.io/projected/22232c9c-ecf7-443e-834f-ad39b37735b2-kube-api-access-tt8qf\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:06 crc kubenswrapper[4830]: I0227 16:33:06.145663 4830 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Feb 27 16:33:06 crc kubenswrapper[4830]: I0227 16:33:06.145718 4830 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b63af300-2b1c-47a7-ae1d-1334deeefdb1-operator-scripts\") on 
node \"crc\" DevicePath \"\"" Feb 27 16:33:06 crc kubenswrapper[4830]: I0227 16:33:06.152845 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d6ca92a-3e98-4628-8936-37032cf27463-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "6d6ca92a-3e98-4628-8936-37032cf27463" (UID: "6d6ca92a-3e98-4628-8936-37032cf27463"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:33:06 crc kubenswrapper[4830]: E0227 16:33:06.153009 4830 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Feb 27 16:33:06 crc kubenswrapper[4830]: E0227 16:33:06.153069 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/47514135-95a6-4b77-815a-ebf23a3cab82-config-data podName:47514135-95a6-4b77-815a-ebf23a3cab82 nodeName:}" failed. No retries permitted until 2026-02-27 16:33:10.153049509 +0000 UTC m=+1586.242322032 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/47514135-95a6-4b77-815a-ebf23a3cab82-config-data") pod "rabbitmq-cell1-server-0" (UID: "47514135-95a6-4b77-815a-ebf23a3cab82") : configmap "rabbitmq-cell1-config-data" not found Feb 27 16:33:06 crc kubenswrapper[4830]: I0227 16:33:06.153393 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6d6ca92a-3e98-4628-8936-37032cf27463-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "6d6ca92a-3e98-4628-8936-37032cf27463" (UID: "6d6ca92a-3e98-4628-8936-37032cf27463"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 16:33:06 crc kubenswrapper[4830]: I0227 16:33:06.153503 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21656f50-51b8-4761-8b9e-c2b823dace13-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "21656f50-51b8-4761-8b9e-c2b823dace13" (UID: "21656f50-51b8-4761-8b9e-c2b823dace13"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:33:06 crc kubenswrapper[4830]: I0227 16:33:06.169388 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9ca76d-ae40-4258-8f4d-09e15a0d8cd6-kube-api-access-c6rg2" (OuterVolumeSpecName: "kube-api-access-c6rg2") pod "9e9ca76d-ae40-4258-8f4d-09e15a0d8cd6" (UID: "9e9ca76d-ae40-4258-8f4d-09e15a0d8cd6"). InnerVolumeSpecName "kube-api-access-c6rg2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:33:06 crc kubenswrapper[4830]: I0227 16:33:06.169430 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d6ca92a-3e98-4628-8936-37032cf27463-kube-api-access-9khfg" (OuterVolumeSpecName: "kube-api-access-9khfg") pod "6d6ca92a-3e98-4628-8936-37032cf27463" (UID: "6d6ca92a-3e98-4628-8936-37032cf27463"). InnerVolumeSpecName "kube-api-access-9khfg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:33:06 crc kubenswrapper[4830]: I0227 16:33:06.181392 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38b57350-6ca0-4090-876b-7727c983cf52-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "38b57350-6ca0-4090-876b-7727c983cf52" (UID: "38b57350-6ca0-4090-876b-7727c983cf52"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:33:06 crc kubenswrapper[4830]: I0227 16:33:06.184372 4830 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Feb 27 16:33:06 crc kubenswrapper[4830]: I0227 16:33:06.195346 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d6ca92a-3e98-4628-8936-37032cf27463-scripts" (OuterVolumeSpecName: "scripts") pod "6d6ca92a-3e98-4628-8936-37032cf27463" (UID: "6d6ca92a-3e98-4628-8936-37032cf27463"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:33:06 crc kubenswrapper[4830]: I0227 16:33:06.254265 4830 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6d6ca92a-3e98-4628-8936-37032cf27463-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:06 crc kubenswrapper[4830]: I0227 16:33:06.254519 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c6rg2\" (UniqueName: \"kubernetes.io/projected/9e9ca76d-ae40-4258-8f4d-09e15a0d8cd6-kube-api-access-c6rg2\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:06 crc kubenswrapper[4830]: I0227 16:33:06.254534 4830 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6d6ca92a-3e98-4628-8936-37032cf27463-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:06 crc kubenswrapper[4830]: I0227 16:33:06.254544 4830 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6d6ca92a-3e98-4628-8936-37032cf27463-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:06 crc kubenswrapper[4830]: I0227 16:33:06.254555 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9khfg\" (UniqueName: 
\"kubernetes.io/projected/6d6ca92a-3e98-4628-8936-37032cf27463-kube-api-access-9khfg\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:06 crc kubenswrapper[4830]: I0227 16:33:06.254564 4830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21656f50-51b8-4761-8b9e-c2b823dace13-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:06 crc kubenswrapper[4830]: I0227 16:33:06.254575 4830 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:06 crc kubenswrapper[4830]: I0227 16:33:06.254584 4830 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/38b57350-6ca0-4090-876b-7727c983cf52-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:06 crc kubenswrapper[4830]: I0227 16:33:06.262440 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22232c9c-ecf7-443e-834f-ad39b37735b2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "22232c9c-ecf7-443e-834f-ad39b37735b2" (UID: "22232c9c-ecf7-443e-834f-ad39b37735b2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:33:06 crc kubenswrapper[4830]: I0227 16:33:06.299751 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0bee1ae7-32fb-484d-a81a-47fe31e25d70-config-data" (OuterVolumeSpecName: "config-data") pod "0bee1ae7-32fb-484d-a81a-47fe31e25d70" (UID: "0bee1ae7-32fb-484d-a81a-47fe31e25d70"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:33:06 crc kubenswrapper[4830]: I0227 16:33:06.315856 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21656f50-51b8-4761-8b9e-c2b823dace13-vencrypt-tls-certs" (OuterVolumeSpecName: "vencrypt-tls-certs") pod "21656f50-51b8-4761-8b9e-c2b823dace13" (UID: "21656f50-51b8-4761-8b9e-c2b823dace13"). InnerVolumeSpecName "vencrypt-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:33:06 crc kubenswrapper[4830]: I0227 16:33:06.355811 4830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0bee1ae7-32fb-484d-a81a-47fe31e25d70-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:06 crc kubenswrapper[4830]: I0227 16:33:06.355838 4830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22232c9c-ecf7-443e-834f-ad39b37735b2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:06 crc kubenswrapper[4830]: I0227 16:33:06.355848 4830 reconciler_common.go:293] "Volume detached for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/21656f50-51b8-4761-8b9e-c2b823dace13-vencrypt-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:06 crc kubenswrapper[4830]: I0227 16:33:06.406064 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e9ca76d-ae40-4258-8f4d-09e15a0d8cd6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9e9ca76d-ae40-4258-8f4d-09e15a0d8cd6" (UID: "9e9ca76d-ae40-4258-8f4d-09e15a0d8cd6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:33:06 crc kubenswrapper[4830]: I0227 16:33:06.407621 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e9ca76d-ae40-4258-8f4d-09e15a0d8cd6-config-data" (OuterVolumeSpecName: "config-data") pod "9e9ca76d-ae40-4258-8f4d-09e15a0d8cd6" (UID: "9e9ca76d-ae40-4258-8f4d-09e15a0d8cd6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:33:06 crc kubenswrapper[4830]: I0227 16:33:06.410953 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0bee1ae7-32fb-484d-a81a-47fe31e25d70-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0bee1ae7-32fb-484d-a81a-47fe31e25d70" (UID: "0bee1ae7-32fb-484d-a81a-47fe31e25d70"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:33:06 crc kubenswrapper[4830]: I0227 16:33:06.420878 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38b57350-6ca0-4090-876b-7727c983cf52-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "38b57350-6ca0-4090-876b-7727c983cf52" (UID: "38b57350-6ca0-4090-876b-7727c983cf52"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:33:06 crc kubenswrapper[4830]: I0227 16:33:06.425870 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d6ca92a-3e98-4628-8936-37032cf27463-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6d6ca92a-3e98-4628-8936-37032cf27463" (UID: "6d6ca92a-3e98-4628-8936-37032cf27463"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:33:06 crc kubenswrapper[4830]: I0227 16:33:06.444442 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38b57350-6ca0-4090-876b-7727c983cf52-config-data" (OuterVolumeSpecName: "config-data") pod "38b57350-6ca0-4090-876b-7727c983cf52" (UID: "38b57350-6ca0-4090-876b-7727c983cf52"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:33:06 crc kubenswrapper[4830]: I0227 16:33:06.452150 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38b57350-6ca0-4090-876b-7727c983cf52-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "38b57350-6ca0-4090-876b-7727c983cf52" (UID: "38b57350-6ca0-4090-876b-7727c983cf52"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:33:06 crc kubenswrapper[4830]: I0227 16:33:06.452058 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21656f50-51b8-4761-8b9e-c2b823dace13-nova-novncproxy-tls-certs" (OuterVolumeSpecName: "nova-novncproxy-tls-certs") pod "21656f50-51b8-4761-8b9e-c2b823dace13" (UID: "21656f50-51b8-4761-8b9e-c2b823dace13"). InnerVolumeSpecName "nova-novncproxy-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:33:06 crc kubenswrapper[4830]: I0227 16:33:06.457808 4830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e9ca76d-ae40-4258-8f4d-09e15a0d8cd6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:06 crc kubenswrapper[4830]: I0227 16:33:06.457833 4830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38b57350-6ca0-4090-876b-7727c983cf52-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:06 crc kubenswrapper[4830]: I0227 16:33:06.457842 4830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bee1ae7-32fb-484d-a81a-47fe31e25d70-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:06 crc kubenswrapper[4830]: I0227 16:33:06.457852 4830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e9ca76d-ae40-4258-8f4d-09e15a0d8cd6-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:06 crc kubenswrapper[4830]: I0227 16:33:06.457861 4830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d6ca92a-3e98-4628-8936-37032cf27463-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:06 crc kubenswrapper[4830]: I0227 16:33:06.457868 4830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38b57350-6ca0-4090-876b-7727c983cf52-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:06 crc kubenswrapper[4830]: I0227 16:33:06.457877 4830 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/38b57350-6ca0-4090-876b-7727c983cf52-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:06 crc kubenswrapper[4830]: I0227 16:33:06.457885 4830 
reconciler_common.go:293] "Volume detached for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/21656f50-51b8-4761-8b9e-c2b823dace13-nova-novncproxy-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:06 crc kubenswrapper[4830]: I0227 16:33:06.474595 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 27 16:33:06 crc kubenswrapper[4830]: I0227 16:33:06.474968 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b2fe2ad2-a0de-49aa-95fd-ef5f15032676" containerName="proxy-httpd" containerID="cri-o://e377c9fe2c2c4014633d618a399228bda3185620f06415bda5d22e2216dcccee" gracePeriod=30 Feb 27 16:33:06 crc kubenswrapper[4830]: I0227 16:33:06.475104 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b2fe2ad2-a0de-49aa-95fd-ef5f15032676" containerName="sg-core" containerID="cri-o://efb022c64f6ae8ffd2fec27339e107e45b38a12b6d4a8d2858182ad516e6d9f9" gracePeriod=30 Feb 27 16:33:06 crc kubenswrapper[4830]: I0227 16:33:06.475157 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b2fe2ad2-a0de-49aa-95fd-ef5f15032676" containerName="ceilometer-central-agent" containerID="cri-o://17c416fd77703fb7feb38dfb7c6e7aef3b647f80b42763e1c40e7ca828662e25" gracePeriod=30 Feb 27 16:33:06 crc kubenswrapper[4830]: I0227 16:33:06.475189 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b63af300-2b1c-47a7-ae1d-1334deeefdb1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b63af300-2b1c-47a7-ae1d-1334deeefdb1" (UID: "b63af300-2b1c-47a7-ae1d-1334deeefdb1"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:33:06 crc kubenswrapper[4830]: I0227 16:33:06.475242 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b2fe2ad2-a0de-49aa-95fd-ef5f15032676" containerName="ceilometer-notification-agent" containerID="cri-o://72e38d1c2009b64b0066ca1c11420f6777aab9186b8f6d7357f2184e318a87ad" gracePeriod=30 Feb 27 16:33:06 crc kubenswrapper[4830]: I0227 16:33:06.497695 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b63af300-2b1c-47a7-ae1d-1334deeefdb1-galera-tls-certs" (OuterVolumeSpecName: "galera-tls-certs") pod "b63af300-2b1c-47a7-ae1d-1334deeefdb1" (UID: "b63af300-2b1c-47a7-ae1d-1334deeefdb1"). InnerVolumeSpecName "galera-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:33:06 crc kubenswrapper[4830]: I0227 16:33:06.505348 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 27 16:33:06 crc kubenswrapper[4830]: I0227 16:33:06.505611 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="aef23409-e12b-4ef3-a968-f666e5a127ae" containerName="kube-state-metrics" containerID="cri-o://1954751f889385192cc38a0ea54da4d4fbf33340070fa0346fa385af89879ac7" gracePeriod=30 Feb 27 16:33:06 crc kubenswrapper[4830]: I0227 16:33:06.512282 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22232c9c-ecf7-443e-834f-ad39b37735b2-config-data" (OuterVolumeSpecName: "config-data") pod "22232c9c-ecf7-443e-834f-ad39b37735b2" (UID: "22232c9c-ecf7-443e-834f-ad39b37735b2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:33:06 crc kubenswrapper[4830]: I0227 16:33:06.559603 4830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22232c9c-ecf7-443e-834f-ad39b37735b2-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:06 crc kubenswrapper[4830]: I0227 16:33:06.559629 4830 reconciler_common.go:293] "Volume detached for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/b63af300-2b1c-47a7-ae1d-1334deeefdb1-galera-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:06 crc kubenswrapper[4830]: I0227 16:33:06.559639 4830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b63af300-2b1c-47a7-ae1d-1334deeefdb1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:06 crc kubenswrapper[4830]: I0227 16:33:06.563134 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-5e39-account-create-update-r88l6" Feb 27 16:33:06 crc kubenswrapper[4830]: I0227 16:33:06.579040 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-7668-account-create-update-6wj4n" Feb 27 16:33:06 crc kubenswrapper[4830]: I0227 16:33:06.607403 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d6ca92a-3e98-4628-8936-37032cf27463-config-data" (OuterVolumeSpecName: "config-data") pod "6d6ca92a-3e98-4628-8936-37032cf27463" (UID: "6d6ca92a-3e98-4628-8936-37032cf27463"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:33:06 crc kubenswrapper[4830]: I0227 16:33:06.669083 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b8z6k\" (UniqueName: \"kubernetes.io/projected/0ea4ce89-3e8b-4521-9398-3406c6bf0166-kube-api-access-b8z6k\") pod \"0ea4ce89-3e8b-4521-9398-3406c6bf0166\" (UID: \"0ea4ce89-3e8b-4521-9398-3406c6bf0166\") " Feb 27 16:33:06 crc kubenswrapper[4830]: I0227 16:33:06.669340 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ea4ce89-3e8b-4521-9398-3406c6bf0166-operator-scripts\") pod \"0ea4ce89-3e8b-4521-9398-3406c6bf0166\" (UID: \"0ea4ce89-3e8b-4521-9398-3406c6bf0166\") " Feb 27 16:33:06 crc kubenswrapper[4830]: I0227 16:33:06.670056 4830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d6ca92a-3e98-4628-8936-37032cf27463-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:06 crc kubenswrapper[4830]: I0227 16:33:06.670526 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ea4ce89-3e8b-4521-9398-3406c6bf0166-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0ea4ce89-3e8b-4521-9398-3406c6bf0166" (UID: "0ea4ce89-3e8b-4521-9398-3406c6bf0166"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:33:06 crc kubenswrapper[4830]: I0227 16:33:06.670530 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-5550-account-create-update-5hslr"] Feb 27 16:33:06 crc kubenswrapper[4830]: I0227 16:33:06.680027 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-c219-account-create-update-w82r8" Feb 27 16:33:06 crc kubenswrapper[4830]: I0227 16:33:06.691125 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ea4ce89-3e8b-4521-9398-3406c6bf0166-kube-api-access-b8z6k" (OuterVolumeSpecName: "kube-api-access-b8z6k") pod "0ea4ce89-3e8b-4521-9398-3406c6bf0166" (UID: "0ea4ce89-3e8b-4521-9398-3406c6bf0166"). InnerVolumeSpecName "kube-api-access-b8z6k". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:33:06 crc kubenswrapper[4830]: I0227 16:33:06.728043 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/memcached-0"] Feb 27 16:33:06 crc kubenswrapper[4830]: I0227 16:33:06.729538 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/memcached-0" podUID="eb3cdab6-15fa-40e1-a187-e277086227da" containerName="memcached" containerID="cri-o://1d243201cb634428da46e5d01d1c419016026f2c349204898c21d5e7060a1280" gracePeriod=30 Feb 27 16:33:06 crc kubenswrapper[4830]: I0227 16:33:06.746673 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-29fd-account-create-update-st6rb" Feb 27 16:33:06 crc kubenswrapper[4830]: I0227 16:33:06.750402 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-5550-account-create-update-5hslr"] Feb 27 16:33:06 crc kubenswrapper[4830]: I0227 16:33:06.762299 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-776e-account-create-update-kg8tx" Feb 27 16:33:06 crc kubenswrapper[4830]: I0227 16:33:06.785299 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pz2dd\" (UniqueName: \"kubernetes.io/projected/baefaedf-2591-42f2-a383-5c92ae714ab5-kube-api-access-pz2dd\") pod \"baefaedf-2591-42f2-a383-5c92ae714ab5\" (UID: \"baefaedf-2591-42f2-a383-5c92ae714ab5\") " Feb 27 16:33:06 crc kubenswrapper[4830]: I0227 16:33:06.799713 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26018553-1865-499d-9c9b-932807fce26c-operator-scripts\") pod \"26018553-1865-499d-9c9b-932807fce26c\" (UID: \"26018553-1865-499d-9c9b-932807fce26c\") " Feb 27 16:33:06 crc kubenswrapper[4830]: I0227 16:33:06.799844 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/02d5a77c-198f-43aa-96ab-2ac2d76c7743-operator-scripts\") pod \"02d5a77c-198f-43aa-96ab-2ac2d76c7743\" (UID: \"02d5a77c-198f-43aa-96ab-2ac2d76c7743\") " Feb 27 16:33:06 crc kubenswrapper[4830]: I0227 16:33:06.799871 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r6rx9\" (UniqueName: \"kubernetes.io/projected/02d5a77c-198f-43aa-96ab-2ac2d76c7743-kube-api-access-r6rx9\") pod \"02d5a77c-198f-43aa-96ab-2ac2d76c7743\" (UID: \"02d5a77c-198f-43aa-96ab-2ac2d76c7743\") " Feb 27 16:33:06 crc kubenswrapper[4830]: I0227 16:33:06.799908 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wl6m8\" (UniqueName: \"kubernetes.io/projected/26018553-1865-499d-9c9b-932807fce26c-kube-api-access-wl6m8\") pod \"26018553-1865-499d-9c9b-932807fce26c\" (UID: \"26018553-1865-499d-9c9b-932807fce26c\") " Feb 27 16:33:06 crc kubenswrapper[4830]: I0227 16:33:06.799936 4830 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/baefaedf-2591-42f2-a383-5c92ae714ab5-operator-scripts\") pod \"baefaedf-2591-42f2-a383-5c92ae714ab5\" (UID: \"baefaedf-2591-42f2-a383-5c92ae714ab5\") " Feb 27 16:33:06 crc kubenswrapper[4830]: I0227 16:33:06.800046 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3bf3e284-86ae-43b5-9259-6e9e34164de2-operator-scripts\") pod \"3bf3e284-86ae-43b5-9259-6e9e34164de2\" (UID: \"3bf3e284-86ae-43b5-9259-6e9e34164de2\") " Feb 27 16:33:06 crc kubenswrapper[4830]: I0227 16:33:06.800667 4830 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ea4ce89-3e8b-4521-9398-3406c6bf0166-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:06 crc kubenswrapper[4830]: I0227 16:33:06.800683 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b8z6k\" (UniqueName: \"kubernetes.io/projected/0ea4ce89-3e8b-4521-9398-3406c6bf0166-kube-api-access-b8z6k\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:06 crc kubenswrapper[4830]: I0227 16:33:06.801373 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26018553-1865-499d-9c9b-932807fce26c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "26018553-1865-499d-9c9b-932807fce26c" (UID: "26018553-1865-499d-9c9b-932807fce26c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:33:06 crc kubenswrapper[4830]: I0227 16:33:06.801452 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3bf3e284-86ae-43b5-9259-6e9e34164de2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3bf3e284-86ae-43b5-9259-6e9e34164de2" (UID: "3bf3e284-86ae-43b5-9259-6e9e34164de2"). 
InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:33:06 crc kubenswrapper[4830]: I0227 16:33:06.801800 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/baefaedf-2591-42f2-a383-5c92ae714ab5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "baefaedf-2591-42f2-a383-5c92ae714ab5" (UID: "baefaedf-2591-42f2-a383-5c92ae714ab5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:33:06 crc kubenswrapper[4830]: I0227 16:33:06.801866 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="23db3cbd-39ac-4137-8a7e-0533af96e5b1" path="/var/lib/kubelet/pods/23db3cbd-39ac-4137-8a7e-0533af96e5b1/volumes" Feb 27 16:33:06 crc kubenswrapper[4830]: I0227 16:33:06.802155 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02d5a77c-198f-43aa-96ab-2ac2d76c7743-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "02d5a77c-198f-43aa-96ab-2ac2d76c7743" (UID: "02d5a77c-198f-43aa-96ab-2ac2d76c7743"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:33:06 crc kubenswrapper[4830]: I0227 16:33:06.803650 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/baefaedf-2591-42f2-a383-5c92ae714ab5-kube-api-access-pz2dd" (OuterVolumeSpecName: "kube-api-access-pz2dd") pod "baefaedf-2591-42f2-a383-5c92ae714ab5" (UID: "baefaedf-2591-42f2-a383-5c92ae714ab5"). InnerVolumeSpecName "kube-api-access-pz2dd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:33:06 crc kubenswrapper[4830]: I0227 16:33:06.804144 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2eacdbe3-1c63-4811-a7ea-5dc6fd8dce60" path="/var/lib/kubelet/pods/2eacdbe3-1c63-4811-a7ea-5dc6fd8dce60/volumes" Feb 27 16:33:06 crc kubenswrapper[4830]: I0227 16:33:06.804911 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3482e9fb-53ae-4908-87fc-4096c5b26b76" path="/var/lib/kubelet/pods/3482e9fb-53ae-4908-87fc-4096c5b26b76/volumes" Feb 27 16:33:06 crc kubenswrapper[4830]: I0227 16:33:06.806007 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7285a360-7ff1-4e35-b91a-d472a0ee591b" path="/var/lib/kubelet/pods/7285a360-7ff1-4e35-b91a-d472a0ee591b/volumes" Feb 27 16:33:06 crc kubenswrapper[4830]: I0227 16:33:06.806638 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f17706c-2060-4191-b63a-df7dea2c4c95" path="/var/lib/kubelet/pods/9f17706c-2060-4191-b63a-df7dea2c4c95/volumes" Feb 27 16:33:06 crc kubenswrapper[4830]: I0227 16:33:06.807230 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab8f9b5d-9b2e-4cb2-ab5c-392f0bf8ad89" path="/var/lib/kubelet/pods/ab8f9b5d-9b2e-4cb2-ab5c-392f0bf8ad89/volumes" Feb 27 16:33:06 crc kubenswrapper[4830]: I0227 16:33:06.808194 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b64de41e-9e05-48b2-87e5-387aad57532a" path="/var/lib/kubelet/pods/b64de41e-9e05-48b2-87e5-387aad57532a/volumes" Feb 27 16:33:06 crc kubenswrapper[4830]: I0227 16:33:06.810201 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26018553-1865-499d-9c9b-932807fce26c-kube-api-access-wl6m8" (OuterVolumeSpecName: "kube-api-access-wl6m8") pod "26018553-1865-499d-9c9b-932807fce26c" (UID: "26018553-1865-499d-9c9b-932807fce26c"). InnerVolumeSpecName "kube-api-access-wl6m8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:33:06 crc kubenswrapper[4830]: I0227 16:33:06.812673 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02d5a77c-198f-43aa-96ab-2ac2d76c7743-kube-api-access-r6rx9" (OuterVolumeSpecName: "kube-api-access-r6rx9") pod "02d5a77c-198f-43aa-96ab-2ac2d76c7743" (UID: "02d5a77c-198f-43aa-96ab-2ac2d76c7743"). InnerVolumeSpecName "kube-api-access-r6rx9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:33:06 crc kubenswrapper[4830]: I0227 16:33:06.828312 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 27 16:33:06 crc kubenswrapper[4830]: I0227 16:33:06.842220 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-29fd-account-create-update-st6rb" Feb 27 16:33:06 crc kubenswrapper[4830]: I0227 16:33:06.853558 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 27 16:33:06 crc kubenswrapper[4830]: I0227 16:33:06.866519 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-776e-account-create-update-kg8tx" Feb 27 16:33:06 crc kubenswrapper[4830]: I0227 16:33:06.875304 4830 generic.go:334] "Generic (PLEG): container finished" podID="aef23409-e12b-4ef3-a968-f666e5a127ae" containerID="1954751f889385192cc38a0ea54da4d4fbf33340070fa0346fa385af89879ac7" exitCode=2 Feb 27 16:33:07 crc kubenswrapper[4830]: I0227 16:33:06.899839 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-7668-account-create-update-6wj4n" Feb 27 16:33:07 crc kubenswrapper[4830]: I0227 16:33:06.901700 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pncxt\" (UniqueName: \"kubernetes.io/projected/3bf3e284-86ae-43b5-9259-6e9e34164de2-kube-api-access-pncxt\") pod \"3bf3e284-86ae-43b5-9259-6e9e34164de2\" (UID: \"3bf3e284-86ae-43b5-9259-6e9e34164de2\") " Feb 27 16:33:07 crc kubenswrapper[4830]: I0227 16:33:06.901972 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pz2dd\" (UniqueName: \"kubernetes.io/projected/baefaedf-2591-42f2-a383-5c92ae714ab5-kube-api-access-pz2dd\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:07 crc kubenswrapper[4830]: I0227 16:33:06.901984 4830 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26018553-1865-499d-9c9b-932807fce26c-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:07 crc kubenswrapper[4830]: I0227 16:33:06.901994 4830 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/02d5a77c-198f-43aa-96ab-2ac2d76c7743-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:07 crc kubenswrapper[4830]: I0227 16:33:06.902002 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r6rx9\" (UniqueName: \"kubernetes.io/projected/02d5a77c-198f-43aa-96ab-2ac2d76c7743-kube-api-access-r6rx9\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:07 crc kubenswrapper[4830]: I0227 16:33:06.902011 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wl6m8\" (UniqueName: \"kubernetes.io/projected/26018553-1865-499d-9c9b-932807fce26c-kube-api-access-wl6m8\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:07 crc kubenswrapper[4830]: I0227 16:33:06.902020 4830 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/baefaedf-2591-42f2-a383-5c92ae714ab5-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:07 crc kubenswrapper[4830]: I0227 16:33:06.902029 4830 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3bf3e284-86ae-43b5-9259-6e9e34164de2-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:07 crc kubenswrapper[4830]: I0227 16:33:06.905109 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-c219-account-create-update-w82r8" Feb 27 16:33:07 crc kubenswrapper[4830]: I0227 16:33:06.910103 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3bf3e284-86ae-43b5-9259-6e9e34164de2-kube-api-access-pncxt" (OuterVolumeSpecName: "kube-api-access-pncxt") pod "3bf3e284-86ae-43b5-9259-6e9e34164de2" (UID: "3bf3e284-86ae-43b5-9259-6e9e34164de2"). InnerVolumeSpecName "kube-api-access-pncxt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:33:07 crc kubenswrapper[4830]: I0227 16:33:06.911866 4830 generic.go:334] "Generic (PLEG): container finished" podID="09849d6c-7457-4130-9074-73154d22af1f" containerID="3c3ffaf742258d5543939f307e4df804a0b02c0397303e259d28b6fddcbd5115" exitCode=1 Feb 27 16:33:07 crc kubenswrapper[4830]: I0227 16:33:06.927019 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="41fafe33-b43b-4dcb-9edd-b365d0749e10" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.173:8776/healthcheck\": read tcp 10.217.0.2:57068->10.217.0.173:8776: read: connection reset by peer" Feb 27 16:33:07 crc kubenswrapper[4830]: I0227 16:33:06.934143 4830 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." 
pod="openstack/root-account-create-update-lx5sm" secret="" err="secret \"galera-openstack-dockercfg-jd86w\" not found" Feb 27 16:33:07 crc kubenswrapper[4830]: I0227 16:33:06.934192 4830 scope.go:117] "RemoveContainer" containerID="3c3ffaf742258d5543939f307e4df804a0b02c0397303e259d28b6fddcbd5115" Feb 27 16:33:07 crc kubenswrapper[4830]: E0227 16:33:06.934530 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mariadb-account-create-update pod=root-account-create-update-lx5sm_openstack(09849d6c-7457-4130-9074-73154d22af1f)\"" pod="openstack/root-account-create-update-lx5sm" podUID="09849d6c-7457-4130-9074-73154d22af1f" Feb 27 16:33:07 crc kubenswrapper[4830]: I0227 16:33:06.938249 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 27 16:33:07 crc kubenswrapper[4830]: I0227 16:33:06.941347 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-5e39-account-create-update-r88l6" Feb 27 16:33:07 crc kubenswrapper[4830]: I0227 16:33:06.947805 4830 generic.go:334] "Generic (PLEG): container finished" podID="b2fe2ad2-a0de-49aa-95fd-ef5f15032676" containerID="e377c9fe2c2c4014633d618a399228bda3185620f06415bda5d22e2216dcccee" exitCode=0 Feb 27 16:33:07 crc kubenswrapper[4830]: I0227 16:33:06.947834 4830 generic.go:334] "Generic (PLEG): container finished" podID="b2fe2ad2-a0de-49aa-95fd-ef5f15032676" containerID="efb022c64f6ae8ffd2fec27339e107e45b38a12b6d4a8d2858182ad516e6d9f9" exitCode=2 Feb 27 16:33:07 crc kubenswrapper[4830]: I0227 16:33:06.947906 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 27 16:33:07 crc kubenswrapper[4830]: I0227 16:33:06.948635 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-c6f44c475-twbzz" Feb 27 16:33:07 crc kubenswrapper[4830]: I0227 16:33:06.949354 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-948fdb9cd-ncm6f" Feb 27 16:33:07 crc kubenswrapper[4830]: I0227 16:33:06.950142 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 27 16:33:07 crc kubenswrapper[4830]: I0227 16:33:07.004182 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pncxt\" (UniqueName: \"kubernetes.io/projected/3bf3e284-86ae-43b5-9259-6e9e34164de2-kube-api-access-pncxt\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:07 crc kubenswrapper[4830]: E0227 16:33:07.108706 4830 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Feb 27 16:33:07 crc kubenswrapper[4830]: E0227 16:33:07.109592 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/09849d6c-7457-4130-9074-73154d22af1f-operator-scripts podName:09849d6c-7457-4130-9074-73154d22af1f nodeName:}" failed. No retries permitted until 2026-02-27 16:33:07.609556828 +0000 UTC m=+1583.698829291 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/09849d6c-7457-4130-9074-73154d22af1f-operator-scripts") pod "root-account-create-update-lx5sm" (UID: "09849d6c-7457-4130-9074-73154d22af1f") : configmap "openstack-scripts" not found Feb 27 16:33:07 crc kubenswrapper[4830]: I0227 16:33:07.237519 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="f4280aaf-817d-41e1-9867-715359ae322e" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.215:8775/\": read tcp 10.217.0.2:60994->10.217.0.215:8775: read: connection reset by peer" Feb 27 16:33:07 crc kubenswrapper[4830]: I0227 16:33:07.237643 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="f4280aaf-817d-41e1-9867-715359ae322e" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.215:8775/\": read tcp 10.217.0.2:60998->10.217.0.215:8775: read: connection reset by peer" Feb 27 16:33:07 crc kubenswrapper[4830]: E0227 16:33:07.617569 4830 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Feb 27 16:33:07 crc kubenswrapper[4830]: E0227 16:33:07.617652 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/09849d6c-7457-4130-9074-73154d22af1f-operator-scripts podName:09849d6c-7457-4130-9074-73154d22af1f nodeName:}" failed. No retries permitted until 2026-02-27 16:33:08.617635611 +0000 UTC m=+1584.706908074 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/09849d6c-7457-4130-9074-73154d22af1f-operator-scripts") pod "root-account-create-update-lx5sm" (UID: "09849d6c-7457-4130-9074-73154d22af1f") : configmap "openstack-scripts" not found Feb 27 16:33:07 crc kubenswrapper[4830]: I0227 16:33:07.733726 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-5d54db5966-xcg7l" podUID="a234743b-8983-4a60-bbb4-59ad823b83e2" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.174:9311/healthcheck\": read tcp 10.217.0.2:52586->10.217.0.174:9311: read: connection reset by peer" Feb 27 16:33:07 crc kubenswrapper[4830]: I0227 16:33:07.733765 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-5d54db5966-xcg7l" podUID="a234743b-8983-4a60-bbb4-59ad823b83e2" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.174:9311/healthcheck\": read tcp 10.217.0.2:52580->10.217.0.174:9311: read: connection reset by peer" Feb 27 16:33:07 crc kubenswrapper[4830]: E0227 16:33:07.871403 4830 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.109s" Feb 27 16:33:07 crc kubenswrapper[4830]: I0227 16:33:07.872165 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"6d6ca92a-3e98-4628-8936-37032cf27463","Type":"ContainerDied","Data":"4da009fb0492324153cae8f54222ba75d4387ebdc9243a5ad16174a4cceea6c4"} Feb 27 16:33:07 crc kubenswrapper[4830]: I0227 16:33:07.872214 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-29fd-account-create-update-st6rb" event={"ID":"02d5a77c-198f-43aa-96ab-2ac2d76c7743","Type":"ContainerDied","Data":"2a78a24aca3c495b88132f5ada2bcab911a1923c10d4ae73e4167ccf4db89ab5"} Feb 27 16:33:07 crc kubenswrapper[4830]: I0227 16:33:07.872227 4830 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9e9ca76d-ae40-4258-8f4d-09e15a0d8cd6","Type":"ContainerDied","Data":"c29626b40606fd93d793caacbd2f1f3be72535bb9cd73efe02a55861642ccc13"} Feb 27 16:33:07 crc kubenswrapper[4830]: I0227 16:33:07.872241 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-776e-account-create-update-kg8tx" event={"ID":"3bf3e284-86ae-43b5-9259-6e9e34164de2","Type":"ContainerDied","Data":"80f4f51520c519c3de9df8d87842927e1bd643af1040ef8f9b7a66b5dbb693dd"} Feb 27 16:33:07 crc kubenswrapper[4830]: I0227 16:33:07.872252 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"aef23409-e12b-4ef3-a968-f666e5a127ae","Type":"ContainerDied","Data":"1954751f889385192cc38a0ea54da4d4fbf33340070fa0346fa385af89879ac7"} Feb 27 16:33:07 crc kubenswrapper[4830]: I0227 16:33:07.872291 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-w22r8"] Feb 27 16:33:07 crc kubenswrapper[4830]: I0227 16:33:07.872305 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-7668-account-create-update-6wj4n" event={"ID":"baefaedf-2591-42f2-a383-5c92ae714ab5","Type":"ContainerDied","Data":"ec3d7793089e10f729c7992cd209dd73a5c94a3645fa8c7225222d7a1b49296c"} Feb 27 16:33:07 crc kubenswrapper[4830]: I0227 16:33:07.872316 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-c219-account-create-update-w82r8" event={"ID":"26018553-1865-499d-9c9b-932807fce26c","Type":"ContainerDied","Data":"76302f851e819e104625ea773eb0ded4d208f9f8a737d1b60b64998f743c7510"} Feb 27 16:33:07 crc kubenswrapper[4830]: I0227 16:33:07.872331 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-w22r8"] Feb 27 16:33:07 crc kubenswrapper[4830]: I0227 16:33:07.872363 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-lx5sm" 
event={"ID":"09849d6c-7457-4130-9074-73154d22af1f","Type":"ContainerDied","Data":"3c3ffaf742258d5543939f307e4df804a0b02c0397303e259d28b6fddcbd5115"} Feb 27 16:33:07 crc kubenswrapper[4830]: I0227 16:33:07.872378 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-5550-account-create-update-q76l4"] Feb 27 16:33:07 crc kubenswrapper[4830]: I0227 16:33:07.872490 4830 scope.go:117] "RemoveContainer" containerID="08dae26c7de73c784a1c4cdf01a2ec48ed79b52c6c16691dcb728b190ce0bde0" Feb 27 16:33:07 crc kubenswrapper[4830]: E0227 16:33:07.874486 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38b57350-6ca0-4090-876b-7727c983cf52" containerName="proxy-httpd" Feb 27 16:33:07 crc kubenswrapper[4830]: I0227 16:33:07.874512 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="38b57350-6ca0-4090-876b-7727c983cf52" containerName="proxy-httpd" Feb 27 16:33:07 crc kubenswrapper[4830]: E0227 16:33:07.874525 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23db3cbd-39ac-4137-8a7e-0533af96e5b1" containerName="dnsmasq-dns" Feb 27 16:33:07 crc kubenswrapper[4830]: I0227 16:33:07.874531 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="23db3cbd-39ac-4137-8a7e-0533af96e5b1" containerName="dnsmasq-dns" Feb 27 16:33:07 crc kubenswrapper[4830]: E0227 16:33:07.874545 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38b57350-6ca0-4090-876b-7727c983cf52" containerName="proxy-server" Feb 27 16:33:07 crc kubenswrapper[4830]: I0227 16:33:07.874552 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="38b57350-6ca0-4090-876b-7727c983cf52" containerName="proxy-server" Feb 27 16:33:07 crc kubenswrapper[4830]: E0227 16:33:07.874563 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7285a360-7ff1-4e35-b91a-d472a0ee591b" containerName="ovsdbserver-sb" Feb 27 16:33:07 crc kubenswrapper[4830]: I0227 16:33:07.874569 4830 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="7285a360-7ff1-4e35-b91a-d472a0ee591b" containerName="ovsdbserver-sb" Feb 27 16:33:07 crc kubenswrapper[4830]: E0227 16:33:07.874579 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e9ca76d-ae40-4258-8f4d-09e15a0d8cd6" containerName="nova-scheduler-scheduler" Feb 27 16:33:07 crc kubenswrapper[4830]: I0227 16:33:07.874584 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e9ca76d-ae40-4258-8f4d-09e15a0d8cd6" containerName="nova-scheduler-scheduler" Feb 27 16:33:07 crc kubenswrapper[4830]: E0227 16:33:07.874593 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bee1ae7-32fb-484d-a81a-47fe31e25d70" containerName="nova-cell0-conductor-conductor" Feb 27 16:33:07 crc kubenswrapper[4830]: I0227 16:33:07.874599 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bee1ae7-32fb-484d-a81a-47fe31e25d70" containerName="nova-cell0-conductor-conductor" Feb 27 16:33:07 crc kubenswrapper[4830]: E0227 16:33:07.874625 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f17706c-2060-4191-b63a-df7dea2c4c95" containerName="openstack-network-exporter" Feb 27 16:33:07 crc kubenswrapper[4830]: I0227 16:33:07.874631 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f17706c-2060-4191-b63a-df7dea2c4c95" containerName="openstack-network-exporter" Feb 27 16:33:07 crc kubenswrapper[4830]: E0227 16:33:07.874640 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d6ca92a-3e98-4628-8936-37032cf27463" containerName="cinder-scheduler" Feb 27 16:33:07 crc kubenswrapper[4830]: I0227 16:33:07.874646 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d6ca92a-3e98-4628-8936-37032cf27463" containerName="cinder-scheduler" Feb 27 16:33:07 crc kubenswrapper[4830]: E0227 16:33:07.874654 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b63af300-2b1c-47a7-ae1d-1334deeefdb1" containerName="galera" Feb 27 16:33:07 crc kubenswrapper[4830]: I0227 16:33:07.874659 
4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="b63af300-2b1c-47a7-ae1d-1334deeefdb1" containerName="galera" Feb 27 16:33:07 crc kubenswrapper[4830]: E0227 16:33:07.874667 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d6ca92a-3e98-4628-8936-37032cf27463" containerName="probe" Feb 27 16:33:07 crc kubenswrapper[4830]: I0227 16:33:07.874673 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d6ca92a-3e98-4628-8936-37032cf27463" containerName="probe" Feb 27 16:33:07 crc kubenswrapper[4830]: E0227 16:33:07.874682 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b63af300-2b1c-47a7-ae1d-1334deeefdb1" containerName="mysql-bootstrap" Feb 27 16:33:07 crc kubenswrapper[4830]: I0227 16:33:07.874687 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="b63af300-2b1c-47a7-ae1d-1334deeefdb1" containerName="mysql-bootstrap" Feb 27 16:33:07 crc kubenswrapper[4830]: E0227 16:33:07.874698 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22232c9c-ecf7-443e-834f-ad39b37735b2" containerName="barbican-keystone-listener-log" Feb 27 16:33:07 crc kubenswrapper[4830]: I0227 16:33:07.874703 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="22232c9c-ecf7-443e-834f-ad39b37735b2" containerName="barbican-keystone-listener-log" Feb 27 16:33:07 crc kubenswrapper[4830]: E0227 16:33:07.874712 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7285a360-7ff1-4e35-b91a-d472a0ee591b" containerName="openstack-network-exporter" Feb 27 16:33:07 crc kubenswrapper[4830]: I0227 16:33:07.874718 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="7285a360-7ff1-4e35-b91a-d472a0ee591b" containerName="openstack-network-exporter" Feb 27 16:33:07 crc kubenswrapper[4830]: E0227 16:33:07.874731 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2eacdbe3-1c63-4811-a7ea-5dc6fd8dce60" containerName="ovn-controller" Feb 27 16:33:07 crc kubenswrapper[4830]: I0227 
16:33:07.874737 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="2eacdbe3-1c63-4811-a7ea-5dc6fd8dce60" containerName="ovn-controller" Feb 27 16:33:07 crc kubenswrapper[4830]: E0227 16:33:07.874745 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f17706c-2060-4191-b63a-df7dea2c4c95" containerName="ovsdbserver-nb" Feb 27 16:33:07 crc kubenswrapper[4830]: I0227 16:33:07.874753 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f17706c-2060-4191-b63a-df7dea2c4c95" containerName="ovsdbserver-nb" Feb 27 16:33:07 crc kubenswrapper[4830]: E0227 16:33:07.874760 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21656f50-51b8-4761-8b9e-c2b823dace13" containerName="nova-cell1-novncproxy-novncproxy" Feb 27 16:33:07 crc kubenswrapper[4830]: I0227 16:33:07.874766 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="21656f50-51b8-4761-8b9e-c2b823dace13" containerName="nova-cell1-novncproxy-novncproxy" Feb 27 16:33:07 crc kubenswrapper[4830]: E0227 16:33:07.874773 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23db3cbd-39ac-4137-8a7e-0533af96e5b1" containerName="init" Feb 27 16:33:07 crc kubenswrapper[4830]: I0227 16:33:07.874779 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="23db3cbd-39ac-4137-8a7e-0533af96e5b1" containerName="init" Feb 27 16:33:07 crc kubenswrapper[4830]: E0227 16:33:07.874789 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22232c9c-ecf7-443e-834f-ad39b37735b2" containerName="barbican-keystone-listener" Feb 27 16:33:07 crc kubenswrapper[4830]: I0227 16:33:07.874796 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="22232c9c-ecf7-443e-834f-ad39b37735b2" containerName="barbican-keystone-listener" Feb 27 16:33:07 crc kubenswrapper[4830]: E0227 16:33:07.874805 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b64de41e-9e05-48b2-87e5-387aad57532a" containerName="openstack-network-exporter" Feb 27 16:33:07 crc 
kubenswrapper[4830]: I0227 16:33:07.874810 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="b64de41e-9e05-48b2-87e5-387aad57532a" containerName="openstack-network-exporter" Feb 27 16:33:07 crc kubenswrapper[4830]: I0227 16:33:07.874988 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="2eacdbe3-1c63-4811-a7ea-5dc6fd8dce60" containerName="ovn-controller" Feb 27 16:33:07 crc kubenswrapper[4830]: I0227 16:33:07.874996 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d6ca92a-3e98-4628-8936-37032cf27463" containerName="cinder-scheduler" Feb 27 16:33:07 crc kubenswrapper[4830]: I0227 16:33:07.875007 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="22232c9c-ecf7-443e-834f-ad39b37735b2" containerName="barbican-keystone-listener-log" Feb 27 16:33:07 crc kubenswrapper[4830]: I0227 16:33:07.875018 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="0bee1ae7-32fb-484d-a81a-47fe31e25d70" containerName="nova-cell0-conductor-conductor" Feb 27 16:33:07 crc kubenswrapper[4830]: I0227 16:33:07.875024 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e9ca76d-ae40-4258-8f4d-09e15a0d8cd6" containerName="nova-scheduler-scheduler" Feb 27 16:33:07 crc kubenswrapper[4830]: I0227 16:33:07.875034 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="7285a360-7ff1-4e35-b91a-d472a0ee591b" containerName="ovsdbserver-sb" Feb 27 16:33:07 crc kubenswrapper[4830]: I0227 16:33:07.875044 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="22232c9c-ecf7-443e-834f-ad39b37735b2" containerName="barbican-keystone-listener" Feb 27 16:33:07 crc kubenswrapper[4830]: I0227 16:33:07.875053 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="21656f50-51b8-4761-8b9e-c2b823dace13" containerName="nova-cell1-novncproxy-novncproxy" Feb 27 16:33:07 crc kubenswrapper[4830]: I0227 16:33:07.875062 4830 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="6d6ca92a-3e98-4628-8936-37032cf27463" containerName="probe" Feb 27 16:33:07 crc kubenswrapper[4830]: I0227 16:33:07.875070 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f17706c-2060-4191-b63a-df7dea2c4c95" containerName="openstack-network-exporter" Feb 27 16:33:07 crc kubenswrapper[4830]: I0227 16:33:07.875078 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="38b57350-6ca0-4090-876b-7727c983cf52" containerName="proxy-httpd" Feb 27 16:33:07 crc kubenswrapper[4830]: I0227 16:33:07.875085 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="38b57350-6ca0-4090-876b-7727c983cf52" containerName="proxy-server" Feb 27 16:33:07 crc kubenswrapper[4830]: I0227 16:33:07.875091 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="b64de41e-9e05-48b2-87e5-387aad57532a" containerName="openstack-network-exporter" Feb 27 16:33:07 crc kubenswrapper[4830]: I0227 16:33:07.875100 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="23db3cbd-39ac-4137-8a7e-0533af96e5b1" containerName="dnsmasq-dns" Feb 27 16:33:07 crc kubenswrapper[4830]: I0227 16:33:07.875108 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="7285a360-7ff1-4e35-b91a-d472a0ee591b" containerName="openstack-network-exporter" Feb 27 16:33:07 crc kubenswrapper[4830]: I0227 16:33:07.875118 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f17706c-2060-4191-b63a-df7dea2c4c95" containerName="ovsdbserver-nb" Feb 27 16:33:07 crc kubenswrapper[4830]: I0227 16:33:07.875128 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="b63af300-2b1c-47a7-ae1d-1334deeefdb1" containerName="galera" Feb 27 16:33:07 crc kubenswrapper[4830]: I0227 16:33:07.875601 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" 
event={"ID":"b63af300-2b1c-47a7-ae1d-1334deeefdb1","Type":"ContainerDied","Data":"b7a67994406dc1ea6f1f20f4e7e5d5e87710cb482538e09640f4bf18261843b5"} Feb 27 16:33:07 crc kubenswrapper[4830]: I0227 16:33:07.875858 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-5550-account-create-update-q76l4"] Feb 27 16:33:07 crc kubenswrapper[4830]: I0227 16:33:07.875872 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-wnx5p"] Feb 27 16:33:07 crc kubenswrapper[4830]: I0227 16:33:07.875884 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-wnx5p"] Feb 27 16:33:07 crc kubenswrapper[4830]: I0227 16:33:07.875897 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-5e39-account-create-update-r88l6" event={"ID":"0ea4ce89-3e8b-4521-9398-3406c6bf0166","Type":"ContainerDied","Data":"00714c25a3ee5e8c8c745d06eedcdf99fc1e1beb99a405aeea022738ca2f8051"} Feb 27 16:33:07 crc kubenswrapper[4830]: I0227 16:33:07.875908 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b2fe2ad2-a0de-49aa-95fd-ef5f15032676","Type":"ContainerDied","Data":"e377c9fe2c2c4014633d618a399228bda3185620f06415bda5d22e2216dcccee"} Feb 27 16:33:07 crc kubenswrapper[4830]: I0227 16:33:07.875925 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-6b747d769f-z82kl"] Feb 27 16:33:07 crc kubenswrapper[4830]: I0227 16:33:07.875938 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b2fe2ad2-a0de-49aa-95fd-ef5f15032676","Type":"ContainerDied","Data":"efb022c64f6ae8ffd2fec27339e107e45b38a12b6d4a8d2858182ad516e6d9f9"} Feb 27 16:33:07 crc kubenswrapper[4830]: I0227 16:33:07.875974 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-galera-0"] Feb 27 16:33:07 crc kubenswrapper[4830]: I0227 16:33:07.875990 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/root-account-create-update-lx5sm"] Feb 27 16:33:07 crc kubenswrapper[4830]: I0227 16:33:07.876004 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-mz2rm"] Feb 27 16:33:07 crc kubenswrapper[4830]: I0227 16:33:07.875872 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-5550-account-create-update-q76l4" Feb 27 16:33:07 crc kubenswrapper[4830]: I0227 16:33:07.876016 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-mz2rm"] Feb 27 16:33:07 crc kubenswrapper[4830]: I0227 16:33:07.876082 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-5550-account-create-update-q76l4"] Feb 27 16:33:07 crc kubenswrapper[4830]: I0227 16:33:07.876129 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/keystone-6b747d769f-z82kl" podUID="28316ca0-eb95-47b0-bc7e-d31591facdc5" containerName="keystone-api" containerID="cri-o://0222fc9c68ebb7ebbcbccfa2809183acfbfef310f1d1faa28bd88a72fb86cf67" gracePeriod=30 Feb 27 16:33:07 crc kubenswrapper[4830]: E0227 16:33:07.877480 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-vn7kc operator-scripts], unattached volumes=[], failed to process volumes=[kube-api-access-vn7kc operator-scripts]: context canceled" pod="openstack/keystone-5550-account-create-update-q76l4" podUID="69771028-c356-4cfb-9f0b-30f67d320657" Feb 27 16:33:07 crc kubenswrapper[4830]: I0227 16:33:07.925599 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="47514135-95a6-4b77-815a-ebf23a3cab82" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.104:5671: connect: connection refused" Feb 27 16:33:07 crc kubenswrapper[4830]: I0227 16:33:07.966127 4830 generic.go:334] "Generic (PLEG): container finished" podID="f4baf4d8-24c9-4aa8-b72e-9d6d9cdd5f32" 
containerID="3bd476206784383c2fbe0db210deee00da003f513b1f05dcbc55ea33c264c212" exitCode=0 Feb 27 16:33:07 crc kubenswrapper[4830]: I0227 16:33:07.966204 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-58c49587-cz4f5" event={"ID":"f4baf4d8-24c9-4aa8-b72e-9d6d9cdd5f32","Type":"ContainerDied","Data":"3bd476206784383c2fbe0db210deee00da003f513b1f05dcbc55ea33c264c212"} Feb 27 16:33:07 crc kubenswrapper[4830]: I0227 16:33:07.966229 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-58c49587-cz4f5" event={"ID":"f4baf4d8-24c9-4aa8-b72e-9d6d9cdd5f32","Type":"ContainerDied","Data":"847d19249a348581377717aa03626cf8ed77cb6a659d9e8fa65b56a85e33ea72"} Feb 27 16:33:07 crc kubenswrapper[4830]: I0227 16:33:07.966240 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="847d19249a348581377717aa03626cf8ed77cb6a659d9e8fa65b56a85e33ea72" Feb 27 16:33:07 crc kubenswrapper[4830]: I0227 16:33:07.980541 4830 generic.go:334] "Generic (PLEG): container finished" podID="b2fe2ad2-a0de-49aa-95fd-ef5f15032676" containerID="72e38d1c2009b64b0066ca1c11420f6777aab9186b8f6d7357f2184e318a87ad" exitCode=0 Feb 27 16:33:07 crc kubenswrapper[4830]: I0227 16:33:07.980584 4830 generic.go:334] "Generic (PLEG): container finished" podID="b2fe2ad2-a0de-49aa-95fd-ef5f15032676" containerID="17c416fd77703fb7feb38dfb7c6e7aef3b647f80b42763e1c40e7ca828662e25" exitCode=0 Feb 27 16:33:07 crc kubenswrapper[4830]: I0227 16:33:07.980677 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b2fe2ad2-a0de-49aa-95fd-ef5f15032676","Type":"ContainerDied","Data":"72e38d1c2009b64b0066ca1c11420f6777aab9186b8f6d7357f2184e318a87ad"} Feb 27 16:33:07 crc kubenswrapper[4830]: I0227 16:33:07.980716 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"b2fe2ad2-a0de-49aa-95fd-ef5f15032676","Type":"ContainerDied","Data":"17c416fd77703fb7feb38dfb7c6e7aef3b647f80b42763e1c40e7ca828662e25"} Feb 27 16:33:07 crc kubenswrapper[4830]: I0227 16:33:07.980731 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b2fe2ad2-a0de-49aa-95fd-ef5f15032676","Type":"ContainerDied","Data":"51dd486163d05319c102306b662f11e5d7f037407786a09d627e9ddb61b01f59"} Feb 27 16:33:07 crc kubenswrapper[4830]: I0227 16:33:07.980743 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="51dd486163d05319c102306b662f11e5d7f037407786a09d627e9ddb61b01f59" Feb 27 16:33:07 crc kubenswrapper[4830]: I0227 16:33:07.991326 4830 generic.go:334] "Generic (PLEG): container finished" podID="41fafe33-b43b-4dcb-9edd-b365d0749e10" containerID="9f254100c8c027338b42ed369be0ddd72af937c9d87a9a808607f1dcc876c8ed" exitCode=0 Feb 27 16:33:07 crc kubenswrapper[4830]: I0227 16:33:07.991400 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"41fafe33-b43b-4dcb-9edd-b365d0749e10","Type":"ContainerDied","Data":"9f254100c8c027338b42ed369be0ddd72af937c9d87a9a808607f1dcc876c8ed"} Feb 27 16:33:07 crc kubenswrapper[4830]: I0227 16:33:07.991431 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"41fafe33-b43b-4dcb-9edd-b365d0749e10","Type":"ContainerDied","Data":"c0d68aa16ecc6706ff17d105de79e65dbea1f9fef4f144b1b02f5ecb8a6a999e"} Feb 27 16:33:07 crc kubenswrapper[4830]: I0227 16:33:07.991447 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c0d68aa16ecc6706ff17d105de79e65dbea1f9fef4f144b1b02f5ecb8a6a999e" Feb 27 16:33:07 crc kubenswrapper[4830]: E0227 16:33:07.991517 4830 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
6dd6bce0125fbdcdd86f5eb9074b21ac3c2ba5ef9063bcabe41f111d338472f5 is running failed: container process not found" containerID="6dd6bce0125fbdcdd86f5eb9074b21ac3c2ba5ef9063bcabe41f111d338472f5" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Feb 27 16:33:07 crc kubenswrapper[4830]: E0227 16:33:07.991855 4830 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6dd6bce0125fbdcdd86f5eb9074b21ac3c2ba5ef9063bcabe41f111d338472f5 is running failed: container process not found" containerID="6dd6bce0125fbdcdd86f5eb9074b21ac3c2ba5ef9063bcabe41f111d338472f5" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Feb 27 16:33:07 crc kubenswrapper[4830]: E0227 16:33:07.992056 4830 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6dd6bce0125fbdcdd86f5eb9074b21ac3c2ba5ef9063bcabe41f111d338472f5 is running failed: container process not found" containerID="6dd6bce0125fbdcdd86f5eb9074b21ac3c2ba5ef9063bcabe41f111d338472f5" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Feb 27 16:33:07 crc kubenswrapper[4830]: E0227 16:33:07.992083 4830 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6dd6bce0125fbdcdd86f5eb9074b21ac3c2ba5ef9063bcabe41f111d338472f5 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-qt6mr" podUID="bc737ee4-d87c-4276-a6d1-6f3144879542" containerName="ovsdb-server" Feb 27 16:33:07 crc kubenswrapper[4830]: I0227 16:33:07.992726 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"aef23409-e12b-4ef3-a968-f666e5a127ae","Type":"ContainerDied","Data":"936ad490bb55603d661c0e2ce4fe785a6cf5df1c8aaad0883b862facf2e9c797"} Feb 27 16:33:07 crc 
kubenswrapper[4830]: I0227 16:33:07.992746 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="936ad490bb55603d661c0e2ce4fe785a6cf5df1c8aaad0883b862facf2e9c797" Feb 27 16:33:07 crc kubenswrapper[4830]: I0227 16:33:07.998607 4830 generic.go:334] "Generic (PLEG): container finished" podID="d8d4cd44-9972-445e-bac3-63441b6fa4cc" containerID="7b743cc093d9cd3e5deb61678bf56225726f2ee5f6b916d24acb306d92c0ebc6" exitCode=0 Feb 27 16:33:07 crc kubenswrapper[4830]: I0227 16:33:07.998646 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d8d4cd44-9972-445e-bac3-63441b6fa4cc","Type":"ContainerDied","Data":"7b743cc093d9cd3e5deb61678bf56225726f2ee5f6b916d24acb306d92c0ebc6"} Feb 27 16:33:07 crc kubenswrapper[4830]: I0227 16:33:07.998676 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d8d4cd44-9972-445e-bac3-63441b6fa4cc","Type":"ContainerDied","Data":"f6b559b33a9c41bfd5e5daf5942e8e99f985853b1767f0c655d4bc26524a9085"} Feb 27 16:33:07 crc kubenswrapper[4830]: I0227 16:33:07.998688 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f6b559b33a9c41bfd5e5daf5942e8e99f985853b1767f0c655d4bc26524a9085" Feb 27 16:33:08 crc kubenswrapper[4830]: E0227 16:33:08.011337 4830 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4a66491103ecf784427f1721d3379810efd654720c7344f0ebbf5e10bc7a1b3d" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Feb 27 16:33:08 crc kubenswrapper[4830]: E0227 16:33:08.013042 4830 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="4a66491103ecf784427f1721d3379810efd654720c7344f0ebbf5e10bc7a1b3d" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.013210 4830 generic.go:334] "Generic (PLEG): container finished" podID="73fa27e0-b59d-44b0-8648-7e696f71cd61" containerID="a5137475aad41fb8eb7b0a7b72def6633e3820a0b964c9cad287965ce3680cca" exitCode=0 Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.013276 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"73fa27e0-b59d-44b0-8648-7e696f71cd61","Type":"ContainerDied","Data":"a5137475aad41fb8eb7b0a7b72def6633e3820a0b964c9cad287965ce3680cca"} Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.013314 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"73fa27e0-b59d-44b0-8648-7e696f71cd61","Type":"ContainerDied","Data":"de43a7a66c7c10082a14de9a23a6b16f51cafd5a47a4318321033a2e89b70b49"} Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.013330 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="de43a7a66c7c10082a14de9a23a6b16f51cafd5a47a4318321033a2e89b70b49" Feb 27 16:33:08 crc kubenswrapper[4830]: E0227 16:33:08.016097 4830 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4a66491103ecf784427f1721d3379810efd654720c7344f0ebbf5e10bc7a1b3d" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Feb 27 16:33:08 crc kubenswrapper[4830]: E0227 16:33:08.016154 4830 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-qt6mr" podUID="bc737ee4-d87c-4276-a6d1-6f3144879542" 
containerName="ovs-vswitchd" Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.018745 4830 generic.go:334] "Generic (PLEG): container finished" podID="f4280aaf-817d-41e1-9867-715359ae322e" containerID="67f705d66ad4d26d1a66a751f763fac473304bb8b591b54c2c0c497cc8ee46c6" exitCode=0 Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.018806 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f4280aaf-817d-41e1-9867-715359ae322e","Type":"ContainerDied","Data":"67f705d66ad4d26d1a66a751f763fac473304bb8b591b54c2c0c497cc8ee46c6"} Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.018828 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f4280aaf-817d-41e1-9867-715359ae322e","Type":"ContainerDied","Data":"b40b50dc0c3eb8a1f90824340053269054b596dd3826d38c5c351f59aca76b6f"} Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.018840 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b40b50dc0c3eb8a1f90824340053269054b596dd3826d38c5c351f59aca76b6f" Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.023789 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vn7kc\" (UniqueName: \"kubernetes.io/projected/69771028-c356-4cfb-9f0b-30f67d320657-kube-api-access-vn7kc\") pod \"keystone-5550-account-create-update-q76l4\" (UID: \"69771028-c356-4cfb-9f0b-30f67d320657\") " pod="openstack/keystone-5550-account-create-update-q76l4" Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.024040 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/69771028-c356-4cfb-9f0b-30f67d320657-operator-scripts\") pod \"keystone-5550-account-create-update-q76l4\" (UID: \"69771028-c356-4cfb-9f0b-30f67d320657\") " pod="openstack/keystone-5550-account-create-update-q76l4" Feb 27 
16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.031078 4830 generic.go:334] "Generic (PLEG): container finished" podID="a234743b-8983-4a60-bbb4-59ad823b83e2" containerID="5d61bb0dcfd0af97605ea6793d0ccb409521660eb0cfce03c505ba533a6f52a4" exitCode=0 Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.031168 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5d54db5966-xcg7l" event={"ID":"a234743b-8983-4a60-bbb4-59ad823b83e2","Type":"ContainerDied","Data":"5d61bb0dcfd0af97605ea6793d0ccb409521660eb0cfce03c505ba533a6f52a4"} Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.044569 4830 generic.go:334] "Generic (PLEG): container finished" podID="eb3cdab6-15fa-40e1-a187-e277086227da" containerID="1d243201cb634428da46e5d01d1c419016026f2c349204898c21d5e7060a1280" exitCode=0 Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.044638 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"eb3cdab6-15fa-40e1-a187-e277086227da","Type":"ContainerDied","Data":"1d243201cb634428da46e5d01d1c419016026f2c349204898c21d5e7060a1280"} Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.046307 4830 generic.go:334] "Generic (PLEG): container finished" podID="bf57a5ff-eb3d-4f4b-8ac0-0aac8013fbbf" containerID="b4c2a77141370e51625fa6bf385bb1eb77fc6e2be81322189a2da160e42e03d0" exitCode=0 Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.046408 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-58db7bd5dd-jr8zt" event={"ID":"bf57a5ff-eb3d-4f4b-8ac0-0aac8013fbbf","Type":"ContainerDied","Data":"b4c2a77141370e51625fa6bf385bb1eb77fc6e2be81322189a2da160e42e03d0"} Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.046469 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-58db7bd5dd-jr8zt" event={"ID":"bf57a5ff-eb3d-4f4b-8ac0-0aac8013fbbf","Type":"ContainerDied","Data":"ea5deab6a6c50b1124f89741ffb33d0a3789c9617f1574cfb25f4be315dbf7e6"} Feb 27 
16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.046488 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ea5deab6a6c50b1124f89741ffb33d0a3789c9617f1574cfb25f4be315dbf7e6" Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.051814 4830 generic.go:334] "Generic (PLEG): container finished" podID="91362817-2bc3-48d8-a4ae-8ba5cb8f2b4c" containerID="71f9a2d35a123a7c42bc68cc143760e467aedb724086c36e562efbf095e0c426" exitCode=0 Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.051882 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-5550-account-create-update-q76l4" Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.052330 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"91362817-2bc3-48d8-a4ae-8ba5cb8f2b4c","Type":"ContainerDied","Data":"71f9a2d35a123a7c42bc68cc143760e467aedb724086c36e562efbf095e0c426"} Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.052347 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"91362817-2bc3-48d8-a4ae-8ba5cb8f2b4c","Type":"ContainerDied","Data":"7e7bd33e89c122ff26f31646057c366aa0a0c10749f0a8963144d1ea36341568"} Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.052357 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7e7bd33e89c122ff26f31646057c366aa0a0c10749f0a8963144d1ea36341568" Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.052751 4830 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." 
pod="openstack/root-account-create-update-lx5sm" secret="" err="secret \"galera-openstack-dockercfg-jd86w\" not found" Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.052803 4830 scope.go:117] "RemoveContainer" containerID="3c3ffaf742258d5543939f307e4df804a0b02c0397303e259d28b6fddcbd5115" Feb 27 16:33:08 crc kubenswrapper[4830]: E0227 16:33:08.053123 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mariadb-account-create-update pod=root-account-create-update-lx5sm_openstack(09849d6c-7457-4130-9074-73154d22af1f)\"" pod="openstack/root-account-create-update-lx5sm" podUID="09849d6c-7457-4130-9074-73154d22af1f" Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.131704 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vn7kc\" (UniqueName: \"kubernetes.io/projected/69771028-c356-4cfb-9f0b-30f67d320657-kube-api-access-vn7kc\") pod \"keystone-5550-account-create-update-q76l4\" (UID: \"69771028-c356-4cfb-9f0b-30f67d320657\") " pod="openstack/keystone-5550-account-create-update-q76l4" Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.131922 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/69771028-c356-4cfb-9f0b-30f67d320657-operator-scripts\") pod \"keystone-5550-account-create-update-q76l4\" (UID: \"69771028-c356-4cfb-9f0b-30f67d320657\") " pod="openstack/keystone-5550-account-create-update-q76l4" Feb 27 16:33:08 crc kubenswrapper[4830]: E0227 16:33:08.132277 4830 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Feb 27 16:33:08 crc kubenswrapper[4830]: E0227 16:33:08.132345 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/69771028-c356-4cfb-9f0b-30f67d320657-operator-scripts 
podName:69771028-c356-4cfb-9f0b-30f67d320657 nodeName:}" failed. No retries permitted until 2026-02-27 16:33:08.632313343 +0000 UTC m=+1584.721585806 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/69771028-c356-4cfb-9f0b-30f67d320657-operator-scripts") pod "keystone-5550-account-create-update-q76l4" (UID: "69771028-c356-4cfb-9f0b-30f67d320657") : configmap "openstack-scripts" not found Feb 27 16:33:08 crc kubenswrapper[4830]: E0227 16:33:08.139076 4830 projected.go:194] Error preparing data for projected volume kube-api-access-vn7kc for pod openstack/keystone-5550-account-create-update-q76l4: failed to fetch token: serviceaccounts "galera-openstack" not found Feb 27 16:33:08 crc kubenswrapper[4830]: E0227 16:33:08.139143 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/69771028-c356-4cfb-9f0b-30f67d320657-kube-api-access-vn7kc podName:69771028-c356-4cfb-9f0b-30f67d320657 nodeName:}" failed. No retries permitted until 2026-02-27 16:33:08.639126327 +0000 UTC m=+1584.728398790 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-vn7kc" (UniqueName: "kubernetes.io/projected/69771028-c356-4cfb-9f0b-30f67d320657-kube-api-access-vn7kc") pod "keystone-5550-account-create-update-q76l4" (UID: "69771028-c356-4cfb-9f0b-30f67d320657") : failed to fetch token: serviceaccounts "galera-openstack" not found Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.148669 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-galera-0" podUID="bf33c958-d345-4a0b-a2d8-7c8aedfb5cf3" containerName="galera" containerID="cri-o://68dcbd84b2ee99bb92f47d75adccd5e677bcf1de6646eeea5b827c8e802fad81" gracePeriod=30 Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.180546 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="aa5b7bdd-50bb-4123-a32a-0c7e97035a3f" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.105:5671: connect: connection refused" Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.320578 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.330379 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-58db7bd5dd-jr8zt" Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.339293 4830 scope.go:117] "RemoveContainer" containerID="c6e289a18c1629684bcdb331c9033eb81b5cf53591f391b7c77955013ee8149f" Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.351139 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.371835 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.381440 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-58c49587-cz4f5" Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.393738 4830 scope.go:117] "RemoveContainer" containerID="4bd0cecd4c639c19d6288ae6763e874f65a458b96d3aae8d391e7b853fd3836b" Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.396142 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.420474 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.437775 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf57a5ff-eb3d-4f4b-8ac0-0aac8013fbbf-combined-ca-bundle\") pod \"bf57a5ff-eb3d-4f4b-8ac0-0aac8013fbbf\" (UID: \"bf57a5ff-eb3d-4f4b-8ac0-0aac8013fbbf\") " Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.438060 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/aef23409-e12b-4ef3-a968-f666e5a127ae-kube-state-metrics-tls-certs\") pod \"aef23409-e12b-4ef3-a968-f666e5a127ae\" (UID: \"aef23409-e12b-4ef3-a968-f666e5a127ae\") " Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.438092 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf57a5ff-eb3d-4f4b-8ac0-0aac8013fbbf-scripts\") pod \"bf57a5ff-eb3d-4f4b-8ac0-0aac8013fbbf\" (UID: \"bf57a5ff-eb3d-4f4b-8ac0-0aac8013fbbf\") " Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.438116 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/aef23409-e12b-4ef3-a968-f666e5a127ae-kube-state-metrics-tls-config\") pod \"aef23409-e12b-4ef3-a968-f666e5a127ae\" (UID: \"aef23409-e12b-4ef3-a968-f666e5a127ae\") " Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.438145 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aef23409-e12b-4ef3-a968-f666e5a127ae-combined-ca-bundle\") pod \"aef23409-e12b-4ef3-a968-f666e5a127ae\" (UID: \"aef23409-e12b-4ef3-a968-f666e5a127ae\") " Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.438251 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf57a5ff-eb3d-4f4b-8ac0-0aac8013fbbf-public-tls-certs\") pod \"bf57a5ff-eb3d-4f4b-8ac0-0aac8013fbbf\" (UID: \"bf57a5ff-eb3d-4f4b-8ac0-0aac8013fbbf\") " Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.438282 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bf57a5ff-eb3d-4f4b-8ac0-0aac8013fbbf-logs\") pod \"bf57a5ff-eb3d-4f4b-8ac0-0aac8013fbbf\" (UID: \"bf57a5ff-eb3d-4f4b-8ac0-0aac8013fbbf\") " Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.438298 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf57a5ff-eb3d-4f4b-8ac0-0aac8013fbbf-config-data\") pod \"bf57a5ff-eb3d-4f4b-8ac0-0aac8013fbbf\" (UID: \"bf57a5ff-eb3d-4f4b-8ac0-0aac8013fbbf\") " Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.438347 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zldns\" (UniqueName: \"kubernetes.io/projected/bf57a5ff-eb3d-4f4b-8ac0-0aac8013fbbf-kube-api-access-zldns\") pod \"bf57a5ff-eb3d-4f4b-8ac0-0aac8013fbbf\" (UID: 
\"bf57a5ff-eb3d-4f4b-8ac0-0aac8013fbbf\") " Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.438373 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7bdrk\" (UniqueName: \"kubernetes.io/projected/aef23409-e12b-4ef3-a968-f666e5a127ae-kube-api-access-7bdrk\") pod \"aef23409-e12b-4ef3-a968-f666e5a127ae\" (UID: \"aef23409-e12b-4ef3-a968-f666e5a127ae\") " Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.438441 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf57a5ff-eb3d-4f4b-8ac0-0aac8013fbbf-internal-tls-certs\") pod \"bf57a5ff-eb3d-4f4b-8ac0-0aac8013fbbf\" (UID: \"bf57a5ff-eb3d-4f4b-8ac0-0aac8013fbbf\") " Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.438563 4830 scope.go:117] "RemoveContainer" containerID="b76e4dfe38f967f37bb6025c4aa38ca81c5cf520e22fe035f96df51e28145466" Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.440257 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.447815 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.451146 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bf57a5ff-eb3d-4f4b-8ac0-0aac8013fbbf-logs" (OuterVolumeSpecName: "logs") pod "bf57a5ff-eb3d-4f4b-8ac0-0aac8013fbbf" (UID: "bf57a5ff-eb3d-4f4b-8ac0-0aac8013fbbf"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.456957 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aef23409-e12b-4ef3-a968-f666e5a127ae-kube-api-access-7bdrk" (OuterVolumeSpecName: "kube-api-access-7bdrk") pod "aef23409-e12b-4ef3-a968-f666e5a127ae" (UID: "aef23409-e12b-4ef3-a968-f666e5a127ae"). InnerVolumeSpecName "kube-api-access-7bdrk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.459785 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.465176 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf57a5ff-eb3d-4f4b-8ac0-0aac8013fbbf-scripts" (OuterVolumeSpecName: "scripts") pod "bf57a5ff-eb3d-4f4b-8ac0-0aac8013fbbf" (UID: "bf57a5ff-eb3d-4f4b-8ac0-0aac8013fbbf"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.470618 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf57a5ff-eb3d-4f4b-8ac0-0aac8013fbbf-kube-api-access-zldns" (OuterVolumeSpecName: "kube-api-access-zldns") pod "bf57a5ff-eb3d-4f4b-8ac0-0aac8013fbbf" (UID: "bf57a5ff-eb3d-4f4b-8ac0-0aac8013fbbf"). InnerVolumeSpecName "kube-api-access-zldns". 
PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.474120 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-c6f44c475-twbzz"]
Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.483680 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-proxy-c6f44c475-twbzz"]
Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.506726 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aef23409-e12b-4ef3-a968-f666e5a127ae-kube-state-metrics-tls-certs" (OuterVolumeSpecName: "kube-state-metrics-tls-certs") pod "aef23409-e12b-4ef3-a968-f666e5a127ae" (UID: "aef23409-e12b-4ef3-a968-f666e5a127ae"). InnerVolumeSpecName "kube-state-metrics-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.507305 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf57a5ff-eb3d-4f4b-8ac0-0aac8013fbbf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bf57a5ff-eb3d-4f4b-8ac0-0aac8013fbbf" (UID: "bf57a5ff-eb3d-4f4b-8ac0-0aac8013fbbf"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.523065 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-c219-account-create-update-w82r8"]
Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.536067 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf57a5ff-eb3d-4f4b-8ac0-0aac8013fbbf-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "bf57a5ff-eb3d-4f4b-8ac0-0aac8013fbbf" (UID: "bf57a5ff-eb3d-4f4b-8ac0-0aac8013fbbf"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.540604 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/73fa27e0-b59d-44b0-8648-7e696f71cd61-internal-tls-certs\") pod \"73fa27e0-b59d-44b0-8648-7e696f71cd61\" (UID: \"73fa27e0-b59d-44b0-8648-7e696f71cd61\") "
Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.540659 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4280aaf-817d-41e1-9867-715359ae322e-combined-ca-bundle\") pod \"f4280aaf-817d-41e1-9867-715359ae322e\" (UID: \"f4280aaf-817d-41e1-9867-715359ae322e\") "
Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.540685 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4baf4d8-24c9-4aa8-b72e-9d6d9cdd5f32-combined-ca-bundle\") pod \"f4baf4d8-24c9-4aa8-b72e-9d6d9cdd5f32\" (UID: \"f4baf4d8-24c9-4aa8-b72e-9d6d9cdd5f32\") "
Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.540878 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8d4cd44-9972-445e-bac3-63441b6fa4cc-public-tls-certs\") pod \"d8d4cd44-9972-445e-bac3-63441b6fa4cc\" (UID: \"d8d4cd44-9972-445e-bac3-63441b6fa4cc\") "
Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.540905 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41fafe33-b43b-4dcb-9edd-b365d0749e10-config-data\") pod \"41fafe33-b43b-4dcb-9edd-b365d0749e10\" (UID: \"41fafe33-b43b-4dcb-9edd-b365d0749e10\") "
Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.540935 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/73fa27e0-b59d-44b0-8648-7e696f71cd61-logs\") pod \"73fa27e0-b59d-44b0-8648-7e696f71cd61\" (UID: \"73fa27e0-b59d-44b0-8648-7e696f71cd61\") "
Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.540973 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/41fafe33-b43b-4dcb-9edd-b365d0749e10-config-data-custom\") pod \"41fafe33-b43b-4dcb-9edd-b365d0749e10\" (UID: \"41fafe33-b43b-4dcb-9edd-b365d0749e10\") "
Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.540992 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/73fa27e0-b59d-44b0-8648-7e696f71cd61-httpd-run\") pod \"73fa27e0-b59d-44b0-8648-7e696f71cd61\" (UID: \"73fa27e0-b59d-44b0-8648-7e696f71cd61\") "
Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.541032 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b2fe2ad2-a0de-49aa-95fd-ef5f15032676-log-httpd\") pod \"b2fe2ad2-a0de-49aa-95fd-ef5f15032676\" (UID: \"b2fe2ad2-a0de-49aa-95fd-ef5f15032676\") "
Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.541057 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2fe2ad2-a0de-49aa-95fd-ef5f15032676-config-data\") pod \"b2fe2ad2-a0de-49aa-95fd-ef5f15032676\" (UID: \"b2fe2ad2-a0de-49aa-95fd-ef5f15032676\") "
Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.541073 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8d4cd44-9972-445e-bac3-63441b6fa4cc-combined-ca-bundle\") pod \"d8d4cd44-9972-445e-bac3-63441b6fa4cc\" (UID: \"d8d4cd44-9972-445e-bac3-63441b6fa4cc\") "
Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.541104 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d8d4cd44-9972-445e-bac3-63441b6fa4cc-logs\") pod \"d8d4cd44-9972-445e-bac3-63441b6fa4cc\" (UID: \"d8d4cd44-9972-445e-bac3-63441b6fa4cc\") "
Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.541144 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf85v\" (UniqueName: \"kubernetes.io/projected/f4280aaf-817d-41e1-9867-715359ae322e-kube-api-access-gf85v\") pod \"f4280aaf-817d-41e1-9867-715359ae322e\" (UID: \"f4280aaf-817d-41e1-9867-715359ae322e\") "
Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.541165 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"d8d4cd44-9972-445e-bac3-63441b6fa4cc\" (UID: \"d8d4cd44-9972-445e-bac3-63441b6fa4cc\") "
Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.541187 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2fe2ad2-a0de-49aa-95fd-ef5f15032676-scripts\") pod \"b2fe2ad2-a0de-49aa-95fd-ef5f15032676\" (UID: \"b2fe2ad2-a0de-49aa-95fd-ef5f15032676\") "
Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.541216 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tc59x\" (UniqueName: \"kubernetes.io/projected/f4baf4d8-24c9-4aa8-b72e-9d6d9cdd5f32-kube-api-access-tc59x\") pod \"f4baf4d8-24c9-4aa8-b72e-9d6d9cdd5f32\" (UID: \"f4baf4d8-24c9-4aa8-b72e-9d6d9cdd5f32\") "
Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.541233 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b2fe2ad2-a0de-49aa-95fd-ef5f15032676-run-httpd\") pod \"b2fe2ad2-a0de-49aa-95fd-ef5f15032676\" (UID: \"b2fe2ad2-a0de-49aa-95fd-ef5f15032676\") "
Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.541257 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"73fa27e0-b59d-44b0-8648-7e696f71cd61\" (UID: \"73fa27e0-b59d-44b0-8648-7e696f71cd61\") "
Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.541279 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8d4cd44-9972-445e-bac3-63441b6fa4cc-config-data\") pod \"d8d4cd44-9972-445e-bac3-63441b6fa4cc\" (UID: \"d8d4cd44-9972-445e-bac3-63441b6fa4cc\") "
Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.541296 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/41fafe33-b43b-4dcb-9edd-b365d0749e10-scripts\") pod \"41fafe33-b43b-4dcb-9edd-b365d0749e10\" (UID: \"41fafe33-b43b-4dcb-9edd-b365d0749e10\") "
Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.541338 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zvmrx\" (UniqueName: \"kubernetes.io/projected/41fafe33-b43b-4dcb-9edd-b365d0749e10-kube-api-access-zvmrx\") pod \"41fafe33-b43b-4dcb-9edd-b365d0749e10\" (UID: \"41fafe33-b43b-4dcb-9edd-b365d0749e10\") "
Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.541396 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/41fafe33-b43b-4dcb-9edd-b365d0749e10-internal-tls-certs\") pod \"41fafe33-b43b-4dcb-9edd-b365d0749e10\" (UID: \"41fafe33-b43b-4dcb-9edd-b365d0749e10\") "
Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.541420 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f4280aaf-817d-41e1-9867-715359ae322e-logs\") pod \"f4280aaf-817d-41e1-9867-715359ae322e\" (UID: \"f4280aaf-817d-41e1-9867-715359ae322e\") "
Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.541437 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d8d4cd44-9972-445e-bac3-63441b6fa4cc-scripts\") pod \"d8d4cd44-9972-445e-bac3-63441b6fa4cc\" (UID: \"d8d4cd44-9972-445e-bac3-63441b6fa4cc\") "
Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.541476 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/41fafe33-b43b-4dcb-9edd-b365d0749e10-logs\") pod \"41fafe33-b43b-4dcb-9edd-b365d0749e10\" (UID: \"41fafe33-b43b-4dcb-9edd-b365d0749e10\") "
Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.541493 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2fe2ad2-a0de-49aa-95fd-ef5f15032676-combined-ca-bundle\") pod \"b2fe2ad2-a0de-49aa-95fd-ef5f15032676\" (UID: \"b2fe2ad2-a0de-49aa-95fd-ef5f15032676\") "
Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.541514 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f4baf4d8-24c9-4aa8-b72e-9d6d9cdd5f32-logs\") pod \"f4baf4d8-24c9-4aa8-b72e-9d6d9cdd5f32\" (UID: \"f4baf4d8-24c9-4aa8-b72e-9d6d9cdd5f32\") "
Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.541551 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73fa27e0-b59d-44b0-8648-7e696f71cd61-config-data\") pod \"73fa27e0-b59d-44b0-8648-7e696f71cd61\" (UID: \"73fa27e0-b59d-44b0-8648-7e696f71cd61\") "
Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.541577 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/73fa27e0-b59d-44b0-8648-7e696f71cd61-scripts\") pod \"73fa27e0-b59d-44b0-8648-7e696f71cd61\" (UID: \"73fa27e0-b59d-44b0-8648-7e696f71cd61\") "
Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.541594 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zzf7p\" (UniqueName: \"kubernetes.io/projected/73fa27e0-b59d-44b0-8648-7e696f71cd61-kube-api-access-zzf7p\") pod \"73fa27e0-b59d-44b0-8648-7e696f71cd61\" (UID: \"73fa27e0-b59d-44b0-8648-7e696f71cd61\") "
Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.541634 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4baf4d8-24c9-4aa8-b72e-9d6d9cdd5f32-config-data\") pod \"f4baf4d8-24c9-4aa8-b72e-9d6d9cdd5f32\" (UID: \"f4baf4d8-24c9-4aa8-b72e-9d6d9cdd5f32\") "
Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.541660 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b2fe2ad2-a0de-49aa-95fd-ef5f15032676-ceilometer-tls-certs\") pod \"b2fe2ad2-a0de-49aa-95fd-ef5f15032676\" (UID: \"b2fe2ad2-a0de-49aa-95fd-ef5f15032676\") "
Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.541677 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mtk2v\" (UniqueName: \"kubernetes.io/projected/d8d4cd44-9972-445e-bac3-63441b6fa4cc-kube-api-access-mtk2v\") pod \"d8d4cd44-9972-445e-bac3-63441b6fa4cc\" (UID: \"d8d4cd44-9972-445e-bac3-63441b6fa4cc\") "
Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.541722 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lc89c\" (UniqueName: \"kubernetes.io/projected/b2fe2ad2-a0de-49aa-95fd-ef5f15032676-kube-api-access-lc89c\") pod \"b2fe2ad2-a0de-49aa-95fd-ef5f15032676\" (UID: \"b2fe2ad2-a0de-49aa-95fd-ef5f15032676\") "
Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.541749 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f4baf4d8-24c9-4aa8-b72e-9d6d9cdd5f32-config-data-custom\") pod \"f4baf4d8-24c9-4aa8-b72e-9d6d9cdd5f32\" (UID: \"f4baf4d8-24c9-4aa8-b72e-9d6d9cdd5f32\") "
Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.541802 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/41fafe33-b43b-4dcb-9edd-b365d0749e10-public-tls-certs\") pod \"41fafe33-b43b-4dcb-9edd-b365d0749e10\" (UID: \"41fafe33-b43b-4dcb-9edd-b365d0749e10\") "
Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.541843 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4280aaf-817d-41e1-9867-715359ae322e-nova-metadata-tls-certs\") pod \"f4280aaf-817d-41e1-9867-715359ae322e\" (UID: \"f4280aaf-817d-41e1-9867-715359ae322e\") "
Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.541917 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4280aaf-817d-41e1-9867-715359ae322e-config-data\") pod \"f4280aaf-817d-41e1-9867-715359ae322e\" (UID: \"f4280aaf-817d-41e1-9867-715359ae322e\") "
Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.541967 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41fafe33-b43b-4dcb-9edd-b365d0749e10-combined-ca-bundle\") pod \"41fafe33-b43b-4dcb-9edd-b365d0749e10\" (UID: \"41fafe33-b43b-4dcb-9edd-b365d0749e10\") "
Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.541983 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73fa27e0-b59d-44b0-8648-7e696f71cd61-combined-ca-bundle\") pod \"73fa27e0-b59d-44b0-8648-7e696f71cd61\" (UID: \"73fa27e0-b59d-44b0-8648-7e696f71cd61\") "
Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.542001 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d8d4cd44-9972-445e-bac3-63441b6fa4cc-httpd-run\") pod \"d8d4cd44-9972-445e-bac3-63441b6fa4cc\" (UID: \"d8d4cd44-9972-445e-bac3-63441b6fa4cc\") "
Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.542016 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b2fe2ad2-a0de-49aa-95fd-ef5f15032676-sg-core-conf-yaml\") pod \"b2fe2ad2-a0de-49aa-95fd-ef5f15032676\" (UID: \"b2fe2ad2-a0de-49aa-95fd-ef5f15032676\") "
Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.542068 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/41fafe33-b43b-4dcb-9edd-b365d0749e10-etc-machine-id\") pod \"41fafe33-b43b-4dcb-9edd-b365d0749e10\" (UID: \"41fafe33-b43b-4dcb-9edd-b365d0749e10\") "
Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.542567 4830 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bf57a5ff-eb3d-4f4b-8ac0-0aac8013fbbf-logs\") on node \"crc\" DevicePath \"\""
Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.542579 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zldns\" (UniqueName: \"kubernetes.io/projected/bf57a5ff-eb3d-4f4b-8ac0-0aac8013fbbf-kube-api-access-zldns\") on node \"crc\" DevicePath \"\""
Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.542589 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7bdrk\" (UniqueName: \"kubernetes.io/projected/aef23409-e12b-4ef3-a968-f666e5a127ae-kube-api-access-7bdrk\") on node \"crc\" DevicePath \"\""
Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.542598 4830 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf57a5ff-eb3d-4f4b-8ac0-0aac8013fbbf-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.542634 4830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf57a5ff-eb3d-4f4b-8ac0-0aac8013fbbf-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.542643 4830 reconciler_common.go:293] "Volume detached for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/aef23409-e12b-4ef3-a968-f666e5a127ae-kube-state-metrics-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.542652 4830 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf57a5ff-eb3d-4f4b-8ac0-0aac8013fbbf-scripts\") on node \"crc\" DevicePath \"\""
Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.542719 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/41fafe33-b43b-4dcb-9edd-b365d0749e10-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "41fafe33-b43b-4dcb-9edd-b365d0749e10" (UID: "41fafe33-b43b-4dcb-9edd-b365d0749e10"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.549110 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/73fa27e0-b59d-44b0-8648-7e696f71cd61-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "73fa27e0-b59d-44b0-8648-7e696f71cd61" (UID: "73fa27e0-b59d-44b0-8648-7e696f71cd61"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.549526 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b2fe2ad2-a0de-49aa-95fd-ef5f15032676-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "b2fe2ad2-a0de-49aa-95fd-ef5f15032676" (UID: "b2fe2ad2-a0de-49aa-95fd-ef5f15032676"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.556232 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2fe2ad2-a0de-49aa-95fd-ef5f15032676-scripts" (OuterVolumeSpecName: "scripts") pod "b2fe2ad2-a0de-49aa-95fd-ef5f15032676" (UID: "b2fe2ad2-a0de-49aa-95fd-ef5f15032676"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.556790 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aef23409-e12b-4ef3-a968-f666e5a127ae-kube-state-metrics-tls-config" (OuterVolumeSpecName: "kube-state-metrics-tls-config") pod "aef23409-e12b-4ef3-a968-f666e5a127ae" (UID: "aef23409-e12b-4ef3-a968-f666e5a127ae"). InnerVolumeSpecName "kube-state-metrics-tls-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.558759 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b2fe2ad2-a0de-49aa-95fd-ef5f15032676-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "b2fe2ad2-a0de-49aa-95fd-ef5f15032676" (UID: "b2fe2ad2-a0de-49aa-95fd-ef5f15032676"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.558870 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/73fa27e0-b59d-44b0-8648-7e696f71cd61-logs" (OuterVolumeSpecName: "logs") pod "73fa27e0-b59d-44b0-8648-7e696f71cd61" (UID: "73fa27e0-b59d-44b0-8648-7e696f71cd61"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.559055 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41fafe33-b43b-4dcb-9edd-b365d0749e10-kube-api-access-zvmrx" (OuterVolumeSpecName: "kube-api-access-zvmrx") pod "41fafe33-b43b-4dcb-9edd-b365d0749e10" (UID: "41fafe33-b43b-4dcb-9edd-b365d0749e10"). InnerVolumeSpecName "kube-api-access-zvmrx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.559253 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d8d4cd44-9972-445e-bac3-63441b6fa4cc-logs" (OuterVolumeSpecName: "logs") pod "d8d4cd44-9972-445e-bac3-63441b6fa4cc" (UID: "d8d4cd44-9972-445e-bac3-63441b6fa4cc"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.560425 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-c219-account-create-update-w82r8"]
Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.560462 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-cell1-galera-0"]
Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.560748 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d8d4cd44-9972-445e-bac3-63441b6fa4cc-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "d8d4cd44-9972-445e-bac3-63441b6fa4cc" (UID: "d8d4cd44-9972-445e-bac3-63441b6fa4cc"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.563080 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41fafe33-b43b-4dcb-9edd-b365d0749e10-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "41fafe33-b43b-4dcb-9edd-b365d0749e10" (UID: "41fafe33-b43b-4dcb-9edd-b365d0749e10"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.563139 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "glance") pod "73fa27e0-b59d-44b0-8648-7e696f71cd61" (UID: "73fa27e0-b59d-44b0-8648-7e696f71cd61"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.563223 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4baf4d8-24c9-4aa8-b72e-9d6d9cdd5f32-kube-api-access-tc59x" (OuterVolumeSpecName: "kube-api-access-tc59x") pod "f4baf4d8-24c9-4aa8-b72e-9d6d9cdd5f32" (UID: "f4baf4d8-24c9-4aa8-b72e-9d6d9cdd5f32"). InnerVolumeSpecName "kube-api-access-tc59x". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.564997 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aef23409-e12b-4ef3-a968-f666e5a127ae-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "aef23409-e12b-4ef3-a968-f666e5a127ae" (UID: "aef23409-e12b-4ef3-a968-f666e5a127ae"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.566292 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f4280aaf-817d-41e1-9867-715359ae322e-logs" (OuterVolumeSpecName: "logs") pod "f4280aaf-817d-41e1-9867-715359ae322e" (UID: "f4280aaf-817d-41e1-9867-715359ae322e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.566823 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f4baf4d8-24c9-4aa8-b72e-9d6d9cdd5f32-logs" (OuterVolumeSpecName: "logs") pod "f4baf4d8-24c9-4aa8-b72e-9d6d9cdd5f32" (UID: "f4baf4d8-24c9-4aa8-b72e-9d6d9cdd5f32"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.569967 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/41fafe33-b43b-4dcb-9edd-b365d0749e10-logs" (OuterVolumeSpecName: "logs") pod "41fafe33-b43b-4dcb-9edd-b365d0749e10" (UID: "41fafe33-b43b-4dcb-9edd-b365d0749e10"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.575142 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstack-cell1-galera-0"]
Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.577608 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.578135 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4baf4d8-24c9-4aa8-b72e-9d6d9cdd5f32-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "f4baf4d8-24c9-4aa8-b72e-9d6d9cdd5f32" (UID: "f4baf4d8-24c9-4aa8-b72e-9d6d9cdd5f32"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.578183 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41fafe33-b43b-4dcb-9edd-b365d0749e10-scripts" (OuterVolumeSpecName: "scripts") pod "41fafe33-b43b-4dcb-9edd-b365d0749e10" (UID: "41fafe33-b43b-4dcb-9edd-b365d0749e10"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.578459 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2fe2ad2-a0de-49aa-95fd-ef5f15032676-kube-api-access-lc89c" (OuterVolumeSpecName: "kube-api-access-lc89c") pod "b2fe2ad2-a0de-49aa-95fd-ef5f15032676" (UID: "b2fe2ad2-a0de-49aa-95fd-ef5f15032676"). InnerVolumeSpecName "kube-api-access-lc89c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.578483 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4280aaf-817d-41e1-9867-715359ae322e-kube-api-access-gf85v" (OuterVolumeSpecName: "kube-api-access-gf85v") pod "f4280aaf-817d-41e1-9867-715359ae322e" (UID: "f4280aaf-817d-41e1-9867-715359ae322e"). InnerVolumeSpecName "kube-api-access-gf85v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.579710 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-5550-account-create-update-q76l4"
Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.580365 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8d4cd44-9972-445e-bac3-63441b6fa4cc-kube-api-access-mtk2v" (OuterVolumeSpecName: "kube-api-access-mtk2v") pod "d8d4cd44-9972-445e-bac3-63441b6fa4cc" (UID: "d8d4cd44-9972-445e-bac3-63441b6fa4cc"). InnerVolumeSpecName "kube-api-access-mtk2v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.585991 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.587308 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73fa27e0-b59d-44b0-8648-7e696f71cd61-kube-api-access-zzf7p" (OuterVolumeSpecName: "kube-api-access-zzf7p") pod "73fa27e0-b59d-44b0-8648-7e696f71cd61" (UID: "73fa27e0-b59d-44b0-8648-7e696f71cd61"). InnerVolumeSpecName "kube-api-access-zzf7p". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.591408 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73fa27e0-b59d-44b0-8648-7e696f71cd61-scripts" (OuterVolumeSpecName: "scripts") pod "73fa27e0-b59d-44b0-8648-7e696f71cd61" (UID: "73fa27e0-b59d-44b0-8648-7e696f71cd61"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.601135 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.603122 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.606999 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8d4cd44-9972-445e-bac3-63441b6fa4cc-scripts" (OuterVolumeSpecName: "scripts") pod "d8d4cd44-9972-445e-bac3-63441b6fa4cc" (UID: "d8d4cd44-9972-445e-bac3-63441b6fa4cc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.610079 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "glance") pod "d8d4cd44-9972-445e-bac3-63441b6fa4cc" (UID: "d8d4cd44-9972-445e-bac3-63441b6fa4cc"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.609287 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5d54db5966-xcg7l"
Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.611022 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0"
Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.614218 4830 scope.go:117] "RemoveContainer" containerID="58b3931eed123fb0912adbb48ae5835fb65012c51cabfe8279f65b2fb158c0e1"
Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.647906 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vn7kc\" (UniqueName: \"kubernetes.io/projected/69771028-c356-4cfb-9f0b-30f67d320657-kube-api-access-vn7kc\") pod \"keystone-5550-account-create-update-q76l4\" (UID: \"69771028-c356-4cfb-9f0b-30f67d320657\") " pod="openstack/keystone-5550-account-create-update-q76l4"
Feb 27 16:33:08 crc kubenswrapper[4830]: E0227 16:33:08.650837 4830 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found
Feb 27 16:33:08 crc kubenswrapper[4830]: E0227 16:33:08.651284 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/09849d6c-7457-4130-9074-73154d22af1f-operator-scripts podName:09849d6c-7457-4130-9074-73154d22af1f nodeName:}" failed. No retries permitted until 2026-02-27 16:33:10.651264329 +0000 UTC m=+1586.740536792 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/09849d6c-7457-4130-9074-73154d22af1f-operator-scripts") pod "root-account-create-update-lx5sm" (UID: "09849d6c-7457-4130-9074-73154d22af1f") : configmap "openstack-scripts" not found
Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.651688 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.651089 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/69771028-c356-4cfb-9f0b-30f67d320657-operator-scripts\") pod \"keystone-5550-account-create-update-q76l4\" (UID: \"69771028-c356-4cfb-9f0b-30f67d320657\") " pod="openstack/keystone-5550-account-create-update-q76l4"
Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.652367 4830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aef23409-e12b-4ef3-a968-f666e5a127ae-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.653930 4830 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d8d4cd44-9972-445e-bac3-63441b6fa4cc-httpd-run\") on node \"crc\" DevicePath \"\""
Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.654031 4830 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/41fafe33-b43b-4dcb-9edd-b365d0749e10-etc-machine-id\") on node \"crc\" DevicePath \"\""
Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.654194 4830 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/73fa27e0-b59d-44b0-8648-7e696f71cd61-logs\") on node \"crc\" DevicePath \"\""
Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.654289 4830 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/41fafe33-b43b-4dcb-9edd-b365d0749e10-config-data-custom\") on node \"crc\" DevicePath \"\""
Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.654579 4830 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/73fa27e0-b59d-44b0-8648-7e696f71cd61-httpd-run\") on node \"crc\" DevicePath \"\""
Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.655089 4830 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b2fe2ad2-a0de-49aa-95fd-ef5f15032676-log-httpd\") on node \"crc\" DevicePath \"\""
Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.655160 4830 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d8d4cd44-9972-445e-bac3-63441b6fa4cc-logs\") on node \"crc\" DevicePath \"\""
Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.655218 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf85v\" (UniqueName: \"kubernetes.io/projected/f4280aaf-817d-41e1-9867-715359ae322e-kube-api-access-gf85v\") on node \"crc\" DevicePath \"\""
Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.655286 4830 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" "
Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.655351 4830 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2fe2ad2-a0de-49aa-95fd-ef5f15032676-scripts\") on node \"crc\" DevicePath \"\""
Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.655412 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tc59x\" (UniqueName: \"kubernetes.io/projected/f4baf4d8-24c9-4aa8-b72e-9d6d9cdd5f32-kube-api-access-tc59x\") on node \"crc\" DevicePath \"\""
Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.655732 4830 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b2fe2ad2-a0de-49aa-95fd-ef5f15032676-run-httpd\") on node \"crc\" DevicePath \"\""
Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.655819 4830 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" "
Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.655923 4830 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/41fafe33-b43b-4dcb-9edd-b365d0749e10-scripts\") on node \"crc\" DevicePath \"\""
Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.656127 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zvmrx\" (UniqueName: \"kubernetes.io/projected/41fafe33-b43b-4dcb-9edd-b365d0749e10-kube-api-access-zvmrx\") on node \"crc\" DevicePath \"\""
Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.656320 4830 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f4280aaf-817d-41e1-9867-715359ae322e-logs\") on node \"crc\" DevicePath \"\""
Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.656397 4830 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d8d4cd44-9972-445e-bac3-63441b6fa4cc-scripts\") on node \"crc\" DevicePath \"\""
Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.656506 4830 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/41fafe33-b43b-4dcb-9edd-b365d0749e10-logs\") on node \"crc\" DevicePath \"\""
Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.656684 4830 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f4baf4d8-24c9-4aa8-b72e-9d6d9cdd5f32-logs\") on node \"crc\" DevicePath \"\""
Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.656762 4830 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/73fa27e0-b59d-44b0-8648-7e696f71cd61-scripts\") on node \"crc\" DevicePath \"\""
Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.656834 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zzf7p\" (UniqueName: \"kubernetes.io/projected/73fa27e0-b59d-44b0-8648-7e696f71cd61-kube-api-access-zzf7p\") on node \"crc\" DevicePath \"\""
Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.656913 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mtk2v\" (UniqueName: \"kubernetes.io/projected/d8d4cd44-9972-445e-bac3-63441b6fa4cc-kube-api-access-mtk2v\") on node \"crc\" DevicePath \"\""
Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.656991 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lc89c\" (UniqueName: \"kubernetes.io/projected/b2fe2ad2-a0de-49aa-95fd-ef5f15032676-kube-api-access-lc89c\") on node \"crc\" DevicePath \"\""
Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.657051 4830 reconciler_common.go:293] "Volume detached for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/aef23409-e12b-4ef3-a968-f666e5a127ae-kube-state-metrics-tls-config\") on node \"crc\" DevicePath \"\""
Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.657122 4830 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f4baf4d8-24c9-4aa8-b72e-9d6d9cdd5f32-config-data-custom\") on node \"crc\" DevicePath \"\""
Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.652779 4830 scope.go:117] "RemoveContainer"
containerID="5d8587b51be5ddb11f190a631ac9ccd9976c6c15ea332cdd922d4924a56f8686" Feb 27 16:33:08 crc kubenswrapper[4830]: E0227 16:33:08.651167 4830 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Feb 27 16:33:08 crc kubenswrapper[4830]: E0227 16:33:08.657437 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/69771028-c356-4cfb-9f0b-30f67d320657-operator-scripts podName:69771028-c356-4cfb-9f0b-30f67d320657 nodeName:}" failed. No retries permitted until 2026-02-27 16:33:09.657419027 +0000 UTC m=+1585.746691490 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/69771028-c356-4cfb-9f0b-30f67d320657-operator-scripts") pod "keystone-5550-account-create-update-q76l4" (UID: "69771028-c356-4cfb-9f0b-30f67d320657") : configmap "openstack-scripts" not found Feb 27 16:33:08 crc kubenswrapper[4830]: E0227 16:33:08.657624 4830 projected.go:194] Error preparing data for projected volume kube-api-access-vn7kc for pod openstack/keystone-5550-account-create-update-q76l4: failed to fetch token: serviceaccounts "galera-openstack" not found Feb 27 16:33:08 crc kubenswrapper[4830]: E0227 16:33:08.658112 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/69771028-c356-4cfb-9f0b-30f67d320657-kube-api-access-vn7kc podName:69771028-c356-4cfb-9f0b-30f67d320657 nodeName:}" failed. No retries permitted until 2026-02-27 16:33:09.658102294 +0000 UTC m=+1585.747374757 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-vn7kc" (UniqueName: "kubernetes.io/projected/69771028-c356-4cfb-9f0b-30f67d320657-kube-api-access-vn7kc") pod "keystone-5550-account-create-update-q76l4" (UID: "69771028-c356-4cfb-9f0b-30f67d320657") : failed to fetch token: serviceaccounts "galera-openstack" not found Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.682242 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73fa27e0-b59d-44b0-8648-7e696f71cd61-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "73fa27e0-b59d-44b0-8648-7e696f71cd61" (UID: "73fa27e0-b59d-44b0-8648-7e696f71cd61"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.683858 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.685556 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf57a5ff-eb3d-4f4b-8ac0-0aac8013fbbf-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "bf57a5ff-eb3d-4f4b-8ac0-0aac8013fbbf" (UID: "bf57a5ff-eb3d-4f4b-8ac0-0aac8013fbbf"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.691831 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.707365 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-29fd-account-create-update-st6rb"] Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.720906 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2fe2ad2-a0de-49aa-95fd-ef5f15032676-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "b2fe2ad2-a0de-49aa-95fd-ef5f15032676" (UID: "b2fe2ad2-a0de-49aa-95fd-ef5f15032676"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.751066 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-29fd-account-create-update-st6rb"] Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.758049 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8klgh\" (UniqueName: \"kubernetes.io/projected/a234743b-8983-4a60-bbb4-59ad823b83e2-kube-api-access-8klgh\") pod \"a234743b-8983-4a60-bbb4-59ad823b83e2\" (UID: \"a234743b-8983-4a60-bbb4-59ad823b83e2\") " Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.758182 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb3cdab6-15fa-40e1-a187-e277086227da-combined-ca-bundle\") pod \"eb3cdab6-15fa-40e1-a187-e277086227da\" (UID: \"eb3cdab6-15fa-40e1-a187-e277086227da\") " Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.758489 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/91362817-2bc3-48d8-a4ae-8ba5cb8f2b4c-logs\") pod 
\"91362817-2bc3-48d8-a4ae-8ba5cb8f2b4c\" (UID: \"91362817-2bc3-48d8-a4ae-8ba5cb8f2b4c\") " Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.759446 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/91362817-2bc3-48d8-a4ae-8ba5cb8f2b4c-logs" (OuterVolumeSpecName: "logs") pod "91362817-2bc3-48d8-a4ae-8ba5cb8f2b4c" (UID: "91362817-2bc3-48d8-a4ae-8ba5cb8f2b4c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.760512 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf57a5ff-eb3d-4f4b-8ac0-0aac8013fbbf-config-data" (OuterVolumeSpecName: "config-data") pod "bf57a5ff-eb3d-4f4b-8ac0-0aac8013fbbf" (UID: "bf57a5ff-eb3d-4f4b-8ac0-0aac8013fbbf"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.762521 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/91362817-2bc3-48d8-a4ae-8ba5cb8f2b4c-public-tls-certs\") pod \"91362817-2bc3-48d8-a4ae-8ba5cb8f2b4c\" (UID: \"91362817-2bc3-48d8-a4ae-8ba5cb8f2b4c\") " Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.762594 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91362817-2bc3-48d8-a4ae-8ba5cb8f2b4c-combined-ca-bundle\") pod \"91362817-2bc3-48d8-a4ae-8ba5cb8f2b4c\" (UID: \"91362817-2bc3-48d8-a4ae-8ba5cb8f2b4c\") " Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.762654 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a234743b-8983-4a60-bbb4-59ad823b83e2-config-data-custom\") pod \"a234743b-8983-4a60-bbb4-59ad823b83e2\" (UID: 
\"a234743b-8983-4a60-bbb4-59ad823b83e2\") " Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.762684 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/eb3cdab6-15fa-40e1-a187-e277086227da-kolla-config\") pod \"eb3cdab6-15fa-40e1-a187-e277086227da\" (UID: \"eb3cdab6-15fa-40e1-a187-e277086227da\") " Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.762708 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4lx6w\" (UniqueName: \"kubernetes.io/projected/91362817-2bc3-48d8-a4ae-8ba5cb8f2b4c-kube-api-access-4lx6w\") pod \"91362817-2bc3-48d8-a4ae-8ba5cb8f2b4c\" (UID: \"91362817-2bc3-48d8-a4ae-8ba5cb8f2b4c\") " Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.762771 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91362817-2bc3-48d8-a4ae-8ba5cb8f2b4c-config-data\") pod \"91362817-2bc3-48d8-a4ae-8ba5cb8f2b4c\" (UID: \"91362817-2bc3-48d8-a4ae-8ba5cb8f2b4c\") " Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.762812 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a234743b-8983-4a60-bbb4-59ad823b83e2-combined-ca-bundle\") pod \"a234743b-8983-4a60-bbb4-59ad823b83e2\" (UID: \"a234743b-8983-4a60-bbb4-59ad823b83e2\") " Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.762833 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/eb3cdab6-15fa-40e1-a187-e277086227da-config-data\") pod \"eb3cdab6-15fa-40e1-a187-e277086227da\" (UID: \"eb3cdab6-15fa-40e1-a187-e277086227da\") " Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.762854 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/a234743b-8983-4a60-bbb4-59ad823b83e2-logs\") pod \"a234743b-8983-4a60-bbb4-59ad823b83e2\" (UID: \"a234743b-8983-4a60-bbb4-59ad823b83e2\") " Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.762883 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a234743b-8983-4a60-bbb4-59ad823b83e2-public-tls-certs\") pod \"a234743b-8983-4a60-bbb4-59ad823b83e2\" (UID: \"a234743b-8983-4a60-bbb4-59ad823b83e2\") " Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.763043 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a234743b-8983-4a60-bbb4-59ad823b83e2-config-data\") pod \"a234743b-8983-4a60-bbb4-59ad823b83e2\" (UID: \"a234743b-8983-4a60-bbb4-59ad823b83e2\") " Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.764683 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a234743b-8983-4a60-bbb4-59ad823b83e2-internal-tls-certs\") pod \"a234743b-8983-4a60-bbb4-59ad823b83e2\" (UID: \"a234743b-8983-4a60-bbb4-59ad823b83e2\") " Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.764743 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb3cdab6-15fa-40e1-a187-e277086227da-memcached-tls-certs\") pod \"eb3cdab6-15fa-40e1-a187-e277086227da\" (UID: \"eb3cdab6-15fa-40e1-a187-e277086227da\") " Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.764768 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/91362817-2bc3-48d8-a4ae-8ba5cb8f2b4c-internal-tls-certs\") pod \"91362817-2bc3-48d8-a4ae-8ba5cb8f2b4c\" (UID: \"91362817-2bc3-48d8-a4ae-8ba5cb8f2b4c\") " Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 
16:33:08.764796 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5j8x9\" (UniqueName: \"kubernetes.io/projected/eb3cdab6-15fa-40e1-a187-e277086227da-kube-api-access-5j8x9\") pod \"eb3cdab6-15fa-40e1-a187-e277086227da\" (UID: \"eb3cdab6-15fa-40e1-a187-e277086227da\") " Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.765629 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb3cdab6-15fa-40e1-a187-e277086227da-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "eb3cdab6-15fa-40e1-a187-e277086227da" (UID: "eb3cdab6-15fa-40e1-a187-e277086227da"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.765905 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41fafe33-b43b-4dcb-9edd-b365d0749e10-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "41fafe33-b43b-4dcb-9edd-b365d0749e10" (UID: "41fafe33-b43b-4dcb-9edd-b365d0749e10"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.766464 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb3cdab6-15fa-40e1-a187-e277086227da-config-data" (OuterVolumeSpecName: "config-data") pod "eb3cdab6-15fa-40e1-a187-e277086227da" (UID: "eb3cdab6-15fa-40e1-a187-e277086227da"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.766547 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4baf4d8-24c9-4aa8-b72e-9d6d9cdd5f32-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f4baf4d8-24c9-4aa8-b72e-9d6d9cdd5f32" (UID: "f4baf4d8-24c9-4aa8-b72e-9d6d9cdd5f32"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.767592 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a234743b-8983-4a60-bbb4-59ad823b83e2-logs" (OuterVolumeSpecName: "logs") pod "a234743b-8983-4a60-bbb4-59ad823b83e2" (UID: "a234743b-8983-4a60-bbb4-59ad823b83e2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.775107 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a234743b-8983-4a60-bbb4-59ad823b83e2-kube-api-access-8klgh" (OuterVolumeSpecName: "kube-api-access-8klgh") pod "a234743b-8983-4a60-bbb4-59ad823b83e2" (UID: "a234743b-8983-4a60-bbb4-59ad823b83e2"). InnerVolumeSpecName "kube-api-access-8klgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.790119 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a234743b-8983-4a60-bbb4-59ad823b83e2-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "a234743b-8983-4a60-bbb4-59ad823b83e2" (UID: "a234743b-8983-4a60-bbb4-59ad823b83e2"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.790873 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41fafe33-b43b-4dcb-9edd-b365d0749e10-combined-ca-bundle\") pod \"41fafe33-b43b-4dcb-9edd-b365d0749e10\" (UID: \"41fafe33-b43b-4dcb-9edd-b365d0749e10\") " Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.790970 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4baf4d8-24c9-4aa8-b72e-9d6d9cdd5f32-combined-ca-bundle\") pod \"f4baf4d8-24c9-4aa8-b72e-9d6d9cdd5f32\" (UID: \"f4baf4d8-24c9-4aa8-b72e-9d6d9cdd5f32\") " Feb 27 16:33:08 crc kubenswrapper[4830]: W0227 16:33:08.791064 4830 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/41fafe33-b43b-4dcb-9edd-b365d0749e10/volumes/kubernetes.io~secret/combined-ca-bundle Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.791099 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41fafe33-b43b-4dcb-9edd-b365d0749e10-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "41fafe33-b43b-4dcb-9edd-b365d0749e10" (UID: "41fafe33-b43b-4dcb-9edd-b365d0749e10"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:33:08 crc kubenswrapper[4830]: W0227 16:33:08.791225 4830 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/f4baf4d8-24c9-4aa8-b72e-9d6d9cdd5f32/volumes/kubernetes.io~secret/combined-ca-bundle Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.791239 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4baf4d8-24c9-4aa8-b72e-9d6d9cdd5f32-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f4baf4d8-24c9-4aa8-b72e-9d6d9cdd5f32" (UID: "f4baf4d8-24c9-4aa8-b72e-9d6d9cdd5f32"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.791647 4830 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a234743b-8983-4a60-bbb4-59ad823b83e2-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.791677 4830 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/eb3cdab6-15fa-40e1-a187-e277086227da-kolla-config\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.791689 4830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/eb3cdab6-15fa-40e1-a187-e277086227da-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.791701 4830 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a234743b-8983-4a60-bbb4-59ad823b83e2-logs\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.791713 4830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/41fafe33-b43b-4dcb-9edd-b365d0749e10-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.791727 4830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73fa27e0-b59d-44b0-8648-7e696f71cd61-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.791737 4830 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b2fe2ad2-a0de-49aa-95fd-ef5f15032676-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.791747 4830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4baf4d8-24c9-4aa8-b72e-9d6d9cdd5f32-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.791760 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8klgh\" (UniqueName: \"kubernetes.io/projected/a234743b-8983-4a60-bbb4-59ad823b83e2-kube-api-access-8klgh\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.791773 4830 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf57a5ff-eb3d-4f4b-8ac0-0aac8013fbbf-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.791785 4830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf57a5ff-eb3d-4f4b-8ac0-0aac8013fbbf-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.791796 4830 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/91362817-2bc3-48d8-a4ae-8ba5cb8f2b4c-logs\") on node \"crc\" DevicePath \"\"" Feb 27 
16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.798455 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91362817-2bc3-48d8-a4ae-8ba5cb8f2b4c-kube-api-access-4lx6w" (OuterVolumeSpecName: "kube-api-access-4lx6w") pod "91362817-2bc3-48d8-a4ae-8ba5cb8f2b4c" (UID: "91362817-2bc3-48d8-a4ae-8ba5cb8f2b4c"). InnerVolumeSpecName "kube-api-access-4lx6w". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.798932 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4280aaf-817d-41e1-9867-715359ae322e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f4280aaf-817d-41e1-9867-715359ae322e" (UID: "f4280aaf-817d-41e1-9867-715359ae322e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.805041 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02d5a77c-198f-43aa-96ab-2ac2d76c7743" path="/var/lib/kubelet/pods/02d5a77c-198f-43aa-96ab-2ac2d76c7743/volumes" Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.805403 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0bee1ae7-32fb-484d-a81a-47fe31e25d70" path="/var/lib/kubelet/pods/0bee1ae7-32fb-484d-a81a-47fe31e25d70/volumes" Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.805866 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21656f50-51b8-4761-8b9e-c2b823dace13" path="/var/lib/kubelet/pods/21656f50-51b8-4761-8b9e-c2b823dace13/volumes" Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.808859 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26018553-1865-499d-9c9b-932807fce26c" path="/var/lib/kubelet/pods/26018553-1865-499d-9c9b-932807fce26c/volumes" Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.808930 4830 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73fa27e0-b59d-44b0-8648-7e696f71cd61-config-data" (OuterVolumeSpecName: "config-data") pod "73fa27e0-b59d-44b0-8648-7e696f71cd61" (UID: "73fa27e0-b59d-44b0-8648-7e696f71cd61"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.809403 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38b57350-6ca0-4090-876b-7727c983cf52" path="/var/lib/kubelet/pods/38b57350-6ca0-4090-876b-7727c983cf52/volumes" Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.811899 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d6ca92a-3e98-4628-8936-37032cf27463" path="/var/lib/kubelet/pods/6d6ca92a-3e98-4628-8936-37032cf27463/volumes" Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.812775 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e9ca76d-ae40-4258-8f4d-09e15a0d8cd6" path="/var/lib/kubelet/pods/9e9ca76d-ae40-4258-8f4d-09e15a0d8cd6/volumes" Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.813487 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b63af300-2b1c-47a7-ae1d-1334deeefdb1" path="/var/lib/kubelet/pods/b63af300-2b1c-47a7-ae1d-1334deeefdb1/volumes" Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.814635 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c44ae554-632d-4347-ac9c-ce0c467ddce7" path="/var/lib/kubelet/pods/c44ae554-632d-4347-ac9c-ce0c467ddce7/volumes" Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.815167 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6ddd0b8-58b8-41b0-8555-6292f5d2d3d6" path="/var/lib/kubelet/pods/f6ddd0b8-58b8-41b0-8555-6292f5d2d3d6/volumes" Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.815985 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="fc88df57-1ce1-47f5-b850-7072073c4d72" path="/var/lib/kubelet/pods/fc88df57-1ce1-47f5-b850-7072073c4d72/volumes" Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.821637 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8d4cd44-9972-445e-bac3-63441b6fa4cc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d8d4cd44-9972-445e-bac3-63441b6fa4cc" (UID: "d8d4cd44-9972-445e-bac3-63441b6fa4cc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.848690 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-7668-account-create-update-6wj4n"] Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.849117 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-7668-account-create-update-6wj4n"] Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.849142 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-776e-account-create-update-kg8tx"] Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.849157 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-776e-account-create-update-kg8tx"] Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.850965 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91362817-2bc3-48d8-a4ae-8ba5cb8f2b4c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "91362817-2bc3-48d8-a4ae-8ba5cb8f2b4c" (UID: "91362817-2bc3-48d8-a4ae-8ba5cb8f2b4c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.855396 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb3cdab6-15fa-40e1-a187-e277086227da-kube-api-access-5j8x9" (OuterVolumeSpecName: "kube-api-access-5j8x9") pod "eb3cdab6-15fa-40e1-a187-e277086227da" (UID: "eb3cdab6-15fa-40e1-a187-e277086227da"). InnerVolumeSpecName "kube-api-access-5j8x9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.865698 4830 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.881506 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4baf4d8-24c9-4aa8-b72e-9d6d9cdd5f32-config-data" (OuterVolumeSpecName: "config-data") pod "f4baf4d8-24c9-4aa8-b72e-9d6d9cdd5f32" (UID: "f4baf4d8-24c9-4aa8-b72e-9d6d9cdd5f32"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.884492 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-948fdb9cd-ncm6f"] Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.889688 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-keystone-listener-948fdb9cd-ncm6f"] Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.891519 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41fafe33-b43b-4dcb-9edd-b365d0749e10-config-data" (OuterVolumeSpecName: "config-data") pod "41fafe33-b43b-4dcb-9edd-b365d0749e10" (UID: "41fafe33-b43b-4dcb-9edd-b365d0749e10"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.894021 4830 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.894042 4830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91362817-2bc3-48d8-a4ae-8ba5cb8f2b4c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.894054 4830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73fa27e0-b59d-44b0-8648-7e696f71cd61-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.894064 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4lx6w\" (UniqueName: \"kubernetes.io/projected/91362817-2bc3-48d8-a4ae-8ba5cb8f2b4c-kube-api-access-4lx6w\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.894072 4830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4baf4d8-24c9-4aa8-b72e-9d6d9cdd5f32-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.894080 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5j8x9\" (UniqueName: \"kubernetes.io/projected/eb3cdab6-15fa-40e1-a187-e277086227da-kube-api-access-5j8x9\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.894088 4830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4280aaf-817d-41e1-9867-715359ae322e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.894096 4830 
reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41fafe33-b43b-4dcb-9edd-b365d0749e10-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.894105 4830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8d4cd44-9972-445e-bac3-63441b6fa4cc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.901639 4830 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.908785 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91362817-2bc3-48d8-a4ae-8ba5cb8f2b4c-config-data" (OuterVolumeSpecName: "config-data") pod "91362817-2bc3-48d8-a4ae-8ba5cb8f2b4c" (UID: "91362817-2bc3-48d8-a4ae-8ba5cb8f2b4c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.910141 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a234743b-8983-4a60-bbb4-59ad823b83e2-config-data" (OuterVolumeSpecName: "config-data") pod "a234743b-8983-4a60-bbb4-59ad823b83e2" (UID: "a234743b-8983-4a60-bbb4-59ad823b83e2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.910238 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2fe2ad2-a0de-49aa-95fd-ef5f15032676-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "b2fe2ad2-a0de-49aa-95fd-ef5f15032676" (UID: "b2fe2ad2-a0de-49aa-95fd-ef5f15032676"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.926048 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8d4cd44-9972-445e-bac3-63441b6fa4cc-config-data" (OuterVolumeSpecName: "config-data") pod "d8d4cd44-9972-445e-bac3-63441b6fa4cc" (UID: "d8d4cd44-9972-445e-bac3-63441b6fa4cc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:33:08 crc kubenswrapper[4830]: E0227 16:33:08.944241 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/41fafe33-b43b-4dcb-9edd-b365d0749e10-internal-tls-certs podName:41fafe33-b43b-4dcb-9edd-b365d0749e10 nodeName:}" failed. No retries permitted until 2026-02-27 16:33:09.444211763 +0000 UTC m=+1585.533484226 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "internal-tls-certs" (UniqueName: "kubernetes.io/secret/41fafe33-b43b-4dcb-9edd-b365d0749e10-internal-tls-certs") pod "41fafe33-b43b-4dcb-9edd-b365d0749e10" (UID: "41fafe33-b43b-4dcb-9edd-b365d0749e10") : error deleting /var/lib/kubelet/pods/41fafe33-b43b-4dcb-9edd-b365d0749e10/volume-subpaths: remove /var/lib/kubelet/pods/41fafe33-b43b-4dcb-9edd-b365d0749e10/volume-subpaths: no such file or directory Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.951089 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8d4cd44-9972-445e-bac3-63441b6fa4cc-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "d8d4cd44-9972-445e-bac3-63441b6fa4cc" (UID: "d8d4cd44-9972-445e-bac3-63441b6fa4cc"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.954113 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41fafe33-b43b-4dcb-9edd-b365d0749e10-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "41fafe33-b43b-4dcb-9edd-b365d0749e10" (UID: "41fafe33-b43b-4dcb-9edd-b365d0749e10"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.959333 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_7c017daa-cb8f-4629-80e6-a671a8455149/ovn-northd/0.log" Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.959430 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.959845 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4280aaf-817d-41e1-9867-715359ae322e-config-data" (OuterVolumeSpecName: "config-data") pod "f4280aaf-817d-41e1-9867-715359ae322e" (UID: "f4280aaf-817d-41e1-9867-715359ae322e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.968241 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb3cdab6-15fa-40e1-a187-e277086227da-memcached-tls-certs" (OuterVolumeSpecName: "memcached-tls-certs") pod "eb3cdab6-15fa-40e1-a187-e277086227da" (UID: "eb3cdab6-15fa-40e1-a187-e277086227da"). InnerVolumeSpecName "memcached-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.968410 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4280aaf-817d-41e1-9867-715359ae322e-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "f4280aaf-817d-41e1-9867-715359ae322e" (UID: "f4280aaf-817d-41e1-9867-715359ae322e"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.975705 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb3cdab6-15fa-40e1-a187-e277086227da-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "eb3cdab6-15fa-40e1-a187-e277086227da" (UID: "eb3cdab6-15fa-40e1-a187-e277086227da"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.990296 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91362817-2bc3-48d8-a4ae-8ba5cb8f2b4c-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "91362817-2bc3-48d8-a4ae-8ba5cb8f2b4c" (UID: "91362817-2bc3-48d8-a4ae-8ba5cb8f2b4c"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.995118 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c017daa-cb8f-4629-80e6-a671a8455149-config\") pod \"7c017daa-cb8f-4629-80e6-a671a8455149\" (UID: \"7c017daa-cb8f-4629-80e6-a671a8455149\") " Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.995282 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/7c017daa-cb8f-4629-80e6-a671a8455149-ovn-rundir\") pod \"7c017daa-cb8f-4629-80e6-a671a8455149\" (UID: \"7c017daa-cb8f-4629-80e6-a671a8455149\") " Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.995379 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c017daa-cb8f-4629-80e6-a671a8455149-combined-ca-bundle\") pod \"7c017daa-cb8f-4629-80e6-a671a8455149\" (UID: \"7c017daa-cb8f-4629-80e6-a671a8455149\") " Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.995476 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7c017daa-cb8f-4629-80e6-a671a8455149-metrics-certs-tls-certs\") pod \"7c017daa-cb8f-4629-80e6-a671a8455149\" (UID: \"7c017daa-cb8f-4629-80e6-a671a8455149\") " Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.995559 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/7c017daa-cb8f-4629-80e6-a671a8455149-ovn-northd-tls-certs\") pod \"7c017daa-cb8f-4629-80e6-a671a8455149\" (UID: \"7c017daa-cb8f-4629-80e6-a671a8455149\") " Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.995641 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/7c017daa-cb8f-4629-80e6-a671a8455149-scripts\") pod \"7c017daa-cb8f-4629-80e6-a671a8455149\" (UID: \"7c017daa-cb8f-4629-80e6-a671a8455149\") " Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.995729 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dcrzd\" (UniqueName: \"kubernetes.io/projected/7c017daa-cb8f-4629-80e6-a671a8455149-kube-api-access-dcrzd\") pod \"7c017daa-cb8f-4629-80e6-a671a8455149\" (UID: \"7c017daa-cb8f-4629-80e6-a671a8455149\") " Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.995586 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7c017daa-cb8f-4629-80e6-a671a8455149-ovn-rundir" (OuterVolumeSpecName: "ovn-rundir") pod "7c017daa-cb8f-4629-80e6-a671a8455149" (UID: "7c017daa-cb8f-4629-80e6-a671a8455149"). InnerVolumeSpecName "ovn-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.995617 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c017daa-cb8f-4629-80e6-a671a8455149-config" (OuterVolumeSpecName: "config") pod "7c017daa-cb8f-4629-80e6-a671a8455149" (UID: "7c017daa-cb8f-4629-80e6-a671a8455149"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.996446 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c017daa-cb8f-4629-80e6-a671a8455149-scripts" (OuterVolumeSpecName: "scripts") pod "7c017daa-cb8f-4629-80e6-a671a8455149" (UID: "7c017daa-cb8f-4629-80e6-a671a8455149"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.997055 4830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb3cdab6-15fa-40e1-a187-e277086227da-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.997081 4830 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.997091 4830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8d4cd44-9972-445e-bac3-63441b6fa4cc-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.997102 4830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91362817-2bc3-48d8-a4ae-8ba5cb8f2b4c-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.997129 4830 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c017daa-cb8f-4629-80e6-a671a8455149-config\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.997138 4830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a234743b-8983-4a60-bbb4-59ad823b83e2-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.997147 4830 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b2fe2ad2-a0de-49aa-95fd-ef5f15032676-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.997156 4830 reconciler_common.go:293] "Volume detached for 
volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/41fafe33-b43b-4dcb-9edd-b365d0749e10-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.997165 4830 reconciler_common.go:293] "Volume detached for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb3cdab6-15fa-40e1-a187-e277086227da-memcached-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.997176 4830 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/91362817-2bc3-48d8-a4ae-8ba5cb8f2b4c-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.997263 4830 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4280aaf-817d-41e1-9867-715359ae322e-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.997274 4830 reconciler_common.go:293] "Volume detached for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/7c017daa-cb8f-4629-80e6-a671a8455149-ovn-rundir\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.997283 4830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4280aaf-817d-41e1-9867-715359ae322e-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.997292 4830 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8d4cd44-9972-445e-bac3-63441b6fa4cc-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.997300 4830 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7c017daa-cb8f-4629-80e6-a671a8455149-scripts\") on 
node \"crc\" DevicePath \"\"" Feb 27 16:33:08 crc kubenswrapper[4830]: I0227 16:33:08.998081 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a234743b-8983-4a60-bbb4-59ad823b83e2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a234743b-8983-4a60-bbb4-59ad823b83e2" (UID: "a234743b-8983-4a60-bbb4-59ad823b83e2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:33:09 crc kubenswrapper[4830]: I0227 16:33:09.008810 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73fa27e0-b59d-44b0-8648-7e696f71cd61-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "73fa27e0-b59d-44b0-8648-7e696f71cd61" (UID: "73fa27e0-b59d-44b0-8648-7e696f71cd61"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:33:09 crc kubenswrapper[4830]: I0227 16:33:09.018663 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a234743b-8983-4a60-bbb4-59ad823b83e2-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "a234743b-8983-4a60-bbb4-59ad823b83e2" (UID: "a234743b-8983-4a60-bbb4-59ad823b83e2"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:33:09 crc kubenswrapper[4830]: I0227 16:33:09.018762 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c017daa-cb8f-4629-80e6-a671a8455149-kube-api-access-dcrzd" (OuterVolumeSpecName: "kube-api-access-dcrzd") pod "7c017daa-cb8f-4629-80e6-a671a8455149" (UID: "7c017daa-cb8f-4629-80e6-a671a8455149"). InnerVolumeSpecName "kube-api-access-dcrzd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:33:09 crc kubenswrapper[4830]: I0227 16:33:09.023831 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c017daa-cb8f-4629-80e6-a671a8455149-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7c017daa-cb8f-4629-80e6-a671a8455149" (UID: "7c017daa-cb8f-4629-80e6-a671a8455149"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:33:09 crc kubenswrapper[4830]: I0227 16:33:09.032043 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91362817-2bc3-48d8-a4ae-8ba5cb8f2b4c-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "91362817-2bc3-48d8-a4ae-8ba5cb8f2b4c" (UID: "91362817-2bc3-48d8-a4ae-8ba5cb8f2b4c"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:33:09 crc kubenswrapper[4830]: I0227 16:33:09.034561 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a234743b-8983-4a60-bbb4-59ad823b83e2-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "a234743b-8983-4a60-bbb4-59ad823b83e2" (UID: "a234743b-8983-4a60-bbb4-59ad823b83e2"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:33:09 crc kubenswrapper[4830]: I0227 16:33:09.037273 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2fe2ad2-a0de-49aa-95fd-ef5f15032676-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b2fe2ad2-a0de-49aa-95fd-ef5f15032676" (UID: "b2fe2ad2-a0de-49aa-95fd-ef5f15032676"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:33:09 crc kubenswrapper[4830]: I0227 16:33:09.060924 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2fe2ad2-a0de-49aa-95fd-ef5f15032676-config-data" (OuterVolumeSpecName: "config-data") pod "b2fe2ad2-a0de-49aa-95fd-ef5f15032676" (UID: "b2fe2ad2-a0de-49aa-95fd-ef5f15032676"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:33:09 crc kubenswrapper[4830]: I0227 16:33:09.063709 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5d54db5966-xcg7l" event={"ID":"a234743b-8983-4a60-bbb4-59ad823b83e2","Type":"ContainerDied","Data":"60b1698b9bf51b951bd77870e5046fcfdcd7a8f538faf1f1732e6055788dfb74"} Feb 27 16:33:09 crc kubenswrapper[4830]: I0227 16:33:09.063755 4830 scope.go:117] "RemoveContainer" containerID="5d61bb0dcfd0af97605ea6793d0ccb409521660eb0cfce03c505ba533a6f52a4" Feb 27 16:33:09 crc kubenswrapper[4830]: I0227 16:33:09.063876 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5d54db5966-xcg7l" Feb 27 16:33:09 crc kubenswrapper[4830]: I0227 16:33:09.072372 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c017daa-cb8f-4629-80e6-a671a8455149-ovn-northd-tls-certs" (OuterVolumeSpecName: "ovn-northd-tls-certs") pod "7c017daa-cb8f-4629-80e6-a671a8455149" (UID: "7c017daa-cb8f-4629-80e6-a671a8455149"). InnerVolumeSpecName "ovn-northd-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:33:09 crc kubenswrapper[4830]: I0227 16:33:09.073636 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_7c017daa-cb8f-4629-80e6-a671a8455149/ovn-northd/0.log" Feb 27 16:33:09 crc kubenswrapper[4830]: I0227 16:33:09.073682 4830 generic.go:334] "Generic (PLEG): container finished" podID="7c017daa-cb8f-4629-80e6-a671a8455149" containerID="3a1363ee7dd262f992c78bd580ddc30893c1d6cb37be4314ebecd1d79f83e351" exitCode=139 Feb 27 16:33:09 crc kubenswrapper[4830]: I0227 16:33:09.073791 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Feb 27 16:33:09 crc kubenswrapper[4830]: I0227 16:33:09.073812 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"7c017daa-cb8f-4629-80e6-a671a8455149","Type":"ContainerDied","Data":"3a1363ee7dd262f992c78bd580ddc30893c1d6cb37be4314ebecd1d79f83e351"} Feb 27 16:33:09 crc kubenswrapper[4830]: I0227 16:33:09.074022 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"7c017daa-cb8f-4629-80e6-a671a8455149","Type":"ContainerDied","Data":"3cc30a613b2117b4f5cbfde73330d0349be12252716eaff7963497d00f69d2cd"} Feb 27 16:33:09 crc kubenswrapper[4830]: I0227 16:33:09.075887 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"eb3cdab6-15fa-40e1-a187-e277086227da","Type":"ContainerDied","Data":"23f9b2043dd7472d750b86599a6ec4fd73edb0ad6c2affdab8a506cb40cd6394"} Feb 27 16:33:09 crc kubenswrapper[4830]: I0227 16:33:09.075913 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Feb 27 16:33:09 crc kubenswrapper[4830]: I0227 16:33:09.083125 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-5550-account-create-update-q76l4" Feb 27 16:33:09 crc kubenswrapper[4830]: I0227 16:33:09.083518 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 27 16:33:09 crc kubenswrapper[4830]: I0227 16:33:09.084658 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 27 16:33:09 crc kubenswrapper[4830]: I0227 16:33:09.085084 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 27 16:33:09 crc kubenswrapper[4830]: I0227 16:33:09.087576 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-58db7bd5dd-jr8zt" Feb 27 16:33:09 crc kubenswrapper[4830]: I0227 16:33:09.088055 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 27 16:33:09 crc kubenswrapper[4830]: I0227 16:33:09.088320 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 27 16:33:09 crc kubenswrapper[4830]: I0227 16:33:09.088966 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 27 16:33:09 crc kubenswrapper[4830]: I0227 16:33:09.090138 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-58c49587-cz4f5" Feb 27 16:33:09 crc kubenswrapper[4830]: I0227 16:33:09.095252 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 27 16:33:09 crc kubenswrapper[4830]: I0227 16:33:09.098160 4830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c017daa-cb8f-4629-80e6-a671a8455149-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:09 crc kubenswrapper[4830]: I0227 16:33:09.098180 4830 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/73fa27e0-b59d-44b0-8648-7e696f71cd61-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:09 crc kubenswrapper[4830]: I0227 16:33:09.098210 4830 reconciler_common.go:293] "Volume detached for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/7c017daa-cb8f-4629-80e6-a671a8455149-ovn-northd-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:09 crc kubenswrapper[4830]: I0227 16:33:09.098219 4830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2fe2ad2-a0de-49aa-95fd-ef5f15032676-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:09 crc kubenswrapper[4830]: I0227 16:33:09.098474 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dcrzd\" (UniqueName: \"kubernetes.io/projected/7c017daa-cb8f-4629-80e6-a671a8455149-kube-api-access-dcrzd\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:09 crc kubenswrapper[4830]: I0227 16:33:09.098510 4830 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/91362817-2bc3-48d8-a4ae-8ba5cb8f2b4c-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:09 crc kubenswrapper[4830]: I0227 16:33:09.098520 4830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2fe2ad2-a0de-49aa-95fd-ef5f15032676-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:09 crc 
kubenswrapper[4830]: I0227 16:33:09.098548 4830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a234743b-8983-4a60-bbb4-59ad823b83e2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:09 crc kubenswrapper[4830]: I0227 16:33:09.098557 4830 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a234743b-8983-4a60-bbb4-59ad823b83e2-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:09 crc kubenswrapper[4830]: I0227 16:33:09.098567 4830 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a234743b-8983-4a60-bbb4-59ad823b83e2-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:09 crc kubenswrapper[4830]: I0227 16:33:09.107895 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c017daa-cb8f-4629-80e6-a671a8455149-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "7c017daa-cb8f-4629-80e6-a671a8455149" (UID: "7c017daa-cb8f-4629-80e6-a671a8455149"). InnerVolumeSpecName "metrics-certs-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:33:09 crc kubenswrapper[4830]: I0227 16:33:09.115583 4830 scope.go:117] "RemoveContainer" containerID="bcaad14a5dbb96adf7a18f1f57a6f9461056ab8d5981e03e5ed3e64de132d692" Feb 27 16:33:09 crc kubenswrapper[4830]: I0227 16:33:09.147433 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-5550-account-create-update-q76l4"] Feb 27 16:33:09 crc kubenswrapper[4830]: I0227 16:33:09.147473 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-5550-account-create-update-q76l4"] Feb 27 16:33:09 crc kubenswrapper[4830]: I0227 16:33:09.168513 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-5d54db5966-xcg7l"] Feb 27 16:33:09 crc kubenswrapper[4830]: I0227 16:33:09.173826 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-5d54db5966-xcg7l"] Feb 27 16:33:09 crc kubenswrapper[4830]: I0227 16:33:09.200252 4830 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/69771028-c356-4cfb-9f0b-30f67d320657-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:09 crc kubenswrapper[4830]: I0227 16:33:09.200272 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vn7kc\" (UniqueName: \"kubernetes.io/projected/69771028-c356-4cfb-9f0b-30f67d320657-kube-api-access-vn7kc\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:09 crc kubenswrapper[4830]: I0227 16:33:09.200282 4830 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7c017daa-cb8f-4629-80e6-a671a8455149-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:09 crc kubenswrapper[4830]: I0227 16:33:09.206097 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 27 16:33:09 crc kubenswrapper[4830]: I0227 16:33:09.212790 4830 kubelet.go:2431] 
"SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 27 16:33:09 crc kubenswrapper[4830]: I0227 16:33:09.232019 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-58db7bd5dd-jr8zt"] Feb 27 16:33:09 crc kubenswrapper[4830]: I0227 16:33:09.239246 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-58db7bd5dd-jr8zt"] Feb 27 16:33:09 crc kubenswrapper[4830]: E0227 16:33:09.389623 4830 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="68dcbd84b2ee99bb92f47d75adccd5e677bcf1de6646eeea5b827c8e802fad81" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Feb 27 16:33:09 crc kubenswrapper[4830]: E0227 16:33:09.392469 4830 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="68dcbd84b2ee99bb92f47d75adccd5e677bcf1de6646eeea5b827c8e802fad81" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Feb 27 16:33:09 crc kubenswrapper[4830]: E0227 16:33:09.393440 4830 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="68dcbd84b2ee99bb92f47d75adccd5e677bcf1de6646eeea5b827c8e802fad81" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Feb 27 16:33:09 crc kubenswrapper[4830]: E0227 16:33:09.393478 4830 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="bf33c958-d345-4a0b-a2d8-7c8aedfb5cf3" containerName="galera" 
Feb 27 16:33:09 crc kubenswrapper[4830]: I0227 16:33:09.438680 4830 scope.go:117] "RemoveContainer" containerID="2bcb11d594f79ad72480623964189734a371dbef467c7efb79b03ffa6975a1e6" Feb 27 16:33:09 crc kubenswrapper[4830]: I0227 16:33:09.451847 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-lx5sm" Feb 27 16:33:09 crc kubenswrapper[4830]: I0227 16:33:09.452384 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/memcached-0"] Feb 27 16:33:09 crc kubenswrapper[4830]: I0227 16:33:09.474205 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/memcached-0"] Feb 27 16:33:09 crc kubenswrapper[4830]: I0227 16:33:09.501090 4830 scope.go:117] "RemoveContainer" containerID="3a1363ee7dd262f992c78bd580ddc30893c1d6cb37be4314ebecd1d79f83e351" Feb 27 16:33:09 crc kubenswrapper[4830]: I0227 16:33:09.515765 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/09849d6c-7457-4130-9074-73154d22af1f-operator-scripts\") pod \"09849d6c-7457-4130-9074-73154d22af1f\" (UID: \"09849d6c-7457-4130-9074-73154d22af1f\") " Feb 27 16:33:09 crc kubenswrapper[4830]: I0227 16:33:09.515825 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t98xs\" (UniqueName: \"kubernetes.io/projected/09849d6c-7457-4130-9074-73154d22af1f-kube-api-access-t98xs\") pod \"09849d6c-7457-4130-9074-73154d22af1f\" (UID: \"09849d6c-7457-4130-9074-73154d22af1f\") " Feb 27 16:33:09 crc kubenswrapper[4830]: I0227 16:33:09.515885 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/41fafe33-b43b-4dcb-9edd-b365d0749e10-internal-tls-certs\") pod \"41fafe33-b43b-4dcb-9edd-b365d0749e10\" (UID: \"41fafe33-b43b-4dcb-9edd-b365d0749e10\") " Feb 27 16:33:09 crc kubenswrapper[4830]: I0227 16:33:09.519771 4830 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41fafe33-b43b-4dcb-9edd-b365d0749e10-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "41fafe33-b43b-4dcb-9edd-b365d0749e10" (UID: "41fafe33-b43b-4dcb-9edd-b365d0749e10"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:33:09 crc kubenswrapper[4830]: I0227 16:33:09.520634 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09849d6c-7457-4130-9074-73154d22af1f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "09849d6c-7457-4130-9074-73154d22af1f" (UID: "09849d6c-7457-4130-9074-73154d22af1f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:33:09 crc kubenswrapper[4830]: I0227 16:33:09.534465 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 27 16:33:09 crc kubenswrapper[4830]: I0227 16:33:09.535787 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09849d6c-7457-4130-9074-73154d22af1f-kube-api-access-t98xs" (OuterVolumeSpecName: "kube-api-access-t98xs") pod "09849d6c-7457-4130-9074-73154d22af1f" (UID: "09849d6c-7457-4130-9074-73154d22af1f"). InnerVolumeSpecName "kube-api-access-t98xs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:33:09 crc kubenswrapper[4830]: I0227 16:33:09.552667 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 27 16:33:09 crc kubenswrapper[4830]: I0227 16:33:09.560843 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 27 16:33:09 crc kubenswrapper[4830]: I0227 16:33:09.576294 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 27 16:33:09 crc kubenswrapper[4830]: I0227 16:33:09.601957 4830 scope.go:117] "RemoveContainer" containerID="2bcb11d594f79ad72480623964189734a371dbef467c7efb79b03ffa6975a1e6" Feb 27 16:33:09 crc kubenswrapper[4830]: E0227 16:33:09.606608 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2bcb11d594f79ad72480623964189734a371dbef467c7efb79b03ffa6975a1e6\": container with ID starting with 2bcb11d594f79ad72480623964189734a371dbef467c7efb79b03ffa6975a1e6 not found: ID does not exist" containerID="2bcb11d594f79ad72480623964189734a371dbef467c7efb79b03ffa6975a1e6" Feb 27 16:33:09 crc kubenswrapper[4830]: I0227 16:33:09.606648 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2bcb11d594f79ad72480623964189734a371dbef467c7efb79b03ffa6975a1e6"} err="failed to get container status \"2bcb11d594f79ad72480623964189734a371dbef467c7efb79b03ffa6975a1e6\": rpc error: code = NotFound desc = could not find container \"2bcb11d594f79ad72480623964189734a371dbef467c7efb79b03ffa6975a1e6\": container with ID starting with 2bcb11d594f79ad72480623964189734a371dbef467c7efb79b03ffa6975a1e6 not found: ID does not exist" Feb 27 16:33:09 crc kubenswrapper[4830]: I0227 16:33:09.606672 4830 scope.go:117] "RemoveContainer" containerID="3a1363ee7dd262f992c78bd580ddc30893c1d6cb37be4314ebecd1d79f83e351" Feb 27 16:33:09 crc kubenswrapper[4830]: E0227 16:33:09.606935 
4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a1363ee7dd262f992c78bd580ddc30893c1d6cb37be4314ebecd1d79f83e351\": container with ID starting with 3a1363ee7dd262f992c78bd580ddc30893c1d6cb37be4314ebecd1d79f83e351 not found: ID does not exist" containerID="3a1363ee7dd262f992c78bd580ddc30893c1d6cb37be4314ebecd1d79f83e351" Feb 27 16:33:09 crc kubenswrapper[4830]: I0227 16:33:09.606978 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a1363ee7dd262f992c78bd580ddc30893c1d6cb37be4314ebecd1d79f83e351"} err="failed to get container status \"3a1363ee7dd262f992c78bd580ddc30893c1d6cb37be4314ebecd1d79f83e351\": rpc error: code = NotFound desc = could not find container \"3a1363ee7dd262f992c78bd580ddc30893c1d6cb37be4314ebecd1d79f83e351\": container with ID starting with 3a1363ee7dd262f992c78bd580ddc30893c1d6cb37be4314ebecd1d79f83e351 not found: ID does not exist" Feb 27 16:33:09 crc kubenswrapper[4830]: I0227 16:33:09.606996 4830 scope.go:117] "RemoveContainer" containerID="1d243201cb634428da46e5d01d1c419016026f2c349204898c21d5e7060a1280" Feb 27 16:33:09 crc kubenswrapper[4830]: I0227 16:33:09.615157 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-northd-0"] Feb 27 16:33:09 crc kubenswrapper[4830]: I0227 16:33:09.619263 4830 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/09849d6c-7457-4130-9074-73154d22af1f-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:09 crc kubenswrapper[4830]: I0227 16:33:09.619293 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t98xs\" (UniqueName: \"kubernetes.io/projected/09849d6c-7457-4130-9074-73154d22af1f-kube-api-access-t98xs\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:09 crc kubenswrapper[4830]: I0227 16:33:09.619304 4830 reconciler_common.go:293] "Volume detached for volume 
\"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/41fafe33-b43b-4dcb-9edd-b365d0749e10-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:09 crc kubenswrapper[4830]: I0227 16:33:09.633519 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-northd-0"] Feb 27 16:33:09 crc kubenswrapper[4830]: I0227 16:33:09.633571 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 27 16:33:09 crc kubenswrapper[4830]: I0227 16:33:09.636200 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 27 16:33:09 crc kubenswrapper[4830]: I0227 16:33:09.641338 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 27 16:33:09 crc kubenswrapper[4830]: I0227 16:33:09.646493 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 27 16:33:09 crc kubenswrapper[4830]: I0227 16:33:09.657816 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-58c49587-cz4f5"] Feb 27 16:33:09 crc kubenswrapper[4830]: I0227 16:33:09.667751 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-worker-58c49587-cz4f5"] Feb 27 16:33:09 crc kubenswrapper[4830]: I0227 16:33:09.674256 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 27 16:33:09 crc kubenswrapper[4830]: I0227 16:33:09.680031 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 27 16:33:09 crc kubenswrapper[4830]: I0227 16:33:09.729640 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 27 16:33:09 crc kubenswrapper[4830]: I0227 16:33:09.736101 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Feb 27 16:33:09 crc kubenswrapper[4830]: E0227 16:33:09.758735 4830 cadvisor_stats_provider.go:516] "Partial failure issuing 
cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod41fafe33_b43b_4dcb_9edd_b365d0749e10.slice\": RecentStats: unable to find data in memory cache]" Feb 27 16:33:10 crc kubenswrapper[4830]: I0227 16:33:10.097530 4830 generic.go:334] "Generic (PLEG): container finished" podID="aa5b7bdd-50bb-4123-a32a-0c7e97035a3f" containerID="60b83b906afc06b23e5e1362e3117ceeff1474cd84090478f13efba3e31b7cf5" exitCode=0 Feb 27 16:33:10 crc kubenswrapper[4830]: I0227 16:33:10.097630 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"aa5b7bdd-50bb-4123-a32a-0c7e97035a3f","Type":"ContainerDied","Data":"60b83b906afc06b23e5e1362e3117ceeff1474cd84090478f13efba3e31b7cf5"} Feb 27 16:33:10 crc kubenswrapper[4830]: I0227 16:33:10.102888 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-lx5sm" event={"ID":"09849d6c-7457-4130-9074-73154d22af1f","Type":"ContainerDied","Data":"86b15d76da0cc80d79a54876e95096e018daf6373a2151ef62d4412ba2710fe1"} Feb 27 16:33:10 crc kubenswrapper[4830]: I0227 16:33:10.102937 4830 scope.go:117] "RemoveContainer" containerID="3c3ffaf742258d5543939f307e4df804a0b02c0397303e259d28b6fddcbd5115" Feb 27 16:33:10 crc kubenswrapper[4830]: I0227 16:33:10.103069 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-lx5sm" Feb 27 16:33:10 crc kubenswrapper[4830]: I0227 16:33:10.106124 4830 generic.go:334] "Generic (PLEG): container finished" podID="bf33c958-d345-4a0b-a2d8-7c8aedfb5cf3" containerID="68dcbd84b2ee99bb92f47d75adccd5e677bcf1de6646eeea5b827c8e802fad81" exitCode=0 Feb 27 16:33:10 crc kubenswrapper[4830]: I0227 16:33:10.106174 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"bf33c958-d345-4a0b-a2d8-7c8aedfb5cf3","Type":"ContainerDied","Data":"68dcbd84b2ee99bb92f47d75adccd5e677bcf1de6646eeea5b827c8e802fad81"} Feb 27 16:33:10 crc kubenswrapper[4830]: I0227 16:33:10.148634 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-lx5sm"] Feb 27 16:33:10 crc kubenswrapper[4830]: I0227 16:33:10.158415 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-lx5sm"] Feb 27 16:33:10 crc kubenswrapper[4830]: E0227 16:33:10.228097 4830 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Feb 27 16:33:10 crc kubenswrapper[4830]: E0227 16:33:10.228172 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/47514135-95a6-4b77-815a-ebf23a3cab82-config-data podName:47514135-95a6-4b77-815a-ebf23a3cab82 nodeName:}" failed. No retries permitted until 2026-02-27 16:33:18.228158058 +0000 UTC m=+1594.317430521 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/47514135-95a6-4b77-815a-ebf23a3cab82-config-data") pod "rabbitmq-cell1-server-0" (UID: "47514135-95a6-4b77-815a-ebf23a3cab82") : configmap "rabbitmq-cell1-config-data" not found Feb 27 16:33:10 crc kubenswrapper[4830]: I0227 16:33:10.369371 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 27 16:33:10 crc kubenswrapper[4830]: I0227 16:33:10.431321 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/aa5b7bdd-50bb-4123-a32a-0c7e97035a3f-rabbitmq-erlang-cookie\") pod \"aa5b7bdd-50bb-4123-a32a-0c7e97035a3f\" (UID: \"aa5b7bdd-50bb-4123-a32a-0c7e97035a3f\") " Feb 27 16:33:10 crc kubenswrapper[4830]: I0227 16:33:10.431400 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/aa5b7bdd-50bb-4123-a32a-0c7e97035a3f-rabbitmq-plugins\") pod \"aa5b7bdd-50bb-4123-a32a-0c7e97035a3f\" (UID: \"aa5b7bdd-50bb-4123-a32a-0c7e97035a3f\") " Feb 27 16:33:10 crc kubenswrapper[4830]: I0227 16:33:10.431431 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jb7f9\" (UniqueName: \"kubernetes.io/projected/aa5b7bdd-50bb-4123-a32a-0c7e97035a3f-kube-api-access-jb7f9\") pod \"aa5b7bdd-50bb-4123-a32a-0c7e97035a3f\" (UID: \"aa5b7bdd-50bb-4123-a32a-0c7e97035a3f\") " Feb 27 16:33:10 crc kubenswrapper[4830]: I0227 16:33:10.431458 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/aa5b7bdd-50bb-4123-a32a-0c7e97035a3f-rabbitmq-confd\") pod \"aa5b7bdd-50bb-4123-a32a-0c7e97035a3f\" (UID: \"aa5b7bdd-50bb-4123-a32a-0c7e97035a3f\") " Feb 27 16:33:10 crc kubenswrapper[4830]: I0227 16:33:10.431507 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/aa5b7bdd-50bb-4123-a32a-0c7e97035a3f-config-data\") pod \"aa5b7bdd-50bb-4123-a32a-0c7e97035a3f\" (UID: \"aa5b7bdd-50bb-4123-a32a-0c7e97035a3f\") " Feb 27 16:33:10 crc kubenswrapper[4830]: I0227 16:33:10.431541 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/aa5b7bdd-50bb-4123-a32a-0c7e97035a3f-erlang-cookie-secret\") pod \"aa5b7bdd-50bb-4123-a32a-0c7e97035a3f\" (UID: \"aa5b7bdd-50bb-4123-a32a-0c7e97035a3f\") " Feb 27 16:33:10 crc kubenswrapper[4830]: I0227 16:33:10.431561 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"aa5b7bdd-50bb-4123-a32a-0c7e97035a3f\" (UID: \"aa5b7bdd-50bb-4123-a32a-0c7e97035a3f\") " Feb 27 16:33:10 crc kubenswrapper[4830]: I0227 16:33:10.431635 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/aa5b7bdd-50bb-4123-a32a-0c7e97035a3f-plugins-conf\") pod \"aa5b7bdd-50bb-4123-a32a-0c7e97035a3f\" (UID: \"aa5b7bdd-50bb-4123-a32a-0c7e97035a3f\") " Feb 27 16:33:10 crc kubenswrapper[4830]: I0227 16:33:10.431690 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/aa5b7bdd-50bb-4123-a32a-0c7e97035a3f-rabbitmq-tls\") pod \"aa5b7bdd-50bb-4123-a32a-0c7e97035a3f\" (UID: \"aa5b7bdd-50bb-4123-a32a-0c7e97035a3f\") " Feb 27 16:33:10 crc kubenswrapper[4830]: I0227 16:33:10.431838 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/aa5b7bdd-50bb-4123-a32a-0c7e97035a3f-pod-info\") pod \"aa5b7bdd-50bb-4123-a32a-0c7e97035a3f\" (UID: \"aa5b7bdd-50bb-4123-a32a-0c7e97035a3f\") " Feb 27 16:33:10 crc kubenswrapper[4830]: I0227 16:33:10.431872 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/aa5b7bdd-50bb-4123-a32a-0c7e97035a3f-server-conf\") pod \"aa5b7bdd-50bb-4123-a32a-0c7e97035a3f\" (UID: \"aa5b7bdd-50bb-4123-a32a-0c7e97035a3f\") " Feb 27 16:33:10 crc kubenswrapper[4830]: I0227 
16:33:10.433813 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa5b7bdd-50bb-4123-a32a-0c7e97035a3f-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "aa5b7bdd-50bb-4123-a32a-0c7e97035a3f" (UID: "aa5b7bdd-50bb-4123-a32a-0c7e97035a3f"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:33:10 crc kubenswrapper[4830]: I0227 16:33:10.436836 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa5b7bdd-50bb-4123-a32a-0c7e97035a3f-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "aa5b7bdd-50bb-4123-a32a-0c7e97035a3f" (UID: "aa5b7bdd-50bb-4123-a32a-0c7e97035a3f"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:33:10 crc kubenswrapper[4830]: I0227 16:33:10.437595 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aa5b7bdd-50bb-4123-a32a-0c7e97035a3f-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "aa5b7bdd-50bb-4123-a32a-0c7e97035a3f" (UID: "aa5b7bdd-50bb-4123-a32a-0c7e97035a3f"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 16:33:10 crc kubenswrapper[4830]: I0227 16:33:10.438050 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aa5b7bdd-50bb-4123-a32a-0c7e97035a3f-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "aa5b7bdd-50bb-4123-a32a-0c7e97035a3f" (UID: "aa5b7bdd-50bb-4123-a32a-0c7e97035a3f"). InnerVolumeSpecName "rabbitmq-erlang-cookie". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 16:33:10 crc kubenswrapper[4830]: I0227 16:33:10.442447 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa5b7bdd-50bb-4123-a32a-0c7e97035a3f-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "aa5b7bdd-50bb-4123-a32a-0c7e97035a3f" (UID: "aa5b7bdd-50bb-4123-a32a-0c7e97035a3f"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:33:10 crc kubenswrapper[4830]: I0227 16:33:10.442895 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa5b7bdd-50bb-4123-a32a-0c7e97035a3f-kube-api-access-jb7f9" (OuterVolumeSpecName: "kube-api-access-jb7f9") pod "aa5b7bdd-50bb-4123-a32a-0c7e97035a3f" (UID: "aa5b7bdd-50bb-4123-a32a-0c7e97035a3f"). InnerVolumeSpecName "kube-api-access-jb7f9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:33:10 crc kubenswrapper[4830]: I0227 16:33:10.443571 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "persistence") pod "aa5b7bdd-50bb-4123-a32a-0c7e97035a3f" (UID: "aa5b7bdd-50bb-4123-a32a-0c7e97035a3f"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 27 16:33:10 crc kubenswrapper[4830]: I0227 16:33:10.443814 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/aa5b7bdd-50bb-4123-a32a-0c7e97035a3f-pod-info" (OuterVolumeSpecName: "pod-info") pod "aa5b7bdd-50bb-4123-a32a-0c7e97035a3f" (UID: "aa5b7bdd-50bb-4123-a32a-0c7e97035a3f"). InnerVolumeSpecName "pod-info". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 27 16:33:10 crc kubenswrapper[4830]: I0227 16:33:10.451784 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa5b7bdd-50bb-4123-a32a-0c7e97035a3f-config-data" (OuterVolumeSpecName: "config-data") pod "aa5b7bdd-50bb-4123-a32a-0c7e97035a3f" (UID: "aa5b7bdd-50bb-4123-a32a-0c7e97035a3f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:33:10 crc kubenswrapper[4830]: I0227 16:33:10.457714 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Feb 27 16:33:10 crc kubenswrapper[4830]: I0227 16:33:10.477792 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa5b7bdd-50bb-4123-a32a-0c7e97035a3f-server-conf" (OuterVolumeSpecName: "server-conf") pod "aa5b7bdd-50bb-4123-a32a-0c7e97035a3f" (UID: "aa5b7bdd-50bb-4123-a32a-0c7e97035a3f"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:33:10 crc kubenswrapper[4830]: I0227 16:33:10.533422 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"bf33c958-d345-4a0b-a2d8-7c8aedfb5cf3\" (UID: \"bf33c958-d345-4a0b-a2d8-7c8aedfb5cf3\") " Feb 27 16:33:10 crc kubenswrapper[4830]: I0227 16:33:10.533521 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf33c958-d345-4a0b-a2d8-7c8aedfb5cf3-combined-ca-bundle\") pod \"bf33c958-d345-4a0b-a2d8-7c8aedfb5cf3\" (UID: \"bf33c958-d345-4a0b-a2d8-7c8aedfb5cf3\") " Feb 27 16:33:10 crc kubenswrapper[4830]: I0227 16:33:10.533646 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf33c958-d345-4a0b-a2d8-7c8aedfb5cf3-galera-tls-certs\") pod \"bf33c958-d345-4a0b-a2d8-7c8aedfb5cf3\" (UID: \"bf33c958-d345-4a0b-a2d8-7c8aedfb5cf3\") " Feb 27 16:33:10 crc kubenswrapper[4830]: I0227 16:33:10.533717 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/bf33c958-d345-4a0b-a2d8-7c8aedfb5cf3-config-data-generated\") pod \"bf33c958-d345-4a0b-a2d8-7c8aedfb5cf3\" (UID: \"bf33c958-d345-4a0b-a2d8-7c8aedfb5cf3\") " Feb 27 16:33:10 crc kubenswrapper[4830]: I0227 16:33:10.533752 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bf33c958-d345-4a0b-a2d8-7c8aedfb5cf3-operator-scripts\") pod \"bf33c958-d345-4a0b-a2d8-7c8aedfb5cf3\" (UID: \"bf33c958-d345-4a0b-a2d8-7c8aedfb5cf3\") " Feb 27 16:33:10 crc kubenswrapper[4830]: I0227 16:33:10.533830 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: 
\"kubernetes.io/configmap/bf33c958-d345-4a0b-a2d8-7c8aedfb5cf3-config-data-default\") pod \"bf33c958-d345-4a0b-a2d8-7c8aedfb5cf3\" (UID: \"bf33c958-d345-4a0b-a2d8-7c8aedfb5cf3\") " Feb 27 16:33:10 crc kubenswrapper[4830]: I0227 16:33:10.533883 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/bf33c958-d345-4a0b-a2d8-7c8aedfb5cf3-kolla-config\") pod \"bf33c958-d345-4a0b-a2d8-7c8aedfb5cf3\" (UID: \"bf33c958-d345-4a0b-a2d8-7c8aedfb5cf3\") " Feb 27 16:33:10 crc kubenswrapper[4830]: I0227 16:33:10.533974 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pdmdk\" (UniqueName: \"kubernetes.io/projected/bf33c958-d345-4a0b-a2d8-7c8aedfb5cf3-kube-api-access-pdmdk\") pod \"bf33c958-d345-4a0b-a2d8-7c8aedfb5cf3\" (UID: \"bf33c958-d345-4a0b-a2d8-7c8aedfb5cf3\") " Feb 27 16:33:10 crc kubenswrapper[4830]: I0227 16:33:10.534406 4830 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/aa5b7bdd-50bb-4123-a32a-0c7e97035a3f-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:10 crc kubenswrapper[4830]: I0227 16:33:10.534431 4830 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/aa5b7bdd-50bb-4123-a32a-0c7e97035a3f-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:10 crc kubenswrapper[4830]: I0227 16:33:10.534444 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jb7f9\" (UniqueName: \"kubernetes.io/projected/aa5b7bdd-50bb-4123-a32a-0c7e97035a3f-kube-api-access-jb7f9\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:10 crc kubenswrapper[4830]: I0227 16:33:10.534457 4830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/aa5b7bdd-50bb-4123-a32a-0c7e97035a3f-config-data\") on node \"crc\" DevicePath \"\"" 
Feb 27 16:33:10 crc kubenswrapper[4830]: I0227 16:33:10.534469 4830 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/aa5b7bdd-50bb-4123-a32a-0c7e97035a3f-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:10 crc kubenswrapper[4830]: I0227 16:33:10.534492 4830 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Feb 27 16:33:10 crc kubenswrapper[4830]: I0227 16:33:10.534504 4830 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/aa5b7bdd-50bb-4123-a32a-0c7e97035a3f-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:10 crc kubenswrapper[4830]: I0227 16:33:10.534517 4830 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/aa5b7bdd-50bb-4123-a32a-0c7e97035a3f-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:10 crc kubenswrapper[4830]: I0227 16:33:10.534528 4830 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/aa5b7bdd-50bb-4123-a32a-0c7e97035a3f-pod-info\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:10 crc kubenswrapper[4830]: I0227 16:33:10.534538 4830 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/aa5b7bdd-50bb-4123-a32a-0c7e97035a3f-server-conf\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:10 crc kubenswrapper[4830]: I0227 16:33:10.535199 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf33c958-d345-4a0b-a2d8-7c8aedfb5cf3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bf33c958-d345-4a0b-a2d8-7c8aedfb5cf3" (UID: "bf33c958-d345-4a0b-a2d8-7c8aedfb5cf3"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:33:10 crc kubenswrapper[4830]: I0227 16:33:10.535192 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf33c958-d345-4a0b-a2d8-7c8aedfb5cf3-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "bf33c958-d345-4a0b-a2d8-7c8aedfb5cf3" (UID: "bf33c958-d345-4a0b-a2d8-7c8aedfb5cf3"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:33:10 crc kubenswrapper[4830]: I0227 16:33:10.535817 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf33c958-d345-4a0b-a2d8-7c8aedfb5cf3-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "bf33c958-d345-4a0b-a2d8-7c8aedfb5cf3" (UID: "bf33c958-d345-4a0b-a2d8-7c8aedfb5cf3"). InnerVolumeSpecName "config-data-default". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:33:10 crc kubenswrapper[4830]: I0227 16:33:10.537670 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bf33c958-d345-4a0b-a2d8-7c8aedfb5cf3-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "bf33c958-d345-4a0b-a2d8-7c8aedfb5cf3" (UID: "bf33c958-d345-4a0b-a2d8-7c8aedfb5cf3"). InnerVolumeSpecName "config-data-generated". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 16:33:10 crc kubenswrapper[4830]: I0227 16:33:10.540427 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf33c958-d345-4a0b-a2d8-7c8aedfb5cf3-kube-api-access-pdmdk" (OuterVolumeSpecName: "kube-api-access-pdmdk") pod "bf33c958-d345-4a0b-a2d8-7c8aedfb5cf3" (UID: "bf33c958-d345-4a0b-a2d8-7c8aedfb5cf3"). InnerVolumeSpecName "kube-api-access-pdmdk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:33:10 crc kubenswrapper[4830]: I0227 16:33:10.542715 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "mysql-db") pod "bf33c958-d345-4a0b-a2d8-7c8aedfb5cf3" (UID: "bf33c958-d345-4a0b-a2d8-7c8aedfb5cf3"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 27 16:33:10 crc kubenswrapper[4830]: I0227 16:33:10.556716 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa5b7bdd-50bb-4123-a32a-0c7e97035a3f-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "aa5b7bdd-50bb-4123-a32a-0c7e97035a3f" (UID: "aa5b7bdd-50bb-4123-a32a-0c7e97035a3f"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:33:10 crc kubenswrapper[4830]: I0227 16:33:10.556716 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf33c958-d345-4a0b-a2d8-7c8aedfb5cf3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bf33c958-d345-4a0b-a2d8-7c8aedfb5cf3" (UID: "bf33c958-d345-4a0b-a2d8-7c8aedfb5cf3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:33:10 crc kubenswrapper[4830]: I0227 16:33:10.566597 4830 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Feb 27 16:33:10 crc kubenswrapper[4830]: I0227 16:33:10.581396 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf33c958-d345-4a0b-a2d8-7c8aedfb5cf3-galera-tls-certs" (OuterVolumeSpecName: "galera-tls-certs") pod "bf33c958-d345-4a0b-a2d8-7c8aedfb5cf3" (UID: "bf33c958-d345-4a0b-a2d8-7c8aedfb5cf3"). InnerVolumeSpecName "galera-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:33:10 crc kubenswrapper[4830]: I0227 16:33:10.636221 4830 reconciler_common.go:293] "Volume detached for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf33c958-d345-4a0b-a2d8-7c8aedfb5cf3-galera-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:10 crc kubenswrapper[4830]: I0227 16:33:10.636253 4830 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/bf33c958-d345-4a0b-a2d8-7c8aedfb5cf3-config-data-generated\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:10 crc kubenswrapper[4830]: I0227 16:33:10.636263 4830 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bf33c958-d345-4a0b-a2d8-7c8aedfb5cf3-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:10 crc kubenswrapper[4830]: I0227 16:33:10.636271 4830 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/aa5b7bdd-50bb-4123-a32a-0c7e97035a3f-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:10 crc kubenswrapper[4830]: I0227 16:33:10.636281 4830 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/bf33c958-d345-4a0b-a2d8-7c8aedfb5cf3-config-data-default\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:10 crc kubenswrapper[4830]: I0227 16:33:10.636290 4830 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/bf33c958-d345-4a0b-a2d8-7c8aedfb5cf3-kolla-config\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:10 crc kubenswrapper[4830]: I0227 16:33:10.636299 4830 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:10 crc kubenswrapper[4830]: I0227 16:33:10.636307 4830 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pdmdk\" (UniqueName: \"kubernetes.io/projected/bf33c958-d345-4a0b-a2d8-7c8aedfb5cf3-kube-api-access-pdmdk\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:10 crc kubenswrapper[4830]: I0227 16:33:10.636347 4830 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Feb 27 16:33:10 crc kubenswrapper[4830]: I0227 16:33:10.636356 4830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf33c958-d345-4a0b-a2d8-7c8aedfb5cf3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:10 crc kubenswrapper[4830]: I0227 16:33:10.650963 4830 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Feb 27 16:33:10 crc kubenswrapper[4830]: I0227 16:33:10.738184 4830 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:10 crc kubenswrapper[4830]: I0227 16:33:10.772563 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09849d6c-7457-4130-9074-73154d22af1f" path="/var/lib/kubelet/pods/09849d6c-7457-4130-9074-73154d22af1f/volumes" Feb 27 16:33:10 crc kubenswrapper[4830]: I0227 16:33:10.773101 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22232c9c-ecf7-443e-834f-ad39b37735b2" path="/var/lib/kubelet/pods/22232c9c-ecf7-443e-834f-ad39b37735b2/volumes" Feb 27 16:33:10 crc kubenswrapper[4830]: I0227 16:33:10.773599 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3bf3e284-86ae-43b5-9259-6e9e34164de2" path="/var/lib/kubelet/pods/3bf3e284-86ae-43b5-9259-6e9e34164de2/volumes" Feb 27 16:33:10 crc 
kubenswrapper[4830]: I0227 16:33:10.774452 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41fafe33-b43b-4dcb-9edd-b365d0749e10" path="/var/lib/kubelet/pods/41fafe33-b43b-4dcb-9edd-b365d0749e10/volumes" Feb 27 16:33:10 crc kubenswrapper[4830]: I0227 16:33:10.774856 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69771028-c356-4cfb-9f0b-30f67d320657" path="/var/lib/kubelet/pods/69771028-c356-4cfb-9f0b-30f67d320657/volumes" Feb 27 16:33:10 crc kubenswrapper[4830]: I0227 16:33:10.775294 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="73fa27e0-b59d-44b0-8648-7e696f71cd61" path="/var/lib/kubelet/pods/73fa27e0-b59d-44b0-8648-7e696f71cd61/volumes" Feb 27 16:33:10 crc kubenswrapper[4830]: I0227 16:33:10.776121 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c017daa-cb8f-4629-80e6-a671a8455149" path="/var/lib/kubelet/pods/7c017daa-cb8f-4629-80e6-a671a8455149/volumes" Feb 27 16:33:10 crc kubenswrapper[4830]: I0227 16:33:10.777090 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91362817-2bc3-48d8-a4ae-8ba5cb8f2b4c" path="/var/lib/kubelet/pods/91362817-2bc3-48d8-a4ae-8ba5cb8f2b4c/volumes" Feb 27 16:33:10 crc kubenswrapper[4830]: I0227 16:33:10.777785 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a234743b-8983-4a60-bbb4-59ad823b83e2" path="/var/lib/kubelet/pods/a234743b-8983-4a60-bbb4-59ad823b83e2/volumes" Feb 27 16:33:10 crc kubenswrapper[4830]: I0227 16:33:10.779120 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aef23409-e12b-4ef3-a968-f666e5a127ae" path="/var/lib/kubelet/pods/aef23409-e12b-4ef3-a968-f666e5a127ae/volumes" Feb 27 16:33:10 crc kubenswrapper[4830]: I0227 16:33:10.779607 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b2fe2ad2-a0de-49aa-95fd-ef5f15032676" path="/var/lib/kubelet/pods/b2fe2ad2-a0de-49aa-95fd-ef5f15032676/volumes" Feb 27 16:33:10 crc 
kubenswrapper[4830]: I0227 16:33:10.780721 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="baefaedf-2591-42f2-a383-5c92ae714ab5" path="/var/lib/kubelet/pods/baefaedf-2591-42f2-a383-5c92ae714ab5/volumes" Feb 27 16:33:10 crc kubenswrapper[4830]: I0227 16:33:10.781972 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf57a5ff-eb3d-4f4b-8ac0-0aac8013fbbf" path="/var/lib/kubelet/pods/bf57a5ff-eb3d-4f4b-8ac0-0aac8013fbbf/volumes" Feb 27 16:33:10 crc kubenswrapper[4830]: I0227 16:33:10.785541 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8d4cd44-9972-445e-bac3-63441b6fa4cc" path="/var/lib/kubelet/pods/d8d4cd44-9972-445e-bac3-63441b6fa4cc/volumes" Feb 27 16:33:10 crc kubenswrapper[4830]: I0227 16:33:10.786863 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb3cdab6-15fa-40e1-a187-e277086227da" path="/var/lib/kubelet/pods/eb3cdab6-15fa-40e1-a187-e277086227da/volumes" Feb 27 16:33:10 crc kubenswrapper[4830]: I0227 16:33:10.787863 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4280aaf-817d-41e1-9867-715359ae322e" path="/var/lib/kubelet/pods/f4280aaf-817d-41e1-9867-715359ae322e/volumes" Feb 27 16:33:10 crc kubenswrapper[4830]: I0227 16:33:10.789590 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4baf4d8-24c9-4aa8-b72e-9d6d9cdd5f32" path="/var/lib/kubelet/pods/f4baf4d8-24c9-4aa8-b72e-9d6d9cdd5f32/volumes" Feb 27 16:33:10 crc kubenswrapper[4830]: I0227 16:33:10.863154 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 27 16:33:10 crc kubenswrapper[4830]: I0227 16:33:10.941386 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/47514135-95a6-4b77-815a-ebf23a3cab82-rabbitmq-tls\") pod \"47514135-95a6-4b77-815a-ebf23a3cab82\" (UID: \"47514135-95a6-4b77-815a-ebf23a3cab82\") " Feb 27 16:33:10 crc kubenswrapper[4830]: I0227 16:33:10.941451 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/47514135-95a6-4b77-815a-ebf23a3cab82-config-data\") pod \"47514135-95a6-4b77-815a-ebf23a3cab82\" (UID: \"47514135-95a6-4b77-815a-ebf23a3cab82\") " Feb 27 16:33:10 crc kubenswrapper[4830]: I0227 16:33:10.941475 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/47514135-95a6-4b77-815a-ebf23a3cab82-erlang-cookie-secret\") pod \"47514135-95a6-4b77-815a-ebf23a3cab82\" (UID: \"47514135-95a6-4b77-815a-ebf23a3cab82\") " Feb 27 16:33:10 crc kubenswrapper[4830]: I0227 16:33:10.941494 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kc6fh\" (UniqueName: \"kubernetes.io/projected/47514135-95a6-4b77-815a-ebf23a3cab82-kube-api-access-kc6fh\") pod \"47514135-95a6-4b77-815a-ebf23a3cab82\" (UID: \"47514135-95a6-4b77-815a-ebf23a3cab82\") " Feb 27 16:33:10 crc kubenswrapper[4830]: I0227 16:33:10.941514 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/47514135-95a6-4b77-815a-ebf23a3cab82-server-conf\") pod \"47514135-95a6-4b77-815a-ebf23a3cab82\" (UID: \"47514135-95a6-4b77-815a-ebf23a3cab82\") " Feb 27 16:33:10 crc kubenswrapper[4830]: I0227 16:33:10.941587 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/47514135-95a6-4b77-815a-ebf23a3cab82-rabbitmq-erlang-cookie\") pod \"47514135-95a6-4b77-815a-ebf23a3cab82\" (UID: \"47514135-95a6-4b77-815a-ebf23a3cab82\") " Feb 27 16:33:10 crc kubenswrapper[4830]: I0227 16:33:10.942410 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"47514135-95a6-4b77-815a-ebf23a3cab82\" (UID: \"47514135-95a6-4b77-815a-ebf23a3cab82\") " Feb 27 16:33:10 crc kubenswrapper[4830]: I0227 16:33:10.943268 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/47514135-95a6-4b77-815a-ebf23a3cab82-rabbitmq-confd\") pod \"47514135-95a6-4b77-815a-ebf23a3cab82\" (UID: \"47514135-95a6-4b77-815a-ebf23a3cab82\") " Feb 27 16:33:10 crc kubenswrapper[4830]: I0227 16:33:10.943082 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/47514135-95a6-4b77-815a-ebf23a3cab82-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "47514135-95a6-4b77-815a-ebf23a3cab82" (UID: "47514135-95a6-4b77-815a-ebf23a3cab82"). InnerVolumeSpecName "rabbitmq-erlang-cookie". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 16:33:10 crc kubenswrapper[4830]: I0227 16:33:10.943500 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/47514135-95a6-4b77-815a-ebf23a3cab82-plugins-conf\") pod \"47514135-95a6-4b77-815a-ebf23a3cab82\" (UID: \"47514135-95a6-4b77-815a-ebf23a3cab82\") " Feb 27 16:33:10 crc kubenswrapper[4830]: I0227 16:33:10.943527 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/47514135-95a6-4b77-815a-ebf23a3cab82-rabbitmq-plugins\") pod \"47514135-95a6-4b77-815a-ebf23a3cab82\" (UID: \"47514135-95a6-4b77-815a-ebf23a3cab82\") " Feb 27 16:33:10 crc kubenswrapper[4830]: I0227 16:33:10.943562 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/47514135-95a6-4b77-815a-ebf23a3cab82-pod-info\") pod \"47514135-95a6-4b77-815a-ebf23a3cab82\" (UID: \"47514135-95a6-4b77-815a-ebf23a3cab82\") " Feb 27 16:33:10 crc kubenswrapper[4830]: I0227 16:33:10.943917 4830 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/47514135-95a6-4b77-815a-ebf23a3cab82-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:10 crc kubenswrapper[4830]: I0227 16:33:10.945426 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/47514135-95a6-4b77-815a-ebf23a3cab82-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "47514135-95a6-4b77-815a-ebf23a3cab82" (UID: "47514135-95a6-4b77-815a-ebf23a3cab82"). InnerVolumeSpecName "rabbitmq-plugins". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 16:33:10 crc kubenswrapper[4830]: I0227 16:33:10.945524 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47514135-95a6-4b77-815a-ebf23a3cab82-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "47514135-95a6-4b77-815a-ebf23a3cab82" (UID: "47514135-95a6-4b77-815a-ebf23a3cab82"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:33:10 crc kubenswrapper[4830]: I0227 16:33:10.945658 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47514135-95a6-4b77-815a-ebf23a3cab82-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "47514135-95a6-4b77-815a-ebf23a3cab82" (UID: "47514135-95a6-4b77-815a-ebf23a3cab82"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:33:10 crc kubenswrapper[4830]: I0227 16:33:10.947591 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47514135-95a6-4b77-815a-ebf23a3cab82-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "47514135-95a6-4b77-815a-ebf23a3cab82" (UID: "47514135-95a6-4b77-815a-ebf23a3cab82"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:33:10 crc kubenswrapper[4830]: I0227 16:33:10.947813 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47514135-95a6-4b77-815a-ebf23a3cab82-kube-api-access-kc6fh" (OuterVolumeSpecName: "kube-api-access-kc6fh") pod "47514135-95a6-4b77-815a-ebf23a3cab82" (UID: "47514135-95a6-4b77-815a-ebf23a3cab82"). InnerVolumeSpecName "kube-api-access-kc6fh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:33:10 crc kubenswrapper[4830]: I0227 16:33:10.947895 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "persistence") pod "47514135-95a6-4b77-815a-ebf23a3cab82" (UID: "47514135-95a6-4b77-815a-ebf23a3cab82"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 27 16:33:10 crc kubenswrapper[4830]: I0227 16:33:10.948022 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/47514135-95a6-4b77-815a-ebf23a3cab82-pod-info" (OuterVolumeSpecName: "pod-info") pod "47514135-95a6-4b77-815a-ebf23a3cab82" (UID: "47514135-95a6-4b77-815a-ebf23a3cab82"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 27 16:33:10 crc kubenswrapper[4830]: I0227 16:33:10.965805 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47514135-95a6-4b77-815a-ebf23a3cab82-config-data" (OuterVolumeSpecName: "config-data") pod "47514135-95a6-4b77-815a-ebf23a3cab82" (UID: "47514135-95a6-4b77-815a-ebf23a3cab82"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:33:10 crc kubenswrapper[4830]: I0227 16:33:10.975122 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47514135-95a6-4b77-815a-ebf23a3cab82-server-conf" (OuterVolumeSpecName: "server-conf") pod "47514135-95a6-4b77-815a-ebf23a3cab82" (UID: "47514135-95a6-4b77-815a-ebf23a3cab82"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:33:11 crc kubenswrapper[4830]: I0227 16:33:11.019053 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47514135-95a6-4b77-815a-ebf23a3cab82-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "47514135-95a6-4b77-815a-ebf23a3cab82" (UID: "47514135-95a6-4b77-815a-ebf23a3cab82"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:33:11 crc kubenswrapper[4830]: I0227 16:33:11.044775 4830 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Feb 27 16:33:11 crc kubenswrapper[4830]: I0227 16:33:11.044806 4830 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/47514135-95a6-4b77-815a-ebf23a3cab82-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:11 crc kubenswrapper[4830]: I0227 16:33:11.044817 4830 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/47514135-95a6-4b77-815a-ebf23a3cab82-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:11 crc kubenswrapper[4830]: I0227 16:33:11.044827 4830 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/47514135-95a6-4b77-815a-ebf23a3cab82-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:11 crc kubenswrapper[4830]: I0227 16:33:11.044836 4830 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/47514135-95a6-4b77-815a-ebf23a3cab82-pod-info\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:11 crc kubenswrapper[4830]: I0227 16:33:11.044844 4830 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/47514135-95a6-4b77-815a-ebf23a3cab82-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:11 crc kubenswrapper[4830]: I0227 16:33:11.044851 4830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/47514135-95a6-4b77-815a-ebf23a3cab82-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:11 crc kubenswrapper[4830]: I0227 16:33:11.044861 4830 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/47514135-95a6-4b77-815a-ebf23a3cab82-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:11 crc kubenswrapper[4830]: I0227 16:33:11.044872 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kc6fh\" (UniqueName: \"kubernetes.io/projected/47514135-95a6-4b77-815a-ebf23a3cab82-kube-api-access-kc6fh\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:11 crc kubenswrapper[4830]: I0227 16:33:11.044881 4830 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/47514135-95a6-4b77-815a-ebf23a3cab82-server-conf\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:11 crc kubenswrapper[4830]: I0227 16:33:11.058275 4830 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Feb 27 16:33:11 crc kubenswrapper[4830]: E0227 16:33:11.083884 4830 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0177eede3f4945d97bcd0d90fed75c1aa58d1276a7fd71e80b0683515562f9b1" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 27 16:33:11 crc kubenswrapper[4830]: E0227 16:33:11.086227 4830 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot 
register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0177eede3f4945d97bcd0d90fed75c1aa58d1276a7fd71e80b0683515562f9b1" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 27 16:33:11 crc kubenswrapper[4830]: E0227 16:33:11.088250 4830 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0177eede3f4945d97bcd0d90fed75c1aa58d1276a7fd71e80b0683515562f9b1" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 27 16:33:11 crc kubenswrapper[4830]: E0227 16:33:11.088351 4830 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell1-conductor-0" podUID="a989aa76-9246-46b2-9f1e-7900cfecedc2" containerName="nova-cell1-conductor-conductor" Feb 27 16:33:11 crc kubenswrapper[4830]: I0227 16:33:11.117019 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"bf33c958-d345-4a0b-a2d8-7c8aedfb5cf3","Type":"ContainerDied","Data":"cfd42446c7904e4ee2b3cc8caf83bb44f68fa23e91c0df8dccd39789d5275b09"} Feb 27 16:33:11 crc kubenswrapper[4830]: I0227 16:33:11.117066 4830 scope.go:117] "RemoveContainer" containerID="68dcbd84b2ee99bb92f47d75adccd5e677bcf1de6646eeea5b827c8e802fad81" Feb 27 16:33:11 crc kubenswrapper[4830]: I0227 16:33:11.117153 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Feb 27 16:33:11 crc kubenswrapper[4830]: I0227 16:33:11.127628 4830 generic.go:334] "Generic (PLEG): container finished" podID="28316ca0-eb95-47b0-bc7e-d31591facdc5" containerID="0222fc9c68ebb7ebbcbccfa2809183acfbfef310f1d1faa28bd88a72fb86cf67" exitCode=0 Feb 27 16:33:11 crc kubenswrapper[4830]: I0227 16:33:11.127900 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6b747d769f-z82kl" event={"ID":"28316ca0-eb95-47b0-bc7e-d31591facdc5","Type":"ContainerDied","Data":"0222fc9c68ebb7ebbcbccfa2809183acfbfef310f1d1faa28bd88a72fb86cf67"} Feb 27 16:33:11 crc kubenswrapper[4830]: I0227 16:33:11.130865 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"aa5b7bdd-50bb-4123-a32a-0c7e97035a3f","Type":"ContainerDied","Data":"04a0e9026bdd37ee0f8f5e146fe81b31fad50f2da7639fc5b02226cffac84e09"} Feb 27 16:33:11 crc kubenswrapper[4830]: I0227 16:33:11.131170 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 27 16:33:11 crc kubenswrapper[4830]: I0227 16:33:11.134781 4830 generic.go:334] "Generic (PLEG): container finished" podID="47514135-95a6-4b77-815a-ebf23a3cab82" containerID="bfe6a195e3ce6d71c1ea474fd65d55d683efc1b57f94f1cbfda130d8058970ed" exitCode=0 Feb 27 16:33:11 crc kubenswrapper[4830]: I0227 16:33:11.134864 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"47514135-95a6-4b77-815a-ebf23a3cab82","Type":"ContainerDied","Data":"bfe6a195e3ce6d71c1ea474fd65d55d683efc1b57f94f1cbfda130d8058970ed"} Feb 27 16:33:11 crc kubenswrapper[4830]: I0227 16:33:11.134893 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"47514135-95a6-4b77-815a-ebf23a3cab82","Type":"ContainerDied","Data":"bd95fbc21262734a4243970c4ca8c0c8132b401d0826f72dac04a87fe9febbf4"} Feb 27 16:33:11 crc kubenswrapper[4830]: I0227 16:33:11.134996 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 27 16:33:11 crc kubenswrapper[4830]: I0227 16:33:11.146190 4830 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:11 crc kubenswrapper[4830]: I0227 16:33:11.151543 4830 scope.go:117] "RemoveContainer" containerID="1b96ec56ecc45649c019ca46229cb367a2a6fcf878e737c27d2446d8365254f8" Feb 27 16:33:11 crc kubenswrapper[4830]: I0227 16:33:11.186661 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-galera-0"] Feb 27 16:33:11 crc kubenswrapper[4830]: I0227 16:33:11.195316 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstack-galera-0"] Feb 27 16:33:11 crc kubenswrapper[4830]: I0227 16:33:11.206282 4830 scope.go:117] "RemoveContainer" containerID="60b83b906afc06b23e5e1362e3117ceeff1474cd84090478f13efba3e31b7cf5" Feb 27 16:33:11 crc kubenswrapper[4830]: I0227 16:33:11.213100 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 27 16:33:11 crc kubenswrapper[4830]: I0227 16:33:11.217666 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 27 16:33:11 crc kubenswrapper[4830]: I0227 16:33:11.227687 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 27 16:33:11 crc kubenswrapper[4830]: I0227 16:33:11.232386 4830 scope.go:117] "RemoveContainer" containerID="aea522c2ecab41c50d2a7430cd094093e90f5bf0a044bc4b659d102558a7db55" Feb 27 16:33:11 crc kubenswrapper[4830]: I0227 16:33:11.241048 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 27 16:33:11 crc kubenswrapper[4830]: I0227 16:33:11.318588 4830 scope.go:117] "RemoveContainer" containerID="bfe6a195e3ce6d71c1ea474fd65d55d683efc1b57f94f1cbfda130d8058970ed" Feb 27 16:33:11 crc kubenswrapper[4830]: 
I0227 16:33:11.347968 4830 scope.go:117] "RemoveContainer" containerID="5a4ec36b1a76d0a19cb17b92fc8ea7c7d1d244acdec968ae755d558d3eadddc7" Feb 27 16:33:11 crc kubenswrapper[4830]: I0227 16:33:11.366666 4830 scope.go:117] "RemoveContainer" containerID="bfe6a195e3ce6d71c1ea474fd65d55d683efc1b57f94f1cbfda130d8058970ed" Feb 27 16:33:11 crc kubenswrapper[4830]: E0227 16:33:11.367264 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bfe6a195e3ce6d71c1ea474fd65d55d683efc1b57f94f1cbfda130d8058970ed\": container with ID starting with bfe6a195e3ce6d71c1ea474fd65d55d683efc1b57f94f1cbfda130d8058970ed not found: ID does not exist" containerID="bfe6a195e3ce6d71c1ea474fd65d55d683efc1b57f94f1cbfda130d8058970ed" Feb 27 16:33:11 crc kubenswrapper[4830]: I0227 16:33:11.367370 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bfe6a195e3ce6d71c1ea474fd65d55d683efc1b57f94f1cbfda130d8058970ed"} err="failed to get container status \"bfe6a195e3ce6d71c1ea474fd65d55d683efc1b57f94f1cbfda130d8058970ed\": rpc error: code = NotFound desc = could not find container \"bfe6a195e3ce6d71c1ea474fd65d55d683efc1b57f94f1cbfda130d8058970ed\": container with ID starting with bfe6a195e3ce6d71c1ea474fd65d55d683efc1b57f94f1cbfda130d8058970ed not found: ID does not exist" Feb 27 16:33:11 crc kubenswrapper[4830]: I0227 16:33:11.367442 4830 scope.go:117] "RemoveContainer" containerID="5a4ec36b1a76d0a19cb17b92fc8ea7c7d1d244acdec968ae755d558d3eadddc7" Feb 27 16:33:11 crc kubenswrapper[4830]: E0227 16:33:11.367839 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a4ec36b1a76d0a19cb17b92fc8ea7c7d1d244acdec968ae755d558d3eadddc7\": container with ID starting with 5a4ec36b1a76d0a19cb17b92fc8ea7c7d1d244acdec968ae755d558d3eadddc7 not found: ID does not exist" 
containerID="5a4ec36b1a76d0a19cb17b92fc8ea7c7d1d244acdec968ae755d558d3eadddc7" Feb 27 16:33:11 crc kubenswrapper[4830]: I0227 16:33:11.367877 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a4ec36b1a76d0a19cb17b92fc8ea7c7d1d244acdec968ae755d558d3eadddc7"} err="failed to get container status \"5a4ec36b1a76d0a19cb17b92fc8ea7c7d1d244acdec968ae755d558d3eadddc7\": rpc error: code = NotFound desc = could not find container \"5a4ec36b1a76d0a19cb17b92fc8ea7c7d1d244acdec968ae755d558d3eadddc7\": container with ID starting with 5a4ec36b1a76d0a19cb17b92fc8ea7c7d1d244acdec968ae755d558d3eadddc7 not found: ID does not exist" Feb 27 16:33:11 crc kubenswrapper[4830]: I0227 16:33:11.480368 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-6b747d769f-z82kl" Feb 27 16:33:11 crc kubenswrapper[4830]: I0227 16:33:11.553874 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28316ca0-eb95-47b0-bc7e-d31591facdc5-config-data\") pod \"28316ca0-eb95-47b0-bc7e-d31591facdc5\" (UID: \"28316ca0-eb95-47b0-bc7e-d31591facdc5\") " Feb 27 16:33:11 crc kubenswrapper[4830]: I0227 16:33:11.553929 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4zjxw\" (UniqueName: \"kubernetes.io/projected/28316ca0-eb95-47b0-bc7e-d31591facdc5-kube-api-access-4zjxw\") pod \"28316ca0-eb95-47b0-bc7e-d31591facdc5\" (UID: \"28316ca0-eb95-47b0-bc7e-d31591facdc5\") " Feb 27 16:33:11 crc kubenswrapper[4830]: I0227 16:33:11.554087 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/28316ca0-eb95-47b0-bc7e-d31591facdc5-credential-keys\") pod \"28316ca0-eb95-47b0-bc7e-d31591facdc5\" (UID: \"28316ca0-eb95-47b0-bc7e-d31591facdc5\") " Feb 27 16:33:11 crc kubenswrapper[4830]: I0227 16:33:11.554135 4830 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/28316ca0-eb95-47b0-bc7e-d31591facdc5-public-tls-certs\") pod \"28316ca0-eb95-47b0-bc7e-d31591facdc5\" (UID: \"28316ca0-eb95-47b0-bc7e-d31591facdc5\") " Feb 27 16:33:11 crc kubenswrapper[4830]: I0227 16:33:11.554210 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/28316ca0-eb95-47b0-bc7e-d31591facdc5-internal-tls-certs\") pod \"28316ca0-eb95-47b0-bc7e-d31591facdc5\" (UID: \"28316ca0-eb95-47b0-bc7e-d31591facdc5\") " Feb 27 16:33:11 crc kubenswrapper[4830]: I0227 16:33:11.554258 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/28316ca0-eb95-47b0-bc7e-d31591facdc5-fernet-keys\") pod \"28316ca0-eb95-47b0-bc7e-d31591facdc5\" (UID: \"28316ca0-eb95-47b0-bc7e-d31591facdc5\") " Feb 27 16:33:11 crc kubenswrapper[4830]: I0227 16:33:11.554353 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28316ca0-eb95-47b0-bc7e-d31591facdc5-combined-ca-bundle\") pod \"28316ca0-eb95-47b0-bc7e-d31591facdc5\" (UID: \"28316ca0-eb95-47b0-bc7e-d31591facdc5\") " Feb 27 16:33:11 crc kubenswrapper[4830]: I0227 16:33:11.554382 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/28316ca0-eb95-47b0-bc7e-d31591facdc5-scripts\") pod \"28316ca0-eb95-47b0-bc7e-d31591facdc5\" (UID: \"28316ca0-eb95-47b0-bc7e-d31591facdc5\") " Feb 27 16:33:11 crc kubenswrapper[4830]: I0227 16:33:11.561074 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28316ca0-eb95-47b0-bc7e-d31591facdc5-kube-api-access-4zjxw" (OuterVolumeSpecName: "kube-api-access-4zjxw") pod 
"28316ca0-eb95-47b0-bc7e-d31591facdc5" (UID: "28316ca0-eb95-47b0-bc7e-d31591facdc5"). InnerVolumeSpecName "kube-api-access-4zjxw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:33:11 crc kubenswrapper[4830]: I0227 16:33:11.563068 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28316ca0-eb95-47b0-bc7e-d31591facdc5-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "28316ca0-eb95-47b0-bc7e-d31591facdc5" (UID: "28316ca0-eb95-47b0-bc7e-d31591facdc5"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:33:11 crc kubenswrapper[4830]: I0227 16:33:11.566068 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28316ca0-eb95-47b0-bc7e-d31591facdc5-scripts" (OuterVolumeSpecName: "scripts") pod "28316ca0-eb95-47b0-bc7e-d31591facdc5" (UID: "28316ca0-eb95-47b0-bc7e-d31591facdc5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:33:11 crc kubenswrapper[4830]: I0227 16:33:11.566815 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28316ca0-eb95-47b0-bc7e-d31591facdc5-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "28316ca0-eb95-47b0-bc7e-d31591facdc5" (UID: "28316ca0-eb95-47b0-bc7e-d31591facdc5"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:33:11 crc kubenswrapper[4830]: I0227 16:33:11.578472 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28316ca0-eb95-47b0-bc7e-d31591facdc5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "28316ca0-eb95-47b0-bc7e-d31591facdc5" (UID: "28316ca0-eb95-47b0-bc7e-d31591facdc5"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 16:33:11 crc kubenswrapper[4830]: I0227 16:33:11.585318 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28316ca0-eb95-47b0-bc7e-d31591facdc5-config-data" (OuterVolumeSpecName: "config-data") pod "28316ca0-eb95-47b0-bc7e-d31591facdc5" (UID: "28316ca0-eb95-47b0-bc7e-d31591facdc5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 16:33:11 crc kubenswrapper[4830]: I0227 16:33:11.600179 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28316ca0-eb95-47b0-bc7e-d31591facdc5-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "28316ca0-eb95-47b0-bc7e-d31591facdc5" (UID: "28316ca0-eb95-47b0-bc7e-d31591facdc5"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 16:33:11 crc kubenswrapper[4830]: I0227 16:33:11.604356 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28316ca0-eb95-47b0-bc7e-d31591facdc5-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "28316ca0-eb95-47b0-bc7e-d31591facdc5" (UID: "28316ca0-eb95-47b0-bc7e-d31591facdc5"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 16:33:11 crc kubenswrapper[4830]: I0227 16:33:11.656779 4830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28316ca0-eb95-47b0-bc7e-d31591facdc5-config-data\") on node \"crc\" DevicePath \"\""
Feb 27 16:33:11 crc kubenswrapper[4830]: I0227 16:33:11.656846 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4zjxw\" (UniqueName: \"kubernetes.io/projected/28316ca0-eb95-47b0-bc7e-d31591facdc5-kube-api-access-4zjxw\") on node \"crc\" DevicePath \"\""
Feb 27 16:33:11 crc kubenswrapper[4830]: I0227 16:33:11.656862 4830 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/28316ca0-eb95-47b0-bc7e-d31591facdc5-credential-keys\") on node \"crc\" DevicePath \"\""
Feb 27 16:33:11 crc kubenswrapper[4830]: I0227 16:33:11.656875 4830 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/28316ca0-eb95-47b0-bc7e-d31591facdc5-public-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 27 16:33:11 crc kubenswrapper[4830]: I0227 16:33:11.656887 4830 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/28316ca0-eb95-47b0-bc7e-d31591facdc5-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 27 16:33:11 crc kubenswrapper[4830]: I0227 16:33:11.656899 4830 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/28316ca0-eb95-47b0-bc7e-d31591facdc5-fernet-keys\") on node \"crc\" DevicePath \"\""
Feb 27 16:33:11 crc kubenswrapper[4830]: I0227 16:33:11.656910 4830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28316ca0-eb95-47b0-bc7e-d31591facdc5-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 27 16:33:11 crc kubenswrapper[4830]: I0227 16:33:11.656921 4830 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/28316ca0-eb95-47b0-bc7e-d31591facdc5-scripts\") on node \"crc\" DevicePath \"\""
Feb 27 16:33:12 crc kubenswrapper[4830]: I0227 16:33:12.149139 4830 generic.go:334] "Generic (PLEG): container finished" podID="a989aa76-9246-46b2-9f1e-7900cfecedc2" containerID="0177eede3f4945d97bcd0d90fed75c1aa58d1276a7fd71e80b0683515562f9b1" exitCode=0
Feb 27 16:33:12 crc kubenswrapper[4830]: I0227 16:33:12.149219 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"a989aa76-9246-46b2-9f1e-7900cfecedc2","Type":"ContainerDied","Data":"0177eede3f4945d97bcd0d90fed75c1aa58d1276a7fd71e80b0683515562f9b1"}
Feb 27 16:33:12 crc kubenswrapper[4830]: I0227 16:33:12.151553 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-6b747d769f-z82kl"
Feb 27 16:33:12 crc kubenswrapper[4830]: I0227 16:33:12.151606 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6b747d769f-z82kl" event={"ID":"28316ca0-eb95-47b0-bc7e-d31591facdc5","Type":"ContainerDied","Data":"c34db6546cf7c5a207be75e43da86cdbe0ee1689c79b1c3a34a3de47326a4399"}
Feb 27 16:33:12 crc kubenswrapper[4830]: I0227 16:33:12.151661 4830 scope.go:117] "RemoveContainer" containerID="0222fc9c68ebb7ebbcbccfa2809183acfbfef310f1d1faa28bd88a72fb86cf67"
Feb 27 16:33:12 crc kubenswrapper[4830]: I0227 16:33:12.195062 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-6b747d769f-z82kl"]
Feb 27 16:33:12 crc kubenswrapper[4830]: I0227 16:33:12.199717 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-6b747d769f-z82kl"]
Feb 27 16:33:12 crc kubenswrapper[4830]: I0227 16:33:12.316025 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0"
Feb 27 16:33:12 crc kubenswrapper[4830]: I0227 16:33:12.371111 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a989aa76-9246-46b2-9f1e-7900cfecedc2-config-data\") pod \"a989aa76-9246-46b2-9f1e-7900cfecedc2\" (UID: \"a989aa76-9246-46b2-9f1e-7900cfecedc2\") "
Feb 27 16:33:12 crc kubenswrapper[4830]: I0227 16:33:12.372150 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rcn4p\" (UniqueName: \"kubernetes.io/projected/a989aa76-9246-46b2-9f1e-7900cfecedc2-kube-api-access-rcn4p\") pod \"a989aa76-9246-46b2-9f1e-7900cfecedc2\" (UID: \"a989aa76-9246-46b2-9f1e-7900cfecedc2\") "
Feb 27 16:33:12 crc kubenswrapper[4830]: I0227 16:33:12.372401 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a989aa76-9246-46b2-9f1e-7900cfecedc2-combined-ca-bundle\") pod \"a989aa76-9246-46b2-9f1e-7900cfecedc2\" (UID: \"a989aa76-9246-46b2-9f1e-7900cfecedc2\") "
Feb 27 16:33:12 crc kubenswrapper[4830]: I0227 16:33:12.376674 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a989aa76-9246-46b2-9f1e-7900cfecedc2-kube-api-access-rcn4p" (OuterVolumeSpecName: "kube-api-access-rcn4p") pod "a989aa76-9246-46b2-9f1e-7900cfecedc2" (UID: "a989aa76-9246-46b2-9f1e-7900cfecedc2"). InnerVolumeSpecName "kube-api-access-rcn4p". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 16:33:12 crc kubenswrapper[4830]: I0227 16:33:12.406565 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a989aa76-9246-46b2-9f1e-7900cfecedc2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a989aa76-9246-46b2-9f1e-7900cfecedc2" (UID: "a989aa76-9246-46b2-9f1e-7900cfecedc2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 16:33:12 crc kubenswrapper[4830]: I0227 16:33:12.423074 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a989aa76-9246-46b2-9f1e-7900cfecedc2-config-data" (OuterVolumeSpecName: "config-data") pod "a989aa76-9246-46b2-9f1e-7900cfecedc2" (UID: "a989aa76-9246-46b2-9f1e-7900cfecedc2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 16:33:12 crc kubenswrapper[4830]: I0227 16:33:12.473692 4830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a989aa76-9246-46b2-9f1e-7900cfecedc2-config-data\") on node \"crc\" DevicePath \"\""
Feb 27 16:33:12 crc kubenswrapper[4830]: I0227 16:33:12.473737 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rcn4p\" (UniqueName: \"kubernetes.io/projected/a989aa76-9246-46b2-9f1e-7900cfecedc2-kube-api-access-rcn4p\") on node \"crc\" DevicePath \"\""
Feb 27 16:33:12 crc kubenswrapper[4830]: I0227 16:33:12.473751 4830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a989aa76-9246-46b2-9f1e-7900cfecedc2-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 27 16:33:12 crc kubenswrapper[4830]: I0227 16:33:12.772868 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28316ca0-eb95-47b0-bc7e-d31591facdc5" path="/var/lib/kubelet/pods/28316ca0-eb95-47b0-bc7e-d31591facdc5/volumes"
Feb 27 16:33:12 crc kubenswrapper[4830]: I0227 16:33:12.773583 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="47514135-95a6-4b77-815a-ebf23a3cab82" path="/var/lib/kubelet/pods/47514135-95a6-4b77-815a-ebf23a3cab82/volumes"
Feb 27 16:33:12 crc kubenswrapper[4830]: I0227 16:33:12.774292 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa5b7bdd-50bb-4123-a32a-0c7e97035a3f" path="/var/lib/kubelet/pods/aa5b7bdd-50bb-4123-a32a-0c7e97035a3f/volumes"
Feb 27 16:33:12 crc kubenswrapper[4830]: I0227 16:33:12.775359 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf33c958-d345-4a0b-a2d8-7c8aedfb5cf3" path="/var/lib/kubelet/pods/bf33c958-d345-4a0b-a2d8-7c8aedfb5cf3/volumes"
Feb 27 16:33:12 crc kubenswrapper[4830]: E0227 16:33:12.991005 4830 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6dd6bce0125fbdcdd86f5eb9074b21ac3c2ba5ef9063bcabe41f111d338472f5 is running failed: container process not found" containerID="6dd6bce0125fbdcdd86f5eb9074b21ac3c2ba5ef9063bcabe41f111d338472f5" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"]
Feb 27 16:33:12 crc kubenswrapper[4830]: E0227 16:33:12.995080 4830 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6dd6bce0125fbdcdd86f5eb9074b21ac3c2ba5ef9063bcabe41f111d338472f5 is running failed: container process not found" containerID="6dd6bce0125fbdcdd86f5eb9074b21ac3c2ba5ef9063bcabe41f111d338472f5" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"]
Feb 27 16:33:12 crc kubenswrapper[4830]: E0227 16:33:12.995317 4830 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4a66491103ecf784427f1721d3379810efd654720c7344f0ebbf5e10bc7a1b3d" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"]
Feb 27 16:33:12 crc kubenswrapper[4830]: E0227 16:33:12.996161 4830 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6dd6bce0125fbdcdd86f5eb9074b21ac3c2ba5ef9063bcabe41f111d338472f5 is running failed: container process not found" containerID="6dd6bce0125fbdcdd86f5eb9074b21ac3c2ba5ef9063bcabe41f111d338472f5" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"]
Feb 27 16:33:12 crc kubenswrapper[4830]: E0227 16:33:12.996246 4830 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6dd6bce0125fbdcdd86f5eb9074b21ac3c2ba5ef9063bcabe41f111d338472f5 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-qt6mr" podUID="bc737ee4-d87c-4276-a6d1-6f3144879542" containerName="ovsdb-server"
Feb 27 16:33:12 crc kubenswrapper[4830]: E0227 16:33:12.998052 4830 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4a66491103ecf784427f1721d3379810efd654720c7344f0ebbf5e10bc7a1b3d" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"]
Feb 27 16:33:12 crc kubenswrapper[4830]: E0227 16:33:12.999845 4830 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4a66491103ecf784427f1721d3379810efd654720c7344f0ebbf5e10bc7a1b3d" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"]
Feb 27 16:33:12 crc kubenswrapper[4830]: E0227 16:33:12.999895 4830 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-qt6mr" podUID="bc737ee4-d87c-4276-a6d1-6f3144879542" containerName="ovs-vswitchd"
Feb 27 16:33:13 crc kubenswrapper[4830]: I0227 16:33:13.176666 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"a989aa76-9246-46b2-9f1e-7900cfecedc2","Type":"ContainerDied","Data":"0cdeaecb8f58ab83bb70e3c942e1583e6c782dcc702e86c59532ba7ea8a3d3a3"}
Feb 27 16:33:13 crc kubenswrapper[4830]: I0227 16:33:13.176759 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0"
Feb 27 16:33:13 crc kubenswrapper[4830]: I0227 16:33:13.176774 4830 scope.go:117] "RemoveContainer" containerID="0177eede3f4945d97bcd0d90fed75c1aa58d1276a7fd71e80b0683515562f9b1"
Feb 27 16:33:13 crc kubenswrapper[4830]: I0227 16:33:13.207361 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Feb 27 16:33:13 crc kubenswrapper[4830]: I0227 16:33:13.217365 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Feb 27 16:33:14 crc kubenswrapper[4830]: I0227 16:33:14.780909 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a989aa76-9246-46b2-9f1e-7900cfecedc2" path="/var/lib/kubelet/pods/a989aa76-9246-46b2-9f1e-7900cfecedc2/volumes"
Feb 27 16:33:15 crc kubenswrapper[4830]: I0227 16:33:15.762751 4830 scope.go:117] "RemoveContainer" containerID="ca4233a6fa911a1d6b959075103b585a592935c9e9a6d178d17fef41a2fea048"
Feb 27 16:33:15 crc kubenswrapper[4830]: E0227 16:33:15.763145 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25"
Feb 27 16:33:17 crc kubenswrapper[4830]: I0227 16:33:17.009072 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-8559c55d4f-z6hpf" podUID="acdbf1f3-efd7-4181-b99c-a0697c465c4b" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.172:9696/\": dial tcp 10.217.0.172:9696: connect: connection refused"
Feb 27 16:33:17 crc kubenswrapper[4830]: E0227 16:33:17.991741 4830 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6dd6bce0125fbdcdd86f5eb9074b21ac3c2ba5ef9063bcabe41f111d338472f5 is running failed: container process not found" containerID="6dd6bce0125fbdcdd86f5eb9074b21ac3c2ba5ef9063bcabe41f111d338472f5" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"]
Feb 27 16:33:17 crc kubenswrapper[4830]: E0227 16:33:17.992647 4830 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6dd6bce0125fbdcdd86f5eb9074b21ac3c2ba5ef9063bcabe41f111d338472f5 is running failed: container process not found" containerID="6dd6bce0125fbdcdd86f5eb9074b21ac3c2ba5ef9063bcabe41f111d338472f5" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"]
Feb 27 16:33:17 crc kubenswrapper[4830]: E0227 16:33:17.993400 4830 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6dd6bce0125fbdcdd86f5eb9074b21ac3c2ba5ef9063bcabe41f111d338472f5 is running failed: container process not found" containerID="6dd6bce0125fbdcdd86f5eb9074b21ac3c2ba5ef9063bcabe41f111d338472f5" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"]
Feb 27 16:33:17 crc kubenswrapper[4830]: E0227 16:33:17.993431 4830 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6dd6bce0125fbdcdd86f5eb9074b21ac3c2ba5ef9063bcabe41f111d338472f5 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-qt6mr" podUID="bc737ee4-d87c-4276-a6d1-6f3144879542" containerName="ovsdb-server"
Feb 27 16:33:17 crc kubenswrapper[4830]: E0227 16:33:17.994143 4830 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4a66491103ecf784427f1721d3379810efd654720c7344f0ebbf5e10bc7a1b3d" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"]
Feb 27 16:33:17 crc kubenswrapper[4830]: E0227 16:33:17.998091 4830 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4a66491103ecf784427f1721d3379810efd654720c7344f0ebbf5e10bc7a1b3d" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"]
Feb 27 16:33:18 crc kubenswrapper[4830]: E0227 16:33:18.000045 4830 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4a66491103ecf784427f1721d3379810efd654720c7344f0ebbf5e10bc7a1b3d" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"]
Feb 27 16:33:18 crc kubenswrapper[4830]: E0227 16:33:18.000198 4830 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-qt6mr" podUID="bc737ee4-d87c-4276-a6d1-6f3144879542" containerName="ovs-vswitchd"
Feb 27 16:33:22 crc kubenswrapper[4830]: E0227 16:33:22.991245 4830 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6dd6bce0125fbdcdd86f5eb9074b21ac3c2ba5ef9063bcabe41f111d338472f5 is running failed: container process not found" containerID="6dd6bce0125fbdcdd86f5eb9074b21ac3c2ba5ef9063bcabe41f111d338472f5" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"]
Feb 27 16:33:22 crc kubenswrapper[4830]: E0227 16:33:22.992586 4830 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6dd6bce0125fbdcdd86f5eb9074b21ac3c2ba5ef9063bcabe41f111d338472f5 is running failed: container process not found" containerID="6dd6bce0125fbdcdd86f5eb9074b21ac3c2ba5ef9063bcabe41f111d338472f5" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"]
Feb 27 16:33:22 crc kubenswrapper[4830]: E0227 16:33:22.992972 4830 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6dd6bce0125fbdcdd86f5eb9074b21ac3c2ba5ef9063bcabe41f111d338472f5 is running failed: container process not found" containerID="6dd6bce0125fbdcdd86f5eb9074b21ac3c2ba5ef9063bcabe41f111d338472f5" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"]
Feb 27 16:33:22 crc kubenswrapper[4830]: E0227 16:33:22.993031 4830 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6dd6bce0125fbdcdd86f5eb9074b21ac3c2ba5ef9063bcabe41f111d338472f5 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-qt6mr" podUID="bc737ee4-d87c-4276-a6d1-6f3144879542" containerName="ovsdb-server"
Feb 27 16:33:22 crc kubenswrapper[4830]: E0227 16:33:22.995074 4830 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4a66491103ecf784427f1721d3379810efd654720c7344f0ebbf5e10bc7a1b3d" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"]
Feb 27 16:33:22 crc kubenswrapper[4830]: E0227 16:33:22.996540 4830 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4a66491103ecf784427f1721d3379810efd654720c7344f0ebbf5e10bc7a1b3d" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"]
Feb 27 16:33:22 crc kubenswrapper[4830]: E0227 16:33:22.998551 4830 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4a66491103ecf784427f1721d3379810efd654720c7344f0ebbf5e10bc7a1b3d" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"]
Feb 27 16:33:22 crc kubenswrapper[4830]: E0227 16:33:22.998654 4830 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-qt6mr" podUID="bc737ee4-d87c-4276-a6d1-6f3144879542" containerName="ovs-vswitchd"
Feb 27 16:33:25 crc kubenswrapper[4830]: I0227 16:33:25.338316 4830 generic.go:334] "Generic (PLEG): container finished" podID="acdbf1f3-efd7-4181-b99c-a0697c465c4b" containerID="a56e16403fc2d569470e79c24225b344a16dacbbe2255d02caeb6351695ce986" exitCode=0
Feb 27 16:33:25 crc kubenswrapper[4830]: I0227 16:33:25.338718 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8559c55d4f-z6hpf" event={"ID":"acdbf1f3-efd7-4181-b99c-a0697c465c4b","Type":"ContainerDied","Data":"a56e16403fc2d569470e79c24225b344a16dacbbe2255d02caeb6351695ce986"}
Feb 27 16:33:25 crc kubenswrapper[4830]: I0227 16:33:25.566525 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-8559c55d4f-z6hpf"
Feb 27 16:33:25 crc kubenswrapper[4830]: I0227 16:33:25.643068 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/acdbf1f3-efd7-4181-b99c-a0697c465c4b-config\") pod \"acdbf1f3-efd7-4181-b99c-a0697c465c4b\" (UID: \"acdbf1f3-efd7-4181-b99c-a0697c465c4b\") "
Feb 27 16:33:25 crc kubenswrapper[4830]: I0227 16:33:25.643269 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/acdbf1f3-efd7-4181-b99c-a0697c465c4b-internal-tls-certs\") pod \"acdbf1f3-efd7-4181-b99c-a0697c465c4b\" (UID: \"acdbf1f3-efd7-4181-b99c-a0697c465c4b\") "
Feb 27 16:33:25 crc kubenswrapper[4830]: I0227 16:33:25.643307 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8dkvn\" (UniqueName: \"kubernetes.io/projected/acdbf1f3-efd7-4181-b99c-a0697c465c4b-kube-api-access-8dkvn\") pod \"acdbf1f3-efd7-4181-b99c-a0697c465c4b\" (UID: \"acdbf1f3-efd7-4181-b99c-a0697c465c4b\") "
Feb 27 16:33:25 crc kubenswrapper[4830]: I0227 16:33:25.643399 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/acdbf1f3-efd7-4181-b99c-a0697c465c4b-public-tls-certs\") pod \"acdbf1f3-efd7-4181-b99c-a0697c465c4b\" (UID: \"acdbf1f3-efd7-4181-b99c-a0697c465c4b\") "
Feb 27 16:33:25 crc kubenswrapper[4830]: I0227 16:33:25.643519 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/acdbf1f3-efd7-4181-b99c-a0697c465c4b-ovndb-tls-certs\") pod \"acdbf1f3-efd7-4181-b99c-a0697c465c4b\" (UID: \"acdbf1f3-efd7-4181-b99c-a0697c465c4b\") "
Feb 27 16:33:25 crc kubenswrapper[4830]: I0227 16:33:25.643574 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acdbf1f3-efd7-4181-b99c-a0697c465c4b-combined-ca-bundle\") pod \"acdbf1f3-efd7-4181-b99c-a0697c465c4b\" (UID: \"acdbf1f3-efd7-4181-b99c-a0697c465c4b\") "
Feb 27 16:33:25 crc kubenswrapper[4830]: I0227 16:33:25.643632 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/acdbf1f3-efd7-4181-b99c-a0697c465c4b-httpd-config\") pod \"acdbf1f3-efd7-4181-b99c-a0697c465c4b\" (UID: \"acdbf1f3-efd7-4181-b99c-a0697c465c4b\") "
Feb 27 16:33:25 crc kubenswrapper[4830]: I0227 16:33:25.649057 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/acdbf1f3-efd7-4181-b99c-a0697c465c4b-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "acdbf1f3-efd7-4181-b99c-a0697c465c4b" (UID: "acdbf1f3-efd7-4181-b99c-a0697c465c4b"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 16:33:25 crc kubenswrapper[4830]: I0227 16:33:25.649923 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/acdbf1f3-efd7-4181-b99c-a0697c465c4b-kube-api-access-8dkvn" (OuterVolumeSpecName: "kube-api-access-8dkvn") pod "acdbf1f3-efd7-4181-b99c-a0697c465c4b" (UID: "acdbf1f3-efd7-4181-b99c-a0697c465c4b"). InnerVolumeSpecName "kube-api-access-8dkvn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 16:33:25 crc kubenswrapper[4830]: I0227 16:33:25.691062 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/acdbf1f3-efd7-4181-b99c-a0697c465c4b-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "acdbf1f3-efd7-4181-b99c-a0697c465c4b" (UID: "acdbf1f3-efd7-4181-b99c-a0697c465c4b"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 16:33:25 crc kubenswrapper[4830]: I0227 16:33:25.691821 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/acdbf1f3-efd7-4181-b99c-a0697c465c4b-config" (OuterVolumeSpecName: "config") pod "acdbf1f3-efd7-4181-b99c-a0697c465c4b" (UID: "acdbf1f3-efd7-4181-b99c-a0697c465c4b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 16:33:25 crc kubenswrapper[4830]: I0227 16:33:25.697671 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/acdbf1f3-efd7-4181-b99c-a0697c465c4b-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "acdbf1f3-efd7-4181-b99c-a0697c465c4b" (UID: "acdbf1f3-efd7-4181-b99c-a0697c465c4b"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 16:33:25 crc kubenswrapper[4830]: I0227 16:33:25.710298 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/acdbf1f3-efd7-4181-b99c-a0697c465c4b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "acdbf1f3-efd7-4181-b99c-a0697c465c4b" (UID: "acdbf1f3-efd7-4181-b99c-a0697c465c4b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 16:33:25 crc kubenswrapper[4830]: I0227 16:33:25.715097 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/acdbf1f3-efd7-4181-b99c-a0697c465c4b-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "acdbf1f3-efd7-4181-b99c-a0697c465c4b" (UID: "acdbf1f3-efd7-4181-b99c-a0697c465c4b"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 16:33:25 crc kubenswrapper[4830]: I0227 16:33:25.746002 4830 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/acdbf1f3-efd7-4181-b99c-a0697c465c4b-public-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 27 16:33:25 crc kubenswrapper[4830]: I0227 16:33:25.746172 4830 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/acdbf1f3-efd7-4181-b99c-a0697c465c4b-ovndb-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 27 16:33:25 crc kubenswrapper[4830]: I0227 16:33:25.746274 4830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acdbf1f3-efd7-4181-b99c-a0697c465c4b-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 27 16:33:25 crc kubenswrapper[4830]: I0227 16:33:25.746355 4830 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/acdbf1f3-efd7-4181-b99c-a0697c465c4b-httpd-config\") on node \"crc\" DevicePath \"\""
Feb 27 16:33:25 crc kubenswrapper[4830]: I0227 16:33:25.746433 4830 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/acdbf1f3-efd7-4181-b99c-a0697c465c4b-config\") on node \"crc\" DevicePath \"\""
Feb 27 16:33:25 crc kubenswrapper[4830]: I0227 16:33:25.746517 4830 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/acdbf1f3-efd7-4181-b99c-a0697c465c4b-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 27 16:33:25 crc kubenswrapper[4830]: I0227 16:33:25.746594 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8dkvn\" (UniqueName: \"kubernetes.io/projected/acdbf1f3-efd7-4181-b99c-a0697c465c4b-kube-api-access-8dkvn\") on node \"crc\" DevicePath \"\""
Feb 27 16:33:26 crc kubenswrapper[4830]: I0227 16:33:26.358217 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8559c55d4f-z6hpf" event={"ID":"acdbf1f3-efd7-4181-b99c-a0697c465c4b","Type":"ContainerDied","Data":"5de618222396caaef75cd85687bfe44cc5a6458f007071c8e6edcbabb8998680"}
Feb 27 16:33:26 crc kubenswrapper[4830]: I0227 16:33:26.358323 4830 scope.go:117] "RemoveContainer" containerID="825cde15be9549d56742ccbdc2f57b6324396f78c69861f72b851d87071dd387"
Feb 27 16:33:26 crc kubenswrapper[4830]: I0227 16:33:26.358346 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-8559c55d4f-z6hpf"
Feb 27 16:33:26 crc kubenswrapper[4830]: I0227 16:33:26.395353 4830 scope.go:117] "RemoveContainer" containerID="a56e16403fc2d569470e79c24225b344a16dacbbe2255d02caeb6351695ce986"
Feb 27 16:33:26 crc kubenswrapper[4830]: I0227 16:33:26.422223 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-8559c55d4f-z6hpf"]
Feb 27 16:33:26 crc kubenswrapper[4830]: I0227 16:33:26.430314 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-8559c55d4f-z6hpf"]
Feb 27 16:33:26 crc kubenswrapper[4830]: I0227 16:33:26.762992 4830 scope.go:117] "RemoveContainer" containerID="ca4233a6fa911a1d6b959075103b585a592935c9e9a6d178d17fef41a2fea048"
Feb 27 16:33:26 crc kubenswrapper[4830]: E0227 16:33:26.763342 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25"
Feb 27 16:33:26 crc kubenswrapper[4830]: I0227 16:33:26.778856 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="acdbf1f3-efd7-4181-b99c-a0697c465c4b" path="/var/lib/kubelet/pods/acdbf1f3-efd7-4181-b99c-a0697c465c4b/volumes"
Feb 27 16:33:27 crc kubenswrapper[4830]: E0227 16:33:27.991494 4830 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6dd6bce0125fbdcdd86f5eb9074b21ac3c2ba5ef9063bcabe41f111d338472f5 is running failed: container process not found" containerID="6dd6bce0125fbdcdd86f5eb9074b21ac3c2ba5ef9063bcabe41f111d338472f5" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"]
Feb 27 16:33:27 crc kubenswrapper[4830]: E0227 16:33:27.992625 4830 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6dd6bce0125fbdcdd86f5eb9074b21ac3c2ba5ef9063bcabe41f111d338472f5 is running failed: container process not found" containerID="6dd6bce0125fbdcdd86f5eb9074b21ac3c2ba5ef9063bcabe41f111d338472f5" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"]
Feb 27 16:33:27 crc kubenswrapper[4830]: E0227 16:33:27.993135 4830 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6dd6bce0125fbdcdd86f5eb9074b21ac3c2ba5ef9063bcabe41f111d338472f5 is running failed: container process not found" containerID="6dd6bce0125fbdcdd86f5eb9074b21ac3c2ba5ef9063bcabe41f111d338472f5" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"]
Feb 27 16:33:27 crc kubenswrapper[4830]: E0227 16:33:27.993213 4830 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6dd6bce0125fbdcdd86f5eb9074b21ac3c2ba5ef9063bcabe41f111d338472f5 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-qt6mr" podUID="bc737ee4-d87c-4276-a6d1-6f3144879542" containerName="ovsdb-server"
Feb 27 16:33:27 crc kubenswrapper[4830]: E0227 16:33:27.993315 4830 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4a66491103ecf784427f1721d3379810efd654720c7344f0ebbf5e10bc7a1b3d" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"]
Feb 27 16:33:27 crc kubenswrapper[4830]: E0227 16:33:27.995720 4830 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4a66491103ecf784427f1721d3379810efd654720c7344f0ebbf5e10bc7a1b3d" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"]
Feb 27 16:33:27 crc kubenswrapper[4830]: E0227 16:33:27.998073 4830 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4a66491103ecf784427f1721d3379810efd654720c7344f0ebbf5e10bc7a1b3d" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"]
Feb 27 16:33:27 crc kubenswrapper[4830]: E0227 16:33:27.998224 4830 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-qt6mr" podUID="bc737ee4-d87c-4276-a6d1-6f3144879542" containerName="ovs-vswitchd"
Feb 27 16:33:32 crc kubenswrapper[4830]: I0227 16:33:32.442776 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-qt6mr_bc737ee4-d87c-4276-a6d1-6f3144879542/ovs-vswitchd/0.log"
Feb 27 16:33:32 crc kubenswrapper[4830]: I0227 16:33:32.446856 4830 generic.go:334] "Generic (PLEG): container finished" podID="bc737ee4-d87c-4276-a6d1-6f3144879542" containerID="4a66491103ecf784427f1721d3379810efd654720c7344f0ebbf5e10bc7a1b3d" exitCode=137
Feb 27 16:33:32 crc kubenswrapper[4830]: I0227 16:33:32.446911 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-qt6mr" event={"ID":"bc737ee4-d87c-4276-a6d1-6f3144879542","Type":"ContainerDied","Data":"4a66491103ecf784427f1721d3379810efd654720c7344f0ebbf5e10bc7a1b3d"}
Feb 27 16:33:32 crc kubenswrapper[4830]: E0227 16:33:32.990465 4830 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 4a66491103ecf784427f1721d3379810efd654720c7344f0ebbf5e10bc7a1b3d is running failed: container process not found" containerID="4a66491103ecf784427f1721d3379810efd654720c7344f0ebbf5e10bc7a1b3d" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"]
Feb 27 16:33:32 crc kubenswrapper[4830]: E0227 16:33:32.990600 4830 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6dd6bce0125fbdcdd86f5eb9074b21ac3c2ba5ef9063bcabe41f111d338472f5 is running failed: container process not found" containerID="6dd6bce0125fbdcdd86f5eb9074b21ac3c2ba5ef9063bcabe41f111d338472f5" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"]
Feb 27 16:33:32 crc kubenswrapper[4830]: E0227 16:33:32.990913 4830 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6dd6bce0125fbdcdd86f5eb9074b21ac3c2ba5ef9063bcabe41f111d338472f5 is running failed: container process not found" containerID="6dd6bce0125fbdcdd86f5eb9074b21ac3c2ba5ef9063bcabe41f111d338472f5" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"]
Feb 27 16:33:32 crc kubenswrapper[4830]: E0227 16:33:32.991052 4830 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 4a66491103ecf784427f1721d3379810efd654720c7344f0ebbf5e10bc7a1b3d is running failed: container process not found" containerID="4a66491103ecf784427f1721d3379810efd654720c7344f0ebbf5e10bc7a1b3d" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"]
Feb 27 16:33:32 crc kubenswrapper[4830]: E0227 16:33:32.991373 4830 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 4a66491103ecf784427f1721d3379810efd654720c7344f0ebbf5e10bc7a1b3d is running failed: container process not found" containerID="4a66491103ecf784427f1721d3379810efd654720c7344f0ebbf5e10bc7a1b3d" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"]
Feb 27 16:33:32 crc kubenswrapper[4830]: E0227 16:33:32.991417 4830 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 4a66491103ecf784427f1721d3379810efd654720c7344f0ebbf5e10bc7a1b3d is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-qt6mr" podUID="bc737ee4-d87c-4276-a6d1-6f3144879542" containerName="ovs-vswitchd"
Feb 27 16:33:32 crc kubenswrapper[4830]: E0227 16:33:32.991384 4830 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6dd6bce0125fbdcdd86f5eb9074b21ac3c2ba5ef9063bcabe41f111d338472f5 is running failed: container process not found" containerID="6dd6bce0125fbdcdd86f5eb9074b21ac3c2ba5ef9063bcabe41f111d338472f5" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"]
Feb 27 16:33:32 crc kubenswrapper[4830]: E0227 16:33:32.991464 4830 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6dd6bce0125fbdcdd86f5eb9074b21ac3c2ba5ef9063bcabe41f111d338472f5 is running failed: container process not found" 
probeType="Readiness" pod="openstack/ovn-controller-ovs-qt6mr" podUID="bc737ee4-d87c-4276-a6d1-6f3144879542" containerName="ovsdb-server" Feb 27 16:33:33 crc kubenswrapper[4830]: I0227 16:33:33.019494 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-qt6mr_bc737ee4-d87c-4276-a6d1-6f3144879542/ovs-vswitchd/0.log" Feb 27 16:33:33 crc kubenswrapper[4830]: I0227 16:33:33.021107 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-qt6mr" Feb 27 16:33:33 crc kubenswrapper[4830]: I0227 16:33:33.089362 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-49zwz\" (UniqueName: \"kubernetes.io/projected/bc737ee4-d87c-4276-a6d1-6f3144879542-kube-api-access-49zwz\") pod \"bc737ee4-d87c-4276-a6d1-6f3144879542\" (UID: \"bc737ee4-d87c-4276-a6d1-6f3144879542\") " Feb 27 16:33:33 crc kubenswrapper[4830]: I0227 16:33:33.089438 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/bc737ee4-d87c-4276-a6d1-6f3144879542-var-lib\") pod \"bc737ee4-d87c-4276-a6d1-6f3144879542\" (UID: \"bc737ee4-d87c-4276-a6d1-6f3144879542\") " Feb 27 16:33:33 crc kubenswrapper[4830]: I0227 16:33:33.089513 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/bc737ee4-d87c-4276-a6d1-6f3144879542-etc-ovs\") pod \"bc737ee4-d87c-4276-a6d1-6f3144879542\" (UID: \"bc737ee4-d87c-4276-a6d1-6f3144879542\") " Feb 27 16:33:33 crc kubenswrapper[4830]: I0227 16:33:33.089556 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/bc737ee4-d87c-4276-a6d1-6f3144879542-var-log\") pod \"bc737ee4-d87c-4276-a6d1-6f3144879542\" (UID: \"bc737ee4-d87c-4276-a6d1-6f3144879542\") " Feb 27 16:33:33 crc kubenswrapper[4830]: I0227 16:33:33.089620 4830 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bc737ee4-d87c-4276-a6d1-6f3144879542-scripts\") pod \"bc737ee4-d87c-4276-a6d1-6f3144879542\" (UID: \"bc737ee4-d87c-4276-a6d1-6f3144879542\") " Feb 27 16:33:33 crc kubenswrapper[4830]: I0227 16:33:33.089650 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/bc737ee4-d87c-4276-a6d1-6f3144879542-var-run\") pod \"bc737ee4-d87c-4276-a6d1-6f3144879542\" (UID: \"bc737ee4-d87c-4276-a6d1-6f3144879542\") " Feb 27 16:33:33 crc kubenswrapper[4830]: I0227 16:33:33.090000 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bc737ee4-d87c-4276-a6d1-6f3144879542-var-run" (OuterVolumeSpecName: "var-run") pod "bc737ee4-d87c-4276-a6d1-6f3144879542" (UID: "bc737ee4-d87c-4276-a6d1-6f3144879542"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 16:33:33 crc kubenswrapper[4830]: I0227 16:33:33.090006 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bc737ee4-d87c-4276-a6d1-6f3144879542-etc-ovs" (OuterVolumeSpecName: "etc-ovs") pod "bc737ee4-d87c-4276-a6d1-6f3144879542" (UID: "bc737ee4-d87c-4276-a6d1-6f3144879542"). InnerVolumeSpecName "etc-ovs". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 16:33:33 crc kubenswrapper[4830]: I0227 16:33:33.090065 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bc737ee4-d87c-4276-a6d1-6f3144879542-var-lib" (OuterVolumeSpecName: "var-lib") pod "bc737ee4-d87c-4276-a6d1-6f3144879542" (UID: "bc737ee4-d87c-4276-a6d1-6f3144879542"). InnerVolumeSpecName "var-lib". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 16:33:33 crc kubenswrapper[4830]: I0227 16:33:33.090073 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bc737ee4-d87c-4276-a6d1-6f3144879542-var-log" (OuterVolumeSpecName: "var-log") pod "bc737ee4-d87c-4276-a6d1-6f3144879542" (UID: "bc737ee4-d87c-4276-a6d1-6f3144879542"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 16:33:33 crc kubenswrapper[4830]: I0227 16:33:33.091205 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc737ee4-d87c-4276-a6d1-6f3144879542-scripts" (OuterVolumeSpecName: "scripts") pod "bc737ee4-d87c-4276-a6d1-6f3144879542" (UID: "bc737ee4-d87c-4276-a6d1-6f3144879542"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:33:33 crc kubenswrapper[4830]: I0227 16:33:33.098549 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc737ee4-d87c-4276-a6d1-6f3144879542-kube-api-access-49zwz" (OuterVolumeSpecName: "kube-api-access-49zwz") pod "bc737ee4-d87c-4276-a6d1-6f3144879542" (UID: "bc737ee4-d87c-4276-a6d1-6f3144879542"). InnerVolumeSpecName "kube-api-access-49zwz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:33:33 crc kubenswrapper[4830]: I0227 16:33:33.191710 4830 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/bc737ee4-d87c-4276-a6d1-6f3144879542-var-log\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:33 crc kubenswrapper[4830]: I0227 16:33:33.191785 4830 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bc737ee4-d87c-4276-a6d1-6f3144879542-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:33 crc kubenswrapper[4830]: I0227 16:33:33.191810 4830 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/bc737ee4-d87c-4276-a6d1-6f3144879542-var-run\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:33 crc kubenswrapper[4830]: I0227 16:33:33.191830 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-49zwz\" (UniqueName: \"kubernetes.io/projected/bc737ee4-d87c-4276-a6d1-6f3144879542-kube-api-access-49zwz\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:33 crc kubenswrapper[4830]: I0227 16:33:33.191852 4830 reconciler_common.go:293] "Volume detached for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/bc737ee4-d87c-4276-a6d1-6f3144879542-var-lib\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:33 crc kubenswrapper[4830]: I0227 16:33:33.191875 4830 reconciler_common.go:293] "Volume detached for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/bc737ee4-d87c-4276-a6d1-6f3144879542-etc-ovs\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:33 crc kubenswrapper[4830]: I0227 16:33:33.471032 4830 generic.go:334] "Generic (PLEG): container finished" podID="f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f" containerID="bd8b53933ff6dda1af3029d46d29a1b791028b8a3ae0508dffa6e043e33ce932" exitCode=137 Feb 27 16:33:33 crc kubenswrapper[4830]: I0227 16:33:33.471098 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/swift-storage-0" event={"ID":"f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f","Type":"ContainerDied","Data":"bd8b53933ff6dda1af3029d46d29a1b791028b8a3ae0508dffa6e043e33ce932"} Feb 27 16:33:33 crc kubenswrapper[4830]: I0227 16:33:33.473311 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-qt6mr_bc737ee4-d87c-4276-a6d1-6f3144879542/ovs-vswitchd/0.log" Feb 27 16:33:33 crc kubenswrapper[4830]: I0227 16:33:33.473899 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-qt6mr" event={"ID":"bc737ee4-d87c-4276-a6d1-6f3144879542","Type":"ContainerDied","Data":"55e9b8ebb3a52da47ce3bb0f86fc446908427f7967fa006357a03cd8be4789b9"} Feb 27 16:33:33 crc kubenswrapper[4830]: I0227 16:33:33.473936 4830 scope.go:117] "RemoveContainer" containerID="4a66491103ecf784427f1721d3379810efd654720c7344f0ebbf5e10bc7a1b3d" Feb 27 16:33:33 crc kubenswrapper[4830]: I0227 16:33:33.474098 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-qt6mr" Feb 27 16:33:33 crc kubenswrapper[4830]: I0227 16:33:33.504799 4830 scope.go:117] "RemoveContainer" containerID="6dd6bce0125fbdcdd86f5eb9074b21ac3c2ba5ef9063bcabe41f111d338472f5" Feb 27 16:33:33 crc kubenswrapper[4830]: I0227 16:33:33.521378 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-ovs-qt6mr"] Feb 27 16:33:33 crc kubenswrapper[4830]: I0227 16:33:33.529207 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-ovs-qt6mr"] Feb 27 16:33:33 crc kubenswrapper[4830]: I0227 16:33:33.536399 4830 scope.go:117] "RemoveContainer" containerID="75f105c69a81a404a85e4253f51be6a0844b8fa41fe1407a258ae3b5998a42f6" Feb 27 16:33:33 crc kubenswrapper[4830]: I0227 16:33:33.702229 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Feb 27 16:33:33 crc kubenswrapper[4830]: I0227 16:33:33.800429 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f-cache\") pod \"f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f\" (UID: \"f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f\") " Feb 27 16:33:33 crc kubenswrapper[4830]: I0227 16:33:33.800488 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f-etc-swift\") pod \"f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f\" (UID: \"f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f\") " Feb 27 16:33:33 crc kubenswrapper[4830]: I0227 16:33:33.800541 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f-combined-ca-bundle\") pod \"f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f\" (UID: \"f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f\") " Feb 27 16:33:33 crc kubenswrapper[4830]: I0227 16:33:33.800637 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f-lock\") pod \"f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f\" (UID: \"f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f\") " Feb 27 16:33:33 crc kubenswrapper[4830]: I0227 16:33:33.800677 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wrcrs\" (UniqueName: \"kubernetes.io/projected/f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f-kube-api-access-wrcrs\") pod \"f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f\" (UID: \"f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f\") " Feb 27 16:33:33 crc kubenswrapper[4830]: I0227 16:33:33.800716 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swift\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage11-crc\") pod \"f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f\" (UID: \"f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f\") " Feb 27 16:33:33 crc kubenswrapper[4830]: I0227 16:33:33.801695 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f-lock" (OuterVolumeSpecName: "lock") pod "f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f" (UID: "f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f"). InnerVolumeSpecName "lock". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 16:33:33 crc kubenswrapper[4830]: I0227 16:33:33.801704 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f-cache" (OuterVolumeSpecName: "cache") pod "f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f" (UID: "f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f"). InnerVolumeSpecName "cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 16:33:33 crc kubenswrapper[4830]: I0227 16:33:33.803778 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "swift") pod "f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f" (UID: "f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 27 16:33:33 crc kubenswrapper[4830]: I0227 16:33:33.806691 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f-kube-api-access-wrcrs" (OuterVolumeSpecName: "kube-api-access-wrcrs") pod "f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f" (UID: "f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f"). InnerVolumeSpecName "kube-api-access-wrcrs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:33:33 crc kubenswrapper[4830]: I0227 16:33:33.810343 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f" (UID: "f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:33:33 crc kubenswrapper[4830]: I0227 16:33:33.902755 4830 reconciler_common.go:293] "Volume detached for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f-lock\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:33 crc kubenswrapper[4830]: I0227 16:33:33.902803 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wrcrs\" (UniqueName: \"kubernetes.io/projected/f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f-kube-api-access-wrcrs\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:33 crc kubenswrapper[4830]: I0227 16:33:33.902839 4830 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Feb 27 16:33:33 crc kubenswrapper[4830]: I0227 16:33:33.902859 4830 reconciler_common.go:293] "Volume detached for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f-cache\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:33 crc kubenswrapper[4830]: I0227 16:33:33.902876 4830 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f-etc-swift\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:33 crc kubenswrapper[4830]: I0227 16:33:33.928460 4830 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node 
"crc" Feb 27 16:33:34 crc kubenswrapper[4830]: I0227 16:33:34.004888 4830 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:34 crc kubenswrapper[4830]: I0227 16:33:34.075909 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f" (UID: "f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:33:34 crc kubenswrapper[4830]: I0227 16:33:34.106209 4830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 16:33:34 crc kubenswrapper[4830]: I0227 16:33:34.501213 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f","Type":"ContainerDied","Data":"901d194be787f5ed6546be3354e5327541c03bd1ff10b0104ee52b902078a56c"} Feb 27 16:33:34 crc kubenswrapper[4830]: I0227 16:33:34.501304 4830 scope.go:117] "RemoveContainer" containerID="bd8b53933ff6dda1af3029d46d29a1b791028b8a3ae0508dffa6e043e33ce932" Feb 27 16:33:34 crc kubenswrapper[4830]: I0227 16:33:34.502002 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Feb 27 16:33:34 crc kubenswrapper[4830]: I0227 16:33:34.564448 4830 scope.go:117] "RemoveContainer" containerID="2ecea93ad489597ba408891f7afe44675c8c3d67fbcc4edfbe9a3debbac6c3a1" Feb 27 16:33:34 crc kubenswrapper[4830]: I0227 16:33:34.577686 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-storage-0"] Feb 27 16:33:34 crc kubenswrapper[4830]: I0227 16:33:34.591171 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-storage-0"] Feb 27 16:33:34 crc kubenswrapper[4830]: I0227 16:33:34.632832 4830 scope.go:117] "RemoveContainer" containerID="7cfd581745eb62c04447e2179fa4d6397a6ffb2801133df8571673fd2fc8908e" Feb 27 16:33:34 crc kubenswrapper[4830]: I0227 16:33:34.667092 4830 scope.go:117] "RemoveContainer" containerID="d7c3c63f60fa6c0faabdef005cd6435637f7aa45e44077b6d1579dbcfce2ffa5" Feb 27 16:33:34 crc kubenswrapper[4830]: I0227 16:33:34.695216 4830 scope.go:117] "RemoveContainer" containerID="2111c96223f006387077459f4429b67f715648783b2df873c937a40d47be2181" Feb 27 16:33:34 crc kubenswrapper[4830]: I0227 16:33:34.723026 4830 scope.go:117] "RemoveContainer" containerID="ee0b677352a33d7fbcb2e9fab57bf5d672b03867dad9240c6c1fbd8e2b1f0b37" Feb 27 16:33:34 crc kubenswrapper[4830]: I0227 16:33:34.757623 4830 scope.go:117] "RemoveContainer" containerID="2b750caa248530febbfbd4731fc41f64ef7a9129eab2a66780052a81ccfecb65" Feb 27 16:33:34 crc kubenswrapper[4830]: I0227 16:33:34.779731 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc737ee4-d87c-4276-a6d1-6f3144879542" path="/var/lib/kubelet/pods/bc737ee4-d87c-4276-a6d1-6f3144879542/volumes" Feb 27 16:33:34 crc kubenswrapper[4830]: I0227 16:33:34.781878 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f" path="/var/lib/kubelet/pods/f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f/volumes" Feb 27 16:33:34 crc kubenswrapper[4830]: I0227 16:33:34.785261 
4830 scope.go:117] "RemoveContainer" containerID="63b86b7398c02b758efbf23ee7393a15e9d70cbae4e28af8dae65670306da7a0" Feb 27 16:33:34 crc kubenswrapper[4830]: I0227 16:33:34.815356 4830 scope.go:117] "RemoveContainer" containerID="fddbdac256b4a79af48834ea268b02e9852631ab71cc27740d8344fa2927b417" Feb 27 16:33:34 crc kubenswrapper[4830]: I0227 16:33:34.838885 4830 scope.go:117] "RemoveContainer" containerID="fe39e07eaf48b0f3b6310a52d48a7901fe69c67e61f2bc86fcae68e60845e160" Feb 27 16:33:34 crc kubenswrapper[4830]: I0227 16:33:34.863823 4830 scope.go:117] "RemoveContainer" containerID="abb82842a2a5f9faa42c2a6d73afbddfe73443d7841d35f06ec15c1730975fed" Feb 27 16:33:34 crc kubenswrapper[4830]: I0227 16:33:34.885275 4830 scope.go:117] "RemoveContainer" containerID="b54307be9a881794a66b55a9bca85b4703855db739e2c59f98b8842a64710ed1" Feb 27 16:33:34 crc kubenswrapper[4830]: I0227 16:33:34.907310 4830 scope.go:117] "RemoveContainer" containerID="09edcd425fc07104a2a290237930b325e8877e8ef116e51111ef81ba1b7710e2" Feb 27 16:33:34 crc kubenswrapper[4830]: I0227 16:33:34.939747 4830 scope.go:117] "RemoveContainer" containerID="a6f8e6e02ca541ffa4fab936a485162a21cf976d73c728274bb3fd83cc01abb4" Feb 27 16:33:34 crc kubenswrapper[4830]: I0227 16:33:34.960829 4830 scope.go:117] "RemoveContainer" containerID="d31525bce81210150593ba3db8f8611a5b2d43ff82b2e5c7435f34ad45248c17" Feb 27 16:33:38 crc kubenswrapper[4830]: I0227 16:33:38.315508 4830 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pod9e9ca76d-ae40-4258-8f4d-09e15a0d8cd6"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod9e9ca76d-ae40-4258-8f4d-09e15a0d8cd6] : Timed out while waiting for systemd to remove kubepods-besteffort-pod9e9ca76d_ae40_4258_8f4d_09e15a0d8cd6.slice" Feb 27 16:33:38 crc kubenswrapper[4830]: I0227 16:33:38.369573 4830 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" 
cgroupName=["kubepods","besteffort","pod0ea4ce89-3e8b-4521-9398-3406c6bf0166"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod0ea4ce89-3e8b-4521-9398-3406c6bf0166] : Timed out while waiting for systemd to remove kubepods-besteffort-pod0ea4ce89_3e8b_4521_9398_3406c6bf0166.slice" Feb 27 16:33:38 crc kubenswrapper[4830]: E0227 16:33:38.369658 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods besteffort pod0ea4ce89-3e8b-4521-9398-3406c6bf0166] : unable to destroy cgroup paths for cgroup [kubepods besteffort pod0ea4ce89-3e8b-4521-9398-3406c6bf0166] : Timed out while waiting for systemd to remove kubepods-besteffort-pod0ea4ce89_3e8b_4521_9398_3406c6bf0166.slice" pod="openstack/nova-cell1-5e39-account-create-update-r88l6" podUID="0ea4ce89-3e8b-4521-9398-3406c6bf0166" Feb 27 16:33:38 crc kubenswrapper[4830]: I0227 16:33:38.562072 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-5e39-account-create-update-r88l6" Feb 27 16:33:38 crc kubenswrapper[4830]: I0227 16:33:38.639797 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-5e39-account-create-update-r88l6"] Feb 27 16:33:38 crc kubenswrapper[4830]: I0227 16:33:38.652420 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-5e39-account-create-update-r88l6"] Feb 27 16:33:38 crc kubenswrapper[4830]: I0227 16:33:38.763867 4830 scope.go:117] "RemoveContainer" containerID="ca4233a6fa911a1d6b959075103b585a592935c9e9a6d178d17fef41a2fea048" Feb 27 16:33:38 crc kubenswrapper[4830]: E0227 16:33:38.764545 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 16:33:38 crc kubenswrapper[4830]: I0227 16:33:38.780339 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ea4ce89-3e8b-4521-9398-3406c6bf0166" path="/var/lib/kubelet/pods/0ea4ce89-3e8b-4521-9398-3406c6bf0166/volumes" Feb 27 16:33:53 crc kubenswrapper[4830]: I0227 16:33:53.763540 4830 scope.go:117] "RemoveContainer" containerID="ca4233a6fa911a1d6b959075103b585a592935c9e9a6d178d17fef41a2fea048" Feb 27 16:33:53 crc kubenswrapper[4830]: E0227 16:33:53.765670 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 16:34:00 crc kubenswrapper[4830]: I0227 16:34:00.170732 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536834-qfhvj"] Feb 27 16:34:00 crc kubenswrapper[4830]: E0227 16:34:00.172221 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41fafe33-b43b-4dcb-9edd-b365d0749e10" containerName="cinder-api-log" Feb 27 16:34:00 crc kubenswrapper[4830]: I0227 16:34:00.172256 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="41fafe33-b43b-4dcb-9edd-b365d0749e10" containerName="cinder-api-log" Feb 27 16:34:00 crc kubenswrapper[4830]: E0227 16:34:00.172285 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8d4cd44-9972-445e-bac3-63441b6fa4cc" containerName="glance-httpd" Feb 27 16:34:00 crc kubenswrapper[4830]: I0227 16:34:00.172301 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8d4cd44-9972-445e-bac3-63441b6fa4cc" containerName="glance-httpd" Feb 27 
16:34:00 crc kubenswrapper[4830]: E0227 16:34:00.172334 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c017daa-cb8f-4629-80e6-a671a8455149" containerName="openstack-network-exporter" Feb 27 16:34:00 crc kubenswrapper[4830]: I0227 16:34:00.172351 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c017daa-cb8f-4629-80e6-a671a8455149" containerName="openstack-network-exporter" Feb 27 16:34:00 crc kubenswrapper[4830]: E0227 16:34:00.172373 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc737ee4-d87c-4276-a6d1-6f3144879542" containerName="ovsdb-server-init" Feb 27 16:34:00 crc kubenswrapper[4830]: I0227 16:34:00.172389 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc737ee4-d87c-4276-a6d1-6f3144879542" containerName="ovsdb-server-init" Feb 27 16:34:00 crc kubenswrapper[4830]: E0227 16:34:00.172419 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91362817-2bc3-48d8-a4ae-8ba5cb8f2b4c" containerName="nova-api-api" Feb 27 16:34:00 crc kubenswrapper[4830]: I0227 16:34:00.172435 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="91362817-2bc3-48d8-a4ae-8ba5cb8f2b4c" containerName="nova-api-api" Feb 27 16:34:00 crc kubenswrapper[4830]: E0227 16:34:00.172454 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb3cdab6-15fa-40e1-a187-e277086227da" containerName="memcached" Feb 27 16:34:00 crc kubenswrapper[4830]: I0227 16:34:00.172473 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb3cdab6-15fa-40e1-a187-e277086227da" containerName="memcached" Feb 27 16:34:00 crc kubenswrapper[4830]: E0227 16:34:00.172502 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c017daa-cb8f-4629-80e6-a671a8455149" containerName="ovn-northd" Feb 27 16:34:00 crc kubenswrapper[4830]: I0227 16:34:00.172518 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c017daa-cb8f-4629-80e6-a671a8455149" containerName="ovn-northd" Feb 27 16:34:00 crc 
kubenswrapper[4830]: E0227 16:34:00.172539 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a989aa76-9246-46b2-9f1e-7900cfecedc2" containerName="nova-cell1-conductor-conductor" Feb 27 16:34:00 crc kubenswrapper[4830]: I0227 16:34:00.172557 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="a989aa76-9246-46b2-9f1e-7900cfecedc2" containerName="nova-cell1-conductor-conductor" Feb 27 16:34:00 crc kubenswrapper[4830]: E0227 16:34:00.172581 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f" containerName="account-reaper" Feb 27 16:34:00 crc kubenswrapper[4830]: I0227 16:34:00.172787 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f" containerName="account-reaper" Feb 27 16:34:00 crc kubenswrapper[4830]: E0227 16:34:00.172815 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a234743b-8983-4a60-bbb4-59ad823b83e2" containerName="barbican-api" Feb 27 16:34:00 crc kubenswrapper[4830]: I0227 16:34:00.172831 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="a234743b-8983-4a60-bbb4-59ad823b83e2" containerName="barbican-api" Feb 27 16:34:00 crc kubenswrapper[4830]: E0227 16:34:00.172851 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf33c958-d345-4a0b-a2d8-7c8aedfb5cf3" containerName="mysql-bootstrap" Feb 27 16:34:00 crc kubenswrapper[4830]: I0227 16:34:00.172867 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf33c958-d345-4a0b-a2d8-7c8aedfb5cf3" containerName="mysql-bootstrap" Feb 27 16:34:00 crc kubenswrapper[4830]: E0227 16:34:00.172887 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc737ee4-d87c-4276-a6d1-6f3144879542" containerName="ovs-vswitchd" Feb 27 16:34:00 crc kubenswrapper[4830]: I0227 16:34:00.172902 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc737ee4-d87c-4276-a6d1-6f3144879542" containerName="ovs-vswitchd" Feb 27 16:34:00 
crc kubenswrapper[4830]: E0227 16:34:00.172929 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f" containerName="swift-recon-cron" Feb 27 16:34:00 crc kubenswrapper[4830]: I0227 16:34:00.172985 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f" containerName="swift-recon-cron" Feb 27 16:34:00 crc kubenswrapper[4830]: E0227 16:34:00.173013 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa5b7bdd-50bb-4123-a32a-0c7e97035a3f" containerName="setup-container" Feb 27 16:34:00 crc kubenswrapper[4830]: I0227 16:34:00.173032 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa5b7bdd-50bb-4123-a32a-0c7e97035a3f" containerName="setup-container" Feb 27 16:34:00 crc kubenswrapper[4830]: E0227 16:34:00.173089 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2fe2ad2-a0de-49aa-95fd-ef5f15032676" containerName="ceilometer-central-agent" Feb 27 16:34:00 crc kubenswrapper[4830]: I0227 16:34:00.173107 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2fe2ad2-a0de-49aa-95fd-ef5f15032676" containerName="ceilometer-central-agent" Feb 27 16:34:00 crc kubenswrapper[4830]: E0227 16:34:00.173132 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="acdbf1f3-efd7-4181-b99c-a0697c465c4b" containerName="neutron-httpd" Feb 27 16:34:00 crc kubenswrapper[4830]: I0227 16:34:00.173153 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="acdbf1f3-efd7-4181-b99c-a0697c465c4b" containerName="neutron-httpd" Feb 27 16:34:00 crc kubenswrapper[4830]: E0227 16:34:00.173174 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f" containerName="rsync" Feb 27 16:34:00 crc kubenswrapper[4830]: I0227 16:34:00.173190 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f" containerName="rsync" Feb 27 16:34:00 crc 
kubenswrapper[4830]: E0227 16:34:00.173209 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a234743b-8983-4a60-bbb4-59ad823b83e2" containerName="barbican-api-log" Feb 27 16:34:00 crc kubenswrapper[4830]: I0227 16:34:00.173225 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="a234743b-8983-4a60-bbb4-59ad823b83e2" containerName="barbican-api-log" Feb 27 16:34:00 crc kubenswrapper[4830]: E0227 16:34:00.173252 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf57a5ff-eb3d-4f4b-8ac0-0aac8013fbbf" containerName="placement-log" Feb 27 16:34:00 crc kubenswrapper[4830]: I0227 16:34:00.173269 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf57a5ff-eb3d-4f4b-8ac0-0aac8013fbbf" containerName="placement-log" Feb 27 16:34:00 crc kubenswrapper[4830]: E0227 16:34:00.173298 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="acdbf1f3-efd7-4181-b99c-a0697c465c4b" containerName="neutron-api" Feb 27 16:34:00 crc kubenswrapper[4830]: I0227 16:34:00.173315 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="acdbf1f3-efd7-4181-b99c-a0697c465c4b" containerName="neutron-api" Feb 27 16:34:00 crc kubenswrapper[4830]: E0227 16:34:00.173345 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc737ee4-d87c-4276-a6d1-6f3144879542" containerName="ovsdb-server" Feb 27 16:34:00 crc kubenswrapper[4830]: I0227 16:34:00.173362 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc737ee4-d87c-4276-a6d1-6f3144879542" containerName="ovsdb-server" Feb 27 16:34:00 crc kubenswrapper[4830]: E0227 16:34:00.173395 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f" containerName="object-auditor" Feb 27 16:34:00 crc kubenswrapper[4830]: I0227 16:34:00.173413 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f" containerName="object-auditor" Feb 27 16:34:00 crc kubenswrapper[4830]: E0227 
16:34:00.173442 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f" containerName="account-auditor" Feb 27 16:34:00 crc kubenswrapper[4830]: I0227 16:34:00.173461 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f" containerName="account-auditor" Feb 27 16:34:00 crc kubenswrapper[4830]: E0227 16:34:00.173488 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f" containerName="container-server" Feb 27 16:34:00 crc kubenswrapper[4830]: I0227 16:34:00.173505 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f" containerName="container-server" Feb 27 16:34:00 crc kubenswrapper[4830]: E0227 16:34:00.173529 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4280aaf-817d-41e1-9867-715359ae322e" containerName="nova-metadata-metadata" Feb 27 16:34:00 crc kubenswrapper[4830]: I0227 16:34:00.173545 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4280aaf-817d-41e1-9867-715359ae322e" containerName="nova-metadata-metadata" Feb 27 16:34:00 crc kubenswrapper[4830]: E0227 16:34:00.173578 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf57a5ff-eb3d-4f4b-8ac0-0aac8013fbbf" containerName="placement-api" Feb 27 16:34:00 crc kubenswrapper[4830]: I0227 16:34:00.173597 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf57a5ff-eb3d-4f4b-8ac0-0aac8013fbbf" containerName="placement-api" Feb 27 16:34:00 crc kubenswrapper[4830]: E0227 16:34:00.173619 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09849d6c-7457-4130-9074-73154d22af1f" containerName="mariadb-account-create-update" Feb 27 16:34:00 crc kubenswrapper[4830]: I0227 16:34:00.173636 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="09849d6c-7457-4130-9074-73154d22af1f" containerName="mariadb-account-create-update" Feb 27 16:34:00 crc 
kubenswrapper[4830]: E0227 16:34:00.173663 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f" containerName="object-replicator" Feb 27 16:34:00 crc kubenswrapper[4830]: I0227 16:34:00.173679 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f" containerName="object-replicator" Feb 27 16:34:00 crc kubenswrapper[4830]: E0227 16:34:00.173708 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47514135-95a6-4b77-815a-ebf23a3cab82" containerName="rabbitmq" Feb 27 16:34:00 crc kubenswrapper[4830]: I0227 16:34:00.173726 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="47514135-95a6-4b77-815a-ebf23a3cab82" containerName="rabbitmq" Feb 27 16:34:00 crc kubenswrapper[4830]: E0227 16:34:00.173744 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28316ca0-eb95-47b0-bc7e-d31591facdc5" containerName="keystone-api" Feb 27 16:34:00 crc kubenswrapper[4830]: I0227 16:34:00.173760 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="28316ca0-eb95-47b0-bc7e-d31591facdc5" containerName="keystone-api" Feb 27 16:34:00 crc kubenswrapper[4830]: E0227 16:34:00.173788 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2fe2ad2-a0de-49aa-95fd-ef5f15032676" containerName="proxy-httpd" Feb 27 16:34:00 crc kubenswrapper[4830]: I0227 16:34:00.173804 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2fe2ad2-a0de-49aa-95fd-ef5f15032676" containerName="proxy-httpd" Feb 27 16:34:00 crc kubenswrapper[4830]: E0227 16:34:00.173832 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf33c958-d345-4a0b-a2d8-7c8aedfb5cf3" containerName="galera" Feb 27 16:34:00 crc kubenswrapper[4830]: I0227 16:34:00.173848 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf33c958-d345-4a0b-a2d8-7c8aedfb5cf3" containerName="galera" Feb 27 16:34:00 crc kubenswrapper[4830]: E0227 16:34:00.173880 4830 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aef23409-e12b-4ef3-a968-f666e5a127ae" containerName="kube-state-metrics" Feb 27 16:34:00 crc kubenswrapper[4830]: I0227 16:34:00.173896 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="aef23409-e12b-4ef3-a968-f666e5a127ae" containerName="kube-state-metrics" Feb 27 16:34:00 crc kubenswrapper[4830]: E0227 16:34:00.173920 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4baf4d8-24c9-4aa8-b72e-9d6d9cdd5f32" containerName="barbican-worker" Feb 27 16:34:00 crc kubenswrapper[4830]: I0227 16:34:00.173937 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4baf4d8-24c9-4aa8-b72e-9d6d9cdd5f32" containerName="barbican-worker" Feb 27 16:34:00 crc kubenswrapper[4830]: E0227 16:34:00.173990 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f" containerName="object-server" Feb 27 16:34:00 crc kubenswrapper[4830]: I0227 16:34:00.174006 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f" containerName="object-server" Feb 27 16:34:00 crc kubenswrapper[4830]: E0227 16:34:00.174031 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f" containerName="container-updater" Feb 27 16:34:00 crc kubenswrapper[4830]: I0227 16:34:00.174047 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f" containerName="container-updater" Feb 27 16:34:00 crc kubenswrapper[4830]: E0227 16:34:00.174069 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f" containerName="container-replicator" Feb 27 16:34:00 crc kubenswrapper[4830]: I0227 16:34:00.174086 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f" containerName="container-replicator" Feb 27 16:34:00 crc kubenswrapper[4830]: E0227 16:34:00.181127 
4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4280aaf-817d-41e1-9867-715359ae322e" containerName="nova-metadata-log" Feb 27 16:34:00 crc kubenswrapper[4830]: I0227 16:34:00.181180 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4280aaf-817d-41e1-9867-715359ae322e" containerName="nova-metadata-log" Feb 27 16:34:00 crc kubenswrapper[4830]: E0227 16:34:00.181213 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f" containerName="object-expirer" Feb 27 16:34:00 crc kubenswrapper[4830]: I0227 16:34:00.181226 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f" containerName="object-expirer" Feb 27 16:34:00 crc kubenswrapper[4830]: E0227 16:34:00.181241 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2fe2ad2-a0de-49aa-95fd-ef5f15032676" containerName="sg-core" Feb 27 16:34:00 crc kubenswrapper[4830]: I0227 16:34:00.181253 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2fe2ad2-a0de-49aa-95fd-ef5f15032676" containerName="sg-core" Feb 27 16:34:00 crc kubenswrapper[4830]: E0227 16:34:00.181276 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4baf4d8-24c9-4aa8-b72e-9d6d9cdd5f32" containerName="barbican-worker-log" Feb 27 16:34:00 crc kubenswrapper[4830]: I0227 16:34:00.181291 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4baf4d8-24c9-4aa8-b72e-9d6d9cdd5f32" containerName="barbican-worker-log" Feb 27 16:34:00 crc kubenswrapper[4830]: E0227 16:34:00.181311 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f" containerName="object-updater" Feb 27 16:34:00 crc kubenswrapper[4830]: I0227 16:34:00.181324 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f" containerName="object-updater" Feb 27 16:34:00 crc kubenswrapper[4830]: E0227 16:34:00.181351 4830 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f" containerName="account-server" Feb 27 16:34:00 crc kubenswrapper[4830]: I0227 16:34:00.181365 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f" containerName="account-server" Feb 27 16:34:00 crc kubenswrapper[4830]: E0227 16:34:00.181380 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47514135-95a6-4b77-815a-ebf23a3cab82" containerName="setup-container" Feb 27 16:34:00 crc kubenswrapper[4830]: I0227 16:34:00.181393 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="47514135-95a6-4b77-815a-ebf23a3cab82" containerName="setup-container" Feb 27 16:34:00 crc kubenswrapper[4830]: E0227 16:34:00.181416 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa5b7bdd-50bb-4123-a32a-0c7e97035a3f" containerName="rabbitmq" Feb 27 16:34:00 crc kubenswrapper[4830]: I0227 16:34:00.181429 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa5b7bdd-50bb-4123-a32a-0c7e97035a3f" containerName="rabbitmq" Feb 27 16:34:00 crc kubenswrapper[4830]: E0227 16:34:00.181443 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2fe2ad2-a0de-49aa-95fd-ef5f15032676" containerName="ceilometer-notification-agent" Feb 27 16:34:00 crc kubenswrapper[4830]: I0227 16:34:00.181456 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2fe2ad2-a0de-49aa-95fd-ef5f15032676" containerName="ceilometer-notification-agent" Feb 27 16:34:00 crc kubenswrapper[4830]: E0227 16:34:00.181472 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41fafe33-b43b-4dcb-9edd-b365d0749e10" containerName="cinder-api" Feb 27 16:34:00 crc kubenswrapper[4830]: I0227 16:34:00.181485 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="41fafe33-b43b-4dcb-9edd-b365d0749e10" containerName="cinder-api" Feb 27 16:34:00 crc kubenswrapper[4830]: E0227 16:34:00.181504 4830 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91362817-2bc3-48d8-a4ae-8ba5cb8f2b4c" containerName="nova-api-log" Feb 27 16:34:00 crc kubenswrapper[4830]: I0227 16:34:00.181517 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="91362817-2bc3-48d8-a4ae-8ba5cb8f2b4c" containerName="nova-api-log" Feb 27 16:34:00 crc kubenswrapper[4830]: E0227 16:34:00.181539 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73fa27e0-b59d-44b0-8648-7e696f71cd61" containerName="glance-log" Feb 27 16:34:00 crc kubenswrapper[4830]: I0227 16:34:00.181553 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="73fa27e0-b59d-44b0-8648-7e696f71cd61" containerName="glance-log" Feb 27 16:34:00 crc kubenswrapper[4830]: E0227 16:34:00.181574 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f" containerName="account-replicator" Feb 27 16:34:00 crc kubenswrapper[4830]: I0227 16:34:00.181586 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f" containerName="account-replicator" Feb 27 16:34:00 crc kubenswrapper[4830]: E0227 16:34:00.181607 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8d4cd44-9972-445e-bac3-63441b6fa4cc" containerName="glance-log" Feb 27 16:34:00 crc kubenswrapper[4830]: I0227 16:34:00.181620 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8d4cd44-9972-445e-bac3-63441b6fa4cc" containerName="glance-log" Feb 27 16:34:00 crc kubenswrapper[4830]: E0227 16:34:00.181649 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f" containerName="container-auditor" Feb 27 16:34:00 crc kubenswrapper[4830]: I0227 16:34:00.181662 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f" containerName="container-auditor" Feb 27 16:34:00 crc kubenswrapper[4830]: E0227 16:34:00.181677 4830 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="73fa27e0-b59d-44b0-8648-7e696f71cd61" containerName="glance-httpd" Feb 27 16:34:00 crc kubenswrapper[4830]: I0227 16:34:00.181690 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="73fa27e0-b59d-44b0-8648-7e696f71cd61" containerName="glance-httpd" Feb 27 16:34:00 crc kubenswrapper[4830]: I0227 16:34:00.182195 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4baf4d8-24c9-4aa8-b72e-9d6d9cdd5f32" containerName="barbican-worker-log" Feb 27 16:34:00 crc kubenswrapper[4830]: I0227 16:34:00.182222 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="acdbf1f3-efd7-4181-b99c-a0697c465c4b" containerName="neutron-api" Feb 27 16:34:00 crc kubenswrapper[4830]: I0227 16:34:00.182248 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="73fa27e0-b59d-44b0-8648-7e696f71cd61" containerName="glance-log" Feb 27 16:34:00 crc kubenswrapper[4830]: I0227 16:34:00.182276 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c017daa-cb8f-4629-80e6-a671a8455149" containerName="openstack-network-exporter" Feb 27 16:34:00 crc kubenswrapper[4830]: I0227 16:34:00.182303 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="a234743b-8983-4a60-bbb4-59ad823b83e2" containerName="barbican-api-log" Feb 27 16:34:00 crc kubenswrapper[4830]: I0227 16:34:00.182325 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f" containerName="rsync" Feb 27 16:34:00 crc kubenswrapper[4830]: I0227 16:34:00.182340 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2fe2ad2-a0de-49aa-95fd-ef5f15032676" containerName="proxy-httpd" Feb 27 16:34:00 crc kubenswrapper[4830]: I0227 16:34:00.182360 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f" containerName="container-replicator" Feb 27 16:34:00 crc kubenswrapper[4830]: I0227 16:34:00.182375 4830 
memory_manager.go:354] "RemoveStaleState removing state" podUID="f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f" containerName="container-updater" Feb 27 16:34:00 crc kubenswrapper[4830]: I0227 16:34:00.182390 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="a989aa76-9246-46b2-9f1e-7900cfecedc2" containerName="nova-cell1-conductor-conductor" Feb 27 16:34:00 crc kubenswrapper[4830]: I0227 16:34:00.182405 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4baf4d8-24c9-4aa8-b72e-9d6d9cdd5f32" containerName="barbican-worker" Feb 27 16:34:00 crc kubenswrapper[4830]: I0227 16:34:00.182424 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f" containerName="object-replicator" Feb 27 16:34:00 crc kubenswrapper[4830]: I0227 16:34:00.182451 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f" containerName="object-updater" Feb 27 16:34:00 crc kubenswrapper[4830]: I0227 16:34:00.182468 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="28316ca0-eb95-47b0-bc7e-d31591facdc5" containerName="keystone-api" Feb 27 16:34:00 crc kubenswrapper[4830]: I0227 16:34:00.182490 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c017daa-cb8f-4629-80e6-a671a8455149" containerName="ovn-northd" Feb 27 16:34:00 crc kubenswrapper[4830]: I0227 16:34:00.182512 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8d4cd44-9972-445e-bac3-63441b6fa4cc" containerName="glance-log" Feb 27 16:34:00 crc kubenswrapper[4830]: I0227 16:34:00.182526 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa5b7bdd-50bb-4123-a32a-0c7e97035a3f" containerName="rabbitmq" Feb 27 16:34:00 crc kubenswrapper[4830]: I0227 16:34:00.182544 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="a234743b-8983-4a60-bbb4-59ad823b83e2" containerName="barbican-api" Feb 27 16:34:00 crc kubenswrapper[4830]: 
I0227 16:34:00.182561 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f" containerName="account-reaper" Feb 27 16:34:00 crc kubenswrapper[4830]: I0227 16:34:00.182575 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2fe2ad2-a0de-49aa-95fd-ef5f15032676" containerName="ceilometer-central-agent" Feb 27 16:34:00 crc kubenswrapper[4830]: I0227 16:34:00.182595 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="91362817-2bc3-48d8-a4ae-8ba5cb8f2b4c" containerName="nova-api-api" Feb 27 16:34:00 crc kubenswrapper[4830]: I0227 16:34:00.182615 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f" containerName="account-replicator" Feb 27 16:34:00 crc kubenswrapper[4830]: I0227 16:34:00.182629 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2fe2ad2-a0de-49aa-95fd-ef5f15032676" containerName="ceilometer-notification-agent" Feb 27 16:34:00 crc kubenswrapper[4830]: I0227 16:34:00.182648 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4280aaf-817d-41e1-9867-715359ae322e" containerName="nova-metadata-log" Feb 27 16:34:00 crc kubenswrapper[4830]: I0227 16:34:00.182662 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="41fafe33-b43b-4dcb-9edd-b365d0749e10" containerName="cinder-api-log" Feb 27 16:34:00 crc kubenswrapper[4830]: I0227 16:34:00.182679 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf33c958-d345-4a0b-a2d8-7c8aedfb5cf3" containerName="galera" Feb 27 16:34:00 crc kubenswrapper[4830]: I0227 16:34:00.182692 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="09849d6c-7457-4130-9074-73154d22af1f" containerName="mariadb-account-create-update" Feb 27 16:34:00 crc kubenswrapper[4830]: I0227 16:34:00.182707 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f" 
containerName="object-auditor" Feb 27 16:34:00 crc kubenswrapper[4830]: I0227 16:34:00.182728 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="aef23409-e12b-4ef3-a968-f666e5a127ae" containerName="kube-state-metrics" Feb 27 16:34:00 crc kubenswrapper[4830]: I0227 16:34:00.182742 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="41fafe33-b43b-4dcb-9edd-b365d0749e10" containerName="cinder-api" Feb 27 16:34:00 crc kubenswrapper[4830]: I0227 16:34:00.182757 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="91362817-2bc3-48d8-a4ae-8ba5cb8f2b4c" containerName="nova-api-log" Feb 27 16:34:00 crc kubenswrapper[4830]: I0227 16:34:00.182779 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f" containerName="container-auditor" Feb 27 16:34:00 crc kubenswrapper[4830]: I0227 16:34:00.182796 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f" containerName="account-server" Feb 27 16:34:00 crc kubenswrapper[4830]: I0227 16:34:00.182812 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f" containerName="container-server" Feb 27 16:34:00 crc kubenswrapper[4830]: I0227 16:34:00.182839 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8d4cd44-9972-445e-bac3-63441b6fa4cc" containerName="glance-httpd" Feb 27 16:34:00 crc kubenswrapper[4830]: I0227 16:34:00.182883 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf57a5ff-eb3d-4f4b-8ac0-0aac8013fbbf" containerName="placement-api" Feb 27 16:34:00 crc kubenswrapper[4830]: I0227 16:34:00.182916 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc737ee4-d87c-4276-a6d1-6f3144879542" containerName="ovsdb-server" Feb 27 16:34:00 crc kubenswrapper[4830]: I0227 16:34:00.182980 4830 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="bf57a5ff-eb3d-4f4b-8ac0-0aac8013fbbf" containerName="placement-log" Feb 27 16:34:00 crc kubenswrapper[4830]: I0227 16:34:00.183006 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="09849d6c-7457-4130-9074-73154d22af1f" containerName="mariadb-account-create-update" Feb 27 16:34:00 crc kubenswrapper[4830]: I0227 16:34:00.183029 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc737ee4-d87c-4276-a6d1-6f3144879542" containerName="ovs-vswitchd" Feb 27 16:34:00 crc kubenswrapper[4830]: I0227 16:34:00.183047 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2fe2ad2-a0de-49aa-95fd-ef5f15032676" containerName="sg-core" Feb 27 16:34:00 crc kubenswrapper[4830]: I0227 16:34:00.183066 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="47514135-95a6-4b77-815a-ebf23a3cab82" containerName="rabbitmq" Feb 27 16:34:00 crc kubenswrapper[4830]: I0227 16:34:00.183083 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f" containerName="account-auditor" Feb 27 16:34:00 crc kubenswrapper[4830]: I0227 16:34:00.183108 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="73fa27e0-b59d-44b0-8648-7e696f71cd61" containerName="glance-httpd" Feb 27 16:34:00 crc kubenswrapper[4830]: I0227 16:34:00.183135 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="acdbf1f3-efd7-4181-b99c-a0697c465c4b" containerName="neutron-httpd" Feb 27 16:34:00 crc kubenswrapper[4830]: I0227 16:34:00.183155 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f" containerName="object-expirer" Feb 27 16:34:00 crc kubenswrapper[4830]: I0227 16:34:00.183178 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4280aaf-817d-41e1-9867-715359ae322e" containerName="nova-metadata-metadata" Feb 27 16:34:00 crc kubenswrapper[4830]: I0227 16:34:00.183202 4830 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="eb3cdab6-15fa-40e1-a187-e277086227da" containerName="memcached" Feb 27 16:34:00 crc kubenswrapper[4830]: I0227 16:34:00.183227 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f" containerName="object-server" Feb 27 16:34:00 crc kubenswrapper[4830]: I0227 16:34:00.183244 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0f13fa9-3e9d-4d0b-8f8f-bcca14e1617f" containerName="swift-recon-cron" Feb 27 16:34:00 crc kubenswrapper[4830]: I0227 16:34:00.187229 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536834-qfhvj"] Feb 27 16:34:00 crc kubenswrapper[4830]: I0227 16:34:00.187439 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536834-qfhvj" Feb 27 16:34:00 crc kubenswrapper[4830]: I0227 16:34:00.193739 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lmbtm" Feb 27 16:34:00 crc kubenswrapper[4830]: I0227 16:34:00.193800 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 16:34:00 crc kubenswrapper[4830]: I0227 16:34:00.194268 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 16:34:00 crc kubenswrapper[4830]: I0227 16:34:00.335095 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6548t\" (UniqueName: \"kubernetes.io/projected/fb84cf11-0669-422d-9608-f5b339989bd5-kube-api-access-6548t\") pod \"auto-csr-approver-29536834-qfhvj\" (UID: \"fb84cf11-0669-422d-9608-f5b339989bd5\") " pod="openshift-infra/auto-csr-approver-29536834-qfhvj" Feb 27 16:34:00 crc kubenswrapper[4830]: I0227 16:34:00.436724 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-6548t\" (UniqueName: \"kubernetes.io/projected/fb84cf11-0669-422d-9608-f5b339989bd5-kube-api-access-6548t\") pod \"auto-csr-approver-29536834-qfhvj\" (UID: \"fb84cf11-0669-422d-9608-f5b339989bd5\") " pod="openshift-infra/auto-csr-approver-29536834-qfhvj" Feb 27 16:34:00 crc kubenswrapper[4830]: I0227 16:34:00.464073 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6548t\" (UniqueName: \"kubernetes.io/projected/fb84cf11-0669-422d-9608-f5b339989bd5-kube-api-access-6548t\") pod \"auto-csr-approver-29536834-qfhvj\" (UID: \"fb84cf11-0669-422d-9608-f5b339989bd5\") " pod="openshift-infra/auto-csr-approver-29536834-qfhvj" Feb 27 16:34:00 crc kubenswrapper[4830]: I0227 16:34:00.528110 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536834-qfhvj" Feb 27 16:34:01 crc kubenswrapper[4830]: I0227 16:34:01.083401 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536834-qfhvj"] Feb 27 16:34:01 crc kubenswrapper[4830]: I0227 16:34:01.894214 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536834-qfhvj" event={"ID":"fb84cf11-0669-422d-9608-f5b339989bd5","Type":"ContainerStarted","Data":"230c935e4704e3ea10647bed9dfcd1bb14cfbd5af7f9451b37ea809f9a98eeed"} Feb 27 16:34:02 crc kubenswrapper[4830]: I0227 16:34:02.908752 4830 generic.go:334] "Generic (PLEG): container finished" podID="fb84cf11-0669-422d-9608-f5b339989bd5" containerID="de445fa7b0d1be3672075bf8502f43e5f1bdfe4724a214743128c8f9140d38f5" exitCode=0 Feb 27 16:34:02 crc kubenswrapper[4830]: I0227 16:34:02.908847 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536834-qfhvj" event={"ID":"fb84cf11-0669-422d-9608-f5b339989bd5","Type":"ContainerDied","Data":"de445fa7b0d1be3672075bf8502f43e5f1bdfe4724a214743128c8f9140d38f5"} Feb 27 16:34:04 crc kubenswrapper[4830]: I0227 
16:34:04.292929 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536834-qfhvj" Feb 27 16:34:04 crc kubenswrapper[4830]: I0227 16:34:04.403406 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6548t\" (UniqueName: \"kubernetes.io/projected/fb84cf11-0669-422d-9608-f5b339989bd5-kube-api-access-6548t\") pod \"fb84cf11-0669-422d-9608-f5b339989bd5\" (UID: \"fb84cf11-0669-422d-9608-f5b339989bd5\") " Feb 27 16:34:04 crc kubenswrapper[4830]: I0227 16:34:04.411321 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb84cf11-0669-422d-9608-f5b339989bd5-kube-api-access-6548t" (OuterVolumeSpecName: "kube-api-access-6548t") pod "fb84cf11-0669-422d-9608-f5b339989bd5" (UID: "fb84cf11-0669-422d-9608-f5b339989bd5"). InnerVolumeSpecName "kube-api-access-6548t". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:34:04 crc kubenswrapper[4830]: I0227 16:34:04.505503 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6548t\" (UniqueName: \"kubernetes.io/projected/fb84cf11-0669-422d-9608-f5b339989bd5-kube-api-access-6548t\") on node \"crc\" DevicePath \"\"" Feb 27 16:34:04 crc kubenswrapper[4830]: I0227 16:34:04.943185 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536834-qfhvj" event={"ID":"fb84cf11-0669-422d-9608-f5b339989bd5","Type":"ContainerDied","Data":"230c935e4704e3ea10647bed9dfcd1bb14cfbd5af7f9451b37ea809f9a98eeed"} Feb 27 16:34:04 crc kubenswrapper[4830]: I0227 16:34:04.943245 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="230c935e4704e3ea10647bed9dfcd1bb14cfbd5af7f9451b37ea809f9a98eeed" Feb 27 16:34:04 crc kubenswrapper[4830]: I0227 16:34:04.943267 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536834-qfhvj" Feb 27 16:34:05 crc kubenswrapper[4830]: I0227 16:34:05.386637 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536828-8jpfz"] Feb 27 16:34:05 crc kubenswrapper[4830]: I0227 16:34:05.397325 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536828-8jpfz"] Feb 27 16:34:06 crc kubenswrapper[4830]: I0227 16:34:06.779506 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea3b2c46-c98b-4cf7-b7a1-0a7dfe22cc63" path="/var/lib/kubelet/pods/ea3b2c46-c98b-4cf7-b7a1-0a7dfe22cc63/volumes" Feb 27 16:34:08 crc kubenswrapper[4830]: I0227 16:34:08.762845 4830 scope.go:117] "RemoveContainer" containerID="ca4233a6fa911a1d6b959075103b585a592935c9e9a6d178d17fef41a2fea048" Feb 27 16:34:08 crc kubenswrapper[4830]: E0227 16:34:08.763414 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 16:34:21 crc kubenswrapper[4830]: I0227 16:34:21.763105 4830 scope.go:117] "RemoveContainer" containerID="ca4233a6fa911a1d6b959075103b585a592935c9e9a6d178d17fef41a2fea048" Feb 27 16:34:21 crc kubenswrapper[4830]: E0227 16:34:21.764140 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" 
podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 16:34:32 crc kubenswrapper[4830]: I0227 16:34:32.605836 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-nd4gj"] Feb 27 16:34:32 crc kubenswrapper[4830]: E0227 16:34:32.607128 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09849d6c-7457-4130-9074-73154d22af1f" containerName="mariadb-account-create-update" Feb 27 16:34:32 crc kubenswrapper[4830]: I0227 16:34:32.607162 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="09849d6c-7457-4130-9074-73154d22af1f" containerName="mariadb-account-create-update" Feb 27 16:34:32 crc kubenswrapper[4830]: E0227 16:34:32.607206 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb84cf11-0669-422d-9608-f5b339989bd5" containerName="oc" Feb 27 16:34:32 crc kubenswrapper[4830]: I0227 16:34:32.607224 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb84cf11-0669-422d-9608-f5b339989bd5" containerName="oc" Feb 27 16:34:32 crc kubenswrapper[4830]: I0227 16:34:32.612618 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb84cf11-0669-422d-9608-f5b339989bd5" containerName="oc" Feb 27 16:34:32 crc kubenswrapper[4830]: I0227 16:34:32.615117 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nd4gj" Feb 27 16:34:32 crc kubenswrapper[4830]: I0227 16:34:32.626139 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nd4gj"] Feb 27 16:34:32 crc kubenswrapper[4830]: I0227 16:34:32.716222 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4rz8\" (UniqueName: \"kubernetes.io/projected/545c166c-7c85-434c-9043-f652abc2843d-kube-api-access-b4rz8\") pod \"community-operators-nd4gj\" (UID: \"545c166c-7c85-434c-9043-f652abc2843d\") " pod="openshift-marketplace/community-operators-nd4gj" Feb 27 16:34:32 crc kubenswrapper[4830]: I0227 16:34:32.716290 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/545c166c-7c85-434c-9043-f652abc2843d-catalog-content\") pod \"community-operators-nd4gj\" (UID: \"545c166c-7c85-434c-9043-f652abc2843d\") " pod="openshift-marketplace/community-operators-nd4gj" Feb 27 16:34:32 crc kubenswrapper[4830]: I0227 16:34:32.716400 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/545c166c-7c85-434c-9043-f652abc2843d-utilities\") pod \"community-operators-nd4gj\" (UID: \"545c166c-7c85-434c-9043-f652abc2843d\") " pod="openshift-marketplace/community-operators-nd4gj" Feb 27 16:34:32 crc kubenswrapper[4830]: I0227 16:34:32.817651 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b4rz8\" (UniqueName: \"kubernetes.io/projected/545c166c-7c85-434c-9043-f652abc2843d-kube-api-access-b4rz8\") pod \"community-operators-nd4gj\" (UID: \"545c166c-7c85-434c-9043-f652abc2843d\") " pod="openshift-marketplace/community-operators-nd4gj" Feb 27 16:34:32 crc kubenswrapper[4830]: I0227 16:34:32.817721 4830 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/545c166c-7c85-434c-9043-f652abc2843d-catalog-content\") pod \"community-operators-nd4gj\" (UID: \"545c166c-7c85-434c-9043-f652abc2843d\") " pod="openshift-marketplace/community-operators-nd4gj" Feb 27 16:34:32 crc kubenswrapper[4830]: I0227 16:34:32.817784 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/545c166c-7c85-434c-9043-f652abc2843d-utilities\") pod \"community-operators-nd4gj\" (UID: \"545c166c-7c85-434c-9043-f652abc2843d\") " pod="openshift-marketplace/community-operators-nd4gj" Feb 27 16:34:32 crc kubenswrapper[4830]: I0227 16:34:32.818412 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/545c166c-7c85-434c-9043-f652abc2843d-catalog-content\") pod \"community-operators-nd4gj\" (UID: \"545c166c-7c85-434c-9043-f652abc2843d\") " pod="openshift-marketplace/community-operators-nd4gj" Feb 27 16:34:32 crc kubenswrapper[4830]: I0227 16:34:32.818425 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/545c166c-7c85-434c-9043-f652abc2843d-utilities\") pod \"community-operators-nd4gj\" (UID: \"545c166c-7c85-434c-9043-f652abc2843d\") " pod="openshift-marketplace/community-operators-nd4gj" Feb 27 16:34:32 crc kubenswrapper[4830]: I0227 16:34:32.844115 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4rz8\" (UniqueName: \"kubernetes.io/projected/545c166c-7c85-434c-9043-f652abc2843d-kube-api-access-b4rz8\") pod \"community-operators-nd4gj\" (UID: \"545c166c-7c85-434c-9043-f652abc2843d\") " pod="openshift-marketplace/community-operators-nd4gj" Feb 27 16:34:32 crc kubenswrapper[4830]: I0227 16:34:32.944456 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nd4gj" Feb 27 16:34:33 crc kubenswrapper[4830]: I0227 16:34:33.236504 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nd4gj"] Feb 27 16:34:33 crc kubenswrapper[4830]: I0227 16:34:33.295529 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nd4gj" event={"ID":"545c166c-7c85-434c-9043-f652abc2843d","Type":"ContainerStarted","Data":"0007ffb652b14fcd6efff09a5a0ef20e33d67b6986550b0cb631b762ea3db795"} Feb 27 16:34:34 crc kubenswrapper[4830]: I0227 16:34:34.313400 4830 generic.go:334] "Generic (PLEG): container finished" podID="545c166c-7c85-434c-9043-f652abc2843d" containerID="f948743d55e8fa6b4af022c00cdcfed4791b81cec7b8cf5b157c2820b7b433ad" exitCode=0 Feb 27 16:34:34 crc kubenswrapper[4830]: I0227 16:34:34.313785 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nd4gj" event={"ID":"545c166c-7c85-434c-9043-f652abc2843d","Type":"ContainerDied","Data":"f948743d55e8fa6b4af022c00cdcfed4791b81cec7b8cf5b157c2820b7b433ad"} Feb 27 16:34:35 crc kubenswrapper[4830]: I0227 16:34:35.327235 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nd4gj" event={"ID":"545c166c-7c85-434c-9043-f652abc2843d","Type":"ContainerStarted","Data":"15e517dab86bc8b10afe498033228cdbb886549ed9fe98108f5a6e3be19ad22a"} Feb 27 16:34:35 crc kubenswrapper[4830]: I0227 16:34:35.765258 4830 scope.go:117] "RemoveContainer" containerID="ca4233a6fa911a1d6b959075103b585a592935c9e9a6d178d17fef41a2fea048" Feb 27 16:34:35 crc kubenswrapper[4830]: E0227 16:34:35.765596 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 16:34:36 crc kubenswrapper[4830]: I0227 16:34:36.340850 4830 generic.go:334] "Generic (PLEG): container finished" podID="545c166c-7c85-434c-9043-f652abc2843d" containerID="15e517dab86bc8b10afe498033228cdbb886549ed9fe98108f5a6e3be19ad22a" exitCode=0 Feb 27 16:34:36 crc kubenswrapper[4830]: I0227 16:34:36.340919 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nd4gj" event={"ID":"545c166c-7c85-434c-9043-f652abc2843d","Type":"ContainerDied","Data":"15e517dab86bc8b10afe498033228cdbb886549ed9fe98108f5a6e3be19ad22a"} Feb 27 16:34:37 crc kubenswrapper[4830]: I0227 16:34:37.356248 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nd4gj" event={"ID":"545c166c-7c85-434c-9043-f652abc2843d","Type":"ContainerStarted","Data":"9ffc97979694f788112cb8231f169b4f004632fd222b61042972979fc3ca0d79"} Feb 27 16:34:37 crc kubenswrapper[4830]: I0227 16:34:37.400536 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-nd4gj" podStartSLOduration=2.676845267 podStartE2EDuration="5.400513249s" podCreationTimestamp="2026-02-27 16:34:32 +0000 UTC" firstStartedPulling="2026-02-27 16:34:34.318114087 +0000 UTC m=+1670.407386590" lastFinishedPulling="2026-02-27 16:34:37.041782069 +0000 UTC m=+1673.131054572" observedRunningTime="2026-02-27 16:34:37.388614165 +0000 UTC m=+1673.477886668" watchObservedRunningTime="2026-02-27 16:34:37.400513249 +0000 UTC m=+1673.489785752" Feb 27 16:34:40 crc kubenswrapper[4830]: I0227 16:34:40.563543 4830 scope.go:117] "RemoveContainer" containerID="68e148d9c338e25590dbfaf5b9ed31c09c1d25b0cdfd43f35a0878475443aaf7" Feb 27 16:34:40 crc kubenswrapper[4830]: I0227 16:34:40.601630 
4830 scope.go:117] "RemoveContainer" containerID="03fae1fb8e9a6d2c747afacdabeb6fc5b1752527700bbfdf259b9f15c3429baa" Feb 27 16:34:40 crc kubenswrapper[4830]: I0227 16:34:40.645663 4830 scope.go:117] "RemoveContainer" containerID="500cf1204bd29c7d932fe8fd9f4fcaa432d627c80cd7cc1c4807fae6e659c38a" Feb 27 16:34:40 crc kubenswrapper[4830]: I0227 16:34:40.697136 4830 scope.go:117] "RemoveContainer" containerID="4ad23027e7d75e6249247d76978f4d82e1283097eecebb5ce536bbb32a4f656a" Feb 27 16:34:40 crc kubenswrapper[4830]: I0227 16:34:40.748305 4830 scope.go:117] "RemoveContainer" containerID="32f67a0fa88a204c52134df945dd8bacfe73574220c11eccbe9250a8c9a31014" Feb 27 16:34:40 crc kubenswrapper[4830]: I0227 16:34:40.781306 4830 scope.go:117] "RemoveContainer" containerID="3f5b67b1fe465ff975e3223d66d6907410f1c1f41206c171986f3359ac5885d2" Feb 27 16:34:40 crc kubenswrapper[4830]: I0227 16:34:40.806279 4830 scope.go:117] "RemoveContainer" containerID="cb84941fad9c3a38a9d12732b8e29c8e9b49915990ba8ad56e2677abfe635ad9" Feb 27 16:34:40 crc kubenswrapper[4830]: I0227 16:34:40.863452 4830 scope.go:117] "RemoveContainer" containerID="5618df31dec13a8fa8c264acbc16b8fc53b1c9f9523f6216c8bce6be25fbacb1" Feb 27 16:34:40 crc kubenswrapper[4830]: I0227 16:34:40.888887 4830 scope.go:117] "RemoveContainer" containerID="ecb8bd5d0eb2c9090d00fc7c2e75ec3a65b6414bbffd98c8d36fcdd1b36d3983" Feb 27 16:34:40 crc kubenswrapper[4830]: I0227 16:34:40.919199 4830 scope.go:117] "RemoveContainer" containerID="37bbcf553bf7957f279c1d2e295e8937f67d4c1dc6186d8ebb25e160632ad917" Feb 27 16:34:40 crc kubenswrapper[4830]: I0227 16:34:40.948740 4830 scope.go:117] "RemoveContainer" containerID="c38481cf7ee01c4ffc8908412dd17ed7ec743f3072b5c6e5861cbac77132070e" Feb 27 16:34:40 crc kubenswrapper[4830]: I0227 16:34:40.982826 4830 scope.go:117] "RemoveContainer" containerID="edf7280348701155c989d49d0431a7c220e4237323ae8e514c1fed6e11d215dd" Feb 27 16:34:41 crc kubenswrapper[4830]: I0227 16:34:41.010817 4830 scope.go:117] 
"RemoveContainer" containerID="03feac40296d7a4209bb84be744dfc7a7221fe91f52d107820ff8c50b9949c8f" Feb 27 16:34:42 crc kubenswrapper[4830]: I0227 16:34:42.944811 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-nd4gj" Feb 27 16:34:42 crc kubenswrapper[4830]: I0227 16:34:42.945196 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-nd4gj" Feb 27 16:34:43 crc kubenswrapper[4830]: I0227 16:34:43.022798 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-nd4gj" Feb 27 16:34:43 crc kubenswrapper[4830]: I0227 16:34:43.492856 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-nd4gj" Feb 27 16:34:43 crc kubenswrapper[4830]: I0227 16:34:43.571555 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nd4gj"] Feb 27 16:34:45 crc kubenswrapper[4830]: I0227 16:34:45.446002 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-nd4gj" podUID="545c166c-7c85-434c-9043-f652abc2843d" containerName="registry-server" containerID="cri-o://9ffc97979694f788112cb8231f169b4f004632fd222b61042972979fc3ca0d79" gracePeriod=2 Feb 27 16:34:45 crc kubenswrapper[4830]: I0227 16:34:45.905846 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nd4gj" Feb 27 16:34:45 crc kubenswrapper[4830]: I0227 16:34:45.934068 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b4rz8\" (UniqueName: \"kubernetes.io/projected/545c166c-7c85-434c-9043-f652abc2843d-kube-api-access-b4rz8\") pod \"545c166c-7c85-434c-9043-f652abc2843d\" (UID: \"545c166c-7c85-434c-9043-f652abc2843d\") " Feb 27 16:34:45 crc kubenswrapper[4830]: I0227 16:34:45.934114 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/545c166c-7c85-434c-9043-f652abc2843d-utilities\") pod \"545c166c-7c85-434c-9043-f652abc2843d\" (UID: \"545c166c-7c85-434c-9043-f652abc2843d\") " Feb 27 16:34:45 crc kubenswrapper[4830]: I0227 16:34:45.934277 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/545c166c-7c85-434c-9043-f652abc2843d-catalog-content\") pod \"545c166c-7c85-434c-9043-f652abc2843d\" (UID: \"545c166c-7c85-434c-9043-f652abc2843d\") " Feb 27 16:34:45 crc kubenswrapper[4830]: I0227 16:34:45.935485 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/545c166c-7c85-434c-9043-f652abc2843d-utilities" (OuterVolumeSpecName: "utilities") pod "545c166c-7c85-434c-9043-f652abc2843d" (UID: "545c166c-7c85-434c-9043-f652abc2843d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 16:34:45 crc kubenswrapper[4830]: I0227 16:34:45.944266 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/545c166c-7c85-434c-9043-f652abc2843d-kube-api-access-b4rz8" (OuterVolumeSpecName: "kube-api-access-b4rz8") pod "545c166c-7c85-434c-9043-f652abc2843d" (UID: "545c166c-7c85-434c-9043-f652abc2843d"). InnerVolumeSpecName "kube-api-access-b4rz8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:34:46 crc kubenswrapper[4830]: I0227 16:34:46.035812 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b4rz8\" (UniqueName: \"kubernetes.io/projected/545c166c-7c85-434c-9043-f652abc2843d-kube-api-access-b4rz8\") on node \"crc\" DevicePath \"\"" Feb 27 16:34:46 crc kubenswrapper[4830]: I0227 16:34:46.035847 4830 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/545c166c-7c85-434c-9043-f652abc2843d-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 16:34:46 crc kubenswrapper[4830]: I0227 16:34:46.383850 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/545c166c-7c85-434c-9043-f652abc2843d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "545c166c-7c85-434c-9043-f652abc2843d" (UID: "545c166c-7c85-434c-9043-f652abc2843d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 16:34:46 crc kubenswrapper[4830]: I0227 16:34:46.442755 4830 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/545c166c-7c85-434c-9043-f652abc2843d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 16:34:46 crc kubenswrapper[4830]: I0227 16:34:46.454816 4830 generic.go:334] "Generic (PLEG): container finished" podID="545c166c-7c85-434c-9043-f652abc2843d" containerID="9ffc97979694f788112cb8231f169b4f004632fd222b61042972979fc3ca0d79" exitCode=0 Feb 27 16:34:46 crc kubenswrapper[4830]: I0227 16:34:46.454865 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nd4gj" event={"ID":"545c166c-7c85-434c-9043-f652abc2843d","Type":"ContainerDied","Data":"9ffc97979694f788112cb8231f169b4f004632fd222b61042972979fc3ca0d79"} Feb 27 16:34:46 crc kubenswrapper[4830]: I0227 16:34:46.454889 4830 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openshift-marketplace/community-operators-nd4gj" Feb 27 16:34:46 crc kubenswrapper[4830]: I0227 16:34:46.454923 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nd4gj" event={"ID":"545c166c-7c85-434c-9043-f652abc2843d","Type":"ContainerDied","Data":"0007ffb652b14fcd6efff09a5a0ef20e33d67b6986550b0cb631b762ea3db795"} Feb 27 16:34:46 crc kubenswrapper[4830]: I0227 16:34:46.454994 4830 scope.go:117] "RemoveContainer" containerID="9ffc97979694f788112cb8231f169b4f004632fd222b61042972979fc3ca0d79" Feb 27 16:34:46 crc kubenswrapper[4830]: I0227 16:34:46.471277 4830 scope.go:117] "RemoveContainer" containerID="15e517dab86bc8b10afe498033228cdbb886549ed9fe98108f5a6e3be19ad22a" Feb 27 16:34:46 crc kubenswrapper[4830]: I0227 16:34:46.479787 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nd4gj"] Feb 27 16:34:46 crc kubenswrapper[4830]: I0227 16:34:46.496548 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-nd4gj"] Feb 27 16:34:46 crc kubenswrapper[4830]: I0227 16:34:46.501939 4830 scope.go:117] "RemoveContainer" containerID="f948743d55e8fa6b4af022c00cdcfed4791b81cec7b8cf5b157c2820b7b433ad" Feb 27 16:34:46 crc kubenswrapper[4830]: I0227 16:34:46.515890 4830 scope.go:117] "RemoveContainer" containerID="9ffc97979694f788112cb8231f169b4f004632fd222b61042972979fc3ca0d79" Feb 27 16:34:46 crc kubenswrapper[4830]: E0227 16:34:46.516328 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ffc97979694f788112cb8231f169b4f004632fd222b61042972979fc3ca0d79\": container with ID starting with 9ffc97979694f788112cb8231f169b4f004632fd222b61042972979fc3ca0d79 not found: ID does not exist" containerID="9ffc97979694f788112cb8231f169b4f004632fd222b61042972979fc3ca0d79" Feb 27 16:34:46 crc kubenswrapper[4830]: I0227 16:34:46.516367 
4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ffc97979694f788112cb8231f169b4f004632fd222b61042972979fc3ca0d79"} err="failed to get container status \"9ffc97979694f788112cb8231f169b4f004632fd222b61042972979fc3ca0d79\": rpc error: code = NotFound desc = could not find container \"9ffc97979694f788112cb8231f169b4f004632fd222b61042972979fc3ca0d79\": container with ID starting with 9ffc97979694f788112cb8231f169b4f004632fd222b61042972979fc3ca0d79 not found: ID does not exist" Feb 27 16:34:46 crc kubenswrapper[4830]: I0227 16:34:46.516393 4830 scope.go:117] "RemoveContainer" containerID="15e517dab86bc8b10afe498033228cdbb886549ed9fe98108f5a6e3be19ad22a" Feb 27 16:34:46 crc kubenswrapper[4830]: E0227 16:34:46.516672 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"15e517dab86bc8b10afe498033228cdbb886549ed9fe98108f5a6e3be19ad22a\": container with ID starting with 15e517dab86bc8b10afe498033228cdbb886549ed9fe98108f5a6e3be19ad22a not found: ID does not exist" containerID="15e517dab86bc8b10afe498033228cdbb886549ed9fe98108f5a6e3be19ad22a" Feb 27 16:34:46 crc kubenswrapper[4830]: I0227 16:34:46.516702 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15e517dab86bc8b10afe498033228cdbb886549ed9fe98108f5a6e3be19ad22a"} err="failed to get container status \"15e517dab86bc8b10afe498033228cdbb886549ed9fe98108f5a6e3be19ad22a\": rpc error: code = NotFound desc = could not find container \"15e517dab86bc8b10afe498033228cdbb886549ed9fe98108f5a6e3be19ad22a\": container with ID starting with 15e517dab86bc8b10afe498033228cdbb886549ed9fe98108f5a6e3be19ad22a not found: ID does not exist" Feb 27 16:34:46 crc kubenswrapper[4830]: I0227 16:34:46.516719 4830 scope.go:117] "RemoveContainer" containerID="f948743d55e8fa6b4af022c00cdcfed4791b81cec7b8cf5b157c2820b7b433ad" Feb 27 16:34:46 crc kubenswrapper[4830]: E0227 
16:34:46.517010 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f948743d55e8fa6b4af022c00cdcfed4791b81cec7b8cf5b157c2820b7b433ad\": container with ID starting with f948743d55e8fa6b4af022c00cdcfed4791b81cec7b8cf5b157c2820b7b433ad not found: ID does not exist" containerID="f948743d55e8fa6b4af022c00cdcfed4791b81cec7b8cf5b157c2820b7b433ad" Feb 27 16:34:46 crc kubenswrapper[4830]: I0227 16:34:46.517041 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f948743d55e8fa6b4af022c00cdcfed4791b81cec7b8cf5b157c2820b7b433ad"} err="failed to get container status \"f948743d55e8fa6b4af022c00cdcfed4791b81cec7b8cf5b157c2820b7b433ad\": rpc error: code = NotFound desc = could not find container \"f948743d55e8fa6b4af022c00cdcfed4791b81cec7b8cf5b157c2820b7b433ad\": container with ID starting with f948743d55e8fa6b4af022c00cdcfed4791b81cec7b8cf5b157c2820b7b433ad not found: ID does not exist" Feb 27 16:34:46 crc kubenswrapper[4830]: I0227 16:34:46.776635 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="545c166c-7c85-434c-9043-f652abc2843d" path="/var/lib/kubelet/pods/545c166c-7c85-434c-9043-f652abc2843d/volumes" Feb 27 16:34:49 crc kubenswrapper[4830]: I0227 16:34:49.763042 4830 scope.go:117] "RemoveContainer" containerID="ca4233a6fa911a1d6b959075103b585a592935c9e9a6d178d17fef41a2fea048" Feb 27 16:34:49 crc kubenswrapper[4830]: E0227 16:34:49.763639 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 16:35:02 crc kubenswrapper[4830]: I0227 16:35:02.763752 
4830 scope.go:117] "RemoveContainer" containerID="ca4233a6fa911a1d6b959075103b585a592935c9e9a6d178d17fef41a2fea048" Feb 27 16:35:02 crc kubenswrapper[4830]: E0227 16:35:02.764710 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 16:35:05 crc kubenswrapper[4830]: I0227 16:35:05.590820 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-g7sxt"] Feb 27 16:35:05 crc kubenswrapper[4830]: E0227 16:35:05.591676 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="545c166c-7c85-434c-9043-f652abc2843d" containerName="registry-server" Feb 27 16:35:05 crc kubenswrapper[4830]: I0227 16:35:05.591699 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="545c166c-7c85-434c-9043-f652abc2843d" containerName="registry-server" Feb 27 16:35:05 crc kubenswrapper[4830]: E0227 16:35:05.591726 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="545c166c-7c85-434c-9043-f652abc2843d" containerName="extract-content" Feb 27 16:35:05 crc kubenswrapper[4830]: I0227 16:35:05.591740 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="545c166c-7c85-434c-9043-f652abc2843d" containerName="extract-content" Feb 27 16:35:05 crc kubenswrapper[4830]: E0227 16:35:05.591919 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="545c166c-7c85-434c-9043-f652abc2843d" containerName="extract-utilities" Feb 27 16:35:05 crc kubenswrapper[4830]: I0227 16:35:05.591932 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="545c166c-7c85-434c-9043-f652abc2843d" containerName="extract-utilities" Feb 27 16:35:05 crc 
kubenswrapper[4830]: I0227 16:35:05.592253 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="545c166c-7c85-434c-9043-f652abc2843d" containerName="registry-server" Feb 27 16:35:05 crc kubenswrapper[4830]: I0227 16:35:05.594070 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-g7sxt" Feb 27 16:35:05 crc kubenswrapper[4830]: I0227 16:35:05.621710 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-g7sxt"] Feb 27 16:35:05 crc kubenswrapper[4830]: I0227 16:35:05.676040 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20520c1b-b75e-473e-934d-ad1dbf402085-utilities\") pod \"certified-operators-g7sxt\" (UID: \"20520c1b-b75e-473e-934d-ad1dbf402085\") " pod="openshift-marketplace/certified-operators-g7sxt" Feb 27 16:35:05 crc kubenswrapper[4830]: I0227 16:35:05.676088 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20520c1b-b75e-473e-934d-ad1dbf402085-catalog-content\") pod \"certified-operators-g7sxt\" (UID: \"20520c1b-b75e-473e-934d-ad1dbf402085\") " pod="openshift-marketplace/certified-operators-g7sxt" Feb 27 16:35:05 crc kubenswrapper[4830]: I0227 16:35:05.676193 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8csm\" (UniqueName: \"kubernetes.io/projected/20520c1b-b75e-473e-934d-ad1dbf402085-kube-api-access-k8csm\") pod \"certified-operators-g7sxt\" (UID: \"20520c1b-b75e-473e-934d-ad1dbf402085\") " pod="openshift-marketplace/certified-operators-g7sxt" Feb 27 16:35:05 crc kubenswrapper[4830]: I0227 16:35:05.777570 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k8csm\" (UniqueName: 
\"kubernetes.io/projected/20520c1b-b75e-473e-934d-ad1dbf402085-kube-api-access-k8csm\") pod \"certified-operators-g7sxt\" (UID: \"20520c1b-b75e-473e-934d-ad1dbf402085\") " pod="openshift-marketplace/certified-operators-g7sxt" Feb 27 16:35:05 crc kubenswrapper[4830]: I0227 16:35:05.777684 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20520c1b-b75e-473e-934d-ad1dbf402085-utilities\") pod \"certified-operators-g7sxt\" (UID: \"20520c1b-b75e-473e-934d-ad1dbf402085\") " pod="openshift-marketplace/certified-operators-g7sxt" Feb 27 16:35:05 crc kubenswrapper[4830]: I0227 16:35:05.777704 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20520c1b-b75e-473e-934d-ad1dbf402085-catalog-content\") pod \"certified-operators-g7sxt\" (UID: \"20520c1b-b75e-473e-934d-ad1dbf402085\") " pod="openshift-marketplace/certified-operators-g7sxt" Feb 27 16:35:05 crc kubenswrapper[4830]: I0227 16:35:05.778370 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20520c1b-b75e-473e-934d-ad1dbf402085-catalog-content\") pod \"certified-operators-g7sxt\" (UID: \"20520c1b-b75e-473e-934d-ad1dbf402085\") " pod="openshift-marketplace/certified-operators-g7sxt" Feb 27 16:35:05 crc kubenswrapper[4830]: I0227 16:35:05.779420 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20520c1b-b75e-473e-934d-ad1dbf402085-utilities\") pod \"certified-operators-g7sxt\" (UID: \"20520c1b-b75e-473e-934d-ad1dbf402085\") " pod="openshift-marketplace/certified-operators-g7sxt" Feb 27 16:35:05 crc kubenswrapper[4830]: I0227 16:35:05.801208 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8csm\" (UniqueName: 
\"kubernetes.io/projected/20520c1b-b75e-473e-934d-ad1dbf402085-kube-api-access-k8csm\") pod \"certified-operators-g7sxt\" (UID: \"20520c1b-b75e-473e-934d-ad1dbf402085\") " pod="openshift-marketplace/certified-operators-g7sxt" Feb 27 16:35:05 crc kubenswrapper[4830]: I0227 16:35:05.985858 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-g7sxt" Feb 27 16:35:06 crc kubenswrapper[4830]: I0227 16:35:06.254934 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-g7sxt"] Feb 27 16:35:06 crc kubenswrapper[4830]: I0227 16:35:06.699061 4830 generic.go:334] "Generic (PLEG): container finished" podID="20520c1b-b75e-473e-934d-ad1dbf402085" containerID="1db8d9c0e8058a623120c65acedba6982e177f7dea16131f7e6928cd89d87a5a" exitCode=0 Feb 27 16:35:06 crc kubenswrapper[4830]: I0227 16:35:06.699104 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g7sxt" event={"ID":"20520c1b-b75e-473e-934d-ad1dbf402085","Type":"ContainerDied","Data":"1db8d9c0e8058a623120c65acedba6982e177f7dea16131f7e6928cd89d87a5a"} Feb 27 16:35:06 crc kubenswrapper[4830]: I0227 16:35:06.699129 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g7sxt" event={"ID":"20520c1b-b75e-473e-934d-ad1dbf402085","Type":"ContainerStarted","Data":"6501f9b985829343aa5fdd982f42aaa67e91c7c728299e5251e4b4b5b9718299"} Feb 27 16:35:07 crc kubenswrapper[4830]: I0227 16:35:07.714140 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g7sxt" event={"ID":"20520c1b-b75e-473e-934d-ad1dbf402085","Type":"ContainerStarted","Data":"f4892ad34372d4bebb8ce09703950cb8e82d7815c36ee79403142c01e93278ac"} Feb 27 16:35:08 crc kubenswrapper[4830]: I0227 16:35:08.748572 4830 generic.go:334] "Generic (PLEG): container finished" podID="20520c1b-b75e-473e-934d-ad1dbf402085" 
containerID="f4892ad34372d4bebb8ce09703950cb8e82d7815c36ee79403142c01e93278ac" exitCode=0 Feb 27 16:35:08 crc kubenswrapper[4830]: I0227 16:35:08.749016 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g7sxt" event={"ID":"20520c1b-b75e-473e-934d-ad1dbf402085","Type":"ContainerDied","Data":"f4892ad34372d4bebb8ce09703950cb8e82d7815c36ee79403142c01e93278ac"} Feb 27 16:35:08 crc kubenswrapper[4830]: I0227 16:35:08.796747 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-2zsnb"] Feb 27 16:35:08 crc kubenswrapper[4830]: I0227 16:35:08.799304 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2zsnb" Feb 27 16:35:08 crc kubenswrapper[4830]: I0227 16:35:08.808640 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2zsnb"] Feb 27 16:35:08 crc kubenswrapper[4830]: I0227 16:35:08.838849 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e24be1f-6313-4c81-a77c-9a52a1399c92-catalog-content\") pod \"redhat-marketplace-2zsnb\" (UID: \"6e24be1f-6313-4c81-a77c-9a52a1399c92\") " pod="openshift-marketplace/redhat-marketplace-2zsnb" Feb 27 16:35:08 crc kubenswrapper[4830]: I0227 16:35:08.838908 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7zvx\" (UniqueName: \"kubernetes.io/projected/6e24be1f-6313-4c81-a77c-9a52a1399c92-kube-api-access-z7zvx\") pod \"redhat-marketplace-2zsnb\" (UID: \"6e24be1f-6313-4c81-a77c-9a52a1399c92\") " pod="openshift-marketplace/redhat-marketplace-2zsnb" Feb 27 16:35:08 crc kubenswrapper[4830]: I0227 16:35:08.839046 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/6e24be1f-6313-4c81-a77c-9a52a1399c92-utilities\") pod \"redhat-marketplace-2zsnb\" (UID: \"6e24be1f-6313-4c81-a77c-9a52a1399c92\") " pod="openshift-marketplace/redhat-marketplace-2zsnb" Feb 27 16:35:08 crc kubenswrapper[4830]: I0227 16:35:08.940103 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e24be1f-6313-4c81-a77c-9a52a1399c92-utilities\") pod \"redhat-marketplace-2zsnb\" (UID: \"6e24be1f-6313-4c81-a77c-9a52a1399c92\") " pod="openshift-marketplace/redhat-marketplace-2zsnb" Feb 27 16:35:08 crc kubenswrapper[4830]: I0227 16:35:08.940213 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e24be1f-6313-4c81-a77c-9a52a1399c92-catalog-content\") pod \"redhat-marketplace-2zsnb\" (UID: \"6e24be1f-6313-4c81-a77c-9a52a1399c92\") " pod="openshift-marketplace/redhat-marketplace-2zsnb" Feb 27 16:35:08 crc kubenswrapper[4830]: I0227 16:35:08.940252 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z7zvx\" (UniqueName: \"kubernetes.io/projected/6e24be1f-6313-4c81-a77c-9a52a1399c92-kube-api-access-z7zvx\") pod \"redhat-marketplace-2zsnb\" (UID: \"6e24be1f-6313-4c81-a77c-9a52a1399c92\") " pod="openshift-marketplace/redhat-marketplace-2zsnb" Feb 27 16:35:08 crc kubenswrapper[4830]: I0227 16:35:08.940674 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e24be1f-6313-4c81-a77c-9a52a1399c92-utilities\") pod \"redhat-marketplace-2zsnb\" (UID: \"6e24be1f-6313-4c81-a77c-9a52a1399c92\") " pod="openshift-marketplace/redhat-marketplace-2zsnb" Feb 27 16:35:08 crc kubenswrapper[4830]: I0227 16:35:08.941006 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/6e24be1f-6313-4c81-a77c-9a52a1399c92-catalog-content\") pod \"redhat-marketplace-2zsnb\" (UID: \"6e24be1f-6313-4c81-a77c-9a52a1399c92\") " pod="openshift-marketplace/redhat-marketplace-2zsnb" Feb 27 16:35:08 crc kubenswrapper[4830]: I0227 16:35:08.965030 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7zvx\" (UniqueName: \"kubernetes.io/projected/6e24be1f-6313-4c81-a77c-9a52a1399c92-kube-api-access-z7zvx\") pod \"redhat-marketplace-2zsnb\" (UID: \"6e24be1f-6313-4c81-a77c-9a52a1399c92\") " pod="openshift-marketplace/redhat-marketplace-2zsnb" Feb 27 16:35:09 crc kubenswrapper[4830]: I0227 16:35:09.143579 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2zsnb" Feb 27 16:35:09 crc kubenswrapper[4830]: I0227 16:35:09.532297 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2zsnb"] Feb 27 16:35:09 crc kubenswrapper[4830]: W0227 16:35:09.538355 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6e24be1f_6313_4c81_a77c_9a52a1399c92.slice/crio-d5b34ead2974a904944bf88e5eb15e41b63a138c8b8d0bb8e2c24910bdd7856c WatchSource:0}: Error finding container d5b34ead2974a904944bf88e5eb15e41b63a138c8b8d0bb8e2c24910bdd7856c: Status 404 returned error can't find the container with id d5b34ead2974a904944bf88e5eb15e41b63a138c8b8d0bb8e2c24910bdd7856c Feb 27 16:35:09 crc kubenswrapper[4830]: I0227 16:35:09.759842 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g7sxt" event={"ID":"20520c1b-b75e-473e-934d-ad1dbf402085","Type":"ContainerStarted","Data":"ce1b185fc251e395f291c5405c712a9c51abffe8767489a851e98a934e72ba19"} Feb 27 16:35:09 crc kubenswrapper[4830]: I0227 16:35:09.763263 4830 generic.go:334] "Generic (PLEG): container finished" 
podID="6e24be1f-6313-4c81-a77c-9a52a1399c92" containerID="0d15de3035d1a4c8c93396e6487eb745fd6fd0a431dc2dc9728b1aec269f3c0b" exitCode=0 Feb 27 16:35:09 crc kubenswrapper[4830]: I0227 16:35:09.763319 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2zsnb" event={"ID":"6e24be1f-6313-4c81-a77c-9a52a1399c92","Type":"ContainerDied","Data":"0d15de3035d1a4c8c93396e6487eb745fd6fd0a431dc2dc9728b1aec269f3c0b"} Feb 27 16:35:09 crc kubenswrapper[4830]: I0227 16:35:09.763371 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2zsnb" event={"ID":"6e24be1f-6313-4c81-a77c-9a52a1399c92","Type":"ContainerStarted","Data":"d5b34ead2974a904944bf88e5eb15e41b63a138c8b8d0bb8e2c24910bdd7856c"} Feb 27 16:35:09 crc kubenswrapper[4830]: I0227 16:35:09.791308 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-g7sxt" podStartSLOduration=2.328931065 podStartE2EDuration="4.791285422s" podCreationTimestamp="2026-02-27 16:35:05 +0000 UTC" firstStartedPulling="2026-02-27 16:35:06.701262432 +0000 UTC m=+1702.790534895" lastFinishedPulling="2026-02-27 16:35:09.163616749 +0000 UTC m=+1705.252889252" observedRunningTime="2026-02-27 16:35:09.790090852 +0000 UTC m=+1705.879363325" watchObservedRunningTime="2026-02-27 16:35:09.791285422 +0000 UTC m=+1705.880557915" Feb 27 16:35:10 crc kubenswrapper[4830]: I0227 16:35:10.774081 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2zsnb" event={"ID":"6e24be1f-6313-4c81-a77c-9a52a1399c92","Type":"ContainerStarted","Data":"a140760e838cf347ca71d5f3d746366a845f4c0564134c5f1c2eb356327aad0c"} Feb 27 16:35:11 crc kubenswrapper[4830]: I0227 16:35:11.784076 4830 generic.go:334] "Generic (PLEG): container finished" podID="6e24be1f-6313-4c81-a77c-9a52a1399c92" containerID="a140760e838cf347ca71d5f3d746366a845f4c0564134c5f1c2eb356327aad0c" exitCode=0 Feb 27 
16:35:11 crc kubenswrapper[4830]: I0227 16:35:11.784137 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2zsnb" event={"ID":"6e24be1f-6313-4c81-a77c-9a52a1399c92","Type":"ContainerDied","Data":"a140760e838cf347ca71d5f3d746366a845f4c0564134c5f1c2eb356327aad0c"} Feb 27 16:35:12 crc kubenswrapper[4830]: I0227 16:35:12.798843 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2zsnb" event={"ID":"6e24be1f-6313-4c81-a77c-9a52a1399c92","Type":"ContainerStarted","Data":"8163e70fb357f23f860ca26bdfdf550882a607ba8e2b6fd87f84c6bbe181999f"} Feb 27 16:35:12 crc kubenswrapper[4830]: I0227 16:35:12.827478 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-2zsnb" podStartSLOduration=2.275084225 podStartE2EDuration="4.827458383s" podCreationTimestamp="2026-02-27 16:35:08 +0000 UTC" firstStartedPulling="2026-02-27 16:35:09.764922001 +0000 UTC m=+1705.854194464" lastFinishedPulling="2026-02-27 16:35:12.317296119 +0000 UTC m=+1708.406568622" observedRunningTime="2026-02-27 16:35:12.817700282 +0000 UTC m=+1708.906972795" watchObservedRunningTime="2026-02-27 16:35:12.827458383 +0000 UTC m=+1708.916730856" Feb 27 16:35:14 crc kubenswrapper[4830]: I0227 16:35:14.774230 4830 scope.go:117] "RemoveContainer" containerID="ca4233a6fa911a1d6b959075103b585a592935c9e9a6d178d17fef41a2fea048" Feb 27 16:35:14 crc kubenswrapper[4830]: E0227 16:35:14.774973 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 16:35:15 crc kubenswrapper[4830]: I0227 
16:35:15.986207 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-g7sxt" Feb 27 16:35:15 crc kubenswrapper[4830]: I0227 16:35:15.986678 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-g7sxt" Feb 27 16:35:16 crc kubenswrapper[4830]: I0227 16:35:16.068778 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-g7sxt" Feb 27 16:35:16 crc kubenswrapper[4830]: I0227 16:35:16.915431 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-g7sxt" Feb 27 16:35:17 crc kubenswrapper[4830]: I0227 16:35:17.169331 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-g7sxt"] Feb 27 16:35:18 crc kubenswrapper[4830]: I0227 16:35:18.865040 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-g7sxt" podUID="20520c1b-b75e-473e-934d-ad1dbf402085" containerName="registry-server" containerID="cri-o://ce1b185fc251e395f291c5405c712a9c51abffe8767489a851e98a934e72ba19" gracePeriod=2 Feb 27 16:35:19 crc kubenswrapper[4830]: I0227 16:35:19.144410 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-2zsnb" Feb 27 16:35:19 crc kubenswrapper[4830]: I0227 16:35:19.144456 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-2zsnb" Feb 27 16:35:19 crc kubenswrapper[4830]: I0227 16:35:19.220907 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-2zsnb" Feb 27 16:35:19 crc kubenswrapper[4830]: I0227 16:35:19.881639 4830 generic.go:334] "Generic (PLEG): container finished" podID="20520c1b-b75e-473e-934d-ad1dbf402085" 
containerID="ce1b185fc251e395f291c5405c712a9c51abffe8767489a851e98a934e72ba19" exitCode=0 Feb 27 16:35:19 crc kubenswrapper[4830]: I0227 16:35:19.881745 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g7sxt" event={"ID":"20520c1b-b75e-473e-934d-ad1dbf402085","Type":"ContainerDied","Data":"ce1b185fc251e395f291c5405c712a9c51abffe8767489a851e98a934e72ba19"} Feb 27 16:35:19 crc kubenswrapper[4830]: I0227 16:35:19.959885 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-2zsnb" Feb 27 16:35:20 crc kubenswrapper[4830]: I0227 16:35:20.507080 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-g7sxt" Feb 27 16:35:20 crc kubenswrapper[4830]: I0227 16:35:20.556891 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2zsnb"] Feb 27 16:35:20 crc kubenswrapper[4830]: I0227 16:35:20.632098 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20520c1b-b75e-473e-934d-ad1dbf402085-utilities\") pod \"20520c1b-b75e-473e-934d-ad1dbf402085\" (UID: \"20520c1b-b75e-473e-934d-ad1dbf402085\") " Feb 27 16:35:20 crc kubenswrapper[4830]: I0227 16:35:20.632168 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20520c1b-b75e-473e-934d-ad1dbf402085-catalog-content\") pod \"20520c1b-b75e-473e-934d-ad1dbf402085\" (UID: \"20520c1b-b75e-473e-934d-ad1dbf402085\") " Feb 27 16:35:20 crc kubenswrapper[4830]: I0227 16:35:20.632203 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k8csm\" (UniqueName: \"kubernetes.io/projected/20520c1b-b75e-473e-934d-ad1dbf402085-kube-api-access-k8csm\") pod \"20520c1b-b75e-473e-934d-ad1dbf402085\" (UID: 
\"20520c1b-b75e-473e-934d-ad1dbf402085\") " Feb 27 16:35:20 crc kubenswrapper[4830]: I0227 16:35:20.633015 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20520c1b-b75e-473e-934d-ad1dbf402085-utilities" (OuterVolumeSpecName: "utilities") pod "20520c1b-b75e-473e-934d-ad1dbf402085" (UID: "20520c1b-b75e-473e-934d-ad1dbf402085"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 16:35:20 crc kubenswrapper[4830]: I0227 16:35:20.641197 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20520c1b-b75e-473e-934d-ad1dbf402085-kube-api-access-k8csm" (OuterVolumeSpecName: "kube-api-access-k8csm") pod "20520c1b-b75e-473e-934d-ad1dbf402085" (UID: "20520c1b-b75e-473e-934d-ad1dbf402085"). InnerVolumeSpecName "kube-api-access-k8csm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:35:20 crc kubenswrapper[4830]: I0227 16:35:20.704886 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20520c1b-b75e-473e-934d-ad1dbf402085-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "20520c1b-b75e-473e-934d-ad1dbf402085" (UID: "20520c1b-b75e-473e-934d-ad1dbf402085"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 16:35:20 crc kubenswrapper[4830]: I0227 16:35:20.733761 4830 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20520c1b-b75e-473e-934d-ad1dbf402085-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 16:35:20 crc kubenswrapper[4830]: I0227 16:35:20.733790 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k8csm\" (UniqueName: \"kubernetes.io/projected/20520c1b-b75e-473e-934d-ad1dbf402085-kube-api-access-k8csm\") on node \"crc\" DevicePath \"\"" Feb 27 16:35:20 crc kubenswrapper[4830]: I0227 16:35:20.733802 4830 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20520c1b-b75e-473e-934d-ad1dbf402085-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 16:35:20 crc kubenswrapper[4830]: I0227 16:35:20.896114 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g7sxt" event={"ID":"20520c1b-b75e-473e-934d-ad1dbf402085","Type":"ContainerDied","Data":"6501f9b985829343aa5fdd982f42aaa67e91c7c728299e5251e4b4b5b9718299"} Feb 27 16:35:20 crc kubenswrapper[4830]: I0227 16:35:20.896152 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-g7sxt" Feb 27 16:35:20 crc kubenswrapper[4830]: I0227 16:35:20.896204 4830 scope.go:117] "RemoveContainer" containerID="ce1b185fc251e395f291c5405c712a9c51abffe8767489a851e98a934e72ba19" Feb 27 16:35:20 crc kubenswrapper[4830]: I0227 16:35:20.935348 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-g7sxt"] Feb 27 16:35:20 crc kubenswrapper[4830]: I0227 16:35:20.943373 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-g7sxt"] Feb 27 16:35:20 crc kubenswrapper[4830]: I0227 16:35:20.950221 4830 scope.go:117] "RemoveContainer" containerID="f4892ad34372d4bebb8ce09703950cb8e82d7815c36ee79403142c01e93278ac" Feb 27 16:35:20 crc kubenswrapper[4830]: I0227 16:35:20.983215 4830 scope.go:117] "RemoveContainer" containerID="1db8d9c0e8058a623120c65acedba6982e177f7dea16131f7e6928cd89d87a5a" Feb 27 16:35:21 crc kubenswrapper[4830]: I0227 16:35:21.915072 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-2zsnb" podUID="6e24be1f-6313-4c81-a77c-9a52a1399c92" containerName="registry-server" containerID="cri-o://8163e70fb357f23f860ca26bdfdf550882a607ba8e2b6fd87f84c6bbe181999f" gracePeriod=2 Feb 27 16:35:22 crc kubenswrapper[4830]: I0227 16:35:22.422717 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2zsnb" Feb 27 16:35:22 crc kubenswrapper[4830]: I0227 16:35:22.574341 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e24be1f-6313-4c81-a77c-9a52a1399c92-utilities\") pod \"6e24be1f-6313-4c81-a77c-9a52a1399c92\" (UID: \"6e24be1f-6313-4c81-a77c-9a52a1399c92\") " Feb 27 16:35:22 crc kubenswrapper[4830]: I0227 16:35:22.574416 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z7zvx\" (UniqueName: \"kubernetes.io/projected/6e24be1f-6313-4c81-a77c-9a52a1399c92-kube-api-access-z7zvx\") pod \"6e24be1f-6313-4c81-a77c-9a52a1399c92\" (UID: \"6e24be1f-6313-4c81-a77c-9a52a1399c92\") " Feb 27 16:35:22 crc kubenswrapper[4830]: I0227 16:35:22.574505 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e24be1f-6313-4c81-a77c-9a52a1399c92-catalog-content\") pod \"6e24be1f-6313-4c81-a77c-9a52a1399c92\" (UID: \"6e24be1f-6313-4c81-a77c-9a52a1399c92\") " Feb 27 16:35:22 crc kubenswrapper[4830]: I0227 16:35:22.575651 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6e24be1f-6313-4c81-a77c-9a52a1399c92-utilities" (OuterVolumeSpecName: "utilities") pod "6e24be1f-6313-4c81-a77c-9a52a1399c92" (UID: "6e24be1f-6313-4c81-a77c-9a52a1399c92"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 16:35:22 crc kubenswrapper[4830]: I0227 16:35:22.583477 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e24be1f-6313-4c81-a77c-9a52a1399c92-kube-api-access-z7zvx" (OuterVolumeSpecName: "kube-api-access-z7zvx") pod "6e24be1f-6313-4c81-a77c-9a52a1399c92" (UID: "6e24be1f-6313-4c81-a77c-9a52a1399c92"). InnerVolumeSpecName "kube-api-access-z7zvx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:35:22 crc kubenswrapper[4830]: I0227 16:35:22.631007 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6e24be1f-6313-4c81-a77c-9a52a1399c92-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6e24be1f-6313-4c81-a77c-9a52a1399c92" (UID: "6e24be1f-6313-4c81-a77c-9a52a1399c92"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 16:35:22 crc kubenswrapper[4830]: I0227 16:35:22.675978 4830 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e24be1f-6313-4c81-a77c-9a52a1399c92-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 16:35:22 crc kubenswrapper[4830]: I0227 16:35:22.676011 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z7zvx\" (UniqueName: \"kubernetes.io/projected/6e24be1f-6313-4c81-a77c-9a52a1399c92-kube-api-access-z7zvx\") on node \"crc\" DevicePath \"\"" Feb 27 16:35:22 crc kubenswrapper[4830]: I0227 16:35:22.676047 4830 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e24be1f-6313-4c81-a77c-9a52a1399c92-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 16:35:22 crc kubenswrapper[4830]: I0227 16:35:22.779385 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20520c1b-b75e-473e-934d-ad1dbf402085" path="/var/lib/kubelet/pods/20520c1b-b75e-473e-934d-ad1dbf402085/volumes" Feb 27 16:35:22 crc kubenswrapper[4830]: I0227 16:35:22.932406 4830 generic.go:334] "Generic (PLEG): container finished" podID="6e24be1f-6313-4c81-a77c-9a52a1399c92" containerID="8163e70fb357f23f860ca26bdfdf550882a607ba8e2b6fd87f84c6bbe181999f" exitCode=0 Feb 27 16:35:22 crc kubenswrapper[4830]: I0227 16:35:22.932509 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2zsnb" 
event={"ID":"6e24be1f-6313-4c81-a77c-9a52a1399c92","Type":"ContainerDied","Data":"8163e70fb357f23f860ca26bdfdf550882a607ba8e2b6fd87f84c6bbe181999f"} Feb 27 16:35:22 crc kubenswrapper[4830]: I0227 16:35:22.932626 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2zsnb" event={"ID":"6e24be1f-6313-4c81-a77c-9a52a1399c92","Type":"ContainerDied","Data":"d5b34ead2974a904944bf88e5eb15e41b63a138c8b8d0bb8e2c24910bdd7856c"} Feb 27 16:35:22 crc kubenswrapper[4830]: I0227 16:35:22.932659 4830 scope.go:117] "RemoveContainer" containerID="8163e70fb357f23f860ca26bdfdf550882a607ba8e2b6fd87f84c6bbe181999f" Feb 27 16:35:22 crc kubenswrapper[4830]: I0227 16:35:22.934338 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2zsnb" Feb 27 16:35:22 crc kubenswrapper[4830]: I0227 16:35:22.977355 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2zsnb"] Feb 27 16:35:22 crc kubenswrapper[4830]: I0227 16:35:22.980033 4830 scope.go:117] "RemoveContainer" containerID="a140760e838cf347ca71d5f3d746366a845f4c0564134c5f1c2eb356327aad0c" Feb 27 16:35:22 crc kubenswrapper[4830]: I0227 16:35:22.991217 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-2zsnb"] Feb 27 16:35:23 crc kubenswrapper[4830]: I0227 16:35:23.008734 4830 scope.go:117] "RemoveContainer" containerID="0d15de3035d1a4c8c93396e6487eb745fd6fd0a431dc2dc9728b1aec269f3c0b" Feb 27 16:35:23 crc kubenswrapper[4830]: I0227 16:35:23.049887 4830 scope.go:117] "RemoveContainer" containerID="8163e70fb357f23f860ca26bdfdf550882a607ba8e2b6fd87f84c6bbe181999f" Feb 27 16:35:23 crc kubenswrapper[4830]: E0227 16:35:23.050381 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8163e70fb357f23f860ca26bdfdf550882a607ba8e2b6fd87f84c6bbe181999f\": container 
with ID starting with 8163e70fb357f23f860ca26bdfdf550882a607ba8e2b6fd87f84c6bbe181999f not found: ID does not exist" containerID="8163e70fb357f23f860ca26bdfdf550882a607ba8e2b6fd87f84c6bbe181999f" Feb 27 16:35:23 crc kubenswrapper[4830]: I0227 16:35:23.050462 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8163e70fb357f23f860ca26bdfdf550882a607ba8e2b6fd87f84c6bbe181999f"} err="failed to get container status \"8163e70fb357f23f860ca26bdfdf550882a607ba8e2b6fd87f84c6bbe181999f\": rpc error: code = NotFound desc = could not find container \"8163e70fb357f23f860ca26bdfdf550882a607ba8e2b6fd87f84c6bbe181999f\": container with ID starting with 8163e70fb357f23f860ca26bdfdf550882a607ba8e2b6fd87f84c6bbe181999f not found: ID does not exist" Feb 27 16:35:23 crc kubenswrapper[4830]: I0227 16:35:23.050514 4830 scope.go:117] "RemoveContainer" containerID="a140760e838cf347ca71d5f3d746366a845f4c0564134c5f1c2eb356327aad0c" Feb 27 16:35:23 crc kubenswrapper[4830]: E0227 16:35:23.051168 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a140760e838cf347ca71d5f3d746366a845f4c0564134c5f1c2eb356327aad0c\": container with ID starting with a140760e838cf347ca71d5f3d746366a845f4c0564134c5f1c2eb356327aad0c not found: ID does not exist" containerID="a140760e838cf347ca71d5f3d746366a845f4c0564134c5f1c2eb356327aad0c" Feb 27 16:35:23 crc kubenswrapper[4830]: I0227 16:35:23.051223 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a140760e838cf347ca71d5f3d746366a845f4c0564134c5f1c2eb356327aad0c"} err="failed to get container status \"a140760e838cf347ca71d5f3d746366a845f4c0564134c5f1c2eb356327aad0c\": rpc error: code = NotFound desc = could not find container \"a140760e838cf347ca71d5f3d746366a845f4c0564134c5f1c2eb356327aad0c\": container with ID starting with a140760e838cf347ca71d5f3d746366a845f4c0564134c5f1c2eb356327aad0c not 
found: ID does not exist" Feb 27 16:35:23 crc kubenswrapper[4830]: I0227 16:35:23.051263 4830 scope.go:117] "RemoveContainer" containerID="0d15de3035d1a4c8c93396e6487eb745fd6fd0a431dc2dc9728b1aec269f3c0b" Feb 27 16:35:23 crc kubenswrapper[4830]: E0227 16:35:23.051884 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d15de3035d1a4c8c93396e6487eb745fd6fd0a431dc2dc9728b1aec269f3c0b\": container with ID starting with 0d15de3035d1a4c8c93396e6487eb745fd6fd0a431dc2dc9728b1aec269f3c0b not found: ID does not exist" containerID="0d15de3035d1a4c8c93396e6487eb745fd6fd0a431dc2dc9728b1aec269f3c0b" Feb 27 16:35:23 crc kubenswrapper[4830]: I0227 16:35:23.051935 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d15de3035d1a4c8c93396e6487eb745fd6fd0a431dc2dc9728b1aec269f3c0b"} err="failed to get container status \"0d15de3035d1a4c8c93396e6487eb745fd6fd0a431dc2dc9728b1aec269f3c0b\": rpc error: code = NotFound desc = could not find container \"0d15de3035d1a4c8c93396e6487eb745fd6fd0a431dc2dc9728b1aec269f3c0b\": container with ID starting with 0d15de3035d1a4c8c93396e6487eb745fd6fd0a431dc2dc9728b1aec269f3c0b not found: ID does not exist" Feb 27 16:35:24 crc kubenswrapper[4830]: I0227 16:35:24.775493 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e24be1f-6313-4c81-a77c-9a52a1399c92" path="/var/lib/kubelet/pods/6e24be1f-6313-4c81-a77c-9a52a1399c92/volumes" Feb 27 16:35:26 crc kubenswrapper[4830]: I0227 16:35:26.762112 4830 scope.go:117] "RemoveContainer" containerID="ca4233a6fa911a1d6b959075103b585a592935c9e9a6d178d17fef41a2fea048" Feb 27 16:35:26 crc kubenswrapper[4830]: E0227 16:35:26.762811 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 16:35:40 crc kubenswrapper[4830]: I0227 16:35:40.762698 4830 scope.go:117] "RemoveContainer" containerID="ca4233a6fa911a1d6b959075103b585a592935c9e9a6d178d17fef41a2fea048" Feb 27 16:35:40 crc kubenswrapper[4830]: E0227 16:35:40.763423 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 16:35:41 crc kubenswrapper[4830]: I0227 16:35:41.341843 4830 scope.go:117] "RemoveContainer" containerID="5aa1a3f44a359ee2559e80363d7b378c4edd45c9e53a4c526fc5cd51ab32b3bd" Feb 27 16:35:41 crc kubenswrapper[4830]: I0227 16:35:41.382319 4830 scope.go:117] "RemoveContainer" containerID="e4819e5de70fd14096f08664e021aa68dbcaff8638b286c6df70bcb4924b7183" Feb 27 16:35:41 crc kubenswrapper[4830]: I0227 16:35:41.435672 4830 scope.go:117] "RemoveContainer" containerID="5ba71f57d4ef167e52e99073d88d3906f54807c2add0033c3a350acff76a4f58" Feb 27 16:35:41 crc kubenswrapper[4830]: I0227 16:35:41.466243 4830 scope.go:117] "RemoveContainer" containerID="f06c98d4e511d3e89e496c04ad5a11d60444ab50c2a4dc23cb608869e9b5b98a" Feb 27 16:35:41 crc kubenswrapper[4830]: I0227 16:35:41.504147 4830 scope.go:117] "RemoveContainer" containerID="15a23e14b83d11b94ee3dc1d1a64b6c64f14f01947565c6e8dd5152c025f9fa1" Feb 27 16:35:41 crc kubenswrapper[4830]: I0227 16:35:41.528030 4830 scope.go:117] "RemoveContainer" containerID="fa02ddd168c52a09e17f02290dc6532b6d413641b49271f0c4fad4240693f403" Feb 27 16:35:41 crc 
kubenswrapper[4830]: I0227 16:35:41.592854 4830 scope.go:117] "RemoveContainer" containerID="83468be4a573a535ebb115952f4765ad160eb3dfbc1efdfc8c056f4eb57a9f74" Feb 27 16:35:41 crc kubenswrapper[4830]: I0227 16:35:41.625439 4830 scope.go:117] "RemoveContainer" containerID="3d21eb9349f83e5f5678001a64d350ad6000cb3e4a2539605409baceb3f4194e" Feb 27 16:35:41 crc kubenswrapper[4830]: I0227 16:35:41.651002 4830 scope.go:117] "RemoveContainer" containerID="cc733946c2730a559cac7a10dc518215f583a9b93706df17edea75a23418ffdc" Feb 27 16:35:41 crc kubenswrapper[4830]: I0227 16:35:41.699655 4830 scope.go:117] "RemoveContainer" containerID="451fb0be371d26426de1032670cbe01b5e0d72f0687f212f205ce0a0f1049841" Feb 27 16:35:41 crc kubenswrapper[4830]: I0227 16:35:41.721610 4830 scope.go:117] "RemoveContainer" containerID="8fc927ca0d436c6b9abd47100b757b549c955783daccc2bc942d4e651824c752" Feb 27 16:35:41 crc kubenswrapper[4830]: I0227 16:35:41.744434 4830 scope.go:117] "RemoveContainer" containerID="9c796b9641b31cc033bbc7ab7769fb39d6172c20eb3c7d1d5f5b77f73a0e8a9b" Feb 27 16:35:41 crc kubenswrapper[4830]: I0227 16:35:41.771807 4830 scope.go:117] "RemoveContainer" containerID="45a724c55f887a9873187e6e48da3fda84671199ed89e01366feec742267f675" Feb 27 16:35:41 crc kubenswrapper[4830]: I0227 16:35:41.795007 4830 scope.go:117] "RemoveContainer" containerID="bb6df788f5e8ca91abf23ff245808d9a7fe090cde362eff70896185f860b5a62" Feb 27 16:35:41 crc kubenswrapper[4830]: I0227 16:35:41.852883 4830 scope.go:117] "RemoveContainer" containerID="ed60d71a50308f4619438818ff5aee5f8e275b029bf18e8c1c8441cf23db8dd5" Feb 27 16:35:41 crc kubenswrapper[4830]: I0227 16:35:41.881610 4830 scope.go:117] "RemoveContainer" containerID="5f5402616bb7611817535016b614d7887cf7031895a3cb81400c32e205dcc9d4" Feb 27 16:35:41 crc kubenswrapper[4830]: I0227 16:35:41.939490 4830 scope.go:117] "RemoveContainer" containerID="2dd0daef7553edc948d313e884252b38bd2ca52a2e86007a5c75ebe4c3a88a04" Feb 27 16:35:55 crc kubenswrapper[4830]: I0227 
16:35:55.762866 4830 scope.go:117] "RemoveContainer" containerID="ca4233a6fa911a1d6b959075103b585a592935c9e9a6d178d17fef41a2fea048" Feb 27 16:35:55 crc kubenswrapper[4830]: E0227 16:35:55.763995 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 16:36:00 crc kubenswrapper[4830]: I0227 16:36:00.157440 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536836-5jk7h"] Feb 27 16:36:00 crc kubenswrapper[4830]: E0227 16:36:00.158194 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20520c1b-b75e-473e-934d-ad1dbf402085" containerName="extract-content" Feb 27 16:36:00 crc kubenswrapper[4830]: I0227 16:36:00.158213 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="20520c1b-b75e-473e-934d-ad1dbf402085" containerName="extract-content" Feb 27 16:36:00 crc kubenswrapper[4830]: E0227 16:36:00.158239 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e24be1f-6313-4c81-a77c-9a52a1399c92" containerName="extract-utilities" Feb 27 16:36:00 crc kubenswrapper[4830]: I0227 16:36:00.158250 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e24be1f-6313-4c81-a77c-9a52a1399c92" containerName="extract-utilities" Feb 27 16:36:00 crc kubenswrapper[4830]: E0227 16:36:00.158270 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20520c1b-b75e-473e-934d-ad1dbf402085" containerName="extract-utilities" Feb 27 16:36:00 crc kubenswrapper[4830]: I0227 16:36:00.158281 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="20520c1b-b75e-473e-934d-ad1dbf402085" containerName="extract-utilities" Feb 
27 16:36:00 crc kubenswrapper[4830]: E0227 16:36:00.158302 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e24be1f-6313-4c81-a77c-9a52a1399c92" containerName="registry-server" Feb 27 16:36:00 crc kubenswrapper[4830]: I0227 16:36:00.158312 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e24be1f-6313-4c81-a77c-9a52a1399c92" containerName="registry-server" Feb 27 16:36:00 crc kubenswrapper[4830]: E0227 16:36:00.158329 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20520c1b-b75e-473e-934d-ad1dbf402085" containerName="registry-server" Feb 27 16:36:00 crc kubenswrapper[4830]: I0227 16:36:00.158339 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="20520c1b-b75e-473e-934d-ad1dbf402085" containerName="registry-server" Feb 27 16:36:00 crc kubenswrapper[4830]: E0227 16:36:00.158366 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e24be1f-6313-4c81-a77c-9a52a1399c92" containerName="extract-content" Feb 27 16:36:00 crc kubenswrapper[4830]: I0227 16:36:00.158377 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e24be1f-6313-4c81-a77c-9a52a1399c92" containerName="extract-content" Feb 27 16:36:00 crc kubenswrapper[4830]: I0227 16:36:00.158598 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e24be1f-6313-4c81-a77c-9a52a1399c92" containerName="registry-server" Feb 27 16:36:00 crc kubenswrapper[4830]: I0227 16:36:00.158631 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="20520c1b-b75e-473e-934d-ad1dbf402085" containerName="registry-server" Feb 27 16:36:00 crc kubenswrapper[4830]: I0227 16:36:00.159499 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536836-5jk7h" Feb 27 16:36:00 crc kubenswrapper[4830]: I0227 16:36:00.162715 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 16:36:00 crc kubenswrapper[4830]: I0227 16:36:00.164880 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 16:36:00 crc kubenswrapper[4830]: I0227 16:36:00.165273 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lmbtm" Feb 27 16:36:00 crc kubenswrapper[4830]: I0227 16:36:00.169878 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536836-5jk7h"] Feb 27 16:36:00 crc kubenswrapper[4830]: I0227 16:36:00.332150 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45kqn\" (UniqueName: \"kubernetes.io/projected/bfb596df-e396-4217-ab45-f32af8481b49-kube-api-access-45kqn\") pod \"auto-csr-approver-29536836-5jk7h\" (UID: \"bfb596df-e396-4217-ab45-f32af8481b49\") " pod="openshift-infra/auto-csr-approver-29536836-5jk7h" Feb 27 16:36:00 crc kubenswrapper[4830]: I0227 16:36:00.434148 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45kqn\" (UniqueName: \"kubernetes.io/projected/bfb596df-e396-4217-ab45-f32af8481b49-kube-api-access-45kqn\") pod \"auto-csr-approver-29536836-5jk7h\" (UID: \"bfb596df-e396-4217-ab45-f32af8481b49\") " pod="openshift-infra/auto-csr-approver-29536836-5jk7h" Feb 27 16:36:00 crc kubenswrapper[4830]: I0227 16:36:00.463886 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45kqn\" (UniqueName: \"kubernetes.io/projected/bfb596df-e396-4217-ab45-f32af8481b49-kube-api-access-45kqn\") pod \"auto-csr-approver-29536836-5jk7h\" (UID: \"bfb596df-e396-4217-ab45-f32af8481b49\") " 
pod="openshift-infra/auto-csr-approver-29536836-5jk7h" Feb 27 16:36:00 crc kubenswrapper[4830]: I0227 16:36:00.499770 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536836-5jk7h" Feb 27 16:36:01 crc kubenswrapper[4830]: I0227 16:36:01.032649 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536836-5jk7h"] Feb 27 16:36:01 crc kubenswrapper[4830]: I0227 16:36:01.405654 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536836-5jk7h" event={"ID":"bfb596df-e396-4217-ab45-f32af8481b49","Type":"ContainerStarted","Data":"5df9406f01996dc9a958cde6714066c2574f6be352d3bcbc1c1727ddcaec824a"} Feb 27 16:36:06 crc kubenswrapper[4830]: I0227 16:36:06.457037 4830 generic.go:334] "Generic (PLEG): container finished" podID="bfb596df-e396-4217-ab45-f32af8481b49" containerID="51c33700809d2de0279cc4f0e5d6a3af45cedc384dda5a7267684f7fbb7c2fd9" exitCode=0 Feb 27 16:36:06 crc kubenswrapper[4830]: I0227 16:36:06.457114 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536836-5jk7h" event={"ID":"bfb596df-e396-4217-ab45-f32af8481b49","Type":"ContainerDied","Data":"51c33700809d2de0279cc4f0e5d6a3af45cedc384dda5a7267684f7fbb7c2fd9"} Feb 27 16:36:06 crc kubenswrapper[4830]: I0227 16:36:06.763335 4830 scope.go:117] "RemoveContainer" containerID="ca4233a6fa911a1d6b959075103b585a592935c9e9a6d178d17fef41a2fea048" Feb 27 16:36:06 crc kubenswrapper[4830]: E0227 16:36:06.763724 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" 
Feb 27 16:36:07 crc kubenswrapper[4830]: I0227 16:36:07.857537 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536836-5jk7h" Feb 27 16:36:07 crc kubenswrapper[4830]: I0227 16:36:07.977079 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-45kqn\" (UniqueName: \"kubernetes.io/projected/bfb596df-e396-4217-ab45-f32af8481b49-kube-api-access-45kqn\") pod \"bfb596df-e396-4217-ab45-f32af8481b49\" (UID: \"bfb596df-e396-4217-ab45-f32af8481b49\") " Feb 27 16:36:07 crc kubenswrapper[4830]: I0227 16:36:07.987992 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bfb596df-e396-4217-ab45-f32af8481b49-kube-api-access-45kqn" (OuterVolumeSpecName: "kube-api-access-45kqn") pod "bfb596df-e396-4217-ab45-f32af8481b49" (UID: "bfb596df-e396-4217-ab45-f32af8481b49"). InnerVolumeSpecName "kube-api-access-45kqn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:36:08 crc kubenswrapper[4830]: I0227 16:36:08.079504 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-45kqn\" (UniqueName: \"kubernetes.io/projected/bfb596df-e396-4217-ab45-f32af8481b49-kube-api-access-45kqn\") on node \"crc\" DevicePath \"\"" Feb 27 16:36:08 crc kubenswrapper[4830]: I0227 16:36:08.489234 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536836-5jk7h" event={"ID":"bfb596df-e396-4217-ab45-f32af8481b49","Type":"ContainerDied","Data":"5df9406f01996dc9a958cde6714066c2574f6be352d3bcbc1c1727ddcaec824a"} Feb 27 16:36:08 crc kubenswrapper[4830]: I0227 16:36:08.489312 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5df9406f01996dc9a958cde6714066c2574f6be352d3bcbc1c1727ddcaec824a" Feb 27 16:36:08 crc kubenswrapper[4830]: I0227 16:36:08.489312 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536836-5jk7h" Feb 27 16:36:08 crc kubenswrapper[4830]: I0227 16:36:08.960856 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536830-gwpcb"] Feb 27 16:36:08 crc kubenswrapper[4830]: I0227 16:36:08.970436 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536830-gwpcb"] Feb 27 16:36:10 crc kubenswrapper[4830]: I0227 16:36:10.777497 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1141b071-f448-4a3f-b062-0255dd5dc38a" path="/var/lib/kubelet/pods/1141b071-f448-4a3f-b062-0255dd5dc38a/volumes" Feb 27 16:36:17 crc kubenswrapper[4830]: I0227 16:36:17.762782 4830 scope.go:117] "RemoveContainer" containerID="ca4233a6fa911a1d6b959075103b585a592935c9e9a6d178d17fef41a2fea048" Feb 27 16:36:17 crc kubenswrapper[4830]: E0227 16:36:17.763683 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 16:36:32 crc kubenswrapper[4830]: I0227 16:36:32.762631 4830 scope.go:117] "RemoveContainer" containerID="ca4233a6fa911a1d6b959075103b585a592935c9e9a6d178d17fef41a2fea048" Feb 27 16:36:32 crc kubenswrapper[4830]: E0227 16:36:32.763254 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" 
podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 16:36:42 crc kubenswrapper[4830]: I0227 16:36:42.435420 4830 scope.go:117] "RemoveContainer" containerID="25a00b007e3e1a8c77c7bf619655cf9ead3a6eb2aa47a2c778cfc3371c33e4c5" Feb 27 16:36:42 crc kubenswrapper[4830]: I0227 16:36:42.473359 4830 scope.go:117] "RemoveContainer" containerID="3bd476206784383c2fbe0db210deee00da003f513b1f05dcbc55ea33c264c212" Feb 27 16:36:42 crc kubenswrapper[4830]: I0227 16:36:42.507138 4830 scope.go:117] "RemoveContainer" containerID="0e99db8779b62c9b60211a3a800d8786d6e5d19fd2046d962c492ef86848b48c" Feb 27 16:36:42 crc kubenswrapper[4830]: I0227 16:36:42.535874 4830 scope.go:117] "RemoveContainer" containerID="78f7362752654ea3426af2a1f637ac858637b23cda39620187459b1ca0eb954f" Feb 27 16:36:42 crc kubenswrapper[4830]: I0227 16:36:42.610627 4830 scope.go:117] "RemoveContainer" containerID="91059dd00f11fc333eace4b793fe5a4f3fca466216720380e52c9fb9f6ce33ff" Feb 27 16:36:42 crc kubenswrapper[4830]: I0227 16:36:42.661762 4830 scope.go:117] "RemoveContainer" containerID="21b7ce6b7e12d2dc0f7f2b14e5661ca319f4a158bd99eb2265e8cc2844c46aeb" Feb 27 16:36:42 crc kubenswrapper[4830]: I0227 16:36:42.732377 4830 scope.go:117] "RemoveContainer" containerID="6cf3d9b94980e2ca5aa0032ef28c8b51ac4ff272ea01954cb10fbe1ad64d9f4b" Feb 27 16:36:42 crc kubenswrapper[4830]: I0227 16:36:42.770422 4830 scope.go:117] "RemoveContainer" containerID="b09f3432889e78f005fbd21fbbd94888d63605d1bfe41b4d25fbe78bb2a37a78" Feb 27 16:36:42 crc kubenswrapper[4830]: I0227 16:36:42.800295 4830 scope.go:117] "RemoveContainer" containerID="9f254100c8c027338b42ed369be0ddd72af937c9d87a9a808607f1dcc876c8ed" Feb 27 16:36:42 crc kubenswrapper[4830]: I0227 16:36:42.847224 4830 scope.go:117] "RemoveContainer" containerID="7dad8ffa6283d569435591881ebf2eedf721235312643b6378985dffadc0a1cf" Feb 27 16:36:42 crc kubenswrapper[4830]: I0227 16:36:42.872082 4830 scope.go:117] "RemoveContainer" 
containerID="950d48e73b6efcd60895c954c30b438b4679dbaef80ec5b055875078164bbaed" Feb 27 16:36:42 crc kubenswrapper[4830]: I0227 16:36:42.902865 4830 scope.go:117] "RemoveContainer" containerID="45fba76ddd5f2fe4e68c5bc218edf28d6a195079fa1921a738dce0674accf471" Feb 27 16:36:42 crc kubenswrapper[4830]: I0227 16:36:42.935923 4830 scope.go:117] "RemoveContainer" containerID="e0ebf55234da05702605efd47d9b98f871b639eba4fd4ec313dd14863324ce11" Feb 27 16:36:42 crc kubenswrapper[4830]: I0227 16:36:42.968507 4830 scope.go:117] "RemoveContainer" containerID="d25e9e29213d4dd9d13dc6e8f8443d64cbecee22307bae547934dfd69a24c51a" Feb 27 16:36:42 crc kubenswrapper[4830]: I0227 16:36:42.985201 4830 scope.go:117] "RemoveContainer" containerID="b4c2a77141370e51625fa6bf385bb1eb77fc6e2be81322189a2da160e42e03d0" Feb 27 16:36:43 crc kubenswrapper[4830]: I0227 16:36:43.004739 4830 scope.go:117] "RemoveContainer" containerID="85d763a18db7b37b5aad502746d28ab199cdbba48317de720fcc8ea126e9dc74" Feb 27 16:36:43 crc kubenswrapper[4830]: I0227 16:36:43.024790 4830 scope.go:117] "RemoveContainer" containerID="2806447b980a5bb9a3cd7703b0ad68eb92d2cfebdeadd41257a5e2d7279f3f4f" Feb 27 16:36:43 crc kubenswrapper[4830]: I0227 16:36:43.049663 4830 scope.go:117] "RemoveContainer" containerID="4ad340ff7e5d3dcbe59313ae7a759101ba1b8edf59a86c29f287b2cb3edf2de6" Feb 27 16:36:43 crc kubenswrapper[4830]: I0227 16:36:43.078006 4830 scope.go:117] "RemoveContainer" containerID="e9f5c3e023cd95041492158a368466cd55fb311d519a59bb0776c7d0e6ebc352" Feb 27 16:36:43 crc kubenswrapper[4830]: I0227 16:36:43.103572 4830 scope.go:117] "RemoveContainer" containerID="4379a4562487a2f829fd847e713d7b48e4f30ff72dfa48612a5cee4351449110" Feb 27 16:36:43 crc kubenswrapper[4830]: I0227 16:36:43.126873 4830 scope.go:117] "RemoveContainer" containerID="7b743cc093d9cd3e5deb61678bf56225726f2ee5f6b916d24acb306d92c0ebc6" Feb 27 16:36:43 crc kubenswrapper[4830]: I0227 16:36:43.159602 4830 scope.go:117] "RemoveContainer" 
containerID="40cab2835902cbbd7f2108f23209c5d896b2d0b912cf229a63563e0cdf02215b" Feb 27 16:36:44 crc kubenswrapper[4830]: I0227 16:36:44.771491 4830 scope.go:117] "RemoveContainer" containerID="ca4233a6fa911a1d6b959075103b585a592935c9e9a6d178d17fef41a2fea048" Feb 27 16:36:44 crc kubenswrapper[4830]: E0227 16:36:44.772343 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 16:36:58 crc kubenswrapper[4830]: I0227 16:36:58.762454 4830 scope.go:117] "RemoveContainer" containerID="ca4233a6fa911a1d6b959075103b585a592935c9e9a6d178d17fef41a2fea048" Feb 27 16:36:58 crc kubenswrapper[4830]: E0227 16:36:58.763496 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 16:37:10 crc kubenswrapper[4830]: I0227 16:37:10.763431 4830 scope.go:117] "RemoveContainer" containerID="ca4233a6fa911a1d6b959075103b585a592935c9e9a6d178d17fef41a2fea048" Feb 27 16:37:10 crc kubenswrapper[4830]: E0227 16:37:10.765727 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 16:37:22 crc kubenswrapper[4830]: I0227 16:37:22.762166 4830 scope.go:117] "RemoveContainer" containerID="ca4233a6fa911a1d6b959075103b585a592935c9e9a6d178d17fef41a2fea048" Feb 27 16:37:22 crc kubenswrapper[4830]: E0227 16:37:22.762963 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 16:37:36 crc kubenswrapper[4830]: I0227 16:37:36.762623 4830 scope.go:117] "RemoveContainer" containerID="ca4233a6fa911a1d6b959075103b585a592935c9e9a6d178d17fef41a2fea048" Feb 27 16:37:36 crc kubenswrapper[4830]: E0227 16:37:36.763754 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 16:37:43 crc kubenswrapper[4830]: I0227 16:37:43.470165 4830 scope.go:117] "RemoveContainer" containerID="98c4c8d56c7429d2d9520ab93e5ce3ee5be86799ca9c538051edf6e0b6ea6c3d" Feb 27 16:37:43 crc kubenswrapper[4830]: I0227 16:37:43.496802 4830 scope.go:117] "RemoveContainer" containerID="f8f34796ac91c21f0c695f92907c09775357969b6a31121699e96e8f2d086147" Feb 27 16:37:43 crc kubenswrapper[4830]: I0227 16:37:43.540327 4830 scope.go:117] "RemoveContainer" containerID="e8228851dc153740caa4991add05b87921eb8d07bae6164bd7ec594683dd08a2" Feb 
27 16:37:43 crc kubenswrapper[4830]: I0227 16:37:43.614923 4830 scope.go:117] "RemoveContainer" containerID="fab28b8a8cf858968ae516c93ad0ff86bedd83c0c7423732d17b0e07a14d18d2" Feb 27 16:37:43 crc kubenswrapper[4830]: I0227 16:37:43.681314 4830 scope.go:117] "RemoveContainer" containerID="0706f1a0759f33eb60e2fb30aec7479b6c7a940dfc76f25533bdda83b5ca913e" Feb 27 16:37:43 crc kubenswrapper[4830]: I0227 16:37:43.709662 4830 scope.go:117] "RemoveContainer" containerID="a5137475aad41fb8eb7b0a7b72def6633e3820a0b964c9cad287965ce3680cca" Feb 27 16:37:43 crc kubenswrapper[4830]: I0227 16:37:43.735967 4830 scope.go:117] "RemoveContainer" containerID="c2905f95d9b1bd685977d7be7161ae0adaba055e9615f02fecc0602b6c991b5c" Feb 27 16:37:43 crc kubenswrapper[4830]: I0227 16:37:43.764582 4830 scope.go:117] "RemoveContainer" containerID="d5a7dd60a232991741b101e4a9891977b3f095d90be1312762610a6cc6b35dfd" Feb 27 16:37:43 crc kubenswrapper[4830]: I0227 16:37:43.784086 4830 scope.go:117] "RemoveContainer" containerID="aaca4e638aa616674edc02748979015b7798beec0a50cf331a81661c6f522394" Feb 27 16:37:50 crc kubenswrapper[4830]: I0227 16:37:50.762710 4830 scope.go:117] "RemoveContainer" containerID="ca4233a6fa911a1d6b959075103b585a592935c9e9a6d178d17fef41a2fea048" Feb 27 16:37:50 crc kubenswrapper[4830]: E0227 16:37:50.765166 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 16:38:00 crc kubenswrapper[4830]: I0227 16:38:00.157845 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536838-sjpxn"] Feb 27 16:38:00 crc kubenswrapper[4830]: E0227 16:38:00.159170 4830 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfb596df-e396-4217-ab45-f32af8481b49" containerName="oc" Feb 27 16:38:00 crc kubenswrapper[4830]: I0227 16:38:00.159196 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfb596df-e396-4217-ab45-f32af8481b49" containerName="oc" Feb 27 16:38:00 crc kubenswrapper[4830]: I0227 16:38:00.159481 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="bfb596df-e396-4217-ab45-f32af8481b49" containerName="oc" Feb 27 16:38:00 crc kubenswrapper[4830]: I0227 16:38:00.160193 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536838-sjpxn" Feb 27 16:38:00 crc kubenswrapper[4830]: I0227 16:38:00.163604 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 16:38:00 crc kubenswrapper[4830]: I0227 16:38:00.167779 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 16:38:00 crc kubenswrapper[4830]: I0227 16:38:00.167808 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lmbtm" Feb 27 16:38:00 crc kubenswrapper[4830]: I0227 16:38:00.176438 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536838-sjpxn"] Feb 27 16:38:00 crc kubenswrapper[4830]: I0227 16:38:00.259080 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzfvz\" (UniqueName: \"kubernetes.io/projected/d68eb6a9-3f9d-49da-b00a-16c94d10b1e0-kube-api-access-kzfvz\") pod \"auto-csr-approver-29536838-sjpxn\" (UID: \"d68eb6a9-3f9d-49da-b00a-16c94d10b1e0\") " pod="openshift-infra/auto-csr-approver-29536838-sjpxn" Feb 27 16:38:00 crc kubenswrapper[4830]: I0227 16:38:00.361086 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kzfvz\" 
(UniqueName: \"kubernetes.io/projected/d68eb6a9-3f9d-49da-b00a-16c94d10b1e0-kube-api-access-kzfvz\") pod \"auto-csr-approver-29536838-sjpxn\" (UID: \"d68eb6a9-3f9d-49da-b00a-16c94d10b1e0\") " pod="openshift-infra/auto-csr-approver-29536838-sjpxn" Feb 27 16:38:00 crc kubenswrapper[4830]: I0227 16:38:00.398640 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kzfvz\" (UniqueName: \"kubernetes.io/projected/d68eb6a9-3f9d-49da-b00a-16c94d10b1e0-kube-api-access-kzfvz\") pod \"auto-csr-approver-29536838-sjpxn\" (UID: \"d68eb6a9-3f9d-49da-b00a-16c94d10b1e0\") " pod="openshift-infra/auto-csr-approver-29536838-sjpxn" Feb 27 16:38:00 crc kubenswrapper[4830]: I0227 16:38:00.478444 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536838-sjpxn" Feb 27 16:38:00 crc kubenswrapper[4830]: I0227 16:38:00.949182 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536838-sjpxn"] Feb 27 16:38:00 crc kubenswrapper[4830]: I0227 16:38:00.957437 4830 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 27 16:38:01 crc kubenswrapper[4830]: I0227 16:38:01.648934 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536838-sjpxn" event={"ID":"d68eb6a9-3f9d-49da-b00a-16c94d10b1e0","Type":"ContainerStarted","Data":"4f1b7c6c224a94ae1fff3a3c420023842dc6ddc277d6c7029011e85348107747"} Feb 27 16:38:01 crc kubenswrapper[4830]: I0227 16:38:01.763623 4830 scope.go:117] "RemoveContainer" containerID="ca4233a6fa911a1d6b959075103b585a592935c9e9a6d178d17fef41a2fea048" Feb 27 16:38:01 crc kubenswrapper[4830]: E0227 16:38:01.764270 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 16:38:02 crc kubenswrapper[4830]: I0227 16:38:02.659059 4830 generic.go:334] "Generic (PLEG): container finished" podID="d68eb6a9-3f9d-49da-b00a-16c94d10b1e0" containerID="5c588e4bfca51877a2c987f383ce4f876fe68b73bceb40ebcc1c3ab39c2d797a" exitCode=0 Feb 27 16:38:02 crc kubenswrapper[4830]: I0227 16:38:02.659194 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536838-sjpxn" event={"ID":"d68eb6a9-3f9d-49da-b00a-16c94d10b1e0","Type":"ContainerDied","Data":"5c588e4bfca51877a2c987f383ce4f876fe68b73bceb40ebcc1c3ab39c2d797a"} Feb 27 16:38:04 crc kubenswrapper[4830]: I0227 16:38:04.686635 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536838-sjpxn" event={"ID":"d68eb6a9-3f9d-49da-b00a-16c94d10b1e0","Type":"ContainerDied","Data":"4f1b7c6c224a94ae1fff3a3c420023842dc6ddc277d6c7029011e85348107747"} Feb 27 16:38:04 crc kubenswrapper[4830]: I0227 16:38:04.687104 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4f1b7c6c224a94ae1fff3a3c420023842dc6ddc277d6c7029011e85348107747" Feb 27 16:38:04 crc kubenswrapper[4830]: I0227 16:38:04.717176 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536838-sjpxn" Feb 27 16:38:04 crc kubenswrapper[4830]: I0227 16:38:04.838328 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kzfvz\" (UniqueName: \"kubernetes.io/projected/d68eb6a9-3f9d-49da-b00a-16c94d10b1e0-kube-api-access-kzfvz\") pod \"d68eb6a9-3f9d-49da-b00a-16c94d10b1e0\" (UID: \"d68eb6a9-3f9d-49da-b00a-16c94d10b1e0\") " Feb 27 16:38:04 crc kubenswrapper[4830]: I0227 16:38:04.847727 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d68eb6a9-3f9d-49da-b00a-16c94d10b1e0-kube-api-access-kzfvz" (OuterVolumeSpecName: "kube-api-access-kzfvz") pod "d68eb6a9-3f9d-49da-b00a-16c94d10b1e0" (UID: "d68eb6a9-3f9d-49da-b00a-16c94d10b1e0"). InnerVolumeSpecName "kube-api-access-kzfvz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:38:04 crc kubenswrapper[4830]: I0227 16:38:04.939809 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kzfvz\" (UniqueName: \"kubernetes.io/projected/d68eb6a9-3f9d-49da-b00a-16c94d10b1e0-kube-api-access-kzfvz\") on node \"crc\" DevicePath \"\"" Feb 27 16:38:05 crc kubenswrapper[4830]: I0227 16:38:05.697218 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536838-sjpxn" Feb 27 16:38:05 crc kubenswrapper[4830]: I0227 16:38:05.820380 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536832-nwxgv"] Feb 27 16:38:05 crc kubenswrapper[4830]: I0227 16:38:05.830629 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536832-nwxgv"] Feb 27 16:38:06 crc kubenswrapper[4830]: I0227 16:38:06.777638 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3706b00e-6257-4879-b0bb-066b912637da" path="/var/lib/kubelet/pods/3706b00e-6257-4879-b0bb-066b912637da/volumes" Feb 27 16:38:13 crc kubenswrapper[4830]: I0227 16:38:13.763448 4830 scope.go:117] "RemoveContainer" containerID="ca4233a6fa911a1d6b959075103b585a592935c9e9a6d178d17fef41a2fea048" Feb 27 16:38:14 crc kubenswrapper[4830]: I0227 16:38:14.796535 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" event={"ID":"00d6b7ce-4757-4275-8345-60c1b546ce25","Type":"ContainerStarted","Data":"b810a866e7e028ecd9333aa0ac47bc4872c9ecf682d5561ecaf3d6e30e0e0340"} Feb 27 16:38:43 crc kubenswrapper[4830]: I0227 16:38:43.957100 4830 scope.go:117] "RemoveContainer" containerID="67f705d66ad4d26d1a66a751f763fac473304bb8b591b54c2c0c497cc8ee46c6" Feb 27 16:38:44 crc kubenswrapper[4830]: I0227 16:38:44.000075 4830 scope.go:117] "RemoveContainer" containerID="17c416fd77703fb7feb38dfb7c6e7aef3b647f80b42763e1c40e7ca828662e25" Feb 27 16:38:44 crc kubenswrapper[4830]: I0227 16:38:44.035043 4830 scope.go:117] "RemoveContainer" containerID="144b29fbee6ca22072cb52d8025180f33aea96191753e1a5038399c82ac702fc" Feb 27 16:38:44 crc kubenswrapper[4830]: I0227 16:38:44.073055 4830 scope.go:117] "RemoveContainer" containerID="6e897d68c31265e9f5fea3191c220fdd3f653e9c14499ea7470715d9f71ca8e2" Feb 27 16:38:44 crc kubenswrapper[4830]: I0227 16:38:44.128620 4830 scope.go:117] 
"RemoveContainer" containerID="5e4b95ff9e120a4e75ce39c775be2aee2b80b55e4a33fe61a9e413a3ae463cf6" Feb 27 16:38:44 crc kubenswrapper[4830]: I0227 16:38:44.153789 4830 scope.go:117] "RemoveContainer" containerID="71f9a2d35a123a7c42bc68cc143760e467aedb724086c36e562efbf095e0c426" Feb 27 16:38:44 crc kubenswrapper[4830]: I0227 16:38:44.177225 4830 scope.go:117] "RemoveContainer" containerID="efb022c64f6ae8ffd2fec27339e107e45b38a12b6d4a8d2858182ad516e6d9f9" Feb 27 16:38:44 crc kubenswrapper[4830]: I0227 16:38:44.202371 4830 scope.go:117] "RemoveContainer" containerID="1954751f889385192cc38a0ea54da4d4fbf33340070fa0346fa385af89879ac7" Feb 27 16:38:44 crc kubenswrapper[4830]: I0227 16:38:44.225234 4830 scope.go:117] "RemoveContainer" containerID="d42b710ef87298f2e0a2e01a47fd2d62e290785d9674d8573992395513f85975" Feb 27 16:38:44 crc kubenswrapper[4830]: I0227 16:38:44.268534 4830 scope.go:117] "RemoveContainer" containerID="53a40c635318ff11c80f75f6211616278bbd9c179f11fec9265e63a26e70b0ac" Feb 27 16:38:44 crc kubenswrapper[4830]: I0227 16:38:44.285075 4830 scope.go:117] "RemoveContainer" containerID="72e38d1c2009b64b0066ca1c11420f6777aab9186b8f6d7357f2184e318a87ad" Feb 27 16:38:44 crc kubenswrapper[4830]: I0227 16:38:44.300739 4830 scope.go:117] "RemoveContainer" containerID="e377c9fe2c2c4014633d618a399228bda3185620f06415bda5d22e2216dcccee" Feb 27 16:38:44 crc kubenswrapper[4830]: I0227 16:38:44.318600 4830 scope.go:117] "RemoveContainer" containerID="a3e19fe9784a7e84ad00ba5db518baa23ac731605584cf84a3a6192b109fa71e" Feb 27 16:38:44 crc kubenswrapper[4830]: I0227 16:38:44.338587 4830 scope.go:117] "RemoveContainer" containerID="bde345255725008534174e08aa3bfd1e9e5abd79b8d35b0ffbbec8fdecf1e21f" Feb 27 16:40:00 crc kubenswrapper[4830]: I0227 16:40:00.150935 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536840-l2pjd"] Feb 27 16:40:00 crc kubenswrapper[4830]: E0227 16:40:00.152373 4830 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="d68eb6a9-3f9d-49da-b00a-16c94d10b1e0" containerName="oc" Feb 27 16:40:00 crc kubenswrapper[4830]: I0227 16:40:00.152408 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="d68eb6a9-3f9d-49da-b00a-16c94d10b1e0" containerName="oc" Feb 27 16:40:00 crc kubenswrapper[4830]: I0227 16:40:00.152786 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="d68eb6a9-3f9d-49da-b00a-16c94d10b1e0" containerName="oc" Feb 27 16:40:00 crc kubenswrapper[4830]: I0227 16:40:00.153702 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536840-l2pjd" Feb 27 16:40:00 crc kubenswrapper[4830]: I0227 16:40:00.158882 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 16:40:00 crc kubenswrapper[4830]: I0227 16:40:00.159032 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lmbtm" Feb 27 16:40:00 crc kubenswrapper[4830]: I0227 16:40:00.159203 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 16:40:00 crc kubenswrapper[4830]: I0227 16:40:00.161113 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6rrl\" (UniqueName: \"kubernetes.io/projected/80f0ba68-90f8-401e-b07c-7a110ebbcdd8-kube-api-access-x6rrl\") pod \"auto-csr-approver-29536840-l2pjd\" (UID: \"80f0ba68-90f8-401e-b07c-7a110ebbcdd8\") " pod="openshift-infra/auto-csr-approver-29536840-l2pjd" Feb 27 16:40:00 crc kubenswrapper[4830]: I0227 16:40:00.163098 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536840-l2pjd"] Feb 27 16:40:00 crc kubenswrapper[4830]: I0227 16:40:00.262754 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x6rrl\" (UniqueName: 
\"kubernetes.io/projected/80f0ba68-90f8-401e-b07c-7a110ebbcdd8-kube-api-access-x6rrl\") pod \"auto-csr-approver-29536840-l2pjd\" (UID: \"80f0ba68-90f8-401e-b07c-7a110ebbcdd8\") " pod="openshift-infra/auto-csr-approver-29536840-l2pjd" Feb 27 16:40:00 crc kubenswrapper[4830]: I0227 16:40:00.285262 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x6rrl\" (UniqueName: \"kubernetes.io/projected/80f0ba68-90f8-401e-b07c-7a110ebbcdd8-kube-api-access-x6rrl\") pod \"auto-csr-approver-29536840-l2pjd\" (UID: \"80f0ba68-90f8-401e-b07c-7a110ebbcdd8\") " pod="openshift-infra/auto-csr-approver-29536840-l2pjd" Feb 27 16:40:00 crc kubenswrapper[4830]: I0227 16:40:00.490765 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536840-l2pjd" Feb 27 16:40:00 crc kubenswrapper[4830]: I0227 16:40:00.959990 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536840-l2pjd"] Feb 27 16:40:00 crc kubenswrapper[4830]: W0227 16:40:00.969168 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod80f0ba68_90f8_401e_b07c_7a110ebbcdd8.slice/crio-f2d73e0f4334625cc56df96751dadc1dc86599de707b5033a3e7ce62dba30d68 WatchSource:0}: Error finding container f2d73e0f4334625cc56df96751dadc1dc86599de707b5033a3e7ce62dba30d68: Status 404 returned error can't find the container with id f2d73e0f4334625cc56df96751dadc1dc86599de707b5033a3e7ce62dba30d68 Feb 27 16:40:01 crc kubenswrapper[4830]: I0227 16:40:01.808833 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536840-l2pjd" event={"ID":"80f0ba68-90f8-401e-b07c-7a110ebbcdd8","Type":"ContainerStarted","Data":"f2d73e0f4334625cc56df96751dadc1dc86599de707b5033a3e7ce62dba30d68"} Feb 27 16:40:03 crc kubenswrapper[4830]: I0227 16:40:03.838506 4830 generic.go:334] "Generic (PLEG): container finished" 
podID="80f0ba68-90f8-401e-b07c-7a110ebbcdd8" containerID="a4bc264d1a03d587270a70e7a6343495af75bdb1492c3a935f7fb76e7c176ddb" exitCode=0 Feb 27 16:40:03 crc kubenswrapper[4830]: I0227 16:40:03.838583 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536840-l2pjd" event={"ID":"80f0ba68-90f8-401e-b07c-7a110ebbcdd8","Type":"ContainerDied","Data":"a4bc264d1a03d587270a70e7a6343495af75bdb1492c3a935f7fb76e7c176ddb"} Feb 27 16:40:05 crc kubenswrapper[4830]: I0227 16:40:05.174770 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536840-l2pjd" Feb 27 16:40:05 crc kubenswrapper[4830]: I0227 16:40:05.339628 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x6rrl\" (UniqueName: \"kubernetes.io/projected/80f0ba68-90f8-401e-b07c-7a110ebbcdd8-kube-api-access-x6rrl\") pod \"80f0ba68-90f8-401e-b07c-7a110ebbcdd8\" (UID: \"80f0ba68-90f8-401e-b07c-7a110ebbcdd8\") " Feb 27 16:40:05 crc kubenswrapper[4830]: I0227 16:40:05.348313 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80f0ba68-90f8-401e-b07c-7a110ebbcdd8-kube-api-access-x6rrl" (OuterVolumeSpecName: "kube-api-access-x6rrl") pod "80f0ba68-90f8-401e-b07c-7a110ebbcdd8" (UID: "80f0ba68-90f8-401e-b07c-7a110ebbcdd8"). InnerVolumeSpecName "kube-api-access-x6rrl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:40:05 crc kubenswrapper[4830]: I0227 16:40:05.443110 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x6rrl\" (UniqueName: \"kubernetes.io/projected/80f0ba68-90f8-401e-b07c-7a110ebbcdd8-kube-api-access-x6rrl\") on node \"crc\" DevicePath \"\"" Feb 27 16:40:05 crc kubenswrapper[4830]: I0227 16:40:05.863913 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536840-l2pjd" event={"ID":"80f0ba68-90f8-401e-b07c-7a110ebbcdd8","Type":"ContainerDied","Data":"f2d73e0f4334625cc56df96751dadc1dc86599de707b5033a3e7ce62dba30d68"} Feb 27 16:40:05 crc kubenswrapper[4830]: I0227 16:40:05.864076 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f2d73e0f4334625cc56df96751dadc1dc86599de707b5033a3e7ce62dba30d68" Feb 27 16:40:05 crc kubenswrapper[4830]: I0227 16:40:05.864456 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536840-l2pjd" Feb 27 16:40:06 crc kubenswrapper[4830]: I0227 16:40:06.276976 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536834-qfhvj"] Feb 27 16:40:06 crc kubenswrapper[4830]: I0227 16:40:06.287304 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536834-qfhvj"] Feb 27 16:40:06 crc kubenswrapper[4830]: I0227 16:40:06.769082 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb84cf11-0669-422d-9608-f5b339989bd5" path="/var/lib/kubelet/pods/fb84cf11-0669-422d-9608-f5b339989bd5/volumes" Feb 27 16:40:33 crc kubenswrapper[4830]: I0227 16:40:33.160820 4830 patch_prober.go:28] interesting pod/machine-config-daemon-2tv5v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body= Feb 27 16:40:33 crc kubenswrapper[4830]: I0227 16:40:33.161938 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 16:40:44 crc kubenswrapper[4830]: I0227 16:40:44.549553 4830 scope.go:117] "RemoveContainer" containerID="de445fa7b0d1be3672075bf8502f43e5f1bdfe4724a214743128c8f9140d38f5" Feb 27 16:41:03 crc kubenswrapper[4830]: I0227 16:41:03.160309 4830 patch_prober.go:28] interesting pod/machine-config-daemon-2tv5v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 16:41:03 crc kubenswrapper[4830]: I0227 16:41:03.161133 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 16:41:33 crc kubenswrapper[4830]: I0227 16:41:33.160689 4830 patch_prober.go:28] interesting pod/machine-config-daemon-2tv5v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 16:41:33 crc kubenswrapper[4830]: I0227 16:41:33.161321 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" containerName="machine-config-daemon" 
probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 16:41:33 crc kubenswrapper[4830]: I0227 16:41:33.161385 4830 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" Feb 27 16:41:33 crc kubenswrapper[4830]: I0227 16:41:33.162247 4830 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b810a866e7e028ecd9333aa0ac47bc4872c9ecf682d5561ecaf3d6e30e0e0340"} pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 27 16:41:33 crc kubenswrapper[4830]: I0227 16:41:33.162339 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" containerName="machine-config-daemon" containerID="cri-o://b810a866e7e028ecd9333aa0ac47bc4872c9ecf682d5561ecaf3d6e30e0e0340" gracePeriod=600 Feb 27 16:41:33 crc kubenswrapper[4830]: I0227 16:41:33.701490 4830 generic.go:334] "Generic (PLEG): container finished" podID="00d6b7ce-4757-4275-8345-60c1b546ce25" containerID="b810a866e7e028ecd9333aa0ac47bc4872c9ecf682d5561ecaf3d6e30e0e0340" exitCode=0 Feb 27 16:41:33 crc kubenswrapper[4830]: I0227 16:41:33.701562 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" event={"ID":"00d6b7ce-4757-4275-8345-60c1b546ce25","Type":"ContainerDied","Data":"b810a866e7e028ecd9333aa0ac47bc4872c9ecf682d5561ecaf3d6e30e0e0340"} Feb 27 16:41:33 crc kubenswrapper[4830]: I0227 16:41:33.701984 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" 
event={"ID":"00d6b7ce-4757-4275-8345-60c1b546ce25","Type":"ContainerStarted","Data":"c4de61c7f48929592fa4fe3911fffe24750acfdd56a4eab75365c7aa2a8b7dfb"} Feb 27 16:41:33 crc kubenswrapper[4830]: I0227 16:41:33.702023 4830 scope.go:117] "RemoveContainer" containerID="ca4233a6fa911a1d6b959075103b585a592935c9e9a6d178d17fef41a2fea048" Feb 27 16:42:00 crc kubenswrapper[4830]: I0227 16:42:00.155531 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536842-vzjv4"] Feb 27 16:42:00 crc kubenswrapper[4830]: E0227 16:42:00.156383 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80f0ba68-90f8-401e-b07c-7a110ebbcdd8" containerName="oc" Feb 27 16:42:00 crc kubenswrapper[4830]: I0227 16:42:00.156398 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="80f0ba68-90f8-401e-b07c-7a110ebbcdd8" containerName="oc" Feb 27 16:42:00 crc kubenswrapper[4830]: I0227 16:42:00.156558 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="80f0ba68-90f8-401e-b07c-7a110ebbcdd8" containerName="oc" Feb 27 16:42:00 crc kubenswrapper[4830]: I0227 16:42:00.157113 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536842-vzjv4" Feb 27 16:42:00 crc kubenswrapper[4830]: I0227 16:42:00.161123 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lmbtm" Feb 27 16:42:00 crc kubenswrapper[4830]: I0227 16:42:00.161380 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 16:42:00 crc kubenswrapper[4830]: I0227 16:42:00.171768 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 16:42:00 crc kubenswrapper[4830]: I0227 16:42:00.185817 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536842-vzjv4"] Feb 27 16:42:00 crc kubenswrapper[4830]: I0227 16:42:00.193685 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2rql\" (UniqueName: \"kubernetes.io/projected/ddabb2aa-eef2-4c5e-8db9-738fb96d3b6a-kube-api-access-n2rql\") pod \"auto-csr-approver-29536842-vzjv4\" (UID: \"ddabb2aa-eef2-4c5e-8db9-738fb96d3b6a\") " pod="openshift-infra/auto-csr-approver-29536842-vzjv4" Feb 27 16:42:00 crc kubenswrapper[4830]: I0227 16:42:00.294847 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2rql\" (UniqueName: \"kubernetes.io/projected/ddabb2aa-eef2-4c5e-8db9-738fb96d3b6a-kube-api-access-n2rql\") pod \"auto-csr-approver-29536842-vzjv4\" (UID: \"ddabb2aa-eef2-4c5e-8db9-738fb96d3b6a\") " pod="openshift-infra/auto-csr-approver-29536842-vzjv4" Feb 27 16:42:00 crc kubenswrapper[4830]: I0227 16:42:00.319644 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2rql\" (UniqueName: \"kubernetes.io/projected/ddabb2aa-eef2-4c5e-8db9-738fb96d3b6a-kube-api-access-n2rql\") pod \"auto-csr-approver-29536842-vzjv4\" (UID: \"ddabb2aa-eef2-4c5e-8db9-738fb96d3b6a\") " 
pod="openshift-infra/auto-csr-approver-29536842-vzjv4" Feb 27 16:42:00 crc kubenswrapper[4830]: I0227 16:42:00.492053 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536842-vzjv4" Feb 27 16:42:01 crc kubenswrapper[4830]: I0227 16:42:01.020713 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536842-vzjv4"] Feb 27 16:42:01 crc kubenswrapper[4830]: I0227 16:42:01.986140 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536842-vzjv4" event={"ID":"ddabb2aa-eef2-4c5e-8db9-738fb96d3b6a","Type":"ContainerStarted","Data":"1db62b0506206a8319dfbc27ec0b79512dfab3323bed2cd276148dceaf84ada5"} Feb 27 16:42:02 crc kubenswrapper[4830]: I0227 16:42:02.996850 4830 generic.go:334] "Generic (PLEG): container finished" podID="ddabb2aa-eef2-4c5e-8db9-738fb96d3b6a" containerID="c34c3a982d168b19783847bacdcf4ceb89f783b676f874ead2102d6282f28730" exitCode=0 Feb 27 16:42:02 crc kubenswrapper[4830]: I0227 16:42:02.996926 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536842-vzjv4" event={"ID":"ddabb2aa-eef2-4c5e-8db9-738fb96d3b6a","Type":"ContainerDied","Data":"c34c3a982d168b19783847bacdcf4ceb89f783b676f874ead2102d6282f28730"} Feb 27 16:42:04 crc kubenswrapper[4830]: I0227 16:42:04.348085 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536842-vzjv4" Feb 27 16:42:04 crc kubenswrapper[4830]: I0227 16:42:04.469581 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n2rql\" (UniqueName: \"kubernetes.io/projected/ddabb2aa-eef2-4c5e-8db9-738fb96d3b6a-kube-api-access-n2rql\") pod \"ddabb2aa-eef2-4c5e-8db9-738fb96d3b6a\" (UID: \"ddabb2aa-eef2-4c5e-8db9-738fb96d3b6a\") " Feb 27 16:42:04 crc kubenswrapper[4830]: I0227 16:42:04.479361 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ddabb2aa-eef2-4c5e-8db9-738fb96d3b6a-kube-api-access-n2rql" (OuterVolumeSpecName: "kube-api-access-n2rql") pod "ddabb2aa-eef2-4c5e-8db9-738fb96d3b6a" (UID: "ddabb2aa-eef2-4c5e-8db9-738fb96d3b6a"). InnerVolumeSpecName "kube-api-access-n2rql". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:42:04 crc kubenswrapper[4830]: I0227 16:42:04.571396 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n2rql\" (UniqueName: \"kubernetes.io/projected/ddabb2aa-eef2-4c5e-8db9-738fb96d3b6a-kube-api-access-n2rql\") on node \"crc\" DevicePath \"\"" Feb 27 16:42:05 crc kubenswrapper[4830]: I0227 16:42:05.016545 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536842-vzjv4" event={"ID":"ddabb2aa-eef2-4c5e-8db9-738fb96d3b6a","Type":"ContainerDied","Data":"1db62b0506206a8319dfbc27ec0b79512dfab3323bed2cd276148dceaf84ada5"} Feb 27 16:42:05 crc kubenswrapper[4830]: I0227 16:42:05.016596 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1db62b0506206a8319dfbc27ec0b79512dfab3323bed2cd276148dceaf84ada5" Feb 27 16:42:05 crc kubenswrapper[4830]: I0227 16:42:05.016637 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536842-vzjv4" Feb 27 16:42:05 crc kubenswrapper[4830]: I0227 16:42:05.435070 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536836-5jk7h"] Feb 27 16:42:05 crc kubenswrapper[4830]: I0227 16:42:05.441106 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536836-5jk7h"] Feb 27 16:42:06 crc kubenswrapper[4830]: I0227 16:42:06.777202 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bfb596df-e396-4217-ab45-f32af8481b49" path="/var/lib/kubelet/pods/bfb596df-e396-4217-ab45-f32af8481b49/volumes" Feb 27 16:42:44 crc kubenswrapper[4830]: I0227 16:42:44.672816 4830 scope.go:117] "RemoveContainer" containerID="51c33700809d2de0279cc4f0e5d6a3af45cedc384dda5a7267684f7fbb7c2fd9" Feb 27 16:42:47 crc kubenswrapper[4830]: I0227 16:42:47.170490 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-l65bw"] Feb 27 16:42:47 crc kubenswrapper[4830]: E0227 16:42:47.171314 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ddabb2aa-eef2-4c5e-8db9-738fb96d3b6a" containerName="oc" Feb 27 16:42:47 crc kubenswrapper[4830]: I0227 16:42:47.171337 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="ddabb2aa-eef2-4c5e-8db9-738fb96d3b6a" containerName="oc" Feb 27 16:42:47 crc kubenswrapper[4830]: I0227 16:42:47.171620 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="ddabb2aa-eef2-4c5e-8db9-738fb96d3b6a" containerName="oc" Feb 27 16:42:47 crc kubenswrapper[4830]: I0227 16:42:47.173295 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-l65bw" Feb 27 16:42:47 crc kubenswrapper[4830]: I0227 16:42:47.185173 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-l65bw"] Feb 27 16:42:47 crc kubenswrapper[4830]: I0227 16:42:47.292454 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pr2l5\" (UniqueName: \"kubernetes.io/projected/8fca3286-0d0e-48a2-a43d-aafd94218b81-kube-api-access-pr2l5\") pod \"redhat-operators-l65bw\" (UID: \"8fca3286-0d0e-48a2-a43d-aafd94218b81\") " pod="openshift-marketplace/redhat-operators-l65bw" Feb 27 16:42:47 crc kubenswrapper[4830]: I0227 16:42:47.292575 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8fca3286-0d0e-48a2-a43d-aafd94218b81-catalog-content\") pod \"redhat-operators-l65bw\" (UID: \"8fca3286-0d0e-48a2-a43d-aafd94218b81\") " pod="openshift-marketplace/redhat-operators-l65bw" Feb 27 16:42:47 crc kubenswrapper[4830]: I0227 16:42:47.292613 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8fca3286-0d0e-48a2-a43d-aafd94218b81-utilities\") pod \"redhat-operators-l65bw\" (UID: \"8fca3286-0d0e-48a2-a43d-aafd94218b81\") " pod="openshift-marketplace/redhat-operators-l65bw" Feb 27 16:42:47 crc kubenswrapper[4830]: I0227 16:42:47.394270 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pr2l5\" (UniqueName: \"kubernetes.io/projected/8fca3286-0d0e-48a2-a43d-aafd94218b81-kube-api-access-pr2l5\") pod \"redhat-operators-l65bw\" (UID: \"8fca3286-0d0e-48a2-a43d-aafd94218b81\") " pod="openshift-marketplace/redhat-operators-l65bw" Feb 27 16:42:47 crc kubenswrapper[4830]: I0227 16:42:47.394370 4830 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8fca3286-0d0e-48a2-a43d-aafd94218b81-catalog-content\") pod \"redhat-operators-l65bw\" (UID: \"8fca3286-0d0e-48a2-a43d-aafd94218b81\") " pod="openshift-marketplace/redhat-operators-l65bw" Feb 27 16:42:47 crc kubenswrapper[4830]: I0227 16:42:47.394414 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8fca3286-0d0e-48a2-a43d-aafd94218b81-utilities\") pod \"redhat-operators-l65bw\" (UID: \"8fca3286-0d0e-48a2-a43d-aafd94218b81\") " pod="openshift-marketplace/redhat-operators-l65bw" Feb 27 16:42:47 crc kubenswrapper[4830]: I0227 16:42:47.395056 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8fca3286-0d0e-48a2-a43d-aafd94218b81-catalog-content\") pod \"redhat-operators-l65bw\" (UID: \"8fca3286-0d0e-48a2-a43d-aafd94218b81\") " pod="openshift-marketplace/redhat-operators-l65bw" Feb 27 16:42:47 crc kubenswrapper[4830]: I0227 16:42:47.395126 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8fca3286-0d0e-48a2-a43d-aafd94218b81-utilities\") pod \"redhat-operators-l65bw\" (UID: \"8fca3286-0d0e-48a2-a43d-aafd94218b81\") " pod="openshift-marketplace/redhat-operators-l65bw" Feb 27 16:42:47 crc kubenswrapper[4830]: I0227 16:42:47.416607 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pr2l5\" (UniqueName: \"kubernetes.io/projected/8fca3286-0d0e-48a2-a43d-aafd94218b81-kube-api-access-pr2l5\") pod \"redhat-operators-l65bw\" (UID: \"8fca3286-0d0e-48a2-a43d-aafd94218b81\") " pod="openshift-marketplace/redhat-operators-l65bw" Feb 27 16:42:47 crc kubenswrapper[4830]: I0227 16:42:47.492244 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-l65bw" Feb 27 16:42:47 crc kubenswrapper[4830]: I0227 16:42:47.960547 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-l65bw"] Feb 27 16:42:48 crc kubenswrapper[4830]: I0227 16:42:48.446276 4830 generic.go:334] "Generic (PLEG): container finished" podID="8fca3286-0d0e-48a2-a43d-aafd94218b81" containerID="4ac86be85472928cc42b28deb0eda9934358d21e90a0d563fd1a3d1b2494f969" exitCode=0 Feb 27 16:42:48 crc kubenswrapper[4830]: I0227 16:42:48.446604 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l65bw" event={"ID":"8fca3286-0d0e-48a2-a43d-aafd94218b81","Type":"ContainerDied","Data":"4ac86be85472928cc42b28deb0eda9934358d21e90a0d563fd1a3d1b2494f969"} Feb 27 16:42:48 crc kubenswrapper[4830]: I0227 16:42:48.446636 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l65bw" event={"ID":"8fca3286-0d0e-48a2-a43d-aafd94218b81","Type":"ContainerStarted","Data":"4ace474b4b76b600223397b7118da2c3abc7957c806739213afda2ace3105a4b"} Feb 27 16:42:49 crc kubenswrapper[4830]: I0227 16:42:49.465305 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l65bw" event={"ID":"8fca3286-0d0e-48a2-a43d-aafd94218b81","Type":"ContainerStarted","Data":"f4867970e58c67de17ac8b5e4d66607693a8eb4fe926f87ba9147288d8faf13c"} Feb 27 16:42:50 crc kubenswrapper[4830]: I0227 16:42:50.479199 4830 generic.go:334] "Generic (PLEG): container finished" podID="8fca3286-0d0e-48a2-a43d-aafd94218b81" containerID="f4867970e58c67de17ac8b5e4d66607693a8eb4fe926f87ba9147288d8faf13c" exitCode=0 Feb 27 16:42:50 crc kubenswrapper[4830]: I0227 16:42:50.479443 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l65bw" 
event={"ID":"8fca3286-0d0e-48a2-a43d-aafd94218b81","Type":"ContainerDied","Data":"f4867970e58c67de17ac8b5e4d66607693a8eb4fe926f87ba9147288d8faf13c"} Feb 27 16:42:51 crc kubenswrapper[4830]: I0227 16:42:51.490940 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l65bw" event={"ID":"8fca3286-0d0e-48a2-a43d-aafd94218b81","Type":"ContainerStarted","Data":"d64773d1fa168b9e7151da1a1cef3d757cfe79a2910b3a5a4d1335e62f031898"} Feb 27 16:42:51 crc kubenswrapper[4830]: I0227 16:42:51.522728 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-l65bw" podStartSLOduration=1.924853386 podStartE2EDuration="4.522691877s" podCreationTimestamp="2026-02-27 16:42:47 +0000 UTC" firstStartedPulling="2026-02-27 16:42:48.44821729 +0000 UTC m=+2164.537489753" lastFinishedPulling="2026-02-27 16:42:51.046055781 +0000 UTC m=+2167.135328244" observedRunningTime="2026-02-27 16:42:51.513297307 +0000 UTC m=+2167.602569810" watchObservedRunningTime="2026-02-27 16:42:51.522691877 +0000 UTC m=+2167.611964380" Feb 27 16:42:57 crc kubenswrapper[4830]: I0227 16:42:57.492453 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-l65bw" Feb 27 16:42:57 crc kubenswrapper[4830]: I0227 16:42:57.493199 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-l65bw" Feb 27 16:42:58 crc kubenswrapper[4830]: I0227 16:42:58.561625 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-l65bw" podUID="8fca3286-0d0e-48a2-a43d-aafd94218b81" containerName="registry-server" probeResult="failure" output=< Feb 27 16:42:58 crc kubenswrapper[4830]: timeout: failed to connect service ":50051" within 1s Feb 27 16:42:58 crc kubenswrapper[4830]: > Feb 27 16:43:07 crc kubenswrapper[4830]: I0227 16:43:07.573454 4830 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openshift-marketplace/redhat-operators-l65bw" Feb 27 16:43:07 crc kubenswrapper[4830]: I0227 16:43:07.648147 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-l65bw" Feb 27 16:43:08 crc kubenswrapper[4830]: I0227 16:43:08.496467 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-l65bw"] Feb 27 16:43:08 crc kubenswrapper[4830]: I0227 16:43:08.642241 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-l65bw" podUID="8fca3286-0d0e-48a2-a43d-aafd94218b81" containerName="registry-server" containerID="cri-o://d64773d1fa168b9e7151da1a1cef3d757cfe79a2910b3a5a4d1335e62f031898" gracePeriod=2 Feb 27 16:43:09 crc kubenswrapper[4830]: I0227 16:43:09.653605 4830 generic.go:334] "Generic (PLEG): container finished" podID="8fca3286-0d0e-48a2-a43d-aafd94218b81" containerID="d64773d1fa168b9e7151da1a1cef3d757cfe79a2910b3a5a4d1335e62f031898" exitCode=0 Feb 27 16:43:09 crc kubenswrapper[4830]: I0227 16:43:09.653842 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l65bw" event={"ID":"8fca3286-0d0e-48a2-a43d-aafd94218b81","Type":"ContainerDied","Data":"d64773d1fa168b9e7151da1a1cef3d757cfe79a2910b3a5a4d1335e62f031898"} Feb 27 16:43:09 crc kubenswrapper[4830]: I0227 16:43:09.654185 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l65bw" event={"ID":"8fca3286-0d0e-48a2-a43d-aafd94218b81","Type":"ContainerDied","Data":"4ace474b4b76b600223397b7118da2c3abc7957c806739213afda2ace3105a4b"} Feb 27 16:43:09 crc kubenswrapper[4830]: I0227 16:43:09.654217 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ace474b4b76b600223397b7118da2c3abc7957c806739213afda2ace3105a4b" Feb 27 16:43:09 crc kubenswrapper[4830]: I0227 16:43:09.693115 4830 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-l65bw" Feb 27 16:43:09 crc kubenswrapper[4830]: I0227 16:43:09.763557 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8fca3286-0d0e-48a2-a43d-aafd94218b81-catalog-content\") pod \"8fca3286-0d0e-48a2-a43d-aafd94218b81\" (UID: \"8fca3286-0d0e-48a2-a43d-aafd94218b81\") " Feb 27 16:43:09 crc kubenswrapper[4830]: I0227 16:43:09.763661 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pr2l5\" (UniqueName: \"kubernetes.io/projected/8fca3286-0d0e-48a2-a43d-aafd94218b81-kube-api-access-pr2l5\") pod \"8fca3286-0d0e-48a2-a43d-aafd94218b81\" (UID: \"8fca3286-0d0e-48a2-a43d-aafd94218b81\") " Feb 27 16:43:09 crc kubenswrapper[4830]: I0227 16:43:09.763762 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8fca3286-0d0e-48a2-a43d-aafd94218b81-utilities\") pod \"8fca3286-0d0e-48a2-a43d-aafd94218b81\" (UID: \"8fca3286-0d0e-48a2-a43d-aafd94218b81\") " Feb 27 16:43:09 crc kubenswrapper[4830]: I0227 16:43:09.765167 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8fca3286-0d0e-48a2-a43d-aafd94218b81-utilities" (OuterVolumeSpecName: "utilities") pod "8fca3286-0d0e-48a2-a43d-aafd94218b81" (UID: "8fca3286-0d0e-48a2-a43d-aafd94218b81"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 16:43:09 crc kubenswrapper[4830]: I0227 16:43:09.772169 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8fca3286-0d0e-48a2-a43d-aafd94218b81-kube-api-access-pr2l5" (OuterVolumeSpecName: "kube-api-access-pr2l5") pod "8fca3286-0d0e-48a2-a43d-aafd94218b81" (UID: "8fca3286-0d0e-48a2-a43d-aafd94218b81"). 
InnerVolumeSpecName "kube-api-access-pr2l5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:43:09 crc kubenswrapper[4830]: I0227 16:43:09.865110 4830 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8fca3286-0d0e-48a2-a43d-aafd94218b81-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 16:43:09 crc kubenswrapper[4830]: I0227 16:43:09.865146 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pr2l5\" (UniqueName: \"kubernetes.io/projected/8fca3286-0d0e-48a2-a43d-aafd94218b81-kube-api-access-pr2l5\") on node \"crc\" DevicePath \"\"" Feb 27 16:43:09 crc kubenswrapper[4830]: I0227 16:43:09.913555 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8fca3286-0d0e-48a2-a43d-aafd94218b81-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8fca3286-0d0e-48a2-a43d-aafd94218b81" (UID: "8fca3286-0d0e-48a2-a43d-aafd94218b81"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 16:43:09 crc kubenswrapper[4830]: I0227 16:43:09.966916 4830 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8fca3286-0d0e-48a2-a43d-aafd94218b81-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 16:43:10 crc kubenswrapper[4830]: I0227 16:43:10.663838 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-l65bw" Feb 27 16:43:10 crc kubenswrapper[4830]: I0227 16:43:10.717044 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-l65bw"] Feb 27 16:43:10 crc kubenswrapper[4830]: I0227 16:43:10.721907 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-l65bw"] Feb 27 16:43:10 crc kubenswrapper[4830]: I0227 16:43:10.774568 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8fca3286-0d0e-48a2-a43d-aafd94218b81" path="/var/lib/kubelet/pods/8fca3286-0d0e-48a2-a43d-aafd94218b81/volumes" Feb 27 16:43:33 crc kubenswrapper[4830]: I0227 16:43:33.159933 4830 patch_prober.go:28] interesting pod/machine-config-daemon-2tv5v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 16:43:33 crc kubenswrapper[4830]: I0227 16:43:33.160644 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 16:44:00 crc kubenswrapper[4830]: I0227 16:44:00.160529 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536844-mxwt6"] Feb 27 16:44:00 crc kubenswrapper[4830]: E0227 16:44:00.161709 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fca3286-0d0e-48a2-a43d-aafd94218b81" containerName="extract-content" Feb 27 16:44:00 crc kubenswrapper[4830]: I0227 16:44:00.161732 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fca3286-0d0e-48a2-a43d-aafd94218b81" containerName="extract-content" Feb 27 16:44:00 
crc kubenswrapper[4830]: E0227 16:44:00.161771 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fca3286-0d0e-48a2-a43d-aafd94218b81" containerName="extract-utilities" Feb 27 16:44:00 crc kubenswrapper[4830]: I0227 16:44:00.161783 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fca3286-0d0e-48a2-a43d-aafd94218b81" containerName="extract-utilities" Feb 27 16:44:00 crc kubenswrapper[4830]: E0227 16:44:00.161818 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fca3286-0d0e-48a2-a43d-aafd94218b81" containerName="registry-server" Feb 27 16:44:00 crc kubenswrapper[4830]: I0227 16:44:00.161832 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fca3286-0d0e-48a2-a43d-aafd94218b81" containerName="registry-server" Feb 27 16:44:00 crc kubenswrapper[4830]: I0227 16:44:00.162101 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="8fca3286-0d0e-48a2-a43d-aafd94218b81" containerName="registry-server" Feb 27 16:44:00 crc kubenswrapper[4830]: I0227 16:44:00.162883 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536844-mxwt6" Feb 27 16:44:00 crc kubenswrapper[4830]: I0227 16:44:00.170433 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lmbtm" Feb 27 16:44:00 crc kubenswrapper[4830]: I0227 16:44:00.170538 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 16:44:00 crc kubenswrapper[4830]: I0227 16:44:00.171509 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 16:44:00 crc kubenswrapper[4830]: I0227 16:44:00.190139 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536844-mxwt6"] Feb 27 16:44:00 crc kubenswrapper[4830]: I0227 16:44:00.207579 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8cdp\" (UniqueName: \"kubernetes.io/projected/84cb45db-b04b-4162-8ddf-ad745104891a-kube-api-access-l8cdp\") pod \"auto-csr-approver-29536844-mxwt6\" (UID: \"84cb45db-b04b-4162-8ddf-ad745104891a\") " pod="openshift-infra/auto-csr-approver-29536844-mxwt6" Feb 27 16:44:00 crc kubenswrapper[4830]: I0227 16:44:00.310010 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8cdp\" (UniqueName: \"kubernetes.io/projected/84cb45db-b04b-4162-8ddf-ad745104891a-kube-api-access-l8cdp\") pod \"auto-csr-approver-29536844-mxwt6\" (UID: \"84cb45db-b04b-4162-8ddf-ad745104891a\") " pod="openshift-infra/auto-csr-approver-29536844-mxwt6" Feb 27 16:44:00 crc kubenswrapper[4830]: I0227 16:44:00.331244 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8cdp\" (UniqueName: \"kubernetes.io/projected/84cb45db-b04b-4162-8ddf-ad745104891a-kube-api-access-l8cdp\") pod \"auto-csr-approver-29536844-mxwt6\" (UID: \"84cb45db-b04b-4162-8ddf-ad745104891a\") " 
pod="openshift-infra/auto-csr-approver-29536844-mxwt6" Feb 27 16:44:00 crc kubenswrapper[4830]: I0227 16:44:00.506432 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536844-mxwt6" Feb 27 16:44:00 crc kubenswrapper[4830]: I0227 16:44:00.957257 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536844-mxwt6"] Feb 27 16:44:00 crc kubenswrapper[4830]: W0227 16:44:00.972431 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod84cb45db_b04b_4162_8ddf_ad745104891a.slice/crio-4946b87886cbd6390a9150f6a1ca0f3214db063f21761e9c83db1790285c97b3 WatchSource:0}: Error finding container 4946b87886cbd6390a9150f6a1ca0f3214db063f21761e9c83db1790285c97b3: Status 404 returned error can't find the container with id 4946b87886cbd6390a9150f6a1ca0f3214db063f21761e9c83db1790285c97b3 Feb 27 16:44:00 crc kubenswrapper[4830]: I0227 16:44:00.975416 4830 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 27 16:44:01 crc kubenswrapper[4830]: I0227 16:44:01.165682 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536844-mxwt6" event={"ID":"84cb45db-b04b-4162-8ddf-ad745104891a","Type":"ContainerStarted","Data":"4946b87886cbd6390a9150f6a1ca0f3214db063f21761e9c83db1790285c97b3"} Feb 27 16:44:03 crc kubenswrapper[4830]: I0227 16:44:03.160099 4830 patch_prober.go:28] interesting pod/machine-config-daemon-2tv5v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 16:44:03 crc kubenswrapper[4830]: I0227 16:44:03.160887 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" 
podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 16:44:03 crc kubenswrapper[4830]: I0227 16:44:03.185910 4830 generic.go:334] "Generic (PLEG): container finished" podID="84cb45db-b04b-4162-8ddf-ad745104891a" containerID="d8983ef771441f42bf4e8640879aaf9cf659027f467b21fb8abd11277603d54c" exitCode=0 Feb 27 16:44:03 crc kubenswrapper[4830]: I0227 16:44:03.186010 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536844-mxwt6" event={"ID":"84cb45db-b04b-4162-8ddf-ad745104891a","Type":"ContainerDied","Data":"d8983ef771441f42bf4e8640879aaf9cf659027f467b21fb8abd11277603d54c"} Feb 27 16:44:04 crc kubenswrapper[4830]: I0227 16:44:04.564200 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536844-mxwt6" Feb 27 16:44:04 crc kubenswrapper[4830]: I0227 16:44:04.578204 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l8cdp\" (UniqueName: \"kubernetes.io/projected/84cb45db-b04b-4162-8ddf-ad745104891a-kube-api-access-l8cdp\") pod \"84cb45db-b04b-4162-8ddf-ad745104891a\" (UID: \"84cb45db-b04b-4162-8ddf-ad745104891a\") " Feb 27 16:44:04 crc kubenswrapper[4830]: I0227 16:44:04.587639 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84cb45db-b04b-4162-8ddf-ad745104891a-kube-api-access-l8cdp" (OuterVolumeSpecName: "kube-api-access-l8cdp") pod "84cb45db-b04b-4162-8ddf-ad745104891a" (UID: "84cb45db-b04b-4162-8ddf-ad745104891a"). InnerVolumeSpecName "kube-api-access-l8cdp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:44:04 crc kubenswrapper[4830]: I0227 16:44:04.679505 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l8cdp\" (UniqueName: \"kubernetes.io/projected/84cb45db-b04b-4162-8ddf-ad745104891a-kube-api-access-l8cdp\") on node \"crc\" DevicePath \"\"" Feb 27 16:44:05 crc kubenswrapper[4830]: I0227 16:44:05.206110 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536844-mxwt6" event={"ID":"84cb45db-b04b-4162-8ddf-ad745104891a","Type":"ContainerDied","Data":"4946b87886cbd6390a9150f6a1ca0f3214db063f21761e9c83db1790285c97b3"} Feb 27 16:44:05 crc kubenswrapper[4830]: I0227 16:44:05.206175 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4946b87886cbd6390a9150f6a1ca0f3214db063f21761e9c83db1790285c97b3" Feb 27 16:44:05 crc kubenswrapper[4830]: I0227 16:44:05.206195 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536844-mxwt6" Feb 27 16:44:05 crc kubenswrapper[4830]: I0227 16:44:05.644077 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536838-sjpxn"] Feb 27 16:44:05 crc kubenswrapper[4830]: I0227 16:44:05.653766 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536838-sjpxn"] Feb 27 16:44:06 crc kubenswrapper[4830]: I0227 16:44:06.780822 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d68eb6a9-3f9d-49da-b00a-16c94d10b1e0" path="/var/lib/kubelet/pods/d68eb6a9-3f9d-49da-b00a-16c94d10b1e0/volumes" Feb 27 16:44:33 crc kubenswrapper[4830]: I0227 16:44:33.160177 4830 patch_prober.go:28] interesting pod/machine-config-daemon-2tv5v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body= Feb 27 16:44:33 crc kubenswrapper[4830]: I0227 16:44:33.162051 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 16:44:33 crc kubenswrapper[4830]: I0227 16:44:33.162150 4830 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" Feb 27 16:44:33 crc kubenswrapper[4830]: I0227 16:44:33.162832 4830 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c4de61c7f48929592fa4fe3911fffe24750acfdd56a4eab75365c7aa2a8b7dfb"} pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 27 16:44:33 crc kubenswrapper[4830]: I0227 16:44:33.162903 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" containerName="machine-config-daemon" containerID="cri-o://c4de61c7f48929592fa4fe3911fffe24750acfdd56a4eab75365c7aa2a8b7dfb" gracePeriod=600 Feb 27 16:44:33 crc kubenswrapper[4830]: E0227 16:44:33.306020 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 16:44:33 crc kubenswrapper[4830]: 
I0227 16:44:33.427395 4830 generic.go:334] "Generic (PLEG): container finished" podID="00d6b7ce-4757-4275-8345-60c1b546ce25" containerID="c4de61c7f48929592fa4fe3911fffe24750acfdd56a4eab75365c7aa2a8b7dfb" exitCode=0 Feb 27 16:44:33 crc kubenswrapper[4830]: I0227 16:44:33.427442 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" event={"ID":"00d6b7ce-4757-4275-8345-60c1b546ce25","Type":"ContainerDied","Data":"c4de61c7f48929592fa4fe3911fffe24750acfdd56a4eab75365c7aa2a8b7dfb"} Feb 27 16:44:33 crc kubenswrapper[4830]: I0227 16:44:33.427479 4830 scope.go:117] "RemoveContainer" containerID="b810a866e7e028ecd9333aa0ac47bc4872c9ecf682d5561ecaf3d6e30e0e0340" Feb 27 16:44:33 crc kubenswrapper[4830]: I0227 16:44:33.427896 4830 scope.go:117] "RemoveContainer" containerID="c4de61c7f48929592fa4fe3911fffe24750acfdd56a4eab75365c7aa2a8b7dfb" Feb 27 16:44:33 crc kubenswrapper[4830]: E0227 16:44:33.428224 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 16:44:44 crc kubenswrapper[4830]: I0227 16:44:44.779467 4830 scope.go:117] "RemoveContainer" containerID="5c588e4bfca51877a2c987f383ce4f876fe68b73bceb40ebcc1c3ab39c2d797a" Feb 27 16:44:46 crc kubenswrapper[4830]: I0227 16:44:46.762462 4830 scope.go:117] "RemoveContainer" containerID="c4de61c7f48929592fa4fe3911fffe24750acfdd56a4eab75365c7aa2a8b7dfb" Feb 27 16:44:46 crc kubenswrapper[4830]: E0227 16:44:46.763018 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 16:45:00 crc kubenswrapper[4830]: I0227 16:45:00.182886 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536845-jr2fm"] Feb 27 16:45:00 crc kubenswrapper[4830]: E0227 16:45:00.184055 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84cb45db-b04b-4162-8ddf-ad745104891a" containerName="oc" Feb 27 16:45:00 crc kubenswrapper[4830]: I0227 16:45:00.184103 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="84cb45db-b04b-4162-8ddf-ad745104891a" containerName="oc" Feb 27 16:45:00 crc kubenswrapper[4830]: I0227 16:45:00.184761 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="84cb45db-b04b-4162-8ddf-ad745104891a" containerName="oc" Feb 27 16:45:00 crc kubenswrapper[4830]: I0227 16:45:00.185842 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29536845-jr2fm" Feb 27 16:45:00 crc kubenswrapper[4830]: I0227 16:45:00.188916 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 27 16:45:00 crc kubenswrapper[4830]: I0227 16:45:00.189271 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 27 16:45:00 crc kubenswrapper[4830]: I0227 16:45:00.197755 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536845-jr2fm"] Feb 27 16:45:00 crc kubenswrapper[4830]: I0227 16:45:00.336868 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmhmk\" (UniqueName: \"kubernetes.io/projected/c7acf39c-5119-438f-bfca-2aa403a29a4b-kube-api-access-dmhmk\") pod \"collect-profiles-29536845-jr2fm\" (UID: \"c7acf39c-5119-438f-bfca-2aa403a29a4b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536845-jr2fm" Feb 27 16:45:00 crc kubenswrapper[4830]: I0227 16:45:00.337021 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c7acf39c-5119-438f-bfca-2aa403a29a4b-config-volume\") pod \"collect-profiles-29536845-jr2fm\" (UID: \"c7acf39c-5119-438f-bfca-2aa403a29a4b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536845-jr2fm" Feb 27 16:45:00 crc kubenswrapper[4830]: I0227 16:45:00.337081 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c7acf39c-5119-438f-bfca-2aa403a29a4b-secret-volume\") pod \"collect-profiles-29536845-jr2fm\" (UID: \"c7acf39c-5119-438f-bfca-2aa403a29a4b\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29536845-jr2fm" Feb 27 16:45:00 crc kubenswrapper[4830]: I0227 16:45:00.439557 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dmhmk\" (UniqueName: \"kubernetes.io/projected/c7acf39c-5119-438f-bfca-2aa403a29a4b-kube-api-access-dmhmk\") pod \"collect-profiles-29536845-jr2fm\" (UID: \"c7acf39c-5119-438f-bfca-2aa403a29a4b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536845-jr2fm" Feb 27 16:45:00 crc kubenswrapper[4830]: I0227 16:45:00.440269 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c7acf39c-5119-438f-bfca-2aa403a29a4b-config-volume\") pod \"collect-profiles-29536845-jr2fm\" (UID: \"c7acf39c-5119-438f-bfca-2aa403a29a4b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536845-jr2fm" Feb 27 16:45:00 crc kubenswrapper[4830]: I0227 16:45:00.441856 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c7acf39c-5119-438f-bfca-2aa403a29a4b-secret-volume\") pod \"collect-profiles-29536845-jr2fm\" (UID: \"c7acf39c-5119-438f-bfca-2aa403a29a4b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536845-jr2fm" Feb 27 16:45:00 crc kubenswrapper[4830]: I0227 16:45:00.443595 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c7acf39c-5119-438f-bfca-2aa403a29a4b-config-volume\") pod \"collect-profiles-29536845-jr2fm\" (UID: \"c7acf39c-5119-438f-bfca-2aa403a29a4b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536845-jr2fm" Feb 27 16:45:00 crc kubenswrapper[4830]: I0227 16:45:00.449474 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/c7acf39c-5119-438f-bfca-2aa403a29a4b-secret-volume\") pod \"collect-profiles-29536845-jr2fm\" (UID: \"c7acf39c-5119-438f-bfca-2aa403a29a4b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536845-jr2fm" Feb 27 16:45:00 crc kubenswrapper[4830]: I0227 16:45:00.459894 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dmhmk\" (UniqueName: \"kubernetes.io/projected/c7acf39c-5119-438f-bfca-2aa403a29a4b-kube-api-access-dmhmk\") pod \"collect-profiles-29536845-jr2fm\" (UID: \"c7acf39c-5119-438f-bfca-2aa403a29a4b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536845-jr2fm" Feb 27 16:45:00 crc kubenswrapper[4830]: I0227 16:45:00.519492 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29536845-jr2fm" Feb 27 16:45:00 crc kubenswrapper[4830]: I0227 16:45:00.762914 4830 scope.go:117] "RemoveContainer" containerID="c4de61c7f48929592fa4fe3911fffe24750acfdd56a4eab75365c7aa2a8b7dfb" Feb 27 16:45:00 crc kubenswrapper[4830]: E0227 16:45:00.763714 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 16:45:00 crc kubenswrapper[4830]: I0227 16:45:00.872945 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536845-jr2fm"] Feb 27 16:45:00 crc kubenswrapper[4830]: W0227 16:45:00.875425 4830 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc7acf39c_5119_438f_bfca_2aa403a29a4b.slice/crio-93137a6732bb3c7060ce829ff7a7fb403aaf046da077e5147ef756c9a26af428 WatchSource:0}: Error finding container 93137a6732bb3c7060ce829ff7a7fb403aaf046da077e5147ef756c9a26af428: Status 404 returned error can't find the container with id 93137a6732bb3c7060ce829ff7a7fb403aaf046da077e5147ef756c9a26af428 Feb 27 16:45:01 crc kubenswrapper[4830]: I0227 16:45:01.665657 4830 generic.go:334] "Generic (PLEG): container finished" podID="c7acf39c-5119-438f-bfca-2aa403a29a4b" containerID="2f0846069a58d31584c7158e1dc49b088af3a82683ebccd78ae041bf55658993" exitCode=0 Feb 27 16:45:01 crc kubenswrapper[4830]: I0227 16:45:01.665692 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29536845-jr2fm" event={"ID":"c7acf39c-5119-438f-bfca-2aa403a29a4b","Type":"ContainerDied","Data":"2f0846069a58d31584c7158e1dc49b088af3a82683ebccd78ae041bf55658993"} Feb 27 16:45:01 crc kubenswrapper[4830]: I0227 16:45:01.665713 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29536845-jr2fm" event={"ID":"c7acf39c-5119-438f-bfca-2aa403a29a4b","Type":"ContainerStarted","Data":"93137a6732bb3c7060ce829ff7a7fb403aaf046da077e5147ef756c9a26af428"} Feb 27 16:45:03 crc kubenswrapper[4830]: I0227 16:45:03.030458 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29536845-jr2fm" Feb 27 16:45:03 crc kubenswrapper[4830]: I0227 16:45:03.181671 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dmhmk\" (UniqueName: \"kubernetes.io/projected/c7acf39c-5119-438f-bfca-2aa403a29a4b-kube-api-access-dmhmk\") pod \"c7acf39c-5119-438f-bfca-2aa403a29a4b\" (UID: \"c7acf39c-5119-438f-bfca-2aa403a29a4b\") " Feb 27 16:45:03 crc kubenswrapper[4830]: I0227 16:45:03.181754 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c7acf39c-5119-438f-bfca-2aa403a29a4b-secret-volume\") pod \"c7acf39c-5119-438f-bfca-2aa403a29a4b\" (UID: \"c7acf39c-5119-438f-bfca-2aa403a29a4b\") " Feb 27 16:45:03 crc kubenswrapper[4830]: I0227 16:45:03.181787 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c7acf39c-5119-438f-bfca-2aa403a29a4b-config-volume\") pod \"c7acf39c-5119-438f-bfca-2aa403a29a4b\" (UID: \"c7acf39c-5119-438f-bfca-2aa403a29a4b\") " Feb 27 16:45:03 crc kubenswrapper[4830]: I0227 16:45:03.183208 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7acf39c-5119-438f-bfca-2aa403a29a4b-config-volume" (OuterVolumeSpecName: "config-volume") pod "c7acf39c-5119-438f-bfca-2aa403a29a4b" (UID: "c7acf39c-5119-438f-bfca-2aa403a29a4b"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:45:03 crc kubenswrapper[4830]: I0227 16:45:03.189070 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7acf39c-5119-438f-bfca-2aa403a29a4b-kube-api-access-dmhmk" (OuterVolumeSpecName: "kube-api-access-dmhmk") pod "c7acf39c-5119-438f-bfca-2aa403a29a4b" (UID: "c7acf39c-5119-438f-bfca-2aa403a29a4b"). 
InnerVolumeSpecName "kube-api-access-dmhmk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:45:03 crc kubenswrapper[4830]: I0227 16:45:03.189220 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7acf39c-5119-438f-bfca-2aa403a29a4b-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "c7acf39c-5119-438f-bfca-2aa403a29a4b" (UID: "c7acf39c-5119-438f-bfca-2aa403a29a4b"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:45:03 crc kubenswrapper[4830]: I0227 16:45:03.283544 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dmhmk\" (UniqueName: \"kubernetes.io/projected/c7acf39c-5119-438f-bfca-2aa403a29a4b-kube-api-access-dmhmk\") on node \"crc\" DevicePath \"\"" Feb 27 16:45:03 crc kubenswrapper[4830]: I0227 16:45:03.283596 4830 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c7acf39c-5119-438f-bfca-2aa403a29a4b-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 27 16:45:03 crc kubenswrapper[4830]: I0227 16:45:03.283616 4830 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c7acf39c-5119-438f-bfca-2aa403a29a4b-config-volume\") on node \"crc\" DevicePath \"\"" Feb 27 16:45:03 crc kubenswrapper[4830]: I0227 16:45:03.685441 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29536845-jr2fm" event={"ID":"c7acf39c-5119-438f-bfca-2aa403a29a4b","Type":"ContainerDied","Data":"93137a6732bb3c7060ce829ff7a7fb403aaf046da077e5147ef756c9a26af428"} Feb 27 16:45:03 crc kubenswrapper[4830]: I0227 16:45:03.685501 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="93137a6732bb3c7060ce829ff7a7fb403aaf046da077e5147ef756c9a26af428" Feb 27 16:45:03 crc kubenswrapper[4830]: I0227 16:45:03.685534 4830 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29536845-jr2fm" Feb 27 16:45:04 crc kubenswrapper[4830]: I0227 16:45:04.127955 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536800-n8kg4"] Feb 27 16:45:04 crc kubenswrapper[4830]: I0227 16:45:04.139905 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536800-n8kg4"] Feb 27 16:45:04 crc kubenswrapper[4830]: I0227 16:45:04.777915 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="041ea905-9e91-41e3-9db6-820256d951aa" path="/var/lib/kubelet/pods/041ea905-9e91-41e3-9db6-820256d951aa/volumes" Feb 27 16:45:11 crc kubenswrapper[4830]: I0227 16:45:11.762485 4830 scope.go:117] "RemoveContainer" containerID="c4de61c7f48929592fa4fe3911fffe24750acfdd56a4eab75365c7aa2a8b7dfb" Feb 27 16:45:11 crc kubenswrapper[4830]: E0227 16:45:11.763242 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 16:45:26 crc kubenswrapper[4830]: I0227 16:45:26.763652 4830 scope.go:117] "RemoveContainer" containerID="c4de61c7f48929592fa4fe3911fffe24750acfdd56a4eab75365c7aa2a8b7dfb" Feb 27 16:45:26 crc kubenswrapper[4830]: E0227 16:45:26.764694 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 16:45:38 crc kubenswrapper[4830]: I0227 16:45:38.762451 4830 scope.go:117] "RemoveContainer" containerID="c4de61c7f48929592fa4fe3911fffe24750acfdd56a4eab75365c7aa2a8b7dfb" Feb 27 16:45:38 crc kubenswrapper[4830]: E0227 16:45:38.763582 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 16:45:44 crc kubenswrapper[4830]: I0227 16:45:44.873485 4830 scope.go:117] "RemoveContainer" containerID="1d1db6ba2d26bed55d01d495302f772f7446ef48d3dfc1ab5d8cdb0c74fec5ae" Feb 27 16:45:46 crc kubenswrapper[4830]: I0227 16:45:46.624703 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-tpzsj"] Feb 27 16:45:46 crc kubenswrapper[4830]: E0227 16:45:46.625651 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7acf39c-5119-438f-bfca-2aa403a29a4b" containerName="collect-profiles" Feb 27 16:45:46 crc kubenswrapper[4830]: I0227 16:45:46.625674 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7acf39c-5119-438f-bfca-2aa403a29a4b" containerName="collect-profiles" Feb 27 16:45:46 crc kubenswrapper[4830]: I0227 16:45:46.625917 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7acf39c-5119-438f-bfca-2aa403a29a4b" containerName="collect-profiles" Feb 27 16:45:46 crc kubenswrapper[4830]: I0227 16:45:46.627691 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-tpzsj" Feb 27 16:45:46 crc kubenswrapper[4830]: I0227 16:45:46.645044 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tpzsj"] Feb 27 16:45:46 crc kubenswrapper[4830]: I0227 16:45:46.784419 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfjld\" (UniqueName: \"kubernetes.io/projected/7749db70-0e11-4920-8da0-e9626204708b-kube-api-access-jfjld\") pod \"community-operators-tpzsj\" (UID: \"7749db70-0e11-4920-8da0-e9626204708b\") " pod="openshift-marketplace/community-operators-tpzsj" Feb 27 16:45:46 crc kubenswrapper[4830]: I0227 16:45:46.784497 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7749db70-0e11-4920-8da0-e9626204708b-catalog-content\") pod \"community-operators-tpzsj\" (UID: \"7749db70-0e11-4920-8da0-e9626204708b\") " pod="openshift-marketplace/community-operators-tpzsj" Feb 27 16:45:46 crc kubenswrapper[4830]: I0227 16:45:46.784645 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7749db70-0e11-4920-8da0-e9626204708b-utilities\") pod \"community-operators-tpzsj\" (UID: \"7749db70-0e11-4920-8da0-e9626204708b\") " pod="openshift-marketplace/community-operators-tpzsj" Feb 27 16:45:46 crc kubenswrapper[4830]: I0227 16:45:46.886352 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7749db70-0e11-4920-8da0-e9626204708b-utilities\") pod \"community-operators-tpzsj\" (UID: \"7749db70-0e11-4920-8da0-e9626204708b\") " pod="openshift-marketplace/community-operators-tpzsj" Feb 27 16:45:46 crc kubenswrapper[4830]: I0227 16:45:46.886445 4830 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-jfjld\" (UniqueName: \"kubernetes.io/projected/7749db70-0e11-4920-8da0-e9626204708b-kube-api-access-jfjld\") pod \"community-operators-tpzsj\" (UID: \"7749db70-0e11-4920-8da0-e9626204708b\") " pod="openshift-marketplace/community-operators-tpzsj" Feb 27 16:45:46 crc kubenswrapper[4830]: I0227 16:45:46.886485 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7749db70-0e11-4920-8da0-e9626204708b-catalog-content\") pod \"community-operators-tpzsj\" (UID: \"7749db70-0e11-4920-8da0-e9626204708b\") " pod="openshift-marketplace/community-operators-tpzsj" Feb 27 16:45:46 crc kubenswrapper[4830]: I0227 16:45:46.887929 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7749db70-0e11-4920-8da0-e9626204708b-catalog-content\") pod \"community-operators-tpzsj\" (UID: \"7749db70-0e11-4920-8da0-e9626204708b\") " pod="openshift-marketplace/community-operators-tpzsj" Feb 27 16:45:46 crc kubenswrapper[4830]: I0227 16:45:46.888085 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7749db70-0e11-4920-8da0-e9626204708b-utilities\") pod \"community-operators-tpzsj\" (UID: \"7749db70-0e11-4920-8da0-e9626204708b\") " pod="openshift-marketplace/community-operators-tpzsj" Feb 27 16:45:46 crc kubenswrapper[4830]: I0227 16:45:46.914123 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jfjld\" (UniqueName: \"kubernetes.io/projected/7749db70-0e11-4920-8da0-e9626204708b-kube-api-access-jfjld\") pod \"community-operators-tpzsj\" (UID: \"7749db70-0e11-4920-8da0-e9626204708b\") " pod="openshift-marketplace/community-operators-tpzsj" Feb 27 16:45:46 crc kubenswrapper[4830]: I0227 16:45:46.947593 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-tpzsj" Feb 27 16:45:47 crc kubenswrapper[4830]: I0227 16:45:47.430113 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tpzsj"] Feb 27 16:45:48 crc kubenswrapper[4830]: I0227 16:45:48.065483 4830 generic.go:334] "Generic (PLEG): container finished" podID="7749db70-0e11-4920-8da0-e9626204708b" containerID="84c667736d133d66a919daac92a88b364fe75b4b1a503fc7f5dd38a816f5d815" exitCode=0 Feb 27 16:45:48 crc kubenswrapper[4830]: I0227 16:45:48.065597 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tpzsj" event={"ID":"7749db70-0e11-4920-8da0-e9626204708b","Type":"ContainerDied","Data":"84c667736d133d66a919daac92a88b364fe75b4b1a503fc7f5dd38a816f5d815"} Feb 27 16:45:48 crc kubenswrapper[4830]: I0227 16:45:48.065968 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tpzsj" event={"ID":"7749db70-0e11-4920-8da0-e9626204708b","Type":"ContainerStarted","Data":"afc0662fe2a7c516941cf2b086d8fe95de77b9f017f10f2df3938917af6572e3"} Feb 27 16:45:49 crc kubenswrapper[4830]: I0227 16:45:49.763377 4830 scope.go:117] "RemoveContainer" containerID="c4de61c7f48929592fa4fe3911fffe24750acfdd56a4eab75365c7aa2a8b7dfb" Feb 27 16:45:49 crc kubenswrapper[4830]: E0227 16:45:49.763655 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 16:45:50 crc kubenswrapper[4830]: I0227 16:45:50.083645 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tpzsj" 
event={"ID":"7749db70-0e11-4920-8da0-e9626204708b","Type":"ContainerStarted","Data":"dec27d125c6f84d34f659a9d47a976b2c53da135fd16bb054c64b13a17091b4b"} Feb 27 16:45:50 crc kubenswrapper[4830]: I0227 16:45:50.602569 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8jmsb"] Feb 27 16:45:50 crc kubenswrapper[4830]: I0227 16:45:50.605534 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8jmsb" Feb 27 16:45:50 crc kubenswrapper[4830]: I0227 16:45:50.619290 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8jmsb"] Feb 27 16:45:50 crc kubenswrapper[4830]: I0227 16:45:50.743650 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5324b6d2-025b-4377-8af1-462d4220cbb6-utilities\") pod \"certified-operators-8jmsb\" (UID: \"5324b6d2-025b-4377-8af1-462d4220cbb6\") " pod="openshift-marketplace/certified-operators-8jmsb" Feb 27 16:45:50 crc kubenswrapper[4830]: I0227 16:45:50.743733 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5324b6d2-025b-4377-8af1-462d4220cbb6-catalog-content\") pod \"certified-operators-8jmsb\" (UID: \"5324b6d2-025b-4377-8af1-462d4220cbb6\") " pod="openshift-marketplace/certified-operators-8jmsb" Feb 27 16:45:50 crc kubenswrapper[4830]: I0227 16:45:50.743823 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5kqf\" (UniqueName: \"kubernetes.io/projected/5324b6d2-025b-4377-8af1-462d4220cbb6-kube-api-access-n5kqf\") pod \"certified-operators-8jmsb\" (UID: \"5324b6d2-025b-4377-8af1-462d4220cbb6\") " pod="openshift-marketplace/certified-operators-8jmsb" Feb 27 16:45:50 crc kubenswrapper[4830]: I0227 16:45:50.845541 
4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5324b6d2-025b-4377-8af1-462d4220cbb6-catalog-content\") pod \"certified-operators-8jmsb\" (UID: \"5324b6d2-025b-4377-8af1-462d4220cbb6\") " pod="openshift-marketplace/certified-operators-8jmsb" Feb 27 16:45:50 crc kubenswrapper[4830]: I0227 16:45:50.845632 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n5kqf\" (UniqueName: \"kubernetes.io/projected/5324b6d2-025b-4377-8af1-462d4220cbb6-kube-api-access-n5kqf\") pod \"certified-operators-8jmsb\" (UID: \"5324b6d2-025b-4377-8af1-462d4220cbb6\") " pod="openshift-marketplace/certified-operators-8jmsb" Feb 27 16:45:50 crc kubenswrapper[4830]: I0227 16:45:50.845825 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5324b6d2-025b-4377-8af1-462d4220cbb6-utilities\") pod \"certified-operators-8jmsb\" (UID: \"5324b6d2-025b-4377-8af1-462d4220cbb6\") " pod="openshift-marketplace/certified-operators-8jmsb" Feb 27 16:45:50 crc kubenswrapper[4830]: I0227 16:45:50.846516 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5324b6d2-025b-4377-8af1-462d4220cbb6-catalog-content\") pod \"certified-operators-8jmsb\" (UID: \"5324b6d2-025b-4377-8af1-462d4220cbb6\") " pod="openshift-marketplace/certified-operators-8jmsb" Feb 27 16:45:50 crc kubenswrapper[4830]: I0227 16:45:50.846802 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5324b6d2-025b-4377-8af1-462d4220cbb6-utilities\") pod \"certified-operators-8jmsb\" (UID: \"5324b6d2-025b-4377-8af1-462d4220cbb6\") " pod="openshift-marketplace/certified-operators-8jmsb" Feb 27 16:45:50 crc kubenswrapper[4830]: I0227 16:45:50.873934 4830 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-n5kqf\" (UniqueName: \"kubernetes.io/projected/5324b6d2-025b-4377-8af1-462d4220cbb6-kube-api-access-n5kqf\") pod \"certified-operators-8jmsb\" (UID: \"5324b6d2-025b-4377-8af1-462d4220cbb6\") " pod="openshift-marketplace/certified-operators-8jmsb" Feb 27 16:45:50 crc kubenswrapper[4830]: I0227 16:45:50.924043 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8jmsb" Feb 27 16:45:51 crc kubenswrapper[4830]: I0227 16:45:51.097495 4830 generic.go:334] "Generic (PLEG): container finished" podID="7749db70-0e11-4920-8da0-e9626204708b" containerID="dec27d125c6f84d34f659a9d47a976b2c53da135fd16bb054c64b13a17091b4b" exitCode=0 Feb 27 16:45:51 crc kubenswrapper[4830]: I0227 16:45:51.097541 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tpzsj" event={"ID":"7749db70-0e11-4920-8da0-e9626204708b","Type":"ContainerDied","Data":"dec27d125c6f84d34f659a9d47a976b2c53da135fd16bb054c64b13a17091b4b"} Feb 27 16:45:51 crc kubenswrapper[4830]: I0227 16:45:51.408197 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8jmsb"] Feb 27 16:45:51 crc kubenswrapper[4830]: W0227 16:45:51.413714 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5324b6d2_025b_4377_8af1_462d4220cbb6.slice/crio-df9c514fe96184c380b0599e8a2468692b6f49a38f22f88b2eae3412c638000a WatchSource:0}: Error finding container df9c514fe96184c380b0599e8a2468692b6f49a38f22f88b2eae3412c638000a: Status 404 returned error can't find the container with id df9c514fe96184c380b0599e8a2468692b6f49a38f22f88b2eae3412c638000a Feb 27 16:45:52 crc kubenswrapper[4830]: I0227 16:45:52.105747 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tpzsj" 
event={"ID":"7749db70-0e11-4920-8da0-e9626204708b","Type":"ContainerStarted","Data":"81bd59198af4e06a502363ab2d767b21f8cbbe96b79e6064ffa6589b1d5ef024"} Feb 27 16:45:52 crc kubenswrapper[4830]: I0227 16:45:52.107234 4830 generic.go:334] "Generic (PLEG): container finished" podID="5324b6d2-025b-4377-8af1-462d4220cbb6" containerID="2981052b786f1ce241feafb23b4aafdbeec4687b153675f719200cfca69e3e99" exitCode=0 Feb 27 16:45:52 crc kubenswrapper[4830]: I0227 16:45:52.107262 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8jmsb" event={"ID":"5324b6d2-025b-4377-8af1-462d4220cbb6","Type":"ContainerDied","Data":"2981052b786f1ce241feafb23b4aafdbeec4687b153675f719200cfca69e3e99"} Feb 27 16:45:52 crc kubenswrapper[4830]: I0227 16:45:52.107277 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8jmsb" event={"ID":"5324b6d2-025b-4377-8af1-462d4220cbb6","Type":"ContainerStarted","Data":"df9c514fe96184c380b0599e8a2468692b6f49a38f22f88b2eae3412c638000a"} Feb 27 16:45:52 crc kubenswrapper[4830]: I0227 16:45:52.129120 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-tpzsj" podStartSLOduration=2.697807473 podStartE2EDuration="6.129100838s" podCreationTimestamp="2026-02-27 16:45:46 +0000 UTC" firstStartedPulling="2026-02-27 16:45:48.067885411 +0000 UTC m=+2344.157157914" lastFinishedPulling="2026-02-27 16:45:51.499178826 +0000 UTC m=+2347.588451279" observedRunningTime="2026-02-27 16:45:52.12752876 +0000 UTC m=+2348.216801223" watchObservedRunningTime="2026-02-27 16:45:52.129100838 +0000 UTC m=+2348.218373301" Feb 27 16:45:53 crc kubenswrapper[4830]: I0227 16:45:53.119045 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8jmsb" 
event={"ID":"5324b6d2-025b-4377-8af1-462d4220cbb6","Type":"ContainerStarted","Data":"0e98f09f089f8fa8ca349a278d84503c864c2e086e428fafc942d11fcfcc7a75"} Feb 27 16:45:54 crc kubenswrapper[4830]: I0227 16:45:54.129521 4830 generic.go:334] "Generic (PLEG): container finished" podID="5324b6d2-025b-4377-8af1-462d4220cbb6" containerID="0e98f09f089f8fa8ca349a278d84503c864c2e086e428fafc942d11fcfcc7a75" exitCode=0 Feb 27 16:45:54 crc kubenswrapper[4830]: I0227 16:45:54.129595 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8jmsb" event={"ID":"5324b6d2-025b-4377-8af1-462d4220cbb6","Type":"ContainerDied","Data":"0e98f09f089f8fa8ca349a278d84503c864c2e086e428fafc942d11fcfcc7a75"} Feb 27 16:45:55 crc kubenswrapper[4830]: I0227 16:45:55.143493 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8jmsb" event={"ID":"5324b6d2-025b-4377-8af1-462d4220cbb6","Type":"ContainerStarted","Data":"e7dd97d3cce3932badfe66d47465d990e5941041c2e21c5412b1d1a610e30245"} Feb 27 16:45:55 crc kubenswrapper[4830]: I0227 16:45:55.173900 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8jmsb" podStartSLOduration=2.73746364 podStartE2EDuration="5.173867186s" podCreationTimestamp="2026-02-27 16:45:50 +0000 UTC" firstStartedPulling="2026-02-27 16:45:52.10848748 +0000 UTC m=+2348.197759933" lastFinishedPulling="2026-02-27 16:45:54.544890986 +0000 UTC m=+2350.634163479" observedRunningTime="2026-02-27 16:45:55.170608195 +0000 UTC m=+2351.259880688" watchObservedRunningTime="2026-02-27 16:45:55.173867186 +0000 UTC m=+2351.263139689" Feb 27 16:45:56 crc kubenswrapper[4830]: I0227 16:45:56.948518 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-tpzsj" Feb 27 16:45:56 crc kubenswrapper[4830]: I0227 16:45:56.948596 4830 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/community-operators-tpzsj" Feb 27 16:45:57 crc kubenswrapper[4830]: I0227 16:45:57.012072 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-tpzsj" Feb 27 16:45:57 crc kubenswrapper[4830]: I0227 16:45:57.198867 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-tpzsj" Feb 27 16:45:58 crc kubenswrapper[4830]: I0227 16:45:58.194541 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tpzsj"] Feb 27 16:45:59 crc kubenswrapper[4830]: I0227 16:45:59.176839 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-tpzsj" podUID="7749db70-0e11-4920-8da0-e9626204708b" containerName="registry-server" containerID="cri-o://81bd59198af4e06a502363ab2d767b21f8cbbe96b79e6064ffa6589b1d5ef024" gracePeriod=2 Feb 27 16:45:59 crc kubenswrapper[4830]: I0227 16:45:59.583647 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-tpzsj" Feb 27 16:45:59 crc kubenswrapper[4830]: I0227 16:45:59.690209 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7749db70-0e11-4920-8da0-e9626204708b-utilities\") pod \"7749db70-0e11-4920-8da0-e9626204708b\" (UID: \"7749db70-0e11-4920-8da0-e9626204708b\") " Feb 27 16:45:59 crc kubenswrapper[4830]: I0227 16:45:59.690299 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jfjld\" (UniqueName: \"kubernetes.io/projected/7749db70-0e11-4920-8da0-e9626204708b-kube-api-access-jfjld\") pod \"7749db70-0e11-4920-8da0-e9626204708b\" (UID: \"7749db70-0e11-4920-8da0-e9626204708b\") " Feb 27 16:45:59 crc kubenswrapper[4830]: I0227 16:45:59.690391 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7749db70-0e11-4920-8da0-e9626204708b-catalog-content\") pod \"7749db70-0e11-4920-8da0-e9626204708b\" (UID: \"7749db70-0e11-4920-8da0-e9626204708b\") " Feb 27 16:45:59 crc kubenswrapper[4830]: I0227 16:45:59.691635 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7749db70-0e11-4920-8da0-e9626204708b-utilities" (OuterVolumeSpecName: "utilities") pod "7749db70-0e11-4920-8da0-e9626204708b" (UID: "7749db70-0e11-4920-8da0-e9626204708b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 16:45:59 crc kubenswrapper[4830]: I0227 16:45:59.702912 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7749db70-0e11-4920-8da0-e9626204708b-kube-api-access-jfjld" (OuterVolumeSpecName: "kube-api-access-jfjld") pod "7749db70-0e11-4920-8da0-e9626204708b" (UID: "7749db70-0e11-4920-8da0-e9626204708b"). InnerVolumeSpecName "kube-api-access-jfjld". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:45:59 crc kubenswrapper[4830]: I0227 16:45:59.759853 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7749db70-0e11-4920-8da0-e9626204708b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7749db70-0e11-4920-8da0-e9626204708b" (UID: "7749db70-0e11-4920-8da0-e9626204708b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 16:45:59 crc kubenswrapper[4830]: I0227 16:45:59.791801 4830 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7749db70-0e11-4920-8da0-e9626204708b-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 16:45:59 crc kubenswrapper[4830]: I0227 16:45:59.791842 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jfjld\" (UniqueName: \"kubernetes.io/projected/7749db70-0e11-4920-8da0-e9626204708b-kube-api-access-jfjld\") on node \"crc\" DevicePath \"\"" Feb 27 16:45:59 crc kubenswrapper[4830]: I0227 16:45:59.791852 4830 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7749db70-0e11-4920-8da0-e9626204708b-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 16:46:00 crc kubenswrapper[4830]: I0227 16:46:00.159819 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536846-fjbwl"] Feb 27 16:46:00 crc kubenswrapper[4830]: E0227 16:46:00.160387 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7749db70-0e11-4920-8da0-e9626204708b" containerName="extract-content" Feb 27 16:46:00 crc kubenswrapper[4830]: I0227 16:46:00.160414 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="7749db70-0e11-4920-8da0-e9626204708b" containerName="extract-content" Feb 27 16:46:00 crc kubenswrapper[4830]: E0227 16:46:00.160454 4830 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="7749db70-0e11-4920-8da0-e9626204708b" containerName="extract-utilities" Feb 27 16:46:00 crc kubenswrapper[4830]: I0227 16:46:00.160472 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="7749db70-0e11-4920-8da0-e9626204708b" containerName="extract-utilities" Feb 27 16:46:00 crc kubenswrapper[4830]: E0227 16:46:00.160510 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7749db70-0e11-4920-8da0-e9626204708b" containerName="registry-server" Feb 27 16:46:00 crc kubenswrapper[4830]: I0227 16:46:00.160526 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="7749db70-0e11-4920-8da0-e9626204708b" containerName="registry-server" Feb 27 16:46:00 crc kubenswrapper[4830]: I0227 16:46:00.160782 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="7749db70-0e11-4920-8da0-e9626204708b" containerName="registry-server" Feb 27 16:46:00 crc kubenswrapper[4830]: I0227 16:46:00.161561 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536846-fjbwl" Feb 27 16:46:00 crc kubenswrapper[4830]: I0227 16:46:00.172054 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536846-fjbwl"] Feb 27 16:46:00 crc kubenswrapper[4830]: I0227 16:46:00.172302 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 16:46:00 crc kubenswrapper[4830]: I0227 16:46:00.172350 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lmbtm" Feb 27 16:46:00 crc kubenswrapper[4830]: I0227 16:46:00.173048 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 16:46:00 crc kubenswrapper[4830]: I0227 16:46:00.197815 4830 generic.go:334] "Generic (PLEG): container finished" podID="7749db70-0e11-4920-8da0-e9626204708b" 
containerID="81bd59198af4e06a502363ab2d767b21f8cbbe96b79e6064ffa6589b1d5ef024" exitCode=0 Feb 27 16:46:00 crc kubenswrapper[4830]: I0227 16:46:00.197857 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tpzsj" event={"ID":"7749db70-0e11-4920-8da0-e9626204708b","Type":"ContainerDied","Data":"81bd59198af4e06a502363ab2d767b21f8cbbe96b79e6064ffa6589b1d5ef024"} Feb 27 16:46:00 crc kubenswrapper[4830]: I0227 16:46:00.197930 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tpzsj" event={"ID":"7749db70-0e11-4920-8da0-e9626204708b","Type":"ContainerDied","Data":"afc0662fe2a7c516941cf2b086d8fe95de77b9f017f10f2df3938917af6572e3"} Feb 27 16:46:00 crc kubenswrapper[4830]: I0227 16:46:00.197965 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tpzsj" Feb 27 16:46:00 crc kubenswrapper[4830]: I0227 16:46:00.198185 4830 scope.go:117] "RemoveContainer" containerID="81bd59198af4e06a502363ab2d767b21f8cbbe96b79e6064ffa6589b1d5ef024" Feb 27 16:46:00 crc kubenswrapper[4830]: I0227 16:46:00.220818 4830 scope.go:117] "RemoveContainer" containerID="dec27d125c6f84d34f659a9d47a976b2c53da135fd16bb054c64b13a17091b4b" Feb 27 16:46:00 crc kubenswrapper[4830]: I0227 16:46:00.235696 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tpzsj"] Feb 27 16:46:00 crc kubenswrapper[4830]: I0227 16:46:00.242674 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-tpzsj"] Feb 27 16:46:00 crc kubenswrapper[4830]: I0227 16:46:00.260576 4830 scope.go:117] "RemoveContainer" containerID="84c667736d133d66a919daac92a88b364fe75b4b1a503fc7f5dd38a816f5d815" Feb 27 16:46:00 crc kubenswrapper[4830]: I0227 16:46:00.289683 4830 scope.go:117] "RemoveContainer" containerID="81bd59198af4e06a502363ab2d767b21f8cbbe96b79e6064ffa6589b1d5ef024" Feb 27 
16:46:00 crc kubenswrapper[4830]: E0227 16:46:00.290177 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"81bd59198af4e06a502363ab2d767b21f8cbbe96b79e6064ffa6589b1d5ef024\": container with ID starting with 81bd59198af4e06a502363ab2d767b21f8cbbe96b79e6064ffa6589b1d5ef024 not found: ID does not exist" containerID="81bd59198af4e06a502363ab2d767b21f8cbbe96b79e6064ffa6589b1d5ef024" Feb 27 16:46:00 crc kubenswrapper[4830]: I0227 16:46:00.290222 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81bd59198af4e06a502363ab2d767b21f8cbbe96b79e6064ffa6589b1d5ef024"} err="failed to get container status \"81bd59198af4e06a502363ab2d767b21f8cbbe96b79e6064ffa6589b1d5ef024\": rpc error: code = NotFound desc = could not find container \"81bd59198af4e06a502363ab2d767b21f8cbbe96b79e6064ffa6589b1d5ef024\": container with ID starting with 81bd59198af4e06a502363ab2d767b21f8cbbe96b79e6064ffa6589b1d5ef024 not found: ID does not exist" Feb 27 16:46:00 crc kubenswrapper[4830]: I0227 16:46:00.290243 4830 scope.go:117] "RemoveContainer" containerID="dec27d125c6f84d34f659a9d47a976b2c53da135fd16bb054c64b13a17091b4b" Feb 27 16:46:00 crc kubenswrapper[4830]: E0227 16:46:00.290529 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dec27d125c6f84d34f659a9d47a976b2c53da135fd16bb054c64b13a17091b4b\": container with ID starting with dec27d125c6f84d34f659a9d47a976b2c53da135fd16bb054c64b13a17091b4b not found: ID does not exist" containerID="dec27d125c6f84d34f659a9d47a976b2c53da135fd16bb054c64b13a17091b4b" Feb 27 16:46:00 crc kubenswrapper[4830]: I0227 16:46:00.290552 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dec27d125c6f84d34f659a9d47a976b2c53da135fd16bb054c64b13a17091b4b"} err="failed to get container status 
\"dec27d125c6f84d34f659a9d47a976b2c53da135fd16bb054c64b13a17091b4b\": rpc error: code = NotFound desc = could not find container \"dec27d125c6f84d34f659a9d47a976b2c53da135fd16bb054c64b13a17091b4b\": container with ID starting with dec27d125c6f84d34f659a9d47a976b2c53da135fd16bb054c64b13a17091b4b not found: ID does not exist" Feb 27 16:46:00 crc kubenswrapper[4830]: I0227 16:46:00.290574 4830 scope.go:117] "RemoveContainer" containerID="84c667736d133d66a919daac92a88b364fe75b4b1a503fc7f5dd38a816f5d815" Feb 27 16:46:00 crc kubenswrapper[4830]: E0227 16:46:00.290816 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84c667736d133d66a919daac92a88b364fe75b4b1a503fc7f5dd38a816f5d815\": container with ID starting with 84c667736d133d66a919daac92a88b364fe75b4b1a503fc7f5dd38a816f5d815 not found: ID does not exist" containerID="84c667736d133d66a919daac92a88b364fe75b4b1a503fc7f5dd38a816f5d815" Feb 27 16:46:00 crc kubenswrapper[4830]: I0227 16:46:00.290854 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84c667736d133d66a919daac92a88b364fe75b4b1a503fc7f5dd38a816f5d815"} err="failed to get container status \"84c667736d133d66a919daac92a88b364fe75b4b1a503fc7f5dd38a816f5d815\": rpc error: code = NotFound desc = could not find container \"84c667736d133d66a919daac92a88b364fe75b4b1a503fc7f5dd38a816f5d815\": container with ID starting with 84c667736d133d66a919daac92a88b364fe75b4b1a503fc7f5dd38a816f5d815 not found: ID does not exist" Feb 27 16:46:00 crc kubenswrapper[4830]: I0227 16:46:00.298800 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwbhw\" (UniqueName: \"kubernetes.io/projected/80e2b3eb-fbf9-41df-b723-3b5a4271d33f-kube-api-access-vwbhw\") pod \"auto-csr-approver-29536846-fjbwl\" (UID: \"80e2b3eb-fbf9-41df-b723-3b5a4271d33f\") " pod="openshift-infra/auto-csr-approver-29536846-fjbwl" 
Feb 27 16:46:00 crc kubenswrapper[4830]: I0227 16:46:00.400657 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vwbhw\" (UniqueName: \"kubernetes.io/projected/80e2b3eb-fbf9-41df-b723-3b5a4271d33f-kube-api-access-vwbhw\") pod \"auto-csr-approver-29536846-fjbwl\" (UID: \"80e2b3eb-fbf9-41df-b723-3b5a4271d33f\") " pod="openshift-infra/auto-csr-approver-29536846-fjbwl" Feb 27 16:46:00 crc kubenswrapper[4830]: I0227 16:46:00.422984 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwbhw\" (UniqueName: \"kubernetes.io/projected/80e2b3eb-fbf9-41df-b723-3b5a4271d33f-kube-api-access-vwbhw\") pod \"auto-csr-approver-29536846-fjbwl\" (UID: \"80e2b3eb-fbf9-41df-b723-3b5a4271d33f\") " pod="openshift-infra/auto-csr-approver-29536846-fjbwl" Feb 27 16:46:00 crc kubenswrapper[4830]: I0227 16:46:00.495852 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536846-fjbwl" Feb 27 16:46:00 crc kubenswrapper[4830]: I0227 16:46:00.774874 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7749db70-0e11-4920-8da0-e9626204708b" path="/var/lib/kubelet/pods/7749db70-0e11-4920-8da0-e9626204708b/volumes" Feb 27 16:46:00 crc kubenswrapper[4830]: I0227 16:46:00.925231 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8jmsb" Feb 27 16:46:00 crc kubenswrapper[4830]: I0227 16:46:00.925273 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8jmsb" Feb 27 16:46:00 crc kubenswrapper[4830]: I0227 16:46:00.977885 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536846-fjbwl"] Feb 27 16:46:00 crc kubenswrapper[4830]: W0227 16:46:00.985736 4830 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod80e2b3eb_fbf9_41df_b723_3b5a4271d33f.slice/crio-68ebca0e70458c535cee9f43e8d9e710f2f40aa3240269379d77f48524729118 WatchSource:0}: Error finding container 68ebca0e70458c535cee9f43e8d9e710f2f40aa3240269379d77f48524729118: Status 404 returned error can't find the container with id 68ebca0e70458c535cee9f43e8d9e710f2f40aa3240269379d77f48524729118 Feb 27 16:46:01 crc kubenswrapper[4830]: I0227 16:46:01.007869 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8jmsb" Feb 27 16:46:01 crc kubenswrapper[4830]: I0227 16:46:01.208571 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536846-fjbwl" event={"ID":"80e2b3eb-fbf9-41df-b723-3b5a4271d33f","Type":"ContainerStarted","Data":"68ebca0e70458c535cee9f43e8d9e710f2f40aa3240269379d77f48524729118"} Feb 27 16:46:01 crc kubenswrapper[4830]: I0227 16:46:01.280877 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8jmsb" Feb 27 16:46:02 crc kubenswrapper[4830]: I0227 16:46:02.584834 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8jmsb"] Feb 27 16:46:03 crc kubenswrapper[4830]: I0227 16:46:03.232018 4830 generic.go:334] "Generic (PLEG): container finished" podID="80e2b3eb-fbf9-41df-b723-3b5a4271d33f" containerID="72876424e4b21fe42899f517b622ca58b66978a43e1c64a1c5e4556b11ad4e13" exitCode=0 Feb 27 16:46:03 crc kubenswrapper[4830]: I0227 16:46:03.232114 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536846-fjbwl" event={"ID":"80e2b3eb-fbf9-41df-b723-3b5a4271d33f","Type":"ContainerDied","Data":"72876424e4b21fe42899f517b622ca58b66978a43e1c64a1c5e4556b11ad4e13"} Feb 27 16:46:03 crc kubenswrapper[4830]: I0227 16:46:03.232420 4830 kuberuntime_container.go:808] "Killing container with a 
grace period" pod="openshift-marketplace/certified-operators-8jmsb" podUID="5324b6d2-025b-4377-8af1-462d4220cbb6" containerName="registry-server" containerID="cri-o://e7dd97d3cce3932badfe66d47465d990e5941041c2e21c5412b1d1a610e30245" gracePeriod=2 Feb 27 16:46:03 crc kubenswrapper[4830]: I0227 16:46:03.704608 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8jmsb" Feb 27 16:46:03 crc kubenswrapper[4830]: I0227 16:46:03.856781 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5324b6d2-025b-4377-8af1-462d4220cbb6-utilities\") pod \"5324b6d2-025b-4377-8af1-462d4220cbb6\" (UID: \"5324b6d2-025b-4377-8af1-462d4220cbb6\") " Feb 27 16:46:03 crc kubenswrapper[4830]: I0227 16:46:03.856912 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n5kqf\" (UniqueName: \"kubernetes.io/projected/5324b6d2-025b-4377-8af1-462d4220cbb6-kube-api-access-n5kqf\") pod \"5324b6d2-025b-4377-8af1-462d4220cbb6\" (UID: \"5324b6d2-025b-4377-8af1-462d4220cbb6\") " Feb 27 16:46:03 crc kubenswrapper[4830]: I0227 16:46:03.856995 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5324b6d2-025b-4377-8af1-462d4220cbb6-catalog-content\") pod \"5324b6d2-025b-4377-8af1-462d4220cbb6\" (UID: \"5324b6d2-025b-4377-8af1-462d4220cbb6\") " Feb 27 16:46:03 crc kubenswrapper[4830]: I0227 16:46:03.857883 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5324b6d2-025b-4377-8af1-462d4220cbb6-utilities" (OuterVolumeSpecName: "utilities") pod "5324b6d2-025b-4377-8af1-462d4220cbb6" (UID: "5324b6d2-025b-4377-8af1-462d4220cbb6"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 16:46:03 crc kubenswrapper[4830]: I0227 16:46:03.865227 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5324b6d2-025b-4377-8af1-462d4220cbb6-kube-api-access-n5kqf" (OuterVolumeSpecName: "kube-api-access-n5kqf") pod "5324b6d2-025b-4377-8af1-462d4220cbb6" (UID: "5324b6d2-025b-4377-8af1-462d4220cbb6"). InnerVolumeSpecName "kube-api-access-n5kqf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:46:03 crc kubenswrapper[4830]: I0227 16:46:03.959286 4830 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5324b6d2-025b-4377-8af1-462d4220cbb6-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 16:46:03 crc kubenswrapper[4830]: I0227 16:46:03.959342 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n5kqf\" (UniqueName: \"kubernetes.io/projected/5324b6d2-025b-4377-8af1-462d4220cbb6-kube-api-access-n5kqf\") on node \"crc\" DevicePath \"\"" Feb 27 16:46:04 crc kubenswrapper[4830]: I0227 16:46:04.246558 4830 generic.go:334] "Generic (PLEG): container finished" podID="5324b6d2-025b-4377-8af1-462d4220cbb6" containerID="e7dd97d3cce3932badfe66d47465d990e5941041c2e21c5412b1d1a610e30245" exitCode=0 Feb 27 16:46:04 crc kubenswrapper[4830]: I0227 16:46:04.246876 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8jmsb" event={"ID":"5324b6d2-025b-4377-8af1-462d4220cbb6","Type":"ContainerDied","Data":"e7dd97d3cce3932badfe66d47465d990e5941041c2e21c5412b1d1a610e30245"} Feb 27 16:46:04 crc kubenswrapper[4830]: I0227 16:46:04.247010 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8jmsb" event={"ID":"5324b6d2-025b-4377-8af1-462d4220cbb6","Type":"ContainerDied","Data":"df9c514fe96184c380b0599e8a2468692b6f49a38f22f88b2eae3412c638000a"} Feb 27 16:46:04 crc kubenswrapper[4830]: 
I0227 16:46:04.247054 4830 scope.go:117] "RemoveContainer" containerID="e7dd97d3cce3932badfe66d47465d990e5941041c2e21c5412b1d1a610e30245" Feb 27 16:46:04 crc kubenswrapper[4830]: I0227 16:46:04.247663 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8jmsb" Feb 27 16:46:04 crc kubenswrapper[4830]: I0227 16:46:04.285776 4830 scope.go:117] "RemoveContainer" containerID="0e98f09f089f8fa8ca349a278d84503c864c2e086e428fafc942d11fcfcc7a75" Feb 27 16:46:04 crc kubenswrapper[4830]: I0227 16:46:04.324953 4830 scope.go:117] "RemoveContainer" containerID="2981052b786f1ce241feafb23b4aafdbeec4687b153675f719200cfca69e3e99" Feb 27 16:46:04 crc kubenswrapper[4830]: I0227 16:46:04.358031 4830 scope.go:117] "RemoveContainer" containerID="e7dd97d3cce3932badfe66d47465d990e5941041c2e21c5412b1d1a610e30245" Feb 27 16:46:04 crc kubenswrapper[4830]: E0227 16:46:04.360459 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e7dd97d3cce3932badfe66d47465d990e5941041c2e21c5412b1d1a610e30245\": container with ID starting with e7dd97d3cce3932badfe66d47465d990e5941041c2e21c5412b1d1a610e30245 not found: ID does not exist" containerID="e7dd97d3cce3932badfe66d47465d990e5941041c2e21c5412b1d1a610e30245" Feb 27 16:46:04 crc kubenswrapper[4830]: I0227 16:46:04.360521 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7dd97d3cce3932badfe66d47465d990e5941041c2e21c5412b1d1a610e30245"} err="failed to get container status \"e7dd97d3cce3932badfe66d47465d990e5941041c2e21c5412b1d1a610e30245\": rpc error: code = NotFound desc = could not find container \"e7dd97d3cce3932badfe66d47465d990e5941041c2e21c5412b1d1a610e30245\": container with ID starting with e7dd97d3cce3932badfe66d47465d990e5941041c2e21c5412b1d1a610e30245 not found: ID does not exist" Feb 27 16:46:04 crc kubenswrapper[4830]: I0227 16:46:04.360562 4830 
scope.go:117] "RemoveContainer" containerID="0e98f09f089f8fa8ca349a278d84503c864c2e086e428fafc942d11fcfcc7a75" Feb 27 16:46:04 crc kubenswrapper[4830]: E0227 16:46:04.361142 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e98f09f089f8fa8ca349a278d84503c864c2e086e428fafc942d11fcfcc7a75\": container with ID starting with 0e98f09f089f8fa8ca349a278d84503c864c2e086e428fafc942d11fcfcc7a75 not found: ID does not exist" containerID="0e98f09f089f8fa8ca349a278d84503c864c2e086e428fafc942d11fcfcc7a75" Feb 27 16:46:04 crc kubenswrapper[4830]: I0227 16:46:04.361196 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e98f09f089f8fa8ca349a278d84503c864c2e086e428fafc942d11fcfcc7a75"} err="failed to get container status \"0e98f09f089f8fa8ca349a278d84503c864c2e086e428fafc942d11fcfcc7a75\": rpc error: code = NotFound desc = could not find container \"0e98f09f089f8fa8ca349a278d84503c864c2e086e428fafc942d11fcfcc7a75\": container with ID starting with 0e98f09f089f8fa8ca349a278d84503c864c2e086e428fafc942d11fcfcc7a75 not found: ID does not exist" Feb 27 16:46:04 crc kubenswrapper[4830]: I0227 16:46:04.361226 4830 scope.go:117] "RemoveContainer" containerID="2981052b786f1ce241feafb23b4aafdbeec4687b153675f719200cfca69e3e99" Feb 27 16:46:04 crc kubenswrapper[4830]: E0227 16:46:04.361610 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2981052b786f1ce241feafb23b4aafdbeec4687b153675f719200cfca69e3e99\": container with ID starting with 2981052b786f1ce241feafb23b4aafdbeec4687b153675f719200cfca69e3e99 not found: ID does not exist" containerID="2981052b786f1ce241feafb23b4aafdbeec4687b153675f719200cfca69e3e99" Feb 27 16:46:04 crc kubenswrapper[4830]: I0227 16:46:04.361655 4830 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"2981052b786f1ce241feafb23b4aafdbeec4687b153675f719200cfca69e3e99"} err="failed to get container status \"2981052b786f1ce241feafb23b4aafdbeec4687b153675f719200cfca69e3e99\": rpc error: code = NotFound desc = could not find container \"2981052b786f1ce241feafb23b4aafdbeec4687b153675f719200cfca69e3e99\": container with ID starting with 2981052b786f1ce241feafb23b4aafdbeec4687b153675f719200cfca69e3e99 not found: ID does not exist" Feb 27 16:46:04 crc kubenswrapper[4830]: I0227 16:46:04.578074 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536846-fjbwl" Feb 27 16:46:04 crc kubenswrapper[4830]: I0227 16:46:04.668871 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vwbhw\" (UniqueName: \"kubernetes.io/projected/80e2b3eb-fbf9-41df-b723-3b5a4271d33f-kube-api-access-vwbhw\") pod \"80e2b3eb-fbf9-41df-b723-3b5a4271d33f\" (UID: \"80e2b3eb-fbf9-41df-b723-3b5a4271d33f\") " Feb 27 16:46:04 crc kubenswrapper[4830]: I0227 16:46:04.674703 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80e2b3eb-fbf9-41df-b723-3b5a4271d33f-kube-api-access-vwbhw" (OuterVolumeSpecName: "kube-api-access-vwbhw") pod "80e2b3eb-fbf9-41df-b723-3b5a4271d33f" (UID: "80e2b3eb-fbf9-41df-b723-3b5a4271d33f"). InnerVolumeSpecName "kube-api-access-vwbhw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:46:04 crc kubenswrapper[4830]: I0227 16:46:04.767578 4830 scope.go:117] "RemoveContainer" containerID="c4de61c7f48929592fa4fe3911fffe24750acfdd56a4eab75365c7aa2a8b7dfb" Feb 27 16:46:04 crc kubenswrapper[4830]: E0227 16:46:04.767999 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 16:46:04 crc kubenswrapper[4830]: I0227 16:46:04.774935 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vwbhw\" (UniqueName: \"kubernetes.io/projected/80e2b3eb-fbf9-41df-b723-3b5a4271d33f-kube-api-access-vwbhw\") on node \"crc\" DevicePath \"\"" Feb 27 16:46:04 crc kubenswrapper[4830]: I0227 16:46:04.859775 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5324b6d2-025b-4377-8af1-462d4220cbb6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5324b6d2-025b-4377-8af1-462d4220cbb6" (UID: "5324b6d2-025b-4377-8af1-462d4220cbb6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 16:46:04 crc kubenswrapper[4830]: I0227 16:46:04.877362 4830 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5324b6d2-025b-4377-8af1-462d4220cbb6-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 16:46:05 crc kubenswrapper[4830]: I0227 16:46:05.197151 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8jmsb"] Feb 27 16:46:05 crc kubenswrapper[4830]: I0227 16:46:05.202727 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-8jmsb"] Feb 27 16:46:05 crc kubenswrapper[4830]: I0227 16:46:05.259051 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536846-fjbwl" Feb 27 16:46:05 crc kubenswrapper[4830]: I0227 16:46:05.259010 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536846-fjbwl" event={"ID":"80e2b3eb-fbf9-41df-b723-3b5a4271d33f","Type":"ContainerDied","Data":"68ebca0e70458c535cee9f43e8d9e710f2f40aa3240269379d77f48524729118"} Feb 27 16:46:05 crc kubenswrapper[4830]: I0227 16:46:05.259230 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="68ebca0e70458c535cee9f43e8d9e710f2f40aa3240269379d77f48524729118" Feb 27 16:46:05 crc kubenswrapper[4830]: I0227 16:46:05.677706 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536840-l2pjd"] Feb 27 16:46:05 crc kubenswrapper[4830]: I0227 16:46:05.684339 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536840-l2pjd"] Feb 27 16:46:06 crc kubenswrapper[4830]: I0227 16:46:06.778878 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5324b6d2-025b-4377-8af1-462d4220cbb6" 
path="/var/lib/kubelet/pods/5324b6d2-025b-4377-8af1-462d4220cbb6/volumes" Feb 27 16:46:06 crc kubenswrapper[4830]: I0227 16:46:06.780678 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80f0ba68-90f8-401e-b07c-7a110ebbcdd8" path="/var/lib/kubelet/pods/80f0ba68-90f8-401e-b07c-7a110ebbcdd8/volumes" Feb 27 16:46:18 crc kubenswrapper[4830]: I0227 16:46:18.763277 4830 scope.go:117] "RemoveContainer" containerID="c4de61c7f48929592fa4fe3911fffe24750acfdd56a4eab75365c7aa2a8b7dfb" Feb 27 16:46:18 crc kubenswrapper[4830]: E0227 16:46:18.764218 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 16:46:30 crc kubenswrapper[4830]: I0227 16:46:30.762774 4830 scope.go:117] "RemoveContainer" containerID="c4de61c7f48929592fa4fe3911fffe24750acfdd56a4eab75365c7aa2a8b7dfb" Feb 27 16:46:30 crc kubenswrapper[4830]: E0227 16:46:30.763824 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 16:46:42 crc kubenswrapper[4830]: I0227 16:46:42.764263 4830 scope.go:117] "RemoveContainer" containerID="c4de61c7f48929592fa4fe3911fffe24750acfdd56a4eab75365c7aa2a8b7dfb" Feb 27 16:46:42 crc kubenswrapper[4830]: E0227 16:46:42.765526 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" 
for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 16:46:44 crc kubenswrapper[4830]: I0227 16:46:44.940143 4830 scope.go:117] "RemoveContainer" containerID="a4bc264d1a03d587270a70e7a6343495af75bdb1492c3a935f7fb76e7c176ddb" Feb 27 16:46:57 crc kubenswrapper[4830]: I0227 16:46:57.764124 4830 scope.go:117] "RemoveContainer" containerID="c4de61c7f48929592fa4fe3911fffe24750acfdd56a4eab75365c7aa2a8b7dfb" Feb 27 16:46:57 crc kubenswrapper[4830]: E0227 16:46:57.764998 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 16:47:12 crc kubenswrapper[4830]: I0227 16:47:12.762892 4830 scope.go:117] "RemoveContainer" containerID="c4de61c7f48929592fa4fe3911fffe24750acfdd56a4eab75365c7aa2a8b7dfb" Feb 27 16:47:12 crc kubenswrapper[4830]: E0227 16:47:12.763907 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 16:47:25 crc kubenswrapper[4830]: I0227 16:47:25.762622 4830 scope.go:117] "RemoveContainer" 
containerID="c4de61c7f48929592fa4fe3911fffe24750acfdd56a4eab75365c7aa2a8b7dfb" Feb 27 16:47:25 crc kubenswrapper[4830]: E0227 16:47:25.763709 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 16:47:37 crc kubenswrapper[4830]: I0227 16:47:37.763220 4830 scope.go:117] "RemoveContainer" containerID="c4de61c7f48929592fa4fe3911fffe24750acfdd56a4eab75365c7aa2a8b7dfb" Feb 27 16:47:37 crc kubenswrapper[4830]: E0227 16:47:37.764393 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 16:47:51 crc kubenswrapper[4830]: I0227 16:47:51.762623 4830 scope.go:117] "RemoveContainer" containerID="c4de61c7f48929592fa4fe3911fffe24750acfdd56a4eab75365c7aa2a8b7dfb" Feb 27 16:47:51 crc kubenswrapper[4830]: E0227 16:47:51.763718 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 16:48:00 crc kubenswrapper[4830]: I0227 16:48:00.145085 4830 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536848-vbmxb"] Feb 27 16:48:00 crc kubenswrapper[4830]: E0227 16:48:00.146070 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5324b6d2-025b-4377-8af1-462d4220cbb6" containerName="registry-server" Feb 27 16:48:00 crc kubenswrapper[4830]: I0227 16:48:00.146092 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="5324b6d2-025b-4377-8af1-462d4220cbb6" containerName="registry-server" Feb 27 16:48:00 crc kubenswrapper[4830]: E0227 16:48:00.146119 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5324b6d2-025b-4377-8af1-462d4220cbb6" containerName="extract-content" Feb 27 16:48:00 crc kubenswrapper[4830]: I0227 16:48:00.146131 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="5324b6d2-025b-4377-8af1-462d4220cbb6" containerName="extract-content" Feb 27 16:48:00 crc kubenswrapper[4830]: E0227 16:48:00.146156 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80e2b3eb-fbf9-41df-b723-3b5a4271d33f" containerName="oc" Feb 27 16:48:00 crc kubenswrapper[4830]: I0227 16:48:00.146169 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="80e2b3eb-fbf9-41df-b723-3b5a4271d33f" containerName="oc" Feb 27 16:48:00 crc kubenswrapper[4830]: E0227 16:48:00.146211 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5324b6d2-025b-4377-8af1-462d4220cbb6" containerName="extract-utilities" Feb 27 16:48:00 crc kubenswrapper[4830]: I0227 16:48:00.146224 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="5324b6d2-025b-4377-8af1-462d4220cbb6" containerName="extract-utilities" Feb 27 16:48:00 crc kubenswrapper[4830]: I0227 16:48:00.146440 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="5324b6d2-025b-4377-8af1-462d4220cbb6" containerName="registry-server" Feb 27 16:48:00 crc kubenswrapper[4830]: I0227 16:48:00.146476 4830 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="80e2b3eb-fbf9-41df-b723-3b5a4271d33f" containerName="oc" Feb 27 16:48:00 crc kubenswrapper[4830]: I0227 16:48:00.147329 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536848-vbmxb" Feb 27 16:48:00 crc kubenswrapper[4830]: I0227 16:48:00.153930 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536848-vbmxb"] Feb 27 16:48:00 crc kubenswrapper[4830]: I0227 16:48:00.155681 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lmbtm" Feb 27 16:48:00 crc kubenswrapper[4830]: I0227 16:48:00.155926 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 16:48:00 crc kubenswrapper[4830]: I0227 16:48:00.156179 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 16:48:00 crc kubenswrapper[4830]: I0227 16:48:00.235603 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-426h8\" (UniqueName: \"kubernetes.io/projected/d99a3c6b-dcaa-48f5-a4e9-ec3a32fbeb81-kube-api-access-426h8\") pod \"auto-csr-approver-29536848-vbmxb\" (UID: \"d99a3c6b-dcaa-48f5-a4e9-ec3a32fbeb81\") " pod="openshift-infra/auto-csr-approver-29536848-vbmxb" Feb 27 16:48:00 crc kubenswrapper[4830]: I0227 16:48:00.337167 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-426h8\" (UniqueName: \"kubernetes.io/projected/d99a3c6b-dcaa-48f5-a4e9-ec3a32fbeb81-kube-api-access-426h8\") pod \"auto-csr-approver-29536848-vbmxb\" (UID: \"d99a3c6b-dcaa-48f5-a4e9-ec3a32fbeb81\") " pod="openshift-infra/auto-csr-approver-29536848-vbmxb" Feb 27 16:48:00 crc kubenswrapper[4830]: I0227 16:48:00.360933 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-426h8\" (UniqueName: 
\"kubernetes.io/projected/d99a3c6b-dcaa-48f5-a4e9-ec3a32fbeb81-kube-api-access-426h8\") pod \"auto-csr-approver-29536848-vbmxb\" (UID: \"d99a3c6b-dcaa-48f5-a4e9-ec3a32fbeb81\") " pod="openshift-infra/auto-csr-approver-29536848-vbmxb" Feb 27 16:48:00 crc kubenswrapper[4830]: I0227 16:48:00.482893 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536848-vbmxb" Feb 27 16:48:01 crc kubenswrapper[4830]: I0227 16:48:01.015888 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536848-vbmxb"] Feb 27 16:48:01 crc kubenswrapper[4830]: I0227 16:48:01.303490 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536848-vbmxb" event={"ID":"d99a3c6b-dcaa-48f5-a4e9-ec3a32fbeb81","Type":"ContainerStarted","Data":"779c329038f9243e4f8f171a16568887a69880e3ee484fe27af0dba6dd2a86eb"} Feb 27 16:48:02 crc kubenswrapper[4830]: I0227 16:48:02.313296 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536848-vbmxb" event={"ID":"d99a3c6b-dcaa-48f5-a4e9-ec3a32fbeb81","Type":"ContainerStarted","Data":"fa9a650dfe277730b698f61cf48f8b2efe6e8e862cc2b8560f83b02860cf19e4"} Feb 27 16:48:02 crc kubenswrapper[4830]: I0227 16:48:02.334639 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29536848-vbmxb" podStartSLOduration=1.431394768 podStartE2EDuration="2.334620822s" podCreationTimestamp="2026-02-27 16:48:00 +0000 UTC" firstStartedPulling="2026-02-27 16:48:01.026574493 +0000 UTC m=+2477.115846956" lastFinishedPulling="2026-02-27 16:48:01.929800537 +0000 UTC m=+2478.019073010" observedRunningTime="2026-02-27 16:48:02.332920711 +0000 UTC m=+2478.422193214" watchObservedRunningTime="2026-02-27 16:48:02.334620822 +0000 UTC m=+2478.423893285" Feb 27 16:48:03 crc kubenswrapper[4830]: I0227 16:48:03.324076 4830 generic.go:334] "Generic (PLEG): container 
finished" podID="d99a3c6b-dcaa-48f5-a4e9-ec3a32fbeb81" containerID="fa9a650dfe277730b698f61cf48f8b2efe6e8e862cc2b8560f83b02860cf19e4" exitCode=0 Feb 27 16:48:03 crc kubenswrapper[4830]: I0227 16:48:03.324141 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536848-vbmxb" event={"ID":"d99a3c6b-dcaa-48f5-a4e9-ec3a32fbeb81","Type":"ContainerDied","Data":"fa9a650dfe277730b698f61cf48f8b2efe6e8e862cc2b8560f83b02860cf19e4"} Feb 27 16:48:04 crc kubenswrapper[4830]: I0227 16:48:04.682527 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536848-vbmxb" Feb 27 16:48:04 crc kubenswrapper[4830]: I0227 16:48:04.807513 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-426h8\" (UniqueName: \"kubernetes.io/projected/d99a3c6b-dcaa-48f5-a4e9-ec3a32fbeb81-kube-api-access-426h8\") pod \"d99a3c6b-dcaa-48f5-a4e9-ec3a32fbeb81\" (UID: \"d99a3c6b-dcaa-48f5-a4e9-ec3a32fbeb81\") " Feb 27 16:48:04 crc kubenswrapper[4830]: I0227 16:48:04.819261 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d99a3c6b-dcaa-48f5-a4e9-ec3a32fbeb81-kube-api-access-426h8" (OuterVolumeSpecName: "kube-api-access-426h8") pod "d99a3c6b-dcaa-48f5-a4e9-ec3a32fbeb81" (UID: "d99a3c6b-dcaa-48f5-a4e9-ec3a32fbeb81"). InnerVolumeSpecName "kube-api-access-426h8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:48:04 crc kubenswrapper[4830]: I0227 16:48:04.911363 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-426h8\" (UniqueName: \"kubernetes.io/projected/d99a3c6b-dcaa-48f5-a4e9-ec3a32fbeb81-kube-api-access-426h8\") on node \"crc\" DevicePath \"\"" Feb 27 16:48:05 crc kubenswrapper[4830]: I0227 16:48:05.346206 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536848-vbmxb" event={"ID":"d99a3c6b-dcaa-48f5-a4e9-ec3a32fbeb81","Type":"ContainerDied","Data":"779c329038f9243e4f8f171a16568887a69880e3ee484fe27af0dba6dd2a86eb"} Feb 27 16:48:05 crc kubenswrapper[4830]: I0227 16:48:05.346282 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="779c329038f9243e4f8f171a16568887a69880e3ee484fe27af0dba6dd2a86eb" Feb 27 16:48:05 crc kubenswrapper[4830]: I0227 16:48:05.346301 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536848-vbmxb" Feb 27 16:48:05 crc kubenswrapper[4830]: I0227 16:48:05.431298 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536842-vzjv4"] Feb 27 16:48:05 crc kubenswrapper[4830]: I0227 16:48:05.441161 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536842-vzjv4"] Feb 27 16:48:06 crc kubenswrapper[4830]: I0227 16:48:06.763725 4830 scope.go:117] "RemoveContainer" containerID="c4de61c7f48929592fa4fe3911fffe24750acfdd56a4eab75365c7aa2a8b7dfb" Feb 27 16:48:06 crc kubenswrapper[4830]: E0227 16:48:06.764208 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 16:48:06 crc kubenswrapper[4830]: I0227 16:48:06.782431 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ddabb2aa-eef2-4c5e-8db9-738fb96d3b6a" path="/var/lib/kubelet/pods/ddabb2aa-eef2-4c5e-8db9-738fb96d3b6a/volumes" Feb 27 16:48:17 crc kubenswrapper[4830]: I0227 16:48:17.768553 4830 scope.go:117] "RemoveContainer" containerID="c4de61c7f48929592fa4fe3911fffe24750acfdd56a4eab75365c7aa2a8b7dfb" Feb 27 16:48:17 crc kubenswrapper[4830]: E0227 16:48:17.769426 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 16:48:30 crc kubenswrapper[4830]: I0227 16:48:30.763222 4830 scope.go:117] "RemoveContainer" containerID="c4de61c7f48929592fa4fe3911fffe24750acfdd56a4eab75365c7aa2a8b7dfb" Feb 27 16:48:30 crc kubenswrapper[4830]: E0227 16:48:30.764188 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 16:48:44 crc kubenswrapper[4830]: I0227 16:48:44.769464 4830 scope.go:117] "RemoveContainer" containerID="c4de61c7f48929592fa4fe3911fffe24750acfdd56a4eab75365c7aa2a8b7dfb" Feb 27 16:48:44 crc kubenswrapper[4830]: E0227 16:48:44.771098 4830 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 16:48:45 crc kubenswrapper[4830]: I0227 16:48:45.057218 4830 scope.go:117] "RemoveContainer" containerID="c34c3a982d168b19783847bacdcf4ceb89f783b676f874ead2102d6282f28730" Feb 27 16:48:57 crc kubenswrapper[4830]: I0227 16:48:57.762036 4830 scope.go:117] "RemoveContainer" containerID="c4de61c7f48929592fa4fe3911fffe24750acfdd56a4eab75365c7aa2a8b7dfb" Feb 27 16:48:57 crc kubenswrapper[4830]: E0227 16:48:57.762861 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 16:49:11 crc kubenswrapper[4830]: I0227 16:49:11.763856 4830 scope.go:117] "RemoveContainer" containerID="c4de61c7f48929592fa4fe3911fffe24750acfdd56a4eab75365c7aa2a8b7dfb" Feb 27 16:49:11 crc kubenswrapper[4830]: E0227 16:49:11.765042 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 16:49:23 crc kubenswrapper[4830]: I0227 16:49:23.763522 4830 scope.go:117] 
"RemoveContainer" containerID="c4de61c7f48929592fa4fe3911fffe24750acfdd56a4eab75365c7aa2a8b7dfb" Feb 27 16:49:23 crc kubenswrapper[4830]: E0227 16:49:23.765532 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 16:49:37 crc kubenswrapper[4830]: I0227 16:49:37.762889 4830 scope.go:117] "RemoveContainer" containerID="c4de61c7f48929592fa4fe3911fffe24750acfdd56a4eab75365c7aa2a8b7dfb" Feb 27 16:49:38 crc kubenswrapper[4830]: I0227 16:49:38.171175 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" event={"ID":"00d6b7ce-4757-4275-8345-60c1b546ce25","Type":"ContainerStarted","Data":"32607c4338fb1b3f01bf1111028cf86636ecfa24037b80286a80a7e17ea37393"} Feb 27 16:49:45 crc kubenswrapper[4830]: I0227 16:49:45.141296 4830 scope.go:117] "RemoveContainer" containerID="4ac86be85472928cc42b28deb0eda9934358d21e90a0d563fd1a3d1b2494f969" Feb 27 16:49:45 crc kubenswrapper[4830]: I0227 16:49:45.185008 4830 scope.go:117] "RemoveContainer" containerID="d64773d1fa168b9e7151da1a1cef3d757cfe79a2910b3a5a4d1335e62f031898" Feb 27 16:49:45 crc kubenswrapper[4830]: I0227 16:49:45.224240 4830 scope.go:117] "RemoveContainer" containerID="f4867970e58c67de17ac8b5e4d66607693a8eb4fe926f87ba9147288d8faf13c" Feb 27 16:50:00 crc kubenswrapper[4830]: I0227 16:50:00.165654 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536850-4svr6"] Feb 27 16:50:00 crc kubenswrapper[4830]: E0227 16:50:00.166768 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d99a3c6b-dcaa-48f5-a4e9-ec3a32fbeb81" 
containerName="oc" Feb 27 16:50:00 crc kubenswrapper[4830]: I0227 16:50:00.166792 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="d99a3c6b-dcaa-48f5-a4e9-ec3a32fbeb81" containerName="oc" Feb 27 16:50:00 crc kubenswrapper[4830]: I0227 16:50:00.167140 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="d99a3c6b-dcaa-48f5-a4e9-ec3a32fbeb81" containerName="oc" Feb 27 16:50:00 crc kubenswrapper[4830]: I0227 16:50:00.167811 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536850-4svr6" Feb 27 16:50:00 crc kubenswrapper[4830]: I0227 16:50:00.173300 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 16:50:00 crc kubenswrapper[4830]: I0227 16:50:00.175655 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 16:50:00 crc kubenswrapper[4830]: I0227 16:50:00.195076 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536850-4svr6"] Feb 27 16:50:00 crc kubenswrapper[4830]: I0227 16:50:00.196417 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lmbtm" Feb 27 16:50:00 crc kubenswrapper[4830]: I0227 16:50:00.349463 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6l77\" (UniqueName: \"kubernetes.io/projected/56d65891-d21c-4da5-a5f4-f39606656c0b-kube-api-access-s6l77\") pod \"auto-csr-approver-29536850-4svr6\" (UID: \"56d65891-d21c-4da5-a5f4-f39606656c0b\") " pod="openshift-infra/auto-csr-approver-29536850-4svr6" Feb 27 16:50:00 crc kubenswrapper[4830]: I0227 16:50:00.451209 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s6l77\" (UniqueName: \"kubernetes.io/projected/56d65891-d21c-4da5-a5f4-f39606656c0b-kube-api-access-s6l77\") pod 
\"auto-csr-approver-29536850-4svr6\" (UID: \"56d65891-d21c-4da5-a5f4-f39606656c0b\") " pod="openshift-infra/auto-csr-approver-29536850-4svr6" Feb 27 16:50:00 crc kubenswrapper[4830]: I0227 16:50:00.477755 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s6l77\" (UniqueName: \"kubernetes.io/projected/56d65891-d21c-4da5-a5f4-f39606656c0b-kube-api-access-s6l77\") pod \"auto-csr-approver-29536850-4svr6\" (UID: \"56d65891-d21c-4da5-a5f4-f39606656c0b\") " pod="openshift-infra/auto-csr-approver-29536850-4svr6" Feb 27 16:50:00 crc kubenswrapper[4830]: I0227 16:50:00.533857 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536850-4svr6" Feb 27 16:50:01 crc kubenswrapper[4830]: I0227 16:50:01.050507 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536850-4svr6"] Feb 27 16:50:01 crc kubenswrapper[4830]: I0227 16:50:01.056917 4830 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 27 16:50:01 crc kubenswrapper[4830]: I0227 16:50:01.414221 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536850-4svr6" event={"ID":"56d65891-d21c-4da5-a5f4-f39606656c0b","Type":"ContainerStarted","Data":"1d537a215bc43a67cf330b62951215a1fce4aaddc2bb77464499a87cf60931b5"} Feb 27 16:50:03 crc kubenswrapper[4830]: I0227 16:50:03.444294 4830 generic.go:334] "Generic (PLEG): container finished" podID="56d65891-d21c-4da5-a5f4-f39606656c0b" containerID="c2cad79082b6458298e697b09f6d2ed648523d215732030601012e41908f16b3" exitCode=0 Feb 27 16:50:03 crc kubenswrapper[4830]: I0227 16:50:03.444406 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536850-4svr6" event={"ID":"56d65891-d21c-4da5-a5f4-f39606656c0b","Type":"ContainerDied","Data":"c2cad79082b6458298e697b09f6d2ed648523d215732030601012e41908f16b3"} Feb 27 
16:50:04 crc kubenswrapper[4830]: I0227 16:50:04.755769 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536850-4svr6" Feb 27 16:50:04 crc kubenswrapper[4830]: I0227 16:50:04.925992 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s6l77\" (UniqueName: \"kubernetes.io/projected/56d65891-d21c-4da5-a5f4-f39606656c0b-kube-api-access-s6l77\") pod \"56d65891-d21c-4da5-a5f4-f39606656c0b\" (UID: \"56d65891-d21c-4da5-a5f4-f39606656c0b\") " Feb 27 16:50:04 crc kubenswrapper[4830]: I0227 16:50:04.939133 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56d65891-d21c-4da5-a5f4-f39606656c0b-kube-api-access-s6l77" (OuterVolumeSpecName: "kube-api-access-s6l77") pod "56d65891-d21c-4da5-a5f4-f39606656c0b" (UID: "56d65891-d21c-4da5-a5f4-f39606656c0b"). InnerVolumeSpecName "kube-api-access-s6l77". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:50:05 crc kubenswrapper[4830]: I0227 16:50:05.027555 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s6l77\" (UniqueName: \"kubernetes.io/projected/56d65891-d21c-4da5-a5f4-f39606656c0b-kube-api-access-s6l77\") on node \"crc\" DevicePath \"\"" Feb 27 16:50:05 crc kubenswrapper[4830]: I0227 16:50:05.464600 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536850-4svr6" event={"ID":"56d65891-d21c-4da5-a5f4-f39606656c0b","Type":"ContainerDied","Data":"1d537a215bc43a67cf330b62951215a1fce4aaddc2bb77464499a87cf60931b5"} Feb 27 16:50:05 crc kubenswrapper[4830]: I0227 16:50:05.464651 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536850-4svr6" Feb 27 16:50:05 crc kubenswrapper[4830]: I0227 16:50:05.464662 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1d537a215bc43a67cf330b62951215a1fce4aaddc2bb77464499a87cf60931b5" Feb 27 16:50:05 crc kubenswrapper[4830]: I0227 16:50:05.846717 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536844-mxwt6"] Feb 27 16:50:05 crc kubenswrapper[4830]: I0227 16:50:05.852909 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536844-mxwt6"] Feb 27 16:50:06 crc kubenswrapper[4830]: I0227 16:50:06.780679 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84cb45db-b04b-4162-8ddf-ad745104891a" path="/var/lib/kubelet/pods/84cb45db-b04b-4162-8ddf-ad745104891a/volumes" Feb 27 16:50:45 crc kubenswrapper[4830]: I0227 16:50:45.281996 4830 scope.go:117] "RemoveContainer" containerID="d8983ef771441f42bf4e8640879aaf9cf659027f467b21fb8abd11277603d54c" Feb 27 16:52:00 crc kubenswrapper[4830]: I0227 16:52:00.155778 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536852-6vbvt"] Feb 27 16:52:00 crc kubenswrapper[4830]: E0227 16:52:00.156804 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56d65891-d21c-4da5-a5f4-f39606656c0b" containerName="oc" Feb 27 16:52:00 crc kubenswrapper[4830]: I0227 16:52:00.156825 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="56d65891-d21c-4da5-a5f4-f39606656c0b" containerName="oc" Feb 27 16:52:00 crc kubenswrapper[4830]: I0227 16:52:00.157123 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="56d65891-d21c-4da5-a5f4-f39606656c0b" containerName="oc" Feb 27 16:52:00 crc kubenswrapper[4830]: I0227 16:52:00.157809 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536852-6vbvt" Feb 27 16:52:00 crc kubenswrapper[4830]: I0227 16:52:00.160457 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 16:52:00 crc kubenswrapper[4830]: I0227 16:52:00.160757 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 16:52:00 crc kubenswrapper[4830]: I0227 16:52:00.160820 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lmbtm" Feb 27 16:52:00 crc kubenswrapper[4830]: I0227 16:52:00.173839 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536852-6vbvt"] Feb 27 16:52:00 crc kubenswrapper[4830]: I0227 16:52:00.214899 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzk75\" (UniqueName: \"kubernetes.io/projected/3a4ee390-448f-427a-bf19-cc86ffbbe968-kube-api-access-lzk75\") pod \"auto-csr-approver-29536852-6vbvt\" (UID: \"3a4ee390-448f-427a-bf19-cc86ffbbe968\") " pod="openshift-infra/auto-csr-approver-29536852-6vbvt" Feb 27 16:52:00 crc kubenswrapper[4830]: I0227 16:52:00.317820 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lzk75\" (UniqueName: \"kubernetes.io/projected/3a4ee390-448f-427a-bf19-cc86ffbbe968-kube-api-access-lzk75\") pod \"auto-csr-approver-29536852-6vbvt\" (UID: \"3a4ee390-448f-427a-bf19-cc86ffbbe968\") " pod="openshift-infra/auto-csr-approver-29536852-6vbvt" Feb 27 16:52:00 crc kubenswrapper[4830]: I0227 16:52:00.345984 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lzk75\" (UniqueName: \"kubernetes.io/projected/3a4ee390-448f-427a-bf19-cc86ffbbe968-kube-api-access-lzk75\") pod \"auto-csr-approver-29536852-6vbvt\" (UID: \"3a4ee390-448f-427a-bf19-cc86ffbbe968\") " 
pod="openshift-infra/auto-csr-approver-29536852-6vbvt" Feb 27 16:52:00 crc kubenswrapper[4830]: I0227 16:52:00.493129 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536852-6vbvt" Feb 27 16:52:00 crc kubenswrapper[4830]: I0227 16:52:00.999207 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536852-6vbvt"] Feb 27 16:52:01 crc kubenswrapper[4830]: I0227 16:52:01.214148 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536852-6vbvt" event={"ID":"3a4ee390-448f-427a-bf19-cc86ffbbe968","Type":"ContainerStarted","Data":"d86c68689ef0e58ff020f798cbb209281ac3686d8e0cf8c0a1c901382a3c7ef7"} Feb 27 16:52:03 crc kubenswrapper[4830]: I0227 16:52:03.159870 4830 patch_prober.go:28] interesting pod/machine-config-daemon-2tv5v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 16:52:03 crc kubenswrapper[4830]: I0227 16:52:03.160023 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 16:52:03 crc kubenswrapper[4830]: I0227 16:52:03.236368 4830 generic.go:334] "Generic (PLEG): container finished" podID="3a4ee390-448f-427a-bf19-cc86ffbbe968" containerID="74ec3129796e69c4352b4797a7157d74c0a337b837658466ca0a93857c935343" exitCode=0 Feb 27 16:52:03 crc kubenswrapper[4830]: I0227 16:52:03.236438 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536852-6vbvt" 
event={"ID":"3a4ee390-448f-427a-bf19-cc86ffbbe968","Type":"ContainerDied","Data":"74ec3129796e69c4352b4797a7157d74c0a337b837658466ca0a93857c935343"} Feb 27 16:52:04 crc kubenswrapper[4830]: I0227 16:52:04.523734 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536852-6vbvt" Feb 27 16:52:04 crc kubenswrapper[4830]: I0227 16:52:04.697071 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzk75\" (UniqueName: \"kubernetes.io/projected/3a4ee390-448f-427a-bf19-cc86ffbbe968-kube-api-access-lzk75\") pod \"3a4ee390-448f-427a-bf19-cc86ffbbe968\" (UID: \"3a4ee390-448f-427a-bf19-cc86ffbbe968\") " Feb 27 16:52:04 crc kubenswrapper[4830]: I0227 16:52:04.703654 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a4ee390-448f-427a-bf19-cc86ffbbe968-kube-api-access-lzk75" (OuterVolumeSpecName: "kube-api-access-lzk75") pod "3a4ee390-448f-427a-bf19-cc86ffbbe968" (UID: "3a4ee390-448f-427a-bf19-cc86ffbbe968"). InnerVolumeSpecName "kube-api-access-lzk75". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:52:04 crc kubenswrapper[4830]: I0227 16:52:04.799554 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzk75\" (UniqueName: \"kubernetes.io/projected/3a4ee390-448f-427a-bf19-cc86ffbbe968-kube-api-access-lzk75\") on node \"crc\" DevicePath \"\"" Feb 27 16:52:05 crc kubenswrapper[4830]: I0227 16:52:05.260377 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536852-6vbvt" event={"ID":"3a4ee390-448f-427a-bf19-cc86ffbbe968","Type":"ContainerDied","Data":"d86c68689ef0e58ff020f798cbb209281ac3686d8e0cf8c0a1c901382a3c7ef7"} Feb 27 16:52:05 crc kubenswrapper[4830]: I0227 16:52:05.260741 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d86c68689ef0e58ff020f798cbb209281ac3686d8e0cf8c0a1c901382a3c7ef7" Feb 27 16:52:05 crc kubenswrapper[4830]: I0227 16:52:05.260508 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536852-6vbvt" Feb 27 16:52:05 crc kubenswrapper[4830]: I0227 16:52:05.623708 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536846-fjbwl"] Feb 27 16:52:05 crc kubenswrapper[4830]: I0227 16:52:05.630871 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536846-fjbwl"] Feb 27 16:52:06 crc kubenswrapper[4830]: I0227 16:52:06.778354 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80e2b3eb-fbf9-41df-b723-3b5a4271d33f" path="/var/lib/kubelet/pods/80e2b3eb-fbf9-41df-b723-3b5a4271d33f/volumes" Feb 27 16:52:33 crc kubenswrapper[4830]: I0227 16:52:33.160741 4830 patch_prober.go:28] interesting pod/machine-config-daemon-2tv5v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body= Feb 27 16:52:33 crc kubenswrapper[4830]: I0227 16:52:33.161406 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 16:52:45 crc kubenswrapper[4830]: I0227 16:52:45.407435 4830 scope.go:117] "RemoveContainer" containerID="72876424e4b21fe42899f517b622ca58b66978a43e1c64a1c5e4556b11ad4e13" Feb 27 16:53:03 crc kubenswrapper[4830]: I0227 16:53:03.159897 4830 patch_prober.go:28] interesting pod/machine-config-daemon-2tv5v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 16:53:03 crc kubenswrapper[4830]: I0227 16:53:03.160793 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 16:53:03 crc kubenswrapper[4830]: I0227 16:53:03.160860 4830 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" Feb 27 16:53:03 crc kubenswrapper[4830]: I0227 16:53:03.161731 4830 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"32607c4338fb1b3f01bf1111028cf86636ecfa24037b80286a80a7e17ea37393"} pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 
27 16:53:03 crc kubenswrapper[4830]: I0227 16:53:03.161828 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" containerName="machine-config-daemon" containerID="cri-o://32607c4338fb1b3f01bf1111028cf86636ecfa24037b80286a80a7e17ea37393" gracePeriod=600 Feb 27 16:53:03 crc kubenswrapper[4830]: I0227 16:53:03.877721 4830 generic.go:334] "Generic (PLEG): container finished" podID="00d6b7ce-4757-4275-8345-60c1b546ce25" containerID="32607c4338fb1b3f01bf1111028cf86636ecfa24037b80286a80a7e17ea37393" exitCode=0 Feb 27 16:53:03 crc kubenswrapper[4830]: I0227 16:53:03.878213 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" event={"ID":"00d6b7ce-4757-4275-8345-60c1b546ce25","Type":"ContainerDied","Data":"32607c4338fb1b3f01bf1111028cf86636ecfa24037b80286a80a7e17ea37393"} Feb 27 16:53:03 crc kubenswrapper[4830]: I0227 16:53:03.878267 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" event={"ID":"00d6b7ce-4757-4275-8345-60c1b546ce25","Type":"ContainerStarted","Data":"b1477650672db48c8bb6d798d8fc8040ec3d1666489a66fcec0561cf4cfade74"} Feb 27 16:53:03 crc kubenswrapper[4830]: I0227 16:53:03.878300 4830 scope.go:117] "RemoveContainer" containerID="c4de61c7f48929592fa4fe3911fffe24750acfdd56a4eab75365c7aa2a8b7dfb" Feb 27 16:53:39 crc kubenswrapper[4830]: I0227 16:53:39.607761 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-zbt7j"] Feb 27 16:53:39 crc kubenswrapper[4830]: E0227 16:53:39.608868 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a4ee390-448f-427a-bf19-cc86ffbbe968" containerName="oc" Feb 27 16:53:39 crc kubenswrapper[4830]: I0227 16:53:39.608889 4830 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="3a4ee390-448f-427a-bf19-cc86ffbbe968" containerName="oc" Feb 27 16:53:39 crc kubenswrapper[4830]: I0227 16:53:39.609256 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a4ee390-448f-427a-bf19-cc86ffbbe968" containerName="oc" Feb 27 16:53:39 crc kubenswrapper[4830]: I0227 16:53:39.611041 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zbt7j" Feb 27 16:53:39 crc kubenswrapper[4830]: I0227 16:53:39.630127 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zbt7j"] Feb 27 16:53:39 crc kubenswrapper[4830]: I0227 16:53:39.744649 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2de4d80e-8c2f-4332-95b4-842c80841212-catalog-content\") pod \"redhat-operators-zbt7j\" (UID: \"2de4d80e-8c2f-4332-95b4-842c80841212\") " pod="openshift-marketplace/redhat-operators-zbt7j" Feb 27 16:53:39 crc kubenswrapper[4830]: I0227 16:53:39.744951 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2de4d80e-8c2f-4332-95b4-842c80841212-utilities\") pod \"redhat-operators-zbt7j\" (UID: \"2de4d80e-8c2f-4332-95b4-842c80841212\") " pod="openshift-marketplace/redhat-operators-zbt7j" Feb 27 16:53:39 crc kubenswrapper[4830]: I0227 16:53:39.745188 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgfr4\" (UniqueName: \"kubernetes.io/projected/2de4d80e-8c2f-4332-95b4-842c80841212-kube-api-access-pgfr4\") pod \"redhat-operators-zbt7j\" (UID: \"2de4d80e-8c2f-4332-95b4-842c80841212\") " pod="openshift-marketplace/redhat-operators-zbt7j" Feb 27 16:53:39 crc kubenswrapper[4830]: I0227 16:53:39.846594 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-pgfr4\" (UniqueName: \"kubernetes.io/projected/2de4d80e-8c2f-4332-95b4-842c80841212-kube-api-access-pgfr4\") pod \"redhat-operators-zbt7j\" (UID: \"2de4d80e-8c2f-4332-95b4-842c80841212\") " pod="openshift-marketplace/redhat-operators-zbt7j" Feb 27 16:53:39 crc kubenswrapper[4830]: I0227 16:53:39.846710 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2de4d80e-8c2f-4332-95b4-842c80841212-catalog-content\") pod \"redhat-operators-zbt7j\" (UID: \"2de4d80e-8c2f-4332-95b4-842c80841212\") " pod="openshift-marketplace/redhat-operators-zbt7j" Feb 27 16:53:39 crc kubenswrapper[4830]: I0227 16:53:39.846830 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2de4d80e-8c2f-4332-95b4-842c80841212-utilities\") pod \"redhat-operators-zbt7j\" (UID: \"2de4d80e-8c2f-4332-95b4-842c80841212\") " pod="openshift-marketplace/redhat-operators-zbt7j" Feb 27 16:53:39 crc kubenswrapper[4830]: I0227 16:53:39.848028 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2de4d80e-8c2f-4332-95b4-842c80841212-utilities\") pod \"redhat-operators-zbt7j\" (UID: \"2de4d80e-8c2f-4332-95b4-842c80841212\") " pod="openshift-marketplace/redhat-operators-zbt7j" Feb 27 16:53:39 crc kubenswrapper[4830]: I0227 16:53:39.848078 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2de4d80e-8c2f-4332-95b4-842c80841212-catalog-content\") pod \"redhat-operators-zbt7j\" (UID: \"2de4d80e-8c2f-4332-95b4-842c80841212\") " pod="openshift-marketplace/redhat-operators-zbt7j" Feb 27 16:53:39 crc kubenswrapper[4830]: I0227 16:53:39.867930 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pgfr4\" (UniqueName: 
\"kubernetes.io/projected/2de4d80e-8c2f-4332-95b4-842c80841212-kube-api-access-pgfr4\") pod \"redhat-operators-zbt7j\" (UID: \"2de4d80e-8c2f-4332-95b4-842c80841212\") " pod="openshift-marketplace/redhat-operators-zbt7j" Feb 27 16:53:39 crc kubenswrapper[4830]: I0227 16:53:39.943353 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zbt7j" Feb 27 16:53:40 crc kubenswrapper[4830]: I0227 16:53:40.396537 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zbt7j"] Feb 27 16:53:41 crc kubenswrapper[4830]: I0227 16:53:41.226879 4830 generic.go:334] "Generic (PLEG): container finished" podID="2de4d80e-8c2f-4332-95b4-842c80841212" containerID="79358e1dcef5cbbebbdcf6be25720bdd88e38941ddaf457073f72542481438ab" exitCode=0 Feb 27 16:53:41 crc kubenswrapper[4830]: I0227 16:53:41.226954 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zbt7j" event={"ID":"2de4d80e-8c2f-4332-95b4-842c80841212","Type":"ContainerDied","Data":"79358e1dcef5cbbebbdcf6be25720bdd88e38941ddaf457073f72542481438ab"} Feb 27 16:53:41 crc kubenswrapper[4830]: I0227 16:53:41.227239 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zbt7j" event={"ID":"2de4d80e-8c2f-4332-95b4-842c80841212","Type":"ContainerStarted","Data":"c3a4ca9beccfa0cbe5462ba289551842617544786da6966f0cf3014d2195a367"} Feb 27 16:53:42 crc kubenswrapper[4830]: I0227 16:53:42.238688 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zbt7j" event={"ID":"2de4d80e-8c2f-4332-95b4-842c80841212","Type":"ContainerStarted","Data":"07006fb798b6d5b0fdb825db8b176209de1b1439118147eed2bd3d170390141c"} Feb 27 16:53:43 crc kubenswrapper[4830]: I0227 16:53:43.250791 4830 generic.go:334] "Generic (PLEG): container finished" podID="2de4d80e-8c2f-4332-95b4-842c80841212" 
containerID="07006fb798b6d5b0fdb825db8b176209de1b1439118147eed2bd3d170390141c" exitCode=0 Feb 27 16:53:43 crc kubenswrapper[4830]: I0227 16:53:43.250858 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zbt7j" event={"ID":"2de4d80e-8c2f-4332-95b4-842c80841212","Type":"ContainerDied","Data":"07006fb798b6d5b0fdb825db8b176209de1b1439118147eed2bd3d170390141c"} Feb 27 16:53:44 crc kubenswrapper[4830]: I0227 16:53:44.263274 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zbt7j" event={"ID":"2de4d80e-8c2f-4332-95b4-842c80841212","Type":"ContainerStarted","Data":"ca46b311c1f0b66188e40aa38b713fd49c678718df6a969c4bd019b92521863b"} Feb 27 16:53:44 crc kubenswrapper[4830]: I0227 16:53:44.301795 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-zbt7j" podStartSLOduration=2.864105691 podStartE2EDuration="5.301767552s" podCreationTimestamp="2026-02-27 16:53:39 +0000 UTC" firstStartedPulling="2026-02-27 16:53:41.229862244 +0000 UTC m=+2817.319134747" lastFinishedPulling="2026-02-27 16:53:43.667524145 +0000 UTC m=+2819.756796608" observedRunningTime="2026-02-27 16:53:44.291093249 +0000 UTC m=+2820.380365792" watchObservedRunningTime="2026-02-27 16:53:44.301767552 +0000 UTC m=+2820.391040055" Feb 27 16:53:49 crc kubenswrapper[4830]: I0227 16:53:49.944171 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-zbt7j" Feb 27 16:53:49 crc kubenswrapper[4830]: I0227 16:53:49.944818 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-zbt7j" Feb 27 16:53:50 crc kubenswrapper[4830]: I0227 16:53:50.991357 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zbt7j" podUID="2de4d80e-8c2f-4332-95b4-842c80841212" containerName="registry-server" 
probeResult="failure" output=< Feb 27 16:53:50 crc kubenswrapper[4830]: timeout: failed to connect service ":50051" within 1s Feb 27 16:53:50 crc kubenswrapper[4830]: > Feb 27 16:54:00 crc kubenswrapper[4830]: I0227 16:54:00.007364 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-zbt7j" Feb 27 16:54:00 crc kubenswrapper[4830]: I0227 16:54:00.070129 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-zbt7j" Feb 27 16:54:00 crc kubenswrapper[4830]: I0227 16:54:00.141065 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536854-9j24s"] Feb 27 16:54:00 crc kubenswrapper[4830]: I0227 16:54:00.142284 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536854-9j24s" Feb 27 16:54:00 crc kubenswrapper[4830]: I0227 16:54:00.145484 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lmbtm" Feb 27 16:54:00 crc kubenswrapper[4830]: I0227 16:54:00.145529 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 16:54:00 crc kubenswrapper[4830]: I0227 16:54:00.151826 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 16:54:00 crc kubenswrapper[4830]: I0227 16:54:00.151849 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536854-9j24s"] Feb 27 16:54:00 crc kubenswrapper[4830]: I0227 16:54:00.256528 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zbt7j"] Feb 27 16:54:00 crc kubenswrapper[4830]: I0227 16:54:00.272173 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwljx\" (UniqueName: 
\"kubernetes.io/projected/c00c49e6-0391-440f-b78c-7746d978baa3-kube-api-access-jwljx\") pod \"auto-csr-approver-29536854-9j24s\" (UID: \"c00c49e6-0391-440f-b78c-7746d978baa3\") " pod="openshift-infra/auto-csr-approver-29536854-9j24s" Feb 27 16:54:00 crc kubenswrapper[4830]: I0227 16:54:00.373300 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jwljx\" (UniqueName: \"kubernetes.io/projected/c00c49e6-0391-440f-b78c-7746d978baa3-kube-api-access-jwljx\") pod \"auto-csr-approver-29536854-9j24s\" (UID: \"c00c49e6-0391-440f-b78c-7746d978baa3\") " pod="openshift-infra/auto-csr-approver-29536854-9j24s" Feb 27 16:54:00 crc kubenswrapper[4830]: I0227 16:54:00.397411 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jwljx\" (UniqueName: \"kubernetes.io/projected/c00c49e6-0391-440f-b78c-7746d978baa3-kube-api-access-jwljx\") pod \"auto-csr-approver-29536854-9j24s\" (UID: \"c00c49e6-0391-440f-b78c-7746d978baa3\") " pod="openshift-infra/auto-csr-approver-29536854-9j24s" Feb 27 16:54:00 crc kubenswrapper[4830]: I0227 16:54:00.464436 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536854-9j24s" Feb 27 16:54:00 crc kubenswrapper[4830]: W0227 16:54:00.983455 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc00c49e6_0391_440f_b78c_7746d978baa3.slice/crio-234ec3739777e9b721b75fcd623edbe8161d2d4c014fd5b64249bc9572294ce6 WatchSource:0}: Error finding container 234ec3739777e9b721b75fcd623edbe8161d2d4c014fd5b64249bc9572294ce6: Status 404 returned error can't find the container with id 234ec3739777e9b721b75fcd623edbe8161d2d4c014fd5b64249bc9572294ce6 Feb 27 16:54:00 crc kubenswrapper[4830]: I0227 16:54:00.996343 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536854-9j24s"] Feb 27 16:54:01 crc kubenswrapper[4830]: I0227 16:54:01.420005 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536854-9j24s" event={"ID":"c00c49e6-0391-440f-b78c-7746d978baa3","Type":"ContainerStarted","Data":"234ec3739777e9b721b75fcd623edbe8161d2d4c014fd5b64249bc9572294ce6"} Feb 27 16:54:01 crc kubenswrapper[4830]: I0227 16:54:01.420212 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-zbt7j" podUID="2de4d80e-8c2f-4332-95b4-842c80841212" containerName="registry-server" containerID="cri-o://ca46b311c1f0b66188e40aa38b713fd49c678718df6a969c4bd019b92521863b" gracePeriod=2 Feb 27 16:54:02 crc kubenswrapper[4830]: I0227 16:54:02.387880 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zbt7j" Feb 27 16:54:02 crc kubenswrapper[4830]: I0227 16:54:02.442722 4830 generic.go:334] "Generic (PLEG): container finished" podID="2de4d80e-8c2f-4332-95b4-842c80841212" containerID="ca46b311c1f0b66188e40aa38b713fd49c678718df6a969c4bd019b92521863b" exitCode=0 Feb 27 16:54:02 crc kubenswrapper[4830]: I0227 16:54:02.442834 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zbt7j" event={"ID":"2de4d80e-8c2f-4332-95b4-842c80841212","Type":"ContainerDied","Data":"ca46b311c1f0b66188e40aa38b713fd49c678718df6a969c4bd019b92521863b"} Feb 27 16:54:02 crc kubenswrapper[4830]: I0227 16:54:02.442886 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zbt7j" event={"ID":"2de4d80e-8c2f-4332-95b4-842c80841212","Type":"ContainerDied","Data":"c3a4ca9beccfa0cbe5462ba289551842617544786da6966f0cf3014d2195a367"} Feb 27 16:54:02 crc kubenswrapper[4830]: I0227 16:54:02.442906 4830 scope.go:117] "RemoveContainer" containerID="ca46b311c1f0b66188e40aa38b713fd49c678718df6a969c4bd019b92521863b" Feb 27 16:54:02 crc kubenswrapper[4830]: I0227 16:54:02.443049 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zbt7j" Feb 27 16:54:02 crc kubenswrapper[4830]: I0227 16:54:02.466215 4830 scope.go:117] "RemoveContainer" containerID="07006fb798b6d5b0fdb825db8b176209de1b1439118147eed2bd3d170390141c" Feb 27 16:54:02 crc kubenswrapper[4830]: I0227 16:54:02.501322 4830 scope.go:117] "RemoveContainer" containerID="79358e1dcef5cbbebbdcf6be25720bdd88e38941ddaf457073f72542481438ab" Feb 27 16:54:02 crc kubenswrapper[4830]: I0227 16:54:02.516499 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2de4d80e-8c2f-4332-95b4-842c80841212-catalog-content\") pod \"2de4d80e-8c2f-4332-95b4-842c80841212\" (UID: \"2de4d80e-8c2f-4332-95b4-842c80841212\") " Feb 27 16:54:02 crc kubenswrapper[4830]: I0227 16:54:02.516612 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2de4d80e-8c2f-4332-95b4-842c80841212-utilities\") pod \"2de4d80e-8c2f-4332-95b4-842c80841212\" (UID: \"2de4d80e-8c2f-4332-95b4-842c80841212\") " Feb 27 16:54:02 crc kubenswrapper[4830]: I0227 16:54:02.516675 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pgfr4\" (UniqueName: \"kubernetes.io/projected/2de4d80e-8c2f-4332-95b4-842c80841212-kube-api-access-pgfr4\") pod \"2de4d80e-8c2f-4332-95b4-842c80841212\" (UID: \"2de4d80e-8c2f-4332-95b4-842c80841212\") " Feb 27 16:54:02 crc kubenswrapper[4830]: I0227 16:54:02.518056 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2de4d80e-8c2f-4332-95b4-842c80841212-utilities" (OuterVolumeSpecName: "utilities") pod "2de4d80e-8c2f-4332-95b4-842c80841212" (UID: "2de4d80e-8c2f-4332-95b4-842c80841212"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 16:54:02 crc kubenswrapper[4830]: I0227 16:54:02.522292 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2de4d80e-8c2f-4332-95b4-842c80841212-kube-api-access-pgfr4" (OuterVolumeSpecName: "kube-api-access-pgfr4") pod "2de4d80e-8c2f-4332-95b4-842c80841212" (UID: "2de4d80e-8c2f-4332-95b4-842c80841212"). InnerVolumeSpecName "kube-api-access-pgfr4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:54:02 crc kubenswrapper[4830]: I0227 16:54:02.533562 4830 scope.go:117] "RemoveContainer" containerID="ca46b311c1f0b66188e40aa38b713fd49c678718df6a969c4bd019b92521863b" Feb 27 16:54:02 crc kubenswrapper[4830]: E0227 16:54:02.534099 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca46b311c1f0b66188e40aa38b713fd49c678718df6a969c4bd019b92521863b\": container with ID starting with ca46b311c1f0b66188e40aa38b713fd49c678718df6a969c4bd019b92521863b not found: ID does not exist" containerID="ca46b311c1f0b66188e40aa38b713fd49c678718df6a969c4bd019b92521863b" Feb 27 16:54:02 crc kubenswrapper[4830]: I0227 16:54:02.534140 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca46b311c1f0b66188e40aa38b713fd49c678718df6a969c4bd019b92521863b"} err="failed to get container status \"ca46b311c1f0b66188e40aa38b713fd49c678718df6a969c4bd019b92521863b\": rpc error: code = NotFound desc = could not find container \"ca46b311c1f0b66188e40aa38b713fd49c678718df6a969c4bd019b92521863b\": container with ID starting with ca46b311c1f0b66188e40aa38b713fd49c678718df6a969c4bd019b92521863b not found: ID does not exist" Feb 27 16:54:02 crc kubenswrapper[4830]: I0227 16:54:02.534164 4830 scope.go:117] "RemoveContainer" containerID="07006fb798b6d5b0fdb825db8b176209de1b1439118147eed2bd3d170390141c" Feb 27 16:54:02 crc kubenswrapper[4830]: E0227 16:54:02.534880 
4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"07006fb798b6d5b0fdb825db8b176209de1b1439118147eed2bd3d170390141c\": container with ID starting with 07006fb798b6d5b0fdb825db8b176209de1b1439118147eed2bd3d170390141c not found: ID does not exist" containerID="07006fb798b6d5b0fdb825db8b176209de1b1439118147eed2bd3d170390141c" Feb 27 16:54:02 crc kubenswrapper[4830]: I0227 16:54:02.534907 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"07006fb798b6d5b0fdb825db8b176209de1b1439118147eed2bd3d170390141c"} err="failed to get container status \"07006fb798b6d5b0fdb825db8b176209de1b1439118147eed2bd3d170390141c\": rpc error: code = NotFound desc = could not find container \"07006fb798b6d5b0fdb825db8b176209de1b1439118147eed2bd3d170390141c\": container with ID starting with 07006fb798b6d5b0fdb825db8b176209de1b1439118147eed2bd3d170390141c not found: ID does not exist" Feb 27 16:54:02 crc kubenswrapper[4830]: I0227 16:54:02.534920 4830 scope.go:117] "RemoveContainer" containerID="79358e1dcef5cbbebbdcf6be25720bdd88e38941ddaf457073f72542481438ab" Feb 27 16:54:02 crc kubenswrapper[4830]: E0227 16:54:02.535256 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"79358e1dcef5cbbebbdcf6be25720bdd88e38941ddaf457073f72542481438ab\": container with ID starting with 79358e1dcef5cbbebbdcf6be25720bdd88e38941ddaf457073f72542481438ab not found: ID does not exist" containerID="79358e1dcef5cbbebbdcf6be25720bdd88e38941ddaf457073f72542481438ab" Feb 27 16:54:02 crc kubenswrapper[4830]: I0227 16:54:02.535280 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79358e1dcef5cbbebbdcf6be25720bdd88e38941ddaf457073f72542481438ab"} err="failed to get container status \"79358e1dcef5cbbebbdcf6be25720bdd88e38941ddaf457073f72542481438ab\": rpc error: code = 
NotFound desc = could not find container \"79358e1dcef5cbbebbdcf6be25720bdd88e38941ddaf457073f72542481438ab\": container with ID starting with 79358e1dcef5cbbebbdcf6be25720bdd88e38941ddaf457073f72542481438ab not found: ID does not exist" Feb 27 16:54:02 crc kubenswrapper[4830]: I0227 16:54:02.618219 4830 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2de4d80e-8c2f-4332-95b4-842c80841212-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 16:54:02 crc kubenswrapper[4830]: I0227 16:54:02.618258 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pgfr4\" (UniqueName: \"kubernetes.io/projected/2de4d80e-8c2f-4332-95b4-842c80841212-kube-api-access-pgfr4\") on node \"crc\" DevicePath \"\"" Feb 27 16:54:02 crc kubenswrapper[4830]: I0227 16:54:02.657830 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2de4d80e-8c2f-4332-95b4-842c80841212-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2de4d80e-8c2f-4332-95b4-842c80841212" (UID: "2de4d80e-8c2f-4332-95b4-842c80841212"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 16:54:02 crc kubenswrapper[4830]: I0227 16:54:02.719439 4830 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2de4d80e-8c2f-4332-95b4-842c80841212-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 16:54:02 crc kubenswrapper[4830]: I0227 16:54:02.779131 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zbt7j"] Feb 27 16:54:02 crc kubenswrapper[4830]: I0227 16:54:02.782473 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-zbt7j"] Feb 27 16:54:03 crc kubenswrapper[4830]: I0227 16:54:03.455247 4830 generic.go:334] "Generic (PLEG): container finished" podID="c00c49e6-0391-440f-b78c-7746d978baa3" containerID="8e6a7dcfcf3faeb056159607c1d285792a3d6ea926d6a2597c223dd6c8287879" exitCode=0 Feb 27 16:54:03 crc kubenswrapper[4830]: I0227 16:54:03.455303 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536854-9j24s" event={"ID":"c00c49e6-0391-440f-b78c-7746d978baa3","Type":"ContainerDied","Data":"8e6a7dcfcf3faeb056159607c1d285792a3d6ea926d6a2597c223dd6c8287879"} Feb 27 16:54:04 crc kubenswrapper[4830]: I0227 16:54:04.779903 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2de4d80e-8c2f-4332-95b4-842c80841212" path="/var/lib/kubelet/pods/2de4d80e-8c2f-4332-95b4-842c80841212/volumes" Feb 27 16:54:04 crc kubenswrapper[4830]: I0227 16:54:04.902863 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536854-9j24s" Feb 27 16:54:04 crc kubenswrapper[4830]: I0227 16:54:04.954135 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jwljx\" (UniqueName: \"kubernetes.io/projected/c00c49e6-0391-440f-b78c-7746d978baa3-kube-api-access-jwljx\") pod \"c00c49e6-0391-440f-b78c-7746d978baa3\" (UID: \"c00c49e6-0391-440f-b78c-7746d978baa3\") " Feb 27 16:54:04 crc kubenswrapper[4830]: I0227 16:54:04.960197 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c00c49e6-0391-440f-b78c-7746d978baa3-kube-api-access-jwljx" (OuterVolumeSpecName: "kube-api-access-jwljx") pod "c00c49e6-0391-440f-b78c-7746d978baa3" (UID: "c00c49e6-0391-440f-b78c-7746d978baa3"). InnerVolumeSpecName "kube-api-access-jwljx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:54:05 crc kubenswrapper[4830]: I0227 16:54:05.056374 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jwljx\" (UniqueName: \"kubernetes.io/projected/c00c49e6-0391-440f-b78c-7746d978baa3-kube-api-access-jwljx\") on node \"crc\" DevicePath \"\"" Feb 27 16:54:05 crc kubenswrapper[4830]: I0227 16:54:05.475313 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536854-9j24s" event={"ID":"c00c49e6-0391-440f-b78c-7746d978baa3","Type":"ContainerDied","Data":"234ec3739777e9b721b75fcd623edbe8161d2d4c014fd5b64249bc9572294ce6"} Feb 27 16:54:05 crc kubenswrapper[4830]: I0227 16:54:05.475408 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="234ec3739777e9b721b75fcd623edbe8161d2d4c014fd5b64249bc9572294ce6" Feb 27 16:54:05 crc kubenswrapper[4830]: I0227 16:54:05.475488 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536854-9j24s" Feb 27 16:54:05 crc kubenswrapper[4830]: I0227 16:54:05.990209 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536848-vbmxb"] Feb 27 16:54:06 crc kubenswrapper[4830]: I0227 16:54:06.000780 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536848-vbmxb"] Feb 27 16:54:06 crc kubenswrapper[4830]: I0227 16:54:06.777668 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d99a3c6b-dcaa-48f5-a4e9-ec3a32fbeb81" path="/var/lib/kubelet/pods/d99a3c6b-dcaa-48f5-a4e9-ec3a32fbeb81/volumes" Feb 27 16:54:45 crc kubenswrapper[4830]: I0227 16:54:45.522544 4830 scope.go:117] "RemoveContainer" containerID="fa9a650dfe277730b698f61cf48f8b2efe6e8e862cc2b8560f83b02860cf19e4" Feb 27 16:55:03 crc kubenswrapper[4830]: I0227 16:55:03.160308 4830 patch_prober.go:28] interesting pod/machine-config-daemon-2tv5v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 16:55:03 crc kubenswrapper[4830]: I0227 16:55:03.161116 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 16:55:13 crc kubenswrapper[4830]: I0227 16:55:13.156899 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-xxlxf"] Feb 27 16:55:13 crc kubenswrapper[4830]: E0227 16:55:13.158054 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2de4d80e-8c2f-4332-95b4-842c80841212" containerName="registry-server" Feb 27 16:55:13 crc 
kubenswrapper[4830]: I0227 16:55:13.158075 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="2de4d80e-8c2f-4332-95b4-842c80841212" containerName="registry-server" Feb 27 16:55:13 crc kubenswrapper[4830]: E0227 16:55:13.158102 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c00c49e6-0391-440f-b78c-7746d978baa3" containerName="oc" Feb 27 16:55:13 crc kubenswrapper[4830]: I0227 16:55:13.158115 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="c00c49e6-0391-440f-b78c-7746d978baa3" containerName="oc" Feb 27 16:55:13 crc kubenswrapper[4830]: E0227 16:55:13.158147 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2de4d80e-8c2f-4332-95b4-842c80841212" containerName="extract-content" Feb 27 16:55:13 crc kubenswrapper[4830]: I0227 16:55:13.158159 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="2de4d80e-8c2f-4332-95b4-842c80841212" containerName="extract-content" Feb 27 16:55:13 crc kubenswrapper[4830]: E0227 16:55:13.158185 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2de4d80e-8c2f-4332-95b4-842c80841212" containerName="extract-utilities" Feb 27 16:55:13 crc kubenswrapper[4830]: I0227 16:55:13.158199 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="2de4d80e-8c2f-4332-95b4-842c80841212" containerName="extract-utilities" Feb 27 16:55:13 crc kubenswrapper[4830]: I0227 16:55:13.158476 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="c00c49e6-0391-440f-b78c-7746d978baa3" containerName="oc" Feb 27 16:55:13 crc kubenswrapper[4830]: I0227 16:55:13.158505 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="2de4d80e-8c2f-4332-95b4-842c80841212" containerName="registry-server" Feb 27 16:55:13 crc kubenswrapper[4830]: I0227 16:55:13.160263 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xxlxf" Feb 27 16:55:13 crc kubenswrapper[4830]: I0227 16:55:13.169878 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xxlxf"] Feb 27 16:55:13 crc kubenswrapper[4830]: I0227 16:55:13.261611 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3939be44-3ec9-48fa-b9ca-a30d7baca0ba-catalog-content\") pod \"redhat-marketplace-xxlxf\" (UID: \"3939be44-3ec9-48fa-b9ca-a30d7baca0ba\") " pod="openshift-marketplace/redhat-marketplace-xxlxf" Feb 27 16:55:13 crc kubenswrapper[4830]: I0227 16:55:13.261795 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3939be44-3ec9-48fa-b9ca-a30d7baca0ba-utilities\") pod \"redhat-marketplace-xxlxf\" (UID: \"3939be44-3ec9-48fa-b9ca-a30d7baca0ba\") " pod="openshift-marketplace/redhat-marketplace-xxlxf" Feb 27 16:55:13 crc kubenswrapper[4830]: I0227 16:55:13.261852 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqx5d\" (UniqueName: \"kubernetes.io/projected/3939be44-3ec9-48fa-b9ca-a30d7baca0ba-kube-api-access-lqx5d\") pod \"redhat-marketplace-xxlxf\" (UID: \"3939be44-3ec9-48fa-b9ca-a30d7baca0ba\") " pod="openshift-marketplace/redhat-marketplace-xxlxf" Feb 27 16:55:13 crc kubenswrapper[4830]: I0227 16:55:13.363126 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3939be44-3ec9-48fa-b9ca-a30d7baca0ba-utilities\") pod \"redhat-marketplace-xxlxf\" (UID: \"3939be44-3ec9-48fa-b9ca-a30d7baca0ba\") " pod="openshift-marketplace/redhat-marketplace-xxlxf" Feb 27 16:55:13 crc kubenswrapper[4830]: I0227 16:55:13.363790 4830 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3939be44-3ec9-48fa-b9ca-a30d7baca0ba-utilities\") pod \"redhat-marketplace-xxlxf\" (UID: \"3939be44-3ec9-48fa-b9ca-a30d7baca0ba\") " pod="openshift-marketplace/redhat-marketplace-xxlxf" Feb 27 16:55:13 crc kubenswrapper[4830]: I0227 16:55:13.364029 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lqx5d\" (UniqueName: \"kubernetes.io/projected/3939be44-3ec9-48fa-b9ca-a30d7baca0ba-kube-api-access-lqx5d\") pod \"redhat-marketplace-xxlxf\" (UID: \"3939be44-3ec9-48fa-b9ca-a30d7baca0ba\") " pod="openshift-marketplace/redhat-marketplace-xxlxf" Feb 27 16:55:13 crc kubenswrapper[4830]: I0227 16:55:13.364489 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3939be44-3ec9-48fa-b9ca-a30d7baca0ba-catalog-content\") pod \"redhat-marketplace-xxlxf\" (UID: \"3939be44-3ec9-48fa-b9ca-a30d7baca0ba\") " pod="openshift-marketplace/redhat-marketplace-xxlxf" Feb 27 16:55:13 crc kubenswrapper[4830]: I0227 16:55:13.364824 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3939be44-3ec9-48fa-b9ca-a30d7baca0ba-catalog-content\") pod \"redhat-marketplace-xxlxf\" (UID: \"3939be44-3ec9-48fa-b9ca-a30d7baca0ba\") " pod="openshift-marketplace/redhat-marketplace-xxlxf" Feb 27 16:55:13 crc kubenswrapper[4830]: I0227 16:55:13.396999 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lqx5d\" (UniqueName: \"kubernetes.io/projected/3939be44-3ec9-48fa-b9ca-a30d7baca0ba-kube-api-access-lqx5d\") pod \"redhat-marketplace-xxlxf\" (UID: \"3939be44-3ec9-48fa-b9ca-a30d7baca0ba\") " pod="openshift-marketplace/redhat-marketplace-xxlxf" Feb 27 16:55:13 crc kubenswrapper[4830]: I0227 16:55:13.497603 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xxlxf" Feb 27 16:55:13 crc kubenswrapper[4830]: I0227 16:55:13.769700 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xxlxf"] Feb 27 16:55:14 crc kubenswrapper[4830]: I0227 16:55:14.108999 4830 generic.go:334] "Generic (PLEG): container finished" podID="3939be44-3ec9-48fa-b9ca-a30d7baca0ba" containerID="ef177be37d39a1970d1f644068e43c4765bb9dad45c0b1753608dd97f60f931f" exitCode=0 Feb 27 16:55:14 crc kubenswrapper[4830]: I0227 16:55:14.109042 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xxlxf" event={"ID":"3939be44-3ec9-48fa-b9ca-a30d7baca0ba","Type":"ContainerDied","Data":"ef177be37d39a1970d1f644068e43c4765bb9dad45c0b1753608dd97f60f931f"} Feb 27 16:55:14 crc kubenswrapper[4830]: I0227 16:55:14.109072 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xxlxf" event={"ID":"3939be44-3ec9-48fa-b9ca-a30d7baca0ba","Type":"ContainerStarted","Data":"b25c8630f2fc8e4339a4f1c6c7f734ab14e90b9ad253c2b094fe7885b4e15c59"} Feb 27 16:55:14 crc kubenswrapper[4830]: I0227 16:55:14.110984 4830 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 27 16:55:15 crc kubenswrapper[4830]: I0227 16:55:15.119253 4830 generic.go:334] "Generic (PLEG): container finished" podID="3939be44-3ec9-48fa-b9ca-a30d7baca0ba" containerID="f9f4205d643b4adc19ac897b8abe4de8717a401af4c8352f44ec01bca25b1063" exitCode=0 Feb 27 16:55:15 crc kubenswrapper[4830]: I0227 16:55:15.119335 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xxlxf" event={"ID":"3939be44-3ec9-48fa-b9ca-a30d7baca0ba","Type":"ContainerDied","Data":"f9f4205d643b4adc19ac897b8abe4de8717a401af4c8352f44ec01bca25b1063"} Feb 27 16:55:16 crc kubenswrapper[4830]: I0227 16:55:16.138606 4830 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/redhat-marketplace-xxlxf" event={"ID":"3939be44-3ec9-48fa-b9ca-a30d7baca0ba","Type":"ContainerStarted","Data":"56800d2f4a2bbd3c34d90ce09d25a0df31b549d1b295f6635fdc23e217212b16"} Feb 27 16:55:16 crc kubenswrapper[4830]: I0227 16:55:16.169619 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-xxlxf" podStartSLOduration=1.6651787219999998 podStartE2EDuration="3.169595603s" podCreationTimestamp="2026-02-27 16:55:13 +0000 UTC" firstStartedPulling="2026-02-27 16:55:14.110733768 +0000 UTC m=+2910.200006231" lastFinishedPulling="2026-02-27 16:55:15.615150639 +0000 UTC m=+2911.704423112" observedRunningTime="2026-02-27 16:55:16.159620201 +0000 UTC m=+2912.248892704" watchObservedRunningTime="2026-02-27 16:55:16.169595603 +0000 UTC m=+2912.258868096" Feb 27 16:55:23 crc kubenswrapper[4830]: I0227 16:55:23.498380 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-xxlxf" Feb 27 16:55:23 crc kubenswrapper[4830]: I0227 16:55:23.499079 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-xxlxf" Feb 27 16:55:23 crc kubenswrapper[4830]: I0227 16:55:23.560563 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-xxlxf" Feb 27 16:55:24 crc kubenswrapper[4830]: I0227 16:55:24.276992 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-xxlxf" Feb 27 16:55:24 crc kubenswrapper[4830]: I0227 16:55:24.354768 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xxlxf"] Feb 27 16:55:26 crc kubenswrapper[4830]: I0227 16:55:26.220746 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-xxlxf" 
podUID="3939be44-3ec9-48fa-b9ca-a30d7baca0ba" containerName="registry-server" containerID="cri-o://56800d2f4a2bbd3c34d90ce09d25a0df31b549d1b295f6635fdc23e217212b16" gracePeriod=2 Feb 27 16:55:26 crc kubenswrapper[4830]: I0227 16:55:26.744910 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xxlxf" Feb 27 16:55:26 crc kubenswrapper[4830]: I0227 16:55:26.815395 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lqx5d\" (UniqueName: \"kubernetes.io/projected/3939be44-3ec9-48fa-b9ca-a30d7baca0ba-kube-api-access-lqx5d\") pod \"3939be44-3ec9-48fa-b9ca-a30d7baca0ba\" (UID: \"3939be44-3ec9-48fa-b9ca-a30d7baca0ba\") " Feb 27 16:55:26 crc kubenswrapper[4830]: I0227 16:55:26.816265 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3939be44-3ec9-48fa-b9ca-a30d7baca0ba-utilities\") pod \"3939be44-3ec9-48fa-b9ca-a30d7baca0ba\" (UID: \"3939be44-3ec9-48fa-b9ca-a30d7baca0ba\") " Feb 27 16:55:26 crc kubenswrapper[4830]: I0227 16:55:26.816506 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3939be44-3ec9-48fa-b9ca-a30d7baca0ba-catalog-content\") pod \"3939be44-3ec9-48fa-b9ca-a30d7baca0ba\" (UID: \"3939be44-3ec9-48fa-b9ca-a30d7baca0ba\") " Feb 27 16:55:26 crc kubenswrapper[4830]: I0227 16:55:26.817264 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3939be44-3ec9-48fa-b9ca-a30d7baca0ba-utilities" (OuterVolumeSpecName: "utilities") pod "3939be44-3ec9-48fa-b9ca-a30d7baca0ba" (UID: "3939be44-3ec9-48fa-b9ca-a30d7baca0ba"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 16:55:26 crc kubenswrapper[4830]: I0227 16:55:26.821079 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3939be44-3ec9-48fa-b9ca-a30d7baca0ba-kube-api-access-lqx5d" (OuterVolumeSpecName: "kube-api-access-lqx5d") pod "3939be44-3ec9-48fa-b9ca-a30d7baca0ba" (UID: "3939be44-3ec9-48fa-b9ca-a30d7baca0ba"). InnerVolumeSpecName "kube-api-access-lqx5d". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:55:26 crc kubenswrapper[4830]: I0227 16:55:26.840372 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3939be44-3ec9-48fa-b9ca-a30d7baca0ba-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3939be44-3ec9-48fa-b9ca-a30d7baca0ba" (UID: "3939be44-3ec9-48fa-b9ca-a30d7baca0ba"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 16:55:26 crc kubenswrapper[4830]: I0227 16:55:26.918806 4830 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3939be44-3ec9-48fa-b9ca-a30d7baca0ba-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:26 crc kubenswrapper[4830]: I0227 16:55:26.918859 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lqx5d\" (UniqueName: \"kubernetes.io/projected/3939be44-3ec9-48fa-b9ca-a30d7baca0ba-kube-api-access-lqx5d\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:26 crc kubenswrapper[4830]: I0227 16:55:26.918883 4830 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3939be44-3ec9-48fa-b9ca-a30d7baca0ba-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:27 crc kubenswrapper[4830]: I0227 16:55:27.233473 4830 generic.go:334] "Generic (PLEG): container finished" podID="3939be44-3ec9-48fa-b9ca-a30d7baca0ba" 
containerID="56800d2f4a2bbd3c34d90ce09d25a0df31b549d1b295f6635fdc23e217212b16" exitCode=0 Feb 27 16:55:27 crc kubenswrapper[4830]: I0227 16:55:27.233522 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xxlxf" event={"ID":"3939be44-3ec9-48fa-b9ca-a30d7baca0ba","Type":"ContainerDied","Data":"56800d2f4a2bbd3c34d90ce09d25a0df31b549d1b295f6635fdc23e217212b16"} Feb 27 16:55:27 crc kubenswrapper[4830]: I0227 16:55:27.233551 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xxlxf" event={"ID":"3939be44-3ec9-48fa-b9ca-a30d7baca0ba","Type":"ContainerDied","Data":"b25c8630f2fc8e4339a4f1c6c7f734ab14e90b9ad253c2b094fe7885b4e15c59"} Feb 27 16:55:27 crc kubenswrapper[4830]: I0227 16:55:27.233571 4830 scope.go:117] "RemoveContainer" containerID="56800d2f4a2bbd3c34d90ce09d25a0df31b549d1b295f6635fdc23e217212b16" Feb 27 16:55:27 crc kubenswrapper[4830]: I0227 16:55:27.233628 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xxlxf" Feb 27 16:55:27 crc kubenswrapper[4830]: I0227 16:55:27.264379 4830 scope.go:117] "RemoveContainer" containerID="f9f4205d643b4adc19ac897b8abe4de8717a401af4c8352f44ec01bca25b1063" Feb 27 16:55:27 crc kubenswrapper[4830]: I0227 16:55:27.293456 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xxlxf"] Feb 27 16:55:27 crc kubenswrapper[4830]: I0227 16:55:27.302646 4830 scope.go:117] "RemoveContainer" containerID="ef177be37d39a1970d1f644068e43c4765bb9dad45c0b1753608dd97f60f931f" Feb 27 16:55:27 crc kubenswrapper[4830]: I0227 16:55:27.315070 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-xxlxf"] Feb 27 16:55:27 crc kubenswrapper[4830]: I0227 16:55:27.346176 4830 scope.go:117] "RemoveContainer" containerID="56800d2f4a2bbd3c34d90ce09d25a0df31b549d1b295f6635fdc23e217212b16" Feb 27 16:55:27 crc kubenswrapper[4830]: E0227 16:55:27.346926 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"56800d2f4a2bbd3c34d90ce09d25a0df31b549d1b295f6635fdc23e217212b16\": container with ID starting with 56800d2f4a2bbd3c34d90ce09d25a0df31b549d1b295f6635fdc23e217212b16 not found: ID does not exist" containerID="56800d2f4a2bbd3c34d90ce09d25a0df31b549d1b295f6635fdc23e217212b16" Feb 27 16:55:27 crc kubenswrapper[4830]: I0227 16:55:27.347018 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56800d2f4a2bbd3c34d90ce09d25a0df31b549d1b295f6635fdc23e217212b16"} err="failed to get container status \"56800d2f4a2bbd3c34d90ce09d25a0df31b549d1b295f6635fdc23e217212b16\": rpc error: code = NotFound desc = could not find container \"56800d2f4a2bbd3c34d90ce09d25a0df31b549d1b295f6635fdc23e217212b16\": container with ID starting with 56800d2f4a2bbd3c34d90ce09d25a0df31b549d1b295f6635fdc23e217212b16 not found: 
ID does not exist" Feb 27 16:55:27 crc kubenswrapper[4830]: I0227 16:55:27.347060 4830 scope.go:117] "RemoveContainer" containerID="f9f4205d643b4adc19ac897b8abe4de8717a401af4c8352f44ec01bca25b1063" Feb 27 16:55:27 crc kubenswrapper[4830]: E0227 16:55:27.347630 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f9f4205d643b4adc19ac897b8abe4de8717a401af4c8352f44ec01bca25b1063\": container with ID starting with f9f4205d643b4adc19ac897b8abe4de8717a401af4c8352f44ec01bca25b1063 not found: ID does not exist" containerID="f9f4205d643b4adc19ac897b8abe4de8717a401af4c8352f44ec01bca25b1063" Feb 27 16:55:27 crc kubenswrapper[4830]: I0227 16:55:27.347688 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f9f4205d643b4adc19ac897b8abe4de8717a401af4c8352f44ec01bca25b1063"} err="failed to get container status \"f9f4205d643b4adc19ac897b8abe4de8717a401af4c8352f44ec01bca25b1063\": rpc error: code = NotFound desc = could not find container \"f9f4205d643b4adc19ac897b8abe4de8717a401af4c8352f44ec01bca25b1063\": container with ID starting with f9f4205d643b4adc19ac897b8abe4de8717a401af4c8352f44ec01bca25b1063 not found: ID does not exist" Feb 27 16:55:27 crc kubenswrapper[4830]: I0227 16:55:27.347724 4830 scope.go:117] "RemoveContainer" containerID="ef177be37d39a1970d1f644068e43c4765bb9dad45c0b1753608dd97f60f931f" Feb 27 16:55:27 crc kubenswrapper[4830]: E0227 16:55:27.348249 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef177be37d39a1970d1f644068e43c4765bb9dad45c0b1753608dd97f60f931f\": container with ID starting with ef177be37d39a1970d1f644068e43c4765bb9dad45c0b1753608dd97f60f931f not found: ID does not exist" containerID="ef177be37d39a1970d1f644068e43c4765bb9dad45c0b1753608dd97f60f931f" Feb 27 16:55:27 crc kubenswrapper[4830]: I0227 16:55:27.348299 4830 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef177be37d39a1970d1f644068e43c4765bb9dad45c0b1753608dd97f60f931f"} err="failed to get container status \"ef177be37d39a1970d1f644068e43c4765bb9dad45c0b1753608dd97f60f931f\": rpc error: code = NotFound desc = could not find container \"ef177be37d39a1970d1f644068e43c4765bb9dad45c0b1753608dd97f60f931f\": container with ID starting with ef177be37d39a1970d1f644068e43c4765bb9dad45c0b1753608dd97f60f931f not found: ID does not exist" Feb 27 16:55:28 crc kubenswrapper[4830]: I0227 16:55:28.773280 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3939be44-3ec9-48fa-b9ca-a30d7baca0ba" path="/var/lib/kubelet/pods/3939be44-3ec9-48fa-b9ca-a30d7baca0ba/volumes" Feb 27 16:55:33 crc kubenswrapper[4830]: I0227 16:55:33.160820 4830 patch_prober.go:28] interesting pod/machine-config-daemon-2tv5v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 16:55:33 crc kubenswrapper[4830]: I0227 16:55:33.162236 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 16:56:00 crc kubenswrapper[4830]: I0227 16:56:00.162547 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536856-459dj"] Feb 27 16:56:00 crc kubenswrapper[4830]: E0227 16:56:00.163572 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3939be44-3ec9-48fa-b9ca-a30d7baca0ba" containerName="registry-server" Feb 27 16:56:00 crc kubenswrapper[4830]: I0227 16:56:00.163593 4830 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="3939be44-3ec9-48fa-b9ca-a30d7baca0ba" containerName="registry-server" Feb 27 16:56:00 crc kubenswrapper[4830]: E0227 16:56:00.163657 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3939be44-3ec9-48fa-b9ca-a30d7baca0ba" containerName="extract-utilities" Feb 27 16:56:00 crc kubenswrapper[4830]: I0227 16:56:00.163679 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="3939be44-3ec9-48fa-b9ca-a30d7baca0ba" containerName="extract-utilities" Feb 27 16:56:00 crc kubenswrapper[4830]: E0227 16:56:00.163704 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3939be44-3ec9-48fa-b9ca-a30d7baca0ba" containerName="extract-content" Feb 27 16:56:00 crc kubenswrapper[4830]: I0227 16:56:00.163717 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="3939be44-3ec9-48fa-b9ca-a30d7baca0ba" containerName="extract-content" Feb 27 16:56:00 crc kubenswrapper[4830]: I0227 16:56:00.164017 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="3939be44-3ec9-48fa-b9ca-a30d7baca0ba" containerName="registry-server" Feb 27 16:56:00 crc kubenswrapper[4830]: I0227 16:56:00.164657 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536856-459dj" Feb 27 16:56:00 crc kubenswrapper[4830]: I0227 16:56:00.168303 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lmbtm" Feb 27 16:56:00 crc kubenswrapper[4830]: I0227 16:56:00.168564 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 16:56:00 crc kubenswrapper[4830]: I0227 16:56:00.169759 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 16:56:00 crc kubenswrapper[4830]: I0227 16:56:00.177299 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7tzn\" (UniqueName: \"kubernetes.io/projected/b872122a-a976-423c-a013-d2946c95c8e8-kube-api-access-x7tzn\") pod \"auto-csr-approver-29536856-459dj\" (UID: \"b872122a-a976-423c-a013-d2946c95c8e8\") " pod="openshift-infra/auto-csr-approver-29536856-459dj" Feb 27 16:56:00 crc kubenswrapper[4830]: I0227 16:56:00.186552 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536856-459dj"] Feb 27 16:56:00 crc kubenswrapper[4830]: I0227 16:56:00.279091 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x7tzn\" (UniqueName: \"kubernetes.io/projected/b872122a-a976-423c-a013-d2946c95c8e8-kube-api-access-x7tzn\") pod \"auto-csr-approver-29536856-459dj\" (UID: \"b872122a-a976-423c-a013-d2946c95c8e8\") " pod="openshift-infra/auto-csr-approver-29536856-459dj" Feb 27 16:56:00 crc kubenswrapper[4830]: I0227 16:56:00.304473 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x7tzn\" (UniqueName: \"kubernetes.io/projected/b872122a-a976-423c-a013-d2946c95c8e8-kube-api-access-x7tzn\") pod \"auto-csr-approver-29536856-459dj\" (UID: \"b872122a-a976-423c-a013-d2946c95c8e8\") " 
pod="openshift-infra/auto-csr-approver-29536856-459dj" Feb 27 16:56:00 crc kubenswrapper[4830]: I0227 16:56:00.490467 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536856-459dj" Feb 27 16:56:00 crc kubenswrapper[4830]: I0227 16:56:00.988241 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536856-459dj"] Feb 27 16:56:00 crc kubenswrapper[4830]: W0227 16:56:00.999877 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb872122a_a976_423c_a013_d2946c95c8e8.slice/crio-dd2c70d605dcb349495cd3253be8f513d28c6968c5bd6c4d741f4ec2330cf99d WatchSource:0}: Error finding container dd2c70d605dcb349495cd3253be8f513d28c6968c5bd6c4d741f4ec2330cf99d: Status 404 returned error can't find the container with id dd2c70d605dcb349495cd3253be8f513d28c6968c5bd6c4d741f4ec2330cf99d Feb 27 16:56:01 crc kubenswrapper[4830]: I0227 16:56:01.614537 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536856-459dj" event={"ID":"b872122a-a976-423c-a013-d2946c95c8e8","Type":"ContainerStarted","Data":"dd2c70d605dcb349495cd3253be8f513d28c6968c5bd6c4d741f4ec2330cf99d"} Feb 27 16:56:02 crc kubenswrapper[4830]: I0227 16:56:02.625724 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536856-459dj" event={"ID":"b872122a-a976-423c-a013-d2946c95c8e8","Type":"ContainerStarted","Data":"562409aa010f63a4d3310338ac112a74c7da97cfe6514550b85755741ba419cc"} Feb 27 16:56:02 crc kubenswrapper[4830]: I0227 16:56:02.649774 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29536856-459dj" podStartSLOduration=1.6737710049999999 podStartE2EDuration="2.649746858s" podCreationTimestamp="2026-02-27 16:56:00 +0000 UTC" firstStartedPulling="2026-02-27 16:56:01.002168882 +0000 UTC 
m=+2957.091441375" lastFinishedPulling="2026-02-27 16:56:01.978144725 +0000 UTC m=+2958.067417228" observedRunningTime="2026-02-27 16:56:02.641296211 +0000 UTC m=+2958.730568714" watchObservedRunningTime="2026-02-27 16:56:02.649746858 +0000 UTC m=+2958.739019351" Feb 27 16:56:03 crc kubenswrapper[4830]: I0227 16:56:03.161134 4830 patch_prober.go:28] interesting pod/machine-config-daemon-2tv5v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 16:56:03 crc kubenswrapper[4830]: I0227 16:56:03.161262 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 16:56:03 crc kubenswrapper[4830]: I0227 16:56:03.161356 4830 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" Feb 27 16:56:03 crc kubenswrapper[4830]: I0227 16:56:03.162645 4830 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b1477650672db48c8bb6d798d8fc8040ec3d1666489a66fcec0561cf4cfade74"} pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 27 16:56:03 crc kubenswrapper[4830]: I0227 16:56:03.162768 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" containerName="machine-config-daemon" 
containerID="cri-o://b1477650672db48c8bb6d798d8fc8040ec3d1666489a66fcec0561cf4cfade74" gracePeriod=600 Feb 27 16:56:03 crc kubenswrapper[4830]: E0227 16:56:03.297916 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 16:56:03 crc kubenswrapper[4830]: I0227 16:56:03.644275 4830 generic.go:334] "Generic (PLEG): container finished" podID="00d6b7ce-4757-4275-8345-60c1b546ce25" containerID="b1477650672db48c8bb6d798d8fc8040ec3d1666489a66fcec0561cf4cfade74" exitCode=0 Feb 27 16:56:03 crc kubenswrapper[4830]: I0227 16:56:03.644382 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" event={"ID":"00d6b7ce-4757-4275-8345-60c1b546ce25","Type":"ContainerDied","Data":"b1477650672db48c8bb6d798d8fc8040ec3d1666489a66fcec0561cf4cfade74"} Feb 27 16:56:03 crc kubenswrapper[4830]: I0227 16:56:03.644432 4830 scope.go:117] "RemoveContainer" containerID="32607c4338fb1b3f01bf1111028cf86636ecfa24037b80286a80a7e17ea37393" Feb 27 16:56:03 crc kubenswrapper[4830]: I0227 16:56:03.645410 4830 scope.go:117] "RemoveContainer" containerID="b1477650672db48c8bb6d798d8fc8040ec3d1666489a66fcec0561cf4cfade74" Feb 27 16:56:03 crc kubenswrapper[4830]: E0227 16:56:03.645927 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" 
podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 16:56:03 crc kubenswrapper[4830]: I0227 16:56:03.654147 4830 generic.go:334] "Generic (PLEG): container finished" podID="b872122a-a976-423c-a013-d2946c95c8e8" containerID="562409aa010f63a4d3310338ac112a74c7da97cfe6514550b85755741ba419cc" exitCode=0 Feb 27 16:56:03 crc kubenswrapper[4830]: I0227 16:56:03.654392 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536856-459dj" event={"ID":"b872122a-a976-423c-a013-d2946c95c8e8","Type":"ContainerDied","Data":"562409aa010f63a4d3310338ac112a74c7da97cfe6514550b85755741ba419cc"} Feb 27 16:56:04 crc kubenswrapper[4830]: I0227 16:56:04.999003 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536856-459dj" Feb 27 16:56:05 crc kubenswrapper[4830]: I0227 16:56:05.063687 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7tzn\" (UniqueName: \"kubernetes.io/projected/b872122a-a976-423c-a013-d2946c95c8e8-kube-api-access-x7tzn\") pod \"b872122a-a976-423c-a013-d2946c95c8e8\" (UID: \"b872122a-a976-423c-a013-d2946c95c8e8\") " Feb 27 16:56:05 crc kubenswrapper[4830]: I0227 16:56:05.072404 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b872122a-a976-423c-a013-d2946c95c8e8-kube-api-access-x7tzn" (OuterVolumeSpecName: "kube-api-access-x7tzn") pod "b872122a-a976-423c-a013-d2946c95c8e8" (UID: "b872122a-a976-423c-a013-d2946c95c8e8"). InnerVolumeSpecName "kube-api-access-x7tzn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:56:05 crc kubenswrapper[4830]: I0227 16:56:05.165775 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7tzn\" (UniqueName: \"kubernetes.io/projected/b872122a-a976-423c-a013-d2946c95c8e8-kube-api-access-x7tzn\") on node \"crc\" DevicePath \"\"" Feb 27 16:56:05 crc kubenswrapper[4830]: I0227 16:56:05.692345 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536856-459dj" event={"ID":"b872122a-a976-423c-a013-d2946c95c8e8","Type":"ContainerDied","Data":"dd2c70d605dcb349495cd3253be8f513d28c6968c5bd6c4d741f4ec2330cf99d"} Feb 27 16:56:05 crc kubenswrapper[4830]: I0227 16:56:05.692429 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dd2c70d605dcb349495cd3253be8f513d28c6968c5bd6c4d741f4ec2330cf99d" Feb 27 16:56:05 crc kubenswrapper[4830]: I0227 16:56:05.692530 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536856-459dj" Feb 27 16:56:05 crc kubenswrapper[4830]: I0227 16:56:05.743855 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536850-4svr6"] Feb 27 16:56:05 crc kubenswrapper[4830]: I0227 16:56:05.748867 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536850-4svr6"] Feb 27 16:56:06 crc kubenswrapper[4830]: I0227 16:56:06.777940 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56d65891-d21c-4da5-a5f4-f39606656c0b" path="/var/lib/kubelet/pods/56d65891-d21c-4da5-a5f4-f39606656c0b/volumes" Feb 27 16:56:17 crc kubenswrapper[4830]: I0227 16:56:17.762874 4830 scope.go:117] "RemoveContainer" containerID="b1477650672db48c8bb6d798d8fc8040ec3d1666489a66fcec0561cf4cfade74" Feb 27 16:56:17 crc kubenswrapper[4830]: E0227 16:56:17.764239 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 16:56:31 crc kubenswrapper[4830]: I0227 16:56:31.761942 4830 scope.go:117] "RemoveContainer" containerID="b1477650672db48c8bb6d798d8fc8040ec3d1666489a66fcec0561cf4cfade74" Feb 27 16:56:31 crc kubenswrapper[4830]: E0227 16:56:31.762976 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 16:56:38 crc kubenswrapper[4830]: I0227 16:56:38.393271 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-7rcxt"] Feb 27 16:56:38 crc kubenswrapper[4830]: E0227 16:56:38.394232 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b872122a-a976-423c-a013-d2946c95c8e8" containerName="oc" Feb 27 16:56:38 crc kubenswrapper[4830]: I0227 16:56:38.394266 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="b872122a-a976-423c-a013-d2946c95c8e8" containerName="oc" Feb 27 16:56:38 crc kubenswrapper[4830]: I0227 16:56:38.394577 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="b872122a-a976-423c-a013-d2946c95c8e8" containerName="oc" Feb 27 16:56:38 crc kubenswrapper[4830]: I0227 16:56:38.398031 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7rcxt" Feb 27 16:56:38 crc kubenswrapper[4830]: I0227 16:56:38.414741 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7rcxt"] Feb 27 16:56:38 crc kubenswrapper[4830]: I0227 16:56:38.546183 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xt7pg\" (UniqueName: \"kubernetes.io/projected/9afd23ad-3f1c-4683-aa63-50444f93c068-kube-api-access-xt7pg\") pod \"certified-operators-7rcxt\" (UID: \"9afd23ad-3f1c-4683-aa63-50444f93c068\") " pod="openshift-marketplace/certified-operators-7rcxt" Feb 27 16:56:38 crc kubenswrapper[4830]: I0227 16:56:38.546262 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9afd23ad-3f1c-4683-aa63-50444f93c068-utilities\") pod \"certified-operators-7rcxt\" (UID: \"9afd23ad-3f1c-4683-aa63-50444f93c068\") " pod="openshift-marketplace/certified-operators-7rcxt" Feb 27 16:56:38 crc kubenswrapper[4830]: I0227 16:56:38.546312 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9afd23ad-3f1c-4683-aa63-50444f93c068-catalog-content\") pod \"certified-operators-7rcxt\" (UID: \"9afd23ad-3f1c-4683-aa63-50444f93c068\") " pod="openshift-marketplace/certified-operators-7rcxt" Feb 27 16:56:38 crc kubenswrapper[4830]: I0227 16:56:38.647475 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xt7pg\" (UniqueName: \"kubernetes.io/projected/9afd23ad-3f1c-4683-aa63-50444f93c068-kube-api-access-xt7pg\") pod \"certified-operators-7rcxt\" (UID: \"9afd23ad-3f1c-4683-aa63-50444f93c068\") " pod="openshift-marketplace/certified-operators-7rcxt" Feb 27 16:56:38 crc kubenswrapper[4830]: I0227 16:56:38.647540 4830 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9afd23ad-3f1c-4683-aa63-50444f93c068-utilities\") pod \"certified-operators-7rcxt\" (UID: \"9afd23ad-3f1c-4683-aa63-50444f93c068\") " pod="openshift-marketplace/certified-operators-7rcxt" Feb 27 16:56:38 crc kubenswrapper[4830]: I0227 16:56:38.647582 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9afd23ad-3f1c-4683-aa63-50444f93c068-catalog-content\") pod \"certified-operators-7rcxt\" (UID: \"9afd23ad-3f1c-4683-aa63-50444f93c068\") " pod="openshift-marketplace/certified-operators-7rcxt" Feb 27 16:56:38 crc kubenswrapper[4830]: I0227 16:56:38.648381 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9afd23ad-3f1c-4683-aa63-50444f93c068-catalog-content\") pod \"certified-operators-7rcxt\" (UID: \"9afd23ad-3f1c-4683-aa63-50444f93c068\") " pod="openshift-marketplace/certified-operators-7rcxt" Feb 27 16:56:38 crc kubenswrapper[4830]: I0227 16:56:38.648630 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9afd23ad-3f1c-4683-aa63-50444f93c068-utilities\") pod \"certified-operators-7rcxt\" (UID: \"9afd23ad-3f1c-4683-aa63-50444f93c068\") " pod="openshift-marketplace/certified-operators-7rcxt" Feb 27 16:56:38 crc kubenswrapper[4830]: I0227 16:56:38.685115 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xt7pg\" (UniqueName: \"kubernetes.io/projected/9afd23ad-3f1c-4683-aa63-50444f93c068-kube-api-access-xt7pg\") pod \"certified-operators-7rcxt\" (UID: \"9afd23ad-3f1c-4683-aa63-50444f93c068\") " pod="openshift-marketplace/certified-operators-7rcxt" Feb 27 16:56:38 crc kubenswrapper[4830]: I0227 16:56:38.739252 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7rcxt" Feb 27 16:56:39 crc kubenswrapper[4830]: I0227 16:56:39.215308 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7rcxt"] Feb 27 16:56:40 crc kubenswrapper[4830]: I0227 16:56:40.039337 4830 generic.go:334] "Generic (PLEG): container finished" podID="9afd23ad-3f1c-4683-aa63-50444f93c068" containerID="467251a4468e168e4756d74ff94a8458cf1dc3e2f9be6ff7d1bcc05847c0a744" exitCode=0 Feb 27 16:56:40 crc kubenswrapper[4830]: I0227 16:56:40.039393 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7rcxt" event={"ID":"9afd23ad-3f1c-4683-aa63-50444f93c068","Type":"ContainerDied","Data":"467251a4468e168e4756d74ff94a8458cf1dc3e2f9be6ff7d1bcc05847c0a744"} Feb 27 16:56:40 crc kubenswrapper[4830]: I0227 16:56:40.039425 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7rcxt" event={"ID":"9afd23ad-3f1c-4683-aa63-50444f93c068","Type":"ContainerStarted","Data":"d6303f625506b736140bb680c93320de267f6162bfff8318395aa77b6118150f"} Feb 27 16:56:40 crc kubenswrapper[4830]: I0227 16:56:40.795891 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-6crnm"] Feb 27 16:56:40 crc kubenswrapper[4830]: I0227 16:56:40.798894 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6crnm" Feb 27 16:56:40 crc kubenswrapper[4830]: I0227 16:56:40.818448 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6crnm"] Feb 27 16:56:40 crc kubenswrapper[4830]: I0227 16:56:40.985524 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed784632-ada1-4164-b611-5b679437f210-utilities\") pod \"community-operators-6crnm\" (UID: \"ed784632-ada1-4164-b611-5b679437f210\") " pod="openshift-marketplace/community-operators-6crnm" Feb 27 16:56:40 crc kubenswrapper[4830]: I0227 16:56:40.985661 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v84k2\" (UniqueName: \"kubernetes.io/projected/ed784632-ada1-4164-b611-5b679437f210-kube-api-access-v84k2\") pod \"community-operators-6crnm\" (UID: \"ed784632-ada1-4164-b611-5b679437f210\") " pod="openshift-marketplace/community-operators-6crnm" Feb 27 16:56:40 crc kubenswrapper[4830]: I0227 16:56:40.985696 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed784632-ada1-4164-b611-5b679437f210-catalog-content\") pod \"community-operators-6crnm\" (UID: \"ed784632-ada1-4164-b611-5b679437f210\") " pod="openshift-marketplace/community-operators-6crnm" Feb 27 16:56:41 crc kubenswrapper[4830]: I0227 16:56:41.087847 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v84k2\" (UniqueName: \"kubernetes.io/projected/ed784632-ada1-4164-b611-5b679437f210-kube-api-access-v84k2\") pod \"community-operators-6crnm\" (UID: \"ed784632-ada1-4164-b611-5b679437f210\") " pod="openshift-marketplace/community-operators-6crnm" Feb 27 16:56:41 crc kubenswrapper[4830]: I0227 16:56:41.087939 4830 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed784632-ada1-4164-b611-5b679437f210-catalog-content\") pod \"community-operators-6crnm\" (UID: \"ed784632-ada1-4164-b611-5b679437f210\") " pod="openshift-marketplace/community-operators-6crnm" Feb 27 16:56:41 crc kubenswrapper[4830]: I0227 16:56:41.088082 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed784632-ada1-4164-b611-5b679437f210-utilities\") pod \"community-operators-6crnm\" (UID: \"ed784632-ada1-4164-b611-5b679437f210\") " pod="openshift-marketplace/community-operators-6crnm" Feb 27 16:56:41 crc kubenswrapper[4830]: I0227 16:56:41.088751 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed784632-ada1-4164-b611-5b679437f210-catalog-content\") pod \"community-operators-6crnm\" (UID: \"ed784632-ada1-4164-b611-5b679437f210\") " pod="openshift-marketplace/community-operators-6crnm" Feb 27 16:56:41 crc kubenswrapper[4830]: I0227 16:56:41.088774 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed784632-ada1-4164-b611-5b679437f210-utilities\") pod \"community-operators-6crnm\" (UID: \"ed784632-ada1-4164-b611-5b679437f210\") " pod="openshift-marketplace/community-operators-6crnm" Feb 27 16:56:41 crc kubenswrapper[4830]: I0227 16:56:41.123714 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v84k2\" (UniqueName: \"kubernetes.io/projected/ed784632-ada1-4164-b611-5b679437f210-kube-api-access-v84k2\") pod \"community-operators-6crnm\" (UID: \"ed784632-ada1-4164-b611-5b679437f210\") " pod="openshift-marketplace/community-operators-6crnm" Feb 27 16:56:41 crc kubenswrapper[4830]: I0227 16:56:41.143813 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6crnm" Feb 27 16:56:41 crc kubenswrapper[4830]: I0227 16:56:41.640622 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6crnm"] Feb 27 16:56:42 crc kubenswrapper[4830]: I0227 16:56:42.064420 4830 generic.go:334] "Generic (PLEG): container finished" podID="ed784632-ada1-4164-b611-5b679437f210" containerID="2d32f625d3ee0f57e8dfb2f04b7705757636b3c3275becaadbb634cb0bfaa567" exitCode=0 Feb 27 16:56:42 crc kubenswrapper[4830]: I0227 16:56:42.064495 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6crnm" event={"ID":"ed784632-ada1-4164-b611-5b679437f210","Type":"ContainerDied","Data":"2d32f625d3ee0f57e8dfb2f04b7705757636b3c3275becaadbb634cb0bfaa567"} Feb 27 16:56:42 crc kubenswrapper[4830]: I0227 16:56:42.064909 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6crnm" event={"ID":"ed784632-ada1-4164-b611-5b679437f210","Type":"ContainerStarted","Data":"f365e64a4e50b7be52fd1688c84edcfd26ff7591f064f76cc11fab3aaf807e2f"} Feb 27 16:56:42 crc kubenswrapper[4830]: I0227 16:56:42.074276 4830 generic.go:334] "Generic (PLEG): container finished" podID="9afd23ad-3f1c-4683-aa63-50444f93c068" containerID="9f47245f0870eb9dbd281609f27a9440251d2923fe89b491c594d6976b989a8c" exitCode=0 Feb 27 16:56:42 crc kubenswrapper[4830]: I0227 16:56:42.074342 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7rcxt" event={"ID":"9afd23ad-3f1c-4683-aa63-50444f93c068","Type":"ContainerDied","Data":"9f47245f0870eb9dbd281609f27a9440251d2923fe89b491c594d6976b989a8c"} Feb 27 16:56:43 crc kubenswrapper[4830]: I0227 16:56:43.088143 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7rcxt" 
event={"ID":"9afd23ad-3f1c-4683-aa63-50444f93c068","Type":"ContainerStarted","Data":"16b22357479ab7eb001a909ca4793c43e087d900ad977ee3a51ff08323f1e163"} Feb 27 16:56:43 crc kubenswrapper[4830]: I0227 16:56:43.134858 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-7rcxt" podStartSLOduration=2.648932379 podStartE2EDuration="5.13482379s" podCreationTimestamp="2026-02-27 16:56:38 +0000 UTC" firstStartedPulling="2026-02-27 16:56:40.044549421 +0000 UTC m=+2996.133821904" lastFinishedPulling="2026-02-27 16:56:42.530440812 +0000 UTC m=+2998.619713315" observedRunningTime="2026-02-27 16:56:43.12111247 +0000 UTC m=+2999.210384943" watchObservedRunningTime="2026-02-27 16:56:43.13482379 +0000 UTC m=+2999.224096283" Feb 27 16:56:44 crc kubenswrapper[4830]: I0227 16:56:44.103784 4830 generic.go:334] "Generic (PLEG): container finished" podID="ed784632-ada1-4164-b611-5b679437f210" containerID="adb07ce022f68a5648a36a633f8338e8f3d3afc433f27c59a65f0f61732e7161" exitCode=0 Feb 27 16:56:44 crc kubenswrapper[4830]: I0227 16:56:44.103913 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6crnm" event={"ID":"ed784632-ada1-4164-b611-5b679437f210","Type":"ContainerDied","Data":"adb07ce022f68a5648a36a633f8338e8f3d3afc433f27c59a65f0f61732e7161"} Feb 27 16:56:45 crc kubenswrapper[4830]: I0227 16:56:45.115261 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6crnm" event={"ID":"ed784632-ada1-4164-b611-5b679437f210","Type":"ContainerStarted","Data":"ea956a9f881f3226b7d0e0ffe7ed489dde67bb5fb982b15d1c03e4b009052274"} Feb 27 16:56:45 crc kubenswrapper[4830]: I0227 16:56:45.144914 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-6crnm" podStartSLOduration=2.74032471 podStartE2EDuration="5.144897857s" podCreationTimestamp="2026-02-27 16:56:40 +0000 UTC" 
firstStartedPulling="2026-02-27 16:56:42.066644129 +0000 UTC m=+2998.155916622" lastFinishedPulling="2026-02-27 16:56:44.471217276 +0000 UTC m=+3000.560489769" observedRunningTime="2026-02-27 16:56:45.141098129 +0000 UTC m=+3001.230370612" watchObservedRunningTime="2026-02-27 16:56:45.144897857 +0000 UTC m=+3001.234170320" Feb 27 16:56:45 crc kubenswrapper[4830]: I0227 16:56:45.692595 4830 scope.go:117] "RemoveContainer" containerID="c2cad79082b6458298e697b09f6d2ed648523d215732030601012e41908f16b3" Feb 27 16:56:46 crc kubenswrapper[4830]: I0227 16:56:46.762839 4830 scope.go:117] "RemoveContainer" containerID="b1477650672db48c8bb6d798d8fc8040ec3d1666489a66fcec0561cf4cfade74" Feb 27 16:56:46 crc kubenswrapper[4830]: E0227 16:56:46.763566 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 16:56:48 crc kubenswrapper[4830]: I0227 16:56:48.740062 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-7rcxt" Feb 27 16:56:48 crc kubenswrapper[4830]: I0227 16:56:48.742831 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-7rcxt" Feb 27 16:56:48 crc kubenswrapper[4830]: I0227 16:56:48.822142 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-7rcxt" Feb 27 16:56:49 crc kubenswrapper[4830]: I0227 16:56:49.227499 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-7rcxt" Feb 27 16:56:49 crc kubenswrapper[4830]: I0227 16:56:49.968281 
4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7rcxt"] Feb 27 16:56:51 crc kubenswrapper[4830]: I0227 16:56:51.144757 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-6crnm" Feb 27 16:56:51 crc kubenswrapper[4830]: I0227 16:56:51.144825 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-6crnm" Feb 27 16:56:51 crc kubenswrapper[4830]: I0227 16:56:51.177076 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-7rcxt" podUID="9afd23ad-3f1c-4683-aa63-50444f93c068" containerName="registry-server" containerID="cri-o://16b22357479ab7eb001a909ca4793c43e087d900ad977ee3a51ff08323f1e163" gracePeriod=2 Feb 27 16:56:51 crc kubenswrapper[4830]: I0227 16:56:51.221368 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-6crnm" Feb 27 16:56:51 crc kubenswrapper[4830]: I0227 16:56:51.297840 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-6crnm" Feb 27 16:56:51 crc kubenswrapper[4830]: I0227 16:56:51.591652 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7rcxt" Feb 27 16:56:51 crc kubenswrapper[4830]: I0227 16:56:51.663012 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9afd23ad-3f1c-4683-aa63-50444f93c068-catalog-content\") pod \"9afd23ad-3f1c-4683-aa63-50444f93c068\" (UID: \"9afd23ad-3f1c-4683-aa63-50444f93c068\") " Feb 27 16:56:51 crc kubenswrapper[4830]: I0227 16:56:51.663073 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xt7pg\" (UniqueName: \"kubernetes.io/projected/9afd23ad-3f1c-4683-aa63-50444f93c068-kube-api-access-xt7pg\") pod \"9afd23ad-3f1c-4683-aa63-50444f93c068\" (UID: \"9afd23ad-3f1c-4683-aa63-50444f93c068\") " Feb 27 16:56:51 crc kubenswrapper[4830]: I0227 16:56:51.663211 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9afd23ad-3f1c-4683-aa63-50444f93c068-utilities\") pod \"9afd23ad-3f1c-4683-aa63-50444f93c068\" (UID: \"9afd23ad-3f1c-4683-aa63-50444f93c068\") " Feb 27 16:56:51 crc kubenswrapper[4830]: I0227 16:56:51.664584 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9afd23ad-3f1c-4683-aa63-50444f93c068-utilities" (OuterVolumeSpecName: "utilities") pod "9afd23ad-3f1c-4683-aa63-50444f93c068" (UID: "9afd23ad-3f1c-4683-aa63-50444f93c068"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 16:56:51 crc kubenswrapper[4830]: I0227 16:56:51.672467 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9afd23ad-3f1c-4683-aa63-50444f93c068-kube-api-access-xt7pg" (OuterVolumeSpecName: "kube-api-access-xt7pg") pod "9afd23ad-3f1c-4683-aa63-50444f93c068" (UID: "9afd23ad-3f1c-4683-aa63-50444f93c068"). InnerVolumeSpecName "kube-api-access-xt7pg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:56:51 crc kubenswrapper[4830]: I0227 16:56:51.737739 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9afd23ad-3f1c-4683-aa63-50444f93c068-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9afd23ad-3f1c-4683-aa63-50444f93c068" (UID: "9afd23ad-3f1c-4683-aa63-50444f93c068"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 16:56:51 crc kubenswrapper[4830]: I0227 16:56:51.764498 4830 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9afd23ad-3f1c-4683-aa63-50444f93c068-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 16:56:51 crc kubenswrapper[4830]: I0227 16:56:51.764545 4830 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9afd23ad-3f1c-4683-aa63-50444f93c068-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 16:56:51 crc kubenswrapper[4830]: I0227 16:56:51.764561 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xt7pg\" (UniqueName: \"kubernetes.io/projected/9afd23ad-3f1c-4683-aa63-50444f93c068-kube-api-access-xt7pg\") on node \"crc\" DevicePath \"\"" Feb 27 16:56:52 crc kubenswrapper[4830]: I0227 16:56:52.189235 4830 generic.go:334] "Generic (PLEG): container finished" podID="9afd23ad-3f1c-4683-aa63-50444f93c068" containerID="16b22357479ab7eb001a909ca4793c43e087d900ad977ee3a51ff08323f1e163" exitCode=0 Feb 27 16:56:52 crc kubenswrapper[4830]: I0227 16:56:52.189316 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7rcxt" Feb 27 16:56:52 crc kubenswrapper[4830]: I0227 16:56:52.189306 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7rcxt" event={"ID":"9afd23ad-3f1c-4683-aa63-50444f93c068","Type":"ContainerDied","Data":"16b22357479ab7eb001a909ca4793c43e087d900ad977ee3a51ff08323f1e163"} Feb 27 16:56:52 crc kubenswrapper[4830]: I0227 16:56:52.189386 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7rcxt" event={"ID":"9afd23ad-3f1c-4683-aa63-50444f93c068","Type":"ContainerDied","Data":"d6303f625506b736140bb680c93320de267f6162bfff8318395aa77b6118150f"} Feb 27 16:56:52 crc kubenswrapper[4830]: I0227 16:56:52.189452 4830 scope.go:117] "RemoveContainer" containerID="16b22357479ab7eb001a909ca4793c43e087d900ad977ee3a51ff08323f1e163" Feb 27 16:56:52 crc kubenswrapper[4830]: I0227 16:56:52.212876 4830 scope.go:117] "RemoveContainer" containerID="9f47245f0870eb9dbd281609f27a9440251d2923fe89b491c594d6976b989a8c" Feb 27 16:56:52 crc kubenswrapper[4830]: I0227 16:56:52.241508 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7rcxt"] Feb 27 16:56:52 crc kubenswrapper[4830]: I0227 16:56:52.247900 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-7rcxt"] Feb 27 16:56:52 crc kubenswrapper[4830]: I0227 16:56:52.271297 4830 scope.go:117] "RemoveContainer" containerID="467251a4468e168e4756d74ff94a8458cf1dc3e2f9be6ff7d1bcc05847c0a744" Feb 27 16:56:52 crc kubenswrapper[4830]: I0227 16:56:52.295493 4830 scope.go:117] "RemoveContainer" containerID="16b22357479ab7eb001a909ca4793c43e087d900ad977ee3a51ff08323f1e163" Feb 27 16:56:52 crc kubenswrapper[4830]: E0227 16:56:52.295962 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"16b22357479ab7eb001a909ca4793c43e087d900ad977ee3a51ff08323f1e163\": container with ID starting with 16b22357479ab7eb001a909ca4793c43e087d900ad977ee3a51ff08323f1e163 not found: ID does not exist" containerID="16b22357479ab7eb001a909ca4793c43e087d900ad977ee3a51ff08323f1e163" Feb 27 16:56:52 crc kubenswrapper[4830]: I0227 16:56:52.296030 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"16b22357479ab7eb001a909ca4793c43e087d900ad977ee3a51ff08323f1e163"} err="failed to get container status \"16b22357479ab7eb001a909ca4793c43e087d900ad977ee3a51ff08323f1e163\": rpc error: code = NotFound desc = could not find container \"16b22357479ab7eb001a909ca4793c43e087d900ad977ee3a51ff08323f1e163\": container with ID starting with 16b22357479ab7eb001a909ca4793c43e087d900ad977ee3a51ff08323f1e163 not found: ID does not exist" Feb 27 16:56:52 crc kubenswrapper[4830]: I0227 16:56:52.296065 4830 scope.go:117] "RemoveContainer" containerID="9f47245f0870eb9dbd281609f27a9440251d2923fe89b491c594d6976b989a8c" Feb 27 16:56:52 crc kubenswrapper[4830]: E0227 16:56:52.296503 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f47245f0870eb9dbd281609f27a9440251d2923fe89b491c594d6976b989a8c\": container with ID starting with 9f47245f0870eb9dbd281609f27a9440251d2923fe89b491c594d6976b989a8c not found: ID does not exist" containerID="9f47245f0870eb9dbd281609f27a9440251d2923fe89b491c594d6976b989a8c" Feb 27 16:56:52 crc kubenswrapper[4830]: I0227 16:56:52.296535 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f47245f0870eb9dbd281609f27a9440251d2923fe89b491c594d6976b989a8c"} err="failed to get container status \"9f47245f0870eb9dbd281609f27a9440251d2923fe89b491c594d6976b989a8c\": rpc error: code = NotFound desc = could not find container \"9f47245f0870eb9dbd281609f27a9440251d2923fe89b491c594d6976b989a8c\": container with ID 
starting with 9f47245f0870eb9dbd281609f27a9440251d2923fe89b491c594d6976b989a8c not found: ID does not exist" Feb 27 16:56:52 crc kubenswrapper[4830]: I0227 16:56:52.296554 4830 scope.go:117] "RemoveContainer" containerID="467251a4468e168e4756d74ff94a8458cf1dc3e2f9be6ff7d1bcc05847c0a744" Feb 27 16:56:52 crc kubenswrapper[4830]: E0227 16:56:52.296869 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"467251a4468e168e4756d74ff94a8458cf1dc3e2f9be6ff7d1bcc05847c0a744\": container with ID starting with 467251a4468e168e4756d74ff94a8458cf1dc3e2f9be6ff7d1bcc05847c0a744 not found: ID does not exist" containerID="467251a4468e168e4756d74ff94a8458cf1dc3e2f9be6ff7d1bcc05847c0a744" Feb 27 16:56:52 crc kubenswrapper[4830]: I0227 16:56:52.296899 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"467251a4468e168e4756d74ff94a8458cf1dc3e2f9be6ff7d1bcc05847c0a744"} err="failed to get container status \"467251a4468e168e4756d74ff94a8458cf1dc3e2f9be6ff7d1bcc05847c0a744\": rpc error: code = NotFound desc = could not find container \"467251a4468e168e4756d74ff94a8458cf1dc3e2f9be6ff7d1bcc05847c0a744\": container with ID starting with 467251a4468e168e4756d74ff94a8458cf1dc3e2f9be6ff7d1bcc05847c0a744 not found: ID does not exist" Feb 27 16:56:52 crc kubenswrapper[4830]: I0227 16:56:52.774453 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9afd23ad-3f1c-4683-aa63-50444f93c068" path="/var/lib/kubelet/pods/9afd23ad-3f1c-4683-aa63-50444f93c068/volumes" Feb 27 16:56:52 crc kubenswrapper[4830]: I0227 16:56:52.968319 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6crnm"] Feb 27 16:56:53 crc kubenswrapper[4830]: I0227 16:56:53.199588 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-6crnm" 
podUID="ed784632-ada1-4164-b611-5b679437f210" containerName="registry-server" containerID="cri-o://ea956a9f881f3226b7d0e0ffe7ed489dde67bb5fb982b15d1c03e4b009052274" gracePeriod=2 Feb 27 16:56:53 crc kubenswrapper[4830]: I0227 16:56:53.702605 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6crnm" Feb 27 16:56:53 crc kubenswrapper[4830]: I0227 16:56:53.898023 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed784632-ada1-4164-b611-5b679437f210-utilities\") pod \"ed784632-ada1-4164-b611-5b679437f210\" (UID: \"ed784632-ada1-4164-b611-5b679437f210\") " Feb 27 16:56:53 crc kubenswrapper[4830]: I0227 16:56:53.898159 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed784632-ada1-4164-b611-5b679437f210-catalog-content\") pod \"ed784632-ada1-4164-b611-5b679437f210\" (UID: \"ed784632-ada1-4164-b611-5b679437f210\") " Feb 27 16:56:53 crc kubenswrapper[4830]: I0227 16:56:53.898375 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v84k2\" (UniqueName: \"kubernetes.io/projected/ed784632-ada1-4164-b611-5b679437f210-kube-api-access-v84k2\") pod \"ed784632-ada1-4164-b611-5b679437f210\" (UID: \"ed784632-ada1-4164-b611-5b679437f210\") " Feb 27 16:56:53 crc kubenswrapper[4830]: I0227 16:56:53.900344 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ed784632-ada1-4164-b611-5b679437f210-utilities" (OuterVolumeSpecName: "utilities") pod "ed784632-ada1-4164-b611-5b679437f210" (UID: "ed784632-ada1-4164-b611-5b679437f210"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 16:56:53 crc kubenswrapper[4830]: I0227 16:56:53.906189 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed784632-ada1-4164-b611-5b679437f210-kube-api-access-v84k2" (OuterVolumeSpecName: "kube-api-access-v84k2") pod "ed784632-ada1-4164-b611-5b679437f210" (UID: "ed784632-ada1-4164-b611-5b679437f210"). InnerVolumeSpecName "kube-api-access-v84k2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:56:54 crc kubenswrapper[4830]: I0227 16:56:54.000075 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v84k2\" (UniqueName: \"kubernetes.io/projected/ed784632-ada1-4164-b611-5b679437f210-kube-api-access-v84k2\") on node \"crc\" DevicePath \"\"" Feb 27 16:56:54 crc kubenswrapper[4830]: I0227 16:56:54.000132 4830 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed784632-ada1-4164-b611-5b679437f210-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 16:56:54 crc kubenswrapper[4830]: I0227 16:56:54.214119 4830 generic.go:334] "Generic (PLEG): container finished" podID="ed784632-ada1-4164-b611-5b679437f210" containerID="ea956a9f881f3226b7d0e0ffe7ed489dde67bb5fb982b15d1c03e4b009052274" exitCode=0 Feb 27 16:56:54 crc kubenswrapper[4830]: I0227 16:56:54.214201 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6crnm" event={"ID":"ed784632-ada1-4164-b611-5b679437f210","Type":"ContainerDied","Data":"ea956a9f881f3226b7d0e0ffe7ed489dde67bb5fb982b15d1c03e4b009052274"} Feb 27 16:56:54 crc kubenswrapper[4830]: I0227 16:56:54.214254 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6crnm" event={"ID":"ed784632-ada1-4164-b611-5b679437f210","Type":"ContainerDied","Data":"f365e64a4e50b7be52fd1688c84edcfd26ff7591f064f76cc11fab3aaf807e2f"} Feb 27 16:56:54 crc kubenswrapper[4830]: 
I0227 16:56:54.214268 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6crnm" Feb 27 16:56:54 crc kubenswrapper[4830]: I0227 16:56:54.214287 4830 scope.go:117] "RemoveContainer" containerID="ea956a9f881f3226b7d0e0ffe7ed489dde67bb5fb982b15d1c03e4b009052274" Feb 27 16:56:54 crc kubenswrapper[4830]: I0227 16:56:54.239385 4830 scope.go:117] "RemoveContainer" containerID="adb07ce022f68a5648a36a633f8338e8f3d3afc433f27c59a65f0f61732e7161" Feb 27 16:56:54 crc kubenswrapper[4830]: I0227 16:56:54.281847 4830 scope.go:117] "RemoveContainer" containerID="2d32f625d3ee0f57e8dfb2f04b7705757636b3c3275becaadbb634cb0bfaa567" Feb 27 16:56:54 crc kubenswrapper[4830]: I0227 16:56:54.311508 4830 scope.go:117] "RemoveContainer" containerID="ea956a9f881f3226b7d0e0ffe7ed489dde67bb5fb982b15d1c03e4b009052274" Feb 27 16:56:54 crc kubenswrapper[4830]: E0227 16:56:54.312110 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea956a9f881f3226b7d0e0ffe7ed489dde67bb5fb982b15d1c03e4b009052274\": container with ID starting with ea956a9f881f3226b7d0e0ffe7ed489dde67bb5fb982b15d1c03e4b009052274 not found: ID does not exist" containerID="ea956a9f881f3226b7d0e0ffe7ed489dde67bb5fb982b15d1c03e4b009052274" Feb 27 16:56:54 crc kubenswrapper[4830]: I0227 16:56:54.312171 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea956a9f881f3226b7d0e0ffe7ed489dde67bb5fb982b15d1c03e4b009052274"} err="failed to get container status \"ea956a9f881f3226b7d0e0ffe7ed489dde67bb5fb982b15d1c03e4b009052274\": rpc error: code = NotFound desc = could not find container \"ea956a9f881f3226b7d0e0ffe7ed489dde67bb5fb982b15d1c03e4b009052274\": container with ID starting with ea956a9f881f3226b7d0e0ffe7ed489dde67bb5fb982b15d1c03e4b009052274 not found: ID does not exist" Feb 27 16:56:54 crc kubenswrapper[4830]: I0227 16:56:54.312208 4830 
scope.go:117] "RemoveContainer" containerID="adb07ce022f68a5648a36a633f8338e8f3d3afc433f27c59a65f0f61732e7161" Feb 27 16:56:54 crc kubenswrapper[4830]: E0227 16:56:54.312745 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"adb07ce022f68a5648a36a633f8338e8f3d3afc433f27c59a65f0f61732e7161\": container with ID starting with adb07ce022f68a5648a36a633f8338e8f3d3afc433f27c59a65f0f61732e7161 not found: ID does not exist" containerID="adb07ce022f68a5648a36a633f8338e8f3d3afc433f27c59a65f0f61732e7161" Feb 27 16:56:54 crc kubenswrapper[4830]: I0227 16:56:54.312791 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"adb07ce022f68a5648a36a633f8338e8f3d3afc433f27c59a65f0f61732e7161"} err="failed to get container status \"adb07ce022f68a5648a36a633f8338e8f3d3afc433f27c59a65f0f61732e7161\": rpc error: code = NotFound desc = could not find container \"adb07ce022f68a5648a36a633f8338e8f3d3afc433f27c59a65f0f61732e7161\": container with ID starting with adb07ce022f68a5648a36a633f8338e8f3d3afc433f27c59a65f0f61732e7161 not found: ID does not exist" Feb 27 16:56:54 crc kubenswrapper[4830]: I0227 16:56:54.312818 4830 scope.go:117] "RemoveContainer" containerID="2d32f625d3ee0f57e8dfb2f04b7705757636b3c3275becaadbb634cb0bfaa567" Feb 27 16:56:54 crc kubenswrapper[4830]: E0227 16:56:54.313325 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d32f625d3ee0f57e8dfb2f04b7705757636b3c3275becaadbb634cb0bfaa567\": container with ID starting with 2d32f625d3ee0f57e8dfb2f04b7705757636b3c3275becaadbb634cb0bfaa567 not found: ID does not exist" containerID="2d32f625d3ee0f57e8dfb2f04b7705757636b3c3275becaadbb634cb0bfaa567" Feb 27 16:56:54 crc kubenswrapper[4830]: I0227 16:56:54.313371 4830 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"2d32f625d3ee0f57e8dfb2f04b7705757636b3c3275becaadbb634cb0bfaa567"} err="failed to get container status \"2d32f625d3ee0f57e8dfb2f04b7705757636b3c3275becaadbb634cb0bfaa567\": rpc error: code = NotFound desc = could not find container \"2d32f625d3ee0f57e8dfb2f04b7705757636b3c3275becaadbb634cb0bfaa567\": container with ID starting with 2d32f625d3ee0f57e8dfb2f04b7705757636b3c3275becaadbb634cb0bfaa567 not found: ID does not exist" Feb 27 16:56:54 crc kubenswrapper[4830]: I0227 16:56:54.488582 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ed784632-ada1-4164-b611-5b679437f210-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ed784632-ada1-4164-b611-5b679437f210" (UID: "ed784632-ada1-4164-b611-5b679437f210"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 16:56:54 crc kubenswrapper[4830]: I0227 16:56:54.507418 4830 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed784632-ada1-4164-b611-5b679437f210-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 16:56:54 crc kubenswrapper[4830]: I0227 16:56:54.568906 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6crnm"] Feb 27 16:56:54 crc kubenswrapper[4830]: I0227 16:56:54.575777 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-6crnm"] Feb 27 16:56:54 crc kubenswrapper[4830]: I0227 16:56:54.776405 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed784632-ada1-4164-b611-5b679437f210" path="/var/lib/kubelet/pods/ed784632-ada1-4164-b611-5b679437f210/volumes" Feb 27 16:56:57 crc kubenswrapper[4830]: I0227 16:56:57.761857 4830 scope.go:117] "RemoveContainer" containerID="b1477650672db48c8bb6d798d8fc8040ec3d1666489a66fcec0561cf4cfade74" Feb 27 16:56:57 crc kubenswrapper[4830]: E0227 
16:56:57.762101 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 16:57:11 crc kubenswrapper[4830]: I0227 16:57:11.763421 4830 scope.go:117] "RemoveContainer" containerID="b1477650672db48c8bb6d798d8fc8040ec3d1666489a66fcec0561cf4cfade74" Feb 27 16:57:11 crc kubenswrapper[4830]: E0227 16:57:11.764697 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 16:57:25 crc kubenswrapper[4830]: I0227 16:57:25.763109 4830 scope.go:117] "RemoveContainer" containerID="b1477650672db48c8bb6d798d8fc8040ec3d1666489a66fcec0561cf4cfade74" Feb 27 16:57:25 crc kubenswrapper[4830]: E0227 16:57:25.764055 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 16:57:36 crc kubenswrapper[4830]: I0227 16:57:36.762809 4830 scope.go:117] "RemoveContainer" containerID="b1477650672db48c8bb6d798d8fc8040ec3d1666489a66fcec0561cf4cfade74" Feb 27 16:57:36 crc 
kubenswrapper[4830]: E0227 16:57:36.763907 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 16:57:51 crc kubenswrapper[4830]: I0227 16:57:51.762190 4830 scope.go:117] "RemoveContainer" containerID="b1477650672db48c8bb6d798d8fc8040ec3d1666489a66fcec0561cf4cfade74" Feb 27 16:57:51 crc kubenswrapper[4830]: E0227 16:57:51.763731 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 16:58:00 crc kubenswrapper[4830]: I0227 16:58:00.172753 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536858-z25p7"] Feb 27 16:58:00 crc kubenswrapper[4830]: E0227 16:58:00.174610 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed784632-ada1-4164-b611-5b679437f210" containerName="extract-content" Feb 27 16:58:00 crc kubenswrapper[4830]: I0227 16:58:00.174647 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed784632-ada1-4164-b611-5b679437f210" containerName="extract-content" Feb 27 16:58:00 crc kubenswrapper[4830]: E0227 16:58:00.174871 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9afd23ad-3f1c-4683-aa63-50444f93c068" containerName="extract-content" Feb 27 16:58:00 crc kubenswrapper[4830]: I0227 16:58:00.174898 4830 
state_mem.go:107] "Deleted CPUSet assignment" podUID="9afd23ad-3f1c-4683-aa63-50444f93c068" containerName="extract-content" Feb 27 16:58:00 crc kubenswrapper[4830]: E0227 16:58:00.174927 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed784632-ada1-4164-b611-5b679437f210" containerName="registry-server" Feb 27 16:58:00 crc kubenswrapper[4830]: I0227 16:58:00.174991 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed784632-ada1-4164-b611-5b679437f210" containerName="registry-server" Feb 27 16:58:00 crc kubenswrapper[4830]: E0227 16:58:00.175179 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9afd23ad-3f1c-4683-aa63-50444f93c068" containerName="extract-utilities" Feb 27 16:58:00 crc kubenswrapper[4830]: I0227 16:58:00.175202 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="9afd23ad-3f1c-4683-aa63-50444f93c068" containerName="extract-utilities" Feb 27 16:58:00 crc kubenswrapper[4830]: E0227 16:58:00.175227 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed784632-ada1-4164-b611-5b679437f210" containerName="extract-utilities" Feb 27 16:58:00 crc kubenswrapper[4830]: I0227 16:58:00.175242 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed784632-ada1-4164-b611-5b679437f210" containerName="extract-utilities" Feb 27 16:58:00 crc kubenswrapper[4830]: E0227 16:58:00.175274 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9afd23ad-3f1c-4683-aa63-50444f93c068" containerName="registry-server" Feb 27 16:58:00 crc kubenswrapper[4830]: I0227 16:58:00.175288 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="9afd23ad-3f1c-4683-aa63-50444f93c068" containerName="registry-server" Feb 27 16:58:00 crc kubenswrapper[4830]: I0227 16:58:00.175581 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="9afd23ad-3f1c-4683-aa63-50444f93c068" containerName="registry-server" Feb 27 16:58:00 crc kubenswrapper[4830]: I0227 16:58:00.175637 4830 
memory_manager.go:354] "RemoveStaleState removing state" podUID="ed784632-ada1-4164-b611-5b679437f210" containerName="registry-server" Feb 27 16:58:00 crc kubenswrapper[4830]: I0227 16:58:00.176748 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536858-z25p7" Feb 27 16:58:00 crc kubenswrapper[4830]: I0227 16:58:00.183637 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lmbtm" Feb 27 16:58:00 crc kubenswrapper[4830]: I0227 16:58:00.184348 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 16:58:00 crc kubenswrapper[4830]: I0227 16:58:00.185137 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 16:58:00 crc kubenswrapper[4830]: I0227 16:58:00.192826 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536858-z25p7"] Feb 27 16:58:00 crc kubenswrapper[4830]: I0227 16:58:00.304271 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdcfl\" (UniqueName: \"kubernetes.io/projected/6cb7b9b4-2210-42b4-85a1-4ce2396e35bc-kube-api-access-jdcfl\") pod \"auto-csr-approver-29536858-z25p7\" (UID: \"6cb7b9b4-2210-42b4-85a1-4ce2396e35bc\") " pod="openshift-infra/auto-csr-approver-29536858-z25p7" Feb 27 16:58:00 crc kubenswrapper[4830]: I0227 16:58:00.406344 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jdcfl\" (UniqueName: \"kubernetes.io/projected/6cb7b9b4-2210-42b4-85a1-4ce2396e35bc-kube-api-access-jdcfl\") pod \"auto-csr-approver-29536858-z25p7\" (UID: \"6cb7b9b4-2210-42b4-85a1-4ce2396e35bc\") " pod="openshift-infra/auto-csr-approver-29536858-z25p7" Feb 27 16:58:00 crc kubenswrapper[4830]: I0227 16:58:00.445635 4830 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-jdcfl\" (UniqueName: \"kubernetes.io/projected/6cb7b9b4-2210-42b4-85a1-4ce2396e35bc-kube-api-access-jdcfl\") pod \"auto-csr-approver-29536858-z25p7\" (UID: \"6cb7b9b4-2210-42b4-85a1-4ce2396e35bc\") " pod="openshift-infra/auto-csr-approver-29536858-z25p7" Feb 27 16:58:00 crc kubenswrapper[4830]: I0227 16:58:00.508574 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536858-z25p7" Feb 27 16:58:00 crc kubenswrapper[4830]: I0227 16:58:00.807349 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536858-z25p7"] Feb 27 16:58:00 crc kubenswrapper[4830]: I0227 16:58:00.843529 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536858-z25p7" event={"ID":"6cb7b9b4-2210-42b4-85a1-4ce2396e35bc","Type":"ContainerStarted","Data":"792cfcc4510381cbe8a08c800eafff3e37d08dd5dde81cd275c273d86b636f6f"} Feb 27 16:58:02 crc kubenswrapper[4830]: I0227 16:58:02.862423 4830 generic.go:334] "Generic (PLEG): container finished" podID="6cb7b9b4-2210-42b4-85a1-4ce2396e35bc" containerID="71154cbc3d9eaeaa49ce593042f8c3709c33b9bd72fb38798262821dddda085a" exitCode=0 Feb 27 16:58:02 crc kubenswrapper[4830]: I0227 16:58:02.862483 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536858-z25p7" event={"ID":"6cb7b9b4-2210-42b4-85a1-4ce2396e35bc","Type":"ContainerDied","Data":"71154cbc3d9eaeaa49ce593042f8c3709c33b9bd72fb38798262821dddda085a"} Feb 27 16:58:04 crc kubenswrapper[4830]: I0227 16:58:04.142085 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536858-z25p7" Feb 27 16:58:04 crc kubenswrapper[4830]: I0227 16:58:04.262173 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jdcfl\" (UniqueName: \"kubernetes.io/projected/6cb7b9b4-2210-42b4-85a1-4ce2396e35bc-kube-api-access-jdcfl\") pod \"6cb7b9b4-2210-42b4-85a1-4ce2396e35bc\" (UID: \"6cb7b9b4-2210-42b4-85a1-4ce2396e35bc\") " Feb 27 16:58:04 crc kubenswrapper[4830]: I0227 16:58:04.271298 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6cb7b9b4-2210-42b4-85a1-4ce2396e35bc-kube-api-access-jdcfl" (OuterVolumeSpecName: "kube-api-access-jdcfl") pod "6cb7b9b4-2210-42b4-85a1-4ce2396e35bc" (UID: "6cb7b9b4-2210-42b4-85a1-4ce2396e35bc"). InnerVolumeSpecName "kube-api-access-jdcfl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:58:04 crc kubenswrapper[4830]: I0227 16:58:04.365664 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jdcfl\" (UniqueName: \"kubernetes.io/projected/6cb7b9b4-2210-42b4-85a1-4ce2396e35bc-kube-api-access-jdcfl\") on node \"crc\" DevicePath \"\"" Feb 27 16:58:04 crc kubenswrapper[4830]: I0227 16:58:04.771630 4830 scope.go:117] "RemoveContainer" containerID="b1477650672db48c8bb6d798d8fc8040ec3d1666489a66fcec0561cf4cfade74" Feb 27 16:58:04 crc kubenswrapper[4830]: E0227 16:58:04.772182 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 16:58:04 crc kubenswrapper[4830]: I0227 16:58:04.881377 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-infra/auto-csr-approver-29536858-z25p7" event={"ID":"6cb7b9b4-2210-42b4-85a1-4ce2396e35bc","Type":"ContainerDied","Data":"792cfcc4510381cbe8a08c800eafff3e37d08dd5dde81cd275c273d86b636f6f"} Feb 27 16:58:04 crc kubenswrapper[4830]: I0227 16:58:04.881438 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="792cfcc4510381cbe8a08c800eafff3e37d08dd5dde81cd275c273d86b636f6f" Feb 27 16:58:04 crc kubenswrapper[4830]: I0227 16:58:04.881522 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536858-z25p7" Feb 27 16:58:05 crc kubenswrapper[4830]: I0227 16:58:05.235551 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536852-6vbvt"] Feb 27 16:58:05 crc kubenswrapper[4830]: I0227 16:58:05.245389 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536852-6vbvt"] Feb 27 16:58:06 crc kubenswrapper[4830]: I0227 16:58:06.786530 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a4ee390-448f-427a-bf19-cc86ffbbe968" path="/var/lib/kubelet/pods/3a4ee390-448f-427a-bf19-cc86ffbbe968/volumes" Feb 27 16:58:19 crc kubenswrapper[4830]: I0227 16:58:19.762762 4830 scope.go:117] "RemoveContainer" containerID="b1477650672db48c8bb6d798d8fc8040ec3d1666489a66fcec0561cf4cfade74" Feb 27 16:58:19 crc kubenswrapper[4830]: E0227 16:58:19.763645 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 16:58:31 crc kubenswrapper[4830]: I0227 16:58:31.763280 4830 scope.go:117] "RemoveContainer" 
containerID="b1477650672db48c8bb6d798d8fc8040ec3d1666489a66fcec0561cf4cfade74" Feb 27 16:58:31 crc kubenswrapper[4830]: E0227 16:58:31.764340 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 16:58:45 crc kubenswrapper[4830]: I0227 16:58:45.845418 4830 scope.go:117] "RemoveContainer" containerID="74ec3129796e69c4352b4797a7157d74c0a337b837658466ca0a93857c935343" Feb 27 16:58:46 crc kubenswrapper[4830]: I0227 16:58:46.763066 4830 scope.go:117] "RemoveContainer" containerID="b1477650672db48c8bb6d798d8fc8040ec3d1666489a66fcec0561cf4cfade74" Feb 27 16:58:46 crc kubenswrapper[4830]: E0227 16:58:46.763696 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 16:59:01 crc kubenswrapper[4830]: I0227 16:59:01.762937 4830 scope.go:117] "RemoveContainer" containerID="b1477650672db48c8bb6d798d8fc8040ec3d1666489a66fcec0561cf4cfade74" Feb 27 16:59:01 crc kubenswrapper[4830]: E0227 16:59:01.764067 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 16:59:15 crc kubenswrapper[4830]: I0227 16:59:15.762608 4830 scope.go:117] "RemoveContainer" containerID="b1477650672db48c8bb6d798d8fc8040ec3d1666489a66fcec0561cf4cfade74" Feb 27 16:59:15 crc kubenswrapper[4830]: E0227 16:59:15.763593 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 16:59:29 crc kubenswrapper[4830]: I0227 16:59:29.762555 4830 scope.go:117] "RemoveContainer" containerID="b1477650672db48c8bb6d798d8fc8040ec3d1666489a66fcec0561cf4cfade74" Feb 27 16:59:29 crc kubenswrapper[4830]: E0227 16:59:29.763590 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 16:59:42 crc kubenswrapper[4830]: I0227 16:59:42.764355 4830 scope.go:117] "RemoveContainer" containerID="b1477650672db48c8bb6d798d8fc8040ec3d1666489a66fcec0561cf4cfade74" Feb 27 16:59:42 crc kubenswrapper[4830]: E0227 16:59:42.767415 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 16:59:54 crc kubenswrapper[4830]: I0227 16:59:54.775090 4830 scope.go:117] "RemoveContainer" containerID="b1477650672db48c8bb6d798d8fc8040ec3d1666489a66fcec0561cf4cfade74" Feb 27 16:59:54 crc kubenswrapper[4830]: E0227 16:59:54.776468 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 17:00:00 crc kubenswrapper[4830]: I0227 17:00:00.165659 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536860-96hd5"] Feb 27 17:00:00 crc kubenswrapper[4830]: E0227 17:00:00.167080 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cb7b9b4-2210-42b4-85a1-4ce2396e35bc" containerName="oc" Feb 27 17:00:00 crc kubenswrapper[4830]: I0227 17:00:00.167111 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cb7b9b4-2210-42b4-85a1-4ce2396e35bc" containerName="oc" Feb 27 17:00:00 crc kubenswrapper[4830]: I0227 17:00:00.167492 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="6cb7b9b4-2210-42b4-85a1-4ce2396e35bc" containerName="oc" Feb 27 17:00:00 crc kubenswrapper[4830]: I0227 17:00:00.168528 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536860-96hd5" Feb 27 17:00:00 crc kubenswrapper[4830]: I0227 17:00:00.170618 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 17:00:00 crc kubenswrapper[4830]: I0227 17:00:00.173035 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 17:00:00 crc kubenswrapper[4830]: I0227 17:00:00.173273 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lmbtm" Feb 27 17:00:00 crc kubenswrapper[4830]: I0227 17:00:00.181025 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536860-2n2dr"] Feb 27 17:00:00 crc kubenswrapper[4830]: I0227 17:00:00.182379 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29536860-2n2dr" Feb 27 17:00:00 crc kubenswrapper[4830]: I0227 17:00:00.186776 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 27 17:00:00 crc kubenswrapper[4830]: I0227 17:00:00.187761 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 27 17:00:00 crc kubenswrapper[4830]: I0227 17:00:00.191409 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536860-2n2dr"] Feb 27 17:00:00 crc kubenswrapper[4830]: I0227 17:00:00.199085 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536860-96hd5"] Feb 27 17:00:00 crc kubenswrapper[4830]: I0227 17:00:00.275188 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvgqv\" (UniqueName: 
\"kubernetes.io/projected/71af92b9-5cee-48bf-8401-801a0851d27c-kube-api-access-vvgqv\") pod \"auto-csr-approver-29536860-96hd5\" (UID: \"71af92b9-5cee-48bf-8401-801a0851d27c\") " pod="openshift-infra/auto-csr-approver-29536860-96hd5" Feb 27 17:00:00 crc kubenswrapper[4830]: I0227 17:00:00.275254 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnrqm\" (UniqueName: \"kubernetes.io/projected/1abc3c2c-443e-473d-a216-27c5fddb12c5-kube-api-access-dnrqm\") pod \"collect-profiles-29536860-2n2dr\" (UID: \"1abc3c2c-443e-473d-a216-27c5fddb12c5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536860-2n2dr" Feb 27 17:00:00 crc kubenswrapper[4830]: I0227 17:00:00.275288 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1abc3c2c-443e-473d-a216-27c5fddb12c5-config-volume\") pod \"collect-profiles-29536860-2n2dr\" (UID: \"1abc3c2c-443e-473d-a216-27c5fddb12c5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536860-2n2dr" Feb 27 17:00:00 crc kubenswrapper[4830]: I0227 17:00:00.275344 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1abc3c2c-443e-473d-a216-27c5fddb12c5-secret-volume\") pod \"collect-profiles-29536860-2n2dr\" (UID: \"1abc3c2c-443e-473d-a216-27c5fddb12c5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536860-2n2dr" Feb 27 17:00:00 crc kubenswrapper[4830]: I0227 17:00:00.377030 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1abc3c2c-443e-473d-a216-27c5fddb12c5-secret-volume\") pod \"collect-profiles-29536860-2n2dr\" (UID: \"1abc3c2c-443e-473d-a216-27c5fddb12c5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536860-2n2dr" Feb 27 17:00:00 
crc kubenswrapper[4830]: I0227 17:00:00.377496 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vvgqv\" (UniqueName: \"kubernetes.io/projected/71af92b9-5cee-48bf-8401-801a0851d27c-kube-api-access-vvgqv\") pod \"auto-csr-approver-29536860-96hd5\" (UID: \"71af92b9-5cee-48bf-8401-801a0851d27c\") " pod="openshift-infra/auto-csr-approver-29536860-96hd5" Feb 27 17:00:00 crc kubenswrapper[4830]: I0227 17:00:00.377718 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dnrqm\" (UniqueName: \"kubernetes.io/projected/1abc3c2c-443e-473d-a216-27c5fddb12c5-kube-api-access-dnrqm\") pod \"collect-profiles-29536860-2n2dr\" (UID: \"1abc3c2c-443e-473d-a216-27c5fddb12c5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536860-2n2dr" Feb 27 17:00:00 crc kubenswrapper[4830]: I0227 17:00:00.377910 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1abc3c2c-443e-473d-a216-27c5fddb12c5-config-volume\") pod \"collect-profiles-29536860-2n2dr\" (UID: \"1abc3c2c-443e-473d-a216-27c5fddb12c5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536860-2n2dr" Feb 27 17:00:00 crc kubenswrapper[4830]: I0227 17:00:00.379058 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1abc3c2c-443e-473d-a216-27c5fddb12c5-config-volume\") pod \"collect-profiles-29536860-2n2dr\" (UID: \"1abc3c2c-443e-473d-a216-27c5fddb12c5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536860-2n2dr" Feb 27 17:00:00 crc kubenswrapper[4830]: I0227 17:00:00.387816 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1abc3c2c-443e-473d-a216-27c5fddb12c5-secret-volume\") pod \"collect-profiles-29536860-2n2dr\" (UID: 
\"1abc3c2c-443e-473d-a216-27c5fddb12c5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536860-2n2dr" Feb 27 17:00:00 crc kubenswrapper[4830]: I0227 17:00:00.397047 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dnrqm\" (UniqueName: \"kubernetes.io/projected/1abc3c2c-443e-473d-a216-27c5fddb12c5-kube-api-access-dnrqm\") pod \"collect-profiles-29536860-2n2dr\" (UID: \"1abc3c2c-443e-473d-a216-27c5fddb12c5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536860-2n2dr" Feb 27 17:00:00 crc kubenswrapper[4830]: I0227 17:00:00.415230 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvgqv\" (UniqueName: \"kubernetes.io/projected/71af92b9-5cee-48bf-8401-801a0851d27c-kube-api-access-vvgqv\") pod \"auto-csr-approver-29536860-96hd5\" (UID: \"71af92b9-5cee-48bf-8401-801a0851d27c\") " pod="openshift-infra/auto-csr-approver-29536860-96hd5" Feb 27 17:00:00 crc kubenswrapper[4830]: I0227 17:00:00.501608 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536860-96hd5" Feb 27 17:00:00 crc kubenswrapper[4830]: I0227 17:00:00.519991 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29536860-2n2dr" Feb 27 17:00:00 crc kubenswrapper[4830]: I0227 17:00:00.808191 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536860-96hd5"] Feb 27 17:00:00 crc kubenswrapper[4830]: I0227 17:00:00.877283 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536860-2n2dr"] Feb 27 17:00:01 crc kubenswrapper[4830]: I0227 17:00:01.065195 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536860-96hd5" event={"ID":"71af92b9-5cee-48bf-8401-801a0851d27c","Type":"ContainerStarted","Data":"50ad19df62775683e2d5da3237a7a4da3b8dfe59d82e7e234027ce1f408a64a6"} Feb 27 17:00:01 crc kubenswrapper[4830]: I0227 17:00:01.066976 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29536860-2n2dr" event={"ID":"1abc3c2c-443e-473d-a216-27c5fddb12c5","Type":"ContainerStarted","Data":"77cb231367bef5a5c657a9a74f8cb9920ebf5b7e99f163f9d2e72c2ba91e833f"} Feb 27 17:00:01 crc kubenswrapper[4830]: I0227 17:00:01.067023 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29536860-2n2dr" event={"ID":"1abc3c2c-443e-473d-a216-27c5fddb12c5","Type":"ContainerStarted","Data":"06000f60ed71307e41d05616ce0616cc4ea20f606b06f039408a41594ba472eb"} Feb 27 17:00:01 crc kubenswrapper[4830]: I0227 17:00:01.084556 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29536860-2n2dr" podStartSLOduration=1.084533605 podStartE2EDuration="1.084533605s" podCreationTimestamp="2026-02-27 17:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 17:00:01.080748138 +0000 UTC m=+3197.170020641" 
watchObservedRunningTime="2026-02-27 17:00:01.084533605 +0000 UTC m=+3197.173806098" Feb 27 17:00:02 crc kubenswrapper[4830]: I0227 17:00:02.080669 4830 generic.go:334] "Generic (PLEG): container finished" podID="1abc3c2c-443e-473d-a216-27c5fddb12c5" containerID="77cb231367bef5a5c657a9a74f8cb9920ebf5b7e99f163f9d2e72c2ba91e833f" exitCode=0 Feb 27 17:00:02 crc kubenswrapper[4830]: I0227 17:00:02.080718 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29536860-2n2dr" event={"ID":"1abc3c2c-443e-473d-a216-27c5fddb12c5","Type":"ContainerDied","Data":"77cb231367bef5a5c657a9a74f8cb9920ebf5b7e99f163f9d2e72c2ba91e833f"} Feb 27 17:00:03 crc kubenswrapper[4830]: I0227 17:00:03.483815 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29536860-2n2dr" Feb 27 17:00:03 crc kubenswrapper[4830]: I0227 17:00:03.629383 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dnrqm\" (UniqueName: \"kubernetes.io/projected/1abc3c2c-443e-473d-a216-27c5fddb12c5-kube-api-access-dnrqm\") pod \"1abc3c2c-443e-473d-a216-27c5fddb12c5\" (UID: \"1abc3c2c-443e-473d-a216-27c5fddb12c5\") " Feb 27 17:00:03 crc kubenswrapper[4830]: I0227 17:00:03.629465 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1abc3c2c-443e-473d-a216-27c5fddb12c5-config-volume\") pod \"1abc3c2c-443e-473d-a216-27c5fddb12c5\" (UID: \"1abc3c2c-443e-473d-a216-27c5fddb12c5\") " Feb 27 17:00:03 crc kubenswrapper[4830]: I0227 17:00:03.629637 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1abc3c2c-443e-473d-a216-27c5fddb12c5-secret-volume\") pod \"1abc3c2c-443e-473d-a216-27c5fddb12c5\" (UID: \"1abc3c2c-443e-473d-a216-27c5fddb12c5\") " Feb 27 17:00:03 crc 
kubenswrapper[4830]: I0227 17:00:03.631881 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1abc3c2c-443e-473d-a216-27c5fddb12c5-config-volume" (OuterVolumeSpecName: "config-volume") pod "1abc3c2c-443e-473d-a216-27c5fddb12c5" (UID: "1abc3c2c-443e-473d-a216-27c5fddb12c5"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:00:03 crc kubenswrapper[4830]: I0227 17:00:03.638286 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1abc3c2c-443e-473d-a216-27c5fddb12c5-kube-api-access-dnrqm" (OuterVolumeSpecName: "kube-api-access-dnrqm") pod "1abc3c2c-443e-473d-a216-27c5fddb12c5" (UID: "1abc3c2c-443e-473d-a216-27c5fddb12c5"). InnerVolumeSpecName "kube-api-access-dnrqm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:00:03 crc kubenswrapper[4830]: I0227 17:00:03.638299 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1abc3c2c-443e-473d-a216-27c5fddb12c5-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "1abc3c2c-443e-473d-a216-27c5fddb12c5" (UID: "1abc3c2c-443e-473d-a216-27c5fddb12c5"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:00:03 crc kubenswrapper[4830]: I0227 17:00:03.731048 4830 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1abc3c2c-443e-473d-a216-27c5fddb12c5-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 27 17:00:03 crc kubenswrapper[4830]: I0227 17:00:03.731088 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dnrqm\" (UniqueName: \"kubernetes.io/projected/1abc3c2c-443e-473d-a216-27c5fddb12c5-kube-api-access-dnrqm\") on node \"crc\" DevicePath \"\"" Feb 27 17:00:03 crc kubenswrapper[4830]: I0227 17:00:03.731102 4830 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1abc3c2c-443e-473d-a216-27c5fddb12c5-config-volume\") on node \"crc\" DevicePath \"\"" Feb 27 17:00:04 crc kubenswrapper[4830]: I0227 17:00:04.116170 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29536860-2n2dr" event={"ID":"1abc3c2c-443e-473d-a216-27c5fddb12c5","Type":"ContainerDied","Data":"06000f60ed71307e41d05616ce0616cc4ea20f606b06f039408a41594ba472eb"} Feb 27 17:00:04 crc kubenswrapper[4830]: I0227 17:00:04.116520 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="06000f60ed71307e41d05616ce0616cc4ea20f606b06f039408a41594ba472eb" Feb 27 17:00:04 crc kubenswrapper[4830]: I0227 17:00:04.116232 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29536860-2n2dr" Feb 27 17:00:04 crc kubenswrapper[4830]: E0227 17:00:04.288032 4830 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1abc3c2c_443e_473d_a216_27c5fddb12c5.slice\": RecentStats: unable to find data in memory cache]" Feb 27 17:00:04 crc kubenswrapper[4830]: I0227 17:00:04.566021 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536815-jjxmq"] Feb 27 17:00:04 crc kubenswrapper[4830]: I0227 17:00:04.571196 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536815-jjxmq"] Feb 27 17:00:04 crc kubenswrapper[4830]: I0227 17:00:04.776337 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1d91153-884f-474d-a8f6-e14287fd0a16" path="/var/lib/kubelet/pods/b1d91153-884f-474d-a8f6-e14287fd0a16/volumes" Feb 27 17:00:05 crc kubenswrapper[4830]: I0227 17:00:05.130409 4830 generic.go:334] "Generic (PLEG): container finished" podID="71af92b9-5cee-48bf-8401-801a0851d27c" containerID="8d45f9a4a6106e6eb09fd257795deb102bc58ca15d8874897d297e6357ff41c4" exitCode=0 Feb 27 17:00:05 crc kubenswrapper[4830]: I0227 17:00:05.130527 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536860-96hd5" event={"ID":"71af92b9-5cee-48bf-8401-801a0851d27c","Type":"ContainerDied","Data":"8d45f9a4a6106e6eb09fd257795deb102bc58ca15d8874897d297e6357ff41c4"} Feb 27 17:00:06 crc kubenswrapper[4830]: I0227 17:00:06.484518 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536860-96hd5" Feb 27 17:00:06 crc kubenswrapper[4830]: I0227 17:00:06.676286 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vvgqv\" (UniqueName: \"kubernetes.io/projected/71af92b9-5cee-48bf-8401-801a0851d27c-kube-api-access-vvgqv\") pod \"71af92b9-5cee-48bf-8401-801a0851d27c\" (UID: \"71af92b9-5cee-48bf-8401-801a0851d27c\") " Feb 27 17:00:06 crc kubenswrapper[4830]: I0227 17:00:06.684185 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71af92b9-5cee-48bf-8401-801a0851d27c-kube-api-access-vvgqv" (OuterVolumeSpecName: "kube-api-access-vvgqv") pod "71af92b9-5cee-48bf-8401-801a0851d27c" (UID: "71af92b9-5cee-48bf-8401-801a0851d27c"). InnerVolumeSpecName "kube-api-access-vvgqv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:00:06 crc kubenswrapper[4830]: I0227 17:00:06.778033 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vvgqv\" (UniqueName: \"kubernetes.io/projected/71af92b9-5cee-48bf-8401-801a0851d27c-kube-api-access-vvgqv\") on node \"crc\" DevicePath \"\"" Feb 27 17:00:07 crc kubenswrapper[4830]: I0227 17:00:07.154341 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536860-96hd5" event={"ID":"71af92b9-5cee-48bf-8401-801a0851d27c","Type":"ContainerDied","Data":"50ad19df62775683e2d5da3237a7a4da3b8dfe59d82e7e234027ce1f408a64a6"} Feb 27 17:00:07 crc kubenswrapper[4830]: I0227 17:00:07.154399 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="50ad19df62775683e2d5da3237a7a4da3b8dfe59d82e7e234027ce1f408a64a6" Feb 27 17:00:07 crc kubenswrapper[4830]: I0227 17:00:07.154797 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536860-96hd5" Feb 27 17:00:07 crc kubenswrapper[4830]: I0227 17:00:07.558082 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536854-9j24s"] Feb 27 17:00:07 crc kubenswrapper[4830]: I0227 17:00:07.568829 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536854-9j24s"] Feb 27 17:00:07 crc kubenswrapper[4830]: I0227 17:00:07.762492 4830 scope.go:117] "RemoveContainer" containerID="b1477650672db48c8bb6d798d8fc8040ec3d1666489a66fcec0561cf4cfade74" Feb 27 17:00:07 crc kubenswrapper[4830]: E0227 17:00:07.762881 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 17:00:08 crc kubenswrapper[4830]: I0227 17:00:08.772112 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c00c49e6-0391-440f-b78c-7746d978baa3" path="/var/lib/kubelet/pods/c00c49e6-0391-440f-b78c-7746d978baa3/volumes" Feb 27 17:00:22 crc kubenswrapper[4830]: I0227 17:00:22.762757 4830 scope.go:117] "RemoveContainer" containerID="b1477650672db48c8bb6d798d8fc8040ec3d1666489a66fcec0561cf4cfade74" Feb 27 17:00:22 crc kubenswrapper[4830]: E0227 17:00:22.763817 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" 
podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 17:00:34 crc kubenswrapper[4830]: I0227 17:00:34.770266 4830 scope.go:117] "RemoveContainer" containerID="b1477650672db48c8bb6d798d8fc8040ec3d1666489a66fcec0561cf4cfade74" Feb 27 17:00:34 crc kubenswrapper[4830]: E0227 17:00:34.771306 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 17:00:45 crc kubenswrapper[4830]: I0227 17:00:45.955390 4830 scope.go:117] "RemoveContainer" containerID="8cbee3107edaf59aab527082dd6fc221346233646ff72e54576a142fadfef314" Feb 27 17:00:45 crc kubenswrapper[4830]: I0227 17:00:45.982912 4830 scope.go:117] "RemoveContainer" containerID="8e6a7dcfcf3faeb056159607c1d285792a3d6ea926d6a2597c223dd6c8287879" Feb 27 17:00:47 crc kubenswrapper[4830]: I0227 17:00:47.769531 4830 scope.go:117] "RemoveContainer" containerID="b1477650672db48c8bb6d798d8fc8040ec3d1666489a66fcec0561cf4cfade74" Feb 27 17:00:47 crc kubenswrapper[4830]: E0227 17:00:47.774857 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 17:00:58 crc kubenswrapper[4830]: I0227 17:00:58.762780 4830 scope.go:117] "RemoveContainer" containerID="b1477650672db48c8bb6d798d8fc8040ec3d1666489a66fcec0561cf4cfade74" Feb 27 17:00:58 crc kubenswrapper[4830]: E0227 17:00:58.763764 4830 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 17:01:11 crc kubenswrapper[4830]: I0227 17:01:11.763012 4830 scope.go:117] "RemoveContainer" containerID="b1477650672db48c8bb6d798d8fc8040ec3d1666489a66fcec0561cf4cfade74" Feb 27 17:01:12 crc kubenswrapper[4830]: I0227 17:01:12.753843 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" event={"ID":"00d6b7ce-4757-4275-8345-60c1b546ce25","Type":"ContainerStarted","Data":"db78cb7e4ed59ab2b04c3fd90bbd3ca09de79184879f6f7cafe4aab5e64ed8b6"} Feb 27 17:02:00 crc kubenswrapper[4830]: I0227 17:02:00.208597 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536862-8ldb6"] Feb 27 17:02:00 crc kubenswrapper[4830]: E0227 17:02:00.209623 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1abc3c2c-443e-473d-a216-27c5fddb12c5" containerName="collect-profiles" Feb 27 17:02:00 crc kubenswrapper[4830]: I0227 17:02:00.209643 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="1abc3c2c-443e-473d-a216-27c5fddb12c5" containerName="collect-profiles" Feb 27 17:02:00 crc kubenswrapper[4830]: E0227 17:02:00.209678 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71af92b9-5cee-48bf-8401-801a0851d27c" containerName="oc" Feb 27 17:02:00 crc kubenswrapper[4830]: I0227 17:02:00.209690 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="71af92b9-5cee-48bf-8401-801a0851d27c" containerName="oc" Feb 27 17:02:00 crc kubenswrapper[4830]: I0227 17:02:00.209919 4830 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="1abc3c2c-443e-473d-a216-27c5fddb12c5" containerName="collect-profiles" Feb 27 17:02:00 crc kubenswrapper[4830]: I0227 17:02:00.209980 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="71af92b9-5cee-48bf-8401-801a0851d27c" containerName="oc" Feb 27 17:02:00 crc kubenswrapper[4830]: I0227 17:02:00.210667 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536862-8ldb6" Feb 27 17:02:00 crc kubenswrapper[4830]: I0227 17:02:00.214323 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 17:02:00 crc kubenswrapper[4830]: I0227 17:02:00.219640 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 17:02:00 crc kubenswrapper[4830]: I0227 17:02:00.220635 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lmbtm" Feb 27 17:02:00 crc kubenswrapper[4830]: I0227 17:02:00.226481 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536862-8ldb6"] Feb 27 17:02:00 crc kubenswrapper[4830]: I0227 17:02:00.335231 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64jkx\" (UniqueName: \"kubernetes.io/projected/7a8fa961-32e8-4d06-b404-e189e2691884-kube-api-access-64jkx\") pod \"auto-csr-approver-29536862-8ldb6\" (UID: \"7a8fa961-32e8-4d06-b404-e189e2691884\") " pod="openshift-infra/auto-csr-approver-29536862-8ldb6" Feb 27 17:02:00 crc kubenswrapper[4830]: I0227 17:02:00.436756 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64jkx\" (UniqueName: \"kubernetes.io/projected/7a8fa961-32e8-4d06-b404-e189e2691884-kube-api-access-64jkx\") pod \"auto-csr-approver-29536862-8ldb6\" (UID: \"7a8fa961-32e8-4d06-b404-e189e2691884\") " 
pod="openshift-infra/auto-csr-approver-29536862-8ldb6" Feb 27 17:02:00 crc kubenswrapper[4830]: I0227 17:02:00.478006 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64jkx\" (UniqueName: \"kubernetes.io/projected/7a8fa961-32e8-4d06-b404-e189e2691884-kube-api-access-64jkx\") pod \"auto-csr-approver-29536862-8ldb6\" (UID: \"7a8fa961-32e8-4d06-b404-e189e2691884\") " pod="openshift-infra/auto-csr-approver-29536862-8ldb6" Feb 27 17:02:00 crc kubenswrapper[4830]: I0227 17:02:00.530577 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536862-8ldb6" Feb 27 17:02:00 crc kubenswrapper[4830]: I0227 17:02:00.839504 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536862-8ldb6"] Feb 27 17:02:00 crc kubenswrapper[4830]: I0227 17:02:00.852717 4830 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 27 17:02:01 crc kubenswrapper[4830]: I0227 17:02:01.253009 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536862-8ldb6" event={"ID":"7a8fa961-32e8-4d06-b404-e189e2691884","Type":"ContainerStarted","Data":"1b19a49d1e041214e25991d9f8581d9c2abb5b8050a81737c4e8e6fb583d35bb"} Feb 27 17:02:03 crc kubenswrapper[4830]: I0227 17:02:03.274811 4830 generic.go:334] "Generic (PLEG): container finished" podID="7a8fa961-32e8-4d06-b404-e189e2691884" containerID="3a4c78e5808e87cfda9635b169561ad7833c2bb0bd03dde0beef0bfe42dfe589" exitCode=0 Feb 27 17:02:03 crc kubenswrapper[4830]: I0227 17:02:03.275132 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536862-8ldb6" event={"ID":"7a8fa961-32e8-4d06-b404-e189e2691884","Type":"ContainerDied","Data":"3a4c78e5808e87cfda9635b169561ad7833c2bb0bd03dde0beef0bfe42dfe589"} Feb 27 17:02:04 crc kubenswrapper[4830]: I0227 17:02:04.679759 4830 util.go:48] "No ready sandbox 
for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536862-8ldb6" Feb 27 17:02:04 crc kubenswrapper[4830]: I0227 17:02:04.820286 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-64jkx\" (UniqueName: \"kubernetes.io/projected/7a8fa961-32e8-4d06-b404-e189e2691884-kube-api-access-64jkx\") pod \"7a8fa961-32e8-4d06-b404-e189e2691884\" (UID: \"7a8fa961-32e8-4d06-b404-e189e2691884\") " Feb 27 17:02:04 crc kubenswrapper[4830]: I0227 17:02:04.830223 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a8fa961-32e8-4d06-b404-e189e2691884-kube-api-access-64jkx" (OuterVolumeSpecName: "kube-api-access-64jkx") pod "7a8fa961-32e8-4d06-b404-e189e2691884" (UID: "7a8fa961-32e8-4d06-b404-e189e2691884"). InnerVolumeSpecName "kube-api-access-64jkx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:02:04 crc kubenswrapper[4830]: I0227 17:02:04.922145 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-64jkx\" (UniqueName: \"kubernetes.io/projected/7a8fa961-32e8-4d06-b404-e189e2691884-kube-api-access-64jkx\") on node \"crc\" DevicePath \"\"" Feb 27 17:02:05 crc kubenswrapper[4830]: I0227 17:02:05.300015 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536862-8ldb6" event={"ID":"7a8fa961-32e8-4d06-b404-e189e2691884","Type":"ContainerDied","Data":"1b19a49d1e041214e25991d9f8581d9c2abb5b8050a81737c4e8e6fb583d35bb"} Feb 27 17:02:05 crc kubenswrapper[4830]: I0227 17:02:05.300084 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1b19a49d1e041214e25991d9f8581d9c2abb5b8050a81737c4e8e6fb583d35bb" Feb 27 17:02:05 crc kubenswrapper[4830]: I0227 17:02:05.300162 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536862-8ldb6" Feb 27 17:02:05 crc kubenswrapper[4830]: I0227 17:02:05.788226 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536856-459dj"] Feb 27 17:02:05 crc kubenswrapper[4830]: I0227 17:02:05.802326 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536856-459dj"] Feb 27 17:02:06 crc kubenswrapper[4830]: I0227 17:02:06.775006 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b872122a-a976-423c-a013-d2946c95c8e8" path="/var/lib/kubelet/pods/b872122a-a976-423c-a013-d2946c95c8e8/volumes" Feb 27 17:02:46 crc kubenswrapper[4830]: I0227 17:02:46.126609 4830 scope.go:117] "RemoveContainer" containerID="562409aa010f63a4d3310338ac112a74c7da97cfe6514550b85755741ba419cc" Feb 27 17:03:33 crc kubenswrapper[4830]: I0227 17:03:33.160429 4830 patch_prober.go:28] interesting pod/machine-config-daemon-2tv5v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 17:03:33 crc kubenswrapper[4830]: I0227 17:03:33.161426 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 17:03:40 crc kubenswrapper[4830]: I0227 17:03:40.931394 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-wpgl4"] Feb 27 17:03:40 crc kubenswrapper[4830]: E0227 17:03:40.934824 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a8fa961-32e8-4d06-b404-e189e2691884" containerName="oc" Feb 27 17:03:40 crc 
kubenswrapper[4830]: I0227 17:03:40.938503 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a8fa961-32e8-4d06-b404-e189e2691884" containerName="oc" Feb 27 17:03:40 crc kubenswrapper[4830]: I0227 17:03:40.939019 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a8fa961-32e8-4d06-b404-e189e2691884" containerName="oc" Feb 27 17:03:40 crc kubenswrapper[4830]: I0227 17:03:40.940724 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wpgl4" Feb 27 17:03:40 crc kubenswrapper[4830]: I0227 17:03:40.953939 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wpgl4"] Feb 27 17:03:40 crc kubenswrapper[4830]: I0227 17:03:40.954424 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d37f08a-bebb-4dc2-8589-c0457fa36594-utilities\") pod \"redhat-operators-wpgl4\" (UID: \"6d37f08a-bebb-4dc2-8589-c0457fa36594\") " pod="openshift-marketplace/redhat-operators-wpgl4" Feb 27 17:03:40 crc kubenswrapper[4830]: I0227 17:03:40.954645 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swjdj\" (UniqueName: \"kubernetes.io/projected/6d37f08a-bebb-4dc2-8589-c0457fa36594-kube-api-access-swjdj\") pod \"redhat-operators-wpgl4\" (UID: \"6d37f08a-bebb-4dc2-8589-c0457fa36594\") " pod="openshift-marketplace/redhat-operators-wpgl4" Feb 27 17:03:40 crc kubenswrapper[4830]: I0227 17:03:40.954859 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d37f08a-bebb-4dc2-8589-c0457fa36594-catalog-content\") pod \"redhat-operators-wpgl4\" (UID: \"6d37f08a-bebb-4dc2-8589-c0457fa36594\") " pod="openshift-marketplace/redhat-operators-wpgl4" Feb 27 17:03:41 crc kubenswrapper[4830]: I0227 
17:03:41.055721 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d37f08a-bebb-4dc2-8589-c0457fa36594-catalog-content\") pod \"redhat-operators-wpgl4\" (UID: \"6d37f08a-bebb-4dc2-8589-c0457fa36594\") " pod="openshift-marketplace/redhat-operators-wpgl4" Feb 27 17:03:41 crc kubenswrapper[4830]: I0227 17:03:41.055848 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d37f08a-bebb-4dc2-8589-c0457fa36594-utilities\") pod \"redhat-operators-wpgl4\" (UID: \"6d37f08a-bebb-4dc2-8589-c0457fa36594\") " pod="openshift-marketplace/redhat-operators-wpgl4" Feb 27 17:03:41 crc kubenswrapper[4830]: I0227 17:03:41.055885 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-swjdj\" (UniqueName: \"kubernetes.io/projected/6d37f08a-bebb-4dc2-8589-c0457fa36594-kube-api-access-swjdj\") pod \"redhat-operators-wpgl4\" (UID: \"6d37f08a-bebb-4dc2-8589-c0457fa36594\") " pod="openshift-marketplace/redhat-operators-wpgl4" Feb 27 17:03:41 crc kubenswrapper[4830]: I0227 17:03:41.056490 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d37f08a-bebb-4dc2-8589-c0457fa36594-catalog-content\") pod \"redhat-operators-wpgl4\" (UID: \"6d37f08a-bebb-4dc2-8589-c0457fa36594\") " pod="openshift-marketplace/redhat-operators-wpgl4" Feb 27 17:03:41 crc kubenswrapper[4830]: I0227 17:03:41.058424 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d37f08a-bebb-4dc2-8589-c0457fa36594-utilities\") pod \"redhat-operators-wpgl4\" (UID: \"6d37f08a-bebb-4dc2-8589-c0457fa36594\") " pod="openshift-marketplace/redhat-operators-wpgl4" Feb 27 17:03:41 crc kubenswrapper[4830]: I0227 17:03:41.094860 4830 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-swjdj\" (UniqueName: \"kubernetes.io/projected/6d37f08a-bebb-4dc2-8589-c0457fa36594-kube-api-access-swjdj\") pod \"redhat-operators-wpgl4\" (UID: \"6d37f08a-bebb-4dc2-8589-c0457fa36594\") " pod="openshift-marketplace/redhat-operators-wpgl4" Feb 27 17:03:41 crc kubenswrapper[4830]: I0227 17:03:41.280929 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wpgl4" Feb 27 17:03:41 crc kubenswrapper[4830]: I0227 17:03:41.758433 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wpgl4"] Feb 27 17:03:42 crc kubenswrapper[4830]: I0227 17:03:42.242873 4830 generic.go:334] "Generic (PLEG): container finished" podID="6d37f08a-bebb-4dc2-8589-c0457fa36594" containerID="f2152dfd9a365543c311d2d9246595b4e87c50dd1abf7b076d51fb463d6845de" exitCode=0 Feb 27 17:03:42 crc kubenswrapper[4830]: I0227 17:03:42.242921 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wpgl4" event={"ID":"6d37f08a-bebb-4dc2-8589-c0457fa36594","Type":"ContainerDied","Data":"f2152dfd9a365543c311d2d9246595b4e87c50dd1abf7b076d51fb463d6845de"} Feb 27 17:03:42 crc kubenswrapper[4830]: I0227 17:03:42.242970 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wpgl4" event={"ID":"6d37f08a-bebb-4dc2-8589-c0457fa36594","Type":"ContainerStarted","Data":"e1dc59b0758be9252208e4d889fb8b0fd2076912fab02ba3c313e60f168bc97d"} Feb 27 17:03:44 crc kubenswrapper[4830]: I0227 17:03:44.271338 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wpgl4" event={"ID":"6d37f08a-bebb-4dc2-8589-c0457fa36594","Type":"ContainerStarted","Data":"1786dbc530261a7614ce1f6defe231c8ad451a22247e6c0529dd4d453e222672"} Feb 27 17:03:45 crc kubenswrapper[4830]: I0227 17:03:45.288470 4830 generic.go:334] "Generic (PLEG): container finished" 
podID="6d37f08a-bebb-4dc2-8589-c0457fa36594" containerID="1786dbc530261a7614ce1f6defe231c8ad451a22247e6c0529dd4d453e222672" exitCode=0 Feb 27 17:03:45 crc kubenswrapper[4830]: I0227 17:03:45.288638 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wpgl4" event={"ID":"6d37f08a-bebb-4dc2-8589-c0457fa36594","Type":"ContainerDied","Data":"1786dbc530261a7614ce1f6defe231c8ad451a22247e6c0529dd4d453e222672"} Feb 27 17:03:46 crc kubenswrapper[4830]: I0227 17:03:46.302588 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wpgl4" event={"ID":"6d37f08a-bebb-4dc2-8589-c0457fa36594","Type":"ContainerStarted","Data":"3535d8544450ef0c31cab13841616c9a4c5f6eedf1a445fcefbf6b1fce31df22"} Feb 27 17:03:46 crc kubenswrapper[4830]: I0227 17:03:46.334277 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-wpgl4" podStartSLOduration=2.809753753 podStartE2EDuration="6.334242724s" podCreationTimestamp="2026-02-27 17:03:40 +0000 UTC" firstStartedPulling="2026-02-27 17:03:42.245399418 +0000 UTC m=+3418.334671921" lastFinishedPulling="2026-02-27 17:03:45.769888379 +0000 UTC m=+3421.859160892" observedRunningTime="2026-02-27 17:03:46.332694148 +0000 UTC m=+3422.421966651" watchObservedRunningTime="2026-02-27 17:03:46.334242724 +0000 UTC m=+3422.423515267" Feb 27 17:03:51 crc kubenswrapper[4830]: I0227 17:03:51.281682 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-wpgl4" Feb 27 17:03:51 crc kubenswrapper[4830]: I0227 17:03:51.282377 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-wpgl4" Feb 27 17:03:52 crc kubenswrapper[4830]: I0227 17:03:52.353457 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-wpgl4" podUID="6d37f08a-bebb-4dc2-8589-c0457fa36594" 
containerName="registry-server" probeResult="failure" output=< Feb 27 17:03:52 crc kubenswrapper[4830]: timeout: failed to connect service ":50051" within 1s Feb 27 17:03:52 crc kubenswrapper[4830]: > Feb 27 17:04:00 crc kubenswrapper[4830]: I0227 17:04:00.158129 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536864-4xmzm"] Feb 27 17:04:00 crc kubenswrapper[4830]: I0227 17:04:00.159688 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536864-4xmzm" Feb 27 17:04:00 crc kubenswrapper[4830]: I0227 17:04:00.162902 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 17:04:00 crc kubenswrapper[4830]: I0227 17:04:00.163239 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 17:04:00 crc kubenswrapper[4830]: I0227 17:04:00.163390 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lmbtm" Feb 27 17:04:00 crc kubenswrapper[4830]: I0227 17:04:00.170237 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536864-4xmzm"] Feb 27 17:04:00 crc kubenswrapper[4830]: I0227 17:04:00.194791 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkmc2\" (UniqueName: \"kubernetes.io/projected/c18c08f4-6698-4963-981d-0678064c6a3e-kube-api-access-hkmc2\") pod \"auto-csr-approver-29536864-4xmzm\" (UID: \"c18c08f4-6698-4963-981d-0678064c6a3e\") " pod="openshift-infra/auto-csr-approver-29536864-4xmzm" Feb 27 17:04:00 crc kubenswrapper[4830]: I0227 17:04:00.296038 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hkmc2\" (UniqueName: \"kubernetes.io/projected/c18c08f4-6698-4963-981d-0678064c6a3e-kube-api-access-hkmc2\") pod 
\"auto-csr-approver-29536864-4xmzm\" (UID: \"c18c08f4-6698-4963-981d-0678064c6a3e\") " pod="openshift-infra/auto-csr-approver-29536864-4xmzm" Feb 27 17:04:00 crc kubenswrapper[4830]: I0227 17:04:00.332587 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hkmc2\" (UniqueName: \"kubernetes.io/projected/c18c08f4-6698-4963-981d-0678064c6a3e-kube-api-access-hkmc2\") pod \"auto-csr-approver-29536864-4xmzm\" (UID: \"c18c08f4-6698-4963-981d-0678064c6a3e\") " pod="openshift-infra/auto-csr-approver-29536864-4xmzm" Feb 27 17:04:00 crc kubenswrapper[4830]: I0227 17:04:00.495426 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536864-4xmzm" Feb 27 17:04:00 crc kubenswrapper[4830]: I0227 17:04:00.820470 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536864-4xmzm"] Feb 27 17:04:00 crc kubenswrapper[4830]: W0227 17:04:00.826577 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc18c08f4_6698_4963_981d_0678064c6a3e.slice/crio-797a24dda36296092fe2c7da0373f382556744f4fd462096dc4cdf1cc0a06965 WatchSource:0}: Error finding container 797a24dda36296092fe2c7da0373f382556744f4fd462096dc4cdf1cc0a06965: Status 404 returned error can't find the container with id 797a24dda36296092fe2c7da0373f382556744f4fd462096dc4cdf1cc0a06965 Feb 27 17:04:01 crc kubenswrapper[4830]: I0227 17:04:01.364440 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-wpgl4" Feb 27 17:04:01 crc kubenswrapper[4830]: I0227 17:04:01.440644 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536864-4xmzm" event={"ID":"c18c08f4-6698-4963-981d-0678064c6a3e","Type":"ContainerStarted","Data":"797a24dda36296092fe2c7da0373f382556744f4fd462096dc4cdf1cc0a06965"} Feb 27 17:04:01 crc 
kubenswrapper[4830]: I0227 17:04:01.445160 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-wpgl4" Feb 27 17:04:01 crc kubenswrapper[4830]: I0227 17:04:01.614393 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wpgl4"] Feb 27 17:04:02 crc kubenswrapper[4830]: I0227 17:04:02.450452 4830 generic.go:334] "Generic (PLEG): container finished" podID="c18c08f4-6698-4963-981d-0678064c6a3e" containerID="ce641fd11cfcc54fe9cc918f5ce3c1628e6ab0cbfd0d9be2add7a890d701f64a" exitCode=0 Feb 27 17:04:02 crc kubenswrapper[4830]: I0227 17:04:02.450529 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536864-4xmzm" event={"ID":"c18c08f4-6698-4963-981d-0678064c6a3e","Type":"ContainerDied","Data":"ce641fd11cfcc54fe9cc918f5ce3c1628e6ab0cbfd0d9be2add7a890d701f64a"} Feb 27 17:04:02 crc kubenswrapper[4830]: I0227 17:04:02.450911 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-wpgl4" podUID="6d37f08a-bebb-4dc2-8589-c0457fa36594" containerName="registry-server" containerID="cri-o://3535d8544450ef0c31cab13841616c9a4c5f6eedf1a445fcefbf6b1fce31df22" gracePeriod=2 Feb 27 17:04:02 crc kubenswrapper[4830]: I0227 17:04:02.975882 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-wpgl4" Feb 27 17:04:03 crc kubenswrapper[4830]: I0227 17:04:03.049177 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d37f08a-bebb-4dc2-8589-c0457fa36594-utilities\") pod \"6d37f08a-bebb-4dc2-8589-c0457fa36594\" (UID: \"6d37f08a-bebb-4dc2-8589-c0457fa36594\") " Feb 27 17:04:03 crc kubenswrapper[4830]: I0227 17:04:03.049227 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d37f08a-bebb-4dc2-8589-c0457fa36594-catalog-content\") pod \"6d37f08a-bebb-4dc2-8589-c0457fa36594\" (UID: \"6d37f08a-bebb-4dc2-8589-c0457fa36594\") " Feb 27 17:04:03 crc kubenswrapper[4830]: I0227 17:04:03.049279 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-swjdj\" (UniqueName: \"kubernetes.io/projected/6d37f08a-bebb-4dc2-8589-c0457fa36594-kube-api-access-swjdj\") pod \"6d37f08a-bebb-4dc2-8589-c0457fa36594\" (UID: \"6d37f08a-bebb-4dc2-8589-c0457fa36594\") " Feb 27 17:04:03 crc kubenswrapper[4830]: I0227 17:04:03.050074 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6d37f08a-bebb-4dc2-8589-c0457fa36594-utilities" (OuterVolumeSpecName: "utilities") pod "6d37f08a-bebb-4dc2-8589-c0457fa36594" (UID: "6d37f08a-bebb-4dc2-8589-c0457fa36594"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 17:04:03 crc kubenswrapper[4830]: I0227 17:04:03.056157 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d37f08a-bebb-4dc2-8589-c0457fa36594-kube-api-access-swjdj" (OuterVolumeSpecName: "kube-api-access-swjdj") pod "6d37f08a-bebb-4dc2-8589-c0457fa36594" (UID: "6d37f08a-bebb-4dc2-8589-c0457fa36594"). InnerVolumeSpecName "kube-api-access-swjdj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:04:03 crc kubenswrapper[4830]: I0227 17:04:03.151179 4830 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d37f08a-bebb-4dc2-8589-c0457fa36594-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 17:04:03 crc kubenswrapper[4830]: I0227 17:04:03.151238 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-swjdj\" (UniqueName: \"kubernetes.io/projected/6d37f08a-bebb-4dc2-8589-c0457fa36594-kube-api-access-swjdj\") on node \"crc\" DevicePath \"\"" Feb 27 17:04:03 crc kubenswrapper[4830]: I0227 17:04:03.160618 4830 patch_prober.go:28] interesting pod/machine-config-daemon-2tv5v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 17:04:03 crc kubenswrapper[4830]: I0227 17:04:03.160686 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 17:04:03 crc kubenswrapper[4830]: I0227 17:04:03.272882 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6d37f08a-bebb-4dc2-8589-c0457fa36594-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6d37f08a-bebb-4dc2-8589-c0457fa36594" (UID: "6d37f08a-bebb-4dc2-8589-c0457fa36594"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 17:04:03 crc kubenswrapper[4830]: I0227 17:04:03.353301 4830 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d37f08a-bebb-4dc2-8589-c0457fa36594-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 17:04:03 crc kubenswrapper[4830]: I0227 17:04:03.463740 4830 generic.go:334] "Generic (PLEG): container finished" podID="6d37f08a-bebb-4dc2-8589-c0457fa36594" containerID="3535d8544450ef0c31cab13841616c9a4c5f6eedf1a445fcefbf6b1fce31df22" exitCode=0 Feb 27 17:04:03 crc kubenswrapper[4830]: I0227 17:04:03.464080 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wpgl4" Feb 27 17:04:03 crc kubenswrapper[4830]: I0227 17:04:03.465104 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wpgl4" event={"ID":"6d37f08a-bebb-4dc2-8589-c0457fa36594","Type":"ContainerDied","Data":"3535d8544450ef0c31cab13841616c9a4c5f6eedf1a445fcefbf6b1fce31df22"} Feb 27 17:04:03 crc kubenswrapper[4830]: I0227 17:04:03.465162 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wpgl4" event={"ID":"6d37f08a-bebb-4dc2-8589-c0457fa36594","Type":"ContainerDied","Data":"e1dc59b0758be9252208e4d889fb8b0fd2076912fab02ba3c313e60f168bc97d"} Feb 27 17:04:03 crc kubenswrapper[4830]: I0227 17:04:03.465192 4830 scope.go:117] "RemoveContainer" containerID="3535d8544450ef0c31cab13841616c9a4c5f6eedf1a445fcefbf6b1fce31df22" Feb 27 17:04:03 crc kubenswrapper[4830]: I0227 17:04:03.509549 4830 scope.go:117] "RemoveContainer" containerID="1786dbc530261a7614ce1f6defe231c8ad451a22247e6c0529dd4d453e222672" Feb 27 17:04:03 crc kubenswrapper[4830]: I0227 17:04:03.520257 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wpgl4"] Feb 27 17:04:03 crc kubenswrapper[4830]: I0227 
17:04:03.526248 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-wpgl4"] Feb 27 17:04:03 crc kubenswrapper[4830]: I0227 17:04:03.554193 4830 scope.go:117] "RemoveContainer" containerID="f2152dfd9a365543c311d2d9246595b4e87c50dd1abf7b076d51fb463d6845de" Feb 27 17:04:03 crc kubenswrapper[4830]: I0227 17:04:03.590712 4830 scope.go:117] "RemoveContainer" containerID="3535d8544450ef0c31cab13841616c9a4c5f6eedf1a445fcefbf6b1fce31df22" Feb 27 17:04:03 crc kubenswrapper[4830]: E0227 17:04:03.591552 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3535d8544450ef0c31cab13841616c9a4c5f6eedf1a445fcefbf6b1fce31df22\": container with ID starting with 3535d8544450ef0c31cab13841616c9a4c5f6eedf1a445fcefbf6b1fce31df22 not found: ID does not exist" containerID="3535d8544450ef0c31cab13841616c9a4c5f6eedf1a445fcefbf6b1fce31df22" Feb 27 17:04:03 crc kubenswrapper[4830]: I0227 17:04:03.591611 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3535d8544450ef0c31cab13841616c9a4c5f6eedf1a445fcefbf6b1fce31df22"} err="failed to get container status \"3535d8544450ef0c31cab13841616c9a4c5f6eedf1a445fcefbf6b1fce31df22\": rpc error: code = NotFound desc = could not find container \"3535d8544450ef0c31cab13841616c9a4c5f6eedf1a445fcefbf6b1fce31df22\": container with ID starting with 3535d8544450ef0c31cab13841616c9a4c5f6eedf1a445fcefbf6b1fce31df22 not found: ID does not exist" Feb 27 17:04:03 crc kubenswrapper[4830]: I0227 17:04:03.591652 4830 scope.go:117] "RemoveContainer" containerID="1786dbc530261a7614ce1f6defe231c8ad451a22247e6c0529dd4d453e222672" Feb 27 17:04:03 crc kubenswrapper[4830]: E0227 17:04:03.592218 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1786dbc530261a7614ce1f6defe231c8ad451a22247e6c0529dd4d453e222672\": container with ID 
starting with 1786dbc530261a7614ce1f6defe231c8ad451a22247e6c0529dd4d453e222672 not found: ID does not exist" containerID="1786dbc530261a7614ce1f6defe231c8ad451a22247e6c0529dd4d453e222672" Feb 27 17:04:03 crc kubenswrapper[4830]: I0227 17:04:03.592283 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1786dbc530261a7614ce1f6defe231c8ad451a22247e6c0529dd4d453e222672"} err="failed to get container status \"1786dbc530261a7614ce1f6defe231c8ad451a22247e6c0529dd4d453e222672\": rpc error: code = NotFound desc = could not find container \"1786dbc530261a7614ce1f6defe231c8ad451a22247e6c0529dd4d453e222672\": container with ID starting with 1786dbc530261a7614ce1f6defe231c8ad451a22247e6c0529dd4d453e222672 not found: ID does not exist" Feb 27 17:04:03 crc kubenswrapper[4830]: I0227 17:04:03.592446 4830 scope.go:117] "RemoveContainer" containerID="f2152dfd9a365543c311d2d9246595b4e87c50dd1abf7b076d51fb463d6845de" Feb 27 17:04:03 crc kubenswrapper[4830]: E0227 17:04:03.593019 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f2152dfd9a365543c311d2d9246595b4e87c50dd1abf7b076d51fb463d6845de\": container with ID starting with f2152dfd9a365543c311d2d9246595b4e87c50dd1abf7b076d51fb463d6845de not found: ID does not exist" containerID="f2152dfd9a365543c311d2d9246595b4e87c50dd1abf7b076d51fb463d6845de" Feb 27 17:04:03 crc kubenswrapper[4830]: I0227 17:04:03.593075 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f2152dfd9a365543c311d2d9246595b4e87c50dd1abf7b076d51fb463d6845de"} err="failed to get container status \"f2152dfd9a365543c311d2d9246595b4e87c50dd1abf7b076d51fb463d6845de\": rpc error: code = NotFound desc = could not find container \"f2152dfd9a365543c311d2d9246595b4e87c50dd1abf7b076d51fb463d6845de\": container with ID starting with f2152dfd9a365543c311d2d9246595b4e87c50dd1abf7b076d51fb463d6845de not found: 
ID does not exist" Feb 27 17:04:03 crc kubenswrapper[4830]: I0227 17:04:03.780896 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536864-4xmzm" Feb 27 17:04:03 crc kubenswrapper[4830]: I0227 17:04:03.962007 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hkmc2\" (UniqueName: \"kubernetes.io/projected/c18c08f4-6698-4963-981d-0678064c6a3e-kube-api-access-hkmc2\") pod \"c18c08f4-6698-4963-981d-0678064c6a3e\" (UID: \"c18c08f4-6698-4963-981d-0678064c6a3e\") " Feb 27 17:04:03 crc kubenswrapper[4830]: I0227 17:04:03.969662 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c18c08f4-6698-4963-981d-0678064c6a3e-kube-api-access-hkmc2" (OuterVolumeSpecName: "kube-api-access-hkmc2") pod "c18c08f4-6698-4963-981d-0678064c6a3e" (UID: "c18c08f4-6698-4963-981d-0678064c6a3e"). InnerVolumeSpecName "kube-api-access-hkmc2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:04:04 crc kubenswrapper[4830]: I0227 17:04:04.064336 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hkmc2\" (UniqueName: \"kubernetes.io/projected/c18c08f4-6698-4963-981d-0678064c6a3e-kube-api-access-hkmc2\") on node \"crc\" DevicePath \"\"" Feb 27 17:04:04 crc kubenswrapper[4830]: I0227 17:04:04.479161 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536864-4xmzm" event={"ID":"c18c08f4-6698-4963-981d-0678064c6a3e","Type":"ContainerDied","Data":"797a24dda36296092fe2c7da0373f382556744f4fd462096dc4cdf1cc0a06965"} Feb 27 17:04:04 crc kubenswrapper[4830]: I0227 17:04:04.479223 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="797a24dda36296092fe2c7da0373f382556744f4fd462096dc4cdf1cc0a06965" Feb 27 17:04:04 crc kubenswrapper[4830]: I0227 17:04:04.479301 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536864-4xmzm" Feb 27 17:04:04 crc kubenswrapper[4830]: I0227 17:04:04.776754 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d37f08a-bebb-4dc2-8589-c0457fa36594" path="/var/lib/kubelet/pods/6d37f08a-bebb-4dc2-8589-c0457fa36594/volumes" Feb 27 17:04:04 crc kubenswrapper[4830]: I0227 17:04:04.865270 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536858-z25p7"] Feb 27 17:04:04 crc kubenswrapper[4830]: I0227 17:04:04.874580 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536858-z25p7"] Feb 27 17:04:06 crc kubenswrapper[4830]: I0227 17:04:06.780625 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6cb7b9b4-2210-42b4-85a1-4ce2396e35bc" path="/var/lib/kubelet/pods/6cb7b9b4-2210-42b4-85a1-4ce2396e35bc/volumes" Feb 27 17:04:33 crc kubenswrapper[4830]: I0227 17:04:33.160201 4830 patch_prober.go:28] interesting pod/machine-config-daemon-2tv5v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 17:04:33 crc kubenswrapper[4830]: I0227 17:04:33.160862 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 17:04:33 crc kubenswrapper[4830]: I0227 17:04:33.161013 4830 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" Feb 27 17:04:33 crc kubenswrapper[4830]: I0227 17:04:33.161778 4830 kuberuntime_manager.go:1027] "Message for 
Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"db78cb7e4ed59ab2b04c3fd90bbd3ca09de79184879f6f7cafe4aab5e64ed8b6"} pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 27 17:04:33 crc kubenswrapper[4830]: I0227 17:04:33.161877 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" containerName="machine-config-daemon" containerID="cri-o://db78cb7e4ed59ab2b04c3fd90bbd3ca09de79184879f6f7cafe4aab5e64ed8b6" gracePeriod=600 Feb 27 17:04:33 crc kubenswrapper[4830]: I0227 17:04:33.773076 4830 generic.go:334] "Generic (PLEG): container finished" podID="00d6b7ce-4757-4275-8345-60c1b546ce25" containerID="db78cb7e4ed59ab2b04c3fd90bbd3ca09de79184879f6f7cafe4aab5e64ed8b6" exitCode=0 Feb 27 17:04:33 crc kubenswrapper[4830]: I0227 17:04:33.773199 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" event={"ID":"00d6b7ce-4757-4275-8345-60c1b546ce25","Type":"ContainerDied","Data":"db78cb7e4ed59ab2b04c3fd90bbd3ca09de79184879f6f7cafe4aab5e64ed8b6"} Feb 27 17:04:33 crc kubenswrapper[4830]: I0227 17:04:33.773659 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" event={"ID":"00d6b7ce-4757-4275-8345-60c1b546ce25","Type":"ContainerStarted","Data":"b59a0a2697e1673250d28d22947dbf29d890efdc4f2af61efcb7c8f01573fc27"} Feb 27 17:04:33 crc kubenswrapper[4830]: I0227 17:04:33.773695 4830 scope.go:117] "RemoveContainer" containerID="b1477650672db48c8bb6d798d8fc8040ec3d1666489a66fcec0561cf4cfade74" Feb 27 17:04:46 crc kubenswrapper[4830]: I0227 17:04:46.257466 4830 scope.go:117] "RemoveContainer" 
containerID="71154cbc3d9eaeaa49ce593042f8c3709c33b9bd72fb38798262821dddda085a" Feb 27 17:05:46 crc kubenswrapper[4830]: I0227 17:05:46.195033 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-qxxj9"] Feb 27 17:05:46 crc kubenswrapper[4830]: E0227 17:05:46.196509 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d37f08a-bebb-4dc2-8589-c0457fa36594" containerName="extract-content" Feb 27 17:05:46 crc kubenswrapper[4830]: I0227 17:05:46.196535 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d37f08a-bebb-4dc2-8589-c0457fa36594" containerName="extract-content" Feb 27 17:05:46 crc kubenswrapper[4830]: E0227 17:05:46.196565 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c18c08f4-6698-4963-981d-0678064c6a3e" containerName="oc" Feb 27 17:05:46 crc kubenswrapper[4830]: I0227 17:05:46.196577 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="c18c08f4-6698-4963-981d-0678064c6a3e" containerName="oc" Feb 27 17:05:46 crc kubenswrapper[4830]: E0227 17:05:46.196608 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d37f08a-bebb-4dc2-8589-c0457fa36594" containerName="registry-server" Feb 27 17:05:46 crc kubenswrapper[4830]: I0227 17:05:46.196625 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d37f08a-bebb-4dc2-8589-c0457fa36594" containerName="registry-server" Feb 27 17:05:46 crc kubenswrapper[4830]: E0227 17:05:46.196654 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d37f08a-bebb-4dc2-8589-c0457fa36594" containerName="extract-utilities" Feb 27 17:05:46 crc kubenswrapper[4830]: I0227 17:05:46.196666 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d37f08a-bebb-4dc2-8589-c0457fa36594" containerName="extract-utilities" Feb 27 17:05:46 crc kubenswrapper[4830]: I0227 17:05:46.196921 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d37f08a-bebb-4dc2-8589-c0457fa36594" 
containerName="registry-server" Feb 27 17:05:46 crc kubenswrapper[4830]: I0227 17:05:46.196937 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="c18c08f4-6698-4963-981d-0678064c6a3e" containerName="oc" Feb 27 17:05:46 crc kubenswrapper[4830]: I0227 17:05:46.198611 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qxxj9" Feb 27 17:05:46 crc kubenswrapper[4830]: I0227 17:05:46.215880 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qxxj9"] Feb 27 17:05:46 crc kubenswrapper[4830]: I0227 17:05:46.233936 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0bd7e461-7a47-428a-9391-dcd747f62b20-utilities\") pod \"redhat-marketplace-qxxj9\" (UID: \"0bd7e461-7a47-428a-9391-dcd747f62b20\") " pod="openshift-marketplace/redhat-marketplace-qxxj9" Feb 27 17:05:46 crc kubenswrapper[4830]: I0227 17:05:46.234027 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0bd7e461-7a47-428a-9391-dcd747f62b20-catalog-content\") pod \"redhat-marketplace-qxxj9\" (UID: \"0bd7e461-7a47-428a-9391-dcd747f62b20\") " pod="openshift-marketplace/redhat-marketplace-qxxj9" Feb 27 17:05:46 crc kubenswrapper[4830]: I0227 17:05:46.234081 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlxfp\" (UniqueName: \"kubernetes.io/projected/0bd7e461-7a47-428a-9391-dcd747f62b20-kube-api-access-xlxfp\") pod \"redhat-marketplace-qxxj9\" (UID: \"0bd7e461-7a47-428a-9391-dcd747f62b20\") " pod="openshift-marketplace/redhat-marketplace-qxxj9" Feb 27 17:05:46 crc kubenswrapper[4830]: I0227 17:05:46.335873 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/0bd7e461-7a47-428a-9391-dcd747f62b20-utilities\") pod \"redhat-marketplace-qxxj9\" (UID: \"0bd7e461-7a47-428a-9391-dcd747f62b20\") " pod="openshift-marketplace/redhat-marketplace-qxxj9" Feb 27 17:05:46 crc kubenswrapper[4830]: I0227 17:05:46.335931 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0bd7e461-7a47-428a-9391-dcd747f62b20-catalog-content\") pod \"redhat-marketplace-qxxj9\" (UID: \"0bd7e461-7a47-428a-9391-dcd747f62b20\") " pod="openshift-marketplace/redhat-marketplace-qxxj9" Feb 27 17:05:46 crc kubenswrapper[4830]: I0227 17:05:46.335985 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xlxfp\" (UniqueName: \"kubernetes.io/projected/0bd7e461-7a47-428a-9391-dcd747f62b20-kube-api-access-xlxfp\") pod \"redhat-marketplace-qxxj9\" (UID: \"0bd7e461-7a47-428a-9391-dcd747f62b20\") " pod="openshift-marketplace/redhat-marketplace-qxxj9" Feb 27 17:05:46 crc kubenswrapper[4830]: I0227 17:05:46.336771 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0bd7e461-7a47-428a-9391-dcd747f62b20-utilities\") pod \"redhat-marketplace-qxxj9\" (UID: \"0bd7e461-7a47-428a-9391-dcd747f62b20\") " pod="openshift-marketplace/redhat-marketplace-qxxj9" Feb 27 17:05:46 crc kubenswrapper[4830]: I0227 17:05:46.337072 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0bd7e461-7a47-428a-9391-dcd747f62b20-catalog-content\") pod \"redhat-marketplace-qxxj9\" (UID: \"0bd7e461-7a47-428a-9391-dcd747f62b20\") " pod="openshift-marketplace/redhat-marketplace-qxxj9" Feb 27 17:05:46 crc kubenswrapper[4830]: I0227 17:05:46.361596 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xlxfp\" (UniqueName: 
\"kubernetes.io/projected/0bd7e461-7a47-428a-9391-dcd747f62b20-kube-api-access-xlxfp\") pod \"redhat-marketplace-qxxj9\" (UID: \"0bd7e461-7a47-428a-9391-dcd747f62b20\") " pod="openshift-marketplace/redhat-marketplace-qxxj9" Feb 27 17:05:46 crc kubenswrapper[4830]: I0227 17:05:46.549708 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qxxj9" Feb 27 17:05:47 crc kubenswrapper[4830]: I0227 17:05:47.109733 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qxxj9"] Feb 27 17:05:47 crc kubenswrapper[4830]: W0227 17:05:47.129016 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0bd7e461_7a47_428a_9391_dcd747f62b20.slice/crio-abbd272b19ec6ecc900362db3303b3c3ae7cc9744fbd8327499a873ecf8e1222 WatchSource:0}: Error finding container abbd272b19ec6ecc900362db3303b3c3ae7cc9744fbd8327499a873ecf8e1222: Status 404 returned error can't find the container with id abbd272b19ec6ecc900362db3303b3c3ae7cc9744fbd8327499a873ecf8e1222 Feb 27 17:05:47 crc kubenswrapper[4830]: I0227 17:05:47.459672 4830 generic.go:334] "Generic (PLEG): container finished" podID="0bd7e461-7a47-428a-9391-dcd747f62b20" containerID="9011c3ade9f757ee6b933ba983fee13c18f57661711034a9a92780b5d26e4ef2" exitCode=0 Feb 27 17:05:47 crc kubenswrapper[4830]: I0227 17:05:47.459739 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qxxj9" event={"ID":"0bd7e461-7a47-428a-9391-dcd747f62b20","Type":"ContainerDied","Data":"9011c3ade9f757ee6b933ba983fee13c18f57661711034a9a92780b5d26e4ef2"} Feb 27 17:05:47 crc kubenswrapper[4830]: I0227 17:05:47.459778 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qxxj9" 
event={"ID":"0bd7e461-7a47-428a-9391-dcd747f62b20","Type":"ContainerStarted","Data":"abbd272b19ec6ecc900362db3303b3c3ae7cc9744fbd8327499a873ecf8e1222"} Feb 27 17:05:48 crc kubenswrapper[4830]: I0227 17:05:48.470812 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qxxj9" event={"ID":"0bd7e461-7a47-428a-9391-dcd747f62b20","Type":"ContainerStarted","Data":"35e7763c5bfe8e629c892f127387109538010ebd0d46daf4a1fc372e113c041a"} Feb 27 17:05:49 crc kubenswrapper[4830]: I0227 17:05:49.491999 4830 generic.go:334] "Generic (PLEG): container finished" podID="0bd7e461-7a47-428a-9391-dcd747f62b20" containerID="35e7763c5bfe8e629c892f127387109538010ebd0d46daf4a1fc372e113c041a" exitCode=0 Feb 27 17:05:49 crc kubenswrapper[4830]: I0227 17:05:49.492073 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qxxj9" event={"ID":"0bd7e461-7a47-428a-9391-dcd747f62b20","Type":"ContainerDied","Data":"35e7763c5bfe8e629c892f127387109538010ebd0d46daf4a1fc372e113c041a"} Feb 27 17:05:50 crc kubenswrapper[4830]: I0227 17:05:50.502495 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qxxj9" event={"ID":"0bd7e461-7a47-428a-9391-dcd747f62b20","Type":"ContainerStarted","Data":"82f0aefa6471a8c86a16fb462a90685910435ce5584f4ab13077cbbfa58e376b"} Feb 27 17:05:50 crc kubenswrapper[4830]: I0227 17:05:50.523577 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-qxxj9" podStartSLOduration=2.020001001 podStartE2EDuration="4.523558803s" podCreationTimestamp="2026-02-27 17:05:46 +0000 UTC" firstStartedPulling="2026-02-27 17:05:47.461812389 +0000 UTC m=+3543.551084872" lastFinishedPulling="2026-02-27 17:05:49.965370171 +0000 UTC m=+3546.054642674" observedRunningTime="2026-02-27 17:05:50.521052594 +0000 UTC m=+3546.610325057" watchObservedRunningTime="2026-02-27 17:05:50.523558803 +0000 UTC 
m=+3546.612831266" Feb 27 17:05:56 crc kubenswrapper[4830]: I0227 17:05:56.549910 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-qxxj9" Feb 27 17:05:56 crc kubenswrapper[4830]: I0227 17:05:56.551354 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-qxxj9" Feb 27 17:05:56 crc kubenswrapper[4830]: I0227 17:05:56.634622 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-qxxj9" Feb 27 17:05:57 crc kubenswrapper[4830]: I0227 17:05:57.645019 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-qxxj9" Feb 27 17:05:57 crc kubenswrapper[4830]: I0227 17:05:57.708818 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qxxj9"] Feb 27 17:05:59 crc kubenswrapper[4830]: I0227 17:05:59.586357 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-qxxj9" podUID="0bd7e461-7a47-428a-9391-dcd747f62b20" containerName="registry-server" containerID="cri-o://82f0aefa6471a8c86a16fb462a90685910435ce5584f4ab13077cbbfa58e376b" gracePeriod=2 Feb 27 17:05:59 crc kubenswrapper[4830]: I0227 17:05:59.996425 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qxxj9" Feb 27 17:06:00 crc kubenswrapper[4830]: I0227 17:06:00.063354 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0bd7e461-7a47-428a-9391-dcd747f62b20-catalog-content\") pod \"0bd7e461-7a47-428a-9391-dcd747f62b20\" (UID: \"0bd7e461-7a47-428a-9391-dcd747f62b20\") " Feb 27 17:06:00 crc kubenswrapper[4830]: I0227 17:06:00.063610 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xlxfp\" (UniqueName: \"kubernetes.io/projected/0bd7e461-7a47-428a-9391-dcd747f62b20-kube-api-access-xlxfp\") pod \"0bd7e461-7a47-428a-9391-dcd747f62b20\" (UID: \"0bd7e461-7a47-428a-9391-dcd747f62b20\") " Feb 27 17:06:00 crc kubenswrapper[4830]: I0227 17:06:00.063640 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0bd7e461-7a47-428a-9391-dcd747f62b20-utilities\") pod \"0bd7e461-7a47-428a-9391-dcd747f62b20\" (UID: \"0bd7e461-7a47-428a-9391-dcd747f62b20\") " Feb 27 17:06:00 crc kubenswrapper[4830]: I0227 17:06:00.065302 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0bd7e461-7a47-428a-9391-dcd747f62b20-utilities" (OuterVolumeSpecName: "utilities") pod "0bd7e461-7a47-428a-9391-dcd747f62b20" (UID: "0bd7e461-7a47-428a-9391-dcd747f62b20"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 17:06:00 crc kubenswrapper[4830]: I0227 17:06:00.070418 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0bd7e461-7a47-428a-9391-dcd747f62b20-kube-api-access-xlxfp" (OuterVolumeSpecName: "kube-api-access-xlxfp") pod "0bd7e461-7a47-428a-9391-dcd747f62b20" (UID: "0bd7e461-7a47-428a-9391-dcd747f62b20"). InnerVolumeSpecName "kube-api-access-xlxfp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:06:00 crc kubenswrapper[4830]: I0227 17:06:00.110910 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0bd7e461-7a47-428a-9391-dcd747f62b20-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0bd7e461-7a47-428a-9391-dcd747f62b20" (UID: "0bd7e461-7a47-428a-9391-dcd747f62b20"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 17:06:00 crc kubenswrapper[4830]: I0227 17:06:00.161065 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536866-nw6lt"] Feb 27 17:06:00 crc kubenswrapper[4830]: E0227 17:06:00.161483 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bd7e461-7a47-428a-9391-dcd747f62b20" containerName="registry-server" Feb 27 17:06:00 crc kubenswrapper[4830]: I0227 17:06:00.161515 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bd7e461-7a47-428a-9391-dcd747f62b20" containerName="registry-server" Feb 27 17:06:00 crc kubenswrapper[4830]: E0227 17:06:00.161529 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bd7e461-7a47-428a-9391-dcd747f62b20" containerName="extract-content" Feb 27 17:06:00 crc kubenswrapper[4830]: I0227 17:06:00.161541 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bd7e461-7a47-428a-9391-dcd747f62b20" containerName="extract-content" Feb 27 17:06:00 crc kubenswrapper[4830]: E0227 17:06:00.161573 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bd7e461-7a47-428a-9391-dcd747f62b20" containerName="extract-utilities" Feb 27 17:06:00 crc kubenswrapper[4830]: I0227 17:06:00.161582 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bd7e461-7a47-428a-9391-dcd747f62b20" containerName="extract-utilities" Feb 27 17:06:00 crc kubenswrapper[4830]: I0227 17:06:00.161798 4830 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="0bd7e461-7a47-428a-9391-dcd747f62b20" containerName="registry-server" Feb 27 17:06:00 crc kubenswrapper[4830]: I0227 17:06:00.162422 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536866-nw6lt" Feb 27 17:06:00 crc kubenswrapper[4830]: I0227 17:06:00.165213 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lmbtm" Feb 27 17:06:00 crc kubenswrapper[4830]: I0227 17:06:00.165497 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 17:06:00 crc kubenswrapper[4830]: I0227 17:06:00.165612 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xlxfp\" (UniqueName: \"kubernetes.io/projected/0bd7e461-7a47-428a-9391-dcd747f62b20-kube-api-access-xlxfp\") on node \"crc\" DevicePath \"\"" Feb 27 17:06:00 crc kubenswrapper[4830]: I0227 17:06:00.165649 4830 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0bd7e461-7a47-428a-9391-dcd747f62b20-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 17:06:00 crc kubenswrapper[4830]: I0227 17:06:00.165668 4830 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0bd7e461-7a47-428a-9391-dcd747f62b20-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 17:06:00 crc kubenswrapper[4830]: I0227 17:06:00.165838 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 17:06:00 crc kubenswrapper[4830]: I0227 17:06:00.173849 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536866-nw6lt"] Feb 27 17:06:00 crc kubenswrapper[4830]: I0227 17:06:00.266858 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4p2tx\" (UniqueName: 
\"kubernetes.io/projected/f5e4c7a5-debd-44f3-98c1-d9721748f0a1-kube-api-access-4p2tx\") pod \"auto-csr-approver-29536866-nw6lt\" (UID: \"f5e4c7a5-debd-44f3-98c1-d9721748f0a1\") " pod="openshift-infra/auto-csr-approver-29536866-nw6lt" Feb 27 17:06:00 crc kubenswrapper[4830]: I0227 17:06:00.369384 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4p2tx\" (UniqueName: \"kubernetes.io/projected/f5e4c7a5-debd-44f3-98c1-d9721748f0a1-kube-api-access-4p2tx\") pod \"auto-csr-approver-29536866-nw6lt\" (UID: \"f5e4c7a5-debd-44f3-98c1-d9721748f0a1\") " pod="openshift-infra/auto-csr-approver-29536866-nw6lt" Feb 27 17:06:00 crc kubenswrapper[4830]: I0227 17:06:00.396609 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4p2tx\" (UniqueName: \"kubernetes.io/projected/f5e4c7a5-debd-44f3-98c1-d9721748f0a1-kube-api-access-4p2tx\") pod \"auto-csr-approver-29536866-nw6lt\" (UID: \"f5e4c7a5-debd-44f3-98c1-d9721748f0a1\") " pod="openshift-infra/auto-csr-approver-29536866-nw6lt" Feb 27 17:06:00 crc kubenswrapper[4830]: I0227 17:06:00.498037 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536866-nw6lt" Feb 27 17:06:00 crc kubenswrapper[4830]: I0227 17:06:00.605235 4830 generic.go:334] "Generic (PLEG): container finished" podID="0bd7e461-7a47-428a-9391-dcd747f62b20" containerID="82f0aefa6471a8c86a16fb462a90685910435ce5584f4ab13077cbbfa58e376b" exitCode=0 Feb 27 17:06:00 crc kubenswrapper[4830]: I0227 17:06:00.605302 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qxxj9" event={"ID":"0bd7e461-7a47-428a-9391-dcd747f62b20","Type":"ContainerDied","Data":"82f0aefa6471a8c86a16fb462a90685910435ce5584f4ab13077cbbfa58e376b"} Feb 27 17:06:00 crc kubenswrapper[4830]: I0227 17:06:00.605339 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qxxj9" Feb 27 17:06:00 crc kubenswrapper[4830]: I0227 17:06:00.605369 4830 scope.go:117] "RemoveContainer" containerID="82f0aefa6471a8c86a16fb462a90685910435ce5584f4ab13077cbbfa58e376b" Feb 27 17:06:00 crc kubenswrapper[4830]: I0227 17:06:00.605346 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qxxj9" event={"ID":"0bd7e461-7a47-428a-9391-dcd747f62b20","Type":"ContainerDied","Data":"abbd272b19ec6ecc900362db3303b3c3ae7cc9744fbd8327499a873ecf8e1222"} Feb 27 17:06:00 crc kubenswrapper[4830]: I0227 17:06:00.633523 4830 scope.go:117] "RemoveContainer" containerID="35e7763c5bfe8e629c892f127387109538010ebd0d46daf4a1fc372e113c041a" Feb 27 17:06:00 crc kubenswrapper[4830]: I0227 17:06:00.677388 4830 scope.go:117] "RemoveContainer" containerID="9011c3ade9f757ee6b933ba983fee13c18f57661711034a9a92780b5d26e4ef2" Feb 27 17:06:00 crc kubenswrapper[4830]: I0227 17:06:00.677599 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qxxj9"] Feb 27 17:06:00 crc kubenswrapper[4830]: I0227 17:06:00.695781 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-qxxj9"] Feb 27 17:06:00 crc kubenswrapper[4830]: I0227 17:06:00.698683 4830 scope.go:117] "RemoveContainer" containerID="82f0aefa6471a8c86a16fb462a90685910435ce5584f4ab13077cbbfa58e376b" Feb 27 17:06:00 crc kubenswrapper[4830]: E0227 17:06:00.702281 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"82f0aefa6471a8c86a16fb462a90685910435ce5584f4ab13077cbbfa58e376b\": container with ID starting with 82f0aefa6471a8c86a16fb462a90685910435ce5584f4ab13077cbbfa58e376b not found: ID does not exist" containerID="82f0aefa6471a8c86a16fb462a90685910435ce5584f4ab13077cbbfa58e376b" Feb 27 17:06:00 crc kubenswrapper[4830]: I0227 17:06:00.702314 4830 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82f0aefa6471a8c86a16fb462a90685910435ce5584f4ab13077cbbfa58e376b"} err="failed to get container status \"82f0aefa6471a8c86a16fb462a90685910435ce5584f4ab13077cbbfa58e376b\": rpc error: code = NotFound desc = could not find container \"82f0aefa6471a8c86a16fb462a90685910435ce5584f4ab13077cbbfa58e376b\": container with ID starting with 82f0aefa6471a8c86a16fb462a90685910435ce5584f4ab13077cbbfa58e376b not found: ID does not exist" Feb 27 17:06:00 crc kubenswrapper[4830]: I0227 17:06:00.702336 4830 scope.go:117] "RemoveContainer" containerID="35e7763c5bfe8e629c892f127387109538010ebd0d46daf4a1fc372e113c041a" Feb 27 17:06:00 crc kubenswrapper[4830]: E0227 17:06:00.703260 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"35e7763c5bfe8e629c892f127387109538010ebd0d46daf4a1fc372e113c041a\": container with ID starting with 35e7763c5bfe8e629c892f127387109538010ebd0d46daf4a1fc372e113c041a not found: ID does not exist" containerID="35e7763c5bfe8e629c892f127387109538010ebd0d46daf4a1fc372e113c041a" Feb 27 17:06:00 crc kubenswrapper[4830]: I0227 17:06:00.703321 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35e7763c5bfe8e629c892f127387109538010ebd0d46daf4a1fc372e113c041a"} err="failed to get container status \"35e7763c5bfe8e629c892f127387109538010ebd0d46daf4a1fc372e113c041a\": rpc error: code = NotFound desc = could not find container \"35e7763c5bfe8e629c892f127387109538010ebd0d46daf4a1fc372e113c041a\": container with ID starting with 35e7763c5bfe8e629c892f127387109538010ebd0d46daf4a1fc372e113c041a not found: ID does not exist" Feb 27 17:06:00 crc kubenswrapper[4830]: I0227 17:06:00.703359 4830 scope.go:117] "RemoveContainer" containerID="9011c3ade9f757ee6b933ba983fee13c18f57661711034a9a92780b5d26e4ef2" Feb 27 17:06:00 crc kubenswrapper[4830]: E0227 
17:06:00.703825 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9011c3ade9f757ee6b933ba983fee13c18f57661711034a9a92780b5d26e4ef2\": container with ID starting with 9011c3ade9f757ee6b933ba983fee13c18f57661711034a9a92780b5d26e4ef2 not found: ID does not exist" containerID="9011c3ade9f757ee6b933ba983fee13c18f57661711034a9a92780b5d26e4ef2" Feb 27 17:06:00 crc kubenswrapper[4830]: I0227 17:06:00.703875 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9011c3ade9f757ee6b933ba983fee13c18f57661711034a9a92780b5d26e4ef2"} err="failed to get container status \"9011c3ade9f757ee6b933ba983fee13c18f57661711034a9a92780b5d26e4ef2\": rpc error: code = NotFound desc = could not find container \"9011c3ade9f757ee6b933ba983fee13c18f57661711034a9a92780b5d26e4ef2\": container with ID starting with 9011c3ade9f757ee6b933ba983fee13c18f57661711034a9a92780b5d26e4ef2 not found: ID does not exist" Feb 27 17:06:00 crc kubenswrapper[4830]: I0227 17:06:00.773400 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0bd7e461-7a47-428a-9391-dcd747f62b20" path="/var/lib/kubelet/pods/0bd7e461-7a47-428a-9391-dcd747f62b20/volumes" Feb 27 17:06:00 crc kubenswrapper[4830]: I0227 17:06:00.803172 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536866-nw6lt"] Feb 27 17:06:01 crc kubenswrapper[4830]: I0227 17:06:01.618046 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536866-nw6lt" event={"ID":"f5e4c7a5-debd-44f3-98c1-d9721748f0a1","Type":"ContainerStarted","Data":"42b43ee718faaf0c987834ca9d8718037398b099fd2aee4313924c90824b512c"} Feb 27 17:06:02 crc kubenswrapper[4830]: I0227 17:06:02.631216 4830 generic.go:334] "Generic (PLEG): container finished" podID="f5e4c7a5-debd-44f3-98c1-d9721748f0a1" 
containerID="11728316a2977378a13183287d892146bd92587513db90bc43ef19fc66bf9cb8" exitCode=0 Feb 27 17:06:02 crc kubenswrapper[4830]: I0227 17:06:02.631321 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536866-nw6lt" event={"ID":"f5e4c7a5-debd-44f3-98c1-d9721748f0a1","Type":"ContainerDied","Data":"11728316a2977378a13183287d892146bd92587513db90bc43ef19fc66bf9cb8"} Feb 27 17:06:04 crc kubenswrapper[4830]: I0227 17:06:04.006539 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536866-nw6lt" Feb 27 17:06:04 crc kubenswrapper[4830]: I0227 17:06:04.141891 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4p2tx\" (UniqueName: \"kubernetes.io/projected/f5e4c7a5-debd-44f3-98c1-d9721748f0a1-kube-api-access-4p2tx\") pod \"f5e4c7a5-debd-44f3-98c1-d9721748f0a1\" (UID: \"f5e4c7a5-debd-44f3-98c1-d9721748f0a1\") " Feb 27 17:06:04 crc kubenswrapper[4830]: I0227 17:06:04.152412 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5e4c7a5-debd-44f3-98c1-d9721748f0a1-kube-api-access-4p2tx" (OuterVolumeSpecName: "kube-api-access-4p2tx") pod "f5e4c7a5-debd-44f3-98c1-d9721748f0a1" (UID: "f5e4c7a5-debd-44f3-98c1-d9721748f0a1"). InnerVolumeSpecName "kube-api-access-4p2tx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:06:04 crc kubenswrapper[4830]: I0227 17:06:04.243918 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4p2tx\" (UniqueName: \"kubernetes.io/projected/f5e4c7a5-debd-44f3-98c1-d9721748f0a1-kube-api-access-4p2tx\") on node \"crc\" DevicePath \"\"" Feb 27 17:06:04 crc kubenswrapper[4830]: I0227 17:06:04.653265 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536866-nw6lt" event={"ID":"f5e4c7a5-debd-44f3-98c1-d9721748f0a1","Type":"ContainerDied","Data":"42b43ee718faaf0c987834ca9d8718037398b099fd2aee4313924c90824b512c"} Feb 27 17:06:04 crc kubenswrapper[4830]: I0227 17:06:04.653324 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="42b43ee718faaf0c987834ca9d8718037398b099fd2aee4313924c90824b512c" Feb 27 17:06:04 crc kubenswrapper[4830]: I0227 17:06:04.653382 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536866-nw6lt" Feb 27 17:06:05 crc kubenswrapper[4830]: I0227 17:06:05.095458 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536860-96hd5"] Feb 27 17:06:05 crc kubenswrapper[4830]: I0227 17:06:05.103363 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536860-96hd5"] Feb 27 17:06:06 crc kubenswrapper[4830]: I0227 17:06:06.782787 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71af92b9-5cee-48bf-8401-801a0851d27c" path="/var/lib/kubelet/pods/71af92b9-5cee-48bf-8401-801a0851d27c/volumes" Feb 27 17:06:33 crc kubenswrapper[4830]: I0227 17:06:33.160511 4830 patch_prober.go:28] interesting pod/machine-config-daemon-2tv5v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body= Feb 27 17:06:33 crc kubenswrapper[4830]: I0227 17:06:33.161380 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 17:06:46 crc kubenswrapper[4830]: I0227 17:06:46.393613 4830 scope.go:117] "RemoveContainer" containerID="8d45f9a4a6106e6eb09fd257795deb102bc58ca15d8874897d297e6357ff41c4" Feb 27 17:07:03 crc kubenswrapper[4830]: I0227 17:07:03.160690 4830 patch_prober.go:28] interesting pod/machine-config-daemon-2tv5v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 17:07:03 crc kubenswrapper[4830]: I0227 17:07:03.161409 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 17:07:15 crc kubenswrapper[4830]: I0227 17:07:15.719826 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-88jj7"] Feb 27 17:07:15 crc kubenswrapper[4830]: E0227 17:07:15.721550 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5e4c7a5-debd-44f3-98c1-d9721748f0a1" containerName="oc" Feb 27 17:07:15 crc kubenswrapper[4830]: I0227 17:07:15.721577 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5e4c7a5-debd-44f3-98c1-d9721748f0a1" containerName="oc" Feb 27 17:07:15 crc kubenswrapper[4830]: I0227 17:07:15.721891 4830 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="f5e4c7a5-debd-44f3-98c1-d9721748f0a1" containerName="oc" Feb 27 17:07:15 crc kubenswrapper[4830]: I0227 17:07:15.724392 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-88jj7" Feb 27 17:07:15 crc kubenswrapper[4830]: I0227 17:07:15.739836 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-88jj7"] Feb 27 17:07:15 crc kubenswrapper[4830]: I0227 17:07:15.871961 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbw82\" (UniqueName: \"kubernetes.io/projected/e2c36b25-9f54-4a84-a46e-1dda62252b1e-kube-api-access-kbw82\") pod \"certified-operators-88jj7\" (UID: \"e2c36b25-9f54-4a84-a46e-1dda62252b1e\") " pod="openshift-marketplace/certified-operators-88jj7" Feb 27 17:07:15 crc kubenswrapper[4830]: I0227 17:07:15.872156 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2c36b25-9f54-4a84-a46e-1dda62252b1e-catalog-content\") pod \"certified-operators-88jj7\" (UID: \"e2c36b25-9f54-4a84-a46e-1dda62252b1e\") " pod="openshift-marketplace/certified-operators-88jj7" Feb 27 17:07:15 crc kubenswrapper[4830]: I0227 17:07:15.872302 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2c36b25-9f54-4a84-a46e-1dda62252b1e-utilities\") pod \"certified-operators-88jj7\" (UID: \"e2c36b25-9f54-4a84-a46e-1dda62252b1e\") " pod="openshift-marketplace/certified-operators-88jj7" Feb 27 17:07:15 crc kubenswrapper[4830]: I0227 17:07:15.974093 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kbw82\" (UniqueName: \"kubernetes.io/projected/e2c36b25-9f54-4a84-a46e-1dda62252b1e-kube-api-access-kbw82\") pod 
\"certified-operators-88jj7\" (UID: \"e2c36b25-9f54-4a84-a46e-1dda62252b1e\") " pod="openshift-marketplace/certified-operators-88jj7" Feb 27 17:07:15 crc kubenswrapper[4830]: I0227 17:07:15.974232 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2c36b25-9f54-4a84-a46e-1dda62252b1e-catalog-content\") pod \"certified-operators-88jj7\" (UID: \"e2c36b25-9f54-4a84-a46e-1dda62252b1e\") " pod="openshift-marketplace/certified-operators-88jj7" Feb 27 17:07:15 crc kubenswrapper[4830]: I0227 17:07:15.974776 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2c36b25-9f54-4a84-a46e-1dda62252b1e-catalog-content\") pod \"certified-operators-88jj7\" (UID: \"e2c36b25-9f54-4a84-a46e-1dda62252b1e\") " pod="openshift-marketplace/certified-operators-88jj7" Feb 27 17:07:15 crc kubenswrapper[4830]: I0227 17:07:15.975162 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2c36b25-9f54-4a84-a46e-1dda62252b1e-utilities\") pod \"certified-operators-88jj7\" (UID: \"e2c36b25-9f54-4a84-a46e-1dda62252b1e\") " pod="openshift-marketplace/certified-operators-88jj7" Feb 27 17:07:15 crc kubenswrapper[4830]: I0227 17:07:15.974872 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2c36b25-9f54-4a84-a46e-1dda62252b1e-utilities\") pod \"certified-operators-88jj7\" (UID: \"e2c36b25-9f54-4a84-a46e-1dda62252b1e\") " pod="openshift-marketplace/certified-operators-88jj7" Feb 27 17:07:16 crc kubenswrapper[4830]: I0227 17:07:16.007966 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kbw82\" (UniqueName: \"kubernetes.io/projected/e2c36b25-9f54-4a84-a46e-1dda62252b1e-kube-api-access-kbw82\") pod \"certified-operators-88jj7\" (UID: 
\"e2c36b25-9f54-4a84-a46e-1dda62252b1e\") " pod="openshift-marketplace/certified-operators-88jj7" Feb 27 17:07:16 crc kubenswrapper[4830]: I0227 17:07:16.052999 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-88jj7" Feb 27 17:07:16 crc kubenswrapper[4830]: I0227 17:07:16.385152 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-88jj7"] Feb 27 17:07:17 crc kubenswrapper[4830]: I0227 17:07:17.381307 4830 generic.go:334] "Generic (PLEG): container finished" podID="e2c36b25-9f54-4a84-a46e-1dda62252b1e" containerID="3b8dbc918dbb249f780ac0305e2c9e9bc19ee91523c87f265724ef6316f246f4" exitCode=0 Feb 27 17:07:17 crc kubenswrapper[4830]: I0227 17:07:17.381389 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-88jj7" event={"ID":"e2c36b25-9f54-4a84-a46e-1dda62252b1e","Type":"ContainerDied","Data":"3b8dbc918dbb249f780ac0305e2c9e9bc19ee91523c87f265724ef6316f246f4"} Feb 27 17:07:17 crc kubenswrapper[4830]: I0227 17:07:17.381830 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-88jj7" event={"ID":"e2c36b25-9f54-4a84-a46e-1dda62252b1e","Type":"ContainerStarted","Data":"2e703e1428f378b0a38147844432e2f2b8640c6c67206970be953e8a41941362"} Feb 27 17:07:17 crc kubenswrapper[4830]: I0227 17:07:17.384635 4830 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 27 17:07:19 crc kubenswrapper[4830]: I0227 17:07:19.406675 4830 generic.go:334] "Generic (PLEG): container finished" podID="e2c36b25-9f54-4a84-a46e-1dda62252b1e" containerID="9ce0fe0b0cd31232f8f2a93bc22ccc610305f8ab4a3319c238f5138e1a6b082f" exitCode=0 Feb 27 17:07:19 crc kubenswrapper[4830]: I0227 17:07:19.406789 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-88jj7" 
event={"ID":"e2c36b25-9f54-4a84-a46e-1dda62252b1e","Type":"ContainerDied","Data":"9ce0fe0b0cd31232f8f2a93bc22ccc610305f8ab4a3319c238f5138e1a6b082f"} Feb 27 17:07:20 crc kubenswrapper[4830]: I0227 17:07:20.423260 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-88jj7" event={"ID":"e2c36b25-9f54-4a84-a46e-1dda62252b1e","Type":"ContainerStarted","Data":"a2b3b5b8aff26d2b2f551900e3a643bb66e6639d64f11f375bc59872f72d7efc"} Feb 27 17:07:20 crc kubenswrapper[4830]: I0227 17:07:20.462829 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-88jj7" podStartSLOduration=3.010193158 podStartE2EDuration="5.462797664s" podCreationTimestamp="2026-02-27 17:07:15 +0000 UTC" firstStartedPulling="2026-02-27 17:07:17.384188677 +0000 UTC m=+3633.473461180" lastFinishedPulling="2026-02-27 17:07:19.836793213 +0000 UTC m=+3635.926065686" observedRunningTime="2026-02-27 17:07:20.452053293 +0000 UTC m=+3636.541325776" watchObservedRunningTime="2026-02-27 17:07:20.462797664 +0000 UTC m=+3636.552070167" Feb 27 17:07:26 crc kubenswrapper[4830]: I0227 17:07:26.054089 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-88jj7" Feb 27 17:07:26 crc kubenswrapper[4830]: I0227 17:07:26.055132 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-88jj7" Feb 27 17:07:26 crc kubenswrapper[4830]: I0227 17:07:26.130907 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-88jj7" Feb 27 17:07:26 crc kubenswrapper[4830]: I0227 17:07:26.554746 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-88jj7" Feb 27 17:07:26 crc kubenswrapper[4830]: I0227 17:07:26.630644 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/certified-operators-88jj7"] Feb 27 17:07:28 crc kubenswrapper[4830]: I0227 17:07:28.501547 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-88jj7" podUID="e2c36b25-9f54-4a84-a46e-1dda62252b1e" containerName="registry-server" containerID="cri-o://a2b3b5b8aff26d2b2f551900e3a643bb66e6639d64f11f375bc59872f72d7efc" gracePeriod=2 Feb 27 17:07:29 crc kubenswrapper[4830]: I0227 17:07:29.025285 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-88jj7" Feb 27 17:07:29 crc kubenswrapper[4830]: I0227 17:07:29.110166 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2c36b25-9f54-4a84-a46e-1dda62252b1e-catalog-content\") pod \"e2c36b25-9f54-4a84-a46e-1dda62252b1e\" (UID: \"e2c36b25-9f54-4a84-a46e-1dda62252b1e\") " Feb 27 17:07:29 crc kubenswrapper[4830]: I0227 17:07:29.110256 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2c36b25-9f54-4a84-a46e-1dda62252b1e-utilities\") pod \"e2c36b25-9f54-4a84-a46e-1dda62252b1e\" (UID: \"e2c36b25-9f54-4a84-a46e-1dda62252b1e\") " Feb 27 17:07:29 crc kubenswrapper[4830]: I0227 17:07:29.110282 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kbw82\" (UniqueName: \"kubernetes.io/projected/e2c36b25-9f54-4a84-a46e-1dda62252b1e-kube-api-access-kbw82\") pod \"e2c36b25-9f54-4a84-a46e-1dda62252b1e\" (UID: \"e2c36b25-9f54-4a84-a46e-1dda62252b1e\") " Feb 27 17:07:29 crc kubenswrapper[4830]: I0227 17:07:29.112408 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e2c36b25-9f54-4a84-a46e-1dda62252b1e-utilities" (OuterVolumeSpecName: "utilities") pod "e2c36b25-9f54-4a84-a46e-1dda62252b1e" (UID: 
"e2c36b25-9f54-4a84-a46e-1dda62252b1e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 17:07:29 crc kubenswrapper[4830]: I0227 17:07:29.123237 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2c36b25-9f54-4a84-a46e-1dda62252b1e-kube-api-access-kbw82" (OuterVolumeSpecName: "kube-api-access-kbw82") pod "e2c36b25-9f54-4a84-a46e-1dda62252b1e" (UID: "e2c36b25-9f54-4a84-a46e-1dda62252b1e"). InnerVolumeSpecName "kube-api-access-kbw82". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:07:29 crc kubenswrapper[4830]: I0227 17:07:29.217444 4830 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2c36b25-9f54-4a84-a46e-1dda62252b1e-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 17:07:29 crc kubenswrapper[4830]: I0227 17:07:29.217483 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kbw82\" (UniqueName: \"kubernetes.io/projected/e2c36b25-9f54-4a84-a46e-1dda62252b1e-kube-api-access-kbw82\") on node \"crc\" DevicePath \"\"" Feb 27 17:07:29 crc kubenswrapper[4830]: I0227 17:07:29.249518 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e2c36b25-9f54-4a84-a46e-1dda62252b1e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e2c36b25-9f54-4a84-a46e-1dda62252b1e" (UID: "e2c36b25-9f54-4a84-a46e-1dda62252b1e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 17:07:29 crc kubenswrapper[4830]: I0227 17:07:29.319459 4830 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2c36b25-9f54-4a84-a46e-1dda62252b1e-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 17:07:29 crc kubenswrapper[4830]: I0227 17:07:29.519099 4830 generic.go:334] "Generic (PLEG): container finished" podID="e2c36b25-9f54-4a84-a46e-1dda62252b1e" containerID="a2b3b5b8aff26d2b2f551900e3a643bb66e6639d64f11f375bc59872f72d7efc" exitCode=0 Feb 27 17:07:29 crc kubenswrapper[4830]: I0227 17:07:29.519167 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-88jj7" event={"ID":"e2c36b25-9f54-4a84-a46e-1dda62252b1e","Type":"ContainerDied","Data":"a2b3b5b8aff26d2b2f551900e3a643bb66e6639d64f11f375bc59872f72d7efc"} Feb 27 17:07:29 crc kubenswrapper[4830]: I0227 17:07:29.519190 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-88jj7" Feb 27 17:07:29 crc kubenswrapper[4830]: I0227 17:07:29.519223 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-88jj7" event={"ID":"e2c36b25-9f54-4a84-a46e-1dda62252b1e","Type":"ContainerDied","Data":"2e703e1428f378b0a38147844432e2f2b8640c6c67206970be953e8a41941362"} Feb 27 17:07:29 crc kubenswrapper[4830]: I0227 17:07:29.519256 4830 scope.go:117] "RemoveContainer" containerID="a2b3b5b8aff26d2b2f551900e3a643bb66e6639d64f11f375bc59872f72d7efc" Feb 27 17:07:29 crc kubenswrapper[4830]: I0227 17:07:29.572077 4830 scope.go:117] "RemoveContainer" containerID="9ce0fe0b0cd31232f8f2a93bc22ccc610305f8ab4a3319c238f5138e1a6b082f" Feb 27 17:07:29 crc kubenswrapper[4830]: I0227 17:07:29.594934 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-88jj7"] Feb 27 17:07:29 crc kubenswrapper[4830]: I0227 17:07:29.601530 4830 scope.go:117] "RemoveContainer" containerID="3b8dbc918dbb249f780ac0305e2c9e9bc19ee91523c87f265724ef6316f246f4" Feb 27 17:07:29 crc kubenswrapper[4830]: I0227 17:07:29.611752 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-88jj7"] Feb 27 17:07:29 crc kubenswrapper[4830]: I0227 17:07:29.649140 4830 scope.go:117] "RemoveContainer" containerID="a2b3b5b8aff26d2b2f551900e3a643bb66e6639d64f11f375bc59872f72d7efc" Feb 27 17:07:29 crc kubenswrapper[4830]: E0227 17:07:29.649978 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a2b3b5b8aff26d2b2f551900e3a643bb66e6639d64f11f375bc59872f72d7efc\": container with ID starting with a2b3b5b8aff26d2b2f551900e3a643bb66e6639d64f11f375bc59872f72d7efc not found: ID does not exist" containerID="a2b3b5b8aff26d2b2f551900e3a643bb66e6639d64f11f375bc59872f72d7efc" Feb 27 17:07:29 crc kubenswrapper[4830]: I0227 17:07:29.650053 4830 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2b3b5b8aff26d2b2f551900e3a643bb66e6639d64f11f375bc59872f72d7efc"} err="failed to get container status \"a2b3b5b8aff26d2b2f551900e3a643bb66e6639d64f11f375bc59872f72d7efc\": rpc error: code = NotFound desc = could not find container \"a2b3b5b8aff26d2b2f551900e3a643bb66e6639d64f11f375bc59872f72d7efc\": container with ID starting with a2b3b5b8aff26d2b2f551900e3a643bb66e6639d64f11f375bc59872f72d7efc not found: ID does not exist" Feb 27 17:07:29 crc kubenswrapper[4830]: I0227 17:07:29.650095 4830 scope.go:117] "RemoveContainer" containerID="9ce0fe0b0cd31232f8f2a93bc22ccc610305f8ab4a3319c238f5138e1a6b082f" Feb 27 17:07:29 crc kubenswrapper[4830]: E0227 17:07:29.650723 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ce0fe0b0cd31232f8f2a93bc22ccc610305f8ab4a3319c238f5138e1a6b082f\": container with ID starting with 9ce0fe0b0cd31232f8f2a93bc22ccc610305f8ab4a3319c238f5138e1a6b082f not found: ID does not exist" containerID="9ce0fe0b0cd31232f8f2a93bc22ccc610305f8ab4a3319c238f5138e1a6b082f" Feb 27 17:07:29 crc kubenswrapper[4830]: I0227 17:07:29.650786 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ce0fe0b0cd31232f8f2a93bc22ccc610305f8ab4a3319c238f5138e1a6b082f"} err="failed to get container status \"9ce0fe0b0cd31232f8f2a93bc22ccc610305f8ab4a3319c238f5138e1a6b082f\": rpc error: code = NotFound desc = could not find container \"9ce0fe0b0cd31232f8f2a93bc22ccc610305f8ab4a3319c238f5138e1a6b082f\": container with ID starting with 9ce0fe0b0cd31232f8f2a93bc22ccc610305f8ab4a3319c238f5138e1a6b082f not found: ID does not exist" Feb 27 17:07:29 crc kubenswrapper[4830]: I0227 17:07:29.650826 4830 scope.go:117] "RemoveContainer" containerID="3b8dbc918dbb249f780ac0305e2c9e9bc19ee91523c87f265724ef6316f246f4" Feb 27 17:07:29 crc kubenswrapper[4830]: E0227 
17:07:29.651457 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3b8dbc918dbb249f780ac0305e2c9e9bc19ee91523c87f265724ef6316f246f4\": container with ID starting with 3b8dbc918dbb249f780ac0305e2c9e9bc19ee91523c87f265724ef6316f246f4 not found: ID does not exist" containerID="3b8dbc918dbb249f780ac0305e2c9e9bc19ee91523c87f265724ef6316f246f4" Feb 27 17:07:29 crc kubenswrapper[4830]: I0227 17:07:29.651503 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3b8dbc918dbb249f780ac0305e2c9e9bc19ee91523c87f265724ef6316f246f4"} err="failed to get container status \"3b8dbc918dbb249f780ac0305e2c9e9bc19ee91523c87f265724ef6316f246f4\": rpc error: code = NotFound desc = could not find container \"3b8dbc918dbb249f780ac0305e2c9e9bc19ee91523c87f265724ef6316f246f4\": container with ID starting with 3b8dbc918dbb249f780ac0305e2c9e9bc19ee91523c87f265724ef6316f246f4 not found: ID does not exist" Feb 27 17:07:30 crc kubenswrapper[4830]: I0227 17:07:30.776656 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2c36b25-9f54-4a84-a46e-1dda62252b1e" path="/var/lib/kubelet/pods/e2c36b25-9f54-4a84-a46e-1dda62252b1e/volumes" Feb 27 17:07:33 crc kubenswrapper[4830]: I0227 17:07:33.160200 4830 patch_prober.go:28] interesting pod/machine-config-daemon-2tv5v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 17:07:33 crc kubenswrapper[4830]: I0227 17:07:33.160905 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Feb 27 17:07:33 crc kubenswrapper[4830]: I0227 17:07:33.161053 4830 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" Feb 27 17:07:33 crc kubenswrapper[4830]: I0227 17:07:33.162464 4830 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b59a0a2697e1673250d28d22947dbf29d890efdc4f2af61efcb7c8f01573fc27"} pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 27 17:07:33 crc kubenswrapper[4830]: I0227 17:07:33.162611 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" containerName="machine-config-daemon" containerID="cri-o://b59a0a2697e1673250d28d22947dbf29d890efdc4f2af61efcb7c8f01573fc27" gracePeriod=600 Feb 27 17:07:33 crc kubenswrapper[4830]: E0227 17:07:33.322005 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 17:07:33 crc kubenswrapper[4830]: I0227 17:07:33.570994 4830 generic.go:334] "Generic (PLEG): container finished" podID="00d6b7ce-4757-4275-8345-60c1b546ce25" containerID="b59a0a2697e1673250d28d22947dbf29d890efdc4f2af61efcb7c8f01573fc27" exitCode=0 Feb 27 17:07:33 crc kubenswrapper[4830]: I0227 17:07:33.571043 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" 
event={"ID":"00d6b7ce-4757-4275-8345-60c1b546ce25","Type":"ContainerDied","Data":"b59a0a2697e1673250d28d22947dbf29d890efdc4f2af61efcb7c8f01573fc27"} Feb 27 17:07:33 crc kubenswrapper[4830]: I0227 17:07:33.571836 4830 scope.go:117] "RemoveContainer" containerID="db78cb7e4ed59ab2b04c3fd90bbd3ca09de79184879f6f7cafe4aab5e64ed8b6" Feb 27 17:07:33 crc kubenswrapper[4830]: I0227 17:07:33.572844 4830 scope.go:117] "RemoveContainer" containerID="b59a0a2697e1673250d28d22947dbf29d890efdc4f2af61efcb7c8f01573fc27" Feb 27 17:07:33 crc kubenswrapper[4830]: E0227 17:07:33.573522 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 17:07:47 crc kubenswrapper[4830]: I0227 17:07:47.859669 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-brxnv"] Feb 27 17:07:47 crc kubenswrapper[4830]: E0227 17:07:47.860812 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2c36b25-9f54-4a84-a46e-1dda62252b1e" containerName="registry-server" Feb 27 17:07:47 crc kubenswrapper[4830]: I0227 17:07:47.860831 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2c36b25-9f54-4a84-a46e-1dda62252b1e" containerName="registry-server" Feb 27 17:07:47 crc kubenswrapper[4830]: E0227 17:07:47.860859 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2c36b25-9f54-4a84-a46e-1dda62252b1e" containerName="extract-content" Feb 27 17:07:47 crc kubenswrapper[4830]: I0227 17:07:47.860866 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2c36b25-9f54-4a84-a46e-1dda62252b1e" containerName="extract-content" Feb 27 17:07:47 crc 
kubenswrapper[4830]: E0227 17:07:47.860885 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2c36b25-9f54-4a84-a46e-1dda62252b1e" containerName="extract-utilities" Feb 27 17:07:47 crc kubenswrapper[4830]: I0227 17:07:47.860893 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2c36b25-9f54-4a84-a46e-1dda62252b1e" containerName="extract-utilities" Feb 27 17:07:47 crc kubenswrapper[4830]: I0227 17:07:47.861111 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2c36b25-9f54-4a84-a46e-1dda62252b1e" containerName="registry-server" Feb 27 17:07:47 crc kubenswrapper[4830]: I0227 17:07:47.862446 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-brxnv" Feb 27 17:07:47 crc kubenswrapper[4830]: I0227 17:07:47.876023 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-brxnv"] Feb 27 17:07:47 crc kubenswrapper[4830]: I0227 17:07:47.988849 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxdc8\" (UniqueName: \"kubernetes.io/projected/d3c88946-1943-45ae-8f2c-f7f0c6ea41b6-kube-api-access-sxdc8\") pod \"community-operators-brxnv\" (UID: \"d3c88946-1943-45ae-8f2c-f7f0c6ea41b6\") " pod="openshift-marketplace/community-operators-brxnv" Feb 27 17:07:47 crc kubenswrapper[4830]: I0227 17:07:47.988960 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3c88946-1943-45ae-8f2c-f7f0c6ea41b6-catalog-content\") pod \"community-operators-brxnv\" (UID: \"d3c88946-1943-45ae-8f2c-f7f0c6ea41b6\") " pod="openshift-marketplace/community-operators-brxnv" Feb 27 17:07:47 crc kubenswrapper[4830]: I0227 17:07:47.989074 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/d3c88946-1943-45ae-8f2c-f7f0c6ea41b6-utilities\") pod \"community-operators-brxnv\" (UID: \"d3c88946-1943-45ae-8f2c-f7f0c6ea41b6\") " pod="openshift-marketplace/community-operators-brxnv" Feb 27 17:07:48 crc kubenswrapper[4830]: I0227 17:07:48.090154 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3c88946-1943-45ae-8f2c-f7f0c6ea41b6-utilities\") pod \"community-operators-brxnv\" (UID: \"d3c88946-1943-45ae-8f2c-f7f0c6ea41b6\") " pod="openshift-marketplace/community-operators-brxnv" Feb 27 17:07:48 crc kubenswrapper[4830]: I0227 17:07:48.090247 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sxdc8\" (UniqueName: \"kubernetes.io/projected/d3c88946-1943-45ae-8f2c-f7f0c6ea41b6-kube-api-access-sxdc8\") pod \"community-operators-brxnv\" (UID: \"d3c88946-1943-45ae-8f2c-f7f0c6ea41b6\") " pod="openshift-marketplace/community-operators-brxnv" Feb 27 17:07:48 crc kubenswrapper[4830]: I0227 17:07:48.090290 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3c88946-1943-45ae-8f2c-f7f0c6ea41b6-catalog-content\") pod \"community-operators-brxnv\" (UID: \"d3c88946-1943-45ae-8f2c-f7f0c6ea41b6\") " pod="openshift-marketplace/community-operators-brxnv" Feb 27 17:07:48 crc kubenswrapper[4830]: I0227 17:07:48.091218 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3c88946-1943-45ae-8f2c-f7f0c6ea41b6-catalog-content\") pod \"community-operators-brxnv\" (UID: \"d3c88946-1943-45ae-8f2c-f7f0c6ea41b6\") " pod="openshift-marketplace/community-operators-brxnv" Feb 27 17:07:48 crc kubenswrapper[4830]: I0227 17:07:48.091249 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/d3c88946-1943-45ae-8f2c-f7f0c6ea41b6-utilities\") pod \"community-operators-brxnv\" (UID: \"d3c88946-1943-45ae-8f2c-f7f0c6ea41b6\") " pod="openshift-marketplace/community-operators-brxnv" Feb 27 17:07:48 crc kubenswrapper[4830]: I0227 17:07:48.117655 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sxdc8\" (UniqueName: \"kubernetes.io/projected/d3c88946-1943-45ae-8f2c-f7f0c6ea41b6-kube-api-access-sxdc8\") pod \"community-operators-brxnv\" (UID: \"d3c88946-1943-45ae-8f2c-f7f0c6ea41b6\") " pod="openshift-marketplace/community-operators-brxnv" Feb 27 17:07:48 crc kubenswrapper[4830]: I0227 17:07:48.198023 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-brxnv" Feb 27 17:07:48 crc kubenswrapper[4830]: I0227 17:07:48.573582 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-brxnv"] Feb 27 17:07:48 crc kubenswrapper[4830]: I0227 17:07:48.753537 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-brxnv" event={"ID":"d3c88946-1943-45ae-8f2c-f7f0c6ea41b6","Type":"ContainerStarted","Data":"ea574c8245556923822b616bf17c347dc8c2510ca2003df5a38e8e1a1b6fad1a"} Feb 27 17:07:48 crc kubenswrapper[4830]: I0227 17:07:48.763741 4830 scope.go:117] "RemoveContainer" containerID="b59a0a2697e1673250d28d22947dbf29d890efdc4f2af61efcb7c8f01573fc27" Feb 27 17:07:48 crc kubenswrapper[4830]: E0227 17:07:48.763964 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 17:07:49 crc 
kubenswrapper[4830]: I0227 17:07:49.766315 4830 generic.go:334] "Generic (PLEG): container finished" podID="d3c88946-1943-45ae-8f2c-f7f0c6ea41b6" containerID="d2f0cb832f5007f8edb83adb4f104b75c4ecf92c27f2f5bdd96a800eefef5af0" exitCode=0 Feb 27 17:07:49 crc kubenswrapper[4830]: I0227 17:07:49.766396 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-brxnv" event={"ID":"d3c88946-1943-45ae-8f2c-f7f0c6ea41b6","Type":"ContainerDied","Data":"d2f0cb832f5007f8edb83adb4f104b75c4ecf92c27f2f5bdd96a800eefef5af0"} Feb 27 17:07:50 crc kubenswrapper[4830]: I0227 17:07:50.800215 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-brxnv" event={"ID":"d3c88946-1943-45ae-8f2c-f7f0c6ea41b6","Type":"ContainerStarted","Data":"d786598ecd89772810a474d48b78cb2b0613271b40ae4f77b4ffb2916e2cddae"} Feb 27 17:07:51 crc kubenswrapper[4830]: I0227 17:07:51.809064 4830 generic.go:334] "Generic (PLEG): container finished" podID="d3c88946-1943-45ae-8f2c-f7f0c6ea41b6" containerID="d786598ecd89772810a474d48b78cb2b0613271b40ae4f77b4ffb2916e2cddae" exitCode=0 Feb 27 17:07:51 crc kubenswrapper[4830]: I0227 17:07:51.809601 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-brxnv" event={"ID":"d3c88946-1943-45ae-8f2c-f7f0c6ea41b6","Type":"ContainerDied","Data":"d786598ecd89772810a474d48b78cb2b0613271b40ae4f77b4ffb2916e2cddae"} Feb 27 17:07:52 crc kubenswrapper[4830]: I0227 17:07:52.827291 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-brxnv" event={"ID":"d3c88946-1943-45ae-8f2c-f7f0c6ea41b6","Type":"ContainerStarted","Data":"0ca360016f59360f22cdfc2d526ce7806e59be282729a4ff61a93a263bc517a7"} Feb 27 17:07:52 crc kubenswrapper[4830]: I0227 17:07:52.865727 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-brxnv" 
podStartSLOduration=3.115072131 podStartE2EDuration="5.865696039s" podCreationTimestamp="2026-02-27 17:07:47 +0000 UTC" firstStartedPulling="2026-02-27 17:07:49.768115741 +0000 UTC m=+3665.857388234" lastFinishedPulling="2026-02-27 17:07:52.518739679 +0000 UTC m=+3668.608012142" observedRunningTime="2026-02-27 17:07:52.855378899 +0000 UTC m=+3668.944651362" watchObservedRunningTime="2026-02-27 17:07:52.865696039 +0000 UTC m=+3668.954968532" Feb 27 17:07:58 crc kubenswrapper[4830]: I0227 17:07:58.199073 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-brxnv" Feb 27 17:07:58 crc kubenswrapper[4830]: I0227 17:07:58.199943 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-brxnv" Feb 27 17:07:58 crc kubenswrapper[4830]: I0227 17:07:58.278202 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-brxnv" Feb 27 17:07:58 crc kubenswrapper[4830]: I0227 17:07:58.978318 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-brxnv" Feb 27 17:07:59 crc kubenswrapper[4830]: I0227 17:07:59.060445 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-brxnv"] Feb 27 17:08:00 crc kubenswrapper[4830]: I0227 17:08:00.167400 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536868-rbx7q"] Feb 27 17:08:00 crc kubenswrapper[4830]: I0227 17:08:00.169562 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536868-rbx7q" Feb 27 17:08:00 crc kubenswrapper[4830]: I0227 17:08:00.174026 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 17:08:00 crc kubenswrapper[4830]: I0227 17:08:00.175739 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 17:08:00 crc kubenswrapper[4830]: I0227 17:08:00.180185 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lmbtm" Feb 27 17:08:00 crc kubenswrapper[4830]: I0227 17:08:00.188521 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536868-rbx7q"] Feb 27 17:08:00 crc kubenswrapper[4830]: I0227 17:08:00.256132 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6k6w\" (UniqueName: \"kubernetes.io/projected/beffbac5-5df9-442e-903e-3abda96a0e09-kube-api-access-v6k6w\") pod \"auto-csr-approver-29536868-rbx7q\" (UID: \"beffbac5-5df9-442e-903e-3abda96a0e09\") " pod="openshift-infra/auto-csr-approver-29536868-rbx7q" Feb 27 17:08:00 crc kubenswrapper[4830]: I0227 17:08:00.359079 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v6k6w\" (UniqueName: \"kubernetes.io/projected/beffbac5-5df9-442e-903e-3abda96a0e09-kube-api-access-v6k6w\") pod \"auto-csr-approver-29536868-rbx7q\" (UID: \"beffbac5-5df9-442e-903e-3abda96a0e09\") " pod="openshift-infra/auto-csr-approver-29536868-rbx7q" Feb 27 17:08:00 crc kubenswrapper[4830]: I0227 17:08:00.403812 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v6k6w\" (UniqueName: \"kubernetes.io/projected/beffbac5-5df9-442e-903e-3abda96a0e09-kube-api-access-v6k6w\") pod \"auto-csr-approver-29536868-rbx7q\" (UID: \"beffbac5-5df9-442e-903e-3abda96a0e09\") " 
pod="openshift-infra/auto-csr-approver-29536868-rbx7q" Feb 27 17:08:00 crc kubenswrapper[4830]: I0227 17:08:00.507563 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536868-rbx7q" Feb 27 17:08:00 crc kubenswrapper[4830]: I0227 17:08:00.913761 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-brxnv" podUID="d3c88946-1943-45ae-8f2c-f7f0c6ea41b6" containerName="registry-server" containerID="cri-o://0ca360016f59360f22cdfc2d526ce7806e59be282729a4ff61a93a263bc517a7" gracePeriod=2 Feb 27 17:08:01 crc kubenswrapper[4830]: I0227 17:08:01.061133 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536868-rbx7q"] Feb 27 17:08:01 crc kubenswrapper[4830]: W0227 17:08:01.063202 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbeffbac5_5df9_442e_903e_3abda96a0e09.slice/crio-87548c494b33f78403cbb150612a6329649f727abf0eb57c5825795c0a7cdbd0 WatchSource:0}: Error finding container 87548c494b33f78403cbb150612a6329649f727abf0eb57c5825795c0a7cdbd0: Status 404 returned error can't find the container with id 87548c494b33f78403cbb150612a6329649f727abf0eb57c5825795c0a7cdbd0 Feb 27 17:08:01 crc kubenswrapper[4830]: I0227 17:08:01.940981 4830 generic.go:334] "Generic (PLEG): container finished" podID="d3c88946-1943-45ae-8f2c-f7f0c6ea41b6" containerID="0ca360016f59360f22cdfc2d526ce7806e59be282729a4ff61a93a263bc517a7" exitCode=0 Feb 27 17:08:01 crc kubenswrapper[4830]: I0227 17:08:01.941021 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-brxnv" event={"ID":"d3c88946-1943-45ae-8f2c-f7f0c6ea41b6","Type":"ContainerDied","Data":"0ca360016f59360f22cdfc2d526ce7806e59be282729a4ff61a93a263bc517a7"} Feb 27 17:08:01 crc kubenswrapper[4830]: I0227 17:08:01.943532 4830 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536868-rbx7q" event={"ID":"beffbac5-5df9-442e-903e-3abda96a0e09","Type":"ContainerStarted","Data":"87548c494b33f78403cbb150612a6329649f727abf0eb57c5825795c0a7cdbd0"} Feb 27 17:08:02 crc kubenswrapper[4830]: I0227 17:08:02.371373 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-brxnv" Feb 27 17:08:02 crc kubenswrapper[4830]: I0227 17:08:02.509718 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sxdc8\" (UniqueName: \"kubernetes.io/projected/d3c88946-1943-45ae-8f2c-f7f0c6ea41b6-kube-api-access-sxdc8\") pod \"d3c88946-1943-45ae-8f2c-f7f0c6ea41b6\" (UID: \"d3c88946-1943-45ae-8f2c-f7f0c6ea41b6\") " Feb 27 17:08:02 crc kubenswrapper[4830]: I0227 17:08:02.509881 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3c88946-1943-45ae-8f2c-f7f0c6ea41b6-catalog-content\") pod \"d3c88946-1943-45ae-8f2c-f7f0c6ea41b6\" (UID: \"d3c88946-1943-45ae-8f2c-f7f0c6ea41b6\") " Feb 27 17:08:02 crc kubenswrapper[4830]: I0227 17:08:02.509973 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3c88946-1943-45ae-8f2c-f7f0c6ea41b6-utilities\") pod \"d3c88946-1943-45ae-8f2c-f7f0c6ea41b6\" (UID: \"d3c88946-1943-45ae-8f2c-f7f0c6ea41b6\") " Feb 27 17:08:02 crc kubenswrapper[4830]: I0227 17:08:02.512135 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d3c88946-1943-45ae-8f2c-f7f0c6ea41b6-utilities" (OuterVolumeSpecName: "utilities") pod "d3c88946-1943-45ae-8f2c-f7f0c6ea41b6" (UID: "d3c88946-1943-45ae-8f2c-f7f0c6ea41b6"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 17:08:02 crc kubenswrapper[4830]: I0227 17:08:02.523324 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3c88946-1943-45ae-8f2c-f7f0c6ea41b6-kube-api-access-sxdc8" (OuterVolumeSpecName: "kube-api-access-sxdc8") pod "d3c88946-1943-45ae-8f2c-f7f0c6ea41b6" (UID: "d3c88946-1943-45ae-8f2c-f7f0c6ea41b6"). InnerVolumeSpecName "kube-api-access-sxdc8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:08:02 crc kubenswrapper[4830]: I0227 17:08:02.594481 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d3c88946-1943-45ae-8f2c-f7f0c6ea41b6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d3c88946-1943-45ae-8f2c-f7f0c6ea41b6" (UID: "d3c88946-1943-45ae-8f2c-f7f0c6ea41b6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 17:08:02 crc kubenswrapper[4830]: I0227 17:08:02.612595 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sxdc8\" (UniqueName: \"kubernetes.io/projected/d3c88946-1943-45ae-8f2c-f7f0c6ea41b6-kube-api-access-sxdc8\") on node \"crc\" DevicePath \"\"" Feb 27 17:08:02 crc kubenswrapper[4830]: I0227 17:08:02.612640 4830 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3c88946-1943-45ae-8f2c-f7f0c6ea41b6-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 17:08:02 crc kubenswrapper[4830]: I0227 17:08:02.612659 4830 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3c88946-1943-45ae-8f2c-f7f0c6ea41b6-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 17:08:02 crc kubenswrapper[4830]: I0227 17:08:02.963147 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-brxnv" 
event={"ID":"d3c88946-1943-45ae-8f2c-f7f0c6ea41b6","Type":"ContainerDied","Data":"ea574c8245556923822b616bf17c347dc8c2510ca2003df5a38e8e1a1b6fad1a"} Feb 27 17:08:02 crc kubenswrapper[4830]: I0227 17:08:02.963258 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-brxnv" Feb 27 17:08:02 crc kubenswrapper[4830]: I0227 17:08:02.965287 4830 scope.go:117] "RemoveContainer" containerID="0ca360016f59360f22cdfc2d526ce7806e59be282729a4ff61a93a263bc517a7" Feb 27 17:08:03 crc kubenswrapper[4830]: I0227 17:08:03.005689 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-brxnv"] Feb 27 17:08:03 crc kubenswrapper[4830]: I0227 17:08:03.011826 4830 scope.go:117] "RemoveContainer" containerID="d786598ecd89772810a474d48b78cb2b0613271b40ae4f77b4ffb2916e2cddae" Feb 27 17:08:03 crc kubenswrapper[4830]: I0227 17:08:03.016044 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-brxnv"] Feb 27 17:08:03 crc kubenswrapper[4830]: I0227 17:08:03.051641 4830 scope.go:117] "RemoveContainer" containerID="d2f0cb832f5007f8edb83adb4f104b75c4ecf92c27f2f5bdd96a800eefef5af0" Feb 27 17:08:03 crc kubenswrapper[4830]: I0227 17:08:03.762499 4830 scope.go:117] "RemoveContainer" containerID="b59a0a2697e1673250d28d22947dbf29d890efdc4f2af61efcb7c8f01573fc27" Feb 27 17:08:03 crc kubenswrapper[4830]: E0227 17:08:03.763616 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 17:08:03 crc kubenswrapper[4830]: I0227 17:08:03.980000 4830 generic.go:334] "Generic 
(PLEG): container finished" podID="beffbac5-5df9-442e-903e-3abda96a0e09" containerID="da63ce22f084da57cefd57d19ae652a972aaad7cdb7166a3a5696e7de4ad50a9" exitCode=0 Feb 27 17:08:03 crc kubenswrapper[4830]: I0227 17:08:03.980074 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536868-rbx7q" event={"ID":"beffbac5-5df9-442e-903e-3abda96a0e09","Type":"ContainerDied","Data":"da63ce22f084da57cefd57d19ae652a972aaad7cdb7166a3a5696e7de4ad50a9"} Feb 27 17:08:04 crc kubenswrapper[4830]: I0227 17:08:04.775253 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3c88946-1943-45ae-8f2c-f7f0c6ea41b6" path="/var/lib/kubelet/pods/d3c88946-1943-45ae-8f2c-f7f0c6ea41b6/volumes" Feb 27 17:08:05 crc kubenswrapper[4830]: I0227 17:08:05.367205 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536868-rbx7q" Feb 27 17:08:05 crc kubenswrapper[4830]: I0227 17:08:05.465492 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v6k6w\" (UniqueName: \"kubernetes.io/projected/beffbac5-5df9-442e-903e-3abda96a0e09-kube-api-access-v6k6w\") pod \"beffbac5-5df9-442e-903e-3abda96a0e09\" (UID: \"beffbac5-5df9-442e-903e-3abda96a0e09\") " Feb 27 17:08:05 crc kubenswrapper[4830]: I0227 17:08:05.474315 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/beffbac5-5df9-442e-903e-3abda96a0e09-kube-api-access-v6k6w" (OuterVolumeSpecName: "kube-api-access-v6k6w") pod "beffbac5-5df9-442e-903e-3abda96a0e09" (UID: "beffbac5-5df9-442e-903e-3abda96a0e09"). InnerVolumeSpecName "kube-api-access-v6k6w". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:08:05 crc kubenswrapper[4830]: I0227 17:08:05.567836 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v6k6w\" (UniqueName: \"kubernetes.io/projected/beffbac5-5df9-442e-903e-3abda96a0e09-kube-api-access-v6k6w\") on node \"crc\" DevicePath \"\"" Feb 27 17:08:06 crc kubenswrapper[4830]: I0227 17:08:06.010017 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536868-rbx7q" event={"ID":"beffbac5-5df9-442e-903e-3abda96a0e09","Type":"ContainerDied","Data":"87548c494b33f78403cbb150612a6329649f727abf0eb57c5825795c0a7cdbd0"} Feb 27 17:08:06 crc kubenswrapper[4830]: I0227 17:08:06.010083 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536868-rbx7q" Feb 27 17:08:06 crc kubenswrapper[4830]: I0227 17:08:06.010091 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="87548c494b33f78403cbb150612a6329649f727abf0eb57c5825795c0a7cdbd0" Feb 27 17:08:06 crc kubenswrapper[4830]: I0227 17:08:06.469063 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536862-8ldb6"] Feb 27 17:08:06 crc kubenswrapper[4830]: I0227 17:08:06.484209 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536862-8ldb6"] Feb 27 17:08:06 crc kubenswrapper[4830]: I0227 17:08:06.780503 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a8fa961-32e8-4d06-b404-e189e2691884" path="/var/lib/kubelet/pods/7a8fa961-32e8-4d06-b404-e189e2691884/volumes" Feb 27 17:08:18 crc kubenswrapper[4830]: I0227 17:08:18.763940 4830 scope.go:117] "RemoveContainer" containerID="b59a0a2697e1673250d28d22947dbf29d890efdc4f2af61efcb7c8f01573fc27" Feb 27 17:08:18 crc kubenswrapper[4830]: E0227 17:08:18.765278 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 17:08:32 crc kubenswrapper[4830]: I0227 17:08:32.762782 4830 scope.go:117] "RemoveContainer" containerID="b59a0a2697e1673250d28d22947dbf29d890efdc4f2af61efcb7c8f01573fc27" Feb 27 17:08:32 crc kubenswrapper[4830]: E0227 17:08:32.763728 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 17:08:46 crc kubenswrapper[4830]: I0227 17:08:46.562497 4830 scope.go:117] "RemoveContainer" containerID="3a4c78e5808e87cfda9635b169561ad7833c2bb0bd03dde0beef0bfe42dfe589" Feb 27 17:08:47 crc kubenswrapper[4830]: I0227 17:08:47.763419 4830 scope.go:117] "RemoveContainer" containerID="b59a0a2697e1673250d28d22947dbf29d890efdc4f2af61efcb7c8f01573fc27" Feb 27 17:08:47 crc kubenswrapper[4830]: E0227 17:08:47.764119 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 17:09:00 crc kubenswrapper[4830]: I0227 17:09:00.762314 4830 scope.go:117] "RemoveContainer" 
containerID="b59a0a2697e1673250d28d22947dbf29d890efdc4f2af61efcb7c8f01573fc27" Feb 27 17:09:00 crc kubenswrapper[4830]: E0227 17:09:00.763245 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 17:09:11 crc kubenswrapper[4830]: I0227 17:09:11.762612 4830 scope.go:117] "RemoveContainer" containerID="b59a0a2697e1673250d28d22947dbf29d890efdc4f2af61efcb7c8f01573fc27" Feb 27 17:09:11 crc kubenswrapper[4830]: E0227 17:09:11.763634 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 17:09:26 crc kubenswrapper[4830]: I0227 17:09:26.761884 4830 scope.go:117] "RemoveContainer" containerID="b59a0a2697e1673250d28d22947dbf29d890efdc4f2af61efcb7c8f01573fc27" Feb 27 17:09:26 crc kubenswrapper[4830]: E0227 17:09:26.762697 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 17:09:39 crc kubenswrapper[4830]: I0227 17:09:39.762652 4830 scope.go:117] 
"RemoveContainer" containerID="b59a0a2697e1673250d28d22947dbf29d890efdc4f2af61efcb7c8f01573fc27" Feb 27 17:09:39 crc kubenswrapper[4830]: E0227 17:09:39.763890 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 17:09:50 crc kubenswrapper[4830]: I0227 17:09:50.763510 4830 scope.go:117] "RemoveContainer" containerID="b59a0a2697e1673250d28d22947dbf29d890efdc4f2af61efcb7c8f01573fc27" Feb 27 17:09:50 crc kubenswrapper[4830]: E0227 17:09:50.764697 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 17:10:00 crc kubenswrapper[4830]: I0227 17:10:00.177234 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536870-hv9dk"] Feb 27 17:10:00 crc kubenswrapper[4830]: E0227 17:10:00.180280 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3c88946-1943-45ae-8f2c-f7f0c6ea41b6" containerName="extract-content" Feb 27 17:10:00 crc kubenswrapper[4830]: I0227 17:10:00.180321 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3c88946-1943-45ae-8f2c-f7f0c6ea41b6" containerName="extract-content" Feb 27 17:10:00 crc kubenswrapper[4830]: E0227 17:10:00.180384 4830 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="beffbac5-5df9-442e-903e-3abda96a0e09" containerName="oc" Feb 27 17:10:00 crc kubenswrapper[4830]: I0227 17:10:00.180403 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="beffbac5-5df9-442e-903e-3abda96a0e09" containerName="oc" Feb 27 17:10:00 crc kubenswrapper[4830]: E0227 17:10:00.180469 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3c88946-1943-45ae-8f2c-f7f0c6ea41b6" containerName="registry-server" Feb 27 17:10:00 crc kubenswrapper[4830]: I0227 17:10:00.180487 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3c88946-1943-45ae-8f2c-f7f0c6ea41b6" containerName="registry-server" Feb 27 17:10:00 crc kubenswrapper[4830]: E0227 17:10:00.180530 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3c88946-1943-45ae-8f2c-f7f0c6ea41b6" containerName="extract-utilities" Feb 27 17:10:00 crc kubenswrapper[4830]: I0227 17:10:00.180547 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3c88946-1943-45ae-8f2c-f7f0c6ea41b6" containerName="extract-utilities" Feb 27 17:10:00 crc kubenswrapper[4830]: I0227 17:10:00.182507 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="beffbac5-5df9-442e-903e-3abda96a0e09" containerName="oc" Feb 27 17:10:00 crc kubenswrapper[4830]: I0227 17:10:00.182601 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3c88946-1943-45ae-8f2c-f7f0c6ea41b6" containerName="registry-server" Feb 27 17:10:00 crc kubenswrapper[4830]: I0227 17:10:00.185301 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536870-hv9dk" Feb 27 17:10:00 crc kubenswrapper[4830]: I0227 17:10:00.190856 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lmbtm" Feb 27 17:10:00 crc kubenswrapper[4830]: I0227 17:10:00.192676 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 17:10:00 crc kubenswrapper[4830]: I0227 17:10:00.197027 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 17:10:00 crc kubenswrapper[4830]: I0227 17:10:00.214835 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536870-hv9dk"] Feb 27 17:10:00 crc kubenswrapper[4830]: I0227 17:10:00.241202 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8v8r\" (UniqueName: \"kubernetes.io/projected/f4691bb0-7469-49a3-a878-21b2f20e43b1-kube-api-access-z8v8r\") pod \"auto-csr-approver-29536870-hv9dk\" (UID: \"f4691bb0-7469-49a3-a878-21b2f20e43b1\") " pod="openshift-infra/auto-csr-approver-29536870-hv9dk" Feb 27 17:10:00 crc kubenswrapper[4830]: I0227 17:10:00.343431 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z8v8r\" (UniqueName: \"kubernetes.io/projected/f4691bb0-7469-49a3-a878-21b2f20e43b1-kube-api-access-z8v8r\") pod \"auto-csr-approver-29536870-hv9dk\" (UID: \"f4691bb0-7469-49a3-a878-21b2f20e43b1\") " pod="openshift-infra/auto-csr-approver-29536870-hv9dk" Feb 27 17:10:00 crc kubenswrapper[4830]: I0227 17:10:00.380649 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8v8r\" (UniqueName: \"kubernetes.io/projected/f4691bb0-7469-49a3-a878-21b2f20e43b1-kube-api-access-z8v8r\") pod \"auto-csr-approver-29536870-hv9dk\" (UID: \"f4691bb0-7469-49a3-a878-21b2f20e43b1\") " 
pod="openshift-infra/auto-csr-approver-29536870-hv9dk" Feb 27 17:10:00 crc kubenswrapper[4830]: I0227 17:10:00.529993 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536870-hv9dk" Feb 27 17:10:01 crc kubenswrapper[4830]: I0227 17:10:01.032151 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536870-hv9dk"] Feb 27 17:10:01 crc kubenswrapper[4830]: I0227 17:10:01.222635 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536870-hv9dk" event={"ID":"f4691bb0-7469-49a3-a878-21b2f20e43b1","Type":"ContainerStarted","Data":"1743b5ba13628eca2df565a2bce10f461bb670b2439f56a6ad42d39727b51f53"} Feb 27 17:10:03 crc kubenswrapper[4830]: I0227 17:10:03.254451 4830 generic.go:334] "Generic (PLEG): container finished" podID="f4691bb0-7469-49a3-a878-21b2f20e43b1" containerID="17d46bad7e5ac46f4fba357597fde713481738cb7bc7178a2d2314620576a8fc" exitCode=0 Feb 27 17:10:03 crc kubenswrapper[4830]: I0227 17:10:03.254588 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536870-hv9dk" event={"ID":"f4691bb0-7469-49a3-a878-21b2f20e43b1","Type":"ContainerDied","Data":"17d46bad7e5ac46f4fba357597fde713481738cb7bc7178a2d2314620576a8fc"} Feb 27 17:10:03 crc kubenswrapper[4830]: I0227 17:10:03.763490 4830 scope.go:117] "RemoveContainer" containerID="b59a0a2697e1673250d28d22947dbf29d890efdc4f2af61efcb7c8f01573fc27" Feb 27 17:10:03 crc kubenswrapper[4830]: E0227 17:10:03.764099 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" 
Feb 27 17:10:04 crc kubenswrapper[4830]: I0227 17:10:04.668482 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536870-hv9dk" Feb 27 17:10:04 crc kubenswrapper[4830]: I0227 17:10:04.746375 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z8v8r\" (UniqueName: \"kubernetes.io/projected/f4691bb0-7469-49a3-a878-21b2f20e43b1-kube-api-access-z8v8r\") pod \"f4691bb0-7469-49a3-a878-21b2f20e43b1\" (UID: \"f4691bb0-7469-49a3-a878-21b2f20e43b1\") " Feb 27 17:10:04 crc kubenswrapper[4830]: I0227 17:10:04.754507 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4691bb0-7469-49a3-a878-21b2f20e43b1-kube-api-access-z8v8r" (OuterVolumeSpecName: "kube-api-access-z8v8r") pod "f4691bb0-7469-49a3-a878-21b2f20e43b1" (UID: "f4691bb0-7469-49a3-a878-21b2f20e43b1"). InnerVolumeSpecName "kube-api-access-z8v8r". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:10:04 crc kubenswrapper[4830]: I0227 17:10:04.848338 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z8v8r\" (UniqueName: \"kubernetes.io/projected/f4691bb0-7469-49a3-a878-21b2f20e43b1-kube-api-access-z8v8r\") on node \"crc\" DevicePath \"\"" Feb 27 17:10:05 crc kubenswrapper[4830]: I0227 17:10:05.284778 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536870-hv9dk" event={"ID":"f4691bb0-7469-49a3-a878-21b2f20e43b1","Type":"ContainerDied","Data":"1743b5ba13628eca2df565a2bce10f461bb670b2439f56a6ad42d39727b51f53"} Feb 27 17:10:05 crc kubenswrapper[4830]: I0227 17:10:05.284870 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1743b5ba13628eca2df565a2bce10f461bb670b2439f56a6ad42d39727b51f53" Feb 27 17:10:05 crc kubenswrapper[4830]: I0227 17:10:05.284872 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536870-hv9dk" Feb 27 17:10:05 crc kubenswrapper[4830]: I0227 17:10:05.788666 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536864-4xmzm"] Feb 27 17:10:05 crc kubenswrapper[4830]: I0227 17:10:05.800663 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536864-4xmzm"] Feb 27 17:10:06 crc kubenswrapper[4830]: I0227 17:10:06.777575 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c18c08f4-6698-4963-981d-0678064c6a3e" path="/var/lib/kubelet/pods/c18c08f4-6698-4963-981d-0678064c6a3e/volumes" Feb 27 17:10:18 crc kubenswrapper[4830]: I0227 17:10:18.763470 4830 scope.go:117] "RemoveContainer" containerID="b59a0a2697e1673250d28d22947dbf29d890efdc4f2af61efcb7c8f01573fc27" Feb 27 17:10:18 crc kubenswrapper[4830]: E0227 17:10:18.766757 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 17:10:32 crc kubenswrapper[4830]: I0227 17:10:32.762677 4830 scope.go:117] "RemoveContainer" containerID="b59a0a2697e1673250d28d22947dbf29d890efdc4f2af61efcb7c8f01573fc27" Feb 27 17:10:32 crc kubenswrapper[4830]: E0227 17:10:32.763869 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" 
podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 17:10:43 crc kubenswrapper[4830]: I0227 17:10:43.763508 4830 scope.go:117] "RemoveContainer" containerID="b59a0a2697e1673250d28d22947dbf29d890efdc4f2af61efcb7c8f01573fc27" Feb 27 17:10:43 crc kubenswrapper[4830]: E0227 17:10:43.764530 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 17:10:46 crc kubenswrapper[4830]: I0227 17:10:46.693352 4830 scope.go:117] "RemoveContainer" containerID="ce641fd11cfcc54fe9cc918f5ce3c1628e6ab0cbfd0d9be2add7a890d701f64a" Feb 27 17:10:57 crc kubenswrapper[4830]: I0227 17:10:57.762737 4830 scope.go:117] "RemoveContainer" containerID="b59a0a2697e1673250d28d22947dbf29d890efdc4f2af61efcb7c8f01573fc27" Feb 27 17:10:57 crc kubenswrapper[4830]: E0227 17:10:57.764164 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 17:11:11 crc kubenswrapper[4830]: I0227 17:11:11.763584 4830 scope.go:117] "RemoveContainer" containerID="b59a0a2697e1673250d28d22947dbf29d890efdc4f2af61efcb7c8f01573fc27" Feb 27 17:11:11 crc kubenswrapper[4830]: E0227 17:11:11.765007 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 17:11:26 crc kubenswrapper[4830]: I0227 17:11:26.763369 4830 scope.go:117] "RemoveContainer" containerID="b59a0a2697e1673250d28d22947dbf29d890efdc4f2af61efcb7c8f01573fc27" Feb 27 17:11:26 crc kubenswrapper[4830]: E0227 17:11:26.764556 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 17:11:41 crc kubenswrapper[4830]: I0227 17:11:41.762533 4830 scope.go:117] "RemoveContainer" containerID="b59a0a2697e1673250d28d22947dbf29d890efdc4f2af61efcb7c8f01573fc27" Feb 27 17:11:41 crc kubenswrapper[4830]: E0227 17:11:41.763576 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 17:11:53 crc kubenswrapper[4830]: I0227 17:11:53.762246 4830 scope.go:117] "RemoveContainer" containerID="b59a0a2697e1673250d28d22947dbf29d890efdc4f2af61efcb7c8f01573fc27" Feb 27 17:11:53 crc kubenswrapper[4830]: E0227 17:11:53.763139 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 17:12:00 crc kubenswrapper[4830]: I0227 17:12:00.170845 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536872-h4685"] Feb 27 17:12:00 crc kubenswrapper[4830]: E0227 17:12:00.171737 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4691bb0-7469-49a3-a878-21b2f20e43b1" containerName="oc" Feb 27 17:12:00 crc kubenswrapper[4830]: I0227 17:12:00.171751 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4691bb0-7469-49a3-a878-21b2f20e43b1" containerName="oc" Feb 27 17:12:00 crc kubenswrapper[4830]: I0227 17:12:00.171919 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4691bb0-7469-49a3-a878-21b2f20e43b1" containerName="oc" Feb 27 17:12:00 crc kubenswrapper[4830]: I0227 17:12:00.172472 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536872-h4685" Feb 27 17:12:00 crc kubenswrapper[4830]: I0227 17:12:00.175926 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 17:12:00 crc kubenswrapper[4830]: I0227 17:12:00.176987 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 17:12:00 crc kubenswrapper[4830]: I0227 17:12:00.181277 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lmbtm" Feb 27 17:12:00 crc kubenswrapper[4830]: I0227 17:12:00.190810 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536872-h4685"] Feb 27 17:12:00 crc kubenswrapper[4830]: I0227 17:12:00.199308 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-msz5c\" (UniqueName: \"kubernetes.io/projected/09e6d9d8-dbef-4c31-b7ed-9867abb8ff1d-kube-api-access-msz5c\") pod \"auto-csr-approver-29536872-h4685\" (UID: \"09e6d9d8-dbef-4c31-b7ed-9867abb8ff1d\") " pod="openshift-infra/auto-csr-approver-29536872-h4685" Feb 27 17:12:00 crc kubenswrapper[4830]: I0227 17:12:00.301923 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-msz5c\" (UniqueName: \"kubernetes.io/projected/09e6d9d8-dbef-4c31-b7ed-9867abb8ff1d-kube-api-access-msz5c\") pod \"auto-csr-approver-29536872-h4685\" (UID: \"09e6d9d8-dbef-4c31-b7ed-9867abb8ff1d\") " pod="openshift-infra/auto-csr-approver-29536872-h4685" Feb 27 17:12:00 crc kubenswrapper[4830]: I0227 17:12:00.332243 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-msz5c\" (UniqueName: \"kubernetes.io/projected/09e6d9d8-dbef-4c31-b7ed-9867abb8ff1d-kube-api-access-msz5c\") pod \"auto-csr-approver-29536872-h4685\" (UID: \"09e6d9d8-dbef-4c31-b7ed-9867abb8ff1d\") " 
pod="openshift-infra/auto-csr-approver-29536872-h4685" Feb 27 17:12:00 crc kubenswrapper[4830]: I0227 17:12:00.513791 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536872-h4685" Feb 27 17:12:00 crc kubenswrapper[4830]: I0227 17:12:00.822210 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536872-h4685"] Feb 27 17:12:01 crc kubenswrapper[4830]: I0227 17:12:01.414515 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536872-h4685" event={"ID":"09e6d9d8-dbef-4c31-b7ed-9867abb8ff1d","Type":"ContainerStarted","Data":"8a17dee6a5d301be9002570734d062f8ecb23c865c0cb1af560fd39d734ade40"} Feb 27 17:12:02 crc kubenswrapper[4830]: I0227 17:12:02.425447 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536872-h4685" event={"ID":"09e6d9d8-dbef-4c31-b7ed-9867abb8ff1d","Type":"ContainerStarted","Data":"5b340714b6b6a1403277f3024f0f27850928608bf8d523d8bd615b613b5f3d53"} Feb 27 17:12:02 crc kubenswrapper[4830]: I0227 17:12:02.450737 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29536872-h4685" podStartSLOduration=1.44992364 podStartE2EDuration="2.450698865s" podCreationTimestamp="2026-02-27 17:12:00 +0000 UTC" firstStartedPulling="2026-02-27 17:12:00.828836272 +0000 UTC m=+3916.918108745" lastFinishedPulling="2026-02-27 17:12:01.829611477 +0000 UTC m=+3917.918883970" observedRunningTime="2026-02-27 17:12:02.442878095 +0000 UTC m=+3918.532150568" watchObservedRunningTime="2026-02-27 17:12:02.450698865 +0000 UTC m=+3918.539971368" Feb 27 17:12:03 crc kubenswrapper[4830]: I0227 17:12:03.437777 4830 generic.go:334] "Generic (PLEG): container finished" podID="09e6d9d8-dbef-4c31-b7ed-9867abb8ff1d" containerID="5b340714b6b6a1403277f3024f0f27850928608bf8d523d8bd615b613b5f3d53" exitCode=0 Feb 27 17:12:03 crc 
kubenswrapper[4830]: I0227 17:12:03.437848 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536872-h4685" event={"ID":"09e6d9d8-dbef-4c31-b7ed-9867abb8ff1d","Type":"ContainerDied","Data":"5b340714b6b6a1403277f3024f0f27850928608bf8d523d8bd615b613b5f3d53"} Feb 27 17:12:04 crc kubenswrapper[4830]: I0227 17:12:04.792383 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536872-h4685" Feb 27 17:12:04 crc kubenswrapper[4830]: I0227 17:12:04.877936 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-msz5c\" (UniqueName: \"kubernetes.io/projected/09e6d9d8-dbef-4c31-b7ed-9867abb8ff1d-kube-api-access-msz5c\") pod \"09e6d9d8-dbef-4c31-b7ed-9867abb8ff1d\" (UID: \"09e6d9d8-dbef-4c31-b7ed-9867abb8ff1d\") " Feb 27 17:12:04 crc kubenswrapper[4830]: I0227 17:12:04.892435 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09e6d9d8-dbef-4c31-b7ed-9867abb8ff1d-kube-api-access-msz5c" (OuterVolumeSpecName: "kube-api-access-msz5c") pod "09e6d9d8-dbef-4c31-b7ed-9867abb8ff1d" (UID: "09e6d9d8-dbef-4c31-b7ed-9867abb8ff1d"). InnerVolumeSpecName "kube-api-access-msz5c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:12:04 crc kubenswrapper[4830]: I0227 17:12:04.981181 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-msz5c\" (UniqueName: \"kubernetes.io/projected/09e6d9d8-dbef-4c31-b7ed-9867abb8ff1d-kube-api-access-msz5c\") on node \"crc\" DevicePath \"\"" Feb 27 17:12:05 crc kubenswrapper[4830]: I0227 17:12:05.458444 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536872-h4685" event={"ID":"09e6d9d8-dbef-4c31-b7ed-9867abb8ff1d","Type":"ContainerDied","Data":"8a17dee6a5d301be9002570734d062f8ecb23c865c0cb1af560fd39d734ade40"} Feb 27 17:12:05 crc kubenswrapper[4830]: I0227 17:12:05.458495 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536872-h4685" Feb 27 17:12:05 crc kubenswrapper[4830]: I0227 17:12:05.458503 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8a17dee6a5d301be9002570734d062f8ecb23c865c0cb1af560fd39d734ade40" Feb 27 17:12:05 crc kubenswrapper[4830]: I0227 17:12:05.558119 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536866-nw6lt"] Feb 27 17:12:05 crc kubenswrapper[4830]: I0227 17:12:05.565609 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536866-nw6lt"] Feb 27 17:12:06 crc kubenswrapper[4830]: I0227 17:12:06.780897 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5e4c7a5-debd-44f3-98c1-d9721748f0a1" path="/var/lib/kubelet/pods/f5e4c7a5-debd-44f3-98c1-d9721748f0a1/volumes" Feb 27 17:12:08 crc kubenswrapper[4830]: I0227 17:12:08.762183 4830 scope.go:117] "RemoveContainer" containerID="b59a0a2697e1673250d28d22947dbf29d890efdc4f2af61efcb7c8f01573fc27" Feb 27 17:12:08 crc kubenswrapper[4830]: E0227 17:12:08.762705 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 17:12:22 crc kubenswrapper[4830]: I0227 17:12:22.763822 4830 scope.go:117] "RemoveContainer" containerID="b59a0a2697e1673250d28d22947dbf29d890efdc4f2af61efcb7c8f01573fc27" Feb 27 17:12:22 crc kubenswrapper[4830]: E0227 17:12:22.765652 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 17:12:37 crc kubenswrapper[4830]: I0227 17:12:37.762814 4830 scope.go:117] "RemoveContainer" containerID="b59a0a2697e1673250d28d22947dbf29d890efdc4f2af61efcb7c8f01573fc27" Feb 27 17:12:38 crc kubenswrapper[4830]: I0227 17:12:38.863539 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" event={"ID":"00d6b7ce-4757-4275-8345-60c1b546ce25","Type":"ContainerStarted","Data":"4fe4d1b45eabdb72f9fc5ac554899ea9b06c8455f8916258035d1a2fc79f3c9e"} Feb 27 17:12:46 crc kubenswrapper[4830]: I0227 17:12:46.825547 4830 scope.go:117] "RemoveContainer" containerID="11728316a2977378a13183287d892146bd92587513db90bc43ef19fc66bf9cb8" Feb 27 17:14:00 crc kubenswrapper[4830]: I0227 17:14:00.150373 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536874-pmw8j"] Feb 27 17:14:00 crc kubenswrapper[4830]: E0227 17:14:00.151290 4830 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="09e6d9d8-dbef-4c31-b7ed-9867abb8ff1d" containerName="oc" Feb 27 17:14:00 crc kubenswrapper[4830]: I0227 17:14:00.151307 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="09e6d9d8-dbef-4c31-b7ed-9867abb8ff1d" containerName="oc" Feb 27 17:14:00 crc kubenswrapper[4830]: I0227 17:14:00.151519 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="09e6d9d8-dbef-4c31-b7ed-9867abb8ff1d" containerName="oc" Feb 27 17:14:00 crc kubenswrapper[4830]: I0227 17:14:00.152086 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536874-pmw8j" Feb 27 17:14:00 crc kubenswrapper[4830]: I0227 17:14:00.159768 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 17:14:00 crc kubenswrapper[4830]: I0227 17:14:00.159964 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lmbtm" Feb 27 17:14:00 crc kubenswrapper[4830]: I0227 17:14:00.160383 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 17:14:00 crc kubenswrapper[4830]: I0227 17:14:00.167458 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536874-pmw8j"] Feb 27 17:14:00 crc kubenswrapper[4830]: I0227 17:14:00.208032 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bk2hn\" (UniqueName: \"kubernetes.io/projected/9e9862ab-1028-4404-9d25-908c8ae0da55-kube-api-access-bk2hn\") pod \"auto-csr-approver-29536874-pmw8j\" (UID: \"9e9862ab-1028-4404-9d25-908c8ae0da55\") " pod="openshift-infra/auto-csr-approver-29536874-pmw8j" Feb 27 17:14:00 crc kubenswrapper[4830]: I0227 17:14:00.308910 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bk2hn\" (UniqueName: 
\"kubernetes.io/projected/9e9862ab-1028-4404-9d25-908c8ae0da55-kube-api-access-bk2hn\") pod \"auto-csr-approver-29536874-pmw8j\" (UID: \"9e9862ab-1028-4404-9d25-908c8ae0da55\") " pod="openshift-infra/auto-csr-approver-29536874-pmw8j" Feb 27 17:14:00 crc kubenswrapper[4830]: I0227 17:14:00.342461 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bk2hn\" (UniqueName: \"kubernetes.io/projected/9e9862ab-1028-4404-9d25-908c8ae0da55-kube-api-access-bk2hn\") pod \"auto-csr-approver-29536874-pmw8j\" (UID: \"9e9862ab-1028-4404-9d25-908c8ae0da55\") " pod="openshift-infra/auto-csr-approver-29536874-pmw8j" Feb 27 17:14:00 crc kubenswrapper[4830]: I0227 17:14:00.475005 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536874-pmw8j" Feb 27 17:14:01 crc kubenswrapper[4830]: I0227 17:14:01.066940 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536874-pmw8j"] Feb 27 17:14:01 crc kubenswrapper[4830]: W0227 17:14:01.597978 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9e9862ab_1028_4404_9d25_908c8ae0da55.slice/crio-6de7a08f8eeeb9cd1790793e7d6e2f4c7925e589e59ee7f655717feb60eaace8 WatchSource:0}: Error finding container 6de7a08f8eeeb9cd1790793e7d6e2f4c7925e589e59ee7f655717feb60eaace8: Status 404 returned error can't find the container with id 6de7a08f8eeeb9cd1790793e7d6e2f4c7925e589e59ee7f655717feb60eaace8 Feb 27 17:14:01 crc kubenswrapper[4830]: I0227 17:14:01.602153 4830 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 27 17:14:01 crc kubenswrapper[4830]: I0227 17:14:01.664292 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536874-pmw8j" 
event={"ID":"9e9862ab-1028-4404-9d25-908c8ae0da55","Type":"ContainerStarted","Data":"6de7a08f8eeeb9cd1790793e7d6e2f4c7925e589e59ee7f655717feb60eaace8"} Feb 27 17:14:03 crc kubenswrapper[4830]: I0227 17:14:03.689777 4830 generic.go:334] "Generic (PLEG): container finished" podID="9e9862ab-1028-4404-9d25-908c8ae0da55" containerID="5a6b9bf1d9e2092d5f791d0d4bfb84dd74a31b91d0641c25b4331b38d80bf15e" exitCode=0 Feb 27 17:14:03 crc kubenswrapper[4830]: I0227 17:14:03.689846 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536874-pmw8j" event={"ID":"9e9862ab-1028-4404-9d25-908c8ae0da55","Type":"ContainerDied","Data":"5a6b9bf1d9e2092d5f791d0d4bfb84dd74a31b91d0641c25b4331b38d80bf15e"} Feb 27 17:14:05 crc kubenswrapper[4830]: I0227 17:14:05.096702 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536874-pmw8j" Feb 27 17:14:05 crc kubenswrapper[4830]: I0227 17:14:05.294271 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bk2hn\" (UniqueName: \"kubernetes.io/projected/9e9862ab-1028-4404-9d25-908c8ae0da55-kube-api-access-bk2hn\") pod \"9e9862ab-1028-4404-9d25-908c8ae0da55\" (UID: \"9e9862ab-1028-4404-9d25-908c8ae0da55\") " Feb 27 17:14:05 crc kubenswrapper[4830]: I0227 17:14:05.306302 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9862ab-1028-4404-9d25-908c8ae0da55-kube-api-access-bk2hn" (OuterVolumeSpecName: "kube-api-access-bk2hn") pod "9e9862ab-1028-4404-9d25-908c8ae0da55" (UID: "9e9862ab-1028-4404-9d25-908c8ae0da55"). InnerVolumeSpecName "kube-api-access-bk2hn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:14:05 crc kubenswrapper[4830]: I0227 17:14:05.396446 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bk2hn\" (UniqueName: \"kubernetes.io/projected/9e9862ab-1028-4404-9d25-908c8ae0da55-kube-api-access-bk2hn\") on node \"crc\" DevicePath \"\"" Feb 27 17:14:05 crc kubenswrapper[4830]: I0227 17:14:05.715733 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536874-pmw8j" event={"ID":"9e9862ab-1028-4404-9d25-908c8ae0da55","Type":"ContainerDied","Data":"6de7a08f8eeeb9cd1790793e7d6e2f4c7925e589e59ee7f655717feb60eaace8"} Feb 27 17:14:05 crc kubenswrapper[4830]: I0227 17:14:05.715794 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6de7a08f8eeeb9cd1790793e7d6e2f4c7925e589e59ee7f655717feb60eaace8" Feb 27 17:14:05 crc kubenswrapper[4830]: I0227 17:14:05.715794 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536874-pmw8j" Feb 27 17:14:06 crc kubenswrapper[4830]: I0227 17:14:06.190127 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536868-rbx7q"] Feb 27 17:14:06 crc kubenswrapper[4830]: I0227 17:14:06.199378 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536868-rbx7q"] Feb 27 17:14:06 crc kubenswrapper[4830]: I0227 17:14:06.779235 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="beffbac5-5df9-442e-903e-3abda96a0e09" path="/var/lib/kubelet/pods/beffbac5-5df9-442e-903e-3abda96a0e09/volumes" Feb 27 17:14:46 crc kubenswrapper[4830]: I0227 17:14:46.979907 4830 scope.go:117] "RemoveContainer" containerID="da63ce22f084da57cefd57d19ae652a972aaad7cdb7166a3a5696e7de4ad50a9" Feb 27 17:14:48 crc kubenswrapper[4830]: I0227 17:14:48.748126 4830 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/redhat-operators-2fv4x"] Feb 27 17:14:48 crc kubenswrapper[4830]: E0227 17:14:48.749037 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e9862ab-1028-4404-9d25-908c8ae0da55" containerName="oc" Feb 27 17:14:48 crc kubenswrapper[4830]: I0227 17:14:48.749060 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e9862ab-1028-4404-9d25-908c8ae0da55" containerName="oc" Feb 27 17:14:48 crc kubenswrapper[4830]: I0227 17:14:48.749365 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e9862ab-1028-4404-9d25-908c8ae0da55" containerName="oc" Feb 27 17:14:48 crc kubenswrapper[4830]: I0227 17:14:48.751048 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2fv4x" Feb 27 17:14:48 crc kubenswrapper[4830]: I0227 17:14:48.759996 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2fv4x"] Feb 27 17:14:48 crc kubenswrapper[4830]: I0227 17:14:48.828088 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3fe65c8-4235-4f23-b5d5-09b5fce6c808-catalog-content\") pod \"redhat-operators-2fv4x\" (UID: \"f3fe65c8-4235-4f23-b5d5-09b5fce6c808\") " pod="openshift-marketplace/redhat-operators-2fv4x" Feb 27 17:14:48 crc kubenswrapper[4830]: I0227 17:14:48.828517 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfh5q\" (UniqueName: \"kubernetes.io/projected/f3fe65c8-4235-4f23-b5d5-09b5fce6c808-kube-api-access-kfh5q\") pod \"redhat-operators-2fv4x\" (UID: \"f3fe65c8-4235-4f23-b5d5-09b5fce6c808\") " pod="openshift-marketplace/redhat-operators-2fv4x" Feb 27 17:14:48 crc kubenswrapper[4830]: I0227 17:14:48.828620 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/f3fe65c8-4235-4f23-b5d5-09b5fce6c808-utilities\") pod \"redhat-operators-2fv4x\" (UID: \"f3fe65c8-4235-4f23-b5d5-09b5fce6c808\") " pod="openshift-marketplace/redhat-operators-2fv4x" Feb 27 17:14:48 crc kubenswrapper[4830]: I0227 17:14:48.930129 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kfh5q\" (UniqueName: \"kubernetes.io/projected/f3fe65c8-4235-4f23-b5d5-09b5fce6c808-kube-api-access-kfh5q\") pod \"redhat-operators-2fv4x\" (UID: \"f3fe65c8-4235-4f23-b5d5-09b5fce6c808\") " pod="openshift-marketplace/redhat-operators-2fv4x" Feb 27 17:14:48 crc kubenswrapper[4830]: I0227 17:14:48.930261 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3fe65c8-4235-4f23-b5d5-09b5fce6c808-utilities\") pod \"redhat-operators-2fv4x\" (UID: \"f3fe65c8-4235-4f23-b5d5-09b5fce6c808\") " pod="openshift-marketplace/redhat-operators-2fv4x" Feb 27 17:14:48 crc kubenswrapper[4830]: I0227 17:14:48.930315 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3fe65c8-4235-4f23-b5d5-09b5fce6c808-catalog-content\") pod \"redhat-operators-2fv4x\" (UID: \"f3fe65c8-4235-4f23-b5d5-09b5fce6c808\") " pod="openshift-marketplace/redhat-operators-2fv4x" Feb 27 17:14:48 crc kubenswrapper[4830]: I0227 17:14:48.930852 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3fe65c8-4235-4f23-b5d5-09b5fce6c808-catalog-content\") pod \"redhat-operators-2fv4x\" (UID: \"f3fe65c8-4235-4f23-b5d5-09b5fce6c808\") " pod="openshift-marketplace/redhat-operators-2fv4x" Feb 27 17:14:48 crc kubenswrapper[4830]: I0227 17:14:48.931067 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/f3fe65c8-4235-4f23-b5d5-09b5fce6c808-utilities\") pod \"redhat-operators-2fv4x\" (UID: \"f3fe65c8-4235-4f23-b5d5-09b5fce6c808\") " pod="openshift-marketplace/redhat-operators-2fv4x" Feb 27 17:14:48 crc kubenswrapper[4830]: I0227 17:14:48.954707 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kfh5q\" (UniqueName: \"kubernetes.io/projected/f3fe65c8-4235-4f23-b5d5-09b5fce6c808-kube-api-access-kfh5q\") pod \"redhat-operators-2fv4x\" (UID: \"f3fe65c8-4235-4f23-b5d5-09b5fce6c808\") " pod="openshift-marketplace/redhat-operators-2fv4x" Feb 27 17:14:49 crc kubenswrapper[4830]: I0227 17:14:49.122587 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2fv4x" Feb 27 17:14:49 crc kubenswrapper[4830]: I0227 17:14:49.578109 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2fv4x"] Feb 27 17:14:50 crc kubenswrapper[4830]: I0227 17:14:50.126403 4830 generic.go:334] "Generic (PLEG): container finished" podID="f3fe65c8-4235-4f23-b5d5-09b5fce6c808" containerID="350246b489d42d2c086563590f1fdd0352c3263af5bcc2693d976649fd92e3db" exitCode=0 Feb 27 17:14:50 crc kubenswrapper[4830]: I0227 17:14:50.126452 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2fv4x" event={"ID":"f3fe65c8-4235-4f23-b5d5-09b5fce6c808","Type":"ContainerDied","Data":"350246b489d42d2c086563590f1fdd0352c3263af5bcc2693d976649fd92e3db"} Feb 27 17:14:50 crc kubenswrapper[4830]: I0227 17:14:50.126480 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2fv4x" event={"ID":"f3fe65c8-4235-4f23-b5d5-09b5fce6c808","Type":"ContainerStarted","Data":"e1769d123e38c55c4d98139fd9a81f1c1b311644abf76f7011b56666af791035"} Feb 27 17:14:52 crc kubenswrapper[4830]: I0227 17:14:52.147392 4830 generic.go:334] "Generic (PLEG): container finished" 
podID="f3fe65c8-4235-4f23-b5d5-09b5fce6c808" containerID="277d62216bdcbbaca03723746050be271651f9be50d8e1c3bea4ba2eaa6a43f3" exitCode=0 Feb 27 17:14:52 crc kubenswrapper[4830]: I0227 17:14:52.147444 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2fv4x" event={"ID":"f3fe65c8-4235-4f23-b5d5-09b5fce6c808","Type":"ContainerDied","Data":"277d62216bdcbbaca03723746050be271651f9be50d8e1c3bea4ba2eaa6a43f3"} Feb 27 17:14:53 crc kubenswrapper[4830]: I0227 17:14:53.161778 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2fv4x" event={"ID":"f3fe65c8-4235-4f23-b5d5-09b5fce6c808","Type":"ContainerStarted","Data":"5916a946e51b23368f5ba7b4510d0f14f86f08eaacc3e44650dad963658befb5"} Feb 27 17:14:53 crc kubenswrapper[4830]: I0227 17:14:53.193415 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-2fv4x" podStartSLOduration=2.73136599 podStartE2EDuration="5.193386907s" podCreationTimestamp="2026-02-27 17:14:48 +0000 UTC" firstStartedPulling="2026-02-27 17:14:50.128549035 +0000 UTC m=+4086.217821518" lastFinishedPulling="2026-02-27 17:14:52.590569972 +0000 UTC m=+4088.679842435" observedRunningTime="2026-02-27 17:14:53.186162282 +0000 UTC m=+4089.275434775" watchObservedRunningTime="2026-02-27 17:14:53.193386907 +0000 UTC m=+4089.282659410" Feb 27 17:14:59 crc kubenswrapper[4830]: I0227 17:14:59.123243 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-2fv4x" Feb 27 17:14:59 crc kubenswrapper[4830]: I0227 17:14:59.124363 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-2fv4x" Feb 27 17:15:00 crc kubenswrapper[4830]: I0227 17:15:00.164027 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536875-kwbgr"] Feb 27 17:15:00 crc 
kubenswrapper[4830]: I0227 17:15:00.165085 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29536875-kwbgr" Feb 27 17:15:00 crc kubenswrapper[4830]: I0227 17:15:00.167239 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 27 17:15:00 crc kubenswrapper[4830]: I0227 17:15:00.168471 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 27 17:15:00 crc kubenswrapper[4830]: I0227 17:15:00.183644 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536875-kwbgr"] Feb 27 17:15:00 crc kubenswrapper[4830]: I0227 17:15:00.199212 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c48742fe-3684-4692-b85f-6bd72411af0e-secret-volume\") pod \"collect-profiles-29536875-kwbgr\" (UID: \"c48742fe-3684-4692-b85f-6bd72411af0e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536875-kwbgr" Feb 27 17:15:00 crc kubenswrapper[4830]: I0227 17:15:00.199344 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zzc6\" (UniqueName: \"kubernetes.io/projected/c48742fe-3684-4692-b85f-6bd72411af0e-kube-api-access-5zzc6\") pod \"collect-profiles-29536875-kwbgr\" (UID: \"c48742fe-3684-4692-b85f-6bd72411af0e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536875-kwbgr" Feb 27 17:15:00 crc kubenswrapper[4830]: I0227 17:15:00.199464 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c48742fe-3684-4692-b85f-6bd72411af0e-config-volume\") pod \"collect-profiles-29536875-kwbgr\" 
(UID: \"c48742fe-3684-4692-b85f-6bd72411af0e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536875-kwbgr" Feb 27 17:15:00 crc kubenswrapper[4830]: I0227 17:15:00.205300 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-2fv4x" podUID="f3fe65c8-4235-4f23-b5d5-09b5fce6c808" containerName="registry-server" probeResult="failure" output=< Feb 27 17:15:00 crc kubenswrapper[4830]: timeout: failed to connect service ":50051" within 1s Feb 27 17:15:00 crc kubenswrapper[4830]: > Feb 27 17:15:00 crc kubenswrapper[4830]: I0227 17:15:00.300091 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c48742fe-3684-4692-b85f-6bd72411af0e-config-volume\") pod \"collect-profiles-29536875-kwbgr\" (UID: \"c48742fe-3684-4692-b85f-6bd72411af0e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536875-kwbgr" Feb 27 17:15:00 crc kubenswrapper[4830]: I0227 17:15:00.300202 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c48742fe-3684-4692-b85f-6bd72411af0e-secret-volume\") pod \"collect-profiles-29536875-kwbgr\" (UID: \"c48742fe-3684-4692-b85f-6bd72411af0e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536875-kwbgr" Feb 27 17:15:00 crc kubenswrapper[4830]: I0227 17:15:00.300242 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5zzc6\" (UniqueName: \"kubernetes.io/projected/c48742fe-3684-4692-b85f-6bd72411af0e-kube-api-access-5zzc6\") pod \"collect-profiles-29536875-kwbgr\" (UID: \"c48742fe-3684-4692-b85f-6bd72411af0e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536875-kwbgr" Feb 27 17:15:00 crc kubenswrapper[4830]: I0227 17:15:00.301677 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/c48742fe-3684-4692-b85f-6bd72411af0e-config-volume\") pod \"collect-profiles-29536875-kwbgr\" (UID: \"c48742fe-3684-4692-b85f-6bd72411af0e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536875-kwbgr" Feb 27 17:15:00 crc kubenswrapper[4830]: I0227 17:15:00.314071 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c48742fe-3684-4692-b85f-6bd72411af0e-secret-volume\") pod \"collect-profiles-29536875-kwbgr\" (UID: \"c48742fe-3684-4692-b85f-6bd72411af0e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536875-kwbgr" Feb 27 17:15:00 crc kubenswrapper[4830]: I0227 17:15:00.318105 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5zzc6\" (UniqueName: \"kubernetes.io/projected/c48742fe-3684-4692-b85f-6bd72411af0e-kube-api-access-5zzc6\") pod \"collect-profiles-29536875-kwbgr\" (UID: \"c48742fe-3684-4692-b85f-6bd72411af0e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536875-kwbgr" Feb 27 17:15:00 crc kubenswrapper[4830]: I0227 17:15:00.487851 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29536875-kwbgr" Feb 27 17:15:00 crc kubenswrapper[4830]: I0227 17:15:00.844929 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536875-kwbgr"] Feb 27 17:15:01 crc kubenswrapper[4830]: I0227 17:15:01.240179 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29536875-kwbgr" event={"ID":"c48742fe-3684-4692-b85f-6bd72411af0e","Type":"ContainerStarted","Data":"81b41cd29fe515db7fd3a3ba216aacd034da40bbe22aec1cae04c77ed0f6fbba"} Feb 27 17:15:01 crc kubenswrapper[4830]: I0227 17:15:01.240718 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29536875-kwbgr" event={"ID":"c48742fe-3684-4692-b85f-6bd72411af0e","Type":"ContainerStarted","Data":"6c14bc0c69aa285a9607149ed5cf2c631c2c789a0b3c4e6de0c99da3a68b8308"} Feb 27 17:15:01 crc kubenswrapper[4830]: I0227 17:15:01.272374 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29536875-kwbgr" podStartSLOduration=1.2723488299999999 podStartE2EDuration="1.27234883s" podCreationTimestamp="2026-02-27 17:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 17:15:01.26323576 +0000 UTC m=+4097.352508223" watchObservedRunningTime="2026-02-27 17:15:01.27234883 +0000 UTC m=+4097.361621293" Feb 27 17:15:02 crc kubenswrapper[4830]: I0227 17:15:02.252123 4830 generic.go:334] "Generic (PLEG): container finished" podID="c48742fe-3684-4692-b85f-6bd72411af0e" containerID="81b41cd29fe515db7fd3a3ba216aacd034da40bbe22aec1cae04c77ed0f6fbba" exitCode=0 Feb 27 17:15:02 crc kubenswrapper[4830]: I0227 17:15:02.252191 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29536875-kwbgr" event={"ID":"c48742fe-3684-4692-b85f-6bd72411af0e","Type":"ContainerDied","Data":"81b41cd29fe515db7fd3a3ba216aacd034da40bbe22aec1cae04c77ed0f6fbba"} Feb 27 17:15:03 crc kubenswrapper[4830]: I0227 17:15:03.160504 4830 patch_prober.go:28] interesting pod/machine-config-daemon-2tv5v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 17:15:03 crc kubenswrapper[4830]: I0227 17:15:03.160753 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 17:15:03 crc kubenswrapper[4830]: I0227 17:15:03.655811 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29536875-kwbgr" Feb 27 17:15:03 crc kubenswrapper[4830]: I0227 17:15:03.754115 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c48742fe-3684-4692-b85f-6bd72411af0e-secret-volume\") pod \"c48742fe-3684-4692-b85f-6bd72411af0e\" (UID: \"c48742fe-3684-4692-b85f-6bd72411af0e\") " Feb 27 17:15:03 crc kubenswrapper[4830]: I0227 17:15:03.754193 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c48742fe-3684-4692-b85f-6bd72411af0e-config-volume\") pod \"c48742fe-3684-4692-b85f-6bd72411af0e\" (UID: \"c48742fe-3684-4692-b85f-6bd72411af0e\") " Feb 27 17:15:03 crc kubenswrapper[4830]: I0227 17:15:03.754294 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5zzc6\" (UniqueName: \"kubernetes.io/projected/c48742fe-3684-4692-b85f-6bd72411af0e-kube-api-access-5zzc6\") pod \"c48742fe-3684-4692-b85f-6bd72411af0e\" (UID: \"c48742fe-3684-4692-b85f-6bd72411af0e\") " Feb 27 17:15:03 crc kubenswrapper[4830]: I0227 17:15:03.757145 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c48742fe-3684-4692-b85f-6bd72411af0e-config-volume" (OuterVolumeSpecName: "config-volume") pod "c48742fe-3684-4692-b85f-6bd72411af0e" (UID: "c48742fe-3684-4692-b85f-6bd72411af0e"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:15:03 crc kubenswrapper[4830]: I0227 17:15:03.763762 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c48742fe-3684-4692-b85f-6bd72411af0e-kube-api-access-5zzc6" (OuterVolumeSpecName: "kube-api-access-5zzc6") pod "c48742fe-3684-4692-b85f-6bd72411af0e" (UID: "c48742fe-3684-4692-b85f-6bd72411af0e"). 
InnerVolumeSpecName "kube-api-access-5zzc6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:15:03 crc kubenswrapper[4830]: I0227 17:15:03.764189 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c48742fe-3684-4692-b85f-6bd72411af0e-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "c48742fe-3684-4692-b85f-6bd72411af0e" (UID: "c48742fe-3684-4692-b85f-6bd72411af0e"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:15:03 crc kubenswrapper[4830]: I0227 17:15:03.857128 4830 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c48742fe-3684-4692-b85f-6bd72411af0e-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 27 17:15:03 crc kubenswrapper[4830]: I0227 17:15:03.857184 4830 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c48742fe-3684-4692-b85f-6bd72411af0e-config-volume\") on node \"crc\" DevicePath \"\"" Feb 27 17:15:03 crc kubenswrapper[4830]: I0227 17:15:03.857207 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5zzc6\" (UniqueName: \"kubernetes.io/projected/c48742fe-3684-4692-b85f-6bd72411af0e-kube-api-access-5zzc6\") on node \"crc\" DevicePath \"\"" Feb 27 17:15:04 crc kubenswrapper[4830]: I0227 17:15:04.270306 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29536875-kwbgr" event={"ID":"c48742fe-3684-4692-b85f-6bd72411af0e","Type":"ContainerDied","Data":"6c14bc0c69aa285a9607149ed5cf2c631c2c789a0b3c4e6de0c99da3a68b8308"} Feb 27 17:15:04 crc kubenswrapper[4830]: I0227 17:15:04.270362 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6c14bc0c69aa285a9607149ed5cf2c631c2c789a0b3c4e6de0c99da3a68b8308" Feb 27 17:15:04 crc kubenswrapper[4830]: I0227 17:15:04.270368 4830 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29536875-kwbgr" Feb 27 17:15:04 crc kubenswrapper[4830]: I0227 17:15:04.357807 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536830-9wp9k"] Feb 27 17:15:04 crc kubenswrapper[4830]: I0227 17:15:04.367229 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536830-9wp9k"] Feb 27 17:15:04 crc kubenswrapper[4830]: I0227 17:15:04.779451 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4827561e-f60d-4b02-b4c6-7af50ab350ce" path="/var/lib/kubelet/pods/4827561e-f60d-4b02-b4c6-7af50ab350ce/volumes" Feb 27 17:15:09 crc kubenswrapper[4830]: I0227 17:15:09.201871 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-2fv4x" Feb 27 17:15:09 crc kubenswrapper[4830]: I0227 17:15:09.279888 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-2fv4x" Feb 27 17:15:12 crc kubenswrapper[4830]: I0227 17:15:12.584464 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2fv4x"] Feb 27 17:15:12 crc kubenswrapper[4830]: I0227 17:15:12.585309 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-2fv4x" podUID="f3fe65c8-4235-4f23-b5d5-09b5fce6c808" containerName="registry-server" containerID="cri-o://5916a946e51b23368f5ba7b4510d0f14f86f08eaacc3e44650dad963658befb5" gracePeriod=2 Feb 27 17:15:13 crc kubenswrapper[4830]: I0227 17:15:13.160688 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2fv4x" Feb 27 17:15:13 crc kubenswrapper[4830]: I0227 17:15:13.229486 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfh5q\" (UniqueName: \"kubernetes.io/projected/f3fe65c8-4235-4f23-b5d5-09b5fce6c808-kube-api-access-kfh5q\") pod \"f3fe65c8-4235-4f23-b5d5-09b5fce6c808\" (UID: \"f3fe65c8-4235-4f23-b5d5-09b5fce6c808\") " Feb 27 17:15:13 crc kubenswrapper[4830]: I0227 17:15:13.229529 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3fe65c8-4235-4f23-b5d5-09b5fce6c808-catalog-content\") pod \"f3fe65c8-4235-4f23-b5d5-09b5fce6c808\" (UID: \"f3fe65c8-4235-4f23-b5d5-09b5fce6c808\") " Feb 27 17:15:13 crc kubenswrapper[4830]: I0227 17:15:13.229614 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3fe65c8-4235-4f23-b5d5-09b5fce6c808-utilities\") pod \"f3fe65c8-4235-4f23-b5d5-09b5fce6c808\" (UID: \"f3fe65c8-4235-4f23-b5d5-09b5fce6c808\") " Feb 27 17:15:13 crc kubenswrapper[4830]: I0227 17:15:13.231326 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f3fe65c8-4235-4f23-b5d5-09b5fce6c808-utilities" (OuterVolumeSpecName: "utilities") pod "f3fe65c8-4235-4f23-b5d5-09b5fce6c808" (UID: "f3fe65c8-4235-4f23-b5d5-09b5fce6c808"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 17:15:13 crc kubenswrapper[4830]: I0227 17:15:13.237930 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3fe65c8-4235-4f23-b5d5-09b5fce6c808-kube-api-access-kfh5q" (OuterVolumeSpecName: "kube-api-access-kfh5q") pod "f3fe65c8-4235-4f23-b5d5-09b5fce6c808" (UID: "f3fe65c8-4235-4f23-b5d5-09b5fce6c808"). InnerVolumeSpecName "kube-api-access-kfh5q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:15:13 crc kubenswrapper[4830]: I0227 17:15:13.331092 4830 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3fe65c8-4235-4f23-b5d5-09b5fce6c808-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 17:15:13 crc kubenswrapper[4830]: I0227 17:15:13.331145 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfh5q\" (UniqueName: \"kubernetes.io/projected/f3fe65c8-4235-4f23-b5d5-09b5fce6c808-kube-api-access-kfh5q\") on node \"crc\" DevicePath \"\"" Feb 27 17:15:13 crc kubenswrapper[4830]: I0227 17:15:13.378121 4830 generic.go:334] "Generic (PLEG): container finished" podID="f3fe65c8-4235-4f23-b5d5-09b5fce6c808" containerID="5916a946e51b23368f5ba7b4510d0f14f86f08eaacc3e44650dad963658befb5" exitCode=0 Feb 27 17:15:13 crc kubenswrapper[4830]: I0227 17:15:13.378173 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2fv4x" event={"ID":"f3fe65c8-4235-4f23-b5d5-09b5fce6c808","Type":"ContainerDied","Data":"5916a946e51b23368f5ba7b4510d0f14f86f08eaacc3e44650dad963658befb5"} Feb 27 17:15:13 crc kubenswrapper[4830]: I0227 17:15:13.378193 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2fv4x" Feb 27 17:15:13 crc kubenswrapper[4830]: I0227 17:15:13.378210 4830 scope.go:117] "RemoveContainer" containerID="5916a946e51b23368f5ba7b4510d0f14f86f08eaacc3e44650dad963658befb5" Feb 27 17:15:13 crc kubenswrapper[4830]: I0227 17:15:13.378200 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2fv4x" event={"ID":"f3fe65c8-4235-4f23-b5d5-09b5fce6c808","Type":"ContainerDied","Data":"e1769d123e38c55c4d98139fd9a81f1c1b311644abf76f7011b56666af791035"} Feb 27 17:15:13 crc kubenswrapper[4830]: I0227 17:15:13.387563 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f3fe65c8-4235-4f23-b5d5-09b5fce6c808-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f3fe65c8-4235-4f23-b5d5-09b5fce6c808" (UID: "f3fe65c8-4235-4f23-b5d5-09b5fce6c808"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 17:15:13 crc kubenswrapper[4830]: I0227 17:15:13.400792 4830 scope.go:117] "RemoveContainer" containerID="277d62216bdcbbaca03723746050be271651f9be50d8e1c3bea4ba2eaa6a43f3" Feb 27 17:15:13 crc kubenswrapper[4830]: I0227 17:15:13.426559 4830 scope.go:117] "RemoveContainer" containerID="350246b489d42d2c086563590f1fdd0352c3263af5bcc2693d976649fd92e3db" Feb 27 17:15:13 crc kubenswrapper[4830]: I0227 17:15:13.432068 4830 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3fe65c8-4235-4f23-b5d5-09b5fce6c808-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 17:15:13 crc kubenswrapper[4830]: I0227 17:15:13.456825 4830 scope.go:117] "RemoveContainer" containerID="5916a946e51b23368f5ba7b4510d0f14f86f08eaacc3e44650dad963658befb5" Feb 27 17:15:13 crc kubenswrapper[4830]: E0227 17:15:13.457414 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not 
find container \"5916a946e51b23368f5ba7b4510d0f14f86f08eaacc3e44650dad963658befb5\": container with ID starting with 5916a946e51b23368f5ba7b4510d0f14f86f08eaacc3e44650dad963658befb5 not found: ID does not exist" containerID="5916a946e51b23368f5ba7b4510d0f14f86f08eaacc3e44650dad963658befb5" Feb 27 17:15:13 crc kubenswrapper[4830]: I0227 17:15:13.457505 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5916a946e51b23368f5ba7b4510d0f14f86f08eaacc3e44650dad963658befb5"} err="failed to get container status \"5916a946e51b23368f5ba7b4510d0f14f86f08eaacc3e44650dad963658befb5\": rpc error: code = NotFound desc = could not find container \"5916a946e51b23368f5ba7b4510d0f14f86f08eaacc3e44650dad963658befb5\": container with ID starting with 5916a946e51b23368f5ba7b4510d0f14f86f08eaacc3e44650dad963658befb5 not found: ID does not exist" Feb 27 17:15:13 crc kubenswrapper[4830]: I0227 17:15:13.457561 4830 scope.go:117] "RemoveContainer" containerID="277d62216bdcbbaca03723746050be271651f9be50d8e1c3bea4ba2eaa6a43f3" Feb 27 17:15:13 crc kubenswrapper[4830]: E0227 17:15:13.458115 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"277d62216bdcbbaca03723746050be271651f9be50d8e1c3bea4ba2eaa6a43f3\": container with ID starting with 277d62216bdcbbaca03723746050be271651f9be50d8e1c3bea4ba2eaa6a43f3 not found: ID does not exist" containerID="277d62216bdcbbaca03723746050be271651f9be50d8e1c3bea4ba2eaa6a43f3" Feb 27 17:15:13 crc kubenswrapper[4830]: I0227 17:15:13.458195 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"277d62216bdcbbaca03723746050be271651f9be50d8e1c3bea4ba2eaa6a43f3"} err="failed to get container status \"277d62216bdcbbaca03723746050be271651f9be50d8e1c3bea4ba2eaa6a43f3\": rpc error: code = NotFound desc = could not find container \"277d62216bdcbbaca03723746050be271651f9be50d8e1c3bea4ba2eaa6a43f3\": 
container with ID starting with 277d62216bdcbbaca03723746050be271651f9be50d8e1c3bea4ba2eaa6a43f3 not found: ID does not exist" Feb 27 17:15:13 crc kubenswrapper[4830]: I0227 17:15:13.458244 4830 scope.go:117] "RemoveContainer" containerID="350246b489d42d2c086563590f1fdd0352c3263af5bcc2693d976649fd92e3db" Feb 27 17:15:13 crc kubenswrapper[4830]: E0227 17:15:13.458569 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"350246b489d42d2c086563590f1fdd0352c3263af5bcc2693d976649fd92e3db\": container with ID starting with 350246b489d42d2c086563590f1fdd0352c3263af5bcc2693d976649fd92e3db not found: ID does not exist" containerID="350246b489d42d2c086563590f1fdd0352c3263af5bcc2693d976649fd92e3db" Feb 27 17:15:13 crc kubenswrapper[4830]: I0227 17:15:13.458634 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"350246b489d42d2c086563590f1fdd0352c3263af5bcc2693d976649fd92e3db"} err="failed to get container status \"350246b489d42d2c086563590f1fdd0352c3263af5bcc2693d976649fd92e3db\": rpc error: code = NotFound desc = could not find container \"350246b489d42d2c086563590f1fdd0352c3263af5bcc2693d976649fd92e3db\": container with ID starting with 350246b489d42d2c086563590f1fdd0352c3263af5bcc2693d976649fd92e3db not found: ID does not exist" Feb 27 17:15:13 crc kubenswrapper[4830]: I0227 17:15:13.737837 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2fv4x"] Feb 27 17:15:13 crc kubenswrapper[4830]: I0227 17:15:13.748186 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-2fv4x"] Feb 27 17:15:13 crc kubenswrapper[4830]: E0227 17:15:13.855661 4830 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf3fe65c8_4235_4f23_b5d5_09b5fce6c808.slice\": 
RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf3fe65c8_4235_4f23_b5d5_09b5fce6c808.slice/crio-e1769d123e38c55c4d98139fd9a81f1c1b311644abf76f7011b56666af791035\": RecentStats: unable to find data in memory cache]" Feb 27 17:15:14 crc kubenswrapper[4830]: I0227 17:15:14.778263 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3fe65c8-4235-4f23-b5d5-09b5fce6c808" path="/var/lib/kubelet/pods/f3fe65c8-4235-4f23-b5d5-09b5fce6c808/volumes" Feb 27 17:15:33 crc kubenswrapper[4830]: I0227 17:15:33.160588 4830 patch_prober.go:28] interesting pod/machine-config-daemon-2tv5v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 17:15:33 crc kubenswrapper[4830]: I0227 17:15:33.161277 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 17:15:47 crc kubenswrapper[4830]: I0227 17:15:47.080001 4830 scope.go:117] "RemoveContainer" containerID="16aa2b72eb611476bfa1ca732d50197957726b82f3e6029bf856f46816ea160c" Feb 27 17:16:00 crc kubenswrapper[4830]: I0227 17:16:00.155649 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536876-dmbq6"] Feb 27 17:16:00 crc kubenswrapper[4830]: E0227 17:16:00.156822 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3fe65c8-4235-4f23-b5d5-09b5fce6c808" containerName="extract-content" Feb 27 17:16:00 crc kubenswrapper[4830]: I0227 17:16:00.156844 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3fe65c8-4235-4f23-b5d5-09b5fce6c808" 
containerName="extract-content" Feb 27 17:16:00 crc kubenswrapper[4830]: E0227 17:16:00.156874 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3fe65c8-4235-4f23-b5d5-09b5fce6c808" containerName="extract-utilities" Feb 27 17:16:00 crc kubenswrapper[4830]: I0227 17:16:00.156886 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3fe65c8-4235-4f23-b5d5-09b5fce6c808" containerName="extract-utilities" Feb 27 17:16:00 crc kubenswrapper[4830]: E0227 17:16:00.156938 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3fe65c8-4235-4f23-b5d5-09b5fce6c808" containerName="registry-server" Feb 27 17:16:00 crc kubenswrapper[4830]: I0227 17:16:00.156985 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3fe65c8-4235-4f23-b5d5-09b5fce6c808" containerName="registry-server" Feb 27 17:16:00 crc kubenswrapper[4830]: E0227 17:16:00.156999 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c48742fe-3684-4692-b85f-6bd72411af0e" containerName="collect-profiles" Feb 27 17:16:00 crc kubenswrapper[4830]: I0227 17:16:00.157011 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="c48742fe-3684-4692-b85f-6bd72411af0e" containerName="collect-profiles" Feb 27 17:16:00 crc kubenswrapper[4830]: I0227 17:16:00.157263 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3fe65c8-4235-4f23-b5d5-09b5fce6c808" containerName="registry-server" Feb 27 17:16:00 crc kubenswrapper[4830]: I0227 17:16:00.157281 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="c48742fe-3684-4692-b85f-6bd72411af0e" containerName="collect-profiles" Feb 27 17:16:00 crc kubenswrapper[4830]: I0227 17:16:00.158038 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536876-dmbq6" Feb 27 17:16:00 crc kubenswrapper[4830]: I0227 17:16:00.163597 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lmbtm" Feb 27 17:16:00 crc kubenswrapper[4830]: I0227 17:16:00.163984 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 17:16:00 crc kubenswrapper[4830]: I0227 17:16:00.164239 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536876-dmbq6"] Feb 27 17:16:00 crc kubenswrapper[4830]: I0227 17:16:00.164255 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 17:16:00 crc kubenswrapper[4830]: I0227 17:16:00.199188 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdwf9\" (UniqueName: \"kubernetes.io/projected/ef159740-1a43-4fa1-b365-6cde00b8fdde-kube-api-access-qdwf9\") pod \"auto-csr-approver-29536876-dmbq6\" (UID: \"ef159740-1a43-4fa1-b365-6cde00b8fdde\") " pod="openshift-infra/auto-csr-approver-29536876-dmbq6" Feb 27 17:16:00 crc kubenswrapper[4830]: I0227 17:16:00.301799 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdwf9\" (UniqueName: \"kubernetes.io/projected/ef159740-1a43-4fa1-b365-6cde00b8fdde-kube-api-access-qdwf9\") pod \"auto-csr-approver-29536876-dmbq6\" (UID: \"ef159740-1a43-4fa1-b365-6cde00b8fdde\") " pod="openshift-infra/auto-csr-approver-29536876-dmbq6" Feb 27 17:16:00 crc kubenswrapper[4830]: I0227 17:16:00.393175 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdwf9\" (UniqueName: \"kubernetes.io/projected/ef159740-1a43-4fa1-b365-6cde00b8fdde-kube-api-access-qdwf9\") pod \"auto-csr-approver-29536876-dmbq6\" (UID: \"ef159740-1a43-4fa1-b365-6cde00b8fdde\") " 
pod="openshift-infra/auto-csr-approver-29536876-dmbq6" Feb 27 17:16:00 crc kubenswrapper[4830]: I0227 17:16:00.492857 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536876-dmbq6" Feb 27 17:16:00 crc kubenswrapper[4830]: I0227 17:16:00.984163 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536876-dmbq6"] Feb 27 17:16:01 crc kubenswrapper[4830]: I0227 17:16:01.845004 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536876-dmbq6" event={"ID":"ef159740-1a43-4fa1-b365-6cde00b8fdde","Type":"ContainerStarted","Data":"f5bdf869272fdffc67db12f1747b2bf5bc2546ccaed95ee85ca2848bb3a5334c"} Feb 27 17:16:02 crc kubenswrapper[4830]: I0227 17:16:02.858739 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536876-dmbq6" event={"ID":"ef159740-1a43-4fa1-b365-6cde00b8fdde","Type":"ContainerStarted","Data":"4c7dfcbe9a27a98b146e4064a13a0f81129b3c7a3d01d42f8bcf186e679741ba"} Feb 27 17:16:02 crc kubenswrapper[4830]: I0227 17:16:02.882860 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29536876-dmbq6" podStartSLOduration=1.554390589 podStartE2EDuration="2.882826377s" podCreationTimestamp="2026-02-27 17:16:00 +0000 UTC" firstStartedPulling="2026-02-27 17:16:00.992618746 +0000 UTC m=+4157.081891249" lastFinishedPulling="2026-02-27 17:16:02.321054544 +0000 UTC m=+4158.410327037" observedRunningTime="2026-02-27 17:16:02.882460718 +0000 UTC m=+4158.971733221" watchObservedRunningTime="2026-02-27 17:16:02.882826377 +0000 UTC m=+4158.972098880" Feb 27 17:16:03 crc kubenswrapper[4830]: I0227 17:16:03.160972 4830 patch_prober.go:28] interesting pod/machine-config-daemon-2tv5v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial 
tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 17:16:03 crc kubenswrapper[4830]: I0227 17:16:03.161059 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 17:16:03 crc kubenswrapper[4830]: I0227 17:16:03.161117 4830 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" Feb 27 17:16:03 crc kubenswrapper[4830]: I0227 17:16:03.161762 4830 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4fe4d1b45eabdb72f9fc5ac554899ea9b06c8455f8916258035d1a2fc79f3c9e"} pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 27 17:16:03 crc kubenswrapper[4830]: I0227 17:16:03.161860 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" containerName="machine-config-daemon" containerID="cri-o://4fe4d1b45eabdb72f9fc5ac554899ea9b06c8455f8916258035d1a2fc79f3c9e" gracePeriod=600 Feb 27 17:16:03 crc kubenswrapper[4830]: I0227 17:16:03.873534 4830 generic.go:334] "Generic (PLEG): container finished" podID="ef159740-1a43-4fa1-b365-6cde00b8fdde" containerID="4c7dfcbe9a27a98b146e4064a13a0f81129b3c7a3d01d42f8bcf186e679741ba" exitCode=0 Feb 27 17:16:03 crc kubenswrapper[4830]: I0227 17:16:03.873678 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536876-dmbq6" 
event={"ID":"ef159740-1a43-4fa1-b365-6cde00b8fdde","Type":"ContainerDied","Data":"4c7dfcbe9a27a98b146e4064a13a0f81129b3c7a3d01d42f8bcf186e679741ba"} Feb 27 17:16:03 crc kubenswrapper[4830]: I0227 17:16:03.879366 4830 generic.go:334] "Generic (PLEG): container finished" podID="00d6b7ce-4757-4275-8345-60c1b546ce25" containerID="4fe4d1b45eabdb72f9fc5ac554899ea9b06c8455f8916258035d1a2fc79f3c9e" exitCode=0 Feb 27 17:16:03 crc kubenswrapper[4830]: I0227 17:16:03.879428 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" event={"ID":"00d6b7ce-4757-4275-8345-60c1b546ce25","Type":"ContainerDied","Data":"4fe4d1b45eabdb72f9fc5ac554899ea9b06c8455f8916258035d1a2fc79f3c9e"} Feb 27 17:16:03 crc kubenswrapper[4830]: I0227 17:16:03.879472 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" event={"ID":"00d6b7ce-4757-4275-8345-60c1b546ce25","Type":"ContainerStarted","Data":"d43c5277201627bfc21083e209c49d25c5bf66f11f77c7094001382c8173b2a3"} Feb 27 17:16:03 crc kubenswrapper[4830]: I0227 17:16:03.879513 4830 scope.go:117] "RemoveContainer" containerID="b59a0a2697e1673250d28d22947dbf29d890efdc4f2af61efcb7c8f01573fc27" Feb 27 17:16:05 crc kubenswrapper[4830]: I0227 17:16:05.303725 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536876-dmbq6" Feb 27 17:16:05 crc kubenswrapper[4830]: I0227 17:16:05.391573 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qdwf9\" (UniqueName: \"kubernetes.io/projected/ef159740-1a43-4fa1-b365-6cde00b8fdde-kube-api-access-qdwf9\") pod \"ef159740-1a43-4fa1-b365-6cde00b8fdde\" (UID: \"ef159740-1a43-4fa1-b365-6cde00b8fdde\") " Feb 27 17:16:05 crc kubenswrapper[4830]: I0227 17:16:05.403290 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef159740-1a43-4fa1-b365-6cde00b8fdde-kube-api-access-qdwf9" (OuterVolumeSpecName: "kube-api-access-qdwf9") pod "ef159740-1a43-4fa1-b365-6cde00b8fdde" (UID: "ef159740-1a43-4fa1-b365-6cde00b8fdde"). InnerVolumeSpecName "kube-api-access-qdwf9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:16:05 crc kubenswrapper[4830]: I0227 17:16:05.494658 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qdwf9\" (UniqueName: \"kubernetes.io/projected/ef159740-1a43-4fa1-b365-6cde00b8fdde-kube-api-access-qdwf9\") on node \"crc\" DevicePath \"\"" Feb 27 17:16:05 crc kubenswrapper[4830]: I0227 17:16:05.919410 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536876-dmbq6" event={"ID":"ef159740-1a43-4fa1-b365-6cde00b8fdde","Type":"ContainerDied","Data":"f5bdf869272fdffc67db12f1747b2bf5bc2546ccaed95ee85ca2848bb3a5334c"} Feb 27 17:16:05 crc kubenswrapper[4830]: I0227 17:16:05.919466 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f5bdf869272fdffc67db12f1747b2bf5bc2546ccaed95ee85ca2848bb3a5334c" Feb 27 17:16:05 crc kubenswrapper[4830]: I0227 17:16:05.919515 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536876-dmbq6" Feb 27 17:16:05 crc kubenswrapper[4830]: I0227 17:16:05.998902 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536870-hv9dk"] Feb 27 17:16:06 crc kubenswrapper[4830]: I0227 17:16:06.018062 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536870-hv9dk"] Feb 27 17:16:06 crc kubenswrapper[4830]: I0227 17:16:06.778560 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4691bb0-7469-49a3-a878-21b2f20e43b1" path="/var/lib/kubelet/pods/f4691bb0-7469-49a3-a878-21b2f20e43b1/volumes" Feb 27 17:16:47 crc kubenswrapper[4830]: I0227 17:16:47.179768 4830 scope.go:117] "RemoveContainer" containerID="17d46bad7e5ac46f4fba357597fde713481738cb7bc7178a2d2314620576a8fc" Feb 27 17:17:04 crc kubenswrapper[4830]: I0227 17:17:04.024281 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-f54kz"] Feb 27 17:17:04 crc kubenswrapper[4830]: E0227 17:17:04.026022 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef159740-1a43-4fa1-b365-6cde00b8fdde" containerName="oc" Feb 27 17:17:04 crc kubenswrapper[4830]: I0227 17:17:04.026056 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef159740-1a43-4fa1-b365-6cde00b8fdde" containerName="oc" Feb 27 17:17:04 crc kubenswrapper[4830]: I0227 17:17:04.026408 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef159740-1a43-4fa1-b365-6cde00b8fdde" containerName="oc" Feb 27 17:17:04 crc kubenswrapper[4830]: I0227 17:17:04.029423 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-f54kz" Feb 27 17:17:04 crc kubenswrapper[4830]: I0227 17:17:04.032407 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-f54kz"] Feb 27 17:17:04 crc kubenswrapper[4830]: I0227 17:17:04.203571 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95303a4c-1476-4bbb-873c-05b5f6e528d6-utilities\") pod \"redhat-marketplace-f54kz\" (UID: \"95303a4c-1476-4bbb-873c-05b5f6e528d6\") " pod="openshift-marketplace/redhat-marketplace-f54kz" Feb 27 17:17:04 crc kubenswrapper[4830]: I0227 17:17:04.204492 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mk25\" (UniqueName: \"kubernetes.io/projected/95303a4c-1476-4bbb-873c-05b5f6e528d6-kube-api-access-6mk25\") pod \"redhat-marketplace-f54kz\" (UID: \"95303a4c-1476-4bbb-873c-05b5f6e528d6\") " pod="openshift-marketplace/redhat-marketplace-f54kz" Feb 27 17:17:04 crc kubenswrapper[4830]: I0227 17:17:04.204781 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95303a4c-1476-4bbb-873c-05b5f6e528d6-catalog-content\") pod \"redhat-marketplace-f54kz\" (UID: \"95303a4c-1476-4bbb-873c-05b5f6e528d6\") " pod="openshift-marketplace/redhat-marketplace-f54kz" Feb 27 17:17:04 crc kubenswrapper[4830]: I0227 17:17:04.306909 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95303a4c-1476-4bbb-873c-05b5f6e528d6-utilities\") pod \"redhat-marketplace-f54kz\" (UID: \"95303a4c-1476-4bbb-873c-05b5f6e528d6\") " pod="openshift-marketplace/redhat-marketplace-f54kz" Feb 27 17:17:04 crc kubenswrapper[4830]: I0227 17:17:04.307230 4830 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-6mk25\" (UniqueName: \"kubernetes.io/projected/95303a4c-1476-4bbb-873c-05b5f6e528d6-kube-api-access-6mk25\") pod \"redhat-marketplace-f54kz\" (UID: \"95303a4c-1476-4bbb-873c-05b5f6e528d6\") " pod="openshift-marketplace/redhat-marketplace-f54kz" Feb 27 17:17:04 crc kubenswrapper[4830]: I0227 17:17:04.307284 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95303a4c-1476-4bbb-873c-05b5f6e528d6-catalog-content\") pod \"redhat-marketplace-f54kz\" (UID: \"95303a4c-1476-4bbb-873c-05b5f6e528d6\") " pod="openshift-marketplace/redhat-marketplace-f54kz" Feb 27 17:17:04 crc kubenswrapper[4830]: I0227 17:17:04.307442 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95303a4c-1476-4bbb-873c-05b5f6e528d6-utilities\") pod \"redhat-marketplace-f54kz\" (UID: \"95303a4c-1476-4bbb-873c-05b5f6e528d6\") " pod="openshift-marketplace/redhat-marketplace-f54kz" Feb 27 17:17:04 crc kubenswrapper[4830]: I0227 17:17:04.308022 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95303a4c-1476-4bbb-873c-05b5f6e528d6-catalog-content\") pod \"redhat-marketplace-f54kz\" (UID: \"95303a4c-1476-4bbb-873c-05b5f6e528d6\") " pod="openshift-marketplace/redhat-marketplace-f54kz" Feb 27 17:17:04 crc kubenswrapper[4830]: I0227 17:17:04.338288 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6mk25\" (UniqueName: \"kubernetes.io/projected/95303a4c-1476-4bbb-873c-05b5f6e528d6-kube-api-access-6mk25\") pod \"redhat-marketplace-f54kz\" (UID: \"95303a4c-1476-4bbb-873c-05b5f6e528d6\") " pod="openshift-marketplace/redhat-marketplace-f54kz" Feb 27 17:17:04 crc kubenswrapper[4830]: I0227 17:17:04.361099 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-f54kz" Feb 27 17:17:04 crc kubenswrapper[4830]: I0227 17:17:04.870981 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-f54kz"] Feb 27 17:17:05 crc kubenswrapper[4830]: I0227 17:17:05.517796 4830 generic.go:334] "Generic (PLEG): container finished" podID="95303a4c-1476-4bbb-873c-05b5f6e528d6" containerID="dfd0b1bf7360c742540bf356a3ad250d7b8baa608dff56fa20be8308d862f8d9" exitCode=0 Feb 27 17:17:05 crc kubenswrapper[4830]: I0227 17:17:05.517938 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f54kz" event={"ID":"95303a4c-1476-4bbb-873c-05b5f6e528d6","Type":"ContainerDied","Data":"dfd0b1bf7360c742540bf356a3ad250d7b8baa608dff56fa20be8308d862f8d9"} Feb 27 17:17:05 crc kubenswrapper[4830]: I0227 17:17:05.518399 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f54kz" event={"ID":"95303a4c-1476-4bbb-873c-05b5f6e528d6","Type":"ContainerStarted","Data":"2c1c84c76d67a62bc0d7c504e560b590bde4401a074e8fce0c3bb276ac368d0a"} Feb 27 17:17:06 crc kubenswrapper[4830]: I0227 17:17:06.533611 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f54kz" event={"ID":"95303a4c-1476-4bbb-873c-05b5f6e528d6","Type":"ContainerStarted","Data":"f801fc7797fc0f2aea01ac2c66ca2d68e1d1c81ea96c4f5974e93479daa0d21a"} Feb 27 17:17:07 crc kubenswrapper[4830]: I0227 17:17:07.546712 4830 generic.go:334] "Generic (PLEG): container finished" podID="95303a4c-1476-4bbb-873c-05b5f6e528d6" containerID="f801fc7797fc0f2aea01ac2c66ca2d68e1d1c81ea96c4f5974e93479daa0d21a" exitCode=0 Feb 27 17:17:07 crc kubenswrapper[4830]: I0227 17:17:07.546820 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f54kz" 
event={"ID":"95303a4c-1476-4bbb-873c-05b5f6e528d6","Type":"ContainerDied","Data":"f801fc7797fc0f2aea01ac2c66ca2d68e1d1c81ea96c4f5974e93479daa0d21a"} Feb 27 17:17:07 crc kubenswrapper[4830]: I0227 17:17:07.547281 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f54kz" event={"ID":"95303a4c-1476-4bbb-873c-05b5f6e528d6","Type":"ContainerStarted","Data":"d058b8c8e428ac1639be7e3bd2aa69dcb621ca731d883bc8be1283835b94be78"} Feb 27 17:17:07 crc kubenswrapper[4830]: I0227 17:17:07.576908 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-f54kz" podStartSLOduration=2.959268009 podStartE2EDuration="4.576881949s" podCreationTimestamp="2026-02-27 17:17:03 +0000 UTC" firstStartedPulling="2026-02-27 17:17:05.520160952 +0000 UTC m=+4221.609433445" lastFinishedPulling="2026-02-27 17:17:07.137774882 +0000 UTC m=+4223.227047385" observedRunningTime="2026-02-27 17:17:07.573444096 +0000 UTC m=+4223.662716599" watchObservedRunningTime="2026-02-27 17:17:07.576881949 +0000 UTC m=+4223.666154452" Feb 27 17:17:14 crc kubenswrapper[4830]: I0227 17:17:14.361838 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-f54kz" Feb 27 17:17:14 crc kubenswrapper[4830]: I0227 17:17:14.362801 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-f54kz" Feb 27 17:17:14 crc kubenswrapper[4830]: I0227 17:17:14.443737 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-f54kz" Feb 27 17:17:14 crc kubenswrapper[4830]: I0227 17:17:14.670802 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-f54kz" Feb 27 17:17:14 crc kubenswrapper[4830]: I0227 17:17:14.731249 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-f54kz"] Feb 27 17:17:16 crc kubenswrapper[4830]: I0227 17:17:16.632129 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-f54kz" podUID="95303a4c-1476-4bbb-873c-05b5f6e528d6" containerName="registry-server" containerID="cri-o://d058b8c8e428ac1639be7e3bd2aa69dcb621ca731d883bc8be1283835b94be78" gracePeriod=2 Feb 27 17:17:17 crc kubenswrapper[4830]: I0227 17:17:17.159380 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-f54kz" Feb 27 17:17:17 crc kubenswrapper[4830]: I0227 17:17:17.327807 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6mk25\" (UniqueName: \"kubernetes.io/projected/95303a4c-1476-4bbb-873c-05b5f6e528d6-kube-api-access-6mk25\") pod \"95303a4c-1476-4bbb-873c-05b5f6e528d6\" (UID: \"95303a4c-1476-4bbb-873c-05b5f6e528d6\") " Feb 27 17:17:17 crc kubenswrapper[4830]: I0227 17:17:17.327910 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95303a4c-1476-4bbb-873c-05b5f6e528d6-catalog-content\") pod \"95303a4c-1476-4bbb-873c-05b5f6e528d6\" (UID: \"95303a4c-1476-4bbb-873c-05b5f6e528d6\") " Feb 27 17:17:17 crc kubenswrapper[4830]: I0227 17:17:17.327988 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95303a4c-1476-4bbb-873c-05b5f6e528d6-utilities\") pod \"95303a4c-1476-4bbb-873c-05b5f6e528d6\" (UID: \"95303a4c-1476-4bbb-873c-05b5f6e528d6\") " Feb 27 17:17:17 crc kubenswrapper[4830]: I0227 17:17:17.328815 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/95303a4c-1476-4bbb-873c-05b5f6e528d6-utilities" (OuterVolumeSpecName: "utilities") pod "95303a4c-1476-4bbb-873c-05b5f6e528d6" (UID: 
"95303a4c-1476-4bbb-873c-05b5f6e528d6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 17:17:17 crc kubenswrapper[4830]: I0227 17:17:17.333597 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95303a4c-1476-4bbb-873c-05b5f6e528d6-kube-api-access-6mk25" (OuterVolumeSpecName: "kube-api-access-6mk25") pod "95303a4c-1476-4bbb-873c-05b5f6e528d6" (UID: "95303a4c-1476-4bbb-873c-05b5f6e528d6"). InnerVolumeSpecName "kube-api-access-6mk25". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:17:17 crc kubenswrapper[4830]: I0227 17:17:17.405894 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/95303a4c-1476-4bbb-873c-05b5f6e528d6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "95303a4c-1476-4bbb-873c-05b5f6e528d6" (UID: "95303a4c-1476-4bbb-873c-05b5f6e528d6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 17:17:17 crc kubenswrapper[4830]: I0227 17:17:17.430075 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6mk25\" (UniqueName: \"kubernetes.io/projected/95303a4c-1476-4bbb-873c-05b5f6e528d6-kube-api-access-6mk25\") on node \"crc\" DevicePath \"\"" Feb 27 17:17:17 crc kubenswrapper[4830]: I0227 17:17:17.430138 4830 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95303a4c-1476-4bbb-873c-05b5f6e528d6-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 17:17:17 crc kubenswrapper[4830]: I0227 17:17:17.430151 4830 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95303a4c-1476-4bbb-873c-05b5f6e528d6-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 17:17:17 crc kubenswrapper[4830]: I0227 17:17:17.647185 4830 generic.go:334] "Generic (PLEG): container finished" 
podID="95303a4c-1476-4bbb-873c-05b5f6e528d6" containerID="d058b8c8e428ac1639be7e3bd2aa69dcb621ca731d883bc8be1283835b94be78" exitCode=0 Feb 27 17:17:17 crc kubenswrapper[4830]: I0227 17:17:17.647301 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f54kz" event={"ID":"95303a4c-1476-4bbb-873c-05b5f6e528d6","Type":"ContainerDied","Data":"d058b8c8e428ac1639be7e3bd2aa69dcb621ca731d883bc8be1283835b94be78"} Feb 27 17:17:17 crc kubenswrapper[4830]: I0227 17:17:17.647329 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-f54kz" Feb 27 17:17:17 crc kubenswrapper[4830]: I0227 17:17:17.647387 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f54kz" event={"ID":"95303a4c-1476-4bbb-873c-05b5f6e528d6","Type":"ContainerDied","Data":"2c1c84c76d67a62bc0d7c504e560b590bde4401a074e8fce0c3bb276ac368d0a"} Feb 27 17:17:17 crc kubenswrapper[4830]: I0227 17:17:17.647438 4830 scope.go:117] "RemoveContainer" containerID="d058b8c8e428ac1639be7e3bd2aa69dcb621ca731d883bc8be1283835b94be78" Feb 27 17:17:17 crc kubenswrapper[4830]: I0227 17:17:17.686620 4830 scope.go:117] "RemoveContainer" containerID="f801fc7797fc0f2aea01ac2c66ca2d68e1d1c81ea96c4f5974e93479daa0d21a" Feb 27 17:17:17 crc kubenswrapper[4830]: I0227 17:17:17.706996 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-f54kz"] Feb 27 17:17:17 crc kubenswrapper[4830]: I0227 17:17:17.714374 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-f54kz"] Feb 27 17:17:17 crc kubenswrapper[4830]: I0227 17:17:17.727582 4830 scope.go:117] "RemoveContainer" containerID="dfd0b1bf7360c742540bf356a3ad250d7b8baa608dff56fa20be8308d862f8d9" Feb 27 17:17:17 crc kubenswrapper[4830]: I0227 17:17:17.764549 4830 scope.go:117] "RemoveContainer" 
containerID="d058b8c8e428ac1639be7e3bd2aa69dcb621ca731d883bc8be1283835b94be78" Feb 27 17:17:17 crc kubenswrapper[4830]: E0227 17:17:17.765169 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d058b8c8e428ac1639be7e3bd2aa69dcb621ca731d883bc8be1283835b94be78\": container with ID starting with d058b8c8e428ac1639be7e3bd2aa69dcb621ca731d883bc8be1283835b94be78 not found: ID does not exist" containerID="d058b8c8e428ac1639be7e3bd2aa69dcb621ca731d883bc8be1283835b94be78" Feb 27 17:17:17 crc kubenswrapper[4830]: I0227 17:17:17.765220 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d058b8c8e428ac1639be7e3bd2aa69dcb621ca731d883bc8be1283835b94be78"} err="failed to get container status \"d058b8c8e428ac1639be7e3bd2aa69dcb621ca731d883bc8be1283835b94be78\": rpc error: code = NotFound desc = could not find container \"d058b8c8e428ac1639be7e3bd2aa69dcb621ca731d883bc8be1283835b94be78\": container with ID starting with d058b8c8e428ac1639be7e3bd2aa69dcb621ca731d883bc8be1283835b94be78 not found: ID does not exist" Feb 27 17:17:17 crc kubenswrapper[4830]: I0227 17:17:17.765253 4830 scope.go:117] "RemoveContainer" containerID="f801fc7797fc0f2aea01ac2c66ca2d68e1d1c81ea96c4f5974e93479daa0d21a" Feb 27 17:17:17 crc kubenswrapper[4830]: E0227 17:17:17.765737 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f801fc7797fc0f2aea01ac2c66ca2d68e1d1c81ea96c4f5974e93479daa0d21a\": container with ID starting with f801fc7797fc0f2aea01ac2c66ca2d68e1d1c81ea96c4f5974e93479daa0d21a not found: ID does not exist" containerID="f801fc7797fc0f2aea01ac2c66ca2d68e1d1c81ea96c4f5974e93479daa0d21a" Feb 27 17:17:17 crc kubenswrapper[4830]: I0227 17:17:17.765778 4830 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"f801fc7797fc0f2aea01ac2c66ca2d68e1d1c81ea96c4f5974e93479daa0d21a"} err="failed to get container status \"f801fc7797fc0f2aea01ac2c66ca2d68e1d1c81ea96c4f5974e93479daa0d21a\": rpc error: code = NotFound desc = could not find container \"f801fc7797fc0f2aea01ac2c66ca2d68e1d1c81ea96c4f5974e93479daa0d21a\": container with ID starting with f801fc7797fc0f2aea01ac2c66ca2d68e1d1c81ea96c4f5974e93479daa0d21a not found: ID does not exist" Feb 27 17:17:17 crc kubenswrapper[4830]: I0227 17:17:17.765807 4830 scope.go:117] "RemoveContainer" containerID="dfd0b1bf7360c742540bf356a3ad250d7b8baa608dff56fa20be8308d862f8d9" Feb 27 17:17:17 crc kubenswrapper[4830]: E0227 17:17:17.766162 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dfd0b1bf7360c742540bf356a3ad250d7b8baa608dff56fa20be8308d862f8d9\": container with ID starting with dfd0b1bf7360c742540bf356a3ad250d7b8baa608dff56fa20be8308d862f8d9 not found: ID does not exist" containerID="dfd0b1bf7360c742540bf356a3ad250d7b8baa608dff56fa20be8308d862f8d9" Feb 27 17:17:17 crc kubenswrapper[4830]: I0227 17:17:17.766197 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dfd0b1bf7360c742540bf356a3ad250d7b8baa608dff56fa20be8308d862f8d9"} err="failed to get container status \"dfd0b1bf7360c742540bf356a3ad250d7b8baa608dff56fa20be8308d862f8d9\": rpc error: code = NotFound desc = could not find container \"dfd0b1bf7360c742540bf356a3ad250d7b8baa608dff56fa20be8308d862f8d9\": container with ID starting with dfd0b1bf7360c742540bf356a3ad250d7b8baa608dff56fa20be8308d862f8d9 not found: ID does not exist" Feb 27 17:17:18 crc kubenswrapper[4830]: I0227 17:17:18.783934 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95303a4c-1476-4bbb-873c-05b5f6e528d6" path="/var/lib/kubelet/pods/95303a4c-1476-4bbb-873c-05b5f6e528d6/volumes" Feb 27 17:18:00 crc kubenswrapper[4830]: I0227 
17:18:00.165751 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536878-vc5vh"] Feb 27 17:18:00 crc kubenswrapper[4830]: E0227 17:18:00.167153 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95303a4c-1476-4bbb-873c-05b5f6e528d6" containerName="registry-server" Feb 27 17:18:00 crc kubenswrapper[4830]: I0227 17:18:00.167176 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="95303a4c-1476-4bbb-873c-05b5f6e528d6" containerName="registry-server" Feb 27 17:18:00 crc kubenswrapper[4830]: E0227 17:18:00.167214 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95303a4c-1476-4bbb-873c-05b5f6e528d6" containerName="extract-utilities" Feb 27 17:18:00 crc kubenswrapper[4830]: I0227 17:18:00.167227 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="95303a4c-1476-4bbb-873c-05b5f6e528d6" containerName="extract-utilities" Feb 27 17:18:00 crc kubenswrapper[4830]: E0227 17:18:00.167248 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95303a4c-1476-4bbb-873c-05b5f6e528d6" containerName="extract-content" Feb 27 17:18:00 crc kubenswrapper[4830]: I0227 17:18:00.167261 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="95303a4c-1476-4bbb-873c-05b5f6e528d6" containerName="extract-content" Feb 27 17:18:00 crc kubenswrapper[4830]: I0227 17:18:00.167568 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="95303a4c-1476-4bbb-873c-05b5f6e528d6" containerName="registry-server" Feb 27 17:18:00 crc kubenswrapper[4830]: I0227 17:18:00.168361 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536878-vc5vh" Feb 27 17:18:00 crc kubenswrapper[4830]: I0227 17:18:00.173452 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 17:18:00 crc kubenswrapper[4830]: I0227 17:18:00.173506 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 17:18:00 crc kubenswrapper[4830]: I0227 17:18:00.173576 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lmbtm" Feb 27 17:18:00 crc kubenswrapper[4830]: I0227 17:18:00.186374 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536878-vc5vh"] Feb 27 17:18:00 crc kubenswrapper[4830]: I0227 17:18:00.193090 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnxhk\" (UniqueName: \"kubernetes.io/projected/57621e4a-b515-4541-83b2-2fe083b7837b-kube-api-access-dnxhk\") pod \"auto-csr-approver-29536878-vc5vh\" (UID: \"57621e4a-b515-4541-83b2-2fe083b7837b\") " pod="openshift-infra/auto-csr-approver-29536878-vc5vh" Feb 27 17:18:00 crc kubenswrapper[4830]: I0227 17:18:00.294930 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dnxhk\" (UniqueName: \"kubernetes.io/projected/57621e4a-b515-4541-83b2-2fe083b7837b-kube-api-access-dnxhk\") pod \"auto-csr-approver-29536878-vc5vh\" (UID: \"57621e4a-b515-4541-83b2-2fe083b7837b\") " pod="openshift-infra/auto-csr-approver-29536878-vc5vh" Feb 27 17:18:00 crc kubenswrapper[4830]: I0227 17:18:00.327286 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dnxhk\" (UniqueName: \"kubernetes.io/projected/57621e4a-b515-4541-83b2-2fe083b7837b-kube-api-access-dnxhk\") pod \"auto-csr-approver-29536878-vc5vh\" (UID: \"57621e4a-b515-4541-83b2-2fe083b7837b\") " 
pod="openshift-infra/auto-csr-approver-29536878-vc5vh" Feb 27 17:18:00 crc kubenswrapper[4830]: I0227 17:18:00.492670 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536878-vc5vh" Feb 27 17:18:00 crc kubenswrapper[4830]: I0227 17:18:00.995599 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536878-vc5vh"] Feb 27 17:18:01 crc kubenswrapper[4830]: I0227 17:18:01.061879 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536878-vc5vh" event={"ID":"57621e4a-b515-4541-83b2-2fe083b7837b","Type":"ContainerStarted","Data":"2fa5d6e97944785e43369c92487c9bc69e893e7d98988ca291727684d4216c8b"} Feb 27 17:18:03 crc kubenswrapper[4830]: I0227 17:18:03.160682 4830 patch_prober.go:28] interesting pod/machine-config-daemon-2tv5v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 17:18:03 crc kubenswrapper[4830]: I0227 17:18:03.161452 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 17:18:07 crc kubenswrapper[4830]: I0227 17:18:07.120257 4830 generic.go:334] "Generic (PLEG): container finished" podID="57621e4a-b515-4541-83b2-2fe083b7837b" containerID="9757e4000d2c642600d77752805d48f81046fbe58bc99a3b2888a7068c0c1307" exitCode=0 Feb 27 17:18:07 crc kubenswrapper[4830]: I0227 17:18:07.120365 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536878-vc5vh" 
event={"ID":"57621e4a-b515-4541-83b2-2fe083b7837b","Type":"ContainerDied","Data":"9757e4000d2c642600d77752805d48f81046fbe58bc99a3b2888a7068c0c1307"} Feb 27 17:18:08 crc kubenswrapper[4830]: I0227 17:18:08.521804 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536878-vc5vh" Feb 27 17:18:08 crc kubenswrapper[4830]: I0227 17:18:08.625308 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dnxhk\" (UniqueName: \"kubernetes.io/projected/57621e4a-b515-4541-83b2-2fe083b7837b-kube-api-access-dnxhk\") pod \"57621e4a-b515-4541-83b2-2fe083b7837b\" (UID: \"57621e4a-b515-4541-83b2-2fe083b7837b\") " Feb 27 17:18:08 crc kubenswrapper[4830]: I0227 17:18:08.634605 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57621e4a-b515-4541-83b2-2fe083b7837b-kube-api-access-dnxhk" (OuterVolumeSpecName: "kube-api-access-dnxhk") pod "57621e4a-b515-4541-83b2-2fe083b7837b" (UID: "57621e4a-b515-4541-83b2-2fe083b7837b"). InnerVolumeSpecName "kube-api-access-dnxhk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:18:08 crc kubenswrapper[4830]: I0227 17:18:08.728305 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dnxhk\" (UniqueName: \"kubernetes.io/projected/57621e4a-b515-4541-83b2-2fe083b7837b-kube-api-access-dnxhk\") on node \"crc\" DevicePath \"\"" Feb 27 17:18:09 crc kubenswrapper[4830]: I0227 17:18:09.142239 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536878-vc5vh" event={"ID":"57621e4a-b515-4541-83b2-2fe083b7837b","Type":"ContainerDied","Data":"2fa5d6e97944785e43369c92487c9bc69e893e7d98988ca291727684d4216c8b"} Feb 27 17:18:09 crc kubenswrapper[4830]: I0227 17:18:09.142293 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2fa5d6e97944785e43369c92487c9bc69e893e7d98988ca291727684d4216c8b" Feb 27 17:18:09 crc kubenswrapper[4830]: I0227 17:18:09.142352 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536878-vc5vh" Feb 27 17:18:09 crc kubenswrapper[4830]: I0227 17:18:09.623276 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536872-h4685"] Feb 27 17:18:09 crc kubenswrapper[4830]: I0227 17:18:09.629476 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536872-h4685"] Feb 27 17:18:10 crc kubenswrapper[4830]: I0227 17:18:10.776508 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09e6d9d8-dbef-4c31-b7ed-9867abb8ff1d" path="/var/lib/kubelet/pods/09e6d9d8-dbef-4c31-b7ed-9867abb8ff1d/volumes" Feb 27 17:18:33 crc kubenswrapper[4830]: I0227 17:18:33.159916 4830 patch_prober.go:28] interesting pod/machine-config-daemon-2tv5v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body= Feb 27 17:18:33 crc kubenswrapper[4830]: I0227 17:18:33.160723 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 17:18:47 crc kubenswrapper[4830]: I0227 17:18:47.314587 4830 scope.go:117] "RemoveContainer" containerID="5b340714b6b6a1403277f3024f0f27850928608bf8d523d8bd615b613b5f3d53" Feb 27 17:19:03 crc kubenswrapper[4830]: I0227 17:19:03.160444 4830 patch_prober.go:28] interesting pod/machine-config-daemon-2tv5v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 17:19:03 crc kubenswrapper[4830]: I0227 17:19:03.161050 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 17:19:03 crc kubenswrapper[4830]: I0227 17:19:03.161114 4830 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" Feb 27 17:19:03 crc kubenswrapper[4830]: I0227 17:19:03.161917 4830 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d43c5277201627bfc21083e209c49d25c5bf66f11f77c7094001382c8173b2a3"} pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 
27 17:19:03 crc kubenswrapper[4830]: I0227 17:19:03.162035 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" containerName="machine-config-daemon" containerID="cri-o://d43c5277201627bfc21083e209c49d25c5bf66f11f77c7094001382c8173b2a3" gracePeriod=600 Feb 27 17:19:03 crc kubenswrapper[4830]: E0227 17:19:03.287208 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 17:19:03 crc kubenswrapper[4830]: I0227 17:19:03.640715 4830 generic.go:334] "Generic (PLEG): container finished" podID="00d6b7ce-4757-4275-8345-60c1b546ce25" containerID="d43c5277201627bfc21083e209c49d25c5bf66f11f77c7094001382c8173b2a3" exitCode=0 Feb 27 17:19:03 crc kubenswrapper[4830]: I0227 17:19:03.640790 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" event={"ID":"00d6b7ce-4757-4275-8345-60c1b546ce25","Type":"ContainerDied","Data":"d43c5277201627bfc21083e209c49d25c5bf66f11f77c7094001382c8173b2a3"} Feb 27 17:19:03 crc kubenswrapper[4830]: I0227 17:19:03.640854 4830 scope.go:117] "RemoveContainer" containerID="4fe4d1b45eabdb72f9fc5ac554899ea9b06c8455f8916258035d1a2fc79f3c9e" Feb 27 17:19:03 crc kubenswrapper[4830]: I0227 17:19:03.641903 4830 scope.go:117] "RemoveContainer" containerID="d43c5277201627bfc21083e209c49d25c5bf66f11f77c7094001382c8173b2a3" Feb 27 17:19:03 crc kubenswrapper[4830]: E0227 17:19:03.642258 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 17:19:16 crc kubenswrapper[4830]: I0227 17:19:16.762431 4830 scope.go:117] "RemoveContainer" containerID="d43c5277201627bfc21083e209c49d25c5bf66f11f77c7094001382c8173b2a3" Feb 27 17:19:16 crc kubenswrapper[4830]: E0227 17:19:16.763375 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 17:19:29 crc kubenswrapper[4830]: I0227 17:19:29.762916 4830 scope.go:117] "RemoveContainer" containerID="d43c5277201627bfc21083e209c49d25c5bf66f11f77c7094001382c8173b2a3" Feb 27 17:19:29 crc kubenswrapper[4830]: E0227 17:19:29.764131 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 17:19:43 crc kubenswrapper[4830]: I0227 17:19:43.762594 4830 scope.go:117] "RemoveContainer" containerID="d43c5277201627bfc21083e209c49d25c5bf66f11f77c7094001382c8173b2a3" Feb 27 17:19:43 crc kubenswrapper[4830]: E0227 17:19:43.764318 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 17:19:55 crc kubenswrapper[4830]: I0227 17:19:55.763231 4830 scope.go:117] "RemoveContainer" containerID="d43c5277201627bfc21083e209c49d25c5bf66f11f77c7094001382c8173b2a3" Feb 27 17:19:55 crc kubenswrapper[4830]: E0227 17:19:55.764369 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 17:20:00 crc kubenswrapper[4830]: I0227 17:20:00.160537 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536880-ssqdv"] Feb 27 17:20:00 crc kubenswrapper[4830]: E0227 17:20:00.161291 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57621e4a-b515-4541-83b2-2fe083b7837b" containerName="oc" Feb 27 17:20:00 crc kubenswrapper[4830]: I0227 17:20:00.161304 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="57621e4a-b515-4541-83b2-2fe083b7837b" containerName="oc" Feb 27 17:20:00 crc kubenswrapper[4830]: I0227 17:20:00.161477 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="57621e4a-b515-4541-83b2-2fe083b7837b" containerName="oc" Feb 27 17:20:00 crc kubenswrapper[4830]: I0227 17:20:00.162023 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536880-ssqdv" Feb 27 17:20:00 crc kubenswrapper[4830]: I0227 17:20:00.164618 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 17:20:00 crc kubenswrapper[4830]: I0227 17:20:00.165204 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 17:20:00 crc kubenswrapper[4830]: I0227 17:20:00.165778 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lmbtm" Feb 27 17:20:00 crc kubenswrapper[4830]: I0227 17:20:00.170710 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536880-ssqdv"] Feb 27 17:20:00 crc kubenswrapper[4830]: I0227 17:20:00.226816 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ml5xh\" (UniqueName: \"kubernetes.io/projected/a3f16ff4-29b6-4ff3-a540-b794d6198ba7-kube-api-access-ml5xh\") pod \"auto-csr-approver-29536880-ssqdv\" (UID: \"a3f16ff4-29b6-4ff3-a540-b794d6198ba7\") " pod="openshift-infra/auto-csr-approver-29536880-ssqdv" Feb 27 17:20:00 crc kubenswrapper[4830]: I0227 17:20:00.328676 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ml5xh\" (UniqueName: \"kubernetes.io/projected/a3f16ff4-29b6-4ff3-a540-b794d6198ba7-kube-api-access-ml5xh\") pod \"auto-csr-approver-29536880-ssqdv\" (UID: \"a3f16ff4-29b6-4ff3-a540-b794d6198ba7\") " pod="openshift-infra/auto-csr-approver-29536880-ssqdv" Feb 27 17:20:00 crc kubenswrapper[4830]: I0227 17:20:00.361525 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ml5xh\" (UniqueName: \"kubernetes.io/projected/a3f16ff4-29b6-4ff3-a540-b794d6198ba7-kube-api-access-ml5xh\") pod \"auto-csr-approver-29536880-ssqdv\" (UID: \"a3f16ff4-29b6-4ff3-a540-b794d6198ba7\") " 
pod="openshift-infra/auto-csr-approver-29536880-ssqdv" Feb 27 17:20:00 crc kubenswrapper[4830]: I0227 17:20:00.493824 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536880-ssqdv" Feb 27 17:20:00 crc kubenswrapper[4830]: I0227 17:20:00.815231 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536880-ssqdv"] Feb 27 17:20:00 crc kubenswrapper[4830]: I0227 17:20:00.819978 4830 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 27 17:20:01 crc kubenswrapper[4830]: I0227 17:20:01.202660 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536880-ssqdv" event={"ID":"a3f16ff4-29b6-4ff3-a540-b794d6198ba7","Type":"ContainerStarted","Data":"21a206b6918644c44625942393f2990784b7d38e26860dfcc37c1d3d0deebc3c"} Feb 27 17:20:04 crc kubenswrapper[4830]: I0227 17:20:04.242103 4830 generic.go:334] "Generic (PLEG): container finished" podID="a3f16ff4-29b6-4ff3-a540-b794d6198ba7" containerID="f0e985b0c3e21a49f3e385060f642e65cda45baff01dd909c3b10da3f7148d9f" exitCode=0 Feb 27 17:20:04 crc kubenswrapper[4830]: I0227 17:20:04.242501 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536880-ssqdv" event={"ID":"a3f16ff4-29b6-4ff3-a540-b794d6198ba7","Type":"ContainerDied","Data":"f0e985b0c3e21a49f3e385060f642e65cda45baff01dd909c3b10da3f7148d9f"} Feb 27 17:20:05 crc kubenswrapper[4830]: I0227 17:20:05.569546 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536880-ssqdv" Feb 27 17:20:05 crc kubenswrapper[4830]: I0227 17:20:05.732755 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ml5xh\" (UniqueName: \"kubernetes.io/projected/a3f16ff4-29b6-4ff3-a540-b794d6198ba7-kube-api-access-ml5xh\") pod \"a3f16ff4-29b6-4ff3-a540-b794d6198ba7\" (UID: \"a3f16ff4-29b6-4ff3-a540-b794d6198ba7\") " Feb 27 17:20:05 crc kubenswrapper[4830]: I0227 17:20:05.740252 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3f16ff4-29b6-4ff3-a540-b794d6198ba7-kube-api-access-ml5xh" (OuterVolumeSpecName: "kube-api-access-ml5xh") pod "a3f16ff4-29b6-4ff3-a540-b794d6198ba7" (UID: "a3f16ff4-29b6-4ff3-a540-b794d6198ba7"). InnerVolumeSpecName "kube-api-access-ml5xh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:20:05 crc kubenswrapper[4830]: I0227 17:20:05.834712 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ml5xh\" (UniqueName: \"kubernetes.io/projected/a3f16ff4-29b6-4ff3-a540-b794d6198ba7-kube-api-access-ml5xh\") on node \"crc\" DevicePath \"\"" Feb 27 17:20:06 crc kubenswrapper[4830]: I0227 17:20:06.265910 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536880-ssqdv" event={"ID":"a3f16ff4-29b6-4ff3-a540-b794d6198ba7","Type":"ContainerDied","Data":"21a206b6918644c44625942393f2990784b7d38e26860dfcc37c1d3d0deebc3c"} Feb 27 17:20:06 crc kubenswrapper[4830]: I0227 17:20:06.266004 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="21a206b6918644c44625942393f2990784b7d38e26860dfcc37c1d3d0deebc3c" Feb 27 17:20:06 crc kubenswrapper[4830]: I0227 17:20:06.266391 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536880-ssqdv" Feb 27 17:20:06 crc kubenswrapper[4830]: I0227 17:20:06.660275 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536874-pmw8j"] Feb 27 17:20:06 crc kubenswrapper[4830]: I0227 17:20:06.666316 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536874-pmw8j"] Feb 27 17:20:06 crc kubenswrapper[4830]: I0227 17:20:06.779780 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e9862ab-1028-4404-9d25-908c8ae0da55" path="/var/lib/kubelet/pods/9e9862ab-1028-4404-9d25-908c8ae0da55/volumes" Feb 27 17:20:07 crc kubenswrapper[4830]: I0227 17:20:07.763452 4830 scope.go:117] "RemoveContainer" containerID="d43c5277201627bfc21083e209c49d25c5bf66f11f77c7094001382c8173b2a3" Feb 27 17:20:07 crc kubenswrapper[4830]: E0227 17:20:07.763831 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 17:20:21 crc kubenswrapper[4830]: I0227 17:20:21.763418 4830 scope.go:117] "RemoveContainer" containerID="d43c5277201627bfc21083e209c49d25c5bf66f11f77c7094001382c8173b2a3" Feb 27 17:20:21 crc kubenswrapper[4830]: E0227 17:20:21.764385 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" 
podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 17:20:35 crc kubenswrapper[4830]: I0227 17:20:35.764313 4830 scope.go:117] "RemoveContainer" containerID="d43c5277201627bfc21083e209c49d25c5bf66f11f77c7094001382c8173b2a3" Feb 27 17:20:35 crc kubenswrapper[4830]: E0227 17:20:35.765383 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 17:20:47 crc kubenswrapper[4830]: I0227 17:20:47.406380 4830 scope.go:117] "RemoveContainer" containerID="5a6b9bf1d9e2092d5f791d0d4bfb84dd74a31b91d0641c25b4331b38d80bf15e" Feb 27 17:20:51 crc kubenswrapper[4830]: I0227 17:20:51.762835 4830 scope.go:117] "RemoveContainer" containerID="d43c5277201627bfc21083e209c49d25c5bf66f11f77c7094001382c8173b2a3" Feb 27 17:20:51 crc kubenswrapper[4830]: E0227 17:20:51.764113 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 17:21:03 crc kubenswrapper[4830]: I0227 17:21:03.763278 4830 scope.go:117] "RemoveContainer" containerID="d43c5277201627bfc21083e209c49d25c5bf66f11f77c7094001382c8173b2a3" Feb 27 17:21:03 crc kubenswrapper[4830]: E0227 17:21:03.764500 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 17:21:17 crc kubenswrapper[4830]: I0227 17:21:17.763930 4830 scope.go:117] "RemoveContainer" containerID="d43c5277201627bfc21083e209c49d25c5bf66f11f77c7094001382c8173b2a3" Feb 27 17:21:17 crc kubenswrapper[4830]: E0227 17:21:17.764939 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 17:21:32 crc kubenswrapper[4830]: I0227 17:21:32.763337 4830 scope.go:117] "RemoveContainer" containerID="d43c5277201627bfc21083e209c49d25c5bf66f11f77c7094001382c8173b2a3" Feb 27 17:21:32 crc kubenswrapper[4830]: E0227 17:21:32.764334 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 17:21:44 crc kubenswrapper[4830]: I0227 17:21:44.771429 4830 scope.go:117] "RemoveContainer" containerID="d43c5277201627bfc21083e209c49d25c5bf66f11f77c7094001382c8173b2a3" Feb 27 17:21:44 crc kubenswrapper[4830]: E0227 17:21:44.772886 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 17:21:56 crc kubenswrapper[4830]: I0227 17:21:56.763120 4830 scope.go:117] "RemoveContainer" containerID="d43c5277201627bfc21083e209c49d25c5bf66f11f77c7094001382c8173b2a3" Feb 27 17:21:56 crc kubenswrapper[4830]: E0227 17:21:56.764256 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 17:22:00 crc kubenswrapper[4830]: I0227 17:22:00.168308 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536882-b65kt"] Feb 27 17:22:00 crc kubenswrapper[4830]: E0227 17:22:00.168834 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3f16ff4-29b6-4ff3-a540-b794d6198ba7" containerName="oc" Feb 27 17:22:00 crc kubenswrapper[4830]: I0227 17:22:00.168846 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3f16ff4-29b6-4ff3-a540-b794d6198ba7" containerName="oc" Feb 27 17:22:00 crc kubenswrapper[4830]: I0227 17:22:00.169009 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3f16ff4-29b6-4ff3-a540-b794d6198ba7" containerName="oc" Feb 27 17:22:00 crc kubenswrapper[4830]: I0227 17:22:00.169446 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536882-b65kt" Feb 27 17:22:00 crc kubenswrapper[4830]: I0227 17:22:00.172285 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 17:22:00 crc kubenswrapper[4830]: I0227 17:22:00.172434 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lmbtm" Feb 27 17:22:00 crc kubenswrapper[4830]: I0227 17:22:00.180934 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 17:22:00 crc kubenswrapper[4830]: I0227 17:22:00.222791 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536882-b65kt"] Feb 27 17:22:00 crc kubenswrapper[4830]: I0227 17:22:00.347346 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mxq6\" (UniqueName: \"kubernetes.io/projected/9a451822-4452-414e-8f06-54897714caf9-kube-api-access-4mxq6\") pod \"auto-csr-approver-29536882-b65kt\" (UID: \"9a451822-4452-414e-8f06-54897714caf9\") " pod="openshift-infra/auto-csr-approver-29536882-b65kt" Feb 27 17:22:00 crc kubenswrapper[4830]: I0227 17:22:00.449547 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4mxq6\" (UniqueName: \"kubernetes.io/projected/9a451822-4452-414e-8f06-54897714caf9-kube-api-access-4mxq6\") pod \"auto-csr-approver-29536882-b65kt\" (UID: \"9a451822-4452-414e-8f06-54897714caf9\") " pod="openshift-infra/auto-csr-approver-29536882-b65kt" Feb 27 17:22:00 crc kubenswrapper[4830]: I0227 17:22:00.485599 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4mxq6\" (UniqueName: \"kubernetes.io/projected/9a451822-4452-414e-8f06-54897714caf9-kube-api-access-4mxq6\") pod \"auto-csr-approver-29536882-b65kt\" (UID: \"9a451822-4452-414e-8f06-54897714caf9\") " 
pod="openshift-infra/auto-csr-approver-29536882-b65kt" Feb 27 17:22:00 crc kubenswrapper[4830]: I0227 17:22:00.501385 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536882-b65kt" Feb 27 17:22:00 crc kubenswrapper[4830]: W0227 17:22:00.790343 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9a451822_4452_414e_8f06_54897714caf9.slice/crio-6c91bec9fd4db7ebba96209e20fe1063379b58730856754dcd219bfcc875d707 WatchSource:0}: Error finding container 6c91bec9fd4db7ebba96209e20fe1063379b58730856754dcd219bfcc875d707: Status 404 returned error can't find the container with id 6c91bec9fd4db7ebba96209e20fe1063379b58730856754dcd219bfcc875d707 Feb 27 17:22:00 crc kubenswrapper[4830]: I0227 17:22:00.793400 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536882-b65kt"] Feb 27 17:22:01 crc kubenswrapper[4830]: I0227 17:22:01.398242 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536882-b65kt" event={"ID":"9a451822-4452-414e-8f06-54897714caf9","Type":"ContainerStarted","Data":"6c91bec9fd4db7ebba96209e20fe1063379b58730856754dcd219bfcc875d707"} Feb 27 17:22:02 crc kubenswrapper[4830]: I0227 17:22:02.416089 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536882-b65kt" event={"ID":"9a451822-4452-414e-8f06-54897714caf9","Type":"ContainerStarted","Data":"9873a9db75d9fe76533b217770ee9ec4f690845a88e8f2f6d2b531d8d0545044"} Feb 27 17:22:02 crc kubenswrapper[4830]: I0227 17:22:02.433368 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29536882-b65kt" podStartSLOduration=1.193930631 podStartE2EDuration="2.433335263s" podCreationTimestamp="2026-02-27 17:22:00 +0000 UTC" firstStartedPulling="2026-02-27 17:22:00.795241836 +0000 UTC 
m=+4516.884514339" lastFinishedPulling="2026-02-27 17:22:02.034646468 +0000 UTC m=+4518.123918971" observedRunningTime="2026-02-27 17:22:02.433052445 +0000 UTC m=+4518.522324928" watchObservedRunningTime="2026-02-27 17:22:02.433335263 +0000 UTC m=+4518.522607726" Feb 27 17:22:02 crc kubenswrapper[4830]: E0227 17:22:02.668107 4830 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9a451822_4452_414e_8f06_54897714caf9.slice/crio-9873a9db75d9fe76533b217770ee9ec4f690845a88e8f2f6d2b531d8d0545044.scope\": RecentStats: unable to find data in memory cache]" Feb 27 17:22:03 crc kubenswrapper[4830]: I0227 17:22:03.429195 4830 generic.go:334] "Generic (PLEG): container finished" podID="9a451822-4452-414e-8f06-54897714caf9" containerID="9873a9db75d9fe76533b217770ee9ec4f690845a88e8f2f6d2b531d8d0545044" exitCode=0 Feb 27 17:22:03 crc kubenswrapper[4830]: I0227 17:22:03.429293 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536882-b65kt" event={"ID":"9a451822-4452-414e-8f06-54897714caf9","Type":"ContainerDied","Data":"9873a9db75d9fe76533b217770ee9ec4f690845a88e8f2f6d2b531d8d0545044"} Feb 27 17:22:04 crc kubenswrapper[4830]: I0227 17:22:04.816149 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536882-b65kt" Feb 27 17:22:04 crc kubenswrapper[4830]: I0227 17:22:04.921708 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4mxq6\" (UniqueName: \"kubernetes.io/projected/9a451822-4452-414e-8f06-54897714caf9-kube-api-access-4mxq6\") pod \"9a451822-4452-414e-8f06-54897714caf9\" (UID: \"9a451822-4452-414e-8f06-54897714caf9\") " Feb 27 17:22:04 crc kubenswrapper[4830]: I0227 17:22:04.926422 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a451822-4452-414e-8f06-54897714caf9-kube-api-access-4mxq6" (OuterVolumeSpecName: "kube-api-access-4mxq6") pod "9a451822-4452-414e-8f06-54897714caf9" (UID: "9a451822-4452-414e-8f06-54897714caf9"). InnerVolumeSpecName "kube-api-access-4mxq6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:22:05 crc kubenswrapper[4830]: I0227 17:22:05.023814 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4mxq6\" (UniqueName: \"kubernetes.io/projected/9a451822-4452-414e-8f06-54897714caf9-kube-api-access-4mxq6\") on node \"crc\" DevicePath \"\"" Feb 27 17:22:05 crc kubenswrapper[4830]: I0227 17:22:05.451607 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536882-b65kt" event={"ID":"9a451822-4452-414e-8f06-54897714caf9","Type":"ContainerDied","Data":"6c91bec9fd4db7ebba96209e20fe1063379b58730856754dcd219bfcc875d707"} Feb 27 17:22:05 crc kubenswrapper[4830]: I0227 17:22:05.451828 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6c91bec9fd4db7ebba96209e20fe1063379b58730856754dcd219bfcc875d707" Feb 27 17:22:05 crc kubenswrapper[4830]: I0227 17:22:05.451704 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536882-b65kt" Feb 27 17:22:05 crc kubenswrapper[4830]: I0227 17:22:05.529444 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536876-dmbq6"] Feb 27 17:22:05 crc kubenswrapper[4830]: I0227 17:22:05.540492 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536876-dmbq6"] Feb 27 17:22:06 crc kubenswrapper[4830]: I0227 17:22:06.779233 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef159740-1a43-4fa1-b365-6cde00b8fdde" path="/var/lib/kubelet/pods/ef159740-1a43-4fa1-b365-6cde00b8fdde/volumes" Feb 27 17:22:07 crc kubenswrapper[4830]: I0227 17:22:07.763132 4830 scope.go:117] "RemoveContainer" containerID="d43c5277201627bfc21083e209c49d25c5bf66f11f77c7094001382c8173b2a3" Feb 27 17:22:07 crc kubenswrapper[4830]: E0227 17:22:07.764069 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 17:22:22 crc kubenswrapper[4830]: I0227 17:22:22.762658 4830 scope.go:117] "RemoveContainer" containerID="d43c5277201627bfc21083e209c49d25c5bf66f11f77c7094001382c8173b2a3" Feb 27 17:22:22 crc kubenswrapper[4830]: E0227 17:22:22.765502 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" 
podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 17:22:37 crc kubenswrapper[4830]: I0227 17:22:37.762920 4830 scope.go:117] "RemoveContainer" containerID="d43c5277201627bfc21083e209c49d25c5bf66f11f77c7094001382c8173b2a3" Feb 27 17:22:37 crc kubenswrapper[4830]: E0227 17:22:37.764760 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 17:22:47 crc kubenswrapper[4830]: I0227 17:22:47.504716 4830 scope.go:117] "RemoveContainer" containerID="4c7dfcbe9a27a98b146e4064a13a0f81129b3c7a3d01d42f8bcf186e679741ba" Feb 27 17:22:49 crc kubenswrapper[4830]: I0227 17:22:49.762204 4830 scope.go:117] "RemoveContainer" containerID="d43c5277201627bfc21083e209c49d25c5bf66f11f77c7094001382c8173b2a3" Feb 27 17:22:49 crc kubenswrapper[4830]: E0227 17:22:49.762869 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 17:23:04 crc kubenswrapper[4830]: I0227 17:23:04.771467 4830 scope.go:117] "RemoveContainer" containerID="d43c5277201627bfc21083e209c49d25c5bf66f11f77c7094001382c8173b2a3" Feb 27 17:23:04 crc kubenswrapper[4830]: E0227 17:23:04.772735 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 17:23:18 crc kubenswrapper[4830]: I0227 17:23:18.763092 4830 scope.go:117] "RemoveContainer" containerID="d43c5277201627bfc21083e209c49d25c5bf66f11f77c7094001382c8173b2a3" Feb 27 17:23:18 crc kubenswrapper[4830]: E0227 17:23:18.764344 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 17:23:31 crc kubenswrapper[4830]: I0227 17:23:31.763844 4830 scope.go:117] "RemoveContainer" containerID="d43c5277201627bfc21083e209c49d25c5bf66f11f77c7094001382c8173b2a3" Feb 27 17:23:31 crc kubenswrapper[4830]: E0227 17:23:31.766249 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 17:23:43 crc kubenswrapper[4830]: I0227 17:23:43.762566 4830 scope.go:117] "RemoveContainer" containerID="d43c5277201627bfc21083e209c49d25c5bf66f11f77c7094001382c8173b2a3" Feb 27 17:23:43 crc kubenswrapper[4830]: E0227 17:23:43.763588 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 17:23:55 crc kubenswrapper[4830]: I0227 17:23:55.763180 4830 scope.go:117] "RemoveContainer" containerID="d43c5277201627bfc21083e209c49d25c5bf66f11f77c7094001382c8173b2a3" Feb 27 17:23:55 crc kubenswrapper[4830]: E0227 17:23:55.764269 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 17:24:00 crc kubenswrapper[4830]: I0227 17:24:00.159827 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536884-jxtll"] Feb 27 17:24:00 crc kubenswrapper[4830]: E0227 17:24:00.160893 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a451822-4452-414e-8f06-54897714caf9" containerName="oc" Feb 27 17:24:00 crc kubenswrapper[4830]: I0227 17:24:00.160916 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a451822-4452-414e-8f06-54897714caf9" containerName="oc" Feb 27 17:24:00 crc kubenswrapper[4830]: I0227 17:24:00.161245 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a451822-4452-414e-8f06-54897714caf9" containerName="oc" Feb 27 17:24:00 crc kubenswrapper[4830]: I0227 17:24:00.162002 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536884-jxtll" Feb 27 17:24:00 crc kubenswrapper[4830]: I0227 17:24:00.164600 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lmbtm" Feb 27 17:24:00 crc kubenswrapper[4830]: I0227 17:24:00.165292 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 17:24:00 crc kubenswrapper[4830]: I0227 17:24:00.165646 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 17:24:00 crc kubenswrapper[4830]: I0227 17:24:00.176174 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536884-jxtll"] Feb 27 17:24:00 crc kubenswrapper[4830]: I0227 17:24:00.352091 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdlsk\" (UniqueName: \"kubernetes.io/projected/bd756bd4-9902-4040-8342-9886fcd96a41-kube-api-access-gdlsk\") pod \"auto-csr-approver-29536884-jxtll\" (UID: \"bd756bd4-9902-4040-8342-9886fcd96a41\") " pod="openshift-infra/auto-csr-approver-29536884-jxtll" Feb 27 17:24:00 crc kubenswrapper[4830]: I0227 17:24:00.453306 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gdlsk\" (UniqueName: \"kubernetes.io/projected/bd756bd4-9902-4040-8342-9886fcd96a41-kube-api-access-gdlsk\") pod \"auto-csr-approver-29536884-jxtll\" (UID: \"bd756bd4-9902-4040-8342-9886fcd96a41\") " pod="openshift-infra/auto-csr-approver-29536884-jxtll" Feb 27 17:24:00 crc kubenswrapper[4830]: I0227 17:24:00.474188 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gdlsk\" (UniqueName: \"kubernetes.io/projected/bd756bd4-9902-4040-8342-9886fcd96a41-kube-api-access-gdlsk\") pod \"auto-csr-approver-29536884-jxtll\" (UID: \"bd756bd4-9902-4040-8342-9886fcd96a41\") " 
pod="openshift-infra/auto-csr-approver-29536884-jxtll" Feb 27 17:24:00 crc kubenswrapper[4830]: I0227 17:24:00.496239 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536884-jxtll" Feb 27 17:24:00 crc kubenswrapper[4830]: I0227 17:24:00.966862 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536884-jxtll"] Feb 27 17:24:01 crc kubenswrapper[4830]: I0227 17:24:01.546819 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536884-jxtll" event={"ID":"bd756bd4-9902-4040-8342-9886fcd96a41","Type":"ContainerStarted","Data":"a7b5a746940180edcad2893e2e334f7c58304c2ae5f732a3b1b2e49663f80a26"} Feb 27 17:24:02 crc kubenswrapper[4830]: I0227 17:24:02.557233 4830 generic.go:334] "Generic (PLEG): container finished" podID="bd756bd4-9902-4040-8342-9886fcd96a41" containerID="e234bdf9d383b5a101302a0d6cd53ac32e00c4a16d6430203957335ff2082563" exitCode=0 Feb 27 17:24:02 crc kubenswrapper[4830]: I0227 17:24:02.557304 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536884-jxtll" event={"ID":"bd756bd4-9902-4040-8342-9886fcd96a41","Type":"ContainerDied","Data":"e234bdf9d383b5a101302a0d6cd53ac32e00c4a16d6430203957335ff2082563"} Feb 27 17:24:03 crc kubenswrapper[4830]: I0227 17:24:03.928835 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536884-jxtll" Feb 27 17:24:04 crc kubenswrapper[4830]: I0227 17:24:04.107854 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gdlsk\" (UniqueName: \"kubernetes.io/projected/bd756bd4-9902-4040-8342-9886fcd96a41-kube-api-access-gdlsk\") pod \"bd756bd4-9902-4040-8342-9886fcd96a41\" (UID: \"bd756bd4-9902-4040-8342-9886fcd96a41\") " Feb 27 17:24:04 crc kubenswrapper[4830]: I0227 17:24:04.114350 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd756bd4-9902-4040-8342-9886fcd96a41-kube-api-access-gdlsk" (OuterVolumeSpecName: "kube-api-access-gdlsk") pod "bd756bd4-9902-4040-8342-9886fcd96a41" (UID: "bd756bd4-9902-4040-8342-9886fcd96a41"). InnerVolumeSpecName "kube-api-access-gdlsk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:24:04 crc kubenswrapper[4830]: I0227 17:24:04.210493 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gdlsk\" (UniqueName: \"kubernetes.io/projected/bd756bd4-9902-4040-8342-9886fcd96a41-kube-api-access-gdlsk\") on node \"crc\" DevicePath \"\"" Feb 27 17:24:04 crc kubenswrapper[4830]: I0227 17:24:04.576625 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536884-jxtll" event={"ID":"bd756bd4-9902-4040-8342-9886fcd96a41","Type":"ContainerDied","Data":"a7b5a746940180edcad2893e2e334f7c58304c2ae5f732a3b1b2e49663f80a26"} Feb 27 17:24:04 crc kubenswrapper[4830]: I0227 17:24:04.576681 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a7b5a746940180edcad2893e2e334f7c58304c2ae5f732a3b1b2e49663f80a26" Feb 27 17:24:04 crc kubenswrapper[4830]: I0227 17:24:04.577207 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536884-jxtll" Feb 27 17:24:05 crc kubenswrapper[4830]: I0227 17:24:05.013330 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536878-vc5vh"] Feb 27 17:24:05 crc kubenswrapper[4830]: I0227 17:24:05.019744 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536878-vc5vh"] Feb 27 17:24:06 crc kubenswrapper[4830]: I0227 17:24:06.775730 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57621e4a-b515-4541-83b2-2fe083b7837b" path="/var/lib/kubelet/pods/57621e4a-b515-4541-83b2-2fe083b7837b/volumes" Feb 27 17:24:09 crc kubenswrapper[4830]: I0227 17:24:09.763090 4830 scope.go:117] "RemoveContainer" containerID="d43c5277201627bfc21083e209c49d25c5bf66f11f77c7094001382c8173b2a3" Feb 27 17:24:10 crc kubenswrapper[4830]: I0227 17:24:10.637031 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" event={"ID":"00d6b7ce-4757-4275-8345-60c1b546ce25","Type":"ContainerStarted","Data":"19e6a24991d0874a855368f8e306131672121f114d688786c52f7e0dafcd4823"} Feb 27 17:24:47 crc kubenswrapper[4830]: I0227 17:24:47.614359 4830 scope.go:117] "RemoveContainer" containerID="9757e4000d2c642600d77752805d48f81046fbe58bc99a3b2888a7068c0c1307" Feb 27 17:24:51 crc kubenswrapper[4830]: I0227 17:24:51.823771 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["crc-storage/crc-storage-crc-kk7k8"] Feb 27 17:24:51 crc kubenswrapper[4830]: I0227 17:24:51.832964 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["crc-storage/crc-storage-crc-kk7k8"] Feb 27 17:24:52 crc kubenswrapper[4830]: I0227 17:24:52.014388 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-hfr4x"] Feb 27 17:24:52 crc kubenswrapper[4830]: E0227 17:24:52.015012 4830 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="bd756bd4-9902-4040-8342-9886fcd96a41" containerName="oc" Feb 27 17:24:52 crc kubenswrapper[4830]: I0227 17:24:52.015042 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd756bd4-9902-4040-8342-9886fcd96a41" containerName="oc" Feb 27 17:24:52 crc kubenswrapper[4830]: I0227 17:24:52.015332 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd756bd4-9902-4040-8342-9886fcd96a41" containerName="oc" Feb 27 17:24:52 crc kubenswrapper[4830]: I0227 17:24:52.016168 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-hfr4x" Feb 27 17:24:52 crc kubenswrapper[4830]: I0227 17:24:52.020636 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage" Feb 27 17:24:52 crc kubenswrapper[4830]: I0227 17:24:52.021870 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt" Feb 27 17:24:52 crc kubenswrapper[4830]: I0227 17:24:52.033271 4830 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-8pl97" Feb 27 17:24:52 crc kubenswrapper[4830]: I0227 17:24:52.033372 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt" Feb 27 17:24:52 crc kubenswrapper[4830]: I0227 17:24:52.036458 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-hfr4x"] Feb 27 17:24:52 crc kubenswrapper[4830]: I0227 17:24:52.158244 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/c8049cd6-3d72-4dec-9087-9bf35c926272-node-mnt\") pod \"crc-storage-crc-hfr4x\" (UID: \"c8049cd6-3d72-4dec-9087-9bf35c926272\") " pod="crc-storage/crc-storage-crc-hfr4x" Feb 27 17:24:52 crc kubenswrapper[4830]: I0227 17:24:52.158671 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-xzsjf\" (UniqueName: \"kubernetes.io/projected/c8049cd6-3d72-4dec-9087-9bf35c926272-kube-api-access-xzsjf\") pod \"crc-storage-crc-hfr4x\" (UID: \"c8049cd6-3d72-4dec-9087-9bf35c926272\") " pod="crc-storage/crc-storage-crc-hfr4x" Feb 27 17:24:52 crc kubenswrapper[4830]: I0227 17:24:52.158907 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/c8049cd6-3d72-4dec-9087-9bf35c926272-crc-storage\") pod \"crc-storage-crc-hfr4x\" (UID: \"c8049cd6-3d72-4dec-9087-9bf35c926272\") " pod="crc-storage/crc-storage-crc-hfr4x" Feb 27 17:24:52 crc kubenswrapper[4830]: I0227 17:24:52.260288 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/c8049cd6-3d72-4dec-9087-9bf35c926272-crc-storage\") pod \"crc-storage-crc-hfr4x\" (UID: \"c8049cd6-3d72-4dec-9087-9bf35c926272\") " pod="crc-storage/crc-storage-crc-hfr4x" Feb 27 17:24:52 crc kubenswrapper[4830]: I0227 17:24:52.260419 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/c8049cd6-3d72-4dec-9087-9bf35c926272-node-mnt\") pod \"crc-storage-crc-hfr4x\" (UID: \"c8049cd6-3d72-4dec-9087-9bf35c926272\") " pod="crc-storage/crc-storage-crc-hfr4x" Feb 27 17:24:52 crc kubenswrapper[4830]: I0227 17:24:52.260455 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xzsjf\" (UniqueName: \"kubernetes.io/projected/c8049cd6-3d72-4dec-9087-9bf35c926272-kube-api-access-xzsjf\") pod \"crc-storage-crc-hfr4x\" (UID: \"c8049cd6-3d72-4dec-9087-9bf35c926272\") " pod="crc-storage/crc-storage-crc-hfr4x" Feb 27 17:24:52 crc kubenswrapper[4830]: I0227 17:24:52.260793 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: 
\"kubernetes.io/host-path/c8049cd6-3d72-4dec-9087-9bf35c926272-node-mnt\") pod \"crc-storage-crc-hfr4x\" (UID: \"c8049cd6-3d72-4dec-9087-9bf35c926272\") " pod="crc-storage/crc-storage-crc-hfr4x" Feb 27 17:24:52 crc kubenswrapper[4830]: I0227 17:24:52.261582 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/c8049cd6-3d72-4dec-9087-9bf35c926272-crc-storage\") pod \"crc-storage-crc-hfr4x\" (UID: \"c8049cd6-3d72-4dec-9087-9bf35c926272\") " pod="crc-storage/crc-storage-crc-hfr4x" Feb 27 17:24:52 crc kubenswrapper[4830]: I0227 17:24:52.292563 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xzsjf\" (UniqueName: \"kubernetes.io/projected/c8049cd6-3d72-4dec-9087-9bf35c926272-kube-api-access-xzsjf\") pod \"crc-storage-crc-hfr4x\" (UID: \"c8049cd6-3d72-4dec-9087-9bf35c926272\") " pod="crc-storage/crc-storage-crc-hfr4x" Feb 27 17:24:52 crc kubenswrapper[4830]: I0227 17:24:52.377132 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-hfr4x" Feb 27 17:24:52 crc kubenswrapper[4830]: I0227 17:24:52.778320 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="961ae73e-ba27-404a-9805-a10277c078b1" path="/var/lib/kubelet/pods/961ae73e-ba27-404a-9805-a10277c078b1/volumes" Feb 27 17:24:52 crc kubenswrapper[4830]: I0227 17:24:52.884816 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-hfr4x"] Feb 27 17:24:53 crc kubenswrapper[4830]: I0227 17:24:53.049794 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-hfr4x" event={"ID":"c8049cd6-3d72-4dec-9087-9bf35c926272","Type":"ContainerStarted","Data":"0ba855f3b5fa083c9e909e603ff2015ff085842108732102f5e409da65e6966b"} Feb 27 17:24:54 crc kubenswrapper[4830]: I0227 17:24:54.062779 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-hfr4x" event={"ID":"c8049cd6-3d72-4dec-9087-9bf35c926272","Type":"ContainerStarted","Data":"efd82b4d34a33dc4375da46991f444e0c32d0c4ae41a0f5b068dbde2830438ef"} Feb 27 17:24:55 crc kubenswrapper[4830]: I0227 17:24:55.078699 4830 generic.go:334] "Generic (PLEG): container finished" podID="c8049cd6-3d72-4dec-9087-9bf35c926272" containerID="efd82b4d34a33dc4375da46991f444e0c32d0c4ae41a0f5b068dbde2830438ef" exitCode=0 Feb 27 17:24:55 crc kubenswrapper[4830]: I0227 17:24:55.078761 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-hfr4x" event={"ID":"c8049cd6-3d72-4dec-9087-9bf35c926272","Type":"ContainerDied","Data":"efd82b4d34a33dc4375da46991f444e0c32d0c4ae41a0f5b068dbde2830438ef"} Feb 27 17:24:55 crc kubenswrapper[4830]: I0227 17:24:55.408826 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-hfr4x" Feb 27 17:24:55 crc kubenswrapper[4830]: I0227 17:24:55.515179 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/c8049cd6-3d72-4dec-9087-9bf35c926272-node-mnt\") pod \"c8049cd6-3d72-4dec-9087-9bf35c926272\" (UID: \"c8049cd6-3d72-4dec-9087-9bf35c926272\") " Feb 27 17:24:55 crc kubenswrapper[4830]: I0227 17:24:55.515305 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c8049cd6-3d72-4dec-9087-9bf35c926272-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "c8049cd6-3d72-4dec-9087-9bf35c926272" (UID: "c8049cd6-3d72-4dec-9087-9bf35c926272"). InnerVolumeSpecName "node-mnt". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 17:24:55 crc kubenswrapper[4830]: I0227 17:24:55.515451 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/c8049cd6-3d72-4dec-9087-9bf35c926272-crc-storage\") pod \"c8049cd6-3d72-4dec-9087-9bf35c926272\" (UID: \"c8049cd6-3d72-4dec-9087-9bf35c926272\") " Feb 27 17:24:55 crc kubenswrapper[4830]: I0227 17:24:55.515657 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xzsjf\" (UniqueName: \"kubernetes.io/projected/c8049cd6-3d72-4dec-9087-9bf35c926272-kube-api-access-xzsjf\") pod \"c8049cd6-3d72-4dec-9087-9bf35c926272\" (UID: \"c8049cd6-3d72-4dec-9087-9bf35c926272\") " Feb 27 17:24:55 crc kubenswrapper[4830]: I0227 17:24:55.516230 4830 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/c8049cd6-3d72-4dec-9087-9bf35c926272-node-mnt\") on node \"crc\" DevicePath \"\"" Feb 27 17:24:55 crc kubenswrapper[4830]: I0227 17:24:55.525511 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/c8049cd6-3d72-4dec-9087-9bf35c926272-kube-api-access-xzsjf" (OuterVolumeSpecName: "kube-api-access-xzsjf") pod "c8049cd6-3d72-4dec-9087-9bf35c926272" (UID: "c8049cd6-3d72-4dec-9087-9bf35c926272"). InnerVolumeSpecName "kube-api-access-xzsjf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:24:55 crc kubenswrapper[4830]: I0227 17:24:55.549138 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8049cd6-3d72-4dec-9087-9bf35c926272-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "c8049cd6-3d72-4dec-9087-9bf35c926272" (UID: "c8049cd6-3d72-4dec-9087-9bf35c926272"). InnerVolumeSpecName "crc-storage". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:24:55 crc kubenswrapper[4830]: I0227 17:24:55.618241 4830 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/c8049cd6-3d72-4dec-9087-9bf35c926272-crc-storage\") on node \"crc\" DevicePath \"\"" Feb 27 17:24:55 crc kubenswrapper[4830]: I0227 17:24:55.618323 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xzsjf\" (UniqueName: \"kubernetes.io/projected/c8049cd6-3d72-4dec-9087-9bf35c926272-kube-api-access-xzsjf\") on node \"crc\" DevicePath \"\"" Feb 27 17:24:56 crc kubenswrapper[4830]: I0227 17:24:56.090462 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-hfr4x" event={"ID":"c8049cd6-3d72-4dec-9087-9bf35c926272","Type":"ContainerDied","Data":"0ba855f3b5fa083c9e909e603ff2015ff085842108732102f5e409da65e6966b"} Feb 27 17:24:56 crc kubenswrapper[4830]: I0227 17:24:56.090516 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0ba855f3b5fa083c9e909e603ff2015ff085842108732102f5e409da65e6966b" Feb 27 17:24:56 crc kubenswrapper[4830]: I0227 17:24:56.090562 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-hfr4x" Feb 27 17:24:57 crc kubenswrapper[4830]: I0227 17:24:57.794280 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["crc-storage/crc-storage-crc-hfr4x"] Feb 27 17:24:57 crc kubenswrapper[4830]: I0227 17:24:57.810016 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["crc-storage/crc-storage-crc-hfr4x"] Feb 27 17:24:57 crc kubenswrapper[4830]: I0227 17:24:57.968679 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-mx28p"] Feb 27 17:24:57 crc kubenswrapper[4830]: E0227 17:24:57.969164 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8049cd6-3d72-4dec-9087-9bf35c926272" containerName="storage" Feb 27 17:24:57 crc kubenswrapper[4830]: I0227 17:24:57.969195 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8049cd6-3d72-4dec-9087-9bf35c926272" containerName="storage" Feb 27 17:24:57 crc kubenswrapper[4830]: I0227 17:24:57.969513 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8049cd6-3d72-4dec-9087-9bf35c926272" containerName="storage" Feb 27 17:24:57 crc kubenswrapper[4830]: I0227 17:24:57.970244 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-mx28p" Feb 27 17:24:57 crc kubenswrapper[4830]: I0227 17:24:57.973331 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage" Feb 27 17:24:57 crc kubenswrapper[4830]: I0227 17:24:57.973953 4830 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-8pl97" Feb 27 17:24:57 crc kubenswrapper[4830]: I0227 17:24:57.974360 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt" Feb 27 17:24:57 crc kubenswrapper[4830]: I0227 17:24:57.975219 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt" Feb 27 17:24:58 crc kubenswrapper[4830]: I0227 17:24:58.004936 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-mx28p"] Feb 27 17:24:58 crc kubenswrapper[4830]: I0227 17:24:58.056852 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/955a5e77-ec9e-4b3a-b2b1-dfa3f5f23776-crc-storage\") pod \"crc-storage-crc-mx28p\" (UID: \"955a5e77-ec9e-4b3a-b2b1-dfa3f5f23776\") " pod="crc-storage/crc-storage-crc-mx28p" Feb 27 17:24:58 crc kubenswrapper[4830]: I0227 17:24:58.056998 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5rh5\" (UniqueName: \"kubernetes.io/projected/955a5e77-ec9e-4b3a-b2b1-dfa3f5f23776-kube-api-access-x5rh5\") pod \"crc-storage-crc-mx28p\" (UID: \"955a5e77-ec9e-4b3a-b2b1-dfa3f5f23776\") " pod="crc-storage/crc-storage-crc-mx28p" Feb 27 17:24:58 crc kubenswrapper[4830]: I0227 17:24:58.057071 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/955a5e77-ec9e-4b3a-b2b1-dfa3f5f23776-node-mnt\") pod \"crc-storage-crc-mx28p\" (UID: 
\"955a5e77-ec9e-4b3a-b2b1-dfa3f5f23776\") " pod="crc-storage/crc-storage-crc-mx28p" Feb 27 17:24:58 crc kubenswrapper[4830]: I0227 17:24:58.159382 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/955a5e77-ec9e-4b3a-b2b1-dfa3f5f23776-crc-storage\") pod \"crc-storage-crc-mx28p\" (UID: \"955a5e77-ec9e-4b3a-b2b1-dfa3f5f23776\") " pod="crc-storage/crc-storage-crc-mx28p" Feb 27 17:24:58 crc kubenswrapper[4830]: I0227 17:24:58.159540 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x5rh5\" (UniqueName: \"kubernetes.io/projected/955a5e77-ec9e-4b3a-b2b1-dfa3f5f23776-kube-api-access-x5rh5\") pod \"crc-storage-crc-mx28p\" (UID: \"955a5e77-ec9e-4b3a-b2b1-dfa3f5f23776\") " pod="crc-storage/crc-storage-crc-mx28p" Feb 27 17:24:58 crc kubenswrapper[4830]: I0227 17:24:58.159620 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/955a5e77-ec9e-4b3a-b2b1-dfa3f5f23776-node-mnt\") pod \"crc-storage-crc-mx28p\" (UID: \"955a5e77-ec9e-4b3a-b2b1-dfa3f5f23776\") " pod="crc-storage/crc-storage-crc-mx28p" Feb 27 17:24:58 crc kubenswrapper[4830]: I0227 17:24:58.160067 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/955a5e77-ec9e-4b3a-b2b1-dfa3f5f23776-node-mnt\") pod \"crc-storage-crc-mx28p\" (UID: \"955a5e77-ec9e-4b3a-b2b1-dfa3f5f23776\") " pod="crc-storage/crc-storage-crc-mx28p" Feb 27 17:24:58 crc kubenswrapper[4830]: I0227 17:24:58.161107 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/955a5e77-ec9e-4b3a-b2b1-dfa3f5f23776-crc-storage\") pod \"crc-storage-crc-mx28p\" (UID: \"955a5e77-ec9e-4b3a-b2b1-dfa3f5f23776\") " pod="crc-storage/crc-storage-crc-mx28p" Feb 27 17:24:58 crc kubenswrapper[4830]: I0227 17:24:58.196034 4830 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x5rh5\" (UniqueName: \"kubernetes.io/projected/955a5e77-ec9e-4b3a-b2b1-dfa3f5f23776-kube-api-access-x5rh5\") pod \"crc-storage-crc-mx28p\" (UID: \"955a5e77-ec9e-4b3a-b2b1-dfa3f5f23776\") " pod="crc-storage/crc-storage-crc-mx28p" Feb 27 17:24:58 crc kubenswrapper[4830]: I0227 17:24:58.300378 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-mx28p" Feb 27 17:24:58 crc kubenswrapper[4830]: I0227 17:24:58.657128 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-mx28p"] Feb 27 17:24:58 crc kubenswrapper[4830]: W0227 17:24:58.667665 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod955a5e77_ec9e_4b3a_b2b1_dfa3f5f23776.slice/crio-e94aacc42db00287f52123f98447c8eb7c172357c94e745dff1d846a49996e1c WatchSource:0}: Error finding container e94aacc42db00287f52123f98447c8eb7c172357c94e745dff1d846a49996e1c: Status 404 returned error can't find the container with id e94aacc42db00287f52123f98447c8eb7c172357c94e745dff1d846a49996e1c Feb 27 17:24:58 crc kubenswrapper[4830]: I0227 17:24:58.774038 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8049cd6-3d72-4dec-9087-9bf35c926272" path="/var/lib/kubelet/pods/c8049cd6-3d72-4dec-9087-9bf35c926272/volumes" Feb 27 17:24:59 crc kubenswrapper[4830]: I0227 17:24:59.126438 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-mx28p" event={"ID":"955a5e77-ec9e-4b3a-b2b1-dfa3f5f23776","Type":"ContainerStarted","Data":"e94aacc42db00287f52123f98447c8eb7c172357c94e745dff1d846a49996e1c"} Feb 27 17:25:00 crc kubenswrapper[4830]: I0227 17:25:00.138173 4830 generic.go:334] "Generic (PLEG): container finished" podID="955a5e77-ec9e-4b3a-b2b1-dfa3f5f23776" containerID="9f140cc720306308cbbdb7faf522e5e6056c83cf159d4c78daac97a7c2ebe7bd" 
exitCode=0 Feb 27 17:25:00 crc kubenswrapper[4830]: I0227 17:25:00.138307 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-mx28p" event={"ID":"955a5e77-ec9e-4b3a-b2b1-dfa3f5f23776","Type":"ContainerDied","Data":"9f140cc720306308cbbdb7faf522e5e6056c83cf159d4c78daac97a7c2ebe7bd"} Feb 27 17:25:01 crc kubenswrapper[4830]: I0227 17:25:01.475242 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-mx28p" Feb 27 17:25:01 crc kubenswrapper[4830]: I0227 17:25:01.635608 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/955a5e77-ec9e-4b3a-b2b1-dfa3f5f23776-crc-storage\") pod \"955a5e77-ec9e-4b3a-b2b1-dfa3f5f23776\" (UID: \"955a5e77-ec9e-4b3a-b2b1-dfa3f5f23776\") " Feb 27 17:25:01 crc kubenswrapper[4830]: I0227 17:25:01.635784 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/955a5e77-ec9e-4b3a-b2b1-dfa3f5f23776-node-mnt\") pod \"955a5e77-ec9e-4b3a-b2b1-dfa3f5f23776\" (UID: \"955a5e77-ec9e-4b3a-b2b1-dfa3f5f23776\") " Feb 27 17:25:01 crc kubenswrapper[4830]: I0227 17:25:01.635809 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x5rh5\" (UniqueName: \"kubernetes.io/projected/955a5e77-ec9e-4b3a-b2b1-dfa3f5f23776-kube-api-access-x5rh5\") pod \"955a5e77-ec9e-4b3a-b2b1-dfa3f5f23776\" (UID: \"955a5e77-ec9e-4b3a-b2b1-dfa3f5f23776\") " Feb 27 17:25:01 crc kubenswrapper[4830]: I0227 17:25:01.635892 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/955a5e77-ec9e-4b3a-b2b1-dfa3f5f23776-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "955a5e77-ec9e-4b3a-b2b1-dfa3f5f23776" (UID: "955a5e77-ec9e-4b3a-b2b1-dfa3f5f23776"). InnerVolumeSpecName "node-mnt". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 17:25:01 crc kubenswrapper[4830]: I0227 17:25:01.636081 4830 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/955a5e77-ec9e-4b3a-b2b1-dfa3f5f23776-node-mnt\") on node \"crc\" DevicePath \"\"" Feb 27 17:25:01 crc kubenswrapper[4830]: I0227 17:25:01.655645 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/955a5e77-ec9e-4b3a-b2b1-dfa3f5f23776-kube-api-access-x5rh5" (OuterVolumeSpecName: "kube-api-access-x5rh5") pod "955a5e77-ec9e-4b3a-b2b1-dfa3f5f23776" (UID: "955a5e77-ec9e-4b3a-b2b1-dfa3f5f23776"). InnerVolumeSpecName "kube-api-access-x5rh5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:25:01 crc kubenswrapper[4830]: I0227 17:25:01.659580 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/955a5e77-ec9e-4b3a-b2b1-dfa3f5f23776-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "955a5e77-ec9e-4b3a-b2b1-dfa3f5f23776" (UID: "955a5e77-ec9e-4b3a-b2b1-dfa3f5f23776"). InnerVolumeSpecName "crc-storage". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:25:01 crc kubenswrapper[4830]: I0227 17:25:01.737061 4830 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/955a5e77-ec9e-4b3a-b2b1-dfa3f5f23776-crc-storage\") on node \"crc\" DevicePath \"\"" Feb 27 17:25:01 crc kubenswrapper[4830]: I0227 17:25:01.737110 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x5rh5\" (UniqueName: \"kubernetes.io/projected/955a5e77-ec9e-4b3a-b2b1-dfa3f5f23776-kube-api-access-x5rh5\") on node \"crc\" DevicePath \"\"" Feb 27 17:25:02 crc kubenswrapper[4830]: I0227 17:25:02.155850 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-mx28p" event={"ID":"955a5e77-ec9e-4b3a-b2b1-dfa3f5f23776","Type":"ContainerDied","Data":"e94aacc42db00287f52123f98447c8eb7c172357c94e745dff1d846a49996e1c"} Feb 27 17:25:02 crc kubenswrapper[4830]: I0227 17:25:02.155896 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e94aacc42db00287f52123f98447c8eb7c172357c94e745dff1d846a49996e1c" Feb 27 17:25:02 crc kubenswrapper[4830]: I0227 17:25:02.155986 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-mx28p" Feb 27 17:25:33 crc kubenswrapper[4830]: I0227 17:25:33.586053 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-xcv5r"] Feb 27 17:25:33 crc kubenswrapper[4830]: E0227 17:25:33.588168 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="955a5e77-ec9e-4b3a-b2b1-dfa3f5f23776" containerName="storage" Feb 27 17:25:33 crc kubenswrapper[4830]: I0227 17:25:33.588323 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="955a5e77-ec9e-4b3a-b2b1-dfa3f5f23776" containerName="storage" Feb 27 17:25:33 crc kubenswrapper[4830]: I0227 17:25:33.588724 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="955a5e77-ec9e-4b3a-b2b1-dfa3f5f23776" containerName="storage" Feb 27 17:25:33 crc kubenswrapper[4830]: I0227 17:25:33.591045 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xcv5r" Feb 27 17:25:33 crc kubenswrapper[4830]: I0227 17:25:33.609466 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xcv5r"] Feb 27 17:25:33 crc kubenswrapper[4830]: I0227 17:25:33.698506 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c3aa295-07ae-4594-935f-b9a902a83770-utilities\") pod \"certified-operators-xcv5r\" (UID: \"2c3aa295-07ae-4594-935f-b9a902a83770\") " pod="openshift-marketplace/certified-operators-xcv5r" Feb 27 17:25:33 crc kubenswrapper[4830]: I0227 17:25:33.698562 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5m2b\" (UniqueName: \"kubernetes.io/projected/2c3aa295-07ae-4594-935f-b9a902a83770-kube-api-access-h5m2b\") pod \"certified-operators-xcv5r\" (UID: \"2c3aa295-07ae-4594-935f-b9a902a83770\") " 
pod="openshift-marketplace/certified-operators-xcv5r" Feb 27 17:25:33 crc kubenswrapper[4830]: I0227 17:25:33.698592 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c3aa295-07ae-4594-935f-b9a902a83770-catalog-content\") pod \"certified-operators-xcv5r\" (UID: \"2c3aa295-07ae-4594-935f-b9a902a83770\") " pod="openshift-marketplace/certified-operators-xcv5r" Feb 27 17:25:33 crc kubenswrapper[4830]: I0227 17:25:33.803730 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c3aa295-07ae-4594-935f-b9a902a83770-utilities\") pod \"certified-operators-xcv5r\" (UID: \"2c3aa295-07ae-4594-935f-b9a902a83770\") " pod="openshift-marketplace/certified-operators-xcv5r" Feb 27 17:25:33 crc kubenswrapper[4830]: I0227 17:25:33.803784 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5m2b\" (UniqueName: \"kubernetes.io/projected/2c3aa295-07ae-4594-935f-b9a902a83770-kube-api-access-h5m2b\") pod \"certified-operators-xcv5r\" (UID: \"2c3aa295-07ae-4594-935f-b9a902a83770\") " pod="openshift-marketplace/certified-operators-xcv5r" Feb 27 17:25:33 crc kubenswrapper[4830]: I0227 17:25:33.803811 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c3aa295-07ae-4594-935f-b9a902a83770-catalog-content\") pod \"certified-operators-xcv5r\" (UID: \"2c3aa295-07ae-4594-935f-b9a902a83770\") " pod="openshift-marketplace/certified-operators-xcv5r" Feb 27 17:25:33 crc kubenswrapper[4830]: I0227 17:25:33.804553 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c3aa295-07ae-4594-935f-b9a902a83770-catalog-content\") pod \"certified-operators-xcv5r\" (UID: \"2c3aa295-07ae-4594-935f-b9a902a83770\") " 
pod="openshift-marketplace/certified-operators-xcv5r" Feb 27 17:25:33 crc kubenswrapper[4830]: I0227 17:25:33.805160 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c3aa295-07ae-4594-935f-b9a902a83770-utilities\") pod \"certified-operators-xcv5r\" (UID: \"2c3aa295-07ae-4594-935f-b9a902a83770\") " pod="openshift-marketplace/certified-operators-xcv5r" Feb 27 17:25:33 crc kubenswrapper[4830]: I0227 17:25:33.808684 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-dlmbr"] Feb 27 17:25:33 crc kubenswrapper[4830]: I0227 17:25:33.812669 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dlmbr" Feb 27 17:25:33 crc kubenswrapper[4830]: I0227 17:25:33.849091 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5m2b\" (UniqueName: \"kubernetes.io/projected/2c3aa295-07ae-4594-935f-b9a902a83770-kube-api-access-h5m2b\") pod \"certified-operators-xcv5r\" (UID: \"2c3aa295-07ae-4594-935f-b9a902a83770\") " pod="openshift-marketplace/certified-operators-xcv5r" Feb 27 17:25:33 crc kubenswrapper[4830]: I0227 17:25:33.855041 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dlmbr"] Feb 27 17:25:33 crc kubenswrapper[4830]: I0227 17:25:33.905016 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2bda8a3e-040c-4a7b-8977-0c505a218294-utilities\") pod \"community-operators-dlmbr\" (UID: \"2bda8a3e-040c-4a7b-8977-0c505a218294\") " pod="openshift-marketplace/community-operators-dlmbr" Feb 27 17:25:33 crc kubenswrapper[4830]: I0227 17:25:33.905201 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfrt6\" (UniqueName: 
\"kubernetes.io/projected/2bda8a3e-040c-4a7b-8977-0c505a218294-kube-api-access-vfrt6\") pod \"community-operators-dlmbr\" (UID: \"2bda8a3e-040c-4a7b-8977-0c505a218294\") " pod="openshift-marketplace/community-operators-dlmbr" Feb 27 17:25:33 crc kubenswrapper[4830]: I0227 17:25:33.905444 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2bda8a3e-040c-4a7b-8977-0c505a218294-catalog-content\") pod \"community-operators-dlmbr\" (UID: \"2bda8a3e-040c-4a7b-8977-0c505a218294\") " pod="openshift-marketplace/community-operators-dlmbr" Feb 27 17:25:33 crc kubenswrapper[4830]: I0227 17:25:33.949924 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xcv5r" Feb 27 17:25:34 crc kubenswrapper[4830]: I0227 17:25:34.007496 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vfrt6\" (UniqueName: \"kubernetes.io/projected/2bda8a3e-040c-4a7b-8977-0c505a218294-kube-api-access-vfrt6\") pod \"community-operators-dlmbr\" (UID: \"2bda8a3e-040c-4a7b-8977-0c505a218294\") " pod="openshift-marketplace/community-operators-dlmbr" Feb 27 17:25:34 crc kubenswrapper[4830]: I0227 17:25:34.008225 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2bda8a3e-040c-4a7b-8977-0c505a218294-catalog-content\") pod \"community-operators-dlmbr\" (UID: \"2bda8a3e-040c-4a7b-8977-0c505a218294\") " pod="openshift-marketplace/community-operators-dlmbr" Feb 27 17:25:34 crc kubenswrapper[4830]: I0227 17:25:34.008779 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2bda8a3e-040c-4a7b-8977-0c505a218294-catalog-content\") pod \"community-operators-dlmbr\" (UID: \"2bda8a3e-040c-4a7b-8977-0c505a218294\") " 
pod="openshift-marketplace/community-operators-dlmbr" Feb 27 17:25:34 crc kubenswrapper[4830]: I0227 17:25:34.008870 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2bda8a3e-040c-4a7b-8977-0c505a218294-utilities\") pod \"community-operators-dlmbr\" (UID: \"2bda8a3e-040c-4a7b-8977-0c505a218294\") " pod="openshift-marketplace/community-operators-dlmbr" Feb 27 17:25:34 crc kubenswrapper[4830]: I0227 17:25:34.009148 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2bda8a3e-040c-4a7b-8977-0c505a218294-utilities\") pod \"community-operators-dlmbr\" (UID: \"2bda8a3e-040c-4a7b-8977-0c505a218294\") " pod="openshift-marketplace/community-operators-dlmbr" Feb 27 17:25:34 crc kubenswrapper[4830]: I0227 17:25:34.030022 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vfrt6\" (UniqueName: \"kubernetes.io/projected/2bda8a3e-040c-4a7b-8977-0c505a218294-kube-api-access-vfrt6\") pod \"community-operators-dlmbr\" (UID: \"2bda8a3e-040c-4a7b-8977-0c505a218294\") " pod="openshift-marketplace/community-operators-dlmbr" Feb 27 17:25:34 crc kubenswrapper[4830]: I0227 17:25:34.177033 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-dlmbr" Feb 27 17:25:34 crc kubenswrapper[4830]: I0227 17:25:34.452645 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xcv5r"] Feb 27 17:25:34 crc kubenswrapper[4830]: I0227 17:25:34.494878 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xcv5r" event={"ID":"2c3aa295-07ae-4594-935f-b9a902a83770","Type":"ContainerStarted","Data":"13c5262d3c5d906c6e473a7c56a5f2b8e6331c73bb43576d09c0d1810c03a584"} Feb 27 17:25:34 crc kubenswrapper[4830]: I0227 17:25:34.700294 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dlmbr"] Feb 27 17:25:34 crc kubenswrapper[4830]: W0227 17:25:34.746724 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2bda8a3e_040c_4a7b_8977_0c505a218294.slice/crio-bd9c2c789060baf53af2d430e9f507db5723ebc4a714a1b602a46e258ff1820c WatchSource:0}: Error finding container bd9c2c789060baf53af2d430e9f507db5723ebc4a714a1b602a46e258ff1820c: Status 404 returned error can't find the container with id bd9c2c789060baf53af2d430e9f507db5723ebc4a714a1b602a46e258ff1820c Feb 27 17:25:35 crc kubenswrapper[4830]: I0227 17:25:35.508359 4830 generic.go:334] "Generic (PLEG): container finished" podID="2c3aa295-07ae-4594-935f-b9a902a83770" containerID="46062a626fb43beb15eaa054d63dc4a12cb1050ff12ca133ccfbe821deb176b6" exitCode=0 Feb 27 17:25:35 crc kubenswrapper[4830]: I0227 17:25:35.508412 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xcv5r" event={"ID":"2c3aa295-07ae-4594-935f-b9a902a83770","Type":"ContainerDied","Data":"46062a626fb43beb15eaa054d63dc4a12cb1050ff12ca133ccfbe821deb176b6"} Feb 27 17:25:35 crc kubenswrapper[4830]: I0227 17:25:35.511456 4830 provider.go:102] Refreshing cache for provider: 
*credentialprovider.defaultDockerConfigProvider Feb 27 17:25:35 crc kubenswrapper[4830]: I0227 17:25:35.512709 4830 generic.go:334] "Generic (PLEG): container finished" podID="2bda8a3e-040c-4a7b-8977-0c505a218294" containerID="0c84676b0dd61b4808854028331ffe40ad41d1e61026103a8bbe09aa794dd6a1" exitCode=0 Feb 27 17:25:35 crc kubenswrapper[4830]: I0227 17:25:35.512780 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dlmbr" event={"ID":"2bda8a3e-040c-4a7b-8977-0c505a218294","Type":"ContainerDied","Data":"0c84676b0dd61b4808854028331ffe40ad41d1e61026103a8bbe09aa794dd6a1"} Feb 27 17:25:35 crc kubenswrapper[4830]: I0227 17:25:35.512827 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dlmbr" event={"ID":"2bda8a3e-040c-4a7b-8977-0c505a218294","Type":"ContainerStarted","Data":"bd9c2c789060baf53af2d430e9f507db5723ebc4a714a1b602a46e258ff1820c"} Feb 27 17:25:36 crc kubenswrapper[4830]: I0227 17:25:36.183210 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-jxw8t"] Feb 27 17:25:36 crc kubenswrapper[4830]: I0227 17:25:36.185867 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jxw8t" Feb 27 17:25:36 crc kubenswrapper[4830]: I0227 17:25:36.198695 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jxw8t"] Feb 27 17:25:36 crc kubenswrapper[4830]: I0227 17:25:36.250575 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5ae40f6-78ca-46d5-8cdb-fc71baad1c4f-catalog-content\") pod \"redhat-operators-jxw8t\" (UID: \"f5ae40f6-78ca-46d5-8cdb-fc71baad1c4f\") " pod="openshift-marketplace/redhat-operators-jxw8t" Feb 27 17:25:36 crc kubenswrapper[4830]: I0227 17:25:36.250728 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhmmq\" (UniqueName: \"kubernetes.io/projected/f5ae40f6-78ca-46d5-8cdb-fc71baad1c4f-kube-api-access-vhmmq\") pod \"redhat-operators-jxw8t\" (UID: \"f5ae40f6-78ca-46d5-8cdb-fc71baad1c4f\") " pod="openshift-marketplace/redhat-operators-jxw8t" Feb 27 17:25:36 crc kubenswrapper[4830]: I0227 17:25:36.250808 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5ae40f6-78ca-46d5-8cdb-fc71baad1c4f-utilities\") pod \"redhat-operators-jxw8t\" (UID: \"f5ae40f6-78ca-46d5-8cdb-fc71baad1c4f\") " pod="openshift-marketplace/redhat-operators-jxw8t" Feb 27 17:25:36 crc kubenswrapper[4830]: I0227 17:25:36.352315 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5ae40f6-78ca-46d5-8cdb-fc71baad1c4f-utilities\") pod \"redhat-operators-jxw8t\" (UID: \"f5ae40f6-78ca-46d5-8cdb-fc71baad1c4f\") " pod="openshift-marketplace/redhat-operators-jxw8t" Feb 27 17:25:36 crc kubenswrapper[4830]: I0227 17:25:36.352834 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5ae40f6-78ca-46d5-8cdb-fc71baad1c4f-catalog-content\") pod \"redhat-operators-jxw8t\" (UID: \"f5ae40f6-78ca-46d5-8cdb-fc71baad1c4f\") " pod="openshift-marketplace/redhat-operators-jxw8t" Feb 27 17:25:36 crc kubenswrapper[4830]: I0227 17:25:36.353099 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vhmmq\" (UniqueName: \"kubernetes.io/projected/f5ae40f6-78ca-46d5-8cdb-fc71baad1c4f-kube-api-access-vhmmq\") pod \"redhat-operators-jxw8t\" (UID: \"f5ae40f6-78ca-46d5-8cdb-fc71baad1c4f\") " pod="openshift-marketplace/redhat-operators-jxw8t" Feb 27 17:25:36 crc kubenswrapper[4830]: I0227 17:25:36.353501 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5ae40f6-78ca-46d5-8cdb-fc71baad1c4f-utilities\") pod \"redhat-operators-jxw8t\" (UID: \"f5ae40f6-78ca-46d5-8cdb-fc71baad1c4f\") " pod="openshift-marketplace/redhat-operators-jxw8t" Feb 27 17:25:36 crc kubenswrapper[4830]: I0227 17:25:36.353592 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5ae40f6-78ca-46d5-8cdb-fc71baad1c4f-catalog-content\") pod \"redhat-operators-jxw8t\" (UID: \"f5ae40f6-78ca-46d5-8cdb-fc71baad1c4f\") " pod="openshift-marketplace/redhat-operators-jxw8t" Feb 27 17:25:36 crc kubenswrapper[4830]: I0227 17:25:36.380909 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vhmmq\" (UniqueName: \"kubernetes.io/projected/f5ae40f6-78ca-46d5-8cdb-fc71baad1c4f-kube-api-access-vhmmq\") pod \"redhat-operators-jxw8t\" (UID: \"f5ae40f6-78ca-46d5-8cdb-fc71baad1c4f\") " pod="openshift-marketplace/redhat-operators-jxw8t" Feb 27 17:25:36 crc kubenswrapper[4830]: I0227 17:25:36.520429 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jxw8t" Feb 27 17:25:37 crc kubenswrapper[4830]: I0227 17:25:37.032302 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jxw8t"] Feb 27 17:25:37 crc kubenswrapper[4830]: I0227 17:25:37.556191 4830 generic.go:334] "Generic (PLEG): container finished" podID="f5ae40f6-78ca-46d5-8cdb-fc71baad1c4f" containerID="d42a6d8295bfbb9141258ace347c396d56c2141e553d7e26879361eb9c6e0a3e" exitCode=0 Feb 27 17:25:37 crc kubenswrapper[4830]: I0227 17:25:37.556263 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jxw8t" event={"ID":"f5ae40f6-78ca-46d5-8cdb-fc71baad1c4f","Type":"ContainerDied","Data":"d42a6d8295bfbb9141258ace347c396d56c2141e553d7e26879361eb9c6e0a3e"} Feb 27 17:25:37 crc kubenswrapper[4830]: I0227 17:25:37.556291 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jxw8t" event={"ID":"f5ae40f6-78ca-46d5-8cdb-fc71baad1c4f","Type":"ContainerStarted","Data":"0f8a04e694ea9f921b08751aa57b11083053ff70dabec7b795aa1fab89f261a0"} Feb 27 17:25:37 crc kubenswrapper[4830]: I0227 17:25:37.563733 4830 generic.go:334] "Generic (PLEG): container finished" podID="2bda8a3e-040c-4a7b-8977-0c505a218294" containerID="bcf18e25b0dbfe7b563ca5aa2db33841e1516cbb2055ec308454293397754f87" exitCode=0 Feb 27 17:25:37 crc kubenswrapper[4830]: I0227 17:25:37.563776 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dlmbr" event={"ID":"2bda8a3e-040c-4a7b-8977-0c505a218294","Type":"ContainerDied","Data":"bcf18e25b0dbfe7b563ca5aa2db33841e1516cbb2055ec308454293397754f87"} Feb 27 17:25:38 crc kubenswrapper[4830]: I0227 17:25:38.573218 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jxw8t" 
event={"ID":"f5ae40f6-78ca-46d5-8cdb-fc71baad1c4f","Type":"ContainerStarted","Data":"3281846c8cf83b79406f5f69ec7738d06bbdf5b2f8d1da1e55964dadbfe1c5e1"} Feb 27 17:25:38 crc kubenswrapper[4830]: I0227 17:25:38.581506 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dlmbr" event={"ID":"2bda8a3e-040c-4a7b-8977-0c505a218294","Type":"ContainerStarted","Data":"0cfdec637de90c67bff414224ea77ae1d1a4a6e2a6964afbe7a99cc55452493e"} Feb 27 17:25:38 crc kubenswrapper[4830]: I0227 17:25:38.628677 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-dlmbr" podStartSLOduration=3.168538111 podStartE2EDuration="5.628657882s" podCreationTimestamp="2026-02-27 17:25:33 +0000 UTC" firstStartedPulling="2026-02-27 17:25:35.515913475 +0000 UTC m=+4731.605185968" lastFinishedPulling="2026-02-27 17:25:37.976033276 +0000 UTC m=+4734.065305739" observedRunningTime="2026-02-27 17:25:38.620810517 +0000 UTC m=+4734.710082990" watchObservedRunningTime="2026-02-27 17:25:38.628657882 +0000 UTC m=+4734.717930345" Feb 27 17:25:39 crc kubenswrapper[4830]: I0227 17:25:39.607172 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jxw8t" event={"ID":"f5ae40f6-78ca-46d5-8cdb-fc71baad1c4f","Type":"ContainerDied","Data":"3281846c8cf83b79406f5f69ec7738d06bbdf5b2f8d1da1e55964dadbfe1c5e1"} Feb 27 17:25:39 crc kubenswrapper[4830]: I0227 17:25:39.609081 4830 generic.go:334] "Generic (PLEG): container finished" podID="f5ae40f6-78ca-46d5-8cdb-fc71baad1c4f" containerID="3281846c8cf83b79406f5f69ec7738d06bbdf5b2f8d1da1e55964dadbfe1c5e1" exitCode=0 Feb 27 17:25:41 crc kubenswrapper[4830]: I0227 17:25:41.629919 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xcv5r" 
event={"ID":"2c3aa295-07ae-4594-935f-b9a902a83770","Type":"ContainerStarted","Data":"7ca25a8cb710fd0083d436dc117f0a29175e46fc7c05e945e390edc7145d0e14"}
Feb 27 17:25:42 crc kubenswrapper[4830]: I0227 17:25:42.643070 4830 generic.go:334] "Generic (PLEG): container finished" podID="2c3aa295-07ae-4594-935f-b9a902a83770" containerID="7ca25a8cb710fd0083d436dc117f0a29175e46fc7c05e945e390edc7145d0e14" exitCode=0
Feb 27 17:25:42 crc kubenswrapper[4830]: I0227 17:25:42.643184 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xcv5r" event={"ID":"2c3aa295-07ae-4594-935f-b9a902a83770","Type":"ContainerDied","Data":"7ca25a8cb710fd0083d436dc117f0a29175e46fc7c05e945e390edc7145d0e14"}
Feb 27 17:25:42 crc kubenswrapper[4830]: I0227 17:25:42.647082 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jxw8t" event={"ID":"f5ae40f6-78ca-46d5-8cdb-fc71baad1c4f","Type":"ContainerStarted","Data":"80f0b73197f216a180f6ce1893bdd5a86f6f7e12efe0382ef68d277cca2cbb12"}
Feb 27 17:25:42 crc kubenswrapper[4830]: I0227 17:25:42.700524 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-jxw8t" podStartSLOduration=2.726045054 podStartE2EDuration="6.700495325s" podCreationTimestamp="2026-02-27 17:25:36 +0000 UTC" firstStartedPulling="2026-02-27 17:25:37.558090105 +0000 UTC m=+4733.647362568" lastFinishedPulling="2026-02-27 17:25:41.532540336 +0000 UTC m=+4737.621812839" observedRunningTime="2026-02-27 17:25:42.698965508 +0000 UTC m=+4738.788237981" watchObservedRunningTime="2026-02-27 17:25:42.700495325 +0000 UTC m=+4738.789767828"
Feb 27 17:25:43 crc kubenswrapper[4830]: I0227 17:25:43.666390 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xcv5r" event={"ID":"2c3aa295-07ae-4594-935f-b9a902a83770","Type":"ContainerStarted","Data":"8556c3b37adbf824de8a75ac0ce4b2a503756129ed498fab6991ba1660d00413"}
Feb 27 17:25:43 crc kubenswrapper[4830]: I0227 17:25:43.701917 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-xcv5r" podStartSLOduration=3.16043894 podStartE2EDuration="10.701888492s" podCreationTimestamp="2026-02-27 17:25:33 +0000 UTC" firstStartedPulling="2026-02-27 17:25:35.511136612 +0000 UTC m=+4731.600409095" lastFinishedPulling="2026-02-27 17:25:43.052586174 +0000 UTC m=+4739.141858647" observedRunningTime="2026-02-27 17:25:43.699660349 +0000 UTC m=+4739.788932822" watchObservedRunningTime="2026-02-27 17:25:43.701888492 +0000 UTC m=+4739.791160995"
Feb 27 17:25:43 crc kubenswrapper[4830]: I0227 17:25:43.950864 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-xcv5r"
Feb 27 17:25:43 crc kubenswrapper[4830]: I0227 17:25:43.950927 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-xcv5r"
Feb 27 17:25:44 crc kubenswrapper[4830]: I0227 17:25:44.178294 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-dlmbr"
Feb 27 17:25:44 crc kubenswrapper[4830]: I0227 17:25:44.178376 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-dlmbr"
Feb 27 17:25:44 crc kubenswrapper[4830]: I0227 17:25:44.256312 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-dlmbr"
Feb 27 17:25:44 crc kubenswrapper[4830]: I0227 17:25:44.813870 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-dlmbr"
Feb 27 17:25:45 crc kubenswrapper[4830]: I0227 17:25:45.007616 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-xcv5r" podUID="2c3aa295-07ae-4594-935f-b9a902a83770" containerName="registry-server" probeResult="failure" output=<
Feb 27 17:25:45 crc kubenswrapper[4830]: timeout: failed to connect service ":50051" within 1s
Feb 27 17:25:45 crc kubenswrapper[4830]: >
Feb 27 17:25:46 crc kubenswrapper[4830]: I0227 17:25:46.521276 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-jxw8t"
Feb 27 17:25:46 crc kubenswrapper[4830]: I0227 17:25:46.521650 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-jxw8t"
Feb 27 17:25:46 crc kubenswrapper[4830]: I0227 17:25:46.776986 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dlmbr"]
Feb 27 17:25:46 crc kubenswrapper[4830]: I0227 17:25:46.777317 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-dlmbr" podUID="2bda8a3e-040c-4a7b-8977-0c505a218294" containerName="registry-server" containerID="cri-o://0cfdec637de90c67bff414224ea77ae1d1a4a6e2a6964afbe7a99cc55452493e" gracePeriod=2
Feb 27 17:25:47 crc kubenswrapper[4830]: I0227 17:25:47.579144 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-jxw8t" podUID="f5ae40f6-78ca-46d5-8cdb-fc71baad1c4f" containerName="registry-server" probeResult="failure" output=<
Feb 27 17:25:47 crc kubenswrapper[4830]: timeout: failed to connect service ":50051" within 1s
Feb 27 17:25:47 crc kubenswrapper[4830]: >
Feb 27 17:25:47 crc kubenswrapper[4830]: I0227 17:25:47.721407 4830 scope.go:117] "RemoveContainer" containerID="6551ca8307c310307738252d5c343368661b9835bd2aa4841de7ad6adee6d3b5"
Feb 27 17:25:48 crc kubenswrapper[4830]: I0227 17:25:48.644088 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dlmbr"
Feb 27 17:25:48 crc kubenswrapper[4830]: I0227 17:25:48.800300 4830 generic.go:334] "Generic (PLEG): container finished" podID="2bda8a3e-040c-4a7b-8977-0c505a218294" containerID="0cfdec637de90c67bff414224ea77ae1d1a4a6e2a6964afbe7a99cc55452493e" exitCode=0
Feb 27 17:25:48 crc kubenswrapper[4830]: I0227 17:25:48.800362 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dlmbr"
Feb 27 17:25:48 crc kubenswrapper[4830]: I0227 17:25:48.800620 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dlmbr" event={"ID":"2bda8a3e-040c-4a7b-8977-0c505a218294","Type":"ContainerDied","Data":"0cfdec637de90c67bff414224ea77ae1d1a4a6e2a6964afbe7a99cc55452493e"}
Feb 27 17:25:48 crc kubenswrapper[4830]: I0227 17:25:48.800726 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dlmbr" event={"ID":"2bda8a3e-040c-4a7b-8977-0c505a218294","Type":"ContainerDied","Data":"bd9c2c789060baf53af2d430e9f507db5723ebc4a714a1b602a46e258ff1820c"}
Feb 27 17:25:48 crc kubenswrapper[4830]: I0227 17:25:48.800815 4830 scope.go:117] "RemoveContainer" containerID="0cfdec637de90c67bff414224ea77ae1d1a4a6e2a6964afbe7a99cc55452493e"
Feb 27 17:25:48 crc kubenswrapper[4830]: I0227 17:25:48.807445 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vfrt6\" (UniqueName: \"kubernetes.io/projected/2bda8a3e-040c-4a7b-8977-0c505a218294-kube-api-access-vfrt6\") pod \"2bda8a3e-040c-4a7b-8977-0c505a218294\" (UID: \"2bda8a3e-040c-4a7b-8977-0c505a218294\") "
Feb 27 17:25:48 crc kubenswrapper[4830]: I0227 17:25:48.807523 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2bda8a3e-040c-4a7b-8977-0c505a218294-utilities\") pod \"2bda8a3e-040c-4a7b-8977-0c505a218294\" (UID: \"2bda8a3e-040c-4a7b-8977-0c505a218294\") "
Feb 27 17:25:48 crc kubenswrapper[4830]: I0227 17:25:48.807632 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2bda8a3e-040c-4a7b-8977-0c505a218294-catalog-content\") pod \"2bda8a3e-040c-4a7b-8977-0c505a218294\" (UID: \"2bda8a3e-040c-4a7b-8977-0c505a218294\") "
Feb 27 17:25:48 crc kubenswrapper[4830]: I0227 17:25:48.808259 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2bda8a3e-040c-4a7b-8977-0c505a218294-utilities" (OuterVolumeSpecName: "utilities") pod "2bda8a3e-040c-4a7b-8977-0c505a218294" (UID: "2bda8a3e-040c-4a7b-8977-0c505a218294"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 27 17:25:48 crc kubenswrapper[4830]: I0227 17:25:48.813665 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2bda8a3e-040c-4a7b-8977-0c505a218294-kube-api-access-vfrt6" (OuterVolumeSpecName: "kube-api-access-vfrt6") pod "2bda8a3e-040c-4a7b-8977-0c505a218294" (UID: "2bda8a3e-040c-4a7b-8977-0c505a218294"). InnerVolumeSpecName "kube-api-access-vfrt6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 17:25:48 crc kubenswrapper[4830]: I0227 17:25:48.834178 4830 scope.go:117] "RemoveContainer" containerID="bcf18e25b0dbfe7b563ca5aa2db33841e1516cbb2055ec308454293397754f87"
Feb 27 17:25:48 crc kubenswrapper[4830]: I0227 17:25:48.861015 4830 scope.go:117] "RemoveContainer" containerID="0c84676b0dd61b4808854028331ffe40ad41d1e61026103a8bbe09aa794dd6a1"
Feb 27 17:25:48 crc kubenswrapper[4830]: I0227 17:25:48.862475 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2bda8a3e-040c-4a7b-8977-0c505a218294-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2bda8a3e-040c-4a7b-8977-0c505a218294" (UID: "2bda8a3e-040c-4a7b-8977-0c505a218294"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 27 17:25:48 crc kubenswrapper[4830]: I0227 17:25:48.900765 4830 scope.go:117] "RemoveContainer" containerID="0cfdec637de90c67bff414224ea77ae1d1a4a6e2a6964afbe7a99cc55452493e"
Feb 27 17:25:48 crc kubenswrapper[4830]: E0227 17:25:48.901540 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0cfdec637de90c67bff414224ea77ae1d1a4a6e2a6964afbe7a99cc55452493e\": container with ID starting with 0cfdec637de90c67bff414224ea77ae1d1a4a6e2a6964afbe7a99cc55452493e not found: ID does not exist" containerID="0cfdec637de90c67bff414224ea77ae1d1a4a6e2a6964afbe7a99cc55452493e"
Feb 27 17:25:48 crc kubenswrapper[4830]: I0227 17:25:48.901585 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0cfdec637de90c67bff414224ea77ae1d1a4a6e2a6964afbe7a99cc55452493e"} err="failed to get container status \"0cfdec637de90c67bff414224ea77ae1d1a4a6e2a6964afbe7a99cc55452493e\": rpc error: code = NotFound desc = could not find container \"0cfdec637de90c67bff414224ea77ae1d1a4a6e2a6964afbe7a99cc55452493e\": container with ID starting with 0cfdec637de90c67bff414224ea77ae1d1a4a6e2a6964afbe7a99cc55452493e not found: ID does not exist"
Feb 27 17:25:48 crc kubenswrapper[4830]: I0227 17:25:48.901627 4830 scope.go:117] "RemoveContainer" containerID="bcf18e25b0dbfe7b563ca5aa2db33841e1516cbb2055ec308454293397754f87"
Feb 27 17:25:48 crc kubenswrapper[4830]: E0227 17:25:48.902107 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bcf18e25b0dbfe7b563ca5aa2db33841e1516cbb2055ec308454293397754f87\": container with ID starting with bcf18e25b0dbfe7b563ca5aa2db33841e1516cbb2055ec308454293397754f87 not found: ID does not exist" containerID="bcf18e25b0dbfe7b563ca5aa2db33841e1516cbb2055ec308454293397754f87"
Feb 27 17:25:48 crc kubenswrapper[4830]: I0227 17:25:48.902193 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bcf18e25b0dbfe7b563ca5aa2db33841e1516cbb2055ec308454293397754f87"} err="failed to get container status \"bcf18e25b0dbfe7b563ca5aa2db33841e1516cbb2055ec308454293397754f87\": rpc error: code = NotFound desc = could not find container \"bcf18e25b0dbfe7b563ca5aa2db33841e1516cbb2055ec308454293397754f87\": container with ID starting with bcf18e25b0dbfe7b563ca5aa2db33841e1516cbb2055ec308454293397754f87 not found: ID does not exist"
Feb 27 17:25:48 crc kubenswrapper[4830]: I0227 17:25:48.902265 4830 scope.go:117] "RemoveContainer" containerID="0c84676b0dd61b4808854028331ffe40ad41d1e61026103a8bbe09aa794dd6a1"
Feb 27 17:25:48 crc kubenswrapper[4830]: E0227 17:25:48.902694 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0c84676b0dd61b4808854028331ffe40ad41d1e61026103a8bbe09aa794dd6a1\": container with ID starting with 0c84676b0dd61b4808854028331ffe40ad41d1e61026103a8bbe09aa794dd6a1 not found: ID does not exist" containerID="0c84676b0dd61b4808854028331ffe40ad41d1e61026103a8bbe09aa794dd6a1"
Feb 27 17:25:48 crc kubenswrapper[4830]: I0227 17:25:48.902721 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c84676b0dd61b4808854028331ffe40ad41d1e61026103a8bbe09aa794dd6a1"} err="failed to get container status \"0c84676b0dd61b4808854028331ffe40ad41d1e61026103a8bbe09aa794dd6a1\": rpc error: code = NotFound desc = could not find container \"0c84676b0dd61b4808854028331ffe40ad41d1e61026103a8bbe09aa794dd6a1\": container with ID starting with 0c84676b0dd61b4808854028331ffe40ad41d1e61026103a8bbe09aa794dd6a1 not found: ID does not exist"
Feb 27 17:25:48 crc kubenswrapper[4830]: I0227 17:25:48.909716 4830 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2bda8a3e-040c-4a7b-8977-0c505a218294-utilities\") on node \"crc\" DevicePath \"\""
Feb 27 17:25:48 crc kubenswrapper[4830]: I0227 17:25:48.909850 4830 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2bda8a3e-040c-4a7b-8977-0c505a218294-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 27 17:25:48 crc kubenswrapper[4830]: I0227 17:25:48.909933 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vfrt6\" (UniqueName: \"kubernetes.io/projected/2bda8a3e-040c-4a7b-8977-0c505a218294-kube-api-access-vfrt6\") on node \"crc\" DevicePath \"\""
Feb 27 17:25:49 crc kubenswrapper[4830]: I0227 17:25:49.135536 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dlmbr"]
Feb 27 17:25:49 crc kubenswrapper[4830]: I0227 17:25:49.140413 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-dlmbr"]
Feb 27 17:25:50 crc kubenswrapper[4830]: I0227 17:25:50.778081 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2bda8a3e-040c-4a7b-8977-0c505a218294" path="/var/lib/kubelet/pods/2bda8a3e-040c-4a7b-8977-0c505a218294/volumes"
Feb 27 17:25:54 crc kubenswrapper[4830]: I0227 17:25:54.023814 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-xcv5r"
Feb 27 17:25:54 crc kubenswrapper[4830]: I0227 17:25:54.101226 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-xcv5r"
Feb 27 17:25:54 crc kubenswrapper[4830]: I0227 17:25:54.195998 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xcv5r"]
Feb 27 17:25:54 crc kubenswrapper[4830]: I0227 17:25:54.276434 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jt6jl"]
Feb 27 17:25:54 crc kubenswrapper[4830]: I0227 17:25:54.277683 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-jt6jl" podUID="07c2162b-fcb8-4423-b0c6-75eefad7b1f8" containerName="registry-server" containerID="cri-o://1b6401b8d147a2db37c19e8880e46cca00360015fce4613f84d8255fb65355c2" gracePeriod=2
Feb 27 17:25:54 crc kubenswrapper[4830]: I0227 17:25:54.810490 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jt6jl"
Feb 27 17:25:54 crc kubenswrapper[4830]: I0227 17:25:54.826388 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07c2162b-fcb8-4423-b0c6-75eefad7b1f8-catalog-content\") pod \"07c2162b-fcb8-4423-b0c6-75eefad7b1f8\" (UID: \"07c2162b-fcb8-4423-b0c6-75eefad7b1f8\") "
Feb 27 17:25:54 crc kubenswrapper[4830]: I0227 17:25:54.826444 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sp74f\" (UniqueName: \"kubernetes.io/projected/07c2162b-fcb8-4423-b0c6-75eefad7b1f8-kube-api-access-sp74f\") pod \"07c2162b-fcb8-4423-b0c6-75eefad7b1f8\" (UID: \"07c2162b-fcb8-4423-b0c6-75eefad7b1f8\") "
Feb 27 17:25:54 crc kubenswrapper[4830]: I0227 17:25:54.826504 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07c2162b-fcb8-4423-b0c6-75eefad7b1f8-utilities\") pod \"07c2162b-fcb8-4423-b0c6-75eefad7b1f8\" (UID: \"07c2162b-fcb8-4423-b0c6-75eefad7b1f8\") "
Feb 27 17:25:54 crc kubenswrapper[4830]: I0227 17:25:54.836863 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/07c2162b-fcb8-4423-b0c6-75eefad7b1f8-utilities" (OuterVolumeSpecName: "utilities") pod "07c2162b-fcb8-4423-b0c6-75eefad7b1f8" (UID: "07c2162b-fcb8-4423-b0c6-75eefad7b1f8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 27 17:25:54 crc kubenswrapper[4830]: I0227 17:25:54.869357 4830 generic.go:334] "Generic (PLEG): container finished" podID="07c2162b-fcb8-4423-b0c6-75eefad7b1f8" containerID="1b6401b8d147a2db37c19e8880e46cca00360015fce4613f84d8255fb65355c2" exitCode=0
Feb 27 17:25:54 crc kubenswrapper[4830]: I0227 17:25:54.870856 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jt6jl"
Feb 27 17:25:54 crc kubenswrapper[4830]: I0227 17:25:54.870977 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jt6jl" event={"ID":"07c2162b-fcb8-4423-b0c6-75eefad7b1f8","Type":"ContainerDied","Data":"1b6401b8d147a2db37c19e8880e46cca00360015fce4613f84d8255fb65355c2"}
Feb 27 17:25:54 crc kubenswrapper[4830]: I0227 17:25:54.871009 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jt6jl" event={"ID":"07c2162b-fcb8-4423-b0c6-75eefad7b1f8","Type":"ContainerDied","Data":"8062a7294767a91727df0948ed7ae665be46cde3e52644c0e41a65f779e353c2"}
Feb 27 17:25:54 crc kubenswrapper[4830]: I0227 17:25:54.871033 4830 scope.go:117] "RemoveContainer" containerID="1b6401b8d147a2db37c19e8880e46cca00360015fce4613f84d8255fb65355c2"
Feb 27 17:25:54 crc kubenswrapper[4830]: I0227 17:25:54.872920 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07c2162b-fcb8-4423-b0c6-75eefad7b1f8-kube-api-access-sp74f" (OuterVolumeSpecName: "kube-api-access-sp74f") pod "07c2162b-fcb8-4423-b0c6-75eefad7b1f8" (UID: "07c2162b-fcb8-4423-b0c6-75eefad7b1f8"). InnerVolumeSpecName "kube-api-access-sp74f". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 17:25:54 crc kubenswrapper[4830]: I0227 17:25:54.912165 4830 scope.go:117] "RemoveContainer" containerID="9a9ff0563ccdf509a46a54799d82b5e6abf49aee6f0c8ae60d4db9d084ff65d3"
Feb 27 17:25:54 crc kubenswrapper[4830]: I0227 17:25:54.928080 4830 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07c2162b-fcb8-4423-b0c6-75eefad7b1f8-utilities\") on node \"crc\" DevicePath \"\""
Feb 27 17:25:54 crc kubenswrapper[4830]: I0227 17:25:54.928121 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sp74f\" (UniqueName: \"kubernetes.io/projected/07c2162b-fcb8-4423-b0c6-75eefad7b1f8-kube-api-access-sp74f\") on node \"crc\" DevicePath \"\""
Feb 27 17:25:54 crc kubenswrapper[4830]: I0227 17:25:54.929044 4830 scope.go:117] "RemoveContainer" containerID="e3dab58ca71daa8e06ecd55b99936ff2fd36914c8c19964004de91fffec7a5e0"
Feb 27 17:25:54 crc kubenswrapper[4830]: I0227 17:25:54.937340 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/07c2162b-fcb8-4423-b0c6-75eefad7b1f8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "07c2162b-fcb8-4423-b0c6-75eefad7b1f8" (UID: "07c2162b-fcb8-4423-b0c6-75eefad7b1f8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 27 17:25:54 crc kubenswrapper[4830]: I0227 17:25:54.961370 4830 scope.go:117] "RemoveContainer" containerID="1b6401b8d147a2db37c19e8880e46cca00360015fce4613f84d8255fb65355c2"
Feb 27 17:25:54 crc kubenswrapper[4830]: E0227 17:25:54.962934 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b6401b8d147a2db37c19e8880e46cca00360015fce4613f84d8255fb65355c2\": container with ID starting with 1b6401b8d147a2db37c19e8880e46cca00360015fce4613f84d8255fb65355c2 not found: ID does not exist" containerID="1b6401b8d147a2db37c19e8880e46cca00360015fce4613f84d8255fb65355c2"
Feb 27 17:25:54 crc kubenswrapper[4830]: I0227 17:25:54.962989 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b6401b8d147a2db37c19e8880e46cca00360015fce4613f84d8255fb65355c2"} err="failed to get container status \"1b6401b8d147a2db37c19e8880e46cca00360015fce4613f84d8255fb65355c2\": rpc error: code = NotFound desc = could not find container \"1b6401b8d147a2db37c19e8880e46cca00360015fce4613f84d8255fb65355c2\": container with ID starting with 1b6401b8d147a2db37c19e8880e46cca00360015fce4613f84d8255fb65355c2 not found: ID does not exist"
Feb 27 17:25:54 crc kubenswrapper[4830]: I0227 17:25:54.963014 4830 scope.go:117] "RemoveContainer" containerID="9a9ff0563ccdf509a46a54799d82b5e6abf49aee6f0c8ae60d4db9d084ff65d3"
Feb 27 17:25:54 crc kubenswrapper[4830]: E0227 17:25:54.963413 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a9ff0563ccdf509a46a54799d82b5e6abf49aee6f0c8ae60d4db9d084ff65d3\": container with ID starting with 9a9ff0563ccdf509a46a54799d82b5e6abf49aee6f0c8ae60d4db9d084ff65d3 not found: ID does not exist" containerID="9a9ff0563ccdf509a46a54799d82b5e6abf49aee6f0c8ae60d4db9d084ff65d3"
Feb 27 17:25:54 crc kubenswrapper[4830]: I0227 17:25:54.963434 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a9ff0563ccdf509a46a54799d82b5e6abf49aee6f0c8ae60d4db9d084ff65d3"} err="failed to get container status \"9a9ff0563ccdf509a46a54799d82b5e6abf49aee6f0c8ae60d4db9d084ff65d3\": rpc error: code = NotFound desc = could not find container \"9a9ff0563ccdf509a46a54799d82b5e6abf49aee6f0c8ae60d4db9d084ff65d3\": container with ID starting with 9a9ff0563ccdf509a46a54799d82b5e6abf49aee6f0c8ae60d4db9d084ff65d3 not found: ID does not exist"
Feb 27 17:25:54 crc kubenswrapper[4830]: I0227 17:25:54.963446 4830 scope.go:117] "RemoveContainer" containerID="e3dab58ca71daa8e06ecd55b99936ff2fd36914c8c19964004de91fffec7a5e0"
Feb 27 17:25:54 crc kubenswrapper[4830]: E0227 17:25:54.963702 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e3dab58ca71daa8e06ecd55b99936ff2fd36914c8c19964004de91fffec7a5e0\": container with ID starting with e3dab58ca71daa8e06ecd55b99936ff2fd36914c8c19964004de91fffec7a5e0 not found: ID does not exist" containerID="e3dab58ca71daa8e06ecd55b99936ff2fd36914c8c19964004de91fffec7a5e0"
Feb 27 17:25:54 crc kubenswrapper[4830]: I0227 17:25:54.963725 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3dab58ca71daa8e06ecd55b99936ff2fd36914c8c19964004de91fffec7a5e0"} err="failed to get container status \"e3dab58ca71daa8e06ecd55b99936ff2fd36914c8c19964004de91fffec7a5e0\": rpc error: code = NotFound desc = could not find container \"e3dab58ca71daa8e06ecd55b99936ff2fd36914c8c19964004de91fffec7a5e0\": container with ID starting with e3dab58ca71daa8e06ecd55b99936ff2fd36914c8c19964004de91fffec7a5e0 not found: ID does not exist"
Feb 27 17:25:55 crc kubenswrapper[4830]: I0227 17:25:55.029693 4830 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07c2162b-fcb8-4423-b0c6-75eefad7b1f8-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 27 17:25:55 crc kubenswrapper[4830]: I0227 17:25:55.196725 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jt6jl"]
Feb 27 17:25:55 crc kubenswrapper[4830]: I0227 17:25:55.201408 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-jt6jl"]
Feb 27 17:25:56 crc kubenswrapper[4830]: I0227 17:25:56.593999 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-jxw8t"
Feb 27 17:25:56 crc kubenswrapper[4830]: I0227 17:25:56.672253 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-jxw8t"
Feb 27 17:25:56 crc kubenswrapper[4830]: I0227 17:25:56.778155 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07c2162b-fcb8-4423-b0c6-75eefad7b1f8" path="/var/lib/kubelet/pods/07c2162b-fcb8-4423-b0c6-75eefad7b1f8/volumes"
Feb 27 17:25:58 crc kubenswrapper[4830]: I0227 17:25:58.887508 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jxw8t"]
Feb 27 17:25:58 crc kubenswrapper[4830]: I0227 17:25:58.888538 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-jxw8t" podUID="f5ae40f6-78ca-46d5-8cdb-fc71baad1c4f" containerName="registry-server" containerID="cri-o://80f0b73197f216a180f6ce1893bdd5a86f6f7e12efe0382ef68d277cca2cbb12" gracePeriod=2
Feb 27 17:25:59 crc kubenswrapper[4830]: I0227 17:25:59.328478 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jxw8t"
Feb 27 17:25:59 crc kubenswrapper[4830]: I0227 17:25:59.511738 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5ae40f6-78ca-46d5-8cdb-fc71baad1c4f-utilities\") pod \"f5ae40f6-78ca-46d5-8cdb-fc71baad1c4f\" (UID: \"f5ae40f6-78ca-46d5-8cdb-fc71baad1c4f\") "
Feb 27 17:25:59 crc kubenswrapper[4830]: I0227 17:25:59.512125 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vhmmq\" (UniqueName: \"kubernetes.io/projected/f5ae40f6-78ca-46d5-8cdb-fc71baad1c4f-kube-api-access-vhmmq\") pod \"f5ae40f6-78ca-46d5-8cdb-fc71baad1c4f\" (UID: \"f5ae40f6-78ca-46d5-8cdb-fc71baad1c4f\") "
Feb 27 17:25:59 crc kubenswrapper[4830]: I0227 17:25:59.512374 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5ae40f6-78ca-46d5-8cdb-fc71baad1c4f-catalog-content\") pod \"f5ae40f6-78ca-46d5-8cdb-fc71baad1c4f\" (UID: \"f5ae40f6-78ca-46d5-8cdb-fc71baad1c4f\") "
Feb 27 17:25:59 crc kubenswrapper[4830]: I0227 17:25:59.514308 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f5ae40f6-78ca-46d5-8cdb-fc71baad1c4f-utilities" (OuterVolumeSpecName: "utilities") pod "f5ae40f6-78ca-46d5-8cdb-fc71baad1c4f" (UID: "f5ae40f6-78ca-46d5-8cdb-fc71baad1c4f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 27 17:25:59 crc kubenswrapper[4830]: I0227 17:25:59.533869 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5ae40f6-78ca-46d5-8cdb-fc71baad1c4f-kube-api-access-vhmmq" (OuterVolumeSpecName: "kube-api-access-vhmmq") pod "f5ae40f6-78ca-46d5-8cdb-fc71baad1c4f" (UID: "f5ae40f6-78ca-46d5-8cdb-fc71baad1c4f"). InnerVolumeSpecName "kube-api-access-vhmmq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 17:25:59 crc kubenswrapper[4830]: I0227 17:25:59.614101 4830 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5ae40f6-78ca-46d5-8cdb-fc71baad1c4f-utilities\") on node \"crc\" DevicePath \"\""
Feb 27 17:25:59 crc kubenswrapper[4830]: I0227 17:25:59.614138 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vhmmq\" (UniqueName: \"kubernetes.io/projected/f5ae40f6-78ca-46d5-8cdb-fc71baad1c4f-kube-api-access-vhmmq\") on node \"crc\" DevicePath \"\""
Feb 27 17:25:59 crc kubenswrapper[4830]: I0227 17:25:59.647821 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f5ae40f6-78ca-46d5-8cdb-fc71baad1c4f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f5ae40f6-78ca-46d5-8cdb-fc71baad1c4f" (UID: "f5ae40f6-78ca-46d5-8cdb-fc71baad1c4f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 27 17:25:59 crc kubenswrapper[4830]: I0227 17:25:59.715681 4830 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5ae40f6-78ca-46d5-8cdb-fc71baad1c4f-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 27 17:25:59 crc kubenswrapper[4830]: I0227 17:25:59.921887 4830 generic.go:334] "Generic (PLEG): container finished" podID="f5ae40f6-78ca-46d5-8cdb-fc71baad1c4f" containerID="80f0b73197f216a180f6ce1893bdd5a86f6f7e12efe0382ef68d277cca2cbb12" exitCode=0
Feb 27 17:25:59 crc kubenswrapper[4830]: I0227 17:25:59.921936 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jxw8t" event={"ID":"f5ae40f6-78ca-46d5-8cdb-fc71baad1c4f","Type":"ContainerDied","Data":"80f0b73197f216a180f6ce1893bdd5a86f6f7e12efe0382ef68d277cca2cbb12"}
Feb 27 17:25:59 crc kubenswrapper[4830]: I0227 17:25:59.921983 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jxw8t" event={"ID":"f5ae40f6-78ca-46d5-8cdb-fc71baad1c4f","Type":"ContainerDied","Data":"0f8a04e694ea9f921b08751aa57b11083053ff70dabec7b795aa1fab89f261a0"}
Feb 27 17:25:59 crc kubenswrapper[4830]: I0227 17:25:59.922008 4830 scope.go:117] "RemoveContainer" containerID="80f0b73197f216a180f6ce1893bdd5a86f6f7e12efe0382ef68d277cca2cbb12"
Feb 27 17:25:59 crc kubenswrapper[4830]: I0227 17:25:59.922170 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jxw8t"
Feb 27 17:25:59 crc kubenswrapper[4830]: I0227 17:25:59.941936 4830 scope.go:117] "RemoveContainer" containerID="3281846c8cf83b79406f5f69ec7738d06bbdf5b2f8d1da1e55964dadbfe1c5e1"
Feb 27 17:25:59 crc kubenswrapper[4830]: I0227 17:25:59.957760 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jxw8t"]
Feb 27 17:25:59 crc kubenswrapper[4830]: I0227 17:25:59.963144 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-jxw8t"]
Feb 27 17:25:59 crc kubenswrapper[4830]: I0227 17:25:59.976212 4830 scope.go:117] "RemoveContainer" containerID="d42a6d8295bfbb9141258ace347c396d56c2141e553d7e26879361eb9c6e0a3e"
Feb 27 17:25:59 crc kubenswrapper[4830]: I0227 17:25:59.991504 4830 scope.go:117] "RemoveContainer" containerID="80f0b73197f216a180f6ce1893bdd5a86f6f7e12efe0382ef68d277cca2cbb12"
Feb 27 17:25:59 crc kubenswrapper[4830]: E0227 17:25:59.992282 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"80f0b73197f216a180f6ce1893bdd5a86f6f7e12efe0382ef68d277cca2cbb12\": container with ID starting with 80f0b73197f216a180f6ce1893bdd5a86f6f7e12efe0382ef68d277cca2cbb12 not found: ID does not exist" containerID="80f0b73197f216a180f6ce1893bdd5a86f6f7e12efe0382ef68d277cca2cbb12"
Feb 27 17:25:59 crc kubenswrapper[4830]: I0227 17:25:59.992321 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80f0b73197f216a180f6ce1893bdd5a86f6f7e12efe0382ef68d277cca2cbb12"} err="failed to get container status \"80f0b73197f216a180f6ce1893bdd5a86f6f7e12efe0382ef68d277cca2cbb12\": rpc error: code = NotFound desc = could not find container \"80f0b73197f216a180f6ce1893bdd5a86f6f7e12efe0382ef68d277cca2cbb12\": container with ID starting with 80f0b73197f216a180f6ce1893bdd5a86f6f7e12efe0382ef68d277cca2cbb12 not found: ID does not exist"
Feb 27 17:25:59 crc kubenswrapper[4830]: I0227 17:25:59.992350 4830 scope.go:117] "RemoveContainer" containerID="3281846c8cf83b79406f5f69ec7738d06bbdf5b2f8d1da1e55964dadbfe1c5e1"
Feb 27 17:25:59 crc kubenswrapper[4830]: E0227 17:25:59.992681 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3281846c8cf83b79406f5f69ec7738d06bbdf5b2f8d1da1e55964dadbfe1c5e1\": container with ID starting with 3281846c8cf83b79406f5f69ec7738d06bbdf5b2f8d1da1e55964dadbfe1c5e1 not found: ID does not exist" containerID="3281846c8cf83b79406f5f69ec7738d06bbdf5b2f8d1da1e55964dadbfe1c5e1"
Feb 27 17:25:59 crc kubenswrapper[4830]: I0227 17:25:59.992707 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3281846c8cf83b79406f5f69ec7738d06bbdf5b2f8d1da1e55964dadbfe1c5e1"} err="failed to get container status \"3281846c8cf83b79406f5f69ec7738d06bbdf5b2f8d1da1e55964dadbfe1c5e1\": rpc error: code = NotFound desc = could not find container \"3281846c8cf83b79406f5f69ec7738d06bbdf5b2f8d1da1e55964dadbfe1c5e1\": container with ID starting with 3281846c8cf83b79406f5f69ec7738d06bbdf5b2f8d1da1e55964dadbfe1c5e1 not found: ID does not exist"
Feb 27 17:25:59 crc kubenswrapper[4830]: I0227 17:25:59.992720 4830 scope.go:117] "RemoveContainer" containerID="d42a6d8295bfbb9141258ace347c396d56c2141e553d7e26879361eb9c6e0a3e"
Feb 27 17:25:59 crc kubenswrapper[4830]: E0227 17:25:59.993042 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d42a6d8295bfbb9141258ace347c396d56c2141e553d7e26879361eb9c6e0a3e\": container with ID starting with d42a6d8295bfbb9141258ace347c396d56c2141e553d7e26879361eb9c6e0a3e not found: ID does not exist" containerID="d42a6d8295bfbb9141258ace347c396d56c2141e553d7e26879361eb9c6e0a3e"
Feb 27 17:25:59 crc kubenswrapper[4830]: I0227 17:25:59.993062 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d42a6d8295bfbb9141258ace347c396d56c2141e553d7e26879361eb9c6e0a3e"} err="failed to get container status \"d42a6d8295bfbb9141258ace347c396d56c2141e553d7e26879361eb9c6e0a3e\": rpc error: code = NotFound desc = could not find container \"d42a6d8295bfbb9141258ace347c396d56c2141e553d7e26879361eb9c6e0a3e\": container with ID starting with d42a6d8295bfbb9141258ace347c396d56c2141e553d7e26879361eb9c6e0a3e not found: ID does not exist"
Feb 27 17:26:00 crc kubenswrapper[4830]: I0227 17:26:00.136799 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536886-tb8fw"]
Feb 27 17:26:00 crc kubenswrapper[4830]: E0227 17:26:00.137123 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5ae40f6-78ca-46d5-8cdb-fc71baad1c4f" containerName="extract-content"
Feb 27 17:26:00 crc kubenswrapper[4830]: I0227 17:26:00.137135 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5ae40f6-78ca-46d5-8cdb-fc71baad1c4f" containerName="extract-content"
Feb 27 17:26:00 crc kubenswrapper[4830]: E0227 17:26:00.137145 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5ae40f6-78ca-46d5-8cdb-fc71baad1c4f" containerName="registry-server"
Feb 27 17:26:00 crc kubenswrapper[4830]: I0227 17:26:00.137150 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5ae40f6-78ca-46d5-8cdb-fc71baad1c4f" containerName="registry-server"
Feb 27 17:26:00 crc kubenswrapper[4830]: E0227 17:26:00.137161 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07c2162b-fcb8-4423-b0c6-75eefad7b1f8" containerName="extract-content"
Feb 27 17:26:00 crc kubenswrapper[4830]: I0227 17:26:00.137167 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="07c2162b-fcb8-4423-b0c6-75eefad7b1f8" containerName="extract-content"
Feb 27 17:26:00 crc kubenswrapper[4830]: E0227 17:26:00.137185 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07c2162b-fcb8-4423-b0c6-75eefad7b1f8" containerName="extract-utilities"
Feb 27 17:26:00 crc kubenswrapper[4830]: I0227 17:26:00.137191 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="07c2162b-fcb8-4423-b0c6-75eefad7b1f8" containerName="extract-utilities"
Feb 27 17:26:00 crc kubenswrapper[4830]: E0227 17:26:00.137206 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2bda8a3e-040c-4a7b-8977-0c505a218294" containerName="registry-server"
Feb 27 17:26:00 crc kubenswrapper[4830]: I0227 17:26:00.137213 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="2bda8a3e-040c-4a7b-8977-0c505a218294" containerName="registry-server"
Feb 27 17:26:00 crc kubenswrapper[4830]: E0227 17:26:00.137223 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5ae40f6-78ca-46d5-8cdb-fc71baad1c4f" containerName="extract-utilities"
Feb 27 17:26:00 crc kubenswrapper[4830]: I0227 17:26:00.137230 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5ae40f6-78ca-46d5-8cdb-fc71baad1c4f" containerName="extract-utilities"
Feb 27 17:26:00 crc kubenswrapper[4830]: E0227 17:26:00.137240 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2bda8a3e-040c-4a7b-8977-0c505a218294" containerName="extract-content"
Feb 27 17:26:00 crc kubenswrapper[4830]: I0227 17:26:00.137246 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="2bda8a3e-040c-4a7b-8977-0c505a218294" containerName="extract-content"
Feb 27 17:26:00 crc kubenswrapper[4830]: E0227 17:26:00.137258 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2bda8a3e-040c-4a7b-8977-0c505a218294" containerName="extract-utilities"
Feb 27 17:26:00 crc kubenswrapper[4830]: I0227 17:26:00.137263 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="2bda8a3e-040c-4a7b-8977-0c505a218294" containerName="extract-utilities"
Feb 27 17:26:00 crc kubenswrapper[4830]: E0227 17:26:00.137275 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07c2162b-fcb8-4423-b0c6-75eefad7b1f8" containerName="registry-server"
Feb 27 17:26:00 crc kubenswrapper[4830]: I0227 17:26:00.137281 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="07c2162b-fcb8-4423-b0c6-75eefad7b1f8" containerName="registry-server"
Feb 27 17:26:00 crc kubenswrapper[4830]: I0227 17:26:00.137410 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="07c2162b-fcb8-4423-b0c6-75eefad7b1f8" containerName="registry-server"
Feb 27 17:26:00 crc kubenswrapper[4830]: I0227 17:26:00.137420 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="2bda8a3e-040c-4a7b-8977-0c505a218294" containerName="registry-server"
Feb 27 17:26:00 crc kubenswrapper[4830]: I0227 17:26:00.137441 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5ae40f6-78ca-46d5-8cdb-fc71baad1c4f" containerName="registry-server"
Feb 27 17:26:00 crc kubenswrapper[4830]: I0227 17:26:00.137904 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536886-tb8fw" Feb 27 17:26:00 crc kubenswrapper[4830]: I0227 17:26:00.140635 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 17:26:00 crc kubenswrapper[4830]: I0227 17:26:00.142125 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 17:26:00 crc kubenswrapper[4830]: I0227 17:26:00.142780 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lmbtm" Feb 27 17:26:00 crc kubenswrapper[4830]: I0227 17:26:00.150722 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536886-tb8fw"] Feb 27 17:26:00 crc kubenswrapper[4830]: I0227 17:26:00.221831 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxf69\" (UniqueName: \"kubernetes.io/projected/e8d1db84-59f7-464c-958a-2f1c2b6744d8-kube-api-access-hxf69\") pod \"auto-csr-approver-29536886-tb8fw\" (UID: \"e8d1db84-59f7-464c-958a-2f1c2b6744d8\") " pod="openshift-infra/auto-csr-approver-29536886-tb8fw" Feb 27 17:26:00 crc kubenswrapper[4830]: I0227 17:26:00.323198 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hxf69\" (UniqueName: \"kubernetes.io/projected/e8d1db84-59f7-464c-958a-2f1c2b6744d8-kube-api-access-hxf69\") pod \"auto-csr-approver-29536886-tb8fw\" (UID: \"e8d1db84-59f7-464c-958a-2f1c2b6744d8\") " pod="openshift-infra/auto-csr-approver-29536886-tb8fw" Feb 27 17:26:00 crc kubenswrapper[4830]: I0227 17:26:00.343957 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxf69\" (UniqueName: \"kubernetes.io/projected/e8d1db84-59f7-464c-958a-2f1c2b6744d8-kube-api-access-hxf69\") pod \"auto-csr-approver-29536886-tb8fw\" (UID: \"e8d1db84-59f7-464c-958a-2f1c2b6744d8\") " 
pod="openshift-infra/auto-csr-approver-29536886-tb8fw" Feb 27 17:26:00 crc kubenswrapper[4830]: I0227 17:26:00.452548 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536886-tb8fw" Feb 27 17:26:00 crc kubenswrapper[4830]: I0227 17:26:00.643613 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536886-tb8fw"] Feb 27 17:26:00 crc kubenswrapper[4830]: W0227 17:26:00.649786 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode8d1db84_59f7_464c_958a_2f1c2b6744d8.slice/crio-cddf661a76a3cb3198eb3ff3c0c8e053a613a85a0b82314d2be099b0833dddb8 WatchSource:0}: Error finding container cddf661a76a3cb3198eb3ff3c0c8e053a613a85a0b82314d2be099b0833dddb8: Status 404 returned error can't find the container with id cddf661a76a3cb3198eb3ff3c0c8e053a613a85a0b82314d2be099b0833dddb8 Feb 27 17:26:00 crc kubenswrapper[4830]: I0227 17:26:00.772870 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5ae40f6-78ca-46d5-8cdb-fc71baad1c4f" path="/var/lib/kubelet/pods/f5ae40f6-78ca-46d5-8cdb-fc71baad1c4f/volumes" Feb 27 17:26:00 crc kubenswrapper[4830]: I0227 17:26:00.936066 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536886-tb8fw" event={"ID":"e8d1db84-59f7-464c-958a-2f1c2b6744d8","Type":"ContainerStarted","Data":"cddf661a76a3cb3198eb3ff3c0c8e053a613a85a0b82314d2be099b0833dddb8"} Feb 27 17:26:02 crc kubenswrapper[4830]: I0227 17:26:02.959460 4830 generic.go:334] "Generic (PLEG): container finished" podID="e8d1db84-59f7-464c-958a-2f1c2b6744d8" containerID="8fe2bd0b693bf56df228c47272f949bc0bc0a3d4192b3ac4591598d6b0153d7d" exitCode=0 Feb 27 17:26:02 crc kubenswrapper[4830]: I0227 17:26:02.959543 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536886-tb8fw" 
event={"ID":"e8d1db84-59f7-464c-958a-2f1c2b6744d8","Type":"ContainerDied","Data":"8fe2bd0b693bf56df228c47272f949bc0bc0a3d4192b3ac4591598d6b0153d7d"} Feb 27 17:26:04 crc kubenswrapper[4830]: I0227 17:26:04.291151 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536886-tb8fw" Feb 27 17:26:04 crc kubenswrapper[4830]: I0227 17:26:04.385799 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hxf69\" (UniqueName: \"kubernetes.io/projected/e8d1db84-59f7-464c-958a-2f1c2b6744d8-kube-api-access-hxf69\") pod \"e8d1db84-59f7-464c-958a-2f1c2b6744d8\" (UID: \"e8d1db84-59f7-464c-958a-2f1c2b6744d8\") " Feb 27 17:26:04 crc kubenswrapper[4830]: I0227 17:26:04.393096 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8d1db84-59f7-464c-958a-2f1c2b6744d8-kube-api-access-hxf69" (OuterVolumeSpecName: "kube-api-access-hxf69") pod "e8d1db84-59f7-464c-958a-2f1c2b6744d8" (UID: "e8d1db84-59f7-464c-958a-2f1c2b6744d8"). InnerVolumeSpecName "kube-api-access-hxf69". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:26:04 crc kubenswrapper[4830]: I0227 17:26:04.487606 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hxf69\" (UniqueName: \"kubernetes.io/projected/e8d1db84-59f7-464c-958a-2f1c2b6744d8-kube-api-access-hxf69\") on node \"crc\" DevicePath \"\"" Feb 27 17:26:04 crc kubenswrapper[4830]: I0227 17:26:04.982253 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536886-tb8fw" event={"ID":"e8d1db84-59f7-464c-958a-2f1c2b6744d8","Type":"ContainerDied","Data":"cddf661a76a3cb3198eb3ff3c0c8e053a613a85a0b82314d2be099b0833dddb8"} Feb 27 17:26:04 crc kubenswrapper[4830]: I0227 17:26:04.982313 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cddf661a76a3cb3198eb3ff3c0c8e053a613a85a0b82314d2be099b0833dddb8" Feb 27 17:26:04 crc kubenswrapper[4830]: I0227 17:26:04.982313 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536886-tb8fw" Feb 27 17:26:05 crc kubenswrapper[4830]: I0227 17:26:05.384221 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536880-ssqdv"] Feb 27 17:26:05 crc kubenswrapper[4830]: I0227 17:26:05.393893 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536880-ssqdv"] Feb 27 17:26:06 crc kubenswrapper[4830]: I0227 17:26:06.780179 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a3f16ff4-29b6-4ff3-a540-b794d6198ba7" path="/var/lib/kubelet/pods/a3f16ff4-29b6-4ff3-a540-b794d6198ba7/volumes" Feb 27 17:26:33 crc kubenswrapper[4830]: I0227 17:26:33.160235 4830 patch_prober.go:28] interesting pod/machine-config-daemon-2tv5v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body= Feb 27 17:26:33 crc kubenswrapper[4830]: I0227 17:26:33.161114 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 17:26:47 crc kubenswrapper[4830]: I0227 17:26:47.827290 4830 scope.go:117] "RemoveContainer" containerID="f0e985b0c3e21a49f3e385060f642e65cda45baff01dd909c3b10da3f7148d9f" Feb 27 17:27:03 crc kubenswrapper[4830]: I0227 17:27:03.160135 4830 patch_prober.go:28] interesting pod/machine-config-daemon-2tv5v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 17:27:03 crc kubenswrapper[4830]: I0227 17:27:03.160718 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 17:27:26 crc kubenswrapper[4830]: I0227 17:27:26.678155 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-j7q42"] Feb 27 17:27:26 crc kubenswrapper[4830]: E0227 17:27:26.679197 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8d1db84-59f7-464c-958a-2f1c2b6744d8" containerName="oc" Feb 27 17:27:26 crc kubenswrapper[4830]: I0227 17:27:26.679216 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8d1db84-59f7-464c-958a-2f1c2b6744d8" containerName="oc" Feb 27 17:27:26 crc kubenswrapper[4830]: I0227 17:27:26.679388 4830 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="e8d1db84-59f7-464c-958a-2f1c2b6744d8" containerName="oc" Feb 27 17:27:26 crc kubenswrapper[4830]: I0227 17:27:26.680588 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j7q42" Feb 27 17:27:26 crc kubenswrapper[4830]: I0227 17:27:26.702891 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-j7q42"] Feb 27 17:27:26 crc kubenswrapper[4830]: I0227 17:27:26.794413 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79334803-c789-4221-bfe2-bcad857c5af2-utilities\") pod \"redhat-marketplace-j7q42\" (UID: \"79334803-c789-4221-bfe2-bcad857c5af2\") " pod="openshift-marketplace/redhat-marketplace-j7q42" Feb 27 17:27:26 crc kubenswrapper[4830]: I0227 17:27:26.794554 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79334803-c789-4221-bfe2-bcad857c5af2-catalog-content\") pod \"redhat-marketplace-j7q42\" (UID: \"79334803-c789-4221-bfe2-bcad857c5af2\") " pod="openshift-marketplace/redhat-marketplace-j7q42" Feb 27 17:27:26 crc kubenswrapper[4830]: I0227 17:27:26.794622 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dn8j\" (UniqueName: \"kubernetes.io/projected/79334803-c789-4221-bfe2-bcad857c5af2-kube-api-access-5dn8j\") pod \"redhat-marketplace-j7q42\" (UID: \"79334803-c789-4221-bfe2-bcad857c5af2\") " pod="openshift-marketplace/redhat-marketplace-j7q42" Feb 27 17:27:26 crc kubenswrapper[4830]: I0227 17:27:26.895890 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79334803-c789-4221-bfe2-bcad857c5af2-catalog-content\") pod \"redhat-marketplace-j7q42\" 
(UID: \"79334803-c789-4221-bfe2-bcad857c5af2\") " pod="openshift-marketplace/redhat-marketplace-j7q42" Feb 27 17:27:26 crc kubenswrapper[4830]: I0227 17:27:26.895981 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5dn8j\" (UniqueName: \"kubernetes.io/projected/79334803-c789-4221-bfe2-bcad857c5af2-kube-api-access-5dn8j\") pod \"redhat-marketplace-j7q42\" (UID: \"79334803-c789-4221-bfe2-bcad857c5af2\") " pod="openshift-marketplace/redhat-marketplace-j7q42" Feb 27 17:27:26 crc kubenswrapper[4830]: I0227 17:27:26.896056 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79334803-c789-4221-bfe2-bcad857c5af2-utilities\") pod \"redhat-marketplace-j7q42\" (UID: \"79334803-c789-4221-bfe2-bcad857c5af2\") " pod="openshift-marketplace/redhat-marketplace-j7q42" Feb 27 17:27:26 crc kubenswrapper[4830]: I0227 17:27:26.896608 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79334803-c789-4221-bfe2-bcad857c5af2-utilities\") pod \"redhat-marketplace-j7q42\" (UID: \"79334803-c789-4221-bfe2-bcad857c5af2\") " pod="openshift-marketplace/redhat-marketplace-j7q42" Feb 27 17:27:26 crc kubenswrapper[4830]: I0227 17:27:26.896902 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79334803-c789-4221-bfe2-bcad857c5af2-catalog-content\") pod \"redhat-marketplace-j7q42\" (UID: \"79334803-c789-4221-bfe2-bcad857c5af2\") " pod="openshift-marketplace/redhat-marketplace-j7q42" Feb 27 17:27:26 crc kubenswrapper[4830]: I0227 17:27:26.926638 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5dn8j\" (UniqueName: \"kubernetes.io/projected/79334803-c789-4221-bfe2-bcad857c5af2-kube-api-access-5dn8j\") pod \"redhat-marketplace-j7q42\" (UID: \"79334803-c789-4221-bfe2-bcad857c5af2\") 
" pod="openshift-marketplace/redhat-marketplace-j7q42" Feb 27 17:27:27 crc kubenswrapper[4830]: I0227 17:27:27.015674 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j7q42" Feb 27 17:27:27 crc kubenswrapper[4830]: I0227 17:27:27.328111 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-j7q42"] Feb 27 17:27:27 crc kubenswrapper[4830]: I0227 17:27:27.839277 4830 generic.go:334] "Generic (PLEG): container finished" podID="79334803-c789-4221-bfe2-bcad857c5af2" containerID="bb16a24a48c41bbc490d3882d33f0849cd8d26ede184b2a592a88ffda55284ec" exitCode=0 Feb 27 17:27:27 crc kubenswrapper[4830]: I0227 17:27:27.839369 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j7q42" event={"ID":"79334803-c789-4221-bfe2-bcad857c5af2","Type":"ContainerDied","Data":"bb16a24a48c41bbc490d3882d33f0849cd8d26ede184b2a592a88ffda55284ec"} Feb 27 17:27:27 crc kubenswrapper[4830]: I0227 17:27:27.839645 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j7q42" event={"ID":"79334803-c789-4221-bfe2-bcad857c5af2","Type":"ContainerStarted","Data":"35943ee630022b3cb99540ef34d637f8110a49946f8b91ff9ba4b97285f5046d"} Feb 27 17:27:28 crc kubenswrapper[4830]: I0227 17:27:28.851648 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j7q42" event={"ID":"79334803-c789-4221-bfe2-bcad857c5af2","Type":"ContainerStarted","Data":"4077f7a249bb63db2ec3a5f9d7f8779d5f52a99e01b0786513b3eca25da91ccc"} Feb 27 17:27:29 crc kubenswrapper[4830]: I0227 17:27:29.863074 4830 generic.go:334] "Generic (PLEG): container finished" podID="79334803-c789-4221-bfe2-bcad857c5af2" containerID="4077f7a249bb63db2ec3a5f9d7f8779d5f52a99e01b0786513b3eca25da91ccc" exitCode=0 Feb 27 17:27:29 crc kubenswrapper[4830]: I0227 17:27:29.863145 4830 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/redhat-marketplace-j7q42" event={"ID":"79334803-c789-4221-bfe2-bcad857c5af2","Type":"ContainerDied","Data":"4077f7a249bb63db2ec3a5f9d7f8779d5f52a99e01b0786513b3eca25da91ccc"} Feb 27 17:27:31 crc kubenswrapper[4830]: I0227 17:27:31.886754 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j7q42" event={"ID":"79334803-c789-4221-bfe2-bcad857c5af2","Type":"ContainerStarted","Data":"ef90812efd94644aed00eefaa59429b5c2a36555f92712e0a6be3f9d56a1a448"} Feb 27 17:27:31 crc kubenswrapper[4830]: I0227 17:27:31.917195 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-j7q42" podStartSLOduration=3.202330413 podStartE2EDuration="5.917162253s" podCreationTimestamp="2026-02-27 17:27:26 +0000 UTC" firstStartedPulling="2026-02-27 17:27:27.841384479 +0000 UTC m=+4843.930656982" lastFinishedPulling="2026-02-27 17:27:30.556216309 +0000 UTC m=+4846.645488822" observedRunningTime="2026-02-27 17:27:31.913077446 +0000 UTC m=+4848.002349929" watchObservedRunningTime="2026-02-27 17:27:31.917162253 +0000 UTC m=+4848.006434766" Feb 27 17:27:33 crc kubenswrapper[4830]: I0227 17:27:33.160682 4830 patch_prober.go:28] interesting pod/machine-config-daemon-2tv5v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 17:27:33 crc kubenswrapper[4830]: I0227 17:27:33.160787 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 17:27:33 crc kubenswrapper[4830]: I0227 17:27:33.160856 4830 
kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" Feb 27 17:27:33 crc kubenswrapper[4830]: I0227 17:27:33.161805 4830 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"19e6a24991d0874a855368f8e306131672121f114d688786c52f7e0dafcd4823"} pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 27 17:27:33 crc kubenswrapper[4830]: I0227 17:27:33.161888 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" containerName="machine-config-daemon" containerID="cri-o://19e6a24991d0874a855368f8e306131672121f114d688786c52f7e0dafcd4823" gracePeriod=600 Feb 27 17:27:33 crc kubenswrapper[4830]: I0227 17:27:33.911249 4830 generic.go:334] "Generic (PLEG): container finished" podID="00d6b7ce-4757-4275-8345-60c1b546ce25" containerID="19e6a24991d0874a855368f8e306131672121f114d688786c52f7e0dafcd4823" exitCode=0 Feb 27 17:27:33 crc kubenswrapper[4830]: I0227 17:27:33.911390 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" event={"ID":"00d6b7ce-4757-4275-8345-60c1b546ce25","Type":"ContainerDied","Data":"19e6a24991d0874a855368f8e306131672121f114d688786c52f7e0dafcd4823"} Feb 27 17:27:33 crc kubenswrapper[4830]: I0227 17:27:33.911686 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" event={"ID":"00d6b7ce-4757-4275-8345-60c1b546ce25","Type":"ContainerStarted","Data":"313fc4aac8e61b508c32aefdb01d7489ea5d8194a163882d5d4b5ec4665839cb"} Feb 27 17:27:33 crc kubenswrapper[4830]: I0227 17:27:33.911722 4830 scope.go:117] "RemoveContainer" 
containerID="d43c5277201627bfc21083e209c49d25c5bf66f11f77c7094001382c8173b2a3" Feb 27 17:27:37 crc kubenswrapper[4830]: I0227 17:27:37.015999 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-j7q42" Feb 27 17:27:37 crc kubenswrapper[4830]: I0227 17:27:37.016524 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-j7q42" Feb 27 17:27:37 crc kubenswrapper[4830]: I0227 17:27:37.090471 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-j7q42" Feb 27 17:27:38 crc kubenswrapper[4830]: I0227 17:27:38.031239 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-j7q42" Feb 27 17:27:38 crc kubenswrapper[4830]: I0227 17:27:38.102870 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-j7q42"] Feb 27 17:27:39 crc kubenswrapper[4830]: I0227 17:27:39.968839 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-j7q42" podUID="79334803-c789-4221-bfe2-bcad857c5af2" containerName="registry-server" containerID="cri-o://ef90812efd94644aed00eefaa59429b5c2a36555f92712e0a6be3f9d56a1a448" gracePeriod=2 Feb 27 17:27:40 crc kubenswrapper[4830]: E0227 17:27:40.152106 4830 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod79334803_c789_4221_bfe2_bcad857c5af2.slice/crio-ef90812efd94644aed00eefaa59429b5c2a36555f92712e0a6be3f9d56a1a448.scope\": RecentStats: unable to find data in memory cache]" Feb 27 17:27:40 crc kubenswrapper[4830]: I0227 17:27:40.472532 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j7q42" Feb 27 17:27:40 crc kubenswrapper[4830]: I0227 17:27:40.640404 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5dn8j\" (UniqueName: \"kubernetes.io/projected/79334803-c789-4221-bfe2-bcad857c5af2-kube-api-access-5dn8j\") pod \"79334803-c789-4221-bfe2-bcad857c5af2\" (UID: \"79334803-c789-4221-bfe2-bcad857c5af2\") " Feb 27 17:27:40 crc kubenswrapper[4830]: I0227 17:27:40.640559 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79334803-c789-4221-bfe2-bcad857c5af2-utilities\") pod \"79334803-c789-4221-bfe2-bcad857c5af2\" (UID: \"79334803-c789-4221-bfe2-bcad857c5af2\") " Feb 27 17:27:40 crc kubenswrapper[4830]: I0227 17:27:40.640735 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79334803-c789-4221-bfe2-bcad857c5af2-catalog-content\") pod \"79334803-c789-4221-bfe2-bcad857c5af2\" (UID: \"79334803-c789-4221-bfe2-bcad857c5af2\") " Feb 27 17:27:40 crc kubenswrapper[4830]: I0227 17:27:40.642004 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/79334803-c789-4221-bfe2-bcad857c5af2-utilities" (OuterVolumeSpecName: "utilities") pod "79334803-c789-4221-bfe2-bcad857c5af2" (UID: "79334803-c789-4221-bfe2-bcad857c5af2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 17:27:40 crc kubenswrapper[4830]: I0227 17:27:40.646548 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79334803-c789-4221-bfe2-bcad857c5af2-kube-api-access-5dn8j" (OuterVolumeSpecName: "kube-api-access-5dn8j") pod "79334803-c789-4221-bfe2-bcad857c5af2" (UID: "79334803-c789-4221-bfe2-bcad857c5af2"). InnerVolumeSpecName "kube-api-access-5dn8j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:27:40 crc kubenswrapper[4830]: I0227 17:27:40.698611 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/79334803-c789-4221-bfe2-bcad857c5af2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "79334803-c789-4221-bfe2-bcad857c5af2" (UID: "79334803-c789-4221-bfe2-bcad857c5af2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 17:27:40 crc kubenswrapper[4830]: I0227 17:27:40.744022 4830 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79334803-c789-4221-bfe2-bcad857c5af2-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 17:27:40 crc kubenswrapper[4830]: I0227 17:27:40.744073 4830 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79334803-c789-4221-bfe2-bcad857c5af2-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 17:27:40 crc kubenswrapper[4830]: I0227 17:27:40.744096 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5dn8j\" (UniqueName: \"kubernetes.io/projected/79334803-c789-4221-bfe2-bcad857c5af2-kube-api-access-5dn8j\") on node \"crc\" DevicePath \"\"" Feb 27 17:27:40 crc kubenswrapper[4830]: I0227 17:27:40.982244 4830 generic.go:334] "Generic (PLEG): container finished" podID="79334803-c789-4221-bfe2-bcad857c5af2" containerID="ef90812efd94644aed00eefaa59429b5c2a36555f92712e0a6be3f9d56a1a448" exitCode=0 Feb 27 17:27:40 crc kubenswrapper[4830]: I0227 17:27:40.982346 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j7q42" Feb 27 17:27:40 crc kubenswrapper[4830]: I0227 17:27:40.982375 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j7q42" event={"ID":"79334803-c789-4221-bfe2-bcad857c5af2","Type":"ContainerDied","Data":"ef90812efd94644aed00eefaa59429b5c2a36555f92712e0a6be3f9d56a1a448"} Feb 27 17:27:40 crc kubenswrapper[4830]: I0227 17:27:40.982808 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j7q42" event={"ID":"79334803-c789-4221-bfe2-bcad857c5af2","Type":"ContainerDied","Data":"35943ee630022b3cb99540ef34d637f8110a49946f8b91ff9ba4b97285f5046d"} Feb 27 17:27:40 crc kubenswrapper[4830]: I0227 17:27:40.982839 4830 scope.go:117] "RemoveContainer" containerID="ef90812efd94644aed00eefaa59429b5c2a36555f92712e0a6be3f9d56a1a448" Feb 27 17:27:41 crc kubenswrapper[4830]: I0227 17:27:41.022860 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-j7q42"] Feb 27 17:27:41 crc kubenswrapper[4830]: I0227 17:27:41.023701 4830 scope.go:117] "RemoveContainer" containerID="4077f7a249bb63db2ec3a5f9d7f8779d5f52a99e01b0786513b3eca25da91ccc" Feb 27 17:27:41 crc kubenswrapper[4830]: I0227 17:27:41.032941 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-j7q42"] Feb 27 17:27:41 crc kubenswrapper[4830]: I0227 17:27:41.054283 4830 scope.go:117] "RemoveContainer" containerID="bb16a24a48c41bbc490d3882d33f0849cd8d26ede184b2a592a88ffda55284ec" Feb 27 17:27:41 crc kubenswrapper[4830]: I0227 17:27:41.092811 4830 scope.go:117] "RemoveContainer" containerID="ef90812efd94644aed00eefaa59429b5c2a36555f92712e0a6be3f9d56a1a448" Feb 27 17:27:41 crc kubenswrapper[4830]: E0227 17:27:41.093410 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"ef90812efd94644aed00eefaa59429b5c2a36555f92712e0a6be3f9d56a1a448\": container with ID starting with ef90812efd94644aed00eefaa59429b5c2a36555f92712e0a6be3f9d56a1a448 not found: ID does not exist" containerID="ef90812efd94644aed00eefaa59429b5c2a36555f92712e0a6be3f9d56a1a448" Feb 27 17:27:41 crc kubenswrapper[4830]: I0227 17:27:41.093471 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef90812efd94644aed00eefaa59429b5c2a36555f92712e0a6be3f9d56a1a448"} err="failed to get container status \"ef90812efd94644aed00eefaa59429b5c2a36555f92712e0a6be3f9d56a1a448\": rpc error: code = NotFound desc = could not find container \"ef90812efd94644aed00eefaa59429b5c2a36555f92712e0a6be3f9d56a1a448\": container with ID starting with ef90812efd94644aed00eefaa59429b5c2a36555f92712e0a6be3f9d56a1a448 not found: ID does not exist" Feb 27 17:27:41 crc kubenswrapper[4830]: I0227 17:27:41.093505 4830 scope.go:117] "RemoveContainer" containerID="4077f7a249bb63db2ec3a5f9d7f8779d5f52a99e01b0786513b3eca25da91ccc" Feb 27 17:27:41 crc kubenswrapper[4830]: E0227 17:27:41.093905 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4077f7a249bb63db2ec3a5f9d7f8779d5f52a99e01b0786513b3eca25da91ccc\": container with ID starting with 4077f7a249bb63db2ec3a5f9d7f8779d5f52a99e01b0786513b3eca25da91ccc not found: ID does not exist" containerID="4077f7a249bb63db2ec3a5f9d7f8779d5f52a99e01b0786513b3eca25da91ccc" Feb 27 17:27:41 crc kubenswrapper[4830]: I0227 17:27:41.093988 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4077f7a249bb63db2ec3a5f9d7f8779d5f52a99e01b0786513b3eca25da91ccc"} err="failed to get container status \"4077f7a249bb63db2ec3a5f9d7f8779d5f52a99e01b0786513b3eca25da91ccc\": rpc error: code = NotFound desc = could not find container \"4077f7a249bb63db2ec3a5f9d7f8779d5f52a99e01b0786513b3eca25da91ccc\": container with ID 
starting with 4077f7a249bb63db2ec3a5f9d7f8779d5f52a99e01b0786513b3eca25da91ccc not found: ID does not exist" Feb 27 17:27:41 crc kubenswrapper[4830]: I0227 17:27:41.094015 4830 scope.go:117] "RemoveContainer" containerID="bb16a24a48c41bbc490d3882d33f0849cd8d26ede184b2a592a88ffda55284ec" Feb 27 17:27:41 crc kubenswrapper[4830]: E0227 17:27:41.094365 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb16a24a48c41bbc490d3882d33f0849cd8d26ede184b2a592a88ffda55284ec\": container with ID starting with bb16a24a48c41bbc490d3882d33f0849cd8d26ede184b2a592a88ffda55284ec not found: ID does not exist" containerID="bb16a24a48c41bbc490d3882d33f0849cd8d26ede184b2a592a88ffda55284ec" Feb 27 17:27:41 crc kubenswrapper[4830]: I0227 17:27:41.094402 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb16a24a48c41bbc490d3882d33f0849cd8d26ede184b2a592a88ffda55284ec"} err="failed to get container status \"bb16a24a48c41bbc490d3882d33f0849cd8d26ede184b2a592a88ffda55284ec\": rpc error: code = NotFound desc = could not find container \"bb16a24a48c41bbc490d3882d33f0849cd8d26ede184b2a592a88ffda55284ec\": container with ID starting with bb16a24a48c41bbc490d3882d33f0849cd8d26ede184b2a592a88ffda55284ec not found: ID does not exist" Feb 27 17:27:42 crc kubenswrapper[4830]: I0227 17:27:42.779253 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79334803-c789-4221-bfe2-bcad857c5af2" path="/var/lib/kubelet/pods/79334803-c789-4221-bfe2-bcad857c5af2/volumes" Feb 27 17:28:00 crc kubenswrapper[4830]: I0227 17:28:00.159008 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536888-cf7qh"] Feb 27 17:28:00 crc kubenswrapper[4830]: E0227 17:28:00.160421 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79334803-c789-4221-bfe2-bcad857c5af2" containerName="registry-server" Feb 27 17:28:00 crc 
kubenswrapper[4830]: I0227 17:28:00.160452 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="79334803-c789-4221-bfe2-bcad857c5af2" containerName="registry-server" Feb 27 17:28:00 crc kubenswrapper[4830]: E0227 17:28:00.160516 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79334803-c789-4221-bfe2-bcad857c5af2" containerName="extract-content" Feb 27 17:28:00 crc kubenswrapper[4830]: I0227 17:28:00.160534 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="79334803-c789-4221-bfe2-bcad857c5af2" containerName="extract-content" Feb 27 17:28:00 crc kubenswrapper[4830]: E0227 17:28:00.160555 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79334803-c789-4221-bfe2-bcad857c5af2" containerName="extract-utilities" Feb 27 17:28:00 crc kubenswrapper[4830]: I0227 17:28:00.160573 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="79334803-c789-4221-bfe2-bcad857c5af2" containerName="extract-utilities" Feb 27 17:28:00 crc kubenswrapper[4830]: I0227 17:28:00.160867 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="79334803-c789-4221-bfe2-bcad857c5af2" containerName="registry-server" Feb 27 17:28:00 crc kubenswrapper[4830]: I0227 17:28:00.161722 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536888-cf7qh" Feb 27 17:28:00 crc kubenswrapper[4830]: I0227 17:28:00.165496 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lmbtm" Feb 27 17:28:00 crc kubenswrapper[4830]: I0227 17:28:00.165553 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 17:28:00 crc kubenswrapper[4830]: I0227 17:28:00.165780 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 17:28:00 crc kubenswrapper[4830]: I0227 17:28:00.173877 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536888-cf7qh"] Feb 27 17:28:00 crc kubenswrapper[4830]: I0227 17:28:00.301808 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qnw5\" (UniqueName: \"kubernetes.io/projected/022c16c8-6b4c-4b11-a860-f9212af89fdd-kube-api-access-4qnw5\") pod \"auto-csr-approver-29536888-cf7qh\" (UID: \"022c16c8-6b4c-4b11-a860-f9212af89fdd\") " pod="openshift-infra/auto-csr-approver-29536888-cf7qh" Feb 27 17:28:00 crc kubenswrapper[4830]: I0227 17:28:00.403618 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4qnw5\" (UniqueName: \"kubernetes.io/projected/022c16c8-6b4c-4b11-a860-f9212af89fdd-kube-api-access-4qnw5\") pod \"auto-csr-approver-29536888-cf7qh\" (UID: \"022c16c8-6b4c-4b11-a860-f9212af89fdd\") " pod="openshift-infra/auto-csr-approver-29536888-cf7qh" Feb 27 17:28:00 crc kubenswrapper[4830]: I0227 17:28:00.450304 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4qnw5\" (UniqueName: \"kubernetes.io/projected/022c16c8-6b4c-4b11-a860-f9212af89fdd-kube-api-access-4qnw5\") pod \"auto-csr-approver-29536888-cf7qh\" (UID: \"022c16c8-6b4c-4b11-a860-f9212af89fdd\") " 
pod="openshift-infra/auto-csr-approver-29536888-cf7qh" Feb 27 17:28:00 crc kubenswrapper[4830]: I0227 17:28:00.496107 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536888-cf7qh" Feb 27 17:28:00 crc kubenswrapper[4830]: I0227 17:28:00.760819 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536888-cf7qh"] Feb 27 17:28:01 crc kubenswrapper[4830]: I0227 17:28:01.206748 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536888-cf7qh" event={"ID":"022c16c8-6b4c-4b11-a860-f9212af89fdd","Type":"ContainerStarted","Data":"abd73b1dd1ed79b02cc98b61da0277441e4aaf17020a550726a8f80c083d9149"} Feb 27 17:28:03 crc kubenswrapper[4830]: I0227 17:28:03.248065 4830 generic.go:334] "Generic (PLEG): container finished" podID="022c16c8-6b4c-4b11-a860-f9212af89fdd" containerID="a89cfd00e6a4fb866e3b23015795bcff47fb47761897bd84d6cdd1d3433adafb" exitCode=0 Feb 27 17:28:03 crc kubenswrapper[4830]: I0227 17:28:03.248152 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536888-cf7qh" event={"ID":"022c16c8-6b4c-4b11-a860-f9212af89fdd","Type":"ContainerDied","Data":"a89cfd00e6a4fb866e3b23015795bcff47fb47761897bd84d6cdd1d3433adafb"} Feb 27 17:28:04 crc kubenswrapper[4830]: I0227 17:28:04.561320 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536888-cf7qh" Feb 27 17:28:04 crc kubenswrapper[4830]: I0227 17:28:04.693594 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4qnw5\" (UniqueName: \"kubernetes.io/projected/022c16c8-6b4c-4b11-a860-f9212af89fdd-kube-api-access-4qnw5\") pod \"022c16c8-6b4c-4b11-a860-f9212af89fdd\" (UID: \"022c16c8-6b4c-4b11-a860-f9212af89fdd\") " Feb 27 17:28:04 crc kubenswrapper[4830]: I0227 17:28:04.703467 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/022c16c8-6b4c-4b11-a860-f9212af89fdd-kube-api-access-4qnw5" (OuterVolumeSpecName: "kube-api-access-4qnw5") pod "022c16c8-6b4c-4b11-a860-f9212af89fdd" (UID: "022c16c8-6b4c-4b11-a860-f9212af89fdd"). InnerVolumeSpecName "kube-api-access-4qnw5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:28:04 crc kubenswrapper[4830]: I0227 17:28:04.795936 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4qnw5\" (UniqueName: \"kubernetes.io/projected/022c16c8-6b4c-4b11-a860-f9212af89fdd-kube-api-access-4qnw5\") on node \"crc\" DevicePath \"\"" Feb 27 17:28:05 crc kubenswrapper[4830]: I0227 17:28:05.269728 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536888-cf7qh" event={"ID":"022c16c8-6b4c-4b11-a860-f9212af89fdd","Type":"ContainerDied","Data":"abd73b1dd1ed79b02cc98b61da0277441e4aaf17020a550726a8f80c083d9149"} Feb 27 17:28:05 crc kubenswrapper[4830]: I0227 17:28:05.269788 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="abd73b1dd1ed79b02cc98b61da0277441e4aaf17020a550726a8f80c083d9149" Feb 27 17:28:05 crc kubenswrapper[4830]: I0227 17:28:05.269819 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536888-cf7qh" Feb 27 17:28:05 crc kubenswrapper[4830]: I0227 17:28:05.675436 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536882-b65kt"] Feb 27 17:28:05 crc kubenswrapper[4830]: I0227 17:28:05.687196 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536882-b65kt"] Feb 27 17:28:06 crc kubenswrapper[4830]: I0227 17:28:06.778293 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a451822-4452-414e-8f06-54897714caf9" path="/var/lib/kubelet/pods/9a451822-4452-414e-8f06-54897714caf9/volumes" Feb 27 17:28:16 crc kubenswrapper[4830]: I0227 17:28:16.507052 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5d7b5456f5-m74vx"] Feb 27 17:28:16 crc kubenswrapper[4830]: E0227 17:28:16.507988 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="022c16c8-6b4c-4b11-a860-f9212af89fdd" containerName="oc" Feb 27 17:28:16 crc kubenswrapper[4830]: I0227 17:28:16.508001 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="022c16c8-6b4c-4b11-a860-f9212af89fdd" containerName="oc" Feb 27 17:28:16 crc kubenswrapper[4830]: I0227 17:28:16.508147 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="022c16c8-6b4c-4b11-a860-f9212af89fdd" containerName="oc" Feb 27 17:28:16 crc kubenswrapper[4830]: I0227 17:28:16.508842 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5d7b5456f5-m74vx" Feb 27 17:28:16 crc kubenswrapper[4830]: I0227 17:28:16.514793 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Feb 27 17:28:16 crc kubenswrapper[4830]: I0227 17:28:16.515562 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Feb 27 17:28:16 crc kubenswrapper[4830]: I0227 17:28:16.515613 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Feb 27 17:28:16 crc kubenswrapper[4830]: I0227 17:28:16.515730 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Feb 27 17:28:16 crc kubenswrapper[4830]: I0227 17:28:16.516361 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-grrg5" Feb 27 17:28:16 crc kubenswrapper[4830]: I0227 17:28:16.520848 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d7b5456f5-m74vx"] Feb 27 17:28:16 crc kubenswrapper[4830]: I0227 17:28:16.690552 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h867c\" (UniqueName: \"kubernetes.io/projected/d3feb34d-77f7-457c-bf94-37680d7cf3a3-kube-api-access-h867c\") pod \"dnsmasq-dns-5d7b5456f5-m74vx\" (UID: \"d3feb34d-77f7-457c-bf94-37680d7cf3a3\") " pod="openstack/dnsmasq-dns-5d7b5456f5-m74vx" Feb 27 17:28:16 crc kubenswrapper[4830]: I0227 17:28:16.690617 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d3feb34d-77f7-457c-bf94-37680d7cf3a3-dns-svc\") pod \"dnsmasq-dns-5d7b5456f5-m74vx\" (UID: \"d3feb34d-77f7-457c-bf94-37680d7cf3a3\") " pod="openstack/dnsmasq-dns-5d7b5456f5-m74vx" Feb 27 17:28:16 crc kubenswrapper[4830]: I0227 17:28:16.690711 4830 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3feb34d-77f7-457c-bf94-37680d7cf3a3-config\") pod \"dnsmasq-dns-5d7b5456f5-m74vx\" (UID: \"d3feb34d-77f7-457c-bf94-37680d7cf3a3\") " pod="openstack/dnsmasq-dns-5d7b5456f5-m74vx" Feb 27 17:28:16 crc kubenswrapper[4830]: I0227 17:28:16.781809 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-98ddfc8f-djh5r"] Feb 27 17:28:16 crc kubenswrapper[4830]: I0227 17:28:16.783217 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-98ddfc8f-djh5r" Feb 27 17:28:16 crc kubenswrapper[4830]: I0227 17:28:16.794864 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3feb34d-77f7-457c-bf94-37680d7cf3a3-config\") pod \"dnsmasq-dns-5d7b5456f5-m74vx\" (UID: \"d3feb34d-77f7-457c-bf94-37680d7cf3a3\") " pod="openstack/dnsmasq-dns-5d7b5456f5-m74vx" Feb 27 17:28:16 crc kubenswrapper[4830]: I0227 17:28:16.794969 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h867c\" (UniqueName: \"kubernetes.io/projected/d3feb34d-77f7-457c-bf94-37680d7cf3a3-kube-api-access-h867c\") pod \"dnsmasq-dns-5d7b5456f5-m74vx\" (UID: \"d3feb34d-77f7-457c-bf94-37680d7cf3a3\") " pod="openstack/dnsmasq-dns-5d7b5456f5-m74vx" Feb 27 17:28:16 crc kubenswrapper[4830]: I0227 17:28:16.795006 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d3feb34d-77f7-457c-bf94-37680d7cf3a3-dns-svc\") pod \"dnsmasq-dns-5d7b5456f5-m74vx\" (UID: \"d3feb34d-77f7-457c-bf94-37680d7cf3a3\") " pod="openstack/dnsmasq-dns-5d7b5456f5-m74vx" Feb 27 17:28:16 crc kubenswrapper[4830]: I0227 17:28:16.796905 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/d3feb34d-77f7-457c-bf94-37680d7cf3a3-config\") pod \"dnsmasq-dns-5d7b5456f5-m74vx\" (UID: \"d3feb34d-77f7-457c-bf94-37680d7cf3a3\") " pod="openstack/dnsmasq-dns-5d7b5456f5-m74vx" Feb 27 17:28:16 crc kubenswrapper[4830]: I0227 17:28:16.802673 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-98ddfc8f-djh5r"] Feb 27 17:28:16 crc kubenswrapper[4830]: I0227 17:28:16.803376 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d3feb34d-77f7-457c-bf94-37680d7cf3a3-dns-svc\") pod \"dnsmasq-dns-5d7b5456f5-m74vx\" (UID: \"d3feb34d-77f7-457c-bf94-37680d7cf3a3\") " pod="openstack/dnsmasq-dns-5d7b5456f5-m74vx" Feb 27 17:28:16 crc kubenswrapper[4830]: I0227 17:28:16.896080 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5t22\" (UniqueName: \"kubernetes.io/projected/7b2c005f-d6a9-444d-94e4-e5d431c3bd6d-kube-api-access-b5t22\") pod \"dnsmasq-dns-98ddfc8f-djh5r\" (UID: \"7b2c005f-d6a9-444d-94e4-e5d431c3bd6d\") " pod="openstack/dnsmasq-dns-98ddfc8f-djh5r" Feb 27 17:28:16 crc kubenswrapper[4830]: I0227 17:28:16.896121 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b2c005f-d6a9-444d-94e4-e5d431c3bd6d-config\") pod \"dnsmasq-dns-98ddfc8f-djh5r\" (UID: \"7b2c005f-d6a9-444d-94e4-e5d431c3bd6d\") " pod="openstack/dnsmasq-dns-98ddfc8f-djh5r" Feb 27 17:28:16 crc kubenswrapper[4830]: I0227 17:28:16.896141 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7b2c005f-d6a9-444d-94e4-e5d431c3bd6d-dns-svc\") pod \"dnsmasq-dns-98ddfc8f-djh5r\" (UID: \"7b2c005f-d6a9-444d-94e4-e5d431c3bd6d\") " pod="openstack/dnsmasq-dns-98ddfc8f-djh5r" Feb 27 17:28:16 crc kubenswrapper[4830]: I0227 17:28:16.996318 4830 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h867c\" (UniqueName: \"kubernetes.io/projected/d3feb34d-77f7-457c-bf94-37680d7cf3a3-kube-api-access-h867c\") pod \"dnsmasq-dns-5d7b5456f5-m74vx\" (UID: \"d3feb34d-77f7-457c-bf94-37680d7cf3a3\") " pod="openstack/dnsmasq-dns-5d7b5456f5-m74vx" Feb 27 17:28:17 crc kubenswrapper[4830]: I0227 17:28:17.002215 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5t22\" (UniqueName: \"kubernetes.io/projected/7b2c005f-d6a9-444d-94e4-e5d431c3bd6d-kube-api-access-b5t22\") pod \"dnsmasq-dns-98ddfc8f-djh5r\" (UID: \"7b2c005f-d6a9-444d-94e4-e5d431c3bd6d\") " pod="openstack/dnsmasq-dns-98ddfc8f-djh5r" Feb 27 17:28:17 crc kubenswrapper[4830]: I0227 17:28:17.002271 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b2c005f-d6a9-444d-94e4-e5d431c3bd6d-config\") pod \"dnsmasq-dns-98ddfc8f-djh5r\" (UID: \"7b2c005f-d6a9-444d-94e4-e5d431c3bd6d\") " pod="openstack/dnsmasq-dns-98ddfc8f-djh5r" Feb 27 17:28:17 crc kubenswrapper[4830]: I0227 17:28:17.002296 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7b2c005f-d6a9-444d-94e4-e5d431c3bd6d-dns-svc\") pod \"dnsmasq-dns-98ddfc8f-djh5r\" (UID: \"7b2c005f-d6a9-444d-94e4-e5d431c3bd6d\") " pod="openstack/dnsmasq-dns-98ddfc8f-djh5r" Feb 27 17:28:17 crc kubenswrapper[4830]: I0227 17:28:17.003134 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7b2c005f-d6a9-444d-94e4-e5d431c3bd6d-dns-svc\") pod \"dnsmasq-dns-98ddfc8f-djh5r\" (UID: \"7b2c005f-d6a9-444d-94e4-e5d431c3bd6d\") " pod="openstack/dnsmasq-dns-98ddfc8f-djh5r" Feb 27 17:28:17 crc kubenswrapper[4830]: I0227 17:28:17.003219 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/7b2c005f-d6a9-444d-94e4-e5d431c3bd6d-config\") pod \"dnsmasq-dns-98ddfc8f-djh5r\" (UID: \"7b2c005f-d6a9-444d-94e4-e5d431c3bd6d\") " pod="openstack/dnsmasq-dns-98ddfc8f-djh5r" Feb 27 17:28:17 crc kubenswrapper[4830]: I0227 17:28:17.023179 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5t22\" (UniqueName: \"kubernetes.io/projected/7b2c005f-d6a9-444d-94e4-e5d431c3bd6d-kube-api-access-b5t22\") pod \"dnsmasq-dns-98ddfc8f-djh5r\" (UID: \"7b2c005f-d6a9-444d-94e4-e5d431c3bd6d\") " pod="openstack/dnsmasq-dns-98ddfc8f-djh5r" Feb 27 17:28:17 crc kubenswrapper[4830]: I0227 17:28:17.109389 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-98ddfc8f-djh5r" Feb 27 17:28:17 crc kubenswrapper[4830]: I0227 17:28:17.127764 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d7b5456f5-m74vx" Feb 27 17:28:17 crc kubenswrapper[4830]: I0227 17:28:17.369828 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-98ddfc8f-djh5r"] Feb 27 17:28:17 crc kubenswrapper[4830]: I0227 17:28:17.653856 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d7b5456f5-m74vx"] Feb 27 17:28:17 crc kubenswrapper[4830]: I0227 17:28:17.680150 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 27 17:28:17 crc kubenswrapper[4830]: I0227 17:28:17.682773 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 27 17:28:17 crc kubenswrapper[4830]: I0227 17:28:17.687470 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 27 17:28:17 crc kubenswrapper[4830]: I0227 17:28:17.687805 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 27 17:28:17 crc kubenswrapper[4830]: I0227 17:28:17.688168 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 27 17:28:17 crc kubenswrapper[4830]: I0227 17:28:17.688409 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 27 17:28:17 crc kubenswrapper[4830]: I0227 17:28:17.688847 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-k49l2" Feb 27 17:28:17 crc kubenswrapper[4830]: I0227 17:28:17.691929 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 27 17:28:17 crc kubenswrapper[4830]: I0227 17:28:17.817129 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/372cd8c4-0006-4cea-8408-2fe8bbb4844b-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"372cd8c4-0006-4cea-8408-2fe8bbb4844b\") " pod="openstack/rabbitmq-server-0" Feb 27 17:28:17 crc kubenswrapper[4830]: I0227 17:28:17.817243 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/372cd8c4-0006-4cea-8408-2fe8bbb4844b-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"372cd8c4-0006-4cea-8408-2fe8bbb4844b\") " pod="openstack/rabbitmq-server-0" Feb 27 17:28:17 crc kubenswrapper[4830]: I0227 17:28:17.817289 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/372cd8c4-0006-4cea-8408-2fe8bbb4844b-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"372cd8c4-0006-4cea-8408-2fe8bbb4844b\") " pod="openstack/rabbitmq-server-0" Feb 27 17:28:17 crc kubenswrapper[4830]: I0227 17:28:17.817338 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98bbc\" (UniqueName: \"kubernetes.io/projected/372cd8c4-0006-4cea-8408-2fe8bbb4844b-kube-api-access-98bbc\") pod \"rabbitmq-server-0\" (UID: \"372cd8c4-0006-4cea-8408-2fe8bbb4844b\") " pod="openstack/rabbitmq-server-0" Feb 27 17:28:17 crc kubenswrapper[4830]: I0227 17:28:17.817382 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/372cd8c4-0006-4cea-8408-2fe8bbb4844b-pod-info\") pod \"rabbitmq-server-0\" (UID: \"372cd8c4-0006-4cea-8408-2fe8bbb4844b\") " pod="openstack/rabbitmq-server-0" Feb 27 17:28:17 crc kubenswrapper[4830]: I0227 17:28:17.817445 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/372cd8c4-0006-4cea-8408-2fe8bbb4844b-server-conf\") pod \"rabbitmq-server-0\" (UID: \"372cd8c4-0006-4cea-8408-2fe8bbb4844b\") " pod="openstack/rabbitmq-server-0" Feb 27 17:28:17 crc kubenswrapper[4830]: I0227 17:28:17.817497 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/372cd8c4-0006-4cea-8408-2fe8bbb4844b-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"372cd8c4-0006-4cea-8408-2fe8bbb4844b\") " pod="openstack/rabbitmq-server-0" Feb 27 17:28:17 crc kubenswrapper[4830]: I0227 17:28:17.817556 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-504966dc-acc0-4918-899b-693b7ff91a9e\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-504966dc-acc0-4918-899b-693b7ff91a9e\") pod \"rabbitmq-server-0\" (UID: \"372cd8c4-0006-4cea-8408-2fe8bbb4844b\") " pod="openstack/rabbitmq-server-0" Feb 27 17:28:17 crc kubenswrapper[4830]: I0227 17:28:17.817659 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/372cd8c4-0006-4cea-8408-2fe8bbb4844b-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"372cd8c4-0006-4cea-8408-2fe8bbb4844b\") " pod="openstack/rabbitmq-server-0" Feb 27 17:28:17 crc kubenswrapper[4830]: I0227 17:28:17.918970 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/372cd8c4-0006-4cea-8408-2fe8bbb4844b-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"372cd8c4-0006-4cea-8408-2fe8bbb4844b\") " pod="openstack/rabbitmq-server-0" Feb 27 17:28:17 crc kubenswrapper[4830]: I0227 17:28:17.919625 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/372cd8c4-0006-4cea-8408-2fe8bbb4844b-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"372cd8c4-0006-4cea-8408-2fe8bbb4844b\") " pod="openstack/rabbitmq-server-0" Feb 27 17:28:17 crc kubenswrapper[4830]: I0227 17:28:17.919666 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/372cd8c4-0006-4cea-8408-2fe8bbb4844b-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"372cd8c4-0006-4cea-8408-2fe8bbb4844b\") " pod="openstack/rabbitmq-server-0" Feb 27 17:28:17 crc kubenswrapper[4830]: I0227 17:28:17.919707 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98bbc\" (UniqueName: 
\"kubernetes.io/projected/372cd8c4-0006-4cea-8408-2fe8bbb4844b-kube-api-access-98bbc\") pod \"rabbitmq-server-0\" (UID: \"372cd8c4-0006-4cea-8408-2fe8bbb4844b\") " pod="openstack/rabbitmq-server-0" Feb 27 17:28:17 crc kubenswrapper[4830]: I0227 17:28:17.919749 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/372cd8c4-0006-4cea-8408-2fe8bbb4844b-pod-info\") pod \"rabbitmq-server-0\" (UID: \"372cd8c4-0006-4cea-8408-2fe8bbb4844b\") " pod="openstack/rabbitmq-server-0" Feb 27 17:28:17 crc kubenswrapper[4830]: I0227 17:28:17.919777 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/372cd8c4-0006-4cea-8408-2fe8bbb4844b-server-conf\") pod \"rabbitmq-server-0\" (UID: \"372cd8c4-0006-4cea-8408-2fe8bbb4844b\") " pod="openstack/rabbitmq-server-0" Feb 27 17:28:17 crc kubenswrapper[4830]: I0227 17:28:17.919816 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/372cd8c4-0006-4cea-8408-2fe8bbb4844b-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"372cd8c4-0006-4cea-8408-2fe8bbb4844b\") " pod="openstack/rabbitmq-server-0" Feb 27 17:28:17 crc kubenswrapper[4830]: I0227 17:28:17.919866 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-504966dc-acc0-4918-899b-693b7ff91a9e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-504966dc-acc0-4918-899b-693b7ff91a9e\") pod \"rabbitmq-server-0\" (UID: \"372cd8c4-0006-4cea-8408-2fe8bbb4844b\") " pod="openstack/rabbitmq-server-0" Feb 27 17:28:17 crc kubenswrapper[4830]: I0227 17:28:17.919925 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/372cd8c4-0006-4cea-8408-2fe8bbb4844b-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: 
\"372cd8c4-0006-4cea-8408-2fe8bbb4844b\") " pod="openstack/rabbitmq-server-0" Feb 27 17:28:17 crc kubenswrapper[4830]: I0227 17:28:17.920370 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 27 17:28:17 crc kubenswrapper[4830]: I0227 17:28:17.921586 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/372cd8c4-0006-4cea-8408-2fe8bbb4844b-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"372cd8c4-0006-4cea-8408-2fe8bbb4844b\") " pod="openstack/rabbitmq-server-0" Feb 27 17:28:17 crc kubenswrapper[4830]: I0227 17:28:17.922240 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 27 17:28:17 crc kubenswrapper[4830]: I0227 17:28:17.923065 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/372cd8c4-0006-4cea-8408-2fe8bbb4844b-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"372cd8c4-0006-4cea-8408-2fe8bbb4844b\") " pod="openstack/rabbitmq-server-0" Feb 27 17:28:17 crc kubenswrapper[4830]: I0227 17:28:17.923435 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/372cd8c4-0006-4cea-8408-2fe8bbb4844b-server-conf\") pod \"rabbitmq-server-0\" (UID: \"372cd8c4-0006-4cea-8408-2fe8bbb4844b\") " pod="openstack/rabbitmq-server-0" Feb 27 17:28:17 crc kubenswrapper[4830]: I0227 17:28:17.924780 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/372cd8c4-0006-4cea-8408-2fe8bbb4844b-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"372cd8c4-0006-4cea-8408-2fe8bbb4844b\") " pod="openstack/rabbitmq-server-0" Feb 27 17:28:17 crc kubenswrapper[4830]: I0227 17:28:17.926398 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/372cd8c4-0006-4cea-8408-2fe8bbb4844b-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"372cd8c4-0006-4cea-8408-2fe8bbb4844b\") " pod="openstack/rabbitmq-server-0" Feb 27 17:28:17 crc kubenswrapper[4830]: I0227 17:28:17.926599 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/372cd8c4-0006-4cea-8408-2fe8bbb4844b-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"372cd8c4-0006-4cea-8408-2fe8bbb4844b\") " pod="openstack/rabbitmq-server-0" Feb 27 17:28:17 crc kubenswrapper[4830]: I0227 17:28:17.926991 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 27 17:28:17 crc kubenswrapper[4830]: I0227 17:28:17.927012 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 27 17:28:17 crc kubenswrapper[4830]: I0227 17:28:17.927255 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 27 17:28:17 crc kubenswrapper[4830]: I0227 17:28:17.927301 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 27 17:28:17 crc kubenswrapper[4830]: I0227 17:28:17.927381 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-hkw2t" Feb 27 17:28:17 crc kubenswrapper[4830]: I0227 17:28:17.929331 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/372cd8c4-0006-4cea-8408-2fe8bbb4844b-pod-info\") pod \"rabbitmq-server-0\" (UID: \"372cd8c4-0006-4cea-8408-2fe8bbb4844b\") " pod="openstack/rabbitmq-server-0" Feb 27 17:28:17 crc kubenswrapper[4830]: I0227 17:28:17.930220 4830 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. 
Skipping MountDevice... Feb 27 17:28:17 crc kubenswrapper[4830]: I0227 17:28:17.930279 4830 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-504966dc-acc0-4918-899b-693b7ff91a9e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-504966dc-acc0-4918-899b-693b7ff91a9e\") pod \"rabbitmq-server-0\" (UID: \"372cd8c4-0006-4cea-8408-2fe8bbb4844b\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/f37391db7ae8e70fcc253c31f98faea82560b0cba03609ce63b3ca31ec3ce3d2/globalmount\"" pod="openstack/rabbitmq-server-0" Feb 27 17:28:17 crc kubenswrapper[4830]: I0227 17:28:17.952865 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98bbc\" (UniqueName: \"kubernetes.io/projected/372cd8c4-0006-4cea-8408-2fe8bbb4844b-kube-api-access-98bbc\") pod \"rabbitmq-server-0\" (UID: \"372cd8c4-0006-4cea-8408-2fe8bbb4844b\") " pod="openstack/rabbitmq-server-0" Feb 27 17:28:17 crc kubenswrapper[4830]: I0227 17:28:17.955937 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 27 17:28:17 crc kubenswrapper[4830]: I0227 17:28:17.990289 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-504966dc-acc0-4918-899b-693b7ff91a9e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-504966dc-acc0-4918-899b-693b7ff91a9e\") pod \"rabbitmq-server-0\" (UID: \"372cd8c4-0006-4cea-8408-2fe8bbb4844b\") " pod="openstack/rabbitmq-server-0" Feb 27 17:28:18 crc kubenswrapper[4830]: I0227 17:28:18.006688 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 27 17:28:18 crc kubenswrapper[4830]: I0227 17:28:18.021551 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zh58\" (UniqueName: \"kubernetes.io/projected/bce2dd2f-e667-4d8a-bc01-6f0a0e6ff61f-kube-api-access-2zh58\") pod \"rabbitmq-cell1-server-0\" (UID: \"bce2dd2f-e667-4d8a-bc01-6f0a0e6ff61f\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 17:28:18 crc kubenswrapper[4830]: I0227 17:28:18.021601 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/bce2dd2f-e667-4d8a-bc01-6f0a0e6ff61f-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"bce2dd2f-e667-4d8a-bc01-6f0a0e6ff61f\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 17:28:18 crc kubenswrapper[4830]: I0227 17:28:18.021636 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-5d2b1020-d566-4194-ade4-e8bfc67eabb5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5d2b1020-d566-4194-ade4-e8bfc67eabb5\") pod \"rabbitmq-cell1-server-0\" (UID: \"bce2dd2f-e667-4d8a-bc01-6f0a0e6ff61f\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 17:28:18 crc kubenswrapper[4830]: I0227 17:28:18.021667 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/bce2dd2f-e667-4d8a-bc01-6f0a0e6ff61f-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"bce2dd2f-e667-4d8a-bc01-6f0a0e6ff61f\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 17:28:18 crc kubenswrapper[4830]: I0227 17:28:18.021694 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/bce2dd2f-e667-4d8a-bc01-6f0a0e6ff61f-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"bce2dd2f-e667-4d8a-bc01-6f0a0e6ff61f\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 17:28:18 crc kubenswrapper[4830]: I0227 17:28:18.021721 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/bce2dd2f-e667-4d8a-bc01-6f0a0e6ff61f-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"bce2dd2f-e667-4d8a-bc01-6f0a0e6ff61f\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 17:28:18 crc kubenswrapper[4830]: I0227 17:28:18.021890 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/bce2dd2f-e667-4d8a-bc01-6f0a0e6ff61f-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"bce2dd2f-e667-4d8a-bc01-6f0a0e6ff61f\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 17:28:18 crc kubenswrapper[4830]: I0227 17:28:18.021934 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/bce2dd2f-e667-4d8a-bc01-6f0a0e6ff61f-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"bce2dd2f-e667-4d8a-bc01-6f0a0e6ff61f\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 17:28:18 crc kubenswrapper[4830]: I0227 17:28:18.022017 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/bce2dd2f-e667-4d8a-bc01-6f0a0e6ff61f-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"bce2dd2f-e667-4d8a-bc01-6f0a0e6ff61f\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 17:28:18 crc kubenswrapper[4830]: I0227 17:28:18.123932 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2zh58\" (UniqueName: 
\"kubernetes.io/projected/bce2dd2f-e667-4d8a-bc01-6f0a0e6ff61f-kube-api-access-2zh58\") pod \"rabbitmq-cell1-server-0\" (UID: \"bce2dd2f-e667-4d8a-bc01-6f0a0e6ff61f\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 17:28:18 crc kubenswrapper[4830]: I0227 17:28:18.124001 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/bce2dd2f-e667-4d8a-bc01-6f0a0e6ff61f-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"bce2dd2f-e667-4d8a-bc01-6f0a0e6ff61f\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 17:28:18 crc kubenswrapper[4830]: I0227 17:28:18.124034 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-5d2b1020-d566-4194-ade4-e8bfc67eabb5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5d2b1020-d566-4194-ade4-e8bfc67eabb5\") pod \"rabbitmq-cell1-server-0\" (UID: \"bce2dd2f-e667-4d8a-bc01-6f0a0e6ff61f\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 17:28:18 crc kubenswrapper[4830]: I0227 17:28:18.124063 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/bce2dd2f-e667-4d8a-bc01-6f0a0e6ff61f-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"bce2dd2f-e667-4d8a-bc01-6f0a0e6ff61f\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 17:28:18 crc kubenswrapper[4830]: I0227 17:28:18.124088 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/bce2dd2f-e667-4d8a-bc01-6f0a0e6ff61f-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"bce2dd2f-e667-4d8a-bc01-6f0a0e6ff61f\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 17:28:18 crc kubenswrapper[4830]: I0227 17:28:18.124115 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/bce2dd2f-e667-4d8a-bc01-6f0a0e6ff61f-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"bce2dd2f-e667-4d8a-bc01-6f0a0e6ff61f\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 17:28:18 crc kubenswrapper[4830]: I0227 17:28:18.124160 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/bce2dd2f-e667-4d8a-bc01-6f0a0e6ff61f-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"bce2dd2f-e667-4d8a-bc01-6f0a0e6ff61f\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 17:28:18 crc kubenswrapper[4830]: I0227 17:28:18.124176 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/bce2dd2f-e667-4d8a-bc01-6f0a0e6ff61f-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"bce2dd2f-e667-4d8a-bc01-6f0a0e6ff61f\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 17:28:18 crc kubenswrapper[4830]: I0227 17:28:18.124197 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/bce2dd2f-e667-4d8a-bc01-6f0a0e6ff61f-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"bce2dd2f-e667-4d8a-bc01-6f0a0e6ff61f\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 17:28:18 crc kubenswrapper[4830]: I0227 17:28:18.125074 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/bce2dd2f-e667-4d8a-bc01-6f0a0e6ff61f-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"bce2dd2f-e667-4d8a-bc01-6f0a0e6ff61f\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 17:28:18 crc kubenswrapper[4830]: I0227 17:28:18.125512 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/bce2dd2f-e667-4d8a-bc01-6f0a0e6ff61f-rabbitmq-plugins\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"bce2dd2f-e667-4d8a-bc01-6f0a0e6ff61f\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 17:28:18 crc kubenswrapper[4830]: I0227 17:28:18.126477 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/bce2dd2f-e667-4d8a-bc01-6f0a0e6ff61f-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"bce2dd2f-e667-4d8a-bc01-6f0a0e6ff61f\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 17:28:18 crc kubenswrapper[4830]: I0227 17:28:18.126555 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/bce2dd2f-e667-4d8a-bc01-6f0a0e6ff61f-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"bce2dd2f-e667-4d8a-bc01-6f0a0e6ff61f\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 17:28:18 crc kubenswrapper[4830]: I0227 17:28:18.130626 4830 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 27 17:28:18 crc kubenswrapper[4830]: I0227 17:28:18.131044 4830 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-5d2b1020-d566-4194-ade4-e8bfc67eabb5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5d2b1020-d566-4194-ade4-e8bfc67eabb5\") pod \"rabbitmq-cell1-server-0\" (UID: \"bce2dd2f-e667-4d8a-bc01-6f0a0e6ff61f\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/ad898285b7c922a94df9db2fe5d884eccf586a5b0445da091fa79edefcd75c9e/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Feb 27 17:28:18 crc kubenswrapper[4830]: I0227 17:28:18.130796 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/bce2dd2f-e667-4d8a-bc01-6f0a0e6ff61f-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"bce2dd2f-e667-4d8a-bc01-6f0a0e6ff61f\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 17:28:18 crc kubenswrapper[4830]: I0227 17:28:18.130689 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/bce2dd2f-e667-4d8a-bc01-6f0a0e6ff61f-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"bce2dd2f-e667-4d8a-bc01-6f0a0e6ff61f\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 17:28:18 crc kubenswrapper[4830]: I0227 17:28:18.149067 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/bce2dd2f-e667-4d8a-bc01-6f0a0e6ff61f-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"bce2dd2f-e667-4d8a-bc01-6f0a0e6ff61f\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 17:28:18 crc kubenswrapper[4830]: I0227 17:28:18.165340 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2zh58\" (UniqueName: \"kubernetes.io/projected/bce2dd2f-e667-4d8a-bc01-6f0a0e6ff61f-kube-api-access-2zh58\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"bce2dd2f-e667-4d8a-bc01-6f0a0e6ff61f\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 17:28:18 crc kubenswrapper[4830]: I0227 17:28:18.181562 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-5d2b1020-d566-4194-ade4-e8bfc67eabb5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5d2b1020-d566-4194-ade4-e8bfc67eabb5\") pod \"rabbitmq-cell1-server-0\" (UID: \"bce2dd2f-e667-4d8a-bc01-6f0a0e6ff61f\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 17:28:18 crc kubenswrapper[4830]: I0227 17:28:18.291051 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 27 17:28:18 crc kubenswrapper[4830]: I0227 17:28:18.385686 4830 generic.go:334] "Generic (PLEG): container finished" podID="7b2c005f-d6a9-444d-94e4-e5d431c3bd6d" containerID="1dec16f58fe7b82aed12484c93b1da1a523b9e1eb92fc6ee5b7c52642b3cd504" exitCode=0 Feb 27 17:28:18 crc kubenswrapper[4830]: I0227 17:28:18.385754 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-98ddfc8f-djh5r" event={"ID":"7b2c005f-d6a9-444d-94e4-e5d431c3bd6d","Type":"ContainerDied","Data":"1dec16f58fe7b82aed12484c93b1da1a523b9e1eb92fc6ee5b7c52642b3cd504"} Feb 27 17:28:18 crc kubenswrapper[4830]: I0227 17:28:18.385779 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-98ddfc8f-djh5r" event={"ID":"7b2c005f-d6a9-444d-94e4-e5d431c3bd6d","Type":"ContainerStarted","Data":"d67cbe350688a01dd1d1dd126808171a5ff53daaa93ed40e2f985e60ffa06f30"} Feb 27 17:28:18 crc kubenswrapper[4830]: I0227 17:28:18.392730 4830 generic.go:334] "Generic (PLEG): container finished" podID="d3feb34d-77f7-457c-bf94-37680d7cf3a3" containerID="c37b09ff2362124485dc57a73b9a6f43a06b428a210a8f7b20a73c0714d04848" exitCode=0 Feb 27 17:28:18 crc kubenswrapper[4830]: I0227 17:28:18.392776 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-5d7b5456f5-m74vx" event={"ID":"d3feb34d-77f7-457c-bf94-37680d7cf3a3","Type":"ContainerDied","Data":"c37b09ff2362124485dc57a73b9a6f43a06b428a210a8f7b20a73c0714d04848"} Feb 27 17:28:18 crc kubenswrapper[4830]: I0227 17:28:18.392803 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d7b5456f5-m74vx" event={"ID":"d3feb34d-77f7-457c-bf94-37680d7cf3a3","Type":"ContainerStarted","Data":"e32d4a5f4bb7216a14e9144c56423616f8e18fa8472664b61294ddccc111117a"} Feb 27 17:28:18 crc kubenswrapper[4830]: I0227 17:28:18.506335 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 27 17:28:18 crc kubenswrapper[4830]: I0227 17:28:18.834829 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 27 17:28:19 crc kubenswrapper[4830]: I0227 17:28:19.288032 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Feb 27 17:28:19 crc kubenswrapper[4830]: I0227 17:28:19.290573 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Feb 27 17:28:19 crc kubenswrapper[4830]: I0227 17:28:19.294675 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Feb 27 17:28:19 crc kubenswrapper[4830]: I0227 17:28:19.294909 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-645dn" Feb 27 17:28:19 crc kubenswrapper[4830]: I0227 17:28:19.295137 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Feb 27 17:28:19 crc kubenswrapper[4830]: I0227 17:28:19.295290 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Feb 27 17:28:19 crc kubenswrapper[4830]: I0227 17:28:19.299030 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 27 17:28:19 crc kubenswrapper[4830]: I0227 17:28:19.303551 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Feb 27 17:28:19 crc kubenswrapper[4830]: I0227 17:28:19.404572 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-98ddfc8f-djh5r" event={"ID":"7b2c005f-d6a9-444d-94e4-e5d431c3bd6d","Type":"ContainerStarted","Data":"509d4ab66bbd4d950f97afc152764a475a9522e1a714347a5da5245aab15930b"} Feb 27 17:28:19 crc kubenswrapper[4830]: I0227 17:28:19.405097 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-98ddfc8f-djh5r" Feb 27 17:28:19 crc kubenswrapper[4830]: I0227 17:28:19.406615 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"372cd8c4-0006-4cea-8408-2fe8bbb4844b","Type":"ContainerStarted","Data":"bd62dd68983468960351bc37021c386dc6ef4818c7439f6649d3efd989fe91cb"} Feb 27 17:28:19 crc kubenswrapper[4830]: I0227 17:28:19.409859 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-5d7b5456f5-m74vx" event={"ID":"d3feb34d-77f7-457c-bf94-37680d7cf3a3","Type":"ContainerStarted","Data":"ca4554ba35ee5b5788d15f0391475f172b4528839da759773125abe85b16d34a"} Feb 27 17:28:19 crc kubenswrapper[4830]: I0227 17:28:19.415234 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"bce2dd2f-e667-4d8a-bc01-6f0a0e6ff61f","Type":"ContainerStarted","Data":"4b97744520637d0d13864ec551ec17d79b43a296c30c16d19eca39be0bedb9d6"} Feb 27 17:28:19 crc kubenswrapper[4830]: I0227 17:28:19.431582 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-98ddfc8f-djh5r" podStartSLOduration=3.431560713 podStartE2EDuration="3.431560713s" podCreationTimestamp="2026-02-27 17:28:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 17:28:19.427169049 +0000 UTC m=+4895.516441512" watchObservedRunningTime="2026-02-27 17:28:19.431560713 +0000 UTC m=+4895.520833176" Feb 27 17:28:19 crc kubenswrapper[4830]: I0227 17:28:19.449853 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5d7b5456f5-m74vx" podStartSLOduration=3.449824086 podStartE2EDuration="3.449824086s" podCreationTimestamp="2026-02-27 17:28:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 17:28:19.44744675 +0000 UTC m=+4895.536719213" watchObservedRunningTime="2026-02-27 17:28:19.449824086 +0000 UTC m=+4895.539096549" Feb 27 17:28:19 crc kubenswrapper[4830]: I0227 17:28:19.458379 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgpwk\" (UniqueName: \"kubernetes.io/projected/74c21e05-7e2b-4653-b6fa-a9a814716cc1-kube-api-access-rgpwk\") pod \"openstack-galera-0\" (UID: 
\"74c21e05-7e2b-4653-b6fa-a9a814716cc1\") " pod="openstack/openstack-galera-0" Feb 27 17:28:19 crc kubenswrapper[4830]: I0227 17:28:19.458441 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74c21e05-7e2b-4653-b6fa-a9a814716cc1-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"74c21e05-7e2b-4653-b6fa-a9a814716cc1\") " pod="openstack/openstack-galera-0" Feb 27 17:28:19 crc kubenswrapper[4830]: I0227 17:28:19.458472 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/74c21e05-7e2b-4653-b6fa-a9a814716cc1-kolla-config\") pod \"openstack-galera-0\" (UID: \"74c21e05-7e2b-4653-b6fa-a9a814716cc1\") " pod="openstack/openstack-galera-0" Feb 27 17:28:19 crc kubenswrapper[4830]: I0227 17:28:19.458513 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-9cf4351f-fb5d-46cd-8dc4-57366706eb0a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9cf4351f-fb5d-46cd-8dc4-57366706eb0a\") pod \"openstack-galera-0\" (UID: \"74c21e05-7e2b-4653-b6fa-a9a814716cc1\") " pod="openstack/openstack-galera-0" Feb 27 17:28:19 crc kubenswrapper[4830]: I0227 17:28:19.458541 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/74c21e05-7e2b-4653-b6fa-a9a814716cc1-operator-scripts\") pod \"openstack-galera-0\" (UID: \"74c21e05-7e2b-4653-b6fa-a9a814716cc1\") " pod="openstack/openstack-galera-0" Feb 27 17:28:19 crc kubenswrapper[4830]: I0227 17:28:19.458559 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/74c21e05-7e2b-4653-b6fa-a9a814716cc1-galera-tls-certs\") pod \"openstack-galera-0\" (UID: 
\"74c21e05-7e2b-4653-b6fa-a9a814716cc1\") " pod="openstack/openstack-galera-0" Feb 27 17:28:19 crc kubenswrapper[4830]: I0227 17:28:19.458583 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/74c21e05-7e2b-4653-b6fa-a9a814716cc1-config-data-default\") pod \"openstack-galera-0\" (UID: \"74c21e05-7e2b-4653-b6fa-a9a814716cc1\") " pod="openstack/openstack-galera-0" Feb 27 17:28:19 crc kubenswrapper[4830]: I0227 17:28:19.458620 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/74c21e05-7e2b-4653-b6fa-a9a814716cc1-config-data-generated\") pod \"openstack-galera-0\" (UID: \"74c21e05-7e2b-4653-b6fa-a9a814716cc1\") " pod="openstack/openstack-galera-0" Feb 27 17:28:19 crc kubenswrapper[4830]: I0227 17:28:19.559710 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/74c21e05-7e2b-4653-b6fa-a9a814716cc1-config-data-default\") pod \"openstack-galera-0\" (UID: \"74c21e05-7e2b-4653-b6fa-a9a814716cc1\") " pod="openstack/openstack-galera-0" Feb 27 17:28:19 crc kubenswrapper[4830]: I0227 17:28:19.559808 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/74c21e05-7e2b-4653-b6fa-a9a814716cc1-config-data-generated\") pod \"openstack-galera-0\" (UID: \"74c21e05-7e2b-4653-b6fa-a9a814716cc1\") " pod="openstack/openstack-galera-0" Feb 27 17:28:19 crc kubenswrapper[4830]: I0227 17:28:19.559841 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rgpwk\" (UniqueName: \"kubernetes.io/projected/74c21e05-7e2b-4653-b6fa-a9a814716cc1-kube-api-access-rgpwk\") pod \"openstack-galera-0\" (UID: \"74c21e05-7e2b-4653-b6fa-a9a814716cc1\") " 
pod="openstack/openstack-galera-0" Feb 27 17:28:19 crc kubenswrapper[4830]: I0227 17:28:19.559871 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74c21e05-7e2b-4653-b6fa-a9a814716cc1-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"74c21e05-7e2b-4653-b6fa-a9a814716cc1\") " pod="openstack/openstack-galera-0" Feb 27 17:28:19 crc kubenswrapper[4830]: I0227 17:28:19.559903 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/74c21e05-7e2b-4653-b6fa-a9a814716cc1-kolla-config\") pod \"openstack-galera-0\" (UID: \"74c21e05-7e2b-4653-b6fa-a9a814716cc1\") " pod="openstack/openstack-galera-0" Feb 27 17:28:19 crc kubenswrapper[4830]: I0227 17:28:19.559935 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-9cf4351f-fb5d-46cd-8dc4-57366706eb0a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9cf4351f-fb5d-46cd-8dc4-57366706eb0a\") pod \"openstack-galera-0\" (UID: \"74c21e05-7e2b-4653-b6fa-a9a814716cc1\") " pod="openstack/openstack-galera-0" Feb 27 17:28:19 crc kubenswrapper[4830]: I0227 17:28:19.560040 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/74c21e05-7e2b-4653-b6fa-a9a814716cc1-operator-scripts\") pod \"openstack-galera-0\" (UID: \"74c21e05-7e2b-4653-b6fa-a9a814716cc1\") " pod="openstack/openstack-galera-0" Feb 27 17:28:19 crc kubenswrapper[4830]: I0227 17:28:19.560062 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/74c21e05-7e2b-4653-b6fa-a9a814716cc1-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"74c21e05-7e2b-4653-b6fa-a9a814716cc1\") " pod="openstack/openstack-galera-0" Feb 27 17:28:19 crc kubenswrapper[4830]: I0227 17:28:19.560965 4830 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/74c21e05-7e2b-4653-b6fa-a9a814716cc1-config-data-generated\") pod \"openstack-galera-0\" (UID: \"74c21e05-7e2b-4653-b6fa-a9a814716cc1\") " pod="openstack/openstack-galera-0" Feb 27 17:28:19 crc kubenswrapper[4830]: I0227 17:28:19.561875 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/74c21e05-7e2b-4653-b6fa-a9a814716cc1-kolla-config\") pod \"openstack-galera-0\" (UID: \"74c21e05-7e2b-4653-b6fa-a9a814716cc1\") " pod="openstack/openstack-galera-0" Feb 27 17:28:19 crc kubenswrapper[4830]: I0227 17:28:19.562032 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/74c21e05-7e2b-4653-b6fa-a9a814716cc1-config-data-default\") pod \"openstack-galera-0\" (UID: \"74c21e05-7e2b-4653-b6fa-a9a814716cc1\") " pod="openstack/openstack-galera-0" Feb 27 17:28:19 crc kubenswrapper[4830]: I0227 17:28:19.562556 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/74c21e05-7e2b-4653-b6fa-a9a814716cc1-operator-scripts\") pod \"openstack-galera-0\" (UID: \"74c21e05-7e2b-4653-b6fa-a9a814716cc1\") " pod="openstack/openstack-galera-0" Feb 27 17:28:19 crc kubenswrapper[4830]: I0227 17:28:19.568512 4830 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 27 17:28:19 crc kubenswrapper[4830]: I0227 17:28:19.568591 4830 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-9cf4351f-fb5d-46cd-8dc4-57366706eb0a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9cf4351f-fb5d-46cd-8dc4-57366706eb0a\") pod \"openstack-galera-0\" (UID: \"74c21e05-7e2b-4653-b6fa-a9a814716cc1\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/ca9c346b39ad4e8d9688e824d133293a684fdc106ba5bb77b9d78ca6b0e882c8/globalmount\"" pod="openstack/openstack-galera-0" Feb 27 17:28:19 crc kubenswrapper[4830]: I0227 17:28:19.595576 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74c21e05-7e2b-4653-b6fa-a9a814716cc1-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"74c21e05-7e2b-4653-b6fa-a9a814716cc1\") " pod="openstack/openstack-galera-0" Feb 27 17:28:19 crc kubenswrapper[4830]: I0227 17:28:19.596238 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/74c21e05-7e2b-4653-b6fa-a9a814716cc1-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"74c21e05-7e2b-4653-b6fa-a9a814716cc1\") " pod="openstack/openstack-galera-0" Feb 27 17:28:19 crc kubenswrapper[4830]: I0227 17:28:19.606360 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rgpwk\" (UniqueName: \"kubernetes.io/projected/74c21e05-7e2b-4653-b6fa-a9a814716cc1-kube-api-access-rgpwk\") pod \"openstack-galera-0\" (UID: \"74c21e05-7e2b-4653-b6fa-a9a814716cc1\") " pod="openstack/openstack-galera-0" Feb 27 17:28:19 crc kubenswrapper[4830]: I0227 17:28:19.611125 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Feb 27 17:28:19 crc kubenswrapper[4830]: I0227 17:28:19.612332 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Feb 27 17:28:19 crc kubenswrapper[4830]: I0227 17:28:19.614859 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Feb 27 17:28:19 crc kubenswrapper[4830]: I0227 17:28:19.615221 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-kvnrm" Feb 27 17:28:19 crc kubenswrapper[4830]: I0227 17:28:19.627801 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 27 17:28:19 crc kubenswrapper[4830]: I0227 17:28:19.762968 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f185a288-a581-46e4-8ed5-d0ce81a59f00-kolla-config\") pod \"memcached-0\" (UID: \"f185a288-a581-46e4-8ed5-d0ce81a59f00\") " pod="openstack/memcached-0" Feb 27 17:28:19 crc kubenswrapper[4830]: I0227 17:28:19.763331 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f185a288-a581-46e4-8ed5-d0ce81a59f00-config-data\") pod \"memcached-0\" (UID: \"f185a288-a581-46e4-8ed5-d0ce81a59f00\") " pod="openstack/memcached-0" Feb 27 17:28:19 crc kubenswrapper[4830]: I0227 17:28:19.763441 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnzpw\" (UniqueName: \"kubernetes.io/projected/f185a288-a581-46e4-8ed5-d0ce81a59f00-kube-api-access-pnzpw\") pod \"memcached-0\" (UID: \"f185a288-a581-46e4-8ed5-d0ce81a59f00\") " pod="openstack/memcached-0" Feb 27 17:28:19 crc kubenswrapper[4830]: I0227 17:28:19.864927 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f185a288-a581-46e4-8ed5-d0ce81a59f00-kolla-config\") pod \"memcached-0\" (UID: \"f185a288-a581-46e4-8ed5-d0ce81a59f00\") " 
pod="openstack/memcached-0" Feb 27 17:28:19 crc kubenswrapper[4830]: I0227 17:28:19.865033 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f185a288-a581-46e4-8ed5-d0ce81a59f00-config-data\") pod \"memcached-0\" (UID: \"f185a288-a581-46e4-8ed5-d0ce81a59f00\") " pod="openstack/memcached-0" Feb 27 17:28:19 crc kubenswrapper[4830]: I0227 17:28:19.865111 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pnzpw\" (UniqueName: \"kubernetes.io/projected/f185a288-a581-46e4-8ed5-d0ce81a59f00-kube-api-access-pnzpw\") pod \"memcached-0\" (UID: \"f185a288-a581-46e4-8ed5-d0ce81a59f00\") " pod="openstack/memcached-0" Feb 27 17:28:19 crc kubenswrapper[4830]: I0227 17:28:19.867072 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f185a288-a581-46e4-8ed5-d0ce81a59f00-kolla-config\") pod \"memcached-0\" (UID: \"f185a288-a581-46e4-8ed5-d0ce81a59f00\") " pod="openstack/memcached-0" Feb 27 17:28:19 crc kubenswrapper[4830]: I0227 17:28:19.867102 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f185a288-a581-46e4-8ed5-d0ce81a59f00-config-data\") pod \"memcached-0\" (UID: \"f185a288-a581-46e4-8ed5-d0ce81a59f00\") " pod="openstack/memcached-0" Feb 27 17:28:19 crc kubenswrapper[4830]: I0227 17:28:19.931912 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pnzpw\" (UniqueName: \"kubernetes.io/projected/f185a288-a581-46e4-8ed5-d0ce81a59f00-kube-api-access-pnzpw\") pod \"memcached-0\" (UID: \"f185a288-a581-46e4-8ed5-d0ce81a59f00\") " pod="openstack/memcached-0" Feb 27 17:28:20 crc kubenswrapper[4830]: I0227 17:28:20.092759 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-9cf4351f-fb5d-46cd-8dc4-57366706eb0a\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9cf4351f-fb5d-46cd-8dc4-57366706eb0a\") pod \"openstack-galera-0\" (UID: \"74c21e05-7e2b-4653-b6fa-a9a814716cc1\") " pod="openstack/openstack-galera-0" Feb 27 17:28:20 crc kubenswrapper[4830]: I0227 17:28:20.132752 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Feb 27 17:28:20 crc kubenswrapper[4830]: I0227 17:28:20.222182 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Feb 27 17:28:20 crc kubenswrapper[4830]: I0227 17:28:20.446064 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"372cd8c4-0006-4cea-8408-2fe8bbb4844b","Type":"ContainerStarted","Data":"74d5999edd4b0b6d813de11160d8c361ea0a022b3cf223667ce5fada4e7f0549"} Feb 27 17:28:20 crc kubenswrapper[4830]: I0227 17:28:20.451775 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5d7b5456f5-m74vx" Feb 27 17:28:20 crc kubenswrapper[4830]: I0227 17:28:20.624330 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 27 17:28:20 crc kubenswrapper[4830]: W0227 17:28:20.629830 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf185a288_a581_46e4_8ed5_d0ce81a59f00.slice/crio-219dacd8184040dbcc1e047299393a10561ba4c4a58f7c26ed07abed57c81789 WatchSource:0}: Error finding container 219dacd8184040dbcc1e047299393a10561ba4c4a58f7c26ed07abed57c81789: Status 404 returned error can't find the container with id 219dacd8184040dbcc1e047299393a10561ba4c4a58f7c26ed07abed57c81789 Feb 27 17:28:20 crc kubenswrapper[4830]: I0227 17:28:20.708357 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 27 17:28:20 crc kubenswrapper[4830]: I0227 17:28:20.710583 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 27 17:28:20 crc kubenswrapper[4830]: I0227 17:28:20.715444 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Feb 27 17:28:20 crc kubenswrapper[4830]: I0227 17:28:20.715859 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Feb 27 17:28:20 crc kubenswrapper[4830]: I0227 17:28:20.716035 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Feb 27 17:28:20 crc kubenswrapper[4830]: I0227 17:28:20.719978 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-rlvw4" Feb 27 17:28:20 crc kubenswrapper[4830]: I0227 17:28:20.724304 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 27 17:28:20 crc kubenswrapper[4830]: I0227 17:28:20.816815 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 27 17:28:20 crc kubenswrapper[4830]: W0227 17:28:20.824353 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod74c21e05_7e2b_4653_b6fa_a9a814716cc1.slice/crio-fd2e22615d30e659e3d94d472954bed9e5c30970bd1c0c431680e28d4b96c7c7 WatchSource:0}: Error finding container fd2e22615d30e659e3d94d472954bed9e5c30970bd1c0c431680e28d4b96c7c7: Status 404 returned error can't find the container with id fd2e22615d30e659e3d94d472954bed9e5c30970bd1c0c431680e28d4b96c7c7 Feb 27 17:28:20 crc kubenswrapper[4830]: I0227 17:28:20.884214 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2e2ff35-b569-4ab4-b1f3-47ec2327caeb-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"e2e2ff35-b569-4ab4-b1f3-47ec2327caeb\") " 
pod="openstack/openstack-cell1-galera-0" Feb 27 17:28:20 crc kubenswrapper[4830]: I0227 17:28:20.885602 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9lsp\" (UniqueName: \"kubernetes.io/projected/e2e2ff35-b569-4ab4-b1f3-47ec2327caeb-kube-api-access-z9lsp\") pod \"openstack-cell1-galera-0\" (UID: \"e2e2ff35-b569-4ab4-b1f3-47ec2327caeb\") " pod="openstack/openstack-cell1-galera-0" Feb 27 17:28:20 crc kubenswrapper[4830]: I0227 17:28:20.885727 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2e2ff35-b569-4ab4-b1f3-47ec2327caeb-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"e2e2ff35-b569-4ab4-b1f3-47ec2327caeb\") " pod="openstack/openstack-cell1-galera-0" Feb 27 17:28:20 crc kubenswrapper[4830]: I0227 17:28:20.885909 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/e2e2ff35-b569-4ab4-b1f3-47ec2327caeb-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"e2e2ff35-b569-4ab4-b1f3-47ec2327caeb\") " pod="openstack/openstack-cell1-galera-0" Feb 27 17:28:20 crc kubenswrapper[4830]: I0227 17:28:20.886216 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e2e2ff35-b569-4ab4-b1f3-47ec2327caeb-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"e2e2ff35-b569-4ab4-b1f3-47ec2327caeb\") " pod="openstack/openstack-cell1-galera-0" Feb 27 17:28:20 crc kubenswrapper[4830]: I0227 17:28:20.886516 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e2e2ff35-b569-4ab4-b1f3-47ec2327caeb-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: 
\"e2e2ff35-b569-4ab4-b1f3-47ec2327caeb\") " pod="openstack/openstack-cell1-galera-0" Feb 27 17:28:20 crc kubenswrapper[4830]: I0227 17:28:20.886655 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-cd43fdb1-3ea1-42ef-82f7-e34d4f71ef1f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cd43fdb1-3ea1-42ef-82f7-e34d4f71ef1f\") pod \"openstack-cell1-galera-0\" (UID: \"e2e2ff35-b569-4ab4-b1f3-47ec2327caeb\") " pod="openstack/openstack-cell1-galera-0" Feb 27 17:28:20 crc kubenswrapper[4830]: I0227 17:28:20.886782 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/e2e2ff35-b569-4ab4-b1f3-47ec2327caeb-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"e2e2ff35-b569-4ab4-b1f3-47ec2327caeb\") " pod="openstack/openstack-cell1-galera-0" Feb 27 17:28:20 crc kubenswrapper[4830]: I0227 17:28:20.987786 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e2e2ff35-b569-4ab4-b1f3-47ec2327caeb-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"e2e2ff35-b569-4ab4-b1f3-47ec2327caeb\") " pod="openstack/openstack-cell1-galera-0" Feb 27 17:28:20 crc kubenswrapper[4830]: I0227 17:28:20.987869 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e2e2ff35-b569-4ab4-b1f3-47ec2327caeb-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"e2e2ff35-b569-4ab4-b1f3-47ec2327caeb\") " pod="openstack/openstack-cell1-galera-0" Feb 27 17:28:20 crc kubenswrapper[4830]: I0227 17:28:20.987897 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-cd43fdb1-3ea1-42ef-82f7-e34d4f71ef1f\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cd43fdb1-3ea1-42ef-82f7-e34d4f71ef1f\") pod \"openstack-cell1-galera-0\" (UID: \"e2e2ff35-b569-4ab4-b1f3-47ec2327caeb\") " pod="openstack/openstack-cell1-galera-0" Feb 27 17:28:20 crc kubenswrapper[4830]: I0227 17:28:20.987921 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/e2e2ff35-b569-4ab4-b1f3-47ec2327caeb-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"e2e2ff35-b569-4ab4-b1f3-47ec2327caeb\") " pod="openstack/openstack-cell1-galera-0" Feb 27 17:28:20 crc kubenswrapper[4830]: I0227 17:28:20.987971 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2e2ff35-b569-4ab4-b1f3-47ec2327caeb-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"e2e2ff35-b569-4ab4-b1f3-47ec2327caeb\") " pod="openstack/openstack-cell1-galera-0" Feb 27 17:28:20 crc kubenswrapper[4830]: I0227 17:28:20.987994 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9lsp\" (UniqueName: \"kubernetes.io/projected/e2e2ff35-b569-4ab4-b1f3-47ec2327caeb-kube-api-access-z9lsp\") pod \"openstack-cell1-galera-0\" (UID: \"e2e2ff35-b569-4ab4-b1f3-47ec2327caeb\") " pod="openstack/openstack-cell1-galera-0" Feb 27 17:28:20 crc kubenswrapper[4830]: I0227 17:28:20.988013 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2e2ff35-b569-4ab4-b1f3-47ec2327caeb-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"e2e2ff35-b569-4ab4-b1f3-47ec2327caeb\") " pod="openstack/openstack-cell1-galera-0" Feb 27 17:28:20 crc kubenswrapper[4830]: I0227 17:28:20.988034 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: 
\"kubernetes.io/configmap/e2e2ff35-b569-4ab4-b1f3-47ec2327caeb-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"e2e2ff35-b569-4ab4-b1f3-47ec2327caeb\") " pod="openstack/openstack-cell1-galera-0" Feb 27 17:28:20 crc kubenswrapper[4830]: I0227 17:28:20.988712 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/e2e2ff35-b569-4ab4-b1f3-47ec2327caeb-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"e2e2ff35-b569-4ab4-b1f3-47ec2327caeb\") " pod="openstack/openstack-cell1-galera-0" Feb 27 17:28:20 crc kubenswrapper[4830]: I0227 17:28:20.988974 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e2e2ff35-b569-4ab4-b1f3-47ec2327caeb-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"e2e2ff35-b569-4ab4-b1f3-47ec2327caeb\") " pod="openstack/openstack-cell1-galera-0" Feb 27 17:28:20 crc kubenswrapper[4830]: I0227 17:28:20.989115 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/e2e2ff35-b569-4ab4-b1f3-47ec2327caeb-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"e2e2ff35-b569-4ab4-b1f3-47ec2327caeb\") " pod="openstack/openstack-cell1-galera-0" Feb 27 17:28:20 crc kubenswrapper[4830]: I0227 17:28:20.989785 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e2e2ff35-b569-4ab4-b1f3-47ec2327caeb-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"e2e2ff35-b569-4ab4-b1f3-47ec2327caeb\") " pod="openstack/openstack-cell1-galera-0" Feb 27 17:28:20 crc kubenswrapper[4830]: I0227 17:28:20.997020 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2e2ff35-b569-4ab4-b1f3-47ec2327caeb-galera-tls-certs\") pod 
\"openstack-cell1-galera-0\" (UID: \"e2e2ff35-b569-4ab4-b1f3-47ec2327caeb\") " pod="openstack/openstack-cell1-galera-0" Feb 27 17:28:20 crc kubenswrapper[4830]: I0227 17:28:20.997906 4830 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 27 17:28:20 crc kubenswrapper[4830]: I0227 17:28:20.998070 4830 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-cd43fdb1-3ea1-42ef-82f7-e34d4f71ef1f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cd43fdb1-3ea1-42ef-82f7-e34d4f71ef1f\") pod \"openstack-cell1-galera-0\" (UID: \"e2e2ff35-b569-4ab4-b1f3-47ec2327caeb\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/176b1fc741549c2dc27fb23ff41b780d95886434fdd3cf50197852b9f6a1261c/globalmount\"" pod="openstack/openstack-cell1-galera-0" Feb 27 17:28:21 crc kubenswrapper[4830]: I0227 17:28:21.000151 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2e2ff35-b569-4ab4-b1f3-47ec2327caeb-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"e2e2ff35-b569-4ab4-b1f3-47ec2327caeb\") " pod="openstack/openstack-cell1-galera-0" Feb 27 17:28:21 crc kubenswrapper[4830]: I0227 17:28:21.016084 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9lsp\" (UniqueName: \"kubernetes.io/projected/e2e2ff35-b569-4ab4-b1f3-47ec2327caeb-kube-api-access-z9lsp\") pod \"openstack-cell1-galera-0\" (UID: \"e2e2ff35-b569-4ab4-b1f3-47ec2327caeb\") " pod="openstack/openstack-cell1-galera-0" Feb 27 17:28:21 crc kubenswrapper[4830]: I0227 17:28:21.038475 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-cd43fdb1-3ea1-42ef-82f7-e34d4f71ef1f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cd43fdb1-3ea1-42ef-82f7-e34d4f71ef1f\") pod \"openstack-cell1-galera-0\" 
(UID: \"e2e2ff35-b569-4ab4-b1f3-47ec2327caeb\") " pod="openstack/openstack-cell1-galera-0" Feb 27 17:28:21 crc kubenswrapper[4830]: I0227 17:28:21.050936 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 27 17:28:21 crc kubenswrapper[4830]: I0227 17:28:21.459514 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"74c21e05-7e2b-4653-b6fa-a9a814716cc1","Type":"ContainerStarted","Data":"8ecfd46894705d9115f8e2a9db4b05fc2a90057a326a7a75eb0e77172d1a6311"} Feb 27 17:28:21 crc kubenswrapper[4830]: I0227 17:28:21.460138 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"74c21e05-7e2b-4653-b6fa-a9a814716cc1","Type":"ContainerStarted","Data":"fd2e22615d30e659e3d94d472954bed9e5c30970bd1c0c431680e28d4b96c7c7"} Feb 27 17:28:21 crc kubenswrapper[4830]: I0227 17:28:21.461853 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"bce2dd2f-e667-4d8a-bc01-6f0a0e6ff61f","Type":"ContainerStarted","Data":"15671b859bb553bb9640aad04b6323abbd5e3a905fc972d50ced9b9d9bbff8fa"} Feb 27 17:28:21 crc kubenswrapper[4830]: I0227 17:28:21.467178 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"f185a288-a581-46e4-8ed5-d0ce81a59f00","Type":"ContainerStarted","Data":"ac9cb321c008ad1f6c9ed776e1ae9d8276b1ce3c6fb4858138f7c50f9f57e7a1"} Feb 27 17:28:21 crc kubenswrapper[4830]: I0227 17:28:21.467265 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"f185a288-a581-46e4-8ed5-d0ce81a59f00","Type":"ContainerStarted","Data":"219dacd8184040dbcc1e047299393a10561ba4c4a58f7c26ed07abed57c81789"} Feb 27 17:28:21 crc kubenswrapper[4830]: I0227 17:28:21.467482 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Feb 27 17:28:21 crc kubenswrapper[4830]: I0227 
17:28:21.550125 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=2.550099693 podStartE2EDuration="2.550099693s" podCreationTimestamp="2026-02-27 17:28:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 17:28:21.542620756 +0000 UTC m=+4897.631893229" watchObservedRunningTime="2026-02-27 17:28:21.550099693 +0000 UTC m=+4897.639372196" Feb 27 17:28:21 crc kubenswrapper[4830]: I0227 17:28:21.605693 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 27 17:28:22 crc kubenswrapper[4830]: I0227 17:28:22.481386 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"e2e2ff35-b569-4ab4-b1f3-47ec2327caeb","Type":"ContainerStarted","Data":"b96247c7046dca0c5dd3fc97485dfdc2ef716212ad9be819a168ebb022607d71"} Feb 27 17:28:22 crc kubenswrapper[4830]: I0227 17:28:22.482076 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"e2e2ff35-b569-4ab4-b1f3-47ec2327caeb","Type":"ContainerStarted","Data":"190374fb30480adb75940007b04d7b4b45db2397951792686280a26e152b5d41"} Feb 27 17:28:25 crc kubenswrapper[4830]: I0227 17:28:25.135212 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Feb 27 17:28:25 crc kubenswrapper[4830]: I0227 17:28:25.512068 4830 generic.go:334] "Generic (PLEG): container finished" podID="74c21e05-7e2b-4653-b6fa-a9a814716cc1" containerID="8ecfd46894705d9115f8e2a9db4b05fc2a90057a326a7a75eb0e77172d1a6311" exitCode=0 Feb 27 17:28:25 crc kubenswrapper[4830]: I0227 17:28:25.512154 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" 
event={"ID":"74c21e05-7e2b-4653-b6fa-a9a814716cc1","Type":"ContainerDied","Data":"8ecfd46894705d9115f8e2a9db4b05fc2a90057a326a7a75eb0e77172d1a6311"} Feb 27 17:28:26 crc kubenswrapper[4830]: I0227 17:28:26.522637 4830 generic.go:334] "Generic (PLEG): container finished" podID="e2e2ff35-b569-4ab4-b1f3-47ec2327caeb" containerID="b96247c7046dca0c5dd3fc97485dfdc2ef716212ad9be819a168ebb022607d71" exitCode=0 Feb 27 17:28:26 crc kubenswrapper[4830]: I0227 17:28:26.522791 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"e2e2ff35-b569-4ab4-b1f3-47ec2327caeb","Type":"ContainerDied","Data":"b96247c7046dca0c5dd3fc97485dfdc2ef716212ad9be819a168ebb022607d71"} Feb 27 17:28:26 crc kubenswrapper[4830]: I0227 17:28:26.527918 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"74c21e05-7e2b-4653-b6fa-a9a814716cc1","Type":"ContainerStarted","Data":"af5843517a4ef4889e4d3f12ebe406e1ce7be6ae3979d26791d5aaa6cc7147a1"} Feb 27 17:28:26 crc kubenswrapper[4830]: I0227 17:28:26.602009 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=8.601984186 podStartE2EDuration="8.601984186s" podCreationTimestamp="2026-02-27 17:28:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 17:28:26.580524247 +0000 UTC m=+4902.669796750" watchObservedRunningTime="2026-02-27 17:28:26.601984186 +0000 UTC m=+4902.691256659" Feb 27 17:28:27 crc kubenswrapper[4830]: I0227 17:28:27.112248 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-98ddfc8f-djh5r" Feb 27 17:28:27 crc kubenswrapper[4830]: I0227 17:28:27.130632 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5d7b5456f5-m74vx" Feb 27 17:28:27 crc kubenswrapper[4830]: I0227 
17:28:27.223283 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5d7b5456f5-m74vx"] Feb 27 17:28:27 crc kubenswrapper[4830]: I0227 17:28:27.537509 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"e2e2ff35-b569-4ab4-b1f3-47ec2327caeb","Type":"ContainerStarted","Data":"90415a24e3a81fa6b996ed26164f9b270fdb910e4d769565744a3ecd285cb26f"} Feb 27 17:28:27 crc kubenswrapper[4830]: I0227 17:28:27.537678 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5d7b5456f5-m74vx" podUID="d3feb34d-77f7-457c-bf94-37680d7cf3a3" containerName="dnsmasq-dns" containerID="cri-o://ca4554ba35ee5b5788d15f0391475f172b4528839da759773125abe85b16d34a" gracePeriod=10 Feb 27 17:28:27 crc kubenswrapper[4830]: I0227 17:28:27.568643 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=8.56862172 podStartE2EDuration="8.56862172s" podCreationTimestamp="2026-02-27 17:28:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 17:28:27.565088726 +0000 UTC m=+4903.654361189" watchObservedRunningTime="2026-02-27 17:28:27.56862172 +0000 UTC m=+4903.657894183" Feb 27 17:28:28 crc kubenswrapper[4830]: I0227 17:28:28.028800 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5d7b5456f5-m74vx" Feb 27 17:28:28 crc kubenswrapper[4830]: I0227 17:28:28.087905 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3feb34d-77f7-457c-bf94-37680d7cf3a3-config\") pod \"d3feb34d-77f7-457c-bf94-37680d7cf3a3\" (UID: \"d3feb34d-77f7-457c-bf94-37680d7cf3a3\") " Feb 27 17:28:28 crc kubenswrapper[4830]: I0227 17:28:28.088017 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h867c\" (UniqueName: \"kubernetes.io/projected/d3feb34d-77f7-457c-bf94-37680d7cf3a3-kube-api-access-h867c\") pod \"d3feb34d-77f7-457c-bf94-37680d7cf3a3\" (UID: \"d3feb34d-77f7-457c-bf94-37680d7cf3a3\") " Feb 27 17:28:28 crc kubenswrapper[4830]: I0227 17:28:28.088045 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d3feb34d-77f7-457c-bf94-37680d7cf3a3-dns-svc\") pod \"d3feb34d-77f7-457c-bf94-37680d7cf3a3\" (UID: \"d3feb34d-77f7-457c-bf94-37680d7cf3a3\") " Feb 27 17:28:28 crc kubenswrapper[4830]: I0227 17:28:28.491604 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3feb34d-77f7-457c-bf94-37680d7cf3a3-kube-api-access-h867c" (OuterVolumeSpecName: "kube-api-access-h867c") pod "d3feb34d-77f7-457c-bf94-37680d7cf3a3" (UID: "d3feb34d-77f7-457c-bf94-37680d7cf3a3"). InnerVolumeSpecName "kube-api-access-h867c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:28:28 crc kubenswrapper[4830]: I0227 17:28:28.497299 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h867c\" (UniqueName: \"kubernetes.io/projected/d3feb34d-77f7-457c-bf94-37680d7cf3a3-kube-api-access-h867c\") on node \"crc\" DevicePath \"\"" Feb 27 17:28:28 crc kubenswrapper[4830]: I0227 17:28:28.555608 4830 generic.go:334] "Generic (PLEG): container finished" podID="d3feb34d-77f7-457c-bf94-37680d7cf3a3" containerID="ca4554ba35ee5b5788d15f0391475f172b4528839da759773125abe85b16d34a" exitCode=0 Feb 27 17:28:28 crc kubenswrapper[4830]: I0227 17:28:28.555680 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d7b5456f5-m74vx" Feb 27 17:28:28 crc kubenswrapper[4830]: I0227 17:28:28.555678 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d7b5456f5-m74vx" event={"ID":"d3feb34d-77f7-457c-bf94-37680d7cf3a3","Type":"ContainerDied","Data":"ca4554ba35ee5b5788d15f0391475f172b4528839da759773125abe85b16d34a"} Feb 27 17:28:28 crc kubenswrapper[4830]: I0227 17:28:28.555767 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d7b5456f5-m74vx" event={"ID":"d3feb34d-77f7-457c-bf94-37680d7cf3a3","Type":"ContainerDied","Data":"e32d4a5f4bb7216a14e9144c56423616f8e18fa8472664b61294ddccc111117a"} Feb 27 17:28:28 crc kubenswrapper[4830]: I0227 17:28:28.555805 4830 scope.go:117] "RemoveContainer" containerID="ca4554ba35ee5b5788d15f0391475f172b4528839da759773125abe85b16d34a" Feb 27 17:28:28 crc kubenswrapper[4830]: I0227 17:28:28.719464 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3feb34d-77f7-457c-bf94-37680d7cf3a3-config" (OuterVolumeSpecName: "config") pod "d3feb34d-77f7-457c-bf94-37680d7cf3a3" (UID: "d3feb34d-77f7-457c-bf94-37680d7cf3a3"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:28:28 crc kubenswrapper[4830]: I0227 17:28:28.756139 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3feb34d-77f7-457c-bf94-37680d7cf3a3-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d3feb34d-77f7-457c-bf94-37680d7cf3a3" (UID: "d3feb34d-77f7-457c-bf94-37680d7cf3a3"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:28:28 crc kubenswrapper[4830]: I0227 17:28:28.799718 4830 scope.go:117] "RemoveContainer" containerID="c37b09ff2362124485dc57a73b9a6f43a06b428a210a8f7b20a73c0714d04848" Feb 27 17:28:28 crc kubenswrapper[4830]: I0227 17:28:28.804161 4830 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3feb34d-77f7-457c-bf94-37680d7cf3a3-config\") on node \"crc\" DevicePath \"\"" Feb 27 17:28:28 crc kubenswrapper[4830]: I0227 17:28:28.804191 4830 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d3feb34d-77f7-457c-bf94-37680d7cf3a3-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 27 17:28:28 crc kubenswrapper[4830]: I0227 17:28:28.828690 4830 scope.go:117] "RemoveContainer" containerID="ca4554ba35ee5b5788d15f0391475f172b4528839da759773125abe85b16d34a" Feb 27 17:28:28 crc kubenswrapper[4830]: E0227 17:28:28.829559 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca4554ba35ee5b5788d15f0391475f172b4528839da759773125abe85b16d34a\": container with ID starting with ca4554ba35ee5b5788d15f0391475f172b4528839da759773125abe85b16d34a not found: ID does not exist" containerID="ca4554ba35ee5b5788d15f0391475f172b4528839da759773125abe85b16d34a" Feb 27 17:28:28 crc kubenswrapper[4830]: I0227 17:28:28.829629 4830 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"ca4554ba35ee5b5788d15f0391475f172b4528839da759773125abe85b16d34a"} err="failed to get container status \"ca4554ba35ee5b5788d15f0391475f172b4528839da759773125abe85b16d34a\": rpc error: code = NotFound desc = could not find container \"ca4554ba35ee5b5788d15f0391475f172b4528839da759773125abe85b16d34a\": container with ID starting with ca4554ba35ee5b5788d15f0391475f172b4528839da759773125abe85b16d34a not found: ID does not exist" Feb 27 17:28:28 crc kubenswrapper[4830]: I0227 17:28:28.829677 4830 scope.go:117] "RemoveContainer" containerID="c37b09ff2362124485dc57a73b9a6f43a06b428a210a8f7b20a73c0714d04848" Feb 27 17:28:28 crc kubenswrapper[4830]: E0227 17:28:28.830308 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c37b09ff2362124485dc57a73b9a6f43a06b428a210a8f7b20a73c0714d04848\": container with ID starting with c37b09ff2362124485dc57a73b9a6f43a06b428a210a8f7b20a73c0714d04848 not found: ID does not exist" containerID="c37b09ff2362124485dc57a73b9a6f43a06b428a210a8f7b20a73c0714d04848" Feb 27 17:28:28 crc kubenswrapper[4830]: I0227 17:28:28.830376 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c37b09ff2362124485dc57a73b9a6f43a06b428a210a8f7b20a73c0714d04848"} err="failed to get container status \"c37b09ff2362124485dc57a73b9a6f43a06b428a210a8f7b20a73c0714d04848\": rpc error: code = NotFound desc = could not find container \"c37b09ff2362124485dc57a73b9a6f43a06b428a210a8f7b20a73c0714d04848\": container with ID starting with c37b09ff2362124485dc57a73b9a6f43a06b428a210a8f7b20a73c0714d04848 not found: ID does not exist" Feb 27 17:28:28 crc kubenswrapper[4830]: I0227 17:28:28.886856 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5d7b5456f5-m74vx"] Feb 27 17:28:28 crc kubenswrapper[4830]: I0227 17:28:28.915833 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/dnsmasq-dns-5d7b5456f5-m74vx"] Feb 27 17:28:30 crc kubenswrapper[4830]: I0227 17:28:30.222723 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Feb 27 17:28:30 crc kubenswrapper[4830]: I0227 17:28:30.224980 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Feb 27 17:28:30 crc kubenswrapper[4830]: I0227 17:28:30.349915 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Feb 27 17:28:30 crc kubenswrapper[4830]: I0227 17:28:30.777372 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3feb34d-77f7-457c-bf94-37680d7cf3a3" path="/var/lib/kubelet/pods/d3feb34d-77f7-457c-bf94-37680d7cf3a3/volumes" Feb 27 17:28:30 crc kubenswrapper[4830]: I0227 17:28:30.994814 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Feb 27 17:28:31 crc kubenswrapper[4830]: I0227 17:28:31.051234 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Feb 27 17:28:31 crc kubenswrapper[4830]: I0227 17:28:31.065248 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Feb 27 17:28:32 crc kubenswrapper[4830]: E0227 17:28:32.276226 4830 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.129.56.36:53284->38.129.56.36:42557: write tcp 38.129.56.36:53284->38.129.56.36:42557: write: broken pipe Feb 27 17:28:33 crc kubenswrapper[4830]: I0227 17:28:33.678575 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Feb 27 17:28:33 crc kubenswrapper[4830]: I0227 17:28:33.814572 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Feb 27 17:28:38 crc kubenswrapper[4830]: I0227 
17:28:38.286779 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-bxm8w"] Feb 27 17:28:38 crc kubenswrapper[4830]: E0227 17:28:38.287804 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3feb34d-77f7-457c-bf94-37680d7cf3a3" containerName="init" Feb 27 17:28:38 crc kubenswrapper[4830]: I0227 17:28:38.287821 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3feb34d-77f7-457c-bf94-37680d7cf3a3" containerName="init" Feb 27 17:28:38 crc kubenswrapper[4830]: E0227 17:28:38.287857 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3feb34d-77f7-457c-bf94-37680d7cf3a3" containerName="dnsmasq-dns" Feb 27 17:28:38 crc kubenswrapper[4830]: I0227 17:28:38.287866 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3feb34d-77f7-457c-bf94-37680d7cf3a3" containerName="dnsmasq-dns" Feb 27 17:28:38 crc kubenswrapper[4830]: I0227 17:28:38.290371 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3feb34d-77f7-457c-bf94-37680d7cf3a3" containerName="dnsmasq-dns" Feb 27 17:28:38 crc kubenswrapper[4830]: I0227 17:28:38.293102 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-bxm8w" Feb 27 17:28:38 crc kubenswrapper[4830]: I0227 17:28:38.298879 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Feb 27 17:28:38 crc kubenswrapper[4830]: I0227 17:28:38.312698 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8b2n\" (UniqueName: \"kubernetes.io/projected/4b259268-412b-4481-a8f6-023c2e689acb-kube-api-access-x8b2n\") pod \"root-account-create-update-bxm8w\" (UID: \"4b259268-412b-4481-a8f6-023c2e689acb\") " pod="openstack/root-account-create-update-bxm8w" Feb 27 17:28:38 crc kubenswrapper[4830]: I0227 17:28:38.312814 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4b259268-412b-4481-a8f6-023c2e689acb-operator-scripts\") pod \"root-account-create-update-bxm8w\" (UID: \"4b259268-412b-4481-a8f6-023c2e689acb\") " pod="openstack/root-account-create-update-bxm8w" Feb 27 17:28:38 crc kubenswrapper[4830]: I0227 17:28:38.328190 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-bxm8w"] Feb 27 17:28:38 crc kubenswrapper[4830]: I0227 17:28:38.414842 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4b259268-412b-4481-a8f6-023c2e689acb-operator-scripts\") pod \"root-account-create-update-bxm8w\" (UID: \"4b259268-412b-4481-a8f6-023c2e689acb\") " pod="openstack/root-account-create-update-bxm8w" Feb 27 17:28:38 crc kubenswrapper[4830]: I0227 17:28:38.414978 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8b2n\" (UniqueName: \"kubernetes.io/projected/4b259268-412b-4481-a8f6-023c2e689acb-kube-api-access-x8b2n\") pod \"root-account-create-update-bxm8w\" (UID: 
\"4b259268-412b-4481-a8f6-023c2e689acb\") " pod="openstack/root-account-create-update-bxm8w" Feb 27 17:28:38 crc kubenswrapper[4830]: I0227 17:28:38.416178 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4b259268-412b-4481-a8f6-023c2e689acb-operator-scripts\") pod \"root-account-create-update-bxm8w\" (UID: \"4b259268-412b-4481-a8f6-023c2e689acb\") " pod="openstack/root-account-create-update-bxm8w" Feb 27 17:28:38 crc kubenswrapper[4830]: I0227 17:28:38.447694 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8b2n\" (UniqueName: \"kubernetes.io/projected/4b259268-412b-4481-a8f6-023c2e689acb-kube-api-access-x8b2n\") pod \"root-account-create-update-bxm8w\" (UID: \"4b259268-412b-4481-a8f6-023c2e689acb\") " pod="openstack/root-account-create-update-bxm8w" Feb 27 17:28:38 crc kubenswrapper[4830]: I0227 17:28:38.636344 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-bxm8w" Feb 27 17:28:38 crc kubenswrapper[4830]: I0227 17:28:38.977984 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-bxm8w"] Feb 27 17:28:39 crc kubenswrapper[4830]: I0227 17:28:39.675823 4830 generic.go:334] "Generic (PLEG): container finished" podID="4b259268-412b-4481-a8f6-023c2e689acb" containerID="a480bad0cbfab70d22afdc63f371aa0c1ccec24ce2eea886522351c227e7342f" exitCode=0 Feb 27 17:28:39 crc kubenswrapper[4830]: I0227 17:28:39.675897 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-bxm8w" event={"ID":"4b259268-412b-4481-a8f6-023c2e689acb","Type":"ContainerDied","Data":"a480bad0cbfab70d22afdc63f371aa0c1ccec24ce2eea886522351c227e7342f"} Feb 27 17:28:39 crc kubenswrapper[4830]: I0227 17:28:39.675996 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-bxm8w" event={"ID":"4b259268-412b-4481-a8f6-023c2e689acb","Type":"ContainerStarted","Data":"9edaac3e2db4201a10ef66fa058ea8ade0b088706c7c625baa7ec25c4fd51f20"} Feb 27 17:28:41 crc kubenswrapper[4830]: I0227 17:28:41.132526 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-bxm8w" Feb 27 17:28:41 crc kubenswrapper[4830]: I0227 17:28:41.265926 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4b259268-412b-4481-a8f6-023c2e689acb-operator-scripts\") pod \"4b259268-412b-4481-a8f6-023c2e689acb\" (UID: \"4b259268-412b-4481-a8f6-023c2e689acb\") " Feb 27 17:28:41 crc kubenswrapper[4830]: I0227 17:28:41.266093 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x8b2n\" (UniqueName: \"kubernetes.io/projected/4b259268-412b-4481-a8f6-023c2e689acb-kube-api-access-x8b2n\") pod \"4b259268-412b-4481-a8f6-023c2e689acb\" (UID: \"4b259268-412b-4481-a8f6-023c2e689acb\") " Feb 27 17:28:41 crc kubenswrapper[4830]: I0227 17:28:41.267226 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4b259268-412b-4481-a8f6-023c2e689acb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4b259268-412b-4481-a8f6-023c2e689acb" (UID: "4b259268-412b-4481-a8f6-023c2e689acb"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:28:41 crc kubenswrapper[4830]: I0227 17:28:41.272897 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b259268-412b-4481-a8f6-023c2e689acb-kube-api-access-x8b2n" (OuterVolumeSpecName: "kube-api-access-x8b2n") pod "4b259268-412b-4481-a8f6-023c2e689acb" (UID: "4b259268-412b-4481-a8f6-023c2e689acb"). InnerVolumeSpecName "kube-api-access-x8b2n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:28:41 crc kubenswrapper[4830]: I0227 17:28:41.368440 4830 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4b259268-412b-4481-a8f6-023c2e689acb-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 17:28:41 crc kubenswrapper[4830]: I0227 17:28:41.368467 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x8b2n\" (UniqueName: \"kubernetes.io/projected/4b259268-412b-4481-a8f6-023c2e689acb-kube-api-access-x8b2n\") on node \"crc\" DevicePath \"\"" Feb 27 17:28:41 crc kubenswrapper[4830]: I0227 17:28:41.699820 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-bxm8w" event={"ID":"4b259268-412b-4481-a8f6-023c2e689acb","Type":"ContainerDied","Data":"9edaac3e2db4201a10ef66fa058ea8ade0b088706c7c625baa7ec25c4fd51f20"} Feb 27 17:28:41 crc kubenswrapper[4830]: I0227 17:28:41.700296 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9edaac3e2db4201a10ef66fa058ea8ade0b088706c7c625baa7ec25c4fd51f20" Feb 27 17:28:41 crc kubenswrapper[4830]: I0227 17:28:41.699932 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-bxm8w" Feb 27 17:28:44 crc kubenswrapper[4830]: I0227 17:28:44.701455 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-bxm8w"] Feb 27 17:28:44 crc kubenswrapper[4830]: I0227 17:28:44.712191 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-bxm8w"] Feb 27 17:28:44 crc kubenswrapper[4830]: I0227 17:28:44.781506 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b259268-412b-4481-a8f6-023c2e689acb" path="/var/lib/kubelet/pods/4b259268-412b-4481-a8f6-023c2e689acb/volumes" Feb 27 17:28:47 crc kubenswrapper[4830]: I0227 17:28:47.994260 4830 scope.go:117] "RemoveContainer" containerID="9873a9db75d9fe76533b217770ee9ec4f690845a88e8f2f6d2b531d8d0545044" Feb 27 17:28:49 crc kubenswrapper[4830]: I0227 17:28:49.711120 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-5ntzt"] Feb 27 17:28:49 crc kubenswrapper[4830]: E0227 17:28:49.711776 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b259268-412b-4481-a8f6-023c2e689acb" containerName="mariadb-account-create-update" Feb 27 17:28:49 crc kubenswrapper[4830]: I0227 17:28:49.711816 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b259268-412b-4481-a8f6-023c2e689acb" containerName="mariadb-account-create-update" Feb 27 17:28:49 crc kubenswrapper[4830]: I0227 17:28:49.712270 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b259268-412b-4481-a8f6-023c2e689acb" containerName="mariadb-account-create-update" Feb 27 17:28:49 crc kubenswrapper[4830]: I0227 17:28:49.713810 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-5ntzt" Feb 27 17:28:49 crc kubenswrapper[4830]: I0227 17:28:49.718622 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Feb 27 17:28:49 crc kubenswrapper[4830]: I0227 17:28:49.735207 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-5ntzt"] Feb 27 17:28:49 crc kubenswrapper[4830]: I0227 17:28:49.736431 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrhj4\" (UniqueName: \"kubernetes.io/projected/cfce94bc-640b-4eb2-88a1-b77db6d2dd03-kube-api-access-vrhj4\") pod \"root-account-create-update-5ntzt\" (UID: \"cfce94bc-640b-4eb2-88a1-b77db6d2dd03\") " pod="openstack/root-account-create-update-5ntzt" Feb 27 17:28:49 crc kubenswrapper[4830]: I0227 17:28:49.736658 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cfce94bc-640b-4eb2-88a1-b77db6d2dd03-operator-scripts\") pod \"root-account-create-update-5ntzt\" (UID: \"cfce94bc-640b-4eb2-88a1-b77db6d2dd03\") " pod="openstack/root-account-create-update-5ntzt" Feb 27 17:28:49 crc kubenswrapper[4830]: I0227 17:28:49.838317 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cfce94bc-640b-4eb2-88a1-b77db6d2dd03-operator-scripts\") pod \"root-account-create-update-5ntzt\" (UID: \"cfce94bc-640b-4eb2-88a1-b77db6d2dd03\") " pod="openstack/root-account-create-update-5ntzt" Feb 27 17:28:49 crc kubenswrapper[4830]: I0227 17:28:49.838685 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vrhj4\" (UniqueName: \"kubernetes.io/projected/cfce94bc-640b-4eb2-88a1-b77db6d2dd03-kube-api-access-vrhj4\") pod \"root-account-create-update-5ntzt\" (UID: 
\"cfce94bc-640b-4eb2-88a1-b77db6d2dd03\") " pod="openstack/root-account-create-update-5ntzt" Feb 27 17:28:49 crc kubenswrapper[4830]: I0227 17:28:49.839845 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cfce94bc-640b-4eb2-88a1-b77db6d2dd03-operator-scripts\") pod \"root-account-create-update-5ntzt\" (UID: \"cfce94bc-640b-4eb2-88a1-b77db6d2dd03\") " pod="openstack/root-account-create-update-5ntzt" Feb 27 17:28:49 crc kubenswrapper[4830]: I0227 17:28:49.885474 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrhj4\" (UniqueName: \"kubernetes.io/projected/cfce94bc-640b-4eb2-88a1-b77db6d2dd03-kube-api-access-vrhj4\") pod \"root-account-create-update-5ntzt\" (UID: \"cfce94bc-640b-4eb2-88a1-b77db6d2dd03\") " pod="openstack/root-account-create-update-5ntzt" Feb 27 17:28:50 crc kubenswrapper[4830]: I0227 17:28:50.060036 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-5ntzt" Feb 27 17:28:50 crc kubenswrapper[4830]: I0227 17:28:50.522054 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-5ntzt"] Feb 27 17:28:50 crc kubenswrapper[4830]: W0227 17:28:50.531099 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcfce94bc_640b_4eb2_88a1_b77db6d2dd03.slice/crio-a1cc90a3456242e805c67673a0a8f9f690823f30938405c3236699ac88be52be WatchSource:0}: Error finding container a1cc90a3456242e805c67673a0a8f9f690823f30938405c3236699ac88be52be: Status 404 returned error can't find the container with id a1cc90a3456242e805c67673a0a8f9f690823f30938405c3236699ac88be52be Feb 27 17:28:50 crc kubenswrapper[4830]: I0227 17:28:50.811842 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-5ntzt" event={"ID":"cfce94bc-640b-4eb2-88a1-b77db6d2dd03","Type":"ContainerStarted","Data":"26b377974242a081e8eaee1435ffab810e202e6e94f1f90f1cfedc4d2dfe3e20"} Feb 27 17:28:50 crc kubenswrapper[4830]: I0227 17:28:50.812218 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-5ntzt" event={"ID":"cfce94bc-640b-4eb2-88a1-b77db6d2dd03","Type":"ContainerStarted","Data":"a1cc90a3456242e805c67673a0a8f9f690823f30938405c3236699ac88be52be"} Feb 27 17:28:50 crc kubenswrapper[4830]: I0227 17:28:50.835930 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-5ntzt" podStartSLOduration=1.835910642 podStartE2EDuration="1.835910642s" podCreationTimestamp="2026-02-27 17:28:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 17:28:50.831771604 +0000 UTC m=+4926.921044117" watchObservedRunningTime="2026-02-27 17:28:50.835910642 +0000 UTC m=+4926.925183125" Feb 
27 17:28:51 crc kubenswrapper[4830]: I0227 17:28:51.825533 4830 generic.go:334] "Generic (PLEG): container finished" podID="cfce94bc-640b-4eb2-88a1-b77db6d2dd03" containerID="26b377974242a081e8eaee1435ffab810e202e6e94f1f90f1cfedc4d2dfe3e20" exitCode=0 Feb 27 17:28:51 crc kubenswrapper[4830]: I0227 17:28:51.825599 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-5ntzt" event={"ID":"cfce94bc-640b-4eb2-88a1-b77db6d2dd03","Type":"ContainerDied","Data":"26b377974242a081e8eaee1435ffab810e202e6e94f1f90f1cfedc4d2dfe3e20"} Feb 27 17:28:52 crc kubenswrapper[4830]: I0227 17:28:52.838466 4830 generic.go:334] "Generic (PLEG): container finished" podID="372cd8c4-0006-4cea-8408-2fe8bbb4844b" containerID="74d5999edd4b0b6d813de11160d8c361ea0a022b3cf223667ce5fada4e7f0549" exitCode=0 Feb 27 17:28:52 crc kubenswrapper[4830]: I0227 17:28:52.838542 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"372cd8c4-0006-4cea-8408-2fe8bbb4844b","Type":"ContainerDied","Data":"74d5999edd4b0b6d813de11160d8c361ea0a022b3cf223667ce5fada4e7f0549"} Feb 27 17:28:53 crc kubenswrapper[4830]: I0227 17:28:53.256641 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-5ntzt" Feb 27 17:28:53 crc kubenswrapper[4830]: I0227 17:28:53.392872 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cfce94bc-640b-4eb2-88a1-b77db6d2dd03-operator-scripts\") pod \"cfce94bc-640b-4eb2-88a1-b77db6d2dd03\" (UID: \"cfce94bc-640b-4eb2-88a1-b77db6d2dd03\") " Feb 27 17:28:53 crc kubenswrapper[4830]: I0227 17:28:53.393249 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vrhj4\" (UniqueName: \"kubernetes.io/projected/cfce94bc-640b-4eb2-88a1-b77db6d2dd03-kube-api-access-vrhj4\") pod \"cfce94bc-640b-4eb2-88a1-b77db6d2dd03\" (UID: \"cfce94bc-640b-4eb2-88a1-b77db6d2dd03\") " Feb 27 17:28:53 crc kubenswrapper[4830]: I0227 17:28:53.394004 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cfce94bc-640b-4eb2-88a1-b77db6d2dd03-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cfce94bc-640b-4eb2-88a1-b77db6d2dd03" (UID: "cfce94bc-640b-4eb2-88a1-b77db6d2dd03"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:28:53 crc kubenswrapper[4830]: I0227 17:28:53.397798 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cfce94bc-640b-4eb2-88a1-b77db6d2dd03-kube-api-access-vrhj4" (OuterVolumeSpecName: "kube-api-access-vrhj4") pod "cfce94bc-640b-4eb2-88a1-b77db6d2dd03" (UID: "cfce94bc-640b-4eb2-88a1-b77db6d2dd03"). InnerVolumeSpecName "kube-api-access-vrhj4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:28:53 crc kubenswrapper[4830]: I0227 17:28:53.495735 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vrhj4\" (UniqueName: \"kubernetes.io/projected/cfce94bc-640b-4eb2-88a1-b77db6d2dd03-kube-api-access-vrhj4\") on node \"crc\" DevicePath \"\"" Feb 27 17:28:53 crc kubenswrapper[4830]: I0227 17:28:53.495794 4830 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cfce94bc-640b-4eb2-88a1-b77db6d2dd03-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 17:28:53 crc kubenswrapper[4830]: I0227 17:28:53.848914 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-5ntzt" Feb 27 17:28:53 crc kubenswrapper[4830]: I0227 17:28:53.848909 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-5ntzt" event={"ID":"cfce94bc-640b-4eb2-88a1-b77db6d2dd03","Type":"ContainerDied","Data":"a1cc90a3456242e805c67673a0a8f9f690823f30938405c3236699ac88be52be"} Feb 27 17:28:53 crc kubenswrapper[4830]: I0227 17:28:53.849056 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a1cc90a3456242e805c67673a0a8f9f690823f30938405c3236699ac88be52be" Feb 27 17:28:53 crc kubenswrapper[4830]: I0227 17:28:53.850485 4830 generic.go:334] "Generic (PLEG): container finished" podID="bce2dd2f-e667-4d8a-bc01-6f0a0e6ff61f" containerID="15671b859bb553bb9640aad04b6323abbd5e3a905fc972d50ced9b9d9bbff8fa" exitCode=0 Feb 27 17:28:53 crc kubenswrapper[4830]: I0227 17:28:53.850533 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"bce2dd2f-e667-4d8a-bc01-6f0a0e6ff61f","Type":"ContainerDied","Data":"15671b859bb553bb9640aad04b6323abbd5e3a905fc972d50ced9b9d9bbff8fa"} Feb 27 17:28:53 crc kubenswrapper[4830]: I0227 17:28:53.852687 4830 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"372cd8c4-0006-4cea-8408-2fe8bbb4844b","Type":"ContainerStarted","Data":"017a0cc8fc094c9b206c4c667a673755b404f9bd32252559a33eb31723c09288"} Feb 27 17:28:53 crc kubenswrapper[4830]: I0227 17:28:53.852906 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 27 17:28:53 crc kubenswrapper[4830]: I0227 17:28:53.916942 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=37.916922126 podStartE2EDuration="37.916922126s" podCreationTimestamp="2026-02-27 17:28:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 17:28:53.915985094 +0000 UTC m=+4930.005257567" watchObservedRunningTime="2026-02-27 17:28:53.916922126 +0000 UTC m=+4930.006194599" Feb 27 17:28:54 crc kubenswrapper[4830]: I0227 17:28:54.862111 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"bce2dd2f-e667-4d8a-bc01-6f0a0e6ff61f","Type":"ContainerStarted","Data":"bd4e4a7612e29b18fcc9b8d5657beb75c5fa5e5a48e3a6120adcdcb564ad8475"} Feb 27 17:28:54 crc kubenswrapper[4830]: I0227 17:28:54.862390 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 27 17:28:54 crc kubenswrapper[4830]: I0227 17:28:54.888011 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=38.887992615 podStartE2EDuration="38.887992615s" podCreationTimestamp="2026-02-27 17:28:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 17:28:54.886142431 +0000 UTC m=+4930.975414914" watchObservedRunningTime="2026-02-27 17:28:54.887992615 +0000 UTC m=+4930.977265068" Feb 27 
17:29:08 crc kubenswrapper[4830]: I0227 17:29:08.012392 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 27 17:29:08 crc kubenswrapper[4830]: I0227 17:29:08.294231 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 27 17:29:13 crc kubenswrapper[4830]: I0227 17:29:13.786247 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5b7946d7b9-68742"] Feb 27 17:29:13 crc kubenswrapper[4830]: E0227 17:29:13.787171 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfce94bc-640b-4eb2-88a1-b77db6d2dd03" containerName="mariadb-account-create-update" Feb 27 17:29:13 crc kubenswrapper[4830]: I0227 17:29:13.787187 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfce94bc-640b-4eb2-88a1-b77db6d2dd03" containerName="mariadb-account-create-update" Feb 27 17:29:13 crc kubenswrapper[4830]: I0227 17:29:13.787363 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="cfce94bc-640b-4eb2-88a1-b77db6d2dd03" containerName="mariadb-account-create-update" Feb 27 17:29:13 crc kubenswrapper[4830]: I0227 17:29:13.788264 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b7946d7b9-68742" Feb 27 17:29:13 crc kubenswrapper[4830]: I0227 17:29:13.813265 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b7946d7b9-68742"] Feb 27 17:29:13 crc kubenswrapper[4830]: I0227 17:29:13.856725 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f9d32d14-02d4-46b6-8949-d183cf055428-dns-svc\") pod \"dnsmasq-dns-5b7946d7b9-68742\" (UID: \"f9d32d14-02d4-46b6-8949-d183cf055428\") " pod="openstack/dnsmasq-dns-5b7946d7b9-68742" Feb 27 17:29:13 crc kubenswrapper[4830]: I0227 17:29:13.856811 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdd27\" (UniqueName: \"kubernetes.io/projected/f9d32d14-02d4-46b6-8949-d183cf055428-kube-api-access-rdd27\") pod \"dnsmasq-dns-5b7946d7b9-68742\" (UID: \"f9d32d14-02d4-46b6-8949-d183cf055428\") " pod="openstack/dnsmasq-dns-5b7946d7b9-68742" Feb 27 17:29:13 crc kubenswrapper[4830]: I0227 17:29:13.856865 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9d32d14-02d4-46b6-8949-d183cf055428-config\") pod \"dnsmasq-dns-5b7946d7b9-68742\" (UID: \"f9d32d14-02d4-46b6-8949-d183cf055428\") " pod="openstack/dnsmasq-dns-5b7946d7b9-68742" Feb 27 17:29:13 crc kubenswrapper[4830]: I0227 17:29:13.958599 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f9d32d14-02d4-46b6-8949-d183cf055428-dns-svc\") pod \"dnsmasq-dns-5b7946d7b9-68742\" (UID: \"f9d32d14-02d4-46b6-8949-d183cf055428\") " pod="openstack/dnsmasq-dns-5b7946d7b9-68742" Feb 27 17:29:13 crc kubenswrapper[4830]: I0227 17:29:13.958707 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdd27\" (UniqueName: 
\"kubernetes.io/projected/f9d32d14-02d4-46b6-8949-d183cf055428-kube-api-access-rdd27\") pod \"dnsmasq-dns-5b7946d7b9-68742\" (UID: \"f9d32d14-02d4-46b6-8949-d183cf055428\") " pod="openstack/dnsmasq-dns-5b7946d7b9-68742" Feb 27 17:29:13 crc kubenswrapper[4830]: I0227 17:29:13.958763 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9d32d14-02d4-46b6-8949-d183cf055428-config\") pod \"dnsmasq-dns-5b7946d7b9-68742\" (UID: \"f9d32d14-02d4-46b6-8949-d183cf055428\") " pod="openstack/dnsmasq-dns-5b7946d7b9-68742" Feb 27 17:29:13 crc kubenswrapper[4830]: I0227 17:29:13.960367 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9d32d14-02d4-46b6-8949-d183cf055428-config\") pod \"dnsmasq-dns-5b7946d7b9-68742\" (UID: \"f9d32d14-02d4-46b6-8949-d183cf055428\") " pod="openstack/dnsmasq-dns-5b7946d7b9-68742" Feb 27 17:29:13 crc kubenswrapper[4830]: I0227 17:29:13.960940 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f9d32d14-02d4-46b6-8949-d183cf055428-dns-svc\") pod \"dnsmasq-dns-5b7946d7b9-68742\" (UID: \"f9d32d14-02d4-46b6-8949-d183cf055428\") " pod="openstack/dnsmasq-dns-5b7946d7b9-68742" Feb 27 17:29:13 crc kubenswrapper[4830]: I0227 17:29:13.988627 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdd27\" (UniqueName: \"kubernetes.io/projected/f9d32d14-02d4-46b6-8949-d183cf055428-kube-api-access-rdd27\") pod \"dnsmasq-dns-5b7946d7b9-68742\" (UID: \"f9d32d14-02d4-46b6-8949-d183cf055428\") " pod="openstack/dnsmasq-dns-5b7946d7b9-68742" Feb 27 17:29:14 crc kubenswrapper[4830]: I0227 17:29:14.121459 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b7946d7b9-68742" Feb 27 17:29:14 crc kubenswrapper[4830]: I0227 17:29:14.417265 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b7946d7b9-68742"] Feb 27 17:29:14 crc kubenswrapper[4830]: I0227 17:29:14.651783 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 27 17:29:15 crc kubenswrapper[4830]: I0227 17:29:15.068631 4830 generic.go:334] "Generic (PLEG): container finished" podID="f9d32d14-02d4-46b6-8949-d183cf055428" containerID="dbcf98ff9ac9c7f2e652167587134f123b523487119042c835f8c68e8558e7db" exitCode=0 Feb 27 17:29:15 crc kubenswrapper[4830]: I0227 17:29:15.068674 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b7946d7b9-68742" event={"ID":"f9d32d14-02d4-46b6-8949-d183cf055428","Type":"ContainerDied","Data":"dbcf98ff9ac9c7f2e652167587134f123b523487119042c835f8c68e8558e7db"} Feb 27 17:29:15 crc kubenswrapper[4830]: I0227 17:29:15.068700 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b7946d7b9-68742" event={"ID":"f9d32d14-02d4-46b6-8949-d183cf055428","Type":"ContainerStarted","Data":"1069f06859508065d87df99d1eb23d6f1d28eeb4b602959cbfa5f2b43d5e58d1"} Feb 27 17:29:15 crc kubenswrapper[4830]: I0227 17:29:15.394508 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 27 17:29:16 crc kubenswrapper[4830]: I0227 17:29:16.080214 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b7946d7b9-68742" event={"ID":"f9d32d14-02d4-46b6-8949-d183cf055428","Type":"ContainerStarted","Data":"398a2068cd2085ee139379a1f89e4167dd96f986424eb802cf1c0618fcb22970"} Feb 27 17:29:16 crc kubenswrapper[4830]: I0227 17:29:16.081018 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5b7946d7b9-68742" Feb 27 17:29:16 crc kubenswrapper[4830]: I0227 17:29:16.111904 4830 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5b7946d7b9-68742" podStartSLOduration=3.111886739 podStartE2EDuration="3.111886739s" podCreationTimestamp="2026-02-27 17:29:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 17:29:16.104402452 +0000 UTC m=+4952.193674915" watchObservedRunningTime="2026-02-27 17:29:16.111886739 +0000 UTC m=+4952.201159202" Feb 27 17:29:16 crc kubenswrapper[4830]: I0227 17:29:16.366046 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="372cd8c4-0006-4cea-8408-2fe8bbb4844b" containerName="rabbitmq" containerID="cri-o://017a0cc8fc094c9b206c4c667a673755b404f9bd32252559a33eb31723c09288" gracePeriod=604799 Feb 27 17:29:17 crc kubenswrapper[4830]: I0227 17:29:17.323847 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="bce2dd2f-e667-4d8a-bc01-6f0a0e6ff61f" containerName="rabbitmq" containerID="cri-o://bd4e4a7612e29b18fcc9b8d5657beb75c5fa5e5a48e3a6120adcdcb564ad8475" gracePeriod=604799 Feb 27 17:29:18 crc kubenswrapper[4830]: I0227 17:29:18.008169 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="372cd8c4-0006-4cea-8408-2fe8bbb4844b" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.1.22:5672: connect: connection refused" Feb 27 17:29:18 crc kubenswrapper[4830]: I0227 17:29:18.292198 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="bce2dd2f-e667-4d8a-bc01-6f0a0e6ff61f" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.1.23:5672: connect: connection refused" Feb 27 17:29:23 crc kubenswrapper[4830]: I0227 17:29:23.084426 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 27 17:29:23 crc kubenswrapper[4830]: I0227 17:29:23.153158 4830 generic.go:334] "Generic (PLEG): container finished" podID="372cd8c4-0006-4cea-8408-2fe8bbb4844b" containerID="017a0cc8fc094c9b206c4c667a673755b404f9bd32252559a33eb31723c09288" exitCode=0 Feb 27 17:29:23 crc kubenswrapper[4830]: I0227 17:29:23.153221 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"372cd8c4-0006-4cea-8408-2fe8bbb4844b","Type":"ContainerDied","Data":"017a0cc8fc094c9b206c4c667a673755b404f9bd32252559a33eb31723c09288"} Feb 27 17:29:23 crc kubenswrapper[4830]: I0227 17:29:23.153260 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"372cd8c4-0006-4cea-8408-2fe8bbb4844b","Type":"ContainerDied","Data":"bd62dd68983468960351bc37021c386dc6ef4818c7439f6649d3efd989fe91cb"} Feb 27 17:29:23 crc kubenswrapper[4830]: I0227 17:29:23.153288 4830 scope.go:117] "RemoveContainer" containerID="017a0cc8fc094c9b206c4c667a673755b404f9bd32252559a33eb31723c09288" Feb 27 17:29:23 crc kubenswrapper[4830]: I0227 17:29:23.153459 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 27 17:29:23 crc kubenswrapper[4830]: I0227 17:29:23.193401 4830 scope.go:117] "RemoveContainer" containerID="74d5999edd4b0b6d813de11160d8c361ea0a022b3cf223667ce5fada4e7f0549" Feb 27 17:29:23 crc kubenswrapper[4830]: I0227 17:29:23.219439 4830 scope.go:117] "RemoveContainer" containerID="017a0cc8fc094c9b206c4c667a673755b404f9bd32252559a33eb31723c09288" Feb 27 17:29:23 crc kubenswrapper[4830]: E0227 17:29:23.225544 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"017a0cc8fc094c9b206c4c667a673755b404f9bd32252559a33eb31723c09288\": container with ID starting with 017a0cc8fc094c9b206c4c667a673755b404f9bd32252559a33eb31723c09288 not found: ID does not exist" containerID="017a0cc8fc094c9b206c4c667a673755b404f9bd32252559a33eb31723c09288" Feb 27 17:29:23 crc kubenswrapper[4830]: I0227 17:29:23.225900 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"017a0cc8fc094c9b206c4c667a673755b404f9bd32252559a33eb31723c09288"} err="failed to get container status \"017a0cc8fc094c9b206c4c667a673755b404f9bd32252559a33eb31723c09288\": rpc error: code = NotFound desc = could not find container \"017a0cc8fc094c9b206c4c667a673755b404f9bd32252559a33eb31723c09288\": container with ID starting with 017a0cc8fc094c9b206c4c667a673755b404f9bd32252559a33eb31723c09288 not found: ID does not exist" Feb 27 17:29:23 crc kubenswrapper[4830]: I0227 17:29:23.225940 4830 scope.go:117] "RemoveContainer" containerID="74d5999edd4b0b6d813de11160d8c361ea0a022b3cf223667ce5fada4e7f0549" Feb 27 17:29:23 crc kubenswrapper[4830]: E0227 17:29:23.230275 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"74d5999edd4b0b6d813de11160d8c361ea0a022b3cf223667ce5fada4e7f0549\": container with ID starting with 
74d5999edd4b0b6d813de11160d8c361ea0a022b3cf223667ce5fada4e7f0549 not found: ID does not exist" containerID="74d5999edd4b0b6d813de11160d8c361ea0a022b3cf223667ce5fada4e7f0549" Feb 27 17:29:23 crc kubenswrapper[4830]: I0227 17:29:23.230340 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74d5999edd4b0b6d813de11160d8c361ea0a022b3cf223667ce5fada4e7f0549"} err="failed to get container status \"74d5999edd4b0b6d813de11160d8c361ea0a022b3cf223667ce5fada4e7f0549\": rpc error: code = NotFound desc = could not find container \"74d5999edd4b0b6d813de11160d8c361ea0a022b3cf223667ce5fada4e7f0549\": container with ID starting with 74d5999edd4b0b6d813de11160d8c361ea0a022b3cf223667ce5fada4e7f0549 not found: ID does not exist" Feb 27 17:29:23 crc kubenswrapper[4830]: I0227 17:29:23.239891 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-504966dc-acc0-4918-899b-693b7ff91a9e\") pod \"372cd8c4-0006-4cea-8408-2fe8bbb4844b\" (UID: \"372cd8c4-0006-4cea-8408-2fe8bbb4844b\") " Feb 27 17:29:23 crc kubenswrapper[4830]: I0227 17:29:23.240064 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/372cd8c4-0006-4cea-8408-2fe8bbb4844b-erlang-cookie-secret\") pod \"372cd8c4-0006-4cea-8408-2fe8bbb4844b\" (UID: \"372cd8c4-0006-4cea-8408-2fe8bbb4844b\") " Feb 27 17:29:23 crc kubenswrapper[4830]: I0227 17:29:23.240114 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/372cd8c4-0006-4cea-8408-2fe8bbb4844b-pod-info\") pod \"372cd8c4-0006-4cea-8408-2fe8bbb4844b\" (UID: \"372cd8c4-0006-4cea-8408-2fe8bbb4844b\") " Feb 27 17:29:23 crc kubenswrapper[4830]: I0227 17:29:23.240414 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-98bbc\" (UniqueName: \"kubernetes.io/projected/372cd8c4-0006-4cea-8408-2fe8bbb4844b-kube-api-access-98bbc\") pod \"372cd8c4-0006-4cea-8408-2fe8bbb4844b\" (UID: \"372cd8c4-0006-4cea-8408-2fe8bbb4844b\") " Feb 27 17:29:23 crc kubenswrapper[4830]: I0227 17:29:23.241224 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/372cd8c4-0006-4cea-8408-2fe8bbb4844b-rabbitmq-erlang-cookie\") pod \"372cd8c4-0006-4cea-8408-2fe8bbb4844b\" (UID: \"372cd8c4-0006-4cea-8408-2fe8bbb4844b\") " Feb 27 17:29:23 crc kubenswrapper[4830]: I0227 17:29:23.241274 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/372cd8c4-0006-4cea-8408-2fe8bbb4844b-plugins-conf\") pod \"372cd8c4-0006-4cea-8408-2fe8bbb4844b\" (UID: \"372cd8c4-0006-4cea-8408-2fe8bbb4844b\") " Feb 27 17:29:23 crc kubenswrapper[4830]: I0227 17:29:23.241339 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/372cd8c4-0006-4cea-8408-2fe8bbb4844b-rabbitmq-confd\") pod \"372cd8c4-0006-4cea-8408-2fe8bbb4844b\" (UID: \"372cd8c4-0006-4cea-8408-2fe8bbb4844b\") " Feb 27 17:29:23 crc kubenswrapper[4830]: I0227 17:29:23.241376 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/372cd8c4-0006-4cea-8408-2fe8bbb4844b-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "372cd8c4-0006-4cea-8408-2fe8bbb4844b" (UID: "372cd8c4-0006-4cea-8408-2fe8bbb4844b"). InnerVolumeSpecName "rabbitmq-erlang-cookie". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 17:29:23 crc kubenswrapper[4830]: I0227 17:29:23.241398 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/372cd8c4-0006-4cea-8408-2fe8bbb4844b-server-conf\") pod \"372cd8c4-0006-4cea-8408-2fe8bbb4844b\" (UID: \"372cd8c4-0006-4cea-8408-2fe8bbb4844b\") " Feb 27 17:29:23 crc kubenswrapper[4830]: I0227 17:29:23.241459 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/372cd8c4-0006-4cea-8408-2fe8bbb4844b-rabbitmq-plugins\") pod \"372cd8c4-0006-4cea-8408-2fe8bbb4844b\" (UID: \"372cd8c4-0006-4cea-8408-2fe8bbb4844b\") " Feb 27 17:29:23 crc kubenswrapper[4830]: I0227 17:29:23.242276 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/372cd8c4-0006-4cea-8408-2fe8bbb4844b-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "372cd8c4-0006-4cea-8408-2fe8bbb4844b" (UID: "372cd8c4-0006-4cea-8408-2fe8bbb4844b"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:29:23 crc kubenswrapper[4830]: I0227 17:29:23.242392 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/372cd8c4-0006-4cea-8408-2fe8bbb4844b-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "372cd8c4-0006-4cea-8408-2fe8bbb4844b" (UID: "372cd8c4-0006-4cea-8408-2fe8bbb4844b"). InnerVolumeSpecName "rabbitmq-plugins". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 17:29:23 crc kubenswrapper[4830]: I0227 17:29:23.242881 4830 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/372cd8c4-0006-4cea-8408-2fe8bbb4844b-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 27 17:29:23 crc kubenswrapper[4830]: I0227 17:29:23.242924 4830 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/372cd8c4-0006-4cea-8408-2fe8bbb4844b-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 27 17:29:23 crc kubenswrapper[4830]: I0227 17:29:23.242940 4830 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/372cd8c4-0006-4cea-8408-2fe8bbb4844b-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 27 17:29:23 crc kubenswrapper[4830]: I0227 17:29:23.248718 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/372cd8c4-0006-4cea-8408-2fe8bbb4844b-pod-info" (OuterVolumeSpecName: "pod-info") pod "372cd8c4-0006-4cea-8408-2fe8bbb4844b" (UID: "372cd8c4-0006-4cea-8408-2fe8bbb4844b"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 27 17:29:23 crc kubenswrapper[4830]: I0227 17:29:23.254052 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/372cd8c4-0006-4cea-8408-2fe8bbb4844b-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "372cd8c4-0006-4cea-8408-2fe8bbb4844b" (UID: "372cd8c4-0006-4cea-8408-2fe8bbb4844b"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:29:23 crc kubenswrapper[4830]: I0227 17:29:23.257470 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/372cd8c4-0006-4cea-8408-2fe8bbb4844b-kube-api-access-98bbc" (OuterVolumeSpecName: "kube-api-access-98bbc") pod "372cd8c4-0006-4cea-8408-2fe8bbb4844b" (UID: "372cd8c4-0006-4cea-8408-2fe8bbb4844b"). InnerVolumeSpecName "kube-api-access-98bbc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:29:23 crc kubenswrapper[4830]: I0227 17:29:23.259826 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-504966dc-acc0-4918-899b-693b7ff91a9e" (OuterVolumeSpecName: "persistence") pod "372cd8c4-0006-4cea-8408-2fe8bbb4844b" (UID: "372cd8c4-0006-4cea-8408-2fe8bbb4844b"). InnerVolumeSpecName "pvc-504966dc-acc0-4918-899b-693b7ff91a9e". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 27 17:29:23 crc kubenswrapper[4830]: I0227 17:29:23.268474 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/372cd8c4-0006-4cea-8408-2fe8bbb4844b-server-conf" (OuterVolumeSpecName: "server-conf") pod "372cd8c4-0006-4cea-8408-2fe8bbb4844b" (UID: "372cd8c4-0006-4cea-8408-2fe8bbb4844b"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:29:23 crc kubenswrapper[4830]: I0227 17:29:23.344875 4830 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/372cd8c4-0006-4cea-8408-2fe8bbb4844b-server-conf\") on node \"crc\" DevicePath \"\"" Feb 27 17:29:23 crc kubenswrapper[4830]: I0227 17:29:23.344998 4830 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-504966dc-acc0-4918-899b-693b7ff91a9e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-504966dc-acc0-4918-899b-693b7ff91a9e\") on node \"crc\" " Feb 27 17:29:23 crc kubenswrapper[4830]: I0227 17:29:23.345020 4830 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/372cd8c4-0006-4cea-8408-2fe8bbb4844b-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 27 17:29:23 crc kubenswrapper[4830]: I0227 17:29:23.345037 4830 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/372cd8c4-0006-4cea-8408-2fe8bbb4844b-pod-info\") on node \"crc\" DevicePath \"\"" Feb 27 17:29:23 crc kubenswrapper[4830]: I0227 17:29:23.345054 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-98bbc\" (UniqueName: \"kubernetes.io/projected/372cd8c4-0006-4cea-8408-2fe8bbb4844b-kube-api-access-98bbc\") on node \"crc\" DevicePath \"\"" Feb 27 17:29:23 crc kubenswrapper[4830]: I0227 17:29:23.349577 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/372cd8c4-0006-4cea-8408-2fe8bbb4844b-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "372cd8c4-0006-4cea-8408-2fe8bbb4844b" (UID: "372cd8c4-0006-4cea-8408-2fe8bbb4844b"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:29:23 crc kubenswrapper[4830]: I0227 17:29:23.369437 4830 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 27 17:29:23 crc kubenswrapper[4830]: I0227 17:29:23.369626 4830 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-504966dc-acc0-4918-899b-693b7ff91a9e" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-504966dc-acc0-4918-899b-693b7ff91a9e") on node "crc" Feb 27 17:29:23 crc kubenswrapper[4830]: I0227 17:29:23.447375 4830 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/372cd8c4-0006-4cea-8408-2fe8bbb4844b-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 27 17:29:23 crc kubenswrapper[4830]: I0227 17:29:23.447810 4830 reconciler_common.go:293] "Volume detached for volume \"pvc-504966dc-acc0-4918-899b-693b7ff91a9e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-504966dc-acc0-4918-899b-693b7ff91a9e\") on node \"crc\" DevicePath \"\"" Feb 27 17:29:23 crc kubenswrapper[4830]: I0227 17:29:23.583410 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 27 17:29:23 crc kubenswrapper[4830]: I0227 17:29:23.599109 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 27 17:29:23 crc kubenswrapper[4830]: I0227 17:29:23.614858 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 27 17:29:23 crc kubenswrapper[4830]: E0227 17:29:23.615409 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="372cd8c4-0006-4cea-8408-2fe8bbb4844b" containerName="setup-container" Feb 27 17:29:23 crc kubenswrapper[4830]: I0227 17:29:23.615439 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="372cd8c4-0006-4cea-8408-2fe8bbb4844b" containerName="setup-container" Feb 27 17:29:23 
crc kubenswrapper[4830]: E0227 17:29:23.615454 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="372cd8c4-0006-4cea-8408-2fe8bbb4844b" containerName="rabbitmq" Feb 27 17:29:23 crc kubenswrapper[4830]: I0227 17:29:23.615464 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="372cd8c4-0006-4cea-8408-2fe8bbb4844b" containerName="rabbitmq" Feb 27 17:29:23 crc kubenswrapper[4830]: I0227 17:29:23.615709 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="372cd8c4-0006-4cea-8408-2fe8bbb4844b" containerName="rabbitmq" Feb 27 17:29:23 crc kubenswrapper[4830]: I0227 17:29:23.618626 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 27 17:29:23 crc kubenswrapper[4830]: I0227 17:29:23.621669 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 27 17:29:23 crc kubenswrapper[4830]: I0227 17:29:23.621873 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 27 17:29:23 crc kubenswrapper[4830]: I0227 17:29:23.623026 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 27 17:29:23 crc kubenswrapper[4830]: I0227 17:29:23.623539 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-k49l2" Feb 27 17:29:23 crc kubenswrapper[4830]: I0227 17:29:23.631415 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 27 17:29:23 crc kubenswrapper[4830]: I0227 17:29:23.641275 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 27 17:29:23 crc kubenswrapper[4830]: I0227 17:29:23.759661 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/ff3f1819-c196-4202-a77b-6272462a9671-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"ff3f1819-c196-4202-a77b-6272462a9671\") " pod="openstack/rabbitmq-server-0" Feb 27 17:29:23 crc kubenswrapper[4830]: I0227 17:29:23.759995 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7drmn\" (UniqueName: \"kubernetes.io/projected/ff3f1819-c196-4202-a77b-6272462a9671-kube-api-access-7drmn\") pod \"rabbitmq-server-0\" (UID: \"ff3f1819-c196-4202-a77b-6272462a9671\") " pod="openstack/rabbitmq-server-0" Feb 27 17:29:23 crc kubenswrapper[4830]: I0227 17:29:23.760139 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ff3f1819-c196-4202-a77b-6272462a9671-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"ff3f1819-c196-4202-a77b-6272462a9671\") " pod="openstack/rabbitmq-server-0" Feb 27 17:29:23 crc kubenswrapper[4830]: I0227 17:29:23.760284 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ff3f1819-c196-4202-a77b-6272462a9671-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"ff3f1819-c196-4202-a77b-6272462a9671\") " pod="openstack/rabbitmq-server-0" Feb 27 17:29:23 crc kubenswrapper[4830]: I0227 17:29:23.760397 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ff3f1819-c196-4202-a77b-6272462a9671-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"ff3f1819-c196-4202-a77b-6272462a9671\") " pod="openstack/rabbitmq-server-0" Feb 27 17:29:23 crc kubenswrapper[4830]: I0227 17:29:23.760528 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/ff3f1819-c196-4202-a77b-6272462a9671-server-conf\") pod \"rabbitmq-server-0\" (UID: \"ff3f1819-c196-4202-a77b-6272462a9671\") " pod="openstack/rabbitmq-server-0" Feb 27 17:29:23 crc kubenswrapper[4830]: I0227 17:29:23.760687 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ff3f1819-c196-4202-a77b-6272462a9671-pod-info\") pod \"rabbitmq-server-0\" (UID: \"ff3f1819-c196-4202-a77b-6272462a9671\") " pod="openstack/rabbitmq-server-0" Feb 27 17:29:23 crc kubenswrapper[4830]: I0227 17:29:23.760814 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ff3f1819-c196-4202-a77b-6272462a9671-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"ff3f1819-c196-4202-a77b-6272462a9671\") " pod="openstack/rabbitmq-server-0" Feb 27 17:29:23 crc kubenswrapper[4830]: I0227 17:29:23.760977 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-504966dc-acc0-4918-899b-693b7ff91a9e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-504966dc-acc0-4918-899b-693b7ff91a9e\") pod \"rabbitmq-server-0\" (UID: \"ff3f1819-c196-4202-a77b-6272462a9671\") " pod="openstack/rabbitmq-server-0" Feb 27 17:29:23 crc kubenswrapper[4830]: I0227 17:29:23.862668 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ff3f1819-c196-4202-a77b-6272462a9671-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"ff3f1819-c196-4202-a77b-6272462a9671\") " pod="openstack/rabbitmq-server-0" Feb 27 17:29:23 crc kubenswrapper[4830]: I0227 17:29:23.862739 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7drmn\" (UniqueName: 
\"kubernetes.io/projected/ff3f1819-c196-4202-a77b-6272462a9671-kube-api-access-7drmn\") pod \"rabbitmq-server-0\" (UID: \"ff3f1819-c196-4202-a77b-6272462a9671\") " pod="openstack/rabbitmq-server-0" Feb 27 17:29:23 crc kubenswrapper[4830]: I0227 17:29:23.862762 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ff3f1819-c196-4202-a77b-6272462a9671-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"ff3f1819-c196-4202-a77b-6272462a9671\") " pod="openstack/rabbitmq-server-0" Feb 27 17:29:23 crc kubenswrapper[4830]: I0227 17:29:23.862825 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ff3f1819-c196-4202-a77b-6272462a9671-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"ff3f1819-c196-4202-a77b-6272462a9671\") " pod="openstack/rabbitmq-server-0" Feb 27 17:29:23 crc kubenswrapper[4830]: I0227 17:29:23.862844 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ff3f1819-c196-4202-a77b-6272462a9671-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"ff3f1819-c196-4202-a77b-6272462a9671\") " pod="openstack/rabbitmq-server-0" Feb 27 17:29:23 crc kubenswrapper[4830]: I0227 17:29:23.862907 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ff3f1819-c196-4202-a77b-6272462a9671-server-conf\") pod \"rabbitmq-server-0\" (UID: \"ff3f1819-c196-4202-a77b-6272462a9671\") " pod="openstack/rabbitmq-server-0" Feb 27 17:29:23 crc kubenswrapper[4830]: I0227 17:29:23.862977 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ff3f1819-c196-4202-a77b-6272462a9671-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: 
\"ff3f1819-c196-4202-a77b-6272462a9671\") " pod="openstack/rabbitmq-server-0" Feb 27 17:29:23 crc kubenswrapper[4830]: I0227 17:29:23.862997 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ff3f1819-c196-4202-a77b-6272462a9671-pod-info\") pod \"rabbitmq-server-0\" (UID: \"ff3f1819-c196-4202-a77b-6272462a9671\") " pod="openstack/rabbitmq-server-0" Feb 27 17:29:23 crc kubenswrapper[4830]: I0227 17:29:23.863057 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-504966dc-acc0-4918-899b-693b7ff91a9e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-504966dc-acc0-4918-899b-693b7ff91a9e\") pod \"rabbitmq-server-0\" (UID: \"ff3f1819-c196-4202-a77b-6272462a9671\") " pod="openstack/rabbitmq-server-0" Feb 27 17:29:23 crc kubenswrapper[4830]: I0227 17:29:23.863863 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ff3f1819-c196-4202-a77b-6272462a9671-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"ff3f1819-c196-4202-a77b-6272462a9671\") " pod="openstack/rabbitmq-server-0" Feb 27 17:29:23 crc kubenswrapper[4830]: I0227 17:29:23.864101 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ff3f1819-c196-4202-a77b-6272462a9671-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"ff3f1819-c196-4202-a77b-6272462a9671\") " pod="openstack/rabbitmq-server-0" Feb 27 17:29:23 crc kubenswrapper[4830]: I0227 17:29:23.864347 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ff3f1819-c196-4202-a77b-6272462a9671-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"ff3f1819-c196-4202-a77b-6272462a9671\") " pod="openstack/rabbitmq-server-0" Feb 27 17:29:23 crc kubenswrapper[4830]: I0227 
17:29:23.865047 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ff3f1819-c196-4202-a77b-6272462a9671-server-conf\") pod \"rabbitmq-server-0\" (UID: \"ff3f1819-c196-4202-a77b-6272462a9671\") " pod="openstack/rabbitmq-server-0" Feb 27 17:29:23 crc kubenswrapper[4830]: I0227 17:29:23.886662 4830 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 27 17:29:23 crc kubenswrapper[4830]: I0227 17:29:23.886724 4830 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-504966dc-acc0-4918-899b-693b7ff91a9e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-504966dc-acc0-4918-899b-693b7ff91a9e\") pod \"rabbitmq-server-0\" (UID: \"ff3f1819-c196-4202-a77b-6272462a9671\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/f37391db7ae8e70fcc253c31f98faea82560b0cba03609ce63b3ca31ec3ce3d2/globalmount\"" pod="openstack/rabbitmq-server-0" Feb 27 17:29:24 crc kubenswrapper[4830]: I0227 17:29:24.009824 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ff3f1819-c196-4202-a77b-6272462a9671-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"ff3f1819-c196-4202-a77b-6272462a9671\") " pod="openstack/rabbitmq-server-0" Feb 27 17:29:24 crc kubenswrapper[4830]: I0227 17:29:24.009882 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ff3f1819-c196-4202-a77b-6272462a9671-pod-info\") pod \"rabbitmq-server-0\" (UID: \"ff3f1819-c196-4202-a77b-6272462a9671\") " pod="openstack/rabbitmq-server-0" Feb 27 17:29:24 crc kubenswrapper[4830]: I0227 17:29:24.010395 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/ff3f1819-c196-4202-a77b-6272462a9671-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"ff3f1819-c196-4202-a77b-6272462a9671\") " pod="openstack/rabbitmq-server-0" Feb 27 17:29:24 crc kubenswrapper[4830]: I0227 17:29:24.022336 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7drmn\" (UniqueName: \"kubernetes.io/projected/ff3f1819-c196-4202-a77b-6272462a9671-kube-api-access-7drmn\") pod \"rabbitmq-server-0\" (UID: \"ff3f1819-c196-4202-a77b-6272462a9671\") " pod="openstack/rabbitmq-server-0" Feb 27 17:29:24 crc kubenswrapper[4830]: I0227 17:29:24.036816 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-504966dc-acc0-4918-899b-693b7ff91a9e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-504966dc-acc0-4918-899b-693b7ff91a9e\") pod \"rabbitmq-server-0\" (UID: \"ff3f1819-c196-4202-a77b-6272462a9671\") " pod="openstack/rabbitmq-server-0" Feb 27 17:29:24 crc kubenswrapper[4830]: I0227 17:29:24.084714 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 27 17:29:24 crc kubenswrapper[4830]: I0227 17:29:24.123188 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5b7946d7b9-68742" Feb 27 17:29:24 crc kubenswrapper[4830]: I0227 17:29:24.169846 4830 generic.go:334] "Generic (PLEG): container finished" podID="bce2dd2f-e667-4d8a-bc01-6f0a0e6ff61f" containerID="bd4e4a7612e29b18fcc9b8d5657beb75c5fa5e5a48e3a6120adcdcb564ad8475" exitCode=0 Feb 27 17:29:24 crc kubenswrapper[4830]: I0227 17:29:24.169890 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"bce2dd2f-e667-4d8a-bc01-6f0a0e6ff61f","Type":"ContainerDied","Data":"bd4e4a7612e29b18fcc9b8d5657beb75c5fa5e5a48e3a6120adcdcb564ad8475"} Feb 27 17:29:24 crc kubenswrapper[4830]: I0227 17:29:24.194820 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-98ddfc8f-djh5r"] Feb 27 17:29:24 crc kubenswrapper[4830]: I0227 17:29:24.195385 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-98ddfc8f-djh5r" podUID="7b2c005f-d6a9-444d-94e4-e5d431c3bd6d" containerName="dnsmasq-dns" containerID="cri-o://509d4ab66bbd4d950f97afc152764a475a9522e1a714347a5da5245aab15930b" gracePeriod=10 Feb 27 17:29:24 crc kubenswrapper[4830]: I0227 17:29:24.370174 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 27 17:29:24 crc kubenswrapper[4830]: I0227 17:29:24.477532 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/bce2dd2f-e667-4d8a-bc01-6f0a0e6ff61f-rabbitmq-plugins\") pod \"bce2dd2f-e667-4d8a-bc01-6f0a0e6ff61f\" (UID: \"bce2dd2f-e667-4d8a-bc01-6f0a0e6ff61f\") " Feb 27 17:29:24 crc kubenswrapper[4830]: I0227 17:29:24.477601 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2zh58\" (UniqueName: \"kubernetes.io/projected/bce2dd2f-e667-4d8a-bc01-6f0a0e6ff61f-kube-api-access-2zh58\") pod \"bce2dd2f-e667-4d8a-bc01-6f0a0e6ff61f\" (UID: \"bce2dd2f-e667-4d8a-bc01-6f0a0e6ff61f\") " Feb 27 17:29:24 crc kubenswrapper[4830]: I0227 17:29:24.477665 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/bce2dd2f-e667-4d8a-bc01-6f0a0e6ff61f-rabbitmq-confd\") pod \"bce2dd2f-e667-4d8a-bc01-6f0a0e6ff61f\" (UID: \"bce2dd2f-e667-4d8a-bc01-6f0a0e6ff61f\") " Feb 27 17:29:24 crc kubenswrapper[4830]: I0227 17:29:24.477745 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/bce2dd2f-e667-4d8a-bc01-6f0a0e6ff61f-rabbitmq-erlang-cookie\") pod \"bce2dd2f-e667-4d8a-bc01-6f0a0e6ff61f\" (UID: \"bce2dd2f-e667-4d8a-bc01-6f0a0e6ff61f\") " Feb 27 17:29:24 crc kubenswrapper[4830]: I0227 17:29:24.477931 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5d2b1020-d566-4194-ade4-e8bfc67eabb5\") pod \"bce2dd2f-e667-4d8a-bc01-6f0a0e6ff61f\" (UID: \"bce2dd2f-e667-4d8a-bc01-6f0a0e6ff61f\") " Feb 27 17:29:24 crc kubenswrapper[4830]: I0227 17:29:24.477992 4830 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/bce2dd2f-e667-4d8a-bc01-6f0a0e6ff61f-erlang-cookie-secret\") pod \"bce2dd2f-e667-4d8a-bc01-6f0a0e6ff61f\" (UID: \"bce2dd2f-e667-4d8a-bc01-6f0a0e6ff61f\") " Feb 27 17:29:24 crc kubenswrapper[4830]: I0227 17:29:24.478022 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/bce2dd2f-e667-4d8a-bc01-6f0a0e6ff61f-plugins-conf\") pod \"bce2dd2f-e667-4d8a-bc01-6f0a0e6ff61f\" (UID: \"bce2dd2f-e667-4d8a-bc01-6f0a0e6ff61f\") " Feb 27 17:29:24 crc kubenswrapper[4830]: I0227 17:29:24.478060 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/bce2dd2f-e667-4d8a-bc01-6f0a0e6ff61f-pod-info\") pod \"bce2dd2f-e667-4d8a-bc01-6f0a0e6ff61f\" (UID: \"bce2dd2f-e667-4d8a-bc01-6f0a0e6ff61f\") " Feb 27 17:29:24 crc kubenswrapper[4830]: I0227 17:29:24.478097 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/bce2dd2f-e667-4d8a-bc01-6f0a0e6ff61f-server-conf\") pod \"bce2dd2f-e667-4d8a-bc01-6f0a0e6ff61f\" (UID: \"bce2dd2f-e667-4d8a-bc01-6f0a0e6ff61f\") " Feb 27 17:29:24 crc kubenswrapper[4830]: I0227 17:29:24.478126 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bce2dd2f-e667-4d8a-bc01-6f0a0e6ff61f-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "bce2dd2f-e667-4d8a-bc01-6f0a0e6ff61f" (UID: "bce2dd2f-e667-4d8a-bc01-6f0a0e6ff61f"). InnerVolumeSpecName "rabbitmq-plugins". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 17:29:24 crc kubenswrapper[4830]: I0227 17:29:24.478570 4830 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/bce2dd2f-e667-4d8a-bc01-6f0a0e6ff61f-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 27 17:29:24 crc kubenswrapper[4830]: I0227 17:29:24.479548 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bce2dd2f-e667-4d8a-bc01-6f0a0e6ff61f-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "bce2dd2f-e667-4d8a-bc01-6f0a0e6ff61f" (UID: "bce2dd2f-e667-4d8a-bc01-6f0a0e6ff61f"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:29:24 crc kubenswrapper[4830]: I0227 17:29:24.480188 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bce2dd2f-e667-4d8a-bc01-6f0a0e6ff61f-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "bce2dd2f-e667-4d8a-bc01-6f0a0e6ff61f" (UID: "bce2dd2f-e667-4d8a-bc01-6f0a0e6ff61f"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 17:29:24 crc kubenswrapper[4830]: I0227 17:29:24.482792 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bce2dd2f-e667-4d8a-bc01-6f0a0e6ff61f-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "bce2dd2f-e667-4d8a-bc01-6f0a0e6ff61f" (UID: "bce2dd2f-e667-4d8a-bc01-6f0a0e6ff61f"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:29:24 crc kubenswrapper[4830]: I0227 17:29:24.492628 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bce2dd2f-e667-4d8a-bc01-6f0a0e6ff61f-kube-api-access-2zh58" (OuterVolumeSpecName: "kube-api-access-2zh58") pod "bce2dd2f-e667-4d8a-bc01-6f0a0e6ff61f" (UID: "bce2dd2f-e667-4d8a-bc01-6f0a0e6ff61f"). InnerVolumeSpecName "kube-api-access-2zh58". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:29:24 crc kubenswrapper[4830]: I0227 17:29:24.493634 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/bce2dd2f-e667-4d8a-bc01-6f0a0e6ff61f-pod-info" (OuterVolumeSpecName: "pod-info") pod "bce2dd2f-e667-4d8a-bc01-6f0a0e6ff61f" (UID: "bce2dd2f-e667-4d8a-bc01-6f0a0e6ff61f"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 27 17:29:24 crc kubenswrapper[4830]: I0227 17:29:24.503527 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bce2dd2f-e667-4d8a-bc01-6f0a0e6ff61f-server-conf" (OuterVolumeSpecName: "server-conf") pod "bce2dd2f-e667-4d8a-bc01-6f0a0e6ff61f" (UID: "bce2dd2f-e667-4d8a-bc01-6f0a0e6ff61f"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:29:24 crc kubenswrapper[4830]: I0227 17:29:24.509724 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5d2b1020-d566-4194-ade4-e8bfc67eabb5" (OuterVolumeSpecName: "persistence") pod "bce2dd2f-e667-4d8a-bc01-6f0a0e6ff61f" (UID: "bce2dd2f-e667-4d8a-bc01-6f0a0e6ff61f"). InnerVolumeSpecName "pvc-5d2b1020-d566-4194-ade4-e8bfc67eabb5". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 27 17:29:24 crc kubenswrapper[4830]: I0227 17:29:24.579971 4830 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/bce2dd2f-e667-4d8a-bc01-6f0a0e6ff61f-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 27 17:29:24 crc kubenswrapper[4830]: I0227 17:29:24.580029 4830 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-5d2b1020-d566-4194-ade4-e8bfc67eabb5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5d2b1020-d566-4194-ade4-e8bfc67eabb5\") on node \"crc\" " Feb 27 17:29:24 crc kubenswrapper[4830]: I0227 17:29:24.580045 4830 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/bce2dd2f-e667-4d8a-bc01-6f0a0e6ff61f-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 27 17:29:24 crc kubenswrapper[4830]: I0227 17:29:24.580057 4830 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/bce2dd2f-e667-4d8a-bc01-6f0a0e6ff61f-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 27 17:29:24 crc kubenswrapper[4830]: I0227 17:29:24.580065 4830 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/bce2dd2f-e667-4d8a-bc01-6f0a0e6ff61f-pod-info\") on node \"crc\" DevicePath \"\"" Feb 27 17:29:24 crc kubenswrapper[4830]: I0227 17:29:24.580073 4830 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/bce2dd2f-e667-4d8a-bc01-6f0a0e6ff61f-server-conf\") on node \"crc\" DevicePath \"\"" Feb 27 17:29:24 crc kubenswrapper[4830]: I0227 17:29:24.580082 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2zh58\" (UniqueName: \"kubernetes.io/projected/bce2dd2f-e667-4d8a-bc01-6f0a0e6ff61f-kube-api-access-2zh58\") on node \"crc\" DevicePath \"\"" 
Feb 27 17:29:24 crc kubenswrapper[4830]: I0227 17:29:24.634209 4830 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 27 17:29:24 crc kubenswrapper[4830]: I0227 17:29:24.634362 4830 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-5d2b1020-d566-4194-ade4-e8bfc67eabb5" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5d2b1020-d566-4194-ade4-e8bfc67eabb5") on node "crc" Feb 27 17:29:24 crc kubenswrapper[4830]: I0227 17:29:24.634877 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 27 17:29:24 crc kubenswrapper[4830]: I0227 17:29:24.635034 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bce2dd2f-e667-4d8a-bc01-6f0a0e6ff61f-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "bce2dd2f-e667-4d8a-bc01-6f0a0e6ff61f" (UID: "bce2dd2f-e667-4d8a-bc01-6f0a0e6ff61f"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:29:24 crc kubenswrapper[4830]: I0227 17:29:24.682038 4830 reconciler_common.go:293] "Volume detached for volume \"pvc-5d2b1020-d566-4194-ade4-e8bfc67eabb5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5d2b1020-d566-4194-ade4-e8bfc67eabb5\") on node \"crc\" DevicePath \"\"" Feb 27 17:29:24 crc kubenswrapper[4830]: I0227 17:29:24.682078 4830 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/bce2dd2f-e667-4d8a-bc01-6f0a0e6ff61f-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 27 17:29:24 crc kubenswrapper[4830]: I0227 17:29:24.755633 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-98ddfc8f-djh5r" Feb 27 17:29:24 crc kubenswrapper[4830]: I0227 17:29:24.777315 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="372cd8c4-0006-4cea-8408-2fe8bbb4844b" path="/var/lib/kubelet/pods/372cd8c4-0006-4cea-8408-2fe8bbb4844b/volumes" Feb 27 17:29:24 crc kubenswrapper[4830]: I0227 17:29:24.884462 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b2c005f-d6a9-444d-94e4-e5d431c3bd6d-config\") pod \"7b2c005f-d6a9-444d-94e4-e5d431c3bd6d\" (UID: \"7b2c005f-d6a9-444d-94e4-e5d431c3bd6d\") " Feb 27 17:29:24 crc kubenswrapper[4830]: I0227 17:29:24.884606 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7b2c005f-d6a9-444d-94e4-e5d431c3bd6d-dns-svc\") pod \"7b2c005f-d6a9-444d-94e4-e5d431c3bd6d\" (UID: \"7b2c005f-d6a9-444d-94e4-e5d431c3bd6d\") " Feb 27 17:29:24 crc kubenswrapper[4830]: I0227 17:29:24.884659 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b5t22\" (UniqueName: \"kubernetes.io/projected/7b2c005f-d6a9-444d-94e4-e5d431c3bd6d-kube-api-access-b5t22\") pod \"7b2c005f-d6a9-444d-94e4-e5d431c3bd6d\" (UID: \"7b2c005f-d6a9-444d-94e4-e5d431c3bd6d\") " Feb 27 17:29:24 crc kubenswrapper[4830]: I0227 17:29:24.891360 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b2c005f-d6a9-444d-94e4-e5d431c3bd6d-kube-api-access-b5t22" (OuterVolumeSpecName: "kube-api-access-b5t22") pod "7b2c005f-d6a9-444d-94e4-e5d431c3bd6d" (UID: "7b2c005f-d6a9-444d-94e4-e5d431c3bd6d"). InnerVolumeSpecName "kube-api-access-b5t22". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:29:24 crc kubenswrapper[4830]: I0227 17:29:24.923993 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b2c005f-d6a9-444d-94e4-e5d431c3bd6d-config" (OuterVolumeSpecName: "config") pod "7b2c005f-d6a9-444d-94e4-e5d431c3bd6d" (UID: "7b2c005f-d6a9-444d-94e4-e5d431c3bd6d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:29:24 crc kubenswrapper[4830]: I0227 17:29:24.943573 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b2c005f-d6a9-444d-94e4-e5d431c3bd6d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7b2c005f-d6a9-444d-94e4-e5d431c3bd6d" (UID: "7b2c005f-d6a9-444d-94e4-e5d431c3bd6d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:29:24 crc kubenswrapper[4830]: I0227 17:29:24.986620 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b5t22\" (UniqueName: \"kubernetes.io/projected/7b2c005f-d6a9-444d-94e4-e5d431c3bd6d-kube-api-access-b5t22\") on node \"crc\" DevicePath \"\"" Feb 27 17:29:24 crc kubenswrapper[4830]: I0227 17:29:24.986664 4830 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b2c005f-d6a9-444d-94e4-e5d431c3bd6d-config\") on node \"crc\" DevicePath \"\"" Feb 27 17:29:24 crc kubenswrapper[4830]: I0227 17:29:24.986678 4830 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7b2c005f-d6a9-444d-94e4-e5d431c3bd6d-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 27 17:29:25 crc kubenswrapper[4830]: I0227 17:29:25.188114 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"bce2dd2f-e667-4d8a-bc01-6f0a0e6ff61f","Type":"ContainerDied","Data":"4b97744520637d0d13864ec551ec17d79b43a296c30c16d19eca39be0bedb9d6"} Feb 27 
17:29:25 crc kubenswrapper[4830]: I0227 17:29:25.188290 4830 scope.go:117] "RemoveContainer" containerID="bd4e4a7612e29b18fcc9b8d5657beb75c5fa5e5a48e3a6120adcdcb564ad8475" Feb 27 17:29:25 crc kubenswrapper[4830]: I0227 17:29:25.188156 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 27 17:29:25 crc kubenswrapper[4830]: I0227 17:29:25.202109 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"ff3f1819-c196-4202-a77b-6272462a9671","Type":"ContainerStarted","Data":"b790fac04192be6c6ba9770002606481dd0562413eec140cb2afa3bc15b97e8a"} Feb 27 17:29:25 crc kubenswrapper[4830]: I0227 17:29:25.208364 4830 generic.go:334] "Generic (PLEG): container finished" podID="7b2c005f-d6a9-444d-94e4-e5d431c3bd6d" containerID="509d4ab66bbd4d950f97afc152764a475a9522e1a714347a5da5245aab15930b" exitCode=0 Feb 27 17:29:25 crc kubenswrapper[4830]: I0227 17:29:25.208496 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-98ddfc8f-djh5r" Feb 27 17:29:25 crc kubenswrapper[4830]: I0227 17:29:25.208495 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-98ddfc8f-djh5r" event={"ID":"7b2c005f-d6a9-444d-94e4-e5d431c3bd6d","Type":"ContainerDied","Data":"509d4ab66bbd4d950f97afc152764a475a9522e1a714347a5da5245aab15930b"} Feb 27 17:29:25 crc kubenswrapper[4830]: I0227 17:29:25.208830 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-98ddfc8f-djh5r" event={"ID":"7b2c005f-d6a9-444d-94e4-e5d431c3bd6d","Type":"ContainerDied","Data":"d67cbe350688a01dd1d1dd126808171a5ff53daaa93ed40e2f985e60ffa06f30"} Feb 27 17:29:25 crc kubenswrapper[4830]: I0227 17:29:25.228805 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 27 17:29:25 crc kubenswrapper[4830]: I0227 17:29:25.237935 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 27 17:29:25 crc kubenswrapper[4830]: I0227 17:29:25.244943 4830 scope.go:117] "RemoveContainer" containerID="15671b859bb553bb9640aad04b6323abbd5e3a905fc972d50ced9b9d9bbff8fa" Feb 27 17:29:25 crc kubenswrapper[4830]: I0227 17:29:25.276524 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 27 17:29:25 crc kubenswrapper[4830]: E0227 17:29:25.277028 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bce2dd2f-e667-4d8a-bc01-6f0a0e6ff61f" containerName="rabbitmq" Feb 27 17:29:25 crc kubenswrapper[4830]: I0227 17:29:25.277069 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="bce2dd2f-e667-4d8a-bc01-6f0a0e6ff61f" containerName="rabbitmq" Feb 27 17:29:25 crc kubenswrapper[4830]: E0227 17:29:25.277090 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b2c005f-d6a9-444d-94e4-e5d431c3bd6d" containerName="init" Feb 27 17:29:25 crc kubenswrapper[4830]: I0227 17:29:25.277102 4830 
state_mem.go:107] "Deleted CPUSet assignment" podUID="7b2c005f-d6a9-444d-94e4-e5d431c3bd6d" containerName="init" Feb 27 17:29:25 crc kubenswrapper[4830]: E0227 17:29:25.277120 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b2c005f-d6a9-444d-94e4-e5d431c3bd6d" containerName="dnsmasq-dns" Feb 27 17:29:25 crc kubenswrapper[4830]: I0227 17:29:25.277130 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b2c005f-d6a9-444d-94e4-e5d431c3bd6d" containerName="dnsmasq-dns" Feb 27 17:29:25 crc kubenswrapper[4830]: E0227 17:29:25.277154 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bce2dd2f-e667-4d8a-bc01-6f0a0e6ff61f" containerName="setup-container" Feb 27 17:29:25 crc kubenswrapper[4830]: I0227 17:29:25.277164 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="bce2dd2f-e667-4d8a-bc01-6f0a0e6ff61f" containerName="setup-container" Feb 27 17:29:25 crc kubenswrapper[4830]: I0227 17:29:25.277363 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="bce2dd2f-e667-4d8a-bc01-6f0a0e6ff61f" containerName="rabbitmq" Feb 27 17:29:25 crc kubenswrapper[4830]: I0227 17:29:25.277387 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b2c005f-d6a9-444d-94e4-e5d431c3bd6d" containerName="dnsmasq-dns" Feb 27 17:29:25 crc kubenswrapper[4830]: I0227 17:29:25.278444 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 27 17:29:25 crc kubenswrapper[4830]: I0227 17:29:25.282813 4830 scope.go:117] "RemoveContainer" containerID="509d4ab66bbd4d950f97afc152764a475a9522e1a714347a5da5245aab15930b" Feb 27 17:29:25 crc kubenswrapper[4830]: I0227 17:29:25.283376 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 27 17:29:25 crc kubenswrapper[4830]: I0227 17:29:25.283440 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 27 17:29:25 crc kubenswrapper[4830]: I0227 17:29:25.283704 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-hkw2t" Feb 27 17:29:25 crc kubenswrapper[4830]: I0227 17:29:25.283926 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 27 17:29:25 crc kubenswrapper[4830]: I0227 17:29:25.284005 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 27 17:29:25 crc kubenswrapper[4830]: I0227 17:29:25.290780 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-98ddfc8f-djh5r"] Feb 27 17:29:25 crc kubenswrapper[4830]: I0227 17:29:25.307181 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-98ddfc8f-djh5r"] Feb 27 17:29:25 crc kubenswrapper[4830]: I0227 17:29:25.325146 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 27 17:29:25 crc kubenswrapper[4830]: I0227 17:29:25.338514 4830 scope.go:117] "RemoveContainer" containerID="1dec16f58fe7b82aed12484c93b1da1a523b9e1eb92fc6ee5b7c52642b3cd504" Feb 27 17:29:25 crc kubenswrapper[4830]: I0227 17:29:25.368063 4830 scope.go:117] "RemoveContainer" containerID="509d4ab66bbd4d950f97afc152764a475a9522e1a714347a5da5245aab15930b" Feb 27 17:29:25 crc 
kubenswrapper[4830]: E0227 17:29:25.369128 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"509d4ab66bbd4d950f97afc152764a475a9522e1a714347a5da5245aab15930b\": container with ID starting with 509d4ab66bbd4d950f97afc152764a475a9522e1a714347a5da5245aab15930b not found: ID does not exist" containerID="509d4ab66bbd4d950f97afc152764a475a9522e1a714347a5da5245aab15930b" Feb 27 17:29:25 crc kubenswrapper[4830]: I0227 17:29:25.369196 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"509d4ab66bbd4d950f97afc152764a475a9522e1a714347a5da5245aab15930b"} err="failed to get container status \"509d4ab66bbd4d950f97afc152764a475a9522e1a714347a5da5245aab15930b\": rpc error: code = NotFound desc = could not find container \"509d4ab66bbd4d950f97afc152764a475a9522e1a714347a5da5245aab15930b\": container with ID starting with 509d4ab66bbd4d950f97afc152764a475a9522e1a714347a5da5245aab15930b not found: ID does not exist" Feb 27 17:29:25 crc kubenswrapper[4830]: I0227 17:29:25.369243 4830 scope.go:117] "RemoveContainer" containerID="1dec16f58fe7b82aed12484c93b1da1a523b9e1eb92fc6ee5b7c52642b3cd504" Feb 27 17:29:25 crc kubenswrapper[4830]: E0227 17:29:25.370001 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1dec16f58fe7b82aed12484c93b1da1a523b9e1eb92fc6ee5b7c52642b3cd504\": container with ID starting with 1dec16f58fe7b82aed12484c93b1da1a523b9e1eb92fc6ee5b7c52642b3cd504 not found: ID does not exist" containerID="1dec16f58fe7b82aed12484c93b1da1a523b9e1eb92fc6ee5b7c52642b3cd504" Feb 27 17:29:25 crc kubenswrapper[4830]: I0227 17:29:25.370050 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1dec16f58fe7b82aed12484c93b1da1a523b9e1eb92fc6ee5b7c52642b3cd504"} err="failed to get container status 
\"1dec16f58fe7b82aed12484c93b1da1a523b9e1eb92fc6ee5b7c52642b3cd504\": rpc error: code = NotFound desc = could not find container \"1dec16f58fe7b82aed12484c93b1da1a523b9e1eb92fc6ee5b7c52642b3cd504\": container with ID starting with 1dec16f58fe7b82aed12484c93b1da1a523b9e1eb92fc6ee5b7c52642b3cd504 not found: ID does not exist" Feb 27 17:29:25 crc kubenswrapper[4830]: I0227 17:29:25.395134 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/57696a20-06e7-4dd6-9a1e-e4b0cb8013bf-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"57696a20-06e7-4dd6-9a1e-e4b0cb8013bf\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 17:29:25 crc kubenswrapper[4830]: I0227 17:29:25.395194 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/57696a20-06e7-4dd6-9a1e-e4b0cb8013bf-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"57696a20-06e7-4dd6-9a1e-e4b0cb8013bf\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 17:29:25 crc kubenswrapper[4830]: I0227 17:29:25.395215 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/57696a20-06e7-4dd6-9a1e-e4b0cb8013bf-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"57696a20-06e7-4dd6-9a1e-e4b0cb8013bf\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 17:29:25 crc kubenswrapper[4830]: I0227 17:29:25.395262 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/57696a20-06e7-4dd6-9a1e-e4b0cb8013bf-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"57696a20-06e7-4dd6-9a1e-e4b0cb8013bf\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 17:29:25 crc kubenswrapper[4830]: I0227 17:29:25.395292 4830 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdhjc\" (UniqueName: \"kubernetes.io/projected/57696a20-06e7-4dd6-9a1e-e4b0cb8013bf-kube-api-access-kdhjc\") pod \"rabbitmq-cell1-server-0\" (UID: \"57696a20-06e7-4dd6-9a1e-e4b0cb8013bf\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 17:29:25 crc kubenswrapper[4830]: I0227 17:29:25.395329 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/57696a20-06e7-4dd6-9a1e-e4b0cb8013bf-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"57696a20-06e7-4dd6-9a1e-e4b0cb8013bf\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 17:29:25 crc kubenswrapper[4830]: I0227 17:29:25.395372 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/57696a20-06e7-4dd6-9a1e-e4b0cb8013bf-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"57696a20-06e7-4dd6-9a1e-e4b0cb8013bf\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 17:29:25 crc kubenswrapper[4830]: I0227 17:29:25.395397 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/57696a20-06e7-4dd6-9a1e-e4b0cb8013bf-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"57696a20-06e7-4dd6-9a1e-e4b0cb8013bf\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 17:29:25 crc kubenswrapper[4830]: I0227 17:29:25.395426 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-5d2b1020-d566-4194-ade4-e8bfc67eabb5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5d2b1020-d566-4194-ade4-e8bfc67eabb5\") pod \"rabbitmq-cell1-server-0\" (UID: \"57696a20-06e7-4dd6-9a1e-e4b0cb8013bf\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 17:29:25 
crc kubenswrapper[4830]: I0227 17:29:25.497558 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/57696a20-06e7-4dd6-9a1e-e4b0cb8013bf-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"57696a20-06e7-4dd6-9a1e-e4b0cb8013bf\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 17:29:25 crc kubenswrapper[4830]: I0227 17:29:25.497679 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/57696a20-06e7-4dd6-9a1e-e4b0cb8013bf-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"57696a20-06e7-4dd6-9a1e-e4b0cb8013bf\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 17:29:25 crc kubenswrapper[4830]: I0227 17:29:25.497718 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/57696a20-06e7-4dd6-9a1e-e4b0cb8013bf-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"57696a20-06e7-4dd6-9a1e-e4b0cb8013bf\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 17:29:25 crc kubenswrapper[4830]: I0227 17:29:25.497767 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/57696a20-06e7-4dd6-9a1e-e4b0cb8013bf-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"57696a20-06e7-4dd6-9a1e-e4b0cb8013bf\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 17:29:25 crc kubenswrapper[4830]: I0227 17:29:25.497830 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kdhjc\" (UniqueName: \"kubernetes.io/projected/57696a20-06e7-4dd6-9a1e-e4b0cb8013bf-kube-api-access-kdhjc\") pod \"rabbitmq-cell1-server-0\" (UID: \"57696a20-06e7-4dd6-9a1e-e4b0cb8013bf\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 17:29:25 crc kubenswrapper[4830]: I0227 17:29:25.497903 4830 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/57696a20-06e7-4dd6-9a1e-e4b0cb8013bf-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"57696a20-06e7-4dd6-9a1e-e4b0cb8013bf\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 17:29:25 crc kubenswrapper[4830]: I0227 17:29:25.498012 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/57696a20-06e7-4dd6-9a1e-e4b0cb8013bf-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"57696a20-06e7-4dd6-9a1e-e4b0cb8013bf\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 17:29:25 crc kubenswrapper[4830]: I0227 17:29:25.498048 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/57696a20-06e7-4dd6-9a1e-e4b0cb8013bf-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"57696a20-06e7-4dd6-9a1e-e4b0cb8013bf\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 17:29:25 crc kubenswrapper[4830]: I0227 17:29:25.498097 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-5d2b1020-d566-4194-ade4-e8bfc67eabb5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5d2b1020-d566-4194-ade4-e8bfc67eabb5\") pod \"rabbitmq-cell1-server-0\" (UID: \"57696a20-06e7-4dd6-9a1e-e4b0cb8013bf\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 17:29:25 crc kubenswrapper[4830]: I0227 17:29:25.499594 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/57696a20-06e7-4dd6-9a1e-e4b0cb8013bf-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"57696a20-06e7-4dd6-9a1e-e4b0cb8013bf\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 17:29:25 crc kubenswrapper[4830]: I0227 17:29:25.500586 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/57696a20-06e7-4dd6-9a1e-e4b0cb8013bf-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"57696a20-06e7-4dd6-9a1e-e4b0cb8013bf\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 17:29:25 crc kubenswrapper[4830]: I0227 17:29:25.503666 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/57696a20-06e7-4dd6-9a1e-e4b0cb8013bf-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"57696a20-06e7-4dd6-9a1e-e4b0cb8013bf\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 17:29:25 crc kubenswrapper[4830]: I0227 17:29:25.504835 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/57696a20-06e7-4dd6-9a1e-e4b0cb8013bf-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"57696a20-06e7-4dd6-9a1e-e4b0cb8013bf\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 17:29:25 crc kubenswrapper[4830]: I0227 17:29:25.505027 4830 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 27 17:29:25 crc kubenswrapper[4830]: I0227 17:29:25.505082 4830 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-5d2b1020-d566-4194-ade4-e8bfc67eabb5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5d2b1020-d566-4194-ade4-e8bfc67eabb5\") pod \"rabbitmq-cell1-server-0\" (UID: \"57696a20-06e7-4dd6-9a1e-e4b0cb8013bf\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/ad898285b7c922a94df9db2fe5d884eccf586a5b0445da091fa79edefcd75c9e/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Feb 27 17:29:25 crc kubenswrapper[4830]: I0227 17:29:25.506196 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/57696a20-06e7-4dd6-9a1e-e4b0cb8013bf-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"57696a20-06e7-4dd6-9a1e-e4b0cb8013bf\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 17:29:25 crc kubenswrapper[4830]: I0227 17:29:25.507113 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/57696a20-06e7-4dd6-9a1e-e4b0cb8013bf-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"57696a20-06e7-4dd6-9a1e-e4b0cb8013bf\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 17:29:25 crc kubenswrapper[4830]: I0227 17:29:25.507202 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/57696a20-06e7-4dd6-9a1e-e4b0cb8013bf-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"57696a20-06e7-4dd6-9a1e-e4b0cb8013bf\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 17:29:25 crc kubenswrapper[4830]: I0227 17:29:25.531268 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kdhjc\" (UniqueName: \"kubernetes.io/projected/57696a20-06e7-4dd6-9a1e-e4b0cb8013bf-kube-api-access-kdhjc\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"57696a20-06e7-4dd6-9a1e-e4b0cb8013bf\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 17:29:25 crc kubenswrapper[4830]: I0227 17:29:25.555520 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-5d2b1020-d566-4194-ade4-e8bfc67eabb5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5d2b1020-d566-4194-ade4-e8bfc67eabb5\") pod \"rabbitmq-cell1-server-0\" (UID: \"57696a20-06e7-4dd6-9a1e-e4b0cb8013bf\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 17:29:25 crc kubenswrapper[4830]: I0227 17:29:25.649011 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 27 17:29:25 crc kubenswrapper[4830]: I0227 17:29:25.962543 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 27 17:29:26 crc kubenswrapper[4830]: W0227 17:29:26.500843 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod57696a20_06e7_4dd6_9a1e_e4b0cb8013bf.slice/crio-2a113e695cbc3fc28e84b097b65d40b8fdd25a172b21f6221da3690cfb1690d0 WatchSource:0}: Error finding container 2a113e695cbc3fc28e84b097b65d40b8fdd25a172b21f6221da3690cfb1690d0: Status 404 returned error can't find the container with id 2a113e695cbc3fc28e84b097b65d40b8fdd25a172b21f6221da3690cfb1690d0 Feb 27 17:29:26 crc kubenswrapper[4830]: I0227 17:29:26.780048 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b2c005f-d6a9-444d-94e4-e5d431c3bd6d" path="/var/lib/kubelet/pods/7b2c005f-d6a9-444d-94e4-e5d431c3bd6d/volumes" Feb 27 17:29:26 crc kubenswrapper[4830]: I0227 17:29:26.782310 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bce2dd2f-e667-4d8a-bc01-6f0a0e6ff61f" path="/var/lib/kubelet/pods/bce2dd2f-e667-4d8a-bc01-6f0a0e6ff61f/volumes" Feb 27 17:29:27 crc kubenswrapper[4830]: I0227 17:29:27.234207 4830 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"ff3f1819-c196-4202-a77b-6272462a9671","Type":"ContainerStarted","Data":"8c65c1f901d2a5d9c6e25e8a7affa6b6a65bcb6a49003580ad1e377389e873ae"} Feb 27 17:29:27 crc kubenswrapper[4830]: I0227 17:29:27.236530 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"57696a20-06e7-4dd6-9a1e-e4b0cb8013bf","Type":"ContainerStarted","Data":"2a113e695cbc3fc28e84b097b65d40b8fdd25a172b21f6221da3690cfb1690d0"} Feb 27 17:29:28 crc kubenswrapper[4830]: I0227 17:29:28.247301 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"57696a20-06e7-4dd6-9a1e-e4b0cb8013bf","Type":"ContainerStarted","Data":"bd7f3849eea4713afeb625037ca24a325bcd3c2011efcc69ecc620cadf5473f0"} Feb 27 17:29:33 crc kubenswrapper[4830]: I0227 17:29:33.159990 4830 patch_prober.go:28] interesting pod/machine-config-daemon-2tv5v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 17:29:33 crc kubenswrapper[4830]: I0227 17:29:33.161055 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 17:29:59 crc kubenswrapper[4830]: I0227 17:29:59.691801 4830 generic.go:334] "Generic (PLEG): container finished" podID="ff3f1819-c196-4202-a77b-6272462a9671" containerID="8c65c1f901d2a5d9c6e25e8a7affa6b6a65bcb6a49003580ad1e377389e873ae" exitCode=0 Feb 27 17:29:59 crc kubenswrapper[4830]: I0227 17:29:59.691878 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" 
event={"ID":"ff3f1819-c196-4202-a77b-6272462a9671","Type":"ContainerDied","Data":"8c65c1f901d2a5d9c6e25e8a7affa6b6a65bcb6a49003580ad1e377389e873ae"} Feb 27 17:30:00 crc kubenswrapper[4830]: I0227 17:30:00.159490 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536890-hsmxf"] Feb 27 17:30:00 crc kubenswrapper[4830]: I0227 17:30:00.160466 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536890-hsmxf" Feb 27 17:30:00 crc kubenswrapper[4830]: I0227 17:30:00.162988 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 17:30:00 crc kubenswrapper[4830]: I0227 17:30:00.164262 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 17:30:00 crc kubenswrapper[4830]: I0227 17:30:00.165313 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lmbtm" Feb 27 17:30:00 crc kubenswrapper[4830]: I0227 17:30:00.182722 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536890-hsmxf"] Feb 27 17:30:00 crc kubenswrapper[4830]: I0227 17:30:00.255834 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvz78\" (UniqueName: \"kubernetes.io/projected/d9d30f3b-8912-4203-88dc-194bd00d4a71-kube-api-access-pvz78\") pod \"auto-csr-approver-29536890-hsmxf\" (UID: \"d9d30f3b-8912-4203-88dc-194bd00d4a71\") " pod="openshift-infra/auto-csr-approver-29536890-hsmxf" Feb 27 17:30:00 crc kubenswrapper[4830]: I0227 17:30:00.269929 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536890-t6hdx"] Feb 27 17:30:00 crc kubenswrapper[4830]: I0227 17:30:00.271092 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29536890-t6hdx" Feb 27 17:30:00 crc kubenswrapper[4830]: I0227 17:30:00.274231 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 27 17:30:00 crc kubenswrapper[4830]: I0227 17:30:00.274567 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 27 17:30:00 crc kubenswrapper[4830]: I0227 17:30:00.278034 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536890-t6hdx"] Feb 27 17:30:00 crc kubenswrapper[4830]: I0227 17:30:00.358647 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvz78\" (UniqueName: \"kubernetes.io/projected/d9d30f3b-8912-4203-88dc-194bd00d4a71-kube-api-access-pvz78\") pod \"auto-csr-approver-29536890-hsmxf\" (UID: \"d9d30f3b-8912-4203-88dc-194bd00d4a71\") " pod="openshift-infra/auto-csr-approver-29536890-hsmxf" Feb 27 17:30:00 crc kubenswrapper[4830]: I0227 17:30:00.390278 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvz78\" (UniqueName: \"kubernetes.io/projected/d9d30f3b-8912-4203-88dc-194bd00d4a71-kube-api-access-pvz78\") pod \"auto-csr-approver-29536890-hsmxf\" (UID: \"d9d30f3b-8912-4203-88dc-194bd00d4a71\") " pod="openshift-infra/auto-csr-approver-29536890-hsmxf" Feb 27 17:30:00 crc kubenswrapper[4830]: I0227 17:30:00.460830 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgd7n\" (UniqueName: \"kubernetes.io/projected/28ac5da8-f4b7-4e4e-ad46-7ff9ef376eaa-kube-api-access-fgd7n\") pod \"collect-profiles-29536890-t6hdx\" (UID: \"28ac5da8-f4b7-4e4e-ad46-7ff9ef376eaa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536890-t6hdx" Feb 27 17:30:00 crc 
kubenswrapper[4830]: I0227 17:30:00.461143 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/28ac5da8-f4b7-4e4e-ad46-7ff9ef376eaa-config-volume\") pod \"collect-profiles-29536890-t6hdx\" (UID: \"28ac5da8-f4b7-4e4e-ad46-7ff9ef376eaa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536890-t6hdx" Feb 27 17:30:00 crc kubenswrapper[4830]: I0227 17:30:00.461175 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/28ac5da8-f4b7-4e4e-ad46-7ff9ef376eaa-secret-volume\") pod \"collect-profiles-29536890-t6hdx\" (UID: \"28ac5da8-f4b7-4e4e-ad46-7ff9ef376eaa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536890-t6hdx" Feb 27 17:30:00 crc kubenswrapper[4830]: I0227 17:30:00.484490 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536890-hsmxf" Feb 27 17:30:00 crc kubenswrapper[4830]: I0227 17:30:00.563520 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/28ac5da8-f4b7-4e4e-ad46-7ff9ef376eaa-config-volume\") pod \"collect-profiles-29536890-t6hdx\" (UID: \"28ac5da8-f4b7-4e4e-ad46-7ff9ef376eaa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536890-t6hdx" Feb 27 17:30:00 crc kubenswrapper[4830]: I0227 17:30:00.563577 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/28ac5da8-f4b7-4e4e-ad46-7ff9ef376eaa-secret-volume\") pod \"collect-profiles-29536890-t6hdx\" (UID: \"28ac5da8-f4b7-4e4e-ad46-7ff9ef376eaa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536890-t6hdx" Feb 27 17:30:00 crc kubenswrapper[4830]: I0227 17:30:00.563655 4830 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-fgd7n\" (UniqueName: \"kubernetes.io/projected/28ac5da8-f4b7-4e4e-ad46-7ff9ef376eaa-kube-api-access-fgd7n\") pod \"collect-profiles-29536890-t6hdx\" (UID: \"28ac5da8-f4b7-4e4e-ad46-7ff9ef376eaa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536890-t6hdx" Feb 27 17:30:00 crc kubenswrapper[4830]: I0227 17:30:00.565339 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/28ac5da8-f4b7-4e4e-ad46-7ff9ef376eaa-config-volume\") pod \"collect-profiles-29536890-t6hdx\" (UID: \"28ac5da8-f4b7-4e4e-ad46-7ff9ef376eaa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536890-t6hdx" Feb 27 17:30:00 crc kubenswrapper[4830]: I0227 17:30:00.706607 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"ff3f1819-c196-4202-a77b-6272462a9671","Type":"ContainerStarted","Data":"7bf7e12042d65784c0684ba96bd84943d9603f0c72482dcdcea1cd1f6ca2d872"} Feb 27 17:30:00 crc kubenswrapper[4830]: I0227 17:30:00.706995 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 27 17:30:00 crc kubenswrapper[4830]: I0227 17:30:00.742686 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=37.742654001 podStartE2EDuration="37.742654001s" podCreationTimestamp="2026-02-27 17:29:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 17:30:00.739595357 +0000 UTC m=+4996.828867820" watchObservedRunningTime="2026-02-27 17:30:00.742654001 +0000 UTC m=+4996.831926504" Feb 27 17:30:00 crc kubenswrapper[4830]: I0227 17:30:00.800526 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/28ac5da8-f4b7-4e4e-ad46-7ff9ef376eaa-secret-volume\") pod \"collect-profiles-29536890-t6hdx\" (UID: \"28ac5da8-f4b7-4e4e-ad46-7ff9ef376eaa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536890-t6hdx" Feb 27 17:30:00 crc kubenswrapper[4830]: I0227 17:30:00.803253 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fgd7n\" (UniqueName: \"kubernetes.io/projected/28ac5da8-f4b7-4e4e-ad46-7ff9ef376eaa-kube-api-access-fgd7n\") pod \"collect-profiles-29536890-t6hdx\" (UID: \"28ac5da8-f4b7-4e4e-ad46-7ff9ef376eaa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536890-t6hdx" Feb 27 17:30:00 crc kubenswrapper[4830]: I0227 17:30:00.898249 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29536890-t6hdx" Feb 27 17:30:01 crc kubenswrapper[4830]: I0227 17:30:01.334668 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536890-t6hdx"] Feb 27 17:30:01 crc kubenswrapper[4830]: I0227 17:30:01.393522 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536890-hsmxf"] Feb 27 17:30:01 crc kubenswrapper[4830]: W0227 17:30:01.402789 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd9d30f3b_8912_4203_88dc_194bd00d4a71.slice/crio-93ac74058c40a6c3d3ad79dc2354ccab7c2456ecc7010f566fc9a31a09b938f4 WatchSource:0}: Error finding container 93ac74058c40a6c3d3ad79dc2354ccab7c2456ecc7010f566fc9a31a09b938f4: Status 404 returned error can't find the container with id 93ac74058c40a6c3d3ad79dc2354ccab7c2456ecc7010f566fc9a31a09b938f4 Feb 27 17:30:01 crc kubenswrapper[4830]: I0227 17:30:01.715132 4830 generic.go:334] "Generic (PLEG): container finished" podID="57696a20-06e7-4dd6-9a1e-e4b0cb8013bf" 
containerID="bd7f3849eea4713afeb625037ca24a325bcd3c2011efcc69ecc620cadf5473f0" exitCode=0 Feb 27 17:30:01 crc kubenswrapper[4830]: I0227 17:30:01.715207 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"57696a20-06e7-4dd6-9a1e-e4b0cb8013bf","Type":"ContainerDied","Data":"bd7f3849eea4713afeb625037ca24a325bcd3c2011efcc69ecc620cadf5473f0"} Feb 27 17:30:01 crc kubenswrapper[4830]: I0227 17:30:01.718085 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536890-hsmxf" event={"ID":"d9d30f3b-8912-4203-88dc-194bd00d4a71","Type":"ContainerStarted","Data":"93ac74058c40a6c3d3ad79dc2354ccab7c2456ecc7010f566fc9a31a09b938f4"} Feb 27 17:30:01 crc kubenswrapper[4830]: I0227 17:30:01.722238 4830 generic.go:334] "Generic (PLEG): container finished" podID="28ac5da8-f4b7-4e4e-ad46-7ff9ef376eaa" containerID="97e7bd3f804dde277a2e36e53ab3a6dab5844013cd0137d6d65aec6747014104" exitCode=0 Feb 27 17:30:01 crc kubenswrapper[4830]: I0227 17:30:01.722351 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29536890-t6hdx" event={"ID":"28ac5da8-f4b7-4e4e-ad46-7ff9ef376eaa","Type":"ContainerDied","Data":"97e7bd3f804dde277a2e36e53ab3a6dab5844013cd0137d6d65aec6747014104"} Feb 27 17:30:01 crc kubenswrapper[4830]: I0227 17:30:01.722374 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29536890-t6hdx" event={"ID":"28ac5da8-f4b7-4e4e-ad46-7ff9ef376eaa","Type":"ContainerStarted","Data":"3b192e5468f7d7e5671a8ef9b5f6cbd20179ab9e5a46a7e0410f1f894309db26"} Feb 27 17:30:02 crc kubenswrapper[4830]: I0227 17:30:02.736311 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"57696a20-06e7-4dd6-9a1e-e4b0cb8013bf","Type":"ContainerStarted","Data":"5bbc651c350ce8d721a7d911d784f772d7d0cbc4a357e83dc3e099e5bada2e27"} Feb 27 17:30:02 crc 
kubenswrapper[4830]: I0227 17:30:02.737057 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 27 17:30:02 crc kubenswrapper[4830]: I0227 17:30:02.769519 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=37.769497630000004 podStartE2EDuration="37.76949763s" podCreationTimestamp="2026-02-27 17:29:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 17:30:02.764226193 +0000 UTC m=+4998.853498696" watchObservedRunningTime="2026-02-27 17:30:02.76949763 +0000 UTC m=+4998.858770103" Feb 27 17:30:03 crc kubenswrapper[4830]: I0227 17:30:03.160294 4830 patch_prober.go:28] interesting pod/machine-config-daemon-2tv5v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 17:30:03 crc kubenswrapper[4830]: I0227 17:30:03.160776 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 17:30:03 crc kubenswrapper[4830]: I0227 17:30:03.275816 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29536890-t6hdx" Feb 27 17:30:03 crc kubenswrapper[4830]: I0227 17:30:03.431590 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fgd7n\" (UniqueName: \"kubernetes.io/projected/28ac5da8-f4b7-4e4e-ad46-7ff9ef376eaa-kube-api-access-fgd7n\") pod \"28ac5da8-f4b7-4e4e-ad46-7ff9ef376eaa\" (UID: \"28ac5da8-f4b7-4e4e-ad46-7ff9ef376eaa\") " Feb 27 17:30:03 crc kubenswrapper[4830]: I0227 17:30:03.431702 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/28ac5da8-f4b7-4e4e-ad46-7ff9ef376eaa-secret-volume\") pod \"28ac5da8-f4b7-4e4e-ad46-7ff9ef376eaa\" (UID: \"28ac5da8-f4b7-4e4e-ad46-7ff9ef376eaa\") " Feb 27 17:30:03 crc kubenswrapper[4830]: I0227 17:30:03.431767 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/28ac5da8-f4b7-4e4e-ad46-7ff9ef376eaa-config-volume\") pod \"28ac5da8-f4b7-4e4e-ad46-7ff9ef376eaa\" (UID: \"28ac5da8-f4b7-4e4e-ad46-7ff9ef376eaa\") " Feb 27 17:30:03 crc kubenswrapper[4830]: I0227 17:30:03.432833 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28ac5da8-f4b7-4e4e-ad46-7ff9ef376eaa-config-volume" (OuterVolumeSpecName: "config-volume") pod "28ac5da8-f4b7-4e4e-ad46-7ff9ef376eaa" (UID: "28ac5da8-f4b7-4e4e-ad46-7ff9ef376eaa"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:30:03 crc kubenswrapper[4830]: I0227 17:30:03.443419 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28ac5da8-f4b7-4e4e-ad46-7ff9ef376eaa-kube-api-access-fgd7n" (OuterVolumeSpecName: "kube-api-access-fgd7n") pod "28ac5da8-f4b7-4e4e-ad46-7ff9ef376eaa" (UID: "28ac5da8-f4b7-4e4e-ad46-7ff9ef376eaa"). 
InnerVolumeSpecName "kube-api-access-fgd7n". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:30:03 crc kubenswrapper[4830]: I0227 17:30:03.444152 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28ac5da8-f4b7-4e4e-ad46-7ff9ef376eaa-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "28ac5da8-f4b7-4e4e-ad46-7ff9ef376eaa" (UID: "28ac5da8-f4b7-4e4e-ad46-7ff9ef376eaa"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:30:03 crc kubenswrapper[4830]: I0227 17:30:03.534483 4830 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/28ac5da8-f4b7-4e4e-ad46-7ff9ef376eaa-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 27 17:30:03 crc kubenswrapper[4830]: I0227 17:30:03.534550 4830 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/28ac5da8-f4b7-4e4e-ad46-7ff9ef376eaa-config-volume\") on node \"crc\" DevicePath \"\"" Feb 27 17:30:03 crc kubenswrapper[4830]: I0227 17:30:03.534571 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fgd7n\" (UniqueName: \"kubernetes.io/projected/28ac5da8-f4b7-4e4e-ad46-7ff9ef376eaa-kube-api-access-fgd7n\") on node \"crc\" DevicePath \"\"" Feb 27 17:30:03 crc kubenswrapper[4830]: I0227 17:30:03.747331 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29536890-t6hdx" Feb 27 17:30:03 crc kubenswrapper[4830]: I0227 17:30:03.749883 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29536890-t6hdx" event={"ID":"28ac5da8-f4b7-4e4e-ad46-7ff9ef376eaa","Type":"ContainerDied","Data":"3b192e5468f7d7e5671a8ef9b5f6cbd20179ab9e5a46a7e0410f1f894309db26"} Feb 27 17:30:03 crc kubenswrapper[4830]: I0227 17:30:03.749971 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3b192e5468f7d7e5671a8ef9b5f6cbd20179ab9e5a46a7e0410f1f894309db26" Feb 27 17:30:04 crc kubenswrapper[4830]: I0227 17:30:04.380437 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536845-jr2fm"] Feb 27 17:30:04 crc kubenswrapper[4830]: I0227 17:30:04.385444 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536845-jr2fm"] Feb 27 17:30:04 crc kubenswrapper[4830]: I0227 17:30:04.759171 4830 generic.go:334] "Generic (PLEG): container finished" podID="d9d30f3b-8912-4203-88dc-194bd00d4a71" containerID="c16f444ed61a9f55c5af7ad328f6f0bbdb14381e68fc9af10bf6a89a1841edff" exitCode=0 Feb 27 17:30:04 crc kubenswrapper[4830]: I0227 17:30:04.759252 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536890-hsmxf" event={"ID":"d9d30f3b-8912-4203-88dc-194bd00d4a71","Type":"ContainerDied","Data":"c16f444ed61a9f55c5af7ad328f6f0bbdb14381e68fc9af10bf6a89a1841edff"} Feb 27 17:30:04 crc kubenswrapper[4830]: I0227 17:30:04.782672 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7acf39c-5119-438f-bfca-2aa403a29a4b" path="/var/lib/kubelet/pods/c7acf39c-5119-438f-bfca-2aa403a29a4b/volumes" Feb 27 17:30:06 crc kubenswrapper[4830]: I0227 17:30:06.128220 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536890-hsmxf" Feb 27 17:30:06 crc kubenswrapper[4830]: I0227 17:30:06.195892 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pvz78\" (UniqueName: \"kubernetes.io/projected/d9d30f3b-8912-4203-88dc-194bd00d4a71-kube-api-access-pvz78\") pod \"d9d30f3b-8912-4203-88dc-194bd00d4a71\" (UID: \"d9d30f3b-8912-4203-88dc-194bd00d4a71\") " Feb 27 17:30:06 crc kubenswrapper[4830]: I0227 17:30:06.200532 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9d30f3b-8912-4203-88dc-194bd00d4a71-kube-api-access-pvz78" (OuterVolumeSpecName: "kube-api-access-pvz78") pod "d9d30f3b-8912-4203-88dc-194bd00d4a71" (UID: "d9d30f3b-8912-4203-88dc-194bd00d4a71"). InnerVolumeSpecName "kube-api-access-pvz78". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:30:06 crc kubenswrapper[4830]: I0227 17:30:06.298260 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pvz78\" (UniqueName: \"kubernetes.io/projected/d9d30f3b-8912-4203-88dc-194bd00d4a71-kube-api-access-pvz78\") on node \"crc\" DevicePath \"\"" Feb 27 17:30:06 crc kubenswrapper[4830]: I0227 17:30:06.783314 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536890-hsmxf" event={"ID":"d9d30f3b-8912-4203-88dc-194bd00d4a71","Type":"ContainerDied","Data":"93ac74058c40a6c3d3ad79dc2354ccab7c2456ecc7010f566fc9a31a09b938f4"} Feb 27 17:30:06 crc kubenswrapper[4830]: I0227 17:30:06.783581 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="93ac74058c40a6c3d3ad79dc2354ccab7c2456ecc7010f566fc9a31a09b938f4" Feb 27 17:30:06 crc kubenswrapper[4830]: I0227 17:30:06.783393 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536890-hsmxf" Feb 27 17:30:07 crc kubenswrapper[4830]: I0227 17:30:07.225723 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536884-jxtll"] Feb 27 17:30:07 crc kubenswrapper[4830]: I0227 17:30:07.238660 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536884-jxtll"] Feb 27 17:30:08 crc kubenswrapper[4830]: I0227 17:30:08.774543 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd756bd4-9902-4040-8342-9886fcd96a41" path="/var/lib/kubelet/pods/bd756bd4-9902-4040-8342-9886fcd96a41/volumes" Feb 27 17:30:14 crc kubenswrapper[4830]: I0227 17:30:14.089241 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 27 17:30:15 crc kubenswrapper[4830]: I0227 17:30:15.652161 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 27 17:30:28 crc kubenswrapper[4830]: I0227 17:30:28.685665 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client"] Feb 27 17:30:28 crc kubenswrapper[4830]: E0227 17:30:28.686595 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9d30f3b-8912-4203-88dc-194bd00d4a71" containerName="oc" Feb 27 17:30:28 crc kubenswrapper[4830]: I0227 17:30:28.686615 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9d30f3b-8912-4203-88dc-194bd00d4a71" containerName="oc" Feb 27 17:30:28 crc kubenswrapper[4830]: E0227 17:30:28.686646 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28ac5da8-f4b7-4e4e-ad46-7ff9ef376eaa" containerName="collect-profiles" Feb 27 17:30:28 crc kubenswrapper[4830]: I0227 17:30:28.686657 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="28ac5da8-f4b7-4e4e-ad46-7ff9ef376eaa" containerName="collect-profiles" Feb 27 17:30:28 crc kubenswrapper[4830]: I0227 
17:30:28.686870 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="28ac5da8-f4b7-4e4e-ad46-7ff9ef376eaa" containerName="collect-profiles" Feb 27 17:30:28 crc kubenswrapper[4830]: I0227 17:30:28.686894 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9d30f3b-8912-4203-88dc-194bd00d4a71" containerName="oc" Feb 27 17:30:28 crc kubenswrapper[4830]: I0227 17:30:28.687531 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Feb 27 17:30:28 crc kubenswrapper[4830]: I0227 17:30:28.690710 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-7xk7b" Feb 27 17:30:28 crc kubenswrapper[4830]: I0227 17:30:28.701652 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Feb 27 17:30:28 crc kubenswrapper[4830]: I0227 17:30:28.825649 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2l7n\" (UniqueName: \"kubernetes.io/projected/dec880c9-ae48-432b-8791-a7e82acaeb1e-kube-api-access-r2l7n\") pod \"mariadb-client\" (UID: \"dec880c9-ae48-432b-8791-a7e82acaeb1e\") " pod="openstack/mariadb-client" Feb 27 17:30:28 crc kubenswrapper[4830]: I0227 17:30:28.927295 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r2l7n\" (UniqueName: \"kubernetes.io/projected/dec880c9-ae48-432b-8791-a7e82acaeb1e-kube-api-access-r2l7n\") pod \"mariadb-client\" (UID: \"dec880c9-ae48-432b-8791-a7e82acaeb1e\") " pod="openstack/mariadb-client" Feb 27 17:30:28 crc kubenswrapper[4830]: I0227 17:30:28.950529 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r2l7n\" (UniqueName: \"kubernetes.io/projected/dec880c9-ae48-432b-8791-a7e82acaeb1e-kube-api-access-r2l7n\") pod \"mariadb-client\" (UID: \"dec880c9-ae48-432b-8791-a7e82acaeb1e\") " pod="openstack/mariadb-client" Feb 27 17:30:29 crc 
kubenswrapper[4830]: I0227 17:30:29.032914 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Feb 27 17:30:29 crc kubenswrapper[4830]: I0227 17:30:29.402101 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Feb 27 17:30:29 crc kubenswrapper[4830]: W0227 17:30:29.414085 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddec880c9_ae48_432b_8791_a7e82acaeb1e.slice/crio-76b4083438a328e205ccfd26f4fea76c30e180c107ff1eb7a4c993153f922b36 WatchSource:0}: Error finding container 76b4083438a328e205ccfd26f4fea76c30e180c107ff1eb7a4c993153f922b36: Status 404 returned error can't find the container with id 76b4083438a328e205ccfd26f4fea76c30e180c107ff1eb7a4c993153f922b36 Feb 27 17:30:30 crc kubenswrapper[4830]: I0227 17:30:30.012333 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"dec880c9-ae48-432b-8791-a7e82acaeb1e","Type":"ContainerStarted","Data":"28fe9e1a69b52d8dcf6b682043528ef635f49c093312750eeed04d29e78aa9c2"} Feb 27 17:30:30 crc kubenswrapper[4830]: I0227 17:30:30.012713 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"dec880c9-ae48-432b-8791-a7e82acaeb1e","Type":"ContainerStarted","Data":"76b4083438a328e205ccfd26f4fea76c30e180c107ff1eb7a4c993153f922b36"} Feb 27 17:30:30 crc kubenswrapper[4830]: I0227 17:30:30.033921 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mariadb-client" podStartSLOduration=2.033903332 podStartE2EDuration="2.033903332s" podCreationTimestamp="2026-02-27 17:30:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 17:30:30.026682219 +0000 UTC m=+5026.115954702" watchObservedRunningTime="2026-02-27 17:30:30.033903332 +0000 UTC m=+5026.123175795" 
Feb 27 17:30:33 crc kubenswrapper[4830]: I0227 17:30:33.160248 4830 patch_prober.go:28] interesting pod/machine-config-daemon-2tv5v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 17:30:33 crc kubenswrapper[4830]: I0227 17:30:33.160732 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 17:30:33 crc kubenswrapper[4830]: I0227 17:30:33.160801 4830 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" Feb 27 17:30:33 crc kubenswrapper[4830]: I0227 17:30:33.161732 4830 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"313fc4aac8e61b508c32aefdb01d7489ea5d8194a163882d5d4b5ec4665839cb"} pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 27 17:30:33 crc kubenswrapper[4830]: I0227 17:30:33.161835 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" containerName="machine-config-daemon" containerID="cri-o://313fc4aac8e61b508c32aefdb01d7489ea5d8194a163882d5d4b5ec4665839cb" gracePeriod=600 Feb 27 17:30:33 crc kubenswrapper[4830]: E0227 17:30:33.295367 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 
5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 17:30:34 crc kubenswrapper[4830]: I0227 17:30:34.056530 4830 generic.go:334] "Generic (PLEG): container finished" podID="00d6b7ce-4757-4275-8345-60c1b546ce25" containerID="313fc4aac8e61b508c32aefdb01d7489ea5d8194a163882d5d4b5ec4665839cb" exitCode=0 Feb 27 17:30:34 crc kubenswrapper[4830]: I0227 17:30:34.056618 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" event={"ID":"00d6b7ce-4757-4275-8345-60c1b546ce25","Type":"ContainerDied","Data":"313fc4aac8e61b508c32aefdb01d7489ea5d8194a163882d5d4b5ec4665839cb"} Feb 27 17:30:34 crc kubenswrapper[4830]: I0227 17:30:34.056823 4830 scope.go:117] "RemoveContainer" containerID="19e6a24991d0874a855368f8e306131672121f114d688786c52f7e0dafcd4823" Feb 27 17:30:34 crc kubenswrapper[4830]: I0227 17:30:34.057813 4830 scope.go:117] "RemoveContainer" containerID="313fc4aac8e61b508c32aefdb01d7489ea5d8194a163882d5d4b5ec4665839cb" Feb 27 17:30:34 crc kubenswrapper[4830]: E0227 17:30:34.058161 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 17:30:42 crc kubenswrapper[4830]: E0227 17:30:42.398366 4830 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.129.56.36:57624->38.129.56.36:42557: write tcp 38.129.56.36:57624->38.129.56.36:42557: write: broken pipe Feb 27 17:30:42 crc 
kubenswrapper[4830]: E0227 17:30:42.727523 4830 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.129.56.36:57642->38.129.56.36:42557: write tcp 38.129.56.36:57642->38.129.56.36:42557: write: connection reset by peer Feb 27 17:30:46 crc kubenswrapper[4830]: I0227 17:30:46.102043 4830 scope.go:117] "RemoveContainer" containerID="313fc4aac8e61b508c32aefdb01d7489ea5d8194a163882d5d4b5ec4665839cb" Feb 27 17:30:46 crc kubenswrapper[4830]: E0227 17:30:46.102637 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 17:30:46 crc kubenswrapper[4830]: I0227 17:30:46.198131 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"] Feb 27 17:30:46 crc kubenswrapper[4830]: I0227 17:30:46.198448 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/mariadb-client" podUID="dec880c9-ae48-432b-8791-a7e82acaeb1e" containerName="mariadb-client" containerID="cri-o://28fe9e1a69b52d8dcf6b682043528ef635f49c093312750eeed04d29e78aa9c2" gracePeriod=30 Feb 27 17:30:46 crc kubenswrapper[4830]: I0227 17:30:46.776720 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Feb 27 17:30:46 crc kubenswrapper[4830]: I0227 17:30:46.810686 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r2l7n\" (UniqueName: \"kubernetes.io/projected/dec880c9-ae48-432b-8791-a7e82acaeb1e-kube-api-access-r2l7n\") pod \"dec880c9-ae48-432b-8791-a7e82acaeb1e\" (UID: \"dec880c9-ae48-432b-8791-a7e82acaeb1e\") " Feb 27 17:30:46 crc kubenswrapper[4830]: I0227 17:30:46.817219 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dec880c9-ae48-432b-8791-a7e82acaeb1e-kube-api-access-r2l7n" (OuterVolumeSpecName: "kube-api-access-r2l7n") pod "dec880c9-ae48-432b-8791-a7e82acaeb1e" (UID: "dec880c9-ae48-432b-8791-a7e82acaeb1e"). InnerVolumeSpecName "kube-api-access-r2l7n". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:30:46 crc kubenswrapper[4830]: I0227 17:30:46.914013 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r2l7n\" (UniqueName: \"kubernetes.io/projected/dec880c9-ae48-432b-8791-a7e82acaeb1e-kube-api-access-r2l7n\") on node \"crc\" DevicePath \"\"" Feb 27 17:30:47 crc kubenswrapper[4830]: I0227 17:30:47.136926 4830 generic.go:334] "Generic (PLEG): container finished" podID="dec880c9-ae48-432b-8791-a7e82acaeb1e" containerID="28fe9e1a69b52d8dcf6b682043528ef635f49c093312750eeed04d29e78aa9c2" exitCode=143 Feb 27 17:30:47 crc kubenswrapper[4830]: I0227 17:30:47.136989 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Feb 27 17:30:47 crc kubenswrapper[4830]: I0227 17:30:47.137015 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"dec880c9-ae48-432b-8791-a7e82acaeb1e","Type":"ContainerDied","Data":"28fe9e1a69b52d8dcf6b682043528ef635f49c093312750eeed04d29e78aa9c2"} Feb 27 17:30:47 crc kubenswrapper[4830]: I0227 17:30:47.137157 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"dec880c9-ae48-432b-8791-a7e82acaeb1e","Type":"ContainerDied","Data":"76b4083438a328e205ccfd26f4fea76c30e180c107ff1eb7a4c993153f922b36"} Feb 27 17:30:47 crc kubenswrapper[4830]: I0227 17:30:47.137207 4830 scope.go:117] "RemoveContainer" containerID="28fe9e1a69b52d8dcf6b682043528ef635f49c093312750eeed04d29e78aa9c2" Feb 27 17:30:47 crc kubenswrapper[4830]: I0227 17:30:47.177242 4830 scope.go:117] "RemoveContainer" containerID="28fe9e1a69b52d8dcf6b682043528ef635f49c093312750eeed04d29e78aa9c2" Feb 27 17:30:47 crc kubenswrapper[4830]: E0227 17:30:47.179119 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"28fe9e1a69b52d8dcf6b682043528ef635f49c093312750eeed04d29e78aa9c2\": container with ID starting with 28fe9e1a69b52d8dcf6b682043528ef635f49c093312750eeed04d29e78aa9c2 not found: ID does not exist" containerID="28fe9e1a69b52d8dcf6b682043528ef635f49c093312750eeed04d29e78aa9c2" Feb 27 17:30:47 crc kubenswrapper[4830]: I0227 17:30:47.179205 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28fe9e1a69b52d8dcf6b682043528ef635f49c093312750eeed04d29e78aa9c2"} err="failed to get container status \"28fe9e1a69b52d8dcf6b682043528ef635f49c093312750eeed04d29e78aa9c2\": rpc error: code = NotFound desc = could not find container \"28fe9e1a69b52d8dcf6b682043528ef635f49c093312750eeed04d29e78aa9c2\": container with ID starting with 
28fe9e1a69b52d8dcf6b682043528ef635f49c093312750eeed04d29e78aa9c2 not found: ID does not exist" Feb 27 17:30:47 crc kubenswrapper[4830]: I0227 17:30:47.192586 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"] Feb 27 17:30:47 crc kubenswrapper[4830]: I0227 17:30:47.203419 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client"] Feb 27 17:30:48 crc kubenswrapper[4830]: I0227 17:30:48.180656 4830 scope.go:117] "RemoveContainer" containerID="e234bdf9d383b5a101302a0d6cd53ac32e00c4a16d6430203957335ff2082563" Feb 27 17:30:48 crc kubenswrapper[4830]: I0227 17:30:48.245153 4830 scope.go:117] "RemoveContainer" containerID="2f0846069a58d31584c7158e1dc49b088af3a82683ebccd78ae041bf55658993" Feb 27 17:30:48 crc kubenswrapper[4830]: I0227 17:30:48.779163 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dec880c9-ae48-432b-8791-a7e82acaeb1e" path="/var/lib/kubelet/pods/dec880c9-ae48-432b-8791-a7e82acaeb1e/volumes" Feb 27 17:30:56 crc kubenswrapper[4830]: I0227 17:30:56.763341 4830 scope.go:117] "RemoveContainer" containerID="313fc4aac8e61b508c32aefdb01d7489ea5d8194a163882d5d4b5ec4665839cb" Feb 27 17:30:56 crc kubenswrapper[4830]: E0227 17:30:56.764371 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 17:31:08 crc kubenswrapper[4830]: I0227 17:31:08.763004 4830 scope.go:117] "RemoveContainer" containerID="313fc4aac8e61b508c32aefdb01d7489ea5d8194a163882d5d4b5ec4665839cb" Feb 27 17:31:08 crc kubenswrapper[4830]: E0227 17:31:08.764587 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 17:31:23 crc kubenswrapper[4830]: I0227 17:31:23.761862 4830 scope.go:117] "RemoveContainer" containerID="313fc4aac8e61b508c32aefdb01d7489ea5d8194a163882d5d4b5ec4665839cb" Feb 27 17:31:23 crc kubenswrapper[4830]: E0227 17:31:23.762631 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 17:31:34 crc kubenswrapper[4830]: I0227 17:31:34.773439 4830 scope.go:117] "RemoveContainer" containerID="313fc4aac8e61b508c32aefdb01d7489ea5d8194a163882d5d4b5ec4665839cb" Feb 27 17:31:34 crc kubenswrapper[4830]: E0227 17:31:34.774409 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 17:31:47 crc kubenswrapper[4830]: I0227 17:31:47.763638 4830 scope.go:117] "RemoveContainer" containerID="313fc4aac8e61b508c32aefdb01d7489ea5d8194a163882d5d4b5ec4665839cb" Feb 27 17:31:47 crc kubenswrapper[4830]: E0227 17:31:47.764858 4830 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 17:31:48 crc kubenswrapper[4830]: I0227 17:31:48.355966 4830 scope.go:117] "RemoveContainer" containerID="efd82b4d34a33dc4375da46991f444e0c32d0c4ae41a0f5b068dbde2830438ef" Feb 27 17:31:59 crc kubenswrapper[4830]: I0227 17:31:59.762373 4830 scope.go:117] "RemoveContainer" containerID="313fc4aac8e61b508c32aefdb01d7489ea5d8194a163882d5d4b5ec4665839cb" Feb 27 17:31:59 crc kubenswrapper[4830]: E0227 17:31:59.763359 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 17:32:00 crc kubenswrapper[4830]: I0227 17:32:00.163173 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536892-6fnr5"] Feb 27 17:32:00 crc kubenswrapper[4830]: E0227 17:32:00.163732 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dec880c9-ae48-432b-8791-a7e82acaeb1e" containerName="mariadb-client" Feb 27 17:32:00 crc kubenswrapper[4830]: I0227 17:32:00.163764 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="dec880c9-ae48-432b-8791-a7e82acaeb1e" containerName="mariadb-client" Feb 27 17:32:00 crc kubenswrapper[4830]: I0227 17:32:00.164038 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="dec880c9-ae48-432b-8791-a7e82acaeb1e" 
containerName="mariadb-client" Feb 27 17:32:00 crc kubenswrapper[4830]: I0227 17:32:00.165086 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536892-6fnr5" Feb 27 17:32:00 crc kubenswrapper[4830]: I0227 17:32:00.168451 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 17:32:00 crc kubenswrapper[4830]: I0227 17:32:00.169731 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lmbtm" Feb 27 17:32:00 crc kubenswrapper[4830]: I0227 17:32:00.174821 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 17:32:00 crc kubenswrapper[4830]: I0227 17:32:00.180113 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536892-6fnr5"] Feb 27 17:32:00 crc kubenswrapper[4830]: I0227 17:32:00.246714 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crxzg\" (UniqueName: \"kubernetes.io/projected/1b7ee878-6086-45a4-a46c-ba5aa7f2d79f-kube-api-access-crxzg\") pod \"auto-csr-approver-29536892-6fnr5\" (UID: \"1b7ee878-6086-45a4-a46c-ba5aa7f2d79f\") " pod="openshift-infra/auto-csr-approver-29536892-6fnr5" Feb 27 17:32:00 crc kubenswrapper[4830]: I0227 17:32:00.349706 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crxzg\" (UniqueName: \"kubernetes.io/projected/1b7ee878-6086-45a4-a46c-ba5aa7f2d79f-kube-api-access-crxzg\") pod \"auto-csr-approver-29536892-6fnr5\" (UID: \"1b7ee878-6086-45a4-a46c-ba5aa7f2d79f\") " pod="openshift-infra/auto-csr-approver-29536892-6fnr5" Feb 27 17:32:00 crc kubenswrapper[4830]: I0227 17:32:00.379220 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crxzg\" (UniqueName: 
\"kubernetes.io/projected/1b7ee878-6086-45a4-a46c-ba5aa7f2d79f-kube-api-access-crxzg\") pod \"auto-csr-approver-29536892-6fnr5\" (UID: \"1b7ee878-6086-45a4-a46c-ba5aa7f2d79f\") " pod="openshift-infra/auto-csr-approver-29536892-6fnr5" Feb 27 17:32:00 crc kubenswrapper[4830]: I0227 17:32:00.497178 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536892-6fnr5" Feb 27 17:32:01 crc kubenswrapper[4830]: I0227 17:32:01.016389 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536892-6fnr5"] Feb 27 17:32:01 crc kubenswrapper[4830]: W0227 17:32:01.018566 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1b7ee878_6086_45a4_a46c_ba5aa7f2d79f.slice/crio-41d0caac4bc6f5d1deefed5f8caf56014bbf6da17e7490c37ec3715d37c5e323 WatchSource:0}: Error finding container 41d0caac4bc6f5d1deefed5f8caf56014bbf6da17e7490c37ec3715d37c5e323: Status 404 returned error can't find the container with id 41d0caac4bc6f5d1deefed5f8caf56014bbf6da17e7490c37ec3715d37c5e323 Feb 27 17:32:01 crc kubenswrapper[4830]: I0227 17:32:01.023927 4830 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 27 17:32:01 crc kubenswrapper[4830]: I0227 17:32:01.913866 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536892-6fnr5" event={"ID":"1b7ee878-6086-45a4-a46c-ba5aa7f2d79f","Type":"ContainerStarted","Data":"41d0caac4bc6f5d1deefed5f8caf56014bbf6da17e7490c37ec3715d37c5e323"} Feb 27 17:32:02 crc kubenswrapper[4830]: I0227 17:32:02.926321 4830 generic.go:334] "Generic (PLEG): container finished" podID="1b7ee878-6086-45a4-a46c-ba5aa7f2d79f" containerID="25164afdd22e06077501b45e41e095380202757a71cdd6bc94d864b4c7eb0a49" exitCode=0 Feb 27 17:32:02 crc kubenswrapper[4830]: I0227 17:32:02.926414 4830 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-infra/auto-csr-approver-29536892-6fnr5" event={"ID":"1b7ee878-6086-45a4-a46c-ba5aa7f2d79f","Type":"ContainerDied","Data":"25164afdd22e06077501b45e41e095380202757a71cdd6bc94d864b4c7eb0a49"} Feb 27 17:32:04 crc kubenswrapper[4830]: I0227 17:32:04.330646 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536892-6fnr5" Feb 27 17:32:04 crc kubenswrapper[4830]: I0227 17:32:04.423063 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-crxzg\" (UniqueName: \"kubernetes.io/projected/1b7ee878-6086-45a4-a46c-ba5aa7f2d79f-kube-api-access-crxzg\") pod \"1b7ee878-6086-45a4-a46c-ba5aa7f2d79f\" (UID: \"1b7ee878-6086-45a4-a46c-ba5aa7f2d79f\") " Feb 27 17:32:04 crc kubenswrapper[4830]: I0227 17:32:04.432694 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b7ee878-6086-45a4-a46c-ba5aa7f2d79f-kube-api-access-crxzg" (OuterVolumeSpecName: "kube-api-access-crxzg") pod "1b7ee878-6086-45a4-a46c-ba5aa7f2d79f" (UID: "1b7ee878-6086-45a4-a46c-ba5aa7f2d79f"). InnerVolumeSpecName "kube-api-access-crxzg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:32:04 crc kubenswrapper[4830]: I0227 17:32:04.526022 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-crxzg\" (UniqueName: \"kubernetes.io/projected/1b7ee878-6086-45a4-a46c-ba5aa7f2d79f-kube-api-access-crxzg\") on node \"crc\" DevicePath \"\"" Feb 27 17:32:04 crc kubenswrapper[4830]: I0227 17:32:04.968064 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536892-6fnr5" event={"ID":"1b7ee878-6086-45a4-a46c-ba5aa7f2d79f","Type":"ContainerDied","Data":"41d0caac4bc6f5d1deefed5f8caf56014bbf6da17e7490c37ec3715d37c5e323"} Feb 27 17:32:04 crc kubenswrapper[4830]: I0227 17:32:04.968106 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="41d0caac4bc6f5d1deefed5f8caf56014bbf6da17e7490c37ec3715d37c5e323" Feb 27 17:32:04 crc kubenswrapper[4830]: I0227 17:32:04.968113 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536892-6fnr5" Feb 27 17:32:05 crc kubenswrapper[4830]: I0227 17:32:05.422987 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536886-tb8fw"] Feb 27 17:32:05 crc kubenswrapper[4830]: I0227 17:32:05.429891 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536886-tb8fw"] Feb 27 17:32:06 crc kubenswrapper[4830]: I0227 17:32:06.778903 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8d1db84-59f7-464c-958a-2f1c2b6744d8" path="/var/lib/kubelet/pods/e8d1db84-59f7-464c-958a-2f1c2b6744d8/volumes" Feb 27 17:32:14 crc kubenswrapper[4830]: I0227 17:32:14.770216 4830 scope.go:117] "RemoveContainer" containerID="313fc4aac8e61b508c32aefdb01d7489ea5d8194a163882d5d4b5ec4665839cb" Feb 27 17:32:14 crc kubenswrapper[4830]: E0227 17:32:14.771813 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 17:32:28 crc kubenswrapper[4830]: I0227 17:32:28.763817 4830 scope.go:117] "RemoveContainer" containerID="313fc4aac8e61b508c32aefdb01d7489ea5d8194a163882d5d4b5ec4665839cb" Feb 27 17:32:28 crc kubenswrapper[4830]: E0227 17:32:28.765191 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 17:32:40 crc kubenswrapper[4830]: I0227 17:32:40.763290 4830 scope.go:117] "RemoveContainer" containerID="313fc4aac8e61b508c32aefdb01d7489ea5d8194a163882d5d4b5ec4665839cb" Feb 27 17:32:40 crc kubenswrapper[4830]: E0227 17:32:40.764727 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 17:32:48 crc kubenswrapper[4830]: I0227 17:32:48.419818 4830 scope.go:117] "RemoveContainer" containerID="8fe2bd0b693bf56df228c47272f949bc0bc0a3d4192b3ac4591598d6b0153d7d" Feb 27 17:32:55 crc kubenswrapper[4830]: I0227 17:32:55.763362 4830 scope.go:117] "RemoveContainer" 
containerID="313fc4aac8e61b508c32aefdb01d7489ea5d8194a163882d5d4b5ec4665839cb" Feb 27 17:32:55 crc kubenswrapper[4830]: E0227 17:32:55.765062 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 17:33:08 crc kubenswrapper[4830]: I0227 17:33:08.763296 4830 scope.go:117] "RemoveContainer" containerID="313fc4aac8e61b508c32aefdb01d7489ea5d8194a163882d5d4b5ec4665839cb" Feb 27 17:33:08 crc kubenswrapper[4830]: E0227 17:33:08.764292 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 17:33:21 crc kubenswrapper[4830]: I0227 17:33:21.762323 4830 scope.go:117] "RemoveContainer" containerID="313fc4aac8e61b508c32aefdb01d7489ea5d8194a163882d5d4b5ec4665839cb" Feb 27 17:33:21 crc kubenswrapper[4830]: E0227 17:33:21.763649 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 17:33:36 crc kubenswrapper[4830]: I0227 17:33:36.763797 4830 scope.go:117] 
"RemoveContainer" containerID="313fc4aac8e61b508c32aefdb01d7489ea5d8194a163882d5d4b5ec4665839cb" Feb 27 17:33:36 crc kubenswrapper[4830]: E0227 17:33:36.765384 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 17:33:50 crc kubenswrapper[4830]: I0227 17:33:50.763584 4830 scope.go:117] "RemoveContainer" containerID="313fc4aac8e61b508c32aefdb01d7489ea5d8194a163882d5d4b5ec4665839cb" Feb 27 17:33:50 crc kubenswrapper[4830]: E0227 17:33:50.764448 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 17:34:00 crc kubenswrapper[4830]: I0227 17:34:00.173012 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536894-d74wz"] Feb 27 17:34:00 crc kubenswrapper[4830]: E0227 17:34:00.174748 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b7ee878-6086-45a4-a46c-ba5aa7f2d79f" containerName="oc" Feb 27 17:34:00 crc kubenswrapper[4830]: I0227 17:34:00.174775 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b7ee878-6086-45a4-a46c-ba5aa7f2d79f" containerName="oc" Feb 27 17:34:00 crc kubenswrapper[4830]: I0227 17:34:00.175118 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b7ee878-6086-45a4-a46c-ba5aa7f2d79f" containerName="oc" Feb 
27 17:34:00 crc kubenswrapper[4830]: I0227 17:34:00.176248 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536894-d74wz" Feb 27 17:34:00 crc kubenswrapper[4830]: I0227 17:34:00.183805 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lmbtm" Feb 27 17:34:00 crc kubenswrapper[4830]: I0227 17:34:00.184309 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 17:34:00 crc kubenswrapper[4830]: I0227 17:34:00.184556 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 17:34:00 crc kubenswrapper[4830]: I0227 17:34:00.193016 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536894-d74wz"] Feb 27 17:34:00 crc kubenswrapper[4830]: I0227 17:34:00.318995 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wknl\" (UniqueName: \"kubernetes.io/projected/4d453e6f-c44f-480c-bda1-650c519b749a-kube-api-access-2wknl\") pod \"auto-csr-approver-29536894-d74wz\" (UID: \"4d453e6f-c44f-480c-bda1-650c519b749a\") " pod="openshift-infra/auto-csr-approver-29536894-d74wz" Feb 27 17:34:00 crc kubenswrapper[4830]: I0227 17:34:00.421363 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2wknl\" (UniqueName: \"kubernetes.io/projected/4d453e6f-c44f-480c-bda1-650c519b749a-kube-api-access-2wknl\") pod \"auto-csr-approver-29536894-d74wz\" (UID: \"4d453e6f-c44f-480c-bda1-650c519b749a\") " pod="openshift-infra/auto-csr-approver-29536894-d74wz" Feb 27 17:34:00 crc kubenswrapper[4830]: I0227 17:34:00.449755 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2wknl\" (UniqueName: 
\"kubernetes.io/projected/4d453e6f-c44f-480c-bda1-650c519b749a-kube-api-access-2wknl\") pod \"auto-csr-approver-29536894-d74wz\" (UID: \"4d453e6f-c44f-480c-bda1-650c519b749a\") " pod="openshift-infra/auto-csr-approver-29536894-d74wz" Feb 27 17:34:00 crc kubenswrapper[4830]: I0227 17:34:00.512784 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536894-d74wz" Feb 27 17:34:01 crc kubenswrapper[4830]: I0227 17:34:01.055416 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536894-d74wz"] Feb 27 17:34:01 crc kubenswrapper[4830]: W0227 17:34:01.059372 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4d453e6f_c44f_480c_bda1_650c519b749a.slice/crio-93f1feae6656e69aa4ba20c61399f63c71773db1b7bea92726d678bb42b47f5f WatchSource:0}: Error finding container 93f1feae6656e69aa4ba20c61399f63c71773db1b7bea92726d678bb42b47f5f: Status 404 returned error can't find the container with id 93f1feae6656e69aa4ba20c61399f63c71773db1b7bea92726d678bb42b47f5f Feb 27 17:34:01 crc kubenswrapper[4830]: I0227 17:34:01.214925 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536894-d74wz" event={"ID":"4d453e6f-c44f-480c-bda1-650c519b749a","Type":"ContainerStarted","Data":"93f1feae6656e69aa4ba20c61399f63c71773db1b7bea92726d678bb42b47f5f"} Feb 27 17:34:01 crc kubenswrapper[4830]: I0227 17:34:01.763185 4830 scope.go:117] "RemoveContainer" containerID="313fc4aac8e61b508c32aefdb01d7489ea5d8194a163882d5d4b5ec4665839cb" Feb 27 17:34:01 crc kubenswrapper[4830]: E0227 17:34:01.763745 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 17:34:03 crc kubenswrapper[4830]: I0227 17:34:03.246462 4830 generic.go:334] "Generic (PLEG): container finished" podID="4d453e6f-c44f-480c-bda1-650c519b749a" containerID="181f95bdb98422c1ec4757b625802270a141a2ad650e80a8426133b764a0c4d8" exitCode=0 Feb 27 17:34:03 crc kubenswrapper[4830]: I0227 17:34:03.246942 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536894-d74wz" event={"ID":"4d453e6f-c44f-480c-bda1-650c519b749a","Type":"ContainerDied","Data":"181f95bdb98422c1ec4757b625802270a141a2ad650e80a8426133b764a0c4d8"} Feb 27 17:34:04 crc kubenswrapper[4830]: I0227 17:34:04.679457 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536894-d74wz" Feb 27 17:34:04 crc kubenswrapper[4830]: I0227 17:34:04.815004 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2wknl\" (UniqueName: \"kubernetes.io/projected/4d453e6f-c44f-480c-bda1-650c519b749a-kube-api-access-2wknl\") pod \"4d453e6f-c44f-480c-bda1-650c519b749a\" (UID: \"4d453e6f-c44f-480c-bda1-650c519b749a\") " Feb 27 17:34:04 crc kubenswrapper[4830]: I0227 17:34:04.825058 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d453e6f-c44f-480c-bda1-650c519b749a-kube-api-access-2wknl" (OuterVolumeSpecName: "kube-api-access-2wknl") pod "4d453e6f-c44f-480c-bda1-650c519b749a" (UID: "4d453e6f-c44f-480c-bda1-650c519b749a"). InnerVolumeSpecName "kube-api-access-2wknl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:34:04 crc kubenswrapper[4830]: I0227 17:34:04.918274 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2wknl\" (UniqueName: \"kubernetes.io/projected/4d453e6f-c44f-480c-bda1-650c519b749a-kube-api-access-2wknl\") on node \"crc\" DevicePath \"\"" Feb 27 17:34:05 crc kubenswrapper[4830]: I0227 17:34:05.271817 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536894-d74wz" event={"ID":"4d453e6f-c44f-480c-bda1-650c519b749a","Type":"ContainerDied","Data":"93f1feae6656e69aa4ba20c61399f63c71773db1b7bea92726d678bb42b47f5f"} Feb 27 17:34:05 crc kubenswrapper[4830]: I0227 17:34:05.271900 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="93f1feae6656e69aa4ba20c61399f63c71773db1b7bea92726d678bb42b47f5f" Feb 27 17:34:05 crc kubenswrapper[4830]: I0227 17:34:05.271908 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536894-d74wz" Feb 27 17:34:05 crc kubenswrapper[4830]: I0227 17:34:05.784528 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536888-cf7qh"] Feb 27 17:34:05 crc kubenswrapper[4830]: I0227 17:34:05.796056 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536888-cf7qh"] Feb 27 17:34:06 crc kubenswrapper[4830]: I0227 17:34:06.782543 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="022c16c8-6b4c-4b11-a860-f9212af89fdd" path="/var/lib/kubelet/pods/022c16c8-6b4c-4b11-a860-f9212af89fdd/volumes" Feb 27 17:34:12 crc kubenswrapper[4830]: I0227 17:34:12.763768 4830 scope.go:117] "RemoveContainer" containerID="313fc4aac8e61b508c32aefdb01d7489ea5d8194a163882d5d4b5ec4665839cb" Feb 27 17:34:12 crc kubenswrapper[4830]: E0227 17:34:12.765169 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 17:34:25 crc kubenswrapper[4830]: I0227 17:34:25.764336 4830 scope.go:117] "RemoveContainer" containerID="313fc4aac8e61b508c32aefdb01d7489ea5d8194a163882d5d4b5ec4665839cb" Feb 27 17:34:25 crc kubenswrapper[4830]: E0227 17:34:25.765858 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 17:34:38 crc kubenswrapper[4830]: I0227 17:34:38.763646 4830 scope.go:117] "RemoveContainer" containerID="313fc4aac8e61b508c32aefdb01d7489ea5d8194a163882d5d4b5ec4665839cb" Feb 27 17:34:38 crc kubenswrapper[4830]: E0227 17:34:38.764705 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 17:34:48 crc kubenswrapper[4830]: I0227 17:34:48.564391 4830 scope.go:117] "RemoveContainer" containerID="a480bad0cbfab70d22afdc63f371aa0c1ccec24ce2eea886522351c227e7342f" Feb 27 17:34:48 crc kubenswrapper[4830]: I0227 17:34:48.606296 4830 scope.go:117] "RemoveContainer" 
containerID="a89cfd00e6a4fb866e3b23015795bcff47fb47761897bd84d6cdd1d3433adafb" Feb 27 17:34:51 crc kubenswrapper[4830]: I0227 17:34:51.762243 4830 scope.go:117] "RemoveContainer" containerID="313fc4aac8e61b508c32aefdb01d7489ea5d8194a163882d5d4b5ec4665839cb" Feb 27 17:34:51 crc kubenswrapper[4830]: E0227 17:34:51.763480 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 17:35:05 crc kubenswrapper[4830]: I0227 17:35:05.763596 4830 scope.go:117] "RemoveContainer" containerID="313fc4aac8e61b508c32aefdb01d7489ea5d8194a163882d5d4b5ec4665839cb" Feb 27 17:35:05 crc kubenswrapper[4830]: E0227 17:35:05.765296 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 17:35:17 crc kubenswrapper[4830]: I0227 17:35:17.762693 4830 scope.go:117] "RemoveContainer" containerID="313fc4aac8e61b508c32aefdb01d7489ea5d8194a163882d5d4b5ec4665839cb" Feb 27 17:35:17 crc kubenswrapper[4830]: E0227 17:35:17.763357 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 17:35:23 crc kubenswrapper[4830]: I0227 17:35:23.949588 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-copy-data"] Feb 27 17:35:23 crc kubenswrapper[4830]: E0227 17:35:23.951149 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d453e6f-c44f-480c-bda1-650c519b749a" containerName="oc" Feb 27 17:35:23 crc kubenswrapper[4830]: I0227 17:35:23.951232 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d453e6f-c44f-480c-bda1-650c519b749a" containerName="oc" Feb 27 17:35:23 crc kubenswrapper[4830]: I0227 17:35:23.951742 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d453e6f-c44f-480c-bda1-650c519b749a" containerName="oc" Feb 27 17:35:23 crc kubenswrapper[4830]: I0227 17:35:23.953008 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-copy-data" Feb 27 17:35:23 crc kubenswrapper[4830]: I0227 17:35:23.957032 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-7xk7b" Feb 27 17:35:23 crc kubenswrapper[4830]: I0227 17:35:23.989847 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-copy-data"] Feb 27 17:35:24 crc kubenswrapper[4830]: I0227 17:35:24.091897 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2b8t5\" (UniqueName: \"kubernetes.io/projected/a7b1cd16-932c-44e3-b8fa-bed298c7d045-kube-api-access-2b8t5\") pod \"mariadb-copy-data\" (UID: \"a7b1cd16-932c-44e3-b8fa-bed298c7d045\") " pod="openstack/mariadb-copy-data" Feb 27 17:35:24 crc kubenswrapper[4830]: I0227 17:35:24.091989 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-df0d49b1-461a-4aae-aeb1-5512fd2f7599\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-df0d49b1-461a-4aae-aeb1-5512fd2f7599\") pod \"mariadb-copy-data\" (UID: \"a7b1cd16-932c-44e3-b8fa-bed298c7d045\") " pod="openstack/mariadb-copy-data"
Feb 27 17:35:24 crc kubenswrapper[4830]: I0227 17:35:24.193495 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2b8t5\" (UniqueName: \"kubernetes.io/projected/a7b1cd16-932c-44e3-b8fa-bed298c7d045-kube-api-access-2b8t5\") pod \"mariadb-copy-data\" (UID: \"a7b1cd16-932c-44e3-b8fa-bed298c7d045\") " pod="openstack/mariadb-copy-data"
Feb 27 17:35:24 crc kubenswrapper[4830]: I0227 17:35:24.193565 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-df0d49b1-461a-4aae-aeb1-5512fd2f7599\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-df0d49b1-461a-4aae-aeb1-5512fd2f7599\") pod \"mariadb-copy-data\" (UID: \"a7b1cd16-932c-44e3-b8fa-bed298c7d045\") " pod="openstack/mariadb-copy-data"
Feb 27 17:35:24 crc kubenswrapper[4830]: I0227 17:35:24.198026 4830 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 27 17:35:24 crc kubenswrapper[4830]: I0227 17:35:24.198066 4830 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-df0d49b1-461a-4aae-aeb1-5512fd2f7599\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-df0d49b1-461a-4aae-aeb1-5512fd2f7599\") pod \"mariadb-copy-data\" (UID: \"a7b1cd16-932c-44e3-b8fa-bed298c7d045\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/a3f2a9fa91e8a2e223eeb1217f12438ade0af65c175867e82f9778a952fc8bb8/globalmount\"" pod="openstack/mariadb-copy-data"
Feb 27 17:35:24 crc kubenswrapper[4830]: I0227 17:35:24.231029 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-df0d49b1-461a-4aae-aeb1-5512fd2f7599\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-df0d49b1-461a-4aae-aeb1-5512fd2f7599\") pod \"mariadb-copy-data\" (UID: \"a7b1cd16-932c-44e3-b8fa-bed298c7d045\") " pod="openstack/mariadb-copy-data"
Feb 27 17:35:24 crc kubenswrapper[4830]: I0227 17:35:24.233002 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2b8t5\" (UniqueName: \"kubernetes.io/projected/a7b1cd16-932c-44e3-b8fa-bed298c7d045-kube-api-access-2b8t5\") pod \"mariadb-copy-data\" (UID: \"a7b1cd16-932c-44e3-b8fa-bed298c7d045\") " pod="openstack/mariadb-copy-data"
Feb 27 17:35:24 crc kubenswrapper[4830]: I0227 17:35:24.295349 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-copy-data"
Feb 27 17:35:24 crc kubenswrapper[4830]: I0227 17:35:24.694303 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-copy-data"]
Feb 27 17:35:25 crc kubenswrapper[4830]: I0227 17:35:25.306009 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-copy-data" event={"ID":"a7b1cd16-932c-44e3-b8fa-bed298c7d045","Type":"ContainerStarted","Data":"8024b641a995a8ca10c295edfa71abc40db406299267272d7865332a76c8fa26"}
Feb 27 17:35:25 crc kubenswrapper[4830]: I0227 17:35:25.306526 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-copy-data" event={"ID":"a7b1cd16-932c-44e3-b8fa-bed298c7d045","Type":"ContainerStarted","Data":"5864068cb7cd13dff5d7bf7ff1df7d2a1e0bf0d57316f0fe4d4c75a6cbe212b8"}
Feb 27 17:35:25 crc kubenswrapper[4830]: I0227 17:35:25.328163 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mariadb-copy-data" podStartSLOduration=3.328137572 podStartE2EDuration="3.328137572s" podCreationTimestamp="2026-02-27 17:35:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 17:35:25.327331534 +0000 UTC m=+5321.416604037" watchObservedRunningTime="2026-02-27 17:35:25.328137572 +0000 UTC m=+5321.417410035"
Feb 27 17:35:28 crc kubenswrapper[4830]: I0227 17:35:28.304907 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client"]
Feb 27 17:35:28 crc kubenswrapper[4830]: I0227 17:35:28.307709 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client"
Feb 27 17:35:28 crc kubenswrapper[4830]: I0227 17:35:28.317397 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"]
Feb 27 17:35:28 crc kubenswrapper[4830]: I0227 17:35:28.420185 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wkjrl\" (UniqueName: \"kubernetes.io/projected/654375cd-21e9-4efd-83f8-5b53e8ad294c-kube-api-access-wkjrl\") pod \"mariadb-client\" (UID: \"654375cd-21e9-4efd-83f8-5b53e8ad294c\") " pod="openstack/mariadb-client"
Feb 27 17:35:28 crc kubenswrapper[4830]: I0227 17:35:28.521259 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wkjrl\" (UniqueName: \"kubernetes.io/projected/654375cd-21e9-4efd-83f8-5b53e8ad294c-kube-api-access-wkjrl\") pod \"mariadb-client\" (UID: \"654375cd-21e9-4efd-83f8-5b53e8ad294c\") " pod="openstack/mariadb-client"
Feb 27 17:35:28 crc kubenswrapper[4830]: I0227 17:35:28.550387 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wkjrl\" (UniqueName: \"kubernetes.io/projected/654375cd-21e9-4efd-83f8-5b53e8ad294c-kube-api-access-wkjrl\") pod \"mariadb-client\" (UID: \"654375cd-21e9-4efd-83f8-5b53e8ad294c\") " pod="openstack/mariadb-client"
Feb 27 17:35:28 crc kubenswrapper[4830]: I0227 17:35:28.644879 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client"
Feb 27 17:35:28 crc kubenswrapper[4830]: I0227 17:35:28.763704 4830 scope.go:117] "RemoveContainer" containerID="313fc4aac8e61b508c32aefdb01d7489ea5d8194a163882d5d4b5ec4665839cb"
Feb 27 17:35:28 crc kubenswrapper[4830]: E0227 17:35:28.764022 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25"
Feb 27 17:35:29 crc kubenswrapper[4830]: I0227 17:35:29.181382 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"]
Feb 27 17:35:29 crc kubenswrapper[4830]: I0227 17:35:29.345814 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"654375cd-21e9-4efd-83f8-5b53e8ad294c","Type":"ContainerStarted","Data":"1d8a1e2f9dbe52c7f04bb281958751447ffa686f3184a7fab52c4d6ae9c641df"}
Feb 27 17:35:30 crc kubenswrapper[4830]: I0227 17:35:30.360653 4830 generic.go:334] "Generic (PLEG): container finished" podID="654375cd-21e9-4efd-83f8-5b53e8ad294c" containerID="31102fddd1ac7d179d4134f82cb88832c5c80e8bd3ad53f87fc53c09096c59fb" exitCode=0
Feb 27 17:35:30 crc kubenswrapper[4830]: I0227 17:35:30.360737 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"654375cd-21e9-4efd-83f8-5b53e8ad294c","Type":"ContainerDied","Data":"31102fddd1ac7d179d4134f82cb88832c5c80e8bd3ad53f87fc53c09096c59fb"}
Feb 27 17:35:31 crc kubenswrapper[4830]: I0227 17:35:31.761374 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client"
Feb 27 17:35:31 crc kubenswrapper[4830]: I0227 17:35:31.791668 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client_654375cd-21e9-4efd-83f8-5b53e8ad294c/mariadb-client/0.log"
Feb 27 17:35:31 crc kubenswrapper[4830]: I0227 17:35:31.825290 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"]
Feb 27 17:35:31 crc kubenswrapper[4830]: I0227 17:35:31.834815 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client"]
Feb 27 17:35:31 crc kubenswrapper[4830]: I0227 17:35:31.881842 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wkjrl\" (UniqueName: \"kubernetes.io/projected/654375cd-21e9-4efd-83f8-5b53e8ad294c-kube-api-access-wkjrl\") pod \"654375cd-21e9-4efd-83f8-5b53e8ad294c\" (UID: \"654375cd-21e9-4efd-83f8-5b53e8ad294c\") "
Feb 27 17:35:31 crc kubenswrapper[4830]: I0227 17:35:31.895456 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/654375cd-21e9-4efd-83f8-5b53e8ad294c-kube-api-access-wkjrl" (OuterVolumeSpecName: "kube-api-access-wkjrl") pod "654375cd-21e9-4efd-83f8-5b53e8ad294c" (UID: "654375cd-21e9-4efd-83f8-5b53e8ad294c"). InnerVolumeSpecName "kube-api-access-wkjrl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 17:35:31 crc kubenswrapper[4830]: I0227 17:35:31.972579 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client"]
Feb 27 17:35:31 crc kubenswrapper[4830]: E0227 17:35:31.973975 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="654375cd-21e9-4efd-83f8-5b53e8ad294c" containerName="mariadb-client"
Feb 27 17:35:31 crc kubenswrapper[4830]: I0227 17:35:31.974002 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="654375cd-21e9-4efd-83f8-5b53e8ad294c" containerName="mariadb-client"
Feb 27 17:35:31 crc kubenswrapper[4830]: I0227 17:35:31.974205 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="654375cd-21e9-4efd-83f8-5b53e8ad294c" containerName="mariadb-client"
Feb 27 17:35:31 crc kubenswrapper[4830]: I0227 17:35:31.974868 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client"
Feb 27 17:35:31 crc kubenswrapper[4830]: I0227 17:35:31.979426 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"]
Feb 27 17:35:31 crc kubenswrapper[4830]: I0227 17:35:31.983349 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wkjrl\" (UniqueName: \"kubernetes.io/projected/654375cd-21e9-4efd-83f8-5b53e8ad294c-kube-api-access-wkjrl\") on node \"crc\" DevicePath \"\""
Feb 27 17:35:32 crc kubenswrapper[4830]: I0227 17:35:32.086886 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j27pz\" (UniqueName: \"kubernetes.io/projected/81bb98fa-188d-4d25-92d0-c95a261dc52f-kube-api-access-j27pz\") pod \"mariadb-client\" (UID: \"81bb98fa-188d-4d25-92d0-c95a261dc52f\") " pod="openstack/mariadb-client"
Feb 27 17:35:32 crc kubenswrapper[4830]: I0227 17:35:32.189073 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j27pz\" (UniqueName: \"kubernetes.io/projected/81bb98fa-188d-4d25-92d0-c95a261dc52f-kube-api-access-j27pz\") pod \"mariadb-client\" (UID: \"81bb98fa-188d-4d25-92d0-c95a261dc52f\") " pod="openstack/mariadb-client"
Feb 27 17:35:32 crc kubenswrapper[4830]: I0227 17:35:32.227525 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j27pz\" (UniqueName: \"kubernetes.io/projected/81bb98fa-188d-4d25-92d0-c95a261dc52f-kube-api-access-j27pz\") pod \"mariadb-client\" (UID: \"81bb98fa-188d-4d25-92d0-c95a261dc52f\") " pod="openstack/mariadb-client"
Feb 27 17:35:32 crc kubenswrapper[4830]: I0227 17:35:32.305371 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client"
Feb 27 17:35:32 crc kubenswrapper[4830]: I0227 17:35:32.384358 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1d8a1e2f9dbe52c7f04bb281958751447ffa686f3184a7fab52c4d6ae9c641df"
Feb 27 17:35:32 crc kubenswrapper[4830]: I0227 17:35:32.384525 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client"
Feb 27 17:35:32 crc kubenswrapper[4830]: I0227 17:35:32.424084 4830 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/mariadb-client" oldPodUID="654375cd-21e9-4efd-83f8-5b53e8ad294c" podUID="81bb98fa-188d-4d25-92d0-c95a261dc52f"
Feb 27 17:35:32 crc kubenswrapper[4830]: I0227 17:35:32.773880 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="654375cd-21e9-4efd-83f8-5b53e8ad294c" path="/var/lib/kubelet/pods/654375cd-21e9-4efd-83f8-5b53e8ad294c/volumes"
Feb 27 17:35:32 crc kubenswrapper[4830]: I0227 17:35:32.856846 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"]
Feb 27 17:35:32 crc kubenswrapper[4830]: W0227 17:35:32.857370 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod81bb98fa_188d_4d25_92d0_c95a261dc52f.slice/crio-b163535a893b31a536adaf2793895e548958e46382f51668feb4274ca0b52447 WatchSource:0}: Error finding container b163535a893b31a536adaf2793895e548958e46382f51668feb4274ca0b52447: Status 404 returned error can't find the container with id b163535a893b31a536adaf2793895e548958e46382f51668feb4274ca0b52447
Feb 27 17:35:33 crc kubenswrapper[4830]: I0227 17:35:33.397802 4830 generic.go:334] "Generic (PLEG): container finished" podID="81bb98fa-188d-4d25-92d0-c95a261dc52f" containerID="f3fc59651524f079b13d12be58df71cd15350f127c4285da7ae7c34f7ceb8ff6" exitCode=0
Feb 27 17:35:33 crc kubenswrapper[4830]: I0227 17:35:33.397880 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"81bb98fa-188d-4d25-92d0-c95a261dc52f","Type":"ContainerDied","Data":"f3fc59651524f079b13d12be58df71cd15350f127c4285da7ae7c34f7ceb8ff6"}
Feb 27 17:35:33 crc kubenswrapper[4830]: I0227 17:35:33.397927 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"81bb98fa-188d-4d25-92d0-c95a261dc52f","Type":"ContainerStarted","Data":"b163535a893b31a536adaf2793895e548958e46382f51668feb4274ca0b52447"}
Feb 27 17:35:34 crc kubenswrapper[4830]: I0227 17:35:34.835468 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client"
Feb 27 17:35:34 crc kubenswrapper[4830]: I0227 17:35:34.861535 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client_81bb98fa-188d-4d25-92d0-c95a261dc52f/mariadb-client/0.log"
Feb 27 17:35:34 crc kubenswrapper[4830]: I0227 17:35:34.902435 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"]
Feb 27 17:35:34 crc kubenswrapper[4830]: I0227 17:35:34.916726 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client"]
Feb 27 17:35:35 crc kubenswrapper[4830]: I0227 17:35:35.035925 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j27pz\" (UniqueName: \"kubernetes.io/projected/81bb98fa-188d-4d25-92d0-c95a261dc52f-kube-api-access-j27pz\") pod \"81bb98fa-188d-4d25-92d0-c95a261dc52f\" (UID: \"81bb98fa-188d-4d25-92d0-c95a261dc52f\") "
Feb 27 17:35:35 crc kubenswrapper[4830]: I0227 17:35:35.046028 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81bb98fa-188d-4d25-92d0-c95a261dc52f-kube-api-access-j27pz" (OuterVolumeSpecName: "kube-api-access-j27pz") pod "81bb98fa-188d-4d25-92d0-c95a261dc52f" (UID: "81bb98fa-188d-4d25-92d0-c95a261dc52f"). InnerVolumeSpecName "kube-api-access-j27pz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 17:35:35 crc kubenswrapper[4830]: I0227 17:35:35.138567 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j27pz\" (UniqueName: \"kubernetes.io/projected/81bb98fa-188d-4d25-92d0-c95a261dc52f-kube-api-access-j27pz\") on node \"crc\" DevicePath \"\""
Feb 27 17:35:35 crc kubenswrapper[4830]: I0227 17:35:35.424828 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b163535a893b31a536adaf2793895e548958e46382f51668feb4274ca0b52447"
Feb 27 17:35:35 crc kubenswrapper[4830]: I0227 17:35:35.425043 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client"
Feb 27 17:35:36 crc kubenswrapper[4830]: I0227 17:35:36.793781 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81bb98fa-188d-4d25-92d0-c95a261dc52f" path="/var/lib/kubelet/pods/81bb98fa-188d-4d25-92d0-c95a261dc52f/volumes"
Feb 27 17:35:40 crc kubenswrapper[4830]: I0227 17:35:40.763348 4830 scope.go:117] "RemoveContainer" containerID="313fc4aac8e61b508c32aefdb01d7489ea5d8194a163882d5d4b5ec4665839cb"
Feb 27 17:35:41 crc kubenswrapper[4830]: I0227 17:35:41.488853 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" event={"ID":"00d6b7ce-4757-4275-8345-60c1b546ce25","Type":"ContainerStarted","Data":"22fbcacd37ad840c90f07fc1e16c44d308f846d0fbace0b7a3cfa023009541af"}
Feb 27 17:35:42 crc kubenswrapper[4830]: I0227 17:35:42.281715 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-zb87f"]
Feb 27 17:35:42 crc kubenswrapper[4830]: E0227 17:35:42.282779 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81bb98fa-188d-4d25-92d0-c95a261dc52f" containerName="mariadb-client"
Feb 27 17:35:42 crc kubenswrapper[4830]: I0227 17:35:42.282808 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="81bb98fa-188d-4d25-92d0-c95a261dc52f" containerName="mariadb-client"
Feb 27 17:35:42 crc kubenswrapper[4830]: I0227 17:35:42.283192 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="81bb98fa-188d-4d25-92d0-c95a261dc52f" containerName="mariadb-client"
Feb 27 17:35:42 crc kubenswrapper[4830]: I0227 17:35:42.285146 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zb87f"
Feb 27 17:35:42 crc kubenswrapper[4830]: I0227 17:35:42.303779 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zb87f"]
Feb 27 17:35:42 crc kubenswrapper[4830]: I0227 17:35:42.387507 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxgsw\" (UniqueName: \"kubernetes.io/projected/317691ab-3073-41a4-9415-c481194fe41c-kube-api-access-gxgsw\") pod \"redhat-operators-zb87f\" (UID: \"317691ab-3073-41a4-9415-c481194fe41c\") " pod="openshift-marketplace/redhat-operators-zb87f"
Feb 27 17:35:42 crc kubenswrapper[4830]: I0227 17:35:42.387631 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/317691ab-3073-41a4-9415-c481194fe41c-utilities\") pod \"redhat-operators-zb87f\" (UID: \"317691ab-3073-41a4-9415-c481194fe41c\") " pod="openshift-marketplace/redhat-operators-zb87f"
Feb 27 17:35:42 crc kubenswrapper[4830]: I0227 17:35:42.387733 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/317691ab-3073-41a4-9415-c481194fe41c-catalog-content\") pod \"redhat-operators-zb87f\" (UID: \"317691ab-3073-41a4-9415-c481194fe41c\") " pod="openshift-marketplace/redhat-operators-zb87f"
Feb 27 17:35:42 crc kubenswrapper[4830]: I0227 17:35:42.490211 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gxgsw\" (UniqueName: \"kubernetes.io/projected/317691ab-3073-41a4-9415-c481194fe41c-kube-api-access-gxgsw\") pod \"redhat-operators-zb87f\" (UID: \"317691ab-3073-41a4-9415-c481194fe41c\") " pod="openshift-marketplace/redhat-operators-zb87f"
Feb 27 17:35:42 crc kubenswrapper[4830]: I0227 17:35:42.490378 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/317691ab-3073-41a4-9415-c481194fe41c-utilities\") pod \"redhat-operators-zb87f\" (UID: \"317691ab-3073-41a4-9415-c481194fe41c\") " pod="openshift-marketplace/redhat-operators-zb87f"
Feb 27 17:35:42 crc kubenswrapper[4830]: I0227 17:35:42.490451 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/317691ab-3073-41a4-9415-c481194fe41c-catalog-content\") pod \"redhat-operators-zb87f\" (UID: \"317691ab-3073-41a4-9415-c481194fe41c\") " pod="openshift-marketplace/redhat-operators-zb87f"
Feb 27 17:35:42 crc kubenswrapper[4830]: I0227 17:35:42.491041 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/317691ab-3073-41a4-9415-c481194fe41c-utilities\") pod \"redhat-operators-zb87f\" (UID: \"317691ab-3073-41a4-9415-c481194fe41c\") " pod="openshift-marketplace/redhat-operators-zb87f"
Feb 27 17:35:42 crc kubenswrapper[4830]: I0227 17:35:42.491412 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/317691ab-3073-41a4-9415-c481194fe41c-catalog-content\") pod \"redhat-operators-zb87f\" (UID: \"317691ab-3073-41a4-9415-c481194fe41c\") " pod="openshift-marketplace/redhat-operators-zb87f"
Feb 27 17:35:42 crc kubenswrapper[4830]: I0227 17:35:42.519447 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxgsw\" (UniqueName: \"kubernetes.io/projected/317691ab-3073-41a4-9415-c481194fe41c-kube-api-access-gxgsw\") pod \"redhat-operators-zb87f\" (UID: \"317691ab-3073-41a4-9415-c481194fe41c\") " pod="openshift-marketplace/redhat-operators-zb87f"
Feb 27 17:35:42 crc kubenswrapper[4830]: I0227 17:35:42.614166 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zb87f"
Feb 27 17:35:43 crc kubenswrapper[4830]: I0227 17:35:43.153037 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zb87f"]
Feb 27 17:35:43 crc kubenswrapper[4830]: I0227 17:35:43.506988 4830 generic.go:334] "Generic (PLEG): container finished" podID="317691ab-3073-41a4-9415-c481194fe41c" containerID="5493cd40ee8b1d652f9c34ae7929ead14efbc7312c7d142e6b9a527adae049a2" exitCode=0
Feb 27 17:35:43 crc kubenswrapper[4830]: I0227 17:35:43.507114 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zb87f" event={"ID":"317691ab-3073-41a4-9415-c481194fe41c","Type":"ContainerDied","Data":"5493cd40ee8b1d652f9c34ae7929ead14efbc7312c7d142e6b9a527adae049a2"}
Feb 27 17:35:43 crc kubenswrapper[4830]: I0227 17:35:43.507433 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zb87f" event={"ID":"317691ab-3073-41a4-9415-c481194fe41c","Type":"ContainerStarted","Data":"69e976c331d380c70e54ee622c19233f4e3872b5c802dcf8424302fed6be9752"}
Feb 27 17:35:44 crc kubenswrapper[4830]: I0227 17:35:44.517404 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zb87f" event={"ID":"317691ab-3073-41a4-9415-c481194fe41c","Type":"ContainerStarted","Data":"f5a5452e0e8ac948bd4415ab53cb44073c77406990ecbd8037c8109b8da5ecaa"}
Feb 27 17:35:45 crc kubenswrapper[4830]: I0227 17:35:45.532304 4830 generic.go:334] "Generic (PLEG): container finished" podID="317691ab-3073-41a4-9415-c481194fe41c" containerID="f5a5452e0e8ac948bd4415ab53cb44073c77406990ecbd8037c8109b8da5ecaa" exitCode=0
Feb 27 17:35:45 crc kubenswrapper[4830]: I0227 17:35:45.532366 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zb87f" event={"ID":"317691ab-3073-41a4-9415-c481194fe41c","Type":"ContainerDied","Data":"f5a5452e0e8ac948bd4415ab53cb44073c77406990ecbd8037c8109b8da5ecaa"}
Feb 27 17:35:46 crc kubenswrapper[4830]: I0227 17:35:46.548766 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zb87f" event={"ID":"317691ab-3073-41a4-9415-c481194fe41c","Type":"ContainerStarted","Data":"bf0a61463c0fc22ddba84638f30fa6a8470bdcd31fcac8599345d48f7956690c"}
Feb 27 17:35:46 crc kubenswrapper[4830]: I0227 17:35:46.583043 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-zb87f" podStartSLOduration=2.1255381509999998 podStartE2EDuration="4.583008087s" podCreationTimestamp="2026-02-27 17:35:42 +0000 UTC" firstStartedPulling="2026-02-27 17:35:43.50933693 +0000 UTC m=+5339.598609393" lastFinishedPulling="2026-02-27 17:35:45.966806856 +0000 UTC m=+5342.056079329" observedRunningTime="2026-02-27 17:35:46.581772667 +0000 UTC m=+5342.671045150" watchObservedRunningTime="2026-02-27 17:35:46.583008087 +0000 UTC m=+5342.672280630"
Feb 27 17:35:52 crc kubenswrapper[4830]: I0227 17:35:52.614939 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-zb87f"
Feb 27 17:35:52 crc kubenswrapper[4830]: I0227 17:35:52.615653 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-zb87f"
Feb 27 17:35:53 crc kubenswrapper[4830]: I0227 17:35:53.668401 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zb87f" podUID="317691ab-3073-41a4-9415-c481194fe41c" containerName="registry-server" probeResult="failure" output=<
Feb 27 17:35:53 crc kubenswrapper[4830]: timeout: failed to connect service ":50051" within 1s
Feb 27 17:35:53 crc kubenswrapper[4830]: >
Feb 27 17:35:57 crc kubenswrapper[4830]: I0227 17:35:57.572628 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-rn2cp"]
Feb 27 17:35:57 crc kubenswrapper[4830]: I0227 17:35:57.576036 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rn2cp"
Feb 27 17:35:57 crc kubenswrapper[4830]: I0227 17:35:57.592328 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rn2cp"]
Feb 27 17:35:57 crc kubenswrapper[4830]: I0227 17:35:57.688136 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clzsv\" (UniqueName: \"kubernetes.io/projected/5bb16ad0-59a4-4667-bb05-4f1e6723bcd1-kube-api-access-clzsv\") pod \"community-operators-rn2cp\" (UID: \"5bb16ad0-59a4-4667-bb05-4f1e6723bcd1\") " pod="openshift-marketplace/community-operators-rn2cp"
Feb 27 17:35:57 crc kubenswrapper[4830]: I0227 17:35:57.688229 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5bb16ad0-59a4-4667-bb05-4f1e6723bcd1-utilities\") pod \"community-operators-rn2cp\" (UID: \"5bb16ad0-59a4-4667-bb05-4f1e6723bcd1\") " pod="openshift-marketplace/community-operators-rn2cp"
Feb 27 17:35:57 crc kubenswrapper[4830]: I0227 17:35:57.688313 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5bb16ad0-59a4-4667-bb05-4f1e6723bcd1-catalog-content\") pod \"community-operators-rn2cp\" (UID: \"5bb16ad0-59a4-4667-bb05-4f1e6723bcd1\") " pod="openshift-marketplace/community-operators-rn2cp"
Feb 27 17:35:57 crc kubenswrapper[4830]: I0227 17:35:57.790292 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-clzsv\" (UniqueName: \"kubernetes.io/projected/5bb16ad0-59a4-4667-bb05-4f1e6723bcd1-kube-api-access-clzsv\") pod \"community-operators-rn2cp\" (UID: \"5bb16ad0-59a4-4667-bb05-4f1e6723bcd1\") " pod="openshift-marketplace/community-operators-rn2cp"
Feb 27 17:35:57 crc kubenswrapper[4830]: I0227 17:35:57.790373 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5bb16ad0-59a4-4667-bb05-4f1e6723bcd1-utilities\") pod \"community-operators-rn2cp\" (UID: \"5bb16ad0-59a4-4667-bb05-4f1e6723bcd1\") " pod="openshift-marketplace/community-operators-rn2cp"
Feb 27 17:35:57 crc kubenswrapper[4830]: I0227 17:35:57.790424 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5bb16ad0-59a4-4667-bb05-4f1e6723bcd1-catalog-content\") pod \"community-operators-rn2cp\" (UID: \"5bb16ad0-59a4-4667-bb05-4f1e6723bcd1\") " pod="openshift-marketplace/community-operators-rn2cp"
Feb 27 17:35:57 crc kubenswrapper[4830]: I0227 17:35:57.791100 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5bb16ad0-59a4-4667-bb05-4f1e6723bcd1-catalog-content\") pod \"community-operators-rn2cp\" (UID: \"5bb16ad0-59a4-4667-bb05-4f1e6723bcd1\") " pod="openshift-marketplace/community-operators-rn2cp"
Feb 27 17:35:57 crc kubenswrapper[4830]: I0227 17:35:57.791224 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5bb16ad0-59a4-4667-bb05-4f1e6723bcd1-utilities\") pod \"community-operators-rn2cp\" (UID: \"5bb16ad0-59a4-4667-bb05-4f1e6723bcd1\") " pod="openshift-marketplace/community-operators-rn2cp"
Feb 27 17:35:57 crc kubenswrapper[4830]: I0227 17:35:57.810902 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-clzsv\" (UniqueName: \"kubernetes.io/projected/5bb16ad0-59a4-4667-bb05-4f1e6723bcd1-kube-api-access-clzsv\") pod \"community-operators-rn2cp\" (UID: \"5bb16ad0-59a4-4667-bb05-4f1e6723bcd1\") " pod="openshift-marketplace/community-operators-rn2cp"
Feb 27 17:35:57 crc kubenswrapper[4830]: I0227 17:35:57.908720 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rn2cp"
Feb 27 17:35:58 crc kubenswrapper[4830]: I0227 17:35:58.422853 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rn2cp"]
Feb 27 17:35:58 crc kubenswrapper[4830]: W0227 17:35:58.423972 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5bb16ad0_59a4_4667_bb05_4f1e6723bcd1.slice/crio-e6de87fea5064bfb63345a07e48257e9fa882c87dc8a96a856a9dfb872d44e4b WatchSource:0}: Error finding container e6de87fea5064bfb63345a07e48257e9fa882c87dc8a96a856a9dfb872d44e4b: Status 404 returned error can't find the container with id e6de87fea5064bfb63345a07e48257e9fa882c87dc8a96a856a9dfb872d44e4b
Feb 27 17:35:58 crc kubenswrapper[4830]: I0227 17:35:58.672322 4830 generic.go:334] "Generic (PLEG): container finished" podID="5bb16ad0-59a4-4667-bb05-4f1e6723bcd1" containerID="31c8c26af76a5cb3e67109bc4b1aa820f59e8d640b266441fec0782d07a5021f" exitCode=0
Feb 27 17:35:58 crc kubenswrapper[4830]: I0227 17:35:58.672390 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rn2cp" event={"ID":"5bb16ad0-59a4-4667-bb05-4f1e6723bcd1","Type":"ContainerDied","Data":"31c8c26af76a5cb3e67109bc4b1aa820f59e8d640b266441fec0782d07a5021f"}
Feb 27 17:35:58 crc kubenswrapper[4830]: I0227 17:35:58.672425 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rn2cp" event={"ID":"5bb16ad0-59a4-4667-bb05-4f1e6723bcd1","Type":"ContainerStarted","Data":"e6de87fea5064bfb63345a07e48257e9fa882c87dc8a96a856a9dfb872d44e4b"}
Feb 27 17:36:00 crc kubenswrapper[4830]: I0227 17:36:00.170017 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536896-w25wj"]
Feb 27 17:36:00 crc kubenswrapper[4830]: I0227 17:36:00.172172 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536896-w25wj"
Feb 27 17:36:00 crc kubenswrapper[4830]: I0227 17:36:00.180327 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Feb 27 17:36:00 crc kubenswrapper[4830]: I0227 17:36:00.180572 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Feb 27 17:36:00 crc kubenswrapper[4830]: I0227 17:36:00.180840 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lmbtm"
Feb 27 17:36:00 crc kubenswrapper[4830]: I0227 17:36:00.203727 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536896-w25wj"]
Feb 27 17:36:00 crc kubenswrapper[4830]: I0227 17:36:00.340646 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5xs2\" (UniqueName: \"kubernetes.io/projected/d0ad818e-4327-4796-958d-87f0c600e5d0-kube-api-access-t5xs2\") pod \"auto-csr-approver-29536896-w25wj\" (UID: \"d0ad818e-4327-4796-958d-87f0c600e5d0\") " pod="openshift-infra/auto-csr-approver-29536896-w25wj"
Feb 27 17:36:00 crc kubenswrapper[4830]: I0227 17:36:00.443814 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t5xs2\" (UniqueName: \"kubernetes.io/projected/d0ad818e-4327-4796-958d-87f0c600e5d0-kube-api-access-t5xs2\") pod \"auto-csr-approver-29536896-w25wj\" (UID: \"d0ad818e-4327-4796-958d-87f0c600e5d0\") " pod="openshift-infra/auto-csr-approver-29536896-w25wj"
Feb 27 17:36:00 crc kubenswrapper[4830]: I0227 17:36:00.486630 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t5xs2\" (UniqueName: \"kubernetes.io/projected/d0ad818e-4327-4796-958d-87f0c600e5d0-kube-api-access-t5xs2\") pod \"auto-csr-approver-29536896-w25wj\" (UID: \"d0ad818e-4327-4796-958d-87f0c600e5d0\") " pod="openshift-infra/auto-csr-approver-29536896-w25wj"
Feb 27 17:36:00 crc kubenswrapper[4830]: I0227 17:36:00.499373 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536896-w25wj"
Feb 27 17:36:00 crc kubenswrapper[4830]: I0227 17:36:00.698496 4830 generic.go:334] "Generic (PLEG): container finished" podID="5bb16ad0-59a4-4667-bb05-4f1e6723bcd1" containerID="201575f0b11c86545c3632d291cf6936b02d3e2d99c4d82414b682d726afdbf5" exitCode=0
Feb 27 17:36:00 crc kubenswrapper[4830]: I0227 17:36:00.698556 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rn2cp" event={"ID":"5bb16ad0-59a4-4667-bb05-4f1e6723bcd1","Type":"ContainerDied","Data":"201575f0b11c86545c3632d291cf6936b02d3e2d99c4d82414b682d726afdbf5"}
Feb 27 17:36:00 crc kubenswrapper[4830]: I0227 17:36:00.987794 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536896-w25wj"]
Feb 27 17:36:00 crc kubenswrapper[4830]: W0227 17:36:00.988685 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd0ad818e_4327_4796_958d_87f0c600e5d0.slice/crio-bc217144572d33479d35d011cdff77f3b91b8671186af66b0674e38834377403 WatchSource:0}: Error finding container bc217144572d33479d35d011cdff77f3b91b8671186af66b0674e38834377403: Status 404 returned error can't find the container with id bc217144572d33479d35d011cdff77f3b91b8671186af66b0674e38834377403
Feb 27 17:36:01 crc kubenswrapper[4830]: I0227 17:36:01.710825 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536896-w25wj" event={"ID":"d0ad818e-4327-4796-958d-87f0c600e5d0","Type":"ContainerStarted","Data":"bc217144572d33479d35d011cdff77f3b91b8671186af66b0674e38834377403"}
Feb 27 17:36:01 crc kubenswrapper[4830]: I0227 17:36:01.713643 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rn2cp" event={"ID":"5bb16ad0-59a4-4667-bb05-4f1e6723bcd1","Type":"ContainerStarted","Data":"6ce8c4ca06a743994ee77dab0b6b9609963d6f9cff52b139dd39f59467930681"}
Feb 27 17:36:01 crc kubenswrapper[4830]: I0227 17:36:01.743564 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-rn2cp" podStartSLOduration=2.299283954 podStartE2EDuration="4.743528523s" podCreationTimestamp="2026-02-27 17:35:57 +0000 UTC" firstStartedPulling="2026-02-27 17:35:58.675297706 +0000 UTC m=+5354.764570169" lastFinishedPulling="2026-02-27 17:36:01.119542235 +0000 UTC m=+5357.208814738" observedRunningTime="2026-02-27 17:36:01.735692564 +0000 UTC m=+5357.824965067" watchObservedRunningTime="2026-02-27 17:36:01.743528523 +0000 UTC m=+5357.832801006"
Feb 27 17:36:02 crc kubenswrapper[4830]: I0227 17:36:02.660734 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-zb87f"
Feb 27 17:36:02 crc kubenswrapper[4830]: I0227 17:36:02.724239 4830 generic.go:334] "Generic (PLEG): container finished" podID="d0ad818e-4327-4796-958d-87f0c600e5d0" containerID="d9413209a63ed534e80c99f7d62265cb445e8bac648ad296b3a540292cb8161f" exitCode=0
Feb 27 17:36:02 crc kubenswrapper[4830]: I0227 17:36:02.724382 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536896-w25wj" event={"ID":"d0ad818e-4327-4796-958d-87f0c600e5d0","Type":"ContainerDied","Data":"d9413209a63ed534e80c99f7d62265cb445e8bac648ad296b3a540292cb8161f"}
Feb 27 17:36:02 crc kubenswrapper[4830]: I0227 17:36:02.737618 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-zb87f"
Feb 27 17:36:03 crc kubenswrapper[4830]: I0227 17:36:03.936478 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zb87f"]
Feb 27 17:36:03 crc kubenswrapper[4830]: I0227 17:36:03.937353 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-zb87f" podUID="317691ab-3073-41a4-9415-c481194fe41c" containerName="registry-server" containerID="cri-o://bf0a61463c0fc22ddba84638f30fa6a8470bdcd31fcac8599345d48f7956690c" gracePeriod=2
Feb 27 17:36:04 crc kubenswrapper[4830]: I0227 17:36:04.183631 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536896-w25wj"
Feb 27 17:36:04 crc kubenswrapper[4830]: I0227 17:36:04.333618 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t5xs2\" (UniqueName: \"kubernetes.io/projected/d0ad818e-4327-4796-958d-87f0c600e5d0-kube-api-access-t5xs2\") pod \"d0ad818e-4327-4796-958d-87f0c600e5d0\" (UID: \"d0ad818e-4327-4796-958d-87f0c600e5d0\") "
Feb 27 17:36:04 crc kubenswrapper[4830]: I0227 17:36:04.340572 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0ad818e-4327-4796-958d-87f0c600e5d0-kube-api-access-t5xs2" (OuterVolumeSpecName: "kube-api-access-t5xs2") pod "d0ad818e-4327-4796-958d-87f0c600e5d0" (UID: "d0ad818e-4327-4796-958d-87f0c600e5d0"). InnerVolumeSpecName "kube-api-access-t5xs2".
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:36:04 crc kubenswrapper[4830]: I0227 17:36:04.412288 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zb87f" Feb 27 17:36:04 crc kubenswrapper[4830]: I0227 17:36:04.434827 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gxgsw\" (UniqueName: \"kubernetes.io/projected/317691ab-3073-41a4-9415-c481194fe41c-kube-api-access-gxgsw\") pod \"317691ab-3073-41a4-9415-c481194fe41c\" (UID: \"317691ab-3073-41a4-9415-c481194fe41c\") " Feb 27 17:36:04 crc kubenswrapper[4830]: I0227 17:36:04.434914 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/317691ab-3073-41a4-9415-c481194fe41c-catalog-content\") pod \"317691ab-3073-41a4-9415-c481194fe41c\" (UID: \"317691ab-3073-41a4-9415-c481194fe41c\") " Feb 27 17:36:04 crc kubenswrapper[4830]: I0227 17:36:04.435004 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/317691ab-3073-41a4-9415-c481194fe41c-utilities\") pod \"317691ab-3073-41a4-9415-c481194fe41c\" (UID: \"317691ab-3073-41a4-9415-c481194fe41c\") " Feb 27 17:36:04 crc kubenswrapper[4830]: I0227 17:36:04.435544 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t5xs2\" (UniqueName: \"kubernetes.io/projected/d0ad818e-4327-4796-958d-87f0c600e5d0-kube-api-access-t5xs2\") on node \"crc\" DevicePath \"\"" Feb 27 17:36:04 crc kubenswrapper[4830]: I0227 17:36:04.436769 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/317691ab-3073-41a4-9415-c481194fe41c-utilities" (OuterVolumeSpecName: "utilities") pod "317691ab-3073-41a4-9415-c481194fe41c" (UID: "317691ab-3073-41a4-9415-c481194fe41c"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 17:36:04 crc kubenswrapper[4830]: I0227 17:36:04.439432 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/317691ab-3073-41a4-9415-c481194fe41c-kube-api-access-gxgsw" (OuterVolumeSpecName: "kube-api-access-gxgsw") pod "317691ab-3073-41a4-9415-c481194fe41c" (UID: "317691ab-3073-41a4-9415-c481194fe41c"). InnerVolumeSpecName "kube-api-access-gxgsw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:36:04 crc kubenswrapper[4830]: I0227 17:36:04.536233 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gxgsw\" (UniqueName: \"kubernetes.io/projected/317691ab-3073-41a4-9415-c481194fe41c-kube-api-access-gxgsw\") on node \"crc\" DevicePath \"\"" Feb 27 17:36:04 crc kubenswrapper[4830]: I0227 17:36:04.536503 4830 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/317691ab-3073-41a4-9415-c481194fe41c-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 17:36:04 crc kubenswrapper[4830]: I0227 17:36:04.639396 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/317691ab-3073-41a4-9415-c481194fe41c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "317691ab-3073-41a4-9415-c481194fe41c" (UID: "317691ab-3073-41a4-9415-c481194fe41c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 17:36:04 crc kubenswrapper[4830]: I0227 17:36:04.739539 4830 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/317691ab-3073-41a4-9415-c481194fe41c-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 17:36:04 crc kubenswrapper[4830]: I0227 17:36:04.748986 4830 generic.go:334] "Generic (PLEG): container finished" podID="317691ab-3073-41a4-9415-c481194fe41c" containerID="bf0a61463c0fc22ddba84638f30fa6a8470bdcd31fcac8599345d48f7956690c" exitCode=0 Feb 27 17:36:04 crc kubenswrapper[4830]: I0227 17:36:04.749128 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zb87f" event={"ID":"317691ab-3073-41a4-9415-c481194fe41c","Type":"ContainerDied","Data":"bf0a61463c0fc22ddba84638f30fa6a8470bdcd31fcac8599345d48f7956690c"} Feb 27 17:36:04 crc kubenswrapper[4830]: I0227 17:36:04.749202 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zb87f" event={"ID":"317691ab-3073-41a4-9415-c481194fe41c","Type":"ContainerDied","Data":"69e976c331d380c70e54ee622c19233f4e3872b5c802dcf8424302fed6be9752"} Feb 27 17:36:04 crc kubenswrapper[4830]: I0227 17:36:04.749238 4830 scope.go:117] "RemoveContainer" containerID="bf0a61463c0fc22ddba84638f30fa6a8470bdcd31fcac8599345d48f7956690c" Feb 27 17:36:04 crc kubenswrapper[4830]: I0227 17:36:04.749310 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zb87f" Feb 27 17:36:04 crc kubenswrapper[4830]: I0227 17:36:04.751450 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536896-w25wj" event={"ID":"d0ad818e-4327-4796-958d-87f0c600e5d0","Type":"ContainerDied","Data":"bc217144572d33479d35d011cdff77f3b91b8671186af66b0674e38834377403"} Feb 27 17:36:04 crc kubenswrapper[4830]: I0227 17:36:04.751497 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bc217144572d33479d35d011cdff77f3b91b8671186af66b0674e38834377403" Feb 27 17:36:04 crc kubenswrapper[4830]: I0227 17:36:04.751534 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536896-w25wj" Feb 27 17:36:04 crc kubenswrapper[4830]: I0227 17:36:04.772654 4830 scope.go:117] "RemoveContainer" containerID="f5a5452e0e8ac948bd4415ab53cb44073c77406990ecbd8037c8109b8da5ecaa" Feb 27 17:36:04 crc kubenswrapper[4830]: I0227 17:36:04.800879 4830 scope.go:117] "RemoveContainer" containerID="5493cd40ee8b1d652f9c34ae7929ead14efbc7312c7d142e6b9a527adae049a2" Feb 27 17:36:04 crc kubenswrapper[4830]: I0227 17:36:04.800994 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zb87f"] Feb 27 17:36:04 crc kubenswrapper[4830]: I0227 17:36:04.815917 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-zb87f"] Feb 27 17:36:04 crc kubenswrapper[4830]: I0227 17:36:04.828160 4830 scope.go:117] "RemoveContainer" containerID="bf0a61463c0fc22ddba84638f30fa6a8470bdcd31fcac8599345d48f7956690c" Feb 27 17:36:04 crc kubenswrapper[4830]: E0227 17:36:04.828578 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf0a61463c0fc22ddba84638f30fa6a8470bdcd31fcac8599345d48f7956690c\": container with ID starting with 
bf0a61463c0fc22ddba84638f30fa6a8470bdcd31fcac8599345d48f7956690c not found: ID does not exist" containerID="bf0a61463c0fc22ddba84638f30fa6a8470bdcd31fcac8599345d48f7956690c" Feb 27 17:36:04 crc kubenswrapper[4830]: I0227 17:36:04.828609 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf0a61463c0fc22ddba84638f30fa6a8470bdcd31fcac8599345d48f7956690c"} err="failed to get container status \"bf0a61463c0fc22ddba84638f30fa6a8470bdcd31fcac8599345d48f7956690c\": rpc error: code = NotFound desc = could not find container \"bf0a61463c0fc22ddba84638f30fa6a8470bdcd31fcac8599345d48f7956690c\": container with ID starting with bf0a61463c0fc22ddba84638f30fa6a8470bdcd31fcac8599345d48f7956690c not found: ID does not exist" Feb 27 17:36:04 crc kubenswrapper[4830]: I0227 17:36:04.828632 4830 scope.go:117] "RemoveContainer" containerID="f5a5452e0e8ac948bd4415ab53cb44073c77406990ecbd8037c8109b8da5ecaa" Feb 27 17:36:04 crc kubenswrapper[4830]: E0227 17:36:04.829129 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f5a5452e0e8ac948bd4415ab53cb44073c77406990ecbd8037c8109b8da5ecaa\": container with ID starting with f5a5452e0e8ac948bd4415ab53cb44073c77406990ecbd8037c8109b8da5ecaa not found: ID does not exist" containerID="f5a5452e0e8ac948bd4415ab53cb44073c77406990ecbd8037c8109b8da5ecaa" Feb 27 17:36:04 crc kubenswrapper[4830]: I0227 17:36:04.829150 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f5a5452e0e8ac948bd4415ab53cb44073c77406990ecbd8037c8109b8da5ecaa"} err="failed to get container status \"f5a5452e0e8ac948bd4415ab53cb44073c77406990ecbd8037c8109b8da5ecaa\": rpc error: code = NotFound desc = could not find container \"f5a5452e0e8ac948bd4415ab53cb44073c77406990ecbd8037c8109b8da5ecaa\": container with ID starting with f5a5452e0e8ac948bd4415ab53cb44073c77406990ecbd8037c8109b8da5ecaa not found: ID does not 
exist" Feb 27 17:36:04 crc kubenswrapper[4830]: I0227 17:36:04.829162 4830 scope.go:117] "RemoveContainer" containerID="5493cd40ee8b1d652f9c34ae7929ead14efbc7312c7d142e6b9a527adae049a2" Feb 27 17:36:04 crc kubenswrapper[4830]: E0227 17:36:04.829430 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5493cd40ee8b1d652f9c34ae7929ead14efbc7312c7d142e6b9a527adae049a2\": container with ID starting with 5493cd40ee8b1d652f9c34ae7929ead14efbc7312c7d142e6b9a527adae049a2 not found: ID does not exist" containerID="5493cd40ee8b1d652f9c34ae7929ead14efbc7312c7d142e6b9a527adae049a2" Feb 27 17:36:04 crc kubenswrapper[4830]: I0227 17:36:04.829452 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5493cd40ee8b1d652f9c34ae7929ead14efbc7312c7d142e6b9a527adae049a2"} err="failed to get container status \"5493cd40ee8b1d652f9c34ae7929ead14efbc7312c7d142e6b9a527adae049a2\": rpc error: code = NotFound desc = could not find container \"5493cd40ee8b1d652f9c34ae7929ead14efbc7312c7d142e6b9a527adae049a2\": container with ID starting with 5493cd40ee8b1d652f9c34ae7929ead14efbc7312c7d142e6b9a527adae049a2 not found: ID does not exist" Feb 27 17:36:05 crc kubenswrapper[4830]: I0227 17:36:05.290263 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536890-hsmxf"] Feb 27 17:36:05 crc kubenswrapper[4830]: I0227 17:36:05.300877 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536890-hsmxf"] Feb 27 17:36:06 crc kubenswrapper[4830]: I0227 17:36:06.780865 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="317691ab-3073-41a4-9415-c481194fe41c" path="/var/lib/kubelet/pods/317691ab-3073-41a4-9415-c481194fe41c/volumes" Feb 27 17:36:06 crc kubenswrapper[4830]: I0227 17:36:06.782572 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="d9d30f3b-8912-4203-88dc-194bd00d4a71" path="/var/lib/kubelet/pods/d9d30f3b-8912-4203-88dc-194bd00d4a71/volumes" Feb 27 17:36:07 crc kubenswrapper[4830]: I0227 17:36:07.909059 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-rn2cp" Feb 27 17:36:07 crc kubenswrapper[4830]: I0227 17:36:07.909153 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-rn2cp" Feb 27 17:36:07 crc kubenswrapper[4830]: I0227 17:36:07.986583 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-rn2cp" Feb 27 17:36:08 crc kubenswrapper[4830]: I0227 17:36:08.854360 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-rn2cp" Feb 27 17:36:08 crc kubenswrapper[4830]: I0227 17:36:08.915229 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rn2cp"] Feb 27 17:36:10 crc kubenswrapper[4830]: I0227 17:36:10.818170 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-rn2cp" podUID="5bb16ad0-59a4-4667-bb05-4f1e6723bcd1" containerName="registry-server" containerID="cri-o://6ce8c4ca06a743994ee77dab0b6b9609963d6f9cff52b139dd39f59467930681" gracePeriod=2 Feb 27 17:36:11 crc kubenswrapper[4830]: I0227 17:36:11.791976 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rn2cp" Feb 27 17:36:11 crc kubenswrapper[4830]: I0227 17:36:11.833863 4830 generic.go:334] "Generic (PLEG): container finished" podID="5bb16ad0-59a4-4667-bb05-4f1e6723bcd1" containerID="6ce8c4ca06a743994ee77dab0b6b9609963d6f9cff52b139dd39f59467930681" exitCode=0 Feb 27 17:36:11 crc kubenswrapper[4830]: I0227 17:36:11.834538 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rn2cp" event={"ID":"5bb16ad0-59a4-4667-bb05-4f1e6723bcd1","Type":"ContainerDied","Data":"6ce8c4ca06a743994ee77dab0b6b9609963d6f9cff52b139dd39f59467930681"} Feb 27 17:36:11 crc kubenswrapper[4830]: I0227 17:36:11.834607 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rn2cp" event={"ID":"5bb16ad0-59a4-4667-bb05-4f1e6723bcd1","Type":"ContainerDied","Data":"e6de87fea5064bfb63345a07e48257e9fa882c87dc8a96a856a9dfb872d44e4b"} Feb 27 17:36:11 crc kubenswrapper[4830]: I0227 17:36:11.834631 4830 scope.go:117] "RemoveContainer" containerID="6ce8c4ca06a743994ee77dab0b6b9609963d6f9cff52b139dd39f59467930681" Feb 27 17:36:11 crc kubenswrapper[4830]: I0227 17:36:11.834888 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rn2cp" Feb 27 17:36:11 crc kubenswrapper[4830]: I0227 17:36:11.853999 4830 scope.go:117] "RemoveContainer" containerID="201575f0b11c86545c3632d291cf6936b02d3e2d99c4d82414b682d726afdbf5" Feb 27 17:36:11 crc kubenswrapper[4830]: I0227 17:36:11.873244 4830 scope.go:117] "RemoveContainer" containerID="31c8c26af76a5cb3e67109bc4b1aa820f59e8d640b266441fec0782d07a5021f" Feb 27 17:36:11 crc kubenswrapper[4830]: I0227 17:36:11.903504 4830 scope.go:117] "RemoveContainer" containerID="6ce8c4ca06a743994ee77dab0b6b9609963d6f9cff52b139dd39f59467930681" Feb 27 17:36:11 crc kubenswrapper[4830]: E0227 17:36:11.904087 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ce8c4ca06a743994ee77dab0b6b9609963d6f9cff52b139dd39f59467930681\": container with ID starting with 6ce8c4ca06a743994ee77dab0b6b9609963d6f9cff52b139dd39f59467930681 not found: ID does not exist" containerID="6ce8c4ca06a743994ee77dab0b6b9609963d6f9cff52b139dd39f59467930681" Feb 27 17:36:11 crc kubenswrapper[4830]: I0227 17:36:11.904117 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ce8c4ca06a743994ee77dab0b6b9609963d6f9cff52b139dd39f59467930681"} err="failed to get container status \"6ce8c4ca06a743994ee77dab0b6b9609963d6f9cff52b139dd39f59467930681\": rpc error: code = NotFound desc = could not find container \"6ce8c4ca06a743994ee77dab0b6b9609963d6f9cff52b139dd39f59467930681\": container with ID starting with 6ce8c4ca06a743994ee77dab0b6b9609963d6f9cff52b139dd39f59467930681 not found: ID does not exist" Feb 27 17:36:11 crc kubenswrapper[4830]: I0227 17:36:11.904139 4830 scope.go:117] "RemoveContainer" containerID="201575f0b11c86545c3632d291cf6936b02d3e2d99c4d82414b682d726afdbf5" Feb 27 17:36:11 crc kubenswrapper[4830]: E0227 17:36:11.904510 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code 
= NotFound desc = could not find container \"201575f0b11c86545c3632d291cf6936b02d3e2d99c4d82414b682d726afdbf5\": container with ID starting with 201575f0b11c86545c3632d291cf6936b02d3e2d99c4d82414b682d726afdbf5 not found: ID does not exist" containerID="201575f0b11c86545c3632d291cf6936b02d3e2d99c4d82414b682d726afdbf5" Feb 27 17:36:11 crc kubenswrapper[4830]: I0227 17:36:11.904556 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"201575f0b11c86545c3632d291cf6936b02d3e2d99c4d82414b682d726afdbf5"} err="failed to get container status \"201575f0b11c86545c3632d291cf6936b02d3e2d99c4d82414b682d726afdbf5\": rpc error: code = NotFound desc = could not find container \"201575f0b11c86545c3632d291cf6936b02d3e2d99c4d82414b682d726afdbf5\": container with ID starting with 201575f0b11c86545c3632d291cf6936b02d3e2d99c4d82414b682d726afdbf5 not found: ID does not exist" Feb 27 17:36:11 crc kubenswrapper[4830]: I0227 17:36:11.904589 4830 scope.go:117] "RemoveContainer" containerID="31c8c26af76a5cb3e67109bc4b1aa820f59e8d640b266441fec0782d07a5021f" Feb 27 17:36:11 crc kubenswrapper[4830]: E0227 17:36:11.905197 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"31c8c26af76a5cb3e67109bc4b1aa820f59e8d640b266441fec0782d07a5021f\": container with ID starting with 31c8c26af76a5cb3e67109bc4b1aa820f59e8d640b266441fec0782d07a5021f not found: ID does not exist" containerID="31c8c26af76a5cb3e67109bc4b1aa820f59e8d640b266441fec0782d07a5021f" Feb 27 17:36:11 crc kubenswrapper[4830]: I0227 17:36:11.905304 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31c8c26af76a5cb3e67109bc4b1aa820f59e8d640b266441fec0782d07a5021f"} err="failed to get container status \"31c8c26af76a5cb3e67109bc4b1aa820f59e8d640b266441fec0782d07a5021f\": rpc error: code = NotFound desc = could not find container 
\"31c8c26af76a5cb3e67109bc4b1aa820f59e8d640b266441fec0782d07a5021f\": container with ID starting with 31c8c26af76a5cb3e67109bc4b1aa820f59e8d640b266441fec0782d07a5021f not found: ID does not exist" Feb 27 17:36:11 crc kubenswrapper[4830]: I0227 17:36:11.992624 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-clzsv\" (UniqueName: \"kubernetes.io/projected/5bb16ad0-59a4-4667-bb05-4f1e6723bcd1-kube-api-access-clzsv\") pod \"5bb16ad0-59a4-4667-bb05-4f1e6723bcd1\" (UID: \"5bb16ad0-59a4-4667-bb05-4f1e6723bcd1\") " Feb 27 17:36:11 crc kubenswrapper[4830]: I0227 17:36:11.992682 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5bb16ad0-59a4-4667-bb05-4f1e6723bcd1-catalog-content\") pod \"5bb16ad0-59a4-4667-bb05-4f1e6723bcd1\" (UID: \"5bb16ad0-59a4-4667-bb05-4f1e6723bcd1\") " Feb 27 17:36:11 crc kubenswrapper[4830]: I0227 17:36:11.992735 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5bb16ad0-59a4-4667-bb05-4f1e6723bcd1-utilities\") pod \"5bb16ad0-59a4-4667-bb05-4f1e6723bcd1\" (UID: \"5bb16ad0-59a4-4667-bb05-4f1e6723bcd1\") " Feb 27 17:36:11 crc kubenswrapper[4830]: I0227 17:36:11.993929 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5bb16ad0-59a4-4667-bb05-4f1e6723bcd1-utilities" (OuterVolumeSpecName: "utilities") pod "5bb16ad0-59a4-4667-bb05-4f1e6723bcd1" (UID: "5bb16ad0-59a4-4667-bb05-4f1e6723bcd1"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 17:36:11 crc kubenswrapper[4830]: I0227 17:36:11.999472 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5bb16ad0-59a4-4667-bb05-4f1e6723bcd1-kube-api-access-clzsv" (OuterVolumeSpecName: "kube-api-access-clzsv") pod "5bb16ad0-59a4-4667-bb05-4f1e6723bcd1" (UID: "5bb16ad0-59a4-4667-bb05-4f1e6723bcd1"). InnerVolumeSpecName "kube-api-access-clzsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:36:12 crc kubenswrapper[4830]: I0227 17:36:12.083325 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5bb16ad0-59a4-4667-bb05-4f1e6723bcd1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5bb16ad0-59a4-4667-bb05-4f1e6723bcd1" (UID: "5bb16ad0-59a4-4667-bb05-4f1e6723bcd1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 17:36:12 crc kubenswrapper[4830]: I0227 17:36:12.094738 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-clzsv\" (UniqueName: \"kubernetes.io/projected/5bb16ad0-59a4-4667-bb05-4f1e6723bcd1-kube-api-access-clzsv\") on node \"crc\" DevicePath \"\"" Feb 27 17:36:12 crc kubenswrapper[4830]: I0227 17:36:12.095126 4830 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5bb16ad0-59a4-4667-bb05-4f1e6723bcd1-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 17:36:12 crc kubenswrapper[4830]: I0227 17:36:12.095156 4830 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5bb16ad0-59a4-4667-bb05-4f1e6723bcd1-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 17:36:12 crc kubenswrapper[4830]: I0227 17:36:12.184376 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rn2cp"] Feb 27 17:36:12 crc kubenswrapper[4830]: I0227 
17:36:12.195520 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-rn2cp"] Feb 27 17:36:12 crc kubenswrapper[4830]: I0227 17:36:12.783684 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5bb16ad0-59a4-4667-bb05-4f1e6723bcd1" path="/var/lib/kubelet/pods/5bb16ad0-59a4-4667-bb05-4f1e6723bcd1/volumes" Feb 27 17:36:17 crc kubenswrapper[4830]: E0227 17:36:17.391026 4830 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.129.56.36:59074->38.129.56.36:42557: write tcp 38.129.56.36:59074->38.129.56.36:42557: write: broken pipe Feb 27 17:36:19 crc kubenswrapper[4830]: I0227 17:36:19.632353 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 27 17:36:19 crc kubenswrapper[4830]: E0227 17:36:19.633318 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5bb16ad0-59a4-4667-bb05-4f1e6723bcd1" containerName="registry-server" Feb 27 17:36:19 crc kubenswrapper[4830]: I0227 17:36:19.633342 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bb16ad0-59a4-4667-bb05-4f1e6723bcd1" containerName="registry-server" Feb 27 17:36:19 crc kubenswrapper[4830]: E0227 17:36:19.633364 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0ad818e-4327-4796-958d-87f0c600e5d0" containerName="oc" Feb 27 17:36:19 crc kubenswrapper[4830]: I0227 17:36:19.633376 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0ad818e-4327-4796-958d-87f0c600e5d0" containerName="oc" Feb 27 17:36:19 crc kubenswrapper[4830]: E0227 17:36:19.633406 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="317691ab-3073-41a4-9415-c481194fe41c" containerName="extract-utilities" Feb 27 17:36:19 crc kubenswrapper[4830]: I0227 17:36:19.633461 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="317691ab-3073-41a4-9415-c481194fe41c" containerName="extract-utilities" Feb 27 17:36:19 crc kubenswrapper[4830]: E0227 
17:36:19.633495 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="317691ab-3073-41a4-9415-c481194fe41c" containerName="extract-content" Feb 27 17:36:19 crc kubenswrapper[4830]: I0227 17:36:19.633507 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="317691ab-3073-41a4-9415-c481194fe41c" containerName="extract-content" Feb 27 17:36:19 crc kubenswrapper[4830]: E0227 17:36:19.633527 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="317691ab-3073-41a4-9415-c481194fe41c" containerName="registry-server" Feb 27 17:36:19 crc kubenswrapper[4830]: I0227 17:36:19.633538 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="317691ab-3073-41a4-9415-c481194fe41c" containerName="registry-server" Feb 27 17:36:19 crc kubenswrapper[4830]: E0227 17:36:19.633553 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5bb16ad0-59a4-4667-bb05-4f1e6723bcd1" containerName="extract-utilities" Feb 27 17:36:19 crc kubenswrapper[4830]: I0227 17:36:19.633565 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bb16ad0-59a4-4667-bb05-4f1e6723bcd1" containerName="extract-utilities" Feb 27 17:36:19 crc kubenswrapper[4830]: E0227 17:36:19.633583 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5bb16ad0-59a4-4667-bb05-4f1e6723bcd1" containerName="extract-content" Feb 27 17:36:19 crc kubenswrapper[4830]: I0227 17:36:19.633594 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bb16ad0-59a4-4667-bb05-4f1e6723bcd1" containerName="extract-content" Feb 27 17:36:19 crc kubenswrapper[4830]: I0227 17:36:19.633850 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="317691ab-3073-41a4-9415-c481194fe41c" containerName="registry-server" Feb 27 17:36:19 crc kubenswrapper[4830]: I0227 17:36:19.633869 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0ad818e-4327-4796-958d-87f0c600e5d0" containerName="oc" Feb 27 17:36:19 crc kubenswrapper[4830]: I0227 
17:36:19.633902 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="5bb16ad0-59a4-4667-bb05-4f1e6723bcd1" containerName="registry-server" Feb 27 17:36:19 crc kubenswrapper[4830]: I0227 17:36:19.635300 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 27 17:36:19 crc kubenswrapper[4830]: I0227 17:36:19.638590 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-wjfn9" Feb 27 17:36:19 crc kubenswrapper[4830]: I0227 17:36:19.638890 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Feb 27 17:36:19 crc kubenswrapper[4830]: I0227 17:36:19.639130 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Feb 27 17:36:19 crc kubenswrapper[4830]: I0227 17:36:19.665449 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-2"] Feb 27 17:36:19 crc kubenswrapper[4830]: I0227 17:36:19.668335 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-2" Feb 27 17:36:19 crc kubenswrapper[4830]: I0227 17:36:19.692856 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 27 17:36:19 crc kubenswrapper[4830]: I0227 17:36:19.712377 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-1"] Feb 27 17:36:19 crc kubenswrapper[4830]: I0227 17:36:19.714483 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-1" Feb 27 17:36:19 crc kubenswrapper[4830]: I0227 17:36:19.756162 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-1"] Feb 27 17:36:19 crc kubenswrapper[4830]: I0227 17:36:19.757760 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6r7mw\" (UniqueName: \"kubernetes.io/projected/64bfb115-0d42-406c-8cf7-eee1da063fdf-kube-api-access-6r7mw\") pod \"ovsdbserver-nb-0\" (UID: \"64bfb115-0d42-406c-8cf7-eee1da063fdf\") " pod="openstack/ovsdbserver-nb-0" Feb 27 17:36:19 crc kubenswrapper[4830]: I0227 17:36:19.758054 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/64bfb115-0d42-406c-8cf7-eee1da063fdf-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"64bfb115-0d42-406c-8cf7-eee1da063fdf\") " pod="openstack/ovsdbserver-nb-0" Feb 27 17:36:19 crc kubenswrapper[4830]: I0227 17:36:19.758103 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-ab46198f-858f-4a3d-82f3-0881dc733012\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ab46198f-858f-4a3d-82f3-0881dc733012\") pod \"ovsdbserver-nb-0\" (UID: \"64bfb115-0d42-406c-8cf7-eee1da063fdf\") " pod="openstack/ovsdbserver-nb-0" Feb 27 17:36:19 crc kubenswrapper[4830]: I0227 17:36:19.758171 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64bfb115-0d42-406c-8cf7-eee1da063fdf-config\") pod \"ovsdbserver-nb-0\" (UID: \"64bfb115-0d42-406c-8cf7-eee1da063fdf\") " pod="openstack/ovsdbserver-nb-0" Feb 27 17:36:19 crc kubenswrapper[4830]: I0227 17:36:19.758314 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: 
\"kubernetes.io/empty-dir/64bfb115-0d42-406c-8cf7-eee1da063fdf-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"64bfb115-0d42-406c-8cf7-eee1da063fdf\") " pod="openstack/ovsdbserver-nb-0" Feb 27 17:36:19 crc kubenswrapper[4830]: I0227 17:36:19.758372 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64bfb115-0d42-406c-8cf7-eee1da063fdf-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"64bfb115-0d42-406c-8cf7-eee1da063fdf\") " pod="openstack/ovsdbserver-nb-0" Feb 27 17:36:19 crc kubenswrapper[4830]: I0227 17:36:19.768291 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-2"] Feb 27 17:36:19 crc kubenswrapper[4830]: I0227 17:36:19.840648 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 27 17:36:19 crc kubenswrapper[4830]: I0227 17:36:19.842579 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 27 17:36:19 crc kubenswrapper[4830]: I0227 17:36:19.854676 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Feb 27 17:36:19 crc kubenswrapper[4830]: I0227 17:36:19.855285 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-7xrj2" Feb 27 17:36:19 crc kubenswrapper[4830]: I0227 17:36:19.859094 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 27 17:36:19 crc kubenswrapper[4830]: I0227 17:36:19.860574 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Feb 27 17:36:19 crc kubenswrapper[4830]: I0227 17:36:19.864669 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3b5aca21-88b5-41b5-a8fa-58df03c2dc7b-scripts\") pod 
\"ovsdbserver-nb-2\" (UID: \"3b5aca21-88b5-41b5-a8fa-58df03c2dc7b\") " pod="openstack/ovsdbserver-nb-2" Feb 27 17:36:19 crc kubenswrapper[4830]: I0227 17:36:19.864774 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6r7mw\" (UniqueName: \"kubernetes.io/projected/64bfb115-0d42-406c-8cf7-eee1da063fdf-kube-api-access-6r7mw\") pod \"ovsdbserver-nb-0\" (UID: \"64bfb115-0d42-406c-8cf7-eee1da063fdf\") " pod="openstack/ovsdbserver-nb-0" Feb 27 17:36:19 crc kubenswrapper[4830]: I0227 17:36:19.864831 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3aaa1ca6-2199-4a24-8ac2-af90f0f4b4e0-ovsdb-rundir\") pod \"ovsdbserver-nb-1\" (UID: \"3aaa1ca6-2199-4a24-8ac2-af90f0f4b4e0\") " pod="openstack/ovsdbserver-nb-1" Feb 27 17:36:19 crc kubenswrapper[4830]: I0227 17:36:19.864876 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tckrk\" (UniqueName: \"kubernetes.io/projected/3b5aca21-88b5-41b5-a8fa-58df03c2dc7b-kube-api-access-tckrk\") pod \"ovsdbserver-nb-2\" (UID: \"3b5aca21-88b5-41b5-a8fa-58df03c2dc7b\") " pod="openstack/ovsdbserver-nb-2" Feb 27 17:36:19 crc kubenswrapper[4830]: I0227 17:36:19.864925 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-2c2db20a-6e1a-41ff-8bd3-eda821800c11\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2c2db20a-6e1a-41ff-8bd3-eda821800c11\") pod \"ovsdbserver-nb-2\" (UID: \"3b5aca21-88b5-41b5-a8fa-58df03c2dc7b\") " pod="openstack/ovsdbserver-nb-2" Feb 27 17:36:19 crc kubenswrapper[4830]: I0227 17:36:19.864987 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b5aca21-88b5-41b5-a8fa-58df03c2dc7b-combined-ca-bundle\") pod \"ovsdbserver-nb-2\" (UID: 
\"3b5aca21-88b5-41b5-a8fa-58df03c2dc7b\") " pod="openstack/ovsdbserver-nb-2" Feb 27 17:36:19 crc kubenswrapper[4830]: I0227 17:36:19.865030 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/64bfb115-0d42-406c-8cf7-eee1da063fdf-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"64bfb115-0d42-406c-8cf7-eee1da063fdf\") " pod="openstack/ovsdbserver-nb-0" Feb 27 17:36:19 crc kubenswrapper[4830]: I0227 17:36:19.865102 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-ab46198f-858f-4a3d-82f3-0881dc733012\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ab46198f-858f-4a3d-82f3-0881dc733012\") pod \"ovsdbserver-nb-0\" (UID: \"64bfb115-0d42-406c-8cf7-eee1da063fdf\") " pod="openstack/ovsdbserver-nb-0" Feb 27 17:36:19 crc kubenswrapper[4830]: I0227 17:36:19.865150 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-ca23957b-90a9-4221-95c0-659604238e45\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ca23957b-90a9-4221-95c0-659604238e45\") pod \"ovsdbserver-nb-1\" (UID: \"3aaa1ca6-2199-4a24-8ac2-af90f0f4b4e0\") " pod="openstack/ovsdbserver-nb-1" Feb 27 17:36:19 crc kubenswrapper[4830]: I0227 17:36:19.865219 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64bfb115-0d42-406c-8cf7-eee1da063fdf-config\") pod \"ovsdbserver-nb-0\" (UID: \"64bfb115-0d42-406c-8cf7-eee1da063fdf\") " pod="openstack/ovsdbserver-nb-0" Feb 27 17:36:19 crc kubenswrapper[4830]: I0227 17:36:19.865294 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3b5aca21-88b5-41b5-a8fa-58df03c2dc7b-config\") pod \"ovsdbserver-nb-2\" (UID: \"3b5aca21-88b5-41b5-a8fa-58df03c2dc7b\") " pod="openstack/ovsdbserver-nb-2" Feb 27 17:36:19 crc 
kubenswrapper[4830]: I0227 17:36:19.865335 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/64bfb115-0d42-406c-8cf7-eee1da063fdf-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"64bfb115-0d42-406c-8cf7-eee1da063fdf\") " pod="openstack/ovsdbserver-nb-0" Feb 27 17:36:19 crc kubenswrapper[4830]: I0227 17:36:19.865410 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3aaa1ca6-2199-4a24-8ac2-af90f0f4b4e0-scripts\") pod \"ovsdbserver-nb-1\" (UID: \"3aaa1ca6-2199-4a24-8ac2-af90f0f4b4e0\") " pod="openstack/ovsdbserver-nb-1" Feb 27 17:36:19 crc kubenswrapper[4830]: I0227 17:36:19.865460 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64bfb115-0d42-406c-8cf7-eee1da063fdf-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"64bfb115-0d42-406c-8cf7-eee1da063fdf\") " pod="openstack/ovsdbserver-nb-0" Feb 27 17:36:19 crc kubenswrapper[4830]: I0227 17:36:19.865485 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3b5aca21-88b5-41b5-a8fa-58df03c2dc7b-ovsdb-rundir\") pod \"ovsdbserver-nb-2\" (UID: \"3b5aca21-88b5-41b5-a8fa-58df03c2dc7b\") " pod="openstack/ovsdbserver-nb-2" Feb 27 17:36:19 crc kubenswrapper[4830]: I0227 17:36:19.865531 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3aaa1ca6-2199-4a24-8ac2-af90f0f4b4e0-combined-ca-bundle\") pod \"ovsdbserver-nb-1\" (UID: \"3aaa1ca6-2199-4a24-8ac2-af90f0f4b4e0\") " pod="openstack/ovsdbserver-nb-1" Feb 27 17:36:19 crc kubenswrapper[4830]: I0227 17:36:19.865576 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3aaa1ca6-2199-4a24-8ac2-af90f0f4b4e0-config\") pod \"ovsdbserver-nb-1\" (UID: \"3aaa1ca6-2199-4a24-8ac2-af90f0f4b4e0\") " pod="openstack/ovsdbserver-nb-1" Feb 27 17:36:19 crc kubenswrapper[4830]: I0227 17:36:19.865609 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qpvg\" (UniqueName: \"kubernetes.io/projected/3aaa1ca6-2199-4a24-8ac2-af90f0f4b4e0-kube-api-access-7qpvg\") pod \"ovsdbserver-nb-1\" (UID: \"3aaa1ca6-2199-4a24-8ac2-af90f0f4b4e0\") " pod="openstack/ovsdbserver-nb-1" Feb 27 17:36:19 crc kubenswrapper[4830]: I0227 17:36:19.869853 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/64bfb115-0d42-406c-8cf7-eee1da063fdf-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"64bfb115-0d42-406c-8cf7-eee1da063fdf\") " pod="openstack/ovsdbserver-nb-0" Feb 27 17:36:19 crc kubenswrapper[4830]: I0227 17:36:19.871532 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/64bfb115-0d42-406c-8cf7-eee1da063fdf-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"64bfb115-0d42-406c-8cf7-eee1da063fdf\") " pod="openstack/ovsdbserver-nb-0" Feb 27 17:36:19 crc kubenswrapper[4830]: I0227 17:36:19.872636 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64bfb115-0d42-406c-8cf7-eee1da063fdf-config\") pod \"ovsdbserver-nb-0\" (UID: \"64bfb115-0d42-406c-8cf7-eee1da063fdf\") " pod="openstack/ovsdbserver-nb-0" Feb 27 17:36:19 crc kubenswrapper[4830]: I0227 17:36:19.874237 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-2"] Feb 27 17:36:19 crc kubenswrapper[4830]: I0227 17:36:19.881651 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/64bfb115-0d42-406c-8cf7-eee1da063fdf-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"64bfb115-0d42-406c-8cf7-eee1da063fdf\") " pod="openstack/ovsdbserver-nb-0" Feb 27 17:36:19 crc kubenswrapper[4830]: I0227 17:36:19.882936 4830 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 27 17:36:19 crc kubenswrapper[4830]: I0227 17:36:19.883003 4830 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-ab46198f-858f-4a3d-82f3-0881dc733012\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ab46198f-858f-4a3d-82f3-0881dc733012\") pod \"ovsdbserver-nb-0\" (UID: \"64bfb115-0d42-406c-8cf7-eee1da063fdf\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/a3718a2be3b9d04b1136cd02049e7f7259d641c0075a9507fee5a14a427dbbc1/globalmount\"" pod="openstack/ovsdbserver-nb-0" Feb 27 17:36:19 crc kubenswrapper[4830]: I0227 17:36:19.887591 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-2"] Feb 27 17:36:19 crc kubenswrapper[4830]: I0227 17:36:19.887735 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-2" Feb 27 17:36:19 crc kubenswrapper[4830]: I0227 17:36:19.889199 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-1"] Feb 27 17:36:19 crc kubenswrapper[4830]: I0227 17:36:19.890829 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-1" Feb 27 17:36:19 crc kubenswrapper[4830]: I0227 17:36:19.893920 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6r7mw\" (UniqueName: \"kubernetes.io/projected/64bfb115-0d42-406c-8cf7-eee1da063fdf-kube-api-access-6r7mw\") pod \"ovsdbserver-nb-0\" (UID: \"64bfb115-0d42-406c-8cf7-eee1da063fdf\") " pod="openstack/ovsdbserver-nb-0" Feb 27 17:36:19 crc kubenswrapper[4830]: I0227 17:36:19.895660 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-1"] Feb 27 17:36:19 crc kubenswrapper[4830]: I0227 17:36:19.936420 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-ab46198f-858f-4a3d-82f3-0881dc733012\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ab46198f-858f-4a3d-82f3-0881dc733012\") pod \"ovsdbserver-nb-0\" (UID: \"64bfb115-0d42-406c-8cf7-eee1da063fdf\") " pod="openstack/ovsdbserver-nb-0" Feb 27 17:36:19 crc kubenswrapper[4830]: I0227 17:36:19.966679 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b9629cfc-62f4-4e7b-abc6-c5310b859385-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"b9629cfc-62f4-4e7b-abc6-c5310b859385\") " pod="openstack/ovsdbserver-sb-0" Feb 27 17:36:19 crc kubenswrapper[4830]: I0227 17:36:19.966725 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-ca23957b-90a9-4221-95c0-659604238e45\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ca23957b-90a9-4221-95c0-659604238e45\") pod \"ovsdbserver-nb-1\" (UID: \"3aaa1ca6-2199-4a24-8ac2-af90f0f4b4e0\") " pod="openstack/ovsdbserver-nb-1" Feb 27 17:36:19 crc kubenswrapper[4830]: I0227 17:36:19.966751 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/22c480d9-5633-47b2-935e-c8db62ccb85f-config\") pod \"ovsdbserver-sb-2\" (UID: \"22c480d9-5633-47b2-935e-c8db62ccb85f\") " pod="openstack/ovsdbserver-sb-2" Feb 27 17:36:19 crc kubenswrapper[4830]: I0227 17:36:19.966772 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqrbg\" (UniqueName: \"kubernetes.io/projected/22c480d9-5633-47b2-935e-c8db62ccb85f-kube-api-access-sqrbg\") pod \"ovsdbserver-sb-2\" (UID: \"22c480d9-5633-47b2-935e-c8db62ccb85f\") " pod="openstack/ovsdbserver-sb-2" Feb 27 17:36:19 crc kubenswrapper[4830]: I0227 17:36:19.966794 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-0eef23c9-65e6-4e7a-8981-1222879fc38d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0eef23c9-65e6-4e7a-8981-1222879fc38d\") pod \"ovsdbserver-sb-1\" (UID: \"63a0cf52-f7cb-41ac-80a8-e83fcaff23d2\") " pod="openstack/ovsdbserver-sb-1" Feb 27 17:36:19 crc kubenswrapper[4830]: I0227 17:36:19.966814 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-e0a2ddc3-afa1-4d62-bc66-656cf1840d23\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e0a2ddc3-afa1-4d62-bc66-656cf1840d23\") pod \"ovsdbserver-sb-0\" (UID: \"b9629cfc-62f4-4e7b-abc6-c5310b859385\") " pod="openstack/ovsdbserver-sb-0" Feb 27 17:36:19 crc kubenswrapper[4830]: I0227 17:36:19.966835 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b9629cfc-62f4-4e7b-abc6-c5310b859385-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"b9629cfc-62f4-4e7b-abc6-c5310b859385\") " pod="openstack/ovsdbserver-sb-0" Feb 27 17:36:19 crc kubenswrapper[4830]: I0227 17:36:19.966857 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/b9629cfc-62f4-4e7b-abc6-c5310b859385-config\") pod \"ovsdbserver-sb-0\" (UID: \"b9629cfc-62f4-4e7b-abc6-c5310b859385\") " pod="openstack/ovsdbserver-sb-0" Feb 27 17:36:19 crc kubenswrapper[4830]: I0227 17:36:19.966877 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22c480d9-5633-47b2-935e-c8db62ccb85f-combined-ca-bundle\") pod \"ovsdbserver-sb-2\" (UID: \"22c480d9-5633-47b2-935e-c8db62ccb85f\") " pod="openstack/ovsdbserver-sb-2" Feb 27 17:36:19 crc kubenswrapper[4830]: I0227 17:36:19.966903 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9629cfc-62f4-4e7b-abc6-c5310b859385-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"b9629cfc-62f4-4e7b-abc6-c5310b859385\") " pod="openstack/ovsdbserver-sb-0" Feb 27 17:36:19 crc kubenswrapper[4830]: I0227 17:36:19.966924 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3b5aca21-88b5-41b5-a8fa-58df03c2dc7b-config\") pod \"ovsdbserver-nb-2\" (UID: \"3b5aca21-88b5-41b5-a8fa-58df03c2dc7b\") " pod="openstack/ovsdbserver-nb-2" Feb 27 17:36:19 crc kubenswrapper[4830]: I0227 17:36:19.966967 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63a0cf52-f7cb-41ac-80a8-e83fcaff23d2-combined-ca-bundle\") pod \"ovsdbserver-sb-1\" (UID: \"63a0cf52-f7cb-41ac-80a8-e83fcaff23d2\") " pod="openstack/ovsdbserver-sb-1" Feb 27 17:36:19 crc kubenswrapper[4830]: I0227 17:36:19.966985 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3aaa1ca6-2199-4a24-8ac2-af90f0f4b4e0-scripts\") pod \"ovsdbserver-nb-1\" (UID: 
\"3aaa1ca6-2199-4a24-8ac2-af90f0f4b4e0\") " pod="openstack/ovsdbserver-nb-1" Feb 27 17:36:19 crc kubenswrapper[4830]: I0227 17:36:19.967005 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3b5aca21-88b5-41b5-a8fa-58df03c2dc7b-ovsdb-rundir\") pod \"ovsdbserver-nb-2\" (UID: \"3b5aca21-88b5-41b5-a8fa-58df03c2dc7b\") " pod="openstack/ovsdbserver-nb-2" Feb 27 17:36:19 crc kubenswrapper[4830]: I0227 17:36:19.967022 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/63a0cf52-f7cb-41ac-80a8-e83fcaff23d2-scripts\") pod \"ovsdbserver-sb-1\" (UID: \"63a0cf52-f7cb-41ac-80a8-e83fcaff23d2\") " pod="openstack/ovsdbserver-sb-1" Feb 27 17:36:19 crc kubenswrapper[4830]: I0227 17:36:19.967050 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3aaa1ca6-2199-4a24-8ac2-af90f0f4b4e0-combined-ca-bundle\") pod \"ovsdbserver-nb-1\" (UID: \"3aaa1ca6-2199-4a24-8ac2-af90f0f4b4e0\") " pod="openstack/ovsdbserver-nb-1" Feb 27 17:36:19 crc kubenswrapper[4830]: I0227 17:36:19.967068 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3aaa1ca6-2199-4a24-8ac2-af90f0f4b4e0-config\") pod \"ovsdbserver-nb-1\" (UID: \"3aaa1ca6-2199-4a24-8ac2-af90f0f4b4e0\") " pod="openstack/ovsdbserver-nb-1" Feb 27 17:36:19 crc kubenswrapper[4830]: I0227 17:36:19.967086 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7qpvg\" (UniqueName: \"kubernetes.io/projected/3aaa1ca6-2199-4a24-8ac2-af90f0f4b4e0-kube-api-access-7qpvg\") pod \"ovsdbserver-nb-1\" (UID: \"3aaa1ca6-2199-4a24-8ac2-af90f0f4b4e0\") " pod="openstack/ovsdbserver-nb-1" Feb 27 17:36:19 crc kubenswrapper[4830]: I0227 17:36:19.967105 4830 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/22c480d9-5633-47b2-935e-c8db62ccb85f-ovsdb-rundir\") pod \"ovsdbserver-sb-2\" (UID: \"22c480d9-5633-47b2-935e-c8db62ccb85f\") " pod="openstack/ovsdbserver-sb-2" Feb 27 17:36:19 crc kubenswrapper[4830]: I0227 17:36:19.967121 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3b5aca21-88b5-41b5-a8fa-58df03c2dc7b-scripts\") pod \"ovsdbserver-nb-2\" (UID: \"3b5aca21-88b5-41b5-a8fa-58df03c2dc7b\") " pod="openstack/ovsdbserver-nb-2" Feb 27 17:36:19 crc kubenswrapper[4830]: I0227 17:36:19.967143 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/63a0cf52-f7cb-41ac-80a8-e83fcaff23d2-ovsdb-rundir\") pod \"ovsdbserver-sb-1\" (UID: \"63a0cf52-f7cb-41ac-80a8-e83fcaff23d2\") " pod="openstack/ovsdbserver-sb-1" Feb 27 17:36:19 crc kubenswrapper[4830]: I0227 17:36:19.967168 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3aaa1ca6-2199-4a24-8ac2-af90f0f4b4e0-ovsdb-rundir\") pod \"ovsdbserver-nb-1\" (UID: \"3aaa1ca6-2199-4a24-8ac2-af90f0f4b4e0\") " pod="openstack/ovsdbserver-nb-1" Feb 27 17:36:19 crc kubenswrapper[4830]: I0227 17:36:19.967190 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tckrk\" (UniqueName: \"kubernetes.io/projected/3b5aca21-88b5-41b5-a8fa-58df03c2dc7b-kube-api-access-tckrk\") pod \"ovsdbserver-nb-2\" (UID: \"3b5aca21-88b5-41b5-a8fa-58df03c2dc7b\") " pod="openstack/ovsdbserver-nb-2" Feb 27 17:36:19 crc kubenswrapper[4830]: I0227 17:36:19.967215 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-2c2db20a-6e1a-41ff-8bd3-eda821800c11\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2c2db20a-6e1a-41ff-8bd3-eda821800c11\") pod \"ovsdbserver-nb-2\" (UID: \"3b5aca21-88b5-41b5-a8fa-58df03c2dc7b\") " pod="openstack/ovsdbserver-nb-2" Feb 27 17:36:19 crc kubenswrapper[4830]: I0227 17:36:19.967243 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b5aca21-88b5-41b5-a8fa-58df03c2dc7b-combined-ca-bundle\") pod \"ovsdbserver-nb-2\" (UID: \"3b5aca21-88b5-41b5-a8fa-58df03c2dc7b\") " pod="openstack/ovsdbserver-nb-2" Feb 27 17:36:19 crc kubenswrapper[4830]: I0227 17:36:19.967267 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjbh2\" (UniqueName: \"kubernetes.io/projected/b9629cfc-62f4-4e7b-abc6-c5310b859385-kube-api-access-vjbh2\") pod \"ovsdbserver-sb-0\" (UID: \"b9629cfc-62f4-4e7b-abc6-c5310b859385\") " pod="openstack/ovsdbserver-sb-0" Feb 27 17:36:19 crc kubenswrapper[4830]: I0227 17:36:19.967285 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63a0cf52-f7cb-41ac-80a8-e83fcaff23d2-config\") pod \"ovsdbserver-sb-1\" (UID: \"63a0cf52-f7cb-41ac-80a8-e83fcaff23d2\") " pod="openstack/ovsdbserver-sb-1" Feb 27 17:36:19 crc kubenswrapper[4830]: I0227 17:36:19.967308 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/22c480d9-5633-47b2-935e-c8db62ccb85f-scripts\") pod \"ovsdbserver-sb-2\" (UID: \"22c480d9-5633-47b2-935e-c8db62ccb85f\") " pod="openstack/ovsdbserver-sb-2" Feb 27 17:36:19 crc kubenswrapper[4830]: I0227 17:36:19.967328 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-2cddd803-2d8d-4f1c-950a-a62b859f9d53\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2cddd803-2d8d-4f1c-950a-a62b859f9d53\") pod \"ovsdbserver-sb-2\" (UID: \"22c480d9-5633-47b2-935e-c8db62ccb85f\") " pod="openstack/ovsdbserver-sb-2" Feb 27 17:36:19 crc kubenswrapper[4830]: I0227 17:36:19.967344 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-msd4l\" (UniqueName: \"kubernetes.io/projected/63a0cf52-f7cb-41ac-80a8-e83fcaff23d2-kube-api-access-msd4l\") pod \"ovsdbserver-sb-1\" (UID: \"63a0cf52-f7cb-41ac-80a8-e83fcaff23d2\") " pod="openstack/ovsdbserver-sb-1" Feb 27 17:36:19 crc kubenswrapper[4830]: I0227 17:36:19.968760 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3aaa1ca6-2199-4a24-8ac2-af90f0f4b4e0-config\") pod \"ovsdbserver-nb-1\" (UID: \"3aaa1ca6-2199-4a24-8ac2-af90f0f4b4e0\") " pod="openstack/ovsdbserver-nb-1" Feb 27 17:36:19 crc kubenswrapper[4830]: I0227 17:36:19.969786 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 27 17:36:19 crc kubenswrapper[4830]: I0227 17:36:19.970866 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3b5aca21-88b5-41b5-a8fa-58df03c2dc7b-scripts\") pod \"ovsdbserver-nb-2\" (UID: \"3b5aca21-88b5-41b5-a8fa-58df03c2dc7b\") " pod="openstack/ovsdbserver-nb-2" Feb 27 17:36:19 crc kubenswrapper[4830]: I0227 17:36:19.972602 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3b5aca21-88b5-41b5-a8fa-58df03c2dc7b-ovsdb-rundir\") pod \"ovsdbserver-nb-2\" (UID: \"3b5aca21-88b5-41b5-a8fa-58df03c2dc7b\") " pod="openstack/ovsdbserver-nb-2" Feb 27 17:36:19 crc kubenswrapper[4830]: I0227 17:36:19.973548 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3aaa1ca6-2199-4a24-8ac2-af90f0f4b4e0-combined-ca-bundle\") pod \"ovsdbserver-nb-1\" (UID: \"3aaa1ca6-2199-4a24-8ac2-af90f0f4b4e0\") " pod="openstack/ovsdbserver-nb-1" Feb 27 17:36:19 crc kubenswrapper[4830]: I0227 17:36:19.973623 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3aaa1ca6-2199-4a24-8ac2-af90f0f4b4e0-scripts\") pod \"ovsdbserver-nb-1\" (UID: \"3aaa1ca6-2199-4a24-8ac2-af90f0f4b4e0\") " pod="openstack/ovsdbserver-nb-1" Feb 27 17:36:19 crc kubenswrapper[4830]: I0227 17:36:19.974929 4830 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 27 17:36:19 crc kubenswrapper[4830]: I0227 17:36:19.975001 4830 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-ca23957b-90a9-4221-95c0-659604238e45\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ca23957b-90a9-4221-95c0-659604238e45\") pod \"ovsdbserver-nb-1\" (UID: \"3aaa1ca6-2199-4a24-8ac2-af90f0f4b4e0\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1a1ec10b88596aaaee2f9a1250831a896734902d7dc5fca022c450c019035638/globalmount\"" pod="openstack/ovsdbserver-nb-1" Feb 27 17:36:19 crc kubenswrapper[4830]: I0227 17:36:19.975311 4830 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 27 17:36:19 crc kubenswrapper[4830]: I0227 17:36:19.975347 4830 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-2c2db20a-6e1a-41ff-8bd3-eda821800c11\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2c2db20a-6e1a-41ff-8bd3-eda821800c11\") pod \"ovsdbserver-nb-2\" (UID: \"3b5aca21-88b5-41b5-a8fa-58df03c2dc7b\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/fd76923d30e58bcf6fb09451a2a2984fe41089fa752a427e4d3c4f923dddfd55/globalmount\"" pod="openstack/ovsdbserver-nb-2" Feb 27 17:36:19 crc kubenswrapper[4830]: I0227 17:36:19.980163 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3aaa1ca6-2199-4a24-8ac2-af90f0f4b4e0-ovsdb-rundir\") pod \"ovsdbserver-nb-1\" (UID: \"3aaa1ca6-2199-4a24-8ac2-af90f0f4b4e0\") " pod="openstack/ovsdbserver-nb-1" Feb 27 17:36:19 crc kubenswrapper[4830]: I0227 17:36:19.984573 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3b5aca21-88b5-41b5-a8fa-58df03c2dc7b-config\") pod \"ovsdbserver-nb-2\" (UID: 
\"3b5aca21-88b5-41b5-a8fa-58df03c2dc7b\") " pod="openstack/ovsdbserver-nb-2" Feb 27 17:36:19 crc kubenswrapper[4830]: I0227 17:36:19.985227 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7qpvg\" (UniqueName: \"kubernetes.io/projected/3aaa1ca6-2199-4a24-8ac2-af90f0f4b4e0-kube-api-access-7qpvg\") pod \"ovsdbserver-nb-1\" (UID: \"3aaa1ca6-2199-4a24-8ac2-af90f0f4b4e0\") " pod="openstack/ovsdbserver-nb-1" Feb 27 17:36:19 crc kubenswrapper[4830]: I0227 17:36:19.985890 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tckrk\" (UniqueName: \"kubernetes.io/projected/3b5aca21-88b5-41b5-a8fa-58df03c2dc7b-kube-api-access-tckrk\") pod \"ovsdbserver-nb-2\" (UID: \"3b5aca21-88b5-41b5-a8fa-58df03c2dc7b\") " pod="openstack/ovsdbserver-nb-2" Feb 27 17:36:19 crc kubenswrapper[4830]: I0227 17:36:19.993679 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b5aca21-88b5-41b5-a8fa-58df03c2dc7b-combined-ca-bundle\") pod \"ovsdbserver-nb-2\" (UID: \"3b5aca21-88b5-41b5-a8fa-58df03c2dc7b\") " pod="openstack/ovsdbserver-nb-2" Feb 27 17:36:20 crc kubenswrapper[4830]: I0227 17:36:20.012363 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-ca23957b-90a9-4221-95c0-659604238e45\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ca23957b-90a9-4221-95c0-659604238e45\") pod \"ovsdbserver-nb-1\" (UID: \"3aaa1ca6-2199-4a24-8ac2-af90f0f4b4e0\") " pod="openstack/ovsdbserver-nb-1" Feb 27 17:36:20 crc kubenswrapper[4830]: I0227 17:36:20.013524 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-2c2db20a-6e1a-41ff-8bd3-eda821800c11\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2c2db20a-6e1a-41ff-8bd3-eda821800c11\") pod \"ovsdbserver-nb-2\" (UID: \"3b5aca21-88b5-41b5-a8fa-58df03c2dc7b\") " pod="openstack/ovsdbserver-nb-2" Feb 27 17:36:20 crc 
kubenswrapper[4830]: I0227 17:36:20.059309 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-1"
Feb 27 17:36:20 crc kubenswrapper[4830]: I0227 17:36:20.069374 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/22c480d9-5633-47b2-935e-c8db62ccb85f-ovsdb-rundir\") pod \"ovsdbserver-sb-2\" (UID: \"22c480d9-5633-47b2-935e-c8db62ccb85f\") " pod="openstack/ovsdbserver-sb-2"
Feb 27 17:36:20 crc kubenswrapper[4830]: I0227 17:36:20.069422 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/63a0cf52-f7cb-41ac-80a8-e83fcaff23d2-ovsdb-rundir\") pod \"ovsdbserver-sb-1\" (UID: \"63a0cf52-f7cb-41ac-80a8-e83fcaff23d2\") " pod="openstack/ovsdbserver-sb-1"
Feb 27 17:36:20 crc kubenswrapper[4830]: I0227 17:36:20.069512 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vjbh2\" (UniqueName: \"kubernetes.io/projected/b9629cfc-62f4-4e7b-abc6-c5310b859385-kube-api-access-vjbh2\") pod \"ovsdbserver-sb-0\" (UID: \"b9629cfc-62f4-4e7b-abc6-c5310b859385\") " pod="openstack/ovsdbserver-sb-0"
Feb 27 17:36:20 crc kubenswrapper[4830]: I0227 17:36:20.069553 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63a0cf52-f7cb-41ac-80a8-e83fcaff23d2-config\") pod \"ovsdbserver-sb-1\" (UID: \"63a0cf52-f7cb-41ac-80a8-e83fcaff23d2\") " pod="openstack/ovsdbserver-sb-1"
Feb 27 17:36:20 crc kubenswrapper[4830]: I0227 17:36:20.069586 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-2cddd803-2d8d-4f1c-950a-a62b859f9d53\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2cddd803-2d8d-4f1c-950a-a62b859f9d53\") pod \"ovsdbserver-sb-2\" (UID: \"22c480d9-5633-47b2-935e-c8db62ccb85f\") " pod="openstack/ovsdbserver-sb-2"
Feb 27 17:36:20 crc kubenswrapper[4830]: I0227 17:36:20.069604 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/22c480d9-5633-47b2-935e-c8db62ccb85f-scripts\") pod \"ovsdbserver-sb-2\" (UID: \"22c480d9-5633-47b2-935e-c8db62ccb85f\") " pod="openstack/ovsdbserver-sb-2"
Feb 27 17:36:20 crc kubenswrapper[4830]: I0227 17:36:20.069621 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-msd4l\" (UniqueName: \"kubernetes.io/projected/63a0cf52-f7cb-41ac-80a8-e83fcaff23d2-kube-api-access-msd4l\") pod \"ovsdbserver-sb-1\" (UID: \"63a0cf52-f7cb-41ac-80a8-e83fcaff23d2\") " pod="openstack/ovsdbserver-sb-1"
Feb 27 17:36:20 crc kubenswrapper[4830]: I0227 17:36:20.069648 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b9629cfc-62f4-4e7b-abc6-c5310b859385-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"b9629cfc-62f4-4e7b-abc6-c5310b859385\") " pod="openstack/ovsdbserver-sb-0"
Feb 27 17:36:20 crc kubenswrapper[4830]: I0227 17:36:20.069667 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c480d9-5633-47b2-935e-c8db62ccb85f-config\") pod \"ovsdbserver-sb-2\" (UID: \"22c480d9-5633-47b2-935e-c8db62ccb85f\") " pod="openstack/ovsdbserver-sb-2"
Feb 27 17:36:20 crc kubenswrapper[4830]: I0227 17:36:20.069686 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sqrbg\" (UniqueName: \"kubernetes.io/projected/22c480d9-5633-47b2-935e-c8db62ccb85f-kube-api-access-sqrbg\") pod \"ovsdbserver-sb-2\" (UID: \"22c480d9-5633-47b2-935e-c8db62ccb85f\") " pod="openstack/ovsdbserver-sb-2"
Feb 27 17:36:20 crc kubenswrapper[4830]: I0227 17:36:20.069715 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-0eef23c9-65e6-4e7a-8981-1222879fc38d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0eef23c9-65e6-4e7a-8981-1222879fc38d\") pod \"ovsdbserver-sb-1\" (UID: \"63a0cf52-f7cb-41ac-80a8-e83fcaff23d2\") " pod="openstack/ovsdbserver-sb-1"
Feb 27 17:36:20 crc kubenswrapper[4830]: I0227 17:36:20.069738 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-e0a2ddc3-afa1-4d62-bc66-656cf1840d23\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e0a2ddc3-afa1-4d62-bc66-656cf1840d23\") pod \"ovsdbserver-sb-0\" (UID: \"b9629cfc-62f4-4e7b-abc6-c5310b859385\") " pod="openstack/ovsdbserver-sb-0"
Feb 27 17:36:20 crc kubenswrapper[4830]: I0227 17:36:20.069761 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b9629cfc-62f4-4e7b-abc6-c5310b859385-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"b9629cfc-62f4-4e7b-abc6-c5310b859385\") " pod="openstack/ovsdbserver-sb-0"
Feb 27 17:36:20 crc kubenswrapper[4830]: I0227 17:36:20.069826 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9629cfc-62f4-4e7b-abc6-c5310b859385-config\") pod \"ovsdbserver-sb-0\" (UID: \"b9629cfc-62f4-4e7b-abc6-c5310b859385\") " pod="openstack/ovsdbserver-sb-0"
Feb 27 17:36:20 crc kubenswrapper[4830]: I0227 17:36:20.069849 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22c480d9-5633-47b2-935e-c8db62ccb85f-combined-ca-bundle\") pod \"ovsdbserver-sb-2\" (UID: \"22c480d9-5633-47b2-935e-c8db62ccb85f\") " pod="openstack/ovsdbserver-sb-2"
Feb 27 17:36:20 crc kubenswrapper[4830]: I0227 17:36:20.069903 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9629cfc-62f4-4e7b-abc6-c5310b859385-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"b9629cfc-62f4-4e7b-abc6-c5310b859385\") " pod="openstack/ovsdbserver-sb-0"
Feb 27 17:36:20 crc kubenswrapper[4830]: I0227 17:36:20.069934 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63a0cf52-f7cb-41ac-80a8-e83fcaff23d2-combined-ca-bundle\") pod \"ovsdbserver-sb-1\" (UID: \"63a0cf52-f7cb-41ac-80a8-e83fcaff23d2\") " pod="openstack/ovsdbserver-sb-1"
Feb 27 17:36:20 crc kubenswrapper[4830]: I0227 17:36:20.070044 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/63a0cf52-f7cb-41ac-80a8-e83fcaff23d2-ovsdb-rundir\") pod \"ovsdbserver-sb-1\" (UID: \"63a0cf52-f7cb-41ac-80a8-e83fcaff23d2\") " pod="openstack/ovsdbserver-sb-1"
Feb 27 17:36:20 crc kubenswrapper[4830]: I0227 17:36:20.070715 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63a0cf52-f7cb-41ac-80a8-e83fcaff23d2-config\") pod \"ovsdbserver-sb-1\" (UID: \"63a0cf52-f7cb-41ac-80a8-e83fcaff23d2\") " pod="openstack/ovsdbserver-sb-1"
Feb 27 17:36:20 crc kubenswrapper[4830]: I0227 17:36:20.070757 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/22c480d9-5633-47b2-935e-c8db62ccb85f-scripts\") pod \"ovsdbserver-sb-2\" (UID: \"22c480d9-5633-47b2-935e-c8db62ccb85f\") " pod="openstack/ovsdbserver-sb-2"
Feb 27 17:36:20 crc kubenswrapper[4830]: I0227 17:36:20.071179 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c480d9-5633-47b2-935e-c8db62ccb85f-config\") pod \"ovsdbserver-sb-2\" (UID: \"22c480d9-5633-47b2-935e-c8db62ccb85f\") " pod="openstack/ovsdbserver-sb-2"
Feb 27 17:36:20 crc kubenswrapper[4830]: I0227 17:36:20.071230 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b9629cfc-62f4-4e7b-abc6-c5310b859385-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"b9629cfc-62f4-4e7b-abc6-c5310b859385\") " pod="openstack/ovsdbserver-sb-0"
Feb 27 17:36:20 crc kubenswrapper[4830]: I0227 17:36:20.071574 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b9629cfc-62f4-4e7b-abc6-c5310b859385-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"b9629cfc-62f4-4e7b-abc6-c5310b859385\") " pod="openstack/ovsdbserver-sb-0"
Feb 27 17:36:20 crc kubenswrapper[4830]: I0227 17:36:20.071643 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/63a0cf52-f7cb-41ac-80a8-e83fcaff23d2-scripts\") pod \"ovsdbserver-sb-1\" (UID: \"63a0cf52-f7cb-41ac-80a8-e83fcaff23d2\") " pod="openstack/ovsdbserver-sb-1"
Feb 27 17:36:20 crc kubenswrapper[4830]: I0227 17:36:20.071804 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/22c480d9-5633-47b2-935e-c8db62ccb85f-ovsdb-rundir\") pod \"ovsdbserver-sb-2\" (UID: \"22c480d9-5633-47b2-935e-c8db62ccb85f\") " pod="openstack/ovsdbserver-sb-2"
Feb 27 17:36:20 crc kubenswrapper[4830]: I0227 17:36:20.072660 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/63a0cf52-f7cb-41ac-80a8-e83fcaff23d2-scripts\") pod \"ovsdbserver-sb-1\" (UID: \"63a0cf52-f7cb-41ac-80a8-e83fcaff23d2\") " pod="openstack/ovsdbserver-sb-1"
Feb 27 17:36:20 crc kubenswrapper[4830]: I0227 17:36:20.073847 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9629cfc-62f4-4e7b-abc6-c5310b859385-config\") pod \"ovsdbserver-sb-0\" (UID: \"b9629cfc-62f4-4e7b-abc6-c5310b859385\") " pod="openstack/ovsdbserver-sb-0"
Feb 27 17:36:20 crc kubenswrapper[4830]: I0227 17:36:20.074431 4830 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 27 17:36:20 crc kubenswrapper[4830]: I0227 17:36:20.074459 4830 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-0eef23c9-65e6-4e7a-8981-1222879fc38d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0eef23c9-65e6-4e7a-8981-1222879fc38d\") pod \"ovsdbserver-sb-1\" (UID: \"63a0cf52-f7cb-41ac-80a8-e83fcaff23d2\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/ce4a6b76831412c3407a439d20f5a4cb0a1b0fb9f7a683863eea34a208f8c33b/globalmount\"" pod="openstack/ovsdbserver-sb-1"
Feb 27 17:36:20 crc kubenswrapper[4830]: I0227 17:36:20.074531 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22c480d9-5633-47b2-935e-c8db62ccb85f-combined-ca-bundle\") pod \"ovsdbserver-sb-2\" (UID: \"22c480d9-5633-47b2-935e-c8db62ccb85f\") " pod="openstack/ovsdbserver-sb-2"
Feb 27 17:36:20 crc kubenswrapper[4830]: I0227 17:36:20.075753 4830 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 27 17:36:20 crc kubenswrapper[4830]: I0227 17:36:20.075773 4830 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-e0a2ddc3-afa1-4d62-bc66-656cf1840d23\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e0a2ddc3-afa1-4d62-bc66-656cf1840d23\") pod \"ovsdbserver-sb-0\" (UID: \"b9629cfc-62f4-4e7b-abc6-c5310b859385\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/2db79a8f942e25e2c8a5ab913d527faa1b74c86b5c5cdb0b8e407c0243dcc463/globalmount\"" pod="openstack/ovsdbserver-sb-0"
Feb 27 17:36:20 crc kubenswrapper[4830]: I0227 17:36:20.075843 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9629cfc-62f4-4e7b-abc6-c5310b859385-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"b9629cfc-62f4-4e7b-abc6-c5310b859385\") " pod="openstack/ovsdbserver-sb-0"
Feb 27 17:36:20 crc kubenswrapper[4830]: I0227 17:36:20.088340 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sqrbg\" (UniqueName: \"kubernetes.io/projected/22c480d9-5633-47b2-935e-c8db62ccb85f-kube-api-access-sqrbg\") pod \"ovsdbserver-sb-2\" (UID: \"22c480d9-5633-47b2-935e-c8db62ccb85f\") " pod="openstack/ovsdbserver-sb-2"
Feb 27 17:36:20 crc kubenswrapper[4830]: I0227 17:36:20.088919 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63a0cf52-f7cb-41ac-80a8-e83fcaff23d2-combined-ca-bundle\") pod \"ovsdbserver-sb-1\" (UID: \"63a0cf52-f7cb-41ac-80a8-e83fcaff23d2\") " pod="openstack/ovsdbserver-sb-1"
Feb 27 17:36:20 crc kubenswrapper[4830]: I0227 17:36:20.089072 4830 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 27 17:36:20 crc kubenswrapper[4830]: I0227 17:36:20.089103 4830 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-2cddd803-2d8d-4f1c-950a-a62b859f9d53\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2cddd803-2d8d-4f1c-950a-a62b859f9d53\") pod \"ovsdbserver-sb-2\" (UID: \"22c480d9-5633-47b2-935e-c8db62ccb85f\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/468dad1da558ac5725222407f2dd9d8b76b24991a37b61f5606b18f6737d55f2/globalmount\"" pod="openstack/ovsdbserver-sb-2"
Feb 27 17:36:20 crc kubenswrapper[4830]: I0227 17:36:20.089400 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vjbh2\" (UniqueName: \"kubernetes.io/projected/b9629cfc-62f4-4e7b-abc6-c5310b859385-kube-api-access-vjbh2\") pod \"ovsdbserver-sb-0\" (UID: \"b9629cfc-62f4-4e7b-abc6-c5310b859385\") " pod="openstack/ovsdbserver-sb-0"
Feb 27 17:36:20 crc kubenswrapper[4830]: I0227 17:36:20.097260 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-msd4l\" (UniqueName: \"kubernetes.io/projected/63a0cf52-f7cb-41ac-80a8-e83fcaff23d2-kube-api-access-msd4l\") pod \"ovsdbserver-sb-1\" (UID: \"63a0cf52-f7cb-41ac-80a8-e83fcaff23d2\") " pod="openstack/ovsdbserver-sb-1"
Feb 27 17:36:20 crc kubenswrapper[4830]: I0227 17:36:20.113432 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-0eef23c9-65e6-4e7a-8981-1222879fc38d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0eef23c9-65e6-4e7a-8981-1222879fc38d\") pod \"ovsdbserver-sb-1\" (UID: \"63a0cf52-f7cb-41ac-80a8-e83fcaff23d2\") " pod="openstack/ovsdbserver-sb-1"
Feb 27 17:36:20 crc kubenswrapper[4830]: I0227 17:36:20.120085 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-2cddd803-2d8d-4f1c-950a-a62b859f9d53\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2cddd803-2d8d-4f1c-950a-a62b859f9d53\") pod \"ovsdbserver-sb-2\" (UID: \"22c480d9-5633-47b2-935e-c8db62ccb85f\") " pod="openstack/ovsdbserver-sb-2"
Feb 27 17:36:20 crc kubenswrapper[4830]: I0227 17:36:20.124343 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-e0a2ddc3-afa1-4d62-bc66-656cf1840d23\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e0a2ddc3-afa1-4d62-bc66-656cf1840d23\") pod \"ovsdbserver-sb-0\" (UID: \"b9629cfc-62f4-4e7b-abc6-c5310b859385\") " pod="openstack/ovsdbserver-sb-0"
Feb 27 17:36:20 crc kubenswrapper[4830]: I0227 17:36:20.260023 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0"
Feb 27 17:36:20 crc kubenswrapper[4830]: I0227 17:36:20.273262 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-2"
Feb 27 17:36:20 crc kubenswrapper[4830]: I0227 17:36:20.308903 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-2"
Feb 27 17:36:20 crc kubenswrapper[4830]: I0227 17:36:20.313019 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"]
Feb 27 17:36:20 crc kubenswrapper[4830]: I0227 17:36:20.364971 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-1"
Feb 27 17:36:20 crc kubenswrapper[4830]: I0227 17:36:20.605641 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-1"]
Feb 27 17:36:20 crc kubenswrapper[4830]: I0227 17:36:20.830518 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-1"]
Feb 27 17:36:20 crc kubenswrapper[4830]: I0227 17:36:20.949602 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"]
Feb 27 17:36:20 crc kubenswrapper[4830]: I0227 17:36:20.951662 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-1" event={"ID":"63a0cf52-f7cb-41ac-80a8-e83fcaff23d2","Type":"ContainerStarted","Data":"40e14711517d458ace8cb75dba27c0689b78406aef4ad444b05c5a2b0008a81e"}
Feb 27 17:36:20 crc kubenswrapper[4830]: W0227 17:36:20.954727 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb9629cfc_62f4_4e7b_abc6_c5310b859385.slice/crio-b686e6421edf40b674e3d5fd605a996fc9c90e087f9eeb5a4c220b2708798f6b WatchSource:0}: Error finding container b686e6421edf40b674e3d5fd605a996fc9c90e087f9eeb5a4c220b2708798f6b: Status 404 returned error can't find the container with id b686e6421edf40b674e3d5fd605a996fc9c90e087f9eeb5a4c220b2708798f6b
Feb 27 17:36:20 crc kubenswrapper[4830]: I0227 17:36:20.955076 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-1" event={"ID":"3aaa1ca6-2199-4a24-8ac2-af90f0f4b4e0","Type":"ContainerStarted","Data":"f872f79b0f591ee57eb72350427b3f590baaa966825c51d588cca156c55bd4cf"}
Feb 27 17:36:20 crc kubenswrapper[4830]: I0227 17:36:20.955138 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-1" event={"ID":"3aaa1ca6-2199-4a24-8ac2-af90f0f4b4e0","Type":"ContainerStarted","Data":"0d209a8f04f288b79461d6206b411fdfc5e2349d6b09510d58d52da05cb35fab"}
Feb 27 17:36:20 crc kubenswrapper[4830]: I0227 17:36:20.974324 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"64bfb115-0d42-406c-8cf7-eee1da063fdf","Type":"ContainerStarted","Data":"4b79302c6dd05618c99041028e8270e2cfdc1349187193f96837edf98e9cb39d"}
Feb 27 17:36:20 crc kubenswrapper[4830]: I0227 17:36:20.975039 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"64bfb115-0d42-406c-8cf7-eee1da063fdf","Type":"ContainerStarted","Data":"b77e272a166e11d90e69ff929d3588c4c6f893dd7245652b1440b542105bf466"}
Feb 27 17:36:20 crc kubenswrapper[4830]: I0227 17:36:20.975096 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"64bfb115-0d42-406c-8cf7-eee1da063fdf","Type":"ContainerStarted","Data":"1951fcb1ba0e770e196a42359b0bc8f9d73646844efac0d833157c97e7b7d75e"}
Feb 27 17:36:21 crc kubenswrapper[4830]: I0227 17:36:21.008697 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=3.008675061 podStartE2EDuration="3.008675061s" podCreationTimestamp="2026-02-27 17:36:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 17:36:20.994214203 +0000 UTC m=+5377.083486666" watchObservedRunningTime="2026-02-27 17:36:21.008675061 +0000 UTC m=+5377.097947524"
Feb 27 17:36:21 crc kubenswrapper[4830]: I0227 17:36:21.034048 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-2"]
Feb 27 17:36:21 crc kubenswrapper[4830]: I0227 17:36:21.825896 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-2"]
Feb 27 17:36:21 crc kubenswrapper[4830]: W0227 17:36:21.831059 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod22c480d9_5633_47b2_935e_c8db62ccb85f.slice/crio-41ba0fab6b2f48e8e160fb58e5c2a4b7e95a6fb112e65ad1a4fcb182ba0f7e3a WatchSource:0}: Error finding container 41ba0fab6b2f48e8e160fb58e5c2a4b7e95a6fb112e65ad1a4fcb182ba0f7e3a: Status 404 returned error can't find the container with id 41ba0fab6b2f48e8e160fb58e5c2a4b7e95a6fb112e65ad1a4fcb182ba0f7e3a
Feb 27 17:36:21 crc kubenswrapper[4830]: I0227 17:36:21.988820 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-1" event={"ID":"3aaa1ca6-2199-4a24-8ac2-af90f0f4b4e0","Type":"ContainerStarted","Data":"1efd1a1960e6a8a8babbf0cd213df3ce202c5893c2d994c07c8f00134a53bf2b"}
Feb 27 17:36:21 crc kubenswrapper[4830]: I0227 17:36:21.990741 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-2" event={"ID":"22c480d9-5633-47b2-935e-c8db62ccb85f","Type":"ContainerStarted","Data":"41ba0fab6b2f48e8e160fb58e5c2a4b7e95a6fb112e65ad1a4fcb182ba0f7e3a"}
Feb 27 17:36:21 crc kubenswrapper[4830]: I0227 17:36:21.996152 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"b9629cfc-62f4-4e7b-abc6-c5310b859385","Type":"ContainerStarted","Data":"434d85741f18c13e6c5d87b0ff8ae433e918f59f01b65dcef8131a9807c7f336"}
Feb 27 17:36:21 crc kubenswrapper[4830]: I0227 17:36:21.996215 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"b9629cfc-62f4-4e7b-abc6-c5310b859385","Type":"ContainerStarted","Data":"1156c09ef5fd55cde2ab02a9655e2b26ca5ab9a1545c6bf56e22949a699eea00"}
Feb 27 17:36:21 crc kubenswrapper[4830]: I0227 17:36:21.996230 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"b9629cfc-62f4-4e7b-abc6-c5310b859385","Type":"ContainerStarted","Data":"b686e6421edf40b674e3d5fd605a996fc9c90e087f9eeb5a4c220b2708798f6b"}
Feb 27 17:36:21 crc kubenswrapper[4830]: I0227 17:36:21.998532 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" event={"ID":"3b5aca21-88b5-41b5-a8fa-58df03c2dc7b","Type":"ContainerStarted","Data":"a8138f90c85f2b3d24521845958c65665f2d79c1a1245ae09d1aec0f9fb5f5d0"}
Feb 27 17:36:21 crc kubenswrapper[4830]: I0227 17:36:21.998588 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" event={"ID":"3b5aca21-88b5-41b5-a8fa-58df03c2dc7b","Type":"ContainerStarted","Data":"3e1b259f9bcf7923b9b937344696b8602420e763bfbc89ea131cd6bccd7d27e8"}
Feb 27 17:36:21 crc kubenswrapper[4830]: I0227 17:36:21.998609 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" event={"ID":"3b5aca21-88b5-41b5-a8fa-58df03c2dc7b","Type":"ContainerStarted","Data":"b5736b86190c74d65d95f5a056af687f29aad7ffa94debe770f42423e2d7723d"}
Feb 27 17:36:22 crc kubenswrapper[4830]: I0227 17:36:22.003544 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-1" event={"ID":"63a0cf52-f7cb-41ac-80a8-e83fcaff23d2","Type":"ContainerStarted","Data":"b6449ada4ec2ce610b5c584bd124f3bb417dc7a060fbdf562c5edb0c806eee45"}
Feb 27 17:36:22 crc kubenswrapper[4830]: I0227 17:36:22.003614 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-1" event={"ID":"63a0cf52-f7cb-41ac-80a8-e83fcaff23d2","Type":"ContainerStarted","Data":"7debe282c3d1dbac5e99471c830b4ebf7f7c07fdd901cdfdff473bbaf9280a95"}
Feb 27 17:36:22 crc kubenswrapper[4830]: I0227 17:36:22.019625 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-1" podStartSLOduration=4.019600595 podStartE2EDuration="4.019600595s" podCreationTimestamp="2026-02-27 17:36:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 17:36:22.01939727 +0000 UTC m=+5378.108669773" watchObservedRunningTime="2026-02-27 17:36:22.019600595 +0000 UTC m=+5378.108873088"
Feb 27 17:36:22 crc kubenswrapper[4830]: I0227 17:36:22.061931 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-1" podStartSLOduration=4.061909933 podStartE2EDuration="4.061909933s" podCreationTimestamp="2026-02-27 17:36:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 17:36:22.060002157 +0000 UTC m=+5378.149274620" watchObservedRunningTime="2026-02-27 17:36:22.061909933 +0000 UTC m=+5378.151182396"
Feb 27 17:36:22 crc kubenswrapper[4830]: I0227 17:36:22.083648 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-2" podStartSLOduration=4.083628435 podStartE2EDuration="4.083628435s" podCreationTimestamp="2026-02-27 17:36:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 17:36:22.083614265 +0000 UTC m=+5378.172886738" watchObservedRunningTime="2026-02-27 17:36:22.083628435 +0000 UTC m=+5378.172900898"
Feb 27 17:36:22 crc kubenswrapper[4830]: I0227 17:36:22.120221 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=4.120183464 podStartE2EDuration="4.120183464s" podCreationTimestamp="2026-02-27 17:36:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 17:36:22.112638253 +0000 UTC m=+5378.201910716" watchObservedRunningTime="2026-02-27 17:36:22.120183464 +0000 UTC m=+5378.209455947"
Feb 27 17:36:22 crc kubenswrapper[4830]: I0227 17:36:22.971173 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0"
Feb 27 17:36:23 crc kubenswrapper[4830]: I0227 17:36:23.019533 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-2" event={"ID":"22c480d9-5633-47b2-935e-c8db62ccb85f","Type":"ContainerStarted","Data":"439469df6417a9adc0b000d194e8d8e0926ef58ddc0fa21af4f3acaf12df3a7b"}
Feb 27 17:36:23 crc kubenswrapper[4830]: I0227 17:36:23.019614 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-2" event={"ID":"22c480d9-5633-47b2-935e-c8db62ccb85f","Type":"ContainerStarted","Data":"4e14fded1c83f31098a4dc8be0d61102a848aa7b3724beb476c52fed364dd903"}
Feb 27 17:36:23 crc kubenswrapper[4830]: I0227 17:36:23.060659 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-1"
Feb 27 17:36:23 crc kubenswrapper[4830]: I0227 17:36:23.067926 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-2" podStartSLOduration=5.067884978 podStartE2EDuration="5.067884978s" podCreationTimestamp="2026-02-27 17:36:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 17:36:23.052832236 +0000 UTC m=+5379.142104729" watchObservedRunningTime="2026-02-27 17:36:23.067884978 +0000 UTC m=+5379.157157481"
Feb 27 17:36:23 crc kubenswrapper[4830]: I0227 17:36:23.137309 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-1"
Feb 27 17:36:23 crc kubenswrapper[4830]: I0227 17:36:23.261287 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0"
Feb 27 17:36:23 crc kubenswrapper[4830]: I0227 17:36:23.274749 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-2"
Feb 27 17:36:23 crc kubenswrapper[4830]: I0227 17:36:23.309293 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-2"
Feb 27 17:36:23 crc kubenswrapper[4830]: I0227 17:36:23.366184 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-1"
Feb 27 17:36:24 crc kubenswrapper[4830]: I0227 17:36:24.037147 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-1"
Feb 27 17:36:24 crc kubenswrapper[4830]: I0227 17:36:24.971587 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0"
Feb 27 17:36:25 crc kubenswrapper[4830]: I0227 17:36:25.119658 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-1"
Feb 27 17:36:25 crc kubenswrapper[4830]: I0227 17:36:25.260789 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0"
Feb 27 17:36:25 crc kubenswrapper[4830]: I0227 17:36:25.274660 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-2"
Feb 27 17:36:25 crc kubenswrapper[4830]: I0227 17:36:25.309311 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-2"
Feb 27 17:36:25 crc kubenswrapper[4830]: I0227 17:36:25.365567 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-1"
Feb 27 17:36:25 crc kubenswrapper[4830]: I0227 17:36:25.468429 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-547968cc8f-pzpk4"]
Feb 27 17:36:25 crc kubenswrapper[4830]: I0227 17:36:25.471357 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-547968cc8f-pzpk4"
Feb 27 17:36:25 crc kubenswrapper[4830]: I0227 17:36:25.473681 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb"
Feb 27 17:36:25 crc kubenswrapper[4830]: I0227 17:36:25.488880 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-547968cc8f-pzpk4"]
Feb 27 17:36:25 crc kubenswrapper[4830]: I0227 17:36:25.602484 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/92cf14c2-0505-41db-804c-1413c9cfb87f-dns-svc\") pod \"dnsmasq-dns-547968cc8f-pzpk4\" (UID: \"92cf14c2-0505-41db-804c-1413c9cfb87f\") " pod="openstack/dnsmasq-dns-547968cc8f-pzpk4"
Feb 27 17:36:25 crc kubenswrapper[4830]: I0227 17:36:25.602598 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/92cf14c2-0505-41db-804c-1413c9cfb87f-ovsdbserver-nb\") pod \"dnsmasq-dns-547968cc8f-pzpk4\" (UID: \"92cf14c2-0505-41db-804c-1413c9cfb87f\") " pod="openstack/dnsmasq-dns-547968cc8f-pzpk4"
Feb 27 17:36:25 crc kubenswrapper[4830]: I0227 17:36:25.602667 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2fhb\" (UniqueName: \"kubernetes.io/projected/92cf14c2-0505-41db-804c-1413c9cfb87f-kube-api-access-v2fhb\") pod \"dnsmasq-dns-547968cc8f-pzpk4\" (UID: \"92cf14c2-0505-41db-804c-1413c9cfb87f\") " pod="openstack/dnsmasq-dns-547968cc8f-pzpk4"
Feb 27 17:36:25 crc kubenswrapper[4830]: I0227 17:36:25.602901 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/92cf14c2-0505-41db-804c-1413c9cfb87f-config\") pod \"dnsmasq-dns-547968cc8f-pzpk4\" (UID: \"92cf14c2-0505-41db-804c-1413c9cfb87f\") " pod="openstack/dnsmasq-dns-547968cc8f-pzpk4"
Feb 27 17:36:25 crc kubenswrapper[4830]: I0227 17:36:25.704925 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/92cf14c2-0505-41db-804c-1413c9cfb87f-config\") pod \"dnsmasq-dns-547968cc8f-pzpk4\" (UID: \"92cf14c2-0505-41db-804c-1413c9cfb87f\") " pod="openstack/dnsmasq-dns-547968cc8f-pzpk4"
Feb 27 17:36:25 crc kubenswrapper[4830]: I0227 17:36:25.705050 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/92cf14c2-0505-41db-804c-1413c9cfb87f-dns-svc\") pod \"dnsmasq-dns-547968cc8f-pzpk4\" (UID: \"92cf14c2-0505-41db-804c-1413c9cfb87f\") " pod="openstack/dnsmasq-dns-547968cc8f-pzpk4"
Feb 27 17:36:25 crc kubenswrapper[4830]: I0227 17:36:25.705128 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/92cf14c2-0505-41db-804c-1413c9cfb87f-ovsdbserver-nb\") pod \"dnsmasq-dns-547968cc8f-pzpk4\" (UID: \"92cf14c2-0505-41db-804c-1413c9cfb87f\") " pod="openstack/dnsmasq-dns-547968cc8f-pzpk4"
Feb 27 17:36:25 crc kubenswrapper[4830]: I0227 17:36:25.705181 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2fhb\" (UniqueName: \"kubernetes.io/projected/92cf14c2-0505-41db-804c-1413c9cfb87f-kube-api-access-v2fhb\") pod \"dnsmasq-dns-547968cc8f-pzpk4\" (UID: \"92cf14c2-0505-41db-804c-1413c9cfb87f\") " pod="openstack/dnsmasq-dns-547968cc8f-pzpk4"
Feb 27 17:36:25 crc kubenswrapper[4830]: I0227 17:36:25.706053 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/92cf14c2-0505-41db-804c-1413c9cfb87f-config\") pod \"dnsmasq-dns-547968cc8f-pzpk4\" (UID: \"92cf14c2-0505-41db-804c-1413c9cfb87f\") " pod="openstack/dnsmasq-dns-547968cc8f-pzpk4"
Feb 27 17:36:25 crc kubenswrapper[4830]: I0227 17:36:25.706262 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/92cf14c2-0505-41db-804c-1413c9cfb87f-dns-svc\") pod \"dnsmasq-dns-547968cc8f-pzpk4\" (UID: \"92cf14c2-0505-41db-804c-1413c9cfb87f\") " pod="openstack/dnsmasq-dns-547968cc8f-pzpk4"
Feb 27 17:36:25 crc kubenswrapper[4830]: I0227 17:36:25.706445 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/92cf14c2-0505-41db-804c-1413c9cfb87f-ovsdbserver-nb\") pod \"dnsmasq-dns-547968cc8f-pzpk4\" (UID: \"92cf14c2-0505-41db-804c-1413c9cfb87f\") " pod="openstack/dnsmasq-dns-547968cc8f-pzpk4"
Feb 27 17:36:25 crc kubenswrapper[4830]: I0227 17:36:25.736630 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v2fhb\" (UniqueName: \"kubernetes.io/projected/92cf14c2-0505-41db-804c-1413c9cfb87f-kube-api-access-v2fhb\") pod \"dnsmasq-dns-547968cc8f-pzpk4\" (UID: \"92cf14c2-0505-41db-804c-1413c9cfb87f\") " pod="openstack/dnsmasq-dns-547968cc8f-pzpk4"
Feb 27 17:36:25 crc kubenswrapper[4830]: I0227 17:36:25.795504 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-547968cc8f-pzpk4"
Feb 27 17:36:26 crc kubenswrapper[4830]: I0227 17:36:26.025598 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0"
Feb 27 17:36:26 crc kubenswrapper[4830]: I0227 17:36:26.069057 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0"
Feb 27 17:36:26 crc kubenswrapper[4830]: I0227 17:36:26.283756 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-547968cc8f-pzpk4"]
Feb 27 17:36:26 crc kubenswrapper[4830]: W0227 17:36:26.288653 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod92cf14c2_0505_41db_804c_1413c9cfb87f.slice/crio-c964b4e5e6af1cbff7b814e2e9cf2dadfa11cf3ed82286d240ac225f6e578c5d WatchSource:0}: Error finding container c964b4e5e6af1cbff7b814e2e9cf2dadfa11cf3ed82286d240ac225f6e578c5d: Status 404 returned error can't find the container with id c964b4e5e6af1cbff7b814e2e9cf2dadfa11cf3ed82286d240ac225f6e578c5d
Feb 27 17:36:26 crc kubenswrapper[4830]: I0227 17:36:26.322971 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0"
Feb 27 17:36:26 crc kubenswrapper[4830]: I0227 17:36:26.350815 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-2"
Feb 27 17:36:26 crc kubenswrapper[4830]: I0227 17:36:26.375969 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0"
Feb 27 17:36:26 crc kubenswrapper[4830]: I0227 17:36:26.417474 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-2"
Feb 27 17:36:26 crc kubenswrapper[4830]: I0227 17:36:26.423744 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-1"
Feb 27 17:36:26 crc kubenswrapper[4830]: I0227 17:36:26.496241 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-2"
Feb 27 17:36:26 crc kubenswrapper[4830]: I0227 17:36:26.501820 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-1"
Feb 27 17:36:26 crc kubenswrapper[4830]: I0227 17:36:26.605374 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-547968cc8f-pzpk4"]
Feb 27 17:36:26 crc kubenswrapper[4830]: I0227 17:36:26.638356 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-55c8698c57-xb4mv"]
Feb 27 17:36:26 crc kubenswrapper[4830]: I0227 17:36:26.639827 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55c8698c57-xb4mv"
Feb 27 17:36:26 crc kubenswrapper[4830]: I0227 17:36:26.641775 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb"
Feb 27 17:36:26 crc kubenswrapper[4830]: I0227 17:36:26.660019 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55c8698c57-xb4mv"]
Feb 27 17:36:26 crc kubenswrapper[4830]: I0227 17:36:26.743761 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/45ecec9f-98ba-40bf-8a0f-7adaf09e74d0-dns-svc\") pod \"dnsmasq-dns-55c8698c57-xb4mv\" (UID: \"45ecec9f-98ba-40bf-8a0f-7adaf09e74d0\") " pod="openstack/dnsmasq-dns-55c8698c57-xb4mv"
Feb 27 17:36:26 crc kubenswrapper[4830]: I0227 17:36:26.744182 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/45ecec9f-98ba-40bf-8a0f-7adaf09e74d0-ovsdbserver-nb\") pod \"dnsmasq-dns-55c8698c57-xb4mv\" (UID: \"45ecec9f-98ba-40bf-8a0f-7adaf09e74d0\") " pod="openstack/dnsmasq-dns-55c8698c57-xb4mv"
Feb 27 17:36:26 crc kubenswrapper[4830]: I0227 17:36:26.744232 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/45ecec9f-98ba-40bf-8a0f-7adaf09e74d0-ovsdbserver-sb\") pod \"dnsmasq-dns-55c8698c57-xb4mv\" (UID: \"45ecec9f-98ba-40bf-8a0f-7adaf09e74d0\") " pod="openstack/dnsmasq-dns-55c8698c57-xb4mv"
Feb 27 17:36:26 crc kubenswrapper[4830]: I0227 17:36:26.744276 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45ecec9f-98ba-40bf-8a0f-7adaf09e74d0-config\") pod \"dnsmasq-dns-55c8698c57-xb4mv\" (UID: \"45ecec9f-98ba-40bf-8a0f-7adaf09e74d0\") " pod="openstack/dnsmasq-dns-55c8698c57-xb4mv"
Feb 27 17:36:26 crc kubenswrapper[4830]: I0227 17:36:26.744363 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkfrk\" (UniqueName: \"kubernetes.io/projected/45ecec9f-98ba-40bf-8a0f-7adaf09e74d0-kube-api-access-zkfrk\") pod \"dnsmasq-dns-55c8698c57-xb4mv\" (UID: \"45ecec9f-98ba-40bf-8a0f-7adaf09e74d0\") " pod="openstack/dnsmasq-dns-55c8698c57-xb4mv"
Feb 27 17:36:26 crc kubenswrapper[4830]: I0227 17:36:26.846386 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zkfrk\" (UniqueName: \"kubernetes.io/projected/45ecec9f-98ba-40bf-8a0f-7adaf09e74d0-kube-api-access-zkfrk\") pod \"dnsmasq-dns-55c8698c57-xb4mv\" (UID: \"45ecec9f-98ba-40bf-8a0f-7adaf09e74d0\") " pod="openstack/dnsmasq-dns-55c8698c57-xb4mv"
Feb 27 17:36:26 crc kubenswrapper[4830]: I0227 17:36:26.846510 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/45ecec9f-98ba-40bf-8a0f-7adaf09e74d0-dns-svc\") pod \"dnsmasq-dns-55c8698c57-xb4mv\" (UID: \"45ecec9f-98ba-40bf-8a0f-7adaf09e74d0\") " pod="openstack/dnsmasq-dns-55c8698c57-xb4mv"
Feb 27 17:36:26 crc kubenswrapper[4830]:
I0227 17:36:26.846553 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/45ecec9f-98ba-40bf-8a0f-7adaf09e74d0-ovsdbserver-nb\") pod \"dnsmasq-dns-55c8698c57-xb4mv\" (UID: \"45ecec9f-98ba-40bf-8a0f-7adaf09e74d0\") " pod="openstack/dnsmasq-dns-55c8698c57-xb4mv" Feb 27 17:36:26 crc kubenswrapper[4830]: I0227 17:36:26.846631 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/45ecec9f-98ba-40bf-8a0f-7adaf09e74d0-ovsdbserver-sb\") pod \"dnsmasq-dns-55c8698c57-xb4mv\" (UID: \"45ecec9f-98ba-40bf-8a0f-7adaf09e74d0\") " pod="openstack/dnsmasq-dns-55c8698c57-xb4mv" Feb 27 17:36:26 crc kubenswrapper[4830]: I0227 17:36:26.846703 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45ecec9f-98ba-40bf-8a0f-7adaf09e74d0-config\") pod \"dnsmasq-dns-55c8698c57-xb4mv\" (UID: \"45ecec9f-98ba-40bf-8a0f-7adaf09e74d0\") " pod="openstack/dnsmasq-dns-55c8698c57-xb4mv" Feb 27 17:36:26 crc kubenswrapper[4830]: I0227 17:36:26.848315 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/45ecec9f-98ba-40bf-8a0f-7adaf09e74d0-dns-svc\") pod \"dnsmasq-dns-55c8698c57-xb4mv\" (UID: \"45ecec9f-98ba-40bf-8a0f-7adaf09e74d0\") " pod="openstack/dnsmasq-dns-55c8698c57-xb4mv" Feb 27 17:36:26 crc kubenswrapper[4830]: I0227 17:36:26.848440 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/45ecec9f-98ba-40bf-8a0f-7adaf09e74d0-ovsdbserver-sb\") pod \"dnsmasq-dns-55c8698c57-xb4mv\" (UID: \"45ecec9f-98ba-40bf-8a0f-7adaf09e74d0\") " pod="openstack/dnsmasq-dns-55c8698c57-xb4mv" Feb 27 17:36:26 crc kubenswrapper[4830]: I0227 17:36:26.848558 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/45ecec9f-98ba-40bf-8a0f-7adaf09e74d0-config\") pod \"dnsmasq-dns-55c8698c57-xb4mv\" (UID: \"45ecec9f-98ba-40bf-8a0f-7adaf09e74d0\") " pod="openstack/dnsmasq-dns-55c8698c57-xb4mv" Feb 27 17:36:26 crc kubenswrapper[4830]: I0227 17:36:26.848687 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/45ecec9f-98ba-40bf-8a0f-7adaf09e74d0-ovsdbserver-nb\") pod \"dnsmasq-dns-55c8698c57-xb4mv\" (UID: \"45ecec9f-98ba-40bf-8a0f-7adaf09e74d0\") " pod="openstack/dnsmasq-dns-55c8698c57-xb4mv" Feb 27 17:36:26 crc kubenswrapper[4830]: I0227 17:36:26.874229 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zkfrk\" (UniqueName: \"kubernetes.io/projected/45ecec9f-98ba-40bf-8a0f-7adaf09e74d0-kube-api-access-zkfrk\") pod \"dnsmasq-dns-55c8698c57-xb4mv\" (UID: \"45ecec9f-98ba-40bf-8a0f-7adaf09e74d0\") " pod="openstack/dnsmasq-dns-55c8698c57-xb4mv" Feb 27 17:36:26 crc kubenswrapper[4830]: I0227 17:36:26.968017 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55c8698c57-xb4mv" Feb 27 17:36:27 crc kubenswrapper[4830]: I0227 17:36:27.076406 4830 generic.go:334] "Generic (PLEG): container finished" podID="92cf14c2-0505-41db-804c-1413c9cfb87f" containerID="f78258691e24dbd955e55f39e3ab171c30345922eb664d472836910bf6d06600" exitCode=0 Feb 27 17:36:27 crc kubenswrapper[4830]: I0227 17:36:27.078613 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-547968cc8f-pzpk4" event={"ID":"92cf14c2-0505-41db-804c-1413c9cfb87f","Type":"ContainerDied","Data":"f78258691e24dbd955e55f39e3ab171c30345922eb664d472836910bf6d06600"} Feb 27 17:36:27 crc kubenswrapper[4830]: I0227 17:36:27.078669 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-547968cc8f-pzpk4" event={"ID":"92cf14c2-0505-41db-804c-1413c9cfb87f","Type":"ContainerStarted","Data":"c964b4e5e6af1cbff7b814e2e9cf2dadfa11cf3ed82286d240ac225f6e578c5d"} Feb 27 17:36:27 crc kubenswrapper[4830]: I0227 17:36:27.157627 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-2" Feb 27 17:36:27 crc kubenswrapper[4830]: I0227 17:36:27.551311 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55c8698c57-xb4mv"] Feb 27 17:36:27 crc kubenswrapper[4830]: W0227 17:36:27.556514 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod45ecec9f_98ba_40bf_8a0f_7adaf09e74d0.slice/crio-30c864b673aa0e5296aba9c3f40e28f9c26126673f3ff6d754cde3bf55220dd3 WatchSource:0}: Error finding container 30c864b673aa0e5296aba9c3f40e28f9c26126673f3ff6d754cde3bf55220dd3: Status 404 returned error can't find the container with id 30c864b673aa0e5296aba9c3f40e28f9c26126673f3ff6d754cde3bf55220dd3 Feb 27 17:36:28 crc kubenswrapper[4830]: I0227 17:36:28.099674 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-547968cc8f-pzpk4" 
event={"ID":"92cf14c2-0505-41db-804c-1413c9cfb87f","Type":"ContainerStarted","Data":"e9e136824a85bca907ed2f24c7dc9f6d8839ab4c656bda77857345d637c34c8b"} Feb 27 17:36:28 crc kubenswrapper[4830]: I0227 17:36:28.100328 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-547968cc8f-pzpk4" Feb 27 17:36:28 crc kubenswrapper[4830]: I0227 17:36:28.099864 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-547968cc8f-pzpk4" podUID="92cf14c2-0505-41db-804c-1413c9cfb87f" containerName="dnsmasq-dns" containerID="cri-o://e9e136824a85bca907ed2f24c7dc9f6d8839ab4c656bda77857345d637c34c8b" gracePeriod=10 Feb 27 17:36:28 crc kubenswrapper[4830]: I0227 17:36:28.105730 4830 generic.go:334] "Generic (PLEG): container finished" podID="45ecec9f-98ba-40bf-8a0f-7adaf09e74d0" containerID="fda945cd20c2726f61f3cb4730ca4b6cf7d4bc487646314c0ed7a9382a73bbeb" exitCode=0 Feb 27 17:36:28 crc kubenswrapper[4830]: I0227 17:36:28.105921 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55c8698c57-xb4mv" event={"ID":"45ecec9f-98ba-40bf-8a0f-7adaf09e74d0","Type":"ContainerDied","Data":"fda945cd20c2726f61f3cb4730ca4b6cf7d4bc487646314c0ed7a9382a73bbeb"} Feb 27 17:36:28 crc kubenswrapper[4830]: I0227 17:36:28.106018 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55c8698c57-xb4mv" event={"ID":"45ecec9f-98ba-40bf-8a0f-7adaf09e74d0","Type":"ContainerStarted","Data":"30c864b673aa0e5296aba9c3f40e28f9c26126673f3ff6d754cde3bf55220dd3"} Feb 27 17:36:28 crc kubenswrapper[4830]: I0227 17:36:28.142879 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-547968cc8f-pzpk4" podStartSLOduration=3.14284195 podStartE2EDuration="3.14284195s" podCreationTimestamp="2026-02-27 17:36:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-27 17:36:28.133666759 +0000 UTC m=+5384.222939262" watchObservedRunningTime="2026-02-27 17:36:28.14284195 +0000 UTC m=+5384.232114453" Feb 27 17:36:28 crc kubenswrapper[4830]: I0227 17:36:28.635609 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-547968cc8f-pzpk4" Feb 27 17:36:28 crc kubenswrapper[4830]: I0227 17:36:28.689507 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/92cf14c2-0505-41db-804c-1413c9cfb87f-ovsdbserver-nb\") pod \"92cf14c2-0505-41db-804c-1413c9cfb87f\" (UID: \"92cf14c2-0505-41db-804c-1413c9cfb87f\") " Feb 27 17:36:28 crc kubenswrapper[4830]: I0227 17:36:28.689563 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/92cf14c2-0505-41db-804c-1413c9cfb87f-config\") pod \"92cf14c2-0505-41db-804c-1413c9cfb87f\" (UID: \"92cf14c2-0505-41db-804c-1413c9cfb87f\") " Feb 27 17:36:28 crc kubenswrapper[4830]: I0227 17:36:28.689790 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/92cf14c2-0505-41db-804c-1413c9cfb87f-dns-svc\") pod \"92cf14c2-0505-41db-804c-1413c9cfb87f\" (UID: \"92cf14c2-0505-41db-804c-1413c9cfb87f\") " Feb 27 17:36:28 crc kubenswrapper[4830]: I0227 17:36:28.689813 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v2fhb\" (UniqueName: \"kubernetes.io/projected/92cf14c2-0505-41db-804c-1413c9cfb87f-kube-api-access-v2fhb\") pod \"92cf14c2-0505-41db-804c-1413c9cfb87f\" (UID: \"92cf14c2-0505-41db-804c-1413c9cfb87f\") " Feb 27 17:36:28 crc kubenswrapper[4830]: I0227 17:36:28.698499 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92cf14c2-0505-41db-804c-1413c9cfb87f-kube-api-access-v2fhb" 
(OuterVolumeSpecName: "kube-api-access-v2fhb") pod "92cf14c2-0505-41db-804c-1413c9cfb87f" (UID: "92cf14c2-0505-41db-804c-1413c9cfb87f"). InnerVolumeSpecName "kube-api-access-v2fhb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:36:28 crc kubenswrapper[4830]: I0227 17:36:28.746997 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92cf14c2-0505-41db-804c-1413c9cfb87f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "92cf14c2-0505-41db-804c-1413c9cfb87f" (UID: "92cf14c2-0505-41db-804c-1413c9cfb87f"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:36:28 crc kubenswrapper[4830]: I0227 17:36:28.761028 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92cf14c2-0505-41db-804c-1413c9cfb87f-config" (OuterVolumeSpecName: "config") pod "92cf14c2-0505-41db-804c-1413c9cfb87f" (UID: "92cf14c2-0505-41db-804c-1413c9cfb87f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:36:28 crc kubenswrapper[4830]: I0227 17:36:28.779852 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92cf14c2-0505-41db-804c-1413c9cfb87f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "92cf14c2-0505-41db-804c-1413c9cfb87f" (UID: "92cf14c2-0505-41db-804c-1413c9cfb87f"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:36:28 crc kubenswrapper[4830]: I0227 17:36:28.791912 4830 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/92cf14c2-0505-41db-804c-1413c9cfb87f-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 27 17:36:28 crc kubenswrapper[4830]: I0227 17:36:28.791941 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v2fhb\" (UniqueName: \"kubernetes.io/projected/92cf14c2-0505-41db-804c-1413c9cfb87f-kube-api-access-v2fhb\") on node \"crc\" DevicePath \"\"" Feb 27 17:36:28 crc kubenswrapper[4830]: I0227 17:36:28.791989 4830 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/92cf14c2-0505-41db-804c-1413c9cfb87f-config\") on node \"crc\" DevicePath \"\"" Feb 27 17:36:28 crc kubenswrapper[4830]: I0227 17:36:28.791998 4830 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/92cf14c2-0505-41db-804c-1413c9cfb87f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 27 17:36:29 crc kubenswrapper[4830]: I0227 17:36:29.121938 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55c8698c57-xb4mv" event={"ID":"45ecec9f-98ba-40bf-8a0f-7adaf09e74d0","Type":"ContainerStarted","Data":"56ac54a1d1a4a8c5a03f462557375ac8be5907d48a387597cd5e0dd844fa79af"} Feb 27 17:36:29 crc kubenswrapper[4830]: I0227 17:36:29.137274 4830 generic.go:334] "Generic (PLEG): container finished" podID="92cf14c2-0505-41db-804c-1413c9cfb87f" containerID="e9e136824a85bca907ed2f24c7dc9f6d8839ab4c656bda77857345d637c34c8b" exitCode=0 Feb 27 17:36:29 crc kubenswrapper[4830]: I0227 17:36:29.137334 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-547968cc8f-pzpk4" event={"ID":"92cf14c2-0505-41db-804c-1413c9cfb87f","Type":"ContainerDied","Data":"e9e136824a85bca907ed2f24c7dc9f6d8839ab4c656bda77857345d637c34c8b"} Feb 
27 17:36:29 crc kubenswrapper[4830]: I0227 17:36:29.137369 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-547968cc8f-pzpk4" event={"ID":"92cf14c2-0505-41db-804c-1413c9cfb87f","Type":"ContainerDied","Data":"c964b4e5e6af1cbff7b814e2e9cf2dadfa11cf3ed82286d240ac225f6e578c5d"} Feb 27 17:36:29 crc kubenswrapper[4830]: I0227 17:36:29.137391 4830 scope.go:117] "RemoveContainer" containerID="e9e136824a85bca907ed2f24c7dc9f6d8839ab4c656bda77857345d637c34c8b" Feb 27 17:36:29 crc kubenswrapper[4830]: I0227 17:36:29.137550 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-547968cc8f-pzpk4" Feb 27 17:36:29 crc kubenswrapper[4830]: I0227 17:36:29.161304 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-55c8698c57-xb4mv" podStartSLOduration=3.161277485 podStartE2EDuration="3.161277485s" podCreationTimestamp="2026-02-27 17:36:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 17:36:29.147853341 +0000 UTC m=+5385.237125814" watchObservedRunningTime="2026-02-27 17:36:29.161277485 +0000 UTC m=+5385.250549958" Feb 27 17:36:29 crc kubenswrapper[4830]: I0227 17:36:29.172357 4830 scope.go:117] "RemoveContainer" containerID="f78258691e24dbd955e55f39e3ab171c30345922eb664d472836910bf6d06600" Feb 27 17:36:29 crc kubenswrapper[4830]: I0227 17:36:29.186917 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-547968cc8f-pzpk4"] Feb 27 17:36:29 crc kubenswrapper[4830]: I0227 17:36:29.197567 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-547968cc8f-pzpk4"] Feb 27 17:36:29 crc kubenswrapper[4830]: I0227 17:36:29.198088 4830 scope.go:117] "RemoveContainer" containerID="e9e136824a85bca907ed2f24c7dc9f6d8839ab4c656bda77857345d637c34c8b" Feb 27 17:36:29 crc kubenswrapper[4830]: E0227 17:36:29.198853 
4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e9e136824a85bca907ed2f24c7dc9f6d8839ab4c656bda77857345d637c34c8b\": container with ID starting with e9e136824a85bca907ed2f24c7dc9f6d8839ab4c656bda77857345d637c34c8b not found: ID does not exist" containerID="e9e136824a85bca907ed2f24c7dc9f6d8839ab4c656bda77857345d637c34c8b" Feb 27 17:36:29 crc kubenswrapper[4830]: I0227 17:36:29.199060 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9e136824a85bca907ed2f24c7dc9f6d8839ab4c656bda77857345d637c34c8b"} err="failed to get container status \"e9e136824a85bca907ed2f24c7dc9f6d8839ab4c656bda77857345d637c34c8b\": rpc error: code = NotFound desc = could not find container \"e9e136824a85bca907ed2f24c7dc9f6d8839ab4c656bda77857345d637c34c8b\": container with ID starting with e9e136824a85bca907ed2f24c7dc9f6d8839ab4c656bda77857345d637c34c8b not found: ID does not exist" Feb 27 17:36:29 crc kubenswrapper[4830]: I0227 17:36:29.199188 4830 scope.go:117] "RemoveContainer" containerID="f78258691e24dbd955e55f39e3ab171c30345922eb664d472836910bf6d06600" Feb 27 17:36:29 crc kubenswrapper[4830]: E0227 17:36:29.199885 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f78258691e24dbd955e55f39e3ab171c30345922eb664d472836910bf6d06600\": container with ID starting with f78258691e24dbd955e55f39e3ab171c30345922eb664d472836910bf6d06600 not found: ID does not exist" containerID="f78258691e24dbd955e55f39e3ab171c30345922eb664d472836910bf6d06600" Feb 27 17:36:29 crc kubenswrapper[4830]: I0227 17:36:29.199940 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f78258691e24dbd955e55f39e3ab171c30345922eb664d472836910bf6d06600"} err="failed to get container status \"f78258691e24dbd955e55f39e3ab171c30345922eb664d472836910bf6d06600\": rpc error: code = 
NotFound desc = could not find container \"f78258691e24dbd955e55f39e3ab171c30345922eb664d472836910bf6d06600\": container with ID starting with f78258691e24dbd955e55f39e3ab171c30345922eb664d472836910bf6d06600 not found: ID does not exist" Feb 27 17:36:29 crc kubenswrapper[4830]: I0227 17:36:29.718849 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-copy-data"] Feb 27 17:36:29 crc kubenswrapper[4830]: E0227 17:36:29.719543 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92cf14c2-0505-41db-804c-1413c9cfb87f" containerName="init" Feb 27 17:36:29 crc kubenswrapper[4830]: I0227 17:36:29.719568 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="92cf14c2-0505-41db-804c-1413c9cfb87f" containerName="init" Feb 27 17:36:29 crc kubenswrapper[4830]: E0227 17:36:29.719627 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92cf14c2-0505-41db-804c-1413c9cfb87f" containerName="dnsmasq-dns" Feb 27 17:36:29 crc kubenswrapper[4830]: I0227 17:36:29.719641 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="92cf14c2-0505-41db-804c-1413c9cfb87f" containerName="dnsmasq-dns" Feb 27 17:36:29 crc kubenswrapper[4830]: I0227 17:36:29.720012 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="92cf14c2-0505-41db-804c-1413c9cfb87f" containerName="dnsmasq-dns" Feb 27 17:36:29 crc kubenswrapper[4830]: I0227 17:36:29.721091 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-copy-data" Feb 27 17:36:29 crc kubenswrapper[4830]: I0227 17:36:29.733027 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovn-data-cert" Feb 27 17:36:29 crc kubenswrapper[4830]: I0227 17:36:29.753832 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-copy-data"] Feb 27 17:36:29 crc kubenswrapper[4830]: I0227 17:36:29.825065 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/af8bf2f1-b870-4b65-ac65-fcf8a2a11c1f-ovn-data-cert\") pod \"ovn-copy-data\" (UID: \"af8bf2f1-b870-4b65-ac65-fcf8a2a11c1f\") " pod="openstack/ovn-copy-data" Feb 27 17:36:29 crc kubenswrapper[4830]: I0227 17:36:29.825168 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65wfb\" (UniqueName: \"kubernetes.io/projected/af8bf2f1-b870-4b65-ac65-fcf8a2a11c1f-kube-api-access-65wfb\") pod \"ovn-copy-data\" (UID: \"af8bf2f1-b870-4b65-ac65-fcf8a2a11c1f\") " pod="openstack/ovn-copy-data" Feb 27 17:36:29 crc kubenswrapper[4830]: I0227 17:36:29.825256 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-186f59e7-5a33-4c26-a7f5-627da800702d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-186f59e7-5a33-4c26-a7f5-627da800702d\") pod \"ovn-copy-data\" (UID: \"af8bf2f1-b870-4b65-ac65-fcf8a2a11c1f\") " pod="openstack/ovn-copy-data" Feb 27 17:36:29 crc kubenswrapper[4830]: I0227 17:36:29.927169 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-65wfb\" (UniqueName: \"kubernetes.io/projected/af8bf2f1-b870-4b65-ac65-fcf8a2a11c1f-kube-api-access-65wfb\") pod \"ovn-copy-data\" (UID: \"af8bf2f1-b870-4b65-ac65-fcf8a2a11c1f\") " pod="openstack/ovn-copy-data" Feb 27 17:36:29 crc kubenswrapper[4830]: I0227 17:36:29.927290 4830 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-186f59e7-5a33-4c26-a7f5-627da800702d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-186f59e7-5a33-4c26-a7f5-627da800702d\") pod \"ovn-copy-data\" (UID: \"af8bf2f1-b870-4b65-ac65-fcf8a2a11c1f\") " pod="openstack/ovn-copy-data" Feb 27 17:36:29 crc kubenswrapper[4830]: I0227 17:36:29.927442 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/af8bf2f1-b870-4b65-ac65-fcf8a2a11c1f-ovn-data-cert\") pod \"ovn-copy-data\" (UID: \"af8bf2f1-b870-4b65-ac65-fcf8a2a11c1f\") " pod="openstack/ovn-copy-data" Feb 27 17:36:29 crc kubenswrapper[4830]: I0227 17:36:29.934491 4830 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 27 17:36:29 crc kubenswrapper[4830]: I0227 17:36:29.934582 4830 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-186f59e7-5a33-4c26-a7f5-627da800702d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-186f59e7-5a33-4c26-a7f5-627da800702d\") pod \"ovn-copy-data\" (UID: \"af8bf2f1-b870-4b65-ac65-fcf8a2a11c1f\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/7a0ddfa994a77928984a577df2b89ad562dc7caf51a84126d09fa3d137cee011/globalmount\"" pod="openstack/ovn-copy-data" Feb 27 17:36:29 crc kubenswrapper[4830]: I0227 17:36:29.947278 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/af8bf2f1-b870-4b65-ac65-fcf8a2a11c1f-ovn-data-cert\") pod \"ovn-copy-data\" (UID: \"af8bf2f1-b870-4b65-ac65-fcf8a2a11c1f\") " pod="openstack/ovn-copy-data" Feb 27 17:36:29 crc kubenswrapper[4830]: I0227 17:36:29.953155 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-65wfb\" (UniqueName: 
\"kubernetes.io/projected/af8bf2f1-b870-4b65-ac65-fcf8a2a11c1f-kube-api-access-65wfb\") pod \"ovn-copy-data\" (UID: \"af8bf2f1-b870-4b65-ac65-fcf8a2a11c1f\") " pod="openstack/ovn-copy-data" Feb 27 17:36:29 crc kubenswrapper[4830]: I0227 17:36:29.986769 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-186f59e7-5a33-4c26-a7f5-627da800702d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-186f59e7-5a33-4c26-a7f5-627da800702d\") pod \"ovn-copy-data\" (UID: \"af8bf2f1-b870-4b65-ac65-fcf8a2a11c1f\") " pod="openstack/ovn-copy-data" Feb 27 17:36:30 crc kubenswrapper[4830]: I0227 17:36:30.063258 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-copy-data" Feb 27 17:36:30 crc kubenswrapper[4830]: I0227 17:36:30.150773 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-55c8698c57-xb4mv" Feb 27 17:36:30 crc kubenswrapper[4830]: I0227 17:36:30.771911 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92cf14c2-0505-41db-804c-1413c9cfb87f" path="/var/lib/kubelet/pods/92cf14c2-0505-41db-804c-1413c9cfb87f/volumes" Feb 27 17:36:30 crc kubenswrapper[4830]: I0227 17:36:30.813094 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-copy-data"] Feb 27 17:36:31 crc kubenswrapper[4830]: I0227 17:36:31.167090 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-copy-data" event={"ID":"af8bf2f1-b870-4b65-ac65-fcf8a2a11c1f","Type":"ContainerStarted","Data":"5ba1bce1b189fa4620ad2036d198085f046e496a304a8c571c8d246e87764a33"} Feb 27 17:36:31 crc kubenswrapper[4830]: I0227 17:36:31.167144 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-copy-data" event={"ID":"af8bf2f1-b870-4b65-ac65-fcf8a2a11c1f","Type":"ContainerStarted","Data":"8cf7e3e37d02535d219e3e8e4ebe253f12d99799d2be20ebf7c69f2dbbbd1772"} Feb 27 17:36:31 crc kubenswrapper[4830]: I0227 17:36:31.197125 4830 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-copy-data" podStartSLOduration=3.197104999 podStartE2EDuration="3.197104999s" podCreationTimestamp="2026-02-27 17:36:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 17:36:31.187550469 +0000 UTC m=+5387.276822942" watchObservedRunningTime="2026-02-27 17:36:31.197104999 +0000 UTC m=+5387.286377472" Feb 27 17:36:36 crc kubenswrapper[4830]: I0227 17:36:36.936603 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Feb 27 17:36:36 crc kubenswrapper[4830]: I0227 17:36:36.952266 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Feb 27 17:36:36 crc kubenswrapper[4830]: I0227 17:36:36.962905 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-s8fth" Feb 27 17:36:36 crc kubenswrapper[4830]: I0227 17:36:36.963205 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Feb 27 17:36:36 crc kubenswrapper[4830]: I0227 17:36:36.963754 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Feb 27 17:36:36 crc kubenswrapper[4830]: I0227 17:36:36.975815 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 27 17:36:36 crc kubenswrapper[4830]: I0227 17:36:36.981558 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-55c8698c57-xb4mv" Feb 27 17:36:37 crc kubenswrapper[4830]: I0227 17:36:37.001291 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8cea844f-8422-43f9-8056-0fa419120d61-scripts\") pod \"ovn-northd-0\" (UID: \"8cea844f-8422-43f9-8056-0fa419120d61\") " 
pod="openstack/ovn-northd-0" Feb 27 17:36:37 crc kubenswrapper[4830]: I0227 17:36:37.001446 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cea844f-8422-43f9-8056-0fa419120d61-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"8cea844f-8422-43f9-8056-0fa419120d61\") " pod="openstack/ovn-northd-0" Feb 27 17:36:37 crc kubenswrapper[4830]: I0227 17:36:37.001503 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea844f-8422-43f9-8056-0fa419120d61-config\") pod \"ovn-northd-0\" (UID: \"8cea844f-8422-43f9-8056-0fa419120d61\") " pod="openstack/ovn-northd-0" Feb 27 17:36:37 crc kubenswrapper[4830]: I0227 17:36:37.001526 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/8cea844f-8422-43f9-8056-0fa419120d61-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"8cea844f-8422-43f9-8056-0fa419120d61\") " pod="openstack/ovn-northd-0" Feb 27 17:36:37 crc kubenswrapper[4830]: I0227 17:36:37.001619 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndzsd\" (UniqueName: \"kubernetes.io/projected/8cea844f-8422-43f9-8056-0fa419120d61-kube-api-access-ndzsd\") pod \"ovn-northd-0\" (UID: \"8cea844f-8422-43f9-8056-0fa419120d61\") " pod="openstack/ovn-northd-0" Feb 27 17:36:37 crc kubenswrapper[4830]: I0227 17:36:37.059818 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b7946d7b9-68742"] Feb 27 17:36:37 crc kubenswrapper[4830]: I0227 17:36:37.060272 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5b7946d7b9-68742" podUID="f9d32d14-02d4-46b6-8949-d183cf055428" containerName="dnsmasq-dns" 
containerID="cri-o://398a2068cd2085ee139379a1f89e4167dd96f986424eb802cf1c0618fcb22970" gracePeriod=10 Feb 27 17:36:37 crc kubenswrapper[4830]: I0227 17:36:37.104263 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea844f-8422-43f9-8056-0fa419120d61-config\") pod \"ovn-northd-0\" (UID: \"8cea844f-8422-43f9-8056-0fa419120d61\") " pod="openstack/ovn-northd-0" Feb 27 17:36:37 crc kubenswrapper[4830]: I0227 17:36:37.104311 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/8cea844f-8422-43f9-8056-0fa419120d61-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"8cea844f-8422-43f9-8056-0fa419120d61\") " pod="openstack/ovn-northd-0" Feb 27 17:36:37 crc kubenswrapper[4830]: I0227 17:36:37.104418 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ndzsd\" (UniqueName: \"kubernetes.io/projected/8cea844f-8422-43f9-8056-0fa419120d61-kube-api-access-ndzsd\") pod \"ovn-northd-0\" (UID: \"8cea844f-8422-43f9-8056-0fa419120d61\") " pod="openstack/ovn-northd-0" Feb 27 17:36:37 crc kubenswrapper[4830]: I0227 17:36:37.104452 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8cea844f-8422-43f9-8056-0fa419120d61-scripts\") pod \"ovn-northd-0\" (UID: \"8cea844f-8422-43f9-8056-0fa419120d61\") " pod="openstack/ovn-northd-0" Feb 27 17:36:37 crc kubenswrapper[4830]: I0227 17:36:37.104519 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cea844f-8422-43f9-8056-0fa419120d61-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"8cea844f-8422-43f9-8056-0fa419120d61\") " pod="openstack/ovn-northd-0" Feb 27 17:36:37 crc kubenswrapper[4830]: I0227 17:36:37.104934 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/8cea844f-8422-43f9-8056-0fa419120d61-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"8cea844f-8422-43f9-8056-0fa419120d61\") " pod="openstack/ovn-northd-0" Feb 27 17:36:37 crc kubenswrapper[4830]: I0227 17:36:37.105614 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea844f-8422-43f9-8056-0fa419120d61-config\") pod \"ovn-northd-0\" (UID: \"8cea844f-8422-43f9-8056-0fa419120d61\") " pod="openstack/ovn-northd-0" Feb 27 17:36:37 crc kubenswrapper[4830]: I0227 17:36:37.106898 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8cea844f-8422-43f9-8056-0fa419120d61-scripts\") pod \"ovn-northd-0\" (UID: \"8cea844f-8422-43f9-8056-0fa419120d61\") " pod="openstack/ovn-northd-0" Feb 27 17:36:37 crc kubenswrapper[4830]: I0227 17:36:37.114206 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cea844f-8422-43f9-8056-0fa419120d61-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"8cea844f-8422-43f9-8056-0fa419120d61\") " pod="openstack/ovn-northd-0" Feb 27 17:36:37 crc kubenswrapper[4830]: I0227 17:36:37.137851 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ndzsd\" (UniqueName: \"kubernetes.io/projected/8cea844f-8422-43f9-8056-0fa419120d61-kube-api-access-ndzsd\") pod \"ovn-northd-0\" (UID: \"8cea844f-8422-43f9-8056-0fa419120d61\") " pod="openstack/ovn-northd-0" Feb 27 17:36:37 crc kubenswrapper[4830]: I0227 17:36:37.237298 4830 generic.go:334] "Generic (PLEG): container finished" podID="f9d32d14-02d4-46b6-8949-d183cf055428" containerID="398a2068cd2085ee139379a1f89e4167dd96f986424eb802cf1c0618fcb22970" exitCode=0 Feb 27 17:36:37 crc kubenswrapper[4830]: I0227 17:36:37.237368 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-5b7946d7b9-68742" event={"ID":"f9d32d14-02d4-46b6-8949-d183cf055428","Type":"ContainerDied","Data":"398a2068cd2085ee139379a1f89e4167dd96f986424eb802cf1c0618fcb22970"} Feb 27 17:36:37 crc kubenswrapper[4830]: I0227 17:36:37.279365 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Feb 27 17:36:37 crc kubenswrapper[4830]: I0227 17:36:37.534832 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b7946d7b9-68742" Feb 27 17:36:37 crc kubenswrapper[4830]: I0227 17:36:37.616408 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9d32d14-02d4-46b6-8949-d183cf055428-config\") pod \"f9d32d14-02d4-46b6-8949-d183cf055428\" (UID: \"f9d32d14-02d4-46b6-8949-d183cf055428\") " Feb 27 17:36:37 crc kubenswrapper[4830]: I0227 17:36:37.616588 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f9d32d14-02d4-46b6-8949-d183cf055428-dns-svc\") pod \"f9d32d14-02d4-46b6-8949-d183cf055428\" (UID: \"f9d32d14-02d4-46b6-8949-d183cf055428\") " Feb 27 17:36:37 crc kubenswrapper[4830]: I0227 17:36:37.616737 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rdd27\" (UniqueName: \"kubernetes.io/projected/f9d32d14-02d4-46b6-8949-d183cf055428-kube-api-access-rdd27\") pod \"f9d32d14-02d4-46b6-8949-d183cf055428\" (UID: \"f9d32d14-02d4-46b6-8949-d183cf055428\") " Feb 27 17:36:37 crc kubenswrapper[4830]: I0227 17:36:37.621916 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9d32d14-02d4-46b6-8949-d183cf055428-kube-api-access-rdd27" (OuterVolumeSpecName: "kube-api-access-rdd27") pod "f9d32d14-02d4-46b6-8949-d183cf055428" (UID: "f9d32d14-02d4-46b6-8949-d183cf055428"). 
InnerVolumeSpecName "kube-api-access-rdd27". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:36:37 crc kubenswrapper[4830]: I0227 17:36:37.661460 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f9d32d14-02d4-46b6-8949-d183cf055428-config" (OuterVolumeSpecName: "config") pod "f9d32d14-02d4-46b6-8949-d183cf055428" (UID: "f9d32d14-02d4-46b6-8949-d183cf055428"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:36:37 crc kubenswrapper[4830]: I0227 17:36:37.663796 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f9d32d14-02d4-46b6-8949-d183cf055428-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f9d32d14-02d4-46b6-8949-d183cf055428" (UID: "f9d32d14-02d4-46b6-8949-d183cf055428"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:36:37 crc kubenswrapper[4830]: I0227 17:36:37.718974 4830 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9d32d14-02d4-46b6-8949-d183cf055428-config\") on node \"crc\" DevicePath \"\"" Feb 27 17:36:37 crc kubenswrapper[4830]: I0227 17:36:37.719387 4830 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f9d32d14-02d4-46b6-8949-d183cf055428-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 27 17:36:37 crc kubenswrapper[4830]: I0227 17:36:37.719402 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rdd27\" (UniqueName: \"kubernetes.io/projected/f9d32d14-02d4-46b6-8949-d183cf055428-kube-api-access-rdd27\") on node \"crc\" DevicePath \"\"" Feb 27 17:36:37 crc kubenswrapper[4830]: I0227 17:36:37.797889 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 27 17:36:37 crc kubenswrapper[4830]: W0227 17:36:37.801172 4830 manager.go:1169] Failed to process watch event 
{EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8cea844f_8422_43f9_8056_0fa419120d61.slice/crio-7e3ea566c4ba69f1b1a6479ac0d25c97ac910f0f87d64f1d32629d1f9f9864b8 WatchSource:0}: Error finding container 7e3ea566c4ba69f1b1a6479ac0d25c97ac910f0f87d64f1d32629d1f9f9864b8: Status 404 returned error can't find the container with id 7e3ea566c4ba69f1b1a6479ac0d25c97ac910f0f87d64f1d32629d1f9f9864b8 Feb 27 17:36:38 crc kubenswrapper[4830]: I0227 17:36:38.251614 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b7946d7b9-68742" event={"ID":"f9d32d14-02d4-46b6-8949-d183cf055428","Type":"ContainerDied","Data":"1069f06859508065d87df99d1eb23d6f1d28eeb4b602959cbfa5f2b43d5e58d1"} Feb 27 17:36:38 crc kubenswrapper[4830]: I0227 17:36:38.251657 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b7946d7b9-68742" Feb 27 17:36:38 crc kubenswrapper[4830]: I0227 17:36:38.251728 4830 scope.go:117] "RemoveContainer" containerID="398a2068cd2085ee139379a1f89e4167dd96f986424eb802cf1c0618fcb22970" Feb 27 17:36:38 crc kubenswrapper[4830]: I0227 17:36:38.260413 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"8cea844f-8422-43f9-8056-0fa419120d61","Type":"ContainerStarted","Data":"c0ca48cb062b75f3102b4f5dff972666c033f7729bf02bbda1ebd5e77bd8cc3b"} Feb 27 17:36:38 crc kubenswrapper[4830]: I0227 17:36:38.260483 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"8cea844f-8422-43f9-8056-0fa419120d61","Type":"ContainerStarted","Data":"9d4b786093fc790dbd79b4e0aeeb23de2ad035f8fe36215661df6d0a9d5850d5"} Feb 27 17:36:38 crc kubenswrapper[4830]: I0227 17:36:38.260505 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" 
event={"ID":"8cea844f-8422-43f9-8056-0fa419120d61","Type":"ContainerStarted","Data":"7e3ea566c4ba69f1b1a6479ac0d25c97ac910f0f87d64f1d32629d1f9f9864b8"} Feb 27 17:36:38 crc kubenswrapper[4830]: I0227 17:36:38.260623 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Feb 27 17:36:38 crc kubenswrapper[4830]: I0227 17:36:38.300097 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.300055377 podStartE2EDuration="2.300055377s" podCreationTimestamp="2026-02-27 17:36:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 17:36:38.297133997 +0000 UTC m=+5394.386406490" watchObservedRunningTime="2026-02-27 17:36:38.300055377 +0000 UTC m=+5394.389327890" Feb 27 17:36:38 crc kubenswrapper[4830]: I0227 17:36:38.316402 4830 scope.go:117] "RemoveContainer" containerID="dbcf98ff9ac9c7f2e652167587134f123b523487119042c835f8c68e8558e7db" Feb 27 17:36:38 crc kubenswrapper[4830]: I0227 17:36:38.325254 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b7946d7b9-68742"] Feb 27 17:36:38 crc kubenswrapper[4830]: I0227 17:36:38.349232 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5b7946d7b9-68742"] Feb 27 17:36:38 crc kubenswrapper[4830]: I0227 17:36:38.780913 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9d32d14-02d4-46b6-8949-d183cf055428" path="/var/lib/kubelet/pods/f9d32d14-02d4-46b6-8949-d183cf055428/volumes" Feb 27 17:36:42 crc kubenswrapper[4830]: I0227 17:36:42.313608 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-tq4ws"] Feb 27 17:36:42 crc kubenswrapper[4830]: E0227 17:36:42.314327 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9d32d14-02d4-46b6-8949-d183cf055428" containerName="init" Feb 27 17:36:42 crc 
kubenswrapper[4830]: I0227 17:36:42.314343 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9d32d14-02d4-46b6-8949-d183cf055428" containerName="init" Feb 27 17:36:42 crc kubenswrapper[4830]: E0227 17:36:42.314355 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9d32d14-02d4-46b6-8949-d183cf055428" containerName="dnsmasq-dns" Feb 27 17:36:42 crc kubenswrapper[4830]: I0227 17:36:42.314362 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9d32d14-02d4-46b6-8949-d183cf055428" containerName="dnsmasq-dns" Feb 27 17:36:42 crc kubenswrapper[4830]: I0227 17:36:42.314523 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9d32d14-02d4-46b6-8949-d183cf055428" containerName="dnsmasq-dns" Feb 27 17:36:42 crc kubenswrapper[4830]: I0227 17:36:42.315192 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-tq4ws" Feb 27 17:36:42 crc kubenswrapper[4830]: I0227 17:36:42.323089 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-06a6-account-create-update-bgq2j"] Feb 27 17:36:42 crc kubenswrapper[4830]: I0227 17:36:42.325014 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-06a6-account-create-update-bgq2j" Feb 27 17:36:42 crc kubenswrapper[4830]: I0227 17:36:42.326705 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Feb 27 17:36:42 crc kubenswrapper[4830]: I0227 17:36:42.329546 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-tq4ws"] Feb 27 17:36:42 crc kubenswrapper[4830]: I0227 17:36:42.359322 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-06a6-account-create-update-bgq2j"] Feb 27 17:36:42 crc kubenswrapper[4830]: I0227 17:36:42.455051 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0b582958-8ecf-444a-a09f-db96b283db18-operator-scripts\") pod \"keystone-db-create-tq4ws\" (UID: \"0b582958-8ecf-444a-a09f-db96b283db18\") " pod="openstack/keystone-db-create-tq4ws" Feb 27 17:36:42 crc kubenswrapper[4830]: I0227 17:36:42.455117 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpskn\" (UniqueName: \"kubernetes.io/projected/84a29c55-5ac8-46dd-8be0-a12243cedbbf-kube-api-access-fpskn\") pod \"keystone-06a6-account-create-update-bgq2j\" (UID: \"84a29c55-5ac8-46dd-8be0-a12243cedbbf\") " pod="openstack/keystone-06a6-account-create-update-bgq2j" Feb 27 17:36:42 crc kubenswrapper[4830]: I0227 17:36:42.455163 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/84a29c55-5ac8-46dd-8be0-a12243cedbbf-operator-scripts\") pod \"keystone-06a6-account-create-update-bgq2j\" (UID: \"84a29c55-5ac8-46dd-8be0-a12243cedbbf\") " pod="openstack/keystone-06a6-account-create-update-bgq2j" Feb 27 17:36:42 crc kubenswrapper[4830]: I0227 17:36:42.455519 4830 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kw9z8\" (UniqueName: \"kubernetes.io/projected/0b582958-8ecf-444a-a09f-db96b283db18-kube-api-access-kw9z8\") pod \"keystone-db-create-tq4ws\" (UID: \"0b582958-8ecf-444a-a09f-db96b283db18\") " pod="openstack/keystone-db-create-tq4ws" Feb 27 17:36:42 crc kubenswrapper[4830]: I0227 17:36:42.557314 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0b582958-8ecf-444a-a09f-db96b283db18-operator-scripts\") pod \"keystone-db-create-tq4ws\" (UID: \"0b582958-8ecf-444a-a09f-db96b283db18\") " pod="openstack/keystone-db-create-tq4ws" Feb 27 17:36:42 crc kubenswrapper[4830]: I0227 17:36:42.557387 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fpskn\" (UniqueName: \"kubernetes.io/projected/84a29c55-5ac8-46dd-8be0-a12243cedbbf-kube-api-access-fpskn\") pod \"keystone-06a6-account-create-update-bgq2j\" (UID: \"84a29c55-5ac8-46dd-8be0-a12243cedbbf\") " pod="openstack/keystone-06a6-account-create-update-bgq2j" Feb 27 17:36:42 crc kubenswrapper[4830]: I0227 17:36:42.557438 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/84a29c55-5ac8-46dd-8be0-a12243cedbbf-operator-scripts\") pod \"keystone-06a6-account-create-update-bgq2j\" (UID: \"84a29c55-5ac8-46dd-8be0-a12243cedbbf\") " pod="openstack/keystone-06a6-account-create-update-bgq2j" Feb 27 17:36:42 crc kubenswrapper[4830]: I0227 17:36:42.557531 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kw9z8\" (UniqueName: \"kubernetes.io/projected/0b582958-8ecf-444a-a09f-db96b283db18-kube-api-access-kw9z8\") pod \"keystone-db-create-tq4ws\" (UID: \"0b582958-8ecf-444a-a09f-db96b283db18\") " pod="openstack/keystone-db-create-tq4ws" Feb 27 17:36:42 crc kubenswrapper[4830]: I0227 
17:36:42.558888 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/84a29c55-5ac8-46dd-8be0-a12243cedbbf-operator-scripts\") pod \"keystone-06a6-account-create-update-bgq2j\" (UID: \"84a29c55-5ac8-46dd-8be0-a12243cedbbf\") " pod="openstack/keystone-06a6-account-create-update-bgq2j" Feb 27 17:36:42 crc kubenswrapper[4830]: I0227 17:36:42.559447 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0b582958-8ecf-444a-a09f-db96b283db18-operator-scripts\") pod \"keystone-db-create-tq4ws\" (UID: \"0b582958-8ecf-444a-a09f-db96b283db18\") " pod="openstack/keystone-db-create-tq4ws" Feb 27 17:36:42 crc kubenswrapper[4830]: I0227 17:36:42.586173 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fpskn\" (UniqueName: \"kubernetes.io/projected/84a29c55-5ac8-46dd-8be0-a12243cedbbf-kube-api-access-fpskn\") pod \"keystone-06a6-account-create-update-bgq2j\" (UID: \"84a29c55-5ac8-46dd-8be0-a12243cedbbf\") " pod="openstack/keystone-06a6-account-create-update-bgq2j" Feb 27 17:36:42 crc kubenswrapper[4830]: I0227 17:36:42.586346 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kw9z8\" (UniqueName: \"kubernetes.io/projected/0b582958-8ecf-444a-a09f-db96b283db18-kube-api-access-kw9z8\") pod \"keystone-db-create-tq4ws\" (UID: \"0b582958-8ecf-444a-a09f-db96b283db18\") " pod="openstack/keystone-db-create-tq4ws" Feb 27 17:36:42 crc kubenswrapper[4830]: I0227 17:36:42.644470 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-tq4ws" Feb 27 17:36:42 crc kubenswrapper[4830]: I0227 17:36:42.654506 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-06a6-account-create-update-bgq2j" Feb 27 17:36:43 crc kubenswrapper[4830]: W0227 17:36:43.263377 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod84a29c55_5ac8_46dd_8be0_a12243cedbbf.slice/crio-b5ce58d019cceb333c043054464d66a582d004f840c785b5fc6f54c94fc7d200 WatchSource:0}: Error finding container b5ce58d019cceb333c043054464d66a582d004f840c785b5fc6f54c94fc7d200: Status 404 returned error can't find the container with id b5ce58d019cceb333c043054464d66a582d004f840c785b5fc6f54c94fc7d200 Feb 27 17:36:43 crc kubenswrapper[4830]: I0227 17:36:43.271409 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-06a6-account-create-update-bgq2j"] Feb 27 17:36:43 crc kubenswrapper[4830]: I0227 17:36:43.325078 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-06a6-account-create-update-bgq2j" event={"ID":"84a29c55-5ac8-46dd-8be0-a12243cedbbf","Type":"ContainerStarted","Data":"b5ce58d019cceb333c043054464d66a582d004f840c785b5fc6f54c94fc7d200"} Feb 27 17:36:43 crc kubenswrapper[4830]: I0227 17:36:43.336409 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-tq4ws"] Feb 27 17:36:43 crc kubenswrapper[4830]: W0227 17:36:43.350935 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0b582958_8ecf_444a_a09f_db96b283db18.slice/crio-fe4e833b60e09a45ac1833dbfb96b80965cfce2f3b801c22f7913ea80fa15dec WatchSource:0}: Error finding container fe4e833b60e09a45ac1833dbfb96b80965cfce2f3b801c22f7913ea80fa15dec: Status 404 returned error can't find the container with id fe4e833b60e09a45ac1833dbfb96b80965cfce2f3b801c22f7913ea80fa15dec Feb 27 17:36:44 crc kubenswrapper[4830]: I0227 17:36:44.339807 4830 generic.go:334] "Generic (PLEG): container finished" podID="0b582958-8ecf-444a-a09f-db96b283db18" 
containerID="f89cdd1399349b91b86536eefb41a584598a542d50342d2dc8053c5141c1672b" exitCode=0 Feb 27 17:36:44 crc kubenswrapper[4830]: I0227 17:36:44.339885 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-tq4ws" event={"ID":"0b582958-8ecf-444a-a09f-db96b283db18","Type":"ContainerDied","Data":"f89cdd1399349b91b86536eefb41a584598a542d50342d2dc8053c5141c1672b"} Feb 27 17:36:44 crc kubenswrapper[4830]: I0227 17:36:44.340491 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-tq4ws" event={"ID":"0b582958-8ecf-444a-a09f-db96b283db18","Type":"ContainerStarted","Data":"fe4e833b60e09a45ac1833dbfb96b80965cfce2f3b801c22f7913ea80fa15dec"} Feb 27 17:36:44 crc kubenswrapper[4830]: I0227 17:36:44.345534 4830 generic.go:334] "Generic (PLEG): container finished" podID="84a29c55-5ac8-46dd-8be0-a12243cedbbf" containerID="f79c27818404e9b20da607e84a10901094ce0ae0527be8e0df748d5a83409c7d" exitCode=0 Feb 27 17:36:44 crc kubenswrapper[4830]: I0227 17:36:44.345618 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-06a6-account-create-update-bgq2j" event={"ID":"84a29c55-5ac8-46dd-8be0-a12243cedbbf","Type":"ContainerDied","Data":"f79c27818404e9b20da607e84a10901094ce0ae0527be8e0df748d5a83409c7d"} Feb 27 17:36:45 crc kubenswrapper[4830]: I0227 17:36:45.919863 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-06a6-account-create-update-bgq2j" Feb 27 17:36:45 crc kubenswrapper[4830]: I0227 17:36:45.926114 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-tq4ws" Feb 27 17:36:46 crc kubenswrapper[4830]: I0227 17:36:46.033313 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0b582958-8ecf-444a-a09f-db96b283db18-operator-scripts\") pod \"0b582958-8ecf-444a-a09f-db96b283db18\" (UID: \"0b582958-8ecf-444a-a09f-db96b283db18\") " Feb 27 17:36:46 crc kubenswrapper[4830]: I0227 17:36:46.033588 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kw9z8\" (UniqueName: \"kubernetes.io/projected/0b582958-8ecf-444a-a09f-db96b283db18-kube-api-access-kw9z8\") pod \"0b582958-8ecf-444a-a09f-db96b283db18\" (UID: \"0b582958-8ecf-444a-a09f-db96b283db18\") " Feb 27 17:36:46 crc kubenswrapper[4830]: I0227 17:36:46.033651 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fpskn\" (UniqueName: \"kubernetes.io/projected/84a29c55-5ac8-46dd-8be0-a12243cedbbf-kube-api-access-fpskn\") pod \"84a29c55-5ac8-46dd-8be0-a12243cedbbf\" (UID: \"84a29c55-5ac8-46dd-8be0-a12243cedbbf\") " Feb 27 17:36:46 crc kubenswrapper[4830]: I0227 17:36:46.033761 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/84a29c55-5ac8-46dd-8be0-a12243cedbbf-operator-scripts\") pod \"84a29c55-5ac8-46dd-8be0-a12243cedbbf\" (UID: \"84a29c55-5ac8-46dd-8be0-a12243cedbbf\") " Feb 27 17:36:46 crc kubenswrapper[4830]: I0227 17:36:46.034752 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b582958-8ecf-444a-a09f-db96b283db18-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0b582958-8ecf-444a-a09f-db96b283db18" (UID: "0b582958-8ecf-444a-a09f-db96b283db18"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:36:46 crc kubenswrapper[4830]: I0227 17:36:46.034832 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84a29c55-5ac8-46dd-8be0-a12243cedbbf-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "84a29c55-5ac8-46dd-8be0-a12243cedbbf" (UID: "84a29c55-5ac8-46dd-8be0-a12243cedbbf"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:36:46 crc kubenswrapper[4830]: I0227 17:36:46.043070 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84a29c55-5ac8-46dd-8be0-a12243cedbbf-kube-api-access-fpskn" (OuterVolumeSpecName: "kube-api-access-fpskn") pod "84a29c55-5ac8-46dd-8be0-a12243cedbbf" (UID: "84a29c55-5ac8-46dd-8be0-a12243cedbbf"). InnerVolumeSpecName "kube-api-access-fpskn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:36:46 crc kubenswrapper[4830]: I0227 17:36:46.043662 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b582958-8ecf-444a-a09f-db96b283db18-kube-api-access-kw9z8" (OuterVolumeSpecName: "kube-api-access-kw9z8") pod "0b582958-8ecf-444a-a09f-db96b283db18" (UID: "0b582958-8ecf-444a-a09f-db96b283db18"). InnerVolumeSpecName "kube-api-access-kw9z8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:36:46 crc kubenswrapper[4830]: I0227 17:36:46.135775 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kw9z8\" (UniqueName: \"kubernetes.io/projected/0b582958-8ecf-444a-a09f-db96b283db18-kube-api-access-kw9z8\") on node \"crc\" DevicePath \"\"" Feb 27 17:36:46 crc kubenswrapper[4830]: I0227 17:36:46.135815 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fpskn\" (UniqueName: \"kubernetes.io/projected/84a29c55-5ac8-46dd-8be0-a12243cedbbf-kube-api-access-fpskn\") on node \"crc\" DevicePath \"\"" Feb 27 17:36:46 crc kubenswrapper[4830]: I0227 17:36:46.135830 4830 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/84a29c55-5ac8-46dd-8be0-a12243cedbbf-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 17:36:46 crc kubenswrapper[4830]: I0227 17:36:46.135848 4830 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0b582958-8ecf-444a-a09f-db96b283db18-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 17:36:46 crc kubenswrapper[4830]: I0227 17:36:46.371028 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-tq4ws" event={"ID":"0b582958-8ecf-444a-a09f-db96b283db18","Type":"ContainerDied","Data":"fe4e833b60e09a45ac1833dbfb96b80965cfce2f3b801c22f7913ea80fa15dec"} Feb 27 17:36:46 crc kubenswrapper[4830]: I0227 17:36:46.371134 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe4e833b60e09a45ac1833dbfb96b80965cfce2f3b801c22f7913ea80fa15dec" Feb 27 17:36:46 crc kubenswrapper[4830]: I0227 17:36:46.371553 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-tq4ws" Feb 27 17:36:46 crc kubenswrapper[4830]: I0227 17:36:46.373843 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-06a6-account-create-update-bgq2j" event={"ID":"84a29c55-5ac8-46dd-8be0-a12243cedbbf","Type":"ContainerDied","Data":"b5ce58d019cceb333c043054464d66a582d004f840c785b5fc6f54c94fc7d200"} Feb 27 17:36:46 crc kubenswrapper[4830]: I0227 17:36:46.373917 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b5ce58d019cceb333c043054464d66a582d004f840c785b5fc6f54c94fc7d200" Feb 27 17:36:46 crc kubenswrapper[4830]: I0227 17:36:46.374157 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-06a6-account-create-update-bgq2j" Feb 27 17:36:47 crc kubenswrapper[4830]: I0227 17:36:47.380831 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Feb 27 17:36:47 crc kubenswrapper[4830]: I0227 17:36:47.798789 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-4sk26"] Feb 27 17:36:47 crc kubenswrapper[4830]: E0227 17:36:47.799410 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84a29c55-5ac8-46dd-8be0-a12243cedbbf" containerName="mariadb-account-create-update" Feb 27 17:36:47 crc kubenswrapper[4830]: I0227 17:36:47.799432 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="84a29c55-5ac8-46dd-8be0-a12243cedbbf" containerName="mariadb-account-create-update" Feb 27 17:36:47 crc kubenswrapper[4830]: E0227 17:36:47.799458 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b582958-8ecf-444a-a09f-db96b283db18" containerName="mariadb-database-create" Feb 27 17:36:47 crc kubenswrapper[4830]: I0227 17:36:47.799465 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b582958-8ecf-444a-a09f-db96b283db18" containerName="mariadb-database-create" Feb 27 17:36:47 crc 
kubenswrapper[4830]: I0227 17:36:47.799625 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="84a29c55-5ac8-46dd-8be0-a12243cedbbf" containerName="mariadb-account-create-update" Feb 27 17:36:47 crc kubenswrapper[4830]: I0227 17:36:47.799645 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b582958-8ecf-444a-a09f-db96b283db18" containerName="mariadb-database-create" Feb 27 17:36:47 crc kubenswrapper[4830]: I0227 17:36:47.800218 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-4sk26" Feb 27 17:36:47 crc kubenswrapper[4830]: I0227 17:36:47.806619 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 27 17:36:47 crc kubenswrapper[4830]: I0227 17:36:47.807224 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 27 17:36:47 crc kubenswrapper[4830]: I0227 17:36:47.809250 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 27 17:36:47 crc kubenswrapper[4830]: I0227 17:36:47.810717 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-c6l2j" Feb 27 17:36:47 crc kubenswrapper[4830]: I0227 17:36:47.825124 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-4sk26"] Feb 27 17:36:47 crc kubenswrapper[4830]: I0227 17:36:47.972218 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdgh6\" (UniqueName: \"kubernetes.io/projected/c5236dad-b287-4f62-afa9-c6449a7b18d2-kube-api-access-gdgh6\") pod \"keystone-db-sync-4sk26\" (UID: \"c5236dad-b287-4f62-afa9-c6449a7b18d2\") " pod="openstack/keystone-db-sync-4sk26" Feb 27 17:36:47 crc kubenswrapper[4830]: I0227 17:36:47.972570 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/c5236dad-b287-4f62-afa9-c6449a7b18d2-combined-ca-bundle\") pod \"keystone-db-sync-4sk26\" (UID: \"c5236dad-b287-4f62-afa9-c6449a7b18d2\") " pod="openstack/keystone-db-sync-4sk26" Feb 27 17:36:47 crc kubenswrapper[4830]: I0227 17:36:47.972695 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5236dad-b287-4f62-afa9-c6449a7b18d2-config-data\") pod \"keystone-db-sync-4sk26\" (UID: \"c5236dad-b287-4f62-afa9-c6449a7b18d2\") " pod="openstack/keystone-db-sync-4sk26" Feb 27 17:36:48 crc kubenswrapper[4830]: I0227 17:36:48.075135 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gdgh6\" (UniqueName: \"kubernetes.io/projected/c5236dad-b287-4f62-afa9-c6449a7b18d2-kube-api-access-gdgh6\") pod \"keystone-db-sync-4sk26\" (UID: \"c5236dad-b287-4f62-afa9-c6449a7b18d2\") " pod="openstack/keystone-db-sync-4sk26" Feb 27 17:36:48 crc kubenswrapper[4830]: I0227 17:36:48.075265 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5236dad-b287-4f62-afa9-c6449a7b18d2-combined-ca-bundle\") pod \"keystone-db-sync-4sk26\" (UID: \"c5236dad-b287-4f62-afa9-c6449a7b18d2\") " pod="openstack/keystone-db-sync-4sk26" Feb 27 17:36:48 crc kubenswrapper[4830]: I0227 17:36:48.075323 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5236dad-b287-4f62-afa9-c6449a7b18d2-config-data\") pod \"keystone-db-sync-4sk26\" (UID: \"c5236dad-b287-4f62-afa9-c6449a7b18d2\") " pod="openstack/keystone-db-sync-4sk26" Feb 27 17:36:48 crc kubenswrapper[4830]: I0227 17:36:48.082591 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5236dad-b287-4f62-afa9-c6449a7b18d2-config-data\") pod 
\"keystone-db-sync-4sk26\" (UID: \"c5236dad-b287-4f62-afa9-c6449a7b18d2\") " pod="openstack/keystone-db-sync-4sk26" Feb 27 17:36:48 crc kubenswrapper[4830]: I0227 17:36:48.083637 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5236dad-b287-4f62-afa9-c6449a7b18d2-combined-ca-bundle\") pod \"keystone-db-sync-4sk26\" (UID: \"c5236dad-b287-4f62-afa9-c6449a7b18d2\") " pod="openstack/keystone-db-sync-4sk26" Feb 27 17:36:48 crc kubenswrapper[4830]: I0227 17:36:48.113028 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gdgh6\" (UniqueName: \"kubernetes.io/projected/c5236dad-b287-4f62-afa9-c6449a7b18d2-kube-api-access-gdgh6\") pod \"keystone-db-sync-4sk26\" (UID: \"c5236dad-b287-4f62-afa9-c6449a7b18d2\") " pod="openstack/keystone-db-sync-4sk26" Feb 27 17:36:48 crc kubenswrapper[4830]: I0227 17:36:48.118772 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-4sk26" Feb 27 17:36:48 crc kubenswrapper[4830]: I0227 17:36:48.404561 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-4sk26"] Feb 27 17:36:48 crc kubenswrapper[4830]: W0227 17:36:48.423343 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc5236dad_b287_4f62_afa9_c6449a7b18d2.slice/crio-ee3e87e4404cd08c792d5c2b5a87e4cfeee9f774f89733b44443613d91f8aead WatchSource:0}: Error finding container ee3e87e4404cd08c792d5c2b5a87e4cfeee9f774f89733b44443613d91f8aead: Status 404 returned error can't find the container with id ee3e87e4404cd08c792d5c2b5a87e4cfeee9f774f89733b44443613d91f8aead Feb 27 17:36:48 crc kubenswrapper[4830]: I0227 17:36:48.777569 4830 scope.go:117] "RemoveContainer" containerID="c16f444ed61a9f55c5af7ad328f6f0bbdb14381e68fc9af10bf6a89a1841edff" Feb 27 17:36:49 crc kubenswrapper[4830]: I0227 17:36:49.412609 4830 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-4sk26" event={"ID":"c5236dad-b287-4f62-afa9-c6449a7b18d2","Type":"ContainerStarted","Data":"da5c9e4f1a7ad40a38bab01079f48f00c8d8cba892da8cc746dfeb95b8010427"} Feb 27 17:36:49 crc kubenswrapper[4830]: I0227 17:36:49.413227 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-4sk26" event={"ID":"c5236dad-b287-4f62-afa9-c6449a7b18d2","Type":"ContainerStarted","Data":"ee3e87e4404cd08c792d5c2b5a87e4cfeee9f774f89733b44443613d91f8aead"} Feb 27 17:36:49 crc kubenswrapper[4830]: I0227 17:36:49.448507 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-4sk26" podStartSLOduration=2.4484781030000002 podStartE2EDuration="2.448478103s" podCreationTimestamp="2026-02-27 17:36:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 17:36:49.439528008 +0000 UTC m=+5405.528800511" watchObservedRunningTime="2026-02-27 17:36:49.448478103 +0000 UTC m=+5405.537750576" Feb 27 17:36:50 crc kubenswrapper[4830]: I0227 17:36:50.429561 4830 generic.go:334] "Generic (PLEG): container finished" podID="c5236dad-b287-4f62-afa9-c6449a7b18d2" containerID="da5c9e4f1a7ad40a38bab01079f48f00c8d8cba892da8cc746dfeb95b8010427" exitCode=0 Feb 27 17:36:50 crc kubenswrapper[4830]: I0227 17:36:50.429634 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-4sk26" event={"ID":"c5236dad-b287-4f62-afa9-c6449a7b18d2","Type":"ContainerDied","Data":"da5c9e4f1a7ad40a38bab01079f48f00c8d8cba892da8cc746dfeb95b8010427"} Feb 27 17:36:51 crc kubenswrapper[4830]: I0227 17:36:51.893623 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-4sk26" Feb 27 17:36:52 crc kubenswrapper[4830]: I0227 17:36:52.075541 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5236dad-b287-4f62-afa9-c6449a7b18d2-combined-ca-bundle\") pod \"c5236dad-b287-4f62-afa9-c6449a7b18d2\" (UID: \"c5236dad-b287-4f62-afa9-c6449a7b18d2\") " Feb 27 17:36:52 crc kubenswrapper[4830]: I0227 17:36:52.075656 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5236dad-b287-4f62-afa9-c6449a7b18d2-config-data\") pod \"c5236dad-b287-4f62-afa9-c6449a7b18d2\" (UID: \"c5236dad-b287-4f62-afa9-c6449a7b18d2\") " Feb 27 17:36:52 crc kubenswrapper[4830]: I0227 17:36:52.076000 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gdgh6\" (UniqueName: \"kubernetes.io/projected/c5236dad-b287-4f62-afa9-c6449a7b18d2-kube-api-access-gdgh6\") pod \"c5236dad-b287-4f62-afa9-c6449a7b18d2\" (UID: \"c5236dad-b287-4f62-afa9-c6449a7b18d2\") " Feb 27 17:36:52 crc kubenswrapper[4830]: I0227 17:36:52.095486 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5236dad-b287-4f62-afa9-c6449a7b18d2-kube-api-access-gdgh6" (OuterVolumeSpecName: "kube-api-access-gdgh6") pod "c5236dad-b287-4f62-afa9-c6449a7b18d2" (UID: "c5236dad-b287-4f62-afa9-c6449a7b18d2"). InnerVolumeSpecName "kube-api-access-gdgh6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:36:52 crc kubenswrapper[4830]: I0227 17:36:52.124903 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5236dad-b287-4f62-afa9-c6449a7b18d2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c5236dad-b287-4f62-afa9-c6449a7b18d2" (UID: "c5236dad-b287-4f62-afa9-c6449a7b18d2"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:36:52 crc kubenswrapper[4830]: I0227 17:36:52.140691 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5236dad-b287-4f62-afa9-c6449a7b18d2-config-data" (OuterVolumeSpecName: "config-data") pod "c5236dad-b287-4f62-afa9-c6449a7b18d2" (UID: "c5236dad-b287-4f62-afa9-c6449a7b18d2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:36:52 crc kubenswrapper[4830]: I0227 17:36:52.179416 4830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5236dad-b287-4f62-afa9-c6449a7b18d2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 17:36:52 crc kubenswrapper[4830]: I0227 17:36:52.179636 4830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5236dad-b287-4f62-afa9-c6449a7b18d2-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 17:36:52 crc kubenswrapper[4830]: I0227 17:36:52.179784 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gdgh6\" (UniqueName: \"kubernetes.io/projected/c5236dad-b287-4f62-afa9-c6449a7b18d2-kube-api-access-gdgh6\") on node \"crc\" DevicePath \"\"" Feb 27 17:36:52 crc kubenswrapper[4830]: I0227 17:36:52.462419 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-4sk26" event={"ID":"c5236dad-b287-4f62-afa9-c6449a7b18d2","Type":"ContainerDied","Data":"ee3e87e4404cd08c792d5c2b5a87e4cfeee9f774f89733b44443613d91f8aead"} Feb 27 17:36:52 crc kubenswrapper[4830]: I0227 17:36:52.462488 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ee3e87e4404cd08c792d5c2b5a87e4cfeee9f774f89733b44443613d91f8aead" Feb 27 17:36:52 crc kubenswrapper[4830]: I0227 17:36:52.463174 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-4sk26" Feb 27 17:36:52 crc kubenswrapper[4830]: I0227 17:36:52.794937 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-68db9477c7-tp8ct"] Feb 27 17:36:52 crc kubenswrapper[4830]: E0227 17:36:52.795837 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5236dad-b287-4f62-afa9-c6449a7b18d2" containerName="keystone-db-sync" Feb 27 17:36:52 crc kubenswrapper[4830]: I0227 17:36:52.795854 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5236dad-b287-4f62-afa9-c6449a7b18d2" containerName="keystone-db-sync" Feb 27 17:36:52 crc kubenswrapper[4830]: I0227 17:36:52.796055 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5236dad-b287-4f62-afa9-c6449a7b18d2" containerName="keystone-db-sync" Feb 27 17:36:52 crc kubenswrapper[4830]: I0227 17:36:52.797056 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-68db9477c7-tp8ct" Feb 27 17:36:52 crc kubenswrapper[4830]: I0227 17:36:52.807894 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-ts8s6"] Feb 27 17:36:52 crc kubenswrapper[4830]: I0227 17:36:52.809333 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-ts8s6" Feb 27 17:36:52 crc kubenswrapper[4830]: I0227 17:36:52.811624 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 27 17:36:52 crc kubenswrapper[4830]: I0227 17:36:52.812800 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-c6l2j" Feb 27 17:36:52 crc kubenswrapper[4830]: I0227 17:36:52.812824 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 27 17:36:52 crc kubenswrapper[4830]: I0227 17:36:52.815122 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 27 17:36:52 crc kubenswrapper[4830]: I0227 17:36:52.817762 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-68db9477c7-tp8ct"] Feb 27 17:36:52 crc kubenswrapper[4830]: I0227 17:36:52.818567 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 27 17:36:52 crc kubenswrapper[4830]: I0227 17:36:52.827382 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-ts8s6"] Feb 27 17:36:52 crc kubenswrapper[4830]: I0227 17:36:52.897684 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db17b982-6152-4a97-867a-1df9ee446fff-config-data\") pod \"keystone-bootstrap-ts8s6\" (UID: \"db17b982-6152-4a97-867a-1df9ee446fff\") " pod="openstack/keystone-bootstrap-ts8s6" Feb 27 17:36:52 crc kubenswrapper[4830]: I0227 17:36:52.897740 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9694a9b4-cc71-4423-a8dd-56a80240d3cd-config\") pod \"dnsmasq-dns-68db9477c7-tp8ct\" (UID: \"9694a9b4-cc71-4423-a8dd-56a80240d3cd\") " pod="openstack/dnsmasq-dns-68db9477c7-tp8ct" Feb 27 
17:36:52 crc kubenswrapper[4830]: I0227 17:36:52.897778 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4sb5\" (UniqueName: \"kubernetes.io/projected/9694a9b4-cc71-4423-a8dd-56a80240d3cd-kube-api-access-w4sb5\") pod \"dnsmasq-dns-68db9477c7-tp8ct\" (UID: \"9694a9b4-cc71-4423-a8dd-56a80240d3cd\") " pod="openstack/dnsmasq-dns-68db9477c7-tp8ct" Feb 27 17:36:52 crc kubenswrapper[4830]: I0227 17:36:52.897837 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/db17b982-6152-4a97-867a-1df9ee446fff-credential-keys\") pod \"keystone-bootstrap-ts8s6\" (UID: \"db17b982-6152-4a97-867a-1df9ee446fff\") " pod="openstack/keystone-bootstrap-ts8s6" Feb 27 17:36:52 crc kubenswrapper[4830]: I0227 17:36:52.897859 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9694a9b4-cc71-4423-a8dd-56a80240d3cd-ovsdbserver-nb\") pod \"dnsmasq-dns-68db9477c7-tp8ct\" (UID: \"9694a9b4-cc71-4423-a8dd-56a80240d3cd\") " pod="openstack/dnsmasq-dns-68db9477c7-tp8ct" Feb 27 17:36:52 crc kubenswrapper[4830]: I0227 17:36:52.897878 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db17b982-6152-4a97-867a-1df9ee446fff-combined-ca-bundle\") pod \"keystone-bootstrap-ts8s6\" (UID: \"db17b982-6152-4a97-867a-1df9ee446fff\") " pod="openstack/keystone-bootstrap-ts8s6" Feb 27 17:36:52 crc kubenswrapper[4830]: I0227 17:36:52.897906 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/db17b982-6152-4a97-867a-1df9ee446fff-fernet-keys\") pod \"keystone-bootstrap-ts8s6\" (UID: \"db17b982-6152-4a97-867a-1df9ee446fff\") " 
pod="openstack/keystone-bootstrap-ts8s6" Feb 27 17:36:52 crc kubenswrapper[4830]: I0227 17:36:52.897923 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9694a9b4-cc71-4423-a8dd-56a80240d3cd-ovsdbserver-sb\") pod \"dnsmasq-dns-68db9477c7-tp8ct\" (UID: \"9694a9b4-cc71-4423-a8dd-56a80240d3cd\") " pod="openstack/dnsmasq-dns-68db9477c7-tp8ct" Feb 27 17:36:52 crc kubenswrapper[4830]: I0227 17:36:52.898003 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db17b982-6152-4a97-867a-1df9ee446fff-scripts\") pod \"keystone-bootstrap-ts8s6\" (UID: \"db17b982-6152-4a97-867a-1df9ee446fff\") " pod="openstack/keystone-bootstrap-ts8s6" Feb 27 17:36:52 crc kubenswrapper[4830]: I0227 17:36:52.898048 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48znz\" (UniqueName: \"kubernetes.io/projected/db17b982-6152-4a97-867a-1df9ee446fff-kube-api-access-48znz\") pod \"keystone-bootstrap-ts8s6\" (UID: \"db17b982-6152-4a97-867a-1df9ee446fff\") " pod="openstack/keystone-bootstrap-ts8s6" Feb 27 17:36:52 crc kubenswrapper[4830]: I0227 17:36:52.898070 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9694a9b4-cc71-4423-a8dd-56a80240d3cd-dns-svc\") pod \"dnsmasq-dns-68db9477c7-tp8ct\" (UID: \"9694a9b4-cc71-4423-a8dd-56a80240d3cd\") " pod="openstack/dnsmasq-dns-68db9477c7-tp8ct" Feb 27 17:36:52 crc kubenswrapper[4830]: I0227 17:36:52.999300 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/db17b982-6152-4a97-867a-1df9ee446fff-fernet-keys\") pod \"keystone-bootstrap-ts8s6\" (UID: \"db17b982-6152-4a97-867a-1df9ee446fff\") " 
pod="openstack/keystone-bootstrap-ts8s6" Feb 27 17:36:52 crc kubenswrapper[4830]: I0227 17:36:52.999351 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9694a9b4-cc71-4423-a8dd-56a80240d3cd-ovsdbserver-sb\") pod \"dnsmasq-dns-68db9477c7-tp8ct\" (UID: \"9694a9b4-cc71-4423-a8dd-56a80240d3cd\") " pod="openstack/dnsmasq-dns-68db9477c7-tp8ct" Feb 27 17:36:52 crc kubenswrapper[4830]: I0227 17:36:52.999389 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db17b982-6152-4a97-867a-1df9ee446fff-scripts\") pod \"keystone-bootstrap-ts8s6\" (UID: \"db17b982-6152-4a97-867a-1df9ee446fff\") " pod="openstack/keystone-bootstrap-ts8s6" Feb 27 17:36:52 crc kubenswrapper[4830]: I0227 17:36:52.999412 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-48znz\" (UniqueName: \"kubernetes.io/projected/db17b982-6152-4a97-867a-1df9ee446fff-kube-api-access-48znz\") pod \"keystone-bootstrap-ts8s6\" (UID: \"db17b982-6152-4a97-867a-1df9ee446fff\") " pod="openstack/keystone-bootstrap-ts8s6" Feb 27 17:36:52 crc kubenswrapper[4830]: I0227 17:36:52.999432 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9694a9b4-cc71-4423-a8dd-56a80240d3cd-dns-svc\") pod \"dnsmasq-dns-68db9477c7-tp8ct\" (UID: \"9694a9b4-cc71-4423-a8dd-56a80240d3cd\") " pod="openstack/dnsmasq-dns-68db9477c7-tp8ct" Feb 27 17:36:52 crc kubenswrapper[4830]: I0227 17:36:52.999517 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db17b982-6152-4a97-867a-1df9ee446fff-config-data\") pod \"keystone-bootstrap-ts8s6\" (UID: \"db17b982-6152-4a97-867a-1df9ee446fff\") " pod="openstack/keystone-bootstrap-ts8s6" Feb 27 17:36:52 crc kubenswrapper[4830]: I0227 17:36:52.999538 
4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9694a9b4-cc71-4423-a8dd-56a80240d3cd-config\") pod \"dnsmasq-dns-68db9477c7-tp8ct\" (UID: \"9694a9b4-cc71-4423-a8dd-56a80240d3cd\") " pod="openstack/dnsmasq-dns-68db9477c7-tp8ct" Feb 27 17:36:52 crc kubenswrapper[4830]: I0227 17:36:52.999564 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4sb5\" (UniqueName: \"kubernetes.io/projected/9694a9b4-cc71-4423-a8dd-56a80240d3cd-kube-api-access-w4sb5\") pod \"dnsmasq-dns-68db9477c7-tp8ct\" (UID: \"9694a9b4-cc71-4423-a8dd-56a80240d3cd\") " pod="openstack/dnsmasq-dns-68db9477c7-tp8ct" Feb 27 17:36:52 crc kubenswrapper[4830]: I0227 17:36:52.999591 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/db17b982-6152-4a97-867a-1df9ee446fff-credential-keys\") pod \"keystone-bootstrap-ts8s6\" (UID: \"db17b982-6152-4a97-867a-1df9ee446fff\") " pod="openstack/keystone-bootstrap-ts8s6" Feb 27 17:36:52 crc kubenswrapper[4830]: I0227 17:36:52.999613 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9694a9b4-cc71-4423-a8dd-56a80240d3cd-ovsdbserver-nb\") pod \"dnsmasq-dns-68db9477c7-tp8ct\" (UID: \"9694a9b4-cc71-4423-a8dd-56a80240d3cd\") " pod="openstack/dnsmasq-dns-68db9477c7-tp8ct" Feb 27 17:36:52 crc kubenswrapper[4830]: I0227 17:36:52.999633 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db17b982-6152-4a97-867a-1df9ee446fff-combined-ca-bundle\") pod \"keystone-bootstrap-ts8s6\" (UID: \"db17b982-6152-4a97-867a-1df9ee446fff\") " pod="openstack/keystone-bootstrap-ts8s6" Feb 27 17:36:53 crc kubenswrapper[4830]: I0227 17:36:53.000447 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9694a9b4-cc71-4423-a8dd-56a80240d3cd-ovsdbserver-sb\") pod \"dnsmasq-dns-68db9477c7-tp8ct\" (UID: \"9694a9b4-cc71-4423-a8dd-56a80240d3cd\") " pod="openstack/dnsmasq-dns-68db9477c7-tp8ct" Feb 27 17:36:53 crc kubenswrapper[4830]: I0227 17:36:53.001664 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9694a9b4-cc71-4423-a8dd-56a80240d3cd-config\") pod \"dnsmasq-dns-68db9477c7-tp8ct\" (UID: \"9694a9b4-cc71-4423-a8dd-56a80240d3cd\") " pod="openstack/dnsmasq-dns-68db9477c7-tp8ct" Feb 27 17:36:53 crc kubenswrapper[4830]: I0227 17:36:53.002281 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9694a9b4-cc71-4423-a8dd-56a80240d3cd-ovsdbserver-nb\") pod \"dnsmasq-dns-68db9477c7-tp8ct\" (UID: \"9694a9b4-cc71-4423-a8dd-56a80240d3cd\") " pod="openstack/dnsmasq-dns-68db9477c7-tp8ct" Feb 27 17:36:53 crc kubenswrapper[4830]: I0227 17:36:53.002355 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9694a9b4-cc71-4423-a8dd-56a80240d3cd-dns-svc\") pod \"dnsmasq-dns-68db9477c7-tp8ct\" (UID: \"9694a9b4-cc71-4423-a8dd-56a80240d3cd\") " pod="openstack/dnsmasq-dns-68db9477c7-tp8ct" Feb 27 17:36:53 crc kubenswrapper[4830]: I0227 17:36:53.005982 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db17b982-6152-4a97-867a-1df9ee446fff-combined-ca-bundle\") pod \"keystone-bootstrap-ts8s6\" (UID: \"db17b982-6152-4a97-867a-1df9ee446fff\") " pod="openstack/keystone-bootstrap-ts8s6" Feb 27 17:36:53 crc kubenswrapper[4830]: I0227 17:36:53.006114 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/db17b982-6152-4a97-867a-1df9ee446fff-fernet-keys\") pod 
\"keystone-bootstrap-ts8s6\" (UID: \"db17b982-6152-4a97-867a-1df9ee446fff\") " pod="openstack/keystone-bootstrap-ts8s6" Feb 27 17:36:53 crc kubenswrapper[4830]: I0227 17:36:53.010363 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/db17b982-6152-4a97-867a-1df9ee446fff-credential-keys\") pod \"keystone-bootstrap-ts8s6\" (UID: \"db17b982-6152-4a97-867a-1df9ee446fff\") " pod="openstack/keystone-bootstrap-ts8s6" Feb 27 17:36:53 crc kubenswrapper[4830]: I0227 17:36:53.010895 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db17b982-6152-4a97-867a-1df9ee446fff-config-data\") pod \"keystone-bootstrap-ts8s6\" (UID: \"db17b982-6152-4a97-867a-1df9ee446fff\") " pod="openstack/keystone-bootstrap-ts8s6" Feb 27 17:36:53 crc kubenswrapper[4830]: I0227 17:36:53.016178 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db17b982-6152-4a97-867a-1df9ee446fff-scripts\") pod \"keystone-bootstrap-ts8s6\" (UID: \"db17b982-6152-4a97-867a-1df9ee446fff\") " pod="openstack/keystone-bootstrap-ts8s6" Feb 27 17:36:53 crc kubenswrapper[4830]: I0227 17:36:53.017211 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4sb5\" (UniqueName: \"kubernetes.io/projected/9694a9b4-cc71-4423-a8dd-56a80240d3cd-kube-api-access-w4sb5\") pod \"dnsmasq-dns-68db9477c7-tp8ct\" (UID: \"9694a9b4-cc71-4423-a8dd-56a80240d3cd\") " pod="openstack/dnsmasq-dns-68db9477c7-tp8ct" Feb 27 17:36:53 crc kubenswrapper[4830]: I0227 17:36:53.026680 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-48znz\" (UniqueName: \"kubernetes.io/projected/db17b982-6152-4a97-867a-1df9ee446fff-kube-api-access-48znz\") pod \"keystone-bootstrap-ts8s6\" (UID: \"db17b982-6152-4a97-867a-1df9ee446fff\") " pod="openstack/keystone-bootstrap-ts8s6" Feb 
27 17:36:53 crc kubenswrapper[4830]: I0227 17:36:53.131022 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-68db9477c7-tp8ct" Feb 27 17:36:53 crc kubenswrapper[4830]: I0227 17:36:53.154107 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-ts8s6" Feb 27 17:36:53 crc kubenswrapper[4830]: I0227 17:36:53.497879 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-68db9477c7-tp8ct"] Feb 27 17:36:53 crc kubenswrapper[4830]: I0227 17:36:53.743864 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-ts8s6"] Feb 27 17:36:53 crc kubenswrapper[4830]: W0227 17:36:53.801634 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddb17b982_6152_4a97_867a_1df9ee446fff.slice/crio-ae299468179b49728d384b209505a79b7e232f6b72aebcd084b797fd6c8a1218 WatchSource:0}: Error finding container ae299468179b49728d384b209505a79b7e232f6b72aebcd084b797fd6c8a1218: Status 404 returned error can't find the container with id ae299468179b49728d384b209505a79b7e232f6b72aebcd084b797fd6c8a1218 Feb 27 17:36:54 crc kubenswrapper[4830]: I0227 17:36:54.482519 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-ts8s6" event={"ID":"db17b982-6152-4a97-867a-1df9ee446fff","Type":"ContainerStarted","Data":"cfe791d8da96314f66528f242569ccaaaff78cc5620fb47a107fcdd4d3f4e74e"} Feb 27 17:36:54 crc kubenswrapper[4830]: I0227 17:36:54.483113 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-ts8s6" event={"ID":"db17b982-6152-4a97-867a-1df9ee446fff","Type":"ContainerStarted","Data":"ae299468179b49728d384b209505a79b7e232f6b72aebcd084b797fd6c8a1218"} Feb 27 17:36:54 crc kubenswrapper[4830]: I0227 17:36:54.486292 4830 generic.go:334] "Generic (PLEG): container finished" 
podID="9694a9b4-cc71-4423-a8dd-56a80240d3cd" containerID="5f471a2ccfea9ad5a27ec27aa94fd81c983b35c5387a8770441570a9112004b9" exitCode=0 Feb 27 17:36:54 crc kubenswrapper[4830]: I0227 17:36:54.486368 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68db9477c7-tp8ct" event={"ID":"9694a9b4-cc71-4423-a8dd-56a80240d3cd","Type":"ContainerDied","Data":"5f471a2ccfea9ad5a27ec27aa94fd81c983b35c5387a8770441570a9112004b9"} Feb 27 17:36:54 crc kubenswrapper[4830]: I0227 17:36:54.486412 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68db9477c7-tp8ct" event={"ID":"9694a9b4-cc71-4423-a8dd-56a80240d3cd","Type":"ContainerStarted","Data":"6eb22a64d615f7c774a14a2c4d0fa464151d5ae585f690c4c3c0f5c1416bdc27"} Feb 27 17:36:54 crc kubenswrapper[4830]: I0227 17:36:54.521767 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-ts8s6" podStartSLOduration=2.521740774 podStartE2EDuration="2.521740774s" podCreationTimestamp="2026-02-27 17:36:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 17:36:54.518703811 +0000 UTC m=+5410.607976314" watchObservedRunningTime="2026-02-27 17:36:54.521740774 +0000 UTC m=+5410.611013277" Feb 27 17:36:55 crc kubenswrapper[4830]: I0227 17:36:55.502137 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68db9477c7-tp8ct" event={"ID":"9694a9b4-cc71-4423-a8dd-56a80240d3cd","Type":"ContainerStarted","Data":"070d0d0852aa3c2d0c5454543e2b849d27c200f77e8b1db2e019452173412d11"} Feb 27 17:36:55 crc kubenswrapper[4830]: I0227 17:36:55.528274 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-68db9477c7-tp8ct" podStartSLOduration=3.5282483019999997 podStartE2EDuration="3.528248302s" podCreationTimestamp="2026-02-27 17:36:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 
UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 17:36:55.526654514 +0000 UTC m=+5411.615926987" watchObservedRunningTime="2026-02-27 17:36:55.528248302 +0000 UTC m=+5411.617520775" Feb 27 17:36:56 crc kubenswrapper[4830]: I0227 17:36:56.511708 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-68db9477c7-tp8ct" Feb 27 17:36:57 crc kubenswrapper[4830]: I0227 17:36:57.525362 4830 generic.go:334] "Generic (PLEG): container finished" podID="db17b982-6152-4a97-867a-1df9ee446fff" containerID="cfe791d8da96314f66528f242569ccaaaff78cc5620fb47a107fcdd4d3f4e74e" exitCode=0 Feb 27 17:36:57 crc kubenswrapper[4830]: I0227 17:36:57.525462 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-ts8s6" event={"ID":"db17b982-6152-4a97-867a-1df9ee446fff","Type":"ContainerDied","Data":"cfe791d8da96314f66528f242569ccaaaff78cc5620fb47a107fcdd4d3f4e74e"} Feb 27 17:36:59 crc kubenswrapper[4830]: I0227 17:36:59.013977 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-ts8s6" Feb 27 17:36:59 crc kubenswrapper[4830]: I0227 17:36:59.043800 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db17b982-6152-4a97-867a-1df9ee446fff-combined-ca-bundle\") pod \"db17b982-6152-4a97-867a-1df9ee446fff\" (UID: \"db17b982-6152-4a97-867a-1df9ee446fff\") " Feb 27 17:36:59 crc kubenswrapper[4830]: I0227 17:36:59.043869 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db17b982-6152-4a97-867a-1df9ee446fff-scripts\") pod \"db17b982-6152-4a97-867a-1df9ee446fff\" (UID: \"db17b982-6152-4a97-867a-1df9ee446fff\") " Feb 27 17:36:59 crc kubenswrapper[4830]: I0227 17:36:59.043921 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/db17b982-6152-4a97-867a-1df9ee446fff-credential-keys\") pod \"db17b982-6152-4a97-867a-1df9ee446fff\" (UID: \"db17b982-6152-4a97-867a-1df9ee446fff\") " Feb 27 17:36:59 crc kubenswrapper[4830]: I0227 17:36:59.044113 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db17b982-6152-4a97-867a-1df9ee446fff-config-data\") pod \"db17b982-6152-4a97-867a-1df9ee446fff\" (UID: \"db17b982-6152-4a97-867a-1df9ee446fff\") " Feb 27 17:36:59 crc kubenswrapper[4830]: I0227 17:36:59.044399 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/db17b982-6152-4a97-867a-1df9ee446fff-fernet-keys\") pod \"db17b982-6152-4a97-867a-1df9ee446fff\" (UID: \"db17b982-6152-4a97-867a-1df9ee446fff\") " Feb 27 17:36:59 crc kubenswrapper[4830]: I0227 17:36:59.044462 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-48znz\" (UniqueName: 
\"kubernetes.io/projected/db17b982-6152-4a97-867a-1df9ee446fff-kube-api-access-48znz\") pod \"db17b982-6152-4a97-867a-1df9ee446fff\" (UID: \"db17b982-6152-4a97-867a-1df9ee446fff\") " Feb 27 17:36:59 crc kubenswrapper[4830]: I0227 17:36:59.055428 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db17b982-6152-4a97-867a-1df9ee446fff-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "db17b982-6152-4a97-867a-1df9ee446fff" (UID: "db17b982-6152-4a97-867a-1df9ee446fff"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:36:59 crc kubenswrapper[4830]: I0227 17:36:59.058551 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db17b982-6152-4a97-867a-1df9ee446fff-scripts" (OuterVolumeSpecName: "scripts") pod "db17b982-6152-4a97-867a-1df9ee446fff" (UID: "db17b982-6152-4a97-867a-1df9ee446fff"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:36:59 crc kubenswrapper[4830]: I0227 17:36:59.059020 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db17b982-6152-4a97-867a-1df9ee446fff-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "db17b982-6152-4a97-867a-1df9ee446fff" (UID: "db17b982-6152-4a97-867a-1df9ee446fff"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:36:59 crc kubenswrapper[4830]: I0227 17:36:59.061024 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db17b982-6152-4a97-867a-1df9ee446fff-kube-api-access-48znz" (OuterVolumeSpecName: "kube-api-access-48znz") pod "db17b982-6152-4a97-867a-1df9ee446fff" (UID: "db17b982-6152-4a97-867a-1df9ee446fff"). InnerVolumeSpecName "kube-api-access-48znz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:36:59 crc kubenswrapper[4830]: E0227 17:36:59.092507 4830 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/db17b982-6152-4a97-867a-1df9ee446fff-config-data podName:db17b982-6152-4a97-867a-1df9ee446fff nodeName:}" failed. No retries permitted until 2026-02-27 17:36:59.592446886 +0000 UTC m=+5415.681719379 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "config-data" (UniqueName: "kubernetes.io/secret/db17b982-6152-4a97-867a-1df9ee446fff-config-data") pod "db17b982-6152-4a97-867a-1df9ee446fff" (UID: "db17b982-6152-4a97-867a-1df9ee446fff") : error deleting /var/lib/kubelet/pods/db17b982-6152-4a97-867a-1df9ee446fff/volume-subpaths: remove /var/lib/kubelet/pods/db17b982-6152-4a97-867a-1df9ee446fff/volume-subpaths: no such file or directory Feb 27 17:36:59 crc kubenswrapper[4830]: I0227 17:36:59.098148 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db17b982-6152-4a97-867a-1df9ee446fff-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "db17b982-6152-4a97-867a-1df9ee446fff" (UID: "db17b982-6152-4a97-867a-1df9ee446fff"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:36:59 crc kubenswrapper[4830]: I0227 17:36:59.147382 4830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db17b982-6152-4a97-867a-1df9ee446fff-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 17:36:59 crc kubenswrapper[4830]: I0227 17:36:59.147436 4830 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db17b982-6152-4a97-867a-1df9ee446fff-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 17:36:59 crc kubenswrapper[4830]: I0227 17:36:59.147456 4830 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/db17b982-6152-4a97-867a-1df9ee446fff-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 27 17:36:59 crc kubenswrapper[4830]: I0227 17:36:59.147474 4830 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/db17b982-6152-4a97-867a-1df9ee446fff-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 27 17:36:59 crc kubenswrapper[4830]: I0227 17:36:59.147494 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-48znz\" (UniqueName: \"kubernetes.io/projected/db17b982-6152-4a97-867a-1df9ee446fff-kube-api-access-48znz\") on node \"crc\" DevicePath \"\"" Feb 27 17:36:59 crc kubenswrapper[4830]: I0227 17:36:59.567588 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-ts8s6" event={"ID":"db17b982-6152-4a97-867a-1df9ee446fff","Type":"ContainerDied","Data":"ae299468179b49728d384b209505a79b7e232f6b72aebcd084b797fd6c8a1218"} Feb 27 17:36:59 crc kubenswrapper[4830]: I0227 17:36:59.567719 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-ts8s6" Feb 27 17:36:59 crc kubenswrapper[4830]: I0227 17:36:59.567663 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ae299468179b49728d384b209505a79b7e232f6b72aebcd084b797fd6c8a1218" Feb 27 17:36:59 crc kubenswrapper[4830]: I0227 17:36:59.657753 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db17b982-6152-4a97-867a-1df9ee446fff-config-data\") pod \"db17b982-6152-4a97-867a-1df9ee446fff\" (UID: \"db17b982-6152-4a97-867a-1df9ee446fff\") " Feb 27 17:36:59 crc kubenswrapper[4830]: I0227 17:36:59.676429 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db17b982-6152-4a97-867a-1df9ee446fff-config-data" (OuterVolumeSpecName: "config-data") pod "db17b982-6152-4a97-867a-1df9ee446fff" (UID: "db17b982-6152-4a97-867a-1df9ee446fff"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:36:59 crc kubenswrapper[4830]: I0227 17:36:59.702027 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-ts8s6"] Feb 27 17:36:59 crc kubenswrapper[4830]: I0227 17:36:59.723801 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-ts8s6"] Feb 27 17:36:59 crc kubenswrapper[4830]: I0227 17:36:59.760986 4830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db17b982-6152-4a97-867a-1df9ee446fff-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 17:36:59 crc kubenswrapper[4830]: I0227 17:36:59.776138 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-nhz7x"] Feb 27 17:36:59 crc kubenswrapper[4830]: E0227 17:36:59.776742 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db17b982-6152-4a97-867a-1df9ee446fff" containerName="keystone-bootstrap" Feb 27 17:36:59 crc kubenswrapper[4830]: I0227 17:36:59.776767 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="db17b982-6152-4a97-867a-1df9ee446fff" containerName="keystone-bootstrap" Feb 27 17:36:59 crc kubenswrapper[4830]: I0227 17:36:59.777000 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="db17b982-6152-4a97-867a-1df9ee446fff" containerName="keystone-bootstrap" Feb 27 17:36:59 crc kubenswrapper[4830]: I0227 17:36:59.778063 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-nhz7x" Feb 27 17:36:59 crc kubenswrapper[4830]: I0227 17:36:59.784209 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-nhz7x"] Feb 27 17:36:59 crc kubenswrapper[4830]: I0227 17:36:59.862073 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/24e71808-4a6f-46f1-b878-7a4b2e75270b-credential-keys\") pod \"keystone-bootstrap-nhz7x\" (UID: \"24e71808-4a6f-46f1-b878-7a4b2e75270b\") " pod="openstack/keystone-bootstrap-nhz7x" Feb 27 17:36:59 crc kubenswrapper[4830]: I0227 17:36:59.862178 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/24e71808-4a6f-46f1-b878-7a4b2e75270b-fernet-keys\") pod \"keystone-bootstrap-nhz7x\" (UID: \"24e71808-4a6f-46f1-b878-7a4b2e75270b\") " pod="openstack/keystone-bootstrap-nhz7x" Feb 27 17:36:59 crc kubenswrapper[4830]: I0227 17:36:59.862243 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g82nk\" (UniqueName: \"kubernetes.io/projected/24e71808-4a6f-46f1-b878-7a4b2e75270b-kube-api-access-g82nk\") pod \"keystone-bootstrap-nhz7x\" (UID: \"24e71808-4a6f-46f1-b878-7a4b2e75270b\") " pod="openstack/keystone-bootstrap-nhz7x" Feb 27 17:36:59 crc kubenswrapper[4830]: I0227 17:36:59.862481 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/24e71808-4a6f-46f1-b878-7a4b2e75270b-scripts\") pod \"keystone-bootstrap-nhz7x\" (UID: \"24e71808-4a6f-46f1-b878-7a4b2e75270b\") " pod="openstack/keystone-bootstrap-nhz7x" Feb 27 17:36:59 crc kubenswrapper[4830]: I0227 17:36:59.862996 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/24e71808-4a6f-46f1-b878-7a4b2e75270b-config-data\") pod \"keystone-bootstrap-nhz7x\" (UID: \"24e71808-4a6f-46f1-b878-7a4b2e75270b\") " pod="openstack/keystone-bootstrap-nhz7x" Feb 27 17:36:59 crc kubenswrapper[4830]: I0227 17:36:59.863290 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24e71808-4a6f-46f1-b878-7a4b2e75270b-combined-ca-bundle\") pod \"keystone-bootstrap-nhz7x\" (UID: \"24e71808-4a6f-46f1-b878-7a4b2e75270b\") " pod="openstack/keystone-bootstrap-nhz7x" Feb 27 17:36:59 crc kubenswrapper[4830]: I0227 17:36:59.966299 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24e71808-4a6f-46f1-b878-7a4b2e75270b-config-data\") pod \"keystone-bootstrap-nhz7x\" (UID: \"24e71808-4a6f-46f1-b878-7a4b2e75270b\") " pod="openstack/keystone-bootstrap-nhz7x" Feb 27 17:36:59 crc kubenswrapper[4830]: I0227 17:36:59.966499 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24e71808-4a6f-46f1-b878-7a4b2e75270b-combined-ca-bundle\") pod \"keystone-bootstrap-nhz7x\" (UID: \"24e71808-4a6f-46f1-b878-7a4b2e75270b\") " pod="openstack/keystone-bootstrap-nhz7x" Feb 27 17:36:59 crc kubenswrapper[4830]: I0227 17:36:59.966600 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/24e71808-4a6f-46f1-b878-7a4b2e75270b-credential-keys\") pod \"keystone-bootstrap-nhz7x\" (UID: \"24e71808-4a6f-46f1-b878-7a4b2e75270b\") " pod="openstack/keystone-bootstrap-nhz7x" Feb 27 17:36:59 crc kubenswrapper[4830]: I0227 17:36:59.966669 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/24e71808-4a6f-46f1-b878-7a4b2e75270b-fernet-keys\") pod \"keystone-bootstrap-nhz7x\" (UID: \"24e71808-4a6f-46f1-b878-7a4b2e75270b\") " pod="openstack/keystone-bootstrap-nhz7x" Feb 27 17:36:59 crc kubenswrapper[4830]: I0227 17:36:59.966844 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g82nk\" (UniqueName: \"kubernetes.io/projected/24e71808-4a6f-46f1-b878-7a4b2e75270b-kube-api-access-g82nk\") pod \"keystone-bootstrap-nhz7x\" (UID: \"24e71808-4a6f-46f1-b878-7a4b2e75270b\") " pod="openstack/keystone-bootstrap-nhz7x" Feb 27 17:36:59 crc kubenswrapper[4830]: I0227 17:36:59.966994 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/24e71808-4a6f-46f1-b878-7a4b2e75270b-scripts\") pod \"keystone-bootstrap-nhz7x\" (UID: \"24e71808-4a6f-46f1-b878-7a4b2e75270b\") " pod="openstack/keystone-bootstrap-nhz7x" Feb 27 17:36:59 crc kubenswrapper[4830]: I0227 17:36:59.971706 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24e71808-4a6f-46f1-b878-7a4b2e75270b-combined-ca-bundle\") pod \"keystone-bootstrap-nhz7x\" (UID: \"24e71808-4a6f-46f1-b878-7a4b2e75270b\") " pod="openstack/keystone-bootstrap-nhz7x" Feb 27 17:36:59 crc kubenswrapper[4830]: I0227 17:36:59.973033 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/24e71808-4a6f-46f1-b878-7a4b2e75270b-fernet-keys\") pod \"keystone-bootstrap-nhz7x\" (UID: \"24e71808-4a6f-46f1-b878-7a4b2e75270b\") " pod="openstack/keystone-bootstrap-nhz7x" Feb 27 17:36:59 crc kubenswrapper[4830]: I0227 17:36:59.973754 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/24e71808-4a6f-46f1-b878-7a4b2e75270b-scripts\") pod \"keystone-bootstrap-nhz7x\" (UID: 
\"24e71808-4a6f-46f1-b878-7a4b2e75270b\") " pod="openstack/keystone-bootstrap-nhz7x" Feb 27 17:36:59 crc kubenswrapper[4830]: I0227 17:36:59.974775 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/24e71808-4a6f-46f1-b878-7a4b2e75270b-credential-keys\") pod \"keystone-bootstrap-nhz7x\" (UID: \"24e71808-4a6f-46f1-b878-7a4b2e75270b\") " pod="openstack/keystone-bootstrap-nhz7x" Feb 27 17:36:59 crc kubenswrapper[4830]: I0227 17:36:59.977523 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24e71808-4a6f-46f1-b878-7a4b2e75270b-config-data\") pod \"keystone-bootstrap-nhz7x\" (UID: \"24e71808-4a6f-46f1-b878-7a4b2e75270b\") " pod="openstack/keystone-bootstrap-nhz7x" Feb 27 17:37:00 crc kubenswrapper[4830]: I0227 17:37:00.005240 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g82nk\" (UniqueName: \"kubernetes.io/projected/24e71808-4a6f-46f1-b878-7a4b2e75270b-kube-api-access-g82nk\") pod \"keystone-bootstrap-nhz7x\" (UID: \"24e71808-4a6f-46f1-b878-7a4b2e75270b\") " pod="openstack/keystone-bootstrap-nhz7x" Feb 27 17:37:00 crc kubenswrapper[4830]: I0227 17:37:00.100726 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-nhz7x" Feb 27 17:37:00 crc kubenswrapper[4830]: I0227 17:37:00.639182 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-nhz7x"] Feb 27 17:37:00 crc kubenswrapper[4830]: I0227 17:37:00.784542 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db17b982-6152-4a97-867a-1df9ee446fff" path="/var/lib/kubelet/pods/db17b982-6152-4a97-867a-1df9ee446fff/volumes" Feb 27 17:37:01 crc kubenswrapper[4830]: I0227 17:37:01.596815 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-nhz7x" event={"ID":"24e71808-4a6f-46f1-b878-7a4b2e75270b","Type":"ContainerStarted","Data":"6134fcedb998b9c4741d590a9737112edccecbfc5ea4fffb7c1568515daf569c"} Feb 27 17:37:01 crc kubenswrapper[4830]: I0227 17:37:01.596883 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-nhz7x" event={"ID":"24e71808-4a6f-46f1-b878-7a4b2e75270b","Type":"ContainerStarted","Data":"69ab358a6ecdc9df3c267e61d3ebc7e69fcda5994ce7dd098abcdbcb7253aa1a"} Feb 27 17:37:01 crc kubenswrapper[4830]: I0227 17:37:01.646936 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-nhz7x" podStartSLOduration=2.646905886 podStartE2EDuration="2.646905886s" podCreationTimestamp="2026-02-27 17:36:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 17:37:01.635722267 +0000 UTC m=+5417.724994780" watchObservedRunningTime="2026-02-27 17:37:01.646905886 +0000 UTC m=+5417.736178389" Feb 27 17:37:03 crc kubenswrapper[4830]: I0227 17:37:03.132417 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-68db9477c7-tp8ct" Feb 27 17:37:03 crc kubenswrapper[4830]: I0227 17:37:03.219118 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/dnsmasq-dns-55c8698c57-xb4mv"] Feb 27 17:37:03 crc kubenswrapper[4830]: I0227 17:37:03.219637 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-55c8698c57-xb4mv" podUID="45ecec9f-98ba-40bf-8a0f-7adaf09e74d0" containerName="dnsmasq-dns" containerID="cri-o://56ac54a1d1a4a8c5a03f462557375ac8be5907d48a387597cd5e0dd844fa79af" gracePeriod=10 Feb 27 17:37:03 crc kubenswrapper[4830]: I0227 17:37:03.621967 4830 generic.go:334] "Generic (PLEG): container finished" podID="45ecec9f-98ba-40bf-8a0f-7adaf09e74d0" containerID="56ac54a1d1a4a8c5a03f462557375ac8be5907d48a387597cd5e0dd844fa79af" exitCode=0 Feb 27 17:37:03 crc kubenswrapper[4830]: I0227 17:37:03.622006 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55c8698c57-xb4mv" event={"ID":"45ecec9f-98ba-40bf-8a0f-7adaf09e74d0","Type":"ContainerDied","Data":"56ac54a1d1a4a8c5a03f462557375ac8be5907d48a387597cd5e0dd844fa79af"} Feb 27 17:37:03 crc kubenswrapper[4830]: I0227 17:37:03.625499 4830 generic.go:334] "Generic (PLEG): container finished" podID="24e71808-4a6f-46f1-b878-7a4b2e75270b" containerID="6134fcedb998b9c4741d590a9737112edccecbfc5ea4fffb7c1568515daf569c" exitCode=0 Feb 27 17:37:03 crc kubenswrapper[4830]: I0227 17:37:03.625552 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-nhz7x" event={"ID":"24e71808-4a6f-46f1-b878-7a4b2e75270b","Type":"ContainerDied","Data":"6134fcedb998b9c4741d590a9737112edccecbfc5ea4fffb7c1568515daf569c"} Feb 27 17:37:03 crc kubenswrapper[4830]: I0227 17:37:03.758526 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55c8698c57-xb4mv" Feb 27 17:37:03 crc kubenswrapper[4830]: I0227 17:37:03.762958 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkfrk\" (UniqueName: \"kubernetes.io/projected/45ecec9f-98ba-40bf-8a0f-7adaf09e74d0-kube-api-access-zkfrk\") pod \"45ecec9f-98ba-40bf-8a0f-7adaf09e74d0\" (UID: \"45ecec9f-98ba-40bf-8a0f-7adaf09e74d0\") " Feb 27 17:37:03 crc kubenswrapper[4830]: I0227 17:37:03.763103 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/45ecec9f-98ba-40bf-8a0f-7adaf09e74d0-ovsdbserver-sb\") pod \"45ecec9f-98ba-40bf-8a0f-7adaf09e74d0\" (UID: \"45ecec9f-98ba-40bf-8a0f-7adaf09e74d0\") " Feb 27 17:37:03 crc kubenswrapper[4830]: I0227 17:37:03.763159 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/45ecec9f-98ba-40bf-8a0f-7adaf09e74d0-dns-svc\") pod \"45ecec9f-98ba-40bf-8a0f-7adaf09e74d0\" (UID: \"45ecec9f-98ba-40bf-8a0f-7adaf09e74d0\") " Feb 27 17:37:03 crc kubenswrapper[4830]: I0227 17:37:03.771081 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45ecec9f-98ba-40bf-8a0f-7adaf09e74d0-kube-api-access-zkfrk" (OuterVolumeSpecName: "kube-api-access-zkfrk") pod "45ecec9f-98ba-40bf-8a0f-7adaf09e74d0" (UID: "45ecec9f-98ba-40bf-8a0f-7adaf09e74d0"). InnerVolumeSpecName "kube-api-access-zkfrk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:37:03 crc kubenswrapper[4830]: I0227 17:37:03.823321 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45ecec9f-98ba-40bf-8a0f-7adaf09e74d0-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "45ecec9f-98ba-40bf-8a0f-7adaf09e74d0" (UID: "45ecec9f-98ba-40bf-8a0f-7adaf09e74d0"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:37:03 crc kubenswrapper[4830]: I0227 17:37:03.846740 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45ecec9f-98ba-40bf-8a0f-7adaf09e74d0-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "45ecec9f-98ba-40bf-8a0f-7adaf09e74d0" (UID: "45ecec9f-98ba-40bf-8a0f-7adaf09e74d0"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:37:03 crc kubenswrapper[4830]: I0227 17:37:03.864713 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/45ecec9f-98ba-40bf-8a0f-7adaf09e74d0-ovsdbserver-nb\") pod \"45ecec9f-98ba-40bf-8a0f-7adaf09e74d0\" (UID: \"45ecec9f-98ba-40bf-8a0f-7adaf09e74d0\") " Feb 27 17:37:03 crc kubenswrapper[4830]: I0227 17:37:03.865511 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45ecec9f-98ba-40bf-8a0f-7adaf09e74d0-config\") pod \"45ecec9f-98ba-40bf-8a0f-7adaf09e74d0\" (UID: \"45ecec9f-98ba-40bf-8a0f-7adaf09e74d0\") " Feb 27 17:37:03 crc kubenswrapper[4830]: I0227 17:37:03.866518 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkfrk\" (UniqueName: \"kubernetes.io/projected/45ecec9f-98ba-40bf-8a0f-7adaf09e74d0-kube-api-access-zkfrk\") on node \"crc\" DevicePath \"\"" Feb 27 17:37:03 crc kubenswrapper[4830]: I0227 17:37:03.866559 4830 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/45ecec9f-98ba-40bf-8a0f-7adaf09e74d0-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 27 17:37:03 crc kubenswrapper[4830]: I0227 17:37:03.866581 4830 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/45ecec9f-98ba-40bf-8a0f-7adaf09e74d0-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 
27 17:37:03 crc kubenswrapper[4830]: I0227 17:37:03.926984 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45ecec9f-98ba-40bf-8a0f-7adaf09e74d0-config" (OuterVolumeSpecName: "config") pod "45ecec9f-98ba-40bf-8a0f-7adaf09e74d0" (UID: "45ecec9f-98ba-40bf-8a0f-7adaf09e74d0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:37:03 crc kubenswrapper[4830]: I0227 17:37:03.934827 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45ecec9f-98ba-40bf-8a0f-7adaf09e74d0-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "45ecec9f-98ba-40bf-8a0f-7adaf09e74d0" (UID: "45ecec9f-98ba-40bf-8a0f-7adaf09e74d0"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:37:03 crc kubenswrapper[4830]: I0227 17:37:03.969259 4830 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/45ecec9f-98ba-40bf-8a0f-7adaf09e74d0-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 27 17:37:03 crc kubenswrapper[4830]: I0227 17:37:03.969309 4830 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45ecec9f-98ba-40bf-8a0f-7adaf09e74d0-config\") on node \"crc\" DevicePath \"\"" Feb 27 17:37:04 crc kubenswrapper[4830]: I0227 17:37:04.646406 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55c8698c57-xb4mv" event={"ID":"45ecec9f-98ba-40bf-8a0f-7adaf09e74d0","Type":"ContainerDied","Data":"30c864b673aa0e5296aba9c3f40e28f9c26126673f3ff6d754cde3bf55220dd3"} Feb 27 17:37:04 crc kubenswrapper[4830]: I0227 17:37:04.647046 4830 scope.go:117] "RemoveContainer" containerID="56ac54a1d1a4a8c5a03f462557375ac8be5907d48a387597cd5e0dd844fa79af" Feb 27 17:37:04 crc kubenswrapper[4830]: I0227 17:37:04.646516 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55c8698c57-xb4mv" Feb 27 17:37:04 crc kubenswrapper[4830]: I0227 17:37:04.679930 4830 scope.go:117] "RemoveContainer" containerID="fda945cd20c2726f61f3cb4730ca4b6cf7d4bc487646314c0ed7a9382a73bbeb" Feb 27 17:37:04 crc kubenswrapper[4830]: I0227 17:37:04.710337 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55c8698c57-xb4mv"] Feb 27 17:37:04 crc kubenswrapper[4830]: I0227 17:37:04.718685 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-55c8698c57-xb4mv"] Feb 27 17:37:04 crc kubenswrapper[4830]: I0227 17:37:04.790933 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45ecec9f-98ba-40bf-8a0f-7adaf09e74d0" path="/var/lib/kubelet/pods/45ecec9f-98ba-40bf-8a0f-7adaf09e74d0/volumes" Feb 27 17:37:05 crc kubenswrapper[4830]: I0227 17:37:05.123355 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-nhz7x" Feb 27 17:37:05 crc kubenswrapper[4830]: I0227 17:37:05.301868 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24e71808-4a6f-46f1-b878-7a4b2e75270b-config-data\") pod \"24e71808-4a6f-46f1-b878-7a4b2e75270b\" (UID: \"24e71808-4a6f-46f1-b878-7a4b2e75270b\") " Feb 27 17:37:05 crc kubenswrapper[4830]: I0227 17:37:05.302117 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/24e71808-4a6f-46f1-b878-7a4b2e75270b-credential-keys\") pod \"24e71808-4a6f-46f1-b878-7a4b2e75270b\" (UID: \"24e71808-4a6f-46f1-b878-7a4b2e75270b\") " Feb 27 17:37:05 crc kubenswrapper[4830]: I0227 17:37:05.302166 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/24e71808-4a6f-46f1-b878-7a4b2e75270b-scripts\") pod \"24e71808-4a6f-46f1-b878-7a4b2e75270b\" 
(UID: \"24e71808-4a6f-46f1-b878-7a4b2e75270b\") " Feb 27 17:37:05 crc kubenswrapper[4830]: I0227 17:37:05.302304 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/24e71808-4a6f-46f1-b878-7a4b2e75270b-fernet-keys\") pod \"24e71808-4a6f-46f1-b878-7a4b2e75270b\" (UID: \"24e71808-4a6f-46f1-b878-7a4b2e75270b\") " Feb 27 17:37:05 crc kubenswrapper[4830]: I0227 17:37:05.302381 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g82nk\" (UniqueName: \"kubernetes.io/projected/24e71808-4a6f-46f1-b878-7a4b2e75270b-kube-api-access-g82nk\") pod \"24e71808-4a6f-46f1-b878-7a4b2e75270b\" (UID: \"24e71808-4a6f-46f1-b878-7a4b2e75270b\") " Feb 27 17:37:05 crc kubenswrapper[4830]: I0227 17:37:05.302521 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24e71808-4a6f-46f1-b878-7a4b2e75270b-combined-ca-bundle\") pod \"24e71808-4a6f-46f1-b878-7a4b2e75270b\" (UID: \"24e71808-4a6f-46f1-b878-7a4b2e75270b\") " Feb 27 17:37:05 crc kubenswrapper[4830]: I0227 17:37:05.316939 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24e71808-4a6f-46f1-b878-7a4b2e75270b-scripts" (OuterVolumeSpecName: "scripts") pod "24e71808-4a6f-46f1-b878-7a4b2e75270b" (UID: "24e71808-4a6f-46f1-b878-7a4b2e75270b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:37:05 crc kubenswrapper[4830]: I0227 17:37:05.318418 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24e71808-4a6f-46f1-b878-7a4b2e75270b-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "24e71808-4a6f-46f1-b878-7a4b2e75270b" (UID: "24e71808-4a6f-46f1-b878-7a4b2e75270b"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:37:05 crc kubenswrapper[4830]: I0227 17:37:05.325089 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24e71808-4a6f-46f1-b878-7a4b2e75270b-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "24e71808-4a6f-46f1-b878-7a4b2e75270b" (UID: "24e71808-4a6f-46f1-b878-7a4b2e75270b"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:37:05 crc kubenswrapper[4830]: I0227 17:37:05.325223 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24e71808-4a6f-46f1-b878-7a4b2e75270b-kube-api-access-g82nk" (OuterVolumeSpecName: "kube-api-access-g82nk") pod "24e71808-4a6f-46f1-b878-7a4b2e75270b" (UID: "24e71808-4a6f-46f1-b878-7a4b2e75270b"). InnerVolumeSpecName "kube-api-access-g82nk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:37:05 crc kubenswrapper[4830]: I0227 17:37:05.348888 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24e71808-4a6f-46f1-b878-7a4b2e75270b-config-data" (OuterVolumeSpecName: "config-data") pod "24e71808-4a6f-46f1-b878-7a4b2e75270b" (UID: "24e71808-4a6f-46f1-b878-7a4b2e75270b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:37:05 crc kubenswrapper[4830]: I0227 17:37:05.356719 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24e71808-4a6f-46f1-b878-7a4b2e75270b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "24e71808-4a6f-46f1-b878-7a4b2e75270b" (UID: "24e71808-4a6f-46f1-b878-7a4b2e75270b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:37:05 crc kubenswrapper[4830]: I0227 17:37:05.405554 4830 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/24e71808-4a6f-46f1-b878-7a4b2e75270b-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 27 17:37:05 crc kubenswrapper[4830]: I0227 17:37:05.405609 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g82nk\" (UniqueName: \"kubernetes.io/projected/24e71808-4a6f-46f1-b878-7a4b2e75270b-kube-api-access-g82nk\") on node \"crc\" DevicePath \"\"" Feb 27 17:37:05 crc kubenswrapper[4830]: I0227 17:37:05.405634 4830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24e71808-4a6f-46f1-b878-7a4b2e75270b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 17:37:05 crc kubenswrapper[4830]: I0227 17:37:05.405653 4830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24e71808-4a6f-46f1-b878-7a4b2e75270b-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 17:37:05 crc kubenswrapper[4830]: I0227 17:37:05.405671 4830 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/24e71808-4a6f-46f1-b878-7a4b2e75270b-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 27 17:37:05 crc kubenswrapper[4830]: I0227 17:37:05.405689 4830 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/24e71808-4a6f-46f1-b878-7a4b2e75270b-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 17:37:05 crc kubenswrapper[4830]: I0227 17:37:05.664481 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-nhz7x" event={"ID":"24e71808-4a6f-46f1-b878-7a4b2e75270b","Type":"ContainerDied","Data":"69ab358a6ecdc9df3c267e61d3ebc7e69fcda5994ce7dd098abcdbcb7253aa1a"} Feb 27 17:37:05 crc kubenswrapper[4830]: I0227 
17:37:05.666113 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="69ab358a6ecdc9df3c267e61d3ebc7e69fcda5994ce7dd098abcdbcb7253aa1a" Feb 27 17:37:05 crc kubenswrapper[4830]: I0227 17:37:05.664571 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-nhz7x" Feb 27 17:37:06 crc kubenswrapper[4830]: I0227 17:37:06.280693 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-676ffb979c-dk4rh"] Feb 27 17:37:06 crc kubenswrapper[4830]: E0227 17:37:06.282621 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45ecec9f-98ba-40bf-8a0f-7adaf09e74d0" containerName="dnsmasq-dns" Feb 27 17:37:06 crc kubenswrapper[4830]: I0227 17:37:06.282650 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="45ecec9f-98ba-40bf-8a0f-7adaf09e74d0" containerName="dnsmasq-dns" Feb 27 17:37:06 crc kubenswrapper[4830]: E0227 17:37:06.282675 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24e71808-4a6f-46f1-b878-7a4b2e75270b" containerName="keystone-bootstrap" Feb 27 17:37:06 crc kubenswrapper[4830]: I0227 17:37:06.282684 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="24e71808-4a6f-46f1-b878-7a4b2e75270b" containerName="keystone-bootstrap" Feb 27 17:37:06 crc kubenswrapper[4830]: E0227 17:37:06.282704 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45ecec9f-98ba-40bf-8a0f-7adaf09e74d0" containerName="init" Feb 27 17:37:06 crc kubenswrapper[4830]: I0227 17:37:06.282715 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="45ecec9f-98ba-40bf-8a0f-7adaf09e74d0" containerName="init" Feb 27 17:37:06 crc kubenswrapper[4830]: I0227 17:37:06.282923 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="45ecec9f-98ba-40bf-8a0f-7adaf09e74d0" containerName="dnsmasq-dns" Feb 27 17:37:06 crc kubenswrapper[4830]: I0227 17:37:06.282967 4830 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="24e71808-4a6f-46f1-b878-7a4b2e75270b" containerName="keystone-bootstrap" Feb 27 17:37:06 crc kubenswrapper[4830]: I0227 17:37:06.283708 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-676ffb979c-dk4rh" Feb 27 17:37:06 crc kubenswrapper[4830]: I0227 17:37:06.287334 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-c6l2j" Feb 27 17:37:06 crc kubenswrapper[4830]: I0227 17:37:06.287621 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 27 17:37:06 crc kubenswrapper[4830]: I0227 17:37:06.288917 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 27 17:37:06 crc kubenswrapper[4830]: I0227 17:37:06.289146 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 27 17:37:06 crc kubenswrapper[4830]: I0227 17:37:06.305161 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-676ffb979c-dk4rh"] Feb 27 17:37:06 crc kubenswrapper[4830]: I0227 17:37:06.433666 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f66c590-b19e-4188-bf5c-125cc3b78c4f-config-data\") pod \"keystone-676ffb979c-dk4rh\" (UID: \"8f66c590-b19e-4188-bf5c-125cc3b78c4f\") " pod="openstack/keystone-676ffb979c-dk4rh" Feb 27 17:37:06 crc kubenswrapper[4830]: I0227 17:37:06.433769 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jks6\" (UniqueName: \"kubernetes.io/projected/8f66c590-b19e-4188-bf5c-125cc3b78c4f-kube-api-access-8jks6\") pod \"keystone-676ffb979c-dk4rh\" (UID: \"8f66c590-b19e-4188-bf5c-125cc3b78c4f\") " pod="openstack/keystone-676ffb979c-dk4rh" Feb 27 17:37:06 crc kubenswrapper[4830]: I0227 17:37:06.433861 4830 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8f66c590-b19e-4188-bf5c-125cc3b78c4f-credential-keys\") pod \"keystone-676ffb979c-dk4rh\" (UID: \"8f66c590-b19e-4188-bf5c-125cc3b78c4f\") " pod="openstack/keystone-676ffb979c-dk4rh" Feb 27 17:37:06 crc kubenswrapper[4830]: I0227 17:37:06.434100 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f66c590-b19e-4188-bf5c-125cc3b78c4f-scripts\") pod \"keystone-676ffb979c-dk4rh\" (UID: \"8f66c590-b19e-4188-bf5c-125cc3b78c4f\") " pod="openstack/keystone-676ffb979c-dk4rh" Feb 27 17:37:06 crc kubenswrapper[4830]: I0227 17:37:06.434146 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f66c590-b19e-4188-bf5c-125cc3b78c4f-combined-ca-bundle\") pod \"keystone-676ffb979c-dk4rh\" (UID: \"8f66c590-b19e-4188-bf5c-125cc3b78c4f\") " pod="openstack/keystone-676ffb979c-dk4rh" Feb 27 17:37:06 crc kubenswrapper[4830]: I0227 17:37:06.434203 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8f66c590-b19e-4188-bf5c-125cc3b78c4f-fernet-keys\") pod \"keystone-676ffb979c-dk4rh\" (UID: \"8f66c590-b19e-4188-bf5c-125cc3b78c4f\") " pod="openstack/keystone-676ffb979c-dk4rh" Feb 27 17:37:06 crc kubenswrapper[4830]: I0227 17:37:06.536269 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8f66c590-b19e-4188-bf5c-125cc3b78c4f-credential-keys\") pod \"keystone-676ffb979c-dk4rh\" (UID: \"8f66c590-b19e-4188-bf5c-125cc3b78c4f\") " pod="openstack/keystone-676ffb979c-dk4rh" Feb 27 17:37:06 crc kubenswrapper[4830]: I0227 17:37:06.536390 4830 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f66c590-b19e-4188-bf5c-125cc3b78c4f-scripts\") pod \"keystone-676ffb979c-dk4rh\" (UID: \"8f66c590-b19e-4188-bf5c-125cc3b78c4f\") " pod="openstack/keystone-676ffb979c-dk4rh" Feb 27 17:37:06 crc kubenswrapper[4830]: I0227 17:37:06.536421 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f66c590-b19e-4188-bf5c-125cc3b78c4f-combined-ca-bundle\") pod \"keystone-676ffb979c-dk4rh\" (UID: \"8f66c590-b19e-4188-bf5c-125cc3b78c4f\") " pod="openstack/keystone-676ffb979c-dk4rh" Feb 27 17:37:06 crc kubenswrapper[4830]: I0227 17:37:06.536456 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8f66c590-b19e-4188-bf5c-125cc3b78c4f-fernet-keys\") pod \"keystone-676ffb979c-dk4rh\" (UID: \"8f66c590-b19e-4188-bf5c-125cc3b78c4f\") " pod="openstack/keystone-676ffb979c-dk4rh" Feb 27 17:37:06 crc kubenswrapper[4830]: I0227 17:37:06.536528 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f66c590-b19e-4188-bf5c-125cc3b78c4f-config-data\") pod \"keystone-676ffb979c-dk4rh\" (UID: \"8f66c590-b19e-4188-bf5c-125cc3b78c4f\") " pod="openstack/keystone-676ffb979c-dk4rh" Feb 27 17:37:06 crc kubenswrapper[4830]: I0227 17:37:06.536568 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8jks6\" (UniqueName: \"kubernetes.io/projected/8f66c590-b19e-4188-bf5c-125cc3b78c4f-kube-api-access-8jks6\") pod \"keystone-676ffb979c-dk4rh\" (UID: \"8f66c590-b19e-4188-bf5c-125cc3b78c4f\") " pod="openstack/keystone-676ffb979c-dk4rh" Feb 27 17:37:06 crc kubenswrapper[4830]: I0227 17:37:06.541773 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/8f66c590-b19e-4188-bf5c-125cc3b78c4f-credential-keys\") pod \"keystone-676ffb979c-dk4rh\" (UID: \"8f66c590-b19e-4188-bf5c-125cc3b78c4f\") " pod="openstack/keystone-676ffb979c-dk4rh" Feb 27 17:37:06 crc kubenswrapper[4830]: I0227 17:37:06.542155 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f66c590-b19e-4188-bf5c-125cc3b78c4f-combined-ca-bundle\") pod \"keystone-676ffb979c-dk4rh\" (UID: \"8f66c590-b19e-4188-bf5c-125cc3b78c4f\") " pod="openstack/keystone-676ffb979c-dk4rh" Feb 27 17:37:06 crc kubenswrapper[4830]: I0227 17:37:06.542681 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8f66c590-b19e-4188-bf5c-125cc3b78c4f-fernet-keys\") pod \"keystone-676ffb979c-dk4rh\" (UID: \"8f66c590-b19e-4188-bf5c-125cc3b78c4f\") " pod="openstack/keystone-676ffb979c-dk4rh" Feb 27 17:37:06 crc kubenswrapper[4830]: I0227 17:37:06.544437 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f66c590-b19e-4188-bf5c-125cc3b78c4f-scripts\") pod \"keystone-676ffb979c-dk4rh\" (UID: \"8f66c590-b19e-4188-bf5c-125cc3b78c4f\") " pod="openstack/keystone-676ffb979c-dk4rh" Feb 27 17:37:06 crc kubenswrapper[4830]: I0227 17:37:06.548755 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f66c590-b19e-4188-bf5c-125cc3b78c4f-config-data\") pod \"keystone-676ffb979c-dk4rh\" (UID: \"8f66c590-b19e-4188-bf5c-125cc3b78c4f\") " pod="openstack/keystone-676ffb979c-dk4rh" Feb 27 17:37:06 crc kubenswrapper[4830]: I0227 17:37:06.559684 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8jks6\" (UniqueName: \"kubernetes.io/projected/8f66c590-b19e-4188-bf5c-125cc3b78c4f-kube-api-access-8jks6\") pod \"keystone-676ffb979c-dk4rh\" (UID: 
\"8f66c590-b19e-4188-bf5c-125cc3b78c4f\") " pod="openstack/keystone-676ffb979c-dk4rh" Feb 27 17:37:06 crc kubenswrapper[4830]: I0227 17:37:06.641005 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-676ffb979c-dk4rh" Feb 27 17:37:07 crc kubenswrapper[4830]: I0227 17:37:07.199319 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-676ffb979c-dk4rh"] Feb 27 17:37:07 crc kubenswrapper[4830]: I0227 17:37:07.687666 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-676ffb979c-dk4rh" event={"ID":"8f66c590-b19e-4188-bf5c-125cc3b78c4f","Type":"ContainerStarted","Data":"d7c3727c1c28d29b98308f5025fdca577bf87ce29226ded87ab1d2657d7bcb5d"} Feb 27 17:37:07 crc kubenswrapper[4830]: I0227 17:37:07.688274 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-676ffb979c-dk4rh" event={"ID":"8f66c590-b19e-4188-bf5c-125cc3b78c4f","Type":"ContainerStarted","Data":"7c8d42bb2ce87fe084c9312233594e11b4523a8bedb1966c4dfc4efaf80a2344"} Feb 27 17:37:07 crc kubenswrapper[4830]: I0227 17:37:07.688345 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-676ffb979c-dk4rh" Feb 27 17:37:07 crc kubenswrapper[4830]: I0227 17:37:07.718033 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-676ffb979c-dk4rh" podStartSLOduration=1.717994366 podStartE2EDuration="1.717994366s" podCreationTimestamp="2026-02-27 17:37:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 17:37:07.711748605 +0000 UTC m=+5423.801021108" watchObservedRunningTime="2026-02-27 17:37:07.717994366 +0000 UTC m=+5423.807266869" Feb 27 17:37:38 crc kubenswrapper[4830]: I0227 17:37:38.014910 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-676ffb979c-dk4rh" Feb 27 17:37:42 crc 
kubenswrapper[4830]: I0227 17:37:42.228746 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Feb 27 17:37:42 crc kubenswrapper[4830]: I0227 17:37:42.231756 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 27 17:37:42 crc kubenswrapper[4830]: I0227 17:37:42.236215 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Feb 27 17:37:42 crc kubenswrapper[4830]: I0227 17:37:42.236720 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-jxxw5" Feb 27 17:37:42 crc kubenswrapper[4830]: I0227 17:37:42.239638 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Feb 27 17:37:42 crc kubenswrapper[4830]: I0227 17:37:42.245514 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 27 17:37:42 crc kubenswrapper[4830]: I0227 17:37:42.277353 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/5f45a253-e0e6-49aa-9c48-8c57b3639130-openstack-config\") pod \"openstackclient\" (UID: \"5f45a253-e0e6-49aa-9c48-8c57b3639130\") " pod="openstack/openstackclient" Feb 27 17:37:42 crc kubenswrapper[4830]: I0227 17:37:42.277505 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4zz8\" (UniqueName: \"kubernetes.io/projected/5f45a253-e0e6-49aa-9c48-8c57b3639130-kube-api-access-m4zz8\") pod \"openstackclient\" (UID: \"5f45a253-e0e6-49aa-9c48-8c57b3639130\") " pod="openstack/openstackclient" Feb 27 17:37:42 crc kubenswrapper[4830]: I0227 17:37:42.277732 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: 
\"kubernetes.io/secret/5f45a253-e0e6-49aa-9c48-8c57b3639130-openstack-config-secret\") pod \"openstackclient\" (UID: \"5f45a253-e0e6-49aa-9c48-8c57b3639130\") " pod="openstack/openstackclient" Feb 27 17:37:42 crc kubenswrapper[4830]: I0227 17:37:42.378708 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/5f45a253-e0e6-49aa-9c48-8c57b3639130-openstack-config\") pod \"openstackclient\" (UID: \"5f45a253-e0e6-49aa-9c48-8c57b3639130\") " pod="openstack/openstackclient" Feb 27 17:37:42 crc kubenswrapper[4830]: I0227 17:37:42.379217 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m4zz8\" (UniqueName: \"kubernetes.io/projected/5f45a253-e0e6-49aa-9c48-8c57b3639130-kube-api-access-m4zz8\") pod \"openstackclient\" (UID: \"5f45a253-e0e6-49aa-9c48-8c57b3639130\") " pod="openstack/openstackclient" Feb 27 17:37:42 crc kubenswrapper[4830]: I0227 17:37:42.379541 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/5f45a253-e0e6-49aa-9c48-8c57b3639130-openstack-config-secret\") pod \"openstackclient\" (UID: \"5f45a253-e0e6-49aa-9c48-8c57b3639130\") " pod="openstack/openstackclient" Feb 27 17:37:42 crc kubenswrapper[4830]: I0227 17:37:42.380677 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/5f45a253-e0e6-49aa-9c48-8c57b3639130-openstack-config\") pod \"openstackclient\" (UID: \"5f45a253-e0e6-49aa-9c48-8c57b3639130\") " pod="openstack/openstackclient" Feb 27 17:37:42 crc kubenswrapper[4830]: I0227 17:37:42.391267 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/5f45a253-e0e6-49aa-9c48-8c57b3639130-openstack-config-secret\") pod \"openstackclient\" (UID: 
\"5f45a253-e0e6-49aa-9c48-8c57b3639130\") " pod="openstack/openstackclient" Feb 27 17:37:42 crc kubenswrapper[4830]: I0227 17:37:42.408281 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4zz8\" (UniqueName: \"kubernetes.io/projected/5f45a253-e0e6-49aa-9c48-8c57b3639130-kube-api-access-m4zz8\") pod \"openstackclient\" (UID: \"5f45a253-e0e6-49aa-9c48-8c57b3639130\") " pod="openstack/openstackclient" Feb 27 17:37:42 crc kubenswrapper[4830]: I0227 17:37:42.574464 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 27 17:37:43 crc kubenswrapper[4830]: I0227 17:37:43.130540 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 27 17:37:44 crc kubenswrapper[4830]: I0227 17:37:44.146171 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"5f45a253-e0e6-49aa-9c48-8c57b3639130","Type":"ContainerStarted","Data":"453bd6cfd132d0dc0f8b96af15c32bd0a0d75846bad555fcb8e37140409ee65a"} Feb 27 17:37:44 crc kubenswrapper[4830]: I0227 17:37:44.146739 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"5f45a253-e0e6-49aa-9c48-8c57b3639130","Type":"ContainerStarted","Data":"62128e4d60b7fd5b762a7ae52ba42b6cbf828f80cfec59061bbc2cb1caa1f7d4"} Feb 27 17:37:44 crc kubenswrapper[4830]: I0227 17:37:44.181342 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.181312835 podStartE2EDuration="2.181312835s" podCreationTimestamp="2026-02-27 17:37:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 17:37:44.174080771 +0000 UTC m=+5460.263353304" watchObservedRunningTime="2026-02-27 17:37:44.181312835 +0000 UTC m=+5460.270585328" Feb 27 17:38:00 crc kubenswrapper[4830]: I0227 17:38:00.172712 4830 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536898-vrwjs"] Feb 27 17:38:00 crc kubenswrapper[4830]: I0227 17:38:00.177532 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536898-vrwjs" Feb 27 17:38:00 crc kubenswrapper[4830]: I0227 17:38:00.180527 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 17:38:00 crc kubenswrapper[4830]: I0227 17:38:00.181368 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 17:38:00 crc kubenswrapper[4830]: I0227 17:38:00.182372 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lmbtm" Feb 27 17:38:00 crc kubenswrapper[4830]: I0227 17:38:00.197095 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536898-vrwjs"] Feb 27 17:38:00 crc kubenswrapper[4830]: I0227 17:38:00.211183 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdb7h\" (UniqueName: \"kubernetes.io/projected/204eb1af-36ad-4de7-9da7-9a37fefd3026-kube-api-access-mdb7h\") pod \"auto-csr-approver-29536898-vrwjs\" (UID: \"204eb1af-36ad-4de7-9da7-9a37fefd3026\") " pod="openshift-infra/auto-csr-approver-29536898-vrwjs" Feb 27 17:38:00 crc kubenswrapper[4830]: I0227 17:38:00.312555 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mdb7h\" (UniqueName: \"kubernetes.io/projected/204eb1af-36ad-4de7-9da7-9a37fefd3026-kube-api-access-mdb7h\") pod \"auto-csr-approver-29536898-vrwjs\" (UID: \"204eb1af-36ad-4de7-9da7-9a37fefd3026\") " pod="openshift-infra/auto-csr-approver-29536898-vrwjs" Feb 27 17:38:00 crc kubenswrapper[4830]: I0227 17:38:00.347626 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-mdb7h\" (UniqueName: \"kubernetes.io/projected/204eb1af-36ad-4de7-9da7-9a37fefd3026-kube-api-access-mdb7h\") pod \"auto-csr-approver-29536898-vrwjs\" (UID: \"204eb1af-36ad-4de7-9da7-9a37fefd3026\") " pod="openshift-infra/auto-csr-approver-29536898-vrwjs" Feb 27 17:38:00 crc kubenswrapper[4830]: I0227 17:38:00.510896 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536898-vrwjs" Feb 27 17:38:01 crc kubenswrapper[4830]: I0227 17:38:01.034091 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536898-vrwjs"] Feb 27 17:38:01 crc kubenswrapper[4830]: I0227 17:38:01.041881 4830 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 27 17:38:01 crc kubenswrapper[4830]: I0227 17:38:01.358444 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536898-vrwjs" event={"ID":"204eb1af-36ad-4de7-9da7-9a37fefd3026","Type":"ContainerStarted","Data":"3db9d6aea1c2c387a3f3cb880ea977586521a7ea06db01806d487256c1900006"} Feb 27 17:38:02 crc kubenswrapper[4830]: E0227 17:38:02.040399 4830 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 27 17:38:02 crc kubenswrapper[4830]: E0227 17:38:02.041469 4830 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 17:38:02 crc kubenswrapper[4830]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs 
--no-run-if-empty oc adm certificate approve Feb 27 17:38:02 crc kubenswrapper[4830]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mdb7h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29536898-vrwjs_openshift-infra(204eb1af-36ad-4de7-9da7-9a37fefd3026): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 27 17:38:02 crc kubenswrapper[4830]: > logger="UnhandledError" Feb 27 17:38:02 crc kubenswrapper[4830]: E0227 17:38:02.042714 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29536898-vrwjs" podUID="204eb1af-36ad-4de7-9da7-9a37fefd3026" Feb 27 17:38:02 crc kubenswrapper[4830]: E0227 17:38:02.380671 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: 
\"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536898-vrwjs" podUID="204eb1af-36ad-4de7-9da7-9a37fefd3026" Feb 27 17:38:03 crc kubenswrapper[4830]: I0227 17:38:03.160593 4830 patch_prober.go:28] interesting pod/machine-config-daemon-2tv5v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 17:38:03 crc kubenswrapper[4830]: I0227 17:38:03.160698 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 17:38:17 crc kubenswrapper[4830]: I0227 17:38:17.581257 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-gvnvz"] Feb 27 17:38:17 crc kubenswrapper[4830]: I0227 17:38:17.584087 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gvnvz" Feb 27 17:38:17 crc kubenswrapper[4830]: I0227 17:38:17.599749 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gvnvz"] Feb 27 17:38:17 crc kubenswrapper[4830]: I0227 17:38:17.728987 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1c73a78-1e95-4481-a273-ba7e3b5a127c-utilities\") pod \"redhat-marketplace-gvnvz\" (UID: \"f1c73a78-1e95-4481-a273-ba7e3b5a127c\") " pod="openshift-marketplace/redhat-marketplace-gvnvz" Feb 27 17:38:17 crc kubenswrapper[4830]: I0227 17:38:17.729085 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1c73a78-1e95-4481-a273-ba7e3b5a127c-catalog-content\") pod \"redhat-marketplace-gvnvz\" (UID: \"f1c73a78-1e95-4481-a273-ba7e3b5a127c\") " pod="openshift-marketplace/redhat-marketplace-gvnvz" Feb 27 17:38:17 crc kubenswrapper[4830]: I0227 17:38:17.729655 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m48cm\" (UniqueName: \"kubernetes.io/projected/f1c73a78-1e95-4481-a273-ba7e3b5a127c-kube-api-access-m48cm\") pod \"redhat-marketplace-gvnvz\" (UID: \"f1c73a78-1e95-4481-a273-ba7e3b5a127c\") " pod="openshift-marketplace/redhat-marketplace-gvnvz" Feb 27 17:38:17 crc kubenswrapper[4830]: I0227 17:38:17.832704 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m48cm\" (UniqueName: \"kubernetes.io/projected/f1c73a78-1e95-4481-a273-ba7e3b5a127c-kube-api-access-m48cm\") pod \"redhat-marketplace-gvnvz\" (UID: \"f1c73a78-1e95-4481-a273-ba7e3b5a127c\") " pod="openshift-marketplace/redhat-marketplace-gvnvz" Feb 27 17:38:17 crc kubenswrapper[4830]: I0227 17:38:17.832883 4830 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1c73a78-1e95-4481-a273-ba7e3b5a127c-utilities\") pod \"redhat-marketplace-gvnvz\" (UID: \"f1c73a78-1e95-4481-a273-ba7e3b5a127c\") " pod="openshift-marketplace/redhat-marketplace-gvnvz" Feb 27 17:38:17 crc kubenswrapper[4830]: I0227 17:38:17.832993 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1c73a78-1e95-4481-a273-ba7e3b5a127c-catalog-content\") pod \"redhat-marketplace-gvnvz\" (UID: \"f1c73a78-1e95-4481-a273-ba7e3b5a127c\") " pod="openshift-marketplace/redhat-marketplace-gvnvz" Feb 27 17:38:17 crc kubenswrapper[4830]: I0227 17:38:17.834023 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1c73a78-1e95-4481-a273-ba7e3b5a127c-catalog-content\") pod \"redhat-marketplace-gvnvz\" (UID: \"f1c73a78-1e95-4481-a273-ba7e3b5a127c\") " pod="openshift-marketplace/redhat-marketplace-gvnvz" Feb 27 17:38:17 crc kubenswrapper[4830]: I0227 17:38:17.834024 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1c73a78-1e95-4481-a273-ba7e3b5a127c-utilities\") pod \"redhat-marketplace-gvnvz\" (UID: \"f1c73a78-1e95-4481-a273-ba7e3b5a127c\") " pod="openshift-marketplace/redhat-marketplace-gvnvz" Feb 27 17:38:17 crc kubenswrapper[4830]: I0227 17:38:17.863676 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m48cm\" (UniqueName: \"kubernetes.io/projected/f1c73a78-1e95-4481-a273-ba7e3b5a127c-kube-api-access-m48cm\") pod \"redhat-marketplace-gvnvz\" (UID: \"f1c73a78-1e95-4481-a273-ba7e3b5a127c\") " pod="openshift-marketplace/redhat-marketplace-gvnvz" Feb 27 17:38:17 crc kubenswrapper[4830]: I0227 17:38:17.927068 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gvnvz" Feb 27 17:38:18 crc kubenswrapper[4830]: I0227 17:38:18.412828 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gvnvz"] Feb 27 17:38:18 crc kubenswrapper[4830]: I0227 17:38:18.608936 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gvnvz" event={"ID":"f1c73a78-1e95-4481-a273-ba7e3b5a127c","Type":"ContainerStarted","Data":"8067f7bc39866687dce562e122ca85297eea79801f1d919ce1d8cf42af4d53c7"} Feb 27 17:38:19 crc kubenswrapper[4830]: I0227 17:38:19.627388 4830 generic.go:334] "Generic (PLEG): container finished" podID="f1c73a78-1e95-4481-a273-ba7e3b5a127c" containerID="706b3557b618cda4f51cdbcde480fe025d87a38446876837cc03418c665b3fc5" exitCode=0 Feb 27 17:38:19 crc kubenswrapper[4830]: I0227 17:38:19.627464 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gvnvz" event={"ID":"f1c73a78-1e95-4481-a273-ba7e3b5a127c","Type":"ContainerDied","Data":"706b3557b618cda4f51cdbcde480fe025d87a38446876837cc03418c665b3fc5"} Feb 27 17:38:20 crc kubenswrapper[4830]: E0227 17:38:20.389184 4830 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-marketplace-index@sha256=e848a00af7690cfa41500b98e0e7a0b9738ce0af7b6b4fee3ea20e0838523c30/signature-2: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 27 17:38:20 crc kubenswrapper[4830]: E0227 17:38:20.389855 4830 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog 
--cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m48cm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-gvnvz_openshift-marketplace(f1c73a78-1e95-4481-a273-ba7e3b5a127c): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-marketplace-index@sha256=e848a00af7690cfa41500b98e0e7a0b9738ce0af7b6b4fee3ea20e0838523c30/signature-2: status 500 (Internal Server Error)" logger="UnhandledError" Feb 27 17:38:20 crc kubenswrapper[4830]: E0227 17:38:20.391091 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest 
list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-marketplace-index@sha256=e848a00af7690cfa41500b98e0e7a0b9738ce0af7b6b4fee3ea20e0838523c30/signature-2: status 500 (Internal Server Error)\"" pod="openshift-marketplace/redhat-marketplace-gvnvz" podUID="f1c73a78-1e95-4481-a273-ba7e3b5a127c" Feb 27 17:38:20 crc kubenswrapper[4830]: E0227 17:38:20.645064 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-gvnvz" podUID="f1c73a78-1e95-4481-a273-ba7e3b5a127c" Feb 27 17:38:31 crc kubenswrapper[4830]: E0227 17:38:31.494636 4830 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.129.56.36:52728->38.129.56.36:42557: write tcp 38.129.56.36:52728->38.129.56.36:42557: write: broken pipe Feb 27 17:38:33 crc kubenswrapper[4830]: I0227 17:38:33.160249 4830 patch_prober.go:28] interesting pod/machine-config-daemon-2tv5v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 17:38:33 crc kubenswrapper[4830]: I0227 17:38:33.160838 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 17:38:33 crc kubenswrapper[4830]: E0227 17:38:33.907889 4830 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from 
https://registry.redhat.io/containers/sigstore/redhat/redhat-marketplace-index@sha256=e848a00af7690cfa41500b98e0e7a0b9738ce0af7b6b4fee3ea20e0838523c30/signature-2: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 27 17:38:33 crc kubenswrapper[4830]: E0227 17:38:33.908092 4830 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m48cm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
redhat-marketplace-gvnvz_openshift-marketplace(f1c73a78-1e95-4481-a273-ba7e3b5a127c): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-marketplace-index@sha256=e848a00af7690cfa41500b98e0e7a0b9738ce0af7b6b4fee3ea20e0838523c30/signature-2: status 500 (Internal Server Error)" logger="UnhandledError" Feb 27 17:38:33 crc kubenswrapper[4830]: E0227 17:38:33.909602 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-marketplace-index@sha256=e848a00af7690cfa41500b98e0e7a0b9738ce0af7b6b4fee3ea20e0838523c30/signature-2: status 500 (Internal Server Error)\"" pod="openshift-marketplace/redhat-marketplace-gvnvz" podUID="f1c73a78-1e95-4481-a273-ba7e3b5a127c" Feb 27 17:38:47 crc kubenswrapper[4830]: E0227 17:38:47.765265 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-gvnvz" podUID="f1c73a78-1e95-4481-a273-ba7e3b5a127c" Feb 27 17:38:49 crc kubenswrapper[4830]: E0227 17:38:49.249582 4830 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 27 17:38:49 crc kubenswrapper[4830]: E0227 17:38:49.249824 4830 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 17:38:49 
crc kubenswrapper[4830]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 27 17:38:49 crc kubenswrapper[4830]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mdb7h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29536898-vrwjs_openshift-infra(204eb1af-36ad-4de7-9da7-9a37fefd3026): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 27 17:38:49 crc kubenswrapper[4830]: > logger="UnhandledError" Feb 27 17:38:49 crc kubenswrapper[4830]: E0227 17:38:49.251145 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" 
pod="openshift-infra/auto-csr-approver-29536898-vrwjs" podUID="204eb1af-36ad-4de7-9da7-9a37fefd3026" Feb 27 17:38:53 crc kubenswrapper[4830]: I0227 17:38:53.096193 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-5ntzt"] Feb 27 17:38:53 crc kubenswrapper[4830]: I0227 17:38:53.115034 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-5ntzt"] Feb 27 17:38:54 crc kubenswrapper[4830]: I0227 17:38:54.776973 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cfce94bc-640b-4eb2-88a1-b77db6d2dd03" path="/var/lib/kubelet/pods/cfce94bc-640b-4eb2-88a1-b77db6d2dd03/volumes" Feb 27 17:39:00 crc kubenswrapper[4830]: E0227 17:39:00.765657 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536898-vrwjs" podUID="204eb1af-36ad-4de7-9da7-9a37fefd3026" Feb 27 17:39:03 crc kubenswrapper[4830]: I0227 17:39:03.160536 4830 patch_prober.go:28] interesting pod/machine-config-daemon-2tv5v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 17:39:03 crc kubenswrapper[4830]: I0227 17:39:03.160636 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 17:39:03 crc kubenswrapper[4830]: I0227 17:39:03.160704 4830 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" Feb 27 17:39:03 crc kubenswrapper[4830]: I0227 17:39:03.161584 4830 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"22fbcacd37ad840c90f07fc1e16c44d308f846d0fbace0b7a3cfa023009541af"} pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 27 17:39:03 crc kubenswrapper[4830]: I0227 17:39:03.161670 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" containerName="machine-config-daemon" containerID="cri-o://22fbcacd37ad840c90f07fc1e16c44d308f846d0fbace0b7a3cfa023009541af" gracePeriod=600 Feb 27 17:39:04 crc kubenswrapper[4830]: I0227 17:39:04.161055 4830 generic.go:334] "Generic (PLEG): container finished" podID="00d6b7ce-4757-4275-8345-60c1b546ce25" containerID="22fbcacd37ad840c90f07fc1e16c44d308f846d0fbace0b7a3cfa023009541af" exitCode=0 Feb 27 17:39:04 crc kubenswrapper[4830]: I0227 17:39:04.161168 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" event={"ID":"00d6b7ce-4757-4275-8345-60c1b546ce25","Type":"ContainerDied","Data":"22fbcacd37ad840c90f07fc1e16c44d308f846d0fbace0b7a3cfa023009541af"} Feb 27 17:39:04 crc kubenswrapper[4830]: I0227 17:39:04.161861 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" event={"ID":"00d6b7ce-4757-4275-8345-60c1b546ce25","Type":"ContainerStarted","Data":"0f73af46b765b3d98f3f5e9883b49887ac4f2ac3485e91a001e18c5d351646d8"} Feb 27 17:39:04 crc kubenswrapper[4830]: I0227 17:39:04.161885 4830 scope.go:117] "RemoveContainer" 
containerID="313fc4aac8e61b508c32aefdb01d7489ea5d8194a163882d5d4b5ec4665839cb" Feb 27 17:39:14 crc kubenswrapper[4830]: E0227 17:39:14.854917 4830 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 27 17:39:14 crc kubenswrapper[4830]: E0227 17:39:14.855808 4830 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 17:39:14 crc kubenswrapper[4830]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 27 17:39:14 crc kubenswrapper[4830]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mdb7h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29536898-vrwjs_openshift-infra(204eb1af-36ad-4de7-9da7-9a37fefd3026): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from 
https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 27 17:39:14 crc kubenswrapper[4830]: > logger="UnhandledError" Feb 27 17:39:14 crc kubenswrapper[4830]: E0227 17:39:14.857494 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29536898-vrwjs" podUID="204eb1af-36ad-4de7-9da7-9a37fefd3026" Feb 27 17:39:26 crc kubenswrapper[4830]: E0227 17:39:26.766153 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536898-vrwjs" podUID="204eb1af-36ad-4de7-9da7-9a37fefd3026" Feb 27 17:39:31 crc kubenswrapper[4830]: I0227 17:39:31.032547 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-5ba2-account-create-update-jmd88"] Feb 27 17:39:31 crc kubenswrapper[4830]: I0227 17:39:31.035178 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-5ba2-account-create-update-jmd88" Feb 27 17:39:31 crc kubenswrapper[4830]: I0227 17:39:31.038271 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Feb 27 17:39:31 crc kubenswrapper[4830]: I0227 17:39:31.065566 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-9b84v"] Feb 27 17:39:31 crc kubenswrapper[4830]: I0227 17:39:31.067355 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-9b84v" Feb 27 17:39:31 crc kubenswrapper[4830]: I0227 17:39:31.071545 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-9b84v"] Feb 27 17:39:31 crc kubenswrapper[4830]: I0227 17:39:31.084818 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-5ba2-account-create-update-jmd88"] Feb 27 17:39:31 crc kubenswrapper[4830]: I0227 17:39:31.198209 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cd776b00-a862-464a-b2f5-bd60682f924c-operator-scripts\") pod \"barbican-5ba2-account-create-update-jmd88\" (UID: \"cd776b00-a862-464a-b2f5-bd60682f924c\") " pod="openstack/barbican-5ba2-account-create-update-jmd88" Feb 27 17:39:31 crc kubenswrapper[4830]: I0227 17:39:31.198473 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b82dc08e-a7da-4563-af1b-25e6f06b353a-operator-scripts\") pod \"barbican-db-create-9b84v\" (UID: \"b82dc08e-a7da-4563-af1b-25e6f06b353a\") " pod="openstack/barbican-db-create-9b84v" Feb 27 17:39:31 crc kubenswrapper[4830]: I0227 17:39:31.198606 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s27g4\" (UniqueName: \"kubernetes.io/projected/cd776b00-a862-464a-b2f5-bd60682f924c-kube-api-access-s27g4\") pod \"barbican-5ba2-account-create-update-jmd88\" (UID: \"cd776b00-a862-464a-b2f5-bd60682f924c\") " pod="openstack/barbican-5ba2-account-create-update-jmd88" Feb 27 17:39:31 crc kubenswrapper[4830]: I0227 17:39:31.199469 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5kmb\" (UniqueName: \"kubernetes.io/projected/b82dc08e-a7da-4563-af1b-25e6f06b353a-kube-api-access-d5kmb\") pod 
\"barbican-db-create-9b84v\" (UID: \"b82dc08e-a7da-4563-af1b-25e6f06b353a\") " pod="openstack/barbican-db-create-9b84v" Feb 27 17:39:31 crc kubenswrapper[4830]: I0227 17:39:31.301664 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cd776b00-a862-464a-b2f5-bd60682f924c-operator-scripts\") pod \"barbican-5ba2-account-create-update-jmd88\" (UID: \"cd776b00-a862-464a-b2f5-bd60682f924c\") " pod="openstack/barbican-5ba2-account-create-update-jmd88" Feb 27 17:39:31 crc kubenswrapper[4830]: I0227 17:39:31.301789 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b82dc08e-a7da-4563-af1b-25e6f06b353a-operator-scripts\") pod \"barbican-db-create-9b84v\" (UID: \"b82dc08e-a7da-4563-af1b-25e6f06b353a\") " pod="openstack/barbican-db-create-9b84v" Feb 27 17:39:31 crc kubenswrapper[4830]: I0227 17:39:31.301824 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s27g4\" (UniqueName: \"kubernetes.io/projected/cd776b00-a862-464a-b2f5-bd60682f924c-kube-api-access-s27g4\") pod \"barbican-5ba2-account-create-update-jmd88\" (UID: \"cd776b00-a862-464a-b2f5-bd60682f924c\") " pod="openstack/barbican-5ba2-account-create-update-jmd88" Feb 27 17:39:31 crc kubenswrapper[4830]: I0227 17:39:31.301852 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5kmb\" (UniqueName: \"kubernetes.io/projected/b82dc08e-a7da-4563-af1b-25e6f06b353a-kube-api-access-d5kmb\") pod \"barbican-db-create-9b84v\" (UID: \"b82dc08e-a7da-4563-af1b-25e6f06b353a\") " pod="openstack/barbican-db-create-9b84v" Feb 27 17:39:31 crc kubenswrapper[4830]: I0227 17:39:31.303308 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/cd776b00-a862-464a-b2f5-bd60682f924c-operator-scripts\") pod \"barbican-5ba2-account-create-update-jmd88\" (UID: \"cd776b00-a862-464a-b2f5-bd60682f924c\") " pod="openstack/barbican-5ba2-account-create-update-jmd88" Feb 27 17:39:31 crc kubenswrapper[4830]: I0227 17:39:31.303460 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b82dc08e-a7da-4563-af1b-25e6f06b353a-operator-scripts\") pod \"barbican-db-create-9b84v\" (UID: \"b82dc08e-a7da-4563-af1b-25e6f06b353a\") " pod="openstack/barbican-db-create-9b84v" Feb 27 17:39:31 crc kubenswrapper[4830]: I0227 17:39:31.337890 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s27g4\" (UniqueName: \"kubernetes.io/projected/cd776b00-a862-464a-b2f5-bd60682f924c-kube-api-access-s27g4\") pod \"barbican-5ba2-account-create-update-jmd88\" (UID: \"cd776b00-a862-464a-b2f5-bd60682f924c\") " pod="openstack/barbican-5ba2-account-create-update-jmd88" Feb 27 17:39:31 crc kubenswrapper[4830]: I0227 17:39:31.338448 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5kmb\" (UniqueName: \"kubernetes.io/projected/b82dc08e-a7da-4563-af1b-25e6f06b353a-kube-api-access-d5kmb\") pod \"barbican-db-create-9b84v\" (UID: \"b82dc08e-a7da-4563-af1b-25e6f06b353a\") " pod="openstack/barbican-db-create-9b84v" Feb 27 17:39:31 crc kubenswrapper[4830]: I0227 17:39:31.379966 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-5ba2-account-create-update-jmd88" Feb 27 17:39:31 crc kubenswrapper[4830]: I0227 17:39:31.411976 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-9b84v" Feb 27 17:39:31 crc kubenswrapper[4830]: I0227 17:39:31.748956 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-9b84v"] Feb 27 17:39:31 crc kubenswrapper[4830]: I0227 17:39:31.900368 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-5ba2-account-create-update-jmd88"] Feb 27 17:39:31 crc kubenswrapper[4830]: W0227 17:39:31.901772 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcd776b00_a862_464a_b2f5_bd60682f924c.slice/crio-755c669acdd021c4bb834fd55712c7fc5956abfb0b17c7bd52f76994cebfd8c2 WatchSource:0}: Error finding container 755c669acdd021c4bb834fd55712c7fc5956abfb0b17c7bd52f76994cebfd8c2: Status 404 returned error can't find the container with id 755c669acdd021c4bb834fd55712c7fc5956abfb0b17c7bd52f76994cebfd8c2 Feb 27 17:39:32 crc kubenswrapper[4830]: I0227 17:39:32.521019 4830 generic.go:334] "Generic (PLEG): container finished" podID="b82dc08e-a7da-4563-af1b-25e6f06b353a" containerID="66d60c3b592c6831df559473b6d404fbaf00c6d1b56cf75eadbe991ab774a372" exitCode=0 Feb 27 17:39:32 crc kubenswrapper[4830]: I0227 17:39:32.521122 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-9b84v" event={"ID":"b82dc08e-a7da-4563-af1b-25e6f06b353a","Type":"ContainerDied","Data":"66d60c3b592c6831df559473b6d404fbaf00c6d1b56cf75eadbe991ab774a372"} Feb 27 17:39:32 crc kubenswrapper[4830]: I0227 17:39:32.521699 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-9b84v" event={"ID":"b82dc08e-a7da-4563-af1b-25e6f06b353a","Type":"ContainerStarted","Data":"8c67616aaab0d361f13cdfee19dffa9c426a49c95f197dfde8ce061a0f84cce9"} Feb 27 17:39:32 crc kubenswrapper[4830]: I0227 17:39:32.523998 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-5ba2-account-create-update-jmd88" 
event={"ID":"cd776b00-a862-464a-b2f5-bd60682f924c","Type":"ContainerStarted","Data":"c8127dfea0e640ca387461852677ae251653369b15e612b21a844f5474210fa1"} Feb 27 17:39:32 crc kubenswrapper[4830]: I0227 17:39:32.524074 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-5ba2-account-create-update-jmd88" event={"ID":"cd776b00-a862-464a-b2f5-bd60682f924c","Type":"ContainerStarted","Data":"755c669acdd021c4bb834fd55712c7fc5956abfb0b17c7bd52f76994cebfd8c2"} Feb 27 17:39:32 crc kubenswrapper[4830]: I0227 17:39:32.572707 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-5ba2-account-create-update-jmd88" podStartSLOduration=1.572685042 podStartE2EDuration="1.572685042s" podCreationTimestamp="2026-02-27 17:39:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 17:39:32.565796537 +0000 UTC m=+5568.655069020" watchObservedRunningTime="2026-02-27 17:39:32.572685042 +0000 UTC m=+5568.661957515" Feb 27 17:39:33 crc kubenswrapper[4830]: I0227 17:39:33.532482 4830 generic.go:334] "Generic (PLEG): container finished" podID="cd776b00-a862-464a-b2f5-bd60682f924c" containerID="c8127dfea0e640ca387461852677ae251653369b15e612b21a844f5474210fa1" exitCode=0 Feb 27 17:39:33 crc kubenswrapper[4830]: I0227 17:39:33.533094 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-5ba2-account-create-update-jmd88" event={"ID":"cd776b00-a862-464a-b2f5-bd60682f924c","Type":"ContainerDied","Data":"c8127dfea0e640ca387461852677ae251653369b15e612b21a844f5474210fa1"} Feb 27 17:39:33 crc kubenswrapper[4830]: I0227 17:39:33.937938 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-9b84v" Feb 27 17:39:34 crc kubenswrapper[4830]: I0227 17:39:34.058822 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d5kmb\" (UniqueName: \"kubernetes.io/projected/b82dc08e-a7da-4563-af1b-25e6f06b353a-kube-api-access-d5kmb\") pod \"b82dc08e-a7da-4563-af1b-25e6f06b353a\" (UID: \"b82dc08e-a7da-4563-af1b-25e6f06b353a\") " Feb 27 17:39:34 crc kubenswrapper[4830]: I0227 17:39:34.059116 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b82dc08e-a7da-4563-af1b-25e6f06b353a-operator-scripts\") pod \"b82dc08e-a7da-4563-af1b-25e6f06b353a\" (UID: \"b82dc08e-a7da-4563-af1b-25e6f06b353a\") " Feb 27 17:39:34 crc kubenswrapper[4830]: I0227 17:39:34.059882 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b82dc08e-a7da-4563-af1b-25e6f06b353a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b82dc08e-a7da-4563-af1b-25e6f06b353a" (UID: "b82dc08e-a7da-4563-af1b-25e6f06b353a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:39:34 crc kubenswrapper[4830]: I0227 17:39:34.072249 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b82dc08e-a7da-4563-af1b-25e6f06b353a-kube-api-access-d5kmb" (OuterVolumeSpecName: "kube-api-access-d5kmb") pod "b82dc08e-a7da-4563-af1b-25e6f06b353a" (UID: "b82dc08e-a7da-4563-af1b-25e6f06b353a"). InnerVolumeSpecName "kube-api-access-d5kmb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:39:34 crc kubenswrapper[4830]: I0227 17:39:34.161275 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d5kmb\" (UniqueName: \"kubernetes.io/projected/b82dc08e-a7da-4563-af1b-25e6f06b353a-kube-api-access-d5kmb\") on node \"crc\" DevicePath \"\"" Feb 27 17:39:34 crc kubenswrapper[4830]: I0227 17:39:34.161324 4830 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b82dc08e-a7da-4563-af1b-25e6f06b353a-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 17:39:34 crc kubenswrapper[4830]: I0227 17:39:34.545926 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-9b84v" event={"ID":"b82dc08e-a7da-4563-af1b-25e6f06b353a","Type":"ContainerDied","Data":"8c67616aaab0d361f13cdfee19dffa9c426a49c95f197dfde8ce061a0f84cce9"} Feb 27 17:39:34 crc kubenswrapper[4830]: I0227 17:39:34.546371 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8c67616aaab0d361f13cdfee19dffa9c426a49c95f197dfde8ce061a0f84cce9" Feb 27 17:39:34 crc kubenswrapper[4830]: I0227 17:39:34.545936 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-9b84v" Feb 27 17:39:34 crc kubenswrapper[4830]: I0227 17:39:34.954765 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-5ba2-account-create-update-jmd88" Feb 27 17:39:34 crc kubenswrapper[4830]: I0227 17:39:34.985536 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s27g4\" (UniqueName: \"kubernetes.io/projected/cd776b00-a862-464a-b2f5-bd60682f924c-kube-api-access-s27g4\") pod \"cd776b00-a862-464a-b2f5-bd60682f924c\" (UID: \"cd776b00-a862-464a-b2f5-bd60682f924c\") " Feb 27 17:39:34 crc kubenswrapper[4830]: I0227 17:39:34.985595 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cd776b00-a862-464a-b2f5-bd60682f924c-operator-scripts\") pod \"cd776b00-a862-464a-b2f5-bd60682f924c\" (UID: \"cd776b00-a862-464a-b2f5-bd60682f924c\") " Feb 27 17:39:34 crc kubenswrapper[4830]: I0227 17:39:34.987465 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd776b00-a862-464a-b2f5-bd60682f924c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cd776b00-a862-464a-b2f5-bd60682f924c" (UID: "cd776b00-a862-464a-b2f5-bd60682f924c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:39:34 crc kubenswrapper[4830]: I0227 17:39:34.997081 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd776b00-a862-464a-b2f5-bd60682f924c-kube-api-access-s27g4" (OuterVolumeSpecName: "kube-api-access-s27g4") pod "cd776b00-a862-464a-b2f5-bd60682f924c" (UID: "cd776b00-a862-464a-b2f5-bd60682f924c"). InnerVolumeSpecName "kube-api-access-s27g4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:39:35 crc kubenswrapper[4830]: I0227 17:39:35.087847 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s27g4\" (UniqueName: \"kubernetes.io/projected/cd776b00-a862-464a-b2f5-bd60682f924c-kube-api-access-s27g4\") on node \"crc\" DevicePath \"\"" Feb 27 17:39:35 crc kubenswrapper[4830]: I0227 17:39:35.087883 4830 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cd776b00-a862-464a-b2f5-bd60682f924c-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 17:39:35 crc kubenswrapper[4830]: I0227 17:39:35.558603 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-5ba2-account-create-update-jmd88" event={"ID":"cd776b00-a862-464a-b2f5-bd60682f924c","Type":"ContainerDied","Data":"755c669acdd021c4bb834fd55712c7fc5956abfb0b17c7bd52f76994cebfd8c2"} Feb 27 17:39:35 crc kubenswrapper[4830]: I0227 17:39:35.558649 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="755c669acdd021c4bb834fd55712c7fc5956abfb0b17c7bd52f76994cebfd8c2" Feb 27 17:39:35 crc kubenswrapper[4830]: I0227 17:39:35.558728 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-5ba2-account-create-update-jmd88" Feb 27 17:39:36 crc kubenswrapper[4830]: I0227 17:39:36.416754 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-djgjz"] Feb 27 17:39:36 crc kubenswrapper[4830]: E0227 17:39:36.417314 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd776b00-a862-464a-b2f5-bd60682f924c" containerName="mariadb-account-create-update" Feb 27 17:39:36 crc kubenswrapper[4830]: I0227 17:39:36.417344 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd776b00-a862-464a-b2f5-bd60682f924c" containerName="mariadb-account-create-update" Feb 27 17:39:36 crc kubenswrapper[4830]: E0227 17:39:36.417405 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b82dc08e-a7da-4563-af1b-25e6f06b353a" containerName="mariadb-database-create" Feb 27 17:39:36 crc kubenswrapper[4830]: I0227 17:39:36.417420 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="b82dc08e-a7da-4563-af1b-25e6f06b353a" containerName="mariadb-database-create" Feb 27 17:39:36 crc kubenswrapper[4830]: I0227 17:39:36.417713 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd776b00-a862-464a-b2f5-bd60682f924c" containerName="mariadb-account-create-update" Feb 27 17:39:36 crc kubenswrapper[4830]: I0227 17:39:36.417761 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="b82dc08e-a7da-4563-af1b-25e6f06b353a" containerName="mariadb-database-create" Feb 27 17:39:36 crc kubenswrapper[4830]: I0227 17:39:36.418681 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-djgjz" Feb 27 17:39:36 crc kubenswrapper[4830]: I0227 17:39:36.421528 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 27 17:39:36 crc kubenswrapper[4830]: I0227 17:39:36.421738 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-x4l4t" Feb 27 17:39:36 crc kubenswrapper[4830]: I0227 17:39:36.439653 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-djgjz"] Feb 27 17:39:36 crc kubenswrapper[4830]: I0227 17:39:36.516064 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/840b1cf6-0ffb-47c8-9dac-779004f691b0-db-sync-config-data\") pod \"barbican-db-sync-djgjz\" (UID: \"840b1cf6-0ffb-47c8-9dac-779004f691b0\") " pod="openstack/barbican-db-sync-djgjz" Feb 27 17:39:36 crc kubenswrapper[4830]: I0227 17:39:36.516587 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/840b1cf6-0ffb-47c8-9dac-779004f691b0-combined-ca-bundle\") pod \"barbican-db-sync-djgjz\" (UID: \"840b1cf6-0ffb-47c8-9dac-779004f691b0\") " pod="openstack/barbican-db-sync-djgjz" Feb 27 17:39:36 crc kubenswrapper[4830]: I0227 17:39:36.516855 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbsc6\" (UniqueName: \"kubernetes.io/projected/840b1cf6-0ffb-47c8-9dac-779004f691b0-kube-api-access-tbsc6\") pod \"barbican-db-sync-djgjz\" (UID: \"840b1cf6-0ffb-47c8-9dac-779004f691b0\") " pod="openstack/barbican-db-sync-djgjz" Feb 27 17:39:36 crc kubenswrapper[4830]: I0227 17:39:36.619031 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: 
\"kubernetes.io/secret/840b1cf6-0ffb-47c8-9dac-779004f691b0-db-sync-config-data\") pod \"barbican-db-sync-djgjz\" (UID: \"840b1cf6-0ffb-47c8-9dac-779004f691b0\") " pod="openstack/barbican-db-sync-djgjz" Feb 27 17:39:36 crc kubenswrapper[4830]: I0227 17:39:36.619177 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/840b1cf6-0ffb-47c8-9dac-779004f691b0-combined-ca-bundle\") pod \"barbican-db-sync-djgjz\" (UID: \"840b1cf6-0ffb-47c8-9dac-779004f691b0\") " pod="openstack/barbican-db-sync-djgjz" Feb 27 17:39:36 crc kubenswrapper[4830]: I0227 17:39:36.619305 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tbsc6\" (UniqueName: \"kubernetes.io/projected/840b1cf6-0ffb-47c8-9dac-779004f691b0-kube-api-access-tbsc6\") pod \"barbican-db-sync-djgjz\" (UID: \"840b1cf6-0ffb-47c8-9dac-779004f691b0\") " pod="openstack/barbican-db-sync-djgjz" Feb 27 17:39:36 crc kubenswrapper[4830]: I0227 17:39:36.627043 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/840b1cf6-0ffb-47c8-9dac-779004f691b0-db-sync-config-data\") pod \"barbican-db-sync-djgjz\" (UID: \"840b1cf6-0ffb-47c8-9dac-779004f691b0\") " pod="openstack/barbican-db-sync-djgjz" Feb 27 17:39:36 crc kubenswrapper[4830]: I0227 17:39:36.634644 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/840b1cf6-0ffb-47c8-9dac-779004f691b0-combined-ca-bundle\") pod \"barbican-db-sync-djgjz\" (UID: \"840b1cf6-0ffb-47c8-9dac-779004f691b0\") " pod="openstack/barbican-db-sync-djgjz" Feb 27 17:39:36 crc kubenswrapper[4830]: I0227 17:39:36.638510 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tbsc6\" (UniqueName: \"kubernetes.io/projected/840b1cf6-0ffb-47c8-9dac-779004f691b0-kube-api-access-tbsc6\") pod 
\"barbican-db-sync-djgjz\" (UID: \"840b1cf6-0ffb-47c8-9dac-779004f691b0\") " pod="openstack/barbican-db-sync-djgjz" Feb 27 17:39:36 crc kubenswrapper[4830]: I0227 17:39:36.741996 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-djgjz" Feb 27 17:39:37 crc kubenswrapper[4830]: I0227 17:39:37.113246 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-djgjz"] Feb 27 17:39:37 crc kubenswrapper[4830]: I0227 17:39:37.578758 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-djgjz" event={"ID":"840b1cf6-0ffb-47c8-9dac-779004f691b0","Type":"ContainerStarted","Data":"c25e29a5c819cf324ba7ab3dec326fbb20097cf6d51fe143e8ab2797af03800c"} Feb 27 17:39:37 crc kubenswrapper[4830]: I0227 17:39:37.579302 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-djgjz" event={"ID":"840b1cf6-0ffb-47c8-9dac-779004f691b0","Type":"ContainerStarted","Data":"814de2f97eee677973987f19326082050b5e2093d592536414ed1df6c4fb7b18"} Feb 27 17:39:37 crc kubenswrapper[4830]: I0227 17:39:37.601233 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-djgjz" podStartSLOduration=1.6012098940000001 podStartE2EDuration="1.601209894s" podCreationTimestamp="2026-02-27 17:39:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 17:39:37.59561067 +0000 UTC m=+5573.684883173" watchObservedRunningTime="2026-02-27 17:39:37.601209894 +0000 UTC m=+5573.690482367" Feb 27 17:39:38 crc kubenswrapper[4830]: I0227 17:39:38.586732 4830 generic.go:334] "Generic (PLEG): container finished" podID="840b1cf6-0ffb-47c8-9dac-779004f691b0" containerID="c25e29a5c819cf324ba7ab3dec326fbb20097cf6d51fe143e8ab2797af03800c" exitCode=0 Feb 27 17:39:38 crc kubenswrapper[4830]: I0227 17:39:38.586898 4830 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/barbican-db-sync-djgjz" event={"ID":"840b1cf6-0ffb-47c8-9dac-779004f691b0","Type":"ContainerDied","Data":"c25e29a5c819cf324ba7ab3dec326fbb20097cf6d51fe143e8ab2797af03800c"} Feb 27 17:39:39 crc kubenswrapper[4830]: I0227 17:39:39.985833 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-djgjz" Feb 27 17:39:40 crc kubenswrapper[4830]: I0227 17:39:40.011042 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/840b1cf6-0ffb-47c8-9dac-779004f691b0-db-sync-config-data\") pod \"840b1cf6-0ffb-47c8-9dac-779004f691b0\" (UID: \"840b1cf6-0ffb-47c8-9dac-779004f691b0\") " Feb 27 17:39:40 crc kubenswrapper[4830]: I0227 17:39:40.011099 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tbsc6\" (UniqueName: \"kubernetes.io/projected/840b1cf6-0ffb-47c8-9dac-779004f691b0-kube-api-access-tbsc6\") pod \"840b1cf6-0ffb-47c8-9dac-779004f691b0\" (UID: \"840b1cf6-0ffb-47c8-9dac-779004f691b0\") " Feb 27 17:39:40 crc kubenswrapper[4830]: I0227 17:39:40.011172 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/840b1cf6-0ffb-47c8-9dac-779004f691b0-combined-ca-bundle\") pod \"840b1cf6-0ffb-47c8-9dac-779004f691b0\" (UID: \"840b1cf6-0ffb-47c8-9dac-779004f691b0\") " Feb 27 17:39:40 crc kubenswrapper[4830]: I0227 17:39:40.033294 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/840b1cf6-0ffb-47c8-9dac-779004f691b0-kube-api-access-tbsc6" (OuterVolumeSpecName: "kube-api-access-tbsc6") pod "840b1cf6-0ffb-47c8-9dac-779004f691b0" (UID: "840b1cf6-0ffb-47c8-9dac-779004f691b0"). InnerVolumeSpecName "kube-api-access-tbsc6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:39:40 crc kubenswrapper[4830]: I0227 17:39:40.033672 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/840b1cf6-0ffb-47c8-9dac-779004f691b0-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "840b1cf6-0ffb-47c8-9dac-779004f691b0" (UID: "840b1cf6-0ffb-47c8-9dac-779004f691b0"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:39:40 crc kubenswrapper[4830]: I0227 17:39:40.045490 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/840b1cf6-0ffb-47c8-9dac-779004f691b0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "840b1cf6-0ffb-47c8-9dac-779004f691b0" (UID: "840b1cf6-0ffb-47c8-9dac-779004f691b0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:39:40 crc kubenswrapper[4830]: I0227 17:39:40.113730 4830 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/840b1cf6-0ffb-47c8-9dac-779004f691b0-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 17:39:40 crc kubenswrapper[4830]: I0227 17:39:40.113901 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tbsc6\" (UniqueName: \"kubernetes.io/projected/840b1cf6-0ffb-47c8-9dac-779004f691b0-kube-api-access-tbsc6\") on node \"crc\" DevicePath \"\"" Feb 27 17:39:40 crc kubenswrapper[4830]: I0227 17:39:40.113921 4830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/840b1cf6-0ffb-47c8-9dac-779004f691b0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 17:39:40 crc kubenswrapper[4830]: I0227 17:39:40.612004 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-djgjz" 
event={"ID":"840b1cf6-0ffb-47c8-9dac-779004f691b0","Type":"ContainerDied","Data":"814de2f97eee677973987f19326082050b5e2093d592536414ed1df6c4fb7b18"} Feb 27 17:39:40 crc kubenswrapper[4830]: I0227 17:39:40.612568 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="814de2f97eee677973987f19326082050b5e2093d592536414ed1df6c4fb7b18" Feb 27 17:39:40 crc kubenswrapper[4830]: I0227 17:39:40.612680 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-djgjz" Feb 27 17:39:40 crc kubenswrapper[4830]: E0227 17:39:40.764801 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536898-vrwjs" podUID="204eb1af-36ad-4de7-9da7-9a37fefd3026" Feb 27 17:39:40 crc kubenswrapper[4830]: I0227 17:39:40.849168 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-55f77b7c67-tb7rb"] Feb 27 17:39:40 crc kubenswrapper[4830]: E0227 17:39:40.849560 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="840b1cf6-0ffb-47c8-9dac-779004f691b0" containerName="barbican-db-sync" Feb 27 17:39:40 crc kubenswrapper[4830]: I0227 17:39:40.849576 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="840b1cf6-0ffb-47c8-9dac-779004f691b0" containerName="barbican-db-sync" Feb 27 17:39:40 crc kubenswrapper[4830]: I0227 17:39:40.849736 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="840b1cf6-0ffb-47c8-9dac-779004f691b0" containerName="barbican-db-sync" Feb 27 17:39:40 crc kubenswrapper[4830]: I0227 17:39:40.850625 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-55f77b7c67-tb7rb" Feb 27 17:39:40 crc kubenswrapper[4830]: I0227 17:39:40.855595 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Feb 27 17:39:40 crc kubenswrapper[4830]: I0227 17:39:40.855991 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-x4l4t" Feb 27 17:39:40 crc kubenswrapper[4830]: I0227 17:39:40.856198 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 27 17:39:40 crc kubenswrapper[4830]: I0227 17:39:40.864520 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-55f77b7c67-tb7rb"] Feb 27 17:39:40 crc kubenswrapper[4830]: I0227 17:39:40.906054 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-5658d7bb68-tdlwd"] Feb 27 17:39:40 crc kubenswrapper[4830]: I0227 17:39:40.907858 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-5658d7bb68-tdlwd" Feb 27 17:39:40 crc kubenswrapper[4830]: I0227 17:39:40.913414 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Feb 27 17:39:40 crc kubenswrapper[4830]: I0227 17:39:40.916047 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-5658d7bb68-tdlwd"] Feb 27 17:39:40 crc kubenswrapper[4830]: I0227 17:39:40.933515 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6874bf8c6f-lpnwz"] Feb 27 17:39:40 crc kubenswrapper[4830]: I0227 17:39:40.935215 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6874bf8c6f-lpnwz" Feb 27 17:39:40 crc kubenswrapper[4830]: I0227 17:39:40.940070 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69c33f33-e26d-48e1-91c6-2bcf08372648-config-data\") pod \"barbican-worker-55f77b7c67-tb7rb\" (UID: \"69c33f33-e26d-48e1-91c6-2bcf08372648\") " pod="openstack/barbican-worker-55f77b7c67-tb7rb" Feb 27 17:39:40 crc kubenswrapper[4830]: I0227 17:39:40.940119 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smqpz\" (UniqueName: \"kubernetes.io/projected/13e050dc-75b5-42df-bd0f-04e850d34786-kube-api-access-smqpz\") pod \"barbican-keystone-listener-5658d7bb68-tdlwd\" (UID: \"13e050dc-75b5-42df-bd0f-04e850d34786\") " pod="openstack/barbican-keystone-listener-5658d7bb68-tdlwd" Feb 27 17:39:40 crc kubenswrapper[4830]: I0227 17:39:40.940146 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69c33f33-e26d-48e1-91c6-2bcf08372648-combined-ca-bundle\") pod \"barbican-worker-55f77b7c67-tb7rb\" (UID: \"69c33f33-e26d-48e1-91c6-2bcf08372648\") " pod="openstack/barbican-worker-55f77b7c67-tb7rb" Feb 27 17:39:40 crc kubenswrapper[4830]: I0227 17:39:40.940180 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plq6r\" (UniqueName: \"kubernetes.io/projected/69c33f33-e26d-48e1-91c6-2bcf08372648-kube-api-access-plq6r\") pod \"barbican-worker-55f77b7c67-tb7rb\" (UID: \"69c33f33-e26d-48e1-91c6-2bcf08372648\") " pod="openstack/barbican-worker-55f77b7c67-tb7rb" Feb 27 17:39:40 crc kubenswrapper[4830]: I0227 17:39:40.940225 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/13e050dc-75b5-42df-bd0f-04e850d34786-config-data-custom\") pod \"barbican-keystone-listener-5658d7bb68-tdlwd\" (UID: \"13e050dc-75b5-42df-bd0f-04e850d34786\") " pod="openstack/barbican-keystone-listener-5658d7bb68-tdlwd" Feb 27 17:39:40 crc kubenswrapper[4830]: I0227 17:39:40.940255 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13e050dc-75b5-42df-bd0f-04e850d34786-combined-ca-bundle\") pod \"barbican-keystone-listener-5658d7bb68-tdlwd\" (UID: \"13e050dc-75b5-42df-bd0f-04e850d34786\") " pod="openstack/barbican-keystone-listener-5658d7bb68-tdlwd" Feb 27 17:39:40 crc kubenswrapper[4830]: I0227 17:39:40.940279 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/69c33f33-e26d-48e1-91c6-2bcf08372648-logs\") pod \"barbican-worker-55f77b7c67-tb7rb\" (UID: \"69c33f33-e26d-48e1-91c6-2bcf08372648\") " pod="openstack/barbican-worker-55f77b7c67-tb7rb" Feb 27 17:39:40 crc kubenswrapper[4830]: I0227 17:39:40.940300 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/13e050dc-75b5-42df-bd0f-04e850d34786-logs\") pod \"barbican-keystone-listener-5658d7bb68-tdlwd\" (UID: \"13e050dc-75b5-42df-bd0f-04e850d34786\") " pod="openstack/barbican-keystone-listener-5658d7bb68-tdlwd" Feb 27 17:39:40 crc kubenswrapper[4830]: I0227 17:39:40.941472 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/69c33f33-e26d-48e1-91c6-2bcf08372648-config-data-custom\") pod \"barbican-worker-55f77b7c67-tb7rb\" (UID: \"69c33f33-e26d-48e1-91c6-2bcf08372648\") " pod="openstack/barbican-worker-55f77b7c67-tb7rb" Feb 27 17:39:40 crc kubenswrapper[4830]: I0227 17:39:40.941518 4830 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13e050dc-75b5-42df-bd0f-04e850d34786-config-data\") pod \"barbican-keystone-listener-5658d7bb68-tdlwd\" (UID: \"13e050dc-75b5-42df-bd0f-04e850d34786\") " pod="openstack/barbican-keystone-listener-5658d7bb68-tdlwd" Feb 27 17:39:40 crc kubenswrapper[4830]: I0227 17:39:40.951170 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6874bf8c6f-lpnwz"] Feb 27 17:39:41 crc kubenswrapper[4830]: I0227 17:39:41.044233 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/69c33f33-e26d-48e1-91c6-2bcf08372648-config-data-custom\") pod \"barbican-worker-55f77b7c67-tb7rb\" (UID: \"69c33f33-e26d-48e1-91c6-2bcf08372648\") " pod="openstack/barbican-worker-55f77b7c67-tb7rb" Feb 27 17:39:41 crc kubenswrapper[4830]: I0227 17:39:41.044282 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13e050dc-75b5-42df-bd0f-04e850d34786-config-data\") pod \"barbican-keystone-listener-5658d7bb68-tdlwd\" (UID: \"13e050dc-75b5-42df-bd0f-04e850d34786\") " pod="openstack/barbican-keystone-listener-5658d7bb68-tdlwd" Feb 27 17:39:41 crc kubenswrapper[4830]: I0227 17:39:41.044320 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69c33f33-e26d-48e1-91c6-2bcf08372648-config-data\") pod \"barbican-worker-55f77b7c67-tb7rb\" (UID: \"69c33f33-e26d-48e1-91c6-2bcf08372648\") " pod="openstack/barbican-worker-55f77b7c67-tb7rb" Feb 27 17:39:41 crc kubenswrapper[4830]: I0227 17:39:41.044338 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-smqpz\" (UniqueName: \"kubernetes.io/projected/13e050dc-75b5-42df-bd0f-04e850d34786-kube-api-access-smqpz\") pod 
\"barbican-keystone-listener-5658d7bb68-tdlwd\" (UID: \"13e050dc-75b5-42df-bd0f-04e850d34786\") " pod="openstack/barbican-keystone-listener-5658d7bb68-tdlwd" Feb 27 17:39:41 crc kubenswrapper[4830]: I0227 17:39:41.044363 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5tf46\" (UniqueName: \"kubernetes.io/projected/f91775a7-c80a-4262-ad8a-912d9f1b1da8-kube-api-access-5tf46\") pod \"dnsmasq-dns-6874bf8c6f-lpnwz\" (UID: \"f91775a7-c80a-4262-ad8a-912d9f1b1da8\") " pod="openstack/dnsmasq-dns-6874bf8c6f-lpnwz" Feb 27 17:39:41 crc kubenswrapper[4830]: I0227 17:39:41.044387 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69c33f33-e26d-48e1-91c6-2bcf08372648-combined-ca-bundle\") pod \"barbican-worker-55f77b7c67-tb7rb\" (UID: \"69c33f33-e26d-48e1-91c6-2bcf08372648\") " pod="openstack/barbican-worker-55f77b7c67-tb7rb" Feb 27 17:39:41 crc kubenswrapper[4830]: I0227 17:39:41.044414 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-plq6r\" (UniqueName: \"kubernetes.io/projected/69c33f33-e26d-48e1-91c6-2bcf08372648-kube-api-access-plq6r\") pod \"barbican-worker-55f77b7c67-tb7rb\" (UID: \"69c33f33-e26d-48e1-91c6-2bcf08372648\") " pod="openstack/barbican-worker-55f77b7c67-tb7rb" Feb 27 17:39:41 crc kubenswrapper[4830]: I0227 17:39:41.044434 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/13e050dc-75b5-42df-bd0f-04e850d34786-config-data-custom\") pod \"barbican-keystone-listener-5658d7bb68-tdlwd\" (UID: \"13e050dc-75b5-42df-bd0f-04e850d34786\") " pod="openstack/barbican-keystone-listener-5658d7bb68-tdlwd" Feb 27 17:39:41 crc kubenswrapper[4830]: I0227 17:39:41.044468 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" 
(UniqueName: \"kubernetes.io/configmap/f91775a7-c80a-4262-ad8a-912d9f1b1da8-ovsdbserver-nb\") pod \"dnsmasq-dns-6874bf8c6f-lpnwz\" (UID: \"f91775a7-c80a-4262-ad8a-912d9f1b1da8\") " pod="openstack/dnsmasq-dns-6874bf8c6f-lpnwz" Feb 27 17:39:41 crc kubenswrapper[4830]: I0227 17:39:41.044494 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13e050dc-75b5-42df-bd0f-04e850d34786-combined-ca-bundle\") pod \"barbican-keystone-listener-5658d7bb68-tdlwd\" (UID: \"13e050dc-75b5-42df-bd0f-04e850d34786\") " pod="openstack/barbican-keystone-listener-5658d7bb68-tdlwd" Feb 27 17:39:41 crc kubenswrapper[4830]: I0227 17:39:41.044520 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/69c33f33-e26d-48e1-91c6-2bcf08372648-logs\") pod \"barbican-worker-55f77b7c67-tb7rb\" (UID: \"69c33f33-e26d-48e1-91c6-2bcf08372648\") " pod="openstack/barbican-worker-55f77b7c67-tb7rb" Feb 27 17:39:41 crc kubenswrapper[4830]: I0227 17:39:41.044542 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/13e050dc-75b5-42df-bd0f-04e850d34786-logs\") pod \"barbican-keystone-listener-5658d7bb68-tdlwd\" (UID: \"13e050dc-75b5-42df-bd0f-04e850d34786\") " pod="openstack/barbican-keystone-listener-5658d7bb68-tdlwd" Feb 27 17:39:41 crc kubenswrapper[4830]: I0227 17:39:41.044563 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f91775a7-c80a-4262-ad8a-912d9f1b1da8-ovsdbserver-sb\") pod \"dnsmasq-dns-6874bf8c6f-lpnwz\" (UID: \"f91775a7-c80a-4262-ad8a-912d9f1b1da8\") " pod="openstack/dnsmasq-dns-6874bf8c6f-lpnwz" Feb 27 17:39:41 crc kubenswrapper[4830]: I0227 17:39:41.044587 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f91775a7-c80a-4262-ad8a-912d9f1b1da8-dns-svc\") pod \"dnsmasq-dns-6874bf8c6f-lpnwz\" (UID: \"f91775a7-c80a-4262-ad8a-912d9f1b1da8\") " pod="openstack/dnsmasq-dns-6874bf8c6f-lpnwz" Feb 27 17:39:41 crc kubenswrapper[4830]: I0227 17:39:41.044608 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f91775a7-c80a-4262-ad8a-912d9f1b1da8-config\") pod \"dnsmasq-dns-6874bf8c6f-lpnwz\" (UID: \"f91775a7-c80a-4262-ad8a-912d9f1b1da8\") " pod="openstack/dnsmasq-dns-6874bf8c6f-lpnwz" Feb 27 17:39:41 crc kubenswrapper[4830]: I0227 17:39:41.045363 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/69c33f33-e26d-48e1-91c6-2bcf08372648-logs\") pod \"barbican-worker-55f77b7c67-tb7rb\" (UID: \"69c33f33-e26d-48e1-91c6-2bcf08372648\") " pod="openstack/barbican-worker-55f77b7c67-tb7rb" Feb 27 17:39:41 crc kubenswrapper[4830]: I0227 17:39:41.046564 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/13e050dc-75b5-42df-bd0f-04e850d34786-logs\") pod \"barbican-keystone-listener-5658d7bb68-tdlwd\" (UID: \"13e050dc-75b5-42df-bd0f-04e850d34786\") " pod="openstack/barbican-keystone-listener-5658d7bb68-tdlwd" Feb 27 17:39:41 crc kubenswrapper[4830]: I0227 17:39:41.044330 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-f4df6446b-z2csf"] Feb 27 17:39:41 crc kubenswrapper[4830]: I0227 17:39:41.049733 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-f4df6446b-z2csf" Feb 27 17:39:41 crc kubenswrapper[4830]: I0227 17:39:41.052307 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Feb 27 17:39:41 crc kubenswrapper[4830]: I0227 17:39:41.067390 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-smqpz\" (UniqueName: \"kubernetes.io/projected/13e050dc-75b5-42df-bd0f-04e850d34786-kube-api-access-smqpz\") pod \"barbican-keystone-listener-5658d7bb68-tdlwd\" (UID: \"13e050dc-75b5-42df-bd0f-04e850d34786\") " pod="openstack/barbican-keystone-listener-5658d7bb68-tdlwd" Feb 27 17:39:41 crc kubenswrapper[4830]: I0227 17:39:41.072653 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-plq6r\" (UniqueName: \"kubernetes.io/projected/69c33f33-e26d-48e1-91c6-2bcf08372648-kube-api-access-plq6r\") pod \"barbican-worker-55f77b7c67-tb7rb\" (UID: \"69c33f33-e26d-48e1-91c6-2bcf08372648\") " pod="openstack/barbican-worker-55f77b7c67-tb7rb" Feb 27 17:39:41 crc kubenswrapper[4830]: I0227 17:39:41.073613 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13e050dc-75b5-42df-bd0f-04e850d34786-config-data\") pod \"barbican-keystone-listener-5658d7bb68-tdlwd\" (UID: \"13e050dc-75b5-42df-bd0f-04e850d34786\") " pod="openstack/barbican-keystone-listener-5658d7bb68-tdlwd" Feb 27 17:39:41 crc kubenswrapper[4830]: I0227 17:39:41.075520 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69c33f33-e26d-48e1-91c6-2bcf08372648-config-data\") pod \"barbican-worker-55f77b7c67-tb7rb\" (UID: \"69c33f33-e26d-48e1-91c6-2bcf08372648\") " pod="openstack/barbican-worker-55f77b7c67-tb7rb" Feb 27 17:39:41 crc kubenswrapper[4830]: I0227 17:39:41.076357 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data-custom\" (UniqueName: \"kubernetes.io/secret/69c33f33-e26d-48e1-91c6-2bcf08372648-config-data-custom\") pod \"barbican-worker-55f77b7c67-tb7rb\" (UID: \"69c33f33-e26d-48e1-91c6-2bcf08372648\") " pod="openstack/barbican-worker-55f77b7c67-tb7rb" Feb 27 17:39:41 crc kubenswrapper[4830]: I0227 17:39:41.078450 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/13e050dc-75b5-42df-bd0f-04e850d34786-config-data-custom\") pod \"barbican-keystone-listener-5658d7bb68-tdlwd\" (UID: \"13e050dc-75b5-42df-bd0f-04e850d34786\") " pod="openstack/barbican-keystone-listener-5658d7bb68-tdlwd" Feb 27 17:39:41 crc kubenswrapper[4830]: I0227 17:39:41.079211 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13e050dc-75b5-42df-bd0f-04e850d34786-combined-ca-bundle\") pod \"barbican-keystone-listener-5658d7bb68-tdlwd\" (UID: \"13e050dc-75b5-42df-bd0f-04e850d34786\") " pod="openstack/barbican-keystone-listener-5658d7bb68-tdlwd" Feb 27 17:39:41 crc kubenswrapper[4830]: I0227 17:39:41.081454 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69c33f33-e26d-48e1-91c6-2bcf08372648-combined-ca-bundle\") pod \"barbican-worker-55f77b7c67-tb7rb\" (UID: \"69c33f33-e26d-48e1-91c6-2bcf08372648\") " pod="openstack/barbican-worker-55f77b7c67-tb7rb" Feb 27 17:39:41 crc kubenswrapper[4830]: I0227 17:39:41.081517 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-f4df6446b-z2csf"] Feb 27 17:39:41 crc kubenswrapper[4830]: I0227 17:39:41.146414 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f91775a7-c80a-4262-ad8a-912d9f1b1da8-ovsdbserver-sb\") pod \"dnsmasq-dns-6874bf8c6f-lpnwz\" (UID: \"f91775a7-c80a-4262-ad8a-912d9f1b1da8\") " 
pod="openstack/dnsmasq-dns-6874bf8c6f-lpnwz" Feb 27 17:39:41 crc kubenswrapper[4830]: I0227 17:39:41.146458 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ba8e997-3bde-4a23-9748-bd39acb5bcf1-combined-ca-bundle\") pod \"barbican-api-f4df6446b-z2csf\" (UID: \"4ba8e997-3bde-4a23-9748-bd39acb5bcf1\") " pod="openstack/barbican-api-f4df6446b-z2csf" Feb 27 17:39:41 crc kubenswrapper[4830]: I0227 17:39:41.146488 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f91775a7-c80a-4262-ad8a-912d9f1b1da8-dns-svc\") pod \"dnsmasq-dns-6874bf8c6f-lpnwz\" (UID: \"f91775a7-c80a-4262-ad8a-912d9f1b1da8\") " pod="openstack/dnsmasq-dns-6874bf8c6f-lpnwz" Feb 27 17:39:41 crc kubenswrapper[4830]: I0227 17:39:41.146510 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f91775a7-c80a-4262-ad8a-912d9f1b1da8-config\") pod \"dnsmasq-dns-6874bf8c6f-lpnwz\" (UID: \"f91775a7-c80a-4262-ad8a-912d9f1b1da8\") " pod="openstack/dnsmasq-dns-6874bf8c6f-lpnwz" Feb 27 17:39:41 crc kubenswrapper[4830]: I0227 17:39:41.146543 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4ba8e997-3bde-4a23-9748-bd39acb5bcf1-config-data-custom\") pod \"barbican-api-f4df6446b-z2csf\" (UID: \"4ba8e997-3bde-4a23-9748-bd39acb5bcf1\") " pod="openstack/barbican-api-f4df6446b-z2csf" Feb 27 17:39:41 crc kubenswrapper[4830]: I0227 17:39:41.146583 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ba8e997-3bde-4a23-9748-bd39acb5bcf1-config-data\") pod \"barbican-api-f4df6446b-z2csf\" (UID: \"4ba8e997-3bde-4a23-9748-bd39acb5bcf1\") " 
pod="openstack/barbican-api-f4df6446b-z2csf" Feb 27 17:39:41 crc kubenswrapper[4830]: I0227 17:39:41.146613 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5tf46\" (UniqueName: \"kubernetes.io/projected/f91775a7-c80a-4262-ad8a-912d9f1b1da8-kube-api-access-5tf46\") pod \"dnsmasq-dns-6874bf8c6f-lpnwz\" (UID: \"f91775a7-c80a-4262-ad8a-912d9f1b1da8\") " pod="openstack/dnsmasq-dns-6874bf8c6f-lpnwz" Feb 27 17:39:41 crc kubenswrapper[4830]: I0227 17:39:41.146643 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4ba8e997-3bde-4a23-9748-bd39acb5bcf1-logs\") pod \"barbican-api-f4df6446b-z2csf\" (UID: \"4ba8e997-3bde-4a23-9748-bd39acb5bcf1\") " pod="openstack/barbican-api-f4df6446b-z2csf" Feb 27 17:39:41 crc kubenswrapper[4830]: I0227 17:39:41.146662 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f91775a7-c80a-4262-ad8a-912d9f1b1da8-ovsdbserver-nb\") pod \"dnsmasq-dns-6874bf8c6f-lpnwz\" (UID: \"f91775a7-c80a-4262-ad8a-912d9f1b1da8\") " pod="openstack/dnsmasq-dns-6874bf8c6f-lpnwz" Feb 27 17:39:41 crc kubenswrapper[4830]: I0227 17:39:41.146685 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzccb\" (UniqueName: \"kubernetes.io/projected/4ba8e997-3bde-4a23-9748-bd39acb5bcf1-kube-api-access-nzccb\") pod \"barbican-api-f4df6446b-z2csf\" (UID: \"4ba8e997-3bde-4a23-9748-bd39acb5bcf1\") " pod="openstack/barbican-api-f4df6446b-z2csf" Feb 27 17:39:41 crc kubenswrapper[4830]: I0227 17:39:41.147785 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f91775a7-c80a-4262-ad8a-912d9f1b1da8-ovsdbserver-sb\") pod \"dnsmasq-dns-6874bf8c6f-lpnwz\" (UID: \"f91775a7-c80a-4262-ad8a-912d9f1b1da8\") " 
pod="openstack/dnsmasq-dns-6874bf8c6f-lpnwz" Feb 27 17:39:41 crc kubenswrapper[4830]: I0227 17:39:41.147847 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f91775a7-c80a-4262-ad8a-912d9f1b1da8-dns-svc\") pod \"dnsmasq-dns-6874bf8c6f-lpnwz\" (UID: \"f91775a7-c80a-4262-ad8a-912d9f1b1da8\") " pod="openstack/dnsmasq-dns-6874bf8c6f-lpnwz" Feb 27 17:39:41 crc kubenswrapper[4830]: I0227 17:39:41.148363 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f91775a7-c80a-4262-ad8a-912d9f1b1da8-ovsdbserver-nb\") pod \"dnsmasq-dns-6874bf8c6f-lpnwz\" (UID: \"f91775a7-c80a-4262-ad8a-912d9f1b1da8\") " pod="openstack/dnsmasq-dns-6874bf8c6f-lpnwz" Feb 27 17:39:41 crc kubenswrapper[4830]: I0227 17:39:41.148446 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f91775a7-c80a-4262-ad8a-912d9f1b1da8-config\") pod \"dnsmasq-dns-6874bf8c6f-lpnwz\" (UID: \"f91775a7-c80a-4262-ad8a-912d9f1b1da8\") " pod="openstack/dnsmasq-dns-6874bf8c6f-lpnwz" Feb 27 17:39:41 crc kubenswrapper[4830]: I0227 17:39:41.166422 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5tf46\" (UniqueName: \"kubernetes.io/projected/f91775a7-c80a-4262-ad8a-912d9f1b1da8-kube-api-access-5tf46\") pod \"dnsmasq-dns-6874bf8c6f-lpnwz\" (UID: \"f91775a7-c80a-4262-ad8a-912d9f1b1da8\") " pod="openstack/dnsmasq-dns-6874bf8c6f-lpnwz" Feb 27 17:39:41 crc kubenswrapper[4830]: I0227 17:39:41.170222 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-55f77b7c67-tb7rb" Feb 27 17:39:41 crc kubenswrapper[4830]: I0227 17:39:41.242208 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-5658d7bb68-tdlwd" Feb 27 17:39:41 crc kubenswrapper[4830]: I0227 17:39:41.247847 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4ba8e997-3bde-4a23-9748-bd39acb5bcf1-logs\") pod \"barbican-api-f4df6446b-z2csf\" (UID: \"4ba8e997-3bde-4a23-9748-bd39acb5bcf1\") " pod="openstack/barbican-api-f4df6446b-z2csf" Feb 27 17:39:41 crc kubenswrapper[4830]: I0227 17:39:41.247903 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nzccb\" (UniqueName: \"kubernetes.io/projected/4ba8e997-3bde-4a23-9748-bd39acb5bcf1-kube-api-access-nzccb\") pod \"barbican-api-f4df6446b-z2csf\" (UID: \"4ba8e997-3bde-4a23-9748-bd39acb5bcf1\") " pod="openstack/barbican-api-f4df6446b-z2csf" Feb 27 17:39:41 crc kubenswrapper[4830]: I0227 17:39:41.247957 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ba8e997-3bde-4a23-9748-bd39acb5bcf1-combined-ca-bundle\") pod \"barbican-api-f4df6446b-z2csf\" (UID: \"4ba8e997-3bde-4a23-9748-bd39acb5bcf1\") " pod="openstack/barbican-api-f4df6446b-z2csf" Feb 27 17:39:41 crc kubenswrapper[4830]: I0227 17:39:41.248011 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4ba8e997-3bde-4a23-9748-bd39acb5bcf1-config-data-custom\") pod \"barbican-api-f4df6446b-z2csf\" (UID: \"4ba8e997-3bde-4a23-9748-bd39acb5bcf1\") " pod="openstack/barbican-api-f4df6446b-z2csf" Feb 27 17:39:41 crc kubenswrapper[4830]: I0227 17:39:41.248053 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ba8e997-3bde-4a23-9748-bd39acb5bcf1-config-data\") pod \"barbican-api-f4df6446b-z2csf\" (UID: \"4ba8e997-3bde-4a23-9748-bd39acb5bcf1\") " 
pod="openstack/barbican-api-f4df6446b-z2csf" Feb 27 17:39:41 crc kubenswrapper[4830]: I0227 17:39:41.249153 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4ba8e997-3bde-4a23-9748-bd39acb5bcf1-logs\") pod \"barbican-api-f4df6446b-z2csf\" (UID: \"4ba8e997-3bde-4a23-9748-bd39acb5bcf1\") " pod="openstack/barbican-api-f4df6446b-z2csf" Feb 27 17:39:41 crc kubenswrapper[4830]: I0227 17:39:41.253650 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ba8e997-3bde-4a23-9748-bd39acb5bcf1-combined-ca-bundle\") pod \"barbican-api-f4df6446b-z2csf\" (UID: \"4ba8e997-3bde-4a23-9748-bd39acb5bcf1\") " pod="openstack/barbican-api-f4df6446b-z2csf" Feb 27 17:39:41 crc kubenswrapper[4830]: I0227 17:39:41.260543 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4ba8e997-3bde-4a23-9748-bd39acb5bcf1-config-data-custom\") pod \"barbican-api-f4df6446b-z2csf\" (UID: \"4ba8e997-3bde-4a23-9748-bd39acb5bcf1\") " pod="openstack/barbican-api-f4df6446b-z2csf" Feb 27 17:39:41 crc kubenswrapper[4830]: I0227 17:39:41.271607 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ba8e997-3bde-4a23-9748-bd39acb5bcf1-config-data\") pod \"barbican-api-f4df6446b-z2csf\" (UID: \"4ba8e997-3bde-4a23-9748-bd39acb5bcf1\") " pod="openstack/barbican-api-f4df6446b-z2csf" Feb 27 17:39:41 crc kubenswrapper[4830]: I0227 17:39:41.274609 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6874bf8c6f-lpnwz" Feb 27 17:39:41 crc kubenswrapper[4830]: I0227 17:39:41.290688 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nzccb\" (UniqueName: \"kubernetes.io/projected/4ba8e997-3bde-4a23-9748-bd39acb5bcf1-kube-api-access-nzccb\") pod \"barbican-api-f4df6446b-z2csf\" (UID: \"4ba8e997-3bde-4a23-9748-bd39acb5bcf1\") " pod="openstack/barbican-api-f4df6446b-z2csf" Feb 27 17:39:41 crc kubenswrapper[4830]: I0227 17:39:41.444838 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-f4df6446b-z2csf" Feb 27 17:39:41 crc kubenswrapper[4830]: W0227 17:39:41.802026 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod69c33f33_e26d_48e1_91c6_2bcf08372648.slice/crio-df6f9bee9f1933959a03c02f24f2bff2233696e3142135f5649b545b8acb6b57 WatchSource:0}: Error finding container df6f9bee9f1933959a03c02f24f2bff2233696e3142135f5649b545b8acb6b57: Status 404 returned error can't find the container with id df6f9bee9f1933959a03c02f24f2bff2233696e3142135f5649b545b8acb6b57 Feb 27 17:39:41 crc kubenswrapper[4830]: I0227 17:39:41.803456 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-55f77b7c67-tb7rb"] Feb 27 17:39:41 crc kubenswrapper[4830]: I0227 17:39:41.970078 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6874bf8c6f-lpnwz"] Feb 27 17:39:41 crc kubenswrapper[4830]: I0227 17:39:41.994464 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-5658d7bb68-tdlwd"] Feb 27 17:39:42 crc kubenswrapper[4830]: I0227 17:39:42.033727 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-f4df6446b-z2csf"] Feb 27 17:39:42 crc kubenswrapper[4830]: I0227 17:39:42.631959 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/barbican-api-f4df6446b-z2csf" event={"ID":"4ba8e997-3bde-4a23-9748-bd39acb5bcf1","Type":"ContainerStarted","Data":"5a23954f2d46f86336454a5e1bce6788e7c0a77e9f9f5d2f1312210aee966a11"} Feb 27 17:39:42 crc kubenswrapper[4830]: I0227 17:39:42.632439 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-f4df6446b-z2csf" event={"ID":"4ba8e997-3bde-4a23-9748-bd39acb5bcf1","Type":"ContainerStarted","Data":"e6893be07a630356cdbd3af6931b86de52a15a8cc4279420f30ffa18e51c6b14"} Feb 27 17:39:42 crc kubenswrapper[4830]: I0227 17:39:42.635581 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-55f77b7c67-tb7rb" event={"ID":"69c33f33-e26d-48e1-91c6-2bcf08372648","Type":"ContainerStarted","Data":"7a44dc7c3ad66a098c62a1c75886869ac8f99e65c4590400e85fdb5e23953379"} Feb 27 17:39:42 crc kubenswrapper[4830]: I0227 17:39:42.635622 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-55f77b7c67-tb7rb" event={"ID":"69c33f33-e26d-48e1-91c6-2bcf08372648","Type":"ContainerStarted","Data":"8ebf71e2b25b103879f8cf8450270982a4b5443a9cf548a7199f618d8735d327"} Feb 27 17:39:42 crc kubenswrapper[4830]: I0227 17:39:42.635637 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-55f77b7c67-tb7rb" event={"ID":"69c33f33-e26d-48e1-91c6-2bcf08372648","Type":"ContainerStarted","Data":"df6f9bee9f1933959a03c02f24f2bff2233696e3142135f5649b545b8acb6b57"} Feb 27 17:39:42 crc kubenswrapper[4830]: I0227 17:39:42.641088 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5658d7bb68-tdlwd" event={"ID":"13e050dc-75b5-42df-bd0f-04e850d34786","Type":"ContainerStarted","Data":"c01e90b48085fcc82652a7656ed8230defb17f24d10faf5d90609388ee186647"} Feb 27 17:39:42 crc kubenswrapper[4830]: I0227 17:39:42.641131 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5658d7bb68-tdlwd" 
event={"ID":"13e050dc-75b5-42df-bd0f-04e850d34786","Type":"ContainerStarted","Data":"6a140f4e61cbe14aa33ebcff7ea2c3442df6eb7b6b5a7ed3890d570dec4c6118"} Feb 27 17:39:42 crc kubenswrapper[4830]: I0227 17:39:42.642740 4830 generic.go:334] "Generic (PLEG): container finished" podID="f91775a7-c80a-4262-ad8a-912d9f1b1da8" containerID="afbea8777456cbc8f1a81fc08205987b966279a1cf47a28d7acdf37825011c56" exitCode=0 Feb 27 17:39:42 crc kubenswrapper[4830]: I0227 17:39:42.642770 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6874bf8c6f-lpnwz" event={"ID":"f91775a7-c80a-4262-ad8a-912d9f1b1da8","Type":"ContainerDied","Data":"afbea8777456cbc8f1a81fc08205987b966279a1cf47a28d7acdf37825011c56"} Feb 27 17:39:42 crc kubenswrapper[4830]: I0227 17:39:42.642785 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6874bf8c6f-lpnwz" event={"ID":"f91775a7-c80a-4262-ad8a-912d9f1b1da8","Type":"ContainerStarted","Data":"fca611bfd4e17f90912fd74e7eb05da1937e7f38bde6153e758fb57fafb788be"} Feb 27 17:39:42 crc kubenswrapper[4830]: I0227 17:39:42.662311 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-55f77b7c67-tb7rb" podStartSLOduration=2.6622919510000003 podStartE2EDuration="2.662291951s" podCreationTimestamp="2026-02-27 17:39:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 17:39:42.658589183 +0000 UTC m=+5578.747861646" watchObservedRunningTime="2026-02-27 17:39:42.662291951 +0000 UTC m=+5578.751564404" Feb 27 17:39:43 crc kubenswrapper[4830]: I0227 17:39:43.674454 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6874bf8c6f-lpnwz" event={"ID":"f91775a7-c80a-4262-ad8a-912d9f1b1da8","Type":"ContainerStarted","Data":"3ed57176e05eab0df493d59b2eb579edae3360ab2f3a539695e07ff20ed1e889"} Feb 27 17:39:43 crc kubenswrapper[4830]: I0227 17:39:43.674566 4830 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6874bf8c6f-lpnwz" Feb 27 17:39:43 crc kubenswrapper[4830]: I0227 17:39:43.676733 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-f4df6446b-z2csf" event={"ID":"4ba8e997-3bde-4a23-9748-bd39acb5bcf1","Type":"ContainerStarted","Data":"4fac654330e4d1eaf3c1d9090a6ea314d50ee82a98967494952b50fc9fe09952"} Feb 27 17:39:43 crc kubenswrapper[4830]: I0227 17:39:43.679416 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-f4df6446b-z2csf" Feb 27 17:39:43 crc kubenswrapper[4830]: I0227 17:39:43.679474 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-f4df6446b-z2csf" Feb 27 17:39:43 crc kubenswrapper[4830]: I0227 17:39:43.683130 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5658d7bb68-tdlwd" event={"ID":"13e050dc-75b5-42df-bd0f-04e850d34786","Type":"ContainerStarted","Data":"22fff6996791175c1c70fa11cb923325936ecf88dd8314d9a9c62431ad434cd1"} Feb 27 17:39:43 crc kubenswrapper[4830]: I0227 17:39:43.737429 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6874bf8c6f-lpnwz" podStartSLOduration=3.73739863 podStartE2EDuration="3.73739863s" podCreationTimestamp="2026-02-27 17:39:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 17:39:43.733417034 +0000 UTC m=+5579.822689507" watchObservedRunningTime="2026-02-27 17:39:43.73739863 +0000 UTC m=+5579.826671103" Feb 27 17:39:43 crc kubenswrapper[4830]: I0227 17:39:43.760656 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-f4df6446b-z2csf" podStartSLOduration=2.760634409 podStartE2EDuration="2.760634409s" podCreationTimestamp="2026-02-27 17:39:41 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 17:39:43.751803656 +0000 UTC m=+5579.841076119" watchObservedRunningTime="2026-02-27 17:39:43.760634409 +0000 UTC m=+5579.849906872" Feb 27 17:39:43 crc kubenswrapper[4830]: I0227 17:39:43.783552 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-5658d7bb68-tdlwd" podStartSLOduration=3.783527419 podStartE2EDuration="3.783527419s" podCreationTimestamp="2026-02-27 17:39:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 17:39:43.774854431 +0000 UTC m=+5579.864126894" watchObservedRunningTime="2026-02-27 17:39:43.783527419 +0000 UTC m=+5579.872799892" Feb 27 17:39:49 crc kubenswrapper[4830]: I0227 17:39:49.027473 4830 scope.go:117] "RemoveContainer" containerID="26b377974242a081e8eaee1435ffab810e202e6e94f1f90f1cfedc4d2dfe3e20" Feb 27 17:39:50 crc kubenswrapper[4830]: E0227 17:39:50.141073 4830 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-marketplace-index@sha256=e848a00af7690cfa41500b98e0e7a0b9738ce0af7b6b4fee3ea20e0838523c30/signature-2: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 27 17:39:50 crc kubenswrapper[4830]: E0227 17:39:50.141511 4830 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m48cm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-gvnvz_openshift-marketplace(f1c73a78-1e95-4481-a273-ba7e3b5a127c): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-marketplace-index@sha256=e848a00af7690cfa41500b98e0e7a0b9738ce0af7b6b4fee3ea20e0838523c30/signature-2: status 500 (Internal Server Error)" logger="UnhandledError" Feb 27 17:39:50 crc kubenswrapper[4830]: E0227 17:39:50.142709 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: 
reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-marketplace-index@sha256=e848a00af7690cfa41500b98e0e7a0b9738ce0af7b6b4fee3ea20e0838523c30/signature-2: status 500 (Internal Server Error)\"" pod="openshift-marketplace/redhat-marketplace-gvnvz" podUID="f1c73a78-1e95-4481-a273-ba7e3b5a127c" Feb 27 17:39:51 crc kubenswrapper[4830]: I0227 17:39:51.276271 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6874bf8c6f-lpnwz" Feb 27 17:39:51 crc kubenswrapper[4830]: I0227 17:39:51.350193 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-68db9477c7-tp8ct"] Feb 27 17:39:51 crc kubenswrapper[4830]: I0227 17:39:51.350640 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-68db9477c7-tp8ct" podUID="9694a9b4-cc71-4423-a8dd-56a80240d3cd" containerName="dnsmasq-dns" containerID="cri-o://070d0d0852aa3c2d0c5454543e2b849d27c200f77e8b1db2e019452173412d11" gracePeriod=10 Feb 27 17:39:51 crc kubenswrapper[4830]: I0227 17:39:51.776709 4830 generic.go:334] "Generic (PLEG): container finished" podID="9694a9b4-cc71-4423-a8dd-56a80240d3cd" containerID="070d0d0852aa3c2d0c5454543e2b849d27c200f77e8b1db2e019452173412d11" exitCode=0 Feb 27 17:39:51 crc kubenswrapper[4830]: I0227 17:39:51.776807 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68db9477c7-tp8ct" event={"ID":"9694a9b4-cc71-4423-a8dd-56a80240d3cd","Type":"ContainerDied","Data":"070d0d0852aa3c2d0c5454543e2b849d27c200f77e8b1db2e019452173412d11"} Feb 27 17:39:51 crc kubenswrapper[4830]: I0227 17:39:51.913479 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-68db9477c7-tp8ct" Feb 27 17:39:52 crc kubenswrapper[4830]: I0227 17:39:52.062196 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9694a9b4-cc71-4423-a8dd-56a80240d3cd-ovsdbserver-nb\") pod \"9694a9b4-cc71-4423-a8dd-56a80240d3cd\" (UID: \"9694a9b4-cc71-4423-a8dd-56a80240d3cd\") " Feb 27 17:39:52 crc kubenswrapper[4830]: I0227 17:39:52.062270 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9694a9b4-cc71-4423-a8dd-56a80240d3cd-ovsdbserver-sb\") pod \"9694a9b4-cc71-4423-a8dd-56a80240d3cd\" (UID: \"9694a9b4-cc71-4423-a8dd-56a80240d3cd\") " Feb 27 17:39:52 crc kubenswrapper[4830]: I0227 17:39:52.062352 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4sb5\" (UniqueName: \"kubernetes.io/projected/9694a9b4-cc71-4423-a8dd-56a80240d3cd-kube-api-access-w4sb5\") pod \"9694a9b4-cc71-4423-a8dd-56a80240d3cd\" (UID: \"9694a9b4-cc71-4423-a8dd-56a80240d3cd\") " Feb 27 17:39:52 crc kubenswrapper[4830]: I0227 17:39:52.062375 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9694a9b4-cc71-4423-a8dd-56a80240d3cd-dns-svc\") pod \"9694a9b4-cc71-4423-a8dd-56a80240d3cd\" (UID: \"9694a9b4-cc71-4423-a8dd-56a80240d3cd\") " Feb 27 17:39:52 crc kubenswrapper[4830]: I0227 17:39:52.062398 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9694a9b4-cc71-4423-a8dd-56a80240d3cd-config\") pod \"9694a9b4-cc71-4423-a8dd-56a80240d3cd\" (UID: \"9694a9b4-cc71-4423-a8dd-56a80240d3cd\") " Feb 27 17:39:52 crc kubenswrapper[4830]: I0227 17:39:52.069029 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/9694a9b4-cc71-4423-a8dd-56a80240d3cd-kube-api-access-w4sb5" (OuterVolumeSpecName: "kube-api-access-w4sb5") pod "9694a9b4-cc71-4423-a8dd-56a80240d3cd" (UID: "9694a9b4-cc71-4423-a8dd-56a80240d3cd"). InnerVolumeSpecName "kube-api-access-w4sb5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:39:52 crc kubenswrapper[4830]: I0227 17:39:52.126108 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9694a9b4-cc71-4423-a8dd-56a80240d3cd-config" (OuterVolumeSpecName: "config") pod "9694a9b4-cc71-4423-a8dd-56a80240d3cd" (UID: "9694a9b4-cc71-4423-a8dd-56a80240d3cd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:39:52 crc kubenswrapper[4830]: I0227 17:39:52.131447 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9694a9b4-cc71-4423-a8dd-56a80240d3cd-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9694a9b4-cc71-4423-a8dd-56a80240d3cd" (UID: "9694a9b4-cc71-4423-a8dd-56a80240d3cd"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:39:52 crc kubenswrapper[4830]: I0227 17:39:52.135411 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9694a9b4-cc71-4423-a8dd-56a80240d3cd-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "9694a9b4-cc71-4423-a8dd-56a80240d3cd" (UID: "9694a9b4-cc71-4423-a8dd-56a80240d3cd"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:39:52 crc kubenswrapper[4830]: I0227 17:39:52.163773 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9694a9b4-cc71-4423-a8dd-56a80240d3cd-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9694a9b4-cc71-4423-a8dd-56a80240d3cd" (UID: "9694a9b4-cc71-4423-a8dd-56a80240d3cd"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:39:52 crc kubenswrapper[4830]: I0227 17:39:52.164025 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9694a9b4-cc71-4423-a8dd-56a80240d3cd-dns-svc\") pod \"9694a9b4-cc71-4423-a8dd-56a80240d3cd\" (UID: \"9694a9b4-cc71-4423-a8dd-56a80240d3cd\") " Feb 27 17:39:52 crc kubenswrapper[4830]: W0227 17:39:52.164160 4830 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/9694a9b4-cc71-4423-a8dd-56a80240d3cd/volumes/kubernetes.io~configmap/dns-svc Feb 27 17:39:52 crc kubenswrapper[4830]: I0227 17:39:52.164180 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9694a9b4-cc71-4423-a8dd-56a80240d3cd-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9694a9b4-cc71-4423-a8dd-56a80240d3cd" (UID: "9694a9b4-cc71-4423-a8dd-56a80240d3cd"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:39:52 crc kubenswrapper[4830]: I0227 17:39:52.164386 4830 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9694a9b4-cc71-4423-a8dd-56a80240d3cd-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 27 17:39:52 crc kubenswrapper[4830]: I0227 17:39:52.164408 4830 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9694a9b4-cc71-4423-a8dd-56a80240d3cd-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 27 17:39:52 crc kubenswrapper[4830]: I0227 17:39:52.164422 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4sb5\" (UniqueName: \"kubernetes.io/projected/9694a9b4-cc71-4423-a8dd-56a80240d3cd-kube-api-access-w4sb5\") on node \"crc\" DevicePath \"\"" Feb 27 17:39:52 crc kubenswrapper[4830]: I0227 17:39:52.164433 4830 reconciler_common.go:293] "Volume detached for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9694a9b4-cc71-4423-a8dd-56a80240d3cd-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 27 17:39:52 crc kubenswrapper[4830]: I0227 17:39:52.164442 4830 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9694a9b4-cc71-4423-a8dd-56a80240d3cd-config\") on node \"crc\" DevicePath \"\"" Feb 27 17:39:52 crc kubenswrapper[4830]: I0227 17:39:52.786930 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-f4df6446b-z2csf" Feb 27 17:39:52 crc kubenswrapper[4830]: I0227 17:39:52.791573 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68db9477c7-tp8ct" event={"ID":"9694a9b4-cc71-4423-a8dd-56a80240d3cd","Type":"ContainerDied","Data":"6eb22a64d615f7c774a14a2c4d0fa464151d5ae585f690c4c3c0f5c1416bdc27"} Feb 27 17:39:52 crc kubenswrapper[4830]: I0227 17:39:52.791637 4830 scope.go:117] "RemoveContainer" containerID="070d0d0852aa3c2d0c5454543e2b849d27c200f77e8b1db2e019452173412d11" Feb 27 17:39:52 crc kubenswrapper[4830]: I0227 17:39:52.791681 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-68db9477c7-tp8ct" Feb 27 17:39:52 crc kubenswrapper[4830]: I0227 17:39:52.844673 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-68db9477c7-tp8ct"] Feb 27 17:39:52 crc kubenswrapper[4830]: I0227 17:39:52.853387 4830 scope.go:117] "RemoveContainer" containerID="5f471a2ccfea9ad5a27ec27aa94fd81c983b35c5387a8770441570a9112004b9" Feb 27 17:39:52 crc kubenswrapper[4830]: I0227 17:39:52.859106 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-68db9477c7-tp8ct"] Feb 27 17:39:52 crc kubenswrapper[4830]: I0227 17:39:52.992813 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-f4df6446b-z2csf" Feb 27 17:39:53 crc kubenswrapper[4830]: E0227 17:39:53.765354 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536898-vrwjs" podUID="204eb1af-36ad-4de7-9da7-9a37fefd3026" Feb 27 17:39:54 crc kubenswrapper[4830]: I0227 17:39:54.778766 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9694a9b4-cc71-4423-a8dd-56a80240d3cd" path="/var/lib/kubelet/pods/9694a9b4-cc71-4423-a8dd-56a80240d3cd/volumes" Feb 27 17:40:00 crc kubenswrapper[4830]: I0227 17:40:00.158000 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536900-rmh78"] Feb 27 17:40:00 crc kubenswrapper[4830]: E0227 17:40:00.159121 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9694a9b4-cc71-4423-a8dd-56a80240d3cd" containerName="dnsmasq-dns" Feb 27 17:40:00 crc kubenswrapper[4830]: I0227 17:40:00.159135 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="9694a9b4-cc71-4423-a8dd-56a80240d3cd" containerName="dnsmasq-dns" Feb 27 17:40:00 crc kubenswrapper[4830]: E0227 
17:40:00.159162 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9694a9b4-cc71-4423-a8dd-56a80240d3cd" containerName="init" Feb 27 17:40:00 crc kubenswrapper[4830]: I0227 17:40:00.159168 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="9694a9b4-cc71-4423-a8dd-56a80240d3cd" containerName="init" Feb 27 17:40:00 crc kubenswrapper[4830]: I0227 17:40:00.159340 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="9694a9b4-cc71-4423-a8dd-56a80240d3cd" containerName="dnsmasq-dns" Feb 27 17:40:00 crc kubenswrapper[4830]: I0227 17:40:00.159956 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536900-rmh78" Feb 27 17:40:00 crc kubenswrapper[4830]: I0227 17:40:00.168854 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536900-rmh78"] Feb 27 17:40:00 crc kubenswrapper[4830]: I0227 17:40:00.329423 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qb46\" (UniqueName: \"kubernetes.io/projected/900b9199-11ea-4332-b62c-81ebc07f20dd-kube-api-access-9qb46\") pod \"auto-csr-approver-29536900-rmh78\" (UID: \"900b9199-11ea-4332-b62c-81ebc07f20dd\") " pod="openshift-infra/auto-csr-approver-29536900-rmh78" Feb 27 17:40:00 crc kubenswrapper[4830]: I0227 17:40:00.431686 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qb46\" (UniqueName: \"kubernetes.io/projected/900b9199-11ea-4332-b62c-81ebc07f20dd-kube-api-access-9qb46\") pod \"auto-csr-approver-29536900-rmh78\" (UID: \"900b9199-11ea-4332-b62c-81ebc07f20dd\") " pod="openshift-infra/auto-csr-approver-29536900-rmh78" Feb 27 17:40:00 crc kubenswrapper[4830]: I0227 17:40:00.469733 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qb46\" (UniqueName: 
\"kubernetes.io/projected/900b9199-11ea-4332-b62c-81ebc07f20dd-kube-api-access-9qb46\") pod \"auto-csr-approver-29536900-rmh78\" (UID: \"900b9199-11ea-4332-b62c-81ebc07f20dd\") " pod="openshift-infra/auto-csr-approver-29536900-rmh78" Feb 27 17:40:00 crc kubenswrapper[4830]: I0227 17:40:00.488741 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536900-rmh78" Feb 27 17:40:01 crc kubenswrapper[4830]: I0227 17:40:01.035413 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536900-rmh78"] Feb 27 17:40:01 crc kubenswrapper[4830]: I0227 17:40:01.282448 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-jdrm8"] Feb 27 17:40:01 crc kubenswrapper[4830]: I0227 17:40:01.288841 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jdrm8" Feb 27 17:40:01 crc kubenswrapper[4830]: I0227 17:40:01.306000 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jdrm8"] Feb 27 17:40:01 crc kubenswrapper[4830]: I0227 17:40:01.453198 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3831b9ac-f5bb-406b-86a7-9874f56ee18d-catalog-content\") pod \"certified-operators-jdrm8\" (UID: \"3831b9ac-f5bb-406b-86a7-9874f56ee18d\") " pod="openshift-marketplace/certified-operators-jdrm8" Feb 27 17:40:01 crc kubenswrapper[4830]: I0227 17:40:01.453608 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3831b9ac-f5bb-406b-86a7-9874f56ee18d-utilities\") pod \"certified-operators-jdrm8\" (UID: \"3831b9ac-f5bb-406b-86a7-9874f56ee18d\") " pod="openshift-marketplace/certified-operators-jdrm8" Feb 27 17:40:01 crc kubenswrapper[4830]: 
I0227 17:40:01.454128 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8z674\" (UniqueName: \"kubernetes.io/projected/3831b9ac-f5bb-406b-86a7-9874f56ee18d-kube-api-access-8z674\") pod \"certified-operators-jdrm8\" (UID: \"3831b9ac-f5bb-406b-86a7-9874f56ee18d\") " pod="openshift-marketplace/certified-operators-jdrm8" Feb 27 17:40:01 crc kubenswrapper[4830]: I0227 17:40:01.558376 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3831b9ac-f5bb-406b-86a7-9874f56ee18d-catalog-content\") pod \"certified-operators-jdrm8\" (UID: \"3831b9ac-f5bb-406b-86a7-9874f56ee18d\") " pod="openshift-marketplace/certified-operators-jdrm8" Feb 27 17:40:01 crc kubenswrapper[4830]: I0227 17:40:01.558510 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3831b9ac-f5bb-406b-86a7-9874f56ee18d-utilities\") pod \"certified-operators-jdrm8\" (UID: \"3831b9ac-f5bb-406b-86a7-9874f56ee18d\") " pod="openshift-marketplace/certified-operators-jdrm8" Feb 27 17:40:01 crc kubenswrapper[4830]: I0227 17:40:01.558782 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8z674\" (UniqueName: \"kubernetes.io/projected/3831b9ac-f5bb-406b-86a7-9874f56ee18d-kube-api-access-8z674\") pod \"certified-operators-jdrm8\" (UID: \"3831b9ac-f5bb-406b-86a7-9874f56ee18d\") " pod="openshift-marketplace/certified-operators-jdrm8" Feb 27 17:40:01 crc kubenswrapper[4830]: I0227 17:40:01.559042 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3831b9ac-f5bb-406b-86a7-9874f56ee18d-catalog-content\") pod \"certified-operators-jdrm8\" (UID: \"3831b9ac-f5bb-406b-86a7-9874f56ee18d\") " pod="openshift-marketplace/certified-operators-jdrm8" Feb 27 17:40:01 crc 
kubenswrapper[4830]: I0227 17:40:01.559664 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3831b9ac-f5bb-406b-86a7-9874f56ee18d-utilities\") pod \"certified-operators-jdrm8\" (UID: \"3831b9ac-f5bb-406b-86a7-9874f56ee18d\") " pod="openshift-marketplace/certified-operators-jdrm8" Feb 27 17:40:01 crc kubenswrapper[4830]: I0227 17:40:01.586773 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8z674\" (UniqueName: \"kubernetes.io/projected/3831b9ac-f5bb-406b-86a7-9874f56ee18d-kube-api-access-8z674\") pod \"certified-operators-jdrm8\" (UID: \"3831b9ac-f5bb-406b-86a7-9874f56ee18d\") " pod="openshift-marketplace/certified-operators-jdrm8" Feb 27 17:40:01 crc kubenswrapper[4830]: I0227 17:40:01.631327 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jdrm8" Feb 27 17:40:01 crc kubenswrapper[4830]: E0227 17:40:01.763981 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-gvnvz" podUID="f1c73a78-1e95-4481-a273-ba7e3b5a127c" Feb 27 17:40:01 crc kubenswrapper[4830]: I0227 17:40:01.892518 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536900-rmh78" event={"ID":"900b9199-11ea-4332-b62c-81ebc07f20dd","Type":"ContainerStarted","Data":"8cafb44ecb128e786411883275aa63d942df2ebe21a8c1541621797b76f94052"} Feb 27 17:40:02 crc kubenswrapper[4830]: E0227 17:40:02.108988 4830 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from 
https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 27 17:40:02 crc kubenswrapper[4830]: E0227 17:40:02.109649 4830 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 17:40:02 crc kubenswrapper[4830]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 27 17:40:02 crc kubenswrapper[4830]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9qb46,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29536900-rmh78_openshift-infra(900b9199-11ea-4332-b62c-81ebc07f20dd): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 27 17:40:02 crc kubenswrapper[4830]: > logger="UnhandledError" Feb 27 17:40:02 crc kubenswrapper[4830]: E0227 17:40:02.111162 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed 
to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29536900-rmh78" podUID="900b9199-11ea-4332-b62c-81ebc07f20dd" Feb 27 17:40:02 crc kubenswrapper[4830]: I0227 17:40:02.209159 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jdrm8"] Feb 27 17:40:02 crc kubenswrapper[4830]: W0227 17:40:02.211120 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3831b9ac_f5bb_406b_86a7_9874f56ee18d.slice/crio-a12e0fe4d05e91d1aaeb7c3f4aee5798b5436589c097d601cea3876c47aafff8 WatchSource:0}: Error finding container a12e0fe4d05e91d1aaeb7c3f4aee5798b5436589c097d601cea3876c47aafff8: Status 404 returned error can't find the container with id a12e0fe4d05e91d1aaeb7c3f4aee5798b5436589c097d601cea3876c47aafff8 Feb 27 17:40:02 crc kubenswrapper[4830]: I0227 17:40:02.905349 4830 generic.go:334] "Generic (PLEG): container finished" podID="3831b9ac-f5bb-406b-86a7-9874f56ee18d" containerID="6ef88a766ae1ff558ba944739e841e92824a9eb83b762dd0484020a9fee6aef5" exitCode=0 Feb 27 17:40:02 crc kubenswrapper[4830]: I0227 17:40:02.905439 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jdrm8" event={"ID":"3831b9ac-f5bb-406b-86a7-9874f56ee18d","Type":"ContainerDied","Data":"6ef88a766ae1ff558ba944739e841e92824a9eb83b762dd0484020a9fee6aef5"} Feb 27 17:40:02 crc kubenswrapper[4830]: I0227 17:40:02.905514 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jdrm8" 
event={"ID":"3831b9ac-f5bb-406b-86a7-9874f56ee18d","Type":"ContainerStarted","Data":"a12e0fe4d05e91d1aaeb7c3f4aee5798b5436589c097d601cea3876c47aafff8"} Feb 27 17:40:02 crc kubenswrapper[4830]: E0227 17:40:02.907789 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536900-rmh78" podUID="900b9199-11ea-4332-b62c-81ebc07f20dd" Feb 27 17:40:04 crc kubenswrapper[4830]: I0227 17:40:04.930913 4830 generic.go:334] "Generic (PLEG): container finished" podID="3831b9ac-f5bb-406b-86a7-9874f56ee18d" containerID="2cb3a31a0f70c63636f6612f5f4db0af59a58ea2ba70496bd2ce84220024d764" exitCode=0 Feb 27 17:40:04 crc kubenswrapper[4830]: I0227 17:40:04.931001 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jdrm8" event={"ID":"3831b9ac-f5bb-406b-86a7-9874f56ee18d","Type":"ContainerDied","Data":"2cb3a31a0f70c63636f6612f5f4db0af59a58ea2ba70496bd2ce84220024d764"} Feb 27 17:40:05 crc kubenswrapper[4830]: I0227 17:40:05.175181 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-tvmtc"] Feb 27 17:40:05 crc kubenswrapper[4830]: I0227 17:40:05.176942 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-tvmtc" Feb 27 17:40:05 crc kubenswrapper[4830]: I0227 17:40:05.199373 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-tvmtc"] Feb 27 17:40:05 crc kubenswrapper[4830]: I0227 17:40:05.273386 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-0cac-account-create-update-j4sk4"] Feb 27 17:40:05 crc kubenswrapper[4830]: I0227 17:40:05.274786 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-0cac-account-create-update-j4sk4" Feb 27 17:40:05 crc kubenswrapper[4830]: I0227 17:40:05.277887 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Feb 27 17:40:05 crc kubenswrapper[4830]: I0227 17:40:05.281241 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-0cac-account-create-update-j4sk4"] Feb 27 17:40:05 crc kubenswrapper[4830]: I0227 17:40:05.346625 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c330e013-ad56-4282-9e44-1b0ca4ceaf6c-operator-scripts\") pod \"neutron-db-create-tvmtc\" (UID: \"c330e013-ad56-4282-9e44-1b0ca4ceaf6c\") " pod="openstack/neutron-db-create-tvmtc" Feb 27 17:40:05 crc kubenswrapper[4830]: I0227 17:40:05.346727 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4bfs2\" (UniqueName: \"kubernetes.io/projected/c330e013-ad56-4282-9e44-1b0ca4ceaf6c-kube-api-access-4bfs2\") pod \"neutron-db-create-tvmtc\" (UID: \"c330e013-ad56-4282-9e44-1b0ca4ceaf6c\") " pod="openstack/neutron-db-create-tvmtc" Feb 27 17:40:05 crc kubenswrapper[4830]: I0227 17:40:05.453307 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/10cd9813-51dd-4c03-a406-ef763ae8952f-operator-scripts\") pod \"neutron-0cac-account-create-update-j4sk4\" (UID: \"10cd9813-51dd-4c03-a406-ef763ae8952f\") " pod="openstack/neutron-0cac-account-create-update-j4sk4" Feb 27 17:40:05 crc kubenswrapper[4830]: I0227 17:40:05.453354 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c330e013-ad56-4282-9e44-1b0ca4ceaf6c-operator-scripts\") pod \"neutron-db-create-tvmtc\" (UID: 
\"c330e013-ad56-4282-9e44-1b0ca4ceaf6c\") " pod="openstack/neutron-db-create-tvmtc" Feb 27 17:40:05 crc kubenswrapper[4830]: I0227 17:40:05.453423 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4bfs2\" (UniqueName: \"kubernetes.io/projected/c330e013-ad56-4282-9e44-1b0ca4ceaf6c-kube-api-access-4bfs2\") pod \"neutron-db-create-tvmtc\" (UID: \"c330e013-ad56-4282-9e44-1b0ca4ceaf6c\") " pod="openstack/neutron-db-create-tvmtc" Feb 27 17:40:05 crc kubenswrapper[4830]: I0227 17:40:05.453470 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxvbq\" (UniqueName: \"kubernetes.io/projected/10cd9813-51dd-4c03-a406-ef763ae8952f-kube-api-access-vxvbq\") pod \"neutron-0cac-account-create-update-j4sk4\" (UID: \"10cd9813-51dd-4c03-a406-ef763ae8952f\") " pod="openstack/neutron-0cac-account-create-update-j4sk4" Feb 27 17:40:05 crc kubenswrapper[4830]: I0227 17:40:05.454816 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c330e013-ad56-4282-9e44-1b0ca4ceaf6c-operator-scripts\") pod \"neutron-db-create-tvmtc\" (UID: \"c330e013-ad56-4282-9e44-1b0ca4ceaf6c\") " pod="openstack/neutron-db-create-tvmtc" Feb 27 17:40:05 crc kubenswrapper[4830]: I0227 17:40:05.474442 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4bfs2\" (UniqueName: \"kubernetes.io/projected/c330e013-ad56-4282-9e44-1b0ca4ceaf6c-kube-api-access-4bfs2\") pod \"neutron-db-create-tvmtc\" (UID: \"c330e013-ad56-4282-9e44-1b0ca4ceaf6c\") " pod="openstack/neutron-db-create-tvmtc" Feb 27 17:40:05 crc kubenswrapper[4830]: I0227 17:40:05.495981 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-tvmtc" Feb 27 17:40:05 crc kubenswrapper[4830]: I0227 17:40:05.554820 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/10cd9813-51dd-4c03-a406-ef763ae8952f-operator-scripts\") pod \"neutron-0cac-account-create-update-j4sk4\" (UID: \"10cd9813-51dd-4c03-a406-ef763ae8952f\") " pod="openstack/neutron-0cac-account-create-update-j4sk4" Feb 27 17:40:05 crc kubenswrapper[4830]: I0227 17:40:05.555307 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxvbq\" (UniqueName: \"kubernetes.io/projected/10cd9813-51dd-4c03-a406-ef763ae8952f-kube-api-access-vxvbq\") pod \"neutron-0cac-account-create-update-j4sk4\" (UID: \"10cd9813-51dd-4c03-a406-ef763ae8952f\") " pod="openstack/neutron-0cac-account-create-update-j4sk4" Feb 27 17:40:05 crc kubenswrapper[4830]: I0227 17:40:05.555662 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/10cd9813-51dd-4c03-a406-ef763ae8952f-operator-scripts\") pod \"neutron-0cac-account-create-update-j4sk4\" (UID: \"10cd9813-51dd-4c03-a406-ef763ae8952f\") " pod="openstack/neutron-0cac-account-create-update-j4sk4" Feb 27 17:40:05 crc kubenswrapper[4830]: I0227 17:40:05.574151 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxvbq\" (UniqueName: \"kubernetes.io/projected/10cd9813-51dd-4c03-a406-ef763ae8952f-kube-api-access-vxvbq\") pod \"neutron-0cac-account-create-update-j4sk4\" (UID: \"10cd9813-51dd-4c03-a406-ef763ae8952f\") " pod="openstack/neutron-0cac-account-create-update-j4sk4" Feb 27 17:40:05 crc kubenswrapper[4830]: I0227 17:40:05.623306 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-0cac-account-create-update-j4sk4" Feb 27 17:40:05 crc kubenswrapper[4830]: I0227 17:40:05.942804 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jdrm8" event={"ID":"3831b9ac-f5bb-406b-86a7-9874f56ee18d","Type":"ContainerStarted","Data":"e6352044745aa42a3c35d9c1888db41a27f992e3dc1192489976666473fd0414"} Feb 27 17:40:05 crc kubenswrapper[4830]: I0227 17:40:05.964959 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-jdrm8" podStartSLOduration=2.513115588 podStartE2EDuration="4.964923728s" podCreationTimestamp="2026-02-27 17:40:01 +0000 UTC" firstStartedPulling="2026-02-27 17:40:02.908269091 +0000 UTC m=+5598.997541564" lastFinishedPulling="2026-02-27 17:40:05.360077221 +0000 UTC m=+5601.449349704" observedRunningTime="2026-02-27 17:40:05.963086454 +0000 UTC m=+5602.052358917" watchObservedRunningTime="2026-02-27 17:40:05.964923728 +0000 UTC m=+5602.054196191" Feb 27 17:40:05 crc kubenswrapper[4830]: W0227 17:40:05.994478 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc330e013_ad56_4282_9e44_1b0ca4ceaf6c.slice/crio-19a5ead74e859882592dbc1a955cdaeb8d6c621f855dcdcb014f7f231f52683b WatchSource:0}: Error finding container 19a5ead74e859882592dbc1a955cdaeb8d6c621f855dcdcb014f7f231f52683b: Status 404 returned error can't find the container with id 19a5ead74e859882592dbc1a955cdaeb8d6c621f855dcdcb014f7f231f52683b Feb 27 17:40:05 crc kubenswrapper[4830]: I0227 17:40:05.994833 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-tvmtc"] Feb 27 17:40:06 crc kubenswrapper[4830]: I0227 17:40:06.132403 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-0cac-account-create-update-j4sk4"] Feb 27 17:40:06 crc kubenswrapper[4830]: W0227 17:40:06.134635 4830 
manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod10cd9813_51dd_4c03_a406_ef763ae8952f.slice/crio-3715dee8a13134b08b9fecbdd5aba158c9ba14b31e9ff0d161d5736dd82365e7 WatchSource:0}: Error finding container 3715dee8a13134b08b9fecbdd5aba158c9ba14b31e9ff0d161d5736dd82365e7: Status 404 returned error can't find the container with id 3715dee8a13134b08b9fecbdd5aba158c9ba14b31e9ff0d161d5736dd82365e7 Feb 27 17:40:06 crc kubenswrapper[4830]: I0227 17:40:06.955039 4830 generic.go:334] "Generic (PLEG): container finished" podID="c330e013-ad56-4282-9e44-1b0ca4ceaf6c" containerID="05360378ab057b13551d131ac1406057daf407391b34f6e4a5314119293b601e" exitCode=0 Feb 27 17:40:06 crc kubenswrapper[4830]: I0227 17:40:06.955176 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-tvmtc" event={"ID":"c330e013-ad56-4282-9e44-1b0ca4ceaf6c","Type":"ContainerDied","Data":"05360378ab057b13551d131ac1406057daf407391b34f6e4a5314119293b601e"} Feb 27 17:40:06 crc kubenswrapper[4830]: I0227 17:40:06.956418 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-tvmtc" event={"ID":"c330e013-ad56-4282-9e44-1b0ca4ceaf6c","Type":"ContainerStarted","Data":"19a5ead74e859882592dbc1a955cdaeb8d6c621f855dcdcb014f7f231f52683b"} Feb 27 17:40:06 crc kubenswrapper[4830]: I0227 17:40:06.958235 4830 generic.go:334] "Generic (PLEG): container finished" podID="10cd9813-51dd-4c03-a406-ef763ae8952f" containerID="5555cb99baa299be153d20d08b4486f006126c30bd46dbed11c76edee3a19b70" exitCode=0 Feb 27 17:40:06 crc kubenswrapper[4830]: I0227 17:40:06.959038 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-0cac-account-create-update-j4sk4" event={"ID":"10cd9813-51dd-4c03-a406-ef763ae8952f","Type":"ContainerDied","Data":"5555cb99baa299be153d20d08b4486f006126c30bd46dbed11c76edee3a19b70"} Feb 27 17:40:06 crc kubenswrapper[4830]: I0227 17:40:06.959091 4830 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-0cac-account-create-update-j4sk4" event={"ID":"10cd9813-51dd-4c03-a406-ef763ae8952f","Type":"ContainerStarted","Data":"3715dee8a13134b08b9fecbdd5aba158c9ba14b31e9ff0d161d5736dd82365e7"} Feb 27 17:40:08 crc kubenswrapper[4830]: I0227 17:40:08.441629 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-0cac-account-create-update-j4sk4" Feb 27 17:40:08 crc kubenswrapper[4830]: I0227 17:40:08.449509 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-tvmtc" Feb 27 17:40:08 crc kubenswrapper[4830]: I0227 17:40:08.540453 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4bfs2\" (UniqueName: \"kubernetes.io/projected/c330e013-ad56-4282-9e44-1b0ca4ceaf6c-kube-api-access-4bfs2\") pod \"c330e013-ad56-4282-9e44-1b0ca4ceaf6c\" (UID: \"c330e013-ad56-4282-9e44-1b0ca4ceaf6c\") " Feb 27 17:40:08 crc kubenswrapper[4830]: I0227 17:40:08.540589 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c330e013-ad56-4282-9e44-1b0ca4ceaf6c-operator-scripts\") pod \"c330e013-ad56-4282-9e44-1b0ca4ceaf6c\" (UID: \"c330e013-ad56-4282-9e44-1b0ca4ceaf6c\") " Feb 27 17:40:08 crc kubenswrapper[4830]: I0227 17:40:08.540658 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/10cd9813-51dd-4c03-a406-ef763ae8952f-operator-scripts\") pod \"10cd9813-51dd-4c03-a406-ef763ae8952f\" (UID: \"10cd9813-51dd-4c03-a406-ef763ae8952f\") " Feb 27 17:40:08 crc kubenswrapper[4830]: I0227 17:40:08.540777 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vxvbq\" (UniqueName: 
\"kubernetes.io/projected/10cd9813-51dd-4c03-a406-ef763ae8952f-kube-api-access-vxvbq\") pod \"10cd9813-51dd-4c03-a406-ef763ae8952f\" (UID: \"10cd9813-51dd-4c03-a406-ef763ae8952f\") " Feb 27 17:40:08 crc kubenswrapper[4830]: I0227 17:40:08.541552 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/10cd9813-51dd-4c03-a406-ef763ae8952f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "10cd9813-51dd-4c03-a406-ef763ae8952f" (UID: "10cd9813-51dd-4c03-a406-ef763ae8952f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:40:08 crc kubenswrapper[4830]: I0227 17:40:08.541573 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c330e013-ad56-4282-9e44-1b0ca4ceaf6c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c330e013-ad56-4282-9e44-1b0ca4ceaf6c" (UID: "c330e013-ad56-4282-9e44-1b0ca4ceaf6c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:40:08 crc kubenswrapper[4830]: I0227 17:40:08.548153 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c330e013-ad56-4282-9e44-1b0ca4ceaf6c-kube-api-access-4bfs2" (OuterVolumeSpecName: "kube-api-access-4bfs2") pod "c330e013-ad56-4282-9e44-1b0ca4ceaf6c" (UID: "c330e013-ad56-4282-9e44-1b0ca4ceaf6c"). InnerVolumeSpecName "kube-api-access-4bfs2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:40:08 crc kubenswrapper[4830]: I0227 17:40:08.548211 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10cd9813-51dd-4c03-a406-ef763ae8952f-kube-api-access-vxvbq" (OuterVolumeSpecName: "kube-api-access-vxvbq") pod "10cd9813-51dd-4c03-a406-ef763ae8952f" (UID: "10cd9813-51dd-4c03-a406-ef763ae8952f"). InnerVolumeSpecName "kube-api-access-vxvbq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:40:08 crc kubenswrapper[4830]: I0227 17:40:08.643385 4830 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c330e013-ad56-4282-9e44-1b0ca4ceaf6c-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 17:40:08 crc kubenswrapper[4830]: I0227 17:40:08.643426 4830 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/10cd9813-51dd-4c03-a406-ef763ae8952f-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 17:40:08 crc kubenswrapper[4830]: I0227 17:40:08.643440 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vxvbq\" (UniqueName: \"kubernetes.io/projected/10cd9813-51dd-4c03-a406-ef763ae8952f-kube-api-access-vxvbq\") on node \"crc\" DevicePath \"\"" Feb 27 17:40:08 crc kubenswrapper[4830]: I0227 17:40:08.643461 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4bfs2\" (UniqueName: \"kubernetes.io/projected/c330e013-ad56-4282-9e44-1b0ca4ceaf6c-kube-api-access-4bfs2\") on node \"crc\" DevicePath \"\"" Feb 27 17:40:08 crc kubenswrapper[4830]: E0227 17:40:08.863752 4830 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 27 17:40:08 crc kubenswrapper[4830]: E0227 17:40:08.863943 4830 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 17:40:08 crc kubenswrapper[4830]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not 
.status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 27 17:40:08 crc kubenswrapper[4830]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mdb7h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29536898-vrwjs_openshift-infra(204eb1af-36ad-4de7-9da7-9a37fefd3026): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 27 17:40:08 crc kubenswrapper[4830]: > logger="UnhandledError" Feb 27 17:40:08 crc kubenswrapper[4830]: E0227 17:40:08.865215 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29536898-vrwjs" podUID="204eb1af-36ad-4de7-9da7-9a37fefd3026" Feb 27 17:40:08 crc kubenswrapper[4830]: I0227 17:40:08.983876 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/neutron-db-create-tvmtc" event={"ID":"c330e013-ad56-4282-9e44-1b0ca4ceaf6c","Type":"ContainerDied","Data":"19a5ead74e859882592dbc1a955cdaeb8d6c621f855dcdcb014f7f231f52683b"} Feb 27 17:40:08 crc kubenswrapper[4830]: I0227 17:40:08.983996 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="19a5ead74e859882592dbc1a955cdaeb8d6c621f855dcdcb014f7f231f52683b" Feb 27 17:40:08 crc kubenswrapper[4830]: I0227 17:40:08.984000 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-tvmtc" Feb 27 17:40:08 crc kubenswrapper[4830]: I0227 17:40:08.986659 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-0cac-account-create-update-j4sk4" event={"ID":"10cd9813-51dd-4c03-a406-ef763ae8952f","Type":"ContainerDied","Data":"3715dee8a13134b08b9fecbdd5aba158c9ba14b31e9ff0d161d5736dd82365e7"} Feb 27 17:40:08 crc kubenswrapper[4830]: I0227 17:40:08.986710 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-0cac-account-create-update-j4sk4" Feb 27 17:40:08 crc kubenswrapper[4830]: I0227 17:40:08.986722 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3715dee8a13134b08b9fecbdd5aba158c9ba14b31e9ff0d161d5736dd82365e7" Feb 27 17:40:10 crc kubenswrapper[4830]: I0227 17:40:10.524324 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-bqrrs"] Feb 27 17:40:10 crc kubenswrapper[4830]: E0227 17:40:10.525112 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10cd9813-51dd-4c03-a406-ef763ae8952f" containerName="mariadb-account-create-update" Feb 27 17:40:10 crc kubenswrapper[4830]: I0227 17:40:10.525126 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="10cd9813-51dd-4c03-a406-ef763ae8952f" containerName="mariadb-account-create-update" Feb 27 17:40:10 crc kubenswrapper[4830]: E0227 17:40:10.525139 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c330e013-ad56-4282-9e44-1b0ca4ceaf6c" containerName="mariadb-database-create" Feb 27 17:40:10 crc kubenswrapper[4830]: I0227 17:40:10.525145 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="c330e013-ad56-4282-9e44-1b0ca4ceaf6c" containerName="mariadb-database-create" Feb 27 17:40:10 crc kubenswrapper[4830]: I0227 17:40:10.525327 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="c330e013-ad56-4282-9e44-1b0ca4ceaf6c" containerName="mariadb-database-create" Feb 27 17:40:10 crc kubenswrapper[4830]: I0227 17:40:10.525338 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="10cd9813-51dd-4c03-a406-ef763ae8952f" containerName="mariadb-account-create-update" Feb 27 17:40:10 crc kubenswrapper[4830]: I0227 17:40:10.526027 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-bqrrs" Feb 27 17:40:10 crc kubenswrapper[4830]: I0227 17:40:10.528350 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-57n2d" Feb 27 17:40:10 crc kubenswrapper[4830]: I0227 17:40:10.528784 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 27 17:40:10 crc kubenswrapper[4830]: I0227 17:40:10.529224 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 27 17:40:10 crc kubenswrapper[4830]: I0227 17:40:10.532353 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-bqrrs"] Feb 27 17:40:10 crc kubenswrapper[4830]: I0227 17:40:10.579594 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb46f1e5-ccd7-49e2-9f90-fc2e504dcc27-combined-ca-bundle\") pod \"neutron-db-sync-bqrrs\" (UID: \"cb46f1e5-ccd7-49e2-9f90-fc2e504dcc27\") " pod="openstack/neutron-db-sync-bqrrs" Feb 27 17:40:10 crc kubenswrapper[4830]: I0227 17:40:10.579784 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/cb46f1e5-ccd7-49e2-9f90-fc2e504dcc27-config\") pod \"neutron-db-sync-bqrrs\" (UID: \"cb46f1e5-ccd7-49e2-9f90-fc2e504dcc27\") " pod="openstack/neutron-db-sync-bqrrs" Feb 27 17:40:10 crc kubenswrapper[4830]: I0227 17:40:10.579847 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2brbh\" (UniqueName: \"kubernetes.io/projected/cb46f1e5-ccd7-49e2-9f90-fc2e504dcc27-kube-api-access-2brbh\") pod \"neutron-db-sync-bqrrs\" (UID: \"cb46f1e5-ccd7-49e2-9f90-fc2e504dcc27\") " pod="openstack/neutron-db-sync-bqrrs" Feb 27 17:40:10 crc kubenswrapper[4830]: I0227 17:40:10.681812 4830 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb46f1e5-ccd7-49e2-9f90-fc2e504dcc27-combined-ca-bundle\") pod \"neutron-db-sync-bqrrs\" (UID: \"cb46f1e5-ccd7-49e2-9f90-fc2e504dcc27\") " pod="openstack/neutron-db-sync-bqrrs" Feb 27 17:40:10 crc kubenswrapper[4830]: I0227 17:40:10.681968 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/cb46f1e5-ccd7-49e2-9f90-fc2e504dcc27-config\") pod \"neutron-db-sync-bqrrs\" (UID: \"cb46f1e5-ccd7-49e2-9f90-fc2e504dcc27\") " pod="openstack/neutron-db-sync-bqrrs" Feb 27 17:40:10 crc kubenswrapper[4830]: I0227 17:40:10.682009 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2brbh\" (UniqueName: \"kubernetes.io/projected/cb46f1e5-ccd7-49e2-9f90-fc2e504dcc27-kube-api-access-2brbh\") pod \"neutron-db-sync-bqrrs\" (UID: \"cb46f1e5-ccd7-49e2-9f90-fc2e504dcc27\") " pod="openstack/neutron-db-sync-bqrrs" Feb 27 17:40:10 crc kubenswrapper[4830]: I0227 17:40:10.689276 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb46f1e5-ccd7-49e2-9f90-fc2e504dcc27-combined-ca-bundle\") pod \"neutron-db-sync-bqrrs\" (UID: \"cb46f1e5-ccd7-49e2-9f90-fc2e504dcc27\") " pod="openstack/neutron-db-sync-bqrrs" Feb 27 17:40:10 crc kubenswrapper[4830]: I0227 17:40:10.689802 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/cb46f1e5-ccd7-49e2-9f90-fc2e504dcc27-config\") pod \"neutron-db-sync-bqrrs\" (UID: \"cb46f1e5-ccd7-49e2-9f90-fc2e504dcc27\") " pod="openstack/neutron-db-sync-bqrrs" Feb 27 17:40:10 crc kubenswrapper[4830]: I0227 17:40:10.711687 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2brbh\" (UniqueName: 
\"kubernetes.io/projected/cb46f1e5-ccd7-49e2-9f90-fc2e504dcc27-kube-api-access-2brbh\") pod \"neutron-db-sync-bqrrs\" (UID: \"cb46f1e5-ccd7-49e2-9f90-fc2e504dcc27\") " pod="openstack/neutron-db-sync-bqrrs" Feb 27 17:40:10 crc kubenswrapper[4830]: I0227 17:40:10.847012 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-bqrrs" Feb 27 17:40:11 crc kubenswrapper[4830]: I0227 17:40:11.395247 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-bqrrs"] Feb 27 17:40:11 crc kubenswrapper[4830]: I0227 17:40:11.632112 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-jdrm8" Feb 27 17:40:11 crc kubenswrapper[4830]: I0227 17:40:11.632203 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-jdrm8" Feb 27 17:40:11 crc kubenswrapper[4830]: I0227 17:40:11.715998 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-jdrm8" Feb 27 17:40:12 crc kubenswrapper[4830]: I0227 17:40:12.039714 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-bqrrs" event={"ID":"cb46f1e5-ccd7-49e2-9f90-fc2e504dcc27","Type":"ContainerStarted","Data":"1741f85485987bfb4d4d76628430e674b6e549e230ddfafab44f9bad653a361a"} Feb 27 17:40:12 crc kubenswrapper[4830]: I0227 17:40:12.039780 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-bqrrs" event={"ID":"cb46f1e5-ccd7-49e2-9f90-fc2e504dcc27","Type":"ContainerStarted","Data":"108585ad29957cc3a298f128c9c95ba8d3fdc269e93098476b343e0919b3da1f"} Feb 27 17:40:12 crc kubenswrapper[4830]: I0227 17:40:12.064158 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-bqrrs" podStartSLOduration=2.064130364 podStartE2EDuration="2.064130364s" podCreationTimestamp="2026-02-27 
17:40:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 17:40:12.060235391 +0000 UTC m=+5608.149507864" watchObservedRunningTime="2026-02-27 17:40:12.064130364 +0000 UTC m=+5608.153402837" Feb 27 17:40:12 crc kubenswrapper[4830]: I0227 17:40:12.095980 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-jdrm8" Feb 27 17:40:12 crc kubenswrapper[4830]: I0227 17:40:12.161682 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jdrm8"] Feb 27 17:40:13 crc kubenswrapper[4830]: E0227 17:40:13.766096 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-gvnvz" podUID="f1c73a78-1e95-4481-a273-ba7e3b5a127c" Feb 27 17:40:14 crc kubenswrapper[4830]: I0227 17:40:14.065132 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-jdrm8" podUID="3831b9ac-f5bb-406b-86a7-9874f56ee18d" containerName="registry-server" containerID="cri-o://e6352044745aa42a3c35d9c1888db41a27f992e3dc1192489976666473fd0414" gracePeriod=2 Feb 27 17:40:14 crc kubenswrapper[4830]: I0227 17:40:14.547417 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jdrm8" Feb 27 17:40:14 crc kubenswrapper[4830]: I0227 17:40:14.576835 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8z674\" (UniqueName: \"kubernetes.io/projected/3831b9ac-f5bb-406b-86a7-9874f56ee18d-kube-api-access-8z674\") pod \"3831b9ac-f5bb-406b-86a7-9874f56ee18d\" (UID: \"3831b9ac-f5bb-406b-86a7-9874f56ee18d\") " Feb 27 17:40:14 crc kubenswrapper[4830]: I0227 17:40:14.576899 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3831b9ac-f5bb-406b-86a7-9874f56ee18d-catalog-content\") pod \"3831b9ac-f5bb-406b-86a7-9874f56ee18d\" (UID: \"3831b9ac-f5bb-406b-86a7-9874f56ee18d\") " Feb 27 17:40:14 crc kubenswrapper[4830]: I0227 17:40:14.577059 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3831b9ac-f5bb-406b-86a7-9874f56ee18d-utilities\") pod \"3831b9ac-f5bb-406b-86a7-9874f56ee18d\" (UID: \"3831b9ac-f5bb-406b-86a7-9874f56ee18d\") " Feb 27 17:40:14 crc kubenswrapper[4830]: I0227 17:40:14.580414 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3831b9ac-f5bb-406b-86a7-9874f56ee18d-utilities" (OuterVolumeSpecName: "utilities") pod "3831b9ac-f5bb-406b-86a7-9874f56ee18d" (UID: "3831b9ac-f5bb-406b-86a7-9874f56ee18d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 17:40:14 crc kubenswrapper[4830]: I0227 17:40:14.584360 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3831b9ac-f5bb-406b-86a7-9874f56ee18d-kube-api-access-8z674" (OuterVolumeSpecName: "kube-api-access-8z674") pod "3831b9ac-f5bb-406b-86a7-9874f56ee18d" (UID: "3831b9ac-f5bb-406b-86a7-9874f56ee18d"). InnerVolumeSpecName "kube-api-access-8z674". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:40:14 crc kubenswrapper[4830]: I0227 17:40:14.657277 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3831b9ac-f5bb-406b-86a7-9874f56ee18d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3831b9ac-f5bb-406b-86a7-9874f56ee18d" (UID: "3831b9ac-f5bb-406b-86a7-9874f56ee18d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 17:40:14 crc kubenswrapper[4830]: I0227 17:40:14.679229 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8z674\" (UniqueName: \"kubernetes.io/projected/3831b9ac-f5bb-406b-86a7-9874f56ee18d-kube-api-access-8z674\") on node \"crc\" DevicePath \"\"" Feb 27 17:40:14 crc kubenswrapper[4830]: I0227 17:40:14.679270 4830 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3831b9ac-f5bb-406b-86a7-9874f56ee18d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 17:40:14 crc kubenswrapper[4830]: I0227 17:40:14.679280 4830 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3831b9ac-f5bb-406b-86a7-9874f56ee18d-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 17:40:15 crc kubenswrapper[4830]: I0227 17:40:15.077363 4830 generic.go:334] "Generic (PLEG): container finished" podID="3831b9ac-f5bb-406b-86a7-9874f56ee18d" containerID="e6352044745aa42a3c35d9c1888db41a27f992e3dc1192489976666473fd0414" exitCode=0 Feb 27 17:40:15 crc kubenswrapper[4830]: I0227 17:40:15.077463 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jdrm8" event={"ID":"3831b9ac-f5bb-406b-86a7-9874f56ee18d","Type":"ContainerDied","Data":"e6352044745aa42a3c35d9c1888db41a27f992e3dc1192489976666473fd0414"} Feb 27 17:40:15 crc kubenswrapper[4830]: I0227 17:40:15.078065 4830 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-jdrm8" event={"ID":"3831b9ac-f5bb-406b-86a7-9874f56ee18d","Type":"ContainerDied","Data":"a12e0fe4d05e91d1aaeb7c3f4aee5798b5436589c097d601cea3876c47aafff8"} Feb 27 17:40:15 crc kubenswrapper[4830]: I0227 17:40:15.077479 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jdrm8" Feb 27 17:40:15 crc kubenswrapper[4830]: I0227 17:40:15.078117 4830 scope.go:117] "RemoveContainer" containerID="e6352044745aa42a3c35d9c1888db41a27f992e3dc1192489976666473fd0414" Feb 27 17:40:15 crc kubenswrapper[4830]: I0227 17:40:15.110831 4830 scope.go:117] "RemoveContainer" containerID="2cb3a31a0f70c63636f6612f5f4db0af59a58ea2ba70496bd2ce84220024d764" Feb 27 17:40:15 crc kubenswrapper[4830]: I0227 17:40:15.111441 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jdrm8"] Feb 27 17:40:15 crc kubenswrapper[4830]: I0227 17:40:15.131678 4830 scope.go:117] "RemoveContainer" containerID="6ef88a766ae1ff558ba944739e841e92824a9eb83b762dd0484020a9fee6aef5" Feb 27 17:40:15 crc kubenswrapper[4830]: I0227 17:40:15.139928 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-jdrm8"] Feb 27 17:40:15 crc kubenswrapper[4830]: I0227 17:40:15.172760 4830 scope.go:117] "RemoveContainer" containerID="e6352044745aa42a3c35d9c1888db41a27f992e3dc1192489976666473fd0414" Feb 27 17:40:15 crc kubenswrapper[4830]: E0227 17:40:15.173251 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e6352044745aa42a3c35d9c1888db41a27f992e3dc1192489976666473fd0414\": container with ID starting with e6352044745aa42a3c35d9c1888db41a27f992e3dc1192489976666473fd0414 not found: ID does not exist" containerID="e6352044745aa42a3c35d9c1888db41a27f992e3dc1192489976666473fd0414" Feb 27 17:40:15 crc kubenswrapper[4830]: I0227 
17:40:15.173306 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e6352044745aa42a3c35d9c1888db41a27f992e3dc1192489976666473fd0414"} err="failed to get container status \"e6352044745aa42a3c35d9c1888db41a27f992e3dc1192489976666473fd0414\": rpc error: code = NotFound desc = could not find container \"e6352044745aa42a3c35d9c1888db41a27f992e3dc1192489976666473fd0414\": container with ID starting with e6352044745aa42a3c35d9c1888db41a27f992e3dc1192489976666473fd0414 not found: ID does not exist" Feb 27 17:40:15 crc kubenswrapper[4830]: I0227 17:40:15.173341 4830 scope.go:117] "RemoveContainer" containerID="2cb3a31a0f70c63636f6612f5f4db0af59a58ea2ba70496bd2ce84220024d764" Feb 27 17:40:15 crc kubenswrapper[4830]: E0227 17:40:15.173822 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2cb3a31a0f70c63636f6612f5f4db0af59a58ea2ba70496bd2ce84220024d764\": container with ID starting with 2cb3a31a0f70c63636f6612f5f4db0af59a58ea2ba70496bd2ce84220024d764 not found: ID does not exist" containerID="2cb3a31a0f70c63636f6612f5f4db0af59a58ea2ba70496bd2ce84220024d764" Feb 27 17:40:15 crc kubenswrapper[4830]: I0227 17:40:15.173865 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2cb3a31a0f70c63636f6612f5f4db0af59a58ea2ba70496bd2ce84220024d764"} err="failed to get container status \"2cb3a31a0f70c63636f6612f5f4db0af59a58ea2ba70496bd2ce84220024d764\": rpc error: code = NotFound desc = could not find container \"2cb3a31a0f70c63636f6612f5f4db0af59a58ea2ba70496bd2ce84220024d764\": container with ID starting with 2cb3a31a0f70c63636f6612f5f4db0af59a58ea2ba70496bd2ce84220024d764 not found: ID does not exist" Feb 27 17:40:15 crc kubenswrapper[4830]: I0227 17:40:15.173899 4830 scope.go:117] "RemoveContainer" containerID="6ef88a766ae1ff558ba944739e841e92824a9eb83b762dd0484020a9fee6aef5" Feb 27 17:40:15 crc 
kubenswrapper[4830]: E0227 17:40:15.174358 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ef88a766ae1ff558ba944739e841e92824a9eb83b762dd0484020a9fee6aef5\": container with ID starting with 6ef88a766ae1ff558ba944739e841e92824a9eb83b762dd0484020a9fee6aef5 not found: ID does not exist" containerID="6ef88a766ae1ff558ba944739e841e92824a9eb83b762dd0484020a9fee6aef5" Feb 27 17:40:15 crc kubenswrapper[4830]: I0227 17:40:15.174389 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ef88a766ae1ff558ba944739e841e92824a9eb83b762dd0484020a9fee6aef5"} err="failed to get container status \"6ef88a766ae1ff558ba944739e841e92824a9eb83b762dd0484020a9fee6aef5\": rpc error: code = NotFound desc = could not find container \"6ef88a766ae1ff558ba944739e841e92824a9eb83b762dd0484020a9fee6aef5\": container with ID starting with 6ef88a766ae1ff558ba944739e841e92824a9eb83b762dd0484020a9fee6aef5 not found: ID does not exist" Feb 27 17:40:15 crc kubenswrapper[4830]: E0227 17:40:15.642535 4830 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 27 17:40:15 crc kubenswrapper[4830]: E0227 17:40:15.642684 4830 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 17:40:15 crc kubenswrapper[4830]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 27 17:40:15 crc kubenswrapper[4830]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9qb46,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29536900-rmh78_openshift-infra(900b9199-11ea-4332-b62c-81ebc07f20dd): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 27 17:40:15 crc kubenswrapper[4830]: > logger="UnhandledError" Feb 27 17:40:15 crc kubenswrapper[4830]: E0227 17:40:15.643970 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29536900-rmh78" podUID="900b9199-11ea-4332-b62c-81ebc07f20dd" Feb 27 17:40:16 crc kubenswrapper[4830]: I0227 17:40:16.091389 4830 generic.go:334] "Generic (PLEG): container finished" podID="cb46f1e5-ccd7-49e2-9f90-fc2e504dcc27" containerID="1741f85485987bfb4d4d76628430e674b6e549e230ddfafab44f9bad653a361a" exitCode=0 Feb 27 17:40:16 
crc kubenswrapper[4830]: I0227 17:40:16.091546 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-bqrrs" event={"ID":"cb46f1e5-ccd7-49e2-9f90-fc2e504dcc27","Type":"ContainerDied","Data":"1741f85485987bfb4d4d76628430e674b6e549e230ddfafab44f9bad653a361a"} Feb 27 17:40:16 crc kubenswrapper[4830]: I0227 17:40:16.780013 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3831b9ac-f5bb-406b-86a7-9874f56ee18d" path="/var/lib/kubelet/pods/3831b9ac-f5bb-406b-86a7-9874f56ee18d/volumes" Feb 27 17:40:17 crc kubenswrapper[4830]: I0227 17:40:17.473622 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-bqrrs" Feb 27 17:40:17 crc kubenswrapper[4830]: I0227 17:40:17.558690 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2brbh\" (UniqueName: \"kubernetes.io/projected/cb46f1e5-ccd7-49e2-9f90-fc2e504dcc27-kube-api-access-2brbh\") pod \"cb46f1e5-ccd7-49e2-9f90-fc2e504dcc27\" (UID: \"cb46f1e5-ccd7-49e2-9f90-fc2e504dcc27\") " Feb 27 17:40:17 crc kubenswrapper[4830]: I0227 17:40:17.558783 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/cb46f1e5-ccd7-49e2-9f90-fc2e504dcc27-config\") pod \"cb46f1e5-ccd7-49e2-9f90-fc2e504dcc27\" (UID: \"cb46f1e5-ccd7-49e2-9f90-fc2e504dcc27\") " Feb 27 17:40:17 crc kubenswrapper[4830]: I0227 17:40:17.559002 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb46f1e5-ccd7-49e2-9f90-fc2e504dcc27-combined-ca-bundle\") pod \"cb46f1e5-ccd7-49e2-9f90-fc2e504dcc27\" (UID: \"cb46f1e5-ccd7-49e2-9f90-fc2e504dcc27\") " Feb 27 17:40:17 crc kubenswrapper[4830]: I0227 17:40:17.565385 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/cb46f1e5-ccd7-49e2-9f90-fc2e504dcc27-kube-api-access-2brbh" (OuterVolumeSpecName: "kube-api-access-2brbh") pod "cb46f1e5-ccd7-49e2-9f90-fc2e504dcc27" (UID: "cb46f1e5-ccd7-49e2-9f90-fc2e504dcc27"). InnerVolumeSpecName "kube-api-access-2brbh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:40:17 crc kubenswrapper[4830]: I0227 17:40:17.588057 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb46f1e5-ccd7-49e2-9f90-fc2e504dcc27-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cb46f1e5-ccd7-49e2-9f90-fc2e504dcc27" (UID: "cb46f1e5-ccd7-49e2-9f90-fc2e504dcc27"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:40:17 crc kubenswrapper[4830]: I0227 17:40:17.605710 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb46f1e5-ccd7-49e2-9f90-fc2e504dcc27-config" (OuterVolumeSpecName: "config") pod "cb46f1e5-ccd7-49e2-9f90-fc2e504dcc27" (UID: "cb46f1e5-ccd7-49e2-9f90-fc2e504dcc27"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:40:17 crc kubenswrapper[4830]: I0227 17:40:17.661287 4830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb46f1e5-ccd7-49e2-9f90-fc2e504dcc27-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 17:40:17 crc kubenswrapper[4830]: I0227 17:40:17.661632 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2brbh\" (UniqueName: \"kubernetes.io/projected/cb46f1e5-ccd7-49e2-9f90-fc2e504dcc27-kube-api-access-2brbh\") on node \"crc\" DevicePath \"\"" Feb 27 17:40:17 crc kubenswrapper[4830]: I0227 17:40:17.661738 4830 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/cb46f1e5-ccd7-49e2-9f90-fc2e504dcc27-config\") on node \"crc\" DevicePath \"\"" Feb 27 17:40:18 crc kubenswrapper[4830]: I0227 17:40:18.117377 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-bqrrs" event={"ID":"cb46f1e5-ccd7-49e2-9f90-fc2e504dcc27","Type":"ContainerDied","Data":"108585ad29957cc3a298f128c9c95ba8d3fdc269e93098476b343e0919b3da1f"} Feb 27 17:40:18 crc kubenswrapper[4830]: I0227 17:40:18.117427 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="108585ad29957cc3a298f128c9c95ba8d3fdc269e93098476b343e0919b3da1f" Feb 27 17:40:18 crc kubenswrapper[4830]: I0227 17:40:18.117528 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-bqrrs" Feb 27 17:40:18 crc kubenswrapper[4830]: I0227 17:40:18.329070 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6d97fd78cc-58qxt"] Feb 27 17:40:18 crc kubenswrapper[4830]: E0227 17:40:18.329463 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3831b9ac-f5bb-406b-86a7-9874f56ee18d" containerName="extract-content" Feb 27 17:40:18 crc kubenswrapper[4830]: I0227 17:40:18.329480 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="3831b9ac-f5bb-406b-86a7-9874f56ee18d" containerName="extract-content" Feb 27 17:40:18 crc kubenswrapper[4830]: E0227 17:40:18.329502 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb46f1e5-ccd7-49e2-9f90-fc2e504dcc27" containerName="neutron-db-sync" Feb 27 17:40:18 crc kubenswrapper[4830]: I0227 17:40:18.329508 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb46f1e5-ccd7-49e2-9f90-fc2e504dcc27" containerName="neutron-db-sync" Feb 27 17:40:18 crc kubenswrapper[4830]: E0227 17:40:18.329526 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3831b9ac-f5bb-406b-86a7-9874f56ee18d" containerName="extract-utilities" Feb 27 17:40:18 crc kubenswrapper[4830]: I0227 17:40:18.329533 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="3831b9ac-f5bb-406b-86a7-9874f56ee18d" containerName="extract-utilities" Feb 27 17:40:18 crc kubenswrapper[4830]: E0227 17:40:18.329544 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3831b9ac-f5bb-406b-86a7-9874f56ee18d" containerName="registry-server" Feb 27 17:40:18 crc kubenswrapper[4830]: I0227 17:40:18.329549 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="3831b9ac-f5bb-406b-86a7-9874f56ee18d" containerName="registry-server" Feb 27 17:40:18 crc kubenswrapper[4830]: I0227 17:40:18.329710 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb46f1e5-ccd7-49e2-9f90-fc2e504dcc27" 
containerName="neutron-db-sync" Feb 27 17:40:18 crc kubenswrapper[4830]: I0227 17:40:18.329725 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="3831b9ac-f5bb-406b-86a7-9874f56ee18d" containerName="registry-server" Feb 27 17:40:18 crc kubenswrapper[4830]: I0227 17:40:18.330644 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d97fd78cc-58qxt" Feb 27 17:40:18 crc kubenswrapper[4830]: I0227 17:40:18.359798 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d97fd78cc-58qxt"] Feb 27 17:40:18 crc kubenswrapper[4830]: I0227 17:40:18.376482 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0779d5aa-90c7-4495-b109-f57586a59f70-dns-svc\") pod \"dnsmasq-dns-6d97fd78cc-58qxt\" (UID: \"0779d5aa-90c7-4495-b109-f57586a59f70\") " pod="openstack/dnsmasq-dns-6d97fd78cc-58qxt" Feb 27 17:40:18 crc kubenswrapper[4830]: I0227 17:40:18.376549 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0779d5aa-90c7-4495-b109-f57586a59f70-ovsdbserver-nb\") pod \"dnsmasq-dns-6d97fd78cc-58qxt\" (UID: \"0779d5aa-90c7-4495-b109-f57586a59f70\") " pod="openstack/dnsmasq-dns-6d97fd78cc-58qxt" Feb 27 17:40:18 crc kubenswrapper[4830]: I0227 17:40:18.376588 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7tq97\" (UniqueName: \"kubernetes.io/projected/0779d5aa-90c7-4495-b109-f57586a59f70-kube-api-access-7tq97\") pod \"dnsmasq-dns-6d97fd78cc-58qxt\" (UID: \"0779d5aa-90c7-4495-b109-f57586a59f70\") " pod="openstack/dnsmasq-dns-6d97fd78cc-58qxt" Feb 27 17:40:18 crc kubenswrapper[4830]: I0227 17:40:18.376606 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/0779d5aa-90c7-4495-b109-f57586a59f70-ovsdbserver-sb\") pod \"dnsmasq-dns-6d97fd78cc-58qxt\" (UID: \"0779d5aa-90c7-4495-b109-f57586a59f70\") " pod="openstack/dnsmasq-dns-6d97fd78cc-58qxt" Feb 27 17:40:18 crc kubenswrapper[4830]: I0227 17:40:18.376642 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0779d5aa-90c7-4495-b109-f57586a59f70-config\") pod \"dnsmasq-dns-6d97fd78cc-58qxt\" (UID: \"0779d5aa-90c7-4495-b109-f57586a59f70\") " pod="openstack/dnsmasq-dns-6d97fd78cc-58qxt" Feb 27 17:40:18 crc kubenswrapper[4830]: I0227 17:40:18.429293 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-cbb7cdb9f-mhl2g"] Feb 27 17:40:18 crc kubenswrapper[4830]: I0227 17:40:18.431304 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-cbb7cdb9f-mhl2g" Feb 27 17:40:18 crc kubenswrapper[4830]: I0227 17:40:18.439125 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-57n2d" Feb 27 17:40:18 crc kubenswrapper[4830]: I0227 17:40:18.439310 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 27 17:40:18 crc kubenswrapper[4830]: I0227 17:40:18.439445 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 27 17:40:18 crc kubenswrapper[4830]: I0227 17:40:18.469305 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-cbb7cdb9f-mhl2g"] Feb 27 17:40:18 crc kubenswrapper[4830]: I0227 17:40:18.478102 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0779d5aa-90c7-4495-b109-f57586a59f70-dns-svc\") pod \"dnsmasq-dns-6d97fd78cc-58qxt\" (UID: \"0779d5aa-90c7-4495-b109-f57586a59f70\") " pod="openstack/dnsmasq-dns-6d97fd78cc-58qxt" Feb 27 
17:40:18 crc kubenswrapper[4830]: I0227 17:40:18.478161 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0779d5aa-90c7-4495-b109-f57586a59f70-ovsdbserver-nb\") pod \"dnsmasq-dns-6d97fd78cc-58qxt\" (UID: \"0779d5aa-90c7-4495-b109-f57586a59f70\") " pod="openstack/dnsmasq-dns-6d97fd78cc-58qxt" Feb 27 17:40:18 crc kubenswrapper[4830]: I0227 17:40:18.478198 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/2f64f8e1-a586-468f-a64d-18ea603f34c2-httpd-config\") pod \"neutron-cbb7cdb9f-mhl2g\" (UID: \"2f64f8e1-a586-468f-a64d-18ea603f34c2\") " pod="openstack/neutron-cbb7cdb9f-mhl2g" Feb 27 17:40:18 crc kubenswrapper[4830]: I0227 17:40:18.478224 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7tq97\" (UniqueName: \"kubernetes.io/projected/0779d5aa-90c7-4495-b109-f57586a59f70-kube-api-access-7tq97\") pod \"dnsmasq-dns-6d97fd78cc-58qxt\" (UID: \"0779d5aa-90c7-4495-b109-f57586a59f70\") " pod="openstack/dnsmasq-dns-6d97fd78cc-58qxt" Feb 27 17:40:18 crc kubenswrapper[4830]: I0227 17:40:18.478257 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0779d5aa-90c7-4495-b109-f57586a59f70-ovsdbserver-sb\") pod \"dnsmasq-dns-6d97fd78cc-58qxt\" (UID: \"0779d5aa-90c7-4495-b109-f57586a59f70\") " pod="openstack/dnsmasq-dns-6d97fd78cc-58qxt" Feb 27 17:40:18 crc kubenswrapper[4830]: I0227 17:40:18.478277 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2f64f8e1-a586-468f-a64d-18ea603f34c2-config\") pod \"neutron-cbb7cdb9f-mhl2g\" (UID: \"2f64f8e1-a586-468f-a64d-18ea603f34c2\") " pod="openstack/neutron-cbb7cdb9f-mhl2g" Feb 27 17:40:18 crc kubenswrapper[4830]: I0227 
17:40:18.478308 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0779d5aa-90c7-4495-b109-f57586a59f70-config\") pod \"dnsmasq-dns-6d97fd78cc-58qxt\" (UID: \"0779d5aa-90c7-4495-b109-f57586a59f70\") " pod="openstack/dnsmasq-dns-6d97fd78cc-58qxt" Feb 27 17:40:18 crc kubenswrapper[4830]: I0227 17:40:18.478328 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f64f8e1-a586-468f-a64d-18ea603f34c2-combined-ca-bundle\") pod \"neutron-cbb7cdb9f-mhl2g\" (UID: \"2f64f8e1-a586-468f-a64d-18ea603f34c2\") " pod="openstack/neutron-cbb7cdb9f-mhl2g" Feb 27 17:40:18 crc kubenswrapper[4830]: I0227 17:40:18.478346 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zf4th\" (UniqueName: \"kubernetes.io/projected/2f64f8e1-a586-468f-a64d-18ea603f34c2-kube-api-access-zf4th\") pod \"neutron-cbb7cdb9f-mhl2g\" (UID: \"2f64f8e1-a586-468f-a64d-18ea603f34c2\") " pod="openstack/neutron-cbb7cdb9f-mhl2g" Feb 27 17:40:18 crc kubenswrapper[4830]: I0227 17:40:18.479178 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0779d5aa-90c7-4495-b109-f57586a59f70-dns-svc\") pod \"dnsmasq-dns-6d97fd78cc-58qxt\" (UID: \"0779d5aa-90c7-4495-b109-f57586a59f70\") " pod="openstack/dnsmasq-dns-6d97fd78cc-58qxt" Feb 27 17:40:18 crc kubenswrapper[4830]: I0227 17:40:18.479188 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0779d5aa-90c7-4495-b109-f57586a59f70-ovsdbserver-nb\") pod \"dnsmasq-dns-6d97fd78cc-58qxt\" (UID: \"0779d5aa-90c7-4495-b109-f57586a59f70\") " pod="openstack/dnsmasq-dns-6d97fd78cc-58qxt" Feb 27 17:40:18 crc kubenswrapper[4830]: I0227 17:40:18.479810 4830 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0779d5aa-90c7-4495-b109-f57586a59f70-ovsdbserver-sb\") pod \"dnsmasq-dns-6d97fd78cc-58qxt\" (UID: \"0779d5aa-90c7-4495-b109-f57586a59f70\") " pod="openstack/dnsmasq-dns-6d97fd78cc-58qxt" Feb 27 17:40:18 crc kubenswrapper[4830]: I0227 17:40:18.479958 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0779d5aa-90c7-4495-b109-f57586a59f70-config\") pod \"dnsmasq-dns-6d97fd78cc-58qxt\" (UID: \"0779d5aa-90c7-4495-b109-f57586a59f70\") " pod="openstack/dnsmasq-dns-6d97fd78cc-58qxt" Feb 27 17:40:18 crc kubenswrapper[4830]: I0227 17:40:18.517856 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7tq97\" (UniqueName: \"kubernetes.io/projected/0779d5aa-90c7-4495-b109-f57586a59f70-kube-api-access-7tq97\") pod \"dnsmasq-dns-6d97fd78cc-58qxt\" (UID: \"0779d5aa-90c7-4495-b109-f57586a59f70\") " pod="openstack/dnsmasq-dns-6d97fd78cc-58qxt" Feb 27 17:40:18 crc kubenswrapper[4830]: I0227 17:40:18.579746 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f64f8e1-a586-468f-a64d-18ea603f34c2-combined-ca-bundle\") pod \"neutron-cbb7cdb9f-mhl2g\" (UID: \"2f64f8e1-a586-468f-a64d-18ea603f34c2\") " pod="openstack/neutron-cbb7cdb9f-mhl2g" Feb 27 17:40:18 crc kubenswrapper[4830]: I0227 17:40:18.580244 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zf4th\" (UniqueName: \"kubernetes.io/projected/2f64f8e1-a586-468f-a64d-18ea603f34c2-kube-api-access-zf4th\") pod \"neutron-cbb7cdb9f-mhl2g\" (UID: \"2f64f8e1-a586-468f-a64d-18ea603f34c2\") " pod="openstack/neutron-cbb7cdb9f-mhl2g" Feb 27 17:40:18 crc kubenswrapper[4830]: I0227 17:40:18.580432 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: 
\"kubernetes.io/secret/2f64f8e1-a586-468f-a64d-18ea603f34c2-httpd-config\") pod \"neutron-cbb7cdb9f-mhl2g\" (UID: \"2f64f8e1-a586-468f-a64d-18ea603f34c2\") " pod="openstack/neutron-cbb7cdb9f-mhl2g" Feb 27 17:40:18 crc kubenswrapper[4830]: I0227 17:40:18.580471 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2f64f8e1-a586-468f-a64d-18ea603f34c2-config\") pod \"neutron-cbb7cdb9f-mhl2g\" (UID: \"2f64f8e1-a586-468f-a64d-18ea603f34c2\") " pod="openstack/neutron-cbb7cdb9f-mhl2g" Feb 27 17:40:18 crc kubenswrapper[4830]: I0227 17:40:18.584333 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f64f8e1-a586-468f-a64d-18ea603f34c2-combined-ca-bundle\") pod \"neutron-cbb7cdb9f-mhl2g\" (UID: \"2f64f8e1-a586-468f-a64d-18ea603f34c2\") " pod="openstack/neutron-cbb7cdb9f-mhl2g" Feb 27 17:40:18 crc kubenswrapper[4830]: I0227 17:40:18.586601 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/2f64f8e1-a586-468f-a64d-18ea603f34c2-httpd-config\") pod \"neutron-cbb7cdb9f-mhl2g\" (UID: \"2f64f8e1-a586-468f-a64d-18ea603f34c2\") " pod="openstack/neutron-cbb7cdb9f-mhl2g" Feb 27 17:40:18 crc kubenswrapper[4830]: I0227 17:40:18.587216 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/2f64f8e1-a586-468f-a64d-18ea603f34c2-config\") pod \"neutron-cbb7cdb9f-mhl2g\" (UID: \"2f64f8e1-a586-468f-a64d-18ea603f34c2\") " pod="openstack/neutron-cbb7cdb9f-mhl2g" Feb 27 17:40:18 crc kubenswrapper[4830]: I0227 17:40:18.600708 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zf4th\" (UniqueName: \"kubernetes.io/projected/2f64f8e1-a586-468f-a64d-18ea603f34c2-kube-api-access-zf4th\") pod \"neutron-cbb7cdb9f-mhl2g\" (UID: \"2f64f8e1-a586-468f-a64d-18ea603f34c2\") " 
pod="openstack/neutron-cbb7cdb9f-mhl2g" Feb 27 17:40:18 crc kubenswrapper[4830]: I0227 17:40:18.651908 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d97fd78cc-58qxt" Feb 27 17:40:18 crc kubenswrapper[4830]: I0227 17:40:18.758367 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-cbb7cdb9f-mhl2g" Feb 27 17:40:19 crc kubenswrapper[4830]: W0227 17:40:19.136873 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0779d5aa_90c7_4495_b109_f57586a59f70.slice/crio-a39bd8c8cbe15eaa1a33cb85b4e1c790a016b9d5bcbde19e5d9a50852c446b4a WatchSource:0}: Error finding container a39bd8c8cbe15eaa1a33cb85b4e1c790a016b9d5bcbde19e5d9a50852c446b4a: Status 404 returned error can't find the container with id a39bd8c8cbe15eaa1a33cb85b4e1c790a016b9d5bcbde19e5d9a50852c446b4a Feb 27 17:40:19 crc kubenswrapper[4830]: I0227 17:40:19.139511 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d97fd78cc-58qxt"] Feb 27 17:40:19 crc kubenswrapper[4830]: I0227 17:40:19.381180 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-cbb7cdb9f-mhl2g"] Feb 27 17:40:19 crc kubenswrapper[4830]: W0227 17:40:19.384185 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2f64f8e1_a586_468f_a64d_18ea603f34c2.slice/crio-19806fd7ea6132013c0bc0cb7ee5feb7798301215fd11548ecd0bd86071eb902 WatchSource:0}: Error finding container 19806fd7ea6132013c0bc0cb7ee5feb7798301215fd11548ecd0bd86071eb902: Status 404 returned error can't find the container with id 19806fd7ea6132013c0bc0cb7ee5feb7798301215fd11548ecd0bd86071eb902 Feb 27 17:40:20 crc kubenswrapper[4830]: I0227 17:40:20.136313 4830 generic.go:334] "Generic (PLEG): container finished" podID="0779d5aa-90c7-4495-b109-f57586a59f70" 
containerID="3d1ba307e4f49e28ff6a625b72b1b7ddb45b91af5cb3869cde497f1824680d24" exitCode=0 Feb 27 17:40:20 crc kubenswrapper[4830]: I0227 17:40:20.136361 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d97fd78cc-58qxt" event={"ID":"0779d5aa-90c7-4495-b109-f57586a59f70","Type":"ContainerDied","Data":"3d1ba307e4f49e28ff6a625b72b1b7ddb45b91af5cb3869cde497f1824680d24"} Feb 27 17:40:20 crc kubenswrapper[4830]: I0227 17:40:20.136731 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d97fd78cc-58qxt" event={"ID":"0779d5aa-90c7-4495-b109-f57586a59f70","Type":"ContainerStarted","Data":"a39bd8c8cbe15eaa1a33cb85b4e1c790a016b9d5bcbde19e5d9a50852c446b4a"} Feb 27 17:40:20 crc kubenswrapper[4830]: I0227 17:40:20.139028 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-cbb7cdb9f-mhl2g" event={"ID":"2f64f8e1-a586-468f-a64d-18ea603f34c2","Type":"ContainerStarted","Data":"d3bab2a7e276c9604ce92bda7c6c621d57647d8ab825284a6236b8864d950475"} Feb 27 17:40:20 crc kubenswrapper[4830]: I0227 17:40:20.139079 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-cbb7cdb9f-mhl2g" event={"ID":"2f64f8e1-a586-468f-a64d-18ea603f34c2","Type":"ContainerStarted","Data":"c7eb67b0e00ba6a3d6dcc6d9e372aeaa3d1a6fc620f493867b3885fde64f9dec"} Feb 27 17:40:20 crc kubenswrapper[4830]: I0227 17:40:20.139096 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-cbb7cdb9f-mhl2g" event={"ID":"2f64f8e1-a586-468f-a64d-18ea603f34c2","Type":"ContainerStarted","Data":"19806fd7ea6132013c0bc0cb7ee5feb7798301215fd11548ecd0bd86071eb902"} Feb 27 17:40:20 crc kubenswrapper[4830]: I0227 17:40:20.139182 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-cbb7cdb9f-mhl2g" Feb 27 17:40:20 crc kubenswrapper[4830]: I0227 17:40:20.185292 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-cbb7cdb9f-mhl2g" 
podStartSLOduration=2.185274769 podStartE2EDuration="2.185274769s" podCreationTimestamp="2026-02-27 17:40:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 17:40:20.183370993 +0000 UTC m=+5616.272643456" watchObservedRunningTime="2026-02-27 17:40:20.185274769 +0000 UTC m=+5616.274547232" Feb 27 17:40:20 crc kubenswrapper[4830]: E0227 17:40:20.764077 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536898-vrwjs" podUID="204eb1af-36ad-4de7-9da7-9a37fefd3026" Feb 27 17:40:21 crc kubenswrapper[4830]: I0227 17:40:21.161800 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d97fd78cc-58qxt" event={"ID":"0779d5aa-90c7-4495-b109-f57586a59f70","Type":"ContainerStarted","Data":"43e35f4517a7f0252050c0fcc312afd2d6c8e1bc5a9f2ff417ec31f1c34a51f9"} Feb 27 17:40:21 crc kubenswrapper[4830]: I0227 17:40:21.161900 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6d97fd78cc-58qxt" Feb 27 17:40:21 crc kubenswrapper[4830]: I0227 17:40:21.182267 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6d97fd78cc-58qxt" podStartSLOduration=3.182247018 podStartE2EDuration="3.182247018s" podCreationTimestamp="2026-02-27 17:40:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 17:40:21.182027282 +0000 UTC m=+5617.271299745" watchObservedRunningTime="2026-02-27 17:40:21.182247018 +0000 UTC m=+5617.271519481" Feb 27 17:40:26 crc kubenswrapper[4830]: E0227 17:40:26.765789 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with 
ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536900-rmh78" podUID="900b9199-11ea-4332-b62c-81ebc07f20dd" Feb 27 17:40:27 crc kubenswrapper[4830]: E0227 17:40:27.763253 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-gvnvz" podUID="f1c73a78-1e95-4481-a273-ba7e3b5a127c" Feb 27 17:40:28 crc kubenswrapper[4830]: I0227 17:40:28.654226 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6d97fd78cc-58qxt" Feb 27 17:40:28 crc kubenswrapper[4830]: I0227 17:40:28.727812 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6874bf8c6f-lpnwz"] Feb 27 17:40:28 crc kubenswrapper[4830]: I0227 17:40:28.728136 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6874bf8c6f-lpnwz" podUID="f91775a7-c80a-4262-ad8a-912d9f1b1da8" containerName="dnsmasq-dns" containerID="cri-o://3ed57176e05eab0df493d59b2eb579edae3360ab2f3a539695e07ff20ed1e889" gracePeriod=10 Feb 27 17:40:29 crc kubenswrapper[4830]: I0227 17:40:29.240568 4830 generic.go:334] "Generic (PLEG): container finished" podID="f91775a7-c80a-4262-ad8a-912d9f1b1da8" containerID="3ed57176e05eab0df493d59b2eb579edae3360ab2f3a539695e07ff20ed1e889" exitCode=0 Feb 27 17:40:29 crc kubenswrapper[4830]: I0227 17:40:29.240630 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6874bf8c6f-lpnwz" event={"ID":"f91775a7-c80a-4262-ad8a-912d9f1b1da8","Type":"ContainerDied","Data":"3ed57176e05eab0df493d59b2eb579edae3360ab2f3a539695e07ff20ed1e889"} Feb 27 17:40:29 crc kubenswrapper[4830]: I0227 17:40:29.240664 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-6874bf8c6f-lpnwz" event={"ID":"f91775a7-c80a-4262-ad8a-912d9f1b1da8","Type":"ContainerDied","Data":"fca611bfd4e17f90912fd74e7eb05da1937e7f38bde6153e758fb57fafb788be"} Feb 27 17:40:29 crc kubenswrapper[4830]: I0227 17:40:29.240679 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fca611bfd4e17f90912fd74e7eb05da1937e7f38bde6153e758fb57fafb788be" Feb 27 17:40:29 crc kubenswrapper[4830]: I0227 17:40:29.275902 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6874bf8c6f-lpnwz" Feb 27 17:40:29 crc kubenswrapper[4830]: I0227 17:40:29.383484 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f91775a7-c80a-4262-ad8a-912d9f1b1da8-config\") pod \"f91775a7-c80a-4262-ad8a-912d9f1b1da8\" (UID: \"f91775a7-c80a-4262-ad8a-912d9f1b1da8\") " Feb 27 17:40:29 crc kubenswrapper[4830]: I0227 17:40:29.383685 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f91775a7-c80a-4262-ad8a-912d9f1b1da8-ovsdbserver-sb\") pod \"f91775a7-c80a-4262-ad8a-912d9f1b1da8\" (UID: \"f91775a7-c80a-4262-ad8a-912d9f1b1da8\") " Feb 27 17:40:29 crc kubenswrapper[4830]: I0227 17:40:29.383764 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5tf46\" (UniqueName: \"kubernetes.io/projected/f91775a7-c80a-4262-ad8a-912d9f1b1da8-kube-api-access-5tf46\") pod \"f91775a7-c80a-4262-ad8a-912d9f1b1da8\" (UID: \"f91775a7-c80a-4262-ad8a-912d9f1b1da8\") " Feb 27 17:40:29 crc kubenswrapper[4830]: I0227 17:40:29.383797 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f91775a7-c80a-4262-ad8a-912d9f1b1da8-dns-svc\") pod \"f91775a7-c80a-4262-ad8a-912d9f1b1da8\" (UID: \"f91775a7-c80a-4262-ad8a-912d9f1b1da8\") 
" Feb 27 17:40:29 crc kubenswrapper[4830]: I0227 17:40:29.383831 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f91775a7-c80a-4262-ad8a-912d9f1b1da8-ovsdbserver-nb\") pod \"f91775a7-c80a-4262-ad8a-912d9f1b1da8\" (UID: \"f91775a7-c80a-4262-ad8a-912d9f1b1da8\") " Feb 27 17:40:29 crc kubenswrapper[4830]: I0227 17:40:29.419408 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f91775a7-c80a-4262-ad8a-912d9f1b1da8-kube-api-access-5tf46" (OuterVolumeSpecName: "kube-api-access-5tf46") pod "f91775a7-c80a-4262-ad8a-912d9f1b1da8" (UID: "f91775a7-c80a-4262-ad8a-912d9f1b1da8"). InnerVolumeSpecName "kube-api-access-5tf46". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:40:29 crc kubenswrapper[4830]: I0227 17:40:29.439353 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f91775a7-c80a-4262-ad8a-912d9f1b1da8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f91775a7-c80a-4262-ad8a-912d9f1b1da8" (UID: "f91775a7-c80a-4262-ad8a-912d9f1b1da8"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:40:29 crc kubenswrapper[4830]: I0227 17:40:29.439977 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f91775a7-c80a-4262-ad8a-912d9f1b1da8-config" (OuterVolumeSpecName: "config") pod "f91775a7-c80a-4262-ad8a-912d9f1b1da8" (UID: "f91775a7-c80a-4262-ad8a-912d9f1b1da8"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:40:29 crc kubenswrapper[4830]: I0227 17:40:29.443438 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f91775a7-c80a-4262-ad8a-912d9f1b1da8-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f91775a7-c80a-4262-ad8a-912d9f1b1da8" (UID: "f91775a7-c80a-4262-ad8a-912d9f1b1da8"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:40:29 crc kubenswrapper[4830]: I0227 17:40:29.465215 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f91775a7-c80a-4262-ad8a-912d9f1b1da8-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f91775a7-c80a-4262-ad8a-912d9f1b1da8" (UID: "f91775a7-c80a-4262-ad8a-912d9f1b1da8"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:40:29 crc kubenswrapper[4830]: I0227 17:40:29.487531 4830 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f91775a7-c80a-4262-ad8a-912d9f1b1da8-config\") on node \"crc\" DevicePath \"\"" Feb 27 17:40:29 crc kubenswrapper[4830]: I0227 17:40:29.487571 4830 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f91775a7-c80a-4262-ad8a-912d9f1b1da8-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 27 17:40:29 crc kubenswrapper[4830]: I0227 17:40:29.487591 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5tf46\" (UniqueName: \"kubernetes.io/projected/f91775a7-c80a-4262-ad8a-912d9f1b1da8-kube-api-access-5tf46\") on node \"crc\" DevicePath \"\"" Feb 27 17:40:29 crc kubenswrapper[4830]: I0227 17:40:29.487604 4830 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f91775a7-c80a-4262-ad8a-912d9f1b1da8-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 
27 17:40:29 crc kubenswrapper[4830]: I0227 17:40:29.487616 4830 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f91775a7-c80a-4262-ad8a-912d9f1b1da8-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 27 17:40:30 crc kubenswrapper[4830]: I0227 17:40:30.249408 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6874bf8c6f-lpnwz" Feb 27 17:40:30 crc kubenswrapper[4830]: I0227 17:40:30.308642 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6874bf8c6f-lpnwz"] Feb 27 17:40:30 crc kubenswrapper[4830]: I0227 17:40:30.318289 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6874bf8c6f-lpnwz"] Feb 27 17:40:31 crc kubenswrapper[4830]: I0227 17:40:31.071872 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f91775a7-c80a-4262-ad8a-912d9f1b1da8" path="/var/lib/kubelet/pods/f91775a7-c80a-4262-ad8a-912d9f1b1da8/volumes" Feb 27 17:40:31 crc kubenswrapper[4830]: E0227 17:40:31.765926 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536898-vrwjs" podUID="204eb1af-36ad-4de7-9da7-9a37fefd3026" Feb 27 17:40:39 crc kubenswrapper[4830]: E0227 17:40:39.505703 4830 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-marketplace-index@sha256=e848a00af7690cfa41500b98e0e7a0b9738ce0af7b6b4fee3ea20e0838523c30/signature-2: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 27 17:40:39 crc kubenswrapper[4830]: E0227 17:40:39.506709 4830 
kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m48cm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-gvnvz_openshift-marketplace(f1c73a78-1e95-4481-a273-ba7e3b5a127c): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-marketplace-index@sha256=e848a00af7690cfa41500b98e0e7a0b9738ce0af7b6b4fee3ea20e0838523c30/signature-2: status 500 (Internal Server 
Error)" logger="UnhandledError" Feb 27 17:40:39 crc kubenswrapper[4830]: E0227 17:40:39.508040 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-marketplace-index@sha256=e848a00af7690cfa41500b98e0e7a0b9738ce0af7b6b4fee3ea20e0838523c30/signature-2: status 500 (Internal Server Error)\"" pod="openshift-marketplace/redhat-marketplace-gvnvz" podUID="f1c73a78-1e95-4481-a273-ba7e3b5a127c" Feb 27 17:40:43 crc kubenswrapper[4830]: I0227 17:40:43.440391 4830 generic.go:334] "Generic (PLEG): container finished" podID="900b9199-11ea-4332-b62c-81ebc07f20dd" containerID="c2817d8d312078614557cfe74230997c282b244e775ab91873dc6592eb036f19" exitCode=0 Feb 27 17:40:43 crc kubenswrapper[4830]: I0227 17:40:43.441152 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536900-rmh78" event={"ID":"900b9199-11ea-4332-b62c-81ebc07f20dd","Type":"ContainerDied","Data":"c2817d8d312078614557cfe74230997c282b244e775ab91873dc6592eb036f19"} Feb 27 17:40:44 crc kubenswrapper[4830]: I0227 17:40:44.834650 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536900-rmh78" Feb 27 17:40:44 crc kubenswrapper[4830]: I0227 17:40:44.947407 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9qb46\" (UniqueName: \"kubernetes.io/projected/900b9199-11ea-4332-b62c-81ebc07f20dd-kube-api-access-9qb46\") pod \"900b9199-11ea-4332-b62c-81ebc07f20dd\" (UID: \"900b9199-11ea-4332-b62c-81ebc07f20dd\") " Feb 27 17:40:44 crc kubenswrapper[4830]: I0227 17:40:44.957411 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/900b9199-11ea-4332-b62c-81ebc07f20dd-kube-api-access-9qb46" (OuterVolumeSpecName: "kube-api-access-9qb46") pod "900b9199-11ea-4332-b62c-81ebc07f20dd" (UID: "900b9199-11ea-4332-b62c-81ebc07f20dd"). InnerVolumeSpecName "kube-api-access-9qb46". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:40:45 crc kubenswrapper[4830]: I0227 17:40:45.050562 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9qb46\" (UniqueName: \"kubernetes.io/projected/900b9199-11ea-4332-b62c-81ebc07f20dd-kube-api-access-9qb46\") on node \"crc\" DevicePath \"\"" Feb 27 17:40:45 crc kubenswrapper[4830]: I0227 17:40:45.470579 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536900-rmh78" event={"ID":"900b9199-11ea-4332-b62c-81ebc07f20dd","Type":"ContainerDied","Data":"8cafb44ecb128e786411883275aa63d942df2ebe21a8c1541621797b76f94052"} Feb 27 17:40:45 crc kubenswrapper[4830]: I0227 17:40:45.470646 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8cafb44ecb128e786411883275aa63d942df2ebe21a8c1541621797b76f94052" Feb 27 17:40:45 crc kubenswrapper[4830]: I0227 17:40:45.470698 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536900-rmh78" Feb 27 17:40:45 crc kubenswrapper[4830]: E0227 17:40:45.769246 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536898-vrwjs" podUID="204eb1af-36ad-4de7-9da7-9a37fefd3026" Feb 27 17:40:45 crc kubenswrapper[4830]: I0227 17:40:45.940257 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536892-6fnr5"] Feb 27 17:40:45 crc kubenswrapper[4830]: I0227 17:40:45.955063 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536892-6fnr5"] Feb 27 17:40:46 crc kubenswrapper[4830]: I0227 17:40:46.782346 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b7ee878-6086-45a4-a46c-ba5aa7f2d79f" path="/var/lib/kubelet/pods/1b7ee878-6086-45a4-a46c-ba5aa7f2d79f/volumes" Feb 27 17:40:48 crc kubenswrapper[4830]: I0227 17:40:48.775383 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-cbb7cdb9f-mhl2g" Feb 27 17:40:49 crc kubenswrapper[4830]: I0227 17:40:49.101882 4830 scope.go:117] "RemoveContainer" containerID="25164afdd22e06077501b45e41e095380202757a71cdd6bc94d864b4c7eb0a49" Feb 27 17:40:53 crc kubenswrapper[4830]: E0227 17:40:53.766114 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-gvnvz" podUID="f1c73a78-1e95-4481-a273-ba7e3b5a127c" Feb 27 17:40:56 crc kubenswrapper[4830]: I0227 17:40:56.880050 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-7fjgs"] Feb 27 17:40:56 crc kubenswrapper[4830]: E0227 
17:40:56.880719 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f91775a7-c80a-4262-ad8a-912d9f1b1da8" containerName="dnsmasq-dns" Feb 27 17:40:56 crc kubenswrapper[4830]: I0227 17:40:56.880734 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="f91775a7-c80a-4262-ad8a-912d9f1b1da8" containerName="dnsmasq-dns" Feb 27 17:40:56 crc kubenswrapper[4830]: E0227 17:40:56.880752 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="900b9199-11ea-4332-b62c-81ebc07f20dd" containerName="oc" Feb 27 17:40:56 crc kubenswrapper[4830]: I0227 17:40:56.880758 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="900b9199-11ea-4332-b62c-81ebc07f20dd" containerName="oc" Feb 27 17:40:56 crc kubenswrapper[4830]: E0227 17:40:56.880782 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f91775a7-c80a-4262-ad8a-912d9f1b1da8" containerName="init" Feb 27 17:40:56 crc kubenswrapper[4830]: I0227 17:40:56.880788 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="f91775a7-c80a-4262-ad8a-912d9f1b1da8" containerName="init" Feb 27 17:40:56 crc kubenswrapper[4830]: I0227 17:40:56.880935 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="900b9199-11ea-4332-b62c-81ebc07f20dd" containerName="oc" Feb 27 17:40:56 crc kubenswrapper[4830]: I0227 17:40:56.880994 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="f91775a7-c80a-4262-ad8a-912d9f1b1da8" containerName="dnsmasq-dns" Feb 27 17:40:56 crc kubenswrapper[4830]: I0227 17:40:56.881568 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-7fjgs" Feb 27 17:40:56 crc kubenswrapper[4830]: I0227 17:40:56.896462 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-7fjgs"] Feb 27 17:40:56 crc kubenswrapper[4830]: I0227 17:40:56.921933 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jppdl\" (UniqueName: \"kubernetes.io/projected/c1507562-13d1-412c-ace5-6598ce757fdd-kube-api-access-jppdl\") pod \"glance-db-create-7fjgs\" (UID: \"c1507562-13d1-412c-ace5-6598ce757fdd\") " pod="openstack/glance-db-create-7fjgs" Feb 27 17:40:56 crc kubenswrapper[4830]: I0227 17:40:56.922344 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c1507562-13d1-412c-ace5-6598ce757fdd-operator-scripts\") pod \"glance-db-create-7fjgs\" (UID: \"c1507562-13d1-412c-ace5-6598ce757fdd\") " pod="openstack/glance-db-create-7fjgs" Feb 27 17:40:56 crc kubenswrapper[4830]: I0227 17:40:56.975099 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-8348-account-create-update-kh6fw"] Feb 27 17:40:56 crc kubenswrapper[4830]: I0227 17:40:56.976181 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-8348-account-create-update-kh6fw" Feb 27 17:40:56 crc kubenswrapper[4830]: I0227 17:40:56.978256 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Feb 27 17:40:56 crc kubenswrapper[4830]: I0227 17:40:56.995227 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-8348-account-create-update-kh6fw"] Feb 27 17:40:57 crc kubenswrapper[4830]: I0227 17:40:57.025297 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jppdl\" (UniqueName: \"kubernetes.io/projected/c1507562-13d1-412c-ace5-6598ce757fdd-kube-api-access-jppdl\") pod \"glance-db-create-7fjgs\" (UID: \"c1507562-13d1-412c-ace5-6598ce757fdd\") " pod="openstack/glance-db-create-7fjgs" Feb 27 17:40:57 crc kubenswrapper[4830]: I0227 17:40:57.026037 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c1507562-13d1-412c-ace5-6598ce757fdd-operator-scripts\") pod \"glance-db-create-7fjgs\" (UID: \"c1507562-13d1-412c-ace5-6598ce757fdd\") " pod="openstack/glance-db-create-7fjgs" Feb 27 17:40:57 crc kubenswrapper[4830]: I0227 17:40:57.027035 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/603ffaad-dc9f-4434-ab69-7d7f0b818991-operator-scripts\") pod \"glance-8348-account-create-update-kh6fw\" (UID: \"603ffaad-dc9f-4434-ab69-7d7f0b818991\") " pod="openstack/glance-8348-account-create-update-kh6fw" Feb 27 17:40:57 crc kubenswrapper[4830]: I0227 17:40:57.027190 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbs9m\" (UniqueName: \"kubernetes.io/projected/603ffaad-dc9f-4434-ab69-7d7f0b818991-kube-api-access-hbs9m\") pod \"glance-8348-account-create-update-kh6fw\" (UID: 
\"603ffaad-dc9f-4434-ab69-7d7f0b818991\") " pod="openstack/glance-8348-account-create-update-kh6fw" Feb 27 17:40:57 crc kubenswrapper[4830]: I0227 17:40:57.026961 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c1507562-13d1-412c-ace5-6598ce757fdd-operator-scripts\") pod \"glance-db-create-7fjgs\" (UID: \"c1507562-13d1-412c-ace5-6598ce757fdd\") " pod="openstack/glance-db-create-7fjgs" Feb 27 17:40:57 crc kubenswrapper[4830]: I0227 17:40:57.049608 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jppdl\" (UniqueName: \"kubernetes.io/projected/c1507562-13d1-412c-ace5-6598ce757fdd-kube-api-access-jppdl\") pod \"glance-db-create-7fjgs\" (UID: \"c1507562-13d1-412c-ace5-6598ce757fdd\") " pod="openstack/glance-db-create-7fjgs" Feb 27 17:40:57 crc kubenswrapper[4830]: I0227 17:40:57.129010 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/603ffaad-dc9f-4434-ab69-7d7f0b818991-operator-scripts\") pod \"glance-8348-account-create-update-kh6fw\" (UID: \"603ffaad-dc9f-4434-ab69-7d7f0b818991\") " pod="openstack/glance-8348-account-create-update-kh6fw" Feb 27 17:40:57 crc kubenswrapper[4830]: I0227 17:40:57.129392 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbs9m\" (UniqueName: \"kubernetes.io/projected/603ffaad-dc9f-4434-ab69-7d7f0b818991-kube-api-access-hbs9m\") pod \"glance-8348-account-create-update-kh6fw\" (UID: \"603ffaad-dc9f-4434-ab69-7d7f0b818991\") " pod="openstack/glance-8348-account-create-update-kh6fw" Feb 27 17:40:57 crc kubenswrapper[4830]: I0227 17:40:57.129872 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/603ffaad-dc9f-4434-ab69-7d7f0b818991-operator-scripts\") pod \"glance-8348-account-create-update-kh6fw\" 
(UID: \"603ffaad-dc9f-4434-ab69-7d7f0b818991\") " pod="openstack/glance-8348-account-create-update-kh6fw" Feb 27 17:40:57 crc kubenswrapper[4830]: I0227 17:40:57.150007 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbs9m\" (UniqueName: \"kubernetes.io/projected/603ffaad-dc9f-4434-ab69-7d7f0b818991-kube-api-access-hbs9m\") pod \"glance-8348-account-create-update-kh6fw\" (UID: \"603ffaad-dc9f-4434-ab69-7d7f0b818991\") " pod="openstack/glance-8348-account-create-update-kh6fw" Feb 27 17:40:57 crc kubenswrapper[4830]: I0227 17:40:57.208114 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-7fjgs" Feb 27 17:40:57 crc kubenswrapper[4830]: I0227 17:40:57.293465 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-8348-account-create-update-kh6fw" Feb 27 17:40:57 crc kubenswrapper[4830]: I0227 17:40:57.658567 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-8348-account-create-update-kh6fw"] Feb 27 17:40:57 crc kubenswrapper[4830]: W0227 17:40:57.660233 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod603ffaad_dc9f_4434_ab69_7d7f0b818991.slice/crio-f0e7ab02a516fee5ebf60269e99f7c12c82dfb5d8ac7cd8fddcb751761de66d9 WatchSource:0}: Error finding container f0e7ab02a516fee5ebf60269e99f7c12c82dfb5d8ac7cd8fddcb751761de66d9: Status 404 returned error can't find the container with id f0e7ab02a516fee5ebf60269e99f7c12c82dfb5d8ac7cd8fddcb751761de66d9 Feb 27 17:40:57 crc kubenswrapper[4830]: I0227 17:40:57.795298 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-7fjgs"] Feb 27 17:40:58 crc kubenswrapper[4830]: I0227 17:40:58.644534 4830 generic.go:334] "Generic (PLEG): container finished" podID="c1507562-13d1-412c-ace5-6598ce757fdd" 
containerID="1f0d6ae756f012b90b0ca967dd7d86f0649dc830c16363cfb292fc8b7a069ad9" exitCode=0 Feb 27 17:40:58 crc kubenswrapper[4830]: I0227 17:40:58.644637 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-7fjgs" event={"ID":"c1507562-13d1-412c-ace5-6598ce757fdd","Type":"ContainerDied","Data":"1f0d6ae756f012b90b0ca967dd7d86f0649dc830c16363cfb292fc8b7a069ad9"} Feb 27 17:40:58 crc kubenswrapper[4830]: I0227 17:40:58.645236 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-7fjgs" event={"ID":"c1507562-13d1-412c-ace5-6598ce757fdd","Type":"ContainerStarted","Data":"f89da41360f7709e37ca1a05dca6db56a915a97f79a21cf9647b986826d5de71"} Feb 27 17:40:58 crc kubenswrapper[4830]: I0227 17:40:58.650790 4830 generic.go:334] "Generic (PLEG): container finished" podID="603ffaad-dc9f-4434-ab69-7d7f0b818991" containerID="20a73155e16a680bcaeef5e2ae214a36a2059c9520795cefe96d510ae3d1a618" exitCode=0 Feb 27 17:40:58 crc kubenswrapper[4830]: I0227 17:40:58.650929 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-8348-account-create-update-kh6fw" event={"ID":"603ffaad-dc9f-4434-ab69-7d7f0b818991","Type":"ContainerDied","Data":"20a73155e16a680bcaeef5e2ae214a36a2059c9520795cefe96d510ae3d1a618"} Feb 27 17:40:58 crc kubenswrapper[4830]: I0227 17:40:58.651035 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-8348-account-create-update-kh6fw" event={"ID":"603ffaad-dc9f-4434-ab69-7d7f0b818991","Type":"ContainerStarted","Data":"f0e7ab02a516fee5ebf60269e99f7c12c82dfb5d8ac7cd8fddcb751761de66d9"} Feb 27 17:40:59 crc kubenswrapper[4830]: I0227 17:40:59.997684 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-7fjgs" Feb 27 17:41:00 crc kubenswrapper[4830]: I0227 17:41:00.020316 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jppdl\" (UniqueName: \"kubernetes.io/projected/c1507562-13d1-412c-ace5-6598ce757fdd-kube-api-access-jppdl\") pod \"c1507562-13d1-412c-ace5-6598ce757fdd\" (UID: \"c1507562-13d1-412c-ace5-6598ce757fdd\") " Feb 27 17:41:00 crc kubenswrapper[4830]: I0227 17:41:00.020464 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c1507562-13d1-412c-ace5-6598ce757fdd-operator-scripts\") pod \"c1507562-13d1-412c-ace5-6598ce757fdd\" (UID: \"c1507562-13d1-412c-ace5-6598ce757fdd\") " Feb 27 17:41:00 crc kubenswrapper[4830]: I0227 17:41:00.022001 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1507562-13d1-412c-ace5-6598ce757fdd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c1507562-13d1-412c-ace5-6598ce757fdd" (UID: "c1507562-13d1-412c-ace5-6598ce757fdd"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:41:00 crc kubenswrapper[4830]: I0227 17:41:00.047886 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1507562-13d1-412c-ace5-6598ce757fdd-kube-api-access-jppdl" (OuterVolumeSpecName: "kube-api-access-jppdl") pod "c1507562-13d1-412c-ace5-6598ce757fdd" (UID: "c1507562-13d1-412c-ace5-6598ce757fdd"). InnerVolumeSpecName "kube-api-access-jppdl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:41:00 crc kubenswrapper[4830]: I0227 17:41:00.122620 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jppdl\" (UniqueName: \"kubernetes.io/projected/c1507562-13d1-412c-ace5-6598ce757fdd-kube-api-access-jppdl\") on node \"crc\" DevicePath \"\"" Feb 27 17:41:00 crc kubenswrapper[4830]: I0227 17:41:00.122703 4830 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c1507562-13d1-412c-ace5-6598ce757fdd-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 17:41:00 crc kubenswrapper[4830]: I0227 17:41:00.129496 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-8348-account-create-update-kh6fw" Feb 27 17:41:00 crc kubenswrapper[4830]: I0227 17:41:00.223435 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hbs9m\" (UniqueName: \"kubernetes.io/projected/603ffaad-dc9f-4434-ab69-7d7f0b818991-kube-api-access-hbs9m\") pod \"603ffaad-dc9f-4434-ab69-7d7f0b818991\" (UID: \"603ffaad-dc9f-4434-ab69-7d7f0b818991\") " Feb 27 17:41:00 crc kubenswrapper[4830]: I0227 17:41:00.223652 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/603ffaad-dc9f-4434-ab69-7d7f0b818991-operator-scripts\") pod \"603ffaad-dc9f-4434-ab69-7d7f0b818991\" (UID: \"603ffaad-dc9f-4434-ab69-7d7f0b818991\") " Feb 27 17:41:00 crc kubenswrapper[4830]: I0227 17:41:00.225027 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/603ffaad-dc9f-4434-ab69-7d7f0b818991-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "603ffaad-dc9f-4434-ab69-7d7f0b818991" (UID: "603ffaad-dc9f-4434-ab69-7d7f0b818991"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:41:00 crc kubenswrapper[4830]: I0227 17:41:00.235325 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/603ffaad-dc9f-4434-ab69-7d7f0b818991-kube-api-access-hbs9m" (OuterVolumeSpecName: "kube-api-access-hbs9m") pod "603ffaad-dc9f-4434-ab69-7d7f0b818991" (UID: "603ffaad-dc9f-4434-ab69-7d7f0b818991"). InnerVolumeSpecName "kube-api-access-hbs9m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:41:00 crc kubenswrapper[4830]: I0227 17:41:00.325851 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hbs9m\" (UniqueName: \"kubernetes.io/projected/603ffaad-dc9f-4434-ab69-7d7f0b818991-kube-api-access-hbs9m\") on node \"crc\" DevicePath \"\"" Feb 27 17:41:00 crc kubenswrapper[4830]: I0227 17:41:00.325894 4830 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/603ffaad-dc9f-4434-ab69-7d7f0b818991-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 17:41:00 crc kubenswrapper[4830]: I0227 17:41:00.675846 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-8348-account-create-update-kh6fw" event={"ID":"603ffaad-dc9f-4434-ab69-7d7f0b818991","Type":"ContainerDied","Data":"f0e7ab02a516fee5ebf60269e99f7c12c82dfb5d8ac7cd8fddcb751761de66d9"} Feb 27 17:41:00 crc kubenswrapper[4830]: I0227 17:41:00.675908 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f0e7ab02a516fee5ebf60269e99f7c12c82dfb5d8ac7cd8fddcb751761de66d9" Feb 27 17:41:00 crc kubenswrapper[4830]: I0227 17:41:00.675999 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-8348-account-create-update-kh6fw" Feb 27 17:41:00 crc kubenswrapper[4830]: I0227 17:41:00.679835 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-7fjgs" event={"ID":"c1507562-13d1-412c-ace5-6598ce757fdd","Type":"ContainerDied","Data":"f89da41360f7709e37ca1a05dca6db56a915a97f79a21cf9647b986826d5de71"} Feb 27 17:41:00 crc kubenswrapper[4830]: I0227 17:41:00.680081 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f89da41360f7709e37ca1a05dca6db56a915a97f79a21cf9647b986826d5de71" Feb 27 17:41:00 crc kubenswrapper[4830]: I0227 17:41:00.680337 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-7fjgs" Feb 27 17:41:00 crc kubenswrapper[4830]: E0227 17:41:00.765983 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536898-vrwjs" podUID="204eb1af-36ad-4de7-9da7-9a37fefd3026" Feb 27 17:41:02 crc kubenswrapper[4830]: I0227 17:41:02.211713 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-6w48k"] Feb 27 17:41:02 crc kubenswrapper[4830]: E0227 17:41:02.214059 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1507562-13d1-412c-ace5-6598ce757fdd" containerName="mariadb-database-create" Feb 27 17:41:02 crc kubenswrapper[4830]: I0227 17:41:02.214185 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1507562-13d1-412c-ace5-6598ce757fdd" containerName="mariadb-database-create" Feb 27 17:41:02 crc kubenswrapper[4830]: E0227 17:41:02.214280 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="603ffaad-dc9f-4434-ab69-7d7f0b818991" containerName="mariadb-account-create-update" Feb 27 17:41:02 crc kubenswrapper[4830]: I0227 
17:41:02.215175 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="603ffaad-dc9f-4434-ab69-7d7f0b818991" containerName="mariadb-account-create-update" Feb 27 17:41:02 crc kubenswrapper[4830]: I0227 17:41:02.215523 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="603ffaad-dc9f-4434-ab69-7d7f0b818991" containerName="mariadb-account-create-update" Feb 27 17:41:02 crc kubenswrapper[4830]: I0227 17:41:02.215630 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1507562-13d1-412c-ace5-6598ce757fdd" containerName="mariadb-database-create" Feb 27 17:41:02 crc kubenswrapper[4830]: I0227 17:41:02.216555 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-6w48k" Feb 27 17:41:02 crc kubenswrapper[4830]: I0227 17:41:02.221605 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-gsllq" Feb 27 17:41:02 crc kubenswrapper[4830]: I0227 17:41:02.221708 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Feb 27 17:41:02 crc kubenswrapper[4830]: I0227 17:41:02.234560 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-6w48k"] Feb 27 17:41:02 crc kubenswrapper[4830]: I0227 17:41:02.382981 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/277e5f01-7cfa-40fd-a52d-8af10c6090f8-db-sync-config-data\") pod \"glance-db-sync-6w48k\" (UID: \"277e5f01-7cfa-40fd-a52d-8af10c6090f8\") " pod="openstack/glance-db-sync-6w48k" Feb 27 17:41:02 crc kubenswrapper[4830]: I0227 17:41:02.383553 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/277e5f01-7cfa-40fd-a52d-8af10c6090f8-combined-ca-bundle\") pod \"glance-db-sync-6w48k\" (UID: 
\"277e5f01-7cfa-40fd-a52d-8af10c6090f8\") " pod="openstack/glance-db-sync-6w48k" Feb 27 17:41:02 crc kubenswrapper[4830]: I0227 17:41:02.383745 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4nnl\" (UniqueName: \"kubernetes.io/projected/277e5f01-7cfa-40fd-a52d-8af10c6090f8-kube-api-access-f4nnl\") pod \"glance-db-sync-6w48k\" (UID: \"277e5f01-7cfa-40fd-a52d-8af10c6090f8\") " pod="openstack/glance-db-sync-6w48k" Feb 27 17:41:02 crc kubenswrapper[4830]: I0227 17:41:02.384016 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/277e5f01-7cfa-40fd-a52d-8af10c6090f8-config-data\") pod \"glance-db-sync-6w48k\" (UID: \"277e5f01-7cfa-40fd-a52d-8af10c6090f8\") " pod="openstack/glance-db-sync-6w48k" Feb 27 17:41:02 crc kubenswrapper[4830]: I0227 17:41:02.485892 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/277e5f01-7cfa-40fd-a52d-8af10c6090f8-config-data\") pod \"glance-db-sync-6w48k\" (UID: \"277e5f01-7cfa-40fd-a52d-8af10c6090f8\") " pod="openstack/glance-db-sync-6w48k" Feb 27 17:41:02 crc kubenswrapper[4830]: I0227 17:41:02.486244 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/277e5f01-7cfa-40fd-a52d-8af10c6090f8-db-sync-config-data\") pod \"glance-db-sync-6w48k\" (UID: \"277e5f01-7cfa-40fd-a52d-8af10c6090f8\") " pod="openstack/glance-db-sync-6w48k" Feb 27 17:41:02 crc kubenswrapper[4830]: I0227 17:41:02.486297 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/277e5f01-7cfa-40fd-a52d-8af10c6090f8-combined-ca-bundle\") pod \"glance-db-sync-6w48k\" (UID: \"277e5f01-7cfa-40fd-a52d-8af10c6090f8\") " pod="openstack/glance-db-sync-6w48k" Feb 27 
17:41:02 crc kubenswrapper[4830]: I0227 17:41:02.486342 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f4nnl\" (UniqueName: \"kubernetes.io/projected/277e5f01-7cfa-40fd-a52d-8af10c6090f8-kube-api-access-f4nnl\") pod \"glance-db-sync-6w48k\" (UID: \"277e5f01-7cfa-40fd-a52d-8af10c6090f8\") " pod="openstack/glance-db-sync-6w48k" Feb 27 17:41:02 crc kubenswrapper[4830]: I0227 17:41:02.494738 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/277e5f01-7cfa-40fd-a52d-8af10c6090f8-config-data\") pod \"glance-db-sync-6w48k\" (UID: \"277e5f01-7cfa-40fd-a52d-8af10c6090f8\") " pod="openstack/glance-db-sync-6w48k" Feb 27 17:41:02 crc kubenswrapper[4830]: I0227 17:41:02.494888 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/277e5f01-7cfa-40fd-a52d-8af10c6090f8-combined-ca-bundle\") pod \"glance-db-sync-6w48k\" (UID: \"277e5f01-7cfa-40fd-a52d-8af10c6090f8\") " pod="openstack/glance-db-sync-6w48k" Feb 27 17:41:02 crc kubenswrapper[4830]: I0227 17:41:02.504414 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/277e5f01-7cfa-40fd-a52d-8af10c6090f8-db-sync-config-data\") pod \"glance-db-sync-6w48k\" (UID: \"277e5f01-7cfa-40fd-a52d-8af10c6090f8\") " pod="openstack/glance-db-sync-6w48k" Feb 27 17:41:02 crc kubenswrapper[4830]: I0227 17:41:02.508475 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4nnl\" (UniqueName: \"kubernetes.io/projected/277e5f01-7cfa-40fd-a52d-8af10c6090f8-kube-api-access-f4nnl\") pod \"glance-db-sync-6w48k\" (UID: \"277e5f01-7cfa-40fd-a52d-8af10c6090f8\") " pod="openstack/glance-db-sync-6w48k" Feb 27 17:41:02 crc kubenswrapper[4830]: I0227 17:41:02.542782 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-6w48k" Feb 27 17:41:03 crc kubenswrapper[4830]: I0227 17:41:03.197434 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-6w48k"] Feb 27 17:41:03 crc kubenswrapper[4830]: I0227 17:41:03.720345 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-6w48k" event={"ID":"277e5f01-7cfa-40fd-a52d-8af10c6090f8","Type":"ContainerStarted","Data":"87096d857f065cd98159a3d1407337299beb768632dc22535cb87f80bdbfb8ce"} Feb 27 17:41:04 crc kubenswrapper[4830]: I0227 17:41:04.734752 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-6w48k" event={"ID":"277e5f01-7cfa-40fd-a52d-8af10c6090f8","Type":"ContainerStarted","Data":"71370417538fd2c6bd53b284b401fca4285542e8326674eb686c6728aa0a07c3"} Feb 27 17:41:04 crc kubenswrapper[4830]: I0227 17:41:04.761732 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-6w48k" podStartSLOduration=2.761665546 podStartE2EDuration="2.761665546s" podCreationTimestamp="2026-02-27 17:41:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 17:41:04.760340434 +0000 UTC m=+5660.849612907" watchObservedRunningTime="2026-02-27 17:41:04.761665546 +0000 UTC m=+5660.850938049" Feb 27 17:41:06 crc kubenswrapper[4830]: E0227 17:41:06.775377 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-gvnvz" podUID="f1c73a78-1e95-4481-a273-ba7e3b5a127c" Feb 27 17:41:07 crc kubenswrapper[4830]: I0227 17:41:07.768825 4830 generic.go:334] "Generic (PLEG): container finished" podID="277e5f01-7cfa-40fd-a52d-8af10c6090f8" 
containerID="71370417538fd2c6bd53b284b401fca4285542e8326674eb686c6728aa0a07c3" exitCode=0 Feb 27 17:41:07 crc kubenswrapper[4830]: I0227 17:41:07.768901 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-6w48k" event={"ID":"277e5f01-7cfa-40fd-a52d-8af10c6090f8","Type":"ContainerDied","Data":"71370417538fd2c6bd53b284b401fca4285542e8326674eb686c6728aa0a07c3"} Feb 27 17:41:09 crc kubenswrapper[4830]: I0227 17:41:09.327223 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-6w48k" Feb 27 17:41:09 crc kubenswrapper[4830]: I0227 17:41:09.460539 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/277e5f01-7cfa-40fd-a52d-8af10c6090f8-config-data\") pod \"277e5f01-7cfa-40fd-a52d-8af10c6090f8\" (UID: \"277e5f01-7cfa-40fd-a52d-8af10c6090f8\") " Feb 27 17:41:09 crc kubenswrapper[4830]: I0227 17:41:09.460723 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/277e5f01-7cfa-40fd-a52d-8af10c6090f8-combined-ca-bundle\") pod \"277e5f01-7cfa-40fd-a52d-8af10c6090f8\" (UID: \"277e5f01-7cfa-40fd-a52d-8af10c6090f8\") " Feb 27 17:41:09 crc kubenswrapper[4830]: I0227 17:41:09.460813 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f4nnl\" (UniqueName: \"kubernetes.io/projected/277e5f01-7cfa-40fd-a52d-8af10c6090f8-kube-api-access-f4nnl\") pod \"277e5f01-7cfa-40fd-a52d-8af10c6090f8\" (UID: \"277e5f01-7cfa-40fd-a52d-8af10c6090f8\") " Feb 27 17:41:09 crc kubenswrapper[4830]: I0227 17:41:09.460986 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/277e5f01-7cfa-40fd-a52d-8af10c6090f8-db-sync-config-data\") pod \"277e5f01-7cfa-40fd-a52d-8af10c6090f8\" (UID: 
\"277e5f01-7cfa-40fd-a52d-8af10c6090f8\") " Feb 27 17:41:09 crc kubenswrapper[4830]: I0227 17:41:09.470580 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/277e5f01-7cfa-40fd-a52d-8af10c6090f8-kube-api-access-f4nnl" (OuterVolumeSpecName: "kube-api-access-f4nnl") pod "277e5f01-7cfa-40fd-a52d-8af10c6090f8" (UID: "277e5f01-7cfa-40fd-a52d-8af10c6090f8"). InnerVolumeSpecName "kube-api-access-f4nnl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:41:09 crc kubenswrapper[4830]: I0227 17:41:09.471134 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/277e5f01-7cfa-40fd-a52d-8af10c6090f8-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "277e5f01-7cfa-40fd-a52d-8af10c6090f8" (UID: "277e5f01-7cfa-40fd-a52d-8af10c6090f8"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:41:09 crc kubenswrapper[4830]: I0227 17:41:09.501203 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/277e5f01-7cfa-40fd-a52d-8af10c6090f8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "277e5f01-7cfa-40fd-a52d-8af10c6090f8" (UID: "277e5f01-7cfa-40fd-a52d-8af10c6090f8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:41:09 crc kubenswrapper[4830]: I0227 17:41:09.559802 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/277e5f01-7cfa-40fd-a52d-8af10c6090f8-config-data" (OuterVolumeSpecName: "config-data") pod "277e5f01-7cfa-40fd-a52d-8af10c6090f8" (UID: "277e5f01-7cfa-40fd-a52d-8af10c6090f8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:41:09 crc kubenswrapper[4830]: I0227 17:41:09.564221 4830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/277e5f01-7cfa-40fd-a52d-8af10c6090f8-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 17:41:09 crc kubenswrapper[4830]: I0227 17:41:09.564268 4830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/277e5f01-7cfa-40fd-a52d-8af10c6090f8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 17:41:09 crc kubenswrapper[4830]: I0227 17:41:09.564288 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f4nnl\" (UniqueName: \"kubernetes.io/projected/277e5f01-7cfa-40fd-a52d-8af10c6090f8-kube-api-access-f4nnl\") on node \"crc\" DevicePath \"\"" Feb 27 17:41:09 crc kubenswrapper[4830]: I0227 17:41:09.564303 4830 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/277e5f01-7cfa-40fd-a52d-8af10c6090f8-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 17:41:09 crc kubenswrapper[4830]: I0227 17:41:09.807177 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-6w48k" event={"ID":"277e5f01-7cfa-40fd-a52d-8af10c6090f8","Type":"ContainerDied","Data":"87096d857f065cd98159a3d1407337299beb768632dc22535cb87f80bdbfb8ce"} Feb 27 17:41:09 crc kubenswrapper[4830]: I0227 17:41:09.807918 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="87096d857f065cd98159a3d1407337299beb768632dc22535cb87f80bdbfb8ce" Feb 27 17:41:09 crc kubenswrapper[4830]: I0227 17:41:09.808088 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-6w48k" Feb 27 17:41:09 crc kubenswrapper[4830]: E0227 17:41:09.885408 4830 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod277e5f01_7cfa_40fd_a52d_8af10c6090f8.slice\": RecentStats: unable to find data in memory cache]" Feb 27 17:41:10 crc kubenswrapper[4830]: I0227 17:41:10.137722 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 27 17:41:10 crc kubenswrapper[4830]: E0227 17:41:10.138265 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="277e5f01-7cfa-40fd-a52d-8af10c6090f8" containerName="glance-db-sync" Feb 27 17:41:10 crc kubenswrapper[4830]: I0227 17:41:10.138285 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="277e5f01-7cfa-40fd-a52d-8af10c6090f8" containerName="glance-db-sync" Feb 27 17:41:10 crc kubenswrapper[4830]: I0227 17:41:10.138492 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="277e5f01-7cfa-40fd-a52d-8af10c6090f8" containerName="glance-db-sync" Feb 27 17:41:10 crc kubenswrapper[4830]: I0227 17:41:10.139509 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 27 17:41:10 crc kubenswrapper[4830]: I0227 17:41:10.143255 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Feb 27 17:41:10 crc kubenswrapper[4830]: I0227 17:41:10.143502 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 27 17:41:10 crc kubenswrapper[4830]: I0227 17:41:10.146685 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 27 17:41:10 crc kubenswrapper[4830]: I0227 17:41:10.153009 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Feb 27 17:41:10 crc kubenswrapper[4830]: I0227 17:41:10.153155 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-gsllq" Feb 27 17:41:10 crc kubenswrapper[4830]: I0227 17:41:10.251503 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-9d796c65c-w27f9"] Feb 27 17:41:10 crc kubenswrapper[4830]: I0227 17:41:10.253126 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-9d796c65c-w27f9" Feb 27 17:41:10 crc kubenswrapper[4830]: I0227 17:41:10.271686 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-9d796c65c-w27f9"] Feb 27 17:41:10 crc kubenswrapper[4830]: I0227 17:41:10.284868 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/eeed6188-9398-402f-a80b-e16d1d634cfd-ceph\") pod \"glance-default-external-api-0\" (UID: \"eeed6188-9398-402f-a80b-e16d1d634cfd\") " pod="openstack/glance-default-external-api-0" Feb 27 17:41:10 crc kubenswrapper[4830]: I0227 17:41:10.284942 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eeed6188-9398-402f-a80b-e16d1d634cfd-config-data\") pod \"glance-default-external-api-0\" (UID: \"eeed6188-9398-402f-a80b-e16d1d634cfd\") " pod="openstack/glance-default-external-api-0" Feb 27 17:41:10 crc kubenswrapper[4830]: I0227 17:41:10.285258 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmvfg\" (UniqueName: \"kubernetes.io/projected/eeed6188-9398-402f-a80b-e16d1d634cfd-kube-api-access-bmvfg\") pod \"glance-default-external-api-0\" (UID: \"eeed6188-9398-402f-a80b-e16d1d634cfd\") " pod="openstack/glance-default-external-api-0" Feb 27 17:41:10 crc kubenswrapper[4830]: I0227 17:41:10.285327 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/eeed6188-9398-402f-a80b-e16d1d634cfd-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"eeed6188-9398-402f-a80b-e16d1d634cfd\") " pod="openstack/glance-default-external-api-0" Feb 27 17:41:10 crc kubenswrapper[4830]: I0227 17:41:10.285643 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eeed6188-9398-402f-a80b-e16d1d634cfd-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"eeed6188-9398-402f-a80b-e16d1d634cfd\") " pod="openstack/glance-default-external-api-0" Feb 27 17:41:10 crc kubenswrapper[4830]: I0227 17:41:10.285708 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eeed6188-9398-402f-a80b-e16d1d634cfd-logs\") pod \"glance-default-external-api-0\" (UID: \"eeed6188-9398-402f-a80b-e16d1d634cfd\") " pod="openstack/glance-default-external-api-0" Feb 27 17:41:10 crc kubenswrapper[4830]: I0227 17:41:10.285839 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eeed6188-9398-402f-a80b-e16d1d634cfd-scripts\") pod \"glance-default-external-api-0\" (UID: \"eeed6188-9398-402f-a80b-e16d1d634cfd\") " pod="openstack/glance-default-external-api-0" Feb 27 17:41:10 crc kubenswrapper[4830]: I0227 17:41:10.386247 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 27 17:41:10 crc kubenswrapper[4830]: I0227 17:41:10.387250 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/eeed6188-9398-402f-a80b-e16d1d634cfd-ceph\") pod \"glance-default-external-api-0\" (UID: \"eeed6188-9398-402f-a80b-e16d1d634cfd\") " pod="openstack/glance-default-external-api-0" Feb 27 17:41:10 crc kubenswrapper[4830]: I0227 17:41:10.387293 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eeed6188-9398-402f-a80b-e16d1d634cfd-config-data\") pod \"glance-default-external-api-0\" (UID: \"eeed6188-9398-402f-a80b-e16d1d634cfd\") " pod="openstack/glance-default-external-api-0" Feb 27 17:41:10 crc 
kubenswrapper[4830]: I0227 17:41:10.387328 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4qpz\" (UniqueName: \"kubernetes.io/projected/4f04e887-5fcb-4a92-9eff-2bef86064d95-kube-api-access-f4qpz\") pod \"dnsmasq-dns-9d796c65c-w27f9\" (UID: \"4f04e887-5fcb-4a92-9eff-2bef86064d95\") " pod="openstack/dnsmasq-dns-9d796c65c-w27f9" Feb 27 17:41:10 crc kubenswrapper[4830]: I0227 17:41:10.387374 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f04e887-5fcb-4a92-9eff-2bef86064d95-config\") pod \"dnsmasq-dns-9d796c65c-w27f9\" (UID: \"4f04e887-5fcb-4a92-9eff-2bef86064d95\") " pod="openstack/dnsmasq-dns-9d796c65c-w27f9" Feb 27 17:41:10 crc kubenswrapper[4830]: I0227 17:41:10.387412 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bmvfg\" (UniqueName: \"kubernetes.io/projected/eeed6188-9398-402f-a80b-e16d1d634cfd-kube-api-access-bmvfg\") pod \"glance-default-external-api-0\" (UID: \"eeed6188-9398-402f-a80b-e16d1d634cfd\") " pod="openstack/glance-default-external-api-0" Feb 27 17:41:10 crc kubenswrapper[4830]: I0227 17:41:10.387437 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/eeed6188-9398-402f-a80b-e16d1d634cfd-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"eeed6188-9398-402f-a80b-e16d1d634cfd\") " pod="openstack/glance-default-external-api-0" Feb 27 17:41:10 crc kubenswrapper[4830]: I0227 17:41:10.387470 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4f04e887-5fcb-4a92-9eff-2bef86064d95-dns-svc\") pod \"dnsmasq-dns-9d796c65c-w27f9\" (UID: \"4f04e887-5fcb-4a92-9eff-2bef86064d95\") " pod="openstack/dnsmasq-dns-9d796c65c-w27f9" Feb 27 17:41:10 crc 
kubenswrapper[4830]: I0227 17:41:10.387495 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4f04e887-5fcb-4a92-9eff-2bef86064d95-ovsdbserver-nb\") pod \"dnsmasq-dns-9d796c65c-w27f9\" (UID: \"4f04e887-5fcb-4a92-9eff-2bef86064d95\") " pod="openstack/dnsmasq-dns-9d796c65c-w27f9" Feb 27 17:41:10 crc kubenswrapper[4830]: I0227 17:41:10.387523 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4f04e887-5fcb-4a92-9eff-2bef86064d95-ovsdbserver-sb\") pod \"dnsmasq-dns-9d796c65c-w27f9\" (UID: \"4f04e887-5fcb-4a92-9eff-2bef86064d95\") " pod="openstack/dnsmasq-dns-9d796c65c-w27f9" Feb 27 17:41:10 crc kubenswrapper[4830]: I0227 17:41:10.387551 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eeed6188-9398-402f-a80b-e16d1d634cfd-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"eeed6188-9398-402f-a80b-e16d1d634cfd\") " pod="openstack/glance-default-external-api-0" Feb 27 17:41:10 crc kubenswrapper[4830]: I0227 17:41:10.387569 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eeed6188-9398-402f-a80b-e16d1d634cfd-logs\") pod \"glance-default-external-api-0\" (UID: \"eeed6188-9398-402f-a80b-e16d1d634cfd\") " pod="openstack/glance-default-external-api-0" Feb 27 17:41:10 crc kubenswrapper[4830]: I0227 17:41:10.387601 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eeed6188-9398-402f-a80b-e16d1d634cfd-scripts\") pod \"glance-default-external-api-0\" (UID: \"eeed6188-9398-402f-a80b-e16d1d634cfd\") " pod="openstack/glance-default-external-api-0" Feb 27 17:41:10 crc kubenswrapper[4830]: I0227 17:41:10.388834 
4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 27 17:41:10 crc kubenswrapper[4830]: I0227 17:41:10.389065 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/eeed6188-9398-402f-a80b-e16d1d634cfd-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"eeed6188-9398-402f-a80b-e16d1d634cfd\") " pod="openstack/glance-default-external-api-0" Feb 27 17:41:10 crc kubenswrapper[4830]: I0227 17:41:10.389301 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eeed6188-9398-402f-a80b-e16d1d634cfd-logs\") pod \"glance-default-external-api-0\" (UID: \"eeed6188-9398-402f-a80b-e16d1d634cfd\") " pod="openstack/glance-default-external-api-0" Feb 27 17:41:10 crc kubenswrapper[4830]: I0227 17:41:10.391845 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 27 17:41:10 crc kubenswrapper[4830]: I0227 17:41:10.404389 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eeed6188-9398-402f-a80b-e16d1d634cfd-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"eeed6188-9398-402f-a80b-e16d1d634cfd\") " pod="openstack/glance-default-external-api-0" Feb 27 17:41:10 crc kubenswrapper[4830]: I0227 17:41:10.404403 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eeed6188-9398-402f-a80b-e16d1d634cfd-config-data\") pod \"glance-default-external-api-0\" (UID: \"eeed6188-9398-402f-a80b-e16d1d634cfd\") " pod="openstack/glance-default-external-api-0" Feb 27 17:41:10 crc kubenswrapper[4830]: I0227 17:41:10.404756 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: 
\"kubernetes.io/projected/eeed6188-9398-402f-a80b-e16d1d634cfd-ceph\") pod \"glance-default-external-api-0\" (UID: \"eeed6188-9398-402f-a80b-e16d1d634cfd\") " pod="openstack/glance-default-external-api-0" Feb 27 17:41:10 crc kubenswrapper[4830]: I0227 17:41:10.404844 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eeed6188-9398-402f-a80b-e16d1d634cfd-scripts\") pod \"glance-default-external-api-0\" (UID: \"eeed6188-9398-402f-a80b-e16d1d634cfd\") " pod="openstack/glance-default-external-api-0" Feb 27 17:41:10 crc kubenswrapper[4830]: I0227 17:41:10.408053 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bmvfg\" (UniqueName: \"kubernetes.io/projected/eeed6188-9398-402f-a80b-e16d1d634cfd-kube-api-access-bmvfg\") pod \"glance-default-external-api-0\" (UID: \"eeed6188-9398-402f-a80b-e16d1d634cfd\") " pod="openstack/glance-default-external-api-0" Feb 27 17:41:10 crc kubenswrapper[4830]: I0227 17:41:10.431432 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 27 17:41:10 crc kubenswrapper[4830]: I0227 17:41:10.461563 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 27 17:41:10 crc kubenswrapper[4830]: I0227 17:41:10.489991 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e1b7e3b1-b870-4000-8627-eff61f11aeeb-logs\") pod \"glance-default-internal-api-0\" (UID: \"e1b7e3b1-b870-4000-8627-eff61f11aeeb\") " pod="openstack/glance-default-internal-api-0" Feb 27 17:41:10 crc kubenswrapper[4830]: I0227 17:41:10.490075 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f4qpz\" (UniqueName: \"kubernetes.io/projected/4f04e887-5fcb-4a92-9eff-2bef86064d95-kube-api-access-f4qpz\") pod \"dnsmasq-dns-9d796c65c-w27f9\" (UID: \"4f04e887-5fcb-4a92-9eff-2bef86064d95\") " pod="openstack/dnsmasq-dns-9d796c65c-w27f9" Feb 27 17:41:10 crc kubenswrapper[4830]: I0227 17:41:10.490222 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1b7e3b1-b870-4000-8627-eff61f11aeeb-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"e1b7e3b1-b870-4000-8627-eff61f11aeeb\") " pod="openstack/glance-default-internal-api-0" Feb 27 17:41:10 crc kubenswrapper[4830]: I0227 17:41:10.490281 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f04e887-5fcb-4a92-9eff-2bef86064d95-config\") pod \"dnsmasq-dns-9d796c65c-w27f9\" (UID: \"4f04e887-5fcb-4a92-9eff-2bef86064d95\") " pod="openstack/dnsmasq-dns-9d796c65c-w27f9" Feb 27 17:41:10 crc kubenswrapper[4830]: I0227 17:41:10.490436 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1b7e3b1-b870-4000-8627-eff61f11aeeb-config-data\") pod \"glance-default-internal-api-0\" (UID: 
\"e1b7e3b1-b870-4000-8627-eff61f11aeeb\") " pod="openstack/glance-default-internal-api-0" Feb 27 17:41:10 crc kubenswrapper[4830]: I0227 17:41:10.490582 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/e1b7e3b1-b870-4000-8627-eff61f11aeeb-ceph\") pod \"glance-default-internal-api-0\" (UID: \"e1b7e3b1-b870-4000-8627-eff61f11aeeb\") " pod="openstack/glance-default-internal-api-0" Feb 27 17:41:10 crc kubenswrapper[4830]: I0227 17:41:10.490606 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8f8lb\" (UniqueName: \"kubernetes.io/projected/e1b7e3b1-b870-4000-8627-eff61f11aeeb-kube-api-access-8f8lb\") pod \"glance-default-internal-api-0\" (UID: \"e1b7e3b1-b870-4000-8627-eff61f11aeeb\") " pod="openstack/glance-default-internal-api-0" Feb 27 17:41:10 crc kubenswrapper[4830]: I0227 17:41:10.490734 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4f04e887-5fcb-4a92-9eff-2bef86064d95-dns-svc\") pod \"dnsmasq-dns-9d796c65c-w27f9\" (UID: \"4f04e887-5fcb-4a92-9eff-2bef86064d95\") " pod="openstack/dnsmasq-dns-9d796c65c-w27f9" Feb 27 17:41:10 crc kubenswrapper[4830]: I0227 17:41:10.490764 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4f04e887-5fcb-4a92-9eff-2bef86064d95-ovsdbserver-nb\") pod \"dnsmasq-dns-9d796c65c-w27f9\" (UID: \"4f04e887-5fcb-4a92-9eff-2bef86064d95\") " pod="openstack/dnsmasq-dns-9d796c65c-w27f9" Feb 27 17:41:10 crc kubenswrapper[4830]: I0227 17:41:10.490937 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4f04e887-5fcb-4a92-9eff-2bef86064d95-ovsdbserver-sb\") pod \"dnsmasq-dns-9d796c65c-w27f9\" (UID: 
\"4f04e887-5fcb-4a92-9eff-2bef86064d95\") " pod="openstack/dnsmasq-dns-9d796c65c-w27f9" Feb 27 17:41:10 crc kubenswrapper[4830]: I0227 17:41:10.491100 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e1b7e3b1-b870-4000-8627-eff61f11aeeb-scripts\") pod \"glance-default-internal-api-0\" (UID: \"e1b7e3b1-b870-4000-8627-eff61f11aeeb\") " pod="openstack/glance-default-internal-api-0" Feb 27 17:41:10 crc kubenswrapper[4830]: I0227 17:41:10.491130 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e1b7e3b1-b870-4000-8627-eff61f11aeeb-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"e1b7e3b1-b870-4000-8627-eff61f11aeeb\") " pod="openstack/glance-default-internal-api-0" Feb 27 17:41:10 crc kubenswrapper[4830]: I0227 17:41:10.491220 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f04e887-5fcb-4a92-9eff-2bef86064d95-config\") pod \"dnsmasq-dns-9d796c65c-w27f9\" (UID: \"4f04e887-5fcb-4a92-9eff-2bef86064d95\") " pod="openstack/dnsmasq-dns-9d796c65c-w27f9" Feb 27 17:41:10 crc kubenswrapper[4830]: I0227 17:41:10.493012 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4f04e887-5fcb-4a92-9eff-2bef86064d95-dns-svc\") pod \"dnsmasq-dns-9d796c65c-w27f9\" (UID: \"4f04e887-5fcb-4a92-9eff-2bef86064d95\") " pod="openstack/dnsmasq-dns-9d796c65c-w27f9" Feb 27 17:41:10 crc kubenswrapper[4830]: I0227 17:41:10.493806 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4f04e887-5fcb-4a92-9eff-2bef86064d95-ovsdbserver-sb\") pod \"dnsmasq-dns-9d796c65c-w27f9\" (UID: \"4f04e887-5fcb-4a92-9eff-2bef86064d95\") " pod="openstack/dnsmasq-dns-9d796c65c-w27f9" Feb 27 
17:41:10 crc kubenswrapper[4830]: I0227 17:41:10.494050 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4f04e887-5fcb-4a92-9eff-2bef86064d95-ovsdbserver-nb\") pod \"dnsmasq-dns-9d796c65c-w27f9\" (UID: \"4f04e887-5fcb-4a92-9eff-2bef86064d95\") " pod="openstack/dnsmasq-dns-9d796c65c-w27f9" Feb 27 17:41:10 crc kubenswrapper[4830]: I0227 17:41:10.517136 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4qpz\" (UniqueName: \"kubernetes.io/projected/4f04e887-5fcb-4a92-9eff-2bef86064d95-kube-api-access-f4qpz\") pod \"dnsmasq-dns-9d796c65c-w27f9\" (UID: \"4f04e887-5fcb-4a92-9eff-2bef86064d95\") " pod="openstack/dnsmasq-dns-9d796c65c-w27f9" Feb 27 17:41:10 crc kubenswrapper[4830]: I0227 17:41:10.593705 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e1b7e3b1-b870-4000-8627-eff61f11aeeb-logs\") pod \"glance-default-internal-api-0\" (UID: \"e1b7e3b1-b870-4000-8627-eff61f11aeeb\") " pod="openstack/glance-default-internal-api-0" Feb 27 17:41:10 crc kubenswrapper[4830]: I0227 17:41:10.594376 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1b7e3b1-b870-4000-8627-eff61f11aeeb-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"e1b7e3b1-b870-4000-8627-eff61f11aeeb\") " pod="openstack/glance-default-internal-api-0" Feb 27 17:41:10 crc kubenswrapper[4830]: I0227 17:41:10.594444 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1b7e3b1-b870-4000-8627-eff61f11aeeb-config-data\") pod \"glance-default-internal-api-0\" (UID: \"e1b7e3b1-b870-4000-8627-eff61f11aeeb\") " pod="openstack/glance-default-internal-api-0" Feb 27 17:41:10 crc kubenswrapper[4830]: I0227 17:41:10.594508 4830 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/e1b7e3b1-b870-4000-8627-eff61f11aeeb-ceph\") pod \"glance-default-internal-api-0\" (UID: \"e1b7e3b1-b870-4000-8627-eff61f11aeeb\") " pod="openstack/glance-default-internal-api-0" Feb 27 17:41:10 crc kubenswrapper[4830]: I0227 17:41:10.594538 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8f8lb\" (UniqueName: \"kubernetes.io/projected/e1b7e3b1-b870-4000-8627-eff61f11aeeb-kube-api-access-8f8lb\") pod \"glance-default-internal-api-0\" (UID: \"e1b7e3b1-b870-4000-8627-eff61f11aeeb\") " pod="openstack/glance-default-internal-api-0" Feb 27 17:41:10 crc kubenswrapper[4830]: I0227 17:41:10.594609 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e1b7e3b1-b870-4000-8627-eff61f11aeeb-scripts\") pod \"glance-default-internal-api-0\" (UID: \"e1b7e3b1-b870-4000-8627-eff61f11aeeb\") " pod="openstack/glance-default-internal-api-0" Feb 27 17:41:10 crc kubenswrapper[4830]: I0227 17:41:10.594967 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e1b7e3b1-b870-4000-8627-eff61f11aeeb-logs\") pod \"glance-default-internal-api-0\" (UID: \"e1b7e3b1-b870-4000-8627-eff61f11aeeb\") " pod="openstack/glance-default-internal-api-0" Feb 27 17:41:10 crc kubenswrapper[4830]: I0227 17:41:10.598994 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e1b7e3b1-b870-4000-8627-eff61f11aeeb-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"e1b7e3b1-b870-4000-8627-eff61f11aeeb\") " pod="openstack/glance-default-internal-api-0" Feb 27 17:41:10 crc kubenswrapper[4830]: I0227 17:41:10.599677 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/e1b7e3b1-b870-4000-8627-eff61f11aeeb-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"e1b7e3b1-b870-4000-8627-eff61f11aeeb\") " pod="openstack/glance-default-internal-api-0" Feb 27 17:41:10 crc kubenswrapper[4830]: I0227 17:41:10.602048 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e1b7e3b1-b870-4000-8627-eff61f11aeeb-scripts\") pod \"glance-default-internal-api-0\" (UID: \"e1b7e3b1-b870-4000-8627-eff61f11aeeb\") " pod="openstack/glance-default-internal-api-0" Feb 27 17:41:10 crc kubenswrapper[4830]: I0227 17:41:10.602730 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1b7e3b1-b870-4000-8627-eff61f11aeeb-config-data\") pod \"glance-default-internal-api-0\" (UID: \"e1b7e3b1-b870-4000-8627-eff61f11aeeb\") " pod="openstack/glance-default-internal-api-0" Feb 27 17:41:10 crc kubenswrapper[4830]: I0227 17:41:10.603184 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1b7e3b1-b870-4000-8627-eff61f11aeeb-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"e1b7e3b1-b870-4000-8627-eff61f11aeeb\") " pod="openstack/glance-default-internal-api-0" Feb 27 17:41:10 crc kubenswrapper[4830]: I0227 17:41:10.606051 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/e1b7e3b1-b870-4000-8627-eff61f11aeeb-ceph\") pod \"glance-default-internal-api-0\" (UID: \"e1b7e3b1-b870-4000-8627-eff61f11aeeb\") " pod="openstack/glance-default-internal-api-0" Feb 27 17:41:10 crc kubenswrapper[4830]: I0227 17:41:10.611619 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8f8lb\" (UniqueName: \"kubernetes.io/projected/e1b7e3b1-b870-4000-8627-eff61f11aeeb-kube-api-access-8f8lb\") pod 
\"glance-default-internal-api-0\" (UID: \"e1b7e3b1-b870-4000-8627-eff61f11aeeb\") " pod="openstack/glance-default-internal-api-0" Feb 27 17:41:10 crc kubenswrapper[4830]: I0227 17:41:10.635166 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-9d796c65c-w27f9" Feb 27 17:41:10 crc kubenswrapper[4830]: I0227 17:41:10.859244 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 27 17:41:10 crc kubenswrapper[4830]: W0227 17:41:10.865560 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeeed6188_9398_402f_a80b_e16d1d634cfd.slice/crio-86fbcd53c665872442e4dececfa90af74c93d1189f7a0fea5f65573ddf82a6df WatchSource:0}: Error finding container 86fbcd53c665872442e4dececfa90af74c93d1189f7a0fea5f65573ddf82a6df: Status 404 returned error can't find the container with id 86fbcd53c665872442e4dececfa90af74c93d1189f7a0fea5f65573ddf82a6df Feb 27 17:41:10 crc kubenswrapper[4830]: I0227 17:41:10.869297 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 27 17:41:10 crc kubenswrapper[4830]: I0227 17:41:10.938533 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-9d796c65c-w27f9"] Feb 27 17:41:10 crc kubenswrapper[4830]: W0227 17:41:10.943479 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4f04e887_5fcb_4a92_9eff_2bef86064d95.slice/crio-a734bd77b9c9ed59f596e81d1190b577569235d61a45dd92e83aacdcb979a0c6 WatchSource:0}: Error finding container a734bd77b9c9ed59f596e81d1190b577569235d61a45dd92e83aacdcb979a0c6: Status 404 returned error can't find the container with id a734bd77b9c9ed59f596e81d1190b577569235d61a45dd92e83aacdcb979a0c6 Feb 27 17:41:11 crc kubenswrapper[4830]: I0227 17:41:11.344311 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 27 17:41:11 crc kubenswrapper[4830]: I0227 17:41:11.515238 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 27 17:41:11 crc kubenswrapper[4830]: W0227 17:41:11.531604 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode1b7e3b1_b870_4000_8627_eff61f11aeeb.slice/crio-03130b243511d55a9150883f87c01254d6e869f894eb093e35e06cf597b229bf WatchSource:0}: Error finding container 03130b243511d55a9150883f87c01254d6e869f894eb093e35e06cf597b229bf: Status 404 returned error can't find the container with id 03130b243511d55a9150883f87c01254d6e869f894eb093e35e06cf597b229bf Feb 27 17:41:11 crc kubenswrapper[4830]: I0227 17:41:11.843774 4830 generic.go:334] "Generic (PLEG): container finished" podID="4f04e887-5fcb-4a92-9eff-2bef86064d95" containerID="b7f5bff39a429923be48a944cfda90a45f2e8ee3852b5b9470ef72543910087f" exitCode=0 Feb 27 17:41:11 crc kubenswrapper[4830]: I0227 17:41:11.843880 4830 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/dnsmasq-dns-9d796c65c-w27f9" event={"ID":"4f04e887-5fcb-4a92-9eff-2bef86064d95","Type":"ContainerDied","Data":"b7f5bff39a429923be48a944cfda90a45f2e8ee3852b5b9470ef72543910087f"} Feb 27 17:41:11 crc kubenswrapper[4830]: I0227 17:41:11.844264 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9d796c65c-w27f9" event={"ID":"4f04e887-5fcb-4a92-9eff-2bef86064d95","Type":"ContainerStarted","Data":"a734bd77b9c9ed59f596e81d1190b577569235d61a45dd92e83aacdcb979a0c6"} Feb 27 17:41:11 crc kubenswrapper[4830]: I0227 17:41:11.847617 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e1b7e3b1-b870-4000-8627-eff61f11aeeb","Type":"ContainerStarted","Data":"03130b243511d55a9150883f87c01254d6e869f894eb093e35e06cf597b229bf"} Feb 27 17:41:11 crc kubenswrapper[4830]: I0227 17:41:11.859149 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"eeed6188-9398-402f-a80b-e16d1d634cfd","Type":"ContainerStarted","Data":"2bf6e6fc9719673532fc7d4a837ebb6b91a729a1c16b54e1f8c96edafb9a46c5"} Feb 27 17:41:11 crc kubenswrapper[4830]: I0227 17:41:11.859216 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"eeed6188-9398-402f-a80b-e16d1d634cfd","Type":"ContainerStarted","Data":"86fbcd53c665872442e4dececfa90af74c93d1189f7a0fea5f65573ddf82a6df"} Feb 27 17:41:12 crc kubenswrapper[4830]: I0227 17:41:12.875361 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e1b7e3b1-b870-4000-8627-eff61f11aeeb","Type":"ContainerStarted","Data":"5e0f201ca150662efb93a94f4268c53997cdc9dd0dcb59d1c7c4c1cc51fb5617"} Feb 27 17:41:12 crc kubenswrapper[4830]: I0227 17:41:12.876433 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"e1b7e3b1-b870-4000-8627-eff61f11aeeb","Type":"ContainerStarted","Data":"dd920ac26ad1978bd8f1d43253f8eee5c730be43c17d1952cae24a200f61d468"} Feb 27 17:41:12 crc kubenswrapper[4830]: I0227 17:41:12.894123 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"eeed6188-9398-402f-a80b-e16d1d634cfd","Type":"ContainerStarted","Data":"00fb3fd3af8289aa4937e3aa351750718f17ccd436c71f75b3dca8afc49ec4e2"} Feb 27 17:41:12 crc kubenswrapper[4830]: I0227 17:41:12.897909 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9d796c65c-w27f9" event={"ID":"4f04e887-5fcb-4a92-9eff-2bef86064d95","Type":"ContainerStarted","Data":"eba1901f4ba5b5c8ed7f3d84d247dbbf6cf8573c4e266775b26c5dd56d91bf8b"} Feb 27 17:41:12 crc kubenswrapper[4830]: I0227 17:41:12.898447 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-9d796c65c-w27f9" Feb 27 17:41:12 crc kubenswrapper[4830]: I0227 17:41:12.899387 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="eeed6188-9398-402f-a80b-e16d1d634cfd" containerName="glance-log" containerID="cri-o://2bf6e6fc9719673532fc7d4a837ebb6b91a729a1c16b54e1f8c96edafb9a46c5" gracePeriod=30 Feb 27 17:41:12 crc kubenswrapper[4830]: I0227 17:41:12.899554 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="eeed6188-9398-402f-a80b-e16d1d634cfd" containerName="glance-httpd" containerID="cri-o://00fb3fd3af8289aa4937e3aa351750718f17ccd436c71f75b3dca8afc49ec4e2" gracePeriod=30 Feb 27 17:41:12 crc kubenswrapper[4830]: I0227 17:41:12.957214 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=2.9571911010000003 podStartE2EDuration="2.957191101s" podCreationTimestamp="2026-02-27 17:41:10 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 17:41:12.922167409 +0000 UTC m=+5669.011440022" watchObservedRunningTime="2026-02-27 17:41:12.957191101 +0000 UTC m=+5669.046463564" Feb 27 17:41:12 crc kubenswrapper[4830]: I0227 17:41:12.982324 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=2.982299115 podStartE2EDuration="2.982299115s" podCreationTimestamp="2026-02-27 17:41:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 17:41:12.956227988 +0000 UTC m=+5669.045500451" watchObservedRunningTime="2026-02-27 17:41:12.982299115 +0000 UTC m=+5669.071571598" Feb 27 17:41:12 crc kubenswrapper[4830]: I0227 17:41:12.983769 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-9d796c65c-w27f9" podStartSLOduration=2.9837615299999998 podStartE2EDuration="2.98376153s" podCreationTimestamp="2026-02-27 17:41:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 17:41:12.973020112 +0000 UTC m=+5669.062292585" watchObservedRunningTime="2026-02-27 17:41:12.98376153 +0000 UTC m=+5669.073034003" Feb 27 17:41:13 crc kubenswrapper[4830]: I0227 17:41:13.293653 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 27 17:41:13 crc kubenswrapper[4830]: I0227 17:41:13.571350 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 27 17:41:13 crc kubenswrapper[4830]: I0227 17:41:13.620078 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bmvfg\" (UniqueName: \"kubernetes.io/projected/eeed6188-9398-402f-a80b-e16d1d634cfd-kube-api-access-bmvfg\") pod \"eeed6188-9398-402f-a80b-e16d1d634cfd\" (UID: \"eeed6188-9398-402f-a80b-e16d1d634cfd\") " Feb 27 17:41:13 crc kubenswrapper[4830]: I0227 17:41:13.620461 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eeed6188-9398-402f-a80b-e16d1d634cfd-config-data\") pod \"eeed6188-9398-402f-a80b-e16d1d634cfd\" (UID: \"eeed6188-9398-402f-a80b-e16d1d634cfd\") " Feb 27 17:41:13 crc kubenswrapper[4830]: I0227 17:41:13.620524 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eeed6188-9398-402f-a80b-e16d1d634cfd-logs\") pod \"eeed6188-9398-402f-a80b-e16d1d634cfd\" (UID: \"eeed6188-9398-402f-a80b-e16d1d634cfd\") " Feb 27 17:41:13 crc kubenswrapper[4830]: I0227 17:41:13.620562 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/eeed6188-9398-402f-a80b-e16d1d634cfd-httpd-run\") pod \"eeed6188-9398-402f-a80b-e16d1d634cfd\" (UID: \"eeed6188-9398-402f-a80b-e16d1d634cfd\") " Feb 27 17:41:13 crc kubenswrapper[4830]: I0227 17:41:13.620610 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eeed6188-9398-402f-a80b-e16d1d634cfd-combined-ca-bundle\") pod \"eeed6188-9398-402f-a80b-e16d1d634cfd\" (UID: \"eeed6188-9398-402f-a80b-e16d1d634cfd\") " Feb 27 17:41:13 crc kubenswrapper[4830]: I0227 17:41:13.620630 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: 
\"kubernetes.io/projected/eeed6188-9398-402f-a80b-e16d1d634cfd-ceph\") pod \"eeed6188-9398-402f-a80b-e16d1d634cfd\" (UID: \"eeed6188-9398-402f-a80b-e16d1d634cfd\") " Feb 27 17:41:13 crc kubenswrapper[4830]: I0227 17:41:13.620647 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eeed6188-9398-402f-a80b-e16d1d634cfd-scripts\") pod \"eeed6188-9398-402f-a80b-e16d1d634cfd\" (UID: \"eeed6188-9398-402f-a80b-e16d1d634cfd\") " Feb 27 17:41:13 crc kubenswrapper[4830]: I0227 17:41:13.621159 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eeed6188-9398-402f-a80b-e16d1d634cfd-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "eeed6188-9398-402f-a80b-e16d1d634cfd" (UID: "eeed6188-9398-402f-a80b-e16d1d634cfd"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 17:41:13 crc kubenswrapper[4830]: I0227 17:41:13.621468 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eeed6188-9398-402f-a80b-e16d1d634cfd-logs" (OuterVolumeSpecName: "logs") pod "eeed6188-9398-402f-a80b-e16d1d634cfd" (UID: "eeed6188-9398-402f-a80b-e16d1d634cfd"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 17:41:13 crc kubenswrapper[4830]: I0227 17:41:13.627716 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eeed6188-9398-402f-a80b-e16d1d634cfd-ceph" (OuterVolumeSpecName: "ceph") pod "eeed6188-9398-402f-a80b-e16d1d634cfd" (UID: "eeed6188-9398-402f-a80b-e16d1d634cfd"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:41:13 crc kubenswrapper[4830]: I0227 17:41:13.645815 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eeed6188-9398-402f-a80b-e16d1d634cfd-scripts" (OuterVolumeSpecName: "scripts") pod "eeed6188-9398-402f-a80b-e16d1d634cfd" (UID: "eeed6188-9398-402f-a80b-e16d1d634cfd"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:41:13 crc kubenswrapper[4830]: I0227 17:41:13.646047 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eeed6188-9398-402f-a80b-e16d1d634cfd-kube-api-access-bmvfg" (OuterVolumeSpecName: "kube-api-access-bmvfg") pod "eeed6188-9398-402f-a80b-e16d1d634cfd" (UID: "eeed6188-9398-402f-a80b-e16d1d634cfd"). InnerVolumeSpecName "kube-api-access-bmvfg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:41:13 crc kubenswrapper[4830]: I0227 17:41:13.669162 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eeed6188-9398-402f-a80b-e16d1d634cfd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "eeed6188-9398-402f-a80b-e16d1d634cfd" (UID: "eeed6188-9398-402f-a80b-e16d1d634cfd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:41:13 crc kubenswrapper[4830]: I0227 17:41:13.718745 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eeed6188-9398-402f-a80b-e16d1d634cfd-config-data" (OuterVolumeSpecName: "config-data") pod "eeed6188-9398-402f-a80b-e16d1d634cfd" (UID: "eeed6188-9398-402f-a80b-e16d1d634cfd"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:41:13 crc kubenswrapper[4830]: I0227 17:41:13.722331 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bmvfg\" (UniqueName: \"kubernetes.io/projected/eeed6188-9398-402f-a80b-e16d1d634cfd-kube-api-access-bmvfg\") on node \"crc\" DevicePath \"\"" Feb 27 17:41:13 crc kubenswrapper[4830]: I0227 17:41:13.722360 4830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eeed6188-9398-402f-a80b-e16d1d634cfd-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 17:41:13 crc kubenswrapper[4830]: I0227 17:41:13.722369 4830 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eeed6188-9398-402f-a80b-e16d1d634cfd-logs\") on node \"crc\" DevicePath \"\"" Feb 27 17:41:13 crc kubenswrapper[4830]: I0227 17:41:13.722379 4830 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/eeed6188-9398-402f-a80b-e16d1d634cfd-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 27 17:41:13 crc kubenswrapper[4830]: I0227 17:41:13.722569 4830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eeed6188-9398-402f-a80b-e16d1d634cfd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 17:41:13 crc kubenswrapper[4830]: I0227 17:41:13.722596 4830 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/eeed6188-9398-402f-a80b-e16d1d634cfd-ceph\") on node \"crc\" DevicePath \"\"" Feb 27 17:41:13 crc kubenswrapper[4830]: I0227 17:41:13.722649 4830 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eeed6188-9398-402f-a80b-e16d1d634cfd-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 17:41:13 crc kubenswrapper[4830]: E0227 17:41:13.764524 4830 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536898-vrwjs" podUID="204eb1af-36ad-4de7-9da7-9a37fefd3026" Feb 27 17:41:13 crc kubenswrapper[4830]: I0227 17:41:13.912790 4830 generic.go:334] "Generic (PLEG): container finished" podID="eeed6188-9398-402f-a80b-e16d1d634cfd" containerID="00fb3fd3af8289aa4937e3aa351750718f17ccd436c71f75b3dca8afc49ec4e2" exitCode=0 Feb 27 17:41:13 crc kubenswrapper[4830]: I0227 17:41:13.914114 4830 generic.go:334] "Generic (PLEG): container finished" podID="eeed6188-9398-402f-a80b-e16d1d634cfd" containerID="2bf6e6fc9719673532fc7d4a837ebb6b91a729a1c16b54e1f8c96edafb9a46c5" exitCode=143 Feb 27 17:41:13 crc kubenswrapper[4830]: I0227 17:41:13.912897 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 27 17:41:13 crc kubenswrapper[4830]: I0227 17:41:13.912858 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"eeed6188-9398-402f-a80b-e16d1d634cfd","Type":"ContainerDied","Data":"00fb3fd3af8289aa4937e3aa351750718f17ccd436c71f75b3dca8afc49ec4e2"} Feb 27 17:41:13 crc kubenswrapper[4830]: I0227 17:41:13.914699 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"eeed6188-9398-402f-a80b-e16d1d634cfd","Type":"ContainerDied","Data":"2bf6e6fc9719673532fc7d4a837ebb6b91a729a1c16b54e1f8c96edafb9a46c5"} Feb 27 17:41:13 crc kubenswrapper[4830]: I0227 17:41:13.914809 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"eeed6188-9398-402f-a80b-e16d1d634cfd","Type":"ContainerDied","Data":"86fbcd53c665872442e4dececfa90af74c93d1189f7a0fea5f65573ddf82a6df"} Feb 27 17:41:13 crc kubenswrapper[4830]: I0227 17:41:13.914902 4830 scope.go:117] "RemoveContainer" 
containerID="00fb3fd3af8289aa4937e3aa351750718f17ccd436c71f75b3dca8afc49ec4e2" Feb 27 17:41:13 crc kubenswrapper[4830]: I0227 17:41:13.953711 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 27 17:41:13 crc kubenswrapper[4830]: I0227 17:41:13.961214 4830 scope.go:117] "RemoveContainer" containerID="2bf6e6fc9719673532fc7d4a837ebb6b91a729a1c16b54e1f8c96edafb9a46c5" Feb 27 17:41:13 crc kubenswrapper[4830]: I0227 17:41:13.963272 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 27 17:41:13 crc kubenswrapper[4830]: I0227 17:41:13.986924 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 27 17:41:13 crc kubenswrapper[4830]: E0227 17:41:13.987309 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eeed6188-9398-402f-a80b-e16d1d634cfd" containerName="glance-log" Feb 27 17:41:13 crc kubenswrapper[4830]: I0227 17:41:13.987320 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="eeed6188-9398-402f-a80b-e16d1d634cfd" containerName="glance-log" Feb 27 17:41:13 crc kubenswrapper[4830]: E0227 17:41:13.987354 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eeed6188-9398-402f-a80b-e16d1d634cfd" containerName="glance-httpd" Feb 27 17:41:13 crc kubenswrapper[4830]: I0227 17:41:13.987360 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="eeed6188-9398-402f-a80b-e16d1d634cfd" containerName="glance-httpd" Feb 27 17:41:13 crc kubenswrapper[4830]: I0227 17:41:13.987505 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="eeed6188-9398-402f-a80b-e16d1d634cfd" containerName="glance-log" Feb 27 17:41:13 crc kubenswrapper[4830]: I0227 17:41:13.987532 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="eeed6188-9398-402f-a80b-e16d1d634cfd" containerName="glance-httpd" Feb 27 17:41:13 crc kubenswrapper[4830]: I0227 17:41:13.988453 4830 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 27 17:41:14 crc kubenswrapper[4830]: I0227 17:41:13.997669 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 27 17:41:14 crc kubenswrapper[4830]: I0227 17:41:14.007540 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 27 17:41:14 crc kubenswrapper[4830]: I0227 17:41:14.033290 4830 scope.go:117] "RemoveContainer" containerID="00fb3fd3af8289aa4937e3aa351750718f17ccd436c71f75b3dca8afc49ec4e2" Feb 27 17:41:14 crc kubenswrapper[4830]: E0227 17:41:14.034739 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"00fb3fd3af8289aa4937e3aa351750718f17ccd436c71f75b3dca8afc49ec4e2\": container with ID starting with 00fb3fd3af8289aa4937e3aa351750718f17ccd436c71f75b3dca8afc49ec4e2 not found: ID does not exist" containerID="00fb3fd3af8289aa4937e3aa351750718f17ccd436c71f75b3dca8afc49ec4e2" Feb 27 17:41:14 crc kubenswrapper[4830]: I0227 17:41:14.034790 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00fb3fd3af8289aa4937e3aa351750718f17ccd436c71f75b3dca8afc49ec4e2"} err="failed to get container status \"00fb3fd3af8289aa4937e3aa351750718f17ccd436c71f75b3dca8afc49ec4e2\": rpc error: code = NotFound desc = could not find container \"00fb3fd3af8289aa4937e3aa351750718f17ccd436c71f75b3dca8afc49ec4e2\": container with ID starting with 00fb3fd3af8289aa4937e3aa351750718f17ccd436c71f75b3dca8afc49ec4e2 not found: ID does not exist" Feb 27 17:41:14 crc kubenswrapper[4830]: I0227 17:41:14.034822 4830 scope.go:117] "RemoveContainer" containerID="2bf6e6fc9719673532fc7d4a837ebb6b91a729a1c16b54e1f8c96edafb9a46c5" Feb 27 17:41:14 crc kubenswrapper[4830]: I0227 17:41:14.035129 4830 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/43c19785-15fd-46d8-bea3-2c6fbc7c8bf7-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"43c19785-15fd-46d8-bea3-2c6fbc7c8bf7\") " pod="openstack/glance-default-external-api-0" Feb 27 17:41:14 crc kubenswrapper[4830]: E0227 17:41:14.035178 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2bf6e6fc9719673532fc7d4a837ebb6b91a729a1c16b54e1f8c96edafb9a46c5\": container with ID starting with 2bf6e6fc9719673532fc7d4a837ebb6b91a729a1c16b54e1f8c96edafb9a46c5 not found: ID does not exist" containerID="2bf6e6fc9719673532fc7d4a837ebb6b91a729a1c16b54e1f8c96edafb9a46c5" Feb 27 17:41:14 crc kubenswrapper[4830]: I0227 17:41:14.035209 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2bf6e6fc9719673532fc7d4a837ebb6b91a729a1c16b54e1f8c96edafb9a46c5"} err="failed to get container status \"2bf6e6fc9719673532fc7d4a837ebb6b91a729a1c16b54e1f8c96edafb9a46c5\": rpc error: code = NotFound desc = could not find container \"2bf6e6fc9719673532fc7d4a837ebb6b91a729a1c16b54e1f8c96edafb9a46c5\": container with ID starting with 2bf6e6fc9719673532fc7d4a837ebb6b91a729a1c16b54e1f8c96edafb9a46c5 not found: ID does not exist" Feb 27 17:41:14 crc kubenswrapper[4830]: I0227 17:41:14.035231 4830 scope.go:117] "RemoveContainer" containerID="00fb3fd3af8289aa4937e3aa351750718f17ccd436c71f75b3dca8afc49ec4e2" Feb 27 17:41:14 crc kubenswrapper[4830]: I0227 17:41:14.035235 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzvht\" (UniqueName: \"kubernetes.io/projected/43c19785-15fd-46d8-bea3-2c6fbc7c8bf7-kube-api-access-nzvht\") pod \"glance-default-external-api-0\" (UID: \"43c19785-15fd-46d8-bea3-2c6fbc7c8bf7\") " pod="openstack/glance-default-external-api-0" Feb 27 17:41:14 crc 
kubenswrapper[4830]: I0227 17:41:14.035289 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43c19785-15fd-46d8-bea3-2c6fbc7c8bf7-config-data\") pod \"glance-default-external-api-0\" (UID: \"43c19785-15fd-46d8-bea3-2c6fbc7c8bf7\") " pod="openstack/glance-default-external-api-0" Feb 27 17:41:14 crc kubenswrapper[4830]: I0227 17:41:14.035367 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/43c19785-15fd-46d8-bea3-2c6fbc7c8bf7-ceph\") pod \"glance-default-external-api-0\" (UID: \"43c19785-15fd-46d8-bea3-2c6fbc7c8bf7\") " pod="openstack/glance-default-external-api-0" Feb 27 17:41:14 crc kubenswrapper[4830]: I0227 17:41:14.035388 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/43c19785-15fd-46d8-bea3-2c6fbc7c8bf7-scripts\") pod \"glance-default-external-api-0\" (UID: \"43c19785-15fd-46d8-bea3-2c6fbc7c8bf7\") " pod="openstack/glance-default-external-api-0" Feb 27 17:41:14 crc kubenswrapper[4830]: I0227 17:41:14.035547 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/43c19785-15fd-46d8-bea3-2c6fbc7c8bf7-logs\") pod \"glance-default-external-api-0\" (UID: \"43c19785-15fd-46d8-bea3-2c6fbc7c8bf7\") " pod="openstack/glance-default-external-api-0" Feb 27 17:41:14 crc kubenswrapper[4830]: I0227 17:41:14.035582 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43c19785-15fd-46d8-bea3-2c6fbc7c8bf7-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"43c19785-15fd-46d8-bea3-2c6fbc7c8bf7\") " pod="openstack/glance-default-external-api-0" Feb 27 17:41:14 crc 
kubenswrapper[4830]: I0227 17:41:14.036518 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00fb3fd3af8289aa4937e3aa351750718f17ccd436c71f75b3dca8afc49ec4e2"} err="failed to get container status \"00fb3fd3af8289aa4937e3aa351750718f17ccd436c71f75b3dca8afc49ec4e2\": rpc error: code = NotFound desc = could not find container \"00fb3fd3af8289aa4937e3aa351750718f17ccd436c71f75b3dca8afc49ec4e2\": container with ID starting with 00fb3fd3af8289aa4937e3aa351750718f17ccd436c71f75b3dca8afc49ec4e2 not found: ID does not exist" Feb 27 17:41:14 crc kubenswrapper[4830]: I0227 17:41:14.036581 4830 scope.go:117] "RemoveContainer" containerID="2bf6e6fc9719673532fc7d4a837ebb6b91a729a1c16b54e1f8c96edafb9a46c5" Feb 27 17:41:14 crc kubenswrapper[4830]: I0227 17:41:14.038622 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2bf6e6fc9719673532fc7d4a837ebb6b91a729a1c16b54e1f8c96edafb9a46c5"} err="failed to get container status \"2bf6e6fc9719673532fc7d4a837ebb6b91a729a1c16b54e1f8c96edafb9a46c5\": rpc error: code = NotFound desc = could not find container \"2bf6e6fc9719673532fc7d4a837ebb6b91a729a1c16b54e1f8c96edafb9a46c5\": container with ID starting with 2bf6e6fc9719673532fc7d4a837ebb6b91a729a1c16b54e1f8c96edafb9a46c5 not found: ID does not exist" Feb 27 17:41:14 crc kubenswrapper[4830]: I0227 17:41:14.136867 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/43c19785-15fd-46d8-bea3-2c6fbc7c8bf7-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"43c19785-15fd-46d8-bea3-2c6fbc7c8bf7\") " pod="openstack/glance-default-external-api-0" Feb 27 17:41:14 crc kubenswrapper[4830]: I0227 17:41:14.136959 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nzvht\" (UniqueName: \"kubernetes.io/projected/43c19785-15fd-46d8-bea3-2c6fbc7c8bf7-kube-api-access-nzvht\") 
pod \"glance-default-external-api-0\" (UID: \"43c19785-15fd-46d8-bea3-2c6fbc7c8bf7\") " pod="openstack/glance-default-external-api-0" Feb 27 17:41:14 crc kubenswrapper[4830]: I0227 17:41:14.137002 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43c19785-15fd-46d8-bea3-2c6fbc7c8bf7-config-data\") pod \"glance-default-external-api-0\" (UID: \"43c19785-15fd-46d8-bea3-2c6fbc7c8bf7\") " pod="openstack/glance-default-external-api-0" Feb 27 17:41:14 crc kubenswrapper[4830]: I0227 17:41:14.137055 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/43c19785-15fd-46d8-bea3-2c6fbc7c8bf7-ceph\") pod \"glance-default-external-api-0\" (UID: \"43c19785-15fd-46d8-bea3-2c6fbc7c8bf7\") " pod="openstack/glance-default-external-api-0" Feb 27 17:41:14 crc kubenswrapper[4830]: I0227 17:41:14.137084 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/43c19785-15fd-46d8-bea3-2c6fbc7c8bf7-scripts\") pod \"glance-default-external-api-0\" (UID: \"43c19785-15fd-46d8-bea3-2c6fbc7c8bf7\") " pod="openstack/glance-default-external-api-0" Feb 27 17:41:14 crc kubenswrapper[4830]: I0227 17:41:14.137153 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/43c19785-15fd-46d8-bea3-2c6fbc7c8bf7-logs\") pod \"glance-default-external-api-0\" (UID: \"43c19785-15fd-46d8-bea3-2c6fbc7c8bf7\") " pod="openstack/glance-default-external-api-0" Feb 27 17:41:14 crc kubenswrapper[4830]: I0227 17:41:14.137183 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43c19785-15fd-46d8-bea3-2c6fbc7c8bf7-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"43c19785-15fd-46d8-bea3-2c6fbc7c8bf7\") " 
pod="openstack/glance-default-external-api-0" Feb 27 17:41:14 crc kubenswrapper[4830]: I0227 17:41:14.137452 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/43c19785-15fd-46d8-bea3-2c6fbc7c8bf7-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"43c19785-15fd-46d8-bea3-2c6fbc7c8bf7\") " pod="openstack/glance-default-external-api-0" Feb 27 17:41:14 crc kubenswrapper[4830]: I0227 17:41:14.138617 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/43c19785-15fd-46d8-bea3-2c6fbc7c8bf7-logs\") pod \"glance-default-external-api-0\" (UID: \"43c19785-15fd-46d8-bea3-2c6fbc7c8bf7\") " pod="openstack/glance-default-external-api-0" Feb 27 17:41:14 crc kubenswrapper[4830]: I0227 17:41:14.144130 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43c19785-15fd-46d8-bea3-2c6fbc7c8bf7-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"43c19785-15fd-46d8-bea3-2c6fbc7c8bf7\") " pod="openstack/glance-default-external-api-0" Feb 27 17:41:14 crc kubenswrapper[4830]: I0227 17:41:14.144167 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/43c19785-15fd-46d8-bea3-2c6fbc7c8bf7-ceph\") pod \"glance-default-external-api-0\" (UID: \"43c19785-15fd-46d8-bea3-2c6fbc7c8bf7\") " pod="openstack/glance-default-external-api-0" Feb 27 17:41:14 crc kubenswrapper[4830]: I0227 17:41:14.144683 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43c19785-15fd-46d8-bea3-2c6fbc7c8bf7-config-data\") pod \"glance-default-external-api-0\" (UID: \"43c19785-15fd-46d8-bea3-2c6fbc7c8bf7\") " pod="openstack/glance-default-external-api-0" Feb 27 17:41:14 crc kubenswrapper[4830]: I0227 17:41:14.145353 4830 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/43c19785-15fd-46d8-bea3-2c6fbc7c8bf7-scripts\") pod \"glance-default-external-api-0\" (UID: \"43c19785-15fd-46d8-bea3-2c6fbc7c8bf7\") " pod="openstack/glance-default-external-api-0" Feb 27 17:41:14 crc kubenswrapper[4830]: I0227 17:41:14.157196 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nzvht\" (UniqueName: \"kubernetes.io/projected/43c19785-15fd-46d8-bea3-2c6fbc7c8bf7-kube-api-access-nzvht\") pod \"glance-default-external-api-0\" (UID: \"43c19785-15fd-46d8-bea3-2c6fbc7c8bf7\") " pod="openstack/glance-default-external-api-0" Feb 27 17:41:14 crc kubenswrapper[4830]: I0227 17:41:14.329460 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 27 17:41:14 crc kubenswrapper[4830]: I0227 17:41:14.783927 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eeed6188-9398-402f-a80b-e16d1d634cfd" path="/var/lib/kubelet/pods/eeed6188-9398-402f-a80b-e16d1d634cfd/volumes" Feb 27 17:41:14 crc kubenswrapper[4830]: I0227 17:41:14.926409 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="e1b7e3b1-b870-4000-8627-eff61f11aeeb" containerName="glance-log" containerID="cri-o://dd920ac26ad1978bd8f1d43253f8eee5c730be43c17d1952cae24a200f61d468" gracePeriod=30 Feb 27 17:41:14 crc kubenswrapper[4830]: I0227 17:41:14.926478 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="e1b7e3b1-b870-4000-8627-eff61f11aeeb" containerName="glance-httpd" containerID="cri-o://5e0f201ca150662efb93a94f4268c53997cdc9dd0dcb59d1c7c4c1cc51fb5617" gracePeriod=30 Feb 27 17:41:14 crc kubenswrapper[4830]: I0227 17:41:14.934967 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/glance-default-external-api-0"] Feb 27 17:41:15 crc kubenswrapper[4830]: I0227 17:41:15.946089 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"43c19785-15fd-46d8-bea3-2c6fbc7c8bf7","Type":"ContainerStarted","Data":"08e5b77d43fbd61b42463151c48883dabac9bf64fa9819a06275e18cc611c769"} Feb 27 17:41:15 crc kubenswrapper[4830]: I0227 17:41:15.947095 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"43c19785-15fd-46d8-bea3-2c6fbc7c8bf7","Type":"ContainerStarted","Data":"151a9474431af495751426435a282175f4c701117043f07bf2e36495175f058e"} Feb 27 17:41:15 crc kubenswrapper[4830]: I0227 17:41:15.953088 4830 generic.go:334] "Generic (PLEG): container finished" podID="e1b7e3b1-b870-4000-8627-eff61f11aeeb" containerID="5e0f201ca150662efb93a94f4268c53997cdc9dd0dcb59d1c7c4c1cc51fb5617" exitCode=0 Feb 27 17:41:15 crc kubenswrapper[4830]: I0227 17:41:15.953130 4830 generic.go:334] "Generic (PLEG): container finished" podID="e1b7e3b1-b870-4000-8627-eff61f11aeeb" containerID="dd920ac26ad1978bd8f1d43253f8eee5c730be43c17d1952cae24a200f61d468" exitCode=143 Feb 27 17:41:15 crc kubenswrapper[4830]: I0227 17:41:15.953155 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e1b7e3b1-b870-4000-8627-eff61f11aeeb","Type":"ContainerDied","Data":"5e0f201ca150662efb93a94f4268c53997cdc9dd0dcb59d1c7c4c1cc51fb5617"} Feb 27 17:41:15 crc kubenswrapper[4830]: I0227 17:41:15.953260 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e1b7e3b1-b870-4000-8627-eff61f11aeeb","Type":"ContainerDied","Data":"dd920ac26ad1978bd8f1d43253f8eee5c730be43c17d1952cae24a200f61d468"} Feb 27 17:41:15 crc kubenswrapper[4830]: I0227 17:41:15.953273 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"e1b7e3b1-b870-4000-8627-eff61f11aeeb","Type":"ContainerDied","Data":"03130b243511d55a9150883f87c01254d6e869f894eb093e35e06cf597b229bf"} Feb 27 17:41:15 crc kubenswrapper[4830]: I0227 17:41:15.953296 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="03130b243511d55a9150883f87c01254d6e869f894eb093e35e06cf597b229bf" Feb 27 17:41:15 crc kubenswrapper[4830]: I0227 17:41:15.989785 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 27 17:41:16 crc kubenswrapper[4830]: I0227 17:41:16.080643 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e1b7e3b1-b870-4000-8627-eff61f11aeeb-httpd-run\") pod \"e1b7e3b1-b870-4000-8627-eff61f11aeeb\" (UID: \"e1b7e3b1-b870-4000-8627-eff61f11aeeb\") " Feb 27 17:41:16 crc kubenswrapper[4830]: I0227 17:41:16.081206 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1b7e3b1-b870-4000-8627-eff61f11aeeb-config-data\") pod \"e1b7e3b1-b870-4000-8627-eff61f11aeeb\" (UID: \"e1b7e3b1-b870-4000-8627-eff61f11aeeb\") " Feb 27 17:41:16 crc kubenswrapper[4830]: I0227 17:41:16.081397 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e1b7e3b1-b870-4000-8627-eff61f11aeeb-scripts\") pod \"e1b7e3b1-b870-4000-8627-eff61f11aeeb\" (UID: \"e1b7e3b1-b870-4000-8627-eff61f11aeeb\") " Feb 27 17:41:16 crc kubenswrapper[4830]: I0227 17:41:16.081302 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e1b7e3b1-b870-4000-8627-eff61f11aeeb-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "e1b7e3b1-b870-4000-8627-eff61f11aeeb" (UID: "e1b7e3b1-b870-4000-8627-eff61f11aeeb"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 17:41:16 crc kubenswrapper[4830]: I0227 17:41:16.082415 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/e1b7e3b1-b870-4000-8627-eff61f11aeeb-ceph\") pod \"e1b7e3b1-b870-4000-8627-eff61f11aeeb\" (UID: \"e1b7e3b1-b870-4000-8627-eff61f11aeeb\") " Feb 27 17:41:16 crc kubenswrapper[4830]: I0227 17:41:16.082560 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e1b7e3b1-b870-4000-8627-eff61f11aeeb-logs\") pod \"e1b7e3b1-b870-4000-8627-eff61f11aeeb\" (UID: \"e1b7e3b1-b870-4000-8627-eff61f11aeeb\") " Feb 27 17:41:16 crc kubenswrapper[4830]: I0227 17:41:16.082644 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8f8lb\" (UniqueName: \"kubernetes.io/projected/e1b7e3b1-b870-4000-8627-eff61f11aeeb-kube-api-access-8f8lb\") pod \"e1b7e3b1-b870-4000-8627-eff61f11aeeb\" (UID: \"e1b7e3b1-b870-4000-8627-eff61f11aeeb\") " Feb 27 17:41:16 crc kubenswrapper[4830]: I0227 17:41:16.082682 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1b7e3b1-b870-4000-8627-eff61f11aeeb-combined-ca-bundle\") pod \"e1b7e3b1-b870-4000-8627-eff61f11aeeb\" (UID: \"e1b7e3b1-b870-4000-8627-eff61f11aeeb\") " Feb 27 17:41:16 crc kubenswrapper[4830]: I0227 17:41:16.083137 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e1b7e3b1-b870-4000-8627-eff61f11aeeb-logs" (OuterVolumeSpecName: "logs") pod "e1b7e3b1-b870-4000-8627-eff61f11aeeb" (UID: "e1b7e3b1-b870-4000-8627-eff61f11aeeb"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 17:41:16 crc kubenswrapper[4830]: I0227 17:41:16.083755 4830 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e1b7e3b1-b870-4000-8627-eff61f11aeeb-logs\") on node \"crc\" DevicePath \"\"" Feb 27 17:41:16 crc kubenswrapper[4830]: I0227 17:41:16.083777 4830 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e1b7e3b1-b870-4000-8627-eff61f11aeeb-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 27 17:41:16 crc kubenswrapper[4830]: I0227 17:41:16.087445 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1b7e3b1-b870-4000-8627-eff61f11aeeb-scripts" (OuterVolumeSpecName: "scripts") pod "e1b7e3b1-b870-4000-8627-eff61f11aeeb" (UID: "e1b7e3b1-b870-4000-8627-eff61f11aeeb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:41:16 crc kubenswrapper[4830]: I0227 17:41:16.088251 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1b7e3b1-b870-4000-8627-eff61f11aeeb-ceph" (OuterVolumeSpecName: "ceph") pod "e1b7e3b1-b870-4000-8627-eff61f11aeeb" (UID: "e1b7e3b1-b870-4000-8627-eff61f11aeeb"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:41:16 crc kubenswrapper[4830]: I0227 17:41:16.094396 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1b7e3b1-b870-4000-8627-eff61f11aeeb-kube-api-access-8f8lb" (OuterVolumeSpecName: "kube-api-access-8f8lb") pod "e1b7e3b1-b870-4000-8627-eff61f11aeeb" (UID: "e1b7e3b1-b870-4000-8627-eff61f11aeeb"). InnerVolumeSpecName "kube-api-access-8f8lb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:41:16 crc kubenswrapper[4830]: I0227 17:41:16.111863 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1b7e3b1-b870-4000-8627-eff61f11aeeb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e1b7e3b1-b870-4000-8627-eff61f11aeeb" (UID: "e1b7e3b1-b870-4000-8627-eff61f11aeeb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:41:16 crc kubenswrapper[4830]: I0227 17:41:16.144769 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1b7e3b1-b870-4000-8627-eff61f11aeeb-config-data" (OuterVolumeSpecName: "config-data") pod "e1b7e3b1-b870-4000-8627-eff61f11aeeb" (UID: "e1b7e3b1-b870-4000-8627-eff61f11aeeb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:41:16 crc kubenswrapper[4830]: I0227 17:41:16.185571 4830 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/e1b7e3b1-b870-4000-8627-eff61f11aeeb-ceph\") on node \"crc\" DevicePath \"\"" Feb 27 17:41:16 crc kubenswrapper[4830]: I0227 17:41:16.185601 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8f8lb\" (UniqueName: \"kubernetes.io/projected/e1b7e3b1-b870-4000-8627-eff61f11aeeb-kube-api-access-8f8lb\") on node \"crc\" DevicePath \"\"" Feb 27 17:41:16 crc kubenswrapper[4830]: I0227 17:41:16.185611 4830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1b7e3b1-b870-4000-8627-eff61f11aeeb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 17:41:16 crc kubenswrapper[4830]: I0227 17:41:16.185620 4830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1b7e3b1-b870-4000-8627-eff61f11aeeb-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 
17:41:16 crc kubenswrapper[4830]: I0227 17:41:16.185628 4830 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e1b7e3b1-b870-4000-8627-eff61f11aeeb-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 17:41:16 crc kubenswrapper[4830]: I0227 17:41:16.970688 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 27 17:41:16 crc kubenswrapper[4830]: I0227 17:41:16.970668 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"43c19785-15fd-46d8-bea3-2c6fbc7c8bf7","Type":"ContainerStarted","Data":"945f9961f41cab34c1f2ad257ca5c49f7ab25490a9bcff8ea8570507d7a41270"} Feb 27 17:41:17 crc kubenswrapper[4830]: I0227 17:41:17.012257 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.012225301 podStartE2EDuration="4.012225301s" podCreationTimestamp="2026-02-27 17:41:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 17:41:17.007837866 +0000 UTC m=+5673.097110359" watchObservedRunningTime="2026-02-27 17:41:17.012225301 +0000 UTC m=+5673.101497794" Feb 27 17:41:17 crc kubenswrapper[4830]: I0227 17:41:17.048071 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 27 17:41:17 crc kubenswrapper[4830]: I0227 17:41:17.058172 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 27 17:41:17 crc kubenswrapper[4830]: I0227 17:41:17.083817 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 27 17:41:17 crc kubenswrapper[4830]: E0227 17:41:17.084489 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1b7e3b1-b870-4000-8627-eff61f11aeeb" 
containerName="glance-log" Feb 27 17:41:17 crc kubenswrapper[4830]: I0227 17:41:17.084524 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1b7e3b1-b870-4000-8627-eff61f11aeeb" containerName="glance-log" Feb 27 17:41:17 crc kubenswrapper[4830]: E0227 17:41:17.084571 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1b7e3b1-b870-4000-8627-eff61f11aeeb" containerName="glance-httpd" Feb 27 17:41:17 crc kubenswrapper[4830]: I0227 17:41:17.084585 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1b7e3b1-b870-4000-8627-eff61f11aeeb" containerName="glance-httpd" Feb 27 17:41:17 crc kubenswrapper[4830]: I0227 17:41:17.084899 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1b7e3b1-b870-4000-8627-eff61f11aeeb" containerName="glance-log" Feb 27 17:41:17 crc kubenswrapper[4830]: I0227 17:41:17.084967 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1b7e3b1-b870-4000-8627-eff61f11aeeb" containerName="glance-httpd" Feb 27 17:41:17 crc kubenswrapper[4830]: I0227 17:41:17.092420 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 27 17:41:17 crc kubenswrapper[4830]: I0227 17:41:17.095369 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 27 17:41:17 crc kubenswrapper[4830]: I0227 17:41:17.099415 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 27 17:41:17 crc kubenswrapper[4830]: I0227 17:41:17.112341 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28170d63-b3d4-4887-bb9d-e17e979cec89-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"28170d63-b3d4-4887-bb9d-e17e979cec89\") " pod="openstack/glance-default-internal-api-0" Feb 27 17:41:17 crc kubenswrapper[4830]: I0227 17:41:17.112415 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/28170d63-b3d4-4887-bb9d-e17e979cec89-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"28170d63-b3d4-4887-bb9d-e17e979cec89\") " pod="openstack/glance-default-internal-api-0" Feb 27 17:41:17 crc kubenswrapper[4830]: I0227 17:41:17.112541 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/28170d63-b3d4-4887-bb9d-e17e979cec89-logs\") pod \"glance-default-internal-api-0\" (UID: \"28170d63-b3d4-4887-bb9d-e17e979cec89\") " pod="openstack/glance-default-internal-api-0" Feb 27 17:41:17 crc kubenswrapper[4830]: I0227 17:41:17.112578 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/28170d63-b3d4-4887-bb9d-e17e979cec89-ceph\") pod \"glance-default-internal-api-0\" (UID: \"28170d63-b3d4-4887-bb9d-e17e979cec89\") " 
pod="openstack/glance-default-internal-api-0" Feb 27 17:41:17 crc kubenswrapper[4830]: I0227 17:41:17.112601 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28170d63-b3d4-4887-bb9d-e17e979cec89-config-data\") pod \"glance-default-internal-api-0\" (UID: \"28170d63-b3d4-4887-bb9d-e17e979cec89\") " pod="openstack/glance-default-internal-api-0" Feb 27 17:41:17 crc kubenswrapper[4830]: I0227 17:41:17.112642 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/28170d63-b3d4-4887-bb9d-e17e979cec89-scripts\") pod \"glance-default-internal-api-0\" (UID: \"28170d63-b3d4-4887-bb9d-e17e979cec89\") " pod="openstack/glance-default-internal-api-0" Feb 27 17:41:17 crc kubenswrapper[4830]: I0227 17:41:17.112668 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qs52k\" (UniqueName: \"kubernetes.io/projected/28170d63-b3d4-4887-bb9d-e17e979cec89-kube-api-access-qs52k\") pod \"glance-default-internal-api-0\" (UID: \"28170d63-b3d4-4887-bb9d-e17e979cec89\") " pod="openstack/glance-default-internal-api-0" Feb 27 17:41:17 crc kubenswrapper[4830]: I0227 17:41:17.214730 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28170d63-b3d4-4887-bb9d-e17e979cec89-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"28170d63-b3d4-4887-bb9d-e17e979cec89\") " pod="openstack/glance-default-internal-api-0" Feb 27 17:41:17 crc kubenswrapper[4830]: I0227 17:41:17.214842 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/28170d63-b3d4-4887-bb9d-e17e979cec89-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"28170d63-b3d4-4887-bb9d-e17e979cec89\") " 
pod="openstack/glance-default-internal-api-0" Feb 27 17:41:17 crc kubenswrapper[4830]: I0227 17:41:17.214928 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/28170d63-b3d4-4887-bb9d-e17e979cec89-logs\") pod \"glance-default-internal-api-0\" (UID: \"28170d63-b3d4-4887-bb9d-e17e979cec89\") " pod="openstack/glance-default-internal-api-0" Feb 27 17:41:17 crc kubenswrapper[4830]: I0227 17:41:17.214971 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/28170d63-b3d4-4887-bb9d-e17e979cec89-ceph\") pod \"glance-default-internal-api-0\" (UID: \"28170d63-b3d4-4887-bb9d-e17e979cec89\") " pod="openstack/glance-default-internal-api-0" Feb 27 17:41:17 crc kubenswrapper[4830]: I0227 17:41:17.214993 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28170d63-b3d4-4887-bb9d-e17e979cec89-config-data\") pod \"glance-default-internal-api-0\" (UID: \"28170d63-b3d4-4887-bb9d-e17e979cec89\") " pod="openstack/glance-default-internal-api-0" Feb 27 17:41:17 crc kubenswrapper[4830]: I0227 17:41:17.215031 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/28170d63-b3d4-4887-bb9d-e17e979cec89-scripts\") pod \"glance-default-internal-api-0\" (UID: \"28170d63-b3d4-4887-bb9d-e17e979cec89\") " pod="openstack/glance-default-internal-api-0" Feb 27 17:41:17 crc kubenswrapper[4830]: I0227 17:41:17.215056 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qs52k\" (UniqueName: \"kubernetes.io/projected/28170d63-b3d4-4887-bb9d-e17e979cec89-kube-api-access-qs52k\") pod \"glance-default-internal-api-0\" (UID: \"28170d63-b3d4-4887-bb9d-e17e979cec89\") " pod="openstack/glance-default-internal-api-0" Feb 27 17:41:17 crc kubenswrapper[4830]: I0227 
17:41:17.216743 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/28170d63-b3d4-4887-bb9d-e17e979cec89-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"28170d63-b3d4-4887-bb9d-e17e979cec89\") " pod="openstack/glance-default-internal-api-0" Feb 27 17:41:17 crc kubenswrapper[4830]: I0227 17:41:17.217967 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/28170d63-b3d4-4887-bb9d-e17e979cec89-logs\") pod \"glance-default-internal-api-0\" (UID: \"28170d63-b3d4-4887-bb9d-e17e979cec89\") " pod="openstack/glance-default-internal-api-0" Feb 27 17:41:17 crc kubenswrapper[4830]: I0227 17:41:17.222353 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/28170d63-b3d4-4887-bb9d-e17e979cec89-ceph\") pod \"glance-default-internal-api-0\" (UID: \"28170d63-b3d4-4887-bb9d-e17e979cec89\") " pod="openstack/glance-default-internal-api-0" Feb 27 17:41:17 crc kubenswrapper[4830]: I0227 17:41:17.226579 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/28170d63-b3d4-4887-bb9d-e17e979cec89-scripts\") pod \"glance-default-internal-api-0\" (UID: \"28170d63-b3d4-4887-bb9d-e17e979cec89\") " pod="openstack/glance-default-internal-api-0" Feb 27 17:41:17 crc kubenswrapper[4830]: I0227 17:41:17.227704 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28170d63-b3d4-4887-bb9d-e17e979cec89-config-data\") pod \"glance-default-internal-api-0\" (UID: \"28170d63-b3d4-4887-bb9d-e17e979cec89\") " pod="openstack/glance-default-internal-api-0" Feb 27 17:41:17 crc kubenswrapper[4830]: I0227 17:41:17.228026 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/28170d63-b3d4-4887-bb9d-e17e979cec89-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"28170d63-b3d4-4887-bb9d-e17e979cec89\") " pod="openstack/glance-default-internal-api-0" Feb 27 17:41:17 crc kubenswrapper[4830]: I0227 17:41:17.234354 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qs52k\" (UniqueName: \"kubernetes.io/projected/28170d63-b3d4-4887-bb9d-e17e979cec89-kube-api-access-qs52k\") pod \"glance-default-internal-api-0\" (UID: \"28170d63-b3d4-4887-bb9d-e17e979cec89\") " pod="openstack/glance-default-internal-api-0" Feb 27 17:41:17 crc kubenswrapper[4830]: I0227 17:41:17.436924 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 27 17:41:18 crc kubenswrapper[4830]: I0227 17:41:18.036413 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 27 17:41:18 crc kubenswrapper[4830]: W0227 17:41:18.043667 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod28170d63_b3d4_4887_bb9d_e17e979cec89.slice/crio-f364dc41e60262ec252156f22bf12b859401f6da1bf6a642ec2dd4dc45e9640c WatchSource:0}: Error finding container f364dc41e60262ec252156f22bf12b859401f6da1bf6a642ec2dd4dc45e9640c: Status 404 returned error can't find the container with id f364dc41e60262ec252156f22bf12b859401f6da1bf6a642ec2dd4dc45e9640c Feb 27 17:41:18 crc kubenswrapper[4830]: I0227 17:41:18.786581 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1b7e3b1-b870-4000-8627-eff61f11aeeb" path="/var/lib/kubelet/pods/e1b7e3b1-b870-4000-8627-eff61f11aeeb/volumes" Feb 27 17:41:18 crc kubenswrapper[4830]: I0227 17:41:18.996841 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"28170d63-b3d4-4887-bb9d-e17e979cec89","Type":"ContainerStarted","Data":"dd4e3c74774bfde30e50e4455cca036b74ea59298b988394eeaeb19a9e5cafcf"} Feb 27 17:41:18 crc kubenswrapper[4830]: I0227 17:41:18.996905 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"28170d63-b3d4-4887-bb9d-e17e979cec89","Type":"ContainerStarted","Data":"f364dc41e60262ec252156f22bf12b859401f6da1bf6a642ec2dd4dc45e9640c"} Feb 27 17:41:20 crc kubenswrapper[4830]: I0227 17:41:20.012158 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"28170d63-b3d4-4887-bb9d-e17e979cec89","Type":"ContainerStarted","Data":"fa61205fa6454ae2809f5029a3713725201018ef2bb09a6eea256357d98e99cd"} Feb 27 17:41:20 crc kubenswrapper[4830]: I0227 17:41:20.053909 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.053873419 podStartE2EDuration="3.053873419s" podCreationTimestamp="2026-02-27 17:41:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 17:41:20.038335954 +0000 UTC m=+5676.127608447" watchObservedRunningTime="2026-02-27 17:41:20.053873419 +0000 UTC m=+5676.143145922" Feb 27 17:41:20 crc kubenswrapper[4830]: I0227 17:41:20.637266 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-9d796c65c-w27f9" Feb 27 17:41:20 crc kubenswrapper[4830]: I0227 17:41:20.735288 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d97fd78cc-58qxt"] Feb 27 17:41:20 crc kubenswrapper[4830]: I0227 17:41:20.735572 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6d97fd78cc-58qxt" podUID="0779d5aa-90c7-4495-b109-f57586a59f70" containerName="dnsmasq-dns" 
containerID="cri-o://43e35f4517a7f0252050c0fcc312afd2d6c8e1bc5a9f2ff417ec31f1c34a51f9" gracePeriod=10 Feb 27 17:41:21 crc kubenswrapper[4830]: I0227 17:41:21.029006 4830 generic.go:334] "Generic (PLEG): container finished" podID="0779d5aa-90c7-4495-b109-f57586a59f70" containerID="43e35f4517a7f0252050c0fcc312afd2d6c8e1bc5a9f2ff417ec31f1c34a51f9" exitCode=0 Feb 27 17:41:21 crc kubenswrapper[4830]: I0227 17:41:21.030175 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d97fd78cc-58qxt" event={"ID":"0779d5aa-90c7-4495-b109-f57586a59f70","Type":"ContainerDied","Data":"43e35f4517a7f0252050c0fcc312afd2d6c8e1bc5a9f2ff417ec31f1c34a51f9"} Feb 27 17:41:21 crc kubenswrapper[4830]: I0227 17:41:21.210127 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d97fd78cc-58qxt" Feb 27 17:41:21 crc kubenswrapper[4830]: I0227 17:41:21.216388 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0779d5aa-90c7-4495-b109-f57586a59f70-ovsdbserver-nb\") pod \"0779d5aa-90c7-4495-b109-f57586a59f70\" (UID: \"0779d5aa-90c7-4495-b109-f57586a59f70\") " Feb 27 17:41:21 crc kubenswrapper[4830]: I0227 17:41:21.216622 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0779d5aa-90c7-4495-b109-f57586a59f70-config\") pod \"0779d5aa-90c7-4495-b109-f57586a59f70\" (UID: \"0779d5aa-90c7-4495-b109-f57586a59f70\") " Feb 27 17:41:21 crc kubenswrapper[4830]: I0227 17:41:21.216666 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0779d5aa-90c7-4495-b109-f57586a59f70-dns-svc\") pod \"0779d5aa-90c7-4495-b109-f57586a59f70\" (UID: \"0779d5aa-90c7-4495-b109-f57586a59f70\") " Feb 27 17:41:21 crc kubenswrapper[4830]: I0227 17:41:21.216738 4830 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-7tq97\" (UniqueName: \"kubernetes.io/projected/0779d5aa-90c7-4495-b109-f57586a59f70-kube-api-access-7tq97\") pod \"0779d5aa-90c7-4495-b109-f57586a59f70\" (UID: \"0779d5aa-90c7-4495-b109-f57586a59f70\") " Feb 27 17:41:21 crc kubenswrapper[4830]: I0227 17:41:21.216776 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0779d5aa-90c7-4495-b109-f57586a59f70-ovsdbserver-sb\") pod \"0779d5aa-90c7-4495-b109-f57586a59f70\" (UID: \"0779d5aa-90c7-4495-b109-f57586a59f70\") " Feb 27 17:41:21 crc kubenswrapper[4830]: I0227 17:41:21.223508 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0779d5aa-90c7-4495-b109-f57586a59f70-kube-api-access-7tq97" (OuterVolumeSpecName: "kube-api-access-7tq97") pod "0779d5aa-90c7-4495-b109-f57586a59f70" (UID: "0779d5aa-90c7-4495-b109-f57586a59f70"). InnerVolumeSpecName "kube-api-access-7tq97". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:41:21 crc kubenswrapper[4830]: I0227 17:41:21.291396 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0779d5aa-90c7-4495-b109-f57586a59f70-config" (OuterVolumeSpecName: "config") pod "0779d5aa-90c7-4495-b109-f57586a59f70" (UID: "0779d5aa-90c7-4495-b109-f57586a59f70"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:41:21 crc kubenswrapper[4830]: I0227 17:41:21.292294 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0779d5aa-90c7-4495-b109-f57586a59f70-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "0779d5aa-90c7-4495-b109-f57586a59f70" (UID: "0779d5aa-90c7-4495-b109-f57586a59f70"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:41:21 crc kubenswrapper[4830]: I0227 17:41:21.308488 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0779d5aa-90c7-4495-b109-f57586a59f70-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0779d5aa-90c7-4495-b109-f57586a59f70" (UID: "0779d5aa-90c7-4495-b109-f57586a59f70"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:41:21 crc kubenswrapper[4830]: I0227 17:41:21.309046 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0779d5aa-90c7-4495-b109-f57586a59f70-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "0779d5aa-90c7-4495-b109-f57586a59f70" (UID: "0779d5aa-90c7-4495-b109-f57586a59f70"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:41:21 crc kubenswrapper[4830]: I0227 17:41:21.319009 4830 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0779d5aa-90c7-4495-b109-f57586a59f70-config\") on node \"crc\" DevicePath \"\"" Feb 27 17:41:21 crc kubenswrapper[4830]: I0227 17:41:21.319038 4830 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0779d5aa-90c7-4495-b109-f57586a59f70-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 27 17:41:21 crc kubenswrapper[4830]: I0227 17:41:21.319052 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7tq97\" (UniqueName: \"kubernetes.io/projected/0779d5aa-90c7-4495-b109-f57586a59f70-kube-api-access-7tq97\") on node \"crc\" DevicePath \"\"" Feb 27 17:41:21 crc kubenswrapper[4830]: I0227 17:41:21.319068 4830 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0779d5aa-90c7-4495-b109-f57586a59f70-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 27 17:41:21 crc 
kubenswrapper[4830]: I0227 17:41:21.319078 4830 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0779d5aa-90c7-4495-b109-f57586a59f70-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 27 17:41:21 crc kubenswrapper[4830]: E0227 17:41:21.765637 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-gvnvz" podUID="f1c73a78-1e95-4481-a273-ba7e3b5a127c" Feb 27 17:41:22 crc kubenswrapper[4830]: I0227 17:41:22.041821 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d97fd78cc-58qxt" event={"ID":"0779d5aa-90c7-4495-b109-f57586a59f70","Type":"ContainerDied","Data":"a39bd8c8cbe15eaa1a33cb85b4e1c790a016b9d5bcbde19e5d9a50852c446b4a"} Feb 27 17:41:22 crc kubenswrapper[4830]: I0227 17:41:22.042573 4830 scope.go:117] "RemoveContainer" containerID="43e35f4517a7f0252050c0fcc312afd2d6c8e1bc5a9f2ff417ec31f1c34a51f9" Feb 27 17:41:22 crc kubenswrapper[4830]: I0227 17:41:22.041915 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6d97fd78cc-58qxt" Feb 27 17:41:22 crc kubenswrapper[4830]: I0227 17:41:22.071048 4830 scope.go:117] "RemoveContainer" containerID="3d1ba307e4f49e28ff6a625b72b1b7ddb45b91af5cb3869cde497f1824680d24" Feb 27 17:41:22 crc kubenswrapper[4830]: I0227 17:41:22.093108 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d97fd78cc-58qxt"] Feb 27 17:41:22 crc kubenswrapper[4830]: I0227 17:41:22.101480 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6d97fd78cc-58qxt"] Feb 27 17:41:22 crc kubenswrapper[4830]: I0227 17:41:22.780371 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0779d5aa-90c7-4495-b109-f57586a59f70" path="/var/lib/kubelet/pods/0779d5aa-90c7-4495-b109-f57586a59f70/volumes" Feb 27 17:41:24 crc kubenswrapper[4830]: I0227 17:41:24.330563 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 27 17:41:24 crc kubenswrapper[4830]: I0227 17:41:24.331115 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 27 17:41:24 crc kubenswrapper[4830]: I0227 17:41:24.381655 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 27 17:41:24 crc kubenswrapper[4830]: I0227 17:41:24.405643 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 27 17:41:24 crc kubenswrapper[4830]: E0227 17:41:24.777045 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536898-vrwjs" podUID="204eb1af-36ad-4de7-9da7-9a37fefd3026" Feb 27 17:41:25 crc kubenswrapper[4830]: I0227 17:41:25.077421 4830 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 27 17:41:25 crc kubenswrapper[4830]: I0227 17:41:25.077520 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 27 17:41:27 crc kubenswrapper[4830]: I0227 17:41:27.180314 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 27 17:41:27 crc kubenswrapper[4830]: I0227 17:41:27.185370 4830 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 27 17:41:27 crc kubenswrapper[4830]: I0227 17:41:27.193536 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 27 17:41:27 crc kubenswrapper[4830]: I0227 17:41:27.438140 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 27 17:41:27 crc kubenswrapper[4830]: I0227 17:41:27.438183 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 27 17:41:27 crc kubenswrapper[4830]: I0227 17:41:27.478520 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 27 17:41:27 crc kubenswrapper[4830]: I0227 17:41:27.484454 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 27 17:41:28 crc kubenswrapper[4830]: I0227 17:41:28.136922 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 27 17:41:28 crc kubenswrapper[4830]: I0227 17:41:28.137036 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 27 17:41:29 crc kubenswrapper[4830]: I0227 17:41:29.949727 4830 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 27 17:41:29 crc kubenswrapper[4830]: I0227 17:41:29.990838 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 27 17:41:33 crc kubenswrapper[4830]: I0227 17:41:33.160812 4830 patch_prober.go:28] interesting pod/machine-config-daemon-2tv5v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 17:41:33 crc kubenswrapper[4830]: I0227 17:41:33.161672 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 17:41:34 crc kubenswrapper[4830]: E0227 17:41:34.774168 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-gvnvz" podUID="f1c73a78-1e95-4481-a273-ba7e3b5a127c" Feb 27 17:41:36 crc kubenswrapper[4830]: I0227 17:41:36.189911 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-9pfvn"] Feb 27 17:41:36 crc kubenswrapper[4830]: E0227 17:41:36.190965 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0779d5aa-90c7-4495-b109-f57586a59f70" containerName="init" Feb 27 17:41:36 crc kubenswrapper[4830]: I0227 17:41:36.190987 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="0779d5aa-90c7-4495-b109-f57586a59f70" containerName="init" Feb 27 17:41:36 crc kubenswrapper[4830]: E0227 17:41:36.191022 
4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0779d5aa-90c7-4495-b109-f57586a59f70" containerName="dnsmasq-dns" Feb 27 17:41:36 crc kubenswrapper[4830]: I0227 17:41:36.191032 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="0779d5aa-90c7-4495-b109-f57586a59f70" containerName="dnsmasq-dns" Feb 27 17:41:36 crc kubenswrapper[4830]: I0227 17:41:36.191324 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="0779d5aa-90c7-4495-b109-f57586a59f70" containerName="dnsmasq-dns" Feb 27 17:41:36 crc kubenswrapper[4830]: I0227 17:41:36.192436 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-9pfvn" Feb 27 17:41:36 crc kubenswrapper[4830]: I0227 17:41:36.214186 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-9pfvn"] Feb 27 17:41:36 crc kubenswrapper[4830]: I0227 17:41:36.280100 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-a9e5-account-create-update-8sbl6"] Feb 27 17:41:36 crc kubenswrapper[4830]: I0227 17:41:36.282256 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-a9e5-account-create-update-8sbl6" Feb 27 17:41:36 crc kubenswrapper[4830]: I0227 17:41:36.290457 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Feb 27 17:41:36 crc kubenswrapper[4830]: I0227 17:41:36.299932 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-a9e5-account-create-update-8sbl6"] Feb 27 17:41:36 crc kubenswrapper[4830]: I0227 17:41:36.308113 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2nm77\" (UniqueName: \"kubernetes.io/projected/999b0c09-f55e-4f61-b7dd-71580d4003bd-kube-api-access-2nm77\") pod \"placement-db-create-9pfvn\" (UID: \"999b0c09-f55e-4f61-b7dd-71580d4003bd\") " pod="openstack/placement-db-create-9pfvn" Feb 27 17:41:36 crc kubenswrapper[4830]: I0227 17:41:36.308292 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/999b0c09-f55e-4f61-b7dd-71580d4003bd-operator-scripts\") pod \"placement-db-create-9pfvn\" (UID: \"999b0c09-f55e-4f61-b7dd-71580d4003bd\") " pod="openstack/placement-db-create-9pfvn" Feb 27 17:41:36 crc kubenswrapper[4830]: I0227 17:41:36.410473 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plc6m\" (UniqueName: \"kubernetes.io/projected/40ca4866-696f-4bcf-81ca-b7e20a20faa0-kube-api-access-plc6m\") pod \"placement-a9e5-account-create-update-8sbl6\" (UID: \"40ca4866-696f-4bcf-81ca-b7e20a20faa0\") " pod="openstack/placement-a9e5-account-create-update-8sbl6" Feb 27 17:41:36 crc kubenswrapper[4830]: I0227 17:41:36.410541 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2nm77\" (UniqueName: \"kubernetes.io/projected/999b0c09-f55e-4f61-b7dd-71580d4003bd-kube-api-access-2nm77\") pod 
\"placement-db-create-9pfvn\" (UID: \"999b0c09-f55e-4f61-b7dd-71580d4003bd\") " pod="openstack/placement-db-create-9pfvn" Feb 27 17:41:36 crc kubenswrapper[4830]: I0227 17:41:36.410574 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/40ca4866-696f-4bcf-81ca-b7e20a20faa0-operator-scripts\") pod \"placement-a9e5-account-create-update-8sbl6\" (UID: \"40ca4866-696f-4bcf-81ca-b7e20a20faa0\") " pod="openstack/placement-a9e5-account-create-update-8sbl6" Feb 27 17:41:36 crc kubenswrapper[4830]: I0227 17:41:36.411059 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/999b0c09-f55e-4f61-b7dd-71580d4003bd-operator-scripts\") pod \"placement-db-create-9pfvn\" (UID: \"999b0c09-f55e-4f61-b7dd-71580d4003bd\") " pod="openstack/placement-db-create-9pfvn" Feb 27 17:41:36 crc kubenswrapper[4830]: I0227 17:41:36.412071 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/999b0c09-f55e-4f61-b7dd-71580d4003bd-operator-scripts\") pod \"placement-db-create-9pfvn\" (UID: \"999b0c09-f55e-4f61-b7dd-71580d4003bd\") " pod="openstack/placement-db-create-9pfvn" Feb 27 17:41:36 crc kubenswrapper[4830]: I0227 17:41:36.437094 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2nm77\" (UniqueName: \"kubernetes.io/projected/999b0c09-f55e-4f61-b7dd-71580d4003bd-kube-api-access-2nm77\") pod \"placement-db-create-9pfvn\" (UID: \"999b0c09-f55e-4f61-b7dd-71580d4003bd\") " pod="openstack/placement-db-create-9pfvn" Feb 27 17:41:36 crc kubenswrapper[4830]: I0227 17:41:36.512500 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-plc6m\" (UniqueName: \"kubernetes.io/projected/40ca4866-696f-4bcf-81ca-b7e20a20faa0-kube-api-access-plc6m\") pod 
\"placement-a9e5-account-create-update-8sbl6\" (UID: \"40ca4866-696f-4bcf-81ca-b7e20a20faa0\") " pod="openstack/placement-a9e5-account-create-update-8sbl6" Feb 27 17:41:36 crc kubenswrapper[4830]: I0227 17:41:36.512551 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/40ca4866-696f-4bcf-81ca-b7e20a20faa0-operator-scripts\") pod \"placement-a9e5-account-create-update-8sbl6\" (UID: \"40ca4866-696f-4bcf-81ca-b7e20a20faa0\") " pod="openstack/placement-a9e5-account-create-update-8sbl6" Feb 27 17:41:36 crc kubenswrapper[4830]: I0227 17:41:36.514636 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/40ca4866-696f-4bcf-81ca-b7e20a20faa0-operator-scripts\") pod \"placement-a9e5-account-create-update-8sbl6\" (UID: \"40ca4866-696f-4bcf-81ca-b7e20a20faa0\") " pod="openstack/placement-a9e5-account-create-update-8sbl6" Feb 27 17:41:36 crc kubenswrapper[4830]: I0227 17:41:36.525741 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-9pfvn" Feb 27 17:41:36 crc kubenswrapper[4830]: I0227 17:41:36.534464 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-plc6m\" (UniqueName: \"kubernetes.io/projected/40ca4866-696f-4bcf-81ca-b7e20a20faa0-kube-api-access-plc6m\") pod \"placement-a9e5-account-create-update-8sbl6\" (UID: \"40ca4866-696f-4bcf-81ca-b7e20a20faa0\") " pod="openstack/placement-a9e5-account-create-update-8sbl6" Feb 27 17:41:36 crc kubenswrapper[4830]: I0227 17:41:36.605719 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-a9e5-account-create-update-8sbl6" Feb 27 17:41:37 crc kubenswrapper[4830]: I0227 17:41:37.105538 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-9pfvn"] Feb 27 17:41:37 crc kubenswrapper[4830]: I0227 17:41:37.231308 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-a9e5-account-create-update-8sbl6"] Feb 27 17:41:37 crc kubenswrapper[4830]: I0227 17:41:37.278341 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-a9e5-account-create-update-8sbl6" event={"ID":"40ca4866-696f-4bcf-81ca-b7e20a20faa0","Type":"ContainerStarted","Data":"865a8190bacc30815ac385abbdc3bc714aef7af95101f749e739e7ec053774dd"} Feb 27 17:41:37 crc kubenswrapper[4830]: I0227 17:41:37.288356 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-9pfvn" event={"ID":"999b0c09-f55e-4f61-b7dd-71580d4003bd","Type":"ContainerStarted","Data":"23e56562b97439c8ca29a75f37d58f75827c5a7bed19c12c1a1a8a6fef736d1f"} Feb 27 17:41:37 crc kubenswrapper[4830]: I0227 17:41:37.288437 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-9pfvn" event={"ID":"999b0c09-f55e-4f61-b7dd-71580d4003bd","Type":"ContainerStarted","Data":"557e59ca211b87e421f91dd7447c8f47cbf3c060bb157f64b8a6a5ede4ed618c"} Feb 27 17:41:37 crc kubenswrapper[4830]: I0227 17:41:37.318135 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-create-9pfvn" podStartSLOduration=1.318099511 podStartE2EDuration="1.318099511s" podCreationTimestamp="2026-02-27 17:41:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 17:41:37.310290323 +0000 UTC m=+5693.399562796" watchObservedRunningTime="2026-02-27 17:41:37.318099511 +0000 UTC m=+5693.407372014" Feb 27 17:41:38 crc kubenswrapper[4830]: I0227 
17:41:38.307576 4830 generic.go:334] "Generic (PLEG): container finished" podID="40ca4866-696f-4bcf-81ca-b7e20a20faa0" containerID="d1f9ed46fa0e79149abfd8a8fbf4baefc01d0dfbe0873f82800363c800abfb57" exitCode=0 Feb 27 17:41:38 crc kubenswrapper[4830]: I0227 17:41:38.307726 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-a9e5-account-create-update-8sbl6" event={"ID":"40ca4866-696f-4bcf-81ca-b7e20a20faa0","Type":"ContainerDied","Data":"d1f9ed46fa0e79149abfd8a8fbf4baefc01d0dfbe0873f82800363c800abfb57"} Feb 27 17:41:38 crc kubenswrapper[4830]: I0227 17:41:38.312985 4830 generic.go:334] "Generic (PLEG): container finished" podID="999b0c09-f55e-4f61-b7dd-71580d4003bd" containerID="23e56562b97439c8ca29a75f37d58f75827c5a7bed19c12c1a1a8a6fef736d1f" exitCode=0 Feb 27 17:41:38 crc kubenswrapper[4830]: I0227 17:41:38.313029 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-9pfvn" event={"ID":"999b0c09-f55e-4f61-b7dd-71580d4003bd","Type":"ContainerDied","Data":"23e56562b97439c8ca29a75f37d58f75827c5a7bed19c12c1a1a8a6fef736d1f"} Feb 27 17:41:39 crc kubenswrapper[4830]: I0227 17:41:39.860482 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-a9e5-account-create-update-8sbl6" Feb 27 17:41:39 crc kubenswrapper[4830]: I0227 17:41:39.866062 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-9pfvn" Feb 27 17:41:39 crc kubenswrapper[4830]: E0227 17:41:39.975055 4830 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 27 17:41:39 crc kubenswrapper[4830]: E0227 17:41:39.975271 4830 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 17:41:39 crc kubenswrapper[4830]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 27 17:41:39 crc kubenswrapper[4830]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mdb7h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29536898-vrwjs_openshift-infra(204eb1af-36ad-4de7-9da7-9a37fefd3026): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from 
https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 27 17:41:39 crc kubenswrapper[4830]: > logger="UnhandledError" Feb 27 17:41:39 crc kubenswrapper[4830]: E0227 17:41:39.976409 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29536898-vrwjs" podUID="204eb1af-36ad-4de7-9da7-9a37fefd3026" Feb 27 17:41:39 crc kubenswrapper[4830]: I0227 17:41:39.996715 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/40ca4866-696f-4bcf-81ca-b7e20a20faa0-operator-scripts\") pod \"40ca4866-696f-4bcf-81ca-b7e20a20faa0\" (UID: \"40ca4866-696f-4bcf-81ca-b7e20a20faa0\") " Feb 27 17:41:39 crc kubenswrapper[4830]: I0227 17:41:39.997100 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-plc6m\" (UniqueName: \"kubernetes.io/projected/40ca4866-696f-4bcf-81ca-b7e20a20faa0-kube-api-access-plc6m\") pod \"40ca4866-696f-4bcf-81ca-b7e20a20faa0\" (UID: \"40ca4866-696f-4bcf-81ca-b7e20a20faa0\") " Feb 27 17:41:39 crc kubenswrapper[4830]: I0227 17:41:39.997208 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2nm77\" (UniqueName: \"kubernetes.io/projected/999b0c09-f55e-4f61-b7dd-71580d4003bd-kube-api-access-2nm77\") pod \"999b0c09-f55e-4f61-b7dd-71580d4003bd\" (UID: \"999b0c09-f55e-4f61-b7dd-71580d4003bd\") " Feb 27 17:41:39 crc kubenswrapper[4830]: I0227 17:41:39.997348 4830 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/999b0c09-f55e-4f61-b7dd-71580d4003bd-operator-scripts\") pod \"999b0c09-f55e-4f61-b7dd-71580d4003bd\" (UID: \"999b0c09-f55e-4f61-b7dd-71580d4003bd\") " Feb 27 17:41:39 crc kubenswrapper[4830]: I0227 17:41:39.998316 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/999b0c09-f55e-4f61-b7dd-71580d4003bd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "999b0c09-f55e-4f61-b7dd-71580d4003bd" (UID: "999b0c09-f55e-4f61-b7dd-71580d4003bd"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:41:39 crc kubenswrapper[4830]: I0227 17:41:39.998331 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40ca4866-696f-4bcf-81ca-b7e20a20faa0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "40ca4866-696f-4bcf-81ca-b7e20a20faa0" (UID: "40ca4866-696f-4bcf-81ca-b7e20a20faa0"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:41:40 crc kubenswrapper[4830]: I0227 17:41:40.000573 4830 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/40ca4866-696f-4bcf-81ca-b7e20a20faa0-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 17:41:40 crc kubenswrapper[4830]: I0227 17:41:40.000622 4830 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/999b0c09-f55e-4f61-b7dd-71580d4003bd-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 17:41:40 crc kubenswrapper[4830]: I0227 17:41:40.005081 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40ca4866-696f-4bcf-81ca-b7e20a20faa0-kube-api-access-plc6m" (OuterVolumeSpecName: "kube-api-access-plc6m") pod "40ca4866-696f-4bcf-81ca-b7e20a20faa0" (UID: "40ca4866-696f-4bcf-81ca-b7e20a20faa0"). InnerVolumeSpecName "kube-api-access-plc6m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:41:40 crc kubenswrapper[4830]: I0227 17:41:40.007743 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/999b0c09-f55e-4f61-b7dd-71580d4003bd-kube-api-access-2nm77" (OuterVolumeSpecName: "kube-api-access-2nm77") pod "999b0c09-f55e-4f61-b7dd-71580d4003bd" (UID: "999b0c09-f55e-4f61-b7dd-71580d4003bd"). InnerVolumeSpecName "kube-api-access-2nm77". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:41:40 crc kubenswrapper[4830]: I0227 17:41:40.103456 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2nm77\" (UniqueName: \"kubernetes.io/projected/999b0c09-f55e-4f61-b7dd-71580d4003bd-kube-api-access-2nm77\") on node \"crc\" DevicePath \"\"" Feb 27 17:41:40 crc kubenswrapper[4830]: I0227 17:41:40.103527 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-plc6m\" (UniqueName: \"kubernetes.io/projected/40ca4866-696f-4bcf-81ca-b7e20a20faa0-kube-api-access-plc6m\") on node \"crc\" DevicePath \"\"" Feb 27 17:41:40 crc kubenswrapper[4830]: I0227 17:41:40.346168 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-a9e5-account-create-update-8sbl6" event={"ID":"40ca4866-696f-4bcf-81ca-b7e20a20faa0","Type":"ContainerDied","Data":"865a8190bacc30815ac385abbdc3bc714aef7af95101f749e739e7ec053774dd"} Feb 27 17:41:40 crc kubenswrapper[4830]: I0227 17:41:40.346226 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="865a8190bacc30815ac385abbdc3bc714aef7af95101f749e739e7ec053774dd" Feb 27 17:41:40 crc kubenswrapper[4830]: I0227 17:41:40.346316 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-a9e5-account-create-update-8sbl6" Feb 27 17:41:40 crc kubenswrapper[4830]: I0227 17:41:40.350170 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-9pfvn" event={"ID":"999b0c09-f55e-4f61-b7dd-71580d4003bd","Type":"ContainerDied","Data":"557e59ca211b87e421f91dd7447c8f47cbf3c060bb157f64b8a6a5ede4ed618c"} Feb 27 17:41:40 crc kubenswrapper[4830]: I0227 17:41:40.350244 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="557e59ca211b87e421f91dd7447c8f47cbf3c060bb157f64b8a6a5ede4ed618c" Feb 27 17:41:40 crc kubenswrapper[4830]: I0227 17:41:40.350332 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-9pfvn" Feb 27 17:41:41 crc kubenswrapper[4830]: I0227 17:41:41.583788 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-qbvfw"] Feb 27 17:41:41 crc kubenswrapper[4830]: E0227 17:41:41.584767 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40ca4866-696f-4bcf-81ca-b7e20a20faa0" containerName="mariadb-account-create-update" Feb 27 17:41:41 crc kubenswrapper[4830]: I0227 17:41:41.584781 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="40ca4866-696f-4bcf-81ca-b7e20a20faa0" containerName="mariadb-account-create-update" Feb 27 17:41:41 crc kubenswrapper[4830]: E0227 17:41:41.584805 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="999b0c09-f55e-4f61-b7dd-71580d4003bd" containerName="mariadb-database-create" Feb 27 17:41:41 crc kubenswrapper[4830]: I0227 17:41:41.584820 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="999b0c09-f55e-4f61-b7dd-71580d4003bd" containerName="mariadb-database-create" Feb 27 17:41:41 crc kubenswrapper[4830]: I0227 17:41:41.584986 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="40ca4866-696f-4bcf-81ca-b7e20a20faa0" 
containerName="mariadb-account-create-update" Feb 27 17:41:41 crc kubenswrapper[4830]: I0227 17:41:41.585005 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="999b0c09-f55e-4f61-b7dd-71580d4003bd" containerName="mariadb-database-create" Feb 27 17:41:41 crc kubenswrapper[4830]: I0227 17:41:41.585618 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-qbvfw" Feb 27 17:41:41 crc kubenswrapper[4830]: I0227 17:41:41.595965 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 27 17:41:41 crc kubenswrapper[4830]: I0227 17:41:41.595987 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-rl2bc" Feb 27 17:41:41 crc kubenswrapper[4830]: I0227 17:41:41.596635 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 27 17:41:41 crc kubenswrapper[4830]: I0227 17:41:41.606788 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-qbvfw"] Feb 27 17:41:41 crc kubenswrapper[4830]: I0227 17:41:41.635423 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7754d54f49-mb84v"] Feb 27 17:41:41 crc kubenswrapper[4830]: I0227 17:41:41.637353 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7754d54f49-mb84v" Feb 27 17:41:41 crc kubenswrapper[4830]: I0227 17:41:41.680402 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7754d54f49-mb84v"] Feb 27 17:41:41 crc kubenswrapper[4830]: I0227 17:41:41.748854 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4wcb\" (UniqueName: \"kubernetes.io/projected/75929ab1-64c8-4a78-822f-b3a2701dbcdd-kube-api-access-g4wcb\") pod \"placement-db-sync-qbvfw\" (UID: \"75929ab1-64c8-4a78-822f-b3a2701dbcdd\") " pod="openstack/placement-db-sync-qbvfw" Feb 27 17:41:41 crc kubenswrapper[4830]: I0227 17:41:41.748976 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/601dd6ff-d00f-445a-a010-0f02a2865504-ovsdbserver-sb\") pod \"dnsmasq-dns-7754d54f49-mb84v\" (UID: \"601dd6ff-d00f-445a-a010-0f02a2865504\") " pod="openstack/dnsmasq-dns-7754d54f49-mb84v" Feb 27 17:41:41 crc kubenswrapper[4830]: I0227 17:41:41.749018 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svzbm\" (UniqueName: \"kubernetes.io/projected/601dd6ff-d00f-445a-a010-0f02a2865504-kube-api-access-svzbm\") pod \"dnsmasq-dns-7754d54f49-mb84v\" (UID: \"601dd6ff-d00f-445a-a010-0f02a2865504\") " pod="openstack/dnsmasq-dns-7754d54f49-mb84v" Feb 27 17:41:41 crc kubenswrapper[4830]: I0227 17:41:41.749055 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75929ab1-64c8-4a78-822f-b3a2701dbcdd-config-data\") pod \"placement-db-sync-qbvfw\" (UID: \"75929ab1-64c8-4a78-822f-b3a2701dbcdd\") " pod="openstack/placement-db-sync-qbvfw" Feb 27 17:41:41 crc kubenswrapper[4830]: I0227 17:41:41.749093 4830 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/601dd6ff-d00f-445a-a010-0f02a2865504-dns-svc\") pod \"dnsmasq-dns-7754d54f49-mb84v\" (UID: \"601dd6ff-d00f-445a-a010-0f02a2865504\") " pod="openstack/dnsmasq-dns-7754d54f49-mb84v" Feb 27 17:41:41 crc kubenswrapper[4830]: I0227 17:41:41.749127 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/75929ab1-64c8-4a78-822f-b3a2701dbcdd-logs\") pod \"placement-db-sync-qbvfw\" (UID: \"75929ab1-64c8-4a78-822f-b3a2701dbcdd\") " pod="openstack/placement-db-sync-qbvfw" Feb 27 17:41:41 crc kubenswrapper[4830]: I0227 17:41:41.749161 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75929ab1-64c8-4a78-822f-b3a2701dbcdd-combined-ca-bundle\") pod \"placement-db-sync-qbvfw\" (UID: \"75929ab1-64c8-4a78-822f-b3a2701dbcdd\") " pod="openstack/placement-db-sync-qbvfw" Feb 27 17:41:41 crc kubenswrapper[4830]: I0227 17:41:41.749232 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/601dd6ff-d00f-445a-a010-0f02a2865504-ovsdbserver-nb\") pod \"dnsmasq-dns-7754d54f49-mb84v\" (UID: \"601dd6ff-d00f-445a-a010-0f02a2865504\") " pod="openstack/dnsmasq-dns-7754d54f49-mb84v" Feb 27 17:41:41 crc kubenswrapper[4830]: I0227 17:41:41.749344 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75929ab1-64c8-4a78-822f-b3a2701dbcdd-scripts\") pod \"placement-db-sync-qbvfw\" (UID: \"75929ab1-64c8-4a78-822f-b3a2701dbcdd\") " pod="openstack/placement-db-sync-qbvfw" Feb 27 17:41:41 crc kubenswrapper[4830]: I0227 17:41:41.749417 4830 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/601dd6ff-d00f-445a-a010-0f02a2865504-config\") pod \"dnsmasq-dns-7754d54f49-mb84v\" (UID: \"601dd6ff-d00f-445a-a010-0f02a2865504\") " pod="openstack/dnsmasq-dns-7754d54f49-mb84v" Feb 27 17:41:41 crc kubenswrapper[4830]: I0227 17:41:41.851812 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g4wcb\" (UniqueName: \"kubernetes.io/projected/75929ab1-64c8-4a78-822f-b3a2701dbcdd-kube-api-access-g4wcb\") pod \"placement-db-sync-qbvfw\" (UID: \"75929ab1-64c8-4a78-822f-b3a2701dbcdd\") " pod="openstack/placement-db-sync-qbvfw" Feb 27 17:41:41 crc kubenswrapper[4830]: I0227 17:41:41.851895 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/601dd6ff-d00f-445a-a010-0f02a2865504-ovsdbserver-sb\") pod \"dnsmasq-dns-7754d54f49-mb84v\" (UID: \"601dd6ff-d00f-445a-a010-0f02a2865504\") " pod="openstack/dnsmasq-dns-7754d54f49-mb84v" Feb 27 17:41:41 crc kubenswrapper[4830]: I0227 17:41:41.851933 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-svzbm\" (UniqueName: \"kubernetes.io/projected/601dd6ff-d00f-445a-a010-0f02a2865504-kube-api-access-svzbm\") pod \"dnsmasq-dns-7754d54f49-mb84v\" (UID: \"601dd6ff-d00f-445a-a010-0f02a2865504\") " pod="openstack/dnsmasq-dns-7754d54f49-mb84v" Feb 27 17:41:41 crc kubenswrapper[4830]: I0227 17:41:41.851996 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75929ab1-64c8-4a78-822f-b3a2701dbcdd-config-data\") pod \"placement-db-sync-qbvfw\" (UID: \"75929ab1-64c8-4a78-822f-b3a2701dbcdd\") " pod="openstack/placement-db-sync-qbvfw" Feb 27 17:41:41 crc kubenswrapper[4830]: I0227 17:41:41.852032 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/601dd6ff-d00f-445a-a010-0f02a2865504-dns-svc\") pod \"dnsmasq-dns-7754d54f49-mb84v\" (UID: \"601dd6ff-d00f-445a-a010-0f02a2865504\") " pod="openstack/dnsmasq-dns-7754d54f49-mb84v" Feb 27 17:41:41 crc kubenswrapper[4830]: I0227 17:41:41.852065 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/75929ab1-64c8-4a78-822f-b3a2701dbcdd-logs\") pod \"placement-db-sync-qbvfw\" (UID: \"75929ab1-64c8-4a78-822f-b3a2701dbcdd\") " pod="openstack/placement-db-sync-qbvfw" Feb 27 17:41:41 crc kubenswrapper[4830]: I0227 17:41:41.852099 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75929ab1-64c8-4a78-822f-b3a2701dbcdd-combined-ca-bundle\") pod \"placement-db-sync-qbvfw\" (UID: \"75929ab1-64c8-4a78-822f-b3a2701dbcdd\") " pod="openstack/placement-db-sync-qbvfw" Feb 27 17:41:41 crc kubenswrapper[4830]: I0227 17:41:41.852165 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/601dd6ff-d00f-445a-a010-0f02a2865504-ovsdbserver-nb\") pod \"dnsmasq-dns-7754d54f49-mb84v\" (UID: \"601dd6ff-d00f-445a-a010-0f02a2865504\") " pod="openstack/dnsmasq-dns-7754d54f49-mb84v" Feb 27 17:41:41 crc kubenswrapper[4830]: I0227 17:41:41.852297 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75929ab1-64c8-4a78-822f-b3a2701dbcdd-scripts\") pod \"placement-db-sync-qbvfw\" (UID: \"75929ab1-64c8-4a78-822f-b3a2701dbcdd\") " pod="openstack/placement-db-sync-qbvfw" Feb 27 17:41:41 crc kubenswrapper[4830]: I0227 17:41:41.852349 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/601dd6ff-d00f-445a-a010-0f02a2865504-config\") pod 
\"dnsmasq-dns-7754d54f49-mb84v\" (UID: \"601dd6ff-d00f-445a-a010-0f02a2865504\") " pod="openstack/dnsmasq-dns-7754d54f49-mb84v" Feb 27 17:41:41 crc kubenswrapper[4830]: I0227 17:41:41.853763 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/601dd6ff-d00f-445a-a010-0f02a2865504-config\") pod \"dnsmasq-dns-7754d54f49-mb84v\" (UID: \"601dd6ff-d00f-445a-a010-0f02a2865504\") " pod="openstack/dnsmasq-dns-7754d54f49-mb84v" Feb 27 17:41:41 crc kubenswrapper[4830]: I0227 17:41:41.854919 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/601dd6ff-d00f-445a-a010-0f02a2865504-ovsdbserver-sb\") pod \"dnsmasq-dns-7754d54f49-mb84v\" (UID: \"601dd6ff-d00f-445a-a010-0f02a2865504\") " pod="openstack/dnsmasq-dns-7754d54f49-mb84v" Feb 27 17:41:41 crc kubenswrapper[4830]: I0227 17:41:41.855492 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/75929ab1-64c8-4a78-822f-b3a2701dbcdd-logs\") pod \"placement-db-sync-qbvfw\" (UID: \"75929ab1-64c8-4a78-822f-b3a2701dbcdd\") " pod="openstack/placement-db-sync-qbvfw" Feb 27 17:41:41 crc kubenswrapper[4830]: I0227 17:41:41.854918 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/601dd6ff-d00f-445a-a010-0f02a2865504-ovsdbserver-nb\") pod \"dnsmasq-dns-7754d54f49-mb84v\" (UID: \"601dd6ff-d00f-445a-a010-0f02a2865504\") " pod="openstack/dnsmasq-dns-7754d54f49-mb84v" Feb 27 17:41:41 crc kubenswrapper[4830]: I0227 17:41:41.856146 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/601dd6ff-d00f-445a-a010-0f02a2865504-dns-svc\") pod \"dnsmasq-dns-7754d54f49-mb84v\" (UID: \"601dd6ff-d00f-445a-a010-0f02a2865504\") " pod="openstack/dnsmasq-dns-7754d54f49-mb84v" Feb 27 17:41:41 crc 
kubenswrapper[4830]: I0227 17:41:41.861487 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75929ab1-64c8-4a78-822f-b3a2701dbcdd-combined-ca-bundle\") pod \"placement-db-sync-qbvfw\" (UID: \"75929ab1-64c8-4a78-822f-b3a2701dbcdd\") " pod="openstack/placement-db-sync-qbvfw" Feb 27 17:41:41 crc kubenswrapper[4830]: I0227 17:41:41.867462 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75929ab1-64c8-4a78-822f-b3a2701dbcdd-scripts\") pod \"placement-db-sync-qbvfw\" (UID: \"75929ab1-64c8-4a78-822f-b3a2701dbcdd\") " pod="openstack/placement-db-sync-qbvfw" Feb 27 17:41:41 crc kubenswrapper[4830]: I0227 17:41:41.870538 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75929ab1-64c8-4a78-822f-b3a2701dbcdd-config-data\") pod \"placement-db-sync-qbvfw\" (UID: \"75929ab1-64c8-4a78-822f-b3a2701dbcdd\") " pod="openstack/placement-db-sync-qbvfw" Feb 27 17:41:41 crc kubenswrapper[4830]: I0227 17:41:41.876817 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g4wcb\" (UniqueName: \"kubernetes.io/projected/75929ab1-64c8-4a78-822f-b3a2701dbcdd-kube-api-access-g4wcb\") pod \"placement-db-sync-qbvfw\" (UID: \"75929ab1-64c8-4a78-822f-b3a2701dbcdd\") " pod="openstack/placement-db-sync-qbvfw" Feb 27 17:41:41 crc kubenswrapper[4830]: I0227 17:41:41.884132 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-svzbm\" (UniqueName: \"kubernetes.io/projected/601dd6ff-d00f-445a-a010-0f02a2865504-kube-api-access-svzbm\") pod \"dnsmasq-dns-7754d54f49-mb84v\" (UID: \"601dd6ff-d00f-445a-a010-0f02a2865504\") " pod="openstack/dnsmasq-dns-7754d54f49-mb84v" Feb 27 17:41:41 crc kubenswrapper[4830]: I0227 17:41:41.919056 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-qbvfw" Feb 27 17:41:41 crc kubenswrapper[4830]: I0227 17:41:41.959800 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7754d54f49-mb84v" Feb 27 17:41:42 crc kubenswrapper[4830]: I0227 17:41:42.423444 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-qbvfw"] Feb 27 17:41:42 crc kubenswrapper[4830]: I0227 17:41:42.609134 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7754d54f49-mb84v"] Feb 27 17:41:42 crc kubenswrapper[4830]: W0227 17:41:42.622176 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod601dd6ff_d00f_445a_a010_0f02a2865504.slice/crio-aa73f85ed04168e3f05f9b8fd6ae245476c8186a096e4f8fce96c8a575c28557 WatchSource:0}: Error finding container aa73f85ed04168e3f05f9b8fd6ae245476c8186a096e4f8fce96c8a575c28557: Status 404 returned error can't find the container with id aa73f85ed04168e3f05f9b8fd6ae245476c8186a096e4f8fce96c8a575c28557 Feb 27 17:41:43 crc kubenswrapper[4830]: I0227 17:41:43.385439 4830 generic.go:334] "Generic (PLEG): container finished" podID="601dd6ff-d00f-445a-a010-0f02a2865504" containerID="48f8da0811a5be819e5103e1d788aac8a6d8efe3e32c87f3081ada865670b870" exitCode=0 Feb 27 17:41:43 crc kubenswrapper[4830]: I0227 17:41:43.386062 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7754d54f49-mb84v" event={"ID":"601dd6ff-d00f-445a-a010-0f02a2865504","Type":"ContainerDied","Data":"48f8da0811a5be819e5103e1d788aac8a6d8efe3e32c87f3081ada865670b870"} Feb 27 17:41:43 crc kubenswrapper[4830]: I0227 17:41:43.386111 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7754d54f49-mb84v" event={"ID":"601dd6ff-d00f-445a-a010-0f02a2865504","Type":"ContainerStarted","Data":"aa73f85ed04168e3f05f9b8fd6ae245476c8186a096e4f8fce96c8a575c28557"} Feb 
27 17:41:43 crc kubenswrapper[4830]: I0227 17:41:43.410551 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-qbvfw" event={"ID":"75929ab1-64c8-4a78-822f-b3a2701dbcdd","Type":"ContainerStarted","Data":"829a58febd013e9966ceec94981836968748df8ff2a4c693b3d3e273263ae144"} Feb 27 17:41:43 crc kubenswrapper[4830]: I0227 17:41:43.410697 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-qbvfw" event={"ID":"75929ab1-64c8-4a78-822f-b3a2701dbcdd","Type":"ContainerStarted","Data":"2220d6513644b054e1d606cc23f978d4d5ef40a6e6779f9a0e9e566c140df3ba"} Feb 27 17:41:43 crc kubenswrapper[4830]: I0227 17:41:43.470672 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-qbvfw" podStartSLOduration=2.470593078 podStartE2EDuration="2.470593078s" podCreationTimestamp="2026-02-27 17:41:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 17:41:43.450473925 +0000 UTC m=+5699.539746428" watchObservedRunningTime="2026-02-27 17:41:43.470593078 +0000 UTC m=+5699.559865581" Feb 27 17:41:44 crc kubenswrapper[4830]: I0227 17:41:44.423041 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7754d54f49-mb84v" event={"ID":"601dd6ff-d00f-445a-a010-0f02a2865504","Type":"ContainerStarted","Data":"365ba8e03a0a1032f6b453885d2c4c289a3ed3775ef2719f6bd4b98407e30716"} Feb 27 17:41:44 crc kubenswrapper[4830]: I0227 17:41:44.423825 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7754d54f49-mb84v" Feb 27 17:41:44 crc kubenswrapper[4830]: I0227 17:41:44.450902 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7754d54f49-mb84v" podStartSLOduration=3.450876455 podStartE2EDuration="3.450876455s" podCreationTimestamp="2026-02-27 17:41:41 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 17:41:44.443200261 +0000 UTC m=+5700.532472764" watchObservedRunningTime="2026-02-27 17:41:44.450876455 +0000 UTC m=+5700.540148948" Feb 27 17:41:45 crc kubenswrapper[4830]: I0227 17:41:45.434537 4830 generic.go:334] "Generic (PLEG): container finished" podID="75929ab1-64c8-4a78-822f-b3a2701dbcdd" containerID="829a58febd013e9966ceec94981836968748df8ff2a4c693b3d3e273263ae144" exitCode=0 Feb 27 17:41:45 crc kubenswrapper[4830]: I0227 17:41:45.434666 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-qbvfw" event={"ID":"75929ab1-64c8-4a78-822f-b3a2701dbcdd","Type":"ContainerDied","Data":"829a58febd013e9966ceec94981836968748df8ff2a4c693b3d3e273263ae144"} Feb 27 17:41:46 crc kubenswrapper[4830]: E0227 17:41:46.771311 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-gvnvz" podUID="f1c73a78-1e95-4481-a273-ba7e3b5a127c" Feb 27 17:41:46 crc kubenswrapper[4830]: I0227 17:41:46.953300 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-qbvfw" Feb 27 17:41:47 crc kubenswrapper[4830]: I0227 17:41:47.064914 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g4wcb\" (UniqueName: \"kubernetes.io/projected/75929ab1-64c8-4a78-822f-b3a2701dbcdd-kube-api-access-g4wcb\") pod \"75929ab1-64c8-4a78-822f-b3a2701dbcdd\" (UID: \"75929ab1-64c8-4a78-822f-b3a2701dbcdd\") " Feb 27 17:41:47 crc kubenswrapper[4830]: I0227 17:41:47.065069 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75929ab1-64c8-4a78-822f-b3a2701dbcdd-scripts\") pod \"75929ab1-64c8-4a78-822f-b3a2701dbcdd\" (UID: \"75929ab1-64c8-4a78-822f-b3a2701dbcdd\") " Feb 27 17:41:47 crc kubenswrapper[4830]: I0227 17:41:47.065176 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75929ab1-64c8-4a78-822f-b3a2701dbcdd-combined-ca-bundle\") pod \"75929ab1-64c8-4a78-822f-b3a2701dbcdd\" (UID: \"75929ab1-64c8-4a78-822f-b3a2701dbcdd\") " Feb 27 17:41:47 crc kubenswrapper[4830]: I0227 17:41:47.065279 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75929ab1-64c8-4a78-822f-b3a2701dbcdd-config-data\") pod \"75929ab1-64c8-4a78-822f-b3a2701dbcdd\" (UID: \"75929ab1-64c8-4a78-822f-b3a2701dbcdd\") " Feb 27 17:41:47 crc kubenswrapper[4830]: I0227 17:41:47.065300 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/75929ab1-64c8-4a78-822f-b3a2701dbcdd-logs\") pod \"75929ab1-64c8-4a78-822f-b3a2701dbcdd\" (UID: \"75929ab1-64c8-4a78-822f-b3a2701dbcdd\") " Feb 27 17:41:47 crc kubenswrapper[4830]: I0227 17:41:47.065872 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/75929ab1-64c8-4a78-822f-b3a2701dbcdd-logs" (OuterVolumeSpecName: "logs") pod "75929ab1-64c8-4a78-822f-b3a2701dbcdd" (UID: "75929ab1-64c8-4a78-822f-b3a2701dbcdd"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 17:41:47 crc kubenswrapper[4830]: I0227 17:41:47.076198 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75929ab1-64c8-4a78-822f-b3a2701dbcdd-scripts" (OuterVolumeSpecName: "scripts") pod "75929ab1-64c8-4a78-822f-b3a2701dbcdd" (UID: "75929ab1-64c8-4a78-822f-b3a2701dbcdd"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:41:47 crc kubenswrapper[4830]: I0227 17:41:47.079187 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75929ab1-64c8-4a78-822f-b3a2701dbcdd-kube-api-access-g4wcb" (OuterVolumeSpecName: "kube-api-access-g4wcb") pod "75929ab1-64c8-4a78-822f-b3a2701dbcdd" (UID: "75929ab1-64c8-4a78-822f-b3a2701dbcdd"). InnerVolumeSpecName "kube-api-access-g4wcb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:41:47 crc kubenswrapper[4830]: I0227 17:41:47.108499 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75929ab1-64c8-4a78-822f-b3a2701dbcdd-config-data" (OuterVolumeSpecName: "config-data") pod "75929ab1-64c8-4a78-822f-b3a2701dbcdd" (UID: "75929ab1-64c8-4a78-822f-b3a2701dbcdd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:41:47 crc kubenswrapper[4830]: I0227 17:41:47.111654 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75929ab1-64c8-4a78-822f-b3a2701dbcdd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "75929ab1-64c8-4a78-822f-b3a2701dbcdd" (UID: "75929ab1-64c8-4a78-822f-b3a2701dbcdd"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:41:47 crc kubenswrapper[4830]: I0227 17:41:47.167519 4830 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75929ab1-64c8-4a78-822f-b3a2701dbcdd-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 17:41:47 crc kubenswrapper[4830]: I0227 17:41:47.167569 4830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75929ab1-64c8-4a78-822f-b3a2701dbcdd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 17:41:47 crc kubenswrapper[4830]: I0227 17:41:47.167583 4830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75929ab1-64c8-4a78-822f-b3a2701dbcdd-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 17:41:47 crc kubenswrapper[4830]: I0227 17:41:47.167594 4830 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/75929ab1-64c8-4a78-822f-b3a2701dbcdd-logs\") on node \"crc\" DevicePath \"\"" Feb 27 17:41:47 crc kubenswrapper[4830]: I0227 17:41:47.167606 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g4wcb\" (UniqueName: \"kubernetes.io/projected/75929ab1-64c8-4a78-822f-b3a2701dbcdd-kube-api-access-g4wcb\") on node \"crc\" DevicePath \"\"" Feb 27 17:41:47 crc kubenswrapper[4830]: I0227 17:41:47.457420 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-qbvfw" event={"ID":"75929ab1-64c8-4a78-822f-b3a2701dbcdd","Type":"ContainerDied","Data":"2220d6513644b054e1d606cc23f978d4d5ef40a6e6779f9a0e9e566c140df3ba"} Feb 27 17:41:47 crc kubenswrapper[4830]: I0227 17:41:47.457475 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2220d6513644b054e1d606cc23f978d4d5ef40a6e6779f9a0e9e566c140df3ba" Feb 27 17:41:47 crc kubenswrapper[4830]: I0227 17:41:47.457496 4830 util.go:48] "No ready sandbox for pod 
can be found. Need to start a new one" pod="openstack/placement-db-sync-qbvfw" Feb 27 17:41:48 crc kubenswrapper[4830]: I0227 17:41:48.180068 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-597449fbf6-zh885"] Feb 27 17:41:48 crc kubenswrapper[4830]: E0227 17:41:48.180490 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75929ab1-64c8-4a78-822f-b3a2701dbcdd" containerName="placement-db-sync" Feb 27 17:41:48 crc kubenswrapper[4830]: I0227 17:41:48.180505 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="75929ab1-64c8-4a78-822f-b3a2701dbcdd" containerName="placement-db-sync" Feb 27 17:41:48 crc kubenswrapper[4830]: I0227 17:41:48.180691 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="75929ab1-64c8-4a78-822f-b3a2701dbcdd" containerName="placement-db-sync" Feb 27 17:41:48 crc kubenswrapper[4830]: I0227 17:41:48.181679 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-597449fbf6-zh885" Feb 27 17:41:48 crc kubenswrapper[4830]: I0227 17:41:48.184688 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 27 17:41:48 crc kubenswrapper[4830]: I0227 17:41:48.192501 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 27 17:41:48 crc kubenswrapper[4830]: I0227 17:41:48.192624 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-rl2bc" Feb 27 17:41:48 crc kubenswrapper[4830]: I0227 17:41:48.215639 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-597449fbf6-zh885"] Feb 27 17:41:48 crc kubenswrapper[4830]: I0227 17:41:48.295606 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7957ffb0-fa18-4c4b-b17e-7160a1c5f41f-combined-ca-bundle\") pod 
\"placement-597449fbf6-zh885\" (UID: \"7957ffb0-fa18-4c4b-b17e-7160a1c5f41f\") " pod="openstack/placement-597449fbf6-zh885" Feb 27 17:41:48 crc kubenswrapper[4830]: I0227 17:41:48.295805 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7957ffb0-fa18-4c4b-b17e-7160a1c5f41f-config-data\") pod \"placement-597449fbf6-zh885\" (UID: \"7957ffb0-fa18-4c4b-b17e-7160a1c5f41f\") " pod="openstack/placement-597449fbf6-zh885" Feb 27 17:41:48 crc kubenswrapper[4830]: I0227 17:41:48.296044 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7957ffb0-fa18-4c4b-b17e-7160a1c5f41f-scripts\") pod \"placement-597449fbf6-zh885\" (UID: \"7957ffb0-fa18-4c4b-b17e-7160a1c5f41f\") " pod="openstack/placement-597449fbf6-zh885" Feb 27 17:41:48 crc kubenswrapper[4830]: I0227 17:41:48.296247 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9wxm\" (UniqueName: \"kubernetes.io/projected/7957ffb0-fa18-4c4b-b17e-7160a1c5f41f-kube-api-access-f9wxm\") pod \"placement-597449fbf6-zh885\" (UID: \"7957ffb0-fa18-4c4b-b17e-7160a1c5f41f\") " pod="openstack/placement-597449fbf6-zh885" Feb 27 17:41:48 crc kubenswrapper[4830]: I0227 17:41:48.296359 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7957ffb0-fa18-4c4b-b17e-7160a1c5f41f-logs\") pod \"placement-597449fbf6-zh885\" (UID: \"7957ffb0-fa18-4c4b-b17e-7160a1c5f41f\") " pod="openstack/placement-597449fbf6-zh885" Feb 27 17:41:48 crc kubenswrapper[4830]: I0227 17:41:48.398383 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7957ffb0-fa18-4c4b-b17e-7160a1c5f41f-config-data\") pod \"placement-597449fbf6-zh885\" (UID: 
\"7957ffb0-fa18-4c4b-b17e-7160a1c5f41f\") " pod="openstack/placement-597449fbf6-zh885" Feb 27 17:41:48 crc kubenswrapper[4830]: I0227 17:41:48.398464 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7957ffb0-fa18-4c4b-b17e-7160a1c5f41f-scripts\") pod \"placement-597449fbf6-zh885\" (UID: \"7957ffb0-fa18-4c4b-b17e-7160a1c5f41f\") " pod="openstack/placement-597449fbf6-zh885" Feb 27 17:41:48 crc kubenswrapper[4830]: I0227 17:41:48.398514 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f9wxm\" (UniqueName: \"kubernetes.io/projected/7957ffb0-fa18-4c4b-b17e-7160a1c5f41f-kube-api-access-f9wxm\") pod \"placement-597449fbf6-zh885\" (UID: \"7957ffb0-fa18-4c4b-b17e-7160a1c5f41f\") " pod="openstack/placement-597449fbf6-zh885" Feb 27 17:41:48 crc kubenswrapper[4830]: I0227 17:41:48.398539 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7957ffb0-fa18-4c4b-b17e-7160a1c5f41f-logs\") pod \"placement-597449fbf6-zh885\" (UID: \"7957ffb0-fa18-4c4b-b17e-7160a1c5f41f\") " pod="openstack/placement-597449fbf6-zh885" Feb 27 17:41:48 crc kubenswrapper[4830]: I0227 17:41:48.398612 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7957ffb0-fa18-4c4b-b17e-7160a1c5f41f-combined-ca-bundle\") pod \"placement-597449fbf6-zh885\" (UID: \"7957ffb0-fa18-4c4b-b17e-7160a1c5f41f\") " pod="openstack/placement-597449fbf6-zh885" Feb 27 17:41:48 crc kubenswrapper[4830]: I0227 17:41:48.399170 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7957ffb0-fa18-4c4b-b17e-7160a1c5f41f-logs\") pod \"placement-597449fbf6-zh885\" (UID: \"7957ffb0-fa18-4c4b-b17e-7160a1c5f41f\") " pod="openstack/placement-597449fbf6-zh885" Feb 27 17:41:48 crc 
kubenswrapper[4830]: I0227 17:41:48.402590 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7957ffb0-fa18-4c4b-b17e-7160a1c5f41f-scripts\") pod \"placement-597449fbf6-zh885\" (UID: \"7957ffb0-fa18-4c4b-b17e-7160a1c5f41f\") " pod="openstack/placement-597449fbf6-zh885" Feb 27 17:41:48 crc kubenswrapper[4830]: I0227 17:41:48.403257 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7957ffb0-fa18-4c4b-b17e-7160a1c5f41f-config-data\") pod \"placement-597449fbf6-zh885\" (UID: \"7957ffb0-fa18-4c4b-b17e-7160a1c5f41f\") " pod="openstack/placement-597449fbf6-zh885" Feb 27 17:41:48 crc kubenswrapper[4830]: I0227 17:41:48.404284 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7957ffb0-fa18-4c4b-b17e-7160a1c5f41f-combined-ca-bundle\") pod \"placement-597449fbf6-zh885\" (UID: \"7957ffb0-fa18-4c4b-b17e-7160a1c5f41f\") " pod="openstack/placement-597449fbf6-zh885" Feb 27 17:41:48 crc kubenswrapper[4830]: I0227 17:41:48.415907 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9wxm\" (UniqueName: \"kubernetes.io/projected/7957ffb0-fa18-4c4b-b17e-7160a1c5f41f-kube-api-access-f9wxm\") pod \"placement-597449fbf6-zh885\" (UID: \"7957ffb0-fa18-4c4b-b17e-7160a1c5f41f\") " pod="openstack/placement-597449fbf6-zh885" Feb 27 17:41:48 crc kubenswrapper[4830]: I0227 17:41:48.522448 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-597449fbf6-zh885" Feb 27 17:41:49 crc kubenswrapper[4830]: I0227 17:41:49.001514 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-597449fbf6-zh885"] Feb 27 17:41:49 crc kubenswrapper[4830]: I0227 17:41:49.283107 4830 scope.go:117] "RemoveContainer" containerID="31102fddd1ac7d179d4134f82cb88832c5c80e8bd3ad53f87fc53c09096c59fb" Feb 27 17:41:49 crc kubenswrapper[4830]: I0227 17:41:49.320760 4830 scope.go:117] "RemoveContainer" containerID="f3fc59651524f079b13d12be58df71cd15350f127c4285da7ae7c34f7ceb8ff6" Feb 27 17:41:49 crc kubenswrapper[4830]: I0227 17:41:49.494872 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-597449fbf6-zh885" event={"ID":"7957ffb0-fa18-4c4b-b17e-7160a1c5f41f","Type":"ContainerStarted","Data":"7ae7c8575a9ec8bc0443ee9094a5fc552067fd579cacab1da804afde7d37bb0c"} Feb 27 17:41:49 crc kubenswrapper[4830]: I0227 17:41:49.494960 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-597449fbf6-zh885" event={"ID":"7957ffb0-fa18-4c4b-b17e-7160a1c5f41f","Type":"ContainerStarted","Data":"658cfd1bd1f88f2382d30c26737b0ad66c41cce5fba9e1287fcb08e04c600718"} Feb 27 17:41:49 crc kubenswrapper[4830]: I0227 17:41:49.494980 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-597449fbf6-zh885" event={"ID":"7957ffb0-fa18-4c4b-b17e-7160a1c5f41f","Type":"ContainerStarted","Data":"d7c307c580a1c82ddaea5830aa3237dcc604976428a0fe7f62515d76a083e437"} Feb 27 17:41:50 crc kubenswrapper[4830]: I0227 17:41:50.503126 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-597449fbf6-zh885" Feb 27 17:41:50 crc kubenswrapper[4830]: I0227 17:41:50.527551 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-597449fbf6-zh885" podStartSLOduration=2.527528699 podStartE2EDuration="2.527528699s" podCreationTimestamp="2026-02-27 17:41:48 +0000 
UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 17:41:50.522652382 +0000 UTC m=+5706.611924855" watchObservedRunningTime="2026-02-27 17:41:50.527528699 +0000 UTC m=+5706.616801172" Feb 27 17:41:51 crc kubenswrapper[4830]: I0227 17:41:51.512309 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-597449fbf6-zh885" Feb 27 17:41:51 crc kubenswrapper[4830]: I0227 17:41:51.962240 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7754d54f49-mb84v" Feb 27 17:41:52 crc kubenswrapper[4830]: I0227 17:41:52.045023 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-9d796c65c-w27f9"] Feb 27 17:41:52 crc kubenswrapper[4830]: I0227 17:41:52.045397 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-9d796c65c-w27f9" podUID="4f04e887-5fcb-4a92-9eff-2bef86064d95" containerName="dnsmasq-dns" containerID="cri-o://eba1901f4ba5b5c8ed7f3d84d247dbbf6cf8573c4e266775b26c5dd56d91bf8b" gracePeriod=10 Feb 27 17:41:52 crc kubenswrapper[4830]: I0227 17:41:52.526179 4830 generic.go:334] "Generic (PLEG): container finished" podID="4f04e887-5fcb-4a92-9eff-2bef86064d95" containerID="eba1901f4ba5b5c8ed7f3d84d247dbbf6cf8573c4e266775b26c5dd56d91bf8b" exitCode=0 Feb 27 17:41:52 crc kubenswrapper[4830]: I0227 17:41:52.526254 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9d796c65c-w27f9" event={"ID":"4f04e887-5fcb-4a92-9eff-2bef86064d95","Type":"ContainerDied","Data":"eba1901f4ba5b5c8ed7f3d84d247dbbf6cf8573c4e266775b26c5dd56d91bf8b"} Feb 27 17:41:52 crc kubenswrapper[4830]: I0227 17:41:52.526691 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9d796c65c-w27f9" 
event={"ID":"4f04e887-5fcb-4a92-9eff-2bef86064d95","Type":"ContainerDied","Data":"a734bd77b9c9ed59f596e81d1190b577569235d61a45dd92e83aacdcb979a0c6"} Feb 27 17:41:52 crc kubenswrapper[4830]: I0227 17:41:52.526709 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a734bd77b9c9ed59f596e81d1190b577569235d61a45dd92e83aacdcb979a0c6" Feb 27 17:41:52 crc kubenswrapper[4830]: I0227 17:41:52.551645 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-9d796c65c-w27f9" Feb 27 17:41:52 crc kubenswrapper[4830]: I0227 17:41:52.596582 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4f04e887-5fcb-4a92-9eff-2bef86064d95-ovsdbserver-sb\") pod \"4f04e887-5fcb-4a92-9eff-2bef86064d95\" (UID: \"4f04e887-5fcb-4a92-9eff-2bef86064d95\") " Feb 27 17:41:52 crc kubenswrapper[4830]: I0227 17:41:52.596656 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f4qpz\" (UniqueName: \"kubernetes.io/projected/4f04e887-5fcb-4a92-9eff-2bef86064d95-kube-api-access-f4qpz\") pod \"4f04e887-5fcb-4a92-9eff-2bef86064d95\" (UID: \"4f04e887-5fcb-4a92-9eff-2bef86064d95\") " Feb 27 17:41:52 crc kubenswrapper[4830]: I0227 17:41:52.596744 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f04e887-5fcb-4a92-9eff-2bef86064d95-config\") pod \"4f04e887-5fcb-4a92-9eff-2bef86064d95\" (UID: \"4f04e887-5fcb-4a92-9eff-2bef86064d95\") " Feb 27 17:41:52 crc kubenswrapper[4830]: I0227 17:41:52.596911 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4f04e887-5fcb-4a92-9eff-2bef86064d95-ovsdbserver-nb\") pod \"4f04e887-5fcb-4a92-9eff-2bef86064d95\" (UID: \"4f04e887-5fcb-4a92-9eff-2bef86064d95\") " Feb 27 17:41:52 crc 
kubenswrapper[4830]: I0227 17:41:52.597636 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4f04e887-5fcb-4a92-9eff-2bef86064d95-dns-svc\") pod \"4f04e887-5fcb-4a92-9eff-2bef86064d95\" (UID: \"4f04e887-5fcb-4a92-9eff-2bef86064d95\") " Feb 27 17:41:52 crc kubenswrapper[4830]: I0227 17:41:52.605376 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f04e887-5fcb-4a92-9eff-2bef86064d95-kube-api-access-f4qpz" (OuterVolumeSpecName: "kube-api-access-f4qpz") pod "4f04e887-5fcb-4a92-9eff-2bef86064d95" (UID: "4f04e887-5fcb-4a92-9eff-2bef86064d95"). InnerVolumeSpecName "kube-api-access-f4qpz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:41:52 crc kubenswrapper[4830]: I0227 17:41:52.644450 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f04e887-5fcb-4a92-9eff-2bef86064d95-config" (OuterVolumeSpecName: "config") pod "4f04e887-5fcb-4a92-9eff-2bef86064d95" (UID: "4f04e887-5fcb-4a92-9eff-2bef86064d95"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:41:52 crc kubenswrapper[4830]: I0227 17:41:52.646285 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f04e887-5fcb-4a92-9eff-2bef86064d95-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4f04e887-5fcb-4a92-9eff-2bef86064d95" (UID: "4f04e887-5fcb-4a92-9eff-2bef86064d95"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:41:52 crc kubenswrapper[4830]: I0227 17:41:52.649021 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f04e887-5fcb-4a92-9eff-2bef86064d95-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4f04e887-5fcb-4a92-9eff-2bef86064d95" (UID: "4f04e887-5fcb-4a92-9eff-2bef86064d95"). 
InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:41:52 crc kubenswrapper[4830]: I0227 17:41:52.670466 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f04e887-5fcb-4a92-9eff-2bef86064d95-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4f04e887-5fcb-4a92-9eff-2bef86064d95" (UID: "4f04e887-5fcb-4a92-9eff-2bef86064d95"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:41:52 crc kubenswrapper[4830]: I0227 17:41:52.699602 4830 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4f04e887-5fcb-4a92-9eff-2bef86064d95-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 27 17:41:52 crc kubenswrapper[4830]: I0227 17:41:52.699636 4830 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4f04e887-5fcb-4a92-9eff-2bef86064d95-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 27 17:41:52 crc kubenswrapper[4830]: I0227 17:41:52.699647 4830 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4f04e887-5fcb-4a92-9eff-2bef86064d95-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 27 17:41:52 crc kubenswrapper[4830]: I0227 17:41:52.699657 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f4qpz\" (UniqueName: \"kubernetes.io/projected/4f04e887-5fcb-4a92-9eff-2bef86064d95-kube-api-access-f4qpz\") on node \"crc\" DevicePath \"\"" Feb 27 17:41:52 crc kubenswrapper[4830]: I0227 17:41:52.699668 4830 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f04e887-5fcb-4a92-9eff-2bef86064d95-config\") on node \"crc\" DevicePath \"\"" Feb 27 17:41:53 crc kubenswrapper[4830]: I0227 17:41:53.546238 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-9d796c65c-w27f9" Feb 27 17:41:53 crc kubenswrapper[4830]: I0227 17:41:53.591151 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-9d796c65c-w27f9"] Feb 27 17:41:53 crc kubenswrapper[4830]: I0227 17:41:53.599116 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-9d796c65c-w27f9"] Feb 27 17:41:54 crc kubenswrapper[4830]: I0227 17:41:54.783028 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f04e887-5fcb-4a92-9eff-2bef86064d95" path="/var/lib/kubelet/pods/4f04e887-5fcb-4a92-9eff-2bef86064d95/volumes" Feb 27 17:41:55 crc kubenswrapper[4830]: E0227 17:41:55.769049 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536898-vrwjs" podUID="204eb1af-36ad-4de7-9da7-9a37fefd3026" Feb 27 17:42:00 crc kubenswrapper[4830]: I0227 17:42:00.168558 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536902-2942n"] Feb 27 17:42:00 crc kubenswrapper[4830]: E0227 17:42:00.169362 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f04e887-5fcb-4a92-9eff-2bef86064d95" containerName="dnsmasq-dns" Feb 27 17:42:00 crc kubenswrapper[4830]: I0227 17:42:00.169708 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f04e887-5fcb-4a92-9eff-2bef86064d95" containerName="dnsmasq-dns" Feb 27 17:42:00 crc kubenswrapper[4830]: E0227 17:42:00.169737 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f04e887-5fcb-4a92-9eff-2bef86064d95" containerName="init" Feb 27 17:42:00 crc kubenswrapper[4830]: I0227 17:42:00.169743 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f04e887-5fcb-4a92-9eff-2bef86064d95" containerName="init" Feb 27 17:42:00 crc kubenswrapper[4830]: I0227 
17:42:00.169928 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f04e887-5fcb-4a92-9eff-2bef86064d95" containerName="dnsmasq-dns" Feb 27 17:42:00 crc kubenswrapper[4830]: I0227 17:42:00.170574 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536902-2942n" Feb 27 17:42:00 crc kubenswrapper[4830]: I0227 17:42:00.193742 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536902-2942n"] Feb 27 17:42:00 crc kubenswrapper[4830]: I0227 17:42:00.269674 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnv2t\" (UniqueName: \"kubernetes.io/projected/5ec03666-94da-435d-bfc4-5b7f8ed237b2-kube-api-access-tnv2t\") pod \"auto-csr-approver-29536902-2942n\" (UID: \"5ec03666-94da-435d-bfc4-5b7f8ed237b2\") " pod="openshift-infra/auto-csr-approver-29536902-2942n" Feb 27 17:42:00 crc kubenswrapper[4830]: I0227 17:42:00.371148 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tnv2t\" (UniqueName: \"kubernetes.io/projected/5ec03666-94da-435d-bfc4-5b7f8ed237b2-kube-api-access-tnv2t\") pod \"auto-csr-approver-29536902-2942n\" (UID: \"5ec03666-94da-435d-bfc4-5b7f8ed237b2\") " pod="openshift-infra/auto-csr-approver-29536902-2942n" Feb 27 17:42:00 crc kubenswrapper[4830]: I0227 17:42:00.393619 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tnv2t\" (UniqueName: \"kubernetes.io/projected/5ec03666-94da-435d-bfc4-5b7f8ed237b2-kube-api-access-tnv2t\") pod \"auto-csr-approver-29536902-2942n\" (UID: \"5ec03666-94da-435d-bfc4-5b7f8ed237b2\") " pod="openshift-infra/auto-csr-approver-29536902-2942n" Feb 27 17:42:00 crc kubenswrapper[4830]: I0227 17:42:00.506486 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536902-2942n" Feb 27 17:42:00 crc kubenswrapper[4830]: I0227 17:42:00.812818 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536902-2942n"] Feb 27 17:42:01 crc kubenswrapper[4830]: I0227 17:42:01.649976 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536902-2942n" event={"ID":"5ec03666-94da-435d-bfc4-5b7f8ed237b2","Type":"ContainerStarted","Data":"293ea699f829ab9b72268160e9a9024cd34698c859e79be210273b77834b4810"} Feb 27 17:42:01 crc kubenswrapper[4830]: E0227 17:42:01.783795 4830 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 27 17:42:01 crc kubenswrapper[4830]: E0227 17:42:01.784772 4830 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 17:42:01 crc kubenswrapper[4830]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 27 17:42:01 crc kubenswrapper[4830]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tnv2t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29536902-2942n_openshift-infra(5ec03666-94da-435d-bfc4-5b7f8ed237b2): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 27 17:42:01 crc kubenswrapper[4830]: > logger="UnhandledError" Feb 27 17:42:01 crc kubenswrapper[4830]: E0227 17:42:01.786022 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29536902-2942n" podUID="5ec03666-94da-435d-bfc4-5b7f8ed237b2" Feb 27 17:42:02 crc kubenswrapper[4830]: E0227 17:42:02.667581 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" 
pod="openshift-infra/auto-csr-approver-29536902-2942n" podUID="5ec03666-94da-435d-bfc4-5b7f8ed237b2" Feb 27 17:42:03 crc kubenswrapper[4830]: I0227 17:42:03.160807 4830 patch_prober.go:28] interesting pod/machine-config-daemon-2tv5v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 17:42:03 crc kubenswrapper[4830]: I0227 17:42:03.161465 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 17:42:03 crc kubenswrapper[4830]: I0227 17:42:03.696201 4830 generic.go:334] "Generic (PLEG): container finished" podID="f1c73a78-1e95-4481-a273-ba7e3b5a127c" containerID="8aa3cf10641c7a30956ffbf83f1f1ebb51706bb6d279b83de759068d9f24a02f" exitCode=0 Feb 27 17:42:03 crc kubenswrapper[4830]: I0227 17:42:03.696290 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gvnvz" event={"ID":"f1c73a78-1e95-4481-a273-ba7e3b5a127c","Type":"ContainerDied","Data":"8aa3cf10641c7a30956ffbf83f1f1ebb51706bb6d279b83de759068d9f24a02f"} Feb 27 17:42:04 crc kubenswrapper[4830]: I0227 17:42:04.714567 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gvnvz" event={"ID":"f1c73a78-1e95-4481-a273-ba7e3b5a127c","Type":"ContainerStarted","Data":"53b353c0a20292fc12677b6363d0bb1821eaa21e59ec49846095a0fdf80ef7c7"} Feb 27 17:42:04 crc kubenswrapper[4830]: I0227 17:42:04.769473 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-gvnvz" podStartSLOduration=3.020286138 
podStartE2EDuration="3m47.76944052s" podCreationTimestamp="2026-02-27 17:38:17 +0000 UTC" firstStartedPulling="2026-02-27 17:38:19.632666275 +0000 UTC m=+5495.721938778" lastFinishedPulling="2026-02-27 17:42:04.381820657 +0000 UTC m=+5720.471093160" observedRunningTime="2026-02-27 17:42:04.741661042 +0000 UTC m=+5720.830933535" watchObservedRunningTime="2026-02-27 17:42:04.76944052 +0000 UTC m=+5720.858712983" Feb 27 17:42:07 crc kubenswrapper[4830]: I0227 17:42:07.928299 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-gvnvz" Feb 27 17:42:07 crc kubenswrapper[4830]: I0227 17:42:07.928756 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-gvnvz" Feb 27 17:42:07 crc kubenswrapper[4830]: I0227 17:42:07.987365 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-gvnvz" Feb 27 17:42:09 crc kubenswrapper[4830]: E0227 17:42:09.796742 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536898-vrwjs" podUID="204eb1af-36ad-4de7-9da7-9a37fefd3026" Feb 27 17:42:17 crc kubenswrapper[4830]: I0227 17:42:17.977156 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-gvnvz" Feb 27 17:42:18 crc kubenswrapper[4830]: I0227 17:42:18.042131 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gvnvz"] Feb 27 17:42:18 crc kubenswrapper[4830]: I0227 17:42:18.900259 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-gvnvz" podUID="f1c73a78-1e95-4481-a273-ba7e3b5a127c" containerName="registry-server" 
containerID="cri-o://53b353c0a20292fc12677b6363d0bb1821eaa21e59ec49846095a0fdf80ef7c7" gracePeriod=2 Feb 27 17:42:18 crc kubenswrapper[4830]: E0227 17:42:18.926758 4830 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 27 17:42:18 crc kubenswrapper[4830]: E0227 17:42:18.926908 4830 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 17:42:18 crc kubenswrapper[4830]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 27 17:42:18 crc kubenswrapper[4830]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tnv2t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29536902-2942n_openshift-infra(5ec03666-94da-435d-bfc4-5b7f8ed237b2): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from 
https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 27 17:42:18 crc kubenswrapper[4830]: > logger="UnhandledError" Feb 27 17:42:18 crc kubenswrapper[4830]: E0227 17:42:18.928817 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29536902-2942n" podUID="5ec03666-94da-435d-bfc4-5b7f8ed237b2" Feb 27 17:42:19 crc kubenswrapper[4830]: I0227 17:42:19.439881 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gvnvz" Feb 27 17:42:19 crc kubenswrapper[4830]: I0227 17:42:19.517687 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-597449fbf6-zh885" Feb 27 17:42:19 crc kubenswrapper[4830]: I0227 17:42:19.518835 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-597449fbf6-zh885" Feb 27 17:42:19 crc kubenswrapper[4830]: I0227 17:42:19.574771 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1c73a78-1e95-4481-a273-ba7e3b5a127c-utilities\") pod \"f1c73a78-1e95-4481-a273-ba7e3b5a127c\" (UID: \"f1c73a78-1e95-4481-a273-ba7e3b5a127c\") " Feb 27 17:42:19 crc kubenswrapper[4830]: I0227 17:42:19.574894 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1c73a78-1e95-4481-a273-ba7e3b5a127c-catalog-content\") pod 
\"f1c73a78-1e95-4481-a273-ba7e3b5a127c\" (UID: \"f1c73a78-1e95-4481-a273-ba7e3b5a127c\") " Feb 27 17:42:19 crc kubenswrapper[4830]: I0227 17:42:19.574995 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m48cm\" (UniqueName: \"kubernetes.io/projected/f1c73a78-1e95-4481-a273-ba7e3b5a127c-kube-api-access-m48cm\") pod \"f1c73a78-1e95-4481-a273-ba7e3b5a127c\" (UID: \"f1c73a78-1e95-4481-a273-ba7e3b5a127c\") " Feb 27 17:42:19 crc kubenswrapper[4830]: I0227 17:42:19.576334 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f1c73a78-1e95-4481-a273-ba7e3b5a127c-utilities" (OuterVolumeSpecName: "utilities") pod "f1c73a78-1e95-4481-a273-ba7e3b5a127c" (UID: "f1c73a78-1e95-4481-a273-ba7e3b5a127c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 17:42:19 crc kubenswrapper[4830]: I0227 17:42:19.595341 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1c73a78-1e95-4481-a273-ba7e3b5a127c-kube-api-access-m48cm" (OuterVolumeSpecName: "kube-api-access-m48cm") pod "f1c73a78-1e95-4481-a273-ba7e3b5a127c" (UID: "f1c73a78-1e95-4481-a273-ba7e3b5a127c"). InnerVolumeSpecName "kube-api-access-m48cm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:42:19 crc kubenswrapper[4830]: I0227 17:42:19.612067 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f1c73a78-1e95-4481-a273-ba7e3b5a127c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f1c73a78-1e95-4481-a273-ba7e3b5a127c" (UID: "f1c73a78-1e95-4481-a273-ba7e3b5a127c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 17:42:19 crc kubenswrapper[4830]: I0227 17:42:19.677272 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m48cm\" (UniqueName: \"kubernetes.io/projected/f1c73a78-1e95-4481-a273-ba7e3b5a127c-kube-api-access-m48cm\") on node \"crc\" DevicePath \"\"" Feb 27 17:42:19 crc kubenswrapper[4830]: I0227 17:42:19.677485 4830 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1c73a78-1e95-4481-a273-ba7e3b5a127c-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 17:42:19 crc kubenswrapper[4830]: I0227 17:42:19.677548 4830 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1c73a78-1e95-4481-a273-ba7e3b5a127c-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 17:42:19 crc kubenswrapper[4830]: I0227 17:42:19.908959 4830 generic.go:334] "Generic (PLEG): container finished" podID="f1c73a78-1e95-4481-a273-ba7e3b5a127c" containerID="53b353c0a20292fc12677b6363d0bb1821eaa21e59ec49846095a0fdf80ef7c7" exitCode=0 Feb 27 17:42:19 crc kubenswrapper[4830]: I0227 17:42:19.909041 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gvnvz" Feb 27 17:42:19 crc kubenswrapper[4830]: I0227 17:42:19.909087 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gvnvz" event={"ID":"f1c73a78-1e95-4481-a273-ba7e3b5a127c","Type":"ContainerDied","Data":"53b353c0a20292fc12677b6363d0bb1821eaa21e59ec49846095a0fdf80ef7c7"} Feb 27 17:42:19 crc kubenswrapper[4830]: I0227 17:42:19.909123 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gvnvz" event={"ID":"f1c73a78-1e95-4481-a273-ba7e3b5a127c","Type":"ContainerDied","Data":"8067f7bc39866687dce562e122ca85297eea79801f1d919ce1d8cf42af4d53c7"} Feb 27 17:42:19 crc kubenswrapper[4830]: I0227 17:42:19.909141 4830 scope.go:117] "RemoveContainer" containerID="53b353c0a20292fc12677b6363d0bb1821eaa21e59ec49846095a0fdf80ef7c7" Feb 27 17:42:19 crc kubenswrapper[4830]: I0227 17:42:19.950895 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gvnvz"] Feb 27 17:42:19 crc kubenswrapper[4830]: I0227 17:42:19.952143 4830 scope.go:117] "RemoveContainer" containerID="8aa3cf10641c7a30956ffbf83f1f1ebb51706bb6d279b83de759068d9f24a02f" Feb 27 17:42:19 crc kubenswrapper[4830]: I0227 17:42:19.971653 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-gvnvz"] Feb 27 17:42:20 crc kubenswrapper[4830]: I0227 17:42:20.002148 4830 scope.go:117] "RemoveContainer" containerID="706b3557b618cda4f51cdbcde480fe025d87a38446876837cc03418c665b3fc5" Feb 27 17:42:20 crc kubenswrapper[4830]: I0227 17:42:20.031204 4830 scope.go:117] "RemoveContainer" containerID="53b353c0a20292fc12677b6363d0bb1821eaa21e59ec49846095a0fdf80ef7c7" Feb 27 17:42:20 crc kubenswrapper[4830]: E0227 17:42:20.032164 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"53b353c0a20292fc12677b6363d0bb1821eaa21e59ec49846095a0fdf80ef7c7\": container with ID starting with 53b353c0a20292fc12677b6363d0bb1821eaa21e59ec49846095a0fdf80ef7c7 not found: ID does not exist" containerID="53b353c0a20292fc12677b6363d0bb1821eaa21e59ec49846095a0fdf80ef7c7" Feb 27 17:42:20 crc kubenswrapper[4830]: I0227 17:42:20.032206 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"53b353c0a20292fc12677b6363d0bb1821eaa21e59ec49846095a0fdf80ef7c7"} err="failed to get container status \"53b353c0a20292fc12677b6363d0bb1821eaa21e59ec49846095a0fdf80ef7c7\": rpc error: code = NotFound desc = could not find container \"53b353c0a20292fc12677b6363d0bb1821eaa21e59ec49846095a0fdf80ef7c7\": container with ID starting with 53b353c0a20292fc12677b6363d0bb1821eaa21e59ec49846095a0fdf80ef7c7 not found: ID does not exist" Feb 27 17:42:20 crc kubenswrapper[4830]: I0227 17:42:20.032232 4830 scope.go:117] "RemoveContainer" containerID="8aa3cf10641c7a30956ffbf83f1f1ebb51706bb6d279b83de759068d9f24a02f" Feb 27 17:42:20 crc kubenswrapper[4830]: E0227 17:42:20.033512 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8aa3cf10641c7a30956ffbf83f1f1ebb51706bb6d279b83de759068d9f24a02f\": container with ID starting with 8aa3cf10641c7a30956ffbf83f1f1ebb51706bb6d279b83de759068d9f24a02f not found: ID does not exist" containerID="8aa3cf10641c7a30956ffbf83f1f1ebb51706bb6d279b83de759068d9f24a02f" Feb 27 17:42:20 crc kubenswrapper[4830]: I0227 17:42:20.033544 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8aa3cf10641c7a30956ffbf83f1f1ebb51706bb6d279b83de759068d9f24a02f"} err="failed to get container status \"8aa3cf10641c7a30956ffbf83f1f1ebb51706bb6d279b83de759068d9f24a02f\": rpc error: code = NotFound desc = could not find container \"8aa3cf10641c7a30956ffbf83f1f1ebb51706bb6d279b83de759068d9f24a02f\": container with ID 
starting with 8aa3cf10641c7a30956ffbf83f1f1ebb51706bb6d279b83de759068d9f24a02f not found: ID does not exist" Feb 27 17:42:20 crc kubenswrapper[4830]: I0227 17:42:20.033561 4830 scope.go:117] "RemoveContainer" containerID="706b3557b618cda4f51cdbcde480fe025d87a38446876837cc03418c665b3fc5" Feb 27 17:42:20 crc kubenswrapper[4830]: E0227 17:42:20.035295 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"706b3557b618cda4f51cdbcde480fe025d87a38446876837cc03418c665b3fc5\": container with ID starting with 706b3557b618cda4f51cdbcde480fe025d87a38446876837cc03418c665b3fc5 not found: ID does not exist" containerID="706b3557b618cda4f51cdbcde480fe025d87a38446876837cc03418c665b3fc5" Feb 27 17:42:20 crc kubenswrapper[4830]: I0227 17:42:20.035354 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"706b3557b618cda4f51cdbcde480fe025d87a38446876837cc03418c665b3fc5"} err="failed to get container status \"706b3557b618cda4f51cdbcde480fe025d87a38446876837cc03418c665b3fc5\": rpc error: code = NotFound desc = could not find container \"706b3557b618cda4f51cdbcde480fe025d87a38446876837cc03418c665b3fc5\": container with ID starting with 706b3557b618cda4f51cdbcde480fe025d87a38446876837cc03418c665b3fc5 not found: ID does not exist" Feb 27 17:42:20 crc kubenswrapper[4830]: I0227 17:42:20.779373 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1c73a78-1e95-4481-a273-ba7e3b5a127c" path="/var/lib/kubelet/pods/f1c73a78-1e95-4481-a273-ba7e3b5a127c/volumes" Feb 27 17:42:22 crc kubenswrapper[4830]: E0227 17:42:22.765246 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536898-vrwjs" podUID="204eb1af-36ad-4de7-9da7-9a37fefd3026" Feb 27 17:42:31 crc 
kubenswrapper[4830]: E0227 17:42:31.765354 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536902-2942n" podUID="5ec03666-94da-435d-bfc4-5b7f8ed237b2" Feb 27 17:42:33 crc kubenswrapper[4830]: I0227 17:42:33.160483 4830 patch_prober.go:28] interesting pod/machine-config-daemon-2tv5v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 17:42:33 crc kubenswrapper[4830]: I0227 17:42:33.161054 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 17:42:33 crc kubenswrapper[4830]: I0227 17:42:33.161104 4830 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" Feb 27 17:42:33 crc kubenswrapper[4830]: I0227 17:42:33.161845 4830 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0f73af46b765b3d98f3f5e9883b49887ac4f2ac3485e91a001e18c5d351646d8"} pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 27 17:42:33 crc kubenswrapper[4830]: I0227 17:42:33.161889 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" 
containerName="machine-config-daemon" containerID="cri-o://0f73af46b765b3d98f3f5e9883b49887ac4f2ac3485e91a001e18c5d351646d8" gracePeriod=600 Feb 27 17:42:33 crc kubenswrapper[4830]: E0227 17:42:33.312234 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 17:42:34 crc kubenswrapper[4830]: I0227 17:42:34.087105 4830 generic.go:334] "Generic (PLEG): container finished" podID="00d6b7ce-4757-4275-8345-60c1b546ce25" containerID="0f73af46b765b3d98f3f5e9883b49887ac4f2ac3485e91a001e18c5d351646d8" exitCode=0 Feb 27 17:42:34 crc kubenswrapper[4830]: I0227 17:42:34.087178 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" event={"ID":"00d6b7ce-4757-4275-8345-60c1b546ce25","Type":"ContainerDied","Data":"0f73af46b765b3d98f3f5e9883b49887ac4f2ac3485e91a001e18c5d351646d8"} Feb 27 17:42:34 crc kubenswrapper[4830]: I0227 17:42:34.087245 4830 scope.go:117] "RemoveContainer" containerID="22fbcacd37ad840c90f07fc1e16c44d308f846d0fbace0b7a3cfa023009541af" Feb 27 17:42:34 crc kubenswrapper[4830]: I0227 17:42:34.088116 4830 scope.go:117] "RemoveContainer" containerID="0f73af46b765b3d98f3f5e9883b49887ac4f2ac3485e91a001e18c5d351646d8" Feb 27 17:42:34 crc kubenswrapper[4830]: E0227 17:42:34.088467 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 17:42:36 crc kubenswrapper[4830]: E0227 17:42:36.771844 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536898-vrwjs" podUID="204eb1af-36ad-4de7-9da7-9a37fefd3026" Feb 27 17:42:44 crc kubenswrapper[4830]: I0227 17:42:44.428906 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-dzdtc"] Feb 27 17:42:44 crc kubenswrapper[4830]: E0227 17:42:44.429872 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1c73a78-1e95-4481-a273-ba7e3b5a127c" containerName="registry-server" Feb 27 17:42:44 crc kubenswrapper[4830]: I0227 17:42:44.429886 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1c73a78-1e95-4481-a273-ba7e3b5a127c" containerName="registry-server" Feb 27 17:42:44 crc kubenswrapper[4830]: E0227 17:42:44.429902 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1c73a78-1e95-4481-a273-ba7e3b5a127c" containerName="extract-utilities" Feb 27 17:42:44 crc kubenswrapper[4830]: I0227 17:42:44.429908 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1c73a78-1e95-4481-a273-ba7e3b5a127c" containerName="extract-utilities" Feb 27 17:42:44 crc kubenswrapper[4830]: E0227 17:42:44.429933 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1c73a78-1e95-4481-a273-ba7e3b5a127c" containerName="extract-content" Feb 27 17:42:44 crc kubenswrapper[4830]: I0227 17:42:44.429940 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1c73a78-1e95-4481-a273-ba7e3b5a127c" containerName="extract-content" Feb 27 17:42:44 crc kubenswrapper[4830]: I0227 17:42:44.430098 4830 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="f1c73a78-1e95-4481-a273-ba7e3b5a127c" containerName="registry-server" Feb 27 17:42:44 crc kubenswrapper[4830]: I0227 17:42:44.430656 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-dzdtc" Feb 27 17:42:44 crc kubenswrapper[4830]: I0227 17:42:44.439211 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-dzdtc"] Feb 27 17:42:44 crc kubenswrapper[4830]: I0227 17:42:44.466787 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1476f120-cb3a-4ddb-8876-14c9cd912d49-operator-scripts\") pod \"nova-api-db-create-dzdtc\" (UID: \"1476f120-cb3a-4ddb-8876-14c9cd912d49\") " pod="openstack/nova-api-db-create-dzdtc" Feb 27 17:42:44 crc kubenswrapper[4830]: I0227 17:42:44.466823 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxhcd\" (UniqueName: \"kubernetes.io/projected/1476f120-cb3a-4ddb-8876-14c9cd912d49-kube-api-access-dxhcd\") pod \"nova-api-db-create-dzdtc\" (UID: \"1476f120-cb3a-4ddb-8876-14c9cd912d49\") " pod="openstack/nova-api-db-create-dzdtc" Feb 27 17:42:44 crc kubenswrapper[4830]: I0227 17:42:44.532022 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-7npc7"] Feb 27 17:42:44 crc kubenswrapper[4830]: I0227 17:42:44.533057 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-7npc7" Feb 27 17:42:44 crc kubenswrapper[4830]: I0227 17:42:44.542035 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-7npc7"] Feb 27 17:42:44 crc kubenswrapper[4830]: I0227 17:42:44.568606 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gkx9\" (UniqueName: \"kubernetes.io/projected/efe5a2c2-2f81-419f-ba45-287441964844-kube-api-access-4gkx9\") pod \"nova-cell0-db-create-7npc7\" (UID: \"efe5a2c2-2f81-419f-ba45-287441964844\") " pod="openstack/nova-cell0-db-create-7npc7" Feb 27 17:42:44 crc kubenswrapper[4830]: I0227 17:42:44.568842 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1476f120-cb3a-4ddb-8876-14c9cd912d49-operator-scripts\") pod \"nova-api-db-create-dzdtc\" (UID: \"1476f120-cb3a-4ddb-8876-14c9cd912d49\") " pod="openstack/nova-api-db-create-dzdtc" Feb 27 17:42:44 crc kubenswrapper[4830]: I0227 17:42:44.568892 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxhcd\" (UniqueName: \"kubernetes.io/projected/1476f120-cb3a-4ddb-8876-14c9cd912d49-kube-api-access-dxhcd\") pod \"nova-api-db-create-dzdtc\" (UID: \"1476f120-cb3a-4ddb-8876-14c9cd912d49\") " pod="openstack/nova-api-db-create-dzdtc" Feb 27 17:42:44 crc kubenswrapper[4830]: I0227 17:42:44.569126 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/efe5a2c2-2f81-419f-ba45-287441964844-operator-scripts\") pod \"nova-cell0-db-create-7npc7\" (UID: \"efe5a2c2-2f81-419f-ba45-287441964844\") " pod="openstack/nova-cell0-db-create-7npc7" Feb 27 17:42:44 crc kubenswrapper[4830]: I0227 17:42:44.569799 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" 
(UniqueName: \"kubernetes.io/configmap/1476f120-cb3a-4ddb-8876-14c9cd912d49-operator-scripts\") pod \"nova-api-db-create-dzdtc\" (UID: \"1476f120-cb3a-4ddb-8876-14c9cd912d49\") " pod="openstack/nova-api-db-create-dzdtc" Feb 27 17:42:44 crc kubenswrapper[4830]: I0227 17:42:44.588463 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxhcd\" (UniqueName: \"kubernetes.io/projected/1476f120-cb3a-4ddb-8876-14c9cd912d49-kube-api-access-dxhcd\") pod \"nova-api-db-create-dzdtc\" (UID: \"1476f120-cb3a-4ddb-8876-14c9cd912d49\") " pod="openstack/nova-api-db-create-dzdtc" Feb 27 17:42:44 crc kubenswrapper[4830]: I0227 17:42:44.635241 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0173-account-create-update-vcvw8"] Feb 27 17:42:44 crc kubenswrapper[4830]: I0227 17:42:44.636344 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0173-account-create-update-vcvw8" Feb 27 17:42:44 crc kubenswrapper[4830]: I0227 17:42:44.638459 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Feb 27 17:42:44 crc kubenswrapper[4830]: I0227 17:42:44.658688 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0173-account-create-update-vcvw8"] Feb 27 17:42:44 crc kubenswrapper[4830]: I0227 17:42:44.670856 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbg6b\" (UniqueName: \"kubernetes.io/projected/faee69e3-9f85-4d66-91c8-76e6888f678c-kube-api-access-xbg6b\") pod \"nova-api-0173-account-create-update-vcvw8\" (UID: \"faee69e3-9f85-4d66-91c8-76e6888f678c\") " pod="openstack/nova-api-0173-account-create-update-vcvw8" Feb 27 17:42:44 crc kubenswrapper[4830]: I0227 17:42:44.670940 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/efe5a2c2-2f81-419f-ba45-287441964844-operator-scripts\") pod \"nova-cell0-db-create-7npc7\" (UID: \"efe5a2c2-2f81-419f-ba45-287441964844\") " pod="openstack/nova-cell0-db-create-7npc7" Feb 27 17:42:44 crc kubenswrapper[4830]: I0227 17:42:44.671329 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4gkx9\" (UniqueName: \"kubernetes.io/projected/efe5a2c2-2f81-419f-ba45-287441964844-kube-api-access-4gkx9\") pod \"nova-cell0-db-create-7npc7\" (UID: \"efe5a2c2-2f81-419f-ba45-287441964844\") " pod="openstack/nova-cell0-db-create-7npc7" Feb 27 17:42:44 crc kubenswrapper[4830]: I0227 17:42:44.671537 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/faee69e3-9f85-4d66-91c8-76e6888f678c-operator-scripts\") pod \"nova-api-0173-account-create-update-vcvw8\" (UID: \"faee69e3-9f85-4d66-91c8-76e6888f678c\") " pod="openstack/nova-api-0173-account-create-update-vcvw8" Feb 27 17:42:44 crc kubenswrapper[4830]: I0227 17:42:44.671671 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/efe5a2c2-2f81-419f-ba45-287441964844-operator-scripts\") pod \"nova-cell0-db-create-7npc7\" (UID: \"efe5a2c2-2f81-419f-ba45-287441964844\") " pod="openstack/nova-cell0-db-create-7npc7" Feb 27 17:42:44 crc kubenswrapper[4830]: I0227 17:42:44.691577 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4gkx9\" (UniqueName: \"kubernetes.io/projected/efe5a2c2-2f81-419f-ba45-287441964844-kube-api-access-4gkx9\") pod \"nova-cell0-db-create-7npc7\" (UID: \"efe5a2c2-2f81-419f-ba45-287441964844\") " pod="openstack/nova-cell0-db-create-7npc7" Feb 27 17:42:44 crc kubenswrapper[4830]: I0227 17:42:44.776969 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xbg6b\" 
(UniqueName: \"kubernetes.io/projected/faee69e3-9f85-4d66-91c8-76e6888f678c-kube-api-access-xbg6b\") pod \"nova-api-0173-account-create-update-vcvw8\" (UID: \"faee69e3-9f85-4d66-91c8-76e6888f678c\") " pod="openstack/nova-api-0173-account-create-update-vcvw8" Feb 27 17:42:44 crc kubenswrapper[4830]: I0227 17:42:44.777161 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/faee69e3-9f85-4d66-91c8-76e6888f678c-operator-scripts\") pod \"nova-api-0173-account-create-update-vcvw8\" (UID: \"faee69e3-9f85-4d66-91c8-76e6888f678c\") " pod="openstack/nova-api-0173-account-create-update-vcvw8" Feb 27 17:42:44 crc kubenswrapper[4830]: I0227 17:42:44.777922 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/faee69e3-9f85-4d66-91c8-76e6888f678c-operator-scripts\") pod \"nova-api-0173-account-create-update-vcvw8\" (UID: \"faee69e3-9f85-4d66-91c8-76e6888f678c\") " pod="openstack/nova-api-0173-account-create-update-vcvw8" Feb 27 17:42:44 crc kubenswrapper[4830]: I0227 17:42:44.792710 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-dzdtc" Feb 27 17:42:44 crc kubenswrapper[4830]: I0227 17:42:44.862927 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-7npc7" Feb 27 17:42:44 crc kubenswrapper[4830]: I0227 17:42:44.878642 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xbg6b\" (UniqueName: \"kubernetes.io/projected/faee69e3-9f85-4d66-91c8-76e6888f678c-kube-api-access-xbg6b\") pod \"nova-api-0173-account-create-update-vcvw8\" (UID: \"faee69e3-9f85-4d66-91c8-76e6888f678c\") " pod="openstack/nova-api-0173-account-create-update-vcvw8" Feb 27 17:42:44 crc kubenswrapper[4830]: I0227 17:42:44.890700 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-99dgz"] Feb 27 17:42:44 crc kubenswrapper[4830]: I0227 17:42:44.892435 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-99dgz"] Feb 27 17:42:44 crc kubenswrapper[4830]: I0227 17:42:44.892530 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-99dgz" Feb 27 17:42:44 crc kubenswrapper[4830]: I0227 17:42:44.898134 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/54fa8c61-cab3-4696-93d5-32120c184f0b-operator-scripts\") pod \"nova-cell1-db-create-99dgz\" (UID: \"54fa8c61-cab3-4696-93d5-32120c184f0b\") " pod="openstack/nova-cell1-db-create-99dgz" Feb 27 17:42:44 crc kubenswrapper[4830]: I0227 17:42:44.898202 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhtbs\" (UniqueName: \"kubernetes.io/projected/54fa8c61-cab3-4696-93d5-32120c184f0b-kube-api-access-hhtbs\") pod \"nova-cell1-db-create-99dgz\" (UID: \"54fa8c61-cab3-4696-93d5-32120c184f0b\") " pod="openstack/nova-cell1-db-create-99dgz" Feb 27 17:42:44 crc kubenswrapper[4830]: I0227 17:42:44.926065 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-43a2-account-create-update-wd25q"] Feb 
27 17:42:44 crc kubenswrapper[4830]: I0227 17:42:44.927390 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-43a2-account-create-update-wd25q" Feb 27 17:42:44 crc kubenswrapper[4830]: I0227 17:42:44.936218 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-43a2-account-create-update-wd25q"] Feb 27 17:42:44 crc kubenswrapper[4830]: I0227 17:42:44.940716 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Feb 27 17:42:44 crc kubenswrapper[4830]: I0227 17:42:44.953060 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0173-account-create-update-vcvw8" Feb 27 17:42:44 crc kubenswrapper[4830]: I0227 17:42:44.998615 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/54fa8c61-cab3-4696-93d5-32120c184f0b-operator-scripts\") pod \"nova-cell1-db-create-99dgz\" (UID: \"54fa8c61-cab3-4696-93d5-32120c184f0b\") " pod="openstack/nova-cell1-db-create-99dgz" Feb 27 17:42:44 crc kubenswrapper[4830]: I0227 17:42:44.998683 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hhtbs\" (UniqueName: \"kubernetes.io/projected/54fa8c61-cab3-4696-93d5-32120c184f0b-kube-api-access-hhtbs\") pod \"nova-cell1-db-create-99dgz\" (UID: \"54fa8c61-cab3-4696-93d5-32120c184f0b\") " pod="openstack/nova-cell1-db-create-99dgz" Feb 27 17:42:44 crc kubenswrapper[4830]: I0227 17:42:44.998717 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/458a8af3-6366-427a-8641-9b5014271de7-operator-scripts\") pod \"nova-cell0-43a2-account-create-update-wd25q\" (UID: \"458a8af3-6366-427a-8641-9b5014271de7\") " pod="openstack/nova-cell0-43a2-account-create-update-wd25q" Feb 27 17:42:44 crc 
kubenswrapper[4830]: I0227 17:42:44.998816 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n28tw\" (UniqueName: \"kubernetes.io/projected/458a8af3-6366-427a-8641-9b5014271de7-kube-api-access-n28tw\") pod \"nova-cell0-43a2-account-create-update-wd25q\" (UID: \"458a8af3-6366-427a-8641-9b5014271de7\") " pod="openstack/nova-cell0-43a2-account-create-update-wd25q" Feb 27 17:42:44 crc kubenswrapper[4830]: I0227 17:42:44.999747 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/54fa8c61-cab3-4696-93d5-32120c184f0b-operator-scripts\") pod \"nova-cell1-db-create-99dgz\" (UID: \"54fa8c61-cab3-4696-93d5-32120c184f0b\") " pod="openstack/nova-cell1-db-create-99dgz" Feb 27 17:42:45 crc kubenswrapper[4830]: I0227 17:42:45.021798 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hhtbs\" (UniqueName: \"kubernetes.io/projected/54fa8c61-cab3-4696-93d5-32120c184f0b-kube-api-access-hhtbs\") pod \"nova-cell1-db-create-99dgz\" (UID: \"54fa8c61-cab3-4696-93d5-32120c184f0b\") " pod="openstack/nova-cell1-db-create-99dgz" Feb 27 17:42:45 crc kubenswrapper[4830]: I0227 17:42:45.057371 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-411a-account-create-update-qrfdh"] Feb 27 17:42:45 crc kubenswrapper[4830]: I0227 17:42:45.058873 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-411a-account-create-update-qrfdh" Feb 27 17:42:45 crc kubenswrapper[4830]: I0227 17:42:45.065799 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Feb 27 17:42:45 crc kubenswrapper[4830]: I0227 17:42:45.084289 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-411a-account-create-update-qrfdh"] Feb 27 17:42:45 crc kubenswrapper[4830]: I0227 17:42:45.101521 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3d43b6d8-d47a-4b6e-8dbb-27a222cd971f-operator-scripts\") pod \"nova-cell1-411a-account-create-update-qrfdh\" (UID: \"3d43b6d8-d47a-4b6e-8dbb-27a222cd971f\") " pod="openstack/nova-cell1-411a-account-create-update-qrfdh" Feb 27 17:42:45 crc kubenswrapper[4830]: I0227 17:42:45.101579 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/458a8af3-6366-427a-8641-9b5014271de7-operator-scripts\") pod \"nova-cell0-43a2-account-create-update-wd25q\" (UID: \"458a8af3-6366-427a-8641-9b5014271de7\") " pod="openstack/nova-cell0-43a2-account-create-update-wd25q" Feb 27 17:42:45 crc kubenswrapper[4830]: I0227 17:42:45.101640 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mp62n\" (UniqueName: \"kubernetes.io/projected/3d43b6d8-d47a-4b6e-8dbb-27a222cd971f-kube-api-access-mp62n\") pod \"nova-cell1-411a-account-create-update-qrfdh\" (UID: \"3d43b6d8-d47a-4b6e-8dbb-27a222cd971f\") " pod="openstack/nova-cell1-411a-account-create-update-qrfdh" Feb 27 17:42:45 crc kubenswrapper[4830]: I0227 17:42:45.101708 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n28tw\" (UniqueName: 
\"kubernetes.io/projected/458a8af3-6366-427a-8641-9b5014271de7-kube-api-access-n28tw\") pod \"nova-cell0-43a2-account-create-update-wd25q\" (UID: \"458a8af3-6366-427a-8641-9b5014271de7\") " pod="openstack/nova-cell0-43a2-account-create-update-wd25q" Feb 27 17:42:45 crc kubenswrapper[4830]: I0227 17:42:45.104448 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/458a8af3-6366-427a-8641-9b5014271de7-operator-scripts\") pod \"nova-cell0-43a2-account-create-update-wd25q\" (UID: \"458a8af3-6366-427a-8641-9b5014271de7\") " pod="openstack/nova-cell0-43a2-account-create-update-wd25q" Feb 27 17:42:45 crc kubenswrapper[4830]: I0227 17:42:45.139907 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n28tw\" (UniqueName: \"kubernetes.io/projected/458a8af3-6366-427a-8641-9b5014271de7-kube-api-access-n28tw\") pod \"nova-cell0-43a2-account-create-update-wd25q\" (UID: \"458a8af3-6366-427a-8641-9b5014271de7\") " pod="openstack/nova-cell0-43a2-account-create-update-wd25q" Feb 27 17:42:45 crc kubenswrapper[4830]: I0227 17:42:45.203239 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3d43b6d8-d47a-4b6e-8dbb-27a222cd971f-operator-scripts\") pod \"nova-cell1-411a-account-create-update-qrfdh\" (UID: \"3d43b6d8-d47a-4b6e-8dbb-27a222cd971f\") " pod="openstack/nova-cell1-411a-account-create-update-qrfdh" Feb 27 17:42:45 crc kubenswrapper[4830]: I0227 17:42:45.204758 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3d43b6d8-d47a-4b6e-8dbb-27a222cd971f-operator-scripts\") pod \"nova-cell1-411a-account-create-update-qrfdh\" (UID: \"3d43b6d8-d47a-4b6e-8dbb-27a222cd971f\") " pod="openstack/nova-cell1-411a-account-create-update-qrfdh" Feb 27 17:42:45 crc kubenswrapper[4830]: I0227 17:42:45.204894 4830 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mp62n\" (UniqueName: \"kubernetes.io/projected/3d43b6d8-d47a-4b6e-8dbb-27a222cd971f-kube-api-access-mp62n\") pod \"nova-cell1-411a-account-create-update-qrfdh\" (UID: \"3d43b6d8-d47a-4b6e-8dbb-27a222cd971f\") " pod="openstack/nova-cell1-411a-account-create-update-qrfdh" Feb 27 17:42:45 crc kubenswrapper[4830]: I0227 17:42:45.224588 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mp62n\" (UniqueName: \"kubernetes.io/projected/3d43b6d8-d47a-4b6e-8dbb-27a222cd971f-kube-api-access-mp62n\") pod \"nova-cell1-411a-account-create-update-qrfdh\" (UID: \"3d43b6d8-d47a-4b6e-8dbb-27a222cd971f\") " pod="openstack/nova-cell1-411a-account-create-update-qrfdh" Feb 27 17:42:45 crc kubenswrapper[4830]: I0227 17:42:45.237829 4830 generic.go:334] "Generic (PLEG): container finished" podID="5ec03666-94da-435d-bfc4-5b7f8ed237b2" containerID="5a33dc38119460dac374b266d5f931d6a5fc8cd244d372f370999214ad65d58f" exitCode=0 Feb 27 17:42:45 crc kubenswrapper[4830]: I0227 17:42:45.237881 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536902-2942n" event={"ID":"5ec03666-94da-435d-bfc4-5b7f8ed237b2","Type":"ContainerDied","Data":"5a33dc38119460dac374b266d5f931d6a5fc8cd244d372f370999214ad65d58f"} Feb 27 17:42:45 crc kubenswrapper[4830]: I0227 17:42:45.257651 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-99dgz" Feb 27 17:42:45 crc kubenswrapper[4830]: I0227 17:42:45.268849 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-43a2-account-create-update-wd25q" Feb 27 17:42:45 crc kubenswrapper[4830]: I0227 17:42:45.459583 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-411a-account-create-update-qrfdh" Feb 27 17:42:45 crc kubenswrapper[4830]: I0227 17:42:45.496673 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-dzdtc"] Feb 27 17:42:45 crc kubenswrapper[4830]: W0227 17:42:45.501773 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1476f120_cb3a_4ddb_8876_14c9cd912d49.slice/crio-26dbb0033fc728c77c7909b4307d8f8857f270022ce722160f3b8ec00eed239b WatchSource:0}: Error finding container 26dbb0033fc728c77c7909b4307d8f8857f270022ce722160f3b8ec00eed239b: Status 404 returned error can't find the container with id 26dbb0033fc728c77c7909b4307d8f8857f270022ce722160f3b8ec00eed239b Feb 27 17:42:45 crc kubenswrapper[4830]: I0227 17:42:45.558889 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-7npc7"] Feb 27 17:42:45 crc kubenswrapper[4830]: I0227 17:42:45.627475 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0173-account-create-update-vcvw8"] Feb 27 17:42:45 crc kubenswrapper[4830]: W0227 17:42:45.635396 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfaee69e3_9f85_4d66_91c8_76e6888f678c.slice/crio-a8c8c94f2e04315d54bcff18a994538318af5b0f72775201875f4d078d2aa8bf WatchSource:0}: Error finding container a8c8c94f2e04315d54bcff18a994538318af5b0f72775201875f4d078d2aa8bf: Status 404 returned error can't find the container with id a8c8c94f2e04315d54bcff18a994538318af5b0f72775201875f4d078d2aa8bf Feb 27 17:42:45 crc kubenswrapper[4830]: I0227 17:42:45.758161 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-99dgz"] Feb 27 17:42:45 crc kubenswrapper[4830]: W0227 17:42:45.773563 4830 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod54fa8c61_cab3_4696_93d5_32120c184f0b.slice/crio-7ae43fb35a5d5ffce8d6140d52787e8f63808c89e64536ab0e4e839e8b3172f0 WatchSource:0}: Error finding container 7ae43fb35a5d5ffce8d6140d52787e8f63808c89e64536ab0e4e839e8b3172f0: Status 404 returned error can't find the container with id 7ae43fb35a5d5ffce8d6140d52787e8f63808c89e64536ab0e4e839e8b3172f0 Feb 27 17:42:45 crc kubenswrapper[4830]: I0227 17:42:45.857935 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-43a2-account-create-update-wd25q"] Feb 27 17:42:45 crc kubenswrapper[4830]: W0227 17:42:45.862889 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod458a8af3_6366_427a_8641_9b5014271de7.slice/crio-19b7e603325dc9ecc5a16a926b5e5f426fe6b04b33438064a87e70fe468b180c WatchSource:0}: Error finding container 19b7e603325dc9ecc5a16a926b5e5f426fe6b04b33438064a87e70fe468b180c: Status 404 returned error can't find the container with id 19b7e603325dc9ecc5a16a926b5e5f426fe6b04b33438064a87e70fe468b180c Feb 27 17:42:46 crc kubenswrapper[4830]: I0227 17:42:46.010681 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-411a-account-create-update-qrfdh"] Feb 27 17:42:46 crc kubenswrapper[4830]: W0227 17:42:46.040116 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3d43b6d8_d47a_4b6e_8dbb_27a222cd971f.slice/crio-0850b38b74f68fc521cdd5bd63802a2459c541efc8cd7bcc0e7deec0357241e5 WatchSource:0}: Error finding container 0850b38b74f68fc521cdd5bd63802a2459c541efc8cd7bcc0e7deec0357241e5: Status 404 returned error can't find the container with id 0850b38b74f68fc521cdd5bd63802a2459c541efc8cd7bcc0e7deec0357241e5 Feb 27 17:42:46 crc kubenswrapper[4830]: I0227 17:42:46.249840 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell1-db-create-99dgz" event={"ID":"54fa8c61-cab3-4696-93d5-32120c184f0b","Type":"ContainerStarted","Data":"a80924d0975c926796267ebea18562103390ff8a948cc38ff6e01a9908d57e50"} Feb 27 17:42:46 crc kubenswrapper[4830]: I0227 17:42:46.249890 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-99dgz" event={"ID":"54fa8c61-cab3-4696-93d5-32120c184f0b","Type":"ContainerStarted","Data":"7ae43fb35a5d5ffce8d6140d52787e8f63808c89e64536ab0e4e839e8b3172f0"} Feb 27 17:42:46 crc kubenswrapper[4830]: I0227 17:42:46.252738 4830 generic.go:334] "Generic (PLEG): container finished" podID="faee69e3-9f85-4d66-91c8-76e6888f678c" containerID="e14be69ff82f403db9606cf6289c49390341a1eaafabf98bc23c6d11df3670b3" exitCode=0 Feb 27 17:42:46 crc kubenswrapper[4830]: I0227 17:42:46.252800 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0173-account-create-update-vcvw8" event={"ID":"faee69e3-9f85-4d66-91c8-76e6888f678c","Type":"ContainerDied","Data":"e14be69ff82f403db9606cf6289c49390341a1eaafabf98bc23c6d11df3670b3"} Feb 27 17:42:46 crc kubenswrapper[4830]: I0227 17:42:46.252820 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0173-account-create-update-vcvw8" event={"ID":"faee69e3-9f85-4d66-91c8-76e6888f678c","Type":"ContainerStarted","Data":"a8c8c94f2e04315d54bcff18a994538318af5b0f72775201875f4d078d2aa8bf"} Feb 27 17:42:46 crc kubenswrapper[4830]: I0227 17:42:46.255148 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-411a-account-create-update-qrfdh" event={"ID":"3d43b6d8-d47a-4b6e-8dbb-27a222cd971f","Type":"ContainerStarted","Data":"0850b38b74f68fc521cdd5bd63802a2459c541efc8cd7bcc0e7deec0357241e5"} Feb 27 17:42:46 crc kubenswrapper[4830]: I0227 17:42:46.256919 4830 generic.go:334] "Generic (PLEG): container finished" podID="1476f120-cb3a-4ddb-8876-14c9cd912d49" containerID="074b328a067e7c0b45867468df72d820653ebaa0fd03f032bf1952c6e9c5e5b7" 
exitCode=0 Feb 27 17:42:46 crc kubenswrapper[4830]: I0227 17:42:46.256991 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-dzdtc" event={"ID":"1476f120-cb3a-4ddb-8876-14c9cd912d49","Type":"ContainerDied","Data":"074b328a067e7c0b45867468df72d820653ebaa0fd03f032bf1952c6e9c5e5b7"} Feb 27 17:42:46 crc kubenswrapper[4830]: I0227 17:42:46.257012 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-dzdtc" event={"ID":"1476f120-cb3a-4ddb-8876-14c9cd912d49","Type":"ContainerStarted","Data":"26dbb0033fc728c77c7909b4307d8f8857f270022ce722160f3b8ec00eed239b"} Feb 27 17:42:46 crc kubenswrapper[4830]: I0227 17:42:46.258995 4830 generic.go:334] "Generic (PLEG): container finished" podID="efe5a2c2-2f81-419f-ba45-287441964844" containerID="9320716aa73ae27cd08d34aaf0c214120b64afcd8ea2f3012ac92e711ce2a3a6" exitCode=0 Feb 27 17:42:46 crc kubenswrapper[4830]: I0227 17:42:46.259143 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-7npc7" event={"ID":"efe5a2c2-2f81-419f-ba45-287441964844","Type":"ContainerDied","Data":"9320716aa73ae27cd08d34aaf0c214120b64afcd8ea2f3012ac92e711ce2a3a6"} Feb 27 17:42:46 crc kubenswrapper[4830]: I0227 17:42:46.259209 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-7npc7" event={"ID":"efe5a2c2-2f81-419f-ba45-287441964844","Type":"ContainerStarted","Data":"a9797cd1eb6c6aa8f032c414a5c95175fd38777ba18f41e2a20392cfec4eec59"} Feb 27 17:42:46 crc kubenswrapper[4830]: I0227 17:42:46.264982 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-43a2-account-create-update-wd25q" event={"ID":"458a8af3-6366-427a-8641-9b5014271de7","Type":"ContainerStarted","Data":"19b7e603325dc9ecc5a16a926b5e5f426fe6b04b33438064a87e70fe468b180c"} Feb 27 17:42:46 crc kubenswrapper[4830]: I0227 17:42:46.278516 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/nova-cell1-db-create-99dgz" podStartSLOduration=2.278496287 podStartE2EDuration="2.278496287s" podCreationTimestamp="2026-02-27 17:42:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 17:42:46.267027702 +0000 UTC m=+5762.356300165" watchObservedRunningTime="2026-02-27 17:42:46.278496287 +0000 UTC m=+5762.367768750" Feb 27 17:42:46 crc kubenswrapper[4830]: I0227 17:42:46.593046 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536902-2942n" Feb 27 17:42:46 crc kubenswrapper[4830]: I0227 17:42:46.738930 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tnv2t\" (UniqueName: \"kubernetes.io/projected/5ec03666-94da-435d-bfc4-5b7f8ed237b2-kube-api-access-tnv2t\") pod \"5ec03666-94da-435d-bfc4-5b7f8ed237b2\" (UID: \"5ec03666-94da-435d-bfc4-5b7f8ed237b2\") " Feb 27 17:42:46 crc kubenswrapper[4830]: I0227 17:42:46.745820 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ec03666-94da-435d-bfc4-5b7f8ed237b2-kube-api-access-tnv2t" (OuterVolumeSpecName: "kube-api-access-tnv2t") pod "5ec03666-94da-435d-bfc4-5b7f8ed237b2" (UID: "5ec03666-94da-435d-bfc4-5b7f8ed237b2"). InnerVolumeSpecName "kube-api-access-tnv2t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:42:46 crc kubenswrapper[4830]: I0227 17:42:46.843337 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tnv2t\" (UniqueName: \"kubernetes.io/projected/5ec03666-94da-435d-bfc4-5b7f8ed237b2-kube-api-access-tnv2t\") on node \"crc\" DevicePath \"\"" Feb 27 17:42:47 crc kubenswrapper[4830]: I0227 17:42:47.279479 4830 generic.go:334] "Generic (PLEG): container finished" podID="458a8af3-6366-427a-8641-9b5014271de7" containerID="00ce924c2d7d1449257642d31c8cfda7074d454ea033bd9e486dc28819419487" exitCode=0 Feb 27 17:42:47 crc kubenswrapper[4830]: I0227 17:42:47.279613 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-43a2-account-create-update-wd25q" event={"ID":"458a8af3-6366-427a-8641-9b5014271de7","Type":"ContainerDied","Data":"00ce924c2d7d1449257642d31c8cfda7074d454ea033bd9e486dc28819419487"} Feb 27 17:42:47 crc kubenswrapper[4830]: I0227 17:42:47.283688 4830 generic.go:334] "Generic (PLEG): container finished" podID="54fa8c61-cab3-4696-93d5-32120c184f0b" containerID="a80924d0975c926796267ebea18562103390ff8a948cc38ff6e01a9908d57e50" exitCode=0 Feb 27 17:42:47 crc kubenswrapper[4830]: I0227 17:42:47.283815 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-99dgz" event={"ID":"54fa8c61-cab3-4696-93d5-32120c184f0b","Type":"ContainerDied","Data":"a80924d0975c926796267ebea18562103390ff8a948cc38ff6e01a9908d57e50"} Feb 27 17:42:47 crc kubenswrapper[4830]: I0227 17:42:47.291626 4830 generic.go:334] "Generic (PLEG): container finished" podID="3d43b6d8-d47a-4b6e-8dbb-27a222cd971f" containerID="33efaa3263cf90f03be2b5d8ffc0d24676f1e485d67916311511375e814cee21" exitCode=0 Feb 27 17:42:47 crc kubenswrapper[4830]: I0227 17:42:47.291788 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-411a-account-create-update-qrfdh" 
event={"ID":"3d43b6d8-d47a-4b6e-8dbb-27a222cd971f","Type":"ContainerDied","Data":"33efaa3263cf90f03be2b5d8ffc0d24676f1e485d67916311511375e814cee21"} Feb 27 17:42:47 crc kubenswrapper[4830]: I0227 17:42:47.294721 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536902-2942n" event={"ID":"5ec03666-94da-435d-bfc4-5b7f8ed237b2","Type":"ContainerDied","Data":"293ea699f829ab9b72268160e9a9024cd34698c859e79be210273b77834b4810"} Feb 27 17:42:47 crc kubenswrapper[4830]: I0227 17:42:47.294772 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="293ea699f829ab9b72268160e9a9024cd34698c859e79be210273b77834b4810" Feb 27 17:42:47 crc kubenswrapper[4830]: I0227 17:42:47.294817 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536902-2942n" Feb 27 17:42:47 crc kubenswrapper[4830]: I0227 17:42:47.678347 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536894-d74wz"] Feb 27 17:42:47 crc kubenswrapper[4830]: I0227 17:42:47.681813 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-7npc7" Feb 27 17:42:47 crc kubenswrapper[4830]: I0227 17:42:47.689448 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536894-d74wz"] Feb 27 17:42:47 crc kubenswrapper[4830]: I0227 17:42:47.795510 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-dzdtc" Feb 27 17:42:47 crc kubenswrapper[4830]: I0227 17:42:47.801798 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0173-account-create-update-vcvw8" Feb 27 17:42:47 crc kubenswrapper[4830]: I0227 17:42:47.867487 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/efe5a2c2-2f81-419f-ba45-287441964844-operator-scripts\") pod \"efe5a2c2-2f81-419f-ba45-287441964844\" (UID: \"efe5a2c2-2f81-419f-ba45-287441964844\") " Feb 27 17:42:47 crc kubenswrapper[4830]: I0227 17:42:47.867548 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4gkx9\" (UniqueName: \"kubernetes.io/projected/efe5a2c2-2f81-419f-ba45-287441964844-kube-api-access-4gkx9\") pod \"efe5a2c2-2f81-419f-ba45-287441964844\" (UID: \"efe5a2c2-2f81-419f-ba45-287441964844\") " Feb 27 17:42:47 crc kubenswrapper[4830]: I0227 17:42:47.868325 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/efe5a2c2-2f81-419f-ba45-287441964844-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "efe5a2c2-2f81-419f-ba45-287441964844" (UID: "efe5a2c2-2f81-419f-ba45-287441964844"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:42:47 crc kubenswrapper[4830]: I0227 17:42:47.868802 4830 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/efe5a2c2-2f81-419f-ba45-287441964844-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 17:42:47 crc kubenswrapper[4830]: I0227 17:42:47.871765 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efe5a2c2-2f81-419f-ba45-287441964844-kube-api-access-4gkx9" (OuterVolumeSpecName: "kube-api-access-4gkx9") pod "efe5a2c2-2f81-419f-ba45-287441964844" (UID: "efe5a2c2-2f81-419f-ba45-287441964844"). InnerVolumeSpecName "kube-api-access-4gkx9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:42:47 crc kubenswrapper[4830]: I0227 17:42:47.970308 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/faee69e3-9f85-4d66-91c8-76e6888f678c-operator-scripts\") pod \"faee69e3-9f85-4d66-91c8-76e6888f678c\" (UID: \"faee69e3-9f85-4d66-91c8-76e6888f678c\") " Feb 27 17:42:47 crc kubenswrapper[4830]: I0227 17:42:47.970463 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dxhcd\" (UniqueName: \"kubernetes.io/projected/1476f120-cb3a-4ddb-8876-14c9cd912d49-kube-api-access-dxhcd\") pod \"1476f120-cb3a-4ddb-8876-14c9cd912d49\" (UID: \"1476f120-cb3a-4ddb-8876-14c9cd912d49\") " Feb 27 17:42:47 crc kubenswrapper[4830]: I0227 17:42:47.970664 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1476f120-cb3a-4ddb-8876-14c9cd912d49-operator-scripts\") pod \"1476f120-cb3a-4ddb-8876-14c9cd912d49\" (UID: \"1476f120-cb3a-4ddb-8876-14c9cd912d49\") " Feb 27 17:42:47 crc kubenswrapper[4830]: I0227 17:42:47.970713 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xbg6b\" (UniqueName: \"kubernetes.io/projected/faee69e3-9f85-4d66-91c8-76e6888f678c-kube-api-access-xbg6b\") pod \"faee69e3-9f85-4d66-91c8-76e6888f678c\" (UID: \"faee69e3-9f85-4d66-91c8-76e6888f678c\") " Feb 27 17:42:47 crc kubenswrapper[4830]: I0227 17:42:47.970813 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/faee69e3-9f85-4d66-91c8-76e6888f678c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "faee69e3-9f85-4d66-91c8-76e6888f678c" (UID: "faee69e3-9f85-4d66-91c8-76e6888f678c"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:42:47 crc kubenswrapper[4830]: I0227 17:42:47.971147 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4gkx9\" (UniqueName: \"kubernetes.io/projected/efe5a2c2-2f81-419f-ba45-287441964844-kube-api-access-4gkx9\") on node \"crc\" DevicePath \"\"" Feb 27 17:42:47 crc kubenswrapper[4830]: I0227 17:42:47.971181 4830 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/faee69e3-9f85-4d66-91c8-76e6888f678c-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 17:42:47 crc kubenswrapper[4830]: I0227 17:42:47.971563 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1476f120-cb3a-4ddb-8876-14c9cd912d49-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1476f120-cb3a-4ddb-8876-14c9cd912d49" (UID: "1476f120-cb3a-4ddb-8876-14c9cd912d49"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:42:47 crc kubenswrapper[4830]: I0227 17:42:47.973942 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/faee69e3-9f85-4d66-91c8-76e6888f678c-kube-api-access-xbg6b" (OuterVolumeSpecName: "kube-api-access-xbg6b") pod "faee69e3-9f85-4d66-91c8-76e6888f678c" (UID: "faee69e3-9f85-4d66-91c8-76e6888f678c"). InnerVolumeSpecName "kube-api-access-xbg6b". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:42:47 crc kubenswrapper[4830]: I0227 17:42:47.974561 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1476f120-cb3a-4ddb-8876-14c9cd912d49-kube-api-access-dxhcd" (OuterVolumeSpecName: "kube-api-access-dxhcd") pod "1476f120-cb3a-4ddb-8876-14c9cd912d49" (UID: "1476f120-cb3a-4ddb-8876-14c9cd912d49"). InnerVolumeSpecName "kube-api-access-dxhcd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:42:48 crc kubenswrapper[4830]: I0227 17:42:48.073448 4830 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1476f120-cb3a-4ddb-8876-14c9cd912d49-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 17:42:48 crc kubenswrapper[4830]: I0227 17:42:48.073501 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xbg6b\" (UniqueName: \"kubernetes.io/projected/faee69e3-9f85-4d66-91c8-76e6888f678c-kube-api-access-xbg6b\") on node \"crc\" DevicePath \"\"" Feb 27 17:42:48 crc kubenswrapper[4830]: I0227 17:42:48.073520 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dxhcd\" (UniqueName: \"kubernetes.io/projected/1476f120-cb3a-4ddb-8876-14c9cd912d49-kube-api-access-dxhcd\") on node \"crc\" DevicePath \"\"" Feb 27 17:42:48 crc kubenswrapper[4830]: I0227 17:42:48.307717 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0173-account-create-update-vcvw8" event={"ID":"faee69e3-9f85-4d66-91c8-76e6888f678c","Type":"ContainerDied","Data":"a8c8c94f2e04315d54bcff18a994538318af5b0f72775201875f4d078d2aa8bf"} Feb 27 17:42:48 crc kubenswrapper[4830]: I0227 17:42:48.307762 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a8c8c94f2e04315d54bcff18a994538318af5b0f72775201875f4d078d2aa8bf" Feb 27 17:42:48 crc kubenswrapper[4830]: I0227 17:42:48.307800 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0173-account-create-update-vcvw8" Feb 27 17:42:48 crc kubenswrapper[4830]: I0227 17:42:48.310206 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-dzdtc" event={"ID":"1476f120-cb3a-4ddb-8876-14c9cd912d49","Type":"ContainerDied","Data":"26dbb0033fc728c77c7909b4307d8f8857f270022ce722160f3b8ec00eed239b"} Feb 27 17:42:48 crc kubenswrapper[4830]: I0227 17:42:48.310230 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="26dbb0033fc728c77c7909b4307d8f8857f270022ce722160f3b8ec00eed239b" Feb 27 17:42:48 crc kubenswrapper[4830]: I0227 17:42:48.310291 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-dzdtc" Feb 27 17:42:48 crc kubenswrapper[4830]: I0227 17:42:48.313201 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-7npc7" Feb 27 17:42:48 crc kubenswrapper[4830]: I0227 17:42:48.313212 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-7npc7" event={"ID":"efe5a2c2-2f81-419f-ba45-287441964844","Type":"ContainerDied","Data":"a9797cd1eb6c6aa8f032c414a5c95175fd38777ba18f41e2a20392cfec4eec59"} Feb 27 17:42:48 crc kubenswrapper[4830]: I0227 17:42:48.313296 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a9797cd1eb6c6aa8f032c414a5c95175fd38777ba18f41e2a20392cfec4eec59" Feb 27 17:42:48 crc kubenswrapper[4830]: I0227 17:42:48.787879 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d453e6f-c44f-480c-bda1-650c519b749a" path="/var/lib/kubelet/pods/4d453e6f-c44f-480c-bda1-650c519b749a/volumes" Feb 27 17:42:48 crc kubenswrapper[4830]: I0227 17:42:48.800907 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-411a-account-create-update-qrfdh" Feb 27 17:42:48 crc kubenswrapper[4830]: I0227 17:42:48.889237 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-99dgz" Feb 27 17:42:48 crc kubenswrapper[4830]: I0227 17:42:48.918464 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-43a2-account-create-update-wd25q" Feb 27 17:42:48 crc kubenswrapper[4830]: I0227 17:42:48.992285 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/54fa8c61-cab3-4696-93d5-32120c184f0b-operator-scripts\") pod \"54fa8c61-cab3-4696-93d5-32120c184f0b\" (UID: \"54fa8c61-cab3-4696-93d5-32120c184f0b\") " Feb 27 17:42:48 crc kubenswrapper[4830]: I0227 17:42:48.992361 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hhtbs\" (UniqueName: \"kubernetes.io/projected/54fa8c61-cab3-4696-93d5-32120c184f0b-kube-api-access-hhtbs\") pod \"54fa8c61-cab3-4696-93d5-32120c184f0b\" (UID: \"54fa8c61-cab3-4696-93d5-32120c184f0b\") " Feb 27 17:42:48 crc kubenswrapper[4830]: I0227 17:42:48.992410 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mp62n\" (UniqueName: \"kubernetes.io/projected/3d43b6d8-d47a-4b6e-8dbb-27a222cd971f-kube-api-access-mp62n\") pod \"3d43b6d8-d47a-4b6e-8dbb-27a222cd971f\" (UID: \"3d43b6d8-d47a-4b6e-8dbb-27a222cd971f\") " Feb 27 17:42:48 crc kubenswrapper[4830]: I0227 17:42:48.992562 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3d43b6d8-d47a-4b6e-8dbb-27a222cd971f-operator-scripts\") pod \"3d43b6d8-d47a-4b6e-8dbb-27a222cd971f\" (UID: \"3d43b6d8-d47a-4b6e-8dbb-27a222cd971f\") " Feb 27 17:42:48 crc kubenswrapper[4830]: I0227 17:42:48.993215 4830 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/54fa8c61-cab3-4696-93d5-32120c184f0b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "54fa8c61-cab3-4696-93d5-32120c184f0b" (UID: "54fa8c61-cab3-4696-93d5-32120c184f0b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:42:48 crc kubenswrapper[4830]: I0227 17:42:48.993322 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d43b6d8-d47a-4b6e-8dbb-27a222cd971f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3d43b6d8-d47a-4b6e-8dbb-27a222cd971f" (UID: "3d43b6d8-d47a-4b6e-8dbb-27a222cd971f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:42:48 crc kubenswrapper[4830]: I0227 17:42:48.998350 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54fa8c61-cab3-4696-93d5-32120c184f0b-kube-api-access-hhtbs" (OuterVolumeSpecName: "kube-api-access-hhtbs") pod "54fa8c61-cab3-4696-93d5-32120c184f0b" (UID: "54fa8c61-cab3-4696-93d5-32120c184f0b"). InnerVolumeSpecName "kube-api-access-hhtbs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:42:48 crc kubenswrapper[4830]: I0227 17:42:48.998677 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d43b6d8-d47a-4b6e-8dbb-27a222cd971f-kube-api-access-mp62n" (OuterVolumeSpecName: "kube-api-access-mp62n") pod "3d43b6d8-d47a-4b6e-8dbb-27a222cd971f" (UID: "3d43b6d8-d47a-4b6e-8dbb-27a222cd971f"). InnerVolumeSpecName "kube-api-access-mp62n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:42:49 crc kubenswrapper[4830]: I0227 17:42:49.094133 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n28tw\" (UniqueName: \"kubernetes.io/projected/458a8af3-6366-427a-8641-9b5014271de7-kube-api-access-n28tw\") pod \"458a8af3-6366-427a-8641-9b5014271de7\" (UID: \"458a8af3-6366-427a-8641-9b5014271de7\") " Feb 27 17:42:49 crc kubenswrapper[4830]: I0227 17:42:49.094559 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/458a8af3-6366-427a-8641-9b5014271de7-operator-scripts\") pod \"458a8af3-6366-427a-8641-9b5014271de7\" (UID: \"458a8af3-6366-427a-8641-9b5014271de7\") " Feb 27 17:42:49 crc kubenswrapper[4830]: I0227 17:42:49.095159 4830 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3d43b6d8-d47a-4b6e-8dbb-27a222cd971f-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 17:42:49 crc kubenswrapper[4830]: I0227 17:42:49.095159 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/458a8af3-6366-427a-8641-9b5014271de7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "458a8af3-6366-427a-8641-9b5014271de7" (UID: "458a8af3-6366-427a-8641-9b5014271de7"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:42:49 crc kubenswrapper[4830]: I0227 17:42:49.095178 4830 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/54fa8c61-cab3-4696-93d5-32120c184f0b-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 17:42:49 crc kubenswrapper[4830]: I0227 17:42:49.095239 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hhtbs\" (UniqueName: \"kubernetes.io/projected/54fa8c61-cab3-4696-93d5-32120c184f0b-kube-api-access-hhtbs\") on node \"crc\" DevicePath \"\"" Feb 27 17:42:49 crc kubenswrapper[4830]: I0227 17:42:49.095264 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mp62n\" (UniqueName: \"kubernetes.io/projected/3d43b6d8-d47a-4b6e-8dbb-27a222cd971f-kube-api-access-mp62n\") on node \"crc\" DevicePath \"\"" Feb 27 17:42:49 crc kubenswrapper[4830]: I0227 17:42:49.097008 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/458a8af3-6366-427a-8641-9b5014271de7-kube-api-access-n28tw" (OuterVolumeSpecName: "kube-api-access-n28tw") pod "458a8af3-6366-427a-8641-9b5014271de7" (UID: "458a8af3-6366-427a-8641-9b5014271de7"). InnerVolumeSpecName "kube-api-access-n28tw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:42:49 crc kubenswrapper[4830]: I0227 17:42:49.198630 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n28tw\" (UniqueName: \"kubernetes.io/projected/458a8af3-6366-427a-8641-9b5014271de7-kube-api-access-n28tw\") on node \"crc\" DevicePath \"\"" Feb 27 17:42:49 crc kubenswrapper[4830]: I0227 17:42:49.198701 4830 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/458a8af3-6366-427a-8641-9b5014271de7-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 17:42:49 crc kubenswrapper[4830]: I0227 17:42:49.327534 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-411a-account-create-update-qrfdh" event={"ID":"3d43b6d8-d47a-4b6e-8dbb-27a222cd971f","Type":"ContainerDied","Data":"0850b38b74f68fc521cdd5bd63802a2459c541efc8cd7bcc0e7deec0357241e5"} Feb 27 17:42:49 crc kubenswrapper[4830]: I0227 17:42:49.327575 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-411a-account-create-update-qrfdh" Feb 27 17:42:49 crc kubenswrapper[4830]: I0227 17:42:49.327597 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0850b38b74f68fc521cdd5bd63802a2459c541efc8cd7bcc0e7deec0357241e5" Feb 27 17:42:49 crc kubenswrapper[4830]: I0227 17:42:49.330099 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-43a2-account-create-update-wd25q" event={"ID":"458a8af3-6366-427a-8641-9b5014271de7","Type":"ContainerDied","Data":"19b7e603325dc9ecc5a16a926b5e5f426fe6b04b33438064a87e70fe468b180c"} Feb 27 17:42:49 crc kubenswrapper[4830]: I0227 17:42:49.330143 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="19b7e603325dc9ecc5a16a926b5e5f426fe6b04b33438064a87e70fe468b180c" Feb 27 17:42:49 crc kubenswrapper[4830]: I0227 17:42:49.330209 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-43a2-account-create-update-wd25q" Feb 27 17:42:49 crc kubenswrapper[4830]: I0227 17:42:49.332045 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-99dgz" event={"ID":"54fa8c61-cab3-4696-93d5-32120c184f0b","Type":"ContainerDied","Data":"7ae43fb35a5d5ffce8d6140d52787e8f63808c89e64536ab0e4e839e8b3172f0"} Feb 27 17:42:49 crc kubenswrapper[4830]: I0227 17:42:49.332100 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7ae43fb35a5d5ffce8d6140d52787e8f63808c89e64536ab0e4e839e8b3172f0" Feb 27 17:42:49 crc kubenswrapper[4830]: I0227 17:42:49.332136 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-99dgz" Feb 27 17:42:49 crc kubenswrapper[4830]: I0227 17:42:49.456996 4830 scope.go:117] "RemoveContainer" containerID="181f95bdb98422c1ec4757b625802270a141a2ad650e80a8426133b764a0c4d8" Feb 27 17:42:49 crc kubenswrapper[4830]: I0227 17:42:49.763251 4830 scope.go:117] "RemoveContainer" containerID="0f73af46b765b3d98f3f5e9883b49887ac4f2ac3485e91a001e18c5d351646d8" Feb 27 17:42:49 crc kubenswrapper[4830]: E0227 17:42:49.763551 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 17:42:50 crc kubenswrapper[4830]: E0227 17:42:50.764562 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536898-vrwjs" podUID="204eb1af-36ad-4de7-9da7-9a37fefd3026" Feb 27 17:42:55 crc kubenswrapper[4830]: I0227 17:42:55.182019 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-6vf7l"] Feb 27 17:42:55 crc kubenswrapper[4830]: E0227 17:42:55.182810 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54fa8c61-cab3-4696-93d5-32120c184f0b" containerName="mariadb-database-create" Feb 27 17:42:55 crc kubenswrapper[4830]: I0227 17:42:55.182825 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="54fa8c61-cab3-4696-93d5-32120c184f0b" containerName="mariadb-database-create" Feb 27 17:42:55 crc kubenswrapper[4830]: E0227 17:42:55.182854 4830 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="1476f120-cb3a-4ddb-8876-14c9cd912d49" containerName="mariadb-database-create" Feb 27 17:42:55 crc kubenswrapper[4830]: I0227 17:42:55.182862 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="1476f120-cb3a-4ddb-8876-14c9cd912d49" containerName="mariadb-database-create" Feb 27 17:42:55 crc kubenswrapper[4830]: E0227 17:42:55.182872 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="faee69e3-9f85-4d66-91c8-76e6888f678c" containerName="mariadb-account-create-update" Feb 27 17:42:55 crc kubenswrapper[4830]: I0227 17:42:55.182880 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="faee69e3-9f85-4d66-91c8-76e6888f678c" containerName="mariadb-account-create-update" Feb 27 17:42:55 crc kubenswrapper[4830]: E0227 17:42:55.182904 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d43b6d8-d47a-4b6e-8dbb-27a222cd971f" containerName="mariadb-account-create-update" Feb 27 17:42:55 crc kubenswrapper[4830]: I0227 17:42:55.182924 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d43b6d8-d47a-4b6e-8dbb-27a222cd971f" containerName="mariadb-account-create-update" Feb 27 17:42:55 crc kubenswrapper[4830]: E0227 17:42:55.182934 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="458a8af3-6366-427a-8641-9b5014271de7" containerName="mariadb-account-create-update" Feb 27 17:42:55 crc kubenswrapper[4830]: I0227 17:42:55.182961 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="458a8af3-6366-427a-8641-9b5014271de7" containerName="mariadb-account-create-update" Feb 27 17:42:55 crc kubenswrapper[4830]: E0227 17:42:55.182974 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efe5a2c2-2f81-419f-ba45-287441964844" containerName="mariadb-database-create" Feb 27 17:42:55 crc kubenswrapper[4830]: I0227 17:42:55.182982 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="efe5a2c2-2f81-419f-ba45-287441964844" containerName="mariadb-database-create" Feb 27 17:42:55 crc 
kubenswrapper[4830]: E0227 17:42:55.182993 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ec03666-94da-435d-bfc4-5b7f8ed237b2" containerName="oc" Feb 27 17:42:55 crc kubenswrapper[4830]: I0227 17:42:55.183002 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ec03666-94da-435d-bfc4-5b7f8ed237b2" containerName="oc" Feb 27 17:42:55 crc kubenswrapper[4830]: I0227 17:42:55.183184 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="458a8af3-6366-427a-8641-9b5014271de7" containerName="mariadb-account-create-update" Feb 27 17:42:55 crc kubenswrapper[4830]: I0227 17:42:55.183201 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ec03666-94da-435d-bfc4-5b7f8ed237b2" containerName="oc" Feb 27 17:42:55 crc kubenswrapper[4830]: I0227 17:42:55.183218 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="efe5a2c2-2f81-419f-ba45-287441964844" containerName="mariadb-database-create" Feb 27 17:42:55 crc kubenswrapper[4830]: I0227 17:42:55.183232 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d43b6d8-d47a-4b6e-8dbb-27a222cd971f" containerName="mariadb-account-create-update" Feb 27 17:42:55 crc kubenswrapper[4830]: I0227 17:42:55.183247 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="1476f120-cb3a-4ddb-8876-14c9cd912d49" containerName="mariadb-database-create" Feb 27 17:42:55 crc kubenswrapper[4830]: I0227 17:42:55.183259 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="faee69e3-9f85-4d66-91c8-76e6888f678c" containerName="mariadb-account-create-update" Feb 27 17:42:55 crc kubenswrapper[4830]: I0227 17:42:55.183276 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="54fa8c61-cab3-4696-93d5-32120c184f0b" containerName="mariadb-database-create" Feb 27 17:42:55 crc kubenswrapper[4830]: I0227 17:42:55.185751 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-6vf7l" Feb 27 17:42:55 crc kubenswrapper[4830]: I0227 17:42:55.188908 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-v7lp6" Feb 27 17:42:55 crc kubenswrapper[4830]: I0227 17:42:55.189322 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 27 17:42:55 crc kubenswrapper[4830]: I0227 17:42:55.189349 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Feb 27 17:42:55 crc kubenswrapper[4830]: I0227 17:42:55.208218 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-6vf7l"] Feb 27 17:42:55 crc kubenswrapper[4830]: I0227 17:42:55.341725 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f727246-5bd6-417b-b56f-d9c8913ec2c7-config-data\") pod \"nova-cell0-conductor-db-sync-6vf7l\" (UID: \"7f727246-5bd6-417b-b56f-d9c8913ec2c7\") " pod="openstack/nova-cell0-conductor-db-sync-6vf7l" Feb 27 17:42:55 crc kubenswrapper[4830]: I0227 17:42:55.341813 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f727246-5bd6-417b-b56f-d9c8913ec2c7-scripts\") pod \"nova-cell0-conductor-db-sync-6vf7l\" (UID: \"7f727246-5bd6-417b-b56f-d9c8913ec2c7\") " pod="openstack/nova-cell0-conductor-db-sync-6vf7l" Feb 27 17:42:55 crc kubenswrapper[4830]: I0227 17:42:55.341941 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggm2h\" (UniqueName: \"kubernetes.io/projected/7f727246-5bd6-417b-b56f-d9c8913ec2c7-kube-api-access-ggm2h\") pod \"nova-cell0-conductor-db-sync-6vf7l\" (UID: \"7f727246-5bd6-417b-b56f-d9c8913ec2c7\") " 
pod="openstack/nova-cell0-conductor-db-sync-6vf7l" Feb 27 17:42:55 crc kubenswrapper[4830]: I0227 17:42:55.342311 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f727246-5bd6-417b-b56f-d9c8913ec2c7-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-6vf7l\" (UID: \"7f727246-5bd6-417b-b56f-d9c8913ec2c7\") " pod="openstack/nova-cell0-conductor-db-sync-6vf7l" Feb 27 17:42:55 crc kubenswrapper[4830]: I0227 17:42:55.444822 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f727246-5bd6-417b-b56f-d9c8913ec2c7-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-6vf7l\" (UID: \"7f727246-5bd6-417b-b56f-d9c8913ec2c7\") " pod="openstack/nova-cell0-conductor-db-sync-6vf7l" Feb 27 17:42:55 crc kubenswrapper[4830]: I0227 17:42:55.445026 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f727246-5bd6-417b-b56f-d9c8913ec2c7-config-data\") pod \"nova-cell0-conductor-db-sync-6vf7l\" (UID: \"7f727246-5bd6-417b-b56f-d9c8913ec2c7\") " pod="openstack/nova-cell0-conductor-db-sync-6vf7l" Feb 27 17:42:55 crc kubenswrapper[4830]: I0227 17:42:55.445108 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f727246-5bd6-417b-b56f-d9c8913ec2c7-scripts\") pod \"nova-cell0-conductor-db-sync-6vf7l\" (UID: \"7f727246-5bd6-417b-b56f-d9c8913ec2c7\") " pod="openstack/nova-cell0-conductor-db-sync-6vf7l" Feb 27 17:42:55 crc kubenswrapper[4830]: I0227 17:42:55.445167 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ggm2h\" (UniqueName: \"kubernetes.io/projected/7f727246-5bd6-417b-b56f-d9c8913ec2c7-kube-api-access-ggm2h\") pod \"nova-cell0-conductor-db-sync-6vf7l\" (UID: 
\"7f727246-5bd6-417b-b56f-d9c8913ec2c7\") " pod="openstack/nova-cell0-conductor-db-sync-6vf7l" Feb 27 17:42:55 crc kubenswrapper[4830]: I0227 17:42:55.452938 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f727246-5bd6-417b-b56f-d9c8913ec2c7-scripts\") pod \"nova-cell0-conductor-db-sync-6vf7l\" (UID: \"7f727246-5bd6-417b-b56f-d9c8913ec2c7\") " pod="openstack/nova-cell0-conductor-db-sync-6vf7l" Feb 27 17:42:55 crc kubenswrapper[4830]: I0227 17:42:55.453008 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f727246-5bd6-417b-b56f-d9c8913ec2c7-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-6vf7l\" (UID: \"7f727246-5bd6-417b-b56f-d9c8913ec2c7\") " pod="openstack/nova-cell0-conductor-db-sync-6vf7l" Feb 27 17:42:55 crc kubenswrapper[4830]: I0227 17:42:55.456853 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f727246-5bd6-417b-b56f-d9c8913ec2c7-config-data\") pod \"nova-cell0-conductor-db-sync-6vf7l\" (UID: \"7f727246-5bd6-417b-b56f-d9c8913ec2c7\") " pod="openstack/nova-cell0-conductor-db-sync-6vf7l" Feb 27 17:42:55 crc kubenswrapper[4830]: I0227 17:42:55.463869 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ggm2h\" (UniqueName: \"kubernetes.io/projected/7f727246-5bd6-417b-b56f-d9c8913ec2c7-kube-api-access-ggm2h\") pod \"nova-cell0-conductor-db-sync-6vf7l\" (UID: \"7f727246-5bd6-417b-b56f-d9c8913ec2c7\") " pod="openstack/nova-cell0-conductor-db-sync-6vf7l" Feb 27 17:42:55 crc kubenswrapper[4830]: I0227 17:42:55.507804 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-6vf7l" Feb 27 17:42:55 crc kubenswrapper[4830]: I0227 17:42:55.800560 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-6vf7l"] Feb 27 17:42:56 crc kubenswrapper[4830]: I0227 17:42:56.404620 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-6vf7l" event={"ID":"7f727246-5bd6-417b-b56f-d9c8913ec2c7","Type":"ContainerStarted","Data":"776de742b2e8d0a4a3e92855d0ca447fa7f9d8077499d1940b60d39f08e69d48"} Feb 27 17:42:56 crc kubenswrapper[4830]: I0227 17:42:56.405101 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-6vf7l" event={"ID":"7f727246-5bd6-417b-b56f-d9c8913ec2c7","Type":"ContainerStarted","Data":"b6d1f084640bb0c5af169bf2e8046a501eeb12fb3d45b47c305783fb6aa65a40"} Feb 27 17:42:56 crc kubenswrapper[4830]: I0227 17:42:56.422459 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-6vf7l" podStartSLOduration=1.422441806 podStartE2EDuration="1.422441806s" podCreationTimestamp="2026-02-27 17:42:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 17:42:56.417219681 +0000 UTC m=+5772.506492144" watchObservedRunningTime="2026-02-27 17:42:56.422441806 +0000 UTC m=+5772.511714269" Feb 27 17:43:01 crc kubenswrapper[4830]: I0227 17:43:01.463936 4830 generic.go:334] "Generic (PLEG): container finished" podID="7f727246-5bd6-417b-b56f-d9c8913ec2c7" containerID="776de742b2e8d0a4a3e92855d0ca447fa7f9d8077499d1940b60d39f08e69d48" exitCode=0 Feb 27 17:43:01 crc kubenswrapper[4830]: I0227 17:43:01.464019 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-6vf7l" 
event={"ID":"7f727246-5bd6-417b-b56f-d9c8913ec2c7","Type":"ContainerDied","Data":"776de742b2e8d0a4a3e92855d0ca447fa7f9d8077499d1940b60d39f08e69d48"} Feb 27 17:43:01 crc kubenswrapper[4830]: I0227 17:43:01.765323 4830 scope.go:117] "RemoveContainer" containerID="0f73af46b765b3d98f3f5e9883b49887ac4f2ac3485e91a001e18c5d351646d8" Feb 27 17:43:01 crc kubenswrapper[4830]: E0227 17:43:01.765580 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 17:43:02 crc kubenswrapper[4830]: I0227 17:43:02.862480 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-6vf7l" Feb 27 17:43:02 crc kubenswrapper[4830]: I0227 17:43:02.867587 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f727246-5bd6-417b-b56f-d9c8913ec2c7-combined-ca-bundle\") pod \"7f727246-5bd6-417b-b56f-d9c8913ec2c7\" (UID: \"7f727246-5bd6-417b-b56f-d9c8913ec2c7\") " Feb 27 17:43:02 crc kubenswrapper[4830]: I0227 17:43:02.867917 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f727246-5bd6-417b-b56f-d9c8913ec2c7-config-data\") pod \"7f727246-5bd6-417b-b56f-d9c8913ec2c7\" (UID: \"7f727246-5bd6-417b-b56f-d9c8913ec2c7\") " Feb 27 17:43:02 crc kubenswrapper[4830]: I0227 17:43:02.868013 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f727246-5bd6-417b-b56f-d9c8913ec2c7-scripts\") pod 
\"7f727246-5bd6-417b-b56f-d9c8913ec2c7\" (UID: \"7f727246-5bd6-417b-b56f-d9c8913ec2c7\") " Feb 27 17:43:02 crc kubenswrapper[4830]: I0227 17:43:02.868238 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ggm2h\" (UniqueName: \"kubernetes.io/projected/7f727246-5bd6-417b-b56f-d9c8913ec2c7-kube-api-access-ggm2h\") pod \"7f727246-5bd6-417b-b56f-d9c8913ec2c7\" (UID: \"7f727246-5bd6-417b-b56f-d9c8913ec2c7\") " Feb 27 17:43:02 crc kubenswrapper[4830]: I0227 17:43:02.891090 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f727246-5bd6-417b-b56f-d9c8913ec2c7-scripts" (OuterVolumeSpecName: "scripts") pod "7f727246-5bd6-417b-b56f-d9c8913ec2c7" (UID: "7f727246-5bd6-417b-b56f-d9c8913ec2c7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:43:02 crc kubenswrapper[4830]: I0227 17:43:02.891490 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f727246-5bd6-417b-b56f-d9c8913ec2c7-kube-api-access-ggm2h" (OuterVolumeSpecName: "kube-api-access-ggm2h") pod "7f727246-5bd6-417b-b56f-d9c8913ec2c7" (UID: "7f727246-5bd6-417b-b56f-d9c8913ec2c7"). InnerVolumeSpecName "kube-api-access-ggm2h". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:43:02 crc kubenswrapper[4830]: I0227 17:43:02.922842 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f727246-5bd6-417b-b56f-d9c8913ec2c7-config-data" (OuterVolumeSpecName: "config-data") pod "7f727246-5bd6-417b-b56f-d9c8913ec2c7" (UID: "7f727246-5bd6-417b-b56f-d9c8913ec2c7"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:43:02 crc kubenswrapper[4830]: I0227 17:43:02.924060 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f727246-5bd6-417b-b56f-d9c8913ec2c7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7f727246-5bd6-417b-b56f-d9c8913ec2c7" (UID: "7f727246-5bd6-417b-b56f-d9c8913ec2c7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:43:02 crc kubenswrapper[4830]: I0227 17:43:02.972694 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ggm2h\" (UniqueName: \"kubernetes.io/projected/7f727246-5bd6-417b-b56f-d9c8913ec2c7-kube-api-access-ggm2h\") on node \"crc\" DevicePath \"\"" Feb 27 17:43:02 crc kubenswrapper[4830]: I0227 17:43:02.972740 4830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f727246-5bd6-417b-b56f-d9c8913ec2c7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 17:43:02 crc kubenswrapper[4830]: I0227 17:43:02.972751 4830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f727246-5bd6-417b-b56f-d9c8913ec2c7-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 17:43:02 crc kubenswrapper[4830]: I0227 17:43:02.972761 4830 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f727246-5bd6-417b-b56f-d9c8913ec2c7-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 17:43:03 crc kubenswrapper[4830]: I0227 17:43:03.497139 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-6vf7l" event={"ID":"7f727246-5bd6-417b-b56f-d9c8913ec2c7","Type":"ContainerDied","Data":"b6d1f084640bb0c5af169bf2e8046a501eeb12fb3d45b47c305783fb6aa65a40"} Feb 27 17:43:03 crc kubenswrapper[4830]: I0227 17:43:03.497224 4830 pod_container_deletor.go:80] "Container not 
found in pod's containers" containerID="b6d1f084640bb0c5af169bf2e8046a501eeb12fb3d45b47c305783fb6aa65a40" Feb 27 17:43:03 crc kubenswrapper[4830]: I0227 17:43:03.498553 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-6vf7l" Feb 27 17:43:03 crc kubenswrapper[4830]: I0227 17:43:03.596996 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 27 17:43:03 crc kubenswrapper[4830]: E0227 17:43:03.597468 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f727246-5bd6-417b-b56f-d9c8913ec2c7" containerName="nova-cell0-conductor-db-sync" Feb 27 17:43:03 crc kubenswrapper[4830]: I0227 17:43:03.597496 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f727246-5bd6-417b-b56f-d9c8913ec2c7" containerName="nova-cell0-conductor-db-sync" Feb 27 17:43:03 crc kubenswrapper[4830]: I0227 17:43:03.597705 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f727246-5bd6-417b-b56f-d9c8913ec2c7" containerName="nova-cell0-conductor-db-sync" Feb 27 17:43:03 crc kubenswrapper[4830]: I0227 17:43:03.598633 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 27 17:43:03 crc kubenswrapper[4830]: I0227 17:43:03.604221 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 27 17:43:03 crc kubenswrapper[4830]: I0227 17:43:03.604813 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-v7lp6" Feb 27 17:43:03 crc kubenswrapper[4830]: I0227 17:43:03.623609 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 27 17:43:03 crc kubenswrapper[4830]: I0227 17:43:03.688043 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/debf2adf-e44d-4329-9470-740f206ac43b-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"debf2adf-e44d-4329-9470-740f206ac43b\") " pod="openstack/nova-cell0-conductor-0" Feb 27 17:43:03 crc kubenswrapper[4830]: I0227 17:43:03.688117 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/debf2adf-e44d-4329-9470-740f206ac43b-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"debf2adf-e44d-4329-9470-740f206ac43b\") " pod="openstack/nova-cell0-conductor-0" Feb 27 17:43:03 crc kubenswrapper[4830]: I0227 17:43:03.688244 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bs869\" (UniqueName: \"kubernetes.io/projected/debf2adf-e44d-4329-9470-740f206ac43b-kube-api-access-bs869\") pod \"nova-cell0-conductor-0\" (UID: \"debf2adf-e44d-4329-9470-740f206ac43b\") " pod="openstack/nova-cell0-conductor-0" Feb 27 17:43:03 crc kubenswrapper[4830]: E0227 17:43:03.764835 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536898-vrwjs" podUID="204eb1af-36ad-4de7-9da7-9a37fefd3026" Feb 27 17:43:03 crc kubenswrapper[4830]: I0227 17:43:03.789982 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/debf2adf-e44d-4329-9470-740f206ac43b-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"debf2adf-e44d-4329-9470-740f206ac43b\") " pod="openstack/nova-cell0-conductor-0" Feb 27 17:43:03 crc kubenswrapper[4830]: I0227 17:43:03.790069 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/debf2adf-e44d-4329-9470-740f206ac43b-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"debf2adf-e44d-4329-9470-740f206ac43b\") " pod="openstack/nova-cell0-conductor-0" Feb 27 17:43:03 crc kubenswrapper[4830]: I0227 17:43:03.790268 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bs869\" (UniqueName: \"kubernetes.io/projected/debf2adf-e44d-4329-9470-740f206ac43b-kube-api-access-bs869\") pod \"nova-cell0-conductor-0\" (UID: \"debf2adf-e44d-4329-9470-740f206ac43b\") " pod="openstack/nova-cell0-conductor-0" Feb 27 17:43:03 crc kubenswrapper[4830]: I0227 17:43:03.798886 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/debf2adf-e44d-4329-9470-740f206ac43b-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"debf2adf-e44d-4329-9470-740f206ac43b\") " pod="openstack/nova-cell0-conductor-0" Feb 27 17:43:03 crc kubenswrapper[4830]: I0227 17:43:03.799318 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/debf2adf-e44d-4329-9470-740f206ac43b-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"debf2adf-e44d-4329-9470-740f206ac43b\") " 
pod="openstack/nova-cell0-conductor-0" Feb 27 17:43:03 crc kubenswrapper[4830]: I0227 17:43:03.826876 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bs869\" (UniqueName: \"kubernetes.io/projected/debf2adf-e44d-4329-9470-740f206ac43b-kube-api-access-bs869\") pod \"nova-cell0-conductor-0\" (UID: \"debf2adf-e44d-4329-9470-740f206ac43b\") " pod="openstack/nova-cell0-conductor-0" Feb 27 17:43:03 crc kubenswrapper[4830]: I0227 17:43:03.926400 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 27 17:43:04 crc kubenswrapper[4830]: I0227 17:43:04.265642 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 27 17:43:04 crc kubenswrapper[4830]: W0227 17:43:04.271400 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddebf2adf_e44d_4329_9470_740f206ac43b.slice/crio-329ca1b76e952ba4c5dcc1fbcf4234a5f6e1e9af7f34545d37e2e7a78df18263 WatchSource:0}: Error finding container 329ca1b76e952ba4c5dcc1fbcf4234a5f6e1e9af7f34545d37e2e7a78df18263: Status 404 returned error can't find the container with id 329ca1b76e952ba4c5dcc1fbcf4234a5f6e1e9af7f34545d37e2e7a78df18263 Feb 27 17:43:04 crc kubenswrapper[4830]: I0227 17:43:04.517613 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"debf2adf-e44d-4329-9470-740f206ac43b","Type":"ContainerStarted","Data":"59134d028826964bffc3afa6087405079139f7a2f6866323cb23a0d7881aee4a"} Feb 27 17:43:04 crc kubenswrapper[4830]: I0227 17:43:04.518201 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Feb 27 17:43:04 crc kubenswrapper[4830]: I0227 17:43:04.518216 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" 
event={"ID":"debf2adf-e44d-4329-9470-740f206ac43b","Type":"ContainerStarted","Data":"329ca1b76e952ba4c5dcc1fbcf4234a5f6e1e9af7f34545d37e2e7a78df18263"} Feb 27 17:43:04 crc kubenswrapper[4830]: I0227 17:43:04.546192 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=1.546157395 podStartE2EDuration="1.546157395s" podCreationTimestamp="2026-02-27 17:43:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 17:43:04.533349667 +0000 UTC m=+5780.622622170" watchObservedRunningTime="2026-02-27 17:43:04.546157395 +0000 UTC m=+5780.635429898" Feb 27 17:43:13 crc kubenswrapper[4830]: I0227 17:43:13.984220 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Feb 27 17:43:14 crc kubenswrapper[4830]: I0227 17:43:14.449353 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-ndmjp"] Feb 27 17:43:14 crc kubenswrapper[4830]: I0227 17:43:14.450894 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-ndmjp" Feb 27 17:43:14 crc kubenswrapper[4830]: I0227 17:43:14.453278 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Feb 27 17:43:14 crc kubenswrapper[4830]: I0227 17:43:14.453437 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Feb 27 17:43:14 crc kubenswrapper[4830]: I0227 17:43:14.459457 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-ndmjp"] Feb 27 17:43:14 crc kubenswrapper[4830]: I0227 17:43:14.586453 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z875f\" (UniqueName: \"kubernetes.io/projected/61ef5006-416b-43e0-a9f1-7b69382403be-kube-api-access-z875f\") pod \"nova-cell0-cell-mapping-ndmjp\" (UID: \"61ef5006-416b-43e0-a9f1-7b69382403be\") " pod="openstack/nova-cell0-cell-mapping-ndmjp" Feb 27 17:43:14 crc kubenswrapper[4830]: I0227 17:43:14.586548 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61ef5006-416b-43e0-a9f1-7b69382403be-config-data\") pod \"nova-cell0-cell-mapping-ndmjp\" (UID: \"61ef5006-416b-43e0-a9f1-7b69382403be\") " pod="openstack/nova-cell0-cell-mapping-ndmjp" Feb 27 17:43:14 crc kubenswrapper[4830]: I0227 17:43:14.586575 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61ef5006-416b-43e0-a9f1-7b69382403be-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-ndmjp\" (UID: \"61ef5006-416b-43e0-a9f1-7b69382403be\") " pod="openstack/nova-cell0-cell-mapping-ndmjp" Feb 27 17:43:14 crc kubenswrapper[4830]: I0227 17:43:14.586601 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/61ef5006-416b-43e0-a9f1-7b69382403be-scripts\") pod \"nova-cell0-cell-mapping-ndmjp\" (UID: \"61ef5006-416b-43e0-a9f1-7b69382403be\") " pod="openstack/nova-cell0-cell-mapping-ndmjp" Feb 27 17:43:14 crc kubenswrapper[4830]: I0227 17:43:14.598164 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 27 17:43:14 crc kubenswrapper[4830]: I0227 17:43:14.599515 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 27 17:43:14 crc kubenswrapper[4830]: I0227 17:43:14.602657 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 27 17:43:14 crc kubenswrapper[4830]: I0227 17:43:14.634477 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 27 17:43:14 crc kubenswrapper[4830]: I0227 17:43:14.691046 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cde5adc-f8ac-4c3a-b8e2-a192b62bedf7-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"4cde5adc-f8ac-4c3a-b8e2-a192b62bedf7\") " pod="openstack/nova-scheduler-0" Feb 27 17:43:14 crc kubenswrapper[4830]: I0227 17:43:14.691097 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61ef5006-416b-43e0-a9f1-7b69382403be-config-data\") pod \"nova-cell0-cell-mapping-ndmjp\" (UID: \"61ef5006-416b-43e0-a9f1-7b69382403be\") " pod="openstack/nova-cell0-cell-mapping-ndmjp" Feb 27 17:43:14 crc kubenswrapper[4830]: I0227 17:43:14.691121 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61ef5006-416b-43e0-a9f1-7b69382403be-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-ndmjp\" (UID: \"61ef5006-416b-43e0-a9f1-7b69382403be\") " 
pod="openstack/nova-cell0-cell-mapping-ndmjp" Feb 27 17:43:14 crc kubenswrapper[4830]: I0227 17:43:14.691147 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/61ef5006-416b-43e0-a9f1-7b69382403be-scripts\") pod \"nova-cell0-cell-mapping-ndmjp\" (UID: \"61ef5006-416b-43e0-a9f1-7b69382403be\") " pod="openstack/nova-cell0-cell-mapping-ndmjp" Feb 27 17:43:14 crc kubenswrapper[4830]: I0227 17:43:14.691214 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqqvl\" (UniqueName: \"kubernetes.io/projected/4cde5adc-f8ac-4c3a-b8e2-a192b62bedf7-kube-api-access-gqqvl\") pod \"nova-scheduler-0\" (UID: \"4cde5adc-f8ac-4c3a-b8e2-a192b62bedf7\") " pod="openstack/nova-scheduler-0" Feb 27 17:43:14 crc kubenswrapper[4830]: I0227 17:43:14.691234 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4cde5adc-f8ac-4c3a-b8e2-a192b62bedf7-config-data\") pod \"nova-scheduler-0\" (UID: \"4cde5adc-f8ac-4c3a-b8e2-a192b62bedf7\") " pod="openstack/nova-scheduler-0" Feb 27 17:43:14 crc kubenswrapper[4830]: I0227 17:43:14.691267 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z875f\" (UniqueName: \"kubernetes.io/projected/61ef5006-416b-43e0-a9f1-7b69382403be-kube-api-access-z875f\") pod \"nova-cell0-cell-mapping-ndmjp\" (UID: \"61ef5006-416b-43e0-a9f1-7b69382403be\") " pod="openstack/nova-cell0-cell-mapping-ndmjp" Feb 27 17:43:14 crc kubenswrapper[4830]: I0227 17:43:14.694480 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 27 17:43:14 crc kubenswrapper[4830]: I0227 17:43:14.696004 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 27 17:43:14 crc kubenswrapper[4830]: I0227 17:43:14.702026 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61ef5006-416b-43e0-a9f1-7b69382403be-config-data\") pod \"nova-cell0-cell-mapping-ndmjp\" (UID: \"61ef5006-416b-43e0-a9f1-7b69382403be\") " pod="openstack/nova-cell0-cell-mapping-ndmjp" Feb 27 17:43:14 crc kubenswrapper[4830]: I0227 17:43:14.704559 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/61ef5006-416b-43e0-a9f1-7b69382403be-scripts\") pod \"nova-cell0-cell-mapping-ndmjp\" (UID: \"61ef5006-416b-43e0-a9f1-7b69382403be\") " pod="openstack/nova-cell0-cell-mapping-ndmjp" Feb 27 17:43:14 crc kubenswrapper[4830]: I0227 17:43:14.708332 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 27 17:43:14 crc kubenswrapper[4830]: I0227 17:43:14.711262 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61ef5006-416b-43e0-a9f1-7b69382403be-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-ndmjp\" (UID: \"61ef5006-416b-43e0-a9f1-7b69382403be\") " pod="openstack/nova-cell0-cell-mapping-ndmjp" Feb 27 17:43:14 crc kubenswrapper[4830]: I0227 17:43:14.730582 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 27 17:43:14 crc kubenswrapper[4830]: I0227 17:43:14.747885 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z875f\" (UniqueName: \"kubernetes.io/projected/61ef5006-416b-43e0-a9f1-7b69382403be-kube-api-access-z875f\") pod \"nova-cell0-cell-mapping-ndmjp\" (UID: \"61ef5006-416b-43e0-a9f1-7b69382403be\") " pod="openstack/nova-cell0-cell-mapping-ndmjp" Feb 27 17:43:14 crc kubenswrapper[4830]: I0227 17:43:14.751103 4830 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 27 17:43:14 crc kubenswrapper[4830]: I0227 17:43:14.758524 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 27 17:43:14 crc kubenswrapper[4830]: I0227 17:43:14.760508 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 27 17:43:14 crc kubenswrapper[4830]: I0227 17:43:14.793004 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cde5adc-f8ac-4c3a-b8e2-a192b62bedf7-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"4cde5adc-f8ac-4c3a-b8e2-a192b62bedf7\") " pod="openstack/nova-scheduler-0" Feb 27 17:43:14 crc kubenswrapper[4830]: I0227 17:43:14.793146 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gqqvl\" (UniqueName: \"kubernetes.io/projected/4cde5adc-f8ac-4c3a-b8e2-a192b62bedf7-kube-api-access-gqqvl\") pod \"nova-scheduler-0\" (UID: \"4cde5adc-f8ac-4c3a-b8e2-a192b62bedf7\") " pod="openstack/nova-scheduler-0" Feb 27 17:43:14 crc kubenswrapper[4830]: I0227 17:43:14.793177 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4cde5adc-f8ac-4c3a-b8e2-a192b62bedf7-config-data\") pod \"nova-scheduler-0\" (UID: \"4cde5adc-f8ac-4c3a-b8e2-a192b62bedf7\") " pod="openstack/nova-scheduler-0" Feb 27 17:43:14 crc kubenswrapper[4830]: I0227 17:43:14.817843 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4cde5adc-f8ac-4c3a-b8e2-a192b62bedf7-config-data\") pod \"nova-scheduler-0\" (UID: \"4cde5adc-f8ac-4c3a-b8e2-a192b62bedf7\") " pod="openstack/nova-scheduler-0" Feb 27 17:43:14 crc kubenswrapper[4830]: I0227 17:43:14.818233 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-ndmjp" Feb 27 17:43:14 crc kubenswrapper[4830]: I0227 17:43:14.820790 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cde5adc-f8ac-4c3a-b8e2-a192b62bedf7-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"4cde5adc-f8ac-4c3a-b8e2-a192b62bedf7\") " pod="openstack/nova-scheduler-0" Feb 27 17:43:14 crc kubenswrapper[4830]: I0227 17:43:14.833845 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gqqvl\" (UniqueName: \"kubernetes.io/projected/4cde5adc-f8ac-4c3a-b8e2-a192b62bedf7-kube-api-access-gqqvl\") pod \"nova-scheduler-0\" (UID: \"4cde5adc-f8ac-4c3a-b8e2-a192b62bedf7\") " pod="openstack/nova-scheduler-0" Feb 27 17:43:14 crc kubenswrapper[4830]: I0227 17:43:14.834465 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 27 17:43:14 crc kubenswrapper[4830]: I0227 17:43:14.856416 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 27 17:43:14 crc kubenswrapper[4830]: I0227 17:43:14.861204 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 27 17:43:14 crc kubenswrapper[4830]: I0227 17:43:14.872078 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 27 17:43:14 crc kubenswrapper[4830]: I0227 17:43:14.895390 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 27 17:43:14 crc kubenswrapper[4830]: I0227 17:43:14.901979 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2bd39a49-2ce3-4ac6-aec1-316d99d5826c-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"2bd39a49-2ce3-4ac6-aec1-316d99d5826c\") " pod="openstack/nova-cell1-novncproxy-0" Feb 27 17:43:14 crc kubenswrapper[4830]: I0227 17:43:14.902043 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/def86847-3e92-471b-bcc2-f74e9bbc81d7-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"def86847-3e92-471b-bcc2-f74e9bbc81d7\") " pod="openstack/nova-metadata-0" Feb 27 17:43:14 crc kubenswrapper[4830]: I0227 17:43:14.902103 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/def86847-3e92-471b-bcc2-f74e9bbc81d7-config-data\") pod \"nova-metadata-0\" (UID: \"def86847-3e92-471b-bcc2-f74e9bbc81d7\") " pod="openstack/nova-metadata-0" Feb 27 17:43:14 crc kubenswrapper[4830]: I0227 17:43:14.902153 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jshrb\" (UniqueName: \"kubernetes.io/projected/def86847-3e92-471b-bcc2-f74e9bbc81d7-kube-api-access-jshrb\") pod \"nova-metadata-0\" (UID: \"def86847-3e92-471b-bcc2-f74e9bbc81d7\") " pod="openstack/nova-metadata-0" Feb 27 17:43:14 crc kubenswrapper[4830]: I0227 17:43:14.902252 4830 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2bd39a49-2ce3-4ac6-aec1-316d99d5826c-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"2bd39a49-2ce3-4ac6-aec1-316d99d5826c\") " pod="openstack/nova-cell1-novncproxy-0" Feb 27 17:43:14 crc kubenswrapper[4830]: I0227 17:43:14.902281 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tcj2z\" (UniqueName: \"kubernetes.io/projected/2bd39a49-2ce3-4ac6-aec1-316d99d5826c-kube-api-access-tcj2z\") pod \"nova-cell1-novncproxy-0\" (UID: \"2bd39a49-2ce3-4ac6-aec1-316d99d5826c\") " pod="openstack/nova-cell1-novncproxy-0" Feb 27 17:43:14 crc kubenswrapper[4830]: I0227 17:43:14.902311 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/def86847-3e92-471b-bcc2-f74e9bbc81d7-logs\") pod \"nova-metadata-0\" (UID: \"def86847-3e92-471b-bcc2-f74e9bbc81d7\") " pod="openstack/nova-metadata-0" Feb 27 17:43:14 crc kubenswrapper[4830]: I0227 17:43:14.912620 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57fd8cfc4f-fkfd5"] Feb 27 17:43:14 crc kubenswrapper[4830]: I0227 17:43:14.914335 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57fd8cfc4f-fkfd5" Feb 27 17:43:14 crc kubenswrapper[4830]: I0227 17:43:14.924380 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 27 17:43:14 crc kubenswrapper[4830]: I0227 17:43:14.948930 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57fd8cfc4f-fkfd5"] Feb 27 17:43:15 crc kubenswrapper[4830]: I0227 17:43:15.005079 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tcj2z\" (UniqueName: \"kubernetes.io/projected/2bd39a49-2ce3-4ac6-aec1-316d99d5826c-kube-api-access-tcj2z\") pod \"nova-cell1-novncproxy-0\" (UID: \"2bd39a49-2ce3-4ac6-aec1-316d99d5826c\") " pod="openstack/nova-cell1-novncproxy-0" Feb 27 17:43:15 crc kubenswrapper[4830]: I0227 17:43:15.005150 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/def86847-3e92-471b-bcc2-f74e9bbc81d7-logs\") pod \"nova-metadata-0\" (UID: \"def86847-3e92-471b-bcc2-f74e9bbc81d7\") " pod="openstack/nova-metadata-0" Feb 27 17:43:15 crc kubenswrapper[4830]: I0227 17:43:15.005208 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f130a868-eb17-446f-ae5a-91383cdbc74f-config-data\") pod \"nova-api-0\" (UID: \"f130a868-eb17-446f-ae5a-91383cdbc74f\") " pod="openstack/nova-api-0" Feb 27 17:43:15 crc kubenswrapper[4830]: I0227 17:43:15.005246 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2bd39a49-2ce3-4ac6-aec1-316d99d5826c-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"2bd39a49-2ce3-4ac6-aec1-316d99d5826c\") " pod="openstack/nova-cell1-novncproxy-0" Feb 27 17:43:15 crc kubenswrapper[4830]: I0227 17:43:15.006055 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/def86847-3e92-471b-bcc2-f74e9bbc81d7-logs\") pod \"nova-metadata-0\" (UID: 
\"def86847-3e92-471b-bcc2-f74e9bbc81d7\") " pod="openstack/nova-metadata-0" Feb 27 17:43:15 crc kubenswrapper[4830]: I0227 17:43:15.009027 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2bd39a49-2ce3-4ac6-aec1-316d99d5826c-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"2bd39a49-2ce3-4ac6-aec1-316d99d5826c\") " pod="openstack/nova-cell1-novncproxy-0" Feb 27 17:43:15 crc kubenswrapper[4830]: I0227 17:43:15.013174 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/def86847-3e92-471b-bcc2-f74e9bbc81d7-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"def86847-3e92-471b-bcc2-f74e9bbc81d7\") " pod="openstack/nova-metadata-0" Feb 27 17:43:15 crc kubenswrapper[4830]: I0227 17:43:15.013410 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f130a868-eb17-446f-ae5a-91383cdbc74f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f130a868-eb17-446f-ae5a-91383cdbc74f\") " pod="openstack/nova-api-0" Feb 27 17:43:15 crc kubenswrapper[4830]: I0227 17:43:15.013492 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/def86847-3e92-471b-bcc2-f74e9bbc81d7-config-data\") pod \"nova-metadata-0\" (UID: \"def86847-3e92-471b-bcc2-f74e9bbc81d7\") " pod="openstack/nova-metadata-0" Feb 27 17:43:15 crc kubenswrapper[4830]: I0227 17:43:15.013586 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f130a868-eb17-446f-ae5a-91383cdbc74f-logs\") pod \"nova-api-0\" (UID: \"f130a868-eb17-446f-ae5a-91383cdbc74f\") " pod="openstack/nova-api-0" Feb 27 17:43:15 crc kubenswrapper[4830]: I0227 17:43:15.013678 4830 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-jshrb\" (UniqueName: \"kubernetes.io/projected/def86847-3e92-471b-bcc2-f74e9bbc81d7-kube-api-access-jshrb\") pod \"nova-metadata-0\" (UID: \"def86847-3e92-471b-bcc2-f74e9bbc81d7\") " pod="openstack/nova-metadata-0" Feb 27 17:43:15 crc kubenswrapper[4830]: I0227 17:43:15.013967 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94rqx\" (UniqueName: \"kubernetes.io/projected/f130a868-eb17-446f-ae5a-91383cdbc74f-kube-api-access-94rqx\") pod \"nova-api-0\" (UID: \"f130a868-eb17-446f-ae5a-91383cdbc74f\") " pod="openstack/nova-api-0" Feb 27 17:43:15 crc kubenswrapper[4830]: I0227 17:43:15.014096 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2bd39a49-2ce3-4ac6-aec1-316d99d5826c-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"2bd39a49-2ce3-4ac6-aec1-316d99d5826c\") " pod="openstack/nova-cell1-novncproxy-0" Feb 27 17:43:15 crc kubenswrapper[4830]: I0227 17:43:15.017465 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2bd39a49-2ce3-4ac6-aec1-316d99d5826c-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"2bd39a49-2ce3-4ac6-aec1-316d99d5826c\") " pod="openstack/nova-cell1-novncproxy-0" Feb 27 17:43:15 crc kubenswrapper[4830]: I0227 17:43:15.019056 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/def86847-3e92-471b-bcc2-f74e9bbc81d7-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"def86847-3e92-471b-bcc2-f74e9bbc81d7\") " pod="openstack/nova-metadata-0" Feb 27 17:43:15 crc kubenswrapper[4830]: I0227 17:43:15.022109 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/def86847-3e92-471b-bcc2-f74e9bbc81d7-config-data\") pod \"nova-metadata-0\" (UID: \"def86847-3e92-471b-bcc2-f74e9bbc81d7\") " pod="openstack/nova-metadata-0" Feb 27 17:43:15 crc kubenswrapper[4830]: I0227 17:43:15.029284 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tcj2z\" (UniqueName: \"kubernetes.io/projected/2bd39a49-2ce3-4ac6-aec1-316d99d5826c-kube-api-access-tcj2z\") pod \"nova-cell1-novncproxy-0\" (UID: \"2bd39a49-2ce3-4ac6-aec1-316d99d5826c\") " pod="openstack/nova-cell1-novncproxy-0" Feb 27 17:43:15 crc kubenswrapper[4830]: I0227 17:43:15.049060 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jshrb\" (UniqueName: \"kubernetes.io/projected/def86847-3e92-471b-bcc2-f74e9bbc81d7-kube-api-access-jshrb\") pod \"nova-metadata-0\" (UID: \"def86847-3e92-471b-bcc2-f74e9bbc81d7\") " pod="openstack/nova-metadata-0" Feb 27 17:43:15 crc kubenswrapper[4830]: I0227 17:43:15.116212 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-94rqx\" (UniqueName: \"kubernetes.io/projected/f130a868-eb17-446f-ae5a-91383cdbc74f-kube-api-access-94rqx\") pod \"nova-api-0\" (UID: \"f130a868-eb17-446f-ae5a-91383cdbc74f\") " pod="openstack/nova-api-0" Feb 27 17:43:15 crc kubenswrapper[4830]: I0227 17:43:15.116305 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8lf6\" (UniqueName: \"kubernetes.io/projected/bffeb097-0b73-4ade-8ea4-2a64979aeaf6-kube-api-access-l8lf6\") pod \"dnsmasq-dns-57fd8cfc4f-fkfd5\" (UID: \"bffeb097-0b73-4ade-8ea4-2a64979aeaf6\") " pod="openstack/dnsmasq-dns-57fd8cfc4f-fkfd5" Feb 27 17:43:15 crc kubenswrapper[4830]: I0227 17:43:15.116333 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bffeb097-0b73-4ade-8ea4-2a64979aeaf6-dns-svc\") pod 
\"dnsmasq-dns-57fd8cfc4f-fkfd5\" (UID: \"bffeb097-0b73-4ade-8ea4-2a64979aeaf6\") " pod="openstack/dnsmasq-dns-57fd8cfc4f-fkfd5" Feb 27 17:43:15 crc kubenswrapper[4830]: I0227 17:43:15.116369 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f130a868-eb17-446f-ae5a-91383cdbc74f-config-data\") pod \"nova-api-0\" (UID: \"f130a868-eb17-446f-ae5a-91383cdbc74f\") " pod="openstack/nova-api-0" Feb 27 17:43:15 crc kubenswrapper[4830]: I0227 17:43:15.116400 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bffeb097-0b73-4ade-8ea4-2a64979aeaf6-ovsdbserver-nb\") pod \"dnsmasq-dns-57fd8cfc4f-fkfd5\" (UID: \"bffeb097-0b73-4ade-8ea4-2a64979aeaf6\") " pod="openstack/dnsmasq-dns-57fd8cfc4f-fkfd5" Feb 27 17:43:15 crc kubenswrapper[4830]: I0227 17:43:15.116447 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f130a868-eb17-446f-ae5a-91383cdbc74f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f130a868-eb17-446f-ae5a-91383cdbc74f\") " pod="openstack/nova-api-0" Feb 27 17:43:15 crc kubenswrapper[4830]: I0227 17:43:15.116488 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f130a868-eb17-446f-ae5a-91383cdbc74f-logs\") pod \"nova-api-0\" (UID: \"f130a868-eb17-446f-ae5a-91383cdbc74f\") " pod="openstack/nova-api-0" Feb 27 17:43:15 crc kubenswrapper[4830]: I0227 17:43:15.116532 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bffeb097-0b73-4ade-8ea4-2a64979aeaf6-config\") pod \"dnsmasq-dns-57fd8cfc4f-fkfd5\" (UID: \"bffeb097-0b73-4ade-8ea4-2a64979aeaf6\") " pod="openstack/dnsmasq-dns-57fd8cfc4f-fkfd5" Feb 27 17:43:15 crc 
kubenswrapper[4830]: I0227 17:43:15.116551 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bffeb097-0b73-4ade-8ea4-2a64979aeaf6-ovsdbserver-sb\") pod \"dnsmasq-dns-57fd8cfc4f-fkfd5\" (UID: \"bffeb097-0b73-4ade-8ea4-2a64979aeaf6\") " pod="openstack/dnsmasq-dns-57fd8cfc4f-fkfd5" Feb 27 17:43:15 crc kubenswrapper[4830]: I0227 17:43:15.117795 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f130a868-eb17-446f-ae5a-91383cdbc74f-logs\") pod \"nova-api-0\" (UID: \"f130a868-eb17-446f-ae5a-91383cdbc74f\") " pod="openstack/nova-api-0" Feb 27 17:43:15 crc kubenswrapper[4830]: I0227 17:43:15.120369 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f130a868-eb17-446f-ae5a-91383cdbc74f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f130a868-eb17-446f-ae5a-91383cdbc74f\") " pod="openstack/nova-api-0" Feb 27 17:43:15 crc kubenswrapper[4830]: I0227 17:43:15.120771 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f130a868-eb17-446f-ae5a-91383cdbc74f-config-data\") pod \"nova-api-0\" (UID: \"f130a868-eb17-446f-ae5a-91383cdbc74f\") " pod="openstack/nova-api-0" Feb 27 17:43:15 crc kubenswrapper[4830]: I0227 17:43:15.135161 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-94rqx\" (UniqueName: \"kubernetes.io/projected/f130a868-eb17-446f-ae5a-91383cdbc74f-kube-api-access-94rqx\") pod \"nova-api-0\" (UID: \"f130a868-eb17-446f-ae5a-91383cdbc74f\") " pod="openstack/nova-api-0" Feb 27 17:43:15 crc kubenswrapper[4830]: I0227 17:43:15.219526 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/bffeb097-0b73-4ade-8ea4-2a64979aeaf6-ovsdbserver-nb\") pod \"dnsmasq-dns-57fd8cfc4f-fkfd5\" (UID: \"bffeb097-0b73-4ade-8ea4-2a64979aeaf6\") " pod="openstack/dnsmasq-dns-57fd8cfc4f-fkfd5" Feb 27 17:43:15 crc kubenswrapper[4830]: I0227 17:43:15.219680 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bffeb097-0b73-4ade-8ea4-2a64979aeaf6-config\") pod \"dnsmasq-dns-57fd8cfc4f-fkfd5\" (UID: \"bffeb097-0b73-4ade-8ea4-2a64979aeaf6\") " pod="openstack/dnsmasq-dns-57fd8cfc4f-fkfd5" Feb 27 17:43:15 crc kubenswrapper[4830]: I0227 17:43:15.219706 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bffeb097-0b73-4ade-8ea4-2a64979aeaf6-ovsdbserver-sb\") pod \"dnsmasq-dns-57fd8cfc4f-fkfd5\" (UID: \"bffeb097-0b73-4ade-8ea4-2a64979aeaf6\") " pod="openstack/dnsmasq-dns-57fd8cfc4f-fkfd5" Feb 27 17:43:15 crc kubenswrapper[4830]: I0227 17:43:15.219766 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8lf6\" (UniqueName: \"kubernetes.io/projected/bffeb097-0b73-4ade-8ea4-2a64979aeaf6-kube-api-access-l8lf6\") pod \"dnsmasq-dns-57fd8cfc4f-fkfd5\" (UID: \"bffeb097-0b73-4ade-8ea4-2a64979aeaf6\") " pod="openstack/dnsmasq-dns-57fd8cfc4f-fkfd5" Feb 27 17:43:15 crc kubenswrapper[4830]: I0227 17:43:15.219791 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bffeb097-0b73-4ade-8ea4-2a64979aeaf6-dns-svc\") pod \"dnsmasq-dns-57fd8cfc4f-fkfd5\" (UID: \"bffeb097-0b73-4ade-8ea4-2a64979aeaf6\") " pod="openstack/dnsmasq-dns-57fd8cfc4f-fkfd5" Feb 27 17:43:15 crc kubenswrapper[4830]: I0227 17:43:15.221014 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bffeb097-0b73-4ade-8ea4-2a64979aeaf6-config\") pod 
\"dnsmasq-dns-57fd8cfc4f-fkfd5\" (UID: \"bffeb097-0b73-4ade-8ea4-2a64979aeaf6\") " pod="openstack/dnsmasq-dns-57fd8cfc4f-fkfd5" Feb 27 17:43:15 crc kubenswrapper[4830]: I0227 17:43:15.221061 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bffeb097-0b73-4ade-8ea4-2a64979aeaf6-ovsdbserver-nb\") pod \"dnsmasq-dns-57fd8cfc4f-fkfd5\" (UID: \"bffeb097-0b73-4ade-8ea4-2a64979aeaf6\") " pod="openstack/dnsmasq-dns-57fd8cfc4f-fkfd5" Feb 27 17:43:15 crc kubenswrapper[4830]: I0227 17:43:15.221743 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bffeb097-0b73-4ade-8ea4-2a64979aeaf6-dns-svc\") pod \"dnsmasq-dns-57fd8cfc4f-fkfd5\" (UID: \"bffeb097-0b73-4ade-8ea4-2a64979aeaf6\") " pod="openstack/dnsmasq-dns-57fd8cfc4f-fkfd5" Feb 27 17:43:15 crc kubenswrapper[4830]: I0227 17:43:15.221865 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bffeb097-0b73-4ade-8ea4-2a64979aeaf6-ovsdbserver-sb\") pod \"dnsmasq-dns-57fd8cfc4f-fkfd5\" (UID: \"bffeb097-0b73-4ade-8ea4-2a64979aeaf6\") " pod="openstack/dnsmasq-dns-57fd8cfc4f-fkfd5" Feb 27 17:43:15 crc kubenswrapper[4830]: I0227 17:43:15.236978 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 27 17:43:15 crc kubenswrapper[4830]: I0227 17:43:15.244585 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8lf6\" (UniqueName: \"kubernetes.io/projected/bffeb097-0b73-4ade-8ea4-2a64979aeaf6-kube-api-access-l8lf6\") pod \"dnsmasq-dns-57fd8cfc4f-fkfd5\" (UID: \"bffeb097-0b73-4ade-8ea4-2a64979aeaf6\") " pod="openstack/dnsmasq-dns-57fd8cfc4f-fkfd5" Feb 27 17:43:15 crc kubenswrapper[4830]: I0227 17:43:15.256281 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 27 17:43:15 crc kubenswrapper[4830]: I0227 17:43:15.274608 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 27 17:43:15 crc kubenswrapper[4830]: I0227 17:43:15.284968 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57fd8cfc4f-fkfd5" Feb 27 17:43:15 crc kubenswrapper[4830]: I0227 17:43:15.446235 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 27 17:43:15 crc kubenswrapper[4830]: I0227 17:43:15.467079 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-kq4fc"] Feb 27 17:43:15 crc kubenswrapper[4830]: I0227 17:43:15.469531 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-kq4fc" Feb 27 17:43:15 crc kubenswrapper[4830]: I0227 17:43:15.475064 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 27 17:43:15 crc kubenswrapper[4830]: I0227 17:43:15.475512 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Feb 27 17:43:15 crc kubenswrapper[4830]: I0227 17:43:15.484030 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-kq4fc"] Feb 27 17:43:15 crc kubenswrapper[4830]: W0227 17:43:15.484923 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4cde5adc_f8ac_4c3a_b8e2_a192b62bedf7.slice/crio-7978251f81fd8e3f8923656868202dcc62b86881b795f90001c6b037f50b8f75 WatchSource:0}: Error finding container 7978251f81fd8e3f8923656868202dcc62b86881b795f90001c6b037f50b8f75: Status 404 returned error can't find the container with id 7978251f81fd8e3f8923656868202dcc62b86881b795f90001c6b037f50b8f75 Feb 27 17:43:15 crc 
kubenswrapper[4830]: W0227 17:43:15.487484 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod61ef5006_416b_43e0_a9f1_7b69382403be.slice/crio-fc70d53b8e89622e7c25bb6973f5986cc96c74befb51d5a4e2052f71126b568e WatchSource:0}: Error finding container fc70d53b8e89622e7c25bb6973f5986cc96c74befb51d5a4e2052f71126b568e: Status 404 returned error can't find the container with id fc70d53b8e89622e7c25bb6973f5986cc96c74befb51d5a4e2052f71126b568e Feb 27 17:43:15 crc kubenswrapper[4830]: I0227 17:43:15.495742 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-ndmjp"] Feb 27 17:43:15 crc kubenswrapper[4830]: I0227 17:43:15.628099 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dr99\" (UniqueName: \"kubernetes.io/projected/ffc038e5-1cf6-4e18-b5a8-c9546ec8a16c-kube-api-access-6dr99\") pod \"nova-cell1-conductor-db-sync-kq4fc\" (UID: \"ffc038e5-1cf6-4e18-b5a8-c9546ec8a16c\") " pod="openstack/nova-cell1-conductor-db-sync-kq4fc" Feb 27 17:43:15 crc kubenswrapper[4830]: I0227 17:43:15.628372 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffc038e5-1cf6-4e18-b5a8-c9546ec8a16c-config-data\") pod \"nova-cell1-conductor-db-sync-kq4fc\" (UID: \"ffc038e5-1cf6-4e18-b5a8-c9546ec8a16c\") " pod="openstack/nova-cell1-conductor-db-sync-kq4fc" Feb 27 17:43:15 crc kubenswrapper[4830]: I0227 17:43:15.628630 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffc038e5-1cf6-4e18-b5a8-c9546ec8a16c-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-kq4fc\" (UID: \"ffc038e5-1cf6-4e18-b5a8-c9546ec8a16c\") " pod="openstack/nova-cell1-conductor-db-sync-kq4fc" Feb 27 17:43:15 crc kubenswrapper[4830]: 
I0227 17:43:15.628704 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ffc038e5-1cf6-4e18-b5a8-c9546ec8a16c-scripts\") pod \"nova-cell1-conductor-db-sync-kq4fc\" (UID: \"ffc038e5-1cf6-4e18-b5a8-c9546ec8a16c\") " pod="openstack/nova-cell1-conductor-db-sync-kq4fc" Feb 27 17:43:15 crc kubenswrapper[4830]: I0227 17:43:15.661549 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-ndmjp" event={"ID":"61ef5006-416b-43e0-a9f1-7b69382403be","Type":"ContainerStarted","Data":"fc70d53b8e89622e7c25bb6973f5986cc96c74befb51d5a4e2052f71126b568e"} Feb 27 17:43:15 crc kubenswrapper[4830]: I0227 17:43:15.663299 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4cde5adc-f8ac-4c3a-b8e2-a192b62bedf7","Type":"ContainerStarted","Data":"7978251f81fd8e3f8923656868202dcc62b86881b795f90001c6b037f50b8f75"} Feb 27 17:43:15 crc kubenswrapper[4830]: I0227 17:43:15.731266 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6dr99\" (UniqueName: \"kubernetes.io/projected/ffc038e5-1cf6-4e18-b5a8-c9546ec8a16c-kube-api-access-6dr99\") pod \"nova-cell1-conductor-db-sync-kq4fc\" (UID: \"ffc038e5-1cf6-4e18-b5a8-c9546ec8a16c\") " pod="openstack/nova-cell1-conductor-db-sync-kq4fc" Feb 27 17:43:15 crc kubenswrapper[4830]: I0227 17:43:15.731360 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffc038e5-1cf6-4e18-b5a8-c9546ec8a16c-config-data\") pod \"nova-cell1-conductor-db-sync-kq4fc\" (UID: \"ffc038e5-1cf6-4e18-b5a8-c9546ec8a16c\") " pod="openstack/nova-cell1-conductor-db-sync-kq4fc" Feb 27 17:43:15 crc kubenswrapper[4830]: I0227 17:43:15.731415 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ffc038e5-1cf6-4e18-b5a8-c9546ec8a16c-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-kq4fc\" (UID: \"ffc038e5-1cf6-4e18-b5a8-c9546ec8a16c\") " pod="openstack/nova-cell1-conductor-db-sync-kq4fc" Feb 27 17:43:15 crc kubenswrapper[4830]: I0227 17:43:15.731441 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ffc038e5-1cf6-4e18-b5a8-c9546ec8a16c-scripts\") pod \"nova-cell1-conductor-db-sync-kq4fc\" (UID: \"ffc038e5-1cf6-4e18-b5a8-c9546ec8a16c\") " pod="openstack/nova-cell1-conductor-db-sync-kq4fc" Feb 27 17:43:15 crc kubenswrapper[4830]: I0227 17:43:15.736366 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ffc038e5-1cf6-4e18-b5a8-c9546ec8a16c-scripts\") pod \"nova-cell1-conductor-db-sync-kq4fc\" (UID: \"ffc038e5-1cf6-4e18-b5a8-c9546ec8a16c\") " pod="openstack/nova-cell1-conductor-db-sync-kq4fc" Feb 27 17:43:15 crc kubenswrapper[4830]: I0227 17:43:15.752874 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6dr99\" (UniqueName: \"kubernetes.io/projected/ffc038e5-1cf6-4e18-b5a8-c9546ec8a16c-kube-api-access-6dr99\") pod \"nova-cell1-conductor-db-sync-kq4fc\" (UID: \"ffc038e5-1cf6-4e18-b5a8-c9546ec8a16c\") " pod="openstack/nova-cell1-conductor-db-sync-kq4fc" Feb 27 17:43:15 crc kubenswrapper[4830]: I0227 17:43:15.754769 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 27 17:43:15 crc kubenswrapper[4830]: I0227 17:43:15.756349 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffc038e5-1cf6-4e18-b5a8-c9546ec8a16c-config-data\") pod \"nova-cell1-conductor-db-sync-kq4fc\" (UID: \"ffc038e5-1cf6-4e18-b5a8-c9546ec8a16c\") " pod="openstack/nova-cell1-conductor-db-sync-kq4fc" Feb 27 17:43:15 crc kubenswrapper[4830]: I0227 17:43:15.756736 
4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffc038e5-1cf6-4e18-b5a8-c9546ec8a16c-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-kq4fc\" (UID: \"ffc038e5-1cf6-4e18-b5a8-c9546ec8a16c\") " pod="openstack/nova-cell1-conductor-db-sync-kq4fc" Feb 27 17:43:15 crc kubenswrapper[4830]: I0227 17:43:15.820219 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-kq4fc" Feb 27 17:43:15 crc kubenswrapper[4830]: I0227 17:43:15.943123 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57fd8cfc4f-fkfd5"] Feb 27 17:43:15 crc kubenswrapper[4830]: I0227 17:43:15.969810 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 27 17:43:15 crc kubenswrapper[4830]: I0227 17:43:15.979618 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 27 17:43:16 crc kubenswrapper[4830]: I0227 17:43:16.368431 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-kq4fc"] Feb 27 17:43:16 crc kubenswrapper[4830]: I0227 17:43:16.676773 4830 generic.go:334] "Generic (PLEG): container finished" podID="bffeb097-0b73-4ade-8ea4-2a64979aeaf6" containerID="0a7ce7b7f3399284623af1cb03abdcfed6ecd557b8e7a48338df06401e258595" exitCode=0 Feb 27 17:43:16 crc kubenswrapper[4830]: I0227 17:43:16.676871 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57fd8cfc4f-fkfd5" event={"ID":"bffeb097-0b73-4ade-8ea4-2a64979aeaf6","Type":"ContainerDied","Data":"0a7ce7b7f3399284623af1cb03abdcfed6ecd557b8e7a48338df06401e258595"} Feb 27 17:43:16 crc kubenswrapper[4830]: I0227 17:43:16.676914 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57fd8cfc4f-fkfd5" 
event={"ID":"bffeb097-0b73-4ade-8ea4-2a64979aeaf6","Type":"ContainerStarted","Data":"82472f0be044648f9e384565bf065c692bbdab4b0374901bdb88397fb63c2996"} Feb 27 17:43:16 crc kubenswrapper[4830]: I0227 17:43:16.681404 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f130a868-eb17-446f-ae5a-91383cdbc74f","Type":"ContainerStarted","Data":"57181ff001ac4e88ddd86529a637c3bf16bd14faf7307f3331a45a0542ad98b0"} Feb 27 17:43:16 crc kubenswrapper[4830]: I0227 17:43:16.681450 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f130a868-eb17-446f-ae5a-91383cdbc74f","Type":"ContainerStarted","Data":"a279dbdc2e4988fecd11fa384d473e1f5526bcce86435cf0b9585b095ddadf7b"} Feb 27 17:43:16 crc kubenswrapper[4830]: I0227 17:43:16.681465 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f130a868-eb17-446f-ae5a-91383cdbc74f","Type":"ContainerStarted","Data":"0aedd4135edbec81c619bba59424b72933348430a595296d22afb79e16e5df0e"} Feb 27 17:43:16 crc kubenswrapper[4830]: I0227 17:43:16.694690 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"def86847-3e92-471b-bcc2-f74e9bbc81d7","Type":"ContainerStarted","Data":"bbbe93c71e52498bfba29579a1a1b07039bcd1f784ba34a4b356cdde581b0498"} Feb 27 17:43:16 crc kubenswrapper[4830]: I0227 17:43:16.694746 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"def86847-3e92-471b-bcc2-f74e9bbc81d7","Type":"ContainerStarted","Data":"4d3cd6e345b3f9541069fd7136ad641811ed65127c378e8f3c5dda7092c435ee"} Feb 27 17:43:16 crc kubenswrapper[4830]: I0227 17:43:16.694761 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"def86847-3e92-471b-bcc2-f74e9bbc81d7","Type":"ContainerStarted","Data":"b81b6035f605f6f2ca5e9813c836025b1b319958eb6be9969f7ff0d5e3196ce1"} Feb 27 17:43:16 crc kubenswrapper[4830]: I0227 
17:43:16.705130 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-ndmjp" event={"ID":"61ef5006-416b-43e0-a9f1-7b69382403be","Type":"ContainerStarted","Data":"81c39bf03dba3f79e5092a516a3901e60d5f58387766449ed4e8712344ebc8c1"} Feb 27 17:43:16 crc kubenswrapper[4830]: I0227 17:43:16.716317 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4cde5adc-f8ac-4c3a-b8e2-a192b62bedf7","Type":"ContainerStarted","Data":"69899d04f75d9cce352e42624bb054fdd77ddb28e30def641fdb17041406d389"} Feb 27 17:43:16 crc kubenswrapper[4830]: I0227 17:43:16.732881 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"2bd39a49-2ce3-4ac6-aec1-316d99d5826c","Type":"ContainerStarted","Data":"f6d0526901a9a4b9aa9a02cef7cc69c87ccf982dce23e12e734ec2de215090d5"} Feb 27 17:43:16 crc kubenswrapper[4830]: I0227 17:43:16.732938 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"2bd39a49-2ce3-4ac6-aec1-316d99d5826c","Type":"ContainerStarted","Data":"07196a3529fe89f221d626c8a6869243093fae3dd46ed8b1d6779427082481c3"} Feb 27 17:43:16 crc kubenswrapper[4830]: I0227 17:43:16.736044 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-kq4fc" event={"ID":"ffc038e5-1cf6-4e18-b5a8-c9546ec8a16c","Type":"ContainerStarted","Data":"a9bf9967bbceebd84b8dc260a334a1841094db77babbd526ae46b5b15bdef700"} Feb 27 17:43:16 crc kubenswrapper[4830]: I0227 17:43:16.736099 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-kq4fc" event={"ID":"ffc038e5-1cf6-4e18-b5a8-c9546ec8a16c","Type":"ContainerStarted","Data":"72916fc395c1a74df9cbbedba854f70b0bb9da02e426888e449425b0b95409d8"} Feb 27 17:43:16 crc kubenswrapper[4830]: I0227 17:43:16.736398 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" 
podStartSLOduration=2.736381458 podStartE2EDuration="2.736381458s" podCreationTimestamp="2026-02-27 17:43:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 17:43:16.7223476 +0000 UTC m=+5792.811620063" watchObservedRunningTime="2026-02-27 17:43:16.736381458 +0000 UTC m=+5792.825653911" Feb 27 17:43:16 crc kubenswrapper[4830]: I0227 17:43:16.758804 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.758775997 podStartE2EDuration="2.758775997s" podCreationTimestamp="2026-02-27 17:43:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 17:43:16.756886421 +0000 UTC m=+5792.846158884" watchObservedRunningTime="2026-02-27 17:43:16.758775997 +0000 UTC m=+5792.848048460" Feb 27 17:43:16 crc kubenswrapper[4830]: I0227 17:43:16.763962 4830 scope.go:117] "RemoveContainer" containerID="0f73af46b765b3d98f3f5e9883b49887ac4f2ac3485e91a001e18c5d351646d8" Feb 27 17:43:16 crc kubenswrapper[4830]: E0227 17:43:16.764214 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 17:43:16 crc kubenswrapper[4830]: I0227 17:43:16.807502 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.807484438 podStartE2EDuration="2.807484438s" podCreationTimestamp="2026-02-27 17:43:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2026-02-27 17:43:16.804857195 +0000 UTC m=+5792.894129658" watchObservedRunningTime="2026-02-27 17:43:16.807484438 +0000 UTC m=+5792.896756901" Feb 27 17:43:16 crc kubenswrapper[4830]: I0227 17:43:16.837710 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-ndmjp" podStartSLOduration=2.837675424 podStartE2EDuration="2.837675424s" podCreationTimestamp="2026-02-27 17:43:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 17:43:16.836436004 +0000 UTC m=+5792.925708467" watchObservedRunningTime="2026-02-27 17:43:16.837675424 +0000 UTC m=+5792.926947887" Feb 27 17:43:16 crc kubenswrapper[4830]: I0227 17:43:16.861056 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-kq4fc" podStartSLOduration=1.861024976 podStartE2EDuration="1.861024976s" podCreationTimestamp="2026-02-27 17:43:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 17:43:16.852748837 +0000 UTC m=+5792.942021300" watchObservedRunningTime="2026-02-27 17:43:16.861024976 +0000 UTC m=+5792.950297439" Feb 27 17:43:16 crc kubenswrapper[4830]: I0227 17:43:16.900978 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.900929385 podStartE2EDuration="2.900929385s" podCreationTimestamp="2026-02-27 17:43:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 17:43:16.882920933 +0000 UTC m=+5792.972193396" watchObservedRunningTime="2026-02-27 17:43:16.900929385 +0000 UTC m=+5792.990201838" Feb 27 17:43:17 crc kubenswrapper[4830]: I0227 17:43:17.750336 4830 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/dnsmasq-dns-57fd8cfc4f-fkfd5" event={"ID":"bffeb097-0b73-4ade-8ea4-2a64979aeaf6","Type":"ContainerStarted","Data":"33495a6609ce6a02b046a20973bcc16da1ecd244375f83edaeaac9a05ceab2e2"} Feb 27 17:43:17 crc kubenswrapper[4830]: I0227 17:43:17.784136 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57fd8cfc4f-fkfd5" podStartSLOduration=3.784109547 podStartE2EDuration="3.784109547s" podCreationTimestamp="2026-02-27 17:43:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 17:43:17.773564084 +0000 UTC m=+5793.862836547" watchObservedRunningTime="2026-02-27 17:43:17.784109547 +0000 UTC m=+5793.873382010" Feb 27 17:43:18 crc kubenswrapper[4830]: I0227 17:43:18.759501 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57fd8cfc4f-fkfd5" Feb 27 17:43:18 crc kubenswrapper[4830]: E0227 17:43:18.789169 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536898-vrwjs" podUID="204eb1af-36ad-4de7-9da7-9a37fefd3026" Feb 27 17:43:19 crc kubenswrapper[4830]: I0227 17:43:19.775832 4830 generic.go:334] "Generic (PLEG): container finished" podID="ffc038e5-1cf6-4e18-b5a8-c9546ec8a16c" containerID="a9bf9967bbceebd84b8dc260a334a1841094db77babbd526ae46b5b15bdef700" exitCode=0 Feb 27 17:43:19 crc kubenswrapper[4830]: I0227 17:43:19.777120 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-kq4fc" event={"ID":"ffc038e5-1cf6-4e18-b5a8-c9546ec8a16c","Type":"ContainerDied","Data":"a9bf9967bbceebd84b8dc260a334a1841094db77babbd526ae46b5b15bdef700"} Feb 27 17:43:19 crc kubenswrapper[4830]: I0227 17:43:19.925079 4830 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 27 17:43:20 crc kubenswrapper[4830]: I0227 17:43:20.238555 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 27 17:43:20 crc kubenswrapper[4830]: I0227 17:43:20.257610 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 27 17:43:20 crc kubenswrapper[4830]: I0227 17:43:20.258299 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 27 17:43:21 crc kubenswrapper[4830]: I0227 17:43:21.243840 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-kq4fc" Feb 27 17:43:21 crc kubenswrapper[4830]: I0227 17:43:21.397654 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffc038e5-1cf6-4e18-b5a8-c9546ec8a16c-config-data\") pod \"ffc038e5-1cf6-4e18-b5a8-c9546ec8a16c\" (UID: \"ffc038e5-1cf6-4e18-b5a8-c9546ec8a16c\") " Feb 27 17:43:21 crc kubenswrapper[4830]: I0227 17:43:21.397722 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6dr99\" (UniqueName: \"kubernetes.io/projected/ffc038e5-1cf6-4e18-b5a8-c9546ec8a16c-kube-api-access-6dr99\") pod \"ffc038e5-1cf6-4e18-b5a8-c9546ec8a16c\" (UID: \"ffc038e5-1cf6-4e18-b5a8-c9546ec8a16c\") " Feb 27 17:43:21 crc kubenswrapper[4830]: I0227 17:43:21.397933 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffc038e5-1cf6-4e18-b5a8-c9546ec8a16c-combined-ca-bundle\") pod \"ffc038e5-1cf6-4e18-b5a8-c9546ec8a16c\" (UID: \"ffc038e5-1cf6-4e18-b5a8-c9546ec8a16c\") " Feb 27 17:43:21 crc kubenswrapper[4830]: I0227 17:43:21.398014 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/ffc038e5-1cf6-4e18-b5a8-c9546ec8a16c-scripts\") pod \"ffc038e5-1cf6-4e18-b5a8-c9546ec8a16c\" (UID: \"ffc038e5-1cf6-4e18-b5a8-c9546ec8a16c\") " Feb 27 17:43:21 crc kubenswrapper[4830]: I0227 17:43:21.419234 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffc038e5-1cf6-4e18-b5a8-c9546ec8a16c-scripts" (OuterVolumeSpecName: "scripts") pod "ffc038e5-1cf6-4e18-b5a8-c9546ec8a16c" (UID: "ffc038e5-1cf6-4e18-b5a8-c9546ec8a16c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:43:21 crc kubenswrapper[4830]: I0227 17:43:21.419390 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ffc038e5-1cf6-4e18-b5a8-c9546ec8a16c-kube-api-access-6dr99" (OuterVolumeSpecName: "kube-api-access-6dr99") pod "ffc038e5-1cf6-4e18-b5a8-c9546ec8a16c" (UID: "ffc038e5-1cf6-4e18-b5a8-c9546ec8a16c"). InnerVolumeSpecName "kube-api-access-6dr99". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:43:21 crc kubenswrapper[4830]: I0227 17:43:21.456722 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffc038e5-1cf6-4e18-b5a8-c9546ec8a16c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ffc038e5-1cf6-4e18-b5a8-c9546ec8a16c" (UID: "ffc038e5-1cf6-4e18-b5a8-c9546ec8a16c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:43:21 crc kubenswrapper[4830]: I0227 17:43:21.458543 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffc038e5-1cf6-4e18-b5a8-c9546ec8a16c-config-data" (OuterVolumeSpecName: "config-data") pod "ffc038e5-1cf6-4e18-b5a8-c9546ec8a16c" (UID: "ffc038e5-1cf6-4e18-b5a8-c9546ec8a16c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:43:21 crc kubenswrapper[4830]: I0227 17:43:21.504839 4830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffc038e5-1cf6-4e18-b5a8-c9546ec8a16c-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 17:43:21 crc kubenswrapper[4830]: I0227 17:43:21.504886 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6dr99\" (UniqueName: \"kubernetes.io/projected/ffc038e5-1cf6-4e18-b5a8-c9546ec8a16c-kube-api-access-6dr99\") on node \"crc\" DevicePath \"\"" Feb 27 17:43:21 crc kubenswrapper[4830]: I0227 17:43:21.504914 4830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffc038e5-1cf6-4e18-b5a8-c9546ec8a16c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 17:43:21 crc kubenswrapper[4830]: I0227 17:43:21.504932 4830 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ffc038e5-1cf6-4e18-b5a8-c9546ec8a16c-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 17:43:21 crc kubenswrapper[4830]: I0227 17:43:21.805724 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-kq4fc" Feb 27 17:43:21 crc kubenswrapper[4830]: I0227 17:43:21.805765 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-kq4fc" event={"ID":"ffc038e5-1cf6-4e18-b5a8-c9546ec8a16c","Type":"ContainerDied","Data":"72916fc395c1a74df9cbbedba854f70b0bb9da02e426888e449425b0b95409d8"} Feb 27 17:43:21 crc kubenswrapper[4830]: I0227 17:43:21.806533 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="72916fc395c1a74df9cbbedba854f70b0bb9da02e426888e449425b0b95409d8" Feb 27 17:43:21 crc kubenswrapper[4830]: I0227 17:43:21.809722 4830 generic.go:334] "Generic (PLEG): container finished" podID="61ef5006-416b-43e0-a9f1-7b69382403be" containerID="81c39bf03dba3f79e5092a516a3901e60d5f58387766449ed4e8712344ebc8c1" exitCode=0 Feb 27 17:43:21 crc kubenswrapper[4830]: I0227 17:43:21.809813 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-ndmjp" event={"ID":"61ef5006-416b-43e0-a9f1-7b69382403be","Type":"ContainerDied","Data":"81c39bf03dba3f79e5092a516a3901e60d5f58387766449ed4e8712344ebc8c1"} Feb 27 17:43:21 crc kubenswrapper[4830]: I0227 17:43:21.917622 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 27 17:43:21 crc kubenswrapper[4830]: E0227 17:43:21.923252 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffc038e5-1cf6-4e18-b5a8-c9546ec8a16c" containerName="nova-cell1-conductor-db-sync" Feb 27 17:43:21 crc kubenswrapper[4830]: I0227 17:43:21.923300 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffc038e5-1cf6-4e18-b5a8-c9546ec8a16c" containerName="nova-cell1-conductor-db-sync" Feb 27 17:43:21 crc kubenswrapper[4830]: I0227 17:43:21.924831 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="ffc038e5-1cf6-4e18-b5a8-c9546ec8a16c" containerName="nova-cell1-conductor-db-sync" Feb 27 17:43:21 crc 
kubenswrapper[4830]: I0227 17:43:21.925965 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 27 17:43:21 crc kubenswrapper[4830]: I0227 17:43:21.938867 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 27 17:43:21 crc kubenswrapper[4830]: I0227 17:43:21.939467 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 27 17:43:22 crc kubenswrapper[4830]: I0227 17:43:22.117898 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71167e3e-162d-4836-939e-0abbc7a1217c-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"71167e3e-162d-4836-939e-0abbc7a1217c\") " pod="openstack/nova-cell1-conductor-0" Feb 27 17:43:22 crc kubenswrapper[4830]: I0227 17:43:22.118162 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71167e3e-162d-4836-939e-0abbc7a1217c-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"71167e3e-162d-4836-939e-0abbc7a1217c\") " pod="openstack/nova-cell1-conductor-0" Feb 27 17:43:22 crc kubenswrapper[4830]: I0227 17:43:22.118249 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hfpf\" (UniqueName: \"kubernetes.io/projected/71167e3e-162d-4836-939e-0abbc7a1217c-kube-api-access-5hfpf\") pod \"nova-cell1-conductor-0\" (UID: \"71167e3e-162d-4836-939e-0abbc7a1217c\") " pod="openstack/nova-cell1-conductor-0" Feb 27 17:43:22 crc kubenswrapper[4830]: I0227 17:43:22.221259 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71167e3e-162d-4836-939e-0abbc7a1217c-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: 
\"71167e3e-162d-4836-939e-0abbc7a1217c\") " pod="openstack/nova-cell1-conductor-0" Feb 27 17:43:22 crc kubenswrapper[4830]: I0227 17:43:22.221366 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5hfpf\" (UniqueName: \"kubernetes.io/projected/71167e3e-162d-4836-939e-0abbc7a1217c-kube-api-access-5hfpf\") pod \"nova-cell1-conductor-0\" (UID: \"71167e3e-162d-4836-939e-0abbc7a1217c\") " pod="openstack/nova-cell1-conductor-0" Feb 27 17:43:22 crc kubenswrapper[4830]: I0227 17:43:22.221459 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71167e3e-162d-4836-939e-0abbc7a1217c-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"71167e3e-162d-4836-939e-0abbc7a1217c\") " pod="openstack/nova-cell1-conductor-0" Feb 27 17:43:22 crc kubenswrapper[4830]: I0227 17:43:22.232194 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71167e3e-162d-4836-939e-0abbc7a1217c-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"71167e3e-162d-4836-939e-0abbc7a1217c\") " pod="openstack/nova-cell1-conductor-0" Feb 27 17:43:22 crc kubenswrapper[4830]: I0227 17:43:22.237838 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71167e3e-162d-4836-939e-0abbc7a1217c-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"71167e3e-162d-4836-939e-0abbc7a1217c\") " pod="openstack/nova-cell1-conductor-0" Feb 27 17:43:22 crc kubenswrapper[4830]: I0227 17:43:22.247906 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5hfpf\" (UniqueName: \"kubernetes.io/projected/71167e3e-162d-4836-939e-0abbc7a1217c-kube-api-access-5hfpf\") pod \"nova-cell1-conductor-0\" (UID: \"71167e3e-162d-4836-939e-0abbc7a1217c\") " pod="openstack/nova-cell1-conductor-0" Feb 27 17:43:22 crc kubenswrapper[4830]: 
I0227 17:43:22.263747 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 27 17:43:22 crc kubenswrapper[4830]: I0227 17:43:22.608657 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 27 17:43:22 crc kubenswrapper[4830]: I0227 17:43:22.825296 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"71167e3e-162d-4836-939e-0abbc7a1217c","Type":"ContainerStarted","Data":"62a2cbc065dd21bb8521d3ad3d54106d8235ade1cb92aadf9969c7828771ddae"} Feb 27 17:43:23 crc kubenswrapper[4830]: I0227 17:43:23.152101 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-ndmjp" Feb 27 17:43:23 crc kubenswrapper[4830]: I0227 17:43:23.349167 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z875f\" (UniqueName: \"kubernetes.io/projected/61ef5006-416b-43e0-a9f1-7b69382403be-kube-api-access-z875f\") pod \"61ef5006-416b-43e0-a9f1-7b69382403be\" (UID: \"61ef5006-416b-43e0-a9f1-7b69382403be\") " Feb 27 17:43:23 crc kubenswrapper[4830]: I0227 17:43:23.349257 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61ef5006-416b-43e0-a9f1-7b69382403be-config-data\") pod \"61ef5006-416b-43e0-a9f1-7b69382403be\" (UID: \"61ef5006-416b-43e0-a9f1-7b69382403be\") " Feb 27 17:43:23 crc kubenswrapper[4830]: I0227 17:43:23.349935 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/61ef5006-416b-43e0-a9f1-7b69382403be-scripts\") pod \"61ef5006-416b-43e0-a9f1-7b69382403be\" (UID: \"61ef5006-416b-43e0-a9f1-7b69382403be\") " Feb 27 17:43:23 crc kubenswrapper[4830]: I0227 17:43:23.350199 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61ef5006-416b-43e0-a9f1-7b69382403be-combined-ca-bundle\") pod \"61ef5006-416b-43e0-a9f1-7b69382403be\" (UID: \"61ef5006-416b-43e0-a9f1-7b69382403be\") " Feb 27 17:43:23 crc kubenswrapper[4830]: I0227 17:43:23.357897 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61ef5006-416b-43e0-a9f1-7b69382403be-kube-api-access-z875f" (OuterVolumeSpecName: "kube-api-access-z875f") pod "61ef5006-416b-43e0-a9f1-7b69382403be" (UID: "61ef5006-416b-43e0-a9f1-7b69382403be"). InnerVolumeSpecName "kube-api-access-z875f". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:43:23 crc kubenswrapper[4830]: I0227 17:43:23.357960 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61ef5006-416b-43e0-a9f1-7b69382403be-scripts" (OuterVolumeSpecName: "scripts") pod "61ef5006-416b-43e0-a9f1-7b69382403be" (UID: "61ef5006-416b-43e0-a9f1-7b69382403be"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:43:23 crc kubenswrapper[4830]: I0227 17:43:23.383175 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61ef5006-416b-43e0-a9f1-7b69382403be-config-data" (OuterVolumeSpecName: "config-data") pod "61ef5006-416b-43e0-a9f1-7b69382403be" (UID: "61ef5006-416b-43e0-a9f1-7b69382403be"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:43:23 crc kubenswrapper[4830]: I0227 17:43:23.384676 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61ef5006-416b-43e0-a9f1-7b69382403be-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "61ef5006-416b-43e0-a9f1-7b69382403be" (UID: "61ef5006-416b-43e0-a9f1-7b69382403be"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:43:23 crc kubenswrapper[4830]: I0227 17:43:23.453517 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z875f\" (UniqueName: \"kubernetes.io/projected/61ef5006-416b-43e0-a9f1-7b69382403be-kube-api-access-z875f\") on node \"crc\" DevicePath \"\"" Feb 27 17:43:23 crc kubenswrapper[4830]: I0227 17:43:23.453564 4830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61ef5006-416b-43e0-a9f1-7b69382403be-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 17:43:23 crc kubenswrapper[4830]: I0227 17:43:23.453576 4830 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/61ef5006-416b-43e0-a9f1-7b69382403be-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 17:43:23 crc kubenswrapper[4830]: I0227 17:43:23.453586 4830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61ef5006-416b-43e0-a9f1-7b69382403be-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 17:43:23 crc kubenswrapper[4830]: I0227 17:43:23.868032 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"71167e3e-162d-4836-939e-0abbc7a1217c","Type":"ContainerStarted","Data":"7d7ada0491ba00be4bc229dd1643af1a324ae49a6bf62e3d0bb68329193d84a5"} Feb 27 17:43:23 crc kubenswrapper[4830]: I0227 17:43:23.869233 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Feb 27 17:43:23 crc kubenswrapper[4830]: I0227 17:43:23.875164 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-ndmjp" event={"ID":"61ef5006-416b-43e0-a9f1-7b69382403be","Type":"ContainerDied","Data":"fc70d53b8e89622e7c25bb6973f5986cc96c74befb51d5a4e2052f71126b568e"} Feb 27 17:43:23 crc kubenswrapper[4830]: I0227 17:43:23.875210 4830 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fc70d53b8e89622e7c25bb6973f5986cc96c74befb51d5a4e2052f71126b568e" Feb 27 17:43:23 crc kubenswrapper[4830]: I0227 17:43:23.875307 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-ndmjp" Feb 27 17:43:23 crc kubenswrapper[4830]: I0227 17:43:23.915065 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.915042236 podStartE2EDuration="2.915042236s" podCreationTimestamp="2026-02-27 17:43:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 17:43:23.896396418 +0000 UTC m=+5799.985668911" watchObservedRunningTime="2026-02-27 17:43:23.915042236 +0000 UTC m=+5800.004314709" Feb 27 17:43:24 crc kubenswrapper[4830]: I0227 17:43:24.067735 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 27 17:43:24 crc kubenswrapper[4830]: I0227 17:43:24.068212 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="f130a868-eb17-446f-ae5a-91383cdbc74f" containerName="nova-api-api" containerID="cri-o://57181ff001ac4e88ddd86529a637c3bf16bd14faf7307f3331a45a0542ad98b0" gracePeriod=30 Feb 27 17:43:24 crc kubenswrapper[4830]: I0227 17:43:24.068611 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="f130a868-eb17-446f-ae5a-91383cdbc74f" containerName="nova-api-log" containerID="cri-o://a279dbdc2e4988fecd11fa384d473e1f5526bcce86435cf0b9585b095ddadf7b" gracePeriod=30 Feb 27 17:43:24 crc kubenswrapper[4830]: I0227 17:43:24.142885 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 27 17:43:24 crc kubenswrapper[4830]: I0227 17:43:24.143232 4830 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openstack/nova-scheduler-0" podUID="4cde5adc-f8ac-4c3a-b8e2-a192b62bedf7" containerName="nova-scheduler-scheduler" containerID="cri-o://69899d04f75d9cce352e42624bb054fdd77ddb28e30def641fdb17041406d389" gracePeriod=30 Feb 27 17:43:24 crc kubenswrapper[4830]: I0227 17:43:24.159700 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 27 17:43:24 crc kubenswrapper[4830]: I0227 17:43:24.160140 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="def86847-3e92-471b-bcc2-f74e9bbc81d7" containerName="nova-metadata-log" containerID="cri-o://4d3cd6e345b3f9541069fd7136ad641811ed65127c378e8f3c5dda7092c435ee" gracePeriod=30 Feb 27 17:43:24 crc kubenswrapper[4830]: I0227 17:43:24.160293 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="def86847-3e92-471b-bcc2-f74e9bbc81d7" containerName="nova-metadata-metadata" containerID="cri-o://bbbe93c71e52498bfba29579a1a1b07039bcd1f784ba34a4b356cdde581b0498" gracePeriod=30 Feb 27 17:43:24 crc kubenswrapper[4830]: I0227 17:43:24.632466 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 27 17:43:24 crc kubenswrapper[4830]: I0227 17:43:24.723524 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 27 17:43:24 crc kubenswrapper[4830]: I0227 17:43:24.790557 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-94rqx\" (UniqueName: \"kubernetes.io/projected/f130a868-eb17-446f-ae5a-91383cdbc74f-kube-api-access-94rqx\") pod \"f130a868-eb17-446f-ae5a-91383cdbc74f\" (UID: \"f130a868-eb17-446f-ae5a-91383cdbc74f\") " Feb 27 17:43:24 crc kubenswrapper[4830]: I0227 17:43:24.790611 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f130a868-eb17-446f-ae5a-91383cdbc74f-combined-ca-bundle\") pod \"f130a868-eb17-446f-ae5a-91383cdbc74f\" (UID: \"f130a868-eb17-446f-ae5a-91383cdbc74f\") " Feb 27 17:43:24 crc kubenswrapper[4830]: I0227 17:43:24.790863 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/def86847-3e92-471b-bcc2-f74e9bbc81d7-combined-ca-bundle\") pod \"def86847-3e92-471b-bcc2-f74e9bbc81d7\" (UID: \"def86847-3e92-471b-bcc2-f74e9bbc81d7\") " Feb 27 17:43:24 crc kubenswrapper[4830]: I0227 17:43:24.790919 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f130a868-eb17-446f-ae5a-91383cdbc74f-config-data\") pod \"f130a868-eb17-446f-ae5a-91383cdbc74f\" (UID: \"f130a868-eb17-446f-ae5a-91383cdbc74f\") " Feb 27 17:43:24 crc kubenswrapper[4830]: I0227 17:43:24.791015 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/def86847-3e92-471b-bcc2-f74e9bbc81d7-logs\") pod \"def86847-3e92-471b-bcc2-f74e9bbc81d7\" (UID: \"def86847-3e92-471b-bcc2-f74e9bbc81d7\") " Feb 27 17:43:24 crc kubenswrapper[4830]: I0227 17:43:24.791147 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/f130a868-eb17-446f-ae5a-91383cdbc74f-logs\") pod \"f130a868-eb17-446f-ae5a-91383cdbc74f\" (UID: \"f130a868-eb17-446f-ae5a-91383cdbc74f\") " Feb 27 17:43:24 crc kubenswrapper[4830]: I0227 17:43:24.793681 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f130a868-eb17-446f-ae5a-91383cdbc74f-logs" (OuterVolumeSpecName: "logs") pod "f130a868-eb17-446f-ae5a-91383cdbc74f" (UID: "f130a868-eb17-446f-ae5a-91383cdbc74f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 17:43:24 crc kubenswrapper[4830]: I0227 17:43:24.794021 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/def86847-3e92-471b-bcc2-f74e9bbc81d7-logs" (OuterVolumeSpecName: "logs") pod "def86847-3e92-471b-bcc2-f74e9bbc81d7" (UID: "def86847-3e92-471b-bcc2-f74e9bbc81d7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 17:43:24 crc kubenswrapper[4830]: I0227 17:43:24.797881 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f130a868-eb17-446f-ae5a-91383cdbc74f-kube-api-access-94rqx" (OuterVolumeSpecName: "kube-api-access-94rqx") pod "f130a868-eb17-446f-ae5a-91383cdbc74f" (UID: "f130a868-eb17-446f-ae5a-91383cdbc74f"). InnerVolumeSpecName "kube-api-access-94rqx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:43:24 crc kubenswrapper[4830]: I0227 17:43:24.818227 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f130a868-eb17-446f-ae5a-91383cdbc74f-config-data" (OuterVolumeSpecName: "config-data") pod "f130a868-eb17-446f-ae5a-91383cdbc74f" (UID: "f130a868-eb17-446f-ae5a-91383cdbc74f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:43:24 crc kubenswrapper[4830]: I0227 17:43:24.821008 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/def86847-3e92-471b-bcc2-f74e9bbc81d7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "def86847-3e92-471b-bcc2-f74e9bbc81d7" (UID: "def86847-3e92-471b-bcc2-f74e9bbc81d7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:43:24 crc kubenswrapper[4830]: I0227 17:43:24.824058 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f130a868-eb17-446f-ae5a-91383cdbc74f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f130a868-eb17-446f-ae5a-91383cdbc74f" (UID: "f130a868-eb17-446f-ae5a-91383cdbc74f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:43:24 crc kubenswrapper[4830]: I0227 17:43:24.888121 4830 generic.go:334] "Generic (PLEG): container finished" podID="f130a868-eb17-446f-ae5a-91383cdbc74f" containerID="57181ff001ac4e88ddd86529a637c3bf16bd14faf7307f3331a45a0542ad98b0" exitCode=0 Feb 27 17:43:24 crc kubenswrapper[4830]: I0227 17:43:24.888177 4830 generic.go:334] "Generic (PLEG): container finished" podID="f130a868-eb17-446f-ae5a-91383cdbc74f" containerID="a279dbdc2e4988fecd11fa384d473e1f5526bcce86435cf0b9585b095ddadf7b" exitCode=143 Feb 27 17:43:24 crc kubenswrapper[4830]: I0227 17:43:24.888178 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 27 17:43:24 crc kubenswrapper[4830]: I0227 17:43:24.888261 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f130a868-eb17-446f-ae5a-91383cdbc74f","Type":"ContainerDied","Data":"57181ff001ac4e88ddd86529a637c3bf16bd14faf7307f3331a45a0542ad98b0"} Feb 27 17:43:24 crc kubenswrapper[4830]: I0227 17:43:24.888310 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f130a868-eb17-446f-ae5a-91383cdbc74f","Type":"ContainerDied","Data":"a279dbdc2e4988fecd11fa384d473e1f5526bcce86435cf0b9585b095ddadf7b"} Feb 27 17:43:24 crc kubenswrapper[4830]: I0227 17:43:24.888328 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f130a868-eb17-446f-ae5a-91383cdbc74f","Type":"ContainerDied","Data":"0aedd4135edbec81c619bba59424b72933348430a595296d22afb79e16e5df0e"} Feb 27 17:43:24 crc kubenswrapper[4830]: I0227 17:43:24.888356 4830 scope.go:117] "RemoveContainer" containerID="57181ff001ac4e88ddd86529a637c3bf16bd14faf7307f3331a45a0542ad98b0" Feb 27 17:43:24 crc kubenswrapper[4830]: I0227 17:43:24.892335 4830 generic.go:334] "Generic (PLEG): container finished" podID="def86847-3e92-471b-bcc2-f74e9bbc81d7" containerID="bbbe93c71e52498bfba29579a1a1b07039bcd1f784ba34a4b356cdde581b0498" exitCode=0 Feb 27 17:43:24 crc kubenswrapper[4830]: I0227 17:43:24.892395 4830 generic.go:334] "Generic (PLEG): container finished" podID="def86847-3e92-471b-bcc2-f74e9bbc81d7" containerID="4d3cd6e345b3f9541069fd7136ad641811ed65127c378e8f3c5dda7092c435ee" exitCode=143 Feb 27 17:43:24 crc kubenswrapper[4830]: I0227 17:43:24.892719 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/def86847-3e92-471b-bcc2-f74e9bbc81d7-config-data\") pod \"def86847-3e92-471b-bcc2-f74e9bbc81d7\" (UID: \"def86847-3e92-471b-bcc2-f74e9bbc81d7\") " Feb 27 17:43:24 crc 
kubenswrapper[4830]: I0227 17:43:24.892868 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jshrb\" (UniqueName: \"kubernetes.io/projected/def86847-3e92-471b-bcc2-f74e9bbc81d7-kube-api-access-jshrb\") pod \"def86847-3e92-471b-bcc2-f74e9bbc81d7\" (UID: \"def86847-3e92-471b-bcc2-f74e9bbc81d7\") " Feb 27 17:43:24 crc kubenswrapper[4830]: I0227 17:43:24.893647 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 27 17:43:24 crc kubenswrapper[4830]: I0227 17:43:24.893813 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"def86847-3e92-471b-bcc2-f74e9bbc81d7","Type":"ContainerDied","Data":"bbbe93c71e52498bfba29579a1a1b07039bcd1f784ba34a4b356cdde581b0498"} Feb 27 17:43:24 crc kubenswrapper[4830]: I0227 17:43:24.893861 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"def86847-3e92-471b-bcc2-f74e9bbc81d7","Type":"ContainerDied","Data":"4d3cd6e345b3f9541069fd7136ad641811ed65127c378e8f3c5dda7092c435ee"} Feb 27 17:43:24 crc kubenswrapper[4830]: I0227 17:43:24.893884 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"def86847-3e92-471b-bcc2-f74e9bbc81d7","Type":"ContainerDied","Data":"b81b6035f605f6f2ca5e9813c836025b1b319958eb6be9969f7ff0d5e3196ce1"} Feb 27 17:43:24 crc kubenswrapper[4830]: I0227 17:43:24.894544 4830 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f130a868-eb17-446f-ae5a-91383cdbc74f-logs\") on node \"crc\" DevicePath \"\"" Feb 27 17:43:24 crc kubenswrapper[4830]: I0227 17:43:24.894690 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-94rqx\" (UniqueName: \"kubernetes.io/projected/f130a868-eb17-446f-ae5a-91383cdbc74f-kube-api-access-94rqx\") on node \"crc\" DevicePath \"\"" Feb 27 17:43:24 crc kubenswrapper[4830]: I0227 
17:43:24.894718 4830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f130a868-eb17-446f-ae5a-91383cdbc74f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 17:43:24 crc kubenswrapper[4830]: I0227 17:43:24.894733 4830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/def86847-3e92-471b-bcc2-f74e9bbc81d7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 17:43:24 crc kubenswrapper[4830]: I0227 17:43:24.894747 4830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f130a868-eb17-446f-ae5a-91383cdbc74f-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 17:43:24 crc kubenswrapper[4830]: I0227 17:43:24.894762 4830 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/def86847-3e92-471b-bcc2-f74e9bbc81d7-logs\") on node \"crc\" DevicePath \"\"" Feb 27 17:43:24 crc kubenswrapper[4830]: I0227 17:43:24.900470 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/def86847-3e92-471b-bcc2-f74e9bbc81d7-kube-api-access-jshrb" (OuterVolumeSpecName: "kube-api-access-jshrb") pod "def86847-3e92-471b-bcc2-f74e9bbc81d7" (UID: "def86847-3e92-471b-bcc2-f74e9bbc81d7"). InnerVolumeSpecName "kube-api-access-jshrb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:43:24 crc kubenswrapper[4830]: I0227 17:43:24.925157 4830 scope.go:117] "RemoveContainer" containerID="a279dbdc2e4988fecd11fa384d473e1f5526bcce86435cf0b9585b095ddadf7b" Feb 27 17:43:24 crc kubenswrapper[4830]: I0227 17:43:24.932039 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/def86847-3e92-471b-bcc2-f74e9bbc81d7-config-data" (OuterVolumeSpecName: "config-data") pod "def86847-3e92-471b-bcc2-f74e9bbc81d7" (UID: "def86847-3e92-471b-bcc2-f74e9bbc81d7"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:43:24 crc kubenswrapper[4830]: I0227 17:43:24.952413 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 27 17:43:24 crc kubenswrapper[4830]: I0227 17:43:24.973497 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 27 17:43:24 crc kubenswrapper[4830]: I0227 17:43:24.992300 4830 scope.go:117] "RemoveContainer" containerID="57181ff001ac4e88ddd86529a637c3bf16bd14faf7307f3331a45a0542ad98b0" Feb 27 17:43:24 crc kubenswrapper[4830]: E0227 17:43:24.992869 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"57181ff001ac4e88ddd86529a637c3bf16bd14faf7307f3331a45a0542ad98b0\": container with ID starting with 57181ff001ac4e88ddd86529a637c3bf16bd14faf7307f3331a45a0542ad98b0 not found: ID does not exist" containerID="57181ff001ac4e88ddd86529a637c3bf16bd14faf7307f3331a45a0542ad98b0" Feb 27 17:43:24 crc kubenswrapper[4830]: I0227 17:43:24.992923 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57181ff001ac4e88ddd86529a637c3bf16bd14faf7307f3331a45a0542ad98b0"} err="failed to get container status \"57181ff001ac4e88ddd86529a637c3bf16bd14faf7307f3331a45a0542ad98b0\": rpc error: code = NotFound desc = could not find container \"57181ff001ac4e88ddd86529a637c3bf16bd14faf7307f3331a45a0542ad98b0\": container with ID starting with 57181ff001ac4e88ddd86529a637c3bf16bd14faf7307f3331a45a0542ad98b0 not found: ID does not exist" Feb 27 17:43:24 crc kubenswrapper[4830]: I0227 17:43:24.992971 4830 scope.go:117] "RemoveContainer" containerID="a279dbdc2e4988fecd11fa384d473e1f5526bcce86435cf0b9585b095ddadf7b" Feb 27 17:43:24 crc kubenswrapper[4830]: E0227 17:43:24.993312 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"a279dbdc2e4988fecd11fa384d473e1f5526bcce86435cf0b9585b095ddadf7b\": container with ID starting with a279dbdc2e4988fecd11fa384d473e1f5526bcce86435cf0b9585b095ddadf7b not found: ID does not exist" containerID="a279dbdc2e4988fecd11fa384d473e1f5526bcce86435cf0b9585b095ddadf7b" Feb 27 17:43:24 crc kubenswrapper[4830]: I0227 17:43:24.993356 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a279dbdc2e4988fecd11fa384d473e1f5526bcce86435cf0b9585b095ddadf7b"} err="failed to get container status \"a279dbdc2e4988fecd11fa384d473e1f5526bcce86435cf0b9585b095ddadf7b\": rpc error: code = NotFound desc = could not find container \"a279dbdc2e4988fecd11fa384d473e1f5526bcce86435cf0b9585b095ddadf7b\": container with ID starting with a279dbdc2e4988fecd11fa384d473e1f5526bcce86435cf0b9585b095ddadf7b not found: ID does not exist" Feb 27 17:43:24 crc kubenswrapper[4830]: I0227 17:43:24.993386 4830 scope.go:117] "RemoveContainer" containerID="57181ff001ac4e88ddd86529a637c3bf16bd14faf7307f3331a45a0542ad98b0" Feb 27 17:43:24 crc kubenswrapper[4830]: I0227 17:43:24.993700 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57181ff001ac4e88ddd86529a637c3bf16bd14faf7307f3331a45a0542ad98b0"} err="failed to get container status \"57181ff001ac4e88ddd86529a637c3bf16bd14faf7307f3331a45a0542ad98b0\": rpc error: code = NotFound desc = could not find container \"57181ff001ac4e88ddd86529a637c3bf16bd14faf7307f3331a45a0542ad98b0\": container with ID starting with 57181ff001ac4e88ddd86529a637c3bf16bd14faf7307f3331a45a0542ad98b0 not found: ID does not exist" Feb 27 17:43:24 crc kubenswrapper[4830]: I0227 17:43:24.993762 4830 scope.go:117] "RemoveContainer" containerID="a279dbdc2e4988fecd11fa384d473e1f5526bcce86435cf0b9585b095ddadf7b" Feb 27 17:43:24 crc kubenswrapper[4830]: I0227 17:43:24.994200 4830 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"a279dbdc2e4988fecd11fa384d473e1f5526bcce86435cf0b9585b095ddadf7b"} err="failed to get container status \"a279dbdc2e4988fecd11fa384d473e1f5526bcce86435cf0b9585b095ddadf7b\": rpc error: code = NotFound desc = could not find container \"a279dbdc2e4988fecd11fa384d473e1f5526bcce86435cf0b9585b095ddadf7b\": container with ID starting with a279dbdc2e4988fecd11fa384d473e1f5526bcce86435cf0b9585b095ddadf7b not found: ID does not exist" Feb 27 17:43:24 crc kubenswrapper[4830]: I0227 17:43:24.994270 4830 scope.go:117] "RemoveContainer" containerID="bbbe93c71e52498bfba29579a1a1b07039bcd1f784ba34a4b356cdde581b0498" Feb 27 17:43:24 crc kubenswrapper[4830]: I0227 17:43:24.996812 4830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/def86847-3e92-471b-bcc2-f74e9bbc81d7-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 17:43:24 crc kubenswrapper[4830]: I0227 17:43:24.996844 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jshrb\" (UniqueName: \"kubernetes.io/projected/def86847-3e92-471b-bcc2-f74e9bbc81d7-kube-api-access-jshrb\") on node \"crc\" DevicePath \"\"" Feb 27 17:43:25 crc kubenswrapper[4830]: I0227 17:43:25.000041 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 27 17:43:25 crc kubenswrapper[4830]: E0227 17:43:25.000599 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f130a868-eb17-446f-ae5a-91383cdbc74f" containerName="nova-api-log" Feb 27 17:43:25 crc kubenswrapper[4830]: I0227 17:43:25.000625 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="f130a868-eb17-446f-ae5a-91383cdbc74f" containerName="nova-api-log" Feb 27 17:43:25 crc kubenswrapper[4830]: E0227 17:43:25.000658 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="def86847-3e92-471b-bcc2-f74e9bbc81d7" containerName="nova-metadata-log" Feb 27 17:43:25 crc kubenswrapper[4830]: I0227 17:43:25.000669 4830 
state_mem.go:107] "Deleted CPUSet assignment" podUID="def86847-3e92-471b-bcc2-f74e9bbc81d7" containerName="nova-metadata-log" Feb 27 17:43:25 crc kubenswrapper[4830]: E0227 17:43:25.000689 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61ef5006-416b-43e0-a9f1-7b69382403be" containerName="nova-manage" Feb 27 17:43:25 crc kubenswrapper[4830]: I0227 17:43:25.000700 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="61ef5006-416b-43e0-a9f1-7b69382403be" containerName="nova-manage" Feb 27 17:43:25 crc kubenswrapper[4830]: E0227 17:43:25.000723 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f130a868-eb17-446f-ae5a-91383cdbc74f" containerName="nova-api-api" Feb 27 17:43:25 crc kubenswrapper[4830]: I0227 17:43:25.000732 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="f130a868-eb17-446f-ae5a-91383cdbc74f" containerName="nova-api-api" Feb 27 17:43:25 crc kubenswrapper[4830]: E0227 17:43:25.000750 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="def86847-3e92-471b-bcc2-f74e9bbc81d7" containerName="nova-metadata-metadata" Feb 27 17:43:25 crc kubenswrapper[4830]: I0227 17:43:25.000759 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="def86847-3e92-471b-bcc2-f74e9bbc81d7" containerName="nova-metadata-metadata" Feb 27 17:43:25 crc kubenswrapper[4830]: I0227 17:43:25.000993 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="f130a868-eb17-446f-ae5a-91383cdbc74f" containerName="nova-api-log" Feb 27 17:43:25 crc kubenswrapper[4830]: I0227 17:43:25.001013 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="f130a868-eb17-446f-ae5a-91383cdbc74f" containerName="nova-api-api" Feb 27 17:43:25 crc kubenswrapper[4830]: I0227 17:43:25.001029 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="def86847-3e92-471b-bcc2-f74e9bbc81d7" containerName="nova-metadata-log" Feb 27 17:43:25 crc kubenswrapper[4830]: I0227 17:43:25.001047 4830 
memory_manager.go:354] "RemoveStaleState removing state" podUID="def86847-3e92-471b-bcc2-f74e9bbc81d7" containerName="nova-metadata-metadata" Feb 27 17:43:25 crc kubenswrapper[4830]: I0227 17:43:25.001063 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="61ef5006-416b-43e0-a9f1-7b69382403be" containerName="nova-manage" Feb 27 17:43:25 crc kubenswrapper[4830]: I0227 17:43:25.002373 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 27 17:43:25 crc kubenswrapper[4830]: I0227 17:43:25.006764 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 27 17:43:25 crc kubenswrapper[4830]: I0227 17:43:25.015145 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 27 17:43:25 crc kubenswrapper[4830]: I0227 17:43:25.047109 4830 scope.go:117] "RemoveContainer" containerID="4d3cd6e345b3f9541069fd7136ad641811ed65127c378e8f3c5dda7092c435ee" Feb 27 17:43:25 crc kubenswrapper[4830]: I0227 17:43:25.086918 4830 scope.go:117] "RemoveContainer" containerID="bbbe93c71e52498bfba29579a1a1b07039bcd1f784ba34a4b356cdde581b0498" Feb 27 17:43:25 crc kubenswrapper[4830]: E0227 17:43:25.089791 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bbbe93c71e52498bfba29579a1a1b07039bcd1f784ba34a4b356cdde581b0498\": container with ID starting with bbbe93c71e52498bfba29579a1a1b07039bcd1f784ba34a4b356cdde581b0498 not found: ID does not exist" containerID="bbbe93c71e52498bfba29579a1a1b07039bcd1f784ba34a4b356cdde581b0498" Feb 27 17:43:25 crc kubenswrapper[4830]: I0227 17:43:25.089825 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bbbe93c71e52498bfba29579a1a1b07039bcd1f784ba34a4b356cdde581b0498"} err="failed to get container status \"bbbe93c71e52498bfba29579a1a1b07039bcd1f784ba34a4b356cdde581b0498\": rpc error: code = 
NotFound desc = could not find container \"bbbe93c71e52498bfba29579a1a1b07039bcd1f784ba34a4b356cdde581b0498\": container with ID starting with bbbe93c71e52498bfba29579a1a1b07039bcd1f784ba34a4b356cdde581b0498 not found: ID does not exist" Feb 27 17:43:25 crc kubenswrapper[4830]: I0227 17:43:25.089848 4830 scope.go:117] "RemoveContainer" containerID="4d3cd6e345b3f9541069fd7136ad641811ed65127c378e8f3c5dda7092c435ee" Feb 27 17:43:25 crc kubenswrapper[4830]: E0227 17:43:25.093029 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d3cd6e345b3f9541069fd7136ad641811ed65127c378e8f3c5dda7092c435ee\": container with ID starting with 4d3cd6e345b3f9541069fd7136ad641811ed65127c378e8f3c5dda7092c435ee not found: ID does not exist" containerID="4d3cd6e345b3f9541069fd7136ad641811ed65127c378e8f3c5dda7092c435ee" Feb 27 17:43:25 crc kubenswrapper[4830]: I0227 17:43:25.093057 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d3cd6e345b3f9541069fd7136ad641811ed65127c378e8f3c5dda7092c435ee"} err="failed to get container status \"4d3cd6e345b3f9541069fd7136ad641811ed65127c378e8f3c5dda7092c435ee\": rpc error: code = NotFound desc = could not find container \"4d3cd6e345b3f9541069fd7136ad641811ed65127c378e8f3c5dda7092c435ee\": container with ID starting with 4d3cd6e345b3f9541069fd7136ad641811ed65127c378e8f3c5dda7092c435ee not found: ID does not exist" Feb 27 17:43:25 crc kubenswrapper[4830]: I0227 17:43:25.093074 4830 scope.go:117] "RemoveContainer" containerID="bbbe93c71e52498bfba29579a1a1b07039bcd1f784ba34a4b356cdde581b0498" Feb 27 17:43:25 crc kubenswrapper[4830]: I0227 17:43:25.098889 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bbbe93c71e52498bfba29579a1a1b07039bcd1f784ba34a4b356cdde581b0498"} err="failed to get container status \"bbbe93c71e52498bfba29579a1a1b07039bcd1f784ba34a4b356cdde581b0498\": rpc 
error: code = NotFound desc = could not find container \"bbbe93c71e52498bfba29579a1a1b07039bcd1f784ba34a4b356cdde581b0498\": container with ID starting with bbbe93c71e52498bfba29579a1a1b07039bcd1f784ba34a4b356cdde581b0498 not found: ID does not exist" Feb 27 17:43:25 crc kubenswrapper[4830]: I0227 17:43:25.098957 4830 scope.go:117] "RemoveContainer" containerID="4d3cd6e345b3f9541069fd7136ad641811ed65127c378e8f3c5dda7092c435ee" Feb 27 17:43:25 crc kubenswrapper[4830]: I0227 17:43:25.099831 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d3cd6e345b3f9541069fd7136ad641811ed65127c378e8f3c5dda7092c435ee"} err="failed to get container status \"4d3cd6e345b3f9541069fd7136ad641811ed65127c378e8f3c5dda7092c435ee\": rpc error: code = NotFound desc = could not find container \"4d3cd6e345b3f9541069fd7136ad641811ed65127c378e8f3c5dda7092c435ee\": container with ID starting with 4d3cd6e345b3f9541069fd7136ad641811ed65127c378e8f3c5dda7092c435ee not found: ID does not exist" Feb 27 17:43:25 crc kubenswrapper[4830]: I0227 17:43:25.201204 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hsjj\" (UniqueName: \"kubernetes.io/projected/8aec15ae-952f-4209-9b96-bd90f7e16b44-kube-api-access-8hsjj\") pod \"nova-api-0\" (UID: \"8aec15ae-952f-4209-9b96-bd90f7e16b44\") " pod="openstack/nova-api-0" Feb 27 17:43:25 crc kubenswrapper[4830]: I0227 17:43:25.202176 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8aec15ae-952f-4209-9b96-bd90f7e16b44-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8aec15ae-952f-4209-9b96-bd90f7e16b44\") " pod="openstack/nova-api-0" Feb 27 17:43:25 crc kubenswrapper[4830]: I0227 17:43:25.202282 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/8aec15ae-952f-4209-9b96-bd90f7e16b44-logs\") pod \"nova-api-0\" (UID: \"8aec15ae-952f-4209-9b96-bd90f7e16b44\") " pod="openstack/nova-api-0" Feb 27 17:43:25 crc kubenswrapper[4830]: I0227 17:43:25.202340 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8aec15ae-952f-4209-9b96-bd90f7e16b44-config-data\") pod \"nova-api-0\" (UID: \"8aec15ae-952f-4209-9b96-bd90f7e16b44\") " pod="openstack/nova-api-0" Feb 27 17:43:25 crc kubenswrapper[4830]: I0227 17:43:25.235183 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 27 17:43:25 crc kubenswrapper[4830]: I0227 17:43:25.238476 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Feb 27 17:43:25 crc kubenswrapper[4830]: I0227 17:43:25.242966 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 27 17:43:25 crc kubenswrapper[4830]: I0227 17:43:25.249769 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Feb 27 17:43:25 crc kubenswrapper[4830]: I0227 17:43:25.290096 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-57fd8cfc4f-fkfd5" Feb 27 17:43:25 crc kubenswrapper[4830]: I0227 17:43:25.300324 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 27 17:43:25 crc kubenswrapper[4830]: I0227 17:43:25.305432 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 27 17:43:25 crc kubenswrapper[4830]: I0227 17:43:25.311725 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8aec15ae-952f-4209-9b96-bd90f7e16b44-config-data\") pod \"nova-api-0\" (UID: \"8aec15ae-952f-4209-9b96-bd90f7e16b44\") " pod="openstack/nova-api-0" Feb 27 17:43:25 crc kubenswrapper[4830]: I0227 17:43:25.312962 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hsjj\" (UniqueName: \"kubernetes.io/projected/8aec15ae-952f-4209-9b96-bd90f7e16b44-kube-api-access-8hsjj\") pod \"nova-api-0\" (UID: \"8aec15ae-952f-4209-9b96-bd90f7e16b44\") " pod="openstack/nova-api-0" Feb 27 17:43:25 crc kubenswrapper[4830]: I0227 17:43:25.313265 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8aec15ae-952f-4209-9b96-bd90f7e16b44-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8aec15ae-952f-4209-9b96-bd90f7e16b44\") " pod="openstack/nova-api-0" Feb 27 17:43:25 crc kubenswrapper[4830]: I0227 17:43:25.313377 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8aec15ae-952f-4209-9b96-bd90f7e16b44-logs\") pod \"nova-api-0\" (UID: \"8aec15ae-952f-4209-9b96-bd90f7e16b44\") " pod="openstack/nova-api-0" Feb 27 17:43:25 crc kubenswrapper[4830]: I0227 17:43:25.315385 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 27 17:43:25 crc kubenswrapper[4830]: I0227 17:43:25.319195 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8aec15ae-952f-4209-9b96-bd90f7e16b44-logs\") pod \"nova-api-0\" (UID: \"8aec15ae-952f-4209-9b96-bd90f7e16b44\") " pod="openstack/nova-api-0" Feb 27 17:43:25 crc kubenswrapper[4830]: 
I0227 17:43:25.322126 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8aec15ae-952f-4209-9b96-bd90f7e16b44-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8aec15ae-952f-4209-9b96-bd90f7e16b44\") " pod="openstack/nova-api-0" Feb 27 17:43:25 crc kubenswrapper[4830]: I0227 17:43:25.362767 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 27 17:43:25 crc kubenswrapper[4830]: I0227 17:43:25.363889 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8aec15ae-952f-4209-9b96-bd90f7e16b44-config-data\") pod \"nova-api-0\" (UID: \"8aec15ae-952f-4209-9b96-bd90f7e16b44\") " pod="openstack/nova-api-0" Feb 27 17:43:25 crc kubenswrapper[4830]: I0227 17:43:25.385488 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hsjj\" (UniqueName: \"kubernetes.io/projected/8aec15ae-952f-4209-9b96-bd90f7e16b44-kube-api-access-8hsjj\") pod \"nova-api-0\" (UID: \"8aec15ae-952f-4209-9b96-bd90f7e16b44\") " pod="openstack/nova-api-0" Feb 27 17:43:25 crc kubenswrapper[4830]: I0227 17:43:25.418100 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/610822bd-ca63-45d2-9a7e-c9dd6a5068e9-config-data\") pod \"nova-metadata-0\" (UID: \"610822bd-ca63-45d2-9a7e-c9dd6a5068e9\") " pod="openstack/nova-metadata-0" Feb 27 17:43:25 crc kubenswrapper[4830]: I0227 17:43:25.418676 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/610822bd-ca63-45d2-9a7e-c9dd6a5068e9-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"610822bd-ca63-45d2-9a7e-c9dd6a5068e9\") " pod="openstack/nova-metadata-0" Feb 27 17:43:25 crc kubenswrapper[4830]: I0227 17:43:25.418748 4830 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftmq6\" (UniqueName: \"kubernetes.io/projected/610822bd-ca63-45d2-9a7e-c9dd6a5068e9-kube-api-access-ftmq6\") pod \"nova-metadata-0\" (UID: \"610822bd-ca63-45d2-9a7e-c9dd6a5068e9\") " pod="openstack/nova-metadata-0" Feb 27 17:43:25 crc kubenswrapper[4830]: I0227 17:43:25.418772 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/610822bd-ca63-45d2-9a7e-c9dd6a5068e9-logs\") pod \"nova-metadata-0\" (UID: \"610822bd-ca63-45d2-9a7e-c9dd6a5068e9\") " pod="openstack/nova-metadata-0" Feb 27 17:43:25 crc kubenswrapper[4830]: I0227 17:43:25.441992 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7754d54f49-mb84v"] Feb 27 17:43:25 crc kubenswrapper[4830]: I0227 17:43:25.454356 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7754d54f49-mb84v" podUID="601dd6ff-d00f-445a-a010-0f02a2865504" containerName="dnsmasq-dns" containerID="cri-o://365ba8e03a0a1032f6b453885d2c4c289a3ed3775ef2719f6bd4b98407e30716" gracePeriod=10 Feb 27 17:43:25 crc kubenswrapper[4830]: I0227 17:43:25.521203 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/610822bd-ca63-45d2-9a7e-c9dd6a5068e9-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"610822bd-ca63-45d2-9a7e-c9dd6a5068e9\") " pod="openstack/nova-metadata-0" Feb 27 17:43:25 crc kubenswrapper[4830]: I0227 17:43:25.521323 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ftmq6\" (UniqueName: \"kubernetes.io/projected/610822bd-ca63-45d2-9a7e-c9dd6a5068e9-kube-api-access-ftmq6\") pod \"nova-metadata-0\" (UID: \"610822bd-ca63-45d2-9a7e-c9dd6a5068e9\") " pod="openstack/nova-metadata-0" Feb 27 17:43:25 crc kubenswrapper[4830]: 
I0227 17:43:25.521355 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/610822bd-ca63-45d2-9a7e-c9dd6a5068e9-logs\") pod \"nova-metadata-0\" (UID: \"610822bd-ca63-45d2-9a7e-c9dd6a5068e9\") " pod="openstack/nova-metadata-0" Feb 27 17:43:25 crc kubenswrapper[4830]: I0227 17:43:25.521404 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/610822bd-ca63-45d2-9a7e-c9dd6a5068e9-config-data\") pod \"nova-metadata-0\" (UID: \"610822bd-ca63-45d2-9a7e-c9dd6a5068e9\") " pod="openstack/nova-metadata-0" Feb 27 17:43:25 crc kubenswrapper[4830]: I0227 17:43:25.522856 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/610822bd-ca63-45d2-9a7e-c9dd6a5068e9-logs\") pod \"nova-metadata-0\" (UID: \"610822bd-ca63-45d2-9a7e-c9dd6a5068e9\") " pod="openstack/nova-metadata-0" Feb 27 17:43:25 crc kubenswrapper[4830]: I0227 17:43:25.525150 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/610822bd-ca63-45d2-9a7e-c9dd6a5068e9-config-data\") pod \"nova-metadata-0\" (UID: \"610822bd-ca63-45d2-9a7e-c9dd6a5068e9\") " pod="openstack/nova-metadata-0" Feb 27 17:43:25 crc kubenswrapper[4830]: I0227 17:43:25.525609 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/610822bd-ca63-45d2-9a7e-c9dd6a5068e9-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"610822bd-ca63-45d2-9a7e-c9dd6a5068e9\") " pod="openstack/nova-metadata-0" Feb 27 17:43:25 crc kubenswrapper[4830]: I0227 17:43:25.542504 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ftmq6\" (UniqueName: \"kubernetes.io/projected/610822bd-ca63-45d2-9a7e-c9dd6a5068e9-kube-api-access-ftmq6\") pod \"nova-metadata-0\" (UID: 
\"610822bd-ca63-45d2-9a7e-c9dd6a5068e9\") " pod="openstack/nova-metadata-0" Feb 27 17:43:25 crc kubenswrapper[4830]: I0227 17:43:25.639204 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 27 17:43:25 crc kubenswrapper[4830]: I0227 17:43:25.763362 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 27 17:43:25 crc kubenswrapper[4830]: I0227 17:43:25.880350 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7754d54f49-mb84v" Feb 27 17:43:25 crc kubenswrapper[4830]: I0227 17:43:25.941584 4830 generic.go:334] "Generic (PLEG): container finished" podID="601dd6ff-d00f-445a-a010-0f02a2865504" containerID="365ba8e03a0a1032f6b453885d2c4c289a3ed3775ef2719f6bd4b98407e30716" exitCode=0 Feb 27 17:43:25 crc kubenswrapper[4830]: I0227 17:43:25.941784 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7754d54f49-mb84v" Feb 27 17:43:25 crc kubenswrapper[4830]: I0227 17:43:25.941889 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7754d54f49-mb84v" event={"ID":"601dd6ff-d00f-445a-a010-0f02a2865504","Type":"ContainerDied","Data":"365ba8e03a0a1032f6b453885d2c4c289a3ed3775ef2719f6bd4b98407e30716"} Feb 27 17:43:25 crc kubenswrapper[4830]: I0227 17:43:25.941937 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7754d54f49-mb84v" event={"ID":"601dd6ff-d00f-445a-a010-0f02a2865504","Type":"ContainerDied","Data":"aa73f85ed04168e3f05f9b8fd6ae245476c8186a096e4f8fce96c8a575c28557"} Feb 27 17:43:25 crc kubenswrapper[4830]: I0227 17:43:25.941980 4830 scope.go:117] "RemoveContainer" containerID="365ba8e03a0a1032f6b453885d2c4c289a3ed3775ef2719f6bd4b98407e30716" Feb 27 17:43:25 crc kubenswrapper[4830]: I0227 17:43:25.956420 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/nova-cell1-novncproxy-0" Feb 27 17:43:25 crc kubenswrapper[4830]: I0227 17:43:25.972147 4830 scope.go:117] "RemoveContainer" containerID="48f8da0811a5be819e5103e1d788aac8a6d8efe3e32c87f3081ada865670b870" Feb 27 17:43:26 crc kubenswrapper[4830]: I0227 17:43:26.011646 4830 scope.go:117] "RemoveContainer" containerID="365ba8e03a0a1032f6b453885d2c4c289a3ed3775ef2719f6bd4b98407e30716" Feb 27 17:43:26 crc kubenswrapper[4830]: E0227 17:43:26.019576 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"365ba8e03a0a1032f6b453885d2c4c289a3ed3775ef2719f6bd4b98407e30716\": container with ID starting with 365ba8e03a0a1032f6b453885d2c4c289a3ed3775ef2719f6bd4b98407e30716 not found: ID does not exist" containerID="365ba8e03a0a1032f6b453885d2c4c289a3ed3775ef2719f6bd4b98407e30716" Feb 27 17:43:26 crc kubenswrapper[4830]: I0227 17:43:26.019634 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"365ba8e03a0a1032f6b453885d2c4c289a3ed3775ef2719f6bd4b98407e30716"} err="failed to get container status \"365ba8e03a0a1032f6b453885d2c4c289a3ed3775ef2719f6bd4b98407e30716\": rpc error: code = NotFound desc = could not find container \"365ba8e03a0a1032f6b453885d2c4c289a3ed3775ef2719f6bd4b98407e30716\": container with ID starting with 365ba8e03a0a1032f6b453885d2c4c289a3ed3775ef2719f6bd4b98407e30716 not found: ID does not exist" Feb 27 17:43:26 crc kubenswrapper[4830]: I0227 17:43:26.019667 4830 scope.go:117] "RemoveContainer" containerID="48f8da0811a5be819e5103e1d788aac8a6d8efe3e32c87f3081ada865670b870" Feb 27 17:43:26 crc kubenswrapper[4830]: E0227 17:43:26.020069 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"48f8da0811a5be819e5103e1d788aac8a6d8efe3e32c87f3081ada865670b870\": container with ID starting with 48f8da0811a5be819e5103e1d788aac8a6d8efe3e32c87f3081ada865670b870 not 
found: ID does not exist" containerID="48f8da0811a5be819e5103e1d788aac8a6d8efe3e32c87f3081ada865670b870" Feb 27 17:43:26 crc kubenswrapper[4830]: I0227 17:43:26.020133 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48f8da0811a5be819e5103e1d788aac8a6d8efe3e32c87f3081ada865670b870"} err="failed to get container status \"48f8da0811a5be819e5103e1d788aac8a6d8efe3e32c87f3081ada865670b870\": rpc error: code = NotFound desc = could not find container \"48f8da0811a5be819e5103e1d788aac8a6d8efe3e32c87f3081ada865670b870\": container with ID starting with 48f8da0811a5be819e5103e1d788aac8a6d8efe3e32c87f3081ada865670b870 not found: ID does not exist" Feb 27 17:43:26 crc kubenswrapper[4830]: I0227 17:43:26.039766 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/601dd6ff-d00f-445a-a010-0f02a2865504-ovsdbserver-sb\") pod \"601dd6ff-d00f-445a-a010-0f02a2865504\" (UID: \"601dd6ff-d00f-445a-a010-0f02a2865504\") " Feb 27 17:43:26 crc kubenswrapper[4830]: I0227 17:43:26.039820 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-svzbm\" (UniqueName: \"kubernetes.io/projected/601dd6ff-d00f-445a-a010-0f02a2865504-kube-api-access-svzbm\") pod \"601dd6ff-d00f-445a-a010-0f02a2865504\" (UID: \"601dd6ff-d00f-445a-a010-0f02a2865504\") " Feb 27 17:43:26 crc kubenswrapper[4830]: I0227 17:43:26.039898 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/601dd6ff-d00f-445a-a010-0f02a2865504-ovsdbserver-nb\") pod \"601dd6ff-d00f-445a-a010-0f02a2865504\" (UID: \"601dd6ff-d00f-445a-a010-0f02a2865504\") " Feb 27 17:43:26 crc kubenswrapper[4830]: I0227 17:43:26.040049 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/601dd6ff-d00f-445a-a010-0f02a2865504-dns-svc\") pod \"601dd6ff-d00f-445a-a010-0f02a2865504\" (UID: \"601dd6ff-d00f-445a-a010-0f02a2865504\") " Feb 27 17:43:26 crc kubenswrapper[4830]: I0227 17:43:26.040142 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/601dd6ff-d00f-445a-a010-0f02a2865504-config\") pod \"601dd6ff-d00f-445a-a010-0f02a2865504\" (UID: \"601dd6ff-d00f-445a-a010-0f02a2865504\") " Feb 27 17:43:26 crc kubenswrapper[4830]: I0227 17:43:26.064369 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/601dd6ff-d00f-445a-a010-0f02a2865504-kube-api-access-svzbm" (OuterVolumeSpecName: "kube-api-access-svzbm") pod "601dd6ff-d00f-445a-a010-0f02a2865504" (UID: "601dd6ff-d00f-445a-a010-0f02a2865504"). InnerVolumeSpecName "kube-api-access-svzbm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:43:26 crc kubenswrapper[4830]: I0227 17:43:26.102207 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/601dd6ff-d00f-445a-a010-0f02a2865504-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "601dd6ff-d00f-445a-a010-0f02a2865504" (UID: "601dd6ff-d00f-445a-a010-0f02a2865504"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:43:26 crc kubenswrapper[4830]: I0227 17:43:26.104524 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/601dd6ff-d00f-445a-a010-0f02a2865504-config" (OuterVolumeSpecName: "config") pod "601dd6ff-d00f-445a-a010-0f02a2865504" (UID: "601dd6ff-d00f-445a-a010-0f02a2865504"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:43:26 crc kubenswrapper[4830]: I0227 17:43:26.108197 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/601dd6ff-d00f-445a-a010-0f02a2865504-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "601dd6ff-d00f-445a-a010-0f02a2865504" (UID: "601dd6ff-d00f-445a-a010-0f02a2865504"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:43:26 crc kubenswrapper[4830]: I0227 17:43:26.114216 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/601dd6ff-d00f-445a-a010-0f02a2865504-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "601dd6ff-d00f-445a-a010-0f02a2865504" (UID: "601dd6ff-d00f-445a-a010-0f02a2865504"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:43:26 crc kubenswrapper[4830]: I0227 17:43:26.145366 4830 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/601dd6ff-d00f-445a-a010-0f02a2865504-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 27 17:43:26 crc kubenswrapper[4830]: I0227 17:43:26.145397 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-svzbm\" (UniqueName: \"kubernetes.io/projected/601dd6ff-d00f-445a-a010-0f02a2865504-kube-api-access-svzbm\") on node \"crc\" DevicePath \"\"" Feb 27 17:43:26 crc kubenswrapper[4830]: I0227 17:43:26.145409 4830 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/601dd6ff-d00f-445a-a010-0f02a2865504-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 27 17:43:26 crc kubenswrapper[4830]: I0227 17:43:26.145420 4830 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/601dd6ff-d00f-445a-a010-0f02a2865504-dns-svc\") on node \"crc\" 
DevicePath \"\"" Feb 27 17:43:26 crc kubenswrapper[4830]: I0227 17:43:26.145430 4830 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/601dd6ff-d00f-445a-a010-0f02a2865504-config\") on node \"crc\" DevicePath \"\"" Feb 27 17:43:26 crc kubenswrapper[4830]: I0227 17:43:26.179154 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 27 17:43:26 crc kubenswrapper[4830]: I0227 17:43:26.299907 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7754d54f49-mb84v"] Feb 27 17:43:26 crc kubenswrapper[4830]: I0227 17:43:26.310156 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7754d54f49-mb84v"] Feb 27 17:43:26 crc kubenswrapper[4830]: I0227 17:43:26.338710 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 27 17:43:26 crc kubenswrapper[4830]: I0227 17:43:26.777157 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="601dd6ff-d00f-445a-a010-0f02a2865504" path="/var/lib/kubelet/pods/601dd6ff-d00f-445a-a010-0f02a2865504/volumes" Feb 27 17:43:26 crc kubenswrapper[4830]: I0227 17:43:26.778124 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="def86847-3e92-471b-bcc2-f74e9bbc81d7" path="/var/lib/kubelet/pods/def86847-3e92-471b-bcc2-f74e9bbc81d7/volumes" Feb 27 17:43:26 crc kubenswrapper[4830]: I0227 17:43:26.779033 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f130a868-eb17-446f-ae5a-91383cdbc74f" path="/var/lib/kubelet/pods/f130a868-eb17-446f-ae5a-91383cdbc74f/volumes" Feb 27 17:43:26 crc kubenswrapper[4830]: I0227 17:43:26.955405 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8aec15ae-952f-4209-9b96-bd90f7e16b44","Type":"ContainerStarted","Data":"e85f0ac8519365b549ae950038dae5a8ae1cae18a01629940514cbfdc49431bb"} Feb 27 17:43:26 crc kubenswrapper[4830]: I0227 
17:43:26.955861 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8aec15ae-952f-4209-9b96-bd90f7e16b44","Type":"ContainerStarted","Data":"117522c3b30cba765b6dd74111347985bbd0a2d347e8f01d93bb80eccd696e1c"} Feb 27 17:43:26 crc kubenswrapper[4830]: I0227 17:43:26.955877 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8aec15ae-952f-4209-9b96-bd90f7e16b44","Type":"ContainerStarted","Data":"b24ad9c9f2b9e15e802dc7a141cec604dfd74394fc0939cc8b3d74f8714c15e0"} Feb 27 17:43:26 crc kubenswrapper[4830]: I0227 17:43:26.958621 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"610822bd-ca63-45d2-9a7e-c9dd6a5068e9","Type":"ContainerStarted","Data":"438da0477ae1142648ec558d04441037b39c0d416032c2b5ce7d672078313955"} Feb 27 17:43:26 crc kubenswrapper[4830]: I0227 17:43:26.958677 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"610822bd-ca63-45d2-9a7e-c9dd6a5068e9","Type":"ContainerStarted","Data":"7e929a86a7665398771a3c70ce16950b9a81af7b338ddc722d5cc0186b454323"} Feb 27 17:43:26 crc kubenswrapper[4830]: I0227 17:43:26.958689 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"610822bd-ca63-45d2-9a7e-c9dd6a5068e9","Type":"ContainerStarted","Data":"dc98881443635cf3db94df9a0fa58283cbf03f28be2aadad60eb565038304e1c"} Feb 27 17:43:26 crc kubenswrapper[4830]: I0227 17:43:26.986860 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.986834248 podStartE2EDuration="2.986834248s" podCreationTimestamp="2026-02-27 17:43:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 17:43:26.985508256 +0000 UTC m=+5803.074780719" watchObservedRunningTime="2026-02-27 17:43:26.986834248 +0000 UTC 
m=+5803.076106711" Feb 27 17:43:27 crc kubenswrapper[4830]: I0227 17:43:27.027470 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.027450515 podStartE2EDuration="2.027450515s" podCreationTimestamp="2026-02-27 17:43:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 17:43:27.015578999 +0000 UTC m=+5803.104851462" watchObservedRunningTime="2026-02-27 17:43:27.027450515 +0000 UTC m=+5803.116722978" Feb 27 17:43:27 crc kubenswrapper[4830]: I0227 17:43:27.298759 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Feb 27 17:43:27 crc kubenswrapper[4830]: I0227 17:43:27.416117 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 27 17:43:27 crc kubenswrapper[4830]: I0227 17:43:27.571493 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gqqvl\" (UniqueName: \"kubernetes.io/projected/4cde5adc-f8ac-4c3a-b8e2-a192b62bedf7-kube-api-access-gqqvl\") pod \"4cde5adc-f8ac-4c3a-b8e2-a192b62bedf7\" (UID: \"4cde5adc-f8ac-4c3a-b8e2-a192b62bedf7\") " Feb 27 17:43:27 crc kubenswrapper[4830]: I0227 17:43:27.572055 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cde5adc-f8ac-4c3a-b8e2-a192b62bedf7-combined-ca-bundle\") pod \"4cde5adc-f8ac-4c3a-b8e2-a192b62bedf7\" (UID: \"4cde5adc-f8ac-4c3a-b8e2-a192b62bedf7\") " Feb 27 17:43:27 crc kubenswrapper[4830]: I0227 17:43:27.572197 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4cde5adc-f8ac-4c3a-b8e2-a192b62bedf7-config-data\") pod \"4cde5adc-f8ac-4c3a-b8e2-a192b62bedf7\" (UID: \"4cde5adc-f8ac-4c3a-b8e2-a192b62bedf7\") 
" Feb 27 17:43:27 crc kubenswrapper[4830]: I0227 17:43:27.579257 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4cde5adc-f8ac-4c3a-b8e2-a192b62bedf7-kube-api-access-gqqvl" (OuterVolumeSpecName: "kube-api-access-gqqvl") pod "4cde5adc-f8ac-4c3a-b8e2-a192b62bedf7" (UID: "4cde5adc-f8ac-4c3a-b8e2-a192b62bedf7"). InnerVolumeSpecName "kube-api-access-gqqvl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:43:27 crc kubenswrapper[4830]: I0227 17:43:27.614144 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4cde5adc-f8ac-4c3a-b8e2-a192b62bedf7-config-data" (OuterVolumeSpecName: "config-data") pod "4cde5adc-f8ac-4c3a-b8e2-a192b62bedf7" (UID: "4cde5adc-f8ac-4c3a-b8e2-a192b62bedf7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:43:27 crc kubenswrapper[4830]: I0227 17:43:27.618465 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4cde5adc-f8ac-4c3a-b8e2-a192b62bedf7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4cde5adc-f8ac-4c3a-b8e2-a192b62bedf7" (UID: "4cde5adc-f8ac-4c3a-b8e2-a192b62bedf7"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:43:27 crc kubenswrapper[4830]: I0227 17:43:27.674482 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gqqvl\" (UniqueName: \"kubernetes.io/projected/4cde5adc-f8ac-4c3a-b8e2-a192b62bedf7-kube-api-access-gqqvl\") on node \"crc\" DevicePath \"\"" Feb 27 17:43:27 crc kubenswrapper[4830]: I0227 17:43:27.674529 4830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cde5adc-f8ac-4c3a-b8e2-a192b62bedf7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 17:43:27 crc kubenswrapper[4830]: I0227 17:43:27.674548 4830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4cde5adc-f8ac-4c3a-b8e2-a192b62bedf7-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 17:43:27 crc kubenswrapper[4830]: I0227 17:43:27.863481 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-c4dnj"] Feb 27 17:43:27 crc kubenswrapper[4830]: E0227 17:43:27.864134 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cde5adc-f8ac-4c3a-b8e2-a192b62bedf7" containerName="nova-scheduler-scheduler" Feb 27 17:43:27 crc kubenswrapper[4830]: I0227 17:43:27.864150 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cde5adc-f8ac-4c3a-b8e2-a192b62bedf7" containerName="nova-scheduler-scheduler" Feb 27 17:43:27 crc kubenswrapper[4830]: E0227 17:43:27.864170 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="601dd6ff-d00f-445a-a010-0f02a2865504" containerName="init" Feb 27 17:43:27 crc kubenswrapper[4830]: I0227 17:43:27.864177 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="601dd6ff-d00f-445a-a010-0f02a2865504" containerName="init" Feb 27 17:43:27 crc kubenswrapper[4830]: E0227 17:43:27.864228 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="601dd6ff-d00f-445a-a010-0f02a2865504" 
containerName="dnsmasq-dns" Feb 27 17:43:27 crc kubenswrapper[4830]: I0227 17:43:27.864234 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="601dd6ff-d00f-445a-a010-0f02a2865504" containerName="dnsmasq-dns" Feb 27 17:43:27 crc kubenswrapper[4830]: I0227 17:43:27.864494 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="601dd6ff-d00f-445a-a010-0f02a2865504" containerName="dnsmasq-dns" Feb 27 17:43:27 crc kubenswrapper[4830]: I0227 17:43:27.864516 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="4cde5adc-f8ac-4c3a-b8e2-a192b62bedf7" containerName="nova-scheduler-scheduler" Feb 27 17:43:27 crc kubenswrapper[4830]: I0227 17:43:27.865483 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-c4dnj" Feb 27 17:43:27 crc kubenswrapper[4830]: I0227 17:43:27.872258 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Feb 27 17:43:27 crc kubenswrapper[4830]: I0227 17:43:27.872516 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Feb 27 17:43:27 crc kubenswrapper[4830]: I0227 17:43:27.882114 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-c4dnj"] Feb 27 17:43:27 crc kubenswrapper[4830]: I0227 17:43:27.975326 4830 generic.go:334] "Generic (PLEG): container finished" podID="4cde5adc-f8ac-4c3a-b8e2-a192b62bedf7" containerID="69899d04f75d9cce352e42624bb054fdd77ddb28e30def641fdb17041406d389" exitCode=0 Feb 27 17:43:27 crc kubenswrapper[4830]: I0227 17:43:27.975403 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4cde5adc-f8ac-4c3a-b8e2-a192b62bedf7","Type":"ContainerDied","Data":"69899d04f75d9cce352e42624bb054fdd77ddb28e30def641fdb17041406d389"} Feb 27 17:43:27 crc kubenswrapper[4830]: I0227 17:43:27.975461 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 27 17:43:27 crc kubenswrapper[4830]: I0227 17:43:27.975492 4830 scope.go:117] "RemoveContainer" containerID="69899d04f75d9cce352e42624bb054fdd77ddb28e30def641fdb17041406d389" Feb 27 17:43:27 crc kubenswrapper[4830]: I0227 17:43:27.975470 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4cde5adc-f8ac-4c3a-b8e2-a192b62bedf7","Type":"ContainerDied","Data":"7978251f81fd8e3f8923656868202dcc62b86881b795f90001c6b037f50b8f75"} Feb 27 17:43:27 crc kubenswrapper[4830]: I0227 17:43:27.983582 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/683c8608-155c-4dfc-89ca-0710ffbb8ea6-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-c4dnj\" (UID: \"683c8608-155c-4dfc-89ca-0710ffbb8ea6\") " pod="openstack/nova-cell1-cell-mapping-c4dnj" Feb 27 17:43:27 crc kubenswrapper[4830]: I0227 17:43:27.983672 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2q72\" (UniqueName: \"kubernetes.io/projected/683c8608-155c-4dfc-89ca-0710ffbb8ea6-kube-api-access-d2q72\") pod \"nova-cell1-cell-mapping-c4dnj\" (UID: \"683c8608-155c-4dfc-89ca-0710ffbb8ea6\") " pod="openstack/nova-cell1-cell-mapping-c4dnj" Feb 27 17:43:27 crc kubenswrapper[4830]: I0227 17:43:27.983708 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/683c8608-155c-4dfc-89ca-0710ffbb8ea6-config-data\") pod \"nova-cell1-cell-mapping-c4dnj\" (UID: \"683c8608-155c-4dfc-89ca-0710ffbb8ea6\") " pod="openstack/nova-cell1-cell-mapping-c4dnj" Feb 27 17:43:27 crc kubenswrapper[4830]: I0227 17:43:27.984047 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/683c8608-155c-4dfc-89ca-0710ffbb8ea6-scripts\") pod \"nova-cell1-cell-mapping-c4dnj\" (UID: \"683c8608-155c-4dfc-89ca-0710ffbb8ea6\") " pod="openstack/nova-cell1-cell-mapping-c4dnj" Feb 27 17:43:28 crc kubenswrapper[4830]: I0227 17:43:28.018659 4830 scope.go:117] "RemoveContainer" containerID="69899d04f75d9cce352e42624bb054fdd77ddb28e30def641fdb17041406d389" Feb 27 17:43:28 crc kubenswrapper[4830]: E0227 17:43:28.019402 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"69899d04f75d9cce352e42624bb054fdd77ddb28e30def641fdb17041406d389\": container with ID starting with 69899d04f75d9cce352e42624bb054fdd77ddb28e30def641fdb17041406d389 not found: ID does not exist" containerID="69899d04f75d9cce352e42624bb054fdd77ddb28e30def641fdb17041406d389" Feb 27 17:43:28 crc kubenswrapper[4830]: I0227 17:43:28.019483 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"69899d04f75d9cce352e42624bb054fdd77ddb28e30def641fdb17041406d389"} err="failed to get container status \"69899d04f75d9cce352e42624bb054fdd77ddb28e30def641fdb17041406d389\": rpc error: code = NotFound desc = could not find container \"69899d04f75d9cce352e42624bb054fdd77ddb28e30def641fdb17041406d389\": container with ID starting with 69899d04f75d9cce352e42624bb054fdd77ddb28e30def641fdb17041406d389 not found: ID does not exist" Feb 27 17:43:28 crc kubenswrapper[4830]: I0227 17:43:28.060200 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 27 17:43:28 crc kubenswrapper[4830]: I0227 17:43:28.081376 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 27 17:43:28 crc kubenswrapper[4830]: I0227 17:43:28.086420 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/683c8608-155c-4dfc-89ca-0710ffbb8ea6-scripts\") pod 
\"nova-cell1-cell-mapping-c4dnj\" (UID: \"683c8608-155c-4dfc-89ca-0710ffbb8ea6\") " pod="openstack/nova-cell1-cell-mapping-c4dnj" Feb 27 17:43:28 crc kubenswrapper[4830]: I0227 17:43:28.086567 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/683c8608-155c-4dfc-89ca-0710ffbb8ea6-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-c4dnj\" (UID: \"683c8608-155c-4dfc-89ca-0710ffbb8ea6\") " pod="openstack/nova-cell1-cell-mapping-c4dnj" Feb 27 17:43:28 crc kubenswrapper[4830]: I0227 17:43:28.086617 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2q72\" (UniqueName: \"kubernetes.io/projected/683c8608-155c-4dfc-89ca-0710ffbb8ea6-kube-api-access-d2q72\") pod \"nova-cell1-cell-mapping-c4dnj\" (UID: \"683c8608-155c-4dfc-89ca-0710ffbb8ea6\") " pod="openstack/nova-cell1-cell-mapping-c4dnj" Feb 27 17:43:28 crc kubenswrapper[4830]: I0227 17:43:28.086653 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/683c8608-155c-4dfc-89ca-0710ffbb8ea6-config-data\") pod \"nova-cell1-cell-mapping-c4dnj\" (UID: \"683c8608-155c-4dfc-89ca-0710ffbb8ea6\") " pod="openstack/nova-cell1-cell-mapping-c4dnj" Feb 27 17:43:28 crc kubenswrapper[4830]: I0227 17:43:28.093613 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/683c8608-155c-4dfc-89ca-0710ffbb8ea6-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-c4dnj\" (UID: \"683c8608-155c-4dfc-89ca-0710ffbb8ea6\") " pod="openstack/nova-cell1-cell-mapping-c4dnj" Feb 27 17:43:28 crc kubenswrapper[4830]: I0227 17:43:28.100132 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/683c8608-155c-4dfc-89ca-0710ffbb8ea6-scripts\") pod \"nova-cell1-cell-mapping-c4dnj\" (UID: 
\"683c8608-155c-4dfc-89ca-0710ffbb8ea6\") " pod="openstack/nova-cell1-cell-mapping-c4dnj" Feb 27 17:43:28 crc kubenswrapper[4830]: I0227 17:43:28.101911 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/683c8608-155c-4dfc-89ca-0710ffbb8ea6-config-data\") pod \"nova-cell1-cell-mapping-c4dnj\" (UID: \"683c8608-155c-4dfc-89ca-0710ffbb8ea6\") " pod="openstack/nova-cell1-cell-mapping-c4dnj" Feb 27 17:43:28 crc kubenswrapper[4830]: I0227 17:43:28.111631 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2q72\" (UniqueName: \"kubernetes.io/projected/683c8608-155c-4dfc-89ca-0710ffbb8ea6-kube-api-access-d2q72\") pod \"nova-cell1-cell-mapping-c4dnj\" (UID: \"683c8608-155c-4dfc-89ca-0710ffbb8ea6\") " pod="openstack/nova-cell1-cell-mapping-c4dnj" Feb 27 17:43:28 crc kubenswrapper[4830]: I0227 17:43:28.116480 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 27 17:43:28 crc kubenswrapper[4830]: I0227 17:43:28.118488 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 27 17:43:28 crc kubenswrapper[4830]: I0227 17:43:28.120609 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 27 17:43:28 crc kubenswrapper[4830]: I0227 17:43:28.129444 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 27 17:43:28 crc kubenswrapper[4830]: I0227 17:43:28.200200 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-c4dnj" Feb 27 17:43:28 crc kubenswrapper[4830]: I0227 17:43:28.290844 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78014205-9653-44a4-a659-0deefc09785c-config-data\") pod \"nova-scheduler-0\" (UID: \"78014205-9653-44a4-a659-0deefc09785c\") " pod="openstack/nova-scheduler-0" Feb 27 17:43:28 crc kubenswrapper[4830]: I0227 17:43:28.291357 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78014205-9653-44a4-a659-0deefc09785c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"78014205-9653-44a4-a659-0deefc09785c\") " pod="openstack/nova-scheduler-0" Feb 27 17:43:28 crc kubenswrapper[4830]: I0227 17:43:28.291525 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42q4k\" (UniqueName: \"kubernetes.io/projected/78014205-9653-44a4-a659-0deefc09785c-kube-api-access-42q4k\") pod \"nova-scheduler-0\" (UID: \"78014205-9653-44a4-a659-0deefc09785c\") " pod="openstack/nova-scheduler-0" Feb 27 17:43:28 crc kubenswrapper[4830]: I0227 17:43:28.394462 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78014205-9653-44a4-a659-0deefc09785c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"78014205-9653-44a4-a659-0deefc09785c\") " pod="openstack/nova-scheduler-0" Feb 27 17:43:28 crc kubenswrapper[4830]: I0227 17:43:28.394708 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-42q4k\" (UniqueName: \"kubernetes.io/projected/78014205-9653-44a4-a659-0deefc09785c-kube-api-access-42q4k\") pod \"nova-scheduler-0\" (UID: \"78014205-9653-44a4-a659-0deefc09785c\") " pod="openstack/nova-scheduler-0" Feb 27 17:43:28 
crc kubenswrapper[4830]: I0227 17:43:28.394832 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78014205-9653-44a4-a659-0deefc09785c-config-data\") pod \"nova-scheduler-0\" (UID: \"78014205-9653-44a4-a659-0deefc09785c\") " pod="openstack/nova-scheduler-0" Feb 27 17:43:28 crc kubenswrapper[4830]: I0227 17:43:28.402015 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78014205-9653-44a4-a659-0deefc09785c-config-data\") pod \"nova-scheduler-0\" (UID: \"78014205-9653-44a4-a659-0deefc09785c\") " pod="openstack/nova-scheduler-0" Feb 27 17:43:28 crc kubenswrapper[4830]: I0227 17:43:28.408384 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78014205-9653-44a4-a659-0deefc09785c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"78014205-9653-44a4-a659-0deefc09785c\") " pod="openstack/nova-scheduler-0" Feb 27 17:43:28 crc kubenswrapper[4830]: I0227 17:43:28.416391 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-42q4k\" (UniqueName: \"kubernetes.io/projected/78014205-9653-44a4-a659-0deefc09785c-kube-api-access-42q4k\") pod \"nova-scheduler-0\" (UID: \"78014205-9653-44a4-a659-0deefc09785c\") " pod="openstack/nova-scheduler-0" Feb 27 17:43:28 crc kubenswrapper[4830]: I0227 17:43:28.453239 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 27 17:43:28 crc kubenswrapper[4830]: I0227 17:43:28.690816 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-c4dnj"] Feb 27 17:43:28 crc kubenswrapper[4830]: W0227 17:43:28.699207 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod683c8608_155c_4dfc_89ca_0710ffbb8ea6.slice/crio-465d5251397cc4fd944215aa5c338b479b60b834d21c3a26d3b37443a0d2c466 WatchSource:0}: Error finding container 465d5251397cc4fd944215aa5c338b479b60b834d21c3a26d3b37443a0d2c466: Status 404 returned error can't find the container with id 465d5251397cc4fd944215aa5c338b479b60b834d21c3a26d3b37443a0d2c466 Feb 27 17:43:28 crc kubenswrapper[4830]: I0227 17:43:28.776984 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4cde5adc-f8ac-4c3a-b8e2-a192b62bedf7" path="/var/lib/kubelet/pods/4cde5adc-f8ac-4c3a-b8e2-a192b62bedf7/volumes" Feb 27 17:43:28 crc kubenswrapper[4830]: I0227 17:43:28.993932 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-c4dnj" event={"ID":"683c8608-155c-4dfc-89ca-0710ffbb8ea6","Type":"ContainerStarted","Data":"abbdd76657daf1d0b034f8f7bf5e22ad0804162eb5fc26addce2f418dc7fe2ec"} Feb 27 17:43:28 crc kubenswrapper[4830]: I0227 17:43:28.994300 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-c4dnj" event={"ID":"683c8608-155c-4dfc-89ca-0710ffbb8ea6","Type":"ContainerStarted","Data":"465d5251397cc4fd944215aa5c338b479b60b834d21c3a26d3b37443a0d2c466"} Feb 27 17:43:28 crc kubenswrapper[4830]: I0227 17:43:28.997499 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 27 17:43:29 crc kubenswrapper[4830]: W0227 17:43:29.007887 4830 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod78014205_9653_44a4_a659_0deefc09785c.slice/crio-c99fc1ee2fe9afefeb0b4f15d695ed992269fa1b5ef702576fa9848f877918b7 WatchSource:0}: Error finding container c99fc1ee2fe9afefeb0b4f15d695ed992269fa1b5ef702576fa9848f877918b7: Status 404 returned error can't find the container with id c99fc1ee2fe9afefeb0b4f15d695ed992269fa1b5ef702576fa9848f877918b7 Feb 27 17:43:29 crc kubenswrapper[4830]: I0227 17:43:29.026650 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-c4dnj" podStartSLOduration=2.026627579 podStartE2EDuration="2.026627579s" podCreationTimestamp="2026-02-27 17:43:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 17:43:29.015201504 +0000 UTC m=+5805.104473967" watchObservedRunningTime="2026-02-27 17:43:29.026627579 +0000 UTC m=+5805.115900052" Feb 27 17:43:29 crc kubenswrapper[4830]: I0227 17:43:29.764592 4830 scope.go:117] "RemoveContainer" containerID="0f73af46b765b3d98f3f5e9883b49887ac4f2ac3485e91a001e18c5d351646d8" Feb 27 17:43:29 crc kubenswrapper[4830]: E0227 17:43:29.764884 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 17:43:30 crc kubenswrapper[4830]: I0227 17:43:30.019624 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"78014205-9653-44a4-a659-0deefc09785c","Type":"ContainerStarted","Data":"3e594665aee8dff29ae97bd496d75e542b4f560c4ec711f80fb2c2aeaceba08d"} Feb 27 17:43:30 crc kubenswrapper[4830]: 
I0227 17:43:30.020217 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"78014205-9653-44a4-a659-0deefc09785c","Type":"ContainerStarted","Data":"c99fc1ee2fe9afefeb0b4f15d695ed992269fa1b5ef702576fa9848f877918b7"} Feb 27 17:43:30 crc kubenswrapper[4830]: I0227 17:43:30.056183 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.05615375 podStartE2EDuration="2.05615375s" podCreationTimestamp="2026-02-27 17:43:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 17:43:30.042881081 +0000 UTC m=+5806.132153584" watchObservedRunningTime="2026-02-27 17:43:30.05615375 +0000 UTC m=+5806.145426253" Feb 27 17:43:30 crc kubenswrapper[4830]: I0227 17:43:30.785002 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 27 17:43:30 crc kubenswrapper[4830]: I0227 17:43:30.785072 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 27 17:43:32 crc kubenswrapper[4830]: E0227 17:43:32.768249 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536898-vrwjs" podUID="204eb1af-36ad-4de7-9da7-9a37fefd3026" Feb 27 17:43:33 crc kubenswrapper[4830]: I0227 17:43:33.454262 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 27 17:43:34 crc kubenswrapper[4830]: I0227 17:43:34.093489 4830 generic.go:334] "Generic (PLEG): container finished" podID="683c8608-155c-4dfc-89ca-0710ffbb8ea6" containerID="abbdd76657daf1d0b034f8f7bf5e22ad0804162eb5fc26addce2f418dc7fe2ec" exitCode=0 Feb 27 17:43:34 crc kubenswrapper[4830]: I0227 17:43:34.093823 
4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-c4dnj" event={"ID":"683c8608-155c-4dfc-89ca-0710ffbb8ea6","Type":"ContainerDied","Data":"abbdd76657daf1d0b034f8f7bf5e22ad0804162eb5fc26addce2f418dc7fe2ec"} Feb 27 17:43:35 crc kubenswrapper[4830]: I0227 17:43:35.539284 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-c4dnj" Feb 27 17:43:35 crc kubenswrapper[4830]: I0227 17:43:35.640507 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 27 17:43:35 crc kubenswrapper[4830]: I0227 17:43:35.640605 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 27 17:43:35 crc kubenswrapper[4830]: I0227 17:43:35.687965 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d2q72\" (UniqueName: \"kubernetes.io/projected/683c8608-155c-4dfc-89ca-0710ffbb8ea6-kube-api-access-d2q72\") pod \"683c8608-155c-4dfc-89ca-0710ffbb8ea6\" (UID: \"683c8608-155c-4dfc-89ca-0710ffbb8ea6\") " Feb 27 17:43:35 crc kubenswrapper[4830]: I0227 17:43:35.688201 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/683c8608-155c-4dfc-89ca-0710ffbb8ea6-scripts\") pod \"683c8608-155c-4dfc-89ca-0710ffbb8ea6\" (UID: \"683c8608-155c-4dfc-89ca-0710ffbb8ea6\") " Feb 27 17:43:35 crc kubenswrapper[4830]: I0227 17:43:35.688476 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/683c8608-155c-4dfc-89ca-0710ffbb8ea6-combined-ca-bundle\") pod \"683c8608-155c-4dfc-89ca-0710ffbb8ea6\" (UID: \"683c8608-155c-4dfc-89ca-0710ffbb8ea6\") " Feb 27 17:43:35 crc kubenswrapper[4830]: I0227 17:43:35.688577 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/683c8608-155c-4dfc-89ca-0710ffbb8ea6-config-data\") pod \"683c8608-155c-4dfc-89ca-0710ffbb8ea6\" (UID: \"683c8608-155c-4dfc-89ca-0710ffbb8ea6\") " Feb 27 17:43:35 crc kubenswrapper[4830]: I0227 17:43:35.698260 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/683c8608-155c-4dfc-89ca-0710ffbb8ea6-kube-api-access-d2q72" (OuterVolumeSpecName: "kube-api-access-d2q72") pod "683c8608-155c-4dfc-89ca-0710ffbb8ea6" (UID: "683c8608-155c-4dfc-89ca-0710ffbb8ea6"). InnerVolumeSpecName "kube-api-access-d2q72". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:43:35 crc kubenswrapper[4830]: I0227 17:43:35.702254 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/683c8608-155c-4dfc-89ca-0710ffbb8ea6-scripts" (OuterVolumeSpecName: "scripts") pod "683c8608-155c-4dfc-89ca-0710ffbb8ea6" (UID: "683c8608-155c-4dfc-89ca-0710ffbb8ea6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:43:35 crc kubenswrapper[4830]: I0227 17:43:35.733828 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/683c8608-155c-4dfc-89ca-0710ffbb8ea6-config-data" (OuterVolumeSpecName: "config-data") pod "683c8608-155c-4dfc-89ca-0710ffbb8ea6" (UID: "683c8608-155c-4dfc-89ca-0710ffbb8ea6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:43:35 crc kubenswrapper[4830]: I0227 17:43:35.739072 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/683c8608-155c-4dfc-89ca-0710ffbb8ea6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "683c8608-155c-4dfc-89ca-0710ffbb8ea6" (UID: "683c8608-155c-4dfc-89ca-0710ffbb8ea6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:43:35 crc kubenswrapper[4830]: I0227 17:43:35.764634 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 27 17:43:35 crc kubenswrapper[4830]: I0227 17:43:35.764756 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 27 17:43:35 crc kubenswrapper[4830]: I0227 17:43:35.792574 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d2q72\" (UniqueName: \"kubernetes.io/projected/683c8608-155c-4dfc-89ca-0710ffbb8ea6-kube-api-access-d2q72\") on node \"crc\" DevicePath \"\"" Feb 27 17:43:35 crc kubenswrapper[4830]: I0227 17:43:35.792674 4830 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/683c8608-155c-4dfc-89ca-0710ffbb8ea6-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 17:43:35 crc kubenswrapper[4830]: I0227 17:43:35.792696 4830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/683c8608-155c-4dfc-89ca-0710ffbb8ea6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 17:43:35 crc kubenswrapper[4830]: I0227 17:43:35.792716 4830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/683c8608-155c-4dfc-89ca-0710ffbb8ea6-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 17:43:36 crc kubenswrapper[4830]: I0227 17:43:36.134651 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-c4dnj" event={"ID":"683c8608-155c-4dfc-89ca-0710ffbb8ea6","Type":"ContainerDied","Data":"465d5251397cc4fd944215aa5c338b479b60b834d21c3a26d3b37443a0d2c466"} Feb 27 17:43:36 crc kubenswrapper[4830]: I0227 17:43:36.134734 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="465d5251397cc4fd944215aa5c338b479b60b834d21c3a26d3b37443a0d2c466" Feb 27 
17:43:36 crc kubenswrapper[4830]: I0227 17:43:36.134876 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-c4dnj" Feb 27 17:43:36 crc kubenswrapper[4830]: I0227 17:43:36.331032 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 27 17:43:36 crc kubenswrapper[4830]: I0227 17:43:36.331422 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="8aec15ae-952f-4209-9b96-bd90f7e16b44" containerName="nova-api-log" containerID="cri-o://117522c3b30cba765b6dd74111347985bbd0a2d347e8f01d93bb80eccd696e1c" gracePeriod=30 Feb 27 17:43:36 crc kubenswrapper[4830]: I0227 17:43:36.331703 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="8aec15ae-952f-4209-9b96-bd90f7e16b44" containerName="nova-api-api" containerID="cri-o://e85f0ac8519365b549ae950038dae5a8ae1cae18a01629940514cbfdc49431bb" gracePeriod=30 Feb 27 17:43:36 crc kubenswrapper[4830]: I0227 17:43:36.346908 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="8aec15ae-952f-4209-9b96-bd90f7e16b44" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.107:8774/\": EOF" Feb 27 17:43:36 crc kubenswrapper[4830]: I0227 17:43:36.347265 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="8aec15ae-952f-4209-9b96-bd90f7e16b44" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.107:8774/\": EOF" Feb 27 17:43:36 crc kubenswrapper[4830]: I0227 17:43:36.348437 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 27 17:43:36 crc kubenswrapper[4830]: I0227 17:43:36.348797 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="78014205-9653-44a4-a659-0deefc09785c" 
containerName="nova-scheduler-scheduler" containerID="cri-o://3e594665aee8dff29ae97bd496d75e542b4f560c4ec711f80fb2c2aeaceba08d" gracePeriod=30 Feb 27 17:43:36 crc kubenswrapper[4830]: I0227 17:43:36.378393 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 27 17:43:36 crc kubenswrapper[4830]: I0227 17:43:36.379214 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="610822bd-ca63-45d2-9a7e-c9dd6a5068e9" containerName="nova-metadata-log" containerID="cri-o://7e929a86a7665398771a3c70ce16950b9a81af7b338ddc722d5cc0186b454323" gracePeriod=30 Feb 27 17:43:36 crc kubenswrapper[4830]: I0227 17:43:36.379474 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="610822bd-ca63-45d2-9a7e-c9dd6a5068e9" containerName="nova-metadata-metadata" containerID="cri-o://438da0477ae1142648ec558d04441037b39c0d416032c2b5ce7d672078313955" gracePeriod=30 Feb 27 17:43:36 crc kubenswrapper[4830]: I0227 17:43:36.387236 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="610822bd-ca63-45d2-9a7e-c9dd6a5068e9" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"http://10.217.1.108:8775/\": EOF" Feb 27 17:43:36 crc kubenswrapper[4830]: I0227 17:43:36.387421 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="610822bd-ca63-45d2-9a7e-c9dd6a5068e9" containerName="nova-metadata-log" probeResult="failure" output="Get \"http://10.217.1.108:8775/\": EOF" Feb 27 17:43:37 crc kubenswrapper[4830]: I0227 17:43:37.154157 4830 generic.go:334] "Generic (PLEG): container finished" podID="610822bd-ca63-45d2-9a7e-c9dd6a5068e9" containerID="7e929a86a7665398771a3c70ce16950b9a81af7b338ddc722d5cc0186b454323" exitCode=143 Feb 27 17:43:37 crc kubenswrapper[4830]: I0227 17:43:37.154220 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-metadata-0" event={"ID":"610822bd-ca63-45d2-9a7e-c9dd6a5068e9","Type":"ContainerDied","Data":"7e929a86a7665398771a3c70ce16950b9a81af7b338ddc722d5cc0186b454323"} Feb 27 17:43:37 crc kubenswrapper[4830]: I0227 17:43:37.156743 4830 generic.go:334] "Generic (PLEG): container finished" podID="8aec15ae-952f-4209-9b96-bd90f7e16b44" containerID="117522c3b30cba765b6dd74111347985bbd0a2d347e8f01d93bb80eccd696e1c" exitCode=143 Feb 27 17:43:37 crc kubenswrapper[4830]: I0227 17:43:37.156778 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8aec15ae-952f-4209-9b96-bd90f7e16b44","Type":"ContainerDied","Data":"117522c3b30cba765b6dd74111347985bbd0a2d347e8f01d93bb80eccd696e1c"} Feb 27 17:43:40 crc kubenswrapper[4830]: I0227 17:43:40.727246 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 27 17:43:40 crc kubenswrapper[4830]: I0227 17:43:40.828022 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78014205-9653-44a4-a659-0deefc09785c-config-data\") pod \"78014205-9653-44a4-a659-0deefc09785c\" (UID: \"78014205-9653-44a4-a659-0deefc09785c\") " Feb 27 17:43:40 crc kubenswrapper[4830]: I0227 17:43:40.828150 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78014205-9653-44a4-a659-0deefc09785c-combined-ca-bundle\") pod \"78014205-9653-44a4-a659-0deefc09785c\" (UID: \"78014205-9653-44a4-a659-0deefc09785c\") " Feb 27 17:43:40 crc kubenswrapper[4830]: I0227 17:43:40.828191 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-42q4k\" (UniqueName: \"kubernetes.io/projected/78014205-9653-44a4-a659-0deefc09785c-kube-api-access-42q4k\") pod \"78014205-9653-44a4-a659-0deefc09785c\" (UID: \"78014205-9653-44a4-a659-0deefc09785c\") " Feb 27 17:43:40 
crc kubenswrapper[4830]: I0227 17:43:40.834161 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78014205-9653-44a4-a659-0deefc09785c-kube-api-access-42q4k" (OuterVolumeSpecName: "kube-api-access-42q4k") pod "78014205-9653-44a4-a659-0deefc09785c" (UID: "78014205-9653-44a4-a659-0deefc09785c"). InnerVolumeSpecName "kube-api-access-42q4k". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:43:40 crc kubenswrapper[4830]: I0227 17:43:40.859851 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78014205-9653-44a4-a659-0deefc09785c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "78014205-9653-44a4-a659-0deefc09785c" (UID: "78014205-9653-44a4-a659-0deefc09785c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:43:40 crc kubenswrapper[4830]: I0227 17:43:40.860173 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78014205-9653-44a4-a659-0deefc09785c-config-data" (OuterVolumeSpecName: "config-data") pod "78014205-9653-44a4-a659-0deefc09785c" (UID: "78014205-9653-44a4-a659-0deefc09785c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:43:40 crc kubenswrapper[4830]: I0227 17:43:40.930567 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-42q4k\" (UniqueName: \"kubernetes.io/projected/78014205-9653-44a4-a659-0deefc09785c-kube-api-access-42q4k\") on node \"crc\" DevicePath \"\"" Feb 27 17:43:40 crc kubenswrapper[4830]: I0227 17:43:40.930622 4830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78014205-9653-44a4-a659-0deefc09785c-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 17:43:40 crc kubenswrapper[4830]: I0227 17:43:40.930645 4830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78014205-9653-44a4-a659-0deefc09785c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 17:43:41 crc kubenswrapper[4830]: I0227 17:43:41.113182 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 27 17:43:41 crc kubenswrapper[4830]: I0227 17:43:41.202294 4830 generic.go:334] "Generic (PLEG): container finished" podID="610822bd-ca63-45d2-9a7e-c9dd6a5068e9" containerID="438da0477ae1142648ec558d04441037b39c0d416032c2b5ce7d672078313955" exitCode=0 Feb 27 17:43:41 crc kubenswrapper[4830]: I0227 17:43:41.202355 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"610822bd-ca63-45d2-9a7e-c9dd6a5068e9","Type":"ContainerDied","Data":"438da0477ae1142648ec558d04441037b39c0d416032c2b5ce7d672078313955"} Feb 27 17:43:41 crc kubenswrapper[4830]: I0227 17:43:41.202382 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"610822bd-ca63-45d2-9a7e-c9dd6a5068e9","Type":"ContainerDied","Data":"dc98881443635cf3db94df9a0fa58283cbf03f28be2aadad60eb565038304e1c"} Feb 27 17:43:41 crc kubenswrapper[4830]: I0227 17:43:41.202398 4830 scope.go:117] "RemoveContainer" 
containerID="438da0477ae1142648ec558d04441037b39c0d416032c2b5ce7d672078313955" Feb 27 17:43:41 crc kubenswrapper[4830]: I0227 17:43:41.202513 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 27 17:43:41 crc kubenswrapper[4830]: I0227 17:43:41.206369 4830 generic.go:334] "Generic (PLEG): container finished" podID="78014205-9653-44a4-a659-0deefc09785c" containerID="3e594665aee8dff29ae97bd496d75e542b4f560c4ec711f80fb2c2aeaceba08d" exitCode=0 Feb 27 17:43:41 crc kubenswrapper[4830]: I0227 17:43:41.206425 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"78014205-9653-44a4-a659-0deefc09785c","Type":"ContainerDied","Data":"3e594665aee8dff29ae97bd496d75e542b4f560c4ec711f80fb2c2aeaceba08d"} Feb 27 17:43:41 crc kubenswrapper[4830]: I0227 17:43:41.206457 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"78014205-9653-44a4-a659-0deefc09785c","Type":"ContainerDied","Data":"c99fc1ee2fe9afefeb0b4f15d695ed992269fa1b5ef702576fa9848f877918b7"} Feb 27 17:43:41 crc kubenswrapper[4830]: I0227 17:43:41.206512 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 27 17:43:41 crc kubenswrapper[4830]: I0227 17:43:41.228742 4830 scope.go:117] "RemoveContainer" containerID="7e929a86a7665398771a3c70ce16950b9a81af7b338ddc722d5cc0186b454323" Feb 27 17:43:41 crc kubenswrapper[4830]: I0227 17:43:41.234883 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/610822bd-ca63-45d2-9a7e-c9dd6a5068e9-logs\") pod \"610822bd-ca63-45d2-9a7e-c9dd6a5068e9\" (UID: \"610822bd-ca63-45d2-9a7e-c9dd6a5068e9\") " Feb 27 17:43:41 crc kubenswrapper[4830]: I0227 17:43:41.234988 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ftmq6\" (UniqueName: \"kubernetes.io/projected/610822bd-ca63-45d2-9a7e-c9dd6a5068e9-kube-api-access-ftmq6\") pod \"610822bd-ca63-45d2-9a7e-c9dd6a5068e9\" (UID: \"610822bd-ca63-45d2-9a7e-c9dd6a5068e9\") " Feb 27 17:43:41 crc kubenswrapper[4830]: I0227 17:43:41.235071 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/610822bd-ca63-45d2-9a7e-c9dd6a5068e9-config-data\") pod \"610822bd-ca63-45d2-9a7e-c9dd6a5068e9\" (UID: \"610822bd-ca63-45d2-9a7e-c9dd6a5068e9\") " Feb 27 17:43:41 crc kubenswrapper[4830]: I0227 17:43:41.235368 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/610822bd-ca63-45d2-9a7e-c9dd6a5068e9-combined-ca-bundle\") pod \"610822bd-ca63-45d2-9a7e-c9dd6a5068e9\" (UID: \"610822bd-ca63-45d2-9a7e-c9dd6a5068e9\") " Feb 27 17:43:41 crc kubenswrapper[4830]: I0227 17:43:41.235750 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/610822bd-ca63-45d2-9a7e-c9dd6a5068e9-logs" (OuterVolumeSpecName: "logs") pod "610822bd-ca63-45d2-9a7e-c9dd6a5068e9" (UID: "610822bd-ca63-45d2-9a7e-c9dd6a5068e9"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 17:43:41 crc kubenswrapper[4830]: I0227 17:43:41.236241 4830 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/610822bd-ca63-45d2-9a7e-c9dd6a5068e9-logs\") on node \"crc\" DevicePath \"\"" Feb 27 17:43:41 crc kubenswrapper[4830]: I0227 17:43:41.247324 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 27 17:43:41 crc kubenswrapper[4830]: I0227 17:43:41.256361 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/610822bd-ca63-45d2-9a7e-c9dd6a5068e9-kube-api-access-ftmq6" (OuterVolumeSpecName: "kube-api-access-ftmq6") pod "610822bd-ca63-45d2-9a7e-c9dd6a5068e9" (UID: "610822bd-ca63-45d2-9a7e-c9dd6a5068e9"). InnerVolumeSpecName "kube-api-access-ftmq6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:43:41 crc kubenswrapper[4830]: I0227 17:43:41.258829 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 27 17:43:41 crc kubenswrapper[4830]: I0227 17:43:41.266047 4830 scope.go:117] "RemoveContainer" containerID="438da0477ae1142648ec558d04441037b39c0d416032c2b5ce7d672078313955" Feb 27 17:43:41 crc kubenswrapper[4830]: E0227 17:43:41.266571 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"438da0477ae1142648ec558d04441037b39c0d416032c2b5ce7d672078313955\": container with ID starting with 438da0477ae1142648ec558d04441037b39c0d416032c2b5ce7d672078313955 not found: ID does not exist" containerID="438da0477ae1142648ec558d04441037b39c0d416032c2b5ce7d672078313955" Feb 27 17:43:41 crc kubenswrapper[4830]: I0227 17:43:41.266615 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"438da0477ae1142648ec558d04441037b39c0d416032c2b5ce7d672078313955"} err="failed to get container status 
\"438da0477ae1142648ec558d04441037b39c0d416032c2b5ce7d672078313955\": rpc error: code = NotFound desc = could not find container \"438da0477ae1142648ec558d04441037b39c0d416032c2b5ce7d672078313955\": container with ID starting with 438da0477ae1142648ec558d04441037b39c0d416032c2b5ce7d672078313955 not found: ID does not exist" Feb 27 17:43:41 crc kubenswrapper[4830]: I0227 17:43:41.266651 4830 scope.go:117] "RemoveContainer" containerID="7e929a86a7665398771a3c70ce16950b9a81af7b338ddc722d5cc0186b454323" Feb 27 17:43:41 crc kubenswrapper[4830]: E0227 17:43:41.266929 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e929a86a7665398771a3c70ce16950b9a81af7b338ddc722d5cc0186b454323\": container with ID starting with 7e929a86a7665398771a3c70ce16950b9a81af7b338ddc722d5cc0186b454323 not found: ID does not exist" containerID="7e929a86a7665398771a3c70ce16950b9a81af7b338ddc722d5cc0186b454323" Feb 27 17:43:41 crc kubenswrapper[4830]: I0227 17:43:41.266972 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e929a86a7665398771a3c70ce16950b9a81af7b338ddc722d5cc0186b454323"} err="failed to get container status \"7e929a86a7665398771a3c70ce16950b9a81af7b338ddc722d5cc0186b454323\": rpc error: code = NotFound desc = could not find container \"7e929a86a7665398771a3c70ce16950b9a81af7b338ddc722d5cc0186b454323\": container with ID starting with 7e929a86a7665398771a3c70ce16950b9a81af7b338ddc722d5cc0186b454323 not found: ID does not exist" Feb 27 17:43:41 crc kubenswrapper[4830]: I0227 17:43:41.266989 4830 scope.go:117] "RemoveContainer" containerID="3e594665aee8dff29ae97bd496d75e542b4f560c4ec711f80fb2c2aeaceba08d" Feb 27 17:43:41 crc kubenswrapper[4830]: I0227 17:43:41.272789 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/610822bd-ca63-45d2-9a7e-c9dd6a5068e9-combined-ca-bundle" (OuterVolumeSpecName: 
"combined-ca-bundle") pod "610822bd-ca63-45d2-9a7e-c9dd6a5068e9" (UID: "610822bd-ca63-45d2-9a7e-c9dd6a5068e9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:43:41 crc kubenswrapper[4830]: I0227 17:43:41.274179 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/610822bd-ca63-45d2-9a7e-c9dd6a5068e9-config-data" (OuterVolumeSpecName: "config-data") pod "610822bd-ca63-45d2-9a7e-c9dd6a5068e9" (UID: "610822bd-ca63-45d2-9a7e-c9dd6a5068e9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:43:41 crc kubenswrapper[4830]: I0227 17:43:41.282102 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 27 17:43:41 crc kubenswrapper[4830]: E0227 17:43:41.285517 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="610822bd-ca63-45d2-9a7e-c9dd6a5068e9" containerName="nova-metadata-metadata" Feb 27 17:43:41 crc kubenswrapper[4830]: I0227 17:43:41.285559 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="610822bd-ca63-45d2-9a7e-c9dd6a5068e9" containerName="nova-metadata-metadata" Feb 27 17:43:41 crc kubenswrapper[4830]: E0227 17:43:41.285576 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="610822bd-ca63-45d2-9a7e-c9dd6a5068e9" containerName="nova-metadata-log" Feb 27 17:43:41 crc kubenswrapper[4830]: I0227 17:43:41.285586 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="610822bd-ca63-45d2-9a7e-c9dd6a5068e9" containerName="nova-metadata-log" Feb 27 17:43:41 crc kubenswrapper[4830]: E0227 17:43:41.285622 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78014205-9653-44a4-a659-0deefc09785c" containerName="nova-scheduler-scheduler" Feb 27 17:43:41 crc kubenswrapper[4830]: I0227 17:43:41.285633 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="78014205-9653-44a4-a659-0deefc09785c" 
containerName="nova-scheduler-scheduler" Feb 27 17:43:41 crc kubenswrapper[4830]: E0227 17:43:41.285644 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="683c8608-155c-4dfc-89ca-0710ffbb8ea6" containerName="nova-manage" Feb 27 17:43:41 crc kubenswrapper[4830]: I0227 17:43:41.285652 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="683c8608-155c-4dfc-89ca-0710ffbb8ea6" containerName="nova-manage" Feb 27 17:43:41 crc kubenswrapper[4830]: I0227 17:43:41.285871 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="610822bd-ca63-45d2-9a7e-c9dd6a5068e9" containerName="nova-metadata-metadata" Feb 27 17:43:41 crc kubenswrapper[4830]: I0227 17:43:41.285895 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="78014205-9653-44a4-a659-0deefc09785c" containerName="nova-scheduler-scheduler" Feb 27 17:43:41 crc kubenswrapper[4830]: I0227 17:43:41.285915 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="683c8608-155c-4dfc-89ca-0710ffbb8ea6" containerName="nova-manage" Feb 27 17:43:41 crc kubenswrapper[4830]: I0227 17:43:41.285925 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="610822bd-ca63-45d2-9a7e-c9dd6a5068e9" containerName="nova-metadata-log" Feb 27 17:43:41 crc kubenswrapper[4830]: I0227 17:43:41.286729 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 27 17:43:41 crc kubenswrapper[4830]: I0227 17:43:41.291292 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 27 17:43:41 crc kubenswrapper[4830]: I0227 17:43:41.298705 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 27 17:43:41 crc kubenswrapper[4830]: I0227 17:43:41.338257 4830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/610822bd-ca63-45d2-9a7e-c9dd6a5068e9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 17:43:41 crc kubenswrapper[4830]: I0227 17:43:41.338290 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ftmq6\" (UniqueName: \"kubernetes.io/projected/610822bd-ca63-45d2-9a7e-c9dd6a5068e9-kube-api-access-ftmq6\") on node \"crc\" DevicePath \"\"" Feb 27 17:43:41 crc kubenswrapper[4830]: I0227 17:43:41.338302 4830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/610822bd-ca63-45d2-9a7e-c9dd6a5068e9-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 17:43:41 crc kubenswrapper[4830]: I0227 17:43:41.372566 4830 scope.go:117] "RemoveContainer" containerID="3e594665aee8dff29ae97bd496d75e542b4f560c4ec711f80fb2c2aeaceba08d" Feb 27 17:43:41 crc kubenswrapper[4830]: E0227 17:43:41.372918 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3e594665aee8dff29ae97bd496d75e542b4f560c4ec711f80fb2c2aeaceba08d\": container with ID starting with 3e594665aee8dff29ae97bd496d75e542b4f560c4ec711f80fb2c2aeaceba08d not found: ID does not exist" containerID="3e594665aee8dff29ae97bd496d75e542b4f560c4ec711f80fb2c2aeaceba08d" Feb 27 17:43:41 crc kubenswrapper[4830]: I0227 17:43:41.372983 4830 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"3e594665aee8dff29ae97bd496d75e542b4f560c4ec711f80fb2c2aeaceba08d"} err="failed to get container status \"3e594665aee8dff29ae97bd496d75e542b4f560c4ec711f80fb2c2aeaceba08d\": rpc error: code = NotFound desc = could not find container \"3e594665aee8dff29ae97bd496d75e542b4f560c4ec711f80fb2c2aeaceba08d\": container with ID starting with 3e594665aee8dff29ae97bd496d75e542b4f560c4ec711f80fb2c2aeaceba08d not found: ID does not exist" Feb 27 17:43:41 crc kubenswrapper[4830]: I0227 17:43:41.439272 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85f07766-38ac-48a4-9ed2-e87e5cc56093-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"85f07766-38ac-48a4-9ed2-e87e5cc56093\") " pod="openstack/nova-scheduler-0" Feb 27 17:43:41 crc kubenswrapper[4830]: I0227 17:43:41.439413 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7x4s7\" (UniqueName: \"kubernetes.io/projected/85f07766-38ac-48a4-9ed2-e87e5cc56093-kube-api-access-7x4s7\") pod \"nova-scheduler-0\" (UID: \"85f07766-38ac-48a4-9ed2-e87e5cc56093\") " pod="openstack/nova-scheduler-0" Feb 27 17:43:41 crc kubenswrapper[4830]: I0227 17:43:41.439457 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85f07766-38ac-48a4-9ed2-e87e5cc56093-config-data\") pod \"nova-scheduler-0\" (UID: \"85f07766-38ac-48a4-9ed2-e87e5cc56093\") " pod="openstack/nova-scheduler-0" Feb 27 17:43:41 crc kubenswrapper[4830]: I0227 17:43:41.540746 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85f07766-38ac-48a4-9ed2-e87e5cc56093-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"85f07766-38ac-48a4-9ed2-e87e5cc56093\") " pod="openstack/nova-scheduler-0" Feb 
27 17:43:41 crc kubenswrapper[4830]: I0227 17:43:41.540845 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7x4s7\" (UniqueName: \"kubernetes.io/projected/85f07766-38ac-48a4-9ed2-e87e5cc56093-kube-api-access-7x4s7\") pod \"nova-scheduler-0\" (UID: \"85f07766-38ac-48a4-9ed2-e87e5cc56093\") " pod="openstack/nova-scheduler-0" Feb 27 17:43:41 crc kubenswrapper[4830]: I0227 17:43:41.540865 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85f07766-38ac-48a4-9ed2-e87e5cc56093-config-data\") pod \"nova-scheduler-0\" (UID: \"85f07766-38ac-48a4-9ed2-e87e5cc56093\") " pod="openstack/nova-scheduler-0" Feb 27 17:43:41 crc kubenswrapper[4830]: I0227 17:43:41.546102 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85f07766-38ac-48a4-9ed2-e87e5cc56093-config-data\") pod \"nova-scheduler-0\" (UID: \"85f07766-38ac-48a4-9ed2-e87e5cc56093\") " pod="openstack/nova-scheduler-0" Feb 27 17:43:41 crc kubenswrapper[4830]: I0227 17:43:41.547585 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85f07766-38ac-48a4-9ed2-e87e5cc56093-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"85f07766-38ac-48a4-9ed2-e87e5cc56093\") " pod="openstack/nova-scheduler-0" Feb 27 17:43:41 crc kubenswrapper[4830]: I0227 17:43:41.552756 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 27 17:43:41 crc kubenswrapper[4830]: I0227 17:43:41.567256 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 27 17:43:41 crc kubenswrapper[4830]: I0227 17:43:41.571886 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7x4s7\" (UniqueName: 
\"kubernetes.io/projected/85f07766-38ac-48a4-9ed2-e87e5cc56093-kube-api-access-7x4s7\") pod \"nova-scheduler-0\" (UID: \"85f07766-38ac-48a4-9ed2-e87e5cc56093\") " pod="openstack/nova-scheduler-0" Feb 27 17:43:41 crc kubenswrapper[4830]: I0227 17:43:41.584633 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 27 17:43:41 crc kubenswrapper[4830]: I0227 17:43:41.587320 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 27 17:43:41 crc kubenswrapper[4830]: I0227 17:43:41.589936 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 27 17:43:41 crc kubenswrapper[4830]: I0227 17:43:41.593167 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 27 17:43:41 crc kubenswrapper[4830]: I0227 17:43:41.642144 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/deab7dc6-3048-4721-8688-57ecae22876e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"deab7dc6-3048-4721-8688-57ecae22876e\") " pod="openstack/nova-metadata-0" Feb 27 17:43:41 crc kubenswrapper[4830]: I0227 17:43:41.642187 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/deab7dc6-3048-4721-8688-57ecae22876e-config-data\") pod \"nova-metadata-0\" (UID: \"deab7dc6-3048-4721-8688-57ecae22876e\") " pod="openstack/nova-metadata-0" Feb 27 17:43:41 crc kubenswrapper[4830]: I0227 17:43:41.642238 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/deab7dc6-3048-4721-8688-57ecae22876e-logs\") pod \"nova-metadata-0\" (UID: \"deab7dc6-3048-4721-8688-57ecae22876e\") " pod="openstack/nova-metadata-0" Feb 27 17:43:41 crc kubenswrapper[4830]: 
I0227 17:43:41.642279 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fsmt\" (UniqueName: \"kubernetes.io/projected/deab7dc6-3048-4721-8688-57ecae22876e-kube-api-access-5fsmt\") pod \"nova-metadata-0\" (UID: \"deab7dc6-3048-4721-8688-57ecae22876e\") " pod="openstack/nova-metadata-0" Feb 27 17:43:41 crc kubenswrapper[4830]: I0227 17:43:41.666096 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 27 17:43:41 crc kubenswrapper[4830]: I0227 17:43:41.744449 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5fsmt\" (UniqueName: \"kubernetes.io/projected/deab7dc6-3048-4721-8688-57ecae22876e-kube-api-access-5fsmt\") pod \"nova-metadata-0\" (UID: \"deab7dc6-3048-4721-8688-57ecae22876e\") " pod="openstack/nova-metadata-0" Feb 27 17:43:41 crc kubenswrapper[4830]: I0227 17:43:41.744580 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/deab7dc6-3048-4721-8688-57ecae22876e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"deab7dc6-3048-4721-8688-57ecae22876e\") " pod="openstack/nova-metadata-0" Feb 27 17:43:41 crc kubenswrapper[4830]: I0227 17:43:41.744600 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/deab7dc6-3048-4721-8688-57ecae22876e-config-data\") pod \"nova-metadata-0\" (UID: \"deab7dc6-3048-4721-8688-57ecae22876e\") " pod="openstack/nova-metadata-0" Feb 27 17:43:41 crc kubenswrapper[4830]: I0227 17:43:41.744644 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/deab7dc6-3048-4721-8688-57ecae22876e-logs\") pod \"nova-metadata-0\" (UID: \"deab7dc6-3048-4721-8688-57ecae22876e\") " pod="openstack/nova-metadata-0" Feb 27 17:43:41 crc 
kubenswrapper[4830]: I0227 17:43:41.745081 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/deab7dc6-3048-4721-8688-57ecae22876e-logs\") pod \"nova-metadata-0\" (UID: \"deab7dc6-3048-4721-8688-57ecae22876e\") " pod="openstack/nova-metadata-0" Feb 27 17:43:41 crc kubenswrapper[4830]: I0227 17:43:41.749076 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/deab7dc6-3048-4721-8688-57ecae22876e-config-data\") pod \"nova-metadata-0\" (UID: \"deab7dc6-3048-4721-8688-57ecae22876e\") " pod="openstack/nova-metadata-0" Feb 27 17:43:41 crc kubenswrapper[4830]: I0227 17:43:41.749501 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/deab7dc6-3048-4721-8688-57ecae22876e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"deab7dc6-3048-4721-8688-57ecae22876e\") " pod="openstack/nova-metadata-0" Feb 27 17:43:41 crc kubenswrapper[4830]: I0227 17:43:41.765906 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5fsmt\" (UniqueName: \"kubernetes.io/projected/deab7dc6-3048-4721-8688-57ecae22876e-kube-api-access-5fsmt\") pod \"nova-metadata-0\" (UID: \"deab7dc6-3048-4721-8688-57ecae22876e\") " pod="openstack/nova-metadata-0" Feb 27 17:43:41 crc kubenswrapper[4830]: I0227 17:43:41.954186 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 27 17:43:42 crc kubenswrapper[4830]: I0227 17:43:42.127720 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 27 17:43:42 crc kubenswrapper[4830]: I0227 17:43:42.210743 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 27 17:43:42 crc kubenswrapper[4830]: I0227 17:43:42.224681 4830 generic.go:334] "Generic (PLEG): container finished" podID="8aec15ae-952f-4209-9b96-bd90f7e16b44" containerID="e85f0ac8519365b549ae950038dae5a8ae1cae18a01629940514cbfdc49431bb" exitCode=0 Feb 27 17:43:42 crc kubenswrapper[4830]: I0227 17:43:42.224750 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8aec15ae-952f-4209-9b96-bd90f7e16b44","Type":"ContainerDied","Data":"e85f0ac8519365b549ae950038dae5a8ae1cae18a01629940514cbfdc49431bb"} Feb 27 17:43:42 crc kubenswrapper[4830]: I0227 17:43:42.224775 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8aec15ae-952f-4209-9b96-bd90f7e16b44","Type":"ContainerDied","Data":"b24ad9c9f2b9e15e802dc7a141cec604dfd74394fc0939cc8b3d74f8714c15e0"} Feb 27 17:43:42 crc kubenswrapper[4830]: I0227 17:43:42.224792 4830 scope.go:117] "RemoveContainer" containerID="e85f0ac8519365b549ae950038dae5a8ae1cae18a01629940514cbfdc49431bb" Feb 27 17:43:42 crc kubenswrapper[4830]: I0227 17:43:42.225065 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 27 17:43:42 crc kubenswrapper[4830]: I0227 17:43:42.254026 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8aec15ae-952f-4209-9b96-bd90f7e16b44-config-data\") pod \"8aec15ae-952f-4209-9b96-bd90f7e16b44\" (UID: \"8aec15ae-952f-4209-9b96-bd90f7e16b44\") " Feb 27 17:43:42 crc kubenswrapper[4830]: I0227 17:43:42.254217 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8hsjj\" (UniqueName: \"kubernetes.io/projected/8aec15ae-952f-4209-9b96-bd90f7e16b44-kube-api-access-8hsjj\") pod \"8aec15ae-952f-4209-9b96-bd90f7e16b44\" (UID: \"8aec15ae-952f-4209-9b96-bd90f7e16b44\") " Feb 27 17:43:42 crc kubenswrapper[4830]: I0227 17:43:42.254241 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8aec15ae-952f-4209-9b96-bd90f7e16b44-combined-ca-bundle\") pod \"8aec15ae-952f-4209-9b96-bd90f7e16b44\" (UID: \"8aec15ae-952f-4209-9b96-bd90f7e16b44\") " Feb 27 17:43:42 crc kubenswrapper[4830]: I0227 17:43:42.254293 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8aec15ae-952f-4209-9b96-bd90f7e16b44-logs\") pod \"8aec15ae-952f-4209-9b96-bd90f7e16b44\" (UID: \"8aec15ae-952f-4209-9b96-bd90f7e16b44\") " Feb 27 17:43:42 crc kubenswrapper[4830]: I0227 17:43:42.254922 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8aec15ae-952f-4209-9b96-bd90f7e16b44-logs" (OuterVolumeSpecName: "logs") pod "8aec15ae-952f-4209-9b96-bd90f7e16b44" (UID: "8aec15ae-952f-4209-9b96-bd90f7e16b44"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 17:43:42 crc kubenswrapper[4830]: I0227 17:43:42.255567 4830 scope.go:117] "RemoveContainer" containerID="117522c3b30cba765b6dd74111347985bbd0a2d347e8f01d93bb80eccd696e1c" Feb 27 17:43:42 crc kubenswrapper[4830]: I0227 17:43:42.257726 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8aec15ae-952f-4209-9b96-bd90f7e16b44-kube-api-access-8hsjj" (OuterVolumeSpecName: "kube-api-access-8hsjj") pod "8aec15ae-952f-4209-9b96-bd90f7e16b44" (UID: "8aec15ae-952f-4209-9b96-bd90f7e16b44"). InnerVolumeSpecName "kube-api-access-8hsjj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:43:42 crc kubenswrapper[4830]: I0227 17:43:42.279912 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8aec15ae-952f-4209-9b96-bd90f7e16b44-config-data" (OuterVolumeSpecName: "config-data") pod "8aec15ae-952f-4209-9b96-bd90f7e16b44" (UID: "8aec15ae-952f-4209-9b96-bd90f7e16b44"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:43:42 crc kubenswrapper[4830]: I0227 17:43:42.280741 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8aec15ae-952f-4209-9b96-bd90f7e16b44-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8aec15ae-952f-4209-9b96-bd90f7e16b44" (UID: "8aec15ae-952f-4209-9b96-bd90f7e16b44"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:43:42 crc kubenswrapper[4830]: I0227 17:43:42.357631 4830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8aec15ae-952f-4209-9b96-bd90f7e16b44-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 17:43:42 crc kubenswrapper[4830]: I0227 17:43:42.357741 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8hsjj\" (UniqueName: \"kubernetes.io/projected/8aec15ae-952f-4209-9b96-bd90f7e16b44-kube-api-access-8hsjj\") on node \"crc\" DevicePath \"\"" Feb 27 17:43:42 crc kubenswrapper[4830]: I0227 17:43:42.357801 4830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8aec15ae-952f-4209-9b96-bd90f7e16b44-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 17:43:42 crc kubenswrapper[4830]: I0227 17:43:42.357822 4830 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8aec15ae-952f-4209-9b96-bd90f7e16b44-logs\") on node \"crc\" DevicePath \"\"" Feb 27 17:43:42 crc kubenswrapper[4830]: I0227 17:43:42.380874 4830 scope.go:117] "RemoveContainer" containerID="e85f0ac8519365b549ae950038dae5a8ae1cae18a01629940514cbfdc49431bb" Feb 27 17:43:42 crc kubenswrapper[4830]: E0227 17:43:42.381387 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e85f0ac8519365b549ae950038dae5a8ae1cae18a01629940514cbfdc49431bb\": container with ID starting with e85f0ac8519365b549ae950038dae5a8ae1cae18a01629940514cbfdc49431bb not found: ID does not exist" containerID="e85f0ac8519365b549ae950038dae5a8ae1cae18a01629940514cbfdc49431bb" Feb 27 17:43:42 crc kubenswrapper[4830]: I0227 17:43:42.381433 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e85f0ac8519365b549ae950038dae5a8ae1cae18a01629940514cbfdc49431bb"} 
err="failed to get container status \"e85f0ac8519365b549ae950038dae5a8ae1cae18a01629940514cbfdc49431bb\": rpc error: code = NotFound desc = could not find container \"e85f0ac8519365b549ae950038dae5a8ae1cae18a01629940514cbfdc49431bb\": container with ID starting with e85f0ac8519365b549ae950038dae5a8ae1cae18a01629940514cbfdc49431bb not found: ID does not exist" Feb 27 17:43:42 crc kubenswrapper[4830]: I0227 17:43:42.381454 4830 scope.go:117] "RemoveContainer" containerID="117522c3b30cba765b6dd74111347985bbd0a2d347e8f01d93bb80eccd696e1c" Feb 27 17:43:42 crc kubenswrapper[4830]: E0227 17:43:42.381784 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"117522c3b30cba765b6dd74111347985bbd0a2d347e8f01d93bb80eccd696e1c\": container with ID starting with 117522c3b30cba765b6dd74111347985bbd0a2d347e8f01d93bb80eccd696e1c not found: ID does not exist" containerID="117522c3b30cba765b6dd74111347985bbd0a2d347e8f01d93bb80eccd696e1c" Feb 27 17:43:42 crc kubenswrapper[4830]: I0227 17:43:42.381830 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"117522c3b30cba765b6dd74111347985bbd0a2d347e8f01d93bb80eccd696e1c"} err="failed to get container status \"117522c3b30cba765b6dd74111347985bbd0a2d347e8f01d93bb80eccd696e1c\": rpc error: code = NotFound desc = could not find container \"117522c3b30cba765b6dd74111347985bbd0a2d347e8f01d93bb80eccd696e1c\": container with ID starting with 117522c3b30cba765b6dd74111347985bbd0a2d347e8f01d93bb80eccd696e1c not found: ID does not exist" Feb 27 17:43:42 crc kubenswrapper[4830]: W0227 17:43:42.459664 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddeab7dc6_3048_4721_8688_57ecae22876e.slice/crio-d56b6fda52a516f080cbeab8f7ea6c02fab876e89cd44f0ba5ec3935d574f240 WatchSource:0}: Error finding container 
d56b6fda52a516f080cbeab8f7ea6c02fab876e89cd44f0ba5ec3935d574f240: Status 404 returned error can't find the container with id d56b6fda52a516f080cbeab8f7ea6c02fab876e89cd44f0ba5ec3935d574f240 Feb 27 17:43:42 crc kubenswrapper[4830]: I0227 17:43:42.463361 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 27 17:43:42 crc kubenswrapper[4830]: I0227 17:43:42.596354 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 27 17:43:42 crc kubenswrapper[4830]: I0227 17:43:42.615334 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 27 17:43:42 crc kubenswrapper[4830]: I0227 17:43:42.631311 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 27 17:43:42 crc kubenswrapper[4830]: E0227 17:43:42.632012 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8aec15ae-952f-4209-9b96-bd90f7e16b44" containerName="nova-api-log" Feb 27 17:43:42 crc kubenswrapper[4830]: I0227 17:43:42.632043 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="8aec15ae-952f-4209-9b96-bd90f7e16b44" containerName="nova-api-log" Feb 27 17:43:42 crc kubenswrapper[4830]: E0227 17:43:42.632084 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8aec15ae-952f-4209-9b96-bd90f7e16b44" containerName="nova-api-api" Feb 27 17:43:42 crc kubenswrapper[4830]: I0227 17:43:42.632091 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="8aec15ae-952f-4209-9b96-bd90f7e16b44" containerName="nova-api-api" Feb 27 17:43:42 crc kubenswrapper[4830]: I0227 17:43:42.633268 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="8aec15ae-952f-4209-9b96-bd90f7e16b44" containerName="nova-api-api" Feb 27 17:43:42 crc kubenswrapper[4830]: I0227 17:43:42.633313 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="8aec15ae-952f-4209-9b96-bd90f7e16b44" containerName="nova-api-log" Feb 27 17:43:42 crc kubenswrapper[4830]: 
I0227 17:43:42.635581 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 27 17:43:42 crc kubenswrapper[4830]: I0227 17:43:42.643848 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 27 17:43:42 crc kubenswrapper[4830]: I0227 17:43:42.647899 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 27 17:43:42 crc kubenswrapper[4830]: I0227 17:43:42.764690 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/856c8d55-5c9d-4655-8752-63a97ecb38d2-logs\") pod \"nova-api-0\" (UID: \"856c8d55-5c9d-4655-8752-63a97ecb38d2\") " pod="openstack/nova-api-0" Feb 27 17:43:42 crc kubenswrapper[4830]: I0227 17:43:42.764736 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbkqs\" (UniqueName: \"kubernetes.io/projected/856c8d55-5c9d-4655-8752-63a97ecb38d2-kube-api-access-rbkqs\") pod \"nova-api-0\" (UID: \"856c8d55-5c9d-4655-8752-63a97ecb38d2\") " pod="openstack/nova-api-0" Feb 27 17:43:42 crc kubenswrapper[4830]: I0227 17:43:42.764783 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/856c8d55-5c9d-4655-8752-63a97ecb38d2-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"856c8d55-5c9d-4655-8752-63a97ecb38d2\") " pod="openstack/nova-api-0" Feb 27 17:43:42 crc kubenswrapper[4830]: I0227 17:43:42.764807 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/856c8d55-5c9d-4655-8752-63a97ecb38d2-config-data\") pod \"nova-api-0\" (UID: \"856c8d55-5c9d-4655-8752-63a97ecb38d2\") " pod="openstack/nova-api-0" Feb 27 17:43:42 crc kubenswrapper[4830]: I0227 17:43:42.774771 4830 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="610822bd-ca63-45d2-9a7e-c9dd6a5068e9" path="/var/lib/kubelet/pods/610822bd-ca63-45d2-9a7e-c9dd6a5068e9/volumes" Feb 27 17:43:42 crc kubenswrapper[4830]: I0227 17:43:42.775358 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="78014205-9653-44a4-a659-0deefc09785c" path="/var/lib/kubelet/pods/78014205-9653-44a4-a659-0deefc09785c/volumes" Feb 27 17:43:42 crc kubenswrapper[4830]: I0227 17:43:42.775887 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8aec15ae-952f-4209-9b96-bd90f7e16b44" path="/var/lib/kubelet/pods/8aec15ae-952f-4209-9b96-bd90f7e16b44/volumes" Feb 27 17:43:42 crc kubenswrapper[4830]: I0227 17:43:42.866643 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/856c8d55-5c9d-4655-8752-63a97ecb38d2-logs\") pod \"nova-api-0\" (UID: \"856c8d55-5c9d-4655-8752-63a97ecb38d2\") " pod="openstack/nova-api-0" Feb 27 17:43:42 crc kubenswrapper[4830]: I0227 17:43:42.866722 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rbkqs\" (UniqueName: \"kubernetes.io/projected/856c8d55-5c9d-4655-8752-63a97ecb38d2-kube-api-access-rbkqs\") pod \"nova-api-0\" (UID: \"856c8d55-5c9d-4655-8752-63a97ecb38d2\") " pod="openstack/nova-api-0" Feb 27 17:43:42 crc kubenswrapper[4830]: I0227 17:43:42.866781 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/856c8d55-5c9d-4655-8752-63a97ecb38d2-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"856c8d55-5c9d-4655-8752-63a97ecb38d2\") " pod="openstack/nova-api-0" Feb 27 17:43:42 crc kubenswrapper[4830]: I0227 17:43:42.866840 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/856c8d55-5c9d-4655-8752-63a97ecb38d2-config-data\") pod 
\"nova-api-0\" (UID: \"856c8d55-5c9d-4655-8752-63a97ecb38d2\") " pod="openstack/nova-api-0" Feb 27 17:43:42 crc kubenswrapper[4830]: I0227 17:43:42.867492 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/856c8d55-5c9d-4655-8752-63a97ecb38d2-logs\") pod \"nova-api-0\" (UID: \"856c8d55-5c9d-4655-8752-63a97ecb38d2\") " pod="openstack/nova-api-0" Feb 27 17:43:42 crc kubenswrapper[4830]: I0227 17:43:42.878869 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/856c8d55-5c9d-4655-8752-63a97ecb38d2-config-data\") pod \"nova-api-0\" (UID: \"856c8d55-5c9d-4655-8752-63a97ecb38d2\") " pod="openstack/nova-api-0" Feb 27 17:43:42 crc kubenswrapper[4830]: I0227 17:43:42.879559 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/856c8d55-5c9d-4655-8752-63a97ecb38d2-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"856c8d55-5c9d-4655-8752-63a97ecb38d2\") " pod="openstack/nova-api-0" Feb 27 17:43:42 crc kubenswrapper[4830]: I0227 17:43:42.881850 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbkqs\" (UniqueName: \"kubernetes.io/projected/856c8d55-5c9d-4655-8752-63a97ecb38d2-kube-api-access-rbkqs\") pod \"nova-api-0\" (UID: \"856c8d55-5c9d-4655-8752-63a97ecb38d2\") " pod="openstack/nova-api-0" Feb 27 17:43:42 crc kubenswrapper[4830]: I0227 17:43:42.960527 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 27 17:43:43 crc kubenswrapper[4830]: I0227 17:43:43.243869 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"85f07766-38ac-48a4-9ed2-e87e5cc56093","Type":"ContainerStarted","Data":"2f6e826b7f5e36c9cf4e24d7c630f10292d1ee84e4660aa4c0c310faa3b95a9e"} Feb 27 17:43:43 crc kubenswrapper[4830]: I0227 17:43:43.244265 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"85f07766-38ac-48a4-9ed2-e87e5cc56093","Type":"ContainerStarted","Data":"255a6ca8d6867f0abaf5efd5f196f2404da4df6ec674ced29277c83845012e1d"} Feb 27 17:43:43 crc kubenswrapper[4830]: I0227 17:43:43.270482 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"deab7dc6-3048-4721-8688-57ecae22876e","Type":"ContainerStarted","Data":"52db8ac141e0a1f002135cef6cba38b21657eeb4d0a72947ec49cbfed201a4c0"} Feb 27 17:43:43 crc kubenswrapper[4830]: I0227 17:43:43.270550 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"deab7dc6-3048-4721-8688-57ecae22876e","Type":"ContainerStarted","Data":"90775655a9f3edb46d15ed8b3765fda6c459f5737b5a9827205a5a1853341610"} Feb 27 17:43:43 crc kubenswrapper[4830]: I0227 17:43:43.270570 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"deab7dc6-3048-4721-8688-57ecae22876e","Type":"ContainerStarted","Data":"d56b6fda52a516f080cbeab8f7ea6c02fab876e89cd44f0ba5ec3935d574f240"} Feb 27 17:43:43 crc kubenswrapper[4830]: I0227 17:43:43.291874 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.29184548 podStartE2EDuration="2.29184548s" podCreationTimestamp="2026-02-27 17:43:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 
17:43:43.266188923 +0000 UTC m=+5819.355461426" watchObservedRunningTime="2026-02-27 17:43:43.29184548 +0000 UTC m=+5819.381117953" Feb 27 17:43:43 crc kubenswrapper[4830]: I0227 17:43:43.306153 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.306129694 podStartE2EDuration="2.306129694s" podCreationTimestamp="2026-02-27 17:43:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 17:43:43.291101533 +0000 UTC m=+5819.380373996" watchObservedRunningTime="2026-02-27 17:43:43.306129694 +0000 UTC m=+5819.395402167" Feb 27 17:43:43 crc kubenswrapper[4830]: I0227 17:43:43.419838 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 27 17:43:44 crc kubenswrapper[4830]: I0227 17:43:44.288724 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"856c8d55-5c9d-4655-8752-63a97ecb38d2","Type":"ContainerStarted","Data":"27bac96151e1f53148780e96ad22610c28076ac65b866eb9d04b3a7b408b655a"} Feb 27 17:43:44 crc kubenswrapper[4830]: I0227 17:43:44.288768 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"856c8d55-5c9d-4655-8752-63a97ecb38d2","Type":"ContainerStarted","Data":"a46c39a2dbec52dbace3d2bef0430644077cc0412e3682b6c23d5006347dc9c8"} Feb 27 17:43:44 crc kubenswrapper[4830]: I0227 17:43:44.288780 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"856c8d55-5c9d-4655-8752-63a97ecb38d2","Type":"ContainerStarted","Data":"81c2b555d55c514bb3708647e6200fba14071685734af2c93a14068002ae5e2f"} Feb 27 17:43:44 crc kubenswrapper[4830]: I0227 17:43:44.328443 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.328413422 podStartE2EDuration="2.328413422s" podCreationTimestamp="2026-02-27 17:43:42 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 17:43:44.305860749 +0000 UTC m=+5820.395133222" watchObservedRunningTime="2026-02-27 17:43:44.328413422 +0000 UTC m=+5820.417685925" Feb 27 17:43:44 crc kubenswrapper[4830]: I0227 17:43:44.774528 4830 scope.go:117] "RemoveContainer" containerID="0f73af46b765b3d98f3f5e9883b49887ac4f2ac3485e91a001e18c5d351646d8" Feb 27 17:43:44 crc kubenswrapper[4830]: E0227 17:43:44.775060 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 17:43:46 crc kubenswrapper[4830]: I0227 17:43:46.666269 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 27 17:43:46 crc kubenswrapper[4830]: I0227 17:43:46.955220 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 27 17:43:46 crc kubenswrapper[4830]: I0227 17:43:46.955359 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 27 17:43:47 crc kubenswrapper[4830]: E0227 17:43:47.767806 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536898-vrwjs" podUID="204eb1af-36ad-4de7-9da7-9a37fefd3026" Feb 27 17:43:49 crc kubenswrapper[4830]: I0227 17:43:49.576559 4830 scope.go:117] "RemoveContainer" containerID="cfe791d8da96314f66528f242569ccaaaff78cc5620fb47a107fcdd4d3f4e74e" Feb 27 
17:43:51 crc kubenswrapper[4830]: I0227 17:43:51.666780 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 27 17:43:51 crc kubenswrapper[4830]: I0227 17:43:51.713102 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 27 17:43:51 crc kubenswrapper[4830]: I0227 17:43:51.955211 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 27 17:43:51 crc kubenswrapper[4830]: I0227 17:43:51.955311 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 27 17:43:52 crc kubenswrapper[4830]: I0227 17:43:52.404585 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 27 17:43:52 crc kubenswrapper[4830]: I0227 17:43:52.961183 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 27 17:43:52 crc kubenswrapper[4830]: I0227 17:43:52.961222 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 27 17:43:53 crc kubenswrapper[4830]: I0227 17:43:53.044228 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="deab7dc6-3048-4721-8688-57ecae22876e" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"http://10.217.1.112:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 27 17:43:53 crc kubenswrapper[4830]: I0227 17:43:53.044346 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="deab7dc6-3048-4721-8688-57ecae22876e" containerName="nova-metadata-log" probeResult="failure" output="Get \"http://10.217.1.112:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 27 17:43:54 crc kubenswrapper[4830]: I0227 17:43:54.043330 4830 
prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="856c8d55-5c9d-4655-8752-63a97ecb38d2" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.113:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 27 17:43:54 crc kubenswrapper[4830]: I0227 17:43:54.043987 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="856c8d55-5c9d-4655-8752-63a97ecb38d2" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.113:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 27 17:43:55 crc kubenswrapper[4830]: I0227 17:43:55.762746 4830 scope.go:117] "RemoveContainer" containerID="0f73af46b765b3d98f3f5e9883b49887ac4f2ac3485e91a001e18c5d351646d8" Feb 27 17:43:55 crc kubenswrapper[4830]: E0227 17:43:55.763457 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 17:43:58 crc kubenswrapper[4830]: E0227 17:43:58.767342 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536898-vrwjs" podUID="204eb1af-36ad-4de7-9da7-9a37fefd3026" Feb 27 17:44:00 crc kubenswrapper[4830]: I0227 17:44:00.195023 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536904-jrdqt"] Feb 27 17:44:00 crc kubenswrapper[4830]: I0227 17:44:00.196208 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536904-jrdqt" Feb 27 17:44:00 crc kubenswrapper[4830]: I0227 17:44:00.225565 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536904-jrdqt"] Feb 27 17:44:00 crc kubenswrapper[4830]: I0227 17:44:00.360804 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmw5q\" (UniqueName: \"kubernetes.io/projected/77856f9c-1131-4857-9fff-bddf1d27b5d3-kube-api-access-xmw5q\") pod \"auto-csr-approver-29536904-jrdqt\" (UID: \"77856f9c-1131-4857-9fff-bddf1d27b5d3\") " pod="openshift-infra/auto-csr-approver-29536904-jrdqt" Feb 27 17:44:00 crc kubenswrapper[4830]: I0227 17:44:00.462463 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xmw5q\" (UniqueName: \"kubernetes.io/projected/77856f9c-1131-4857-9fff-bddf1d27b5d3-kube-api-access-xmw5q\") pod \"auto-csr-approver-29536904-jrdqt\" (UID: \"77856f9c-1131-4857-9fff-bddf1d27b5d3\") " pod="openshift-infra/auto-csr-approver-29536904-jrdqt" Feb 27 17:44:00 crc kubenswrapper[4830]: I0227 17:44:00.487654 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xmw5q\" (UniqueName: \"kubernetes.io/projected/77856f9c-1131-4857-9fff-bddf1d27b5d3-kube-api-access-xmw5q\") pod \"auto-csr-approver-29536904-jrdqt\" (UID: \"77856f9c-1131-4857-9fff-bddf1d27b5d3\") " pod="openshift-infra/auto-csr-approver-29536904-jrdqt" Feb 27 17:44:00 crc kubenswrapper[4830]: I0227 17:44:00.513903 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536904-jrdqt" Feb 27 17:44:01 crc kubenswrapper[4830]: I0227 17:44:01.005812 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536904-jrdqt"] Feb 27 17:44:01 crc kubenswrapper[4830]: I0227 17:44:01.017920 4830 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 27 17:44:01 crc kubenswrapper[4830]: I0227 17:44:01.480043 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536904-jrdqt" event={"ID":"77856f9c-1131-4857-9fff-bddf1d27b5d3","Type":"ContainerStarted","Data":"96b0ff06422bf3129940869ee45a349df7a0efb3de3e23c0a9e84d06d774e5da"} Feb 27 17:44:01 crc kubenswrapper[4830]: I0227 17:44:01.958241 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 27 17:44:01 crc kubenswrapper[4830]: I0227 17:44:01.958849 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 27 17:44:01 crc kubenswrapper[4830]: I0227 17:44:01.962388 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 27 17:44:01 crc kubenswrapper[4830]: I0227 17:44:01.963517 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 27 17:44:02 crc kubenswrapper[4830]: E0227 17:44:02.001661 4830 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 27 17:44:02 crc kubenswrapper[4830]: E0227 17:44:02.001851 4830 kuberuntime_manager.go:1274] 
"Unhandled Error" err=< Feb 27 17:44:02 crc kubenswrapper[4830]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 27 17:44:02 crc kubenswrapper[4830]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xmw5q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29536904-jrdqt_openshift-infra(77856f9c-1131-4857-9fff-bddf1d27b5d3): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 27 17:44:02 crc kubenswrapper[4830]: > logger="UnhandledError" Feb 27 17:44:02 crc kubenswrapper[4830]: E0227 17:44:02.003042 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" 
pod="openshift-infra/auto-csr-approver-29536904-jrdqt" podUID="77856f9c-1131-4857-9fff-bddf1d27b5d3" Feb 27 17:44:02 crc kubenswrapper[4830]: E0227 17:44:02.494488 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536904-jrdqt" podUID="77856f9c-1131-4857-9fff-bddf1d27b5d3" Feb 27 17:44:02 crc kubenswrapper[4830]: I0227 17:44:02.966042 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 27 17:44:02 crc kubenswrapper[4830]: I0227 17:44:02.966217 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 27 17:44:02 crc kubenswrapper[4830]: I0227 17:44:02.966755 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 27 17:44:02 crc kubenswrapper[4830]: I0227 17:44:02.966808 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 27 17:44:02 crc kubenswrapper[4830]: I0227 17:44:02.970607 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 27 17:44:02 crc kubenswrapper[4830]: I0227 17:44:02.973855 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 27 17:44:03 crc kubenswrapper[4830]: I0227 17:44:03.233121 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5cf7c86fb5-5wfg7"] Feb 27 17:44:03 crc kubenswrapper[4830]: I0227 17:44:03.234661 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5cf7c86fb5-5wfg7" Feb 27 17:44:03 crc kubenswrapper[4830]: I0227 17:44:03.247438 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5cf7c86fb5-5wfg7"] Feb 27 17:44:03 crc kubenswrapper[4830]: I0227 17:44:03.328892 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b376941-61ec-4cfc-9ced-db78152e29f0-config\") pod \"dnsmasq-dns-5cf7c86fb5-5wfg7\" (UID: \"1b376941-61ec-4cfc-9ced-db78152e29f0\") " pod="openstack/dnsmasq-dns-5cf7c86fb5-5wfg7" Feb 27 17:44:03 crc kubenswrapper[4830]: I0227 17:44:03.329043 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1b376941-61ec-4cfc-9ced-db78152e29f0-ovsdbserver-sb\") pod \"dnsmasq-dns-5cf7c86fb5-5wfg7\" (UID: \"1b376941-61ec-4cfc-9ced-db78152e29f0\") " pod="openstack/dnsmasq-dns-5cf7c86fb5-5wfg7" Feb 27 17:44:03 crc kubenswrapper[4830]: I0227 17:44:03.329085 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wbwp\" (UniqueName: \"kubernetes.io/projected/1b376941-61ec-4cfc-9ced-db78152e29f0-kube-api-access-7wbwp\") pod \"dnsmasq-dns-5cf7c86fb5-5wfg7\" (UID: \"1b376941-61ec-4cfc-9ced-db78152e29f0\") " pod="openstack/dnsmasq-dns-5cf7c86fb5-5wfg7" Feb 27 17:44:03 crc kubenswrapper[4830]: I0227 17:44:03.329104 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1b376941-61ec-4cfc-9ced-db78152e29f0-ovsdbserver-nb\") pod \"dnsmasq-dns-5cf7c86fb5-5wfg7\" (UID: \"1b376941-61ec-4cfc-9ced-db78152e29f0\") " pod="openstack/dnsmasq-dns-5cf7c86fb5-5wfg7" Feb 27 17:44:03 crc kubenswrapper[4830]: I0227 17:44:03.329489 4830 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1b376941-61ec-4cfc-9ced-db78152e29f0-dns-svc\") pod \"dnsmasq-dns-5cf7c86fb5-5wfg7\" (UID: \"1b376941-61ec-4cfc-9ced-db78152e29f0\") " pod="openstack/dnsmasq-dns-5cf7c86fb5-5wfg7" Feb 27 17:44:03 crc kubenswrapper[4830]: I0227 17:44:03.431352 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1b376941-61ec-4cfc-9ced-db78152e29f0-ovsdbserver-sb\") pod \"dnsmasq-dns-5cf7c86fb5-5wfg7\" (UID: \"1b376941-61ec-4cfc-9ced-db78152e29f0\") " pod="openstack/dnsmasq-dns-5cf7c86fb5-5wfg7" Feb 27 17:44:03 crc kubenswrapper[4830]: I0227 17:44:03.431406 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7wbwp\" (UniqueName: \"kubernetes.io/projected/1b376941-61ec-4cfc-9ced-db78152e29f0-kube-api-access-7wbwp\") pod \"dnsmasq-dns-5cf7c86fb5-5wfg7\" (UID: \"1b376941-61ec-4cfc-9ced-db78152e29f0\") " pod="openstack/dnsmasq-dns-5cf7c86fb5-5wfg7" Feb 27 17:44:03 crc kubenswrapper[4830]: I0227 17:44:03.431427 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1b376941-61ec-4cfc-9ced-db78152e29f0-ovsdbserver-nb\") pod \"dnsmasq-dns-5cf7c86fb5-5wfg7\" (UID: \"1b376941-61ec-4cfc-9ced-db78152e29f0\") " pod="openstack/dnsmasq-dns-5cf7c86fb5-5wfg7" Feb 27 17:44:03 crc kubenswrapper[4830]: I0227 17:44:03.431539 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1b376941-61ec-4cfc-9ced-db78152e29f0-dns-svc\") pod \"dnsmasq-dns-5cf7c86fb5-5wfg7\" (UID: \"1b376941-61ec-4cfc-9ced-db78152e29f0\") " pod="openstack/dnsmasq-dns-5cf7c86fb5-5wfg7" Feb 27 17:44:03 crc kubenswrapper[4830]: I0227 17:44:03.431563 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/1b376941-61ec-4cfc-9ced-db78152e29f0-config\") pod \"dnsmasq-dns-5cf7c86fb5-5wfg7\" (UID: \"1b376941-61ec-4cfc-9ced-db78152e29f0\") " pod="openstack/dnsmasq-dns-5cf7c86fb5-5wfg7" Feb 27 17:44:03 crc kubenswrapper[4830]: I0227 17:44:03.432509 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b376941-61ec-4cfc-9ced-db78152e29f0-config\") pod \"dnsmasq-dns-5cf7c86fb5-5wfg7\" (UID: \"1b376941-61ec-4cfc-9ced-db78152e29f0\") " pod="openstack/dnsmasq-dns-5cf7c86fb5-5wfg7" Feb 27 17:44:03 crc kubenswrapper[4830]: I0227 17:44:03.432583 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1b376941-61ec-4cfc-9ced-db78152e29f0-ovsdbserver-nb\") pod \"dnsmasq-dns-5cf7c86fb5-5wfg7\" (UID: \"1b376941-61ec-4cfc-9ced-db78152e29f0\") " pod="openstack/dnsmasq-dns-5cf7c86fb5-5wfg7" Feb 27 17:44:03 crc kubenswrapper[4830]: I0227 17:44:03.432632 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1b376941-61ec-4cfc-9ced-db78152e29f0-ovsdbserver-sb\") pod \"dnsmasq-dns-5cf7c86fb5-5wfg7\" (UID: \"1b376941-61ec-4cfc-9ced-db78152e29f0\") " pod="openstack/dnsmasq-dns-5cf7c86fb5-5wfg7" Feb 27 17:44:03 crc kubenswrapper[4830]: I0227 17:44:03.433182 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1b376941-61ec-4cfc-9ced-db78152e29f0-dns-svc\") pod \"dnsmasq-dns-5cf7c86fb5-5wfg7\" (UID: \"1b376941-61ec-4cfc-9ced-db78152e29f0\") " pod="openstack/dnsmasq-dns-5cf7c86fb5-5wfg7" Feb 27 17:44:03 crc kubenswrapper[4830]: I0227 17:44:03.456185 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7wbwp\" (UniqueName: \"kubernetes.io/projected/1b376941-61ec-4cfc-9ced-db78152e29f0-kube-api-access-7wbwp\") pod 
\"dnsmasq-dns-5cf7c86fb5-5wfg7\" (UID: \"1b376941-61ec-4cfc-9ced-db78152e29f0\") " pod="openstack/dnsmasq-dns-5cf7c86fb5-5wfg7" Feb 27 17:44:03 crc kubenswrapper[4830]: I0227 17:44:03.564149 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5cf7c86fb5-5wfg7" Feb 27 17:44:04 crc kubenswrapper[4830]: I0227 17:44:04.079992 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5cf7c86fb5-5wfg7"] Feb 27 17:44:04 crc kubenswrapper[4830]: W0227 17:44:04.084378 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1b376941_61ec_4cfc_9ced_db78152e29f0.slice/crio-ba71cfafdd4dc3cc955015ac0bd570c4ddfe33f0a341904455e6ff5a2615bcc4 WatchSource:0}: Error finding container ba71cfafdd4dc3cc955015ac0bd570c4ddfe33f0a341904455e6ff5a2615bcc4: Status 404 returned error can't find the container with id ba71cfafdd4dc3cc955015ac0bd570c4ddfe33f0a341904455e6ff5a2615bcc4 Feb 27 17:44:04 crc kubenswrapper[4830]: I0227 17:44:04.515908 4830 generic.go:334] "Generic (PLEG): container finished" podID="1b376941-61ec-4cfc-9ced-db78152e29f0" containerID="0f49a590c7089256ccafcf55562c343b54ea8b3ad7619383956439ec225dc43d" exitCode=0 Feb 27 17:44:04 crc kubenswrapper[4830]: I0227 17:44:04.516011 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5cf7c86fb5-5wfg7" event={"ID":"1b376941-61ec-4cfc-9ced-db78152e29f0","Type":"ContainerDied","Data":"0f49a590c7089256ccafcf55562c343b54ea8b3ad7619383956439ec225dc43d"} Feb 27 17:44:04 crc kubenswrapper[4830]: I0227 17:44:04.516310 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5cf7c86fb5-5wfg7" event={"ID":"1b376941-61ec-4cfc-9ced-db78152e29f0","Type":"ContainerStarted","Data":"ba71cfafdd4dc3cc955015ac0bd570c4ddfe33f0a341904455e6ff5a2615bcc4"} Feb 27 17:44:05 crc kubenswrapper[4830]: I0227 17:44:05.537352 4830 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/dnsmasq-dns-5cf7c86fb5-5wfg7" event={"ID":"1b376941-61ec-4cfc-9ced-db78152e29f0","Type":"ContainerStarted","Data":"3e07539f2a77d58f5e12dba382cbb7f0fa5a84f3836e675b3a68b4b44bb198b1"} Feb 27 17:44:05 crc kubenswrapper[4830]: I0227 17:44:05.537769 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5cf7c86fb5-5wfg7" Feb 27 17:44:06 crc kubenswrapper[4830]: I0227 17:44:06.763365 4830 scope.go:117] "RemoveContainer" containerID="0f73af46b765b3d98f3f5e9883b49887ac4f2ac3485e91a001e18c5d351646d8" Feb 27 17:44:06 crc kubenswrapper[4830]: E0227 17:44:06.764166 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 17:44:09 crc kubenswrapper[4830]: E0227 17:44:09.765366 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536898-vrwjs" podUID="204eb1af-36ad-4de7-9da7-9a37fefd3026" Feb 27 17:44:13 crc kubenswrapper[4830]: I0227 17:44:13.567545 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5cf7c86fb5-5wfg7" Feb 27 17:44:13 crc kubenswrapper[4830]: I0227 17:44:13.612181 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5cf7c86fb5-5wfg7" podStartSLOduration=10.612149553 podStartE2EDuration="10.612149553s" podCreationTimestamp="2026-02-27 17:44:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 17:44:05.57137472 +0000 UTC m=+5841.660647183" watchObservedRunningTime="2026-02-27 17:44:13.612149553 +0000 UTC m=+5849.701422046" Feb 27 17:44:13 crc kubenswrapper[4830]: I0227 17:44:13.675910 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57fd8cfc4f-fkfd5"] Feb 27 17:44:13 crc kubenswrapper[4830]: I0227 17:44:13.676335 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57fd8cfc4f-fkfd5" podUID="bffeb097-0b73-4ade-8ea4-2a64979aeaf6" containerName="dnsmasq-dns" containerID="cri-o://33495a6609ce6a02b046a20973bcc16da1ecd244375f83edaeaac9a05ceab2e2" gracePeriod=10 Feb 27 17:44:14 crc kubenswrapper[4830]: I0227 17:44:14.214514 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57fd8cfc4f-fkfd5" Feb 27 17:44:14 crc kubenswrapper[4830]: I0227 17:44:14.286519 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bffeb097-0b73-4ade-8ea4-2a64979aeaf6-dns-svc\") pod \"bffeb097-0b73-4ade-8ea4-2a64979aeaf6\" (UID: \"bffeb097-0b73-4ade-8ea4-2a64979aeaf6\") " Feb 27 17:44:14 crc kubenswrapper[4830]: I0227 17:44:14.286714 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l8lf6\" (UniqueName: \"kubernetes.io/projected/bffeb097-0b73-4ade-8ea4-2a64979aeaf6-kube-api-access-l8lf6\") pod \"bffeb097-0b73-4ade-8ea4-2a64979aeaf6\" (UID: \"bffeb097-0b73-4ade-8ea4-2a64979aeaf6\") " Feb 27 17:44:14 crc kubenswrapper[4830]: I0227 17:44:14.287012 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bffeb097-0b73-4ade-8ea4-2a64979aeaf6-ovsdbserver-nb\") pod \"bffeb097-0b73-4ade-8ea4-2a64979aeaf6\" (UID: \"bffeb097-0b73-4ade-8ea4-2a64979aeaf6\") " Feb 27 
17:44:14 crc kubenswrapper[4830]: I0227 17:44:14.287160 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bffeb097-0b73-4ade-8ea4-2a64979aeaf6-ovsdbserver-sb\") pod \"bffeb097-0b73-4ade-8ea4-2a64979aeaf6\" (UID: \"bffeb097-0b73-4ade-8ea4-2a64979aeaf6\") " Feb 27 17:44:14 crc kubenswrapper[4830]: I0227 17:44:14.287225 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bffeb097-0b73-4ade-8ea4-2a64979aeaf6-config\") pod \"bffeb097-0b73-4ade-8ea4-2a64979aeaf6\" (UID: \"bffeb097-0b73-4ade-8ea4-2a64979aeaf6\") " Feb 27 17:44:14 crc kubenswrapper[4830]: I0227 17:44:14.298990 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bffeb097-0b73-4ade-8ea4-2a64979aeaf6-kube-api-access-l8lf6" (OuterVolumeSpecName: "kube-api-access-l8lf6") pod "bffeb097-0b73-4ade-8ea4-2a64979aeaf6" (UID: "bffeb097-0b73-4ade-8ea4-2a64979aeaf6"). InnerVolumeSpecName "kube-api-access-l8lf6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:44:14 crc kubenswrapper[4830]: I0227 17:44:14.345786 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bffeb097-0b73-4ade-8ea4-2a64979aeaf6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "bffeb097-0b73-4ade-8ea4-2a64979aeaf6" (UID: "bffeb097-0b73-4ade-8ea4-2a64979aeaf6"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:44:14 crc kubenswrapper[4830]: I0227 17:44:14.349935 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bffeb097-0b73-4ade-8ea4-2a64979aeaf6-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "bffeb097-0b73-4ade-8ea4-2a64979aeaf6" (UID: "bffeb097-0b73-4ade-8ea4-2a64979aeaf6"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:44:14 crc kubenswrapper[4830]: I0227 17:44:14.356557 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bffeb097-0b73-4ade-8ea4-2a64979aeaf6-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "bffeb097-0b73-4ade-8ea4-2a64979aeaf6" (UID: "bffeb097-0b73-4ade-8ea4-2a64979aeaf6"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:44:14 crc kubenswrapper[4830]: I0227 17:44:14.366230 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bffeb097-0b73-4ade-8ea4-2a64979aeaf6-config" (OuterVolumeSpecName: "config") pod "bffeb097-0b73-4ade-8ea4-2a64979aeaf6" (UID: "bffeb097-0b73-4ade-8ea4-2a64979aeaf6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:44:14 crc kubenswrapper[4830]: I0227 17:44:14.389558 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l8lf6\" (UniqueName: \"kubernetes.io/projected/bffeb097-0b73-4ade-8ea4-2a64979aeaf6-kube-api-access-l8lf6\") on node \"crc\" DevicePath \"\"" Feb 27 17:44:14 crc kubenswrapper[4830]: I0227 17:44:14.389598 4830 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bffeb097-0b73-4ade-8ea4-2a64979aeaf6-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 27 17:44:14 crc kubenswrapper[4830]: I0227 17:44:14.389614 4830 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bffeb097-0b73-4ade-8ea4-2a64979aeaf6-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 27 17:44:14 crc kubenswrapper[4830]: I0227 17:44:14.389627 4830 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bffeb097-0b73-4ade-8ea4-2a64979aeaf6-config\") on node \"crc\" DevicePath \"\"" Feb 27 
17:44:14 crc kubenswrapper[4830]: I0227 17:44:14.389638 4830 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bffeb097-0b73-4ade-8ea4-2a64979aeaf6-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 27 17:44:14 crc kubenswrapper[4830]: I0227 17:44:14.637877 4830 generic.go:334] "Generic (PLEG): container finished" podID="bffeb097-0b73-4ade-8ea4-2a64979aeaf6" containerID="33495a6609ce6a02b046a20973bcc16da1ecd244375f83edaeaac9a05ceab2e2" exitCode=0 Feb 27 17:44:14 crc kubenswrapper[4830]: I0227 17:44:14.638294 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57fd8cfc4f-fkfd5" Feb 27 17:44:14 crc kubenswrapper[4830]: I0227 17:44:14.638295 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57fd8cfc4f-fkfd5" event={"ID":"bffeb097-0b73-4ade-8ea4-2a64979aeaf6","Type":"ContainerDied","Data":"33495a6609ce6a02b046a20973bcc16da1ecd244375f83edaeaac9a05ceab2e2"} Feb 27 17:44:14 crc kubenswrapper[4830]: I0227 17:44:14.641153 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57fd8cfc4f-fkfd5" event={"ID":"bffeb097-0b73-4ade-8ea4-2a64979aeaf6","Type":"ContainerDied","Data":"82472f0be044648f9e384565bf065c692bbdab4b0374901bdb88397fb63c2996"} Feb 27 17:44:14 crc kubenswrapper[4830]: I0227 17:44:14.641188 4830 scope.go:117] "RemoveContainer" containerID="33495a6609ce6a02b046a20973bcc16da1ecd244375f83edaeaac9a05ceab2e2" Feb 27 17:44:14 crc kubenswrapper[4830]: I0227 17:44:14.688461 4830 scope.go:117] "RemoveContainer" containerID="0a7ce7b7f3399284623af1cb03abdcfed6ecd557b8e7a48338df06401e258595" Feb 27 17:44:14 crc kubenswrapper[4830]: I0227 17:44:14.709309 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57fd8cfc4f-fkfd5"] Feb 27 17:44:14 crc kubenswrapper[4830]: I0227 17:44:14.724226 4830 scope.go:117] "RemoveContainer" 
containerID="33495a6609ce6a02b046a20973bcc16da1ecd244375f83edaeaac9a05ceab2e2" Feb 27 17:44:14 crc kubenswrapper[4830]: E0227 17:44:14.724927 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"33495a6609ce6a02b046a20973bcc16da1ecd244375f83edaeaac9a05ceab2e2\": container with ID starting with 33495a6609ce6a02b046a20973bcc16da1ecd244375f83edaeaac9a05ceab2e2 not found: ID does not exist" containerID="33495a6609ce6a02b046a20973bcc16da1ecd244375f83edaeaac9a05ceab2e2" Feb 27 17:44:14 crc kubenswrapper[4830]: I0227 17:44:14.725025 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33495a6609ce6a02b046a20973bcc16da1ecd244375f83edaeaac9a05ceab2e2"} err="failed to get container status \"33495a6609ce6a02b046a20973bcc16da1ecd244375f83edaeaac9a05ceab2e2\": rpc error: code = NotFound desc = could not find container \"33495a6609ce6a02b046a20973bcc16da1ecd244375f83edaeaac9a05ceab2e2\": container with ID starting with 33495a6609ce6a02b046a20973bcc16da1ecd244375f83edaeaac9a05ceab2e2 not found: ID does not exist" Feb 27 17:44:14 crc kubenswrapper[4830]: I0227 17:44:14.725098 4830 scope.go:117] "RemoveContainer" containerID="0a7ce7b7f3399284623af1cb03abdcfed6ecd557b8e7a48338df06401e258595" Feb 27 17:44:14 crc kubenswrapper[4830]: E0227 17:44:14.726364 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0a7ce7b7f3399284623af1cb03abdcfed6ecd557b8e7a48338df06401e258595\": container with ID starting with 0a7ce7b7f3399284623af1cb03abdcfed6ecd557b8e7a48338df06401e258595 not found: ID does not exist" containerID="0a7ce7b7f3399284623af1cb03abdcfed6ecd557b8e7a48338df06401e258595" Feb 27 17:44:14 crc kubenswrapper[4830]: I0227 17:44:14.726497 4830 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"0a7ce7b7f3399284623af1cb03abdcfed6ecd557b8e7a48338df06401e258595"} err="failed to get container status \"0a7ce7b7f3399284623af1cb03abdcfed6ecd557b8e7a48338df06401e258595\": rpc error: code = NotFound desc = could not find container \"0a7ce7b7f3399284623af1cb03abdcfed6ecd557b8e7a48338df06401e258595\": container with ID starting with 0a7ce7b7f3399284623af1cb03abdcfed6ecd557b8e7a48338df06401e258595 not found: ID does not exist" Feb 27 17:44:14 crc kubenswrapper[4830]: I0227 17:44:14.748032 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57fd8cfc4f-fkfd5"] Feb 27 17:44:14 crc kubenswrapper[4830]: I0227 17:44:14.792313 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bffeb097-0b73-4ade-8ea4-2a64979aeaf6" path="/var/lib/kubelet/pods/bffeb097-0b73-4ade-8ea4-2a64979aeaf6/volumes" Feb 27 17:44:15 crc kubenswrapper[4830]: E0227 17:44:15.261667 4830 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 27 17:44:15 crc kubenswrapper[4830]: E0227 17:44:15.261840 4830 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 17:44:15 crc kubenswrapper[4830]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 27 17:44:15 crc kubenswrapper[4830]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xmw5q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29536904-jrdqt_openshift-infra(77856f9c-1131-4857-9fff-bddf1d27b5d3): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 27 17:44:15 crc kubenswrapper[4830]: > logger="UnhandledError" Feb 27 17:44:15 crc kubenswrapper[4830]: E0227 17:44:15.263128 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29536904-jrdqt" podUID="77856f9c-1131-4857-9fff-bddf1d27b5d3" Feb 27 17:44:15 crc kubenswrapper[4830]: I0227 17:44:15.444153 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-44qds"] Feb 27 17:44:15 crc kubenswrapper[4830]: E0227 17:44:15.444857 4830 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="bffeb097-0b73-4ade-8ea4-2a64979aeaf6" containerName="init" Feb 27 17:44:15 crc kubenswrapper[4830]: I0227 17:44:15.444879 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="bffeb097-0b73-4ade-8ea4-2a64979aeaf6" containerName="init" Feb 27 17:44:15 crc kubenswrapper[4830]: E0227 17:44:15.444918 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bffeb097-0b73-4ade-8ea4-2a64979aeaf6" containerName="dnsmasq-dns" Feb 27 17:44:15 crc kubenswrapper[4830]: I0227 17:44:15.444925 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="bffeb097-0b73-4ade-8ea4-2a64979aeaf6" containerName="dnsmasq-dns" Feb 27 17:44:15 crc kubenswrapper[4830]: I0227 17:44:15.445127 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="bffeb097-0b73-4ade-8ea4-2a64979aeaf6" containerName="dnsmasq-dns" Feb 27 17:44:15 crc kubenswrapper[4830]: I0227 17:44:15.445935 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-44qds" Feb 27 17:44:15 crc kubenswrapper[4830]: I0227 17:44:15.472127 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-44qds"] Feb 27 17:44:15 crc kubenswrapper[4830]: I0227 17:44:15.508608 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhq5x\" (UniqueName: \"kubernetes.io/projected/f413ccbf-4c04-4ac8-9698-421fee71e5ca-kube-api-access-vhq5x\") pod \"cinder-db-create-44qds\" (UID: \"f413ccbf-4c04-4ac8-9698-421fee71e5ca\") " pod="openstack/cinder-db-create-44qds" Feb 27 17:44:15 crc kubenswrapper[4830]: I0227 17:44:15.508661 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f413ccbf-4c04-4ac8-9698-421fee71e5ca-operator-scripts\") pod \"cinder-db-create-44qds\" (UID: \"f413ccbf-4c04-4ac8-9698-421fee71e5ca\") " pod="openstack/cinder-db-create-44qds" Feb 27 
17:44:15 crc kubenswrapper[4830]: I0227 17:44:15.536396 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-8f2c-account-create-update-dxpp6"] Feb 27 17:44:15 crc kubenswrapper[4830]: I0227 17:44:15.537882 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-8f2c-account-create-update-dxpp6" Feb 27 17:44:15 crc kubenswrapper[4830]: I0227 17:44:15.541340 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Feb 27 17:44:15 crc kubenswrapper[4830]: I0227 17:44:15.566395 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-8f2c-account-create-update-dxpp6"] Feb 27 17:44:15 crc kubenswrapper[4830]: I0227 17:44:15.611637 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1804cc70-6b63-4738-8dd9-19129e207c08-operator-scripts\") pod \"cinder-8f2c-account-create-update-dxpp6\" (UID: \"1804cc70-6b63-4738-8dd9-19129e207c08\") " pod="openstack/cinder-8f2c-account-create-update-dxpp6" Feb 27 17:44:15 crc kubenswrapper[4830]: I0227 17:44:15.611781 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjxph\" (UniqueName: \"kubernetes.io/projected/1804cc70-6b63-4738-8dd9-19129e207c08-kube-api-access-kjxph\") pod \"cinder-8f2c-account-create-update-dxpp6\" (UID: \"1804cc70-6b63-4738-8dd9-19129e207c08\") " pod="openstack/cinder-8f2c-account-create-update-dxpp6" Feb 27 17:44:15 crc kubenswrapper[4830]: I0227 17:44:15.611842 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vhq5x\" (UniqueName: \"kubernetes.io/projected/f413ccbf-4c04-4ac8-9698-421fee71e5ca-kube-api-access-vhq5x\") pod \"cinder-db-create-44qds\" (UID: \"f413ccbf-4c04-4ac8-9698-421fee71e5ca\") " pod="openstack/cinder-db-create-44qds" Feb 27 17:44:15 crc 
kubenswrapper[4830]: I0227 17:44:15.611869 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f413ccbf-4c04-4ac8-9698-421fee71e5ca-operator-scripts\") pod \"cinder-db-create-44qds\" (UID: \"f413ccbf-4c04-4ac8-9698-421fee71e5ca\") " pod="openstack/cinder-db-create-44qds" Feb 27 17:44:15 crc kubenswrapper[4830]: I0227 17:44:15.612775 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f413ccbf-4c04-4ac8-9698-421fee71e5ca-operator-scripts\") pod \"cinder-db-create-44qds\" (UID: \"f413ccbf-4c04-4ac8-9698-421fee71e5ca\") " pod="openstack/cinder-db-create-44qds" Feb 27 17:44:15 crc kubenswrapper[4830]: I0227 17:44:15.635664 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vhq5x\" (UniqueName: \"kubernetes.io/projected/f413ccbf-4c04-4ac8-9698-421fee71e5ca-kube-api-access-vhq5x\") pod \"cinder-db-create-44qds\" (UID: \"f413ccbf-4c04-4ac8-9698-421fee71e5ca\") " pod="openstack/cinder-db-create-44qds" Feb 27 17:44:15 crc kubenswrapper[4830]: I0227 17:44:15.713685 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kjxph\" (UniqueName: \"kubernetes.io/projected/1804cc70-6b63-4738-8dd9-19129e207c08-kube-api-access-kjxph\") pod \"cinder-8f2c-account-create-update-dxpp6\" (UID: \"1804cc70-6b63-4738-8dd9-19129e207c08\") " pod="openstack/cinder-8f2c-account-create-update-dxpp6" Feb 27 17:44:15 crc kubenswrapper[4830]: I0227 17:44:15.713787 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1804cc70-6b63-4738-8dd9-19129e207c08-operator-scripts\") pod \"cinder-8f2c-account-create-update-dxpp6\" (UID: \"1804cc70-6b63-4738-8dd9-19129e207c08\") " pod="openstack/cinder-8f2c-account-create-update-dxpp6" Feb 27 17:44:15 crc kubenswrapper[4830]: 
I0227 17:44:15.714992 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1804cc70-6b63-4738-8dd9-19129e207c08-operator-scripts\") pod \"cinder-8f2c-account-create-update-dxpp6\" (UID: \"1804cc70-6b63-4738-8dd9-19129e207c08\") " pod="openstack/cinder-8f2c-account-create-update-dxpp6" Feb 27 17:44:15 crc kubenswrapper[4830]: I0227 17:44:15.736498 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kjxph\" (UniqueName: \"kubernetes.io/projected/1804cc70-6b63-4738-8dd9-19129e207c08-kube-api-access-kjxph\") pod \"cinder-8f2c-account-create-update-dxpp6\" (UID: \"1804cc70-6b63-4738-8dd9-19129e207c08\") " pod="openstack/cinder-8f2c-account-create-update-dxpp6" Feb 27 17:44:15 crc kubenswrapper[4830]: I0227 17:44:15.764288 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-44qds" Feb 27 17:44:15 crc kubenswrapper[4830]: I0227 17:44:15.862740 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-8f2c-account-create-update-dxpp6" Feb 27 17:44:16 crc kubenswrapper[4830]: I0227 17:44:16.325061 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-44qds"] Feb 27 17:44:16 crc kubenswrapper[4830]: W0227 17:44:16.329877 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf413ccbf_4c04_4ac8_9698_421fee71e5ca.slice/crio-8328dae8a140ba943e4f80c126b4d91875a92b8e53f96d3785f302be3393fd2e WatchSource:0}: Error finding container 8328dae8a140ba943e4f80c126b4d91875a92b8e53f96d3785f302be3393fd2e: Status 404 returned error can't find the container with id 8328dae8a140ba943e4f80c126b4d91875a92b8e53f96d3785f302be3393fd2e Feb 27 17:44:16 crc kubenswrapper[4830]: I0227 17:44:16.410768 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-8f2c-account-create-update-dxpp6"] Feb 27 17:44:16 crc kubenswrapper[4830]: W0227 17:44:16.415084 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1804cc70_6b63_4738_8dd9_19129e207c08.slice/crio-0ba660765e820c467e3cecf6af5f6caf950c3d8882b76c8c92a3b97ed1d50a9a WatchSource:0}: Error finding container 0ba660765e820c467e3cecf6af5f6caf950c3d8882b76c8c92a3b97ed1d50a9a: Status 404 returned error can't find the container with id 0ba660765e820c467e3cecf6af5f6caf950c3d8882b76c8c92a3b97ed1d50a9a Feb 27 17:44:16 crc kubenswrapper[4830]: I0227 17:44:16.675565 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-8f2c-account-create-update-dxpp6" event={"ID":"1804cc70-6b63-4738-8dd9-19129e207c08","Type":"ContainerStarted","Data":"e6025491c2a8d25ef3d0a88646ac4094e038d55e7bc95aa1f2f13e968309f97d"} Feb 27 17:44:16 crc kubenswrapper[4830]: I0227 17:44:16.675618 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-8f2c-account-create-update-dxpp6" 
event={"ID":"1804cc70-6b63-4738-8dd9-19129e207c08","Type":"ContainerStarted","Data":"0ba660765e820c467e3cecf6af5f6caf950c3d8882b76c8c92a3b97ed1d50a9a"} Feb 27 17:44:16 crc kubenswrapper[4830]: I0227 17:44:16.678479 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-44qds" event={"ID":"f413ccbf-4c04-4ac8-9698-421fee71e5ca","Type":"ContainerStarted","Data":"2e2f575e03dcedacee0a87532b1a795db59aa5461672477ec9edebb6c4178cba"} Feb 27 17:44:16 crc kubenswrapper[4830]: I0227 17:44:16.678511 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-44qds" event={"ID":"f413ccbf-4c04-4ac8-9698-421fee71e5ca","Type":"ContainerStarted","Data":"8328dae8a140ba943e4f80c126b4d91875a92b8e53f96d3785f302be3393fd2e"} Feb 27 17:44:16 crc kubenswrapper[4830]: I0227 17:44:16.698441 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-8f2c-account-create-update-dxpp6" podStartSLOduration=1.698421804 podStartE2EDuration="1.698421804s" podCreationTimestamp="2026-02-27 17:44:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 17:44:16.692601553 +0000 UTC m=+5852.781874036" watchObservedRunningTime="2026-02-27 17:44:16.698421804 +0000 UTC m=+5852.787694277" Feb 27 17:44:16 crc kubenswrapper[4830]: I0227 17:44:16.726610 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-create-44qds" podStartSLOduration=1.7265890910000001 podStartE2EDuration="1.726589091s" podCreationTimestamp="2026-02-27 17:44:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 17:44:16.715002602 +0000 UTC m=+5852.804275075" watchObservedRunningTime="2026-02-27 17:44:16.726589091 +0000 UTC m=+5852.815861564" Feb 27 17:44:17 crc kubenswrapper[4830]: I0227 17:44:17.691875 4830 
generic.go:334] "Generic (PLEG): container finished" podID="f413ccbf-4c04-4ac8-9698-421fee71e5ca" containerID="2e2f575e03dcedacee0a87532b1a795db59aa5461672477ec9edebb6c4178cba" exitCode=0 Feb 27 17:44:17 crc kubenswrapper[4830]: I0227 17:44:17.692269 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-44qds" event={"ID":"f413ccbf-4c04-4ac8-9698-421fee71e5ca","Type":"ContainerDied","Data":"2e2f575e03dcedacee0a87532b1a795db59aa5461672477ec9edebb6c4178cba"} Feb 27 17:44:17 crc kubenswrapper[4830]: I0227 17:44:17.694748 4830 generic.go:334] "Generic (PLEG): container finished" podID="1804cc70-6b63-4738-8dd9-19129e207c08" containerID="e6025491c2a8d25ef3d0a88646ac4094e038d55e7bc95aa1f2f13e968309f97d" exitCode=0 Feb 27 17:44:17 crc kubenswrapper[4830]: I0227 17:44:17.694779 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-8f2c-account-create-update-dxpp6" event={"ID":"1804cc70-6b63-4738-8dd9-19129e207c08","Type":"ContainerDied","Data":"e6025491c2a8d25ef3d0a88646ac4094e038d55e7bc95aa1f2f13e968309f97d"} Feb 27 17:44:19 crc kubenswrapper[4830]: I0227 17:44:19.257290 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-8f2c-account-create-update-dxpp6" Feb 27 17:44:19 crc kubenswrapper[4830]: I0227 17:44:19.266244 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-44qds" Feb 27 17:44:19 crc kubenswrapper[4830]: I0227 17:44:19.299415 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kjxph\" (UniqueName: \"kubernetes.io/projected/1804cc70-6b63-4738-8dd9-19129e207c08-kube-api-access-kjxph\") pod \"1804cc70-6b63-4738-8dd9-19129e207c08\" (UID: \"1804cc70-6b63-4738-8dd9-19129e207c08\") " Feb 27 17:44:19 crc kubenswrapper[4830]: I0227 17:44:19.299471 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vhq5x\" (UniqueName: \"kubernetes.io/projected/f413ccbf-4c04-4ac8-9698-421fee71e5ca-kube-api-access-vhq5x\") pod \"f413ccbf-4c04-4ac8-9698-421fee71e5ca\" (UID: \"f413ccbf-4c04-4ac8-9698-421fee71e5ca\") " Feb 27 17:44:19 crc kubenswrapper[4830]: I0227 17:44:19.299498 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f413ccbf-4c04-4ac8-9698-421fee71e5ca-operator-scripts\") pod \"f413ccbf-4c04-4ac8-9698-421fee71e5ca\" (UID: \"f413ccbf-4c04-4ac8-9698-421fee71e5ca\") " Feb 27 17:44:19 crc kubenswrapper[4830]: I0227 17:44:19.299540 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1804cc70-6b63-4738-8dd9-19129e207c08-operator-scripts\") pod \"1804cc70-6b63-4738-8dd9-19129e207c08\" (UID: \"1804cc70-6b63-4738-8dd9-19129e207c08\") " Feb 27 17:44:19 crc kubenswrapper[4830]: I0227 17:44:19.300427 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f413ccbf-4c04-4ac8-9698-421fee71e5ca-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f413ccbf-4c04-4ac8-9698-421fee71e5ca" (UID: "f413ccbf-4c04-4ac8-9698-421fee71e5ca"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:44:19 crc kubenswrapper[4830]: I0227 17:44:19.305322 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1804cc70-6b63-4738-8dd9-19129e207c08-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1804cc70-6b63-4738-8dd9-19129e207c08" (UID: "1804cc70-6b63-4738-8dd9-19129e207c08"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:44:19 crc kubenswrapper[4830]: I0227 17:44:19.309020 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1804cc70-6b63-4738-8dd9-19129e207c08-kube-api-access-kjxph" (OuterVolumeSpecName: "kube-api-access-kjxph") pod "1804cc70-6b63-4738-8dd9-19129e207c08" (UID: "1804cc70-6b63-4738-8dd9-19129e207c08"). InnerVolumeSpecName "kube-api-access-kjxph". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:44:19 crc kubenswrapper[4830]: I0227 17:44:19.325506 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f413ccbf-4c04-4ac8-9698-421fee71e5ca-kube-api-access-vhq5x" (OuterVolumeSpecName: "kube-api-access-vhq5x") pod "f413ccbf-4c04-4ac8-9698-421fee71e5ca" (UID: "f413ccbf-4c04-4ac8-9698-421fee71e5ca"). InnerVolumeSpecName "kube-api-access-vhq5x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:44:19 crc kubenswrapper[4830]: I0227 17:44:19.401368 4830 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1804cc70-6b63-4738-8dd9-19129e207c08-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 17:44:19 crc kubenswrapper[4830]: I0227 17:44:19.401401 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kjxph\" (UniqueName: \"kubernetes.io/projected/1804cc70-6b63-4738-8dd9-19129e207c08-kube-api-access-kjxph\") on node \"crc\" DevicePath \"\"" Feb 27 17:44:19 crc kubenswrapper[4830]: I0227 17:44:19.401413 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vhq5x\" (UniqueName: \"kubernetes.io/projected/f413ccbf-4c04-4ac8-9698-421fee71e5ca-kube-api-access-vhq5x\") on node \"crc\" DevicePath \"\"" Feb 27 17:44:19 crc kubenswrapper[4830]: I0227 17:44:19.401422 4830 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f413ccbf-4c04-4ac8-9698-421fee71e5ca-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 17:44:19 crc kubenswrapper[4830]: I0227 17:44:19.720134 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-8f2c-account-create-update-dxpp6" Feb 27 17:44:19 crc kubenswrapper[4830]: I0227 17:44:19.720876 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-8f2c-account-create-update-dxpp6" event={"ID":"1804cc70-6b63-4738-8dd9-19129e207c08","Type":"ContainerDied","Data":"0ba660765e820c467e3cecf6af5f6caf950c3d8882b76c8c92a3b97ed1d50a9a"} Feb 27 17:44:19 crc kubenswrapper[4830]: I0227 17:44:19.720974 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0ba660765e820c467e3cecf6af5f6caf950c3d8882b76c8c92a3b97ed1d50a9a" Feb 27 17:44:19 crc kubenswrapper[4830]: I0227 17:44:19.731148 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-44qds" event={"ID":"f413ccbf-4c04-4ac8-9698-421fee71e5ca","Type":"ContainerDied","Data":"8328dae8a140ba943e4f80c126b4d91875a92b8e53f96d3785f302be3393fd2e"} Feb 27 17:44:19 crc kubenswrapper[4830]: I0227 17:44:19.731657 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8328dae8a140ba943e4f80c126b4d91875a92b8e53f96d3785f302be3393fd2e" Feb 27 17:44:19 crc kubenswrapper[4830]: I0227 17:44:19.731483 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-44qds" Feb 27 17:44:20 crc kubenswrapper[4830]: I0227 17:44:20.718230 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-m9wvm"] Feb 27 17:44:20 crc kubenswrapper[4830]: E0227 17:44:20.718644 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f413ccbf-4c04-4ac8-9698-421fee71e5ca" containerName="mariadb-database-create" Feb 27 17:44:20 crc kubenswrapper[4830]: I0227 17:44:20.718657 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="f413ccbf-4c04-4ac8-9698-421fee71e5ca" containerName="mariadb-database-create" Feb 27 17:44:20 crc kubenswrapper[4830]: E0227 17:44:20.718687 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1804cc70-6b63-4738-8dd9-19129e207c08" containerName="mariadb-account-create-update" Feb 27 17:44:20 crc kubenswrapper[4830]: I0227 17:44:20.718695 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="1804cc70-6b63-4738-8dd9-19129e207c08" containerName="mariadb-account-create-update" Feb 27 17:44:20 crc kubenswrapper[4830]: I0227 17:44:20.718852 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="1804cc70-6b63-4738-8dd9-19129e207c08" containerName="mariadb-account-create-update" Feb 27 17:44:20 crc kubenswrapper[4830]: I0227 17:44:20.718860 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="f413ccbf-4c04-4ac8-9698-421fee71e5ca" containerName="mariadb-database-create" Feb 27 17:44:20 crc kubenswrapper[4830]: I0227 17:44:20.719483 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-m9wvm" Feb 27 17:44:20 crc kubenswrapper[4830]: I0227 17:44:20.722872 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 27 17:44:20 crc kubenswrapper[4830]: I0227 17:44:20.726127 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-h788x" Feb 27 17:44:20 crc kubenswrapper[4830]: I0227 17:44:20.726377 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 27 17:44:20 crc kubenswrapper[4830]: I0227 17:44:20.743769 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-m9wvm"] Feb 27 17:44:20 crc kubenswrapper[4830]: I0227 17:44:20.762708 4830 scope.go:117] "RemoveContainer" containerID="0f73af46b765b3d98f3f5e9883b49887ac4f2ac3485e91a001e18c5d351646d8" Feb 27 17:44:20 crc kubenswrapper[4830]: E0227 17:44:20.763017 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 17:44:20 crc kubenswrapper[4830]: I0227 17:44:20.831073 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff265913-df5e-490b-ba35-98be9b52fdb3-combined-ca-bundle\") pod \"cinder-db-sync-m9wvm\" (UID: \"ff265913-df5e-490b-ba35-98be9b52fdb3\") " pod="openstack/cinder-db-sync-m9wvm" Feb 27 17:44:20 crc kubenswrapper[4830]: I0227 17:44:20.831134 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/ff265913-df5e-490b-ba35-98be9b52fdb3-config-data\") pod \"cinder-db-sync-m9wvm\" (UID: \"ff265913-df5e-490b-ba35-98be9b52fdb3\") " pod="openstack/cinder-db-sync-m9wvm" Feb 27 17:44:20 crc kubenswrapper[4830]: I0227 17:44:20.831168 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ff265913-df5e-490b-ba35-98be9b52fdb3-etc-machine-id\") pod \"cinder-db-sync-m9wvm\" (UID: \"ff265913-df5e-490b-ba35-98be9b52fdb3\") " pod="openstack/cinder-db-sync-m9wvm" Feb 27 17:44:20 crc kubenswrapper[4830]: I0227 17:44:20.831247 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bd8qr\" (UniqueName: \"kubernetes.io/projected/ff265913-df5e-490b-ba35-98be9b52fdb3-kube-api-access-bd8qr\") pod \"cinder-db-sync-m9wvm\" (UID: \"ff265913-df5e-490b-ba35-98be9b52fdb3\") " pod="openstack/cinder-db-sync-m9wvm" Feb 27 17:44:20 crc kubenswrapper[4830]: I0227 17:44:20.831276 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ff265913-df5e-490b-ba35-98be9b52fdb3-db-sync-config-data\") pod \"cinder-db-sync-m9wvm\" (UID: \"ff265913-df5e-490b-ba35-98be9b52fdb3\") " pod="openstack/cinder-db-sync-m9wvm" Feb 27 17:44:20 crc kubenswrapper[4830]: I0227 17:44:20.831306 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff265913-df5e-490b-ba35-98be9b52fdb3-scripts\") pod \"cinder-db-sync-m9wvm\" (UID: \"ff265913-df5e-490b-ba35-98be9b52fdb3\") " pod="openstack/cinder-db-sync-m9wvm" Feb 27 17:44:20 crc kubenswrapper[4830]: I0227 17:44:20.933302 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ff265913-df5e-490b-ba35-98be9b52fdb3-combined-ca-bundle\") pod \"cinder-db-sync-m9wvm\" (UID: \"ff265913-df5e-490b-ba35-98be9b52fdb3\") " pod="openstack/cinder-db-sync-m9wvm" Feb 27 17:44:20 crc kubenswrapper[4830]: I0227 17:44:20.933655 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff265913-df5e-490b-ba35-98be9b52fdb3-config-data\") pod \"cinder-db-sync-m9wvm\" (UID: \"ff265913-df5e-490b-ba35-98be9b52fdb3\") " pod="openstack/cinder-db-sync-m9wvm" Feb 27 17:44:20 crc kubenswrapper[4830]: I0227 17:44:20.933820 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ff265913-df5e-490b-ba35-98be9b52fdb3-etc-machine-id\") pod \"cinder-db-sync-m9wvm\" (UID: \"ff265913-df5e-490b-ba35-98be9b52fdb3\") " pod="openstack/cinder-db-sync-m9wvm" Feb 27 17:44:20 crc kubenswrapper[4830]: I0227 17:44:20.934049 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bd8qr\" (UniqueName: \"kubernetes.io/projected/ff265913-df5e-490b-ba35-98be9b52fdb3-kube-api-access-bd8qr\") pod \"cinder-db-sync-m9wvm\" (UID: \"ff265913-df5e-490b-ba35-98be9b52fdb3\") " pod="openstack/cinder-db-sync-m9wvm" Feb 27 17:44:20 crc kubenswrapper[4830]: I0227 17:44:20.934200 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ff265913-df5e-490b-ba35-98be9b52fdb3-db-sync-config-data\") pod \"cinder-db-sync-m9wvm\" (UID: \"ff265913-df5e-490b-ba35-98be9b52fdb3\") " pod="openstack/cinder-db-sync-m9wvm" Feb 27 17:44:20 crc kubenswrapper[4830]: I0227 17:44:20.933906 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ff265913-df5e-490b-ba35-98be9b52fdb3-etc-machine-id\") pod \"cinder-db-sync-m9wvm\" (UID: 
\"ff265913-df5e-490b-ba35-98be9b52fdb3\") " pod="openstack/cinder-db-sync-m9wvm" Feb 27 17:44:20 crc kubenswrapper[4830]: I0227 17:44:20.934337 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff265913-df5e-490b-ba35-98be9b52fdb3-scripts\") pod \"cinder-db-sync-m9wvm\" (UID: \"ff265913-df5e-490b-ba35-98be9b52fdb3\") " pod="openstack/cinder-db-sync-m9wvm" Feb 27 17:44:20 crc kubenswrapper[4830]: I0227 17:44:20.946982 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff265913-df5e-490b-ba35-98be9b52fdb3-config-data\") pod \"cinder-db-sync-m9wvm\" (UID: \"ff265913-df5e-490b-ba35-98be9b52fdb3\") " pod="openstack/cinder-db-sync-m9wvm" Feb 27 17:44:20 crc kubenswrapper[4830]: I0227 17:44:20.951524 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ff265913-df5e-490b-ba35-98be9b52fdb3-db-sync-config-data\") pod \"cinder-db-sync-m9wvm\" (UID: \"ff265913-df5e-490b-ba35-98be9b52fdb3\") " pod="openstack/cinder-db-sync-m9wvm" Feb 27 17:44:20 crc kubenswrapper[4830]: I0227 17:44:20.951764 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff265913-df5e-490b-ba35-98be9b52fdb3-combined-ca-bundle\") pod \"cinder-db-sync-m9wvm\" (UID: \"ff265913-df5e-490b-ba35-98be9b52fdb3\") " pod="openstack/cinder-db-sync-m9wvm" Feb 27 17:44:20 crc kubenswrapper[4830]: I0227 17:44:20.952019 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff265913-df5e-490b-ba35-98be9b52fdb3-scripts\") pod \"cinder-db-sync-m9wvm\" (UID: \"ff265913-df5e-490b-ba35-98be9b52fdb3\") " pod="openstack/cinder-db-sync-m9wvm" Feb 27 17:44:20 crc kubenswrapper[4830]: I0227 17:44:20.956215 4830 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-bd8qr\" (UniqueName: \"kubernetes.io/projected/ff265913-df5e-490b-ba35-98be9b52fdb3-kube-api-access-bd8qr\") pod \"cinder-db-sync-m9wvm\" (UID: \"ff265913-df5e-490b-ba35-98be9b52fdb3\") " pod="openstack/cinder-db-sync-m9wvm" Feb 27 17:44:21 crc kubenswrapper[4830]: I0227 17:44:21.038588 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-m9wvm" Feb 27 17:44:21 crc kubenswrapper[4830]: I0227 17:44:21.529937 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-m9wvm"] Feb 27 17:44:21 crc kubenswrapper[4830]: I0227 17:44:21.756869 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-m9wvm" event={"ID":"ff265913-df5e-490b-ba35-98be9b52fdb3","Type":"ContainerStarted","Data":"afa8214a58d3b6406849294bf50a8ef9caf0df87a43e839aeaebaaa9d7954cb9"} Feb 27 17:44:22 crc kubenswrapper[4830]: I0227 17:44:22.777844 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-m9wvm" event={"ID":"ff265913-df5e-490b-ba35-98be9b52fdb3","Type":"ContainerStarted","Data":"2556677abbb65c17d6c1ad2d531cfd136b59424a4216ec299d5e55f0e5e9209a"} Feb 27 17:44:22 crc kubenswrapper[4830]: I0227 17:44:22.801686 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-m9wvm" podStartSLOduration=2.801660457 podStartE2EDuration="2.801660457s" podCreationTimestamp="2026-02-27 17:44:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 17:44:22.795289683 +0000 UTC m=+5858.884562146" watchObservedRunningTime="2026-02-27 17:44:22.801660457 +0000 UTC m=+5858.890932950" Feb 27 17:44:24 crc kubenswrapper[4830]: I0227 17:44:24.805263 4830 generic.go:334] "Generic (PLEG): container finished" podID="ff265913-df5e-490b-ba35-98be9b52fdb3" 
containerID="2556677abbb65c17d6c1ad2d531cfd136b59424a4216ec299d5e55f0e5e9209a" exitCode=0 Feb 27 17:44:24 crc kubenswrapper[4830]: I0227 17:44:24.805361 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-m9wvm" event={"ID":"ff265913-df5e-490b-ba35-98be9b52fdb3","Type":"ContainerDied","Data":"2556677abbb65c17d6c1ad2d531cfd136b59424a4216ec299d5e55f0e5e9209a"} Feb 27 17:44:24 crc kubenswrapper[4830]: E0227 17:44:24.958486 4830 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 27 17:44:24 crc kubenswrapper[4830]: E0227 17:44:24.958719 4830 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 17:44:24 crc kubenswrapper[4830]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 27 17:44:24 crc kubenswrapper[4830]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mdb7h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29536898-vrwjs_openshift-infra(204eb1af-36ad-4de7-9da7-9a37fefd3026): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 27 17:44:24 crc kubenswrapper[4830]: > logger="UnhandledError" Feb 27 17:44:24 crc kubenswrapper[4830]: E0227 17:44:24.959842 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29536898-vrwjs" podUID="204eb1af-36ad-4de7-9da7-9a37fefd3026" Feb 27 17:44:26 crc kubenswrapper[4830]: I0227 17:44:26.243421 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-m9wvm" Feb 27 17:44:26 crc kubenswrapper[4830]: I0227 17:44:26.362453 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bd8qr\" (UniqueName: \"kubernetes.io/projected/ff265913-df5e-490b-ba35-98be9b52fdb3-kube-api-access-bd8qr\") pod \"ff265913-df5e-490b-ba35-98be9b52fdb3\" (UID: \"ff265913-df5e-490b-ba35-98be9b52fdb3\") " Feb 27 17:44:26 crc kubenswrapper[4830]: I0227 17:44:26.362510 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ff265913-df5e-490b-ba35-98be9b52fdb3-etc-machine-id\") pod \"ff265913-df5e-490b-ba35-98be9b52fdb3\" (UID: \"ff265913-df5e-490b-ba35-98be9b52fdb3\") " Feb 27 17:44:26 crc kubenswrapper[4830]: I0227 17:44:26.362539 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ff265913-df5e-490b-ba35-98be9b52fdb3-db-sync-config-data\") pod \"ff265913-df5e-490b-ba35-98be9b52fdb3\" (UID: \"ff265913-df5e-490b-ba35-98be9b52fdb3\") " Feb 27 17:44:26 crc kubenswrapper[4830]: I0227 17:44:26.362577 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff265913-df5e-490b-ba35-98be9b52fdb3-combined-ca-bundle\") pod \"ff265913-df5e-490b-ba35-98be9b52fdb3\" (UID: \"ff265913-df5e-490b-ba35-98be9b52fdb3\") " Feb 27 17:44:26 crc kubenswrapper[4830]: I0227 17:44:26.362637 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff265913-df5e-490b-ba35-98be9b52fdb3-scripts\") pod \"ff265913-df5e-490b-ba35-98be9b52fdb3\" (UID: \"ff265913-df5e-490b-ba35-98be9b52fdb3\") " Feb 27 17:44:26 crc kubenswrapper[4830]: I0227 17:44:26.362710 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/ff265913-df5e-490b-ba35-98be9b52fdb3-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "ff265913-df5e-490b-ba35-98be9b52fdb3" (UID: "ff265913-df5e-490b-ba35-98be9b52fdb3"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 17:44:26 crc kubenswrapper[4830]: I0227 17:44:26.362768 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff265913-df5e-490b-ba35-98be9b52fdb3-config-data\") pod \"ff265913-df5e-490b-ba35-98be9b52fdb3\" (UID: \"ff265913-df5e-490b-ba35-98be9b52fdb3\") " Feb 27 17:44:26 crc kubenswrapper[4830]: I0227 17:44:26.363249 4830 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ff265913-df5e-490b-ba35-98be9b52fdb3-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 27 17:44:26 crc kubenswrapper[4830]: I0227 17:44:26.368984 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff265913-df5e-490b-ba35-98be9b52fdb3-kube-api-access-bd8qr" (OuterVolumeSpecName: "kube-api-access-bd8qr") pod "ff265913-df5e-490b-ba35-98be9b52fdb3" (UID: "ff265913-df5e-490b-ba35-98be9b52fdb3"). InnerVolumeSpecName "kube-api-access-bd8qr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:44:26 crc kubenswrapper[4830]: I0227 17:44:26.369265 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff265913-df5e-490b-ba35-98be9b52fdb3-scripts" (OuterVolumeSpecName: "scripts") pod "ff265913-df5e-490b-ba35-98be9b52fdb3" (UID: "ff265913-df5e-490b-ba35-98be9b52fdb3"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 17:44:26 crc kubenswrapper[4830]: I0227 17:44:26.370961 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff265913-df5e-490b-ba35-98be9b52fdb3-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "ff265913-df5e-490b-ba35-98be9b52fdb3" (UID: "ff265913-df5e-490b-ba35-98be9b52fdb3"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 17:44:26 crc kubenswrapper[4830]: I0227 17:44:26.401982 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff265913-df5e-490b-ba35-98be9b52fdb3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ff265913-df5e-490b-ba35-98be9b52fdb3" (UID: "ff265913-df5e-490b-ba35-98be9b52fdb3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 17:44:26 crc kubenswrapper[4830]: I0227 17:44:26.423565 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff265913-df5e-490b-ba35-98be9b52fdb3-config-data" (OuterVolumeSpecName: "config-data") pod "ff265913-df5e-490b-ba35-98be9b52fdb3" (UID: "ff265913-df5e-490b-ba35-98be9b52fdb3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 17:44:26 crc kubenswrapper[4830]: I0227 17:44:26.465558 4830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff265913-df5e-490b-ba35-98be9b52fdb3-config-data\") on node \"crc\" DevicePath \"\""
Feb 27 17:44:26 crc kubenswrapper[4830]: I0227 17:44:26.465624 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bd8qr\" (UniqueName: \"kubernetes.io/projected/ff265913-df5e-490b-ba35-98be9b52fdb3-kube-api-access-bd8qr\") on node \"crc\" DevicePath \"\""
Feb 27 17:44:26 crc kubenswrapper[4830]: I0227 17:44:26.465638 4830 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ff265913-df5e-490b-ba35-98be9b52fdb3-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Feb 27 17:44:26 crc kubenswrapper[4830]: I0227 17:44:26.465649 4830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff265913-df5e-490b-ba35-98be9b52fdb3-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 27 17:44:26 crc kubenswrapper[4830]: I0227 17:44:26.465658 4830 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff265913-df5e-490b-ba35-98be9b52fdb3-scripts\") on node \"crc\" DevicePath \"\""
Feb 27 17:44:26 crc kubenswrapper[4830]: E0227 17:44:26.765863 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536904-jrdqt" podUID="77856f9c-1131-4857-9fff-bddf1d27b5d3"
Feb 27 17:44:26 crc kubenswrapper[4830]: I0227 17:44:26.830312 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-m9wvm" event={"ID":"ff265913-df5e-490b-ba35-98be9b52fdb3","Type":"ContainerDied","Data":"afa8214a58d3b6406849294bf50a8ef9caf0df87a43e839aeaebaaa9d7954cb9"}
Feb 27 17:44:26 crc kubenswrapper[4830]: I0227 17:44:26.830610 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="afa8214a58d3b6406849294bf50a8ef9caf0df87a43e839aeaebaaa9d7954cb9"
Feb 27 17:44:26 crc kubenswrapper[4830]: I0227 17:44:26.830381 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-m9wvm"
Feb 27 17:44:27 crc kubenswrapper[4830]: I0227 17:44:27.316048 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6b8675bf5c-vgk78"]
Feb 27 17:44:27 crc kubenswrapper[4830]: E0227 17:44:27.316742 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff265913-df5e-490b-ba35-98be9b52fdb3" containerName="cinder-db-sync"
Feb 27 17:44:27 crc kubenswrapper[4830]: I0227 17:44:27.316756 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff265913-df5e-490b-ba35-98be9b52fdb3" containerName="cinder-db-sync"
Feb 27 17:44:27 crc kubenswrapper[4830]: I0227 17:44:27.316964 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff265913-df5e-490b-ba35-98be9b52fdb3" containerName="cinder-db-sync"
Feb 27 17:44:27 crc kubenswrapper[4830]: I0227 17:44:27.317917 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b8675bf5c-vgk78"
Feb 27 17:44:27 crc kubenswrapper[4830]: I0227 17:44:27.339850 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b8675bf5c-vgk78"]
Feb 27 17:44:27 crc kubenswrapper[4830]: I0227 17:44:27.384074 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a049e072-04be-4b81-8815-c5ee22647712-ovsdbserver-nb\") pod \"dnsmasq-dns-6b8675bf5c-vgk78\" (UID: \"a049e072-04be-4b81-8815-c5ee22647712\") " pod="openstack/dnsmasq-dns-6b8675bf5c-vgk78"
Feb 27 17:44:27 crc kubenswrapper[4830]: I0227 17:44:27.384139 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a049e072-04be-4b81-8815-c5ee22647712-dns-svc\") pod \"dnsmasq-dns-6b8675bf5c-vgk78\" (UID: \"a049e072-04be-4b81-8815-c5ee22647712\") " pod="openstack/dnsmasq-dns-6b8675bf5c-vgk78"
Feb 27 17:44:27 crc kubenswrapper[4830]: I0227 17:44:27.384157 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a049e072-04be-4b81-8815-c5ee22647712-ovsdbserver-sb\") pod \"dnsmasq-dns-6b8675bf5c-vgk78\" (UID: \"a049e072-04be-4b81-8815-c5ee22647712\") " pod="openstack/dnsmasq-dns-6b8675bf5c-vgk78"
Feb 27 17:44:27 crc kubenswrapper[4830]: I0227 17:44:27.384190 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a049e072-04be-4b81-8815-c5ee22647712-config\") pod \"dnsmasq-dns-6b8675bf5c-vgk78\" (UID: \"a049e072-04be-4b81-8815-c5ee22647712\") " pod="openstack/dnsmasq-dns-6b8675bf5c-vgk78"
Feb 27 17:44:27 crc kubenswrapper[4830]: I0227 17:44:27.384345 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5kkv\" (UniqueName: \"kubernetes.io/projected/a049e072-04be-4b81-8815-c5ee22647712-kube-api-access-r5kkv\") pod \"dnsmasq-dns-6b8675bf5c-vgk78\" (UID: \"a049e072-04be-4b81-8815-c5ee22647712\") " pod="openstack/dnsmasq-dns-6b8675bf5c-vgk78"
Feb 27 17:44:27 crc kubenswrapper[4830]: I0227 17:44:27.421373 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"]
Feb 27 17:44:27 crc kubenswrapper[4830]: I0227 17:44:27.422998 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Feb 27 17:44:27 crc kubenswrapper[4830]: I0227 17:44:27.432584 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-h788x"
Feb 27 17:44:27 crc kubenswrapper[4830]: I0227 17:44:27.432901 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data"
Feb 27 17:44:27 crc kubenswrapper[4830]: I0227 17:44:27.433050 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data"
Feb 27 17:44:27 crc kubenswrapper[4830]: I0227 17:44:27.433501 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts"
Feb 27 17:44:27 crc kubenswrapper[4830]: I0227 17:44:27.439991 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Feb 27 17:44:27 crc kubenswrapper[4830]: I0227 17:44:27.486973 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09b3d7eb-5e19-47a1-81bb-a9e0755077ad-config-data\") pod \"cinder-api-0\" (UID: \"09b3d7eb-5e19-47a1-81bb-a9e0755077ad\") " pod="openstack/cinder-api-0"
Feb 27 17:44:27 crc kubenswrapper[4830]: I0227 17:44:27.487024 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a049e072-04be-4b81-8815-c5ee22647712-ovsdbserver-nb\") pod \"dnsmasq-dns-6b8675bf5c-vgk78\" (UID: \"a049e072-04be-4b81-8815-c5ee22647712\") " pod="openstack/dnsmasq-dns-6b8675bf5c-vgk78"
Feb 27 17:44:27 crc kubenswrapper[4830]: I0227 17:44:27.487060 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/09b3d7eb-5e19-47a1-81bb-a9e0755077ad-logs\") pod \"cinder-api-0\" (UID: \"09b3d7eb-5e19-47a1-81bb-a9e0755077ad\") " pod="openstack/cinder-api-0"
Feb 27 17:44:27 crc kubenswrapper[4830]: I0227 17:44:27.487086 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwqqn\" (UniqueName: \"kubernetes.io/projected/09b3d7eb-5e19-47a1-81bb-a9e0755077ad-kube-api-access-dwqqn\") pod \"cinder-api-0\" (UID: \"09b3d7eb-5e19-47a1-81bb-a9e0755077ad\") " pod="openstack/cinder-api-0"
Feb 27 17:44:27 crc kubenswrapper[4830]: I0227 17:44:27.487110 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a049e072-04be-4b81-8815-c5ee22647712-dns-svc\") pod \"dnsmasq-dns-6b8675bf5c-vgk78\" (UID: \"a049e072-04be-4b81-8815-c5ee22647712\") " pod="openstack/dnsmasq-dns-6b8675bf5c-vgk78"
Feb 27 17:44:27 crc kubenswrapper[4830]: I0227 17:44:27.487128 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a049e072-04be-4b81-8815-c5ee22647712-ovsdbserver-sb\") pod \"dnsmasq-dns-6b8675bf5c-vgk78\" (UID: \"a049e072-04be-4b81-8815-c5ee22647712\") " pod="openstack/dnsmasq-dns-6b8675bf5c-vgk78"
Feb 27 17:44:27 crc kubenswrapper[4830]: I0227 17:44:27.487238 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09b3d7eb-5e19-47a1-81bb-a9e0755077ad-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"09b3d7eb-5e19-47a1-81bb-a9e0755077ad\") " pod="openstack/cinder-api-0"
Feb 27 17:44:27 crc kubenswrapper[4830]: I0227 17:44:27.487300 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a049e072-04be-4b81-8815-c5ee22647712-config\") pod \"dnsmasq-dns-6b8675bf5c-vgk78\" (UID: \"a049e072-04be-4b81-8815-c5ee22647712\") " pod="openstack/dnsmasq-dns-6b8675bf5c-vgk78"
Feb 27 17:44:27 crc kubenswrapper[4830]: I0227 17:44:27.487329 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r5kkv\" (UniqueName: \"kubernetes.io/projected/a049e072-04be-4b81-8815-c5ee22647712-kube-api-access-r5kkv\") pod \"dnsmasq-dns-6b8675bf5c-vgk78\" (UID: \"a049e072-04be-4b81-8815-c5ee22647712\") " pod="openstack/dnsmasq-dns-6b8675bf5c-vgk78"
Feb 27 17:44:27 crc kubenswrapper[4830]: I0227 17:44:27.487483 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/09b3d7eb-5e19-47a1-81bb-a9e0755077ad-config-data-custom\") pod \"cinder-api-0\" (UID: \"09b3d7eb-5e19-47a1-81bb-a9e0755077ad\") " pod="openstack/cinder-api-0"
Feb 27 17:44:27 crc kubenswrapper[4830]: I0227 17:44:27.487621 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/09b3d7eb-5e19-47a1-81bb-a9e0755077ad-scripts\") pod \"cinder-api-0\" (UID: \"09b3d7eb-5e19-47a1-81bb-a9e0755077ad\") " pod="openstack/cinder-api-0"
Feb 27 17:44:27 crc kubenswrapper[4830]: I0227 17:44:27.487704 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/09b3d7eb-5e19-47a1-81bb-a9e0755077ad-etc-machine-id\") pod \"cinder-api-0\" (UID: \"09b3d7eb-5e19-47a1-81bb-a9e0755077ad\") " pod="openstack/cinder-api-0"
Feb 27 17:44:27 crc kubenswrapper[4830]: I0227 17:44:27.487908 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a049e072-04be-4b81-8815-c5ee22647712-ovsdbserver-nb\") pod \"dnsmasq-dns-6b8675bf5c-vgk78\" (UID: \"a049e072-04be-4b81-8815-c5ee22647712\") " pod="openstack/dnsmasq-dns-6b8675bf5c-vgk78"
Feb 27 17:44:27 crc kubenswrapper[4830]: I0227 17:44:27.487935 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a049e072-04be-4b81-8815-c5ee22647712-ovsdbserver-sb\") pod \"dnsmasq-dns-6b8675bf5c-vgk78\" (UID: \"a049e072-04be-4b81-8815-c5ee22647712\") " pod="openstack/dnsmasq-dns-6b8675bf5c-vgk78"
Feb 27 17:44:27 crc kubenswrapper[4830]: I0227 17:44:27.488107 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a049e072-04be-4b81-8815-c5ee22647712-dns-svc\") pod \"dnsmasq-dns-6b8675bf5c-vgk78\" (UID: \"a049e072-04be-4b81-8815-c5ee22647712\") " pod="openstack/dnsmasq-dns-6b8675bf5c-vgk78"
Feb 27 17:44:27 crc kubenswrapper[4830]: I0227 17:44:27.488199 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a049e072-04be-4b81-8815-c5ee22647712-config\") pod \"dnsmasq-dns-6b8675bf5c-vgk78\" (UID: \"a049e072-04be-4b81-8815-c5ee22647712\") " pod="openstack/dnsmasq-dns-6b8675bf5c-vgk78"
Feb 27 17:44:27 crc kubenswrapper[4830]: I0227 17:44:27.503168 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r5kkv\" (UniqueName: \"kubernetes.io/projected/a049e072-04be-4b81-8815-c5ee22647712-kube-api-access-r5kkv\") pod \"dnsmasq-dns-6b8675bf5c-vgk78\" (UID: \"a049e072-04be-4b81-8815-c5ee22647712\") " pod="openstack/dnsmasq-dns-6b8675bf5c-vgk78"
Feb 27 17:44:27 crc kubenswrapper[4830]: I0227 17:44:27.589375 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/09b3d7eb-5e19-47a1-81bb-a9e0755077ad-config-data-custom\") pod \"cinder-api-0\" (UID: \"09b3d7eb-5e19-47a1-81bb-a9e0755077ad\") " pod="openstack/cinder-api-0"
Feb 27 17:44:27 crc kubenswrapper[4830]: I0227 17:44:27.589447 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/09b3d7eb-5e19-47a1-81bb-a9e0755077ad-scripts\") pod \"cinder-api-0\" (UID: \"09b3d7eb-5e19-47a1-81bb-a9e0755077ad\") " pod="openstack/cinder-api-0"
Feb 27 17:44:27 crc kubenswrapper[4830]: I0227 17:44:27.589489 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/09b3d7eb-5e19-47a1-81bb-a9e0755077ad-etc-machine-id\") pod \"cinder-api-0\" (UID: \"09b3d7eb-5e19-47a1-81bb-a9e0755077ad\") " pod="openstack/cinder-api-0"
Feb 27 17:44:27 crc kubenswrapper[4830]: I0227 17:44:27.589521 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09b3d7eb-5e19-47a1-81bb-a9e0755077ad-config-data\") pod \"cinder-api-0\" (UID: \"09b3d7eb-5e19-47a1-81bb-a9e0755077ad\") " pod="openstack/cinder-api-0"
Feb 27 17:44:27 crc kubenswrapper[4830]: I0227 17:44:27.589549 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/09b3d7eb-5e19-47a1-81bb-a9e0755077ad-logs\") pod \"cinder-api-0\" (UID: \"09b3d7eb-5e19-47a1-81bb-a9e0755077ad\") " pod="openstack/cinder-api-0"
Feb 27 17:44:27 crc kubenswrapper[4830]: I0227 17:44:27.589570 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwqqn\" (UniqueName: \"kubernetes.io/projected/09b3d7eb-5e19-47a1-81bb-a9e0755077ad-kube-api-access-dwqqn\") pod \"cinder-api-0\" (UID: \"09b3d7eb-5e19-47a1-81bb-a9e0755077ad\") " pod="openstack/cinder-api-0"
Feb 27 17:44:27 crc kubenswrapper[4830]: I0227 17:44:27.589607 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09b3d7eb-5e19-47a1-81bb-a9e0755077ad-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"09b3d7eb-5e19-47a1-81bb-a9e0755077ad\") " pod="openstack/cinder-api-0"
Feb 27 17:44:27 crc kubenswrapper[4830]: I0227 17:44:27.590111 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/09b3d7eb-5e19-47a1-81bb-a9e0755077ad-etc-machine-id\") pod \"cinder-api-0\" (UID: \"09b3d7eb-5e19-47a1-81bb-a9e0755077ad\") " pod="openstack/cinder-api-0"
Feb 27 17:44:27 crc kubenswrapper[4830]: I0227 17:44:27.590901 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/09b3d7eb-5e19-47a1-81bb-a9e0755077ad-logs\") pod \"cinder-api-0\" (UID: \"09b3d7eb-5e19-47a1-81bb-a9e0755077ad\") " pod="openstack/cinder-api-0"
Feb 27 17:44:27 crc kubenswrapper[4830]: I0227 17:44:27.592787 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09b3d7eb-5e19-47a1-81bb-a9e0755077ad-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"09b3d7eb-5e19-47a1-81bb-a9e0755077ad\") " pod="openstack/cinder-api-0"
Feb 27 17:44:27 crc kubenswrapper[4830]: I0227 17:44:27.593984 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/09b3d7eb-5e19-47a1-81bb-a9e0755077ad-config-data-custom\") pod \"cinder-api-0\" (UID: \"09b3d7eb-5e19-47a1-81bb-a9e0755077ad\") " pod="openstack/cinder-api-0"
Feb 27 17:44:27 crc kubenswrapper[4830]: I0227 17:44:27.595293 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/09b3d7eb-5e19-47a1-81bb-a9e0755077ad-scripts\") pod \"cinder-api-0\" (UID: \"09b3d7eb-5e19-47a1-81bb-a9e0755077ad\") " pod="openstack/cinder-api-0"
Feb 27 17:44:27 crc kubenswrapper[4830]: I0227 17:44:27.596737 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09b3d7eb-5e19-47a1-81bb-a9e0755077ad-config-data\") pod \"cinder-api-0\" (UID: \"09b3d7eb-5e19-47a1-81bb-a9e0755077ad\") " pod="openstack/cinder-api-0"
Feb 27 17:44:27 crc kubenswrapper[4830]: I0227 17:44:27.605229 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwqqn\" (UniqueName: \"kubernetes.io/projected/09b3d7eb-5e19-47a1-81bb-a9e0755077ad-kube-api-access-dwqqn\") pod \"cinder-api-0\" (UID: \"09b3d7eb-5e19-47a1-81bb-a9e0755077ad\") " pod="openstack/cinder-api-0"
Feb 27 17:44:27 crc kubenswrapper[4830]: I0227 17:44:27.658905 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b8675bf5c-vgk78"
Feb 27 17:44:27 crc kubenswrapper[4830]: I0227 17:44:27.754677 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Feb 27 17:44:28 crc kubenswrapper[4830]: I0227 17:44:28.149192 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b8675bf5c-vgk78"]
Feb 27 17:44:28 crc kubenswrapper[4830]: W0227 17:44:28.149473 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda049e072_04be_4b81_8815_c5ee22647712.slice/crio-f47b0243ea4e8774352a2cc54c58c9cc173a0fede1cbe5b0ccfd65532aebfeb2 WatchSource:0}: Error finding container f47b0243ea4e8774352a2cc54c58c9cc173a0fede1cbe5b0ccfd65532aebfeb2: Status 404 returned error can't find the container with id f47b0243ea4e8774352a2cc54c58c9cc173a0fede1cbe5b0ccfd65532aebfeb2
Feb 27 17:44:28 crc kubenswrapper[4830]: I0227 17:44:28.270956 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Feb 27 17:44:28 crc kubenswrapper[4830]: W0227 17:44:28.283321 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod09b3d7eb_5e19_47a1_81bb_a9e0755077ad.slice/crio-70714150cc14c96f437a1ed6c910168cfb918219e95573230ea1d2dbf63a615e WatchSource:0}: Error finding container 70714150cc14c96f437a1ed6c910168cfb918219e95573230ea1d2dbf63a615e: Status 404 returned error can't find the container with id 70714150cc14c96f437a1ed6c910168cfb918219e95573230ea1d2dbf63a615e
Feb 27 17:44:28 crc kubenswrapper[4830]: I0227 17:44:28.849588 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"09b3d7eb-5e19-47a1-81bb-a9e0755077ad","Type":"ContainerStarted","Data":"25c54f075c8988085178cb7b935b2679011fae62263f07254bc9fadd5a34e79c"}
Feb 27 17:44:28 crc kubenswrapper[4830]: I0227 17:44:28.849967 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"09b3d7eb-5e19-47a1-81bb-a9e0755077ad","Type":"ContainerStarted","Data":"70714150cc14c96f437a1ed6c910168cfb918219e95573230ea1d2dbf63a615e"}
Feb 27 17:44:28 crc kubenswrapper[4830]: I0227 17:44:28.851238 4830 generic.go:334] "Generic (PLEG): container finished" podID="a049e072-04be-4b81-8815-c5ee22647712" containerID="b6c7280077cc8e953c7e95de0528cdaae07f64f34ba829ab82078f57ec612170" exitCode=0
Feb 27 17:44:28 crc kubenswrapper[4830]: I0227 17:44:28.851286 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b8675bf5c-vgk78" event={"ID":"a049e072-04be-4b81-8815-c5ee22647712","Type":"ContainerDied","Data":"b6c7280077cc8e953c7e95de0528cdaae07f64f34ba829ab82078f57ec612170"}
Feb 27 17:44:28 crc kubenswrapper[4830]: I0227 17:44:28.851314 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b8675bf5c-vgk78" event={"ID":"a049e072-04be-4b81-8815-c5ee22647712","Type":"ContainerStarted","Data":"f47b0243ea4e8774352a2cc54c58c9cc173a0fede1cbe5b0ccfd65532aebfeb2"}
Feb 27 17:44:29 crc kubenswrapper[4830]: I0227 17:44:29.862172 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"09b3d7eb-5e19-47a1-81bb-a9e0755077ad","Type":"ContainerStarted","Data":"63745ec9fad77b93e96cd00caeb0d073bc10ce82e0ce24b95627b855b5429092"}
Feb 27 17:44:29 crc kubenswrapper[4830]: I0227 17:44:29.863706 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0"
Feb 27 17:44:29 crc kubenswrapper[4830]: I0227 17:44:29.870687 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b8675bf5c-vgk78" event={"ID":"a049e072-04be-4b81-8815-c5ee22647712","Type":"ContainerStarted","Data":"ef7c245299c6e69b5d3a31720a0eba9f28969f2ca942e8a2965df2e339554965"}
Feb 27 17:44:29 crc kubenswrapper[4830]: I0227 17:44:29.871490 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6b8675bf5c-vgk78"
Feb 27 17:44:29 crc kubenswrapper[4830]: I0227 17:44:29.887696 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=2.887674317 podStartE2EDuration="2.887674317s" podCreationTimestamp="2026-02-27 17:44:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 17:44:29.881411146 +0000 UTC m=+5865.970683609" watchObservedRunningTime="2026-02-27 17:44:29.887674317 +0000 UTC m=+5865.976946780"
Feb 27 17:44:29 crc kubenswrapper[4830]: I0227 17:44:29.909880 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6b8675bf5c-vgk78" podStartSLOduration=2.9098641499999998 podStartE2EDuration="2.90986415s" podCreationTimestamp="2026-02-27 17:44:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 17:44:29.905448744 +0000 UTC m=+5865.994721207" watchObservedRunningTime="2026-02-27 17:44:29.90986415 +0000 UTC m=+5865.999136613"
Feb 27 17:44:30 crc kubenswrapper[4830]: I0227 17:44:30.197017 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Feb 27 17:44:30 crc kubenswrapper[4830]: I0227 17:44:30.197261 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="deab7dc6-3048-4721-8688-57ecae22876e" containerName="nova-metadata-log" containerID="cri-o://90775655a9f3edb46d15ed8b3765fda6c459f5737b5a9827205a5a1853341610" gracePeriod=30
Feb 27 17:44:30 crc kubenswrapper[4830]: I0227 17:44:30.197379 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="deab7dc6-3048-4721-8688-57ecae22876e" containerName="nova-metadata-metadata" containerID="cri-o://52db8ac141e0a1f002135cef6cba38b21657eeb4d0a72947ec49cbfed201a4c0" gracePeriod=30
Feb 27 17:44:30 crc kubenswrapper[4830]: I0227 17:44:30.210646 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Feb 27 17:44:30 crc kubenswrapper[4830]: I0227 17:44:30.210895 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="856c8d55-5c9d-4655-8752-63a97ecb38d2" containerName="nova-api-log" containerID="cri-o://a46c39a2dbec52dbace3d2bef0430644077cc0412e3682b6c23d5006347dc9c8" gracePeriod=30
Feb 27 17:44:30 crc kubenswrapper[4830]: I0227 17:44:30.210974 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="856c8d55-5c9d-4655-8752-63a97ecb38d2" containerName="nova-api-api" containerID="cri-o://27bac96151e1f53148780e96ad22610c28076ac65b866eb9d04b3a7b408b655a" gracePeriod=30
Feb 27 17:44:30 crc kubenswrapper[4830]: I0227 17:44:30.227754 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 27 17:44:30 crc kubenswrapper[4830]: I0227 17:44:30.228256 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="85f07766-38ac-48a4-9ed2-e87e5cc56093" containerName="nova-scheduler-scheduler" containerID="cri-o://2f6e826b7f5e36c9cf4e24d7c630f10292d1ee84e4660aa4c0c310faa3b95a9e" gracePeriod=30
Feb 27 17:44:30 crc kubenswrapper[4830]: I0227 17:44:30.262136 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Feb 27 17:44:30 crc kubenswrapper[4830]: I0227 17:44:30.262408 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell0-conductor-0" podUID="debf2adf-e44d-4329-9470-740f206ac43b" containerName="nova-cell0-conductor-conductor" containerID="cri-o://59134d028826964bffc3afa6087405079139f7a2f6866323cb23a0d7881aee4a" gracePeriod=30
Feb 27 17:44:30 crc kubenswrapper[4830]: I0227 17:44:30.269464 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Feb 27 17:44:30 crc kubenswrapper[4830]: I0227 17:44:30.269671 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="2bd39a49-2ce3-4ac6-aec1-316d99d5826c" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://f6d0526901a9a4b9aa9a02cef7cc69c87ccf982dce23e12e734ec2de215090d5" gracePeriod=30
Feb 27 17:44:30 crc kubenswrapper[4830]: I0227 17:44:30.341964 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Feb 27 17:44:30 crc kubenswrapper[4830]: I0227 17:44:30.342179 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-conductor-0" podUID="71167e3e-162d-4836-939e-0abbc7a1217c" containerName="nova-cell1-conductor-conductor" containerID="cri-o://7d7ada0491ba00be4bc229dd1643af1a324ae49a6bf62e3d0bb68329193d84a5" gracePeriod=30
Feb 27 17:44:30 crc kubenswrapper[4830]: I0227 17:44:30.894582 4830 generic.go:334] "Generic (PLEG): container finished" podID="deab7dc6-3048-4721-8688-57ecae22876e" containerID="90775655a9f3edb46d15ed8b3765fda6c459f5737b5a9827205a5a1853341610" exitCode=143
Feb 27 17:44:30 crc kubenswrapper[4830]: I0227 17:44:30.894815 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"deab7dc6-3048-4721-8688-57ecae22876e","Type":"ContainerDied","Data":"90775655a9f3edb46d15ed8b3765fda6c459f5737b5a9827205a5a1853341610"}
Feb 27 17:44:30 crc kubenswrapper[4830]: I0227 17:44:30.908361 4830 generic.go:334] "Generic (PLEG): container finished" podID="2bd39a49-2ce3-4ac6-aec1-316d99d5826c" containerID="f6d0526901a9a4b9aa9a02cef7cc69c87ccf982dce23e12e734ec2de215090d5" exitCode=0
Feb 27 17:44:30 crc kubenswrapper[4830]: I0227 17:44:30.908463 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"2bd39a49-2ce3-4ac6-aec1-316d99d5826c","Type":"ContainerDied","Data":"f6d0526901a9a4b9aa9a02cef7cc69c87ccf982dce23e12e734ec2de215090d5"}
Feb 27 17:44:30 crc kubenswrapper[4830]: I0227 17:44:30.913733 4830 generic.go:334] "Generic (PLEG): container finished" podID="856c8d55-5c9d-4655-8752-63a97ecb38d2" containerID="a46c39a2dbec52dbace3d2bef0430644077cc0412e3682b6c23d5006347dc9c8" exitCode=143
Feb 27 17:44:30 crc kubenswrapper[4830]: I0227 17:44:30.914769 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"856c8d55-5c9d-4655-8752-63a97ecb38d2","Type":"ContainerDied","Data":"a46c39a2dbec52dbace3d2bef0430644077cc0412e3682b6c23d5006347dc9c8"}
Feb 27 17:44:31 crc kubenswrapper[4830]: I0227 17:44:31.073699 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Feb 27 17:44:31 crc kubenswrapper[4830]: I0227 17:44:31.166307 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2bd39a49-2ce3-4ac6-aec1-316d99d5826c-combined-ca-bundle\") pod \"2bd39a49-2ce3-4ac6-aec1-316d99d5826c\" (UID: \"2bd39a49-2ce3-4ac6-aec1-316d99d5826c\") "
Feb 27 17:44:31 crc kubenswrapper[4830]: I0227 17:44:31.166696 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tcj2z\" (UniqueName: \"kubernetes.io/projected/2bd39a49-2ce3-4ac6-aec1-316d99d5826c-kube-api-access-tcj2z\") pod \"2bd39a49-2ce3-4ac6-aec1-316d99d5826c\" (UID: \"2bd39a49-2ce3-4ac6-aec1-316d99d5826c\") "
Feb 27 17:44:31 crc kubenswrapper[4830]: I0227 17:44:31.166829 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2bd39a49-2ce3-4ac6-aec1-316d99d5826c-config-data\") pod \"2bd39a49-2ce3-4ac6-aec1-316d99d5826c\" (UID: \"2bd39a49-2ce3-4ac6-aec1-316d99d5826c\") "
Feb 27 17:44:31 crc kubenswrapper[4830]: I0227 17:44:31.175294 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2bd39a49-2ce3-4ac6-aec1-316d99d5826c-kube-api-access-tcj2z" (OuterVolumeSpecName: "kube-api-access-tcj2z") pod "2bd39a49-2ce3-4ac6-aec1-316d99d5826c" (UID: "2bd39a49-2ce3-4ac6-aec1-316d99d5826c"). InnerVolumeSpecName "kube-api-access-tcj2z". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 17:44:31 crc kubenswrapper[4830]: I0227 17:44:31.195514 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2bd39a49-2ce3-4ac6-aec1-316d99d5826c-config-data" (OuterVolumeSpecName: "config-data") pod "2bd39a49-2ce3-4ac6-aec1-316d99d5826c" (UID: "2bd39a49-2ce3-4ac6-aec1-316d99d5826c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 17:44:31 crc kubenswrapper[4830]: I0227 17:44:31.201297 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2bd39a49-2ce3-4ac6-aec1-316d99d5826c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2bd39a49-2ce3-4ac6-aec1-316d99d5826c" (UID: "2bd39a49-2ce3-4ac6-aec1-316d99d5826c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 17:44:31 crc kubenswrapper[4830]: I0227 17:44:31.270497 4830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2bd39a49-2ce3-4ac6-aec1-316d99d5826c-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 27 17:44:31 crc kubenswrapper[4830]: I0227 17:44:31.270548 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tcj2z\" (UniqueName: \"kubernetes.io/projected/2bd39a49-2ce3-4ac6-aec1-316d99d5826c-kube-api-access-tcj2z\") on node \"crc\" DevicePath \"\""
Feb 27 17:44:31 crc kubenswrapper[4830]: I0227 17:44:31.270559 4830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2bd39a49-2ce3-4ac6-aec1-316d99d5826c-config-data\") on node \"crc\" DevicePath \"\""
Feb 27 17:44:31 crc kubenswrapper[4830]: E0227 17:44:31.668585 4830 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2f6e826b7f5e36c9cf4e24d7c630f10292d1ee84e4660aa4c0c310faa3b95a9e" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Feb 27 17:44:31 crc kubenswrapper[4830]: E0227 17:44:31.669986 4830 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2f6e826b7f5e36c9cf4e24d7c630f10292d1ee84e4660aa4c0c310faa3b95a9e" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Feb 27 17:44:31 crc kubenswrapper[4830]: E0227 17:44:31.671383 4830 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2f6e826b7f5e36c9cf4e24d7c630f10292d1ee84e4660aa4c0c310faa3b95a9e" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Feb 27 17:44:31 crc kubenswrapper[4830]: E0227 17:44:31.671457 4830 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="85f07766-38ac-48a4-9ed2-e87e5cc56093" containerName="nova-scheduler-scheduler"
Feb 27 17:44:31 crc kubenswrapper[4830]: I0227 17:44:31.680276 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0"
Feb 27 17:44:31 crc kubenswrapper[4830]: I0227 17:44:31.778765 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5hfpf\" (UniqueName: \"kubernetes.io/projected/71167e3e-162d-4836-939e-0abbc7a1217c-kube-api-access-5hfpf\") pod \"71167e3e-162d-4836-939e-0abbc7a1217c\" (UID: \"71167e3e-162d-4836-939e-0abbc7a1217c\") "
Feb 27 17:44:31 crc kubenswrapper[4830]: I0227 17:44:31.778890 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71167e3e-162d-4836-939e-0abbc7a1217c-combined-ca-bundle\") pod \"71167e3e-162d-4836-939e-0abbc7a1217c\" (UID: \"71167e3e-162d-4836-939e-0abbc7a1217c\") "
Feb 27 17:44:31 crc kubenswrapper[4830]: I0227 17:44:31.779133 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71167e3e-162d-4836-939e-0abbc7a1217c-config-data\") pod \"71167e3e-162d-4836-939e-0abbc7a1217c\" (UID: \"71167e3e-162d-4836-939e-0abbc7a1217c\") "
Feb 27 17:44:31 crc kubenswrapper[4830]: I0227 17:44:31.788451 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71167e3e-162d-4836-939e-0abbc7a1217c-kube-api-access-5hfpf" (OuterVolumeSpecName: "kube-api-access-5hfpf") pod "71167e3e-162d-4836-939e-0abbc7a1217c" (UID: "71167e3e-162d-4836-939e-0abbc7a1217c"). InnerVolumeSpecName "kube-api-access-5hfpf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 17:44:31 crc kubenswrapper[4830]: I0227 17:44:31.802268 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71167e3e-162d-4836-939e-0abbc7a1217c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "71167e3e-162d-4836-939e-0abbc7a1217c" (UID: "71167e3e-162d-4836-939e-0abbc7a1217c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 17:44:31 crc kubenswrapper[4830]: I0227 17:44:31.809116 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71167e3e-162d-4836-939e-0abbc7a1217c-config-data" (OuterVolumeSpecName: "config-data") pod "71167e3e-162d-4836-939e-0abbc7a1217c" (UID: "71167e3e-162d-4836-939e-0abbc7a1217c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 17:44:31 crc kubenswrapper[4830]: I0227 17:44:31.881366 4830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71167e3e-162d-4836-939e-0abbc7a1217c-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 27 17:44:31 crc kubenswrapper[4830]: I0227 17:44:31.881411 4830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71167e3e-162d-4836-939e-0abbc7a1217c-config-data\") on node \"crc\" DevicePath \"\""
Feb 27 17:44:31 crc kubenswrapper[4830]: I0227 17:44:31.881428 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5hfpf\" (UniqueName: \"kubernetes.io/projected/71167e3e-162d-4836-939e-0abbc7a1217c-kube-api-access-5hfpf\") on node \"crc\" DevicePath \"\""
Feb 27 17:44:31 crc kubenswrapper[4830]: I0227 17:44:31.934384 4830 generic.go:334] "Generic (PLEG): container finished" podID="71167e3e-162d-4836-939e-0abbc7a1217c" containerID="7d7ada0491ba00be4bc229dd1643af1a324ae49a6bf62e3d0bb68329193d84a5" exitCode=0
Feb 27 17:44:31 crc kubenswrapper[4830]: I0227 17:44:31.934444 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 27 17:44:31 crc kubenswrapper[4830]: I0227 17:44:31.934464 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"71167e3e-162d-4836-939e-0abbc7a1217c","Type":"ContainerDied","Data":"7d7ada0491ba00be4bc229dd1643af1a324ae49a6bf62e3d0bb68329193d84a5"} Feb 27 17:44:31 crc kubenswrapper[4830]: I0227 17:44:31.935658 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"71167e3e-162d-4836-939e-0abbc7a1217c","Type":"ContainerDied","Data":"62a2cbc065dd21bb8521d3ad3d54106d8235ade1cb92aadf9969c7828771ddae"} Feb 27 17:44:31 crc kubenswrapper[4830]: I0227 17:44:31.935696 4830 scope.go:117] "RemoveContainer" containerID="7d7ada0491ba00be4bc229dd1643af1a324ae49a6bf62e3d0bb68329193d84a5" Feb 27 17:44:31 crc kubenswrapper[4830]: I0227 17:44:31.945728 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"2bd39a49-2ce3-4ac6-aec1-316d99d5826c","Type":"ContainerDied","Data":"07196a3529fe89f221d626c8a6869243093fae3dd46ed8b1d6779427082481c3"} Feb 27 17:44:31 crc kubenswrapper[4830]: I0227 17:44:31.945937 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 27 17:44:31 crc kubenswrapper[4830]: I0227 17:44:31.971937 4830 scope.go:117] "RemoveContainer" containerID="7d7ada0491ba00be4bc229dd1643af1a324ae49a6bf62e3d0bb68329193d84a5" Feb 27 17:44:31 crc kubenswrapper[4830]: E0227 17:44:31.980443 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d7ada0491ba00be4bc229dd1643af1a324ae49a6bf62e3d0bb68329193d84a5\": container with ID starting with 7d7ada0491ba00be4bc229dd1643af1a324ae49a6bf62e3d0bb68329193d84a5 not found: ID does not exist" containerID="7d7ada0491ba00be4bc229dd1643af1a324ae49a6bf62e3d0bb68329193d84a5" Feb 27 17:44:31 crc kubenswrapper[4830]: I0227 17:44:31.980531 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d7ada0491ba00be4bc229dd1643af1a324ae49a6bf62e3d0bb68329193d84a5"} err="failed to get container status \"7d7ada0491ba00be4bc229dd1643af1a324ae49a6bf62e3d0bb68329193d84a5\": rpc error: code = NotFound desc = could not find container \"7d7ada0491ba00be4bc229dd1643af1a324ae49a6bf62e3d0bb68329193d84a5\": container with ID starting with 7d7ada0491ba00be4bc229dd1643af1a324ae49a6bf62e3d0bb68329193d84a5 not found: ID does not exist" Feb 27 17:44:31 crc kubenswrapper[4830]: I0227 17:44:31.980561 4830 scope.go:117] "RemoveContainer" containerID="f6d0526901a9a4b9aa9a02cef7cc69c87ccf982dce23e12e734ec2de215090d5" Feb 27 17:44:32 crc kubenswrapper[4830]: I0227 17:44:32.002801 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 27 17:44:32 crc kubenswrapper[4830]: I0227 17:44:32.015755 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 27 17:44:32 crc kubenswrapper[4830]: I0227 17:44:32.033044 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 27 17:44:32 crc kubenswrapper[4830]: E0227 
17:44:32.033776 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2bd39a49-2ce3-4ac6-aec1-316d99d5826c" containerName="nova-cell1-novncproxy-novncproxy" Feb 27 17:44:32 crc kubenswrapper[4830]: I0227 17:44:32.033792 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="2bd39a49-2ce3-4ac6-aec1-316d99d5826c" containerName="nova-cell1-novncproxy-novncproxy" Feb 27 17:44:32 crc kubenswrapper[4830]: E0227 17:44:32.033818 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71167e3e-162d-4836-939e-0abbc7a1217c" containerName="nova-cell1-conductor-conductor" Feb 27 17:44:32 crc kubenswrapper[4830]: I0227 17:44:32.033825 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="71167e3e-162d-4836-939e-0abbc7a1217c" containerName="nova-cell1-conductor-conductor" Feb 27 17:44:32 crc kubenswrapper[4830]: I0227 17:44:32.034210 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="2bd39a49-2ce3-4ac6-aec1-316d99d5826c" containerName="nova-cell1-novncproxy-novncproxy" Feb 27 17:44:32 crc kubenswrapper[4830]: I0227 17:44:32.034235 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="71167e3e-162d-4836-939e-0abbc7a1217c" containerName="nova-cell1-conductor-conductor" Feb 27 17:44:32 crc kubenswrapper[4830]: I0227 17:44:32.038384 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 27 17:44:32 crc kubenswrapper[4830]: I0227 17:44:32.043009 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 27 17:44:32 crc kubenswrapper[4830]: I0227 17:44:32.055841 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 27 17:44:32 crc kubenswrapper[4830]: I0227 17:44:32.065942 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 27 17:44:32 crc kubenswrapper[4830]: I0227 17:44:32.076031 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 27 17:44:32 crc kubenswrapper[4830]: I0227 17:44:32.084814 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 27 17:44:32 crc kubenswrapper[4830]: I0227 17:44:32.086243 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 27 17:44:32 crc kubenswrapper[4830]: I0227 17:44:32.088148 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 27 17:44:32 crc kubenswrapper[4830]: I0227 17:44:32.092322 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 27 17:44:32 crc kubenswrapper[4830]: I0227 17:44:32.093535 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d4f86df-5d6b-4fd2-8c50-e414adfda318-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"1d4f86df-5d6b-4fd2-8c50-e414adfda318\") " pod="openstack/nova-cell1-conductor-0" Feb 27 17:44:32 crc kubenswrapper[4830]: I0227 17:44:32.093600 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/1d4f86df-5d6b-4fd2-8c50-e414adfda318-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"1d4f86df-5d6b-4fd2-8c50-e414adfda318\") " pod="openstack/nova-cell1-conductor-0" Feb 27 17:44:32 crc kubenswrapper[4830]: I0227 17:44:32.093702 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7t6c2\" (UniqueName: \"kubernetes.io/projected/1d4f86df-5d6b-4fd2-8c50-e414adfda318-kube-api-access-7t6c2\") pod \"nova-cell1-conductor-0\" (UID: \"1d4f86df-5d6b-4fd2-8c50-e414adfda318\") " pod="openstack/nova-cell1-conductor-0" Feb 27 17:44:32 crc kubenswrapper[4830]: I0227 17:44:32.194777 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85168f4c-a1d8-408f-a88c-269e899d29d9-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"85168f4c-a1d8-408f-a88c-269e899d29d9\") " pod="openstack/nova-cell1-novncproxy-0" Feb 27 17:44:32 crc kubenswrapper[4830]: I0227 17:44:32.194912 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85168f4c-a1d8-408f-a88c-269e899d29d9-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"85168f4c-a1d8-408f-a88c-269e899d29d9\") " pod="openstack/nova-cell1-novncproxy-0" Feb 27 17:44:32 crc kubenswrapper[4830]: I0227 17:44:32.195035 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8p88\" (UniqueName: \"kubernetes.io/projected/85168f4c-a1d8-408f-a88c-269e899d29d9-kube-api-access-d8p88\") pod \"nova-cell1-novncproxy-0\" (UID: \"85168f4c-a1d8-408f-a88c-269e899d29d9\") " pod="openstack/nova-cell1-novncproxy-0" Feb 27 17:44:32 crc kubenswrapper[4830]: I0227 17:44:32.195100 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/1d4f86df-5d6b-4fd2-8c50-e414adfda318-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"1d4f86df-5d6b-4fd2-8c50-e414adfda318\") " pod="openstack/nova-cell1-conductor-0" Feb 27 17:44:32 crc kubenswrapper[4830]: I0227 17:44:32.195213 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d4f86df-5d6b-4fd2-8c50-e414adfda318-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"1d4f86df-5d6b-4fd2-8c50-e414adfda318\") " pod="openstack/nova-cell1-conductor-0" Feb 27 17:44:32 crc kubenswrapper[4830]: I0227 17:44:32.195277 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7t6c2\" (UniqueName: \"kubernetes.io/projected/1d4f86df-5d6b-4fd2-8c50-e414adfda318-kube-api-access-7t6c2\") pod \"nova-cell1-conductor-0\" (UID: \"1d4f86df-5d6b-4fd2-8c50-e414adfda318\") " pod="openstack/nova-cell1-conductor-0" Feb 27 17:44:32 crc kubenswrapper[4830]: I0227 17:44:32.199591 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d4f86df-5d6b-4fd2-8c50-e414adfda318-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"1d4f86df-5d6b-4fd2-8c50-e414adfda318\") " pod="openstack/nova-cell1-conductor-0" Feb 27 17:44:32 crc kubenswrapper[4830]: I0227 17:44:32.200848 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d4f86df-5d6b-4fd2-8c50-e414adfda318-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"1d4f86df-5d6b-4fd2-8c50-e414adfda318\") " pod="openstack/nova-cell1-conductor-0" Feb 27 17:44:32 crc kubenswrapper[4830]: I0227 17:44:32.220984 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7t6c2\" (UniqueName: \"kubernetes.io/projected/1d4f86df-5d6b-4fd2-8c50-e414adfda318-kube-api-access-7t6c2\") pod \"nova-cell1-conductor-0\" 
(UID: \"1d4f86df-5d6b-4fd2-8c50-e414adfda318\") " pod="openstack/nova-cell1-conductor-0" Feb 27 17:44:32 crc kubenswrapper[4830]: I0227 17:44:32.297529 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85168f4c-a1d8-408f-a88c-269e899d29d9-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"85168f4c-a1d8-408f-a88c-269e899d29d9\") " pod="openstack/nova-cell1-novncproxy-0" Feb 27 17:44:32 crc kubenswrapper[4830]: I0227 17:44:32.297645 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85168f4c-a1d8-408f-a88c-269e899d29d9-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"85168f4c-a1d8-408f-a88c-269e899d29d9\") " pod="openstack/nova-cell1-novncproxy-0" Feb 27 17:44:32 crc kubenswrapper[4830]: I0227 17:44:32.297729 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8p88\" (UniqueName: \"kubernetes.io/projected/85168f4c-a1d8-408f-a88c-269e899d29d9-kube-api-access-d8p88\") pod \"nova-cell1-novncproxy-0\" (UID: \"85168f4c-a1d8-408f-a88c-269e899d29d9\") " pod="openstack/nova-cell1-novncproxy-0" Feb 27 17:44:32 crc kubenswrapper[4830]: I0227 17:44:32.304293 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85168f4c-a1d8-408f-a88c-269e899d29d9-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"85168f4c-a1d8-408f-a88c-269e899d29d9\") " pod="openstack/nova-cell1-novncproxy-0" Feb 27 17:44:32 crc kubenswrapper[4830]: I0227 17:44:32.312385 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85168f4c-a1d8-408f-a88c-269e899d29d9-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"85168f4c-a1d8-408f-a88c-269e899d29d9\") " pod="openstack/nova-cell1-novncproxy-0" Feb 27 17:44:32 crc 
kubenswrapper[4830]: I0227 17:44:32.316546 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8p88\" (UniqueName: \"kubernetes.io/projected/85168f4c-a1d8-408f-a88c-269e899d29d9-kube-api-access-d8p88\") pod \"nova-cell1-novncproxy-0\" (UID: \"85168f4c-a1d8-408f-a88c-269e899d29d9\") " pod="openstack/nova-cell1-novncproxy-0" Feb 27 17:44:32 crc kubenswrapper[4830]: I0227 17:44:32.379871 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 27 17:44:32 crc kubenswrapper[4830]: I0227 17:44:32.417612 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 27 17:44:32 crc kubenswrapper[4830]: I0227 17:44:32.789550 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2bd39a49-2ce3-4ac6-aec1-316d99d5826c" path="/var/lib/kubelet/pods/2bd39a49-2ce3-4ac6-aec1-316d99d5826c/volumes" Feb 27 17:44:32 crc kubenswrapper[4830]: I0227 17:44:32.791161 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71167e3e-162d-4836-939e-0abbc7a1217c" path="/var/lib/kubelet/pods/71167e3e-162d-4836-939e-0abbc7a1217c/volumes" Feb 27 17:44:32 crc kubenswrapper[4830]: I0227 17:44:32.959421 4830 generic.go:334] "Generic (PLEG): container finished" podID="debf2adf-e44d-4329-9470-740f206ac43b" containerID="59134d028826964bffc3afa6087405079139f7a2f6866323cb23a0d7881aee4a" exitCode=0 Feb 27 17:44:32 crc kubenswrapper[4830]: I0227 17:44:32.959580 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"debf2adf-e44d-4329-9470-740f206ac43b","Type":"ContainerDied","Data":"59134d028826964bffc3afa6087405079139f7a2f6866323cb23a0d7881aee4a"} Feb 27 17:44:33 crc kubenswrapper[4830]: W0227 17:44:33.014189 4830 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1d4f86df_5d6b_4fd2_8c50_e414adfda318.slice/crio-aac34bb40503b75978948a5f5561791496b9c5bc19aa8aca395264417c543c34 WatchSource:0}: Error finding container aac34bb40503b75978948a5f5561791496b9c5bc19aa8aca395264417c543c34: Status 404 returned error can't find the container with id aac34bb40503b75978948a5f5561791496b9c5bc19aa8aca395264417c543c34 Feb 27 17:44:33 crc kubenswrapper[4830]: I0227 17:44:33.016765 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 27 17:44:33 crc kubenswrapper[4830]: W0227 17:44:33.101517 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod85168f4c_a1d8_408f_a88c_269e899d29d9.slice/crio-31ebf173313760ecacd20ad473dcb5a3d28cddef99a37f62043aec4e65294f33 WatchSource:0}: Error finding container 31ebf173313760ecacd20ad473dcb5a3d28cddef99a37f62043aec4e65294f33: Status 404 returned error can't find the container with id 31ebf173313760ecacd20ad473dcb5a3d28cddef99a37f62043aec4e65294f33 Feb 27 17:44:33 crc kubenswrapper[4830]: I0227 17:44:33.104777 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 27 17:44:33 crc kubenswrapper[4830]: I0227 17:44:33.365252 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="deab7dc6-3048-4721-8688-57ecae22876e" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"http://10.217.1.112:8775/\": read tcp 10.217.0.2:49744->10.217.1.112:8775: read: connection reset by peer" Feb 27 17:44:33 crc kubenswrapper[4830]: I0227 17:44:33.365275 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="deab7dc6-3048-4721-8688-57ecae22876e" containerName="nova-metadata-log" probeResult="failure" output="Get \"http://10.217.1.112:8775/\": read tcp 10.217.0.2:49746->10.217.1.112:8775: read: 
connection reset by peer" Feb 27 17:44:33 crc kubenswrapper[4830]: I0227 17:44:33.594784 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 27 17:44:33 crc kubenswrapper[4830]: I0227 17:44:33.624857 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bs869\" (UniqueName: \"kubernetes.io/projected/debf2adf-e44d-4329-9470-740f206ac43b-kube-api-access-bs869\") pod \"debf2adf-e44d-4329-9470-740f206ac43b\" (UID: \"debf2adf-e44d-4329-9470-740f206ac43b\") " Feb 27 17:44:33 crc kubenswrapper[4830]: I0227 17:44:33.624996 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/debf2adf-e44d-4329-9470-740f206ac43b-combined-ca-bundle\") pod \"debf2adf-e44d-4329-9470-740f206ac43b\" (UID: \"debf2adf-e44d-4329-9470-740f206ac43b\") " Feb 27 17:44:33 crc kubenswrapper[4830]: I0227 17:44:33.625043 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/debf2adf-e44d-4329-9470-740f206ac43b-config-data\") pod \"debf2adf-e44d-4329-9470-740f206ac43b\" (UID: \"debf2adf-e44d-4329-9470-740f206ac43b\") " Feb 27 17:44:33 crc kubenswrapper[4830]: I0227 17:44:33.632492 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/debf2adf-e44d-4329-9470-740f206ac43b-kube-api-access-bs869" (OuterVolumeSpecName: "kube-api-access-bs869") pod "debf2adf-e44d-4329-9470-740f206ac43b" (UID: "debf2adf-e44d-4329-9470-740f206ac43b"). InnerVolumeSpecName "kube-api-access-bs869". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:44:33 crc kubenswrapper[4830]: I0227 17:44:33.659258 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/debf2adf-e44d-4329-9470-740f206ac43b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "debf2adf-e44d-4329-9470-740f206ac43b" (UID: "debf2adf-e44d-4329-9470-740f206ac43b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:44:33 crc kubenswrapper[4830]: I0227 17:44:33.661040 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/debf2adf-e44d-4329-9470-740f206ac43b-config-data" (OuterVolumeSpecName: "config-data") pod "debf2adf-e44d-4329-9470-740f206ac43b" (UID: "debf2adf-e44d-4329-9470-740f206ac43b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:44:33 crc kubenswrapper[4830]: I0227 17:44:33.728291 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bs869\" (UniqueName: \"kubernetes.io/projected/debf2adf-e44d-4329-9470-740f206ac43b-kube-api-access-bs869\") on node \"crc\" DevicePath \"\"" Feb 27 17:44:33 crc kubenswrapper[4830]: I0227 17:44:33.728331 4830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/debf2adf-e44d-4329-9470-740f206ac43b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 17:44:33 crc kubenswrapper[4830]: I0227 17:44:33.728342 4830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/debf2adf-e44d-4329-9470-740f206ac43b-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 17:44:33 crc kubenswrapper[4830]: I0227 17:44:33.763611 4830 scope.go:117] "RemoveContainer" containerID="0f73af46b765b3d98f3f5e9883b49887ac4f2ac3485e91a001e18c5d351646d8" Feb 27 17:44:33 crc kubenswrapper[4830]: E0227 17:44:33.764036 4830 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 17:44:33 crc kubenswrapper[4830]: I0227 17:44:33.806446 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 27 17:44:33 crc kubenswrapper[4830]: I0227 17:44:33.832069 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/deab7dc6-3048-4721-8688-57ecae22876e-config-data\") pod \"deab7dc6-3048-4721-8688-57ecae22876e\" (UID: \"deab7dc6-3048-4721-8688-57ecae22876e\") " Feb 27 17:44:33 crc kubenswrapper[4830]: I0227 17:44:33.832169 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/deab7dc6-3048-4721-8688-57ecae22876e-logs\") pod \"deab7dc6-3048-4721-8688-57ecae22876e\" (UID: \"deab7dc6-3048-4721-8688-57ecae22876e\") " Feb 27 17:44:33 crc kubenswrapper[4830]: I0227 17:44:33.832246 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5fsmt\" (UniqueName: \"kubernetes.io/projected/deab7dc6-3048-4721-8688-57ecae22876e-kube-api-access-5fsmt\") pod \"deab7dc6-3048-4721-8688-57ecae22876e\" (UID: \"deab7dc6-3048-4721-8688-57ecae22876e\") " Feb 27 17:44:33 crc kubenswrapper[4830]: I0227 17:44:33.832310 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/deab7dc6-3048-4721-8688-57ecae22876e-combined-ca-bundle\") pod \"deab7dc6-3048-4721-8688-57ecae22876e\" (UID: 
\"deab7dc6-3048-4721-8688-57ecae22876e\") " Feb 27 17:44:33 crc kubenswrapper[4830]: I0227 17:44:33.834512 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/deab7dc6-3048-4721-8688-57ecae22876e-logs" (OuterVolumeSpecName: "logs") pod "deab7dc6-3048-4721-8688-57ecae22876e" (UID: "deab7dc6-3048-4721-8688-57ecae22876e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 17:44:33 crc kubenswrapper[4830]: I0227 17:44:33.838499 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/deab7dc6-3048-4721-8688-57ecae22876e-kube-api-access-5fsmt" (OuterVolumeSpecName: "kube-api-access-5fsmt") pod "deab7dc6-3048-4721-8688-57ecae22876e" (UID: "deab7dc6-3048-4721-8688-57ecae22876e"). InnerVolumeSpecName "kube-api-access-5fsmt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:44:33 crc kubenswrapper[4830]: I0227 17:44:33.899353 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 27 17:44:33 crc kubenswrapper[4830]: I0227 17:44:33.903263 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/deab7dc6-3048-4721-8688-57ecae22876e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "deab7dc6-3048-4721-8688-57ecae22876e" (UID: "deab7dc6-3048-4721-8688-57ecae22876e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:44:33 crc kubenswrapper[4830]: I0227 17:44:33.906087 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/deab7dc6-3048-4721-8688-57ecae22876e-config-data" (OuterVolumeSpecName: "config-data") pod "deab7dc6-3048-4721-8688-57ecae22876e" (UID: "deab7dc6-3048-4721-8688-57ecae22876e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:44:33 crc kubenswrapper[4830]: I0227 17:44:33.934247 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/856c8d55-5c9d-4655-8752-63a97ecb38d2-config-data\") pod \"856c8d55-5c9d-4655-8752-63a97ecb38d2\" (UID: \"856c8d55-5c9d-4655-8752-63a97ecb38d2\") " Feb 27 17:44:33 crc kubenswrapper[4830]: I0227 17:44:33.934332 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/856c8d55-5c9d-4655-8752-63a97ecb38d2-combined-ca-bundle\") pod \"856c8d55-5c9d-4655-8752-63a97ecb38d2\" (UID: \"856c8d55-5c9d-4655-8752-63a97ecb38d2\") " Feb 27 17:44:33 crc kubenswrapper[4830]: I0227 17:44:33.934423 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rbkqs\" (UniqueName: \"kubernetes.io/projected/856c8d55-5c9d-4655-8752-63a97ecb38d2-kube-api-access-rbkqs\") pod \"856c8d55-5c9d-4655-8752-63a97ecb38d2\" (UID: \"856c8d55-5c9d-4655-8752-63a97ecb38d2\") " Feb 27 17:44:33 crc kubenswrapper[4830]: I0227 17:44:33.934455 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/856c8d55-5c9d-4655-8752-63a97ecb38d2-logs\") pod \"856c8d55-5c9d-4655-8752-63a97ecb38d2\" (UID: \"856c8d55-5c9d-4655-8752-63a97ecb38d2\") " Feb 27 17:44:33 crc kubenswrapper[4830]: I0227 17:44:33.934802 4830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/deab7dc6-3048-4721-8688-57ecae22876e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 17:44:33 crc kubenswrapper[4830]: I0227 17:44:33.934819 4830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/deab7dc6-3048-4721-8688-57ecae22876e-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 
17:44:33 crc kubenswrapper[4830]: I0227 17:44:33.934830 4830 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/deab7dc6-3048-4721-8688-57ecae22876e-logs\") on node \"crc\" DevicePath \"\""
Feb 27 17:44:33 crc kubenswrapper[4830]: I0227 17:44:33.934845 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5fsmt\" (UniqueName: \"kubernetes.io/projected/deab7dc6-3048-4721-8688-57ecae22876e-kube-api-access-5fsmt\") on node \"crc\" DevicePath \"\""
Feb 27 17:44:33 crc kubenswrapper[4830]: I0227 17:44:33.935279 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/856c8d55-5c9d-4655-8752-63a97ecb38d2-logs" (OuterVolumeSpecName: "logs") pod "856c8d55-5c9d-4655-8752-63a97ecb38d2" (UID: "856c8d55-5c9d-4655-8752-63a97ecb38d2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 27 17:44:33 crc kubenswrapper[4830]: I0227 17:44:33.944924 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/856c8d55-5c9d-4655-8752-63a97ecb38d2-kube-api-access-rbkqs" (OuterVolumeSpecName: "kube-api-access-rbkqs") pod "856c8d55-5c9d-4655-8752-63a97ecb38d2" (UID: "856c8d55-5c9d-4655-8752-63a97ecb38d2"). InnerVolumeSpecName "kube-api-access-rbkqs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 17:44:33 crc kubenswrapper[4830]: I0227 17:44:33.969710 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/856c8d55-5c9d-4655-8752-63a97ecb38d2-config-data" (OuterVolumeSpecName: "config-data") pod "856c8d55-5c9d-4655-8752-63a97ecb38d2" (UID: "856c8d55-5c9d-4655-8752-63a97ecb38d2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 17:44:33 crc kubenswrapper[4830]: I0227 17:44:33.969828 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/856c8d55-5c9d-4655-8752-63a97ecb38d2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "856c8d55-5c9d-4655-8752-63a97ecb38d2" (UID: "856c8d55-5c9d-4655-8752-63a97ecb38d2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 17:44:34 crc kubenswrapper[4830]: I0227 17:44:33.997344 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"85168f4c-a1d8-408f-a88c-269e899d29d9","Type":"ContainerStarted","Data":"a9c3c2eb39acfa472bf482a8a42845867719a89517102e56bb27d5b9299283b4"}
Feb 27 17:44:34 crc kubenswrapper[4830]: I0227 17:44:33.997436 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"85168f4c-a1d8-408f-a88c-269e899d29d9","Type":"ContainerStarted","Data":"31ebf173313760ecacd20ad473dcb5a3d28cddef99a37f62043aec4e65294f33"}
Feb 27 17:44:34 crc kubenswrapper[4830]: I0227 17:44:34.012603 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"1d4f86df-5d6b-4fd2-8c50-e414adfda318","Type":"ContainerStarted","Data":"d066df29ee3627f861c91fbaf814c7285af92383ba9cf4c04ceba18ecf6f285e"}
Feb 27 17:44:34 crc kubenswrapper[4830]: I0227 17:44:34.012658 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"1d4f86df-5d6b-4fd2-8c50-e414adfda318","Type":"ContainerStarted","Data":"aac34bb40503b75978948a5f5561791496b9c5bc19aa8aca395264417c543c34"}
Feb 27 17:44:34 crc kubenswrapper[4830]: I0227 17:44:34.012768 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0"
Feb 27 17:44:34 crc kubenswrapper[4830]: I0227 17:44:34.020633 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"debf2adf-e44d-4329-9470-740f206ac43b","Type":"ContainerDied","Data":"329ca1b76e952ba4c5dcc1fbcf4234a5f6e1e9af7f34545d37e2e7a78df18263"}
Feb 27 17:44:34 crc kubenswrapper[4830]: I0227 17:44:34.020686 4830 scope.go:117] "RemoveContainer" containerID="59134d028826964bffc3afa6087405079139f7a2f6866323cb23a0d7881aee4a"
Feb 27 17:44:34 crc kubenswrapper[4830]: I0227 17:44:34.020799 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Feb 27 17:44:34 crc kubenswrapper[4830]: I0227 17:44:34.031893 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.031863 podStartE2EDuration="3.031863s" podCreationTimestamp="2026-02-27 17:44:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 17:44:34.017595637 +0000 UTC m=+5870.106868090" watchObservedRunningTime="2026-02-27 17:44:34.031863 +0000 UTC m=+5870.121135483"
Feb 27 17:44:34 crc kubenswrapper[4830]: I0227 17:44:34.032436 4830 generic.go:334] "Generic (PLEG): container finished" podID="deab7dc6-3048-4721-8688-57ecae22876e" containerID="52db8ac141e0a1f002135cef6cba38b21657eeb4d0a72947ec49cbfed201a4c0" exitCode=0
Feb 27 17:44:34 crc kubenswrapper[4830]: I0227 17:44:34.032537 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 27 17:44:34 crc kubenswrapper[4830]: I0227 17:44:34.033736 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"deab7dc6-3048-4721-8688-57ecae22876e","Type":"ContainerDied","Data":"52db8ac141e0a1f002135cef6cba38b21657eeb4d0a72947ec49cbfed201a4c0"}
Feb 27 17:44:34 crc kubenswrapper[4830]: I0227 17:44:34.033778 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"deab7dc6-3048-4721-8688-57ecae22876e","Type":"ContainerDied","Data":"d56b6fda52a516f080cbeab8f7ea6c02fab876e89cd44f0ba5ec3935d574f240"}
Feb 27 17:44:34 crc kubenswrapper[4830]: I0227 17:44:34.046188 4830 generic.go:334] "Generic (PLEG): container finished" podID="856c8d55-5c9d-4655-8752-63a97ecb38d2" containerID="27bac96151e1f53148780e96ad22610c28076ac65b866eb9d04b3a7b408b655a" exitCode=0
Feb 27 17:44:34 crc kubenswrapper[4830]: I0227 17:44:34.046234 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"856c8d55-5c9d-4655-8752-63a97ecb38d2","Type":"ContainerDied","Data":"27bac96151e1f53148780e96ad22610c28076ac65b866eb9d04b3a7b408b655a"}
Feb 27 17:44:34 crc kubenswrapper[4830]: I0227 17:44:34.046261 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"856c8d55-5c9d-4655-8752-63a97ecb38d2","Type":"ContainerDied","Data":"81c2b555d55c514bb3708647e6200fba14071685734af2c93a14068002ae5e2f"}
Feb 27 17:44:34 crc kubenswrapper[4830]: I0227 17:44:34.046317 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 27 17:44:34 crc kubenswrapper[4830]: I0227 17:44:34.050589 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=3.050572451 podStartE2EDuration="3.050572451s" podCreationTimestamp="2026-02-27 17:44:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 17:44:34.040734864 +0000 UTC m=+5870.130007327" watchObservedRunningTime="2026-02-27 17:44:34.050572451 +0000 UTC m=+5870.139844914"
Feb 27 17:44:34 crc kubenswrapper[4830]: I0227 17:44:34.053455 4830 scope.go:117] "RemoveContainer" containerID="52db8ac141e0a1f002135cef6cba38b21657eeb4d0a72947ec49cbfed201a4c0"
Feb 27 17:44:34 crc kubenswrapper[4830]: I0227 17:44:34.062709 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rbkqs\" (UniqueName: \"kubernetes.io/projected/856c8d55-5c9d-4655-8752-63a97ecb38d2-kube-api-access-rbkqs\") on node \"crc\" DevicePath \"\""
Feb 27 17:44:34 crc kubenswrapper[4830]: I0227 17:44:34.062744 4830 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/856c8d55-5c9d-4655-8752-63a97ecb38d2-logs\") on node \"crc\" DevicePath \"\""
Feb 27 17:44:34 crc kubenswrapper[4830]: I0227 17:44:34.062756 4830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/856c8d55-5c9d-4655-8752-63a97ecb38d2-config-data\") on node \"crc\" DevicePath \"\""
Feb 27 17:44:34 crc kubenswrapper[4830]: I0227 17:44:34.062765 4830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/856c8d55-5c9d-4655-8752-63a97ecb38d2-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 27 17:44:34 crc kubenswrapper[4830]: I0227 17:44:34.097787 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Feb 27 17:44:34 crc kubenswrapper[4830]: I0227 17:44:34.100077 4830 scope.go:117] "RemoveContainer" containerID="90775655a9f3edb46d15ed8b3765fda6c459f5737b5a9827205a5a1853341610"
Feb 27 17:44:34 crc kubenswrapper[4830]: I0227 17:44:34.120932 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Feb 27 17:44:34 crc kubenswrapper[4830]: I0227 17:44:34.131310 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Feb 27 17:44:34 crc kubenswrapper[4830]: I0227 17:44:34.141022 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Feb 27 17:44:34 crc kubenswrapper[4830]: I0227 17:44:34.151407 4830 scope.go:117] "RemoveContainer" containerID="52db8ac141e0a1f002135cef6cba38b21657eeb4d0a72947ec49cbfed201a4c0"
Feb 27 17:44:34 crc kubenswrapper[4830]: E0227 17:44:34.151857 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"52db8ac141e0a1f002135cef6cba38b21657eeb4d0a72947ec49cbfed201a4c0\": container with ID starting with 52db8ac141e0a1f002135cef6cba38b21657eeb4d0a72947ec49cbfed201a4c0 not found: ID does not exist" containerID="52db8ac141e0a1f002135cef6cba38b21657eeb4d0a72947ec49cbfed201a4c0"
Feb 27 17:44:34 crc kubenswrapper[4830]: I0227 17:44:34.151892 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52db8ac141e0a1f002135cef6cba38b21657eeb4d0a72947ec49cbfed201a4c0"} err="failed to get container status \"52db8ac141e0a1f002135cef6cba38b21657eeb4d0a72947ec49cbfed201a4c0\": rpc error: code = NotFound desc = could not find container \"52db8ac141e0a1f002135cef6cba38b21657eeb4d0a72947ec49cbfed201a4c0\": container with ID starting with 52db8ac141e0a1f002135cef6cba38b21657eeb4d0a72947ec49cbfed201a4c0 not found: ID does not exist"
Feb 27 17:44:34 crc kubenswrapper[4830]: I0227 17:44:34.151914 4830 scope.go:117] "RemoveContainer" containerID="90775655a9f3edb46d15ed8b3765fda6c459f5737b5a9827205a5a1853341610"
Feb 27 17:44:34 crc kubenswrapper[4830]: E0227 17:44:34.152374 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"90775655a9f3edb46d15ed8b3765fda6c459f5737b5a9827205a5a1853341610\": container with ID starting with 90775655a9f3edb46d15ed8b3765fda6c459f5737b5a9827205a5a1853341610 not found: ID does not exist" containerID="90775655a9f3edb46d15ed8b3765fda6c459f5737b5a9827205a5a1853341610"
Feb 27 17:44:34 crc kubenswrapper[4830]: I0227 17:44:34.152400 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90775655a9f3edb46d15ed8b3765fda6c459f5737b5a9827205a5a1853341610"} err="failed to get container status \"90775655a9f3edb46d15ed8b3765fda6c459f5737b5a9827205a5a1853341610\": rpc error: code = NotFound desc = could not find container \"90775655a9f3edb46d15ed8b3765fda6c459f5737b5a9827205a5a1853341610\": container with ID starting with 90775655a9f3edb46d15ed8b3765fda6c459f5737b5a9827205a5a1853341610 not found: ID does not exist"
Feb 27 17:44:34 crc kubenswrapper[4830]: I0227 17:44:34.152415 4830 scope.go:117] "RemoveContainer" containerID="27bac96151e1f53148780e96ad22610c28076ac65b866eb9d04b3a7b408b655a"
Feb 27 17:44:34 crc kubenswrapper[4830]: I0227 17:44:34.152500 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"]
Feb 27 17:44:34 crc kubenswrapper[4830]: E0227 17:44:34.152889 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="debf2adf-e44d-4329-9470-740f206ac43b" containerName="nova-cell0-conductor-conductor"
Feb 27 17:44:34 crc kubenswrapper[4830]: I0227 17:44:34.152907 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="debf2adf-e44d-4329-9470-740f206ac43b" containerName="nova-cell0-conductor-conductor"
Feb 27 17:44:34 crc kubenswrapper[4830]: E0227 17:44:34.152966 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="deab7dc6-3048-4721-8688-57ecae22876e" containerName="nova-metadata-log"
Feb 27 17:44:34 crc kubenswrapper[4830]: I0227 17:44:34.152974 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="deab7dc6-3048-4721-8688-57ecae22876e" containerName="nova-metadata-log"
Feb 27 17:44:34 crc kubenswrapper[4830]: E0227 17:44:34.152991 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="856c8d55-5c9d-4655-8752-63a97ecb38d2" containerName="nova-api-api"
Feb 27 17:44:34 crc kubenswrapper[4830]: I0227 17:44:34.152997 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="856c8d55-5c9d-4655-8752-63a97ecb38d2" containerName="nova-api-api"
Feb 27 17:44:34 crc kubenswrapper[4830]: E0227 17:44:34.153018 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="deab7dc6-3048-4721-8688-57ecae22876e" containerName="nova-metadata-metadata"
Feb 27 17:44:34 crc kubenswrapper[4830]: I0227 17:44:34.153025 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="deab7dc6-3048-4721-8688-57ecae22876e" containerName="nova-metadata-metadata"
Feb 27 17:44:34 crc kubenswrapper[4830]: E0227 17:44:34.153033 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="856c8d55-5c9d-4655-8752-63a97ecb38d2" containerName="nova-api-log"
Feb 27 17:44:34 crc kubenswrapper[4830]: I0227 17:44:34.153040 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="856c8d55-5c9d-4655-8752-63a97ecb38d2" containerName="nova-api-log"
Feb 27 17:44:34 crc kubenswrapper[4830]: I0227 17:44:34.153207 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="deab7dc6-3048-4721-8688-57ecae22876e" containerName="nova-metadata-log"
Feb 27 17:44:34 crc kubenswrapper[4830]: I0227 17:44:34.153225 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="debf2adf-e44d-4329-9470-740f206ac43b" containerName="nova-cell0-conductor-conductor"
Feb 27 17:44:34 crc kubenswrapper[4830]: I0227 17:44:34.153239 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="deab7dc6-3048-4721-8688-57ecae22876e" containerName="nova-metadata-metadata"
Feb 27 17:44:34 crc kubenswrapper[4830]: I0227 17:44:34.153253 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="856c8d55-5c9d-4655-8752-63a97ecb38d2" containerName="nova-api-log"
Feb 27 17:44:34 crc kubenswrapper[4830]: I0227 17:44:34.153264 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="856c8d55-5c9d-4655-8752-63a97ecb38d2" containerName="nova-api-api"
Feb 27 17:44:34 crc kubenswrapper[4830]: I0227 17:44:34.153894 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Feb 27 17:44:34 crc kubenswrapper[4830]: I0227 17:44:34.159350 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data"
Feb 27 17:44:34 crc kubenswrapper[4830]: I0227 17:44:34.159886 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Feb 27 17:44:34 crc kubenswrapper[4830]: I0227 17:44:34.164979 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4gvl\" (UniqueName: \"kubernetes.io/projected/9854db0d-60b3-462b-818b-9fa262f89cb4-kube-api-access-s4gvl\") pod \"nova-cell0-conductor-0\" (UID: \"9854db0d-60b3-462b-818b-9fa262f89cb4\") " pod="openstack/nova-cell0-conductor-0"
Feb 27 17:44:34 crc kubenswrapper[4830]: I0227 17:44:34.178106 4830 scope.go:117] "RemoveContainer" containerID="a46c39a2dbec52dbace3d2bef0430644077cc0412e3682b6c23d5006347dc9c8"
Feb 27 17:44:34 crc kubenswrapper[4830]: I0227 17:44:34.180196 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9854db0d-60b3-462b-818b-9fa262f89cb4-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"9854db0d-60b3-462b-818b-9fa262f89cb4\") " pod="openstack/nova-cell0-conductor-0"
Feb 27 17:44:34 crc kubenswrapper[4830]: I0227 17:44:34.180327 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9854db0d-60b3-462b-818b-9fa262f89cb4-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"9854db0d-60b3-462b-818b-9fa262f89cb4\") " pod="openstack/nova-cell0-conductor-0"
Feb 27 17:44:34 crc kubenswrapper[4830]: I0227 17:44:34.198519 4830 scope.go:117] "RemoveContainer" containerID="27bac96151e1f53148780e96ad22610c28076ac65b866eb9d04b3a7b408b655a"
Feb 27 17:44:34 crc kubenswrapper[4830]: E0227 17:44:34.200568 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"27bac96151e1f53148780e96ad22610c28076ac65b866eb9d04b3a7b408b655a\": container with ID starting with 27bac96151e1f53148780e96ad22610c28076ac65b866eb9d04b3a7b408b655a not found: ID does not exist" containerID="27bac96151e1f53148780e96ad22610c28076ac65b866eb9d04b3a7b408b655a"
Feb 27 17:44:34 crc kubenswrapper[4830]: I0227 17:44:34.200626 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"27bac96151e1f53148780e96ad22610c28076ac65b866eb9d04b3a7b408b655a"} err="failed to get container status \"27bac96151e1f53148780e96ad22610c28076ac65b866eb9d04b3a7b408b655a\": rpc error: code = NotFound desc = could not find container \"27bac96151e1f53148780e96ad22610c28076ac65b866eb9d04b3a7b408b655a\": container with ID starting with 27bac96151e1f53148780e96ad22610c28076ac65b866eb9d04b3a7b408b655a not found: ID does not exist"
Feb 27 17:44:34 crc kubenswrapper[4830]: I0227 17:44:34.200672 4830 scope.go:117] "RemoveContainer" containerID="a46c39a2dbec52dbace3d2bef0430644077cc0412e3682b6c23d5006347dc9c8"
Feb 27 17:44:34 crc kubenswrapper[4830]: E0227 17:44:34.201104 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a46c39a2dbec52dbace3d2bef0430644077cc0412e3682b6c23d5006347dc9c8\": container with ID starting with a46c39a2dbec52dbace3d2bef0430644077cc0412e3682b6c23d5006347dc9c8 not found: ID does not exist" containerID="a46c39a2dbec52dbace3d2bef0430644077cc0412e3682b6c23d5006347dc9c8"
Feb 27 17:44:34 crc kubenswrapper[4830]: I0227 17:44:34.201159 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a46c39a2dbec52dbace3d2bef0430644077cc0412e3682b6c23d5006347dc9c8"} err="failed to get container status \"a46c39a2dbec52dbace3d2bef0430644077cc0412e3682b6c23d5006347dc9c8\": rpc error: code = NotFound desc = could not find container \"a46c39a2dbec52dbace3d2bef0430644077cc0412e3682b6c23d5006347dc9c8\": container with ID starting with a46c39a2dbec52dbace3d2bef0430644077cc0412e3682b6c23d5006347dc9c8 not found: ID does not exist"
Feb 27 17:44:34 crc kubenswrapper[4830]: I0227 17:44:34.201234 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Feb 27 17:44:34 crc kubenswrapper[4830]: I0227 17:44:34.208851 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 27 17:44:34 crc kubenswrapper[4830]: I0227 17:44:34.211774 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Feb 27 17:44:34 crc kubenswrapper[4830]: I0227 17:44:34.239643 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Feb 27 17:44:34 crc kubenswrapper[4830]: I0227 17:44:34.248685 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Feb 27 17:44:34 crc kubenswrapper[4830]: I0227 17:44:34.257163 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Feb 27 17:44:34 crc kubenswrapper[4830]: I0227 17:44:34.267859 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Feb 27 17:44:34 crc kubenswrapper[4830]: I0227 17:44:34.269796 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 27 17:44:34 crc kubenswrapper[4830]: I0227 17:44:34.271867 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Feb 27 17:44:34 crc kubenswrapper[4830]: I0227 17:44:34.276748 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Feb 27 17:44:34 crc kubenswrapper[4830]: I0227 17:44:34.281689 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9854db0d-60b3-462b-818b-9fa262f89cb4-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"9854db0d-60b3-462b-818b-9fa262f89cb4\") " pod="openstack/nova-cell0-conductor-0"
Feb 27 17:44:34 crc kubenswrapper[4830]: I0227 17:44:34.281744 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9854db0d-60b3-462b-818b-9fa262f89cb4-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"9854db0d-60b3-462b-818b-9fa262f89cb4\") " pod="openstack/nova-cell0-conductor-0"
Feb 27 17:44:34 crc kubenswrapper[4830]: I0227 17:44:34.281773 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cd94c1a3-2090-4382-b181-7b121e05a5d7-logs\") pod \"nova-metadata-0\" (UID: \"cd94c1a3-2090-4382-b181-7b121e05a5d7\") " pod="openstack/nova-metadata-0"
Feb 27 17:44:34 crc kubenswrapper[4830]: I0227 17:44:34.281793 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vv42g\" (UniqueName: \"kubernetes.io/projected/cd94c1a3-2090-4382-b181-7b121e05a5d7-kube-api-access-vv42g\") pod \"nova-metadata-0\" (UID: \"cd94c1a3-2090-4382-b181-7b121e05a5d7\") " pod="openstack/nova-metadata-0"
Feb 27 17:44:34 crc kubenswrapper[4830]: I0227 17:44:34.281820 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd94c1a3-2090-4382-b181-7b121e05a5d7-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"cd94c1a3-2090-4382-b181-7b121e05a5d7\") " pod="openstack/nova-metadata-0"
Feb 27 17:44:34 crc kubenswrapper[4830]: I0227 17:44:34.281865 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4gvl\" (UniqueName: \"kubernetes.io/projected/9854db0d-60b3-462b-818b-9fa262f89cb4-kube-api-access-s4gvl\") pod \"nova-cell0-conductor-0\" (UID: \"9854db0d-60b3-462b-818b-9fa262f89cb4\") " pod="openstack/nova-cell0-conductor-0"
Feb 27 17:44:34 crc kubenswrapper[4830]: I0227 17:44:34.281918 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd94c1a3-2090-4382-b181-7b121e05a5d7-config-data\") pod \"nova-metadata-0\" (UID: \"cd94c1a3-2090-4382-b181-7b121e05a5d7\") " pod="openstack/nova-metadata-0"
Feb 27 17:44:34 crc kubenswrapper[4830]: I0227 17:44:34.285482 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9854db0d-60b3-462b-818b-9fa262f89cb4-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"9854db0d-60b3-462b-818b-9fa262f89cb4\") " pod="openstack/nova-cell0-conductor-0"
Feb 27 17:44:34 crc kubenswrapper[4830]: I0227 17:44:34.285912 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9854db0d-60b3-462b-818b-9fa262f89cb4-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"9854db0d-60b3-462b-818b-9fa262f89cb4\") " pod="openstack/nova-cell0-conductor-0"
Feb 27 17:44:34 crc kubenswrapper[4830]: I0227 17:44:34.304476 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4gvl\" (UniqueName: \"kubernetes.io/projected/9854db0d-60b3-462b-818b-9fa262f89cb4-kube-api-access-s4gvl\") pod \"nova-cell0-conductor-0\" (UID: \"9854db0d-60b3-462b-818b-9fa262f89cb4\") " pod="openstack/nova-cell0-conductor-0"
Feb 27 17:44:34 crc kubenswrapper[4830]: I0227 17:44:34.386460 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d64fd96a-b098-4112-8019-6577ba87df85-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d64fd96a-b098-4112-8019-6577ba87df85\") " pod="openstack/nova-api-0"
Feb 27 17:44:34 crc kubenswrapper[4830]: I0227 17:44:34.386544 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cd94c1a3-2090-4382-b181-7b121e05a5d7-logs\") pod \"nova-metadata-0\" (UID: \"cd94c1a3-2090-4382-b181-7b121e05a5d7\") " pod="openstack/nova-metadata-0"
Feb 27 17:44:34 crc kubenswrapper[4830]: I0227 17:44:34.386575 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vv42g\" (UniqueName: \"kubernetes.io/projected/cd94c1a3-2090-4382-b181-7b121e05a5d7-kube-api-access-vv42g\") pod \"nova-metadata-0\" (UID: \"cd94c1a3-2090-4382-b181-7b121e05a5d7\") " pod="openstack/nova-metadata-0"
Feb 27 17:44:34 crc kubenswrapper[4830]: I0227 17:44:34.386601 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd94c1a3-2090-4382-b181-7b121e05a5d7-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"cd94c1a3-2090-4382-b181-7b121e05a5d7\") " pod="openstack/nova-metadata-0"
Feb 27 17:44:34 crc kubenswrapper[4830]: I0227 17:44:34.386655 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrgmr\" (UniqueName: \"kubernetes.io/projected/d64fd96a-b098-4112-8019-6577ba87df85-kube-api-access-wrgmr\") pod \"nova-api-0\" (UID: \"d64fd96a-b098-4112-8019-6577ba87df85\") " pod="openstack/nova-api-0"
Feb 27 17:44:34 crc kubenswrapper[4830]: I0227 17:44:34.386699 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d64fd96a-b098-4112-8019-6577ba87df85-config-data\") pod \"nova-api-0\" (UID: \"d64fd96a-b098-4112-8019-6577ba87df85\") " pod="openstack/nova-api-0"
Feb 27 17:44:34 crc kubenswrapper[4830]: I0227 17:44:34.386720 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d64fd96a-b098-4112-8019-6577ba87df85-logs\") pod \"nova-api-0\" (UID: \"d64fd96a-b098-4112-8019-6577ba87df85\") " pod="openstack/nova-api-0"
Feb 27 17:44:34 crc kubenswrapper[4830]: I0227 17:44:34.386744 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd94c1a3-2090-4382-b181-7b121e05a5d7-config-data\") pod \"nova-metadata-0\" (UID: \"cd94c1a3-2090-4382-b181-7b121e05a5d7\") " pod="openstack/nova-metadata-0"
Feb 27 17:44:34 crc kubenswrapper[4830]: I0227 17:44:34.387400 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cd94c1a3-2090-4382-b181-7b121e05a5d7-logs\") pod \"nova-metadata-0\" (UID: \"cd94c1a3-2090-4382-b181-7b121e05a5d7\") " pod="openstack/nova-metadata-0"
Feb 27 17:44:34 crc kubenswrapper[4830]: I0227 17:44:34.390802 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd94c1a3-2090-4382-b181-7b121e05a5d7-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"cd94c1a3-2090-4382-b181-7b121e05a5d7\") " pod="openstack/nova-metadata-0"
Feb 27 17:44:34 crc kubenswrapper[4830]: I0227 17:44:34.391875 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd94c1a3-2090-4382-b181-7b121e05a5d7-config-data\") pod \"nova-metadata-0\" (UID: \"cd94c1a3-2090-4382-b181-7b121e05a5d7\") " pod="openstack/nova-metadata-0"
Feb 27 17:44:34 crc kubenswrapper[4830]: I0227 17:44:34.415867 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vv42g\" (UniqueName: \"kubernetes.io/projected/cd94c1a3-2090-4382-b181-7b121e05a5d7-kube-api-access-vv42g\") pod \"nova-metadata-0\" (UID: \"cd94c1a3-2090-4382-b181-7b121e05a5d7\") " pod="openstack/nova-metadata-0"
Feb 27 17:44:34 crc kubenswrapper[4830]: I0227 17:44:34.483463 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Feb 27 17:44:34 crc kubenswrapper[4830]: I0227 17:44:34.488350 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d64fd96a-b098-4112-8019-6577ba87df85-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d64fd96a-b098-4112-8019-6577ba87df85\") " pod="openstack/nova-api-0"
Feb 27 17:44:34 crc kubenswrapper[4830]: I0227 17:44:34.488464 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wrgmr\" (UniqueName: \"kubernetes.io/projected/d64fd96a-b098-4112-8019-6577ba87df85-kube-api-access-wrgmr\") pod \"nova-api-0\" (UID: \"d64fd96a-b098-4112-8019-6577ba87df85\") " pod="openstack/nova-api-0"
Feb 27 17:44:34 crc kubenswrapper[4830]: I0227 17:44:34.488516 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d64fd96a-b098-4112-8019-6577ba87df85-config-data\") pod \"nova-api-0\" (UID: \"d64fd96a-b098-4112-8019-6577ba87df85\") " pod="openstack/nova-api-0"
Feb 27 17:44:34 crc kubenswrapper[4830]: I0227 17:44:34.488539 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d64fd96a-b098-4112-8019-6577ba87df85-logs\") pod \"nova-api-0\" (UID: \"d64fd96a-b098-4112-8019-6577ba87df85\") " pod="openstack/nova-api-0"
Feb 27 17:44:34 crc kubenswrapper[4830]: I0227 17:44:34.488977 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d64fd96a-b098-4112-8019-6577ba87df85-logs\") pod \"nova-api-0\" (UID: \"d64fd96a-b098-4112-8019-6577ba87df85\") " pod="openstack/nova-api-0"
Feb 27 17:44:34 crc kubenswrapper[4830]: I0227 17:44:34.492591 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d64fd96a-b098-4112-8019-6577ba87df85-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d64fd96a-b098-4112-8019-6577ba87df85\") " pod="openstack/nova-api-0"
Feb 27 17:44:34 crc kubenswrapper[4830]: I0227 17:44:34.494163 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d64fd96a-b098-4112-8019-6577ba87df85-config-data\") pod \"nova-api-0\" (UID: \"d64fd96a-b098-4112-8019-6577ba87df85\") " pod="openstack/nova-api-0"
Feb 27 17:44:34 crc kubenswrapper[4830]: I0227 17:44:34.511250 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrgmr\" (UniqueName: \"kubernetes.io/projected/d64fd96a-b098-4112-8019-6577ba87df85-kube-api-access-wrgmr\") pod \"nova-api-0\" (UID: \"d64fd96a-b098-4112-8019-6577ba87df85\") " pod="openstack/nova-api-0"
Feb 27 17:44:34 crc kubenswrapper[4830]: I0227 17:44:34.548458 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 27 17:44:34 crc kubenswrapper[4830]: I0227 17:44:34.598557 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 27 17:44:34 crc kubenswrapper[4830]: I0227 17:44:34.781090 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="856c8d55-5c9d-4655-8752-63a97ecb38d2" path="/var/lib/kubelet/pods/856c8d55-5c9d-4655-8752-63a97ecb38d2/volumes"
Feb 27 17:44:34 crc kubenswrapper[4830]: I0227 17:44:34.781686 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="deab7dc6-3048-4721-8688-57ecae22876e" path="/var/lib/kubelet/pods/deab7dc6-3048-4721-8688-57ecae22876e/volumes"
Feb 27 17:44:34 crc kubenswrapper[4830]: I0227 17:44:34.782243 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="debf2adf-e44d-4329-9470-740f206ac43b" path="/var/lib/kubelet/pods/debf2adf-e44d-4329-9470-740f206ac43b/volumes"
Feb 27 17:44:35 crc kubenswrapper[4830]: I0227 17:44:35.014436 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Feb 27 17:44:35 crc kubenswrapper[4830]: I0227 17:44:35.029677 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Feb 27 17:44:35 crc kubenswrapper[4830]: I0227 17:44:35.088847 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"cd94c1a3-2090-4382-b181-7b121e05a5d7","Type":"ContainerStarted","Data":"32c82ac9aeed5ecc7167462d3a9fe355edb5ce30c7f7cbc7ae1ab243c5af7c45"}
Feb 27 17:44:35 crc kubenswrapper[4830]: I0227 17:44:35.101084 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"9854db0d-60b3-462b-818b-9fa262f89cb4","Type":"ContainerStarted","Data":"f4a76fd78887489a97cfc3b79579cc4a44bd7a233a5fc170292ec75430dff5f6"}
Feb 27 17:44:35 crc kubenswrapper[4830]: I0227 17:44:35.127446 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Feb 27 17:44:35 crc kubenswrapper[4830]: W0227 17:44:35.129461 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd64fd96a_b098_4112_8019_6577ba87df85.slice/crio-78a8784b69451ac55e9aa2576930a206fded79702a564f2d4ad566b119a70396 WatchSource:0}: Error finding container 78a8784b69451ac55e9aa2576930a206fded79702a564f2d4ad566b119a70396: Status 404 returned error can't find the container with id 78a8784b69451ac55e9aa2576930a206fded79702a564f2d4ad566b119a70396
Feb 27 17:44:35 crc kubenswrapper[4830]: I0227 17:44:35.615884 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 27 17:44:35 crc kubenswrapper[4830]: I0227 17:44:35.715057 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85f07766-38ac-48a4-9ed2-e87e5cc56093-config-data\") pod \"85f07766-38ac-48a4-9ed2-e87e5cc56093\" (UID: \"85f07766-38ac-48a4-9ed2-e87e5cc56093\") "
Feb 27 17:44:35 crc kubenswrapper[4830]: I0227 17:44:35.715820 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7x4s7\" (UniqueName: \"kubernetes.io/projected/85f07766-38ac-48a4-9ed2-e87e5cc56093-kube-api-access-7x4s7\") pod \"85f07766-38ac-48a4-9ed2-e87e5cc56093\" (UID: \"85f07766-38ac-48a4-9ed2-e87e5cc56093\") "
Feb 27 17:44:35 crc kubenswrapper[4830]: I0227 17:44:35.715859 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85f07766-38ac-48a4-9ed2-e87e5cc56093-combined-ca-bundle\") pod \"85f07766-38ac-48a4-9ed2-e87e5cc56093\" (UID: \"85f07766-38ac-48a4-9ed2-e87e5cc56093\") "
Feb 27 17:44:35 crc kubenswrapper[4830]: I0227 17:44:35.722710 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85f07766-38ac-48a4-9ed2-e87e5cc56093-kube-api-access-7x4s7" (OuterVolumeSpecName: "kube-api-access-7x4s7") pod "85f07766-38ac-48a4-9ed2-e87e5cc56093" (UID: "85f07766-38ac-48a4-9ed2-e87e5cc56093"). InnerVolumeSpecName "kube-api-access-7x4s7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 17:44:35 crc kubenswrapper[4830]: I0227 17:44:35.773249 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85f07766-38ac-48a4-9ed2-e87e5cc56093-config-data" (OuterVolumeSpecName: "config-data") pod "85f07766-38ac-48a4-9ed2-e87e5cc56093" (UID: "85f07766-38ac-48a4-9ed2-e87e5cc56093"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 17:44:35 crc kubenswrapper[4830]: I0227 17:44:35.782283 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85f07766-38ac-48a4-9ed2-e87e5cc56093-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "85f07766-38ac-48a4-9ed2-e87e5cc56093" (UID: "85f07766-38ac-48a4-9ed2-e87e5cc56093"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 17:44:35 crc kubenswrapper[4830]: I0227 17:44:35.819688 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7x4s7\" (UniqueName: \"kubernetes.io/projected/85f07766-38ac-48a4-9ed2-e87e5cc56093-kube-api-access-7x4s7\") on node \"crc\" DevicePath \"\""
Feb 27 17:44:35 crc kubenswrapper[4830]: I0227 17:44:35.819723 4830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85f07766-38ac-48a4-9ed2-e87e5cc56093-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 27 17:44:35 crc kubenswrapper[4830]: I0227 17:44:35.819733 4830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85f07766-38ac-48a4-9ed2-e87e5cc56093-config-data\") on node \"crc\" DevicePath \"\""
Feb 27 17:44:36 crc kubenswrapper[4830]: I0227 17:44:36.109363 4830 generic.go:334] "Generic (PLEG): container finished" podID="85f07766-38ac-48a4-9ed2-e87e5cc56093" containerID="2f6e826b7f5e36c9cf4e24d7c630f10292d1ee84e4660aa4c0c310faa3b95a9e" exitCode=0
Feb 27 17:44:36 crc kubenswrapper[4830]: I0227 17:44:36.109638 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"85f07766-38ac-48a4-9ed2-e87e5cc56093","Type":"ContainerDied","Data":"2f6e826b7f5e36c9cf4e24d7c630f10292d1ee84e4660aa4c0c310faa3b95a9e"}
Feb 27 17:44:36 crc kubenswrapper[4830]: I0227 17:44:36.109737 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 27 17:44:36 crc kubenswrapper[4830]: I0227 17:44:36.109764 4830 scope.go:117] "RemoveContainer" containerID="2f6e826b7f5e36c9cf4e24d7c630f10292d1ee84e4660aa4c0c310faa3b95a9e"
Feb 27 17:44:36 crc kubenswrapper[4830]: I0227 17:44:36.109746 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"85f07766-38ac-48a4-9ed2-e87e5cc56093","Type":"ContainerDied","Data":"255a6ca8d6867f0abaf5efd5f196f2404da4df6ec674ced29277c83845012e1d"}
Feb 27 17:44:36 crc kubenswrapper[4830]: I0227 17:44:36.113267 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"9854db0d-60b3-462b-818b-9fa262f89cb4","Type":"ContainerStarted","Data":"352f39da19d410f02510843cc5fca34cdca5c70d2a6ff1b59e0e6fc4f316c00c"}
Feb 27 17:44:36 crc kubenswrapper[4830]: I0227 17:44:36.114367 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0"
Feb 27 17:44:36 crc kubenswrapper[4830]: I0227 17:44:36.118366 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d64fd96a-b098-4112-8019-6577ba87df85","Type":"ContainerStarted","Data":"35a26bb893b05a4b46254fa263de3754e89a0658d95d90ebb16b8095ad4da7f0"}
Feb 27 17:44:36 crc kubenswrapper[4830]: I0227 17:44:36.118404 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0"
event={"ID":"d64fd96a-b098-4112-8019-6577ba87df85","Type":"ContainerStarted","Data":"163a5e9550404788628a173824ffae57353e891f1d545313e8aef5261a2c873c"} Feb 27 17:44:36 crc kubenswrapper[4830]: I0227 17:44:36.118416 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d64fd96a-b098-4112-8019-6577ba87df85","Type":"ContainerStarted","Data":"78a8784b69451ac55e9aa2576930a206fded79702a564f2d4ad566b119a70396"} Feb 27 17:44:36 crc kubenswrapper[4830]: I0227 17:44:36.134262 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.1342490769999998 podStartE2EDuration="2.134249077s" podCreationTimestamp="2026-02-27 17:44:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 17:44:36.130874226 +0000 UTC m=+5872.220146689" watchObservedRunningTime="2026-02-27 17:44:36.134249077 +0000 UTC m=+5872.223521540" Feb 27 17:44:36 crc kubenswrapper[4830]: I0227 17:44:36.144383 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"cd94c1a3-2090-4382-b181-7b121e05a5d7","Type":"ContainerStarted","Data":"076924d75bc5d26c603ac0ad2f2cc8a739297af5a53d65be3746a8196e73eb94"} Feb 27 17:44:36 crc kubenswrapper[4830]: I0227 17:44:36.144411 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"cd94c1a3-2090-4382-b181-7b121e05a5d7","Type":"ContainerStarted","Data":"12e99a5fa50a816e1f0f88bd69f4e10aa22c9f163afacde8687e41e721d6a061"} Feb 27 17:44:36 crc kubenswrapper[4830]: I0227 17:44:36.165784 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.165764504 podStartE2EDuration="2.165764504s" podCreationTimestamp="2026-02-27 17:44:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 
+0000 UTC" observedRunningTime="2026-02-27 17:44:36.155065517 +0000 UTC m=+5872.244338010" watchObservedRunningTime="2026-02-27 17:44:36.165764504 +0000 UTC m=+5872.255036967" Feb 27 17:44:36 crc kubenswrapper[4830]: I0227 17:44:36.181394 4830 scope.go:117] "RemoveContainer" containerID="2f6e826b7f5e36c9cf4e24d7c630f10292d1ee84e4660aa4c0c310faa3b95a9e" Feb 27 17:44:36 crc kubenswrapper[4830]: E0227 17:44:36.184196 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f6e826b7f5e36c9cf4e24d7c630f10292d1ee84e4660aa4c0c310faa3b95a9e\": container with ID starting with 2f6e826b7f5e36c9cf4e24d7c630f10292d1ee84e4660aa4c0c310faa3b95a9e not found: ID does not exist" containerID="2f6e826b7f5e36c9cf4e24d7c630f10292d1ee84e4660aa4c0c310faa3b95a9e" Feb 27 17:44:36 crc kubenswrapper[4830]: I0227 17:44:36.184472 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f6e826b7f5e36c9cf4e24d7c630f10292d1ee84e4660aa4c0c310faa3b95a9e"} err="failed to get container status \"2f6e826b7f5e36c9cf4e24d7c630f10292d1ee84e4660aa4c0c310faa3b95a9e\": rpc error: code = NotFound desc = could not find container \"2f6e826b7f5e36c9cf4e24d7c630f10292d1ee84e4660aa4c0c310faa3b95a9e\": container with ID starting with 2f6e826b7f5e36c9cf4e24d7c630f10292d1ee84e4660aa4c0c310faa3b95a9e not found: ID does not exist" Feb 27 17:44:36 crc kubenswrapper[4830]: I0227 17:44:36.198779 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 27 17:44:36 crc kubenswrapper[4830]: I0227 17:44:36.219888 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 27 17:44:36 crc kubenswrapper[4830]: I0227 17:44:36.237182 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 27 17:44:36 crc kubenswrapper[4830]: I0227 17:44:36.237421 4830 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.237400668 podStartE2EDuration="2.237400668s" podCreationTimestamp="2026-02-27 17:44:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 17:44:36.197019917 +0000 UTC m=+5872.286292390" watchObservedRunningTime="2026-02-27 17:44:36.237400668 +0000 UTC m=+5872.326673131" Feb 27 17:44:36 crc kubenswrapper[4830]: E0227 17:44:36.237639 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85f07766-38ac-48a4-9ed2-e87e5cc56093" containerName="nova-scheduler-scheduler" Feb 27 17:44:36 crc kubenswrapper[4830]: I0227 17:44:36.237660 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="85f07766-38ac-48a4-9ed2-e87e5cc56093" containerName="nova-scheduler-scheduler" Feb 27 17:44:36 crc kubenswrapper[4830]: I0227 17:44:36.237874 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="85f07766-38ac-48a4-9ed2-e87e5cc56093" containerName="nova-scheduler-scheduler" Feb 27 17:44:36 crc kubenswrapper[4830]: I0227 17:44:36.238575 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 27 17:44:36 crc kubenswrapper[4830]: I0227 17:44:36.240790 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 27 17:44:36 crc kubenswrapper[4830]: I0227 17:44:36.261421 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 27 17:44:36 crc kubenswrapper[4830]: I0227 17:44:36.334980 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c482099c-834e-41c1-92f6-7a4699524e31-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"c482099c-834e-41c1-92f6-7a4699524e31\") " pod="openstack/nova-scheduler-0" Feb 27 17:44:36 crc kubenswrapper[4830]: I0227 17:44:36.335121 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwc6s\" (UniqueName: \"kubernetes.io/projected/c482099c-834e-41c1-92f6-7a4699524e31-kube-api-access-nwc6s\") pod \"nova-scheduler-0\" (UID: \"c482099c-834e-41c1-92f6-7a4699524e31\") " pod="openstack/nova-scheduler-0" Feb 27 17:44:36 crc kubenswrapper[4830]: I0227 17:44:36.335229 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c482099c-834e-41c1-92f6-7a4699524e31-config-data\") pod \"nova-scheduler-0\" (UID: \"c482099c-834e-41c1-92f6-7a4699524e31\") " pod="openstack/nova-scheduler-0" Feb 27 17:44:36 crc kubenswrapper[4830]: I0227 17:44:36.437214 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c482099c-834e-41c1-92f6-7a4699524e31-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"c482099c-834e-41c1-92f6-7a4699524e31\") " pod="openstack/nova-scheduler-0" Feb 27 17:44:36 crc kubenswrapper[4830]: I0227 17:44:36.437332 4830 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwc6s\" (UniqueName: \"kubernetes.io/projected/c482099c-834e-41c1-92f6-7a4699524e31-kube-api-access-nwc6s\") pod \"nova-scheduler-0\" (UID: \"c482099c-834e-41c1-92f6-7a4699524e31\") " pod="openstack/nova-scheduler-0" Feb 27 17:44:36 crc kubenswrapper[4830]: I0227 17:44:36.437411 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c482099c-834e-41c1-92f6-7a4699524e31-config-data\") pod \"nova-scheduler-0\" (UID: \"c482099c-834e-41c1-92f6-7a4699524e31\") " pod="openstack/nova-scheduler-0" Feb 27 17:44:36 crc kubenswrapper[4830]: I0227 17:44:36.442719 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c482099c-834e-41c1-92f6-7a4699524e31-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"c482099c-834e-41c1-92f6-7a4699524e31\") " pod="openstack/nova-scheduler-0" Feb 27 17:44:36 crc kubenswrapper[4830]: I0227 17:44:36.443522 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c482099c-834e-41c1-92f6-7a4699524e31-config-data\") pod \"nova-scheduler-0\" (UID: \"c482099c-834e-41c1-92f6-7a4699524e31\") " pod="openstack/nova-scheduler-0" Feb 27 17:44:36 crc kubenswrapper[4830]: I0227 17:44:36.464120 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwc6s\" (UniqueName: \"kubernetes.io/projected/c482099c-834e-41c1-92f6-7a4699524e31-kube-api-access-nwc6s\") pod \"nova-scheduler-0\" (UID: \"c482099c-834e-41c1-92f6-7a4699524e31\") " pod="openstack/nova-scheduler-0" Feb 27 17:44:36 crc kubenswrapper[4830]: I0227 17:44:36.561144 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 27 17:44:36 crc kubenswrapper[4830]: I0227 17:44:36.778748 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85f07766-38ac-48a4-9ed2-e87e5cc56093" path="/var/lib/kubelet/pods/85f07766-38ac-48a4-9ed2-e87e5cc56093/volumes" Feb 27 17:44:37 crc kubenswrapper[4830]: I0227 17:44:37.026473 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 27 17:44:37 crc kubenswrapper[4830]: W0227 17:44:37.026717 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc482099c_834e_41c1_92f6_7a4699524e31.slice/crio-2ed62fa7bf90ff547170994088bcfa6cde401cce0d44cee3b72ce827e59bf9f9 WatchSource:0}: Error finding container 2ed62fa7bf90ff547170994088bcfa6cde401cce0d44cee3b72ce827e59bf9f9: Status 404 returned error can't find the container with id 2ed62fa7bf90ff547170994088bcfa6cde401cce0d44cee3b72ce827e59bf9f9 Feb 27 17:44:37 crc kubenswrapper[4830]: I0227 17:44:37.162433 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c482099c-834e-41c1-92f6-7a4699524e31","Type":"ContainerStarted","Data":"2ed62fa7bf90ff547170994088bcfa6cde401cce0d44cee3b72ce827e59bf9f9"} Feb 27 17:44:37 crc kubenswrapper[4830]: I0227 17:44:37.419561 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 27 17:44:37 crc kubenswrapper[4830]: I0227 17:44:37.661160 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6b8675bf5c-vgk78" Feb 27 17:44:37 crc kubenswrapper[4830]: I0227 17:44:37.736306 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5cf7c86fb5-5wfg7"] Feb 27 17:44:37 crc kubenswrapper[4830]: I0227 17:44:37.736548 4830 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/dnsmasq-dns-5cf7c86fb5-5wfg7" podUID="1b376941-61ec-4cfc-9ced-db78152e29f0" containerName="dnsmasq-dns" containerID="cri-o://3e07539f2a77d58f5e12dba382cbb7f0fa5a84f3836e675b3a68b4b44bb198b1" gracePeriod=10 Feb 27 17:44:38 crc kubenswrapper[4830]: I0227 17:44:38.179120 4830 generic.go:334] "Generic (PLEG): container finished" podID="1b376941-61ec-4cfc-9ced-db78152e29f0" containerID="3e07539f2a77d58f5e12dba382cbb7f0fa5a84f3836e675b3a68b4b44bb198b1" exitCode=0 Feb 27 17:44:38 crc kubenswrapper[4830]: I0227 17:44:38.179456 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5cf7c86fb5-5wfg7" event={"ID":"1b376941-61ec-4cfc-9ced-db78152e29f0","Type":"ContainerDied","Data":"3e07539f2a77d58f5e12dba382cbb7f0fa5a84f3836e675b3a68b4b44bb198b1"} Feb 27 17:44:38 crc kubenswrapper[4830]: I0227 17:44:38.183339 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c482099c-834e-41c1-92f6-7a4699524e31","Type":"ContainerStarted","Data":"0e6e548b3329eae03a387da5196c23495a11f9c0b67dcdc724fdc431db943ffd"} Feb 27 17:44:38 crc kubenswrapper[4830]: I0227 17:44:38.207312 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.207294137 podStartE2EDuration="2.207294137s" podCreationTimestamp="2026-02-27 17:44:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 17:44:38.200021261 +0000 UTC m=+5874.289293724" watchObservedRunningTime="2026-02-27 17:44:38.207294137 +0000 UTC m=+5874.296566600" Feb 27 17:44:38 crc kubenswrapper[4830]: I0227 17:44:38.293444 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5cf7c86fb5-5wfg7" Feb 27 17:44:38 crc kubenswrapper[4830]: I0227 17:44:38.389685 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1b376941-61ec-4cfc-9ced-db78152e29f0-ovsdbserver-sb\") pod \"1b376941-61ec-4cfc-9ced-db78152e29f0\" (UID: \"1b376941-61ec-4cfc-9ced-db78152e29f0\") " Feb 27 17:44:38 crc kubenswrapper[4830]: I0227 17:44:38.389813 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1b376941-61ec-4cfc-9ced-db78152e29f0-dns-svc\") pod \"1b376941-61ec-4cfc-9ced-db78152e29f0\" (UID: \"1b376941-61ec-4cfc-9ced-db78152e29f0\") " Feb 27 17:44:38 crc kubenswrapper[4830]: I0227 17:44:38.389856 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1b376941-61ec-4cfc-9ced-db78152e29f0-ovsdbserver-nb\") pod \"1b376941-61ec-4cfc-9ced-db78152e29f0\" (UID: \"1b376941-61ec-4cfc-9ced-db78152e29f0\") " Feb 27 17:44:38 crc kubenswrapper[4830]: I0227 17:44:38.389890 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7wbwp\" (UniqueName: \"kubernetes.io/projected/1b376941-61ec-4cfc-9ced-db78152e29f0-kube-api-access-7wbwp\") pod \"1b376941-61ec-4cfc-9ced-db78152e29f0\" (UID: \"1b376941-61ec-4cfc-9ced-db78152e29f0\") " Feb 27 17:44:38 crc kubenswrapper[4830]: I0227 17:44:38.390065 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b376941-61ec-4cfc-9ced-db78152e29f0-config\") pod \"1b376941-61ec-4cfc-9ced-db78152e29f0\" (UID: \"1b376941-61ec-4cfc-9ced-db78152e29f0\") " Feb 27 17:44:38 crc kubenswrapper[4830]: I0227 17:44:38.406557 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/1b376941-61ec-4cfc-9ced-db78152e29f0-kube-api-access-7wbwp" (OuterVolumeSpecName: "kube-api-access-7wbwp") pod "1b376941-61ec-4cfc-9ced-db78152e29f0" (UID: "1b376941-61ec-4cfc-9ced-db78152e29f0"). InnerVolumeSpecName "kube-api-access-7wbwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:44:38 crc kubenswrapper[4830]: I0227 17:44:38.443284 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1b376941-61ec-4cfc-9ced-db78152e29f0-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "1b376941-61ec-4cfc-9ced-db78152e29f0" (UID: "1b376941-61ec-4cfc-9ced-db78152e29f0"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:44:38 crc kubenswrapper[4830]: I0227 17:44:38.444388 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1b376941-61ec-4cfc-9ced-db78152e29f0-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1b376941-61ec-4cfc-9ced-db78152e29f0" (UID: "1b376941-61ec-4cfc-9ced-db78152e29f0"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:44:38 crc kubenswrapper[4830]: I0227 17:44:38.447670 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1b376941-61ec-4cfc-9ced-db78152e29f0-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1b376941-61ec-4cfc-9ced-db78152e29f0" (UID: "1b376941-61ec-4cfc-9ced-db78152e29f0"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:44:38 crc kubenswrapper[4830]: I0227 17:44:38.451701 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1b376941-61ec-4cfc-9ced-db78152e29f0-config" (OuterVolumeSpecName: "config") pod "1b376941-61ec-4cfc-9ced-db78152e29f0" (UID: "1b376941-61ec-4cfc-9ced-db78152e29f0"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:44:38 crc kubenswrapper[4830]: I0227 17:44:38.491303 4830 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b376941-61ec-4cfc-9ced-db78152e29f0-config\") on node \"crc\" DevicePath \"\"" Feb 27 17:44:38 crc kubenswrapper[4830]: I0227 17:44:38.491567 4830 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1b376941-61ec-4cfc-9ced-db78152e29f0-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 27 17:44:38 crc kubenswrapper[4830]: I0227 17:44:38.491633 4830 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1b376941-61ec-4cfc-9ced-db78152e29f0-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 27 17:44:38 crc kubenswrapper[4830]: I0227 17:44:38.491701 4830 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1b376941-61ec-4cfc-9ced-db78152e29f0-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 27 17:44:38 crc kubenswrapper[4830]: I0227 17:44:38.491754 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7wbwp\" (UniqueName: \"kubernetes.io/projected/1b376941-61ec-4cfc-9ced-db78152e29f0-kube-api-access-7wbwp\") on node \"crc\" DevicePath \"\"" Feb 27 17:44:38 crc kubenswrapper[4830]: E0227 17:44:38.644859 4830 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 27 17:44:38 crc kubenswrapper[4830]: E0227 17:44:38.645001 4830 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 17:44:38 crc 
kubenswrapper[4830]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 27 17:44:38 crc kubenswrapper[4830]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xmw5q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29536904-jrdqt_openshift-infra(77856f9c-1131-4857-9fff-bddf1d27b5d3): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 27 17:44:38 crc kubenswrapper[4830]: > logger="UnhandledError" Feb 27 17:44:38 crc kubenswrapper[4830]: E0227 17:44:38.646542 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" 
pod="openshift-infra/auto-csr-approver-29536904-jrdqt" podUID="77856f9c-1131-4857-9fff-bddf1d27b5d3" Feb 27 17:44:39 crc kubenswrapper[4830]: I0227 17:44:39.197491 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5cf7c86fb5-5wfg7" event={"ID":"1b376941-61ec-4cfc-9ced-db78152e29f0","Type":"ContainerDied","Data":"ba71cfafdd4dc3cc955015ac0bd570c4ddfe33f0a341904455e6ff5a2615bcc4"} Feb 27 17:44:39 crc kubenswrapper[4830]: I0227 17:44:39.198500 4830 scope.go:117] "RemoveContainer" containerID="3e07539f2a77d58f5e12dba382cbb7f0fa5a84f3836e675b3a68b4b44bb198b1" Feb 27 17:44:39 crc kubenswrapper[4830]: I0227 17:44:39.197499 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5cf7c86fb5-5wfg7" Feb 27 17:44:39 crc kubenswrapper[4830]: I0227 17:44:39.242496 4830 scope.go:117] "RemoveContainer" containerID="0f49a590c7089256ccafcf55562c343b54ea8b3ad7619383956439ec225dc43d" Feb 27 17:44:39 crc kubenswrapper[4830]: I0227 17:44:39.272190 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5cf7c86fb5-5wfg7"] Feb 27 17:44:39 crc kubenswrapper[4830]: I0227 17:44:39.286355 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5cf7c86fb5-5wfg7"] Feb 27 17:44:39 crc kubenswrapper[4830]: I0227 17:44:39.479005 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Feb 27 17:44:39 crc kubenswrapper[4830]: I0227 17:44:39.549101 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 27 17:44:39 crc kubenswrapper[4830]: I0227 17:44:39.550457 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 27 17:44:39 crc kubenswrapper[4830]: E0227 17:44:39.767054 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536898-vrwjs" podUID="204eb1af-36ad-4de7-9da7-9a37fefd3026" Feb 27 17:44:40 crc kubenswrapper[4830]: I0227 17:44:40.783474 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b376941-61ec-4cfc-9ced-db78152e29f0" path="/var/lib/kubelet/pods/1b376941-61ec-4cfc-9ced-db78152e29f0/volumes" Feb 27 17:44:41 crc kubenswrapper[4830]: I0227 17:44:41.562765 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 27 17:44:42 crc kubenswrapper[4830]: I0227 17:44:42.420020 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Feb 27 17:44:42 crc kubenswrapper[4830]: I0227 17:44:42.429300 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Feb 27 17:44:42 crc kubenswrapper[4830]: I0227 17:44:42.438713 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Feb 27 17:44:43 crc kubenswrapper[4830]: I0227 17:44:43.253459 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Feb 27 17:44:44 crc kubenswrapper[4830]: I0227 17:44:44.519672 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Feb 27 17:44:44 crc kubenswrapper[4830]: I0227 17:44:44.549785 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 27 17:44:44 crc kubenswrapper[4830]: I0227 17:44:44.549850 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 27 17:44:44 crc kubenswrapper[4830]: I0227 17:44:44.600395 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 27 17:44:44 crc 
kubenswrapper[4830]: I0227 17:44:44.600464 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 27 17:44:45 crc kubenswrapper[4830]: I0227 17:44:45.631163 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="cd94c1a3-2090-4382-b181-7b121e05a5d7" containerName="nova-metadata-log" probeResult="failure" output="Get \"http://10.217.1.124:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 27 17:44:45 crc kubenswrapper[4830]: I0227 17:44:45.631228 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="cd94c1a3-2090-4382-b181-7b121e05a5d7" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"http://10.217.1.124:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 27 17:44:45 crc kubenswrapper[4830]: I0227 17:44:45.714300 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="d64fd96a-b098-4112-8019-6577ba87df85" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.125:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 27 17:44:45 crc kubenswrapper[4830]: I0227 17:44:45.714758 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="d64fd96a-b098-4112-8019-6577ba87df85" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.125:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 27 17:44:46 crc kubenswrapper[4830]: I0227 17:44:46.562705 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 27 17:44:46 crc kubenswrapper[4830]: I0227 17:44:46.606091 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 27 17:44:47 crc kubenswrapper[4830]: 
I0227 17:44:47.366537 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 27 17:44:48 crc kubenswrapper[4830]: I0227 17:44:48.762657 4830 scope.go:117] "RemoveContainer" containerID="0f73af46b765b3d98f3f5e9883b49887ac4f2ac3485e91a001e18c5d351646d8" Feb 27 17:44:48 crc kubenswrapper[4830]: E0227 17:44:48.763236 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 17:44:50 crc kubenswrapper[4830]: E0227 17:44:50.771217 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536898-vrwjs" podUID="204eb1af-36ad-4de7-9da7-9a37fefd3026" Feb 27 17:44:51 crc kubenswrapper[4830]: E0227 17:44:51.765166 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536904-jrdqt" podUID="77856f9c-1131-4857-9fff-bddf1d27b5d3" Feb 27 17:44:54 crc kubenswrapper[4830]: I0227 17:44:54.552227 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 27 17:44:54 crc kubenswrapper[4830]: I0227 17:44:54.553817 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 27 17:44:54 crc kubenswrapper[4830]: I0227 17:44:54.555614 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack/nova-metadata-0" Feb 27 17:44:54 crc kubenswrapper[4830]: I0227 17:44:54.604846 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 27 17:44:54 crc kubenswrapper[4830]: I0227 17:44:54.606409 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 27 17:44:54 crc kubenswrapper[4830]: I0227 17:44:54.606731 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 27 17:44:54 crc kubenswrapper[4830]: I0227 17:44:54.612689 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 27 17:44:55 crc kubenswrapper[4830]: I0227 17:44:55.387055 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 27 17:44:55 crc kubenswrapper[4830]: I0227 17:44:55.389338 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 27 17:44:55 crc kubenswrapper[4830]: I0227 17:44:55.390046 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 27 17:44:56 crc kubenswrapper[4830]: I0227 17:44:56.682353 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 27 17:44:56 crc kubenswrapper[4830]: E0227 17:44:56.683236 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b376941-61ec-4cfc-9ced-db78152e29f0" containerName="init" Feb 27 17:44:56 crc kubenswrapper[4830]: I0227 17:44:56.683257 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b376941-61ec-4cfc-9ced-db78152e29f0" containerName="init" Feb 27 17:44:56 crc kubenswrapper[4830]: E0227 17:44:56.683283 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b376941-61ec-4cfc-9ced-db78152e29f0" containerName="dnsmasq-dns" Feb 27 17:44:56 crc kubenswrapper[4830]: I0227 17:44:56.683297 4830 
state_mem.go:107] "Deleted CPUSet assignment" podUID="1b376941-61ec-4cfc-9ced-db78152e29f0" containerName="dnsmasq-dns" Feb 27 17:44:56 crc kubenswrapper[4830]: I0227 17:44:56.683593 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b376941-61ec-4cfc-9ced-db78152e29f0" containerName="dnsmasq-dns" Feb 27 17:44:56 crc kubenswrapper[4830]: I0227 17:44:56.685277 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 27 17:44:56 crc kubenswrapper[4830]: I0227 17:44:56.688344 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 27 17:44:56 crc kubenswrapper[4830]: I0227 17:44:56.690687 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 27 17:44:56 crc kubenswrapper[4830]: I0227 17:44:56.827878 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eef2bf38-f907-43dc-916d-4407988e6b37-scripts\") pod \"cinder-scheduler-0\" (UID: \"eef2bf38-f907-43dc-916d-4407988e6b37\") " pod="openstack/cinder-scheduler-0" Feb 27 17:44:56 crc kubenswrapper[4830]: I0227 17:44:56.827933 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eef2bf38-f907-43dc-916d-4407988e6b37-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"eef2bf38-f907-43dc-916d-4407988e6b37\") " pod="openstack/cinder-scheduler-0" Feb 27 17:44:56 crc kubenswrapper[4830]: I0227 17:44:56.828082 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slg9p\" (UniqueName: \"kubernetes.io/projected/eef2bf38-f907-43dc-916d-4407988e6b37-kube-api-access-slg9p\") pod \"cinder-scheduler-0\" (UID: \"eef2bf38-f907-43dc-916d-4407988e6b37\") " pod="openstack/cinder-scheduler-0" Feb 27 
17:44:56 crc kubenswrapper[4830]: I0227 17:44:56.828103 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/eef2bf38-f907-43dc-916d-4407988e6b37-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"eef2bf38-f907-43dc-916d-4407988e6b37\") " pod="openstack/cinder-scheduler-0" Feb 27 17:44:56 crc kubenswrapper[4830]: I0227 17:44:56.828185 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eef2bf38-f907-43dc-916d-4407988e6b37-config-data\") pod \"cinder-scheduler-0\" (UID: \"eef2bf38-f907-43dc-916d-4407988e6b37\") " pod="openstack/cinder-scheduler-0" Feb 27 17:44:56 crc kubenswrapper[4830]: I0227 17:44:56.828213 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/eef2bf38-f907-43dc-916d-4407988e6b37-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"eef2bf38-f907-43dc-916d-4407988e6b37\") " pod="openstack/cinder-scheduler-0" Feb 27 17:44:56 crc kubenswrapper[4830]: I0227 17:44:56.929908 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/eef2bf38-f907-43dc-916d-4407988e6b37-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"eef2bf38-f907-43dc-916d-4407988e6b37\") " pod="openstack/cinder-scheduler-0" Feb 27 17:44:56 crc kubenswrapper[4830]: I0227 17:44:56.930050 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eef2bf38-f907-43dc-916d-4407988e6b37-scripts\") pod \"cinder-scheduler-0\" (UID: \"eef2bf38-f907-43dc-916d-4407988e6b37\") " pod="openstack/cinder-scheduler-0" Feb 27 17:44:56 crc kubenswrapper[4830]: I0227 17:44:56.930079 4830 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eef2bf38-f907-43dc-916d-4407988e6b37-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"eef2bf38-f907-43dc-916d-4407988e6b37\") " pod="openstack/cinder-scheduler-0" Feb 27 17:44:56 crc kubenswrapper[4830]: I0227 17:44:56.930141 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-slg9p\" (UniqueName: \"kubernetes.io/projected/eef2bf38-f907-43dc-916d-4407988e6b37-kube-api-access-slg9p\") pod \"cinder-scheduler-0\" (UID: \"eef2bf38-f907-43dc-916d-4407988e6b37\") " pod="openstack/cinder-scheduler-0" Feb 27 17:44:56 crc kubenswrapper[4830]: I0227 17:44:56.930161 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/eef2bf38-f907-43dc-916d-4407988e6b37-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"eef2bf38-f907-43dc-916d-4407988e6b37\") " pod="openstack/cinder-scheduler-0" Feb 27 17:44:56 crc kubenswrapper[4830]: I0227 17:44:56.930179 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eef2bf38-f907-43dc-916d-4407988e6b37-config-data\") pod \"cinder-scheduler-0\" (UID: \"eef2bf38-f907-43dc-916d-4407988e6b37\") " pod="openstack/cinder-scheduler-0" Feb 27 17:44:56 crc kubenswrapper[4830]: I0227 17:44:56.930175 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/eef2bf38-f907-43dc-916d-4407988e6b37-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"eef2bf38-f907-43dc-916d-4407988e6b37\") " pod="openstack/cinder-scheduler-0" Feb 27 17:44:56 crc kubenswrapper[4830]: I0227 17:44:56.935185 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/eef2bf38-f907-43dc-916d-4407988e6b37-config-data-custom\") pod 
\"cinder-scheduler-0\" (UID: \"eef2bf38-f907-43dc-916d-4407988e6b37\") " pod="openstack/cinder-scheduler-0" Feb 27 17:44:56 crc kubenswrapper[4830]: I0227 17:44:56.935286 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eef2bf38-f907-43dc-916d-4407988e6b37-scripts\") pod \"cinder-scheduler-0\" (UID: \"eef2bf38-f907-43dc-916d-4407988e6b37\") " pod="openstack/cinder-scheduler-0" Feb 27 17:44:56 crc kubenswrapper[4830]: I0227 17:44:56.935482 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eef2bf38-f907-43dc-916d-4407988e6b37-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"eef2bf38-f907-43dc-916d-4407988e6b37\") " pod="openstack/cinder-scheduler-0" Feb 27 17:44:56 crc kubenswrapper[4830]: I0227 17:44:56.936223 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eef2bf38-f907-43dc-916d-4407988e6b37-config-data\") pod \"cinder-scheduler-0\" (UID: \"eef2bf38-f907-43dc-916d-4407988e6b37\") " pod="openstack/cinder-scheduler-0" Feb 27 17:44:56 crc kubenswrapper[4830]: I0227 17:44:56.944359 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-slg9p\" (UniqueName: \"kubernetes.io/projected/eef2bf38-f907-43dc-916d-4407988e6b37-kube-api-access-slg9p\") pod \"cinder-scheduler-0\" (UID: \"eef2bf38-f907-43dc-916d-4407988e6b37\") " pod="openstack/cinder-scheduler-0" Feb 27 17:44:57 crc kubenswrapper[4830]: I0227 17:44:57.007229 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 27 17:44:57 crc kubenswrapper[4830]: I0227 17:44:57.444550 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 27 17:44:57 crc kubenswrapper[4830]: W0227 17:44:57.459101 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeef2bf38_f907_43dc_916d_4407988e6b37.slice/crio-25050af8c261f6bfb42efdea864394224343da68783a28b707ed2a4470cce7df WatchSource:0}: Error finding container 25050af8c261f6bfb42efdea864394224343da68783a28b707ed2a4470cce7df: Status 404 returned error can't find the container with id 25050af8c261f6bfb42efdea864394224343da68783a28b707ed2a4470cce7df Feb 27 17:44:58 crc kubenswrapper[4830]: I0227 17:44:58.157259 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 27 17:44:58 crc kubenswrapper[4830]: I0227 17:44:58.157859 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="09b3d7eb-5e19-47a1-81bb-a9e0755077ad" containerName="cinder-api-log" containerID="cri-o://25c54f075c8988085178cb7b935b2679011fae62263f07254bc9fadd5a34e79c" gracePeriod=30 Feb 27 17:44:58 crc kubenswrapper[4830]: I0227 17:44:58.158560 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="09b3d7eb-5e19-47a1-81bb-a9e0755077ad" containerName="cinder-api" containerID="cri-o://63745ec9fad77b93e96cd00caeb0d073bc10ce82e0ce24b95627b855b5429092" gracePeriod=30 Feb 27 17:44:58 crc kubenswrapper[4830]: I0227 17:44:58.424284 4830 generic.go:334] "Generic (PLEG): container finished" podID="09b3d7eb-5e19-47a1-81bb-a9e0755077ad" containerID="25c54f075c8988085178cb7b935b2679011fae62263f07254bc9fadd5a34e79c" exitCode=143 Feb 27 17:44:58 crc kubenswrapper[4830]: I0227 17:44:58.424364 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" 
event={"ID":"09b3d7eb-5e19-47a1-81bb-a9e0755077ad","Type":"ContainerDied","Data":"25c54f075c8988085178cb7b935b2679011fae62263f07254bc9fadd5a34e79c"} Feb 27 17:44:58 crc kubenswrapper[4830]: I0227 17:44:58.425692 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"eef2bf38-f907-43dc-916d-4407988e6b37","Type":"ContainerStarted","Data":"a4b5e3c22a86b258ff81643d87f5fb353d38636aed3ef14a3f53ffd3b8d352a7"} Feb 27 17:44:58 crc kubenswrapper[4830]: I0227 17:44:58.425743 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"eef2bf38-f907-43dc-916d-4407988e6b37","Type":"ContainerStarted","Data":"25050af8c261f6bfb42efdea864394224343da68783a28b707ed2a4470cce7df"} Feb 27 17:44:58 crc kubenswrapper[4830]: I0227 17:44:58.470182 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-volume-volume1-0"] Feb 27 17:44:58 crc kubenswrapper[4830]: I0227 17:44:58.471622 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-volume-volume1-0" Feb 27 17:44:58 crc kubenswrapper[4830]: I0227 17:44:58.475058 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-volume-volume1-config-data" Feb 27 17:44:58 crc kubenswrapper[4830]: I0227 17:44:58.490710 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-volume1-0"] Feb 27 17:44:58 crc kubenswrapper[4830]: I0227 17:44:58.563675 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/ffe65459-6ffe-44f7-8cee-acf6b6ec2fa8-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"ffe65459-6ffe-44f7-8cee-acf6b6ec2fa8\") " pod="openstack/cinder-volume-volume1-0" Feb 27 17:44:58 crc kubenswrapper[4830]: I0227 17:44:58.563735 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/ffe65459-6ffe-44f7-8cee-acf6b6ec2fa8-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"ffe65459-6ffe-44f7-8cee-acf6b6ec2fa8\") " pod="openstack/cinder-volume-volume1-0" Feb 27 17:44:58 crc kubenswrapper[4830]: I0227 17:44:58.563816 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ffe65459-6ffe-44f7-8cee-acf6b6ec2fa8-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"ffe65459-6ffe-44f7-8cee-acf6b6ec2fa8\") " pod="openstack/cinder-volume-volume1-0" Feb 27 17:44:58 crc kubenswrapper[4830]: I0227 17:44:58.563833 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/ffe65459-6ffe-44f7-8cee-acf6b6ec2fa8-run\") pod \"cinder-volume-volume1-0\" (UID: \"ffe65459-6ffe-44f7-8cee-acf6b6ec2fa8\") " pod="openstack/cinder-volume-volume1-0" Feb 27 17:44:58 crc kubenswrapper[4830]: I0227 
17:44:58.563851 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ffe65459-6ffe-44f7-8cee-acf6b6ec2fa8-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"ffe65459-6ffe-44f7-8cee-acf6b6ec2fa8\") " pod="openstack/cinder-volume-volume1-0" Feb 27 17:44:58 crc kubenswrapper[4830]: I0227 17:44:58.563869 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/ffe65459-6ffe-44f7-8cee-acf6b6ec2fa8-dev\") pod \"cinder-volume-volume1-0\" (UID: \"ffe65459-6ffe-44f7-8cee-acf6b6ec2fa8\") " pod="openstack/cinder-volume-volume1-0" Feb 27 17:44:58 crc kubenswrapper[4830]: I0227 17:44:58.563884 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffe65459-6ffe-44f7-8cee-acf6b6ec2fa8-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"ffe65459-6ffe-44f7-8cee-acf6b6ec2fa8\") " pod="openstack/cinder-volume-volume1-0" Feb 27 17:44:58 crc kubenswrapper[4830]: I0227 17:44:58.563906 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2qjp\" (UniqueName: \"kubernetes.io/projected/ffe65459-6ffe-44f7-8cee-acf6b6ec2fa8-kube-api-access-v2qjp\") pod \"cinder-volume-volume1-0\" (UID: \"ffe65459-6ffe-44f7-8cee-acf6b6ec2fa8\") " pod="openstack/cinder-volume-volume1-0" Feb 27 17:44:58 crc kubenswrapper[4830]: I0227 17:44:58.563933 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ffe65459-6ffe-44f7-8cee-acf6b6ec2fa8-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"ffe65459-6ffe-44f7-8cee-acf6b6ec2fa8\") " pod="openstack/cinder-volume-volume1-0" Feb 27 17:44:58 crc kubenswrapper[4830]: I0227 
17:44:58.563975 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/ffe65459-6ffe-44f7-8cee-acf6b6ec2fa8-sys\") pod \"cinder-volume-volume1-0\" (UID: \"ffe65459-6ffe-44f7-8cee-acf6b6ec2fa8\") " pod="openstack/cinder-volume-volume1-0" Feb 27 17:44:58 crc kubenswrapper[4830]: I0227 17:44:58.564001 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ffe65459-6ffe-44f7-8cee-acf6b6ec2fa8-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"ffe65459-6ffe-44f7-8cee-acf6b6ec2fa8\") " pod="openstack/cinder-volume-volume1-0" Feb 27 17:44:58 crc kubenswrapper[4830]: I0227 17:44:58.564049 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/ffe65459-6ffe-44f7-8cee-acf6b6ec2fa8-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"ffe65459-6ffe-44f7-8cee-acf6b6ec2fa8\") " pod="openstack/cinder-volume-volume1-0" Feb 27 17:44:58 crc kubenswrapper[4830]: I0227 17:44:58.564067 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffe65459-6ffe-44f7-8cee-acf6b6ec2fa8-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"ffe65459-6ffe-44f7-8cee-acf6b6ec2fa8\") " pod="openstack/cinder-volume-volume1-0" Feb 27 17:44:58 crc kubenswrapper[4830]: I0227 17:44:58.564086 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/ffe65459-6ffe-44f7-8cee-acf6b6ec2fa8-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"ffe65459-6ffe-44f7-8cee-acf6b6ec2fa8\") " pod="openstack/cinder-volume-volume1-0" Feb 27 17:44:58 crc kubenswrapper[4830]: I0227 17:44:58.564109 4830 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/ffe65459-6ffe-44f7-8cee-acf6b6ec2fa8-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"ffe65459-6ffe-44f7-8cee-acf6b6ec2fa8\") " pod="openstack/cinder-volume-volume1-0" Feb 27 17:44:58 crc kubenswrapper[4830]: I0227 17:44:58.564378 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/ffe65459-6ffe-44f7-8cee-acf6b6ec2fa8-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"ffe65459-6ffe-44f7-8cee-acf6b6ec2fa8\") " pod="openstack/cinder-volume-volume1-0" Feb 27 17:44:58 crc kubenswrapper[4830]: I0227 17:44:58.666058 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/ffe65459-6ffe-44f7-8cee-acf6b6ec2fa8-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"ffe65459-6ffe-44f7-8cee-acf6b6ec2fa8\") " pod="openstack/cinder-volume-volume1-0" Feb 27 17:44:58 crc kubenswrapper[4830]: I0227 17:44:58.666380 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/ffe65459-6ffe-44f7-8cee-acf6b6ec2fa8-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"ffe65459-6ffe-44f7-8cee-acf6b6ec2fa8\") " pod="openstack/cinder-volume-volume1-0" Feb 27 17:44:58 crc kubenswrapper[4830]: I0227 17:44:58.666409 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/ffe65459-6ffe-44f7-8cee-acf6b6ec2fa8-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"ffe65459-6ffe-44f7-8cee-acf6b6ec2fa8\") " pod="openstack/cinder-volume-volume1-0" Feb 27 17:44:58 crc kubenswrapper[4830]: I0227 17:44:58.666259 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: 
\"kubernetes.io/host-path/ffe65459-6ffe-44f7-8cee-acf6b6ec2fa8-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"ffe65459-6ffe-44f7-8cee-acf6b6ec2fa8\") " pod="openstack/cinder-volume-volume1-0" Feb 27 17:44:58 crc kubenswrapper[4830]: I0227 17:44:58.666457 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ffe65459-6ffe-44f7-8cee-acf6b6ec2fa8-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"ffe65459-6ffe-44f7-8cee-acf6b6ec2fa8\") " pod="openstack/cinder-volume-volume1-0" Feb 27 17:44:58 crc kubenswrapper[4830]: I0227 17:44:58.666499 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ffe65459-6ffe-44f7-8cee-acf6b6ec2fa8-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"ffe65459-6ffe-44f7-8cee-acf6b6ec2fa8\") " pod="openstack/cinder-volume-volume1-0" Feb 27 17:44:58 crc kubenswrapper[4830]: I0227 17:44:58.666519 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/ffe65459-6ffe-44f7-8cee-acf6b6ec2fa8-run\") pod \"cinder-volume-volume1-0\" (UID: \"ffe65459-6ffe-44f7-8cee-acf6b6ec2fa8\") " pod="openstack/cinder-volume-volume1-0" Feb 27 17:44:58 crc kubenswrapper[4830]: I0227 17:44:58.666547 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ffe65459-6ffe-44f7-8cee-acf6b6ec2fa8-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"ffe65459-6ffe-44f7-8cee-acf6b6ec2fa8\") " pod="openstack/cinder-volume-volume1-0" Feb 27 17:44:58 crc kubenswrapper[4830]: I0227 17:44:58.666575 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/ffe65459-6ffe-44f7-8cee-acf6b6ec2fa8-dev\") pod \"cinder-volume-volume1-0\" (UID: 
\"ffe65459-6ffe-44f7-8cee-acf6b6ec2fa8\") " pod="openstack/cinder-volume-volume1-0" Feb 27 17:44:58 crc kubenswrapper[4830]: I0227 17:44:58.666597 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffe65459-6ffe-44f7-8cee-acf6b6ec2fa8-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"ffe65459-6ffe-44f7-8cee-acf6b6ec2fa8\") " pod="openstack/cinder-volume-volume1-0" Feb 27 17:44:58 crc kubenswrapper[4830]: I0227 17:44:58.666629 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2qjp\" (UniqueName: \"kubernetes.io/projected/ffe65459-6ffe-44f7-8cee-acf6b6ec2fa8-kube-api-access-v2qjp\") pod \"cinder-volume-volume1-0\" (UID: \"ffe65459-6ffe-44f7-8cee-acf6b6ec2fa8\") " pod="openstack/cinder-volume-volume1-0" Feb 27 17:44:58 crc kubenswrapper[4830]: I0227 17:44:58.666656 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/ffe65459-6ffe-44f7-8cee-acf6b6ec2fa8-dev\") pod \"cinder-volume-volume1-0\" (UID: \"ffe65459-6ffe-44f7-8cee-acf6b6ec2fa8\") " pod="openstack/cinder-volume-volume1-0" Feb 27 17:44:58 crc kubenswrapper[4830]: I0227 17:44:58.666658 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ffe65459-6ffe-44f7-8cee-acf6b6ec2fa8-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"ffe65459-6ffe-44f7-8cee-acf6b6ec2fa8\") " pod="openstack/cinder-volume-volume1-0" Feb 27 17:44:58 crc kubenswrapper[4830]: I0227 17:44:58.666685 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ffe65459-6ffe-44f7-8cee-acf6b6ec2fa8-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"ffe65459-6ffe-44f7-8cee-acf6b6ec2fa8\") " pod="openstack/cinder-volume-volume1-0" Feb 27 17:44:58 crc kubenswrapper[4830]: I0227 
17:44:58.666704 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/ffe65459-6ffe-44f7-8cee-acf6b6ec2fa8-sys\") pod \"cinder-volume-volume1-0\" (UID: \"ffe65459-6ffe-44f7-8cee-acf6b6ec2fa8\") " pod="openstack/cinder-volume-volume1-0" Feb 27 17:44:58 crc kubenswrapper[4830]: I0227 17:44:58.666721 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/ffe65459-6ffe-44f7-8cee-acf6b6ec2fa8-run\") pod \"cinder-volume-volume1-0\" (UID: \"ffe65459-6ffe-44f7-8cee-acf6b6ec2fa8\") " pod="openstack/cinder-volume-volume1-0" Feb 27 17:44:58 crc kubenswrapper[4830]: I0227 17:44:58.666730 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ffe65459-6ffe-44f7-8cee-acf6b6ec2fa8-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"ffe65459-6ffe-44f7-8cee-acf6b6ec2fa8\") " pod="openstack/cinder-volume-volume1-0" Feb 27 17:44:58 crc kubenswrapper[4830]: I0227 17:44:58.666663 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/ffe65459-6ffe-44f7-8cee-acf6b6ec2fa8-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"ffe65459-6ffe-44f7-8cee-acf6b6ec2fa8\") " pod="openstack/cinder-volume-volume1-0" Feb 27 17:44:58 crc kubenswrapper[4830]: I0227 17:44:58.666801 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/ffe65459-6ffe-44f7-8cee-acf6b6ec2fa8-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"ffe65459-6ffe-44f7-8cee-acf6b6ec2fa8\") " pod="openstack/cinder-volume-volume1-0" Feb 27 17:44:58 crc kubenswrapper[4830]: I0227 17:44:58.666827 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/ffe65459-6ffe-44f7-8cee-acf6b6ec2fa8-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"ffe65459-6ffe-44f7-8cee-acf6b6ec2fa8\") " pod="openstack/cinder-volume-volume1-0" Feb 27 17:44:58 crc kubenswrapper[4830]: I0227 17:44:58.666854 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/ffe65459-6ffe-44f7-8cee-acf6b6ec2fa8-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"ffe65459-6ffe-44f7-8cee-acf6b6ec2fa8\") " pod="openstack/cinder-volume-volume1-0" Feb 27 17:44:58 crc kubenswrapper[4830]: I0227 17:44:58.666884 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/ffe65459-6ffe-44f7-8cee-acf6b6ec2fa8-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"ffe65459-6ffe-44f7-8cee-acf6b6ec2fa8\") " pod="openstack/cinder-volume-volume1-0" Feb 27 17:44:58 crc kubenswrapper[4830]: I0227 17:44:58.666906 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/ffe65459-6ffe-44f7-8cee-acf6b6ec2fa8-sys\") pod \"cinder-volume-volume1-0\" (UID: \"ffe65459-6ffe-44f7-8cee-acf6b6ec2fa8\") " pod="openstack/cinder-volume-volume1-0" Feb 27 17:44:58 crc kubenswrapper[4830]: I0227 17:44:58.667198 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/ffe65459-6ffe-44f7-8cee-acf6b6ec2fa8-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"ffe65459-6ffe-44f7-8cee-acf6b6ec2fa8\") " pod="openstack/cinder-volume-volume1-0" Feb 27 17:44:58 crc kubenswrapper[4830]: I0227 17:44:58.667245 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/ffe65459-6ffe-44f7-8cee-acf6b6ec2fa8-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"ffe65459-6ffe-44f7-8cee-acf6b6ec2fa8\") " 
pod="openstack/cinder-volume-volume1-0" Feb 27 17:44:58 crc kubenswrapper[4830]: I0227 17:44:58.667243 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/ffe65459-6ffe-44f7-8cee-acf6b6ec2fa8-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"ffe65459-6ffe-44f7-8cee-acf6b6ec2fa8\") " pod="openstack/cinder-volume-volume1-0" Feb 27 17:44:58 crc kubenswrapper[4830]: I0227 17:44:58.671832 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ffe65459-6ffe-44f7-8cee-acf6b6ec2fa8-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"ffe65459-6ffe-44f7-8cee-acf6b6ec2fa8\") " pod="openstack/cinder-volume-volume1-0" Feb 27 17:44:58 crc kubenswrapper[4830]: I0227 17:44:58.671908 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ffe65459-6ffe-44f7-8cee-acf6b6ec2fa8-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"ffe65459-6ffe-44f7-8cee-acf6b6ec2fa8\") " pod="openstack/cinder-volume-volume1-0" Feb 27 17:44:58 crc kubenswrapper[4830]: I0227 17:44:58.672778 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffe65459-6ffe-44f7-8cee-acf6b6ec2fa8-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"ffe65459-6ffe-44f7-8cee-acf6b6ec2fa8\") " pod="openstack/cinder-volume-volume1-0" Feb 27 17:44:58 crc kubenswrapper[4830]: I0227 17:44:58.672790 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffe65459-6ffe-44f7-8cee-acf6b6ec2fa8-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"ffe65459-6ffe-44f7-8cee-acf6b6ec2fa8\") " pod="openstack/cinder-volume-volume1-0" Feb 27 17:44:58 crc kubenswrapper[4830]: I0227 17:44:58.675331 4830 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/ffe65459-6ffe-44f7-8cee-acf6b6ec2fa8-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"ffe65459-6ffe-44f7-8cee-acf6b6ec2fa8\") " pod="openstack/cinder-volume-volume1-0" Feb 27 17:44:58 crc kubenswrapper[4830]: I0227 17:44:58.684723 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v2qjp\" (UniqueName: \"kubernetes.io/projected/ffe65459-6ffe-44f7-8cee-acf6b6ec2fa8-kube-api-access-v2qjp\") pod \"cinder-volume-volume1-0\" (UID: \"ffe65459-6ffe-44f7-8cee-acf6b6ec2fa8\") " pod="openstack/cinder-volume-volume1-0" Feb 27 17:44:58 crc kubenswrapper[4830]: I0227 17:44:58.902637 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-volume1-0" Feb 27 17:44:59 crc kubenswrapper[4830]: I0227 17:44:59.158763 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-backup-0"] Feb 27 17:44:59 crc kubenswrapper[4830]: I0227 17:44:59.161759 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-backup-0" Feb 27 17:44:59 crc kubenswrapper[4830]: I0227 17:44:59.166292 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-backup-config-data" Feb 27 17:44:59 crc kubenswrapper[4830]: I0227 17:44:59.166723 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Feb 27 17:44:59 crc kubenswrapper[4830]: I0227 17:44:59.278759 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/fea93d69-d865-4c2a-b245-eda3ff54abac-sys\") pod \"cinder-backup-0\" (UID: \"fea93d69-d865-4c2a-b245-eda3ff54abac\") " pod="openstack/cinder-backup-0" Feb 27 17:44:59 crc kubenswrapper[4830]: I0227 17:44:59.278811 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fea93d69-d865-4c2a-b245-eda3ff54abac-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"fea93d69-d865-4c2a-b245-eda3ff54abac\") " pod="openstack/cinder-backup-0" Feb 27 17:44:59 crc kubenswrapper[4830]: I0227 17:44:59.278829 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/fea93d69-d865-4c2a-b245-eda3ff54abac-dev\") pod \"cinder-backup-0\" (UID: \"fea93d69-d865-4c2a-b245-eda3ff54abac\") " pod="openstack/cinder-backup-0" Feb 27 17:44:59 crc kubenswrapper[4830]: I0227 17:44:59.278848 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/fea93d69-d865-4c2a-b245-eda3ff54abac-run\") pod \"cinder-backup-0\" (UID: \"fea93d69-d865-4c2a-b245-eda3ff54abac\") " pod="openstack/cinder-backup-0" Feb 27 17:44:59 crc kubenswrapper[4830]: I0227 17:44:59.279050 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fea93d69-d865-4c2a-b245-eda3ff54abac-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"fea93d69-d865-4c2a-b245-eda3ff54abac\") " pod="openstack/cinder-backup-0" Feb 27 17:44:59 crc kubenswrapper[4830]: I0227 17:44:59.279104 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fea93d69-d865-4c2a-b245-eda3ff54abac-config-data-custom\") pod \"cinder-backup-0\" (UID: \"fea93d69-d865-4c2a-b245-eda3ff54abac\") " pod="openstack/cinder-backup-0" Feb 27 17:44:59 crc kubenswrapper[4830]: I0227 17:44:59.279168 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fea93d69-d865-4c2a-b245-eda3ff54abac-lib-modules\") pod \"cinder-backup-0\" (UID: \"fea93d69-d865-4c2a-b245-eda3ff54abac\") " pod="openstack/cinder-backup-0" Feb 27 17:44:59 crc kubenswrapper[4830]: I0227 17:44:59.279214 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/fea93d69-d865-4c2a-b245-eda3ff54abac-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"fea93d69-d865-4c2a-b245-eda3ff54abac\") " pod="openstack/cinder-backup-0" Feb 27 17:44:59 crc kubenswrapper[4830]: I0227 17:44:59.279285 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fea93d69-d865-4c2a-b245-eda3ff54abac-config-data\") pod \"cinder-backup-0\" (UID: \"fea93d69-d865-4c2a-b245-eda3ff54abac\") " pod="openstack/cinder-backup-0" Feb 27 17:44:59 crc kubenswrapper[4830]: I0227 17:44:59.279414 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: 
\"kubernetes.io/host-path/fea93d69-d865-4c2a-b245-eda3ff54abac-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"fea93d69-d865-4c2a-b245-eda3ff54abac\") " pod="openstack/cinder-backup-0" Feb 27 17:44:59 crc kubenswrapper[4830]: I0227 17:44:59.279442 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/fea93d69-d865-4c2a-b245-eda3ff54abac-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"fea93d69-d865-4c2a-b245-eda3ff54abac\") " pod="openstack/cinder-backup-0" Feb 27 17:44:59 crc kubenswrapper[4830]: I0227 17:44:59.279470 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fea93d69-d865-4c2a-b245-eda3ff54abac-scripts\") pod \"cinder-backup-0\" (UID: \"fea93d69-d865-4c2a-b245-eda3ff54abac\") " pod="openstack/cinder-backup-0" Feb 27 17:44:59 crc kubenswrapper[4830]: I0227 17:44:59.279546 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/fea93d69-d865-4c2a-b245-eda3ff54abac-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"fea93d69-d865-4c2a-b245-eda3ff54abac\") " pod="openstack/cinder-backup-0" Feb 27 17:44:59 crc kubenswrapper[4830]: I0227 17:44:59.279580 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzlgd\" (UniqueName: \"kubernetes.io/projected/fea93d69-d865-4c2a-b245-eda3ff54abac-kube-api-access-rzlgd\") pod \"cinder-backup-0\" (UID: \"fea93d69-d865-4c2a-b245-eda3ff54abac\") " pod="openstack/cinder-backup-0" Feb 27 17:44:59 crc kubenswrapper[4830]: I0227 17:44:59.279701 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/fea93d69-d865-4c2a-b245-eda3ff54abac-ceph\") pod \"cinder-backup-0\" (UID: 
\"fea93d69-d865-4c2a-b245-eda3ff54abac\") " pod="openstack/cinder-backup-0" Feb 27 17:44:59 crc kubenswrapper[4830]: I0227 17:44:59.279767 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/fea93d69-d865-4c2a-b245-eda3ff54abac-etc-nvme\") pod \"cinder-backup-0\" (UID: \"fea93d69-d865-4c2a-b245-eda3ff54abac\") " pod="openstack/cinder-backup-0" Feb 27 17:44:59 crc kubenswrapper[4830]: I0227 17:44:59.381819 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/fea93d69-d865-4c2a-b245-eda3ff54abac-ceph\") pod \"cinder-backup-0\" (UID: \"fea93d69-d865-4c2a-b245-eda3ff54abac\") " pod="openstack/cinder-backup-0" Feb 27 17:44:59 crc kubenswrapper[4830]: I0227 17:44:59.381885 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/fea93d69-d865-4c2a-b245-eda3ff54abac-etc-nvme\") pod \"cinder-backup-0\" (UID: \"fea93d69-d865-4c2a-b245-eda3ff54abac\") " pod="openstack/cinder-backup-0" Feb 27 17:44:59 crc kubenswrapper[4830]: I0227 17:44:59.381968 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/fea93d69-d865-4c2a-b245-eda3ff54abac-sys\") pod \"cinder-backup-0\" (UID: \"fea93d69-d865-4c2a-b245-eda3ff54abac\") " pod="openstack/cinder-backup-0" Feb 27 17:44:59 crc kubenswrapper[4830]: I0227 17:44:59.381994 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fea93d69-d865-4c2a-b245-eda3ff54abac-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"fea93d69-d865-4c2a-b245-eda3ff54abac\") " pod="openstack/cinder-backup-0" Feb 27 17:44:59 crc kubenswrapper[4830]: I0227 17:44:59.382022 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"dev\" (UniqueName: \"kubernetes.io/host-path/fea93d69-d865-4c2a-b245-eda3ff54abac-dev\") pod \"cinder-backup-0\" (UID: \"fea93d69-d865-4c2a-b245-eda3ff54abac\") " pod="openstack/cinder-backup-0" Feb 27 17:44:59 crc kubenswrapper[4830]: I0227 17:44:59.382047 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/fea93d69-d865-4c2a-b245-eda3ff54abac-run\") pod \"cinder-backup-0\" (UID: \"fea93d69-d865-4c2a-b245-eda3ff54abac\") " pod="openstack/cinder-backup-0" Feb 27 17:44:59 crc kubenswrapper[4830]: I0227 17:44:59.382085 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fea93d69-d865-4c2a-b245-eda3ff54abac-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"fea93d69-d865-4c2a-b245-eda3ff54abac\") " pod="openstack/cinder-backup-0" Feb 27 17:44:59 crc kubenswrapper[4830]: I0227 17:44:59.382106 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fea93d69-d865-4c2a-b245-eda3ff54abac-config-data-custom\") pod \"cinder-backup-0\" (UID: \"fea93d69-d865-4c2a-b245-eda3ff54abac\") " pod="openstack/cinder-backup-0" Feb 27 17:44:59 crc kubenswrapper[4830]: I0227 17:44:59.382137 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fea93d69-d865-4c2a-b245-eda3ff54abac-lib-modules\") pod \"cinder-backup-0\" (UID: \"fea93d69-d865-4c2a-b245-eda3ff54abac\") " pod="openstack/cinder-backup-0" Feb 27 17:44:59 crc kubenswrapper[4830]: I0227 17:44:59.382144 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/fea93d69-d865-4c2a-b245-eda3ff54abac-etc-nvme\") pod \"cinder-backup-0\" (UID: \"fea93d69-d865-4c2a-b245-eda3ff54abac\") " pod="openstack/cinder-backup-0" Feb 27 17:44:59 crc 
kubenswrapper[4830]: I0227 17:44:59.382168 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/fea93d69-d865-4c2a-b245-eda3ff54abac-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"fea93d69-d865-4c2a-b245-eda3ff54abac\") " pod="openstack/cinder-backup-0" Feb 27 17:44:59 crc kubenswrapper[4830]: I0227 17:44:59.382229 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/fea93d69-d865-4c2a-b245-eda3ff54abac-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"fea93d69-d865-4c2a-b245-eda3ff54abac\") " pod="openstack/cinder-backup-0" Feb 27 17:44:59 crc kubenswrapper[4830]: I0227 17:44:59.382272 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/fea93d69-d865-4c2a-b245-eda3ff54abac-sys\") pod \"cinder-backup-0\" (UID: \"fea93d69-d865-4c2a-b245-eda3ff54abac\") " pod="openstack/cinder-backup-0" Feb 27 17:44:59 crc kubenswrapper[4830]: I0227 17:44:59.382274 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fea93d69-d865-4c2a-b245-eda3ff54abac-config-data\") pod \"cinder-backup-0\" (UID: \"fea93d69-d865-4c2a-b245-eda3ff54abac\") " pod="openstack/cinder-backup-0" Feb 27 17:44:59 crc kubenswrapper[4830]: I0227 17:44:59.382413 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/fea93d69-d865-4c2a-b245-eda3ff54abac-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"fea93d69-d865-4c2a-b245-eda3ff54abac\") " pod="openstack/cinder-backup-0" Feb 27 17:44:59 crc kubenswrapper[4830]: I0227 17:44:59.382456 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/fea93d69-d865-4c2a-b245-eda3ff54abac-var-locks-cinder\") 
pod \"cinder-backup-0\" (UID: \"fea93d69-d865-4c2a-b245-eda3ff54abac\") " pod="openstack/cinder-backup-0" Feb 27 17:44:59 crc kubenswrapper[4830]: I0227 17:44:59.382583 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fea93d69-d865-4c2a-b245-eda3ff54abac-scripts\") pod \"cinder-backup-0\" (UID: \"fea93d69-d865-4c2a-b245-eda3ff54abac\") " pod="openstack/cinder-backup-0" Feb 27 17:44:59 crc kubenswrapper[4830]: I0227 17:44:59.382602 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fea93d69-d865-4c2a-b245-eda3ff54abac-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"fea93d69-d865-4c2a-b245-eda3ff54abac\") " pod="openstack/cinder-backup-0" Feb 27 17:44:59 crc kubenswrapper[4830]: I0227 17:44:59.382717 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rzlgd\" (UniqueName: \"kubernetes.io/projected/fea93d69-d865-4c2a-b245-eda3ff54abac-kube-api-access-rzlgd\") pod \"cinder-backup-0\" (UID: \"fea93d69-d865-4c2a-b245-eda3ff54abac\") " pod="openstack/cinder-backup-0" Feb 27 17:44:59 crc kubenswrapper[4830]: I0227 17:44:59.382748 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/fea93d69-d865-4c2a-b245-eda3ff54abac-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"fea93d69-d865-4c2a-b245-eda3ff54abac\") " pod="openstack/cinder-backup-0" Feb 27 17:44:59 crc kubenswrapper[4830]: I0227 17:44:59.382907 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/fea93d69-d865-4c2a-b245-eda3ff54abac-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"fea93d69-d865-4c2a-b245-eda3ff54abac\") " pod="openstack/cinder-backup-0" Feb 27 17:44:59 crc kubenswrapper[4830]: I0227 17:44:59.382931 4830 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/fea93d69-d865-4c2a-b245-eda3ff54abac-dev\") pod \"cinder-backup-0\" (UID: \"fea93d69-d865-4c2a-b245-eda3ff54abac\") " pod="openstack/cinder-backup-0" Feb 27 17:44:59 crc kubenswrapper[4830]: I0227 17:44:59.382969 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/fea93d69-d865-4c2a-b245-eda3ff54abac-run\") pod \"cinder-backup-0\" (UID: \"fea93d69-d865-4c2a-b245-eda3ff54abac\") " pod="openstack/cinder-backup-0" Feb 27 17:44:59 crc kubenswrapper[4830]: I0227 17:44:59.383248 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fea93d69-d865-4c2a-b245-eda3ff54abac-lib-modules\") pod \"cinder-backup-0\" (UID: \"fea93d69-d865-4c2a-b245-eda3ff54abac\") " pod="openstack/cinder-backup-0" Feb 27 17:44:59 crc kubenswrapper[4830]: I0227 17:44:59.383279 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/fea93d69-d865-4c2a-b245-eda3ff54abac-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"fea93d69-d865-4c2a-b245-eda3ff54abac\") " pod="openstack/cinder-backup-0" Feb 27 17:44:59 crc kubenswrapper[4830]: I0227 17:44:59.383320 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/fea93d69-d865-4c2a-b245-eda3ff54abac-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"fea93d69-d865-4c2a-b245-eda3ff54abac\") " pod="openstack/cinder-backup-0" Feb 27 17:44:59 crc kubenswrapper[4830]: I0227 17:44:59.388810 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fea93d69-d865-4c2a-b245-eda3ff54abac-scripts\") pod \"cinder-backup-0\" (UID: \"fea93d69-d865-4c2a-b245-eda3ff54abac\") " pod="openstack/cinder-backup-0" Feb 27 
17:44:59 crc kubenswrapper[4830]: I0227 17:44:59.391329 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fea93d69-d865-4c2a-b245-eda3ff54abac-config-data-custom\") pod \"cinder-backup-0\" (UID: \"fea93d69-d865-4c2a-b245-eda3ff54abac\") " pod="openstack/cinder-backup-0" Feb 27 17:44:59 crc kubenswrapper[4830]: I0227 17:44:59.392003 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fea93d69-d865-4c2a-b245-eda3ff54abac-config-data\") pod \"cinder-backup-0\" (UID: \"fea93d69-d865-4c2a-b245-eda3ff54abac\") " pod="openstack/cinder-backup-0" Feb 27 17:44:59 crc kubenswrapper[4830]: I0227 17:44:59.397357 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fea93d69-d865-4c2a-b245-eda3ff54abac-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"fea93d69-d865-4c2a-b245-eda3ff54abac\") " pod="openstack/cinder-backup-0" Feb 27 17:44:59 crc kubenswrapper[4830]: I0227 17:44:59.409866 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/fea93d69-d865-4c2a-b245-eda3ff54abac-ceph\") pod \"cinder-backup-0\" (UID: \"fea93d69-d865-4c2a-b245-eda3ff54abac\") " pod="openstack/cinder-backup-0" Feb 27 17:44:59 crc kubenswrapper[4830]: I0227 17:44:59.414078 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rzlgd\" (UniqueName: \"kubernetes.io/projected/fea93d69-d865-4c2a-b245-eda3ff54abac-kube-api-access-rzlgd\") pod \"cinder-backup-0\" (UID: \"fea93d69-d865-4c2a-b245-eda3ff54abac\") " pod="openstack/cinder-backup-0" Feb 27 17:44:59 crc kubenswrapper[4830]: I0227 17:44:59.443777 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" 
event={"ID":"eef2bf38-f907-43dc-916d-4407988e6b37","Type":"ContainerStarted","Data":"52bf8134dd2a75192d454861d0616412949bd96f9412f2632d0ec2ffba2cdbc0"} Feb 27 17:44:59 crc kubenswrapper[4830]: I0227 17:44:59.479264 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.479241952 podStartE2EDuration="3.479241952s" podCreationTimestamp="2026-02-27 17:44:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 17:44:59.474486218 +0000 UTC m=+5895.563758681" watchObservedRunningTime="2026-02-27 17:44:59.479241952 +0000 UTC m=+5895.568514415" Feb 27 17:44:59 crc kubenswrapper[4830]: I0227 17:44:59.488736 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-backup-0" Feb 27 17:44:59 crc kubenswrapper[4830]: I0227 17:44:59.629632 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-volume1-0"] Feb 27 17:45:00 crc kubenswrapper[4830]: I0227 17:45:00.136750 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536905-9z5pp"] Feb 27 17:45:00 crc kubenswrapper[4830]: I0227 17:45:00.139212 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29536905-9z5pp" Feb 27 17:45:00 crc kubenswrapper[4830]: I0227 17:45:00.141450 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 27 17:45:00 crc kubenswrapper[4830]: I0227 17:45:00.141519 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 27 17:45:00 crc kubenswrapper[4830]: I0227 17:45:00.149190 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536905-9z5pp"] Feb 27 17:45:00 crc kubenswrapper[4830]: I0227 17:45:00.248866 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Feb 27 17:45:00 crc kubenswrapper[4830]: W0227 17:45:00.251976 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfea93d69_d865_4c2a_b245_eda3ff54abac.slice/crio-c4d7f897bdbdbf35354b6101fded54b46ac96dd9f70023dcd411ca33700cfe68 WatchSource:0}: Error finding container c4d7f897bdbdbf35354b6101fded54b46ac96dd9f70023dcd411ca33700cfe68: Status 404 returned error can't find the container with id c4d7f897bdbdbf35354b6101fded54b46ac96dd9f70023dcd411ca33700cfe68 Feb 27 17:45:00 crc kubenswrapper[4830]: I0227 17:45:00.312410 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5f54d3b0-ad80-49bc-92e2-d1d9100542e8-config-volume\") pod \"collect-profiles-29536905-9z5pp\" (UID: \"5f54d3b0-ad80-49bc-92e2-d1d9100542e8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536905-9z5pp" Feb 27 17:45:00 crc kubenswrapper[4830]: I0227 17:45:00.312600 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"secret-volume\" (UniqueName: \"kubernetes.io/secret/5f54d3b0-ad80-49bc-92e2-d1d9100542e8-secret-volume\") pod \"collect-profiles-29536905-9z5pp\" (UID: \"5f54d3b0-ad80-49bc-92e2-d1d9100542e8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536905-9z5pp" Feb 27 17:45:00 crc kubenswrapper[4830]: I0227 17:45:00.312830 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x72nm\" (UniqueName: \"kubernetes.io/projected/5f54d3b0-ad80-49bc-92e2-d1d9100542e8-kube-api-access-x72nm\") pod \"collect-profiles-29536905-9z5pp\" (UID: \"5f54d3b0-ad80-49bc-92e2-d1d9100542e8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536905-9z5pp" Feb 27 17:45:00 crc kubenswrapper[4830]: I0227 17:45:00.414448 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5f54d3b0-ad80-49bc-92e2-d1d9100542e8-config-volume\") pod \"collect-profiles-29536905-9z5pp\" (UID: \"5f54d3b0-ad80-49bc-92e2-d1d9100542e8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536905-9z5pp" Feb 27 17:45:00 crc kubenswrapper[4830]: I0227 17:45:00.414536 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5f54d3b0-ad80-49bc-92e2-d1d9100542e8-secret-volume\") pod \"collect-profiles-29536905-9z5pp\" (UID: \"5f54d3b0-ad80-49bc-92e2-d1d9100542e8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536905-9z5pp" Feb 27 17:45:00 crc kubenswrapper[4830]: I0227 17:45:00.414659 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x72nm\" (UniqueName: \"kubernetes.io/projected/5f54d3b0-ad80-49bc-92e2-d1d9100542e8-kube-api-access-x72nm\") pod \"collect-profiles-29536905-9z5pp\" (UID: \"5f54d3b0-ad80-49bc-92e2-d1d9100542e8\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29536905-9z5pp" Feb 27 17:45:00 crc kubenswrapper[4830]: I0227 17:45:00.415736 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5f54d3b0-ad80-49bc-92e2-d1d9100542e8-config-volume\") pod \"collect-profiles-29536905-9z5pp\" (UID: \"5f54d3b0-ad80-49bc-92e2-d1d9100542e8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536905-9z5pp" Feb 27 17:45:00 crc kubenswrapper[4830]: I0227 17:45:00.430988 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5f54d3b0-ad80-49bc-92e2-d1d9100542e8-secret-volume\") pod \"collect-profiles-29536905-9z5pp\" (UID: \"5f54d3b0-ad80-49bc-92e2-d1d9100542e8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536905-9z5pp" Feb 27 17:45:00 crc kubenswrapper[4830]: I0227 17:45:00.442086 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x72nm\" (UniqueName: \"kubernetes.io/projected/5f54d3b0-ad80-49bc-92e2-d1d9100542e8-kube-api-access-x72nm\") pod \"collect-profiles-29536905-9z5pp\" (UID: \"5f54d3b0-ad80-49bc-92e2-d1d9100542e8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536905-9z5pp" Feb 27 17:45:00 crc kubenswrapper[4830]: I0227 17:45:00.452867 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"ffe65459-6ffe-44f7-8cee-acf6b6ec2fa8","Type":"ContainerStarted","Data":"66eeac13e77fd9542cb06c8874849f4e8260ed9503eacf466f1d29b5ad609463"} Feb 27 17:45:00 crc kubenswrapper[4830]: I0227 17:45:00.454142 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"fea93d69-d865-4c2a-b245-eda3ff54abac","Type":"ContainerStarted","Data":"c4d7f897bdbdbf35354b6101fded54b46ac96dd9f70023dcd411ca33700cfe68"} Feb 27 17:45:00 crc kubenswrapper[4830]: I0227 17:45:00.463183 
4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29536905-9z5pp" Feb 27 17:45:00 crc kubenswrapper[4830]: I0227 17:45:00.998421 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536905-9z5pp"] Feb 27 17:45:01 crc kubenswrapper[4830]: I0227 17:45:01.469287 4830 generic.go:334] "Generic (PLEG): container finished" podID="09b3d7eb-5e19-47a1-81bb-a9e0755077ad" containerID="63745ec9fad77b93e96cd00caeb0d073bc10ce82e0ce24b95627b855b5429092" exitCode=0 Feb 27 17:45:01 crc kubenswrapper[4830]: I0227 17:45:01.469442 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"09b3d7eb-5e19-47a1-81bb-a9e0755077ad","Type":"ContainerDied","Data":"63745ec9fad77b93e96cd00caeb0d073bc10ce82e0ce24b95627b855b5429092"} Feb 27 17:45:01 crc kubenswrapper[4830]: I0227 17:45:01.473039 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"ffe65459-6ffe-44f7-8cee-acf6b6ec2fa8","Type":"ContainerStarted","Data":"5e95476437a20eca6f0d0e506f1ac7a71ec8fd8f8856df0e82e0f8780452d1e7"} Feb 27 17:45:01 crc kubenswrapper[4830]: I0227 17:45:01.474386 4830 generic.go:334] "Generic (PLEG): container finished" podID="5f54d3b0-ad80-49bc-92e2-d1d9100542e8" containerID="4582fb325e3dc5e21447e4e6106aa1d897417259c1b534f20ade7494a044f394" exitCode=0 Feb 27 17:45:01 crc kubenswrapper[4830]: I0227 17:45:01.474419 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29536905-9z5pp" event={"ID":"5f54d3b0-ad80-49bc-92e2-d1d9100542e8","Type":"ContainerDied","Data":"4582fb325e3dc5e21447e4e6106aa1d897417259c1b534f20ade7494a044f394"} Feb 27 17:45:01 crc kubenswrapper[4830]: I0227 17:45:01.474444 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29536905-9z5pp" 
event={"ID":"5f54d3b0-ad80-49bc-92e2-d1d9100542e8","Type":"ContainerStarted","Data":"88effbe57a01a8f60e974f992cecb7500ae2398453707870ccd8bc1eba7bf9e8"} Feb 27 17:45:01 crc kubenswrapper[4830]: I0227 17:45:01.745464 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 27 17:45:01 crc kubenswrapper[4830]: I0227 17:45:01.848643 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/09b3d7eb-5e19-47a1-81bb-a9e0755077ad-scripts\") pod \"09b3d7eb-5e19-47a1-81bb-a9e0755077ad\" (UID: \"09b3d7eb-5e19-47a1-81bb-a9e0755077ad\") " Feb 27 17:45:01 crc kubenswrapper[4830]: I0227 17:45:01.848832 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dwqqn\" (UniqueName: \"kubernetes.io/projected/09b3d7eb-5e19-47a1-81bb-a9e0755077ad-kube-api-access-dwqqn\") pod \"09b3d7eb-5e19-47a1-81bb-a9e0755077ad\" (UID: \"09b3d7eb-5e19-47a1-81bb-a9e0755077ad\") " Feb 27 17:45:01 crc kubenswrapper[4830]: I0227 17:45:01.849018 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/09b3d7eb-5e19-47a1-81bb-a9e0755077ad-config-data-custom\") pod \"09b3d7eb-5e19-47a1-81bb-a9e0755077ad\" (UID: \"09b3d7eb-5e19-47a1-81bb-a9e0755077ad\") " Feb 27 17:45:01 crc kubenswrapper[4830]: I0227 17:45:01.849192 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/09b3d7eb-5e19-47a1-81bb-a9e0755077ad-logs\") pod \"09b3d7eb-5e19-47a1-81bb-a9e0755077ad\" (UID: \"09b3d7eb-5e19-47a1-81bb-a9e0755077ad\") " Feb 27 17:45:01 crc kubenswrapper[4830]: I0227 17:45:01.849295 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09b3d7eb-5e19-47a1-81bb-a9e0755077ad-combined-ca-bundle\") pod 
\"09b3d7eb-5e19-47a1-81bb-a9e0755077ad\" (UID: \"09b3d7eb-5e19-47a1-81bb-a9e0755077ad\") " Feb 27 17:45:01 crc kubenswrapper[4830]: I0227 17:45:01.849392 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/09b3d7eb-5e19-47a1-81bb-a9e0755077ad-etc-machine-id\") pod \"09b3d7eb-5e19-47a1-81bb-a9e0755077ad\" (UID: \"09b3d7eb-5e19-47a1-81bb-a9e0755077ad\") " Feb 27 17:45:01 crc kubenswrapper[4830]: I0227 17:45:01.849566 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09b3d7eb-5e19-47a1-81bb-a9e0755077ad-config-data\") pod \"09b3d7eb-5e19-47a1-81bb-a9e0755077ad\" (UID: \"09b3d7eb-5e19-47a1-81bb-a9e0755077ad\") " Feb 27 17:45:01 crc kubenswrapper[4830]: I0227 17:45:01.850575 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/09b3d7eb-5e19-47a1-81bb-a9e0755077ad-logs" (OuterVolumeSpecName: "logs") pod "09b3d7eb-5e19-47a1-81bb-a9e0755077ad" (UID: "09b3d7eb-5e19-47a1-81bb-a9e0755077ad"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 17:45:01 crc kubenswrapper[4830]: I0227 17:45:01.851084 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/09b3d7eb-5e19-47a1-81bb-a9e0755077ad-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "09b3d7eb-5e19-47a1-81bb-a9e0755077ad" (UID: "09b3d7eb-5e19-47a1-81bb-a9e0755077ad"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 17:45:01 crc kubenswrapper[4830]: I0227 17:45:01.854874 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09b3d7eb-5e19-47a1-81bb-a9e0755077ad-kube-api-access-dwqqn" (OuterVolumeSpecName: "kube-api-access-dwqqn") pod "09b3d7eb-5e19-47a1-81bb-a9e0755077ad" (UID: "09b3d7eb-5e19-47a1-81bb-a9e0755077ad"). InnerVolumeSpecName "kube-api-access-dwqqn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:45:01 crc kubenswrapper[4830]: I0227 17:45:01.859052 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09b3d7eb-5e19-47a1-81bb-a9e0755077ad-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "09b3d7eb-5e19-47a1-81bb-a9e0755077ad" (UID: "09b3d7eb-5e19-47a1-81bb-a9e0755077ad"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:45:01 crc kubenswrapper[4830]: I0227 17:45:01.860111 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09b3d7eb-5e19-47a1-81bb-a9e0755077ad-scripts" (OuterVolumeSpecName: "scripts") pod "09b3d7eb-5e19-47a1-81bb-a9e0755077ad" (UID: "09b3d7eb-5e19-47a1-81bb-a9e0755077ad"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:45:01 crc kubenswrapper[4830]: I0227 17:45:01.900536 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09b3d7eb-5e19-47a1-81bb-a9e0755077ad-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "09b3d7eb-5e19-47a1-81bb-a9e0755077ad" (UID: "09b3d7eb-5e19-47a1-81bb-a9e0755077ad"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:45:01 crc kubenswrapper[4830]: I0227 17:45:01.954188 4830 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/09b3d7eb-5e19-47a1-81bb-a9e0755077ad-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 17:45:01 crc kubenswrapper[4830]: I0227 17:45:01.954301 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dwqqn\" (UniqueName: \"kubernetes.io/projected/09b3d7eb-5e19-47a1-81bb-a9e0755077ad-kube-api-access-dwqqn\") on node \"crc\" DevicePath \"\"" Feb 27 17:45:01 crc kubenswrapper[4830]: I0227 17:45:01.954360 4830 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/09b3d7eb-5e19-47a1-81bb-a9e0755077ad-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 27 17:45:01 crc kubenswrapper[4830]: I0227 17:45:01.954413 4830 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/09b3d7eb-5e19-47a1-81bb-a9e0755077ad-logs\") on node \"crc\" DevicePath \"\"" Feb 27 17:45:01 crc kubenswrapper[4830]: I0227 17:45:01.954480 4830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09b3d7eb-5e19-47a1-81bb-a9e0755077ad-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 17:45:01 crc kubenswrapper[4830]: I0227 17:45:01.954534 4830 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/09b3d7eb-5e19-47a1-81bb-a9e0755077ad-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 27 17:45:01 crc kubenswrapper[4830]: I0227 17:45:01.957121 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09b3d7eb-5e19-47a1-81bb-a9e0755077ad-config-data" (OuterVolumeSpecName: "config-data") pod "09b3d7eb-5e19-47a1-81bb-a9e0755077ad" (UID: "09b3d7eb-5e19-47a1-81bb-a9e0755077ad"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:45:02 crc kubenswrapper[4830]: I0227 17:45:02.007323 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 27 17:45:02 crc kubenswrapper[4830]: I0227 17:45:02.055963 4830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09b3d7eb-5e19-47a1-81bb-a9e0755077ad-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 17:45:02 crc kubenswrapper[4830]: I0227 17:45:02.489821 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"fea93d69-d865-4c2a-b245-eda3ff54abac","Type":"ContainerStarted","Data":"175fc18164f088f713635fd8d21634b7458270fd3d1f33c807c2c733c346afaf"} Feb 27 17:45:02 crc kubenswrapper[4830]: I0227 17:45:02.489873 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"fea93d69-d865-4c2a-b245-eda3ff54abac","Type":"ContainerStarted","Data":"4b4abc9887e4b346ddbb81001c379dad07771994c20b950ddf0396f9cc329a1d"} Feb 27 17:45:02 crc kubenswrapper[4830]: I0227 17:45:02.492500 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 27 17:45:02 crc kubenswrapper[4830]: I0227 17:45:02.492492 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"09b3d7eb-5e19-47a1-81bb-a9e0755077ad","Type":"ContainerDied","Data":"70714150cc14c96f437a1ed6c910168cfb918219e95573230ea1d2dbf63a615e"} Feb 27 17:45:02 crc kubenswrapper[4830]: I0227 17:45:02.492573 4830 scope.go:117] "RemoveContainer" containerID="63745ec9fad77b93e96cd00caeb0d073bc10ce82e0ce24b95627b855b5429092" Feb 27 17:45:02 crc kubenswrapper[4830]: I0227 17:45:02.496284 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"ffe65459-6ffe-44f7-8cee-acf6b6ec2fa8","Type":"ContainerStarted","Data":"0f6f56e81d9c82d26f0f9f73c6823336cd2b9473ff9ccf788a1089b19562df13"} Feb 27 17:45:02 crc kubenswrapper[4830]: I0227 17:45:02.517019 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-backup-0" podStartSLOduration=2.440179636 podStartE2EDuration="3.516985855s" podCreationTimestamp="2026-02-27 17:44:59 +0000 UTC" firstStartedPulling="2026-02-27 17:45:00.254537649 +0000 UTC m=+5896.343810112" lastFinishedPulling="2026-02-27 17:45:01.331343868 +0000 UTC m=+5897.420616331" observedRunningTime="2026-02-27 17:45:02.508911201 +0000 UTC m=+5898.598183704" watchObservedRunningTime="2026-02-27 17:45:02.516985855 +0000 UTC m=+5898.606258358" Feb 27 17:45:02 crc kubenswrapper[4830]: I0227 17:45:02.544396 4830 scope.go:117] "RemoveContainer" containerID="25c54f075c8988085178cb7b935b2679011fae62263f07254bc9fadd5a34e79c" Feb 27 17:45:02 crc kubenswrapper[4830]: I0227 17:45:02.553873 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-volume-volume1-0" podStartSLOduration=3.3903726770000002 podStartE2EDuration="4.553845191s" podCreationTimestamp="2026-02-27 17:44:58 +0000 UTC" firstStartedPulling="2026-02-27 17:44:59.685116243 +0000 UTC 
m=+5895.774388706" lastFinishedPulling="2026-02-27 17:45:00.848588747 +0000 UTC m=+5896.937861220" observedRunningTime="2026-02-27 17:45:02.536275159 +0000 UTC m=+5898.625547682" watchObservedRunningTime="2026-02-27 17:45:02.553845191 +0000 UTC m=+5898.643117664" Feb 27 17:45:02 crc kubenswrapper[4830]: I0227 17:45:02.570006 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 27 17:45:02 crc kubenswrapper[4830]: I0227 17:45:02.580511 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Feb 27 17:45:02 crc kubenswrapper[4830]: I0227 17:45:02.593680 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 27 17:45:02 crc kubenswrapper[4830]: E0227 17:45:02.594093 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09b3d7eb-5e19-47a1-81bb-a9e0755077ad" containerName="cinder-api" Feb 27 17:45:02 crc kubenswrapper[4830]: I0227 17:45:02.594113 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="09b3d7eb-5e19-47a1-81bb-a9e0755077ad" containerName="cinder-api" Feb 27 17:45:02 crc kubenswrapper[4830]: E0227 17:45:02.594124 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09b3d7eb-5e19-47a1-81bb-a9e0755077ad" containerName="cinder-api-log" Feb 27 17:45:02 crc kubenswrapper[4830]: I0227 17:45:02.594130 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="09b3d7eb-5e19-47a1-81bb-a9e0755077ad" containerName="cinder-api-log" Feb 27 17:45:02 crc kubenswrapper[4830]: I0227 17:45:02.594288 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="09b3d7eb-5e19-47a1-81bb-a9e0755077ad" containerName="cinder-api-log" Feb 27 17:45:02 crc kubenswrapper[4830]: I0227 17:45:02.594312 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="09b3d7eb-5e19-47a1-81bb-a9e0755077ad" containerName="cinder-api" Feb 27 17:45:02 crc kubenswrapper[4830]: I0227 17:45:02.595270 4830 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/cinder-api-0" Feb 27 17:45:02 crc kubenswrapper[4830]: I0227 17:45:02.598176 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 27 17:45:02 crc kubenswrapper[4830]: I0227 17:45:02.605774 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 27 17:45:02 crc kubenswrapper[4830]: I0227 17:45:02.764176 4830 scope.go:117] "RemoveContainer" containerID="0f73af46b765b3d98f3f5e9883b49887ac4f2ac3485e91a001e18c5d351646d8" Feb 27 17:45:02 crc kubenswrapper[4830]: E0227 17:45:02.764386 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 17:45:02 crc kubenswrapper[4830]: E0227 17:45:02.765923 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536898-vrwjs" podUID="204eb1af-36ad-4de7-9da7-9a37fefd3026" Feb 27 17:45:02 crc kubenswrapper[4830]: I0227 17:45:02.770990 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e37d0f8-38cf-4583-811f-1907fd385a6c-config-data\") pod \"cinder-api-0\" (UID: \"0e37d0f8-38cf-4583-811f-1907fd385a6c\") " pod="openstack/cinder-api-0" Feb 27 17:45:02 crc kubenswrapper[4830]: I0227 17:45:02.771034 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/0e37d0f8-38cf-4583-811f-1907fd385a6c-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"0e37d0f8-38cf-4583-811f-1907fd385a6c\") " pod="openstack/cinder-api-0" Feb 27 17:45:02 crc kubenswrapper[4830]: I0227 17:45:02.771064 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0e37d0f8-38cf-4583-811f-1907fd385a6c-scripts\") pod \"cinder-api-0\" (UID: \"0e37d0f8-38cf-4583-811f-1907fd385a6c\") " pod="openstack/cinder-api-0" Feb 27 17:45:02 crc kubenswrapper[4830]: I0227 17:45:02.771175 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0e37d0f8-38cf-4583-811f-1907fd385a6c-logs\") pod \"cinder-api-0\" (UID: \"0e37d0f8-38cf-4583-811f-1907fd385a6c\") " pod="openstack/cinder-api-0" Feb 27 17:45:02 crc kubenswrapper[4830]: I0227 17:45:02.771209 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cz9fm\" (UniqueName: \"kubernetes.io/projected/0e37d0f8-38cf-4583-811f-1907fd385a6c-kube-api-access-cz9fm\") pod \"cinder-api-0\" (UID: \"0e37d0f8-38cf-4583-811f-1907fd385a6c\") " pod="openstack/cinder-api-0" Feb 27 17:45:02 crc kubenswrapper[4830]: I0227 17:45:02.771248 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0e37d0f8-38cf-4583-811f-1907fd385a6c-etc-machine-id\") pod \"cinder-api-0\" (UID: \"0e37d0f8-38cf-4583-811f-1907fd385a6c\") " pod="openstack/cinder-api-0" Feb 27 17:45:02 crc kubenswrapper[4830]: I0227 17:45:02.771319 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0e37d0f8-38cf-4583-811f-1907fd385a6c-config-data-custom\") pod \"cinder-api-0\" (UID: 
\"0e37d0f8-38cf-4583-811f-1907fd385a6c\") " pod="openstack/cinder-api-0" Feb 27 17:45:02 crc kubenswrapper[4830]: I0227 17:45:02.776704 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09b3d7eb-5e19-47a1-81bb-a9e0755077ad" path="/var/lib/kubelet/pods/09b3d7eb-5e19-47a1-81bb-a9e0755077ad/volumes" Feb 27 17:45:02 crc kubenswrapper[4830]: I0227 17:45:02.873986 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0e37d0f8-38cf-4583-811f-1907fd385a6c-etc-machine-id\") pod \"cinder-api-0\" (UID: \"0e37d0f8-38cf-4583-811f-1907fd385a6c\") " pod="openstack/cinder-api-0" Feb 27 17:45:02 crc kubenswrapper[4830]: I0227 17:45:02.874523 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0e37d0f8-38cf-4583-811f-1907fd385a6c-config-data-custom\") pod \"cinder-api-0\" (UID: \"0e37d0f8-38cf-4583-811f-1907fd385a6c\") " pod="openstack/cinder-api-0" Feb 27 17:45:02 crc kubenswrapper[4830]: I0227 17:45:02.874673 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e37d0f8-38cf-4583-811f-1907fd385a6c-config-data\") pod \"cinder-api-0\" (UID: \"0e37d0f8-38cf-4583-811f-1907fd385a6c\") " pod="openstack/cinder-api-0" Feb 27 17:45:02 crc kubenswrapper[4830]: I0227 17:45:02.874803 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e37d0f8-38cf-4583-811f-1907fd385a6c-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"0e37d0f8-38cf-4583-811f-1907fd385a6c\") " pod="openstack/cinder-api-0" Feb 27 17:45:02 crc kubenswrapper[4830]: I0227 17:45:02.874879 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0e37d0f8-38cf-4583-811f-1907fd385a6c-scripts\") pod 
\"cinder-api-0\" (UID: \"0e37d0f8-38cf-4583-811f-1907fd385a6c\") " pod="openstack/cinder-api-0" Feb 27 17:45:02 crc kubenswrapper[4830]: I0227 17:45:02.874988 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0e37d0f8-38cf-4583-811f-1907fd385a6c-logs\") pod \"cinder-api-0\" (UID: \"0e37d0f8-38cf-4583-811f-1907fd385a6c\") " pod="openstack/cinder-api-0" Feb 27 17:45:02 crc kubenswrapper[4830]: I0227 17:45:02.875113 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cz9fm\" (UniqueName: \"kubernetes.io/projected/0e37d0f8-38cf-4583-811f-1907fd385a6c-kube-api-access-cz9fm\") pod \"cinder-api-0\" (UID: \"0e37d0f8-38cf-4583-811f-1907fd385a6c\") " pod="openstack/cinder-api-0" Feb 27 17:45:02 crc kubenswrapper[4830]: I0227 17:45:02.874167 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0e37d0f8-38cf-4583-811f-1907fd385a6c-etc-machine-id\") pod \"cinder-api-0\" (UID: \"0e37d0f8-38cf-4583-811f-1907fd385a6c\") " pod="openstack/cinder-api-0" Feb 27 17:45:02 crc kubenswrapper[4830]: I0227 17:45:02.876658 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0e37d0f8-38cf-4583-811f-1907fd385a6c-logs\") pod \"cinder-api-0\" (UID: \"0e37d0f8-38cf-4583-811f-1907fd385a6c\") " pod="openstack/cinder-api-0" Feb 27 17:45:02 crc kubenswrapper[4830]: I0227 17:45:02.881646 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0e37d0f8-38cf-4583-811f-1907fd385a6c-config-data-custom\") pod \"cinder-api-0\" (UID: \"0e37d0f8-38cf-4583-811f-1907fd385a6c\") " pod="openstack/cinder-api-0" Feb 27 17:45:02 crc kubenswrapper[4830]: I0227 17:45:02.882146 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/0e37d0f8-38cf-4583-811f-1907fd385a6c-config-data\") pod \"cinder-api-0\" (UID: \"0e37d0f8-38cf-4583-811f-1907fd385a6c\") " pod="openstack/cinder-api-0" Feb 27 17:45:02 crc kubenswrapper[4830]: I0227 17:45:02.882755 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e37d0f8-38cf-4583-811f-1907fd385a6c-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"0e37d0f8-38cf-4583-811f-1907fd385a6c\") " pod="openstack/cinder-api-0" Feb 27 17:45:02 crc kubenswrapper[4830]: I0227 17:45:02.887437 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0e37d0f8-38cf-4583-811f-1907fd385a6c-scripts\") pod \"cinder-api-0\" (UID: \"0e37d0f8-38cf-4583-811f-1907fd385a6c\") " pod="openstack/cinder-api-0" Feb 27 17:45:02 crc kubenswrapper[4830]: I0227 17:45:02.893089 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cz9fm\" (UniqueName: \"kubernetes.io/projected/0e37d0f8-38cf-4583-811f-1907fd385a6c-kube-api-access-cz9fm\") pod \"cinder-api-0\" (UID: \"0e37d0f8-38cf-4583-811f-1907fd385a6c\") " pod="openstack/cinder-api-0" Feb 27 17:45:02 crc kubenswrapper[4830]: I0227 17:45:02.925534 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 27 17:45:03 crc kubenswrapper[4830]: I0227 17:45:03.040473 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29536905-9z5pp" Feb 27 17:45:03 crc kubenswrapper[4830]: I0227 17:45:03.181618 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5f54d3b0-ad80-49bc-92e2-d1d9100542e8-config-volume\") pod \"5f54d3b0-ad80-49bc-92e2-d1d9100542e8\" (UID: \"5f54d3b0-ad80-49bc-92e2-d1d9100542e8\") " Feb 27 17:45:03 crc kubenswrapper[4830]: I0227 17:45:03.181763 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x72nm\" (UniqueName: \"kubernetes.io/projected/5f54d3b0-ad80-49bc-92e2-d1d9100542e8-kube-api-access-x72nm\") pod \"5f54d3b0-ad80-49bc-92e2-d1d9100542e8\" (UID: \"5f54d3b0-ad80-49bc-92e2-d1d9100542e8\") " Feb 27 17:45:03 crc kubenswrapper[4830]: I0227 17:45:03.181859 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5f54d3b0-ad80-49bc-92e2-d1d9100542e8-secret-volume\") pod \"5f54d3b0-ad80-49bc-92e2-d1d9100542e8\" (UID: \"5f54d3b0-ad80-49bc-92e2-d1d9100542e8\") " Feb 27 17:45:03 crc kubenswrapper[4830]: I0227 17:45:03.182463 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f54d3b0-ad80-49bc-92e2-d1d9100542e8-config-volume" (OuterVolumeSpecName: "config-volume") pod "5f54d3b0-ad80-49bc-92e2-d1d9100542e8" (UID: "5f54d3b0-ad80-49bc-92e2-d1d9100542e8"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:45:03 crc kubenswrapper[4830]: I0227 17:45:03.186603 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f54d3b0-ad80-49bc-92e2-d1d9100542e8-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "5f54d3b0-ad80-49bc-92e2-d1d9100542e8" (UID: "5f54d3b0-ad80-49bc-92e2-d1d9100542e8"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:45:03 crc kubenswrapper[4830]: I0227 17:45:03.206407 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f54d3b0-ad80-49bc-92e2-d1d9100542e8-kube-api-access-x72nm" (OuterVolumeSpecName: "kube-api-access-x72nm") pod "5f54d3b0-ad80-49bc-92e2-d1d9100542e8" (UID: "5f54d3b0-ad80-49bc-92e2-d1d9100542e8"). InnerVolumeSpecName "kube-api-access-x72nm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:45:03 crc kubenswrapper[4830]: I0227 17:45:03.233442 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 27 17:45:03 crc kubenswrapper[4830]: I0227 17:45:03.284309 4830 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5f54d3b0-ad80-49bc-92e2-d1d9100542e8-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 27 17:45:03 crc kubenswrapper[4830]: I0227 17:45:03.284333 4830 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5f54d3b0-ad80-49bc-92e2-d1d9100542e8-config-volume\") on node \"crc\" DevicePath \"\"" Feb 27 17:45:03 crc kubenswrapper[4830]: I0227 17:45:03.284344 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x72nm\" (UniqueName: \"kubernetes.io/projected/5f54d3b0-ad80-49bc-92e2-d1d9100542e8-kube-api-access-x72nm\") on node \"crc\" DevicePath \"\"" Feb 27 17:45:03 crc kubenswrapper[4830]: I0227 17:45:03.507411 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29536905-9z5pp" event={"ID":"5f54d3b0-ad80-49bc-92e2-d1d9100542e8","Type":"ContainerDied","Data":"88effbe57a01a8f60e974f992cecb7500ae2398453707870ccd8bc1eba7bf9e8"} Feb 27 17:45:03 crc kubenswrapper[4830]: I0227 17:45:03.507650 4830 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="88effbe57a01a8f60e974f992cecb7500ae2398453707870ccd8bc1eba7bf9e8" Feb 27 17:45:03 crc kubenswrapper[4830]: I0227 17:45:03.507700 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29536905-9z5pp" Feb 27 17:45:03 crc kubenswrapper[4830]: I0227 17:45:03.514648 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"0e37d0f8-38cf-4583-811f-1907fd385a6c","Type":"ContainerStarted","Data":"dec432055bb00052a7919bb73af9d31cb5bc585437c57d20d672dbcdb2e19eb2"} Feb 27 17:45:03 crc kubenswrapper[4830]: I0227 17:45:03.904141 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-volume-volume1-0" Feb 27 17:45:04 crc kubenswrapper[4830]: I0227 17:45:04.131089 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536860-2n2dr"] Feb 27 17:45:04 crc kubenswrapper[4830]: I0227 17:45:04.140083 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536860-2n2dr"] Feb 27 17:45:04 crc kubenswrapper[4830]: I0227 17:45:04.490269 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-backup-0" Feb 27 17:45:04 crc kubenswrapper[4830]: I0227 17:45:04.530860 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"0e37d0f8-38cf-4583-811f-1907fd385a6c","Type":"ContainerStarted","Data":"a2631f65fdc13249050c49ec3d889c2bf968c64114ad2b94ddeb2ae6942632ff"} Feb 27 17:45:04 crc kubenswrapper[4830]: I0227 17:45:04.801173 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1abc3c2c-443e-473d-a216-27c5fddb12c5" path="/var/lib/kubelet/pods/1abc3c2c-443e-473d-a216-27c5fddb12c5/volumes" Feb 27 17:45:05 crc kubenswrapper[4830]: I0227 17:45:05.545181 4830 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/cinder-api-0" event={"ID":"0e37d0f8-38cf-4583-811f-1907fd385a6c","Type":"ContainerStarted","Data":"f63524298c8d2f0199c0afc021fb3a48b9f72b09975e8062315f845c391ec650"} Feb 27 17:45:05 crc kubenswrapper[4830]: I0227 17:45:05.545515 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 27 17:45:05 crc kubenswrapper[4830]: I0227 17:45:05.590711 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.5906791719999998 podStartE2EDuration="3.590679172s" podCreationTimestamp="2026-02-27 17:45:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 17:45:05.570092687 +0000 UTC m=+5901.659365230" watchObservedRunningTime="2026-02-27 17:45:05.590679172 +0000 UTC m=+5901.679951655" Feb 27 17:45:06 crc kubenswrapper[4830]: E0227 17:45:06.771335 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536904-jrdqt" podUID="77856f9c-1131-4857-9fff-bddf1d27b5d3" Feb 27 17:45:07 crc kubenswrapper[4830]: I0227 17:45:07.242764 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 27 17:45:07 crc kubenswrapper[4830]: I0227 17:45:07.311638 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 27 17:45:07 crc kubenswrapper[4830]: I0227 17:45:07.563995 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="eef2bf38-f907-43dc-916d-4407988e6b37" containerName="cinder-scheduler" containerID="cri-o://a4b5e3c22a86b258ff81643d87f5fb353d38636aed3ef14a3f53ffd3b8d352a7" gracePeriod=30 Feb 27 17:45:07 crc kubenswrapper[4830]: 
I0227 17:45:07.564105 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="eef2bf38-f907-43dc-916d-4407988e6b37" containerName="probe" containerID="cri-o://52bf8134dd2a75192d454861d0616412949bd96f9412f2632d0ec2ffba2cdbc0" gracePeriod=30 Feb 27 17:45:08 crc kubenswrapper[4830]: I0227 17:45:08.574974 4830 generic.go:334] "Generic (PLEG): container finished" podID="eef2bf38-f907-43dc-916d-4407988e6b37" containerID="52bf8134dd2a75192d454861d0616412949bd96f9412f2632d0ec2ffba2cdbc0" exitCode=0 Feb 27 17:45:08 crc kubenswrapper[4830]: I0227 17:45:08.574997 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"eef2bf38-f907-43dc-916d-4407988e6b37","Type":"ContainerDied","Data":"52bf8134dd2a75192d454861d0616412949bd96f9412f2632d0ec2ffba2cdbc0"} Feb 27 17:45:09 crc kubenswrapper[4830]: I0227 17:45:09.147748 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-volume-volume1-0" Feb 27 17:45:09 crc kubenswrapper[4830]: I0227 17:45:09.736988 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-backup-0" Feb 27 17:45:10 crc kubenswrapper[4830]: I0227 17:45:10.458597 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 27 17:45:10 crc kubenswrapper[4830]: I0227 17:45:10.552226 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eef2bf38-f907-43dc-916d-4407988e6b37-combined-ca-bundle\") pod \"eef2bf38-f907-43dc-916d-4407988e6b37\" (UID: \"eef2bf38-f907-43dc-916d-4407988e6b37\") " Feb 27 17:45:10 crc kubenswrapper[4830]: I0227 17:45:10.552302 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eef2bf38-f907-43dc-916d-4407988e6b37-scripts\") pod \"eef2bf38-f907-43dc-916d-4407988e6b37\" (UID: \"eef2bf38-f907-43dc-916d-4407988e6b37\") " Feb 27 17:45:10 crc kubenswrapper[4830]: I0227 17:45:10.552358 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/eef2bf38-f907-43dc-916d-4407988e6b37-config-data-custom\") pod \"eef2bf38-f907-43dc-916d-4407988e6b37\" (UID: \"eef2bf38-f907-43dc-916d-4407988e6b37\") " Feb 27 17:45:10 crc kubenswrapper[4830]: I0227 17:45:10.552397 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eef2bf38-f907-43dc-916d-4407988e6b37-config-data\") pod \"eef2bf38-f907-43dc-916d-4407988e6b37\" (UID: \"eef2bf38-f907-43dc-916d-4407988e6b37\") " Feb 27 17:45:10 crc kubenswrapper[4830]: I0227 17:45:10.552500 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/eef2bf38-f907-43dc-916d-4407988e6b37-etc-machine-id\") pod \"eef2bf38-f907-43dc-916d-4407988e6b37\" (UID: \"eef2bf38-f907-43dc-916d-4407988e6b37\") " Feb 27 17:45:10 crc kubenswrapper[4830]: I0227 17:45:10.552615 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-slg9p\" (UniqueName: 
\"kubernetes.io/projected/eef2bf38-f907-43dc-916d-4407988e6b37-kube-api-access-slg9p\") pod \"eef2bf38-f907-43dc-916d-4407988e6b37\" (UID: \"eef2bf38-f907-43dc-916d-4407988e6b37\") " Feb 27 17:45:10 crc kubenswrapper[4830]: I0227 17:45:10.552904 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eef2bf38-f907-43dc-916d-4407988e6b37-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "eef2bf38-f907-43dc-916d-4407988e6b37" (UID: "eef2bf38-f907-43dc-916d-4407988e6b37"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 17:45:10 crc kubenswrapper[4830]: I0227 17:45:10.558683 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eef2bf38-f907-43dc-916d-4407988e6b37-kube-api-access-slg9p" (OuterVolumeSpecName: "kube-api-access-slg9p") pod "eef2bf38-f907-43dc-916d-4407988e6b37" (UID: "eef2bf38-f907-43dc-916d-4407988e6b37"). InnerVolumeSpecName "kube-api-access-slg9p". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:45:10 crc kubenswrapper[4830]: I0227 17:45:10.558788 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eef2bf38-f907-43dc-916d-4407988e6b37-scripts" (OuterVolumeSpecName: "scripts") pod "eef2bf38-f907-43dc-916d-4407988e6b37" (UID: "eef2bf38-f907-43dc-916d-4407988e6b37"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:45:10 crc kubenswrapper[4830]: I0227 17:45:10.570376 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eef2bf38-f907-43dc-916d-4407988e6b37-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "eef2bf38-f907-43dc-916d-4407988e6b37" (UID: "eef2bf38-f907-43dc-916d-4407988e6b37"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:45:10 crc kubenswrapper[4830]: I0227 17:45:10.600223 4830 generic.go:334] "Generic (PLEG): container finished" podID="eef2bf38-f907-43dc-916d-4407988e6b37" containerID="a4b5e3c22a86b258ff81643d87f5fb353d38636aed3ef14a3f53ffd3b8d352a7" exitCode=0 Feb 27 17:45:10 crc kubenswrapper[4830]: I0227 17:45:10.600276 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 27 17:45:10 crc kubenswrapper[4830]: I0227 17:45:10.600288 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"eef2bf38-f907-43dc-916d-4407988e6b37","Type":"ContainerDied","Data":"a4b5e3c22a86b258ff81643d87f5fb353d38636aed3ef14a3f53ffd3b8d352a7"} Feb 27 17:45:10 crc kubenswrapper[4830]: I0227 17:45:10.600323 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"eef2bf38-f907-43dc-916d-4407988e6b37","Type":"ContainerDied","Data":"25050af8c261f6bfb42efdea864394224343da68783a28b707ed2a4470cce7df"} Feb 27 17:45:10 crc kubenswrapper[4830]: I0227 17:45:10.600342 4830 scope.go:117] "RemoveContainer" containerID="52bf8134dd2a75192d454861d0616412949bd96f9412f2632d0ec2ffba2cdbc0" Feb 27 17:45:10 crc kubenswrapper[4830]: I0227 17:45:10.631024 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eef2bf38-f907-43dc-916d-4407988e6b37-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "eef2bf38-f907-43dc-916d-4407988e6b37" (UID: "eef2bf38-f907-43dc-916d-4407988e6b37"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:45:10 crc kubenswrapper[4830]: I0227 17:45:10.654560 4830 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/eef2bf38-f907-43dc-916d-4407988e6b37-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 27 17:45:10 crc kubenswrapper[4830]: I0227 17:45:10.654585 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-slg9p\" (UniqueName: \"kubernetes.io/projected/eef2bf38-f907-43dc-916d-4407988e6b37-kube-api-access-slg9p\") on node \"crc\" DevicePath \"\"" Feb 27 17:45:10 crc kubenswrapper[4830]: I0227 17:45:10.654595 4830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eef2bf38-f907-43dc-916d-4407988e6b37-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 17:45:10 crc kubenswrapper[4830]: I0227 17:45:10.654606 4830 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eef2bf38-f907-43dc-916d-4407988e6b37-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 17:45:10 crc kubenswrapper[4830]: I0227 17:45:10.654615 4830 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/eef2bf38-f907-43dc-916d-4407988e6b37-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 27 17:45:10 crc kubenswrapper[4830]: I0227 17:45:10.662057 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eef2bf38-f907-43dc-916d-4407988e6b37-config-data" (OuterVolumeSpecName: "config-data") pod "eef2bf38-f907-43dc-916d-4407988e6b37" (UID: "eef2bf38-f907-43dc-916d-4407988e6b37"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:45:10 crc kubenswrapper[4830]: I0227 17:45:10.685316 4830 scope.go:117] "RemoveContainer" containerID="a4b5e3c22a86b258ff81643d87f5fb353d38636aed3ef14a3f53ffd3b8d352a7" Feb 27 17:45:10 crc kubenswrapper[4830]: I0227 17:45:10.705392 4830 scope.go:117] "RemoveContainer" containerID="52bf8134dd2a75192d454861d0616412949bd96f9412f2632d0ec2ffba2cdbc0" Feb 27 17:45:10 crc kubenswrapper[4830]: E0227 17:45:10.705801 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"52bf8134dd2a75192d454861d0616412949bd96f9412f2632d0ec2ffba2cdbc0\": container with ID starting with 52bf8134dd2a75192d454861d0616412949bd96f9412f2632d0ec2ffba2cdbc0 not found: ID does not exist" containerID="52bf8134dd2a75192d454861d0616412949bd96f9412f2632d0ec2ffba2cdbc0" Feb 27 17:45:10 crc kubenswrapper[4830]: I0227 17:45:10.705840 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52bf8134dd2a75192d454861d0616412949bd96f9412f2632d0ec2ffba2cdbc0"} err="failed to get container status \"52bf8134dd2a75192d454861d0616412949bd96f9412f2632d0ec2ffba2cdbc0\": rpc error: code = NotFound desc = could not find container \"52bf8134dd2a75192d454861d0616412949bd96f9412f2632d0ec2ffba2cdbc0\": container with ID starting with 52bf8134dd2a75192d454861d0616412949bd96f9412f2632d0ec2ffba2cdbc0 not found: ID does not exist" Feb 27 17:45:10 crc kubenswrapper[4830]: I0227 17:45:10.705859 4830 scope.go:117] "RemoveContainer" containerID="a4b5e3c22a86b258ff81643d87f5fb353d38636aed3ef14a3f53ffd3b8d352a7" Feb 27 17:45:10 crc kubenswrapper[4830]: E0227 17:45:10.706166 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a4b5e3c22a86b258ff81643d87f5fb353d38636aed3ef14a3f53ffd3b8d352a7\": container with ID starting with 
a4b5e3c22a86b258ff81643d87f5fb353d38636aed3ef14a3f53ffd3b8d352a7 not found: ID does not exist" containerID="a4b5e3c22a86b258ff81643d87f5fb353d38636aed3ef14a3f53ffd3b8d352a7" Feb 27 17:45:10 crc kubenswrapper[4830]: I0227 17:45:10.706187 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4b5e3c22a86b258ff81643d87f5fb353d38636aed3ef14a3f53ffd3b8d352a7"} err="failed to get container status \"a4b5e3c22a86b258ff81643d87f5fb353d38636aed3ef14a3f53ffd3b8d352a7\": rpc error: code = NotFound desc = could not find container \"a4b5e3c22a86b258ff81643d87f5fb353d38636aed3ef14a3f53ffd3b8d352a7\": container with ID starting with a4b5e3c22a86b258ff81643d87f5fb353d38636aed3ef14a3f53ffd3b8d352a7 not found: ID does not exist" Feb 27 17:45:10 crc kubenswrapper[4830]: I0227 17:45:10.756106 4830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eef2bf38-f907-43dc-916d-4407988e6b37-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 17:45:10 crc kubenswrapper[4830]: I0227 17:45:10.936017 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 27 17:45:10 crc kubenswrapper[4830]: I0227 17:45:10.955601 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 27 17:45:10 crc kubenswrapper[4830]: I0227 17:45:10.971140 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 27 17:45:10 crc kubenswrapper[4830]: E0227 17:45:10.971653 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eef2bf38-f907-43dc-916d-4407988e6b37" containerName="cinder-scheduler" Feb 27 17:45:10 crc kubenswrapper[4830]: I0227 17:45:10.971678 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="eef2bf38-f907-43dc-916d-4407988e6b37" containerName="cinder-scheduler" Feb 27 17:45:10 crc kubenswrapper[4830]: E0227 17:45:10.971719 4830 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="5f54d3b0-ad80-49bc-92e2-d1d9100542e8" containerName="collect-profiles" Feb 27 17:45:10 crc kubenswrapper[4830]: I0227 17:45:10.971728 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f54d3b0-ad80-49bc-92e2-d1d9100542e8" containerName="collect-profiles" Feb 27 17:45:10 crc kubenswrapper[4830]: E0227 17:45:10.971746 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eef2bf38-f907-43dc-916d-4407988e6b37" containerName="probe" Feb 27 17:45:10 crc kubenswrapper[4830]: I0227 17:45:10.971754 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="eef2bf38-f907-43dc-916d-4407988e6b37" containerName="probe" Feb 27 17:45:10 crc kubenswrapper[4830]: I0227 17:45:10.972074 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="eef2bf38-f907-43dc-916d-4407988e6b37" containerName="probe" Feb 27 17:45:10 crc kubenswrapper[4830]: I0227 17:45:10.972106 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f54d3b0-ad80-49bc-92e2-d1d9100542e8" containerName="collect-profiles" Feb 27 17:45:10 crc kubenswrapper[4830]: I0227 17:45:10.972128 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="eef2bf38-f907-43dc-916d-4407988e6b37" containerName="cinder-scheduler" Feb 27 17:45:10 crc kubenswrapper[4830]: I0227 17:45:10.974035 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 27 17:45:10 crc kubenswrapper[4830]: I0227 17:45:10.978331 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 27 17:45:10 crc kubenswrapper[4830]: I0227 17:45:10.979636 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 27 17:45:11 crc kubenswrapper[4830]: I0227 17:45:11.061274 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rn7hk\" (UniqueName: \"kubernetes.io/projected/ff43406b-1751-47e9-84a7-38f1e2aa419e-kube-api-access-rn7hk\") pod \"cinder-scheduler-0\" (UID: \"ff43406b-1751-47e9-84a7-38f1e2aa419e\") " pod="openstack/cinder-scheduler-0" Feb 27 17:45:11 crc kubenswrapper[4830]: I0227 17:45:11.061321 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff43406b-1751-47e9-84a7-38f1e2aa419e-config-data\") pod \"cinder-scheduler-0\" (UID: \"ff43406b-1751-47e9-84a7-38f1e2aa419e\") " pod="openstack/cinder-scheduler-0" Feb 27 17:45:11 crc kubenswrapper[4830]: I0227 17:45:11.061366 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ff43406b-1751-47e9-84a7-38f1e2aa419e-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"ff43406b-1751-47e9-84a7-38f1e2aa419e\") " pod="openstack/cinder-scheduler-0" Feb 27 17:45:11 crc kubenswrapper[4830]: I0227 17:45:11.061479 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff43406b-1751-47e9-84a7-38f1e2aa419e-scripts\") pod \"cinder-scheduler-0\" (UID: \"ff43406b-1751-47e9-84a7-38f1e2aa419e\") " pod="openstack/cinder-scheduler-0" Feb 27 17:45:11 crc kubenswrapper[4830]: I0227 17:45:11.061512 4830 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ff43406b-1751-47e9-84a7-38f1e2aa419e-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"ff43406b-1751-47e9-84a7-38f1e2aa419e\") " pod="openstack/cinder-scheduler-0" Feb 27 17:45:11 crc kubenswrapper[4830]: I0227 17:45:11.061565 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff43406b-1751-47e9-84a7-38f1e2aa419e-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"ff43406b-1751-47e9-84a7-38f1e2aa419e\") " pod="openstack/cinder-scheduler-0" Feb 27 17:45:11 crc kubenswrapper[4830]: I0227 17:45:11.163897 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ff43406b-1751-47e9-84a7-38f1e2aa419e-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"ff43406b-1751-47e9-84a7-38f1e2aa419e\") " pod="openstack/cinder-scheduler-0" Feb 27 17:45:11 crc kubenswrapper[4830]: I0227 17:45:11.164046 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ff43406b-1751-47e9-84a7-38f1e2aa419e-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"ff43406b-1751-47e9-84a7-38f1e2aa419e\") " pod="openstack/cinder-scheduler-0" Feb 27 17:45:11 crc kubenswrapper[4830]: I0227 17:45:11.164334 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff43406b-1751-47e9-84a7-38f1e2aa419e-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"ff43406b-1751-47e9-84a7-38f1e2aa419e\") " pod="openstack/cinder-scheduler-0" Feb 27 17:45:11 crc kubenswrapper[4830]: I0227 17:45:11.164391 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rn7hk\" (UniqueName: 
\"kubernetes.io/projected/ff43406b-1751-47e9-84a7-38f1e2aa419e-kube-api-access-rn7hk\") pod \"cinder-scheduler-0\" (UID: \"ff43406b-1751-47e9-84a7-38f1e2aa419e\") " pod="openstack/cinder-scheduler-0" Feb 27 17:45:11 crc kubenswrapper[4830]: I0227 17:45:11.164418 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff43406b-1751-47e9-84a7-38f1e2aa419e-config-data\") pod \"cinder-scheduler-0\" (UID: \"ff43406b-1751-47e9-84a7-38f1e2aa419e\") " pod="openstack/cinder-scheduler-0" Feb 27 17:45:11 crc kubenswrapper[4830]: I0227 17:45:11.164475 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ff43406b-1751-47e9-84a7-38f1e2aa419e-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"ff43406b-1751-47e9-84a7-38f1e2aa419e\") " pod="openstack/cinder-scheduler-0" Feb 27 17:45:11 crc kubenswrapper[4830]: I0227 17:45:11.164558 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff43406b-1751-47e9-84a7-38f1e2aa419e-scripts\") pod \"cinder-scheduler-0\" (UID: \"ff43406b-1751-47e9-84a7-38f1e2aa419e\") " pod="openstack/cinder-scheduler-0" Feb 27 17:45:11 crc kubenswrapper[4830]: I0227 17:45:11.170676 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff43406b-1751-47e9-84a7-38f1e2aa419e-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"ff43406b-1751-47e9-84a7-38f1e2aa419e\") " pod="openstack/cinder-scheduler-0" Feb 27 17:45:11 crc kubenswrapper[4830]: I0227 17:45:11.170689 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff43406b-1751-47e9-84a7-38f1e2aa419e-scripts\") pod \"cinder-scheduler-0\" (UID: \"ff43406b-1751-47e9-84a7-38f1e2aa419e\") " pod="openstack/cinder-scheduler-0" Feb 
27 17:45:11 crc kubenswrapper[4830]: I0227 17:45:11.173601 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff43406b-1751-47e9-84a7-38f1e2aa419e-config-data\") pod \"cinder-scheduler-0\" (UID: \"ff43406b-1751-47e9-84a7-38f1e2aa419e\") " pod="openstack/cinder-scheduler-0" Feb 27 17:45:11 crc kubenswrapper[4830]: I0227 17:45:11.176658 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ff43406b-1751-47e9-84a7-38f1e2aa419e-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"ff43406b-1751-47e9-84a7-38f1e2aa419e\") " pod="openstack/cinder-scheduler-0" Feb 27 17:45:11 crc kubenswrapper[4830]: I0227 17:45:11.187365 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rn7hk\" (UniqueName: \"kubernetes.io/projected/ff43406b-1751-47e9-84a7-38f1e2aa419e-kube-api-access-rn7hk\") pod \"cinder-scheduler-0\" (UID: \"ff43406b-1751-47e9-84a7-38f1e2aa419e\") " pod="openstack/cinder-scheduler-0" Feb 27 17:45:11 crc kubenswrapper[4830]: I0227 17:45:11.315780 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 27 17:45:11 crc kubenswrapper[4830]: I0227 17:45:11.787082 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 27 17:45:11 crc kubenswrapper[4830]: W0227 17:45:11.787535 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podff43406b_1751_47e9_84a7_38f1e2aa419e.slice/crio-1e40afa00045abdf624b5516b63a65825538066bd54627203fc36753d1e7d0db WatchSource:0}: Error finding container 1e40afa00045abdf624b5516b63a65825538066bd54627203fc36753d1e7d0db: Status 404 returned error can't find the container with id 1e40afa00045abdf624b5516b63a65825538066bd54627203fc36753d1e7d0db Feb 27 17:45:12 crc kubenswrapper[4830]: I0227 17:45:12.627325 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ff43406b-1751-47e9-84a7-38f1e2aa419e","Type":"ContainerStarted","Data":"6b6626fd7e4863e0342cb10b871b6686d668a8a69e3da25ae86c56117b07ed27"} Feb 27 17:45:12 crc kubenswrapper[4830]: I0227 17:45:12.627589 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ff43406b-1751-47e9-84a7-38f1e2aa419e","Type":"ContainerStarted","Data":"1e40afa00045abdf624b5516b63a65825538066bd54627203fc36753d1e7d0db"} Feb 27 17:45:12 crc kubenswrapper[4830]: I0227 17:45:12.775791 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eef2bf38-f907-43dc-916d-4407988e6b37" path="/var/lib/kubelet/pods/eef2bf38-f907-43dc-916d-4407988e6b37/volumes" Feb 27 17:45:13 crc kubenswrapper[4830]: I0227 17:45:13.641130 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ff43406b-1751-47e9-84a7-38f1e2aa419e","Type":"ContainerStarted","Data":"ffa045ed5d234f5a9df3d933ba9a9ebacc5c32bcafdf3c9c34ae3b61cd71e989"} Feb 27 17:45:14 crc kubenswrapper[4830]: I0227 17:45:14.697151 4830 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Feb 27 17:45:14 crc kubenswrapper[4830]: I0227 17:45:14.783639 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.783607408 podStartE2EDuration="4.783607408s" podCreationTimestamp="2026-02-27 17:45:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 17:45:13.671015348 +0000 UTC m=+5909.760287811" watchObservedRunningTime="2026-02-27 17:45:14.783607408 +0000 UTC m=+5910.872879871" Feb 27 17:45:15 crc kubenswrapper[4830]: E0227 17:45:15.765295 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536898-vrwjs" podUID="204eb1af-36ad-4de7-9da7-9a37fefd3026" Feb 27 17:45:16 crc kubenswrapper[4830]: I0227 17:45:16.316785 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 27 17:45:17 crc kubenswrapper[4830]: I0227 17:45:17.762393 4830 scope.go:117] "RemoveContainer" containerID="0f73af46b765b3d98f3f5e9883b49887ac4f2ac3485e91a001e18c5d351646d8" Feb 27 17:45:17 crc kubenswrapper[4830]: E0227 17:45:17.763192 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 17:45:21 crc kubenswrapper[4830]: I0227 17:45:21.563026 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openstack/cinder-scheduler-0" Feb 27 17:45:23 crc kubenswrapper[4830]: I0227 17:45:23.767056 4830 generic.go:334] "Generic (PLEG): container finished" podID="77856f9c-1131-4857-9fff-bddf1d27b5d3" containerID="8796327ff924c8489b6e6d9b0bd9cdf89d2a62f3ba1335b489ef3339d1c3304a" exitCode=0 Feb 27 17:45:23 crc kubenswrapper[4830]: I0227 17:45:23.767184 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536904-jrdqt" event={"ID":"77856f9c-1131-4857-9fff-bddf1d27b5d3","Type":"ContainerDied","Data":"8796327ff924c8489b6e6d9b0bd9cdf89d2a62f3ba1335b489ef3339d1c3304a"} Feb 27 17:45:25 crc kubenswrapper[4830]: I0227 17:45:25.217684 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536904-jrdqt" Feb 27 17:45:25 crc kubenswrapper[4830]: I0227 17:45:25.304779 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xmw5q\" (UniqueName: \"kubernetes.io/projected/77856f9c-1131-4857-9fff-bddf1d27b5d3-kube-api-access-xmw5q\") pod \"77856f9c-1131-4857-9fff-bddf1d27b5d3\" (UID: \"77856f9c-1131-4857-9fff-bddf1d27b5d3\") " Feb 27 17:45:25 crc kubenswrapper[4830]: I0227 17:45:25.318117 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77856f9c-1131-4857-9fff-bddf1d27b5d3-kube-api-access-xmw5q" (OuterVolumeSpecName: "kube-api-access-xmw5q") pod "77856f9c-1131-4857-9fff-bddf1d27b5d3" (UID: "77856f9c-1131-4857-9fff-bddf1d27b5d3"). InnerVolumeSpecName "kube-api-access-xmw5q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:45:25 crc kubenswrapper[4830]: I0227 17:45:25.407457 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xmw5q\" (UniqueName: \"kubernetes.io/projected/77856f9c-1131-4857-9fff-bddf1d27b5d3-kube-api-access-xmw5q\") on node \"crc\" DevicePath \"\"" Feb 27 17:45:25 crc kubenswrapper[4830]: I0227 17:45:25.806498 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536904-jrdqt" event={"ID":"77856f9c-1131-4857-9fff-bddf1d27b5d3","Type":"ContainerDied","Data":"96b0ff06422bf3129940869ee45a349df7a0efb3de3e23c0a9e84d06d774e5da"} Feb 27 17:45:25 crc kubenswrapper[4830]: I0227 17:45:25.806879 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="96b0ff06422bf3129940869ee45a349df7a0efb3de3e23c0a9e84d06d774e5da" Feb 27 17:45:25 crc kubenswrapper[4830]: I0227 17:45:25.806573 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536904-jrdqt" Feb 27 17:45:26 crc kubenswrapper[4830]: I0227 17:45:26.320308 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536896-w25wj"] Feb 27 17:45:26 crc kubenswrapper[4830]: I0227 17:45:26.331207 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536896-w25wj"] Feb 27 17:45:26 crc kubenswrapper[4830]: I0227 17:45:26.791582 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d0ad818e-4327-4796-958d-87f0c600e5d0" path="/var/lib/kubelet/pods/d0ad818e-4327-4796-958d-87f0c600e5d0/volumes" Feb 27 17:45:29 crc kubenswrapper[4830]: I0227 17:45:29.762483 4830 scope.go:117] "RemoveContainer" containerID="0f73af46b765b3d98f3f5e9883b49887ac4f2ac3485e91a001e18c5d351646d8" Feb 27 17:45:29 crc kubenswrapper[4830]: E0227 17:45:29.763380 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 17:45:30 crc kubenswrapper[4830]: E0227 17:45:30.768592 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536898-vrwjs" podUID="204eb1af-36ad-4de7-9da7-9a37fefd3026" Feb 27 17:45:43 crc kubenswrapper[4830]: I0227 17:45:43.301813 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-gbcl6"] Feb 27 17:45:43 crc kubenswrapper[4830]: E0227 17:45:43.303226 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77856f9c-1131-4857-9fff-bddf1d27b5d3" containerName="oc" Feb 27 17:45:43 crc kubenswrapper[4830]: I0227 17:45:43.303405 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="77856f9c-1131-4857-9fff-bddf1d27b5d3" containerName="oc" Feb 27 17:45:43 crc kubenswrapper[4830]: I0227 17:45:43.303915 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="77856f9c-1131-4857-9fff-bddf1d27b5d3" containerName="oc" Feb 27 17:45:43 crc kubenswrapper[4830]: I0227 17:45:43.307285 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gbcl6" Feb 27 17:45:43 crc kubenswrapper[4830]: I0227 17:45:43.323805 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gbcl6"] Feb 27 17:45:43 crc kubenswrapper[4830]: I0227 17:45:43.458575 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9tjs\" (UniqueName: \"kubernetes.io/projected/90e915d6-d74a-4f5b-a8da-8f0f2acdda48-kube-api-access-t9tjs\") pod \"redhat-operators-gbcl6\" (UID: \"90e915d6-d74a-4f5b-a8da-8f0f2acdda48\") " pod="openshift-marketplace/redhat-operators-gbcl6" Feb 27 17:45:43 crc kubenswrapper[4830]: I0227 17:45:43.459258 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90e915d6-d74a-4f5b-a8da-8f0f2acdda48-catalog-content\") pod \"redhat-operators-gbcl6\" (UID: \"90e915d6-d74a-4f5b-a8da-8f0f2acdda48\") " pod="openshift-marketplace/redhat-operators-gbcl6" Feb 27 17:45:43 crc kubenswrapper[4830]: I0227 17:45:43.459374 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90e915d6-d74a-4f5b-a8da-8f0f2acdda48-utilities\") pod \"redhat-operators-gbcl6\" (UID: \"90e915d6-d74a-4f5b-a8da-8f0f2acdda48\") " pod="openshift-marketplace/redhat-operators-gbcl6" Feb 27 17:45:43 crc kubenswrapper[4830]: I0227 17:45:43.562182 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90e915d6-d74a-4f5b-a8da-8f0f2acdda48-catalog-content\") pod \"redhat-operators-gbcl6\" (UID: \"90e915d6-d74a-4f5b-a8da-8f0f2acdda48\") " pod="openshift-marketplace/redhat-operators-gbcl6" Feb 27 17:45:43 crc kubenswrapper[4830]: I0227 17:45:43.562323 4830 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90e915d6-d74a-4f5b-a8da-8f0f2acdda48-utilities\") pod \"redhat-operators-gbcl6\" (UID: \"90e915d6-d74a-4f5b-a8da-8f0f2acdda48\") " pod="openshift-marketplace/redhat-operators-gbcl6" Feb 27 17:45:43 crc kubenswrapper[4830]: I0227 17:45:43.562398 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t9tjs\" (UniqueName: \"kubernetes.io/projected/90e915d6-d74a-4f5b-a8da-8f0f2acdda48-kube-api-access-t9tjs\") pod \"redhat-operators-gbcl6\" (UID: \"90e915d6-d74a-4f5b-a8da-8f0f2acdda48\") " pod="openshift-marketplace/redhat-operators-gbcl6" Feb 27 17:45:43 crc kubenswrapper[4830]: I0227 17:45:43.563219 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90e915d6-d74a-4f5b-a8da-8f0f2acdda48-utilities\") pod \"redhat-operators-gbcl6\" (UID: \"90e915d6-d74a-4f5b-a8da-8f0f2acdda48\") " pod="openshift-marketplace/redhat-operators-gbcl6" Feb 27 17:45:43 crc kubenswrapper[4830]: I0227 17:45:43.563216 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90e915d6-d74a-4f5b-a8da-8f0f2acdda48-catalog-content\") pod \"redhat-operators-gbcl6\" (UID: \"90e915d6-d74a-4f5b-a8da-8f0f2acdda48\") " pod="openshift-marketplace/redhat-operators-gbcl6" Feb 27 17:45:43 crc kubenswrapper[4830]: I0227 17:45:43.585393 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t9tjs\" (UniqueName: \"kubernetes.io/projected/90e915d6-d74a-4f5b-a8da-8f0f2acdda48-kube-api-access-t9tjs\") pod \"redhat-operators-gbcl6\" (UID: \"90e915d6-d74a-4f5b-a8da-8f0f2acdda48\") " pod="openshift-marketplace/redhat-operators-gbcl6" Feb 27 17:45:43 crc kubenswrapper[4830]: I0227 17:45:43.643343 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gbcl6" Feb 27 17:45:43 crc kubenswrapper[4830]: I0227 17:45:43.762759 4830 scope.go:117] "RemoveContainer" containerID="0f73af46b765b3d98f3f5e9883b49887ac4f2ac3485e91a001e18c5d351646d8" Feb 27 17:45:43 crc kubenswrapper[4830]: E0227 17:45:43.763456 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 17:45:44 crc kubenswrapper[4830]: I0227 17:45:44.120996 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gbcl6"] Feb 27 17:45:45 crc kubenswrapper[4830]: I0227 17:45:45.018308 4830 generic.go:334] "Generic (PLEG): container finished" podID="90e915d6-d74a-4f5b-a8da-8f0f2acdda48" containerID="a899de0dbf17614a003a78f6f79914ce7785cca43694b713e7ea7e744695e6d0" exitCode=0 Feb 27 17:45:45 crc kubenswrapper[4830]: I0227 17:45:45.019369 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gbcl6" event={"ID":"90e915d6-d74a-4f5b-a8da-8f0f2acdda48","Type":"ContainerDied","Data":"a899de0dbf17614a003a78f6f79914ce7785cca43694b713e7ea7e744695e6d0"} Feb 27 17:45:45 crc kubenswrapper[4830]: I0227 17:45:45.019419 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gbcl6" event={"ID":"90e915d6-d74a-4f5b-a8da-8f0f2acdda48","Type":"ContainerStarted","Data":"de817b138505468257c54fddd61a56fd9130b77ac87aec4c8bad76dfad4482c6"} Feb 27 17:45:45 crc kubenswrapper[4830]: E0227 17:45:45.681233 4830 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from 
manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-operator-index@sha256=340dbaa786c584e5ffe05a0f79571b9c2fe7d16a1a1fb390e5d83b437d7a1ff3/signature-3: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 27 17:45:45 crc kubenswrapper[4830]: E0227 17:45:45.681847 4830 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t9tjs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in 
pod redhat-operators-gbcl6_openshift-marketplace(90e915d6-d74a-4f5b-a8da-8f0f2acdda48): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-operator-index@sha256=340dbaa786c584e5ffe05a0f79571b9c2fe7d16a1a1fb390e5d83b437d7a1ff3/signature-3: status 500 (Internal Server Error)" logger="UnhandledError" Feb 27 17:45:45 crc kubenswrapper[4830]: E0227 17:45:45.683383 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-operator-index@sha256=340dbaa786c584e5ffe05a0f79571b9c2fe7d16a1a1fb390e5d83b437d7a1ff3/signature-3: status 500 (Internal Server Error)\"" pod="openshift-marketplace/redhat-operators-gbcl6" podUID="90e915d6-d74a-4f5b-a8da-8f0f2acdda48" Feb 27 17:45:45 crc kubenswrapper[4830]: E0227 17:45:45.766383 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536898-vrwjs" podUID="204eb1af-36ad-4de7-9da7-9a37fefd3026" Feb 27 17:45:46 crc kubenswrapper[4830]: E0227 17:45:46.031313 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gbcl6" podUID="90e915d6-d74a-4f5b-a8da-8f0f2acdda48" Feb 27 17:45:50 crc kubenswrapper[4830]: I0227 17:45:50.044463 4830 scope.go:117] "RemoveContainer" containerID="77cb231367bef5a5c657a9a74f8cb9920ebf5b7e99f163f9d2e72c2ba91e833f" Feb 27 17:45:50 crc kubenswrapper[4830]: I0227 17:45:50.079126 4830 
scope.go:117] "RemoveContainer" containerID="3ed57176e05eab0df493d59b2eb579edae3360ab2f3a539695e07ff20ed1e889" Feb 27 17:45:50 crc kubenswrapper[4830]: I0227 17:45:50.134761 4830 scope.go:117] "RemoveContainer" containerID="d9413209a63ed534e80c99f7d62265cb445e8bac648ad296b3a540292cb8161f" Feb 27 17:45:50 crc kubenswrapper[4830]: I0227 17:45:50.213728 4830 scope.go:117] "RemoveContainer" containerID="afbea8777456cbc8f1a81fc08205987b966279a1cf47a28d7acdf37825011c56" Feb 27 17:45:56 crc kubenswrapper[4830]: I0227 17:45:56.763017 4830 scope.go:117] "RemoveContainer" containerID="0f73af46b765b3d98f3f5e9883b49887ac4f2ac3485e91a001e18c5d351646d8" Feb 27 17:45:56 crc kubenswrapper[4830]: E0227 17:45:56.764266 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 17:45:57 crc kubenswrapper[4830]: E0227 17:45:57.769393 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536898-vrwjs" podUID="204eb1af-36ad-4de7-9da7-9a37fefd3026" Feb 27 17:46:00 crc kubenswrapper[4830]: I0227 17:46:00.156233 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536906-9ktdj"] Feb 27 17:46:00 crc kubenswrapper[4830]: I0227 17:46:00.158021 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536906-9ktdj" Feb 27 17:46:00 crc kubenswrapper[4830]: I0227 17:46:00.175184 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536906-9ktdj"] Feb 27 17:46:00 crc kubenswrapper[4830]: I0227 17:46:00.218191 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnh2m\" (UniqueName: \"kubernetes.io/projected/09e651f3-9b46-4392-9ce4-a653c4ad3415-kube-api-access-lnh2m\") pod \"auto-csr-approver-29536906-9ktdj\" (UID: \"09e651f3-9b46-4392-9ce4-a653c4ad3415\") " pod="openshift-infra/auto-csr-approver-29536906-9ktdj" Feb 27 17:46:00 crc kubenswrapper[4830]: I0227 17:46:00.319850 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lnh2m\" (UniqueName: \"kubernetes.io/projected/09e651f3-9b46-4392-9ce4-a653c4ad3415-kube-api-access-lnh2m\") pod \"auto-csr-approver-29536906-9ktdj\" (UID: \"09e651f3-9b46-4392-9ce4-a653c4ad3415\") " pod="openshift-infra/auto-csr-approver-29536906-9ktdj" Feb 27 17:46:00 crc kubenswrapper[4830]: I0227 17:46:00.344099 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lnh2m\" (UniqueName: \"kubernetes.io/projected/09e651f3-9b46-4392-9ce4-a653c4ad3415-kube-api-access-lnh2m\") pod \"auto-csr-approver-29536906-9ktdj\" (UID: \"09e651f3-9b46-4392-9ce4-a653c4ad3415\") " pod="openshift-infra/auto-csr-approver-29536906-9ktdj" Feb 27 17:46:00 crc kubenswrapper[4830]: E0227 17:46:00.498141 4830 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-operator-index@sha256=340dbaa786c584e5ffe05a0f79571b9c2fe7d16a1a1fb390e5d83b437d7a1ff3/signature-3: status 500 (Internal Server Error)" 
image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 27 17:46:00 crc kubenswrapper[4830]: E0227 17:46:00.498347 4830 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t9tjs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-gbcl6_openshift-marketplace(90e915d6-d74a-4f5b-a8da-8f0f2acdda48): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from 
https://registry.redhat.io/containers/sigstore/redhat/redhat-operator-index@sha256=340dbaa786c584e5ffe05a0f79571b9c2fe7d16a1a1fb390e5d83b437d7a1ff3/signature-3: status 500 (Internal Server Error)" logger="UnhandledError" Feb 27 17:46:00 crc kubenswrapper[4830]: E0227 17:46:00.499600 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-operator-index@sha256=340dbaa786c584e5ffe05a0f79571b9c2fe7d16a1a1fb390e5d83b437d7a1ff3/signature-3: status 500 (Internal Server Error)\"" pod="openshift-marketplace/redhat-operators-gbcl6" podUID="90e915d6-d74a-4f5b-a8da-8f0f2acdda48" Feb 27 17:46:00 crc kubenswrapper[4830]: I0227 17:46:00.505748 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536906-9ktdj" Feb 27 17:46:00 crc kubenswrapper[4830]: I0227 17:46:00.976809 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536906-9ktdj"] Feb 27 17:46:01 crc kubenswrapper[4830]: I0227 17:46:01.247605 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536906-9ktdj" event={"ID":"09e651f3-9b46-4392-9ce4-a653c4ad3415","Type":"ContainerStarted","Data":"8757e15fc46920caacb0aac53fd620bdef1580f288a89c8aa935dca848cb8e33"} Feb 27 17:46:04 crc kubenswrapper[4830]: I0227 17:46:04.294018 4830 generic.go:334] "Generic (PLEG): container finished" podID="09e651f3-9b46-4392-9ce4-a653c4ad3415" containerID="b0bf7173c82f7f2d011318a3c91727ae7ba189f22e558e4935be7ddca99b9944" exitCode=0 Feb 27 17:46:04 crc kubenswrapper[4830]: I0227 17:46:04.294141 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536906-9ktdj" 
event={"ID":"09e651f3-9b46-4392-9ce4-a653c4ad3415","Type":"ContainerDied","Data":"b0bf7173c82f7f2d011318a3c91727ae7ba189f22e558e4935be7ddca99b9944"} Feb 27 17:46:05 crc kubenswrapper[4830]: I0227 17:46:05.712889 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536906-9ktdj" Feb 27 17:46:05 crc kubenswrapper[4830]: I0227 17:46:05.846854 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lnh2m\" (UniqueName: \"kubernetes.io/projected/09e651f3-9b46-4392-9ce4-a653c4ad3415-kube-api-access-lnh2m\") pod \"09e651f3-9b46-4392-9ce4-a653c4ad3415\" (UID: \"09e651f3-9b46-4392-9ce4-a653c4ad3415\") " Feb 27 17:46:05 crc kubenswrapper[4830]: I0227 17:46:05.858343 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09e651f3-9b46-4392-9ce4-a653c4ad3415-kube-api-access-lnh2m" (OuterVolumeSpecName: "kube-api-access-lnh2m") pod "09e651f3-9b46-4392-9ce4-a653c4ad3415" (UID: "09e651f3-9b46-4392-9ce4-a653c4ad3415"). InnerVolumeSpecName "kube-api-access-lnh2m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:46:05 crc kubenswrapper[4830]: I0227 17:46:05.950570 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lnh2m\" (UniqueName: \"kubernetes.io/projected/09e651f3-9b46-4392-9ce4-a653c4ad3415-kube-api-access-lnh2m\") on node \"crc\" DevicePath \"\"" Feb 27 17:46:06 crc kubenswrapper[4830]: I0227 17:46:06.320326 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536906-9ktdj" event={"ID":"09e651f3-9b46-4392-9ce4-a653c4ad3415","Type":"ContainerDied","Data":"8757e15fc46920caacb0aac53fd620bdef1580f288a89c8aa935dca848cb8e33"} Feb 27 17:46:06 crc kubenswrapper[4830]: I0227 17:46:06.320356 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536906-9ktdj" Feb 27 17:46:06 crc kubenswrapper[4830]: I0227 17:46:06.320378 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8757e15fc46920caacb0aac53fd620bdef1580f288a89c8aa935dca848cb8e33" Feb 27 17:46:06 crc kubenswrapper[4830]: I0227 17:46:06.801248 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536900-rmh78"] Feb 27 17:46:06 crc kubenswrapper[4830]: I0227 17:46:06.809696 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536900-rmh78"] Feb 27 17:46:08 crc kubenswrapper[4830]: I0227 17:46:08.786659 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="900b9199-11ea-4332-b62c-81ebc07f20dd" path="/var/lib/kubelet/pods/900b9199-11ea-4332-b62c-81ebc07f20dd/volumes" Feb 27 17:46:09 crc kubenswrapper[4830]: I0227 17:46:09.762789 4830 scope.go:117] "RemoveContainer" containerID="0f73af46b765b3d98f3f5e9883b49887ac4f2ac3485e91a001e18c5d351646d8" Feb 27 17:46:09 crc kubenswrapper[4830]: E0227 17:46:09.763842 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 17:46:11 crc kubenswrapper[4830]: E0227 17:46:11.766403 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536898-vrwjs" podUID="204eb1af-36ad-4de7-9da7-9a37fefd3026" Feb 27 17:46:15 crc kubenswrapper[4830]: E0227 17:46:15.766671 
4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gbcl6" podUID="90e915d6-d74a-4f5b-a8da-8f0f2acdda48" Feb 27 17:46:23 crc kubenswrapper[4830]: I0227 17:46:23.762785 4830 scope.go:117] "RemoveContainer" containerID="0f73af46b765b3d98f3f5e9883b49887ac4f2ac3485e91a001e18c5d351646d8" Feb 27 17:46:23 crc kubenswrapper[4830]: E0227 17:46:23.764418 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 17:46:25 crc kubenswrapper[4830]: E0227 17:46:25.764046 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536898-vrwjs" podUID="204eb1af-36ad-4de7-9da7-9a37fefd3026" Feb 27 17:46:30 crc kubenswrapper[4830]: E0227 17:46:30.795042 4830 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-operator-index@sha256=340dbaa786c584e5ffe05a0f79571b9c2fe7d16a1a1fb390e5d83b437d7a1ff3/signature-3: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 27 17:46:30 crc kubenswrapper[4830]: E0227 17:46:30.795866 4830 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t9tjs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-gbcl6_openshift-marketplace(90e915d6-d74a-4f5b-a8da-8f0f2acdda48): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-operator-index@sha256=340dbaa786c584e5ffe05a0f79571b9c2fe7d16a1a1fb390e5d83b437d7a1ff3/signature-3: status 500 (Internal Server Error)" logger="UnhandledError" Feb 27 17:46:30 crc kubenswrapper[4830]: E0227 
17:46:30.797049 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-operator-index@sha256=340dbaa786c584e5ffe05a0f79571b9c2fe7d16a1a1fb390e5d83b437d7a1ff3/signature-3: status 500 (Internal Server Error)\"" pod="openshift-marketplace/redhat-operators-gbcl6" podUID="90e915d6-d74a-4f5b-a8da-8f0f2acdda48" Feb 27 17:46:35 crc kubenswrapper[4830]: I0227 17:46:35.763299 4830 scope.go:117] "RemoveContainer" containerID="0f73af46b765b3d98f3f5e9883b49887ac4f2ac3485e91a001e18c5d351646d8" Feb 27 17:46:35 crc kubenswrapper[4830]: E0227 17:46:35.764146 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 17:46:37 crc kubenswrapper[4830]: E0227 17:46:37.767037 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536898-vrwjs" podUID="204eb1af-36ad-4de7-9da7-9a37fefd3026" Feb 27 17:46:42 crc kubenswrapper[4830]: E0227 17:46:42.766097 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gbcl6" podUID="90e915d6-d74a-4f5b-a8da-8f0f2acdda48" Feb 27 17:46:46 crc kubenswrapper[4830]: 
I0227 17:46:46.268737 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-06a6-account-create-update-bgq2j"] Feb 27 17:46:46 crc kubenswrapper[4830]: I0227 17:46:46.288437 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-tq4ws"] Feb 27 17:46:46 crc kubenswrapper[4830]: I0227 17:46:46.304972 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-06a6-account-create-update-bgq2j"] Feb 27 17:46:46 crc kubenswrapper[4830]: I0227 17:46:46.316485 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-tq4ws"] Feb 27 17:46:46 crc kubenswrapper[4830]: I0227 17:46:46.782758 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b582958-8ecf-444a-a09f-db96b283db18" path="/var/lib/kubelet/pods/0b582958-8ecf-444a-a09f-db96b283db18/volumes" Feb 27 17:46:46 crc kubenswrapper[4830]: I0227 17:46:46.784489 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84a29c55-5ac8-46dd-8be0-a12243cedbbf" path="/var/lib/kubelet/pods/84a29c55-5ac8-46dd-8be0-a12243cedbbf/volumes" Feb 27 17:46:47 crc kubenswrapper[4830]: I0227 17:46:47.762611 4830 scope.go:117] "RemoveContainer" containerID="0f73af46b765b3d98f3f5e9883b49887ac4f2ac3485e91a001e18c5d351646d8" Feb 27 17:46:47 crc kubenswrapper[4830]: E0227 17:46:47.764404 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 17:46:50 crc kubenswrapper[4830]: I0227 17:46:50.456300 4830 scope.go:117] "RemoveContainer" containerID="f79c27818404e9b20da607e84a10901094ce0ae0527be8e0df748d5a83409c7d" Feb 27 17:46:50 
crc kubenswrapper[4830]: I0227 17:46:50.503836 4830 scope.go:117] "RemoveContainer" containerID="f89cdd1399349b91b86536eefb41a584598a542d50342d2dc8053c5141c1672b" Feb 27 17:46:50 crc kubenswrapper[4830]: I0227 17:46:50.533421 4830 scope.go:117] "RemoveContainer" containerID="c2817d8d312078614557cfe74230997c282b244e775ab91873dc6592eb036f19" Feb 27 17:46:50 crc kubenswrapper[4830]: E0227 17:46:50.764573 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536898-vrwjs" podUID="204eb1af-36ad-4de7-9da7-9a37fefd3026" Feb 27 17:46:52 crc kubenswrapper[4830]: I0227 17:46:52.044063 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-4sk26"] Feb 27 17:46:52 crc kubenswrapper[4830]: I0227 17:46:52.061512 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-4sk26"] Feb 27 17:46:52 crc kubenswrapper[4830]: I0227 17:46:52.784628 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5236dad-b287-4f62-afa9-c6449a7b18d2" path="/var/lib/kubelet/pods/c5236dad-b287-4f62-afa9-c6449a7b18d2/volumes" Feb 27 17:46:53 crc kubenswrapper[4830]: E0227 17:46:53.765185 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gbcl6" podUID="90e915d6-d74a-4f5b-a8da-8f0f2acdda48" Feb 27 17:46:58 crc kubenswrapper[4830]: I0227 17:46:58.657665 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-b8hbd"] Feb 27 17:46:58 crc kubenswrapper[4830]: E0227 17:46:58.660335 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09e651f3-9b46-4392-9ce4-a653c4ad3415" 
containerName="oc" Feb 27 17:46:58 crc kubenswrapper[4830]: I0227 17:46:58.660515 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="09e651f3-9b46-4392-9ce4-a653c4ad3415" containerName="oc" Feb 27 17:46:58 crc kubenswrapper[4830]: I0227 17:46:58.661143 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="09e651f3-9b46-4392-9ce4-a653c4ad3415" containerName="oc" Feb 27 17:46:58 crc kubenswrapper[4830]: I0227 17:46:58.662495 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-b8hbd" Feb 27 17:46:58 crc kubenswrapper[4830]: I0227 17:46:58.666904 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-dqvl2" Feb 27 17:46:58 crc kubenswrapper[4830]: I0227 17:46:58.667267 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Feb 27 17:46:58 crc kubenswrapper[4830]: I0227 17:46:58.677732 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-b8hbd"] Feb 27 17:46:58 crc kubenswrapper[4830]: I0227 17:46:58.695666 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-rfd9s"] Feb 27 17:46:58 crc kubenswrapper[4830]: I0227 17:46:58.698141 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-rfd9s" Feb 27 17:46:58 crc kubenswrapper[4830]: I0227 17:46:58.704378 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-rfd9s"] Feb 27 17:46:58 crc kubenswrapper[4830]: I0227 17:46:58.762671 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d4881336-2572-4aa9-a0c2-9c46b73b7898-scripts\") pod \"ovn-controller-b8hbd\" (UID: \"d4881336-2572-4aa9-a0c2-9c46b73b7898\") " pod="openstack/ovn-controller-b8hbd" Feb 27 17:46:58 crc kubenswrapper[4830]: I0227 17:46:58.762766 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-msw2c\" (UniqueName: \"kubernetes.io/projected/d4881336-2572-4aa9-a0c2-9c46b73b7898-kube-api-access-msw2c\") pod \"ovn-controller-b8hbd\" (UID: \"d4881336-2572-4aa9-a0c2-9c46b73b7898\") " pod="openstack/ovn-controller-b8hbd" Feb 27 17:46:58 crc kubenswrapper[4830]: I0227 17:46:58.762806 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d4881336-2572-4aa9-a0c2-9c46b73b7898-var-run\") pod \"ovn-controller-b8hbd\" (UID: \"d4881336-2572-4aa9-a0c2-9c46b73b7898\") " pod="openstack/ovn-controller-b8hbd" Feb 27 17:46:58 crc kubenswrapper[4830]: I0227 17:46:58.762824 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d4881336-2572-4aa9-a0c2-9c46b73b7898-var-run-ovn\") pod \"ovn-controller-b8hbd\" (UID: \"d4881336-2572-4aa9-a0c2-9c46b73b7898\") " pod="openstack/ovn-controller-b8hbd" Feb 27 17:46:58 crc kubenswrapper[4830]: I0227 17:46:58.762844 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: 
\"kubernetes.io/host-path/d4881336-2572-4aa9-a0c2-9c46b73b7898-var-log-ovn\") pod \"ovn-controller-b8hbd\" (UID: \"d4881336-2572-4aa9-a0c2-9c46b73b7898\") " pod="openstack/ovn-controller-b8hbd" Feb 27 17:46:58 crc kubenswrapper[4830]: I0227 17:46:58.864595 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d4881336-2572-4aa9-a0c2-9c46b73b7898-scripts\") pod \"ovn-controller-b8hbd\" (UID: \"d4881336-2572-4aa9-a0c2-9c46b73b7898\") " pod="openstack/ovn-controller-b8hbd" Feb 27 17:46:58 crc kubenswrapper[4830]: I0227 17:46:58.864929 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/c2f87ce8-a38b-467d-a4bf-17eefbfbc958-var-lib\") pod \"ovn-controller-ovs-rfd9s\" (UID: \"c2f87ce8-a38b-467d-a4bf-17eefbfbc958\") " pod="openstack/ovn-controller-ovs-rfd9s" Feb 27 17:46:58 crc kubenswrapper[4830]: I0227 17:46:58.864973 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c2f87ce8-a38b-467d-a4bf-17eefbfbc958-var-run\") pod \"ovn-controller-ovs-rfd9s\" (UID: \"c2f87ce8-a38b-467d-a4bf-17eefbfbc958\") " pod="openstack/ovn-controller-ovs-rfd9s" Feb 27 17:46:58 crc kubenswrapper[4830]: I0227 17:46:58.865161 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c2f87ce8-a38b-467d-a4bf-17eefbfbc958-scripts\") pod \"ovn-controller-ovs-rfd9s\" (UID: \"c2f87ce8-a38b-467d-a4bf-17eefbfbc958\") " pod="openstack/ovn-controller-ovs-rfd9s" Feb 27 17:46:58 crc kubenswrapper[4830]: I0227 17:46:58.865255 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-msw2c\" (UniqueName: \"kubernetes.io/projected/d4881336-2572-4aa9-a0c2-9c46b73b7898-kube-api-access-msw2c\") pod 
\"ovn-controller-b8hbd\" (UID: \"d4881336-2572-4aa9-a0c2-9c46b73b7898\") " pod="openstack/ovn-controller-b8hbd" Feb 27 17:46:58 crc kubenswrapper[4830]: I0227 17:46:58.865348 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d4881336-2572-4aa9-a0c2-9c46b73b7898-var-run\") pod \"ovn-controller-b8hbd\" (UID: \"d4881336-2572-4aa9-a0c2-9c46b73b7898\") " pod="openstack/ovn-controller-b8hbd" Feb 27 17:46:58 crc kubenswrapper[4830]: I0227 17:46:58.865377 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d4881336-2572-4aa9-a0c2-9c46b73b7898-var-run-ovn\") pod \"ovn-controller-b8hbd\" (UID: \"d4881336-2572-4aa9-a0c2-9c46b73b7898\") " pod="openstack/ovn-controller-b8hbd" Feb 27 17:46:58 crc kubenswrapper[4830]: I0227 17:46:58.865406 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d4881336-2572-4aa9-a0c2-9c46b73b7898-var-log-ovn\") pod \"ovn-controller-b8hbd\" (UID: \"d4881336-2572-4aa9-a0c2-9c46b73b7898\") " pod="openstack/ovn-controller-b8hbd" Feb 27 17:46:58 crc kubenswrapper[4830]: I0227 17:46:58.865429 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjzps\" (UniqueName: \"kubernetes.io/projected/c2f87ce8-a38b-467d-a4bf-17eefbfbc958-kube-api-access-pjzps\") pod \"ovn-controller-ovs-rfd9s\" (UID: \"c2f87ce8-a38b-467d-a4bf-17eefbfbc958\") " pod="openstack/ovn-controller-ovs-rfd9s" Feb 27 17:46:58 crc kubenswrapper[4830]: I0227 17:46:58.865467 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/c2f87ce8-a38b-467d-a4bf-17eefbfbc958-etc-ovs\") pod \"ovn-controller-ovs-rfd9s\" (UID: \"c2f87ce8-a38b-467d-a4bf-17eefbfbc958\") " 
pod="openstack/ovn-controller-ovs-rfd9s" Feb 27 17:46:58 crc kubenswrapper[4830]: I0227 17:46:58.865540 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/c2f87ce8-a38b-467d-a4bf-17eefbfbc958-var-log\") pod \"ovn-controller-ovs-rfd9s\" (UID: \"c2f87ce8-a38b-467d-a4bf-17eefbfbc958\") " pod="openstack/ovn-controller-ovs-rfd9s" Feb 27 17:46:58 crc kubenswrapper[4830]: I0227 17:46:58.865703 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d4881336-2572-4aa9-a0c2-9c46b73b7898-var-run\") pod \"ovn-controller-b8hbd\" (UID: \"d4881336-2572-4aa9-a0c2-9c46b73b7898\") " pod="openstack/ovn-controller-b8hbd" Feb 27 17:46:58 crc kubenswrapper[4830]: I0227 17:46:58.865719 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d4881336-2572-4aa9-a0c2-9c46b73b7898-var-log-ovn\") pod \"ovn-controller-b8hbd\" (UID: \"d4881336-2572-4aa9-a0c2-9c46b73b7898\") " pod="openstack/ovn-controller-b8hbd" Feb 27 17:46:58 crc kubenswrapper[4830]: I0227 17:46:58.865812 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d4881336-2572-4aa9-a0c2-9c46b73b7898-var-run-ovn\") pod \"ovn-controller-b8hbd\" (UID: \"d4881336-2572-4aa9-a0c2-9c46b73b7898\") " pod="openstack/ovn-controller-b8hbd" Feb 27 17:46:58 crc kubenswrapper[4830]: I0227 17:46:58.869562 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d4881336-2572-4aa9-a0c2-9c46b73b7898-scripts\") pod \"ovn-controller-b8hbd\" (UID: \"d4881336-2572-4aa9-a0c2-9c46b73b7898\") " pod="openstack/ovn-controller-b8hbd" Feb 27 17:46:58 crc kubenswrapper[4830]: I0227 17:46:58.901408 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-msw2c\" (UniqueName: \"kubernetes.io/projected/d4881336-2572-4aa9-a0c2-9c46b73b7898-kube-api-access-msw2c\") pod \"ovn-controller-b8hbd\" (UID: \"d4881336-2572-4aa9-a0c2-9c46b73b7898\") " pod="openstack/ovn-controller-b8hbd" Feb 27 17:46:58 crc kubenswrapper[4830]: I0227 17:46:58.966978 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pjzps\" (UniqueName: \"kubernetes.io/projected/c2f87ce8-a38b-467d-a4bf-17eefbfbc958-kube-api-access-pjzps\") pod \"ovn-controller-ovs-rfd9s\" (UID: \"c2f87ce8-a38b-467d-a4bf-17eefbfbc958\") " pod="openstack/ovn-controller-ovs-rfd9s" Feb 27 17:46:58 crc kubenswrapper[4830]: I0227 17:46:58.967047 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/c2f87ce8-a38b-467d-a4bf-17eefbfbc958-etc-ovs\") pod \"ovn-controller-ovs-rfd9s\" (UID: \"c2f87ce8-a38b-467d-a4bf-17eefbfbc958\") " pod="openstack/ovn-controller-ovs-rfd9s" Feb 27 17:46:58 crc kubenswrapper[4830]: I0227 17:46:58.967091 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/c2f87ce8-a38b-467d-a4bf-17eefbfbc958-var-log\") pod \"ovn-controller-ovs-rfd9s\" (UID: \"c2f87ce8-a38b-467d-a4bf-17eefbfbc958\") " pod="openstack/ovn-controller-ovs-rfd9s" Feb 27 17:46:58 crc kubenswrapper[4830]: I0227 17:46:58.967232 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/c2f87ce8-a38b-467d-a4bf-17eefbfbc958-var-lib\") pod \"ovn-controller-ovs-rfd9s\" (UID: \"c2f87ce8-a38b-467d-a4bf-17eefbfbc958\") " pod="openstack/ovn-controller-ovs-rfd9s" Feb 27 17:46:58 crc kubenswrapper[4830]: I0227 17:46:58.967256 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c2f87ce8-a38b-467d-a4bf-17eefbfbc958-var-run\") pod 
\"ovn-controller-ovs-rfd9s\" (UID: \"c2f87ce8-a38b-467d-a4bf-17eefbfbc958\") " pod="openstack/ovn-controller-ovs-rfd9s" Feb 27 17:46:58 crc kubenswrapper[4830]: I0227 17:46:58.967291 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c2f87ce8-a38b-467d-a4bf-17eefbfbc958-scripts\") pod \"ovn-controller-ovs-rfd9s\" (UID: \"c2f87ce8-a38b-467d-a4bf-17eefbfbc958\") " pod="openstack/ovn-controller-ovs-rfd9s" Feb 27 17:46:58 crc kubenswrapper[4830]: I0227 17:46:58.967449 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/c2f87ce8-a38b-467d-a4bf-17eefbfbc958-var-log\") pod \"ovn-controller-ovs-rfd9s\" (UID: \"c2f87ce8-a38b-467d-a4bf-17eefbfbc958\") " pod="openstack/ovn-controller-ovs-rfd9s" Feb 27 17:46:58 crc kubenswrapper[4830]: I0227 17:46:58.967523 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/c2f87ce8-a38b-467d-a4bf-17eefbfbc958-etc-ovs\") pod \"ovn-controller-ovs-rfd9s\" (UID: \"c2f87ce8-a38b-467d-a4bf-17eefbfbc958\") " pod="openstack/ovn-controller-ovs-rfd9s" Feb 27 17:46:58 crc kubenswrapper[4830]: I0227 17:46:58.967577 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/c2f87ce8-a38b-467d-a4bf-17eefbfbc958-var-lib\") pod \"ovn-controller-ovs-rfd9s\" (UID: \"c2f87ce8-a38b-467d-a4bf-17eefbfbc958\") " pod="openstack/ovn-controller-ovs-rfd9s" Feb 27 17:46:58 crc kubenswrapper[4830]: I0227 17:46:58.967621 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c2f87ce8-a38b-467d-a4bf-17eefbfbc958-var-run\") pod \"ovn-controller-ovs-rfd9s\" (UID: \"c2f87ce8-a38b-467d-a4bf-17eefbfbc958\") " pod="openstack/ovn-controller-ovs-rfd9s" Feb 27 17:46:58 crc kubenswrapper[4830]: I0227 17:46:58.969776 4830 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c2f87ce8-a38b-467d-a4bf-17eefbfbc958-scripts\") pod \"ovn-controller-ovs-rfd9s\" (UID: \"c2f87ce8-a38b-467d-a4bf-17eefbfbc958\") " pod="openstack/ovn-controller-ovs-rfd9s" Feb 27 17:46:58 crc kubenswrapper[4830]: I0227 17:46:58.983221 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pjzps\" (UniqueName: \"kubernetes.io/projected/c2f87ce8-a38b-467d-a4bf-17eefbfbc958-kube-api-access-pjzps\") pod \"ovn-controller-ovs-rfd9s\" (UID: \"c2f87ce8-a38b-467d-a4bf-17eefbfbc958\") " pod="openstack/ovn-controller-ovs-rfd9s" Feb 27 17:46:59 crc kubenswrapper[4830]: I0227 17:46:59.000081 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-b8hbd" Feb 27 17:46:59 crc kubenswrapper[4830]: I0227 17:46:59.014937 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-rfd9s" Feb 27 17:46:59 crc kubenswrapper[4830]: I0227 17:46:59.639021 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-b8hbd"] Feb 27 17:46:59 crc kubenswrapper[4830]: I0227 17:46:59.968409 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-rfd9s"] Feb 27 17:46:59 crc kubenswrapper[4830]: W0227 17:46:59.978157 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc2f87ce8_a38b_467d_a4bf_17eefbfbc958.slice/crio-efdbdbe74ab2d0818a2dd8f4c622f6807a96aaade6695936822ff12de86f9961 WatchSource:0}: Error finding container efdbdbe74ab2d0818a2dd8f4c622f6807a96aaade6695936822ff12de86f9961: Status 404 returned error can't find the container with id efdbdbe74ab2d0818a2dd8f4c622f6807a96aaade6695936822ff12de86f9961 Feb 27 17:47:00 crc kubenswrapper[4830]: I0227 17:47:00.025327 4830 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/ovn-controller-ovs-rfd9s" event={"ID":"c2f87ce8-a38b-467d-a4bf-17eefbfbc958","Type":"ContainerStarted","Data":"efdbdbe74ab2d0818a2dd8f4c622f6807a96aaade6695936822ff12de86f9961"} Feb 27 17:47:00 crc kubenswrapper[4830]: I0227 17:47:00.027014 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-b8hbd" event={"ID":"d4881336-2572-4aa9-a0c2-9c46b73b7898","Type":"ContainerStarted","Data":"5e884d2162761b7312d62a1b0662ab5470b567dfd19adeb9dd9a6966e8bfe666"} Feb 27 17:47:00 crc kubenswrapper[4830]: I0227 17:47:00.268606 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-xzrhp"] Feb 27 17:47:00 crc kubenswrapper[4830]: I0227 17:47:00.270041 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-xzrhp" Feb 27 17:47:00 crc kubenswrapper[4830]: I0227 17:47:00.276402 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Feb 27 17:47:00 crc kubenswrapper[4830]: I0227 17:47:00.287298 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-xzrhp"] Feb 27 17:47:00 crc kubenswrapper[4830]: I0227 17:47:00.405911 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/f2c4bba8-df9d-411c-9990-7e98513001aa-ovn-rundir\") pod \"ovn-controller-metrics-xzrhp\" (UID: \"f2c4bba8-df9d-411c-9990-7e98513001aa\") " pod="openstack/ovn-controller-metrics-xzrhp" Feb 27 17:47:00 crc kubenswrapper[4830]: I0227 17:47:00.405988 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2c4bba8-df9d-411c-9990-7e98513001aa-config\") pod \"ovn-controller-metrics-xzrhp\" (UID: \"f2c4bba8-df9d-411c-9990-7e98513001aa\") " pod="openstack/ovn-controller-metrics-xzrhp" Feb 27 
17:47:00 crc kubenswrapper[4830]: I0227 17:47:00.406015 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/f2c4bba8-df9d-411c-9990-7e98513001aa-ovs-rundir\") pod \"ovn-controller-metrics-xzrhp\" (UID: \"f2c4bba8-df9d-411c-9990-7e98513001aa\") " pod="openstack/ovn-controller-metrics-xzrhp" Feb 27 17:47:00 crc kubenswrapper[4830]: I0227 17:47:00.406113 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmg9n\" (UniqueName: \"kubernetes.io/projected/f2c4bba8-df9d-411c-9990-7e98513001aa-kube-api-access-dmg9n\") pod \"ovn-controller-metrics-xzrhp\" (UID: \"f2c4bba8-df9d-411c-9990-7e98513001aa\") " pod="openstack/ovn-controller-metrics-xzrhp" Feb 27 17:47:00 crc kubenswrapper[4830]: I0227 17:47:00.507584 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/f2c4bba8-df9d-411c-9990-7e98513001aa-ovn-rundir\") pod \"ovn-controller-metrics-xzrhp\" (UID: \"f2c4bba8-df9d-411c-9990-7e98513001aa\") " pod="openstack/ovn-controller-metrics-xzrhp" Feb 27 17:47:00 crc kubenswrapper[4830]: I0227 17:47:00.507627 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2c4bba8-df9d-411c-9990-7e98513001aa-config\") pod \"ovn-controller-metrics-xzrhp\" (UID: \"f2c4bba8-df9d-411c-9990-7e98513001aa\") " pod="openstack/ovn-controller-metrics-xzrhp" Feb 27 17:47:00 crc kubenswrapper[4830]: I0227 17:47:00.507647 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/f2c4bba8-df9d-411c-9990-7e98513001aa-ovs-rundir\") pod \"ovn-controller-metrics-xzrhp\" (UID: \"f2c4bba8-df9d-411c-9990-7e98513001aa\") " pod="openstack/ovn-controller-metrics-xzrhp" Feb 27 17:47:00 crc kubenswrapper[4830]: I0227 
17:47:00.507692 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dmg9n\" (UniqueName: \"kubernetes.io/projected/f2c4bba8-df9d-411c-9990-7e98513001aa-kube-api-access-dmg9n\") pod \"ovn-controller-metrics-xzrhp\" (UID: \"f2c4bba8-df9d-411c-9990-7e98513001aa\") " pod="openstack/ovn-controller-metrics-xzrhp" Feb 27 17:47:00 crc kubenswrapper[4830]: I0227 17:47:00.508003 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/f2c4bba8-df9d-411c-9990-7e98513001aa-ovn-rundir\") pod \"ovn-controller-metrics-xzrhp\" (UID: \"f2c4bba8-df9d-411c-9990-7e98513001aa\") " pod="openstack/ovn-controller-metrics-xzrhp" Feb 27 17:47:00 crc kubenswrapper[4830]: I0227 17:47:00.508113 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/f2c4bba8-df9d-411c-9990-7e98513001aa-ovs-rundir\") pod \"ovn-controller-metrics-xzrhp\" (UID: \"f2c4bba8-df9d-411c-9990-7e98513001aa\") " pod="openstack/ovn-controller-metrics-xzrhp" Feb 27 17:47:00 crc kubenswrapper[4830]: I0227 17:47:00.508442 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2c4bba8-df9d-411c-9990-7e98513001aa-config\") pod \"ovn-controller-metrics-xzrhp\" (UID: \"f2c4bba8-df9d-411c-9990-7e98513001aa\") " pod="openstack/ovn-controller-metrics-xzrhp" Feb 27 17:47:00 crc kubenswrapper[4830]: I0227 17:47:00.528576 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dmg9n\" (UniqueName: \"kubernetes.io/projected/f2c4bba8-df9d-411c-9990-7e98513001aa-kube-api-access-dmg9n\") pod \"ovn-controller-metrics-xzrhp\" (UID: \"f2c4bba8-df9d-411c-9990-7e98513001aa\") " pod="openstack/ovn-controller-metrics-xzrhp" Feb 27 17:47:00 crc kubenswrapper[4830]: I0227 17:47:00.614200 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-xzrhp" Feb 27 17:47:01 crc kubenswrapper[4830]: I0227 17:47:01.039460 4830 generic.go:334] "Generic (PLEG): container finished" podID="c2f87ce8-a38b-467d-a4bf-17eefbfbc958" containerID="b5c99f8539e4bf7e4e80de9f0141bf52f609600dc347b3392b1521e7e29ecd8d" exitCode=0 Feb 27 17:47:01 crc kubenswrapper[4830]: I0227 17:47:01.039722 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-rfd9s" event={"ID":"c2f87ce8-a38b-467d-a4bf-17eefbfbc958","Type":"ContainerDied","Data":"b5c99f8539e4bf7e4e80de9f0141bf52f609600dc347b3392b1521e7e29ecd8d"} Feb 27 17:47:01 crc kubenswrapper[4830]: I0227 17:47:01.041484 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-b8hbd" event={"ID":"d4881336-2572-4aa9-a0c2-9c46b73b7898","Type":"ContainerStarted","Data":"1fc2bff265e65640112a13c25ce7bcdf35de3bc9da69447ec3258293d9b948a4"} Feb 27 17:47:01 crc kubenswrapper[4830]: I0227 17:47:01.042017 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-b8hbd" Feb 27 17:47:01 crc kubenswrapper[4830]: I0227 17:47:01.062764 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-xzrhp"] Feb 27 17:47:01 crc kubenswrapper[4830]: I0227 17:47:01.100668 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-b8hbd" podStartSLOduration=3.100650723 podStartE2EDuration="3.100650723s" podCreationTimestamp="2026-02-27 17:46:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 17:47:01.089356893 +0000 UTC m=+6017.178629356" watchObservedRunningTime="2026-02-27 17:47:01.100650723 +0000 UTC m=+6017.189923186" Feb 27 17:47:02 crc kubenswrapper[4830]: I0227 17:47:02.061462 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-xzrhp" 
event={"ID":"f2c4bba8-df9d-411c-9990-7e98513001aa","Type":"ContainerStarted","Data":"c52fa1c5ec2bb4925c8b58e1ed38ebb321e02bb595c99ef85fc52c2a22385707"} Feb 27 17:47:02 crc kubenswrapper[4830]: I0227 17:47:02.062049 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-xzrhp" event={"ID":"f2c4bba8-df9d-411c-9990-7e98513001aa","Type":"ContainerStarted","Data":"1fd6c3f6389b4745931ddbf1124252bd3e8bd33012175bf85da81aa98b102fe7"} Feb 27 17:47:02 crc kubenswrapper[4830]: I0227 17:47:02.071536 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-rfd9s" event={"ID":"c2f87ce8-a38b-467d-a4bf-17eefbfbc958","Type":"ContainerStarted","Data":"d41382bac9e134c3852775822b4260335265072d8c3140d3c1dc8b467e2ca901"} Feb 27 17:47:02 crc kubenswrapper[4830]: I0227 17:47:02.071690 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-rfd9s" Feb 27 17:47:02 crc kubenswrapper[4830]: I0227 17:47:02.071720 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-rfd9s" event={"ID":"c2f87ce8-a38b-467d-a4bf-17eefbfbc958","Type":"ContainerStarted","Data":"9b4c32691a4469557d97ee548d200b2ccc9a5be4b3a2fd450a4a0b692c7142c4"} Feb 27 17:47:02 crc kubenswrapper[4830]: I0227 17:47:02.071987 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-rfd9s" Feb 27 17:47:02 crc kubenswrapper[4830]: I0227 17:47:02.079530 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-xzrhp" podStartSLOduration=2.079510494 podStartE2EDuration="2.079510494s" podCreationTimestamp="2026-02-27 17:47:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 17:47:02.078585141 +0000 UTC m=+6018.167857604" watchObservedRunningTime="2026-02-27 17:47:02.079510494 +0000 UTC 
m=+6018.168782977" Feb 27 17:47:02 crc kubenswrapper[4830]: I0227 17:47:02.131598 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-rfd9s" podStartSLOduration=4.131575288 podStartE2EDuration="4.131575288s" podCreationTimestamp="2026-02-27 17:46:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 17:47:02.122257326 +0000 UTC m=+6018.211529809" watchObservedRunningTime="2026-02-27 17:47:02.131575288 +0000 UTC m=+6018.220847771" Feb 27 17:47:02 crc kubenswrapper[4830]: I0227 17:47:02.762932 4830 scope.go:117] "RemoveContainer" containerID="0f73af46b765b3d98f3f5e9883b49887ac4f2ac3485e91a001e18c5d351646d8" Feb 27 17:47:02 crc kubenswrapper[4830]: E0227 17:47:02.763352 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 17:47:04 crc kubenswrapper[4830]: E0227 17:47:04.777696 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536898-vrwjs" podUID="204eb1af-36ad-4de7-9da7-9a37fefd3026" Feb 27 17:47:06 crc kubenswrapper[4830]: I0227 17:47:06.057008 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-nhz7x"] Feb 27 17:47:06 crc kubenswrapper[4830]: I0227 17:47:06.071338 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-nhz7x"] Feb 27 17:47:06 crc kubenswrapper[4830]: I0227 
17:47:06.782935 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24e71808-4a6f-46f1-b878-7a4b2e75270b" path="/var/lib/kubelet/pods/24e71808-4a6f-46f1-b878-7a4b2e75270b/volumes" Feb 27 17:47:08 crc kubenswrapper[4830]: E0227 17:47:08.766547 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gbcl6" podUID="90e915d6-d74a-4f5b-a8da-8f0f2acdda48" Feb 27 17:47:08 crc kubenswrapper[4830]: I0227 17:47:08.960719 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-db-create-z4frp"] Feb 27 17:47:08 crc kubenswrapper[4830]: I0227 17:47:08.962364 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-db-create-z4frp" Feb 27 17:47:08 crc kubenswrapper[4830]: I0227 17:47:08.969143 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-db-create-z4frp"] Feb 27 17:47:09 crc kubenswrapper[4830]: I0227 17:47:09.096350 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2526\" (UniqueName: \"kubernetes.io/projected/53729d31-2de8-4d6f-b2de-7b9eacb758a0-kube-api-access-p2526\") pod \"octavia-db-create-z4frp\" (UID: \"53729d31-2de8-4d6f-b2de-7b9eacb758a0\") " pod="openstack/octavia-db-create-z4frp" Feb 27 17:47:09 crc kubenswrapper[4830]: I0227 17:47:09.096573 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/53729d31-2de8-4d6f-b2de-7b9eacb758a0-operator-scripts\") pod \"octavia-db-create-z4frp\" (UID: \"53729d31-2de8-4d6f-b2de-7b9eacb758a0\") " pod="openstack/octavia-db-create-z4frp" Feb 27 17:47:09 crc kubenswrapper[4830]: I0227 17:47:09.197983 4830 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-p2526\" (UniqueName: \"kubernetes.io/projected/53729d31-2de8-4d6f-b2de-7b9eacb758a0-kube-api-access-p2526\") pod \"octavia-db-create-z4frp\" (UID: \"53729d31-2de8-4d6f-b2de-7b9eacb758a0\") " pod="openstack/octavia-db-create-z4frp" Feb 27 17:47:09 crc kubenswrapper[4830]: I0227 17:47:09.198126 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/53729d31-2de8-4d6f-b2de-7b9eacb758a0-operator-scripts\") pod \"octavia-db-create-z4frp\" (UID: \"53729d31-2de8-4d6f-b2de-7b9eacb758a0\") " pod="openstack/octavia-db-create-z4frp" Feb 27 17:47:09 crc kubenswrapper[4830]: I0227 17:47:09.198914 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/53729d31-2de8-4d6f-b2de-7b9eacb758a0-operator-scripts\") pod \"octavia-db-create-z4frp\" (UID: \"53729d31-2de8-4d6f-b2de-7b9eacb758a0\") " pod="openstack/octavia-db-create-z4frp" Feb 27 17:47:09 crc kubenswrapper[4830]: I0227 17:47:09.226306 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2526\" (UniqueName: \"kubernetes.io/projected/53729d31-2de8-4d6f-b2de-7b9eacb758a0-kube-api-access-p2526\") pod \"octavia-db-create-z4frp\" (UID: \"53729d31-2de8-4d6f-b2de-7b9eacb758a0\") " pod="openstack/octavia-db-create-z4frp" Feb 27 17:47:09 crc kubenswrapper[4830]: I0227 17:47:09.295521 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-db-create-z4frp" Feb 27 17:47:09 crc kubenswrapper[4830]: I0227 17:47:09.847041 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-db-create-z4frp"] Feb 27 17:47:09 crc kubenswrapper[4830]: W0227 17:47:09.856132 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod53729d31_2de8_4d6f_b2de_7b9eacb758a0.slice/crio-c4ecc7ef5fdded8eca93cd6b5f32280a9fb6e758875cb5803ff98c9b6c89ea36 WatchSource:0}: Error finding container c4ecc7ef5fdded8eca93cd6b5f32280a9fb6e758875cb5803ff98c9b6c89ea36: Status 404 returned error can't find the container with id c4ecc7ef5fdded8eca93cd6b5f32280a9fb6e758875cb5803ff98c9b6c89ea36 Feb 27 17:47:10 crc kubenswrapper[4830]: I0227 17:47:10.188854 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-create-z4frp" event={"ID":"53729d31-2de8-4d6f-b2de-7b9eacb758a0","Type":"ContainerStarted","Data":"9fd246f254a91e8fb8ab65f60c00c04f30548b8e4e10017fa997454e6c1dfe57"} Feb 27 17:47:10 crc kubenswrapper[4830]: I0227 17:47:10.189250 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-create-z4frp" event={"ID":"53729d31-2de8-4d6f-b2de-7b9eacb758a0","Type":"ContainerStarted","Data":"c4ecc7ef5fdded8eca93cd6b5f32280a9fb6e758875cb5803ff98c9b6c89ea36"} Feb 27 17:47:10 crc kubenswrapper[4830]: I0227 17:47:10.210650 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-db-create-z4frp" podStartSLOduration=2.210628359 podStartE2EDuration="2.210628359s" podCreationTimestamp="2026-02-27 17:47:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 17:47:10.205707902 +0000 UTC m=+6026.294980445" watchObservedRunningTime="2026-02-27 17:47:10.210628359 +0000 UTC m=+6026.299900822" Feb 27 17:47:10 crc kubenswrapper[4830]: I0227 
17:47:10.398920 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-ebd1-account-create-update-s45vz"] Feb 27 17:47:10 crc kubenswrapper[4830]: I0227 17:47:10.400220 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-ebd1-account-create-update-s45vz" Feb 27 17:47:10 crc kubenswrapper[4830]: I0227 17:47:10.402771 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-db-secret" Feb 27 17:47:10 crc kubenswrapper[4830]: I0227 17:47:10.409118 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-ebd1-account-create-update-s45vz"] Feb 27 17:47:10 crc kubenswrapper[4830]: I0227 17:47:10.524575 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5de2bd98-8ad8-4952-9956-225bec3013e1-operator-scripts\") pod \"octavia-ebd1-account-create-update-s45vz\" (UID: \"5de2bd98-8ad8-4952-9956-225bec3013e1\") " pod="openstack/octavia-ebd1-account-create-update-s45vz" Feb 27 17:47:10 crc kubenswrapper[4830]: I0227 17:47:10.524685 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spmcf\" (UniqueName: \"kubernetes.io/projected/5de2bd98-8ad8-4952-9956-225bec3013e1-kube-api-access-spmcf\") pod \"octavia-ebd1-account-create-update-s45vz\" (UID: \"5de2bd98-8ad8-4952-9956-225bec3013e1\") " pod="openstack/octavia-ebd1-account-create-update-s45vz" Feb 27 17:47:10 crc kubenswrapper[4830]: I0227 17:47:10.630675 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5de2bd98-8ad8-4952-9956-225bec3013e1-operator-scripts\") pod \"octavia-ebd1-account-create-update-s45vz\" (UID: \"5de2bd98-8ad8-4952-9956-225bec3013e1\") " pod="openstack/octavia-ebd1-account-create-update-s45vz" Feb 27 17:47:10 crc kubenswrapper[4830]: I0227 
17:47:10.629396 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5de2bd98-8ad8-4952-9956-225bec3013e1-operator-scripts\") pod \"octavia-ebd1-account-create-update-s45vz\" (UID: \"5de2bd98-8ad8-4952-9956-225bec3013e1\") " pod="openstack/octavia-ebd1-account-create-update-s45vz" Feb 27 17:47:10 crc kubenswrapper[4830]: I0227 17:47:10.630882 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-spmcf\" (UniqueName: \"kubernetes.io/projected/5de2bd98-8ad8-4952-9956-225bec3013e1-kube-api-access-spmcf\") pod \"octavia-ebd1-account-create-update-s45vz\" (UID: \"5de2bd98-8ad8-4952-9956-225bec3013e1\") " pod="openstack/octavia-ebd1-account-create-update-s45vz" Feb 27 17:47:10 crc kubenswrapper[4830]: I0227 17:47:10.657841 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-spmcf\" (UniqueName: \"kubernetes.io/projected/5de2bd98-8ad8-4952-9956-225bec3013e1-kube-api-access-spmcf\") pod \"octavia-ebd1-account-create-update-s45vz\" (UID: \"5de2bd98-8ad8-4952-9956-225bec3013e1\") " pod="openstack/octavia-ebd1-account-create-update-s45vz" Feb 27 17:47:10 crc kubenswrapper[4830]: I0227 17:47:10.724912 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-ebd1-account-create-update-s45vz" Feb 27 17:47:10 crc kubenswrapper[4830]: I0227 17:47:10.995463 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-ebd1-account-create-update-s45vz"] Feb 27 17:47:11 crc kubenswrapper[4830]: I0227 17:47:11.199530 4830 generic.go:334] "Generic (PLEG): container finished" podID="53729d31-2de8-4d6f-b2de-7b9eacb758a0" containerID="9fd246f254a91e8fb8ab65f60c00c04f30548b8e4e10017fa997454e6c1dfe57" exitCode=0 Feb 27 17:47:11 crc kubenswrapper[4830]: I0227 17:47:11.199621 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-create-z4frp" event={"ID":"53729d31-2de8-4d6f-b2de-7b9eacb758a0","Type":"ContainerDied","Data":"9fd246f254a91e8fb8ab65f60c00c04f30548b8e4e10017fa997454e6c1dfe57"} Feb 27 17:47:11 crc kubenswrapper[4830]: I0227 17:47:11.201170 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-ebd1-account-create-update-s45vz" event={"ID":"5de2bd98-8ad8-4952-9956-225bec3013e1","Type":"ContainerStarted","Data":"bbea1891288f5f5c543fb453f5df2b3682410ec0450b7815a5e5c511061dda4e"} Feb 27 17:47:12 crc kubenswrapper[4830]: I0227 17:47:12.228281 4830 generic.go:334] "Generic (PLEG): container finished" podID="5de2bd98-8ad8-4952-9956-225bec3013e1" containerID="518ad1501f616efbe6ad57be8f6606539be903e87d0cdca192e8588c7fa593e1" exitCode=0 Feb 27 17:47:12 crc kubenswrapper[4830]: I0227 17:47:12.228538 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-ebd1-account-create-update-s45vz" event={"ID":"5de2bd98-8ad8-4952-9956-225bec3013e1","Type":"ContainerDied","Data":"518ad1501f616efbe6ad57be8f6606539be903e87d0cdca192e8588c7fa593e1"} Feb 27 17:47:12 crc kubenswrapper[4830]: I0227 17:47:12.670784 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-db-create-z4frp" Feb 27 17:47:12 crc kubenswrapper[4830]: I0227 17:47:12.784189 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/53729d31-2de8-4d6f-b2de-7b9eacb758a0-operator-scripts\") pod \"53729d31-2de8-4d6f-b2de-7b9eacb758a0\" (UID: \"53729d31-2de8-4d6f-b2de-7b9eacb758a0\") " Feb 27 17:47:12 crc kubenswrapper[4830]: I0227 17:47:12.784380 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p2526\" (UniqueName: \"kubernetes.io/projected/53729d31-2de8-4d6f-b2de-7b9eacb758a0-kube-api-access-p2526\") pod \"53729d31-2de8-4d6f-b2de-7b9eacb758a0\" (UID: \"53729d31-2de8-4d6f-b2de-7b9eacb758a0\") " Feb 27 17:47:12 crc kubenswrapper[4830]: I0227 17:47:12.784832 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/53729d31-2de8-4d6f-b2de-7b9eacb758a0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "53729d31-2de8-4d6f-b2de-7b9eacb758a0" (UID: "53729d31-2de8-4d6f-b2de-7b9eacb758a0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:47:12 crc kubenswrapper[4830]: I0227 17:47:12.785403 4830 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/53729d31-2de8-4d6f-b2de-7b9eacb758a0-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 17:47:12 crc kubenswrapper[4830]: I0227 17:47:12.792147 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53729d31-2de8-4d6f-b2de-7b9eacb758a0-kube-api-access-p2526" (OuterVolumeSpecName: "kube-api-access-p2526") pod "53729d31-2de8-4d6f-b2de-7b9eacb758a0" (UID: "53729d31-2de8-4d6f-b2de-7b9eacb758a0"). InnerVolumeSpecName "kube-api-access-p2526". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:47:12 crc kubenswrapper[4830]: I0227 17:47:12.887764 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p2526\" (UniqueName: \"kubernetes.io/projected/53729d31-2de8-4d6f-b2de-7b9eacb758a0-kube-api-access-p2526\") on node \"crc\" DevicePath \"\"" Feb 27 17:47:13 crc kubenswrapper[4830]: I0227 17:47:13.247030 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-db-create-z4frp" Feb 27 17:47:13 crc kubenswrapper[4830]: I0227 17:47:13.248330 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-create-z4frp" event={"ID":"53729d31-2de8-4d6f-b2de-7b9eacb758a0","Type":"ContainerDied","Data":"c4ecc7ef5fdded8eca93cd6b5f32280a9fb6e758875cb5803ff98c9b6c89ea36"} Feb 27 17:47:13 crc kubenswrapper[4830]: I0227 17:47:13.248779 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c4ecc7ef5fdded8eca93cd6b5f32280a9fb6e758875cb5803ff98c9b6c89ea36" Feb 27 17:47:13 crc kubenswrapper[4830]: I0227 17:47:13.697180 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-ebd1-account-create-update-s45vz" Feb 27 17:47:13 crc kubenswrapper[4830]: I0227 17:47:13.808187 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-spmcf\" (UniqueName: \"kubernetes.io/projected/5de2bd98-8ad8-4952-9956-225bec3013e1-kube-api-access-spmcf\") pod \"5de2bd98-8ad8-4952-9956-225bec3013e1\" (UID: \"5de2bd98-8ad8-4952-9956-225bec3013e1\") " Feb 27 17:47:13 crc kubenswrapper[4830]: I0227 17:47:13.808322 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5de2bd98-8ad8-4952-9956-225bec3013e1-operator-scripts\") pod \"5de2bd98-8ad8-4952-9956-225bec3013e1\" (UID: \"5de2bd98-8ad8-4952-9956-225bec3013e1\") " Feb 27 17:47:13 crc kubenswrapper[4830]: I0227 17:47:13.809404 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5de2bd98-8ad8-4952-9956-225bec3013e1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5de2bd98-8ad8-4952-9956-225bec3013e1" (UID: "5de2bd98-8ad8-4952-9956-225bec3013e1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:47:13 crc kubenswrapper[4830]: I0227 17:47:13.818344 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5de2bd98-8ad8-4952-9956-225bec3013e1-kube-api-access-spmcf" (OuterVolumeSpecName: "kube-api-access-spmcf") pod "5de2bd98-8ad8-4952-9956-225bec3013e1" (UID: "5de2bd98-8ad8-4952-9956-225bec3013e1"). InnerVolumeSpecName "kube-api-access-spmcf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:47:13 crc kubenswrapper[4830]: I0227 17:47:13.912197 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-spmcf\" (UniqueName: \"kubernetes.io/projected/5de2bd98-8ad8-4952-9956-225bec3013e1-kube-api-access-spmcf\") on node \"crc\" DevicePath \"\"" Feb 27 17:47:13 crc kubenswrapper[4830]: I0227 17:47:13.912255 4830 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5de2bd98-8ad8-4952-9956-225bec3013e1-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 17:47:14 crc kubenswrapper[4830]: I0227 17:47:14.265199 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-ebd1-account-create-update-s45vz" event={"ID":"5de2bd98-8ad8-4952-9956-225bec3013e1","Type":"ContainerDied","Data":"bbea1891288f5f5c543fb453f5df2b3682410ec0450b7815a5e5c511061dda4e"} Feb 27 17:47:14 crc kubenswrapper[4830]: I0227 17:47:14.265266 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bbea1891288f5f5c543fb453f5df2b3682410ec0450b7815a5e5c511061dda4e" Feb 27 17:47:14 crc kubenswrapper[4830]: I0227 17:47:14.265368 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-ebd1-account-create-update-s45vz" Feb 27 17:47:14 crc kubenswrapper[4830]: I0227 17:47:14.779539 4830 scope.go:117] "RemoveContainer" containerID="0f73af46b765b3d98f3f5e9883b49887ac4f2ac3485e91a001e18c5d351646d8" Feb 27 17:47:14 crc kubenswrapper[4830]: E0227 17:47:14.780116 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 17:47:16 crc kubenswrapper[4830]: I0227 17:47:16.462779 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-persistence-db-create-gtbf9"] Feb 27 17:47:16 crc kubenswrapper[4830]: E0227 17:47:16.463791 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5de2bd98-8ad8-4952-9956-225bec3013e1" containerName="mariadb-account-create-update" Feb 27 17:47:16 crc kubenswrapper[4830]: I0227 17:47:16.463828 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="5de2bd98-8ad8-4952-9956-225bec3013e1" containerName="mariadb-account-create-update" Feb 27 17:47:16 crc kubenswrapper[4830]: E0227 17:47:16.463845 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53729d31-2de8-4d6f-b2de-7b9eacb758a0" containerName="mariadb-database-create" Feb 27 17:47:16 crc kubenswrapper[4830]: I0227 17:47:16.463856 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="53729d31-2de8-4d6f-b2de-7b9eacb758a0" containerName="mariadb-database-create" Feb 27 17:47:16 crc kubenswrapper[4830]: I0227 17:47:16.464193 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="5de2bd98-8ad8-4952-9956-225bec3013e1" containerName="mariadb-account-create-update" Feb 27 17:47:16 crc 
kubenswrapper[4830]: I0227 17:47:16.464229 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="53729d31-2de8-4d6f-b2de-7b9eacb758a0" containerName="mariadb-database-create" Feb 27 17:47:16 crc kubenswrapper[4830]: I0227 17:47:16.465126 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-persistence-db-create-gtbf9" Feb 27 17:47:16 crc kubenswrapper[4830]: I0227 17:47:16.481360 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-persistence-db-create-gtbf9"] Feb 27 17:47:16 crc kubenswrapper[4830]: I0227 17:47:16.571426 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfdmr\" (UniqueName: \"kubernetes.io/projected/0c4f7b16-9303-44d5-a45b-a9365add4438-kube-api-access-jfdmr\") pod \"octavia-persistence-db-create-gtbf9\" (UID: \"0c4f7b16-9303-44d5-a45b-a9365add4438\") " pod="openstack/octavia-persistence-db-create-gtbf9" Feb 27 17:47:16 crc kubenswrapper[4830]: I0227 17:47:16.571476 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c4f7b16-9303-44d5-a45b-a9365add4438-operator-scripts\") pod \"octavia-persistence-db-create-gtbf9\" (UID: \"0c4f7b16-9303-44d5-a45b-a9365add4438\") " pod="openstack/octavia-persistence-db-create-gtbf9" Feb 27 17:47:16 crc kubenswrapper[4830]: I0227 17:47:16.673332 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jfdmr\" (UniqueName: \"kubernetes.io/projected/0c4f7b16-9303-44d5-a45b-a9365add4438-kube-api-access-jfdmr\") pod \"octavia-persistence-db-create-gtbf9\" (UID: \"0c4f7b16-9303-44d5-a45b-a9365add4438\") " pod="openstack/octavia-persistence-db-create-gtbf9" Feb 27 17:47:16 crc kubenswrapper[4830]: I0227 17:47:16.673378 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" 
(UniqueName: \"kubernetes.io/configmap/0c4f7b16-9303-44d5-a45b-a9365add4438-operator-scripts\") pod \"octavia-persistence-db-create-gtbf9\" (UID: \"0c4f7b16-9303-44d5-a45b-a9365add4438\") " pod="openstack/octavia-persistence-db-create-gtbf9" Feb 27 17:47:16 crc kubenswrapper[4830]: I0227 17:47:16.674041 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c4f7b16-9303-44d5-a45b-a9365add4438-operator-scripts\") pod \"octavia-persistence-db-create-gtbf9\" (UID: \"0c4f7b16-9303-44d5-a45b-a9365add4438\") " pod="openstack/octavia-persistence-db-create-gtbf9" Feb 27 17:47:16 crc kubenswrapper[4830]: I0227 17:47:16.707544 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jfdmr\" (UniqueName: \"kubernetes.io/projected/0c4f7b16-9303-44d5-a45b-a9365add4438-kube-api-access-jfdmr\") pod \"octavia-persistence-db-create-gtbf9\" (UID: \"0c4f7b16-9303-44d5-a45b-a9365add4438\") " pod="openstack/octavia-persistence-db-create-gtbf9" Feb 27 17:47:16 crc kubenswrapper[4830]: I0227 17:47:16.790994 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-persistence-db-create-gtbf9" Feb 27 17:47:17 crc kubenswrapper[4830]: I0227 17:47:17.314055 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-persistence-db-create-gtbf9"] Feb 27 17:47:17 crc kubenswrapper[4830]: W0227 17:47:17.336956 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0c4f7b16_9303_44d5_a45b_a9365add4438.slice/crio-8e5f25c8294b5cd996c22c095b4639af1e7a4eb998721ecec1ad10def83e10d6 WatchSource:0}: Error finding container 8e5f25c8294b5cd996c22c095b4639af1e7a4eb998721ecec1ad10def83e10d6: Status 404 returned error can't find the container with id 8e5f25c8294b5cd996c22c095b4639af1e7a4eb998721ecec1ad10def83e10d6 Feb 27 17:47:17 crc kubenswrapper[4830]: I0227 17:47:17.838534 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-69a2-account-create-update-r8l2r"] Feb 27 17:47:17 crc kubenswrapper[4830]: I0227 17:47:17.840291 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-69a2-account-create-update-r8l2r" Feb 27 17:47:17 crc kubenswrapper[4830]: I0227 17:47:17.842819 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-persistence-db-secret" Feb 27 17:47:17 crc kubenswrapper[4830]: I0227 17:47:17.860539 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-69a2-account-create-update-r8l2r"] Feb 27 17:47:17 crc kubenswrapper[4830]: I0227 17:47:17.895037 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bae2b61b-3081-415b-a231-4994052a20c4-operator-scripts\") pod \"octavia-69a2-account-create-update-r8l2r\" (UID: \"bae2b61b-3081-415b-a231-4994052a20c4\") " pod="openstack/octavia-69a2-account-create-update-r8l2r" Feb 27 17:47:17 crc kubenswrapper[4830]: I0227 17:47:17.895143 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjnn6\" (UniqueName: \"kubernetes.io/projected/bae2b61b-3081-415b-a231-4994052a20c4-kube-api-access-jjnn6\") pod \"octavia-69a2-account-create-update-r8l2r\" (UID: \"bae2b61b-3081-415b-a231-4994052a20c4\") " pod="openstack/octavia-69a2-account-create-update-r8l2r" Feb 27 17:47:17 crc kubenswrapper[4830]: I0227 17:47:17.997272 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jjnn6\" (UniqueName: \"kubernetes.io/projected/bae2b61b-3081-415b-a231-4994052a20c4-kube-api-access-jjnn6\") pod \"octavia-69a2-account-create-update-r8l2r\" (UID: \"bae2b61b-3081-415b-a231-4994052a20c4\") " pod="openstack/octavia-69a2-account-create-update-r8l2r" Feb 27 17:47:17 crc kubenswrapper[4830]: I0227 17:47:17.997448 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/bae2b61b-3081-415b-a231-4994052a20c4-operator-scripts\") pod \"octavia-69a2-account-create-update-r8l2r\" (UID: \"bae2b61b-3081-415b-a231-4994052a20c4\") " pod="openstack/octavia-69a2-account-create-update-r8l2r" Feb 27 17:47:17 crc kubenswrapper[4830]: I0227 17:47:17.998328 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bae2b61b-3081-415b-a231-4994052a20c4-operator-scripts\") pod \"octavia-69a2-account-create-update-r8l2r\" (UID: \"bae2b61b-3081-415b-a231-4994052a20c4\") " pod="openstack/octavia-69a2-account-create-update-r8l2r" Feb 27 17:47:18 crc kubenswrapper[4830]: I0227 17:47:18.027818 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jjnn6\" (UniqueName: \"kubernetes.io/projected/bae2b61b-3081-415b-a231-4994052a20c4-kube-api-access-jjnn6\") pod \"octavia-69a2-account-create-update-r8l2r\" (UID: \"bae2b61b-3081-415b-a231-4994052a20c4\") " pod="openstack/octavia-69a2-account-create-update-r8l2r" Feb 27 17:47:18 crc kubenswrapper[4830]: I0227 17:47:18.184468 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-69a2-account-create-update-r8l2r" Feb 27 17:47:18 crc kubenswrapper[4830]: I0227 17:47:18.325928 4830 generic.go:334] "Generic (PLEG): container finished" podID="0c4f7b16-9303-44d5-a45b-a9365add4438" containerID="89f1f067e27d0ff1f16fc5c3814328a79897ecb9eb81ad881ffa0e0536577f9e" exitCode=0 Feb 27 17:47:18 crc kubenswrapper[4830]: I0227 17:47:18.326227 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-persistence-db-create-gtbf9" event={"ID":"0c4f7b16-9303-44d5-a45b-a9365add4438","Type":"ContainerDied","Data":"89f1f067e27d0ff1f16fc5c3814328a79897ecb9eb81ad881ffa0e0536577f9e"} Feb 27 17:47:18 crc kubenswrapper[4830]: I0227 17:47:18.326255 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-persistence-db-create-gtbf9" event={"ID":"0c4f7b16-9303-44d5-a45b-a9365add4438","Type":"ContainerStarted","Data":"8e5f25c8294b5cd996c22c095b4639af1e7a4eb998721ecec1ad10def83e10d6"} Feb 27 17:47:18 crc kubenswrapper[4830]: I0227 17:47:18.701522 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-69a2-account-create-update-r8l2r"] Feb 27 17:47:18 crc kubenswrapper[4830]: W0227 17:47:18.702885 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbae2b61b_3081_415b_a231_4994052a20c4.slice/crio-13886193810c40fed387cf828ec6938e733b998bca47816d6ce5f0095783c70b WatchSource:0}: Error finding container 13886193810c40fed387cf828ec6938e733b998bca47816d6ce5f0095783c70b: Status 404 returned error can't find the container with id 13886193810c40fed387cf828ec6938e733b998bca47816d6ce5f0095783c70b Feb 27 17:47:18 crc kubenswrapper[4830]: I0227 17:47:18.870875 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-djgzn"] Feb 27 17:47:18 crc kubenswrapper[4830]: I0227 17:47:18.874012 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-djgzn" Feb 27 17:47:18 crc kubenswrapper[4830]: I0227 17:47:18.881046 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-djgzn"] Feb 27 17:47:18 crc kubenswrapper[4830]: I0227 17:47:18.915504 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kq7fb\" (UniqueName: \"kubernetes.io/projected/971be4ad-5722-4c6f-8f27-5421e80d1cff-kube-api-access-kq7fb\") pod \"community-operators-djgzn\" (UID: \"971be4ad-5722-4c6f-8f27-5421e80d1cff\") " pod="openshift-marketplace/community-operators-djgzn" Feb 27 17:47:18 crc kubenswrapper[4830]: I0227 17:47:18.915600 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/971be4ad-5722-4c6f-8f27-5421e80d1cff-utilities\") pod \"community-operators-djgzn\" (UID: \"971be4ad-5722-4c6f-8f27-5421e80d1cff\") " pod="openshift-marketplace/community-operators-djgzn" Feb 27 17:47:18 crc kubenswrapper[4830]: I0227 17:47:18.915648 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/971be4ad-5722-4c6f-8f27-5421e80d1cff-catalog-content\") pod \"community-operators-djgzn\" (UID: \"971be4ad-5722-4c6f-8f27-5421e80d1cff\") " pod="openshift-marketplace/community-operators-djgzn" Feb 27 17:47:19 crc kubenswrapper[4830]: I0227 17:47:19.017522 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kq7fb\" (UniqueName: \"kubernetes.io/projected/971be4ad-5722-4c6f-8f27-5421e80d1cff-kube-api-access-kq7fb\") pod \"community-operators-djgzn\" (UID: \"971be4ad-5722-4c6f-8f27-5421e80d1cff\") " pod="openshift-marketplace/community-operators-djgzn" Feb 27 17:47:19 crc kubenswrapper[4830]: I0227 17:47:19.017625 4830 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/971be4ad-5722-4c6f-8f27-5421e80d1cff-utilities\") pod \"community-operators-djgzn\" (UID: \"971be4ad-5722-4c6f-8f27-5421e80d1cff\") " pod="openshift-marketplace/community-operators-djgzn" Feb 27 17:47:19 crc kubenswrapper[4830]: I0227 17:47:19.017669 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/971be4ad-5722-4c6f-8f27-5421e80d1cff-catalog-content\") pod \"community-operators-djgzn\" (UID: \"971be4ad-5722-4c6f-8f27-5421e80d1cff\") " pod="openshift-marketplace/community-operators-djgzn" Feb 27 17:47:19 crc kubenswrapper[4830]: I0227 17:47:19.018284 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/971be4ad-5722-4c6f-8f27-5421e80d1cff-utilities\") pod \"community-operators-djgzn\" (UID: \"971be4ad-5722-4c6f-8f27-5421e80d1cff\") " pod="openshift-marketplace/community-operators-djgzn" Feb 27 17:47:19 crc kubenswrapper[4830]: I0227 17:47:19.018368 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/971be4ad-5722-4c6f-8f27-5421e80d1cff-catalog-content\") pod \"community-operators-djgzn\" (UID: \"971be4ad-5722-4c6f-8f27-5421e80d1cff\") " pod="openshift-marketplace/community-operators-djgzn" Feb 27 17:47:19 crc kubenswrapper[4830]: I0227 17:47:19.041549 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kq7fb\" (UniqueName: \"kubernetes.io/projected/971be4ad-5722-4c6f-8f27-5421e80d1cff-kube-api-access-kq7fb\") pod \"community-operators-djgzn\" (UID: \"971be4ad-5722-4c6f-8f27-5421e80d1cff\") " pod="openshift-marketplace/community-operators-djgzn" Feb 27 17:47:19 crc kubenswrapper[4830]: I0227 17:47:19.207897 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-djgzn" Feb 27 17:47:19 crc kubenswrapper[4830]: I0227 17:47:19.338689 4830 generic.go:334] "Generic (PLEG): container finished" podID="bae2b61b-3081-415b-a231-4994052a20c4" containerID="5c818927d08a5f9938aacf14fdb10ff10c857b47865ab63e07d4f19933ea6710" exitCode=0 Feb 27 17:47:19 crc kubenswrapper[4830]: I0227 17:47:19.338752 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-69a2-account-create-update-r8l2r" event={"ID":"bae2b61b-3081-415b-a231-4994052a20c4","Type":"ContainerDied","Data":"5c818927d08a5f9938aacf14fdb10ff10c857b47865ab63e07d4f19933ea6710"} Feb 27 17:47:19 crc kubenswrapper[4830]: I0227 17:47:19.339041 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-69a2-account-create-update-r8l2r" event={"ID":"bae2b61b-3081-415b-a231-4994052a20c4","Type":"ContainerStarted","Data":"13886193810c40fed387cf828ec6938e733b998bca47816d6ce5f0095783c70b"} Feb 27 17:47:19 crc kubenswrapper[4830]: E0227 17:47:19.764152 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536898-vrwjs" podUID="204eb1af-36ad-4de7-9da7-9a37fefd3026" Feb 27 17:47:19 crc kubenswrapper[4830]: I0227 17:47:19.790371 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-djgzn"] Feb 27 17:47:19 crc kubenswrapper[4830]: I0227 17:47:19.933287 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-persistence-db-create-gtbf9" Feb 27 17:47:20 crc kubenswrapper[4830]: I0227 17:47:20.042217 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c4f7b16-9303-44d5-a45b-a9365add4438-operator-scripts\") pod \"0c4f7b16-9303-44d5-a45b-a9365add4438\" (UID: \"0c4f7b16-9303-44d5-a45b-a9365add4438\") " Feb 27 17:47:20 crc kubenswrapper[4830]: I0227 17:47:20.042259 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jfdmr\" (UniqueName: \"kubernetes.io/projected/0c4f7b16-9303-44d5-a45b-a9365add4438-kube-api-access-jfdmr\") pod \"0c4f7b16-9303-44d5-a45b-a9365add4438\" (UID: \"0c4f7b16-9303-44d5-a45b-a9365add4438\") " Feb 27 17:47:20 crc kubenswrapper[4830]: I0227 17:47:20.044733 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c4f7b16-9303-44d5-a45b-a9365add4438-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0c4f7b16-9303-44d5-a45b-a9365add4438" (UID: "0c4f7b16-9303-44d5-a45b-a9365add4438"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:47:20 crc kubenswrapper[4830]: I0227 17:47:20.049013 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c4f7b16-9303-44d5-a45b-a9365add4438-kube-api-access-jfdmr" (OuterVolumeSpecName: "kube-api-access-jfdmr") pod "0c4f7b16-9303-44d5-a45b-a9365add4438" (UID: "0c4f7b16-9303-44d5-a45b-a9365add4438"). InnerVolumeSpecName "kube-api-access-jfdmr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:47:20 crc kubenswrapper[4830]: I0227 17:47:20.144225 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jfdmr\" (UniqueName: \"kubernetes.io/projected/0c4f7b16-9303-44d5-a45b-a9365add4438-kube-api-access-jfdmr\") on node \"crc\" DevicePath \"\"" Feb 27 17:47:20 crc kubenswrapper[4830]: I0227 17:47:20.144280 4830 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c4f7b16-9303-44d5-a45b-a9365add4438-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 17:47:20 crc kubenswrapper[4830]: I0227 17:47:20.350460 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-persistence-db-create-gtbf9" event={"ID":"0c4f7b16-9303-44d5-a45b-a9365add4438","Type":"ContainerDied","Data":"8e5f25c8294b5cd996c22c095b4639af1e7a4eb998721ecec1ad10def83e10d6"} Feb 27 17:47:20 crc kubenswrapper[4830]: I0227 17:47:20.350497 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8e5f25c8294b5cd996c22c095b4639af1e7a4eb998721ecec1ad10def83e10d6" Feb 27 17:47:20 crc kubenswrapper[4830]: I0227 17:47:20.350472 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-persistence-db-create-gtbf9" Feb 27 17:47:20 crc kubenswrapper[4830]: I0227 17:47:20.351906 4830 generic.go:334] "Generic (PLEG): container finished" podID="971be4ad-5722-4c6f-8f27-5421e80d1cff" containerID="8fd0162d9efe549e418be20b4fe1f5c5d5e47648e47d596b900e316f8d8994ce" exitCode=0 Feb 27 17:47:20 crc kubenswrapper[4830]: I0227 17:47:20.352318 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-djgzn" event={"ID":"971be4ad-5722-4c6f-8f27-5421e80d1cff","Type":"ContainerDied","Data":"8fd0162d9efe549e418be20b4fe1f5c5d5e47648e47d596b900e316f8d8994ce"} Feb 27 17:47:20 crc kubenswrapper[4830]: I0227 17:47:20.352345 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-djgzn" event={"ID":"971be4ad-5722-4c6f-8f27-5421e80d1cff","Type":"ContainerStarted","Data":"88515bb056831f8c6432a0f6c892c73e0a6a08e3f93106b1635466a2a9483107"} Feb 27 17:47:20 crc kubenswrapper[4830]: I0227 17:47:20.744659 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-69a2-account-create-update-r8l2r" Feb 27 17:47:20 crc kubenswrapper[4830]: I0227 17:47:20.863993 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jjnn6\" (UniqueName: \"kubernetes.io/projected/bae2b61b-3081-415b-a231-4994052a20c4-kube-api-access-jjnn6\") pod \"bae2b61b-3081-415b-a231-4994052a20c4\" (UID: \"bae2b61b-3081-415b-a231-4994052a20c4\") " Feb 27 17:47:20 crc kubenswrapper[4830]: I0227 17:47:20.864154 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bae2b61b-3081-415b-a231-4994052a20c4-operator-scripts\") pod \"bae2b61b-3081-415b-a231-4994052a20c4\" (UID: \"bae2b61b-3081-415b-a231-4994052a20c4\") " Feb 27 17:47:20 crc kubenswrapper[4830]: I0227 17:47:20.866236 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bae2b61b-3081-415b-a231-4994052a20c4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bae2b61b-3081-415b-a231-4994052a20c4" (UID: "bae2b61b-3081-415b-a231-4994052a20c4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:47:20 crc kubenswrapper[4830]: I0227 17:47:20.868990 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bae2b61b-3081-415b-a231-4994052a20c4-kube-api-access-jjnn6" (OuterVolumeSpecName: "kube-api-access-jjnn6") pod "bae2b61b-3081-415b-a231-4994052a20c4" (UID: "bae2b61b-3081-415b-a231-4994052a20c4"). InnerVolumeSpecName "kube-api-access-jjnn6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:47:20 crc kubenswrapper[4830]: I0227 17:47:20.966533 4830 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bae2b61b-3081-415b-a231-4994052a20c4-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 17:47:20 crc kubenswrapper[4830]: I0227 17:47:20.966575 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jjnn6\" (UniqueName: \"kubernetes.io/projected/bae2b61b-3081-415b-a231-4994052a20c4-kube-api-access-jjnn6\") on node \"crc\" DevicePath \"\"" Feb 27 17:47:21 crc kubenswrapper[4830]: I0227 17:47:21.365005 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-69a2-account-create-update-r8l2r" event={"ID":"bae2b61b-3081-415b-a231-4994052a20c4","Type":"ContainerDied","Data":"13886193810c40fed387cf828ec6938e733b998bca47816d6ce5f0095783c70b"} Feb 27 17:47:21 crc kubenswrapper[4830]: I0227 17:47:21.365064 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="13886193810c40fed387cf828ec6938e733b998bca47816d6ce5f0095783c70b" Feb 27 17:47:21 crc kubenswrapper[4830]: I0227 17:47:21.365072 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-69a2-account-create-update-r8l2r" Feb 27 17:47:21 crc kubenswrapper[4830]: I0227 17:47:21.368286 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-djgzn" event={"ID":"971be4ad-5722-4c6f-8f27-5421e80d1cff","Type":"ContainerStarted","Data":"231cb45d17859b26b95b7e952c172f0d5c3aee085704ca0e523a23a748f28e03"} Feb 27 17:47:21 crc kubenswrapper[4830]: E0227 17:47:21.437264 4830 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-operator-index@sha256=340dbaa786c584e5ffe05a0f79571b9c2fe7d16a1a1fb390e5d83b437d7a1ff3/signature-3: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 27 17:47:21 crc kubenswrapper[4830]: E0227 17:47:21.437461 4830 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t9tjs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-gbcl6_openshift-marketplace(90e915d6-d74a-4f5b-a8da-8f0f2acdda48): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-operator-index@sha256=340dbaa786c584e5ffe05a0f79571b9c2fe7d16a1a1fb390e5d83b437d7a1ff3/signature-3: status 500 (Internal Server Error)" logger="UnhandledError" Feb 27 17:47:21 crc kubenswrapper[4830]: E0227 17:47:21.440031 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading 
signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-operator-index@sha256=340dbaa786c584e5ffe05a0f79571b9c2fe7d16a1a1fb390e5d83b437d7a1ff3/signature-3: status 500 (Internal Server Error)\"" pod="openshift-marketplace/redhat-operators-gbcl6" podUID="90e915d6-d74a-4f5b-a8da-8f0f2acdda48" Feb 27 17:47:22 crc kubenswrapper[4830]: I0227 17:47:22.383121 4830 generic.go:334] "Generic (PLEG): container finished" podID="971be4ad-5722-4c6f-8f27-5421e80d1cff" containerID="231cb45d17859b26b95b7e952c172f0d5c3aee085704ca0e523a23a748f28e03" exitCode=0 Feb 27 17:47:22 crc kubenswrapper[4830]: I0227 17:47:22.383192 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-djgzn" event={"ID":"971be4ad-5722-4c6f-8f27-5421e80d1cff","Type":"ContainerDied","Data":"231cb45d17859b26b95b7e952c172f0d5c3aee085704ca0e523a23a748f28e03"} Feb 27 17:47:23 crc kubenswrapper[4830]: I0227 17:47:23.990164 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-api-dc594bd7f-7cnbx"] Feb 27 17:47:24 crc kubenswrapper[4830]: E0227 17:47:24.014039 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c4f7b16-9303-44d5-a45b-a9365add4438" containerName="mariadb-database-create" Feb 27 17:47:24 crc kubenswrapper[4830]: I0227 17:47:24.014383 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c4f7b16-9303-44d5-a45b-a9365add4438" containerName="mariadb-database-create" Feb 27 17:47:24 crc kubenswrapper[4830]: E0227 17:47:24.014425 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bae2b61b-3081-415b-a231-4994052a20c4" containerName="mariadb-account-create-update" Feb 27 17:47:24 crc kubenswrapper[4830]: I0227 17:47:24.014432 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="bae2b61b-3081-415b-a231-4994052a20c4" containerName="mariadb-account-create-update" Feb 27 17:47:24 crc kubenswrapper[4830]: I0227 17:47:24.014657 4830 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="bae2b61b-3081-415b-a231-4994052a20c4" containerName="mariadb-account-create-update" Feb 27 17:47:24 crc kubenswrapper[4830]: I0227 17:47:24.014675 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c4f7b16-9303-44d5-a45b-a9365add4438" containerName="mariadb-database-create" Feb 27 17:47:24 crc kubenswrapper[4830]: I0227 17:47:24.019654 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-api-dc594bd7f-7cnbx" Feb 27 17:47:24 crc kubenswrapper[4830]: I0227 17:47:24.027643 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-octavia-dockercfg-6dxgd" Feb 27 17:47:24 crc kubenswrapper[4830]: I0227 17:47:24.029692 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-api-scripts" Feb 27 17:47:24 crc kubenswrapper[4830]: I0227 17:47:24.029848 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-api-config-data" Feb 27 17:47:24 crc kubenswrapper[4830]: I0227 17:47:24.036135 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-api-dc594bd7f-7cnbx"] Feb 27 17:47:24 crc kubenswrapper[4830]: I0227 17:47:24.074058 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/27beea35-cf86-4a88-ae9a-1620fd0bc390-config-data-merged\") pod \"octavia-api-dc594bd7f-7cnbx\" (UID: \"27beea35-cf86-4a88-ae9a-1620fd0bc390\") " pod="openstack/octavia-api-dc594bd7f-7cnbx" Feb 27 17:47:24 crc kubenswrapper[4830]: I0227 17:47:24.074361 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/27beea35-cf86-4a88-ae9a-1620fd0bc390-scripts\") pod \"octavia-api-dc594bd7f-7cnbx\" (UID: \"27beea35-cf86-4a88-ae9a-1620fd0bc390\") " pod="openstack/octavia-api-dc594bd7f-7cnbx" Feb 27 17:47:24 crc 
kubenswrapper[4830]: I0227 17:47:24.074473 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"octavia-run\" (UniqueName: \"kubernetes.io/empty-dir/27beea35-cf86-4a88-ae9a-1620fd0bc390-octavia-run\") pod \"octavia-api-dc594bd7f-7cnbx\" (UID: \"27beea35-cf86-4a88-ae9a-1620fd0bc390\") " pod="openstack/octavia-api-dc594bd7f-7cnbx" Feb 27 17:47:24 crc kubenswrapper[4830]: I0227 17:47:24.074849 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27beea35-cf86-4a88-ae9a-1620fd0bc390-combined-ca-bundle\") pod \"octavia-api-dc594bd7f-7cnbx\" (UID: \"27beea35-cf86-4a88-ae9a-1620fd0bc390\") " pod="openstack/octavia-api-dc594bd7f-7cnbx" Feb 27 17:47:24 crc kubenswrapper[4830]: I0227 17:47:24.074993 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27beea35-cf86-4a88-ae9a-1620fd0bc390-config-data\") pod \"octavia-api-dc594bd7f-7cnbx\" (UID: \"27beea35-cf86-4a88-ae9a-1620fd0bc390\") " pod="openstack/octavia-api-dc594bd7f-7cnbx" Feb 27 17:47:24 crc kubenswrapper[4830]: I0227 17:47:24.177923 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/27beea35-cf86-4a88-ae9a-1620fd0bc390-config-data-merged\") pod \"octavia-api-dc594bd7f-7cnbx\" (UID: \"27beea35-cf86-4a88-ae9a-1620fd0bc390\") " pod="openstack/octavia-api-dc594bd7f-7cnbx" Feb 27 17:47:24 crc kubenswrapper[4830]: I0227 17:47:24.178010 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/27beea35-cf86-4a88-ae9a-1620fd0bc390-scripts\") pod \"octavia-api-dc594bd7f-7cnbx\" (UID: \"27beea35-cf86-4a88-ae9a-1620fd0bc390\") " pod="openstack/octavia-api-dc594bd7f-7cnbx" Feb 27 17:47:24 crc kubenswrapper[4830]: I0227 
17:47:24.178048 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"octavia-run\" (UniqueName: \"kubernetes.io/empty-dir/27beea35-cf86-4a88-ae9a-1620fd0bc390-octavia-run\") pod \"octavia-api-dc594bd7f-7cnbx\" (UID: \"27beea35-cf86-4a88-ae9a-1620fd0bc390\") " pod="openstack/octavia-api-dc594bd7f-7cnbx" Feb 27 17:47:24 crc kubenswrapper[4830]: I0227 17:47:24.178110 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27beea35-cf86-4a88-ae9a-1620fd0bc390-combined-ca-bundle\") pod \"octavia-api-dc594bd7f-7cnbx\" (UID: \"27beea35-cf86-4a88-ae9a-1620fd0bc390\") " pod="openstack/octavia-api-dc594bd7f-7cnbx" Feb 27 17:47:24 crc kubenswrapper[4830]: I0227 17:47:24.178135 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27beea35-cf86-4a88-ae9a-1620fd0bc390-config-data\") pod \"octavia-api-dc594bd7f-7cnbx\" (UID: \"27beea35-cf86-4a88-ae9a-1620fd0bc390\") " pod="openstack/octavia-api-dc594bd7f-7cnbx" Feb 27 17:47:24 crc kubenswrapper[4830]: I0227 17:47:24.179116 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"octavia-run\" (UniqueName: \"kubernetes.io/empty-dir/27beea35-cf86-4a88-ae9a-1620fd0bc390-octavia-run\") pod \"octavia-api-dc594bd7f-7cnbx\" (UID: \"27beea35-cf86-4a88-ae9a-1620fd0bc390\") " pod="openstack/octavia-api-dc594bd7f-7cnbx" Feb 27 17:47:24 crc kubenswrapper[4830]: I0227 17:47:24.179147 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/27beea35-cf86-4a88-ae9a-1620fd0bc390-config-data-merged\") pod \"octavia-api-dc594bd7f-7cnbx\" (UID: \"27beea35-cf86-4a88-ae9a-1620fd0bc390\") " pod="openstack/octavia-api-dc594bd7f-7cnbx" Feb 27 17:47:24 crc kubenswrapper[4830]: I0227 17:47:24.186787 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/27beea35-cf86-4a88-ae9a-1620fd0bc390-scripts\") pod \"octavia-api-dc594bd7f-7cnbx\" (UID: \"27beea35-cf86-4a88-ae9a-1620fd0bc390\") " pod="openstack/octavia-api-dc594bd7f-7cnbx" Feb 27 17:47:24 crc kubenswrapper[4830]: I0227 17:47:24.186802 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27beea35-cf86-4a88-ae9a-1620fd0bc390-combined-ca-bundle\") pod \"octavia-api-dc594bd7f-7cnbx\" (UID: \"27beea35-cf86-4a88-ae9a-1620fd0bc390\") " pod="openstack/octavia-api-dc594bd7f-7cnbx" Feb 27 17:47:24 crc kubenswrapper[4830]: I0227 17:47:24.187632 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27beea35-cf86-4a88-ae9a-1620fd0bc390-config-data\") pod \"octavia-api-dc594bd7f-7cnbx\" (UID: \"27beea35-cf86-4a88-ae9a-1620fd0bc390\") " pod="openstack/octavia-api-dc594bd7f-7cnbx" Feb 27 17:47:24 crc kubenswrapper[4830]: I0227 17:47:24.348983 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-api-dc594bd7f-7cnbx" Feb 27 17:47:24 crc kubenswrapper[4830]: I0227 17:47:24.410370 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-djgzn" event={"ID":"971be4ad-5722-4c6f-8f27-5421e80d1cff","Type":"ContainerStarted","Data":"f5fb7a7987fe17bb3ff3bae29b86cb5cb02006ecdb171f880c91fd39d1c0793a"} Feb 27 17:47:24 crc kubenswrapper[4830]: I0227 17:47:24.446752 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-djgzn" podStartSLOduration=2.9978911740000003 podStartE2EDuration="6.446728474s" podCreationTimestamp="2026-02-27 17:47:18 +0000 UTC" firstStartedPulling="2026-02-27 17:47:20.354454262 +0000 UTC m=+6036.443726725" lastFinishedPulling="2026-02-27 17:47:23.803291572 +0000 UTC m=+6039.892564025" observedRunningTime="2026-02-27 17:47:24.439066921 +0000 UTC m=+6040.528339384" watchObservedRunningTime="2026-02-27 17:47:24.446728474 +0000 UTC m=+6040.536000937" Feb 27 17:47:24 crc kubenswrapper[4830]: I0227 17:47:24.960768 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-api-dc594bd7f-7cnbx"] Feb 27 17:47:24 crc kubenswrapper[4830]: W0227 17:47:24.969415 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod27beea35_cf86_4a88_ae9a_1620fd0bc390.slice/crio-0e04845289ad465e4ad5d7d23d4df07c2fa02bd5c017aa1206aca35219ec8c89 WatchSource:0}: Error finding container 0e04845289ad465e4ad5d7d23d4df07c2fa02bd5c017aa1206aca35219ec8c89: Status 404 returned error can't find the container with id 0e04845289ad465e4ad5d7d23d4df07c2fa02bd5c017aa1206aca35219ec8c89 Feb 27 17:47:25 crc kubenswrapper[4830]: I0227 17:47:25.424063 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-dc594bd7f-7cnbx" 
event={"ID":"27beea35-cf86-4a88-ae9a-1620fd0bc390","Type":"ContainerStarted","Data":"0e04845289ad465e4ad5d7d23d4df07c2fa02bd5c017aa1206aca35219ec8c89"} Feb 27 17:47:26 crc kubenswrapper[4830]: I0227 17:47:26.762460 4830 scope.go:117] "RemoveContainer" containerID="0f73af46b765b3d98f3f5e9883b49887ac4f2ac3485e91a001e18c5d351646d8" Feb 27 17:47:26 crc kubenswrapper[4830]: E0227 17:47:26.764341 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 17:47:29 crc kubenswrapper[4830]: I0227 17:47:29.208737 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-djgzn" Feb 27 17:47:29 crc kubenswrapper[4830]: I0227 17:47:29.209117 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-djgzn" Feb 27 17:47:30 crc kubenswrapper[4830]: I0227 17:47:30.279075 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-djgzn" podUID="971be4ad-5722-4c6f-8f27-5421e80d1cff" containerName="registry-server" probeResult="failure" output=< Feb 27 17:47:30 crc kubenswrapper[4830]: timeout: failed to connect service ":50051" within 1s Feb 27 17:47:30 crc kubenswrapper[4830]: > Feb 27 17:47:33 crc kubenswrapper[4830]: E0227 17:47:33.232761 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536898-vrwjs" podUID="204eb1af-36ad-4de7-9da7-9a37fefd3026" Feb 27 
17:47:34 crc kubenswrapper[4830]: I0227 17:47:34.047821 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-b8hbd" podUID="d4881336-2572-4aa9-a0c2-9c46b73b7898" containerName="ovn-controller" probeResult="failure" output=< Feb 27 17:47:34 crc kubenswrapper[4830]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Feb 27 17:47:34 crc kubenswrapper[4830]: > Feb 27 17:47:34 crc kubenswrapper[4830]: I0227 17:47:34.063462 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-rfd9s" Feb 27 17:47:34 crc kubenswrapper[4830]: I0227 17:47:34.067106 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-rfd9s" Feb 27 17:47:34 crc kubenswrapper[4830]: I0227 17:47:34.192180 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-b8hbd-config-5ggmj"] Feb 27 17:47:34 crc kubenswrapper[4830]: I0227 17:47:34.194196 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-b8hbd-config-5ggmj" Feb 27 17:47:34 crc kubenswrapper[4830]: I0227 17:47:34.196373 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Feb 27 17:47:34 crc kubenswrapper[4830]: I0227 17:47:34.202182 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-b8hbd-config-5ggmj"] Feb 27 17:47:34 crc kubenswrapper[4830]: I0227 17:47:34.322445 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k794n\" (UniqueName: \"kubernetes.io/projected/652bbfea-925e-4786-b3f8-703ad494e5e9-kube-api-access-k794n\") pod \"ovn-controller-b8hbd-config-5ggmj\" (UID: \"652bbfea-925e-4786-b3f8-703ad494e5e9\") " pod="openstack/ovn-controller-b8hbd-config-5ggmj" Feb 27 17:47:34 crc kubenswrapper[4830]: I0227 17:47:34.322635 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/652bbfea-925e-4786-b3f8-703ad494e5e9-additional-scripts\") pod \"ovn-controller-b8hbd-config-5ggmj\" (UID: \"652bbfea-925e-4786-b3f8-703ad494e5e9\") " pod="openstack/ovn-controller-b8hbd-config-5ggmj" Feb 27 17:47:34 crc kubenswrapper[4830]: I0227 17:47:34.322703 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/652bbfea-925e-4786-b3f8-703ad494e5e9-scripts\") pod \"ovn-controller-b8hbd-config-5ggmj\" (UID: \"652bbfea-925e-4786-b3f8-703ad494e5e9\") " pod="openstack/ovn-controller-b8hbd-config-5ggmj" Feb 27 17:47:34 crc kubenswrapper[4830]: I0227 17:47:34.322897 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/652bbfea-925e-4786-b3f8-703ad494e5e9-var-run\") pod \"ovn-controller-b8hbd-config-5ggmj\" (UID: 
\"652bbfea-925e-4786-b3f8-703ad494e5e9\") " pod="openstack/ovn-controller-b8hbd-config-5ggmj" Feb 27 17:47:34 crc kubenswrapper[4830]: I0227 17:47:34.323112 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/652bbfea-925e-4786-b3f8-703ad494e5e9-var-run-ovn\") pod \"ovn-controller-b8hbd-config-5ggmj\" (UID: \"652bbfea-925e-4786-b3f8-703ad494e5e9\") " pod="openstack/ovn-controller-b8hbd-config-5ggmj" Feb 27 17:47:34 crc kubenswrapper[4830]: I0227 17:47:34.323362 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/652bbfea-925e-4786-b3f8-703ad494e5e9-var-log-ovn\") pod \"ovn-controller-b8hbd-config-5ggmj\" (UID: \"652bbfea-925e-4786-b3f8-703ad494e5e9\") " pod="openstack/ovn-controller-b8hbd-config-5ggmj" Feb 27 17:47:34 crc kubenswrapper[4830]: I0227 17:47:34.426636 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/652bbfea-925e-4786-b3f8-703ad494e5e9-var-run-ovn\") pod \"ovn-controller-b8hbd-config-5ggmj\" (UID: \"652bbfea-925e-4786-b3f8-703ad494e5e9\") " pod="openstack/ovn-controller-b8hbd-config-5ggmj" Feb 27 17:47:34 crc kubenswrapper[4830]: I0227 17:47:34.427089 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/652bbfea-925e-4786-b3f8-703ad494e5e9-var-run-ovn\") pod \"ovn-controller-b8hbd-config-5ggmj\" (UID: \"652bbfea-925e-4786-b3f8-703ad494e5e9\") " pod="openstack/ovn-controller-b8hbd-config-5ggmj" Feb 27 17:47:34 crc kubenswrapper[4830]: I0227 17:47:34.427243 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/652bbfea-925e-4786-b3f8-703ad494e5e9-var-log-ovn\") pod \"ovn-controller-b8hbd-config-5ggmj\" (UID: 
\"652bbfea-925e-4786-b3f8-703ad494e5e9\") " pod="openstack/ovn-controller-b8hbd-config-5ggmj" Feb 27 17:47:34 crc kubenswrapper[4830]: I0227 17:47:34.427376 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k794n\" (UniqueName: \"kubernetes.io/projected/652bbfea-925e-4786-b3f8-703ad494e5e9-kube-api-access-k794n\") pod \"ovn-controller-b8hbd-config-5ggmj\" (UID: \"652bbfea-925e-4786-b3f8-703ad494e5e9\") " pod="openstack/ovn-controller-b8hbd-config-5ggmj" Feb 27 17:47:34 crc kubenswrapper[4830]: I0227 17:47:34.427411 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/652bbfea-925e-4786-b3f8-703ad494e5e9-var-log-ovn\") pod \"ovn-controller-b8hbd-config-5ggmj\" (UID: \"652bbfea-925e-4786-b3f8-703ad494e5e9\") " pod="openstack/ovn-controller-b8hbd-config-5ggmj" Feb 27 17:47:34 crc kubenswrapper[4830]: I0227 17:47:34.428610 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/652bbfea-925e-4786-b3f8-703ad494e5e9-additional-scripts\") pod \"ovn-controller-b8hbd-config-5ggmj\" (UID: \"652bbfea-925e-4786-b3f8-703ad494e5e9\") " pod="openstack/ovn-controller-b8hbd-config-5ggmj" Feb 27 17:47:34 crc kubenswrapper[4830]: I0227 17:47:34.428661 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/652bbfea-925e-4786-b3f8-703ad494e5e9-additional-scripts\") pod \"ovn-controller-b8hbd-config-5ggmj\" (UID: \"652bbfea-925e-4786-b3f8-703ad494e5e9\") " pod="openstack/ovn-controller-b8hbd-config-5ggmj" Feb 27 17:47:34 crc kubenswrapper[4830]: I0227 17:47:34.428743 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/652bbfea-925e-4786-b3f8-703ad494e5e9-scripts\") pod \"ovn-controller-b8hbd-config-5ggmj\" (UID: 
\"652bbfea-925e-4786-b3f8-703ad494e5e9\") " pod="openstack/ovn-controller-b8hbd-config-5ggmj" Feb 27 17:47:34 crc kubenswrapper[4830]: I0227 17:47:34.431394 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/652bbfea-925e-4786-b3f8-703ad494e5e9-var-run\") pod \"ovn-controller-b8hbd-config-5ggmj\" (UID: \"652bbfea-925e-4786-b3f8-703ad494e5e9\") " pod="openstack/ovn-controller-b8hbd-config-5ggmj" Feb 27 17:47:34 crc kubenswrapper[4830]: I0227 17:47:34.431496 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/652bbfea-925e-4786-b3f8-703ad494e5e9-var-run\") pod \"ovn-controller-b8hbd-config-5ggmj\" (UID: \"652bbfea-925e-4786-b3f8-703ad494e5e9\") " pod="openstack/ovn-controller-b8hbd-config-5ggmj" Feb 27 17:47:34 crc kubenswrapper[4830]: I0227 17:47:34.431527 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/652bbfea-925e-4786-b3f8-703ad494e5e9-scripts\") pod \"ovn-controller-b8hbd-config-5ggmj\" (UID: \"652bbfea-925e-4786-b3f8-703ad494e5e9\") " pod="openstack/ovn-controller-b8hbd-config-5ggmj" Feb 27 17:47:34 crc kubenswrapper[4830]: I0227 17:47:34.448171 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k794n\" (UniqueName: \"kubernetes.io/projected/652bbfea-925e-4786-b3f8-703ad494e5e9-kube-api-access-k794n\") pod \"ovn-controller-b8hbd-config-5ggmj\" (UID: \"652bbfea-925e-4786-b3f8-703ad494e5e9\") " pod="openstack/ovn-controller-b8hbd-config-5ggmj" Feb 27 17:47:34 crc kubenswrapper[4830]: I0227 17:47:34.520764 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-b8hbd-config-5ggmj" Feb 27 17:47:35 crc kubenswrapper[4830]: E0227 17:47:35.819060 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gbcl6" podUID="90e915d6-d74a-4f5b-a8da-8f0f2acdda48" Feb 27 17:47:36 crc kubenswrapper[4830]: I0227 17:47:36.336826 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-b8hbd-config-5ggmj"] Feb 27 17:47:36 crc kubenswrapper[4830]: W0227 17:47:36.392023 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod652bbfea_925e_4786_b3f8_703ad494e5e9.slice/crio-07497849a43756e4c7eafc53b2e272dd13d98a257d43be269a04c2d62f4270be WatchSource:0}: Error finding container 07497849a43756e4c7eafc53b2e272dd13d98a257d43be269a04c2d62f4270be: Status 404 returned error can't find the container with id 07497849a43756e4c7eafc53b2e272dd13d98a257d43be269a04c2d62f4270be Feb 27 17:47:36 crc kubenswrapper[4830]: I0227 17:47:36.586619 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-b8hbd-config-5ggmj" event={"ID":"652bbfea-925e-4786-b3f8-703ad494e5e9","Type":"ContainerStarted","Data":"07497849a43756e4c7eafc53b2e272dd13d98a257d43be269a04c2d62f4270be"} Feb 27 17:47:36 crc kubenswrapper[4830]: I0227 17:47:36.595553 4830 generic.go:334] "Generic (PLEG): container finished" podID="27beea35-cf86-4a88-ae9a-1620fd0bc390" containerID="3dea5e5aa3388a4da5e59f9a1fbddde932993d7f56da18306e0bb0b6d282141d" exitCode=0 Feb 27 17:47:36 crc kubenswrapper[4830]: I0227 17:47:36.595624 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-dc594bd7f-7cnbx" 
event={"ID":"27beea35-cf86-4a88-ae9a-1620fd0bc390","Type":"ContainerDied","Data":"3dea5e5aa3388a4da5e59f9a1fbddde932993d7f56da18306e0bb0b6d282141d"} Feb 27 17:47:37 crc kubenswrapper[4830]: I0227 17:47:37.616557 4830 generic.go:334] "Generic (PLEG): container finished" podID="652bbfea-925e-4786-b3f8-703ad494e5e9" containerID="f5a2f5698acb357deb9a63bacaefa3d2174568baf799501ee12a8ec1846a0f2d" exitCode=0 Feb 27 17:47:37 crc kubenswrapper[4830]: I0227 17:47:37.616654 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-b8hbd-config-5ggmj" event={"ID":"652bbfea-925e-4786-b3f8-703ad494e5e9","Type":"ContainerDied","Data":"f5a2f5698acb357deb9a63bacaefa3d2174568baf799501ee12a8ec1846a0f2d"} Feb 27 17:47:37 crc kubenswrapper[4830]: I0227 17:47:37.630275 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-dc594bd7f-7cnbx" event={"ID":"27beea35-cf86-4a88-ae9a-1620fd0bc390","Type":"ContainerStarted","Data":"7a9112f3dbd9dcf729dfe1a5d50d12cbe869f74bdb9d25dab77f7429d29c3994"} Feb 27 17:47:37 crc kubenswrapper[4830]: I0227 17:47:37.630345 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-dc594bd7f-7cnbx" event={"ID":"27beea35-cf86-4a88-ae9a-1620fd0bc390","Type":"ContainerStarted","Data":"6eb84a9e61b691317b1fe0e7843b3d9e5a346b318aada09bfb703ae1d2791c0d"} Feb 27 17:47:37 crc kubenswrapper[4830]: I0227 17:47:37.630702 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-api-dc594bd7f-7cnbx" Feb 27 17:47:37 crc kubenswrapper[4830]: I0227 17:47:37.630852 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-api-dc594bd7f-7cnbx" Feb 27 17:47:37 crc kubenswrapper[4830]: I0227 17:47:37.699978 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-api-dc594bd7f-7cnbx" podStartSLOduration=3.751730128 podStartE2EDuration="14.699937051s" podCreationTimestamp="2026-02-27 17:47:23 +0000 
UTC" firstStartedPulling="2026-02-27 17:47:24.980936325 +0000 UTC m=+6041.070208778" lastFinishedPulling="2026-02-27 17:47:35.929143238 +0000 UTC m=+6052.018415701" observedRunningTime="2026-02-27 17:47:37.687355951 +0000 UTC m=+6053.776628424" watchObservedRunningTime="2026-02-27 17:47:37.699937051 +0000 UTC m=+6053.789209524" Feb 27 17:47:39 crc kubenswrapper[4830]: I0227 17:47:39.050632 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-b8hbd-config-5ggmj" Feb 27 17:47:39 crc kubenswrapper[4830]: I0227 17:47:39.061036 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-b8hbd" Feb 27 17:47:39 crc kubenswrapper[4830]: I0227 17:47:39.171129 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k794n\" (UniqueName: \"kubernetes.io/projected/652bbfea-925e-4786-b3f8-703ad494e5e9-kube-api-access-k794n\") pod \"652bbfea-925e-4786-b3f8-703ad494e5e9\" (UID: \"652bbfea-925e-4786-b3f8-703ad494e5e9\") " Feb 27 17:47:39 crc kubenswrapper[4830]: I0227 17:47:39.171196 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/652bbfea-925e-4786-b3f8-703ad494e5e9-var-run-ovn\") pod \"652bbfea-925e-4786-b3f8-703ad494e5e9\" (UID: \"652bbfea-925e-4786-b3f8-703ad494e5e9\") " Feb 27 17:47:39 crc kubenswrapper[4830]: I0227 17:47:39.171283 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/652bbfea-925e-4786-b3f8-703ad494e5e9-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "652bbfea-925e-4786-b3f8-703ad494e5e9" (UID: "652bbfea-925e-4786-b3f8-703ad494e5e9"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 17:47:39 crc kubenswrapper[4830]: I0227 17:47:39.172283 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/652bbfea-925e-4786-b3f8-703ad494e5e9-scripts\") pod \"652bbfea-925e-4786-b3f8-703ad494e5e9\" (UID: \"652bbfea-925e-4786-b3f8-703ad494e5e9\") " Feb 27 17:47:39 crc kubenswrapper[4830]: I0227 17:47:39.172352 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/652bbfea-925e-4786-b3f8-703ad494e5e9-var-run\") pod \"652bbfea-925e-4786-b3f8-703ad494e5e9\" (UID: \"652bbfea-925e-4786-b3f8-703ad494e5e9\") " Feb 27 17:47:39 crc kubenswrapper[4830]: I0227 17:47:39.172385 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/652bbfea-925e-4786-b3f8-703ad494e5e9-additional-scripts\") pod \"652bbfea-925e-4786-b3f8-703ad494e5e9\" (UID: \"652bbfea-925e-4786-b3f8-703ad494e5e9\") " Feb 27 17:47:39 crc kubenswrapper[4830]: I0227 17:47:39.172400 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/652bbfea-925e-4786-b3f8-703ad494e5e9-var-log-ovn\") pod \"652bbfea-925e-4786-b3f8-703ad494e5e9\" (UID: \"652bbfea-925e-4786-b3f8-703ad494e5e9\") " Feb 27 17:47:39 crc kubenswrapper[4830]: I0227 17:47:39.172922 4830 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/652bbfea-925e-4786-b3f8-703ad494e5e9-var-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 27 17:47:39 crc kubenswrapper[4830]: I0227 17:47:39.174603 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/652bbfea-925e-4786-b3f8-703ad494e5e9-scripts" (OuterVolumeSpecName: "scripts") pod "652bbfea-925e-4786-b3f8-703ad494e5e9" 
(UID: "652bbfea-925e-4786-b3f8-703ad494e5e9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:47:39 crc kubenswrapper[4830]: I0227 17:47:39.174666 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/652bbfea-925e-4786-b3f8-703ad494e5e9-var-run" (OuterVolumeSpecName: "var-run") pod "652bbfea-925e-4786-b3f8-703ad494e5e9" (UID: "652bbfea-925e-4786-b3f8-703ad494e5e9"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 17:47:39 crc kubenswrapper[4830]: I0227 17:47:39.174831 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/652bbfea-925e-4786-b3f8-703ad494e5e9-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "652bbfea-925e-4786-b3f8-703ad494e5e9" (UID: "652bbfea-925e-4786-b3f8-703ad494e5e9"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 17:47:39 crc kubenswrapper[4830]: I0227 17:47:39.175510 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/652bbfea-925e-4786-b3f8-703ad494e5e9-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "652bbfea-925e-4786-b3f8-703ad494e5e9" (UID: "652bbfea-925e-4786-b3f8-703ad494e5e9"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:47:39 crc kubenswrapper[4830]: I0227 17:47:39.180127 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/652bbfea-925e-4786-b3f8-703ad494e5e9-kube-api-access-k794n" (OuterVolumeSpecName: "kube-api-access-k794n") pod "652bbfea-925e-4786-b3f8-703ad494e5e9" (UID: "652bbfea-925e-4786-b3f8-703ad494e5e9"). InnerVolumeSpecName "kube-api-access-k794n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:47:39 crc kubenswrapper[4830]: I0227 17:47:39.258804 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-djgzn" Feb 27 17:47:39 crc kubenswrapper[4830]: I0227 17:47:39.275466 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k794n\" (UniqueName: \"kubernetes.io/projected/652bbfea-925e-4786-b3f8-703ad494e5e9-kube-api-access-k794n\") on node \"crc\" DevicePath \"\"" Feb 27 17:47:39 crc kubenswrapper[4830]: I0227 17:47:39.275519 4830 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/652bbfea-925e-4786-b3f8-703ad494e5e9-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 17:47:39 crc kubenswrapper[4830]: I0227 17:47:39.275537 4830 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/652bbfea-925e-4786-b3f8-703ad494e5e9-var-run\") on node \"crc\" DevicePath \"\"" Feb 27 17:47:39 crc kubenswrapper[4830]: I0227 17:47:39.275572 4830 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/652bbfea-925e-4786-b3f8-703ad494e5e9-additional-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 17:47:39 crc kubenswrapper[4830]: I0227 17:47:39.275590 4830 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/652bbfea-925e-4786-b3f8-703ad494e5e9-var-log-ovn\") on node \"crc\" DevicePath \"\"" Feb 27 17:47:39 crc kubenswrapper[4830]: I0227 17:47:39.320398 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-djgzn" Feb 27 17:47:39 crc kubenswrapper[4830]: I0227 17:47:39.510857 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-djgzn"] Feb 27 17:47:39 crc kubenswrapper[4830]: I0227 
17:47:39.657261 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-b8hbd-config-5ggmj" Feb 27 17:47:39 crc kubenswrapper[4830]: I0227 17:47:39.657289 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-b8hbd-config-5ggmj" event={"ID":"652bbfea-925e-4786-b3f8-703ad494e5e9","Type":"ContainerDied","Data":"07497849a43756e4c7eafc53b2e272dd13d98a257d43be269a04c2d62f4270be"} Feb 27 17:47:39 crc kubenswrapper[4830]: I0227 17:47:39.657377 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="07497849a43756e4c7eafc53b2e272dd13d98a257d43be269a04c2d62f4270be" Feb 27 17:47:40 crc kubenswrapper[4830]: I0227 17:47:40.176107 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-b8hbd-config-5ggmj"] Feb 27 17:47:40 crc kubenswrapper[4830]: I0227 17:47:40.192868 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-b8hbd-config-5ggmj"] Feb 27 17:47:40 crc kubenswrapper[4830]: I0227 17:47:40.666672 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-djgzn" podUID="971be4ad-5722-4c6f-8f27-5421e80d1cff" containerName="registry-server" containerID="cri-o://f5fb7a7987fe17bb3ff3bae29b86cb5cb02006ecdb171f880c91fd39d1c0793a" gracePeriod=2 Feb 27 17:47:40 crc kubenswrapper[4830]: I0227 17:47:40.762879 4830 scope.go:117] "RemoveContainer" containerID="0f73af46b765b3d98f3f5e9883b49887ac4f2ac3485e91a001e18c5d351646d8" Feb 27 17:47:40 crc kubenswrapper[4830]: I0227 17:47:40.779366 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="652bbfea-925e-4786-b3f8-703ad494e5e9" path="/var/lib/kubelet/pods/652bbfea-925e-4786-b3f8-703ad494e5e9/volumes" Feb 27 17:47:41 crc kubenswrapper[4830]: I0227 17:47:41.197123 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-djgzn" Feb 27 17:47:41 crc kubenswrapper[4830]: I0227 17:47:41.329217 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kq7fb\" (UniqueName: \"kubernetes.io/projected/971be4ad-5722-4c6f-8f27-5421e80d1cff-kube-api-access-kq7fb\") pod \"971be4ad-5722-4c6f-8f27-5421e80d1cff\" (UID: \"971be4ad-5722-4c6f-8f27-5421e80d1cff\") " Feb 27 17:47:41 crc kubenswrapper[4830]: I0227 17:47:41.329273 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/971be4ad-5722-4c6f-8f27-5421e80d1cff-utilities\") pod \"971be4ad-5722-4c6f-8f27-5421e80d1cff\" (UID: \"971be4ad-5722-4c6f-8f27-5421e80d1cff\") " Feb 27 17:47:41 crc kubenswrapper[4830]: I0227 17:47:41.329508 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/971be4ad-5722-4c6f-8f27-5421e80d1cff-catalog-content\") pod \"971be4ad-5722-4c6f-8f27-5421e80d1cff\" (UID: \"971be4ad-5722-4c6f-8f27-5421e80d1cff\") " Feb 27 17:47:41 crc kubenswrapper[4830]: I0227 17:47:41.330712 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/971be4ad-5722-4c6f-8f27-5421e80d1cff-utilities" (OuterVolumeSpecName: "utilities") pod "971be4ad-5722-4c6f-8f27-5421e80d1cff" (UID: "971be4ad-5722-4c6f-8f27-5421e80d1cff"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 17:47:41 crc kubenswrapper[4830]: I0227 17:47:41.337680 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/971be4ad-5722-4c6f-8f27-5421e80d1cff-kube-api-access-kq7fb" (OuterVolumeSpecName: "kube-api-access-kq7fb") pod "971be4ad-5722-4c6f-8f27-5421e80d1cff" (UID: "971be4ad-5722-4c6f-8f27-5421e80d1cff"). InnerVolumeSpecName "kube-api-access-kq7fb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:47:41 crc kubenswrapper[4830]: I0227 17:47:41.373638 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/971be4ad-5722-4c6f-8f27-5421e80d1cff-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "971be4ad-5722-4c6f-8f27-5421e80d1cff" (UID: "971be4ad-5722-4c6f-8f27-5421e80d1cff"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 17:47:41 crc kubenswrapper[4830]: I0227 17:47:41.431932 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kq7fb\" (UniqueName: \"kubernetes.io/projected/971be4ad-5722-4c6f-8f27-5421e80d1cff-kube-api-access-kq7fb\") on node \"crc\" DevicePath \"\"" Feb 27 17:47:41 crc kubenswrapper[4830]: I0227 17:47:41.431977 4830 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/971be4ad-5722-4c6f-8f27-5421e80d1cff-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 17:47:41 crc kubenswrapper[4830]: I0227 17:47:41.431987 4830 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/971be4ad-5722-4c6f-8f27-5421e80d1cff-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 17:47:41 crc kubenswrapper[4830]: I0227 17:47:41.692800 4830 generic.go:334] "Generic (PLEG): container finished" podID="971be4ad-5722-4c6f-8f27-5421e80d1cff" containerID="f5fb7a7987fe17bb3ff3bae29b86cb5cb02006ecdb171f880c91fd39d1c0793a" exitCode=0 Feb 27 17:47:41 crc kubenswrapper[4830]: I0227 17:47:41.693299 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-djgzn" event={"ID":"971be4ad-5722-4c6f-8f27-5421e80d1cff","Type":"ContainerDied","Data":"f5fb7a7987fe17bb3ff3bae29b86cb5cb02006ecdb171f880c91fd39d1c0793a"} Feb 27 17:47:41 crc kubenswrapper[4830]: I0227 17:47:41.693347 4830 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-djgzn" event={"ID":"971be4ad-5722-4c6f-8f27-5421e80d1cff","Type":"ContainerDied","Data":"88515bb056831f8c6432a0f6c892c73e0a6a08e3f93106b1635466a2a9483107"}
Feb 27 17:47:41 crc kubenswrapper[4830]: I0227 17:47:41.693376 4830 scope.go:117] "RemoveContainer" containerID="f5fb7a7987fe17bb3ff3bae29b86cb5cb02006ecdb171f880c91fd39d1c0793a"
Feb 27 17:47:41 crc kubenswrapper[4830]: I0227 17:47:41.693576 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-djgzn"
Feb 27 17:47:41 crc kubenswrapper[4830]: I0227 17:47:41.703110 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" event={"ID":"00d6b7ce-4757-4275-8345-60c1b546ce25","Type":"ContainerStarted","Data":"1edc2346b55575fd27d28000f5321fa0e167abd0b9733373b1ab9e03d2bd8d16"}
Feb 27 17:47:41 crc kubenswrapper[4830]: I0227 17:47:41.760107 4830 scope.go:117] "RemoveContainer" containerID="231cb45d17859b26b95b7e952c172f0d5c3aee085704ca0e523a23a748f28e03"
Feb 27 17:47:41 crc kubenswrapper[4830]: I0227 17:47:41.779222 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-djgzn"]
Feb 27 17:47:41 crc kubenswrapper[4830]: I0227 17:47:41.787752 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-djgzn"]
Feb 27 17:47:41 crc kubenswrapper[4830]: I0227 17:47:41.799937 4830 scope.go:117] "RemoveContainer" containerID="8fd0162d9efe549e418be20b4fe1f5c5d5e47648e47d596b900e316f8d8994ce"
Feb 27 17:47:41 crc kubenswrapper[4830]: I0227 17:47:41.855978 4830 scope.go:117] "RemoveContainer" containerID="f5fb7a7987fe17bb3ff3bae29b86cb5cb02006ecdb171f880c91fd39d1c0793a"
Feb 27 17:47:41 crc kubenswrapper[4830]: E0227 17:47:41.856659 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f5fb7a7987fe17bb3ff3bae29b86cb5cb02006ecdb171f880c91fd39d1c0793a\": container with ID starting with f5fb7a7987fe17bb3ff3bae29b86cb5cb02006ecdb171f880c91fd39d1c0793a not found: ID does not exist" containerID="f5fb7a7987fe17bb3ff3bae29b86cb5cb02006ecdb171f880c91fd39d1c0793a"
Feb 27 17:47:41 crc kubenswrapper[4830]: I0227 17:47:41.856724 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f5fb7a7987fe17bb3ff3bae29b86cb5cb02006ecdb171f880c91fd39d1c0793a"} err="failed to get container status \"f5fb7a7987fe17bb3ff3bae29b86cb5cb02006ecdb171f880c91fd39d1c0793a\": rpc error: code = NotFound desc = could not find container \"f5fb7a7987fe17bb3ff3bae29b86cb5cb02006ecdb171f880c91fd39d1c0793a\": container with ID starting with f5fb7a7987fe17bb3ff3bae29b86cb5cb02006ecdb171f880c91fd39d1c0793a not found: ID does not exist"
Feb 27 17:47:41 crc kubenswrapper[4830]: I0227 17:47:41.856752 4830 scope.go:117] "RemoveContainer" containerID="231cb45d17859b26b95b7e952c172f0d5c3aee085704ca0e523a23a748f28e03"
Feb 27 17:47:41 crc kubenswrapper[4830]: E0227 17:47:41.857293 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"231cb45d17859b26b95b7e952c172f0d5c3aee085704ca0e523a23a748f28e03\": container with ID starting with 231cb45d17859b26b95b7e952c172f0d5c3aee085704ca0e523a23a748f28e03 not found: ID does not exist" containerID="231cb45d17859b26b95b7e952c172f0d5c3aee085704ca0e523a23a748f28e03"
Feb 27 17:47:41 crc kubenswrapper[4830]: I0227 17:47:41.857324 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"231cb45d17859b26b95b7e952c172f0d5c3aee085704ca0e523a23a748f28e03"} err="failed to get container status \"231cb45d17859b26b95b7e952c172f0d5c3aee085704ca0e523a23a748f28e03\": rpc error: code = NotFound desc = could not find container \"231cb45d17859b26b95b7e952c172f0d5c3aee085704ca0e523a23a748f28e03\": container with ID starting with 231cb45d17859b26b95b7e952c172f0d5c3aee085704ca0e523a23a748f28e03 not found: ID does not exist"
Feb 27 17:47:41 crc kubenswrapper[4830]: I0227 17:47:41.857347 4830 scope.go:117] "RemoveContainer" containerID="8fd0162d9efe549e418be20b4fe1f5c5d5e47648e47d596b900e316f8d8994ce"
Feb 27 17:47:41 crc kubenswrapper[4830]: E0227 17:47:41.858308 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8fd0162d9efe549e418be20b4fe1f5c5d5e47648e47d596b900e316f8d8994ce\": container with ID starting with 8fd0162d9efe549e418be20b4fe1f5c5d5e47648e47d596b900e316f8d8994ce not found: ID does not exist" containerID="8fd0162d9efe549e418be20b4fe1f5c5d5e47648e47d596b900e316f8d8994ce"
Feb 27 17:47:41 crc kubenswrapper[4830]: I0227 17:47:41.858334 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8fd0162d9efe549e418be20b4fe1f5c5d5e47648e47d596b900e316f8d8994ce"} err="failed to get container status \"8fd0162d9efe549e418be20b4fe1f5c5d5e47648e47d596b900e316f8d8994ce\": rpc error: code = NotFound desc = could not find container \"8fd0162d9efe549e418be20b4fe1f5c5d5e47648e47d596b900e316f8d8994ce\": container with ID starting with 8fd0162d9efe549e418be20b4fe1f5c5d5e47648e47d596b900e316f8d8994ce not found: ID does not exist"
Feb 27 17:47:42 crc kubenswrapper[4830]: I0227 17:47:42.779918 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="971be4ad-5722-4c6f-8f27-5421e80d1cff" path="/var/lib/kubelet/pods/971be4ad-5722-4c6f-8f27-5421e80d1cff/volumes"
Feb 27 17:47:43 crc kubenswrapper[4830]: E0227 17:47:43.766330 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536898-vrwjs" podUID="204eb1af-36ad-4de7-9da7-9a37fefd3026"
Feb 27 17:47:46 crc kubenswrapper[4830]: E0227 17:47:46.766271 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gbcl6" podUID="90e915d6-d74a-4f5b-a8da-8f0f2acdda48"
Feb 27 17:47:48 crc kubenswrapper[4830]: I0227 17:47:48.454234 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-rsyslog-xhmtb"]
Feb 27 17:47:48 crc kubenswrapper[4830]: E0227 17:47:48.456058 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="971be4ad-5722-4c6f-8f27-5421e80d1cff" containerName="extract-utilities"
Feb 27 17:47:48 crc kubenswrapper[4830]: I0227 17:47:48.456188 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="971be4ad-5722-4c6f-8f27-5421e80d1cff" containerName="extract-utilities"
Feb 27 17:47:48 crc kubenswrapper[4830]: E0227 17:47:48.456277 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="971be4ad-5722-4c6f-8f27-5421e80d1cff" containerName="extract-content"
Feb 27 17:47:48 crc kubenswrapper[4830]: I0227 17:47:48.456359 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="971be4ad-5722-4c6f-8f27-5421e80d1cff" containerName="extract-content"
Feb 27 17:47:48 crc kubenswrapper[4830]: E0227 17:47:48.456462 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="971be4ad-5722-4c6f-8f27-5421e80d1cff" containerName="registry-server"
Feb 27 17:47:48 crc kubenswrapper[4830]: I0227 17:47:48.456543 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="971be4ad-5722-4c6f-8f27-5421e80d1cff" containerName="registry-server"
Feb 27 17:47:48 crc kubenswrapper[4830]: E0227 17:47:48.456625 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="652bbfea-925e-4786-b3f8-703ad494e5e9" containerName="ovn-config"
Feb 27 17:47:48 crc kubenswrapper[4830]: I0227 17:47:48.456705 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="652bbfea-925e-4786-b3f8-703ad494e5e9" containerName="ovn-config"
Feb 27 17:47:48 crc kubenswrapper[4830]: I0227 17:47:48.457025 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="652bbfea-925e-4786-b3f8-703ad494e5e9" containerName="ovn-config"
Feb 27 17:47:48 crc kubenswrapper[4830]: I0227 17:47:48.457135 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="971be4ad-5722-4c6f-8f27-5421e80d1cff" containerName="registry-server"
Feb 27 17:47:48 crc kubenswrapper[4830]: I0227 17:47:48.458476 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-rsyslog-xhmtb"
Feb 27 17:47:48 crc kubenswrapper[4830]: I0227 17:47:48.461297 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-rsyslog-config-data"
Feb 27 17:47:48 crc kubenswrapper[4830]: I0227 17:47:48.461521 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"octavia-hmport-map"
Feb 27 17:47:48 crc kubenswrapper[4830]: I0227 17:47:48.461824 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-rsyslog-scripts"
Feb 27 17:47:48 crc kubenswrapper[4830]: I0227 17:47:48.476076 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-rsyslog-xhmtb"]
Feb 27 17:47:48 crc kubenswrapper[4830]: I0227 17:47:48.595544 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/a5ea5263-f3d8-40bf-9d4f-66afaad4eeec-config-data-merged\") pod \"octavia-rsyslog-xhmtb\" (UID: \"a5ea5263-f3d8-40bf-9d4f-66afaad4eeec\") " pod="openstack/octavia-rsyslog-xhmtb"
Feb 27 17:47:48 crc kubenswrapper[4830]: I0227 17:47:48.595735 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5ea5263-f3d8-40bf-9d4f-66afaad4eeec-config-data\") pod \"octavia-rsyslog-xhmtb\" (UID: \"a5ea5263-f3d8-40bf-9d4f-66afaad4eeec\") " pod="openstack/octavia-rsyslog-xhmtb"
Feb 27 17:47:48 crc kubenswrapper[4830]: I0227 17:47:48.595778 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/a5ea5263-f3d8-40bf-9d4f-66afaad4eeec-hm-ports\") pod \"octavia-rsyslog-xhmtb\" (UID: \"a5ea5263-f3d8-40bf-9d4f-66afaad4eeec\") " pod="openstack/octavia-rsyslog-xhmtb"
Feb 27 17:47:48 crc kubenswrapper[4830]: I0227 17:47:48.595838 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a5ea5263-f3d8-40bf-9d4f-66afaad4eeec-scripts\") pod \"octavia-rsyslog-xhmtb\" (UID: \"a5ea5263-f3d8-40bf-9d4f-66afaad4eeec\") " pod="openstack/octavia-rsyslog-xhmtb"
Feb 27 17:47:48 crc kubenswrapper[4830]: I0227 17:47:48.697384 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/a5ea5263-f3d8-40bf-9d4f-66afaad4eeec-hm-ports\") pod \"octavia-rsyslog-xhmtb\" (UID: \"a5ea5263-f3d8-40bf-9d4f-66afaad4eeec\") " pod="openstack/octavia-rsyslog-xhmtb"
Feb 27 17:47:48 crc kubenswrapper[4830]: I0227 17:47:48.697465 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a5ea5263-f3d8-40bf-9d4f-66afaad4eeec-scripts\") pod \"octavia-rsyslog-xhmtb\" (UID: \"a5ea5263-f3d8-40bf-9d4f-66afaad4eeec\") " pod="openstack/octavia-rsyslog-xhmtb"
Feb 27 17:47:48 crc kubenswrapper[4830]: I0227 17:47:48.697519 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/a5ea5263-f3d8-40bf-9d4f-66afaad4eeec-config-data-merged\") pod \"octavia-rsyslog-xhmtb\" (UID: \"a5ea5263-f3d8-40bf-9d4f-66afaad4eeec\") " pod="openstack/octavia-rsyslog-xhmtb"
Feb 27 17:47:48 crc kubenswrapper[4830]: I0227 17:47:48.697618 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5ea5263-f3d8-40bf-9d4f-66afaad4eeec-config-data\") pod \"octavia-rsyslog-xhmtb\" (UID: \"a5ea5263-f3d8-40bf-9d4f-66afaad4eeec\") " pod="openstack/octavia-rsyslog-xhmtb"
Feb 27 17:47:48 crc kubenswrapper[4830]: I0227 17:47:48.699321 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/a5ea5263-f3d8-40bf-9d4f-66afaad4eeec-config-data-merged\") pod \"octavia-rsyslog-xhmtb\" (UID: \"a5ea5263-f3d8-40bf-9d4f-66afaad4eeec\") " pod="openstack/octavia-rsyslog-xhmtb"
Feb 27 17:47:48 crc kubenswrapper[4830]: I0227 17:47:48.700808 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/a5ea5263-f3d8-40bf-9d4f-66afaad4eeec-hm-ports\") pod \"octavia-rsyslog-xhmtb\" (UID: \"a5ea5263-f3d8-40bf-9d4f-66afaad4eeec\") " pod="openstack/octavia-rsyslog-xhmtb"
Feb 27 17:47:48 crc kubenswrapper[4830]: I0227 17:47:48.703398 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5ea5263-f3d8-40bf-9d4f-66afaad4eeec-config-data\") pod \"octavia-rsyslog-xhmtb\" (UID: \"a5ea5263-f3d8-40bf-9d4f-66afaad4eeec\") " pod="openstack/octavia-rsyslog-xhmtb"
Feb 27 17:47:48 crc kubenswrapper[4830]: I0227 17:47:48.704657 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a5ea5263-f3d8-40bf-9d4f-66afaad4eeec-scripts\") pod \"octavia-rsyslog-xhmtb\" (UID: \"a5ea5263-f3d8-40bf-9d4f-66afaad4eeec\") " pod="openstack/octavia-rsyslog-xhmtb"
Feb 27 17:47:48 crc kubenswrapper[4830]: I0227 17:47:48.811227 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-rsyslog-xhmtb"
Feb 27 17:47:49 crc kubenswrapper[4830]: I0227 17:47:49.130643 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-image-upload-59f8cff499-8m7cr"]
Feb 27 17:47:49 crc kubenswrapper[4830]: I0227 17:47:49.133471 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-image-upload-59f8cff499-8m7cr"
Feb 27 17:47:49 crc kubenswrapper[4830]: I0227 17:47:49.140497 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-config-data"
Feb 27 17:47:49 crc kubenswrapper[4830]: I0227 17:47:49.148597 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-image-upload-59f8cff499-8m7cr"]
Feb 27 17:47:49 crc kubenswrapper[4830]: I0227 17:47:49.213350 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/fd025d27-c829-4a6f-a7c5-7399538b0872-amphora-image\") pod \"octavia-image-upload-59f8cff499-8m7cr\" (UID: \"fd025d27-c829-4a6f-a7c5-7399538b0872\") " pod="openstack/octavia-image-upload-59f8cff499-8m7cr"
Feb 27 17:47:49 crc kubenswrapper[4830]: I0227 17:47:49.213439 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/fd025d27-c829-4a6f-a7c5-7399538b0872-httpd-config\") pod \"octavia-image-upload-59f8cff499-8m7cr\" (UID: \"fd025d27-c829-4a6f-a7c5-7399538b0872\") " pod="openstack/octavia-image-upload-59f8cff499-8m7cr"
Feb 27 17:47:49 crc kubenswrapper[4830]: I0227 17:47:49.315404 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/fd025d27-c829-4a6f-a7c5-7399538b0872-amphora-image\") pod \"octavia-image-upload-59f8cff499-8m7cr\" (UID: \"fd025d27-c829-4a6f-a7c5-7399538b0872\") " pod="openstack/octavia-image-upload-59f8cff499-8m7cr"
Feb 27 17:47:49 crc kubenswrapper[4830]: I0227 17:47:49.315452 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/fd025d27-c829-4a6f-a7c5-7399538b0872-httpd-config\") pod \"octavia-image-upload-59f8cff499-8m7cr\" (UID: \"fd025d27-c829-4a6f-a7c5-7399538b0872\") " pod="openstack/octavia-image-upload-59f8cff499-8m7cr"
Feb 27 17:47:49 crc kubenswrapper[4830]: I0227 17:47:49.317583 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/fd025d27-c829-4a6f-a7c5-7399538b0872-amphora-image\") pod \"octavia-image-upload-59f8cff499-8m7cr\" (UID: \"fd025d27-c829-4a6f-a7c5-7399538b0872\") " pod="openstack/octavia-image-upload-59f8cff499-8m7cr"
Feb 27 17:47:49 crc kubenswrapper[4830]: I0227 17:47:49.321158 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/fd025d27-c829-4a6f-a7c5-7399538b0872-httpd-config\") pod \"octavia-image-upload-59f8cff499-8m7cr\" (UID: \"fd025d27-c829-4a6f-a7c5-7399538b0872\") " pod="openstack/octavia-image-upload-59f8cff499-8m7cr"
Feb 27 17:47:49 crc kubenswrapper[4830]: I0227 17:47:49.431021 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-rsyslog-xhmtb"]
Feb 27 17:47:49 crc kubenswrapper[4830]: I0227 17:47:49.462260 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-image-upload-59f8cff499-8m7cr"
Feb 27 17:47:49 crc kubenswrapper[4830]: I0227 17:47:49.545845 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-rsyslog-xhmtb"]
Feb 27 17:47:49 crc kubenswrapper[4830]: I0227 17:47:49.894007 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-rsyslog-xhmtb" event={"ID":"a5ea5263-f3d8-40bf-9d4f-66afaad4eeec","Type":"ContainerStarted","Data":"d41c0010abfba684f21314769998073197220754c0c679d60b9bf07a905fbef7"}
Feb 27 17:47:49 crc kubenswrapper[4830]: I0227 17:47:49.910468 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-db-sync-zkk47"]
Feb 27 17:47:49 crc kubenswrapper[4830]: I0227 17:47:49.912626 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-db-sync-zkk47"
Feb 27 17:47:49 crc kubenswrapper[4830]: I0227 17:47:49.920999 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-scripts"
Feb 27 17:47:49 crc kubenswrapper[4830]: I0227 17:47:49.924031 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-db-sync-zkk47"]
Feb 27 17:47:49 crc kubenswrapper[4830]: I0227 17:47:49.967066 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-image-upload-59f8cff499-8m7cr"]
Feb 27 17:47:49 crc kubenswrapper[4830]: W0227 17:47:49.969157 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfd025d27_c829_4a6f_a7c5_7399538b0872.slice/crio-7a2aaae1e22c41707865fa1f3043606f243fb8899deec2ccb1c5f6d128b630c3 WatchSource:0}: Error finding container 7a2aaae1e22c41707865fa1f3043606f243fb8899deec2ccb1c5f6d128b630c3: Status 404 returned error can't find the container with id 7a2aaae1e22c41707865fa1f3043606f243fb8899deec2ccb1c5f6d128b630c3
Feb 27 17:47:50 crc kubenswrapper[4830]: I0227 17:47:50.029604 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0aaaab7f-77d8-4a19-acef-c47cb951f5b0-combined-ca-bundle\") pod \"octavia-db-sync-zkk47\" (UID: \"0aaaab7f-77d8-4a19-acef-c47cb951f5b0\") " pod="openstack/octavia-db-sync-zkk47"
Feb 27 17:47:50 crc kubenswrapper[4830]: I0227 17:47:50.029777 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0aaaab7f-77d8-4a19-acef-c47cb951f5b0-scripts\") pod \"octavia-db-sync-zkk47\" (UID: \"0aaaab7f-77d8-4a19-acef-c47cb951f5b0\") " pod="openstack/octavia-db-sync-zkk47"
Feb 27 17:47:50 crc kubenswrapper[4830]: I0227 17:47:50.029867 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/0aaaab7f-77d8-4a19-acef-c47cb951f5b0-config-data-merged\") pod \"octavia-db-sync-zkk47\" (UID: \"0aaaab7f-77d8-4a19-acef-c47cb951f5b0\") " pod="openstack/octavia-db-sync-zkk47"
Feb 27 17:47:50 crc kubenswrapper[4830]: I0227 17:47:50.029907 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0aaaab7f-77d8-4a19-acef-c47cb951f5b0-config-data\") pod \"octavia-db-sync-zkk47\" (UID: \"0aaaab7f-77d8-4a19-acef-c47cb951f5b0\") " pod="openstack/octavia-db-sync-zkk47"
Feb 27 17:47:50 crc kubenswrapper[4830]: I0227 17:47:50.132746 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0aaaab7f-77d8-4a19-acef-c47cb951f5b0-scripts\") pod \"octavia-db-sync-zkk47\" (UID: \"0aaaab7f-77d8-4a19-acef-c47cb951f5b0\") " pod="openstack/octavia-db-sync-zkk47"
Feb 27 17:47:50 crc kubenswrapper[4830]: I0227 17:47:50.133750 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/0aaaab7f-77d8-4a19-acef-c47cb951f5b0-config-data-merged\") pod \"octavia-db-sync-zkk47\" (UID: \"0aaaab7f-77d8-4a19-acef-c47cb951f5b0\") " pod="openstack/octavia-db-sync-zkk47"
Feb 27 17:47:50 crc kubenswrapper[4830]: I0227 17:47:50.133924 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0aaaab7f-77d8-4a19-acef-c47cb951f5b0-config-data\") pod \"octavia-db-sync-zkk47\" (UID: \"0aaaab7f-77d8-4a19-acef-c47cb951f5b0\") " pod="openstack/octavia-db-sync-zkk47"
Feb 27 17:47:50 crc kubenswrapper[4830]: I0227 17:47:50.134158 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0aaaab7f-77d8-4a19-acef-c47cb951f5b0-combined-ca-bundle\") pod \"octavia-db-sync-zkk47\" (UID: \"0aaaab7f-77d8-4a19-acef-c47cb951f5b0\") " pod="openstack/octavia-db-sync-zkk47"
Feb 27 17:47:50 crc kubenswrapper[4830]: I0227 17:47:50.134231 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/0aaaab7f-77d8-4a19-acef-c47cb951f5b0-config-data-merged\") pod \"octavia-db-sync-zkk47\" (UID: \"0aaaab7f-77d8-4a19-acef-c47cb951f5b0\") " pod="openstack/octavia-db-sync-zkk47"
Feb 27 17:47:50 crc kubenswrapper[4830]: I0227 17:47:50.140273 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0aaaab7f-77d8-4a19-acef-c47cb951f5b0-scripts\") pod \"octavia-db-sync-zkk47\" (UID: \"0aaaab7f-77d8-4a19-acef-c47cb951f5b0\") " pod="openstack/octavia-db-sync-zkk47"
Feb 27 17:47:50 crc kubenswrapper[4830]: I0227 17:47:50.140395 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0aaaab7f-77d8-4a19-acef-c47cb951f5b0-combined-ca-bundle\") pod \"octavia-db-sync-zkk47\" (UID: \"0aaaab7f-77d8-4a19-acef-c47cb951f5b0\") " pod="openstack/octavia-db-sync-zkk47"
Feb 27 17:47:50 crc kubenswrapper[4830]: I0227 17:47:50.153786 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0aaaab7f-77d8-4a19-acef-c47cb951f5b0-config-data\") pod \"octavia-db-sync-zkk47\" (UID: \"0aaaab7f-77d8-4a19-acef-c47cb951f5b0\") " pod="openstack/octavia-db-sync-zkk47"
Feb 27 17:47:50 crc kubenswrapper[4830]: I0227 17:47:50.236166 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-db-sync-zkk47"
Feb 27 17:47:50 crc kubenswrapper[4830]: I0227 17:47:50.692090 4830 scope.go:117] "RemoveContainer" containerID="6134fcedb998b9c4741d590a9737112edccecbfc5ea4fffb7c1568515daf569c"
Feb 27 17:47:50 crc kubenswrapper[4830]: I0227 17:47:50.786549 4830 scope.go:117] "RemoveContainer" containerID="da5c9e4f1a7ad40a38bab01079f48f00c8d8cba892da8cc746dfeb95b8010427"
Feb 27 17:47:50 crc kubenswrapper[4830]: I0227 17:47:50.790407 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-db-sync-zkk47"]
Feb 27 17:47:50 crc kubenswrapper[4830]: I0227 17:47:50.872496 4830 scope.go:117] "RemoveContainer" containerID="5e0f201ca150662efb93a94f4268c53997cdc9dd0dcb59d1c7c4c1cc51fb5617"
Feb 27 17:47:50 crc kubenswrapper[4830]: I0227 17:47:50.905652 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-59f8cff499-8m7cr" event={"ID":"fd025d27-c829-4a6f-a7c5-7399538b0872","Type":"ContainerStarted","Data":"7a2aaae1e22c41707865fa1f3043606f243fb8899deec2ccb1c5f6d128b630c3"}
Feb 27 17:47:50 crc kubenswrapper[4830]: I0227 17:47:50.914290 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-sync-zkk47" event={"ID":"0aaaab7f-77d8-4a19-acef-c47cb951f5b0","Type":"ContainerStarted","Data":"fc2d95a66ad9b27dea20766fced40834727aa86b42b829412a9975bee3fcb384"}
Feb 27 17:47:50 crc kubenswrapper[4830]: I0227 17:47:50.929024 4830 scope.go:117] "RemoveContainer" containerID="eba1901f4ba5b5c8ed7f3d84d247dbbf6cf8573c4e266775b26c5dd56d91bf8b"
Feb 27 17:47:50 crc kubenswrapper[4830]: I0227 17:47:50.957230 4830 scope.go:117] "RemoveContainer" containerID="b7f5bff39a429923be48a944cfda90a45f2e8ee3852b5b9470ef72543910087f"
Feb 27 17:47:50 crc kubenswrapper[4830]: I0227 17:47:50.987856 4830 scope.go:117] "RemoveContainer" containerID="dd920ac26ad1978bd8f1d43253f8eee5c730be43c17d1952cae24a200f61d468"
Feb 27 17:47:51 crc kubenswrapper[4830]: I0227 17:47:51.931037 4830 generic.go:334] "Generic (PLEG): container finished" podID="0aaaab7f-77d8-4a19-acef-c47cb951f5b0" containerID="b9164331e1f9e092beb4f47a40672a192a69ad41763af0dbc5f231e7646d3c69" exitCode=0
Feb 27 17:47:51 crc kubenswrapper[4830]: I0227 17:47:51.931603 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-sync-zkk47" event={"ID":"0aaaab7f-77d8-4a19-acef-c47cb951f5b0","Type":"ContainerDied","Data":"b9164331e1f9e092beb4f47a40672a192a69ad41763af0dbc5f231e7646d3c69"}
Feb 27 17:47:53 crc kubenswrapper[4830]: I0227 17:47:53.954494 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-rsyslog-xhmtb" event={"ID":"a5ea5263-f3d8-40bf-9d4f-66afaad4eeec","Type":"ContainerStarted","Data":"8187c2890eaab2a2567abd31e1db17d9c94a773771262c38b6326c420dcf293f"}
Feb 27 17:47:53 crc kubenswrapper[4830]: I0227 17:47:53.959151 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-sync-zkk47" event={"ID":"0aaaab7f-77d8-4a19-acef-c47cb951f5b0","Type":"ContainerStarted","Data":"cfdb726e0b3196e3a7d80143d9c41a14ab38fa91c4f91cbbb6fca41c9f303b57"}
Feb 27 17:47:54 crc kubenswrapper[4830]: I0227 17:47:54.017273 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-db-sync-zkk47" podStartSLOduration=5.016323158 podStartE2EDuration="5.016323158s" podCreationTimestamp="2026-02-27 17:47:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 17:47:54.0105584 +0000 UTC m=+6070.099830873" watchObservedRunningTime="2026-02-27 17:47:54.016323158 +0000 UTC m=+6070.105595661"
Feb 27 17:47:55 crc kubenswrapper[4830]: E0227 17:47:55.765054 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536898-vrwjs" podUID="204eb1af-36ad-4de7-9da7-9a37fefd3026"
Feb 27 17:47:55 crc kubenswrapper[4830]: I0227 17:47:55.990693 4830 generic.go:334] "Generic (PLEG): container finished" podID="a5ea5263-f3d8-40bf-9d4f-66afaad4eeec" containerID="8187c2890eaab2a2567abd31e1db17d9c94a773771262c38b6326c420dcf293f" exitCode=0
Feb 27 17:47:55 crc kubenswrapper[4830]: I0227 17:47:55.990876 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-rsyslog-xhmtb" event={"ID":"a5ea5263-f3d8-40bf-9d4f-66afaad4eeec","Type":"ContainerDied","Data":"8187c2890eaab2a2567abd31e1db17d9c94a773771262c38b6326c420dcf293f"}
Feb 27 17:47:56 crc kubenswrapper[4830]: I0227 17:47:56.011350 4830 generic.go:334] "Generic (PLEG): container finished" podID="0aaaab7f-77d8-4a19-acef-c47cb951f5b0" containerID="cfdb726e0b3196e3a7d80143d9c41a14ab38fa91c4f91cbbb6fca41c9f303b57" exitCode=0
Feb 27 17:47:56 crc kubenswrapper[4830]: I0227 17:47:56.011423 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-sync-zkk47" event={"ID":"0aaaab7f-77d8-4a19-acef-c47cb951f5b0","Type":"ContainerDied","Data":"cfdb726e0b3196e3a7d80143d9c41a14ab38fa91c4f91cbbb6fca41c9f303b57"}
Feb 27 17:47:57 crc kubenswrapper[4830]: I0227 17:47:57.426350 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-db-sync-zkk47"
Feb 27 17:47:57 crc kubenswrapper[4830]: I0227 17:47:57.519880 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0aaaab7f-77d8-4a19-acef-c47cb951f5b0-scripts\") pod \"0aaaab7f-77d8-4a19-acef-c47cb951f5b0\" (UID: \"0aaaab7f-77d8-4a19-acef-c47cb951f5b0\") "
Feb 27 17:47:57 crc kubenswrapper[4830]: I0227 17:47:57.519962 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0aaaab7f-77d8-4a19-acef-c47cb951f5b0-combined-ca-bundle\") pod \"0aaaab7f-77d8-4a19-acef-c47cb951f5b0\" (UID: \"0aaaab7f-77d8-4a19-acef-c47cb951f5b0\") "
Feb 27 17:47:57 crc kubenswrapper[4830]: I0227 17:47:57.520038 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0aaaab7f-77d8-4a19-acef-c47cb951f5b0-config-data\") pod \"0aaaab7f-77d8-4a19-acef-c47cb951f5b0\" (UID: \"0aaaab7f-77d8-4a19-acef-c47cb951f5b0\") "
Feb 27 17:47:57 crc kubenswrapper[4830]: I0227 17:47:57.520148 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/0aaaab7f-77d8-4a19-acef-c47cb951f5b0-config-data-merged\") pod \"0aaaab7f-77d8-4a19-acef-c47cb951f5b0\" (UID: \"0aaaab7f-77d8-4a19-acef-c47cb951f5b0\") "
Feb 27 17:47:57 crc kubenswrapper[4830]: I0227 17:47:57.538740 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0aaaab7f-77d8-4a19-acef-c47cb951f5b0-config-data" (OuterVolumeSpecName: "config-data") pod "0aaaab7f-77d8-4a19-acef-c47cb951f5b0" (UID: "0aaaab7f-77d8-4a19-acef-c47cb951f5b0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 17:47:57 crc kubenswrapper[4830]: I0227 17:47:57.538762 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0aaaab7f-77d8-4a19-acef-c47cb951f5b0-scripts" (OuterVolumeSpecName: "scripts") pod "0aaaab7f-77d8-4a19-acef-c47cb951f5b0" (UID: "0aaaab7f-77d8-4a19-acef-c47cb951f5b0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 17:47:57 crc kubenswrapper[4830]: I0227 17:47:57.556452 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0aaaab7f-77d8-4a19-acef-c47cb951f5b0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0aaaab7f-77d8-4a19-acef-c47cb951f5b0" (UID: "0aaaab7f-77d8-4a19-acef-c47cb951f5b0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 17:47:57 crc kubenswrapper[4830]: I0227 17:47:57.559706 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0aaaab7f-77d8-4a19-acef-c47cb951f5b0-config-data-merged" (OuterVolumeSpecName: "config-data-merged") pod "0aaaab7f-77d8-4a19-acef-c47cb951f5b0" (UID: "0aaaab7f-77d8-4a19-acef-c47cb951f5b0"). InnerVolumeSpecName "config-data-merged". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 27 17:47:57 crc kubenswrapper[4830]: I0227 17:47:57.623831 4830 reconciler_common.go:293] "Volume detached for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/0aaaab7f-77d8-4a19-acef-c47cb951f5b0-config-data-merged\") on node \"crc\" DevicePath \"\""
Feb 27 17:47:57 crc kubenswrapper[4830]: I0227 17:47:57.623868 4830 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0aaaab7f-77d8-4a19-acef-c47cb951f5b0-scripts\") on node \"crc\" DevicePath \"\""
Feb 27 17:47:57 crc kubenswrapper[4830]: I0227 17:47:57.623878 4830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0aaaab7f-77d8-4a19-acef-c47cb951f5b0-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 27 17:47:57 crc kubenswrapper[4830]: I0227 17:47:57.623887 4830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0aaaab7f-77d8-4a19-acef-c47cb951f5b0-config-data\") on node \"crc\" DevicePath \"\""
Feb 27 17:47:58 crc kubenswrapper[4830]: I0227 17:47:58.063498 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-sync-zkk47" event={"ID":"0aaaab7f-77d8-4a19-acef-c47cb951f5b0","Type":"ContainerDied","Data":"fc2d95a66ad9b27dea20766fced40834727aa86b42b829412a9975bee3fcb384"}
Feb 27 17:47:58 crc kubenswrapper[4830]: I0227 17:47:58.063547 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fc2d95a66ad9b27dea20766fced40834727aa86b42b829412a9975bee3fcb384"
Feb 27 17:47:58 crc kubenswrapper[4830]: I0227 17:47:58.063611 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-db-sync-zkk47"
Feb 27 17:47:58 crc kubenswrapper[4830]: I0227 17:47:58.138336 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-api-dc594bd7f-7cnbx"
Feb 27 17:47:58 crc kubenswrapper[4830]: I0227 17:47:58.147595 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-api-dc594bd7f-7cnbx"
Feb 27 17:47:58 crc kubenswrapper[4830]: E0227 17:47:58.766136 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gbcl6" podUID="90e915d6-d74a-4f5b-a8da-8f0f2acdda48"
Feb 27 17:47:59 crc kubenswrapper[4830]: I0227 17:47:59.077557 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-rsyslog-xhmtb" event={"ID":"a5ea5263-f3d8-40bf-9d4f-66afaad4eeec","Type":"ContainerStarted","Data":"15e8a32240790e81c9ed36f9c11a6f8897b89c1c52fa3f5ee5ca22d5288a23f7"}
Feb 27 17:47:59 crc kubenswrapper[4830]: I0227 17:47:59.100265 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-rsyslog-xhmtb" podStartSLOduration=2.665207155 podStartE2EDuration="11.100245136s" podCreationTimestamp="2026-02-27 17:47:48 +0000 UTC" firstStartedPulling="2026-02-27 17:47:49.438886848 +0000 UTC m=+6065.528159311" lastFinishedPulling="2026-02-27 17:47:57.873924829 +0000 UTC m=+6073.963197292" observedRunningTime="2026-02-27 17:47:59.099019717 +0000 UTC m=+6075.188292200" watchObservedRunningTime="2026-02-27 17:47:59.100245136 +0000 UTC m=+6075.189517589"
Feb 27 17:48:00 crc kubenswrapper[4830]: I0227 17:48:00.214841 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536908-42g5s"]
Feb 27 17:48:00 crc kubenswrapper[4830]: E0227 17:48:00.216513 4830 cpu_manager.go:410]
"RemoveStaleState: removing container" podUID="0aaaab7f-77d8-4a19-acef-c47cb951f5b0" containerName="octavia-db-sync" Feb 27 17:48:00 crc kubenswrapper[4830]: I0227 17:48:00.216608 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="0aaaab7f-77d8-4a19-acef-c47cb951f5b0" containerName="octavia-db-sync" Feb 27 17:48:00 crc kubenswrapper[4830]: E0227 17:48:00.216683 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0aaaab7f-77d8-4a19-acef-c47cb951f5b0" containerName="init" Feb 27 17:48:00 crc kubenswrapper[4830]: I0227 17:48:00.216746 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="0aaaab7f-77d8-4a19-acef-c47cb951f5b0" containerName="init" Feb 27 17:48:00 crc kubenswrapper[4830]: I0227 17:48:00.217039 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="0aaaab7f-77d8-4a19-acef-c47cb951f5b0" containerName="octavia-db-sync" Feb 27 17:48:00 crc kubenswrapper[4830]: I0227 17:48:00.217763 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536908-42g5s" Feb 27 17:48:00 crc kubenswrapper[4830]: I0227 17:48:00.227576 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536908-42g5s"] Feb 27 17:48:00 crc kubenswrapper[4830]: I0227 17:48:00.291104 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58p4h\" (UniqueName: \"kubernetes.io/projected/dd747a6d-ccf7-41bd-b8d8-b7480d6d950e-kube-api-access-58p4h\") pod \"auto-csr-approver-29536908-42g5s\" (UID: \"dd747a6d-ccf7-41bd-b8d8-b7480d6d950e\") " pod="openshift-infra/auto-csr-approver-29536908-42g5s" Feb 27 17:48:00 crc kubenswrapper[4830]: I0227 17:48:00.392738 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-58p4h\" (UniqueName: \"kubernetes.io/projected/dd747a6d-ccf7-41bd-b8d8-b7480d6d950e-kube-api-access-58p4h\") pod 
\"auto-csr-approver-29536908-42g5s\" (UID: \"dd747a6d-ccf7-41bd-b8d8-b7480d6d950e\") " pod="openshift-infra/auto-csr-approver-29536908-42g5s" Feb 27 17:48:00 crc kubenswrapper[4830]: I0227 17:48:00.425938 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-58p4h\" (UniqueName: \"kubernetes.io/projected/dd747a6d-ccf7-41bd-b8d8-b7480d6d950e-kube-api-access-58p4h\") pod \"auto-csr-approver-29536908-42g5s\" (UID: \"dd747a6d-ccf7-41bd-b8d8-b7480d6d950e\") " pod="openshift-infra/auto-csr-approver-29536908-42g5s" Feb 27 17:48:00 crc kubenswrapper[4830]: I0227 17:48:00.554591 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536908-42g5s" Feb 27 17:48:01 crc kubenswrapper[4830]: W0227 17:48:01.041887 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddd747a6d_ccf7_41bd_b8d8_b7480d6d950e.slice/crio-dcf92036805c8a1b15cf452adbe090039907c9893f0e7ed9226382d2f2dde978 WatchSource:0}: Error finding container dcf92036805c8a1b15cf452adbe090039907c9893f0e7ed9226382d2f2dde978: Status 404 returned error can't find the container with id dcf92036805c8a1b15cf452adbe090039907c9893f0e7ed9226382d2f2dde978 Feb 27 17:48:01 crc kubenswrapper[4830]: I0227 17:48:01.051175 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536908-42g5s"] Feb 27 17:48:01 crc kubenswrapper[4830]: I0227 17:48:01.103533 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536908-42g5s" event={"ID":"dd747a6d-ccf7-41bd-b8d8-b7480d6d950e","Type":"ContainerStarted","Data":"dcf92036805c8a1b15cf452adbe090039907c9893f0e7ed9226382d2f2dde978"} Feb 27 17:48:01 crc kubenswrapper[4830]: E0227 17:48:01.987664 4830 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading 
signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 27 17:48:01 crc kubenswrapper[4830]: E0227 17:48:01.987868 4830 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 17:48:01 crc kubenswrapper[4830]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 27 17:48:01 crc kubenswrapper[4830]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-58p4h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29536908-42g5s_openshift-infra(dd747a6d-ccf7-41bd-b8d8-b7480d6d950e): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 27 17:48:01 crc kubenswrapper[4830]: > logger="UnhandledError" Feb 27 17:48:01 crc kubenswrapper[4830]: E0227 17:48:01.989183 4830 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29536908-42g5s" podUID="dd747a6d-ccf7-41bd-b8d8-b7480d6d950e" Feb 27 17:48:02 crc kubenswrapper[4830]: E0227 17:48:02.120101 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536908-42g5s" podUID="dd747a6d-ccf7-41bd-b8d8-b7480d6d950e" Feb 27 17:48:03 crc kubenswrapper[4830]: I0227 17:48:03.811667 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-rsyslog-xhmtb" Feb 27 17:48:03 crc kubenswrapper[4830]: I0227 17:48:03.856651 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-rsyslog-xhmtb" Feb 27 17:48:06 crc kubenswrapper[4830]: E0227 17:48:06.768086 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536898-vrwjs" podUID="204eb1af-36ad-4de7-9da7-9a37fefd3026" Feb 27 17:48:14 crc kubenswrapper[4830]: E0227 17:48:14.220144 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gbcl6" podUID="90e915d6-d74a-4f5b-a8da-8f0f2acdda48" Feb 27 17:48:15 crc kubenswrapper[4830]: I0227 17:48:15.281002 
4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-59f8cff499-8m7cr" event={"ID":"fd025d27-c829-4a6f-a7c5-7399538b0872","Type":"ContainerStarted","Data":"550f0ca61057194779530fb5b5ed940d96faca992015dd954881e6beb2a75632"} Feb 27 17:48:15 crc kubenswrapper[4830]: E0227 17:48:15.811353 4830 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 27 17:48:15 crc kubenswrapper[4830]: E0227 17:48:15.811864 4830 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 17:48:15 crc kubenswrapper[4830]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 27 17:48:15 crc kubenswrapper[4830]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-58p4h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
auto-csr-approver-29536908-42g5s_openshift-infra(dd747a6d-ccf7-41bd-b8d8-b7480d6d950e): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 27 17:48:15 crc kubenswrapper[4830]: > logger="UnhandledError" Feb 27 17:48:15 crc kubenswrapper[4830]: E0227 17:48:15.813168 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29536908-42g5s" podUID="dd747a6d-ccf7-41bd-b8d8-b7480d6d950e" Feb 27 17:48:16 crc kubenswrapper[4830]: I0227 17:48:16.297821 4830 generic.go:334] "Generic (PLEG): container finished" podID="fd025d27-c829-4a6f-a7c5-7399538b0872" containerID="550f0ca61057194779530fb5b5ed940d96faca992015dd954881e6beb2a75632" exitCode=0 Feb 27 17:48:16 crc kubenswrapper[4830]: I0227 17:48:16.297888 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-59f8cff499-8m7cr" event={"ID":"fd025d27-c829-4a6f-a7c5-7399538b0872","Type":"ContainerDied","Data":"550f0ca61057194779530fb5b5ed940d96faca992015dd954881e6beb2a75632"} Feb 27 17:48:18 crc kubenswrapper[4830]: E0227 17:48:18.157456 4830 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/ubi9/httpd-24@sha256=b5d4552d14730d8477ebb55268a66c88547913036d981314e9ea24373e9e0051/signature-21: status 500 (Internal Server Error)" 
image="registry.redhat.io/ubi9/httpd-24:latest" Feb 27 17:48:18 crc kubenswrapper[4830]: E0227 17:48:18.158183 4830 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:octavia-amphora-httpd,Image:registry.redhat.io/ubi9/httpd-24:latest,Command:[/bin/bash],Args:[-c cp -f /usr/local/apache2/conf/httpd.conf /etc/httpd/conf/httpd.conf && /usr/bin/run-httpd],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:amphora-image,ReadOnly:false,MountPath:/usr/local/apache2/htdocs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:httpd-config,ReadOnly:true,MountPath:/usr/local/apache2/conf/httpd.conf,SubPath:httpd.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-image-upload-59f8cff499-8m7cr_openstack(fd025d27-c829-4a6f-a7c5-7399538b0872): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/ubi9/httpd-24@sha256=b5d4552d14730d8477ebb55268a66c88547913036d981314e9ea24373e9e0051/signature-21: status 500 (Internal Server Error)" logger="UnhandledError" Feb 27 17:48:18 crc kubenswrapper[4830]: E0227 17:48:18.159411 4830 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"octavia-amphora-httpd\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/ubi9/httpd-24@sha256=b5d4552d14730d8477ebb55268a66c88547913036d981314e9ea24373e9e0051/signature-21: status 500 (Internal Server Error)\"" pod="openstack/octavia-image-upload-59f8cff499-8m7cr" podUID="fd025d27-c829-4a6f-a7c5-7399538b0872" Feb 27 17:48:18 crc kubenswrapper[4830]: E0227 17:48:18.330588 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"octavia-amphora-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/octavia-image-upload-59f8cff499-8m7cr" podUID="fd025d27-c829-4a6f-a7c5-7399538b0872" Feb 27 17:48:19 crc kubenswrapper[4830]: E0227 17:48:19.767826 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536898-vrwjs" podUID="204eb1af-36ad-4de7-9da7-9a37fefd3026" Feb 27 17:48:27 crc kubenswrapper[4830]: E0227 17:48:27.766122 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gbcl6" podUID="90e915d6-d74a-4f5b-a8da-8f0f2acdda48" Feb 27 17:48:27 crc kubenswrapper[4830]: E0227 17:48:27.766144 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536908-42g5s" podUID="dd747a6d-ccf7-41bd-b8d8-b7480d6d950e" Feb 27 17:48:30 
crc kubenswrapper[4830]: E0227 17:48:30.767556 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536898-vrwjs" podUID="204eb1af-36ad-4de7-9da7-9a37fefd3026" Feb 27 17:48:32 crc kubenswrapper[4830]: E0227 17:48:32.304855 4830 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/ubi9/httpd-24@sha256=b5d4552d14730d8477ebb55268a66c88547913036d981314e9ea24373e9e0051/signature-21: status 500 (Internal Server Error)" image="registry.redhat.io/ubi9/httpd-24:latest" Feb 27 17:48:32 crc kubenswrapper[4830]: E0227 17:48:32.305813 4830 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:octavia-amphora-httpd,Image:registry.redhat.io/ubi9/httpd-24:latest,Command:[/bin/bash],Args:[-c cp -f /usr/local/apache2/conf/httpd.conf /etc/httpd/conf/httpd.conf && 
/usr/bin/run-httpd],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:amphora-image,ReadOnly:false,MountPath:/usr/local/apache2/htdocs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:httpd-config,ReadOnly:true,MountPath:/usr/local/apache2/conf/httpd.conf,SubPath:httpd.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-image-upload-59f8cff499-8m7cr_openstack(fd025d27-c829-4a6f-a7c5-7399538b0872): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/ubi9/httpd-24@sha256=b5d4552d14730d8477ebb55268a66c88547913036d981314e9ea24373e9e0051/signature-21: status 500 (Internal Server Error)" logger="UnhandledError" Feb 27 17:48:32 crc kubenswrapper[4830]: E0227 17:48:32.307998 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"octavia-amphora-httpd\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/ubi9/httpd-24@sha256=b5d4552d14730d8477ebb55268a66c88547913036d981314e9ea24373e9e0051/signature-21: status 500 (Internal 
Server Error)\"" pod="openstack/octavia-image-upload-59f8cff499-8m7cr" podUID="fd025d27-c829-4a6f-a7c5-7399538b0872" Feb 27 17:48:43 crc kubenswrapper[4830]: E0227 17:48:43.766162 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536898-vrwjs" podUID="204eb1af-36ad-4de7-9da7-9a37fefd3026" Feb 27 17:48:43 crc kubenswrapper[4830]: E0227 17:48:43.847718 4830 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 27 17:48:43 crc kubenswrapper[4830]: E0227 17:48:43.847943 4830 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 17:48:43 crc kubenswrapper[4830]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 27 17:48:43 crc kubenswrapper[4830]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-58p4h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29536908-42g5s_openshift-infra(dd747a6d-ccf7-41bd-b8d8-b7480d6d950e): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 27 17:48:43 crc kubenswrapper[4830]: > logger="UnhandledError" Feb 27 17:48:43 crc kubenswrapper[4830]: E0227 17:48:43.849204 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29536908-42g5s" podUID="dd747a6d-ccf7-41bd-b8d8-b7480d6d950e" Feb 27 17:48:44 crc kubenswrapper[4830]: E0227 17:48:44.796537 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"octavia-amphora-httpd\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/octavia-image-upload-59f8cff499-8m7cr" podUID="fd025d27-c829-4a6f-a7c5-7399538b0872" Feb 27 17:48:53 crc kubenswrapper[4830]: I0227 17:48:53.986751 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-healthmanager-l9rlw"] Feb 27 17:48:53 crc kubenswrapper[4830]: I0227 17:48:53.993172 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-healthmanager-l9rlw" Feb 27 17:48:54 crc kubenswrapper[4830]: I0227 17:48:53.997517 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-healthmanager-scripts" Feb 27 17:48:54 crc kubenswrapper[4830]: I0227 17:48:53.998484 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-healthmanager-config-data" Feb 27 17:48:54 crc kubenswrapper[4830]: I0227 17:48:53.998819 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-certs-secret" Feb 27 17:48:54 crc kubenswrapper[4830]: I0227 17:48:54.000732 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-healthmanager-l9rlw"] Feb 27 17:48:54 crc kubenswrapper[4830]: I0227 17:48:54.095150 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e970daf4-00a2-473d-bfae-e985a7c78a94-scripts\") pod \"octavia-healthmanager-l9rlw\" (UID: \"e970daf4-00a2-473d-bfae-e985a7c78a94\") " pod="openstack/octavia-healthmanager-l9rlw" Feb 27 17:48:54 crc kubenswrapper[4830]: I0227 17:48:54.095373 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/e970daf4-00a2-473d-bfae-e985a7c78a94-config-data-merged\") pod \"octavia-healthmanager-l9rlw\" (UID: \"e970daf4-00a2-473d-bfae-e985a7c78a94\") " pod="openstack/octavia-healthmanager-l9rlw" Feb 27 17:48:54 crc 
kubenswrapper[4830]: I0227 17:48:54.095473 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e970daf4-00a2-473d-bfae-e985a7c78a94-config-data\") pod \"octavia-healthmanager-l9rlw\" (UID: \"e970daf4-00a2-473d-bfae-e985a7c78a94\") " pod="openstack/octavia-healthmanager-l9rlw" Feb 27 17:48:54 crc kubenswrapper[4830]: I0227 17:48:54.095556 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e970daf4-00a2-473d-bfae-e985a7c78a94-combined-ca-bundle\") pod \"octavia-healthmanager-l9rlw\" (UID: \"e970daf4-00a2-473d-bfae-e985a7c78a94\") " pod="openstack/octavia-healthmanager-l9rlw" Feb 27 17:48:54 crc kubenswrapper[4830]: I0227 17:48:54.095711 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/e970daf4-00a2-473d-bfae-e985a7c78a94-amphora-certs\") pod \"octavia-healthmanager-l9rlw\" (UID: \"e970daf4-00a2-473d-bfae-e985a7c78a94\") " pod="openstack/octavia-healthmanager-l9rlw" Feb 27 17:48:54 crc kubenswrapper[4830]: I0227 17:48:54.095848 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/e970daf4-00a2-473d-bfae-e985a7c78a94-hm-ports\") pod \"octavia-healthmanager-l9rlw\" (UID: \"e970daf4-00a2-473d-bfae-e985a7c78a94\") " pod="openstack/octavia-healthmanager-l9rlw" Feb 27 17:48:54 crc kubenswrapper[4830]: I0227 17:48:54.198463 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/e970daf4-00a2-473d-bfae-e985a7c78a94-amphora-certs\") pod \"octavia-healthmanager-l9rlw\" (UID: \"e970daf4-00a2-473d-bfae-e985a7c78a94\") " pod="openstack/octavia-healthmanager-l9rlw" Feb 27 17:48:54 crc 
kubenswrapper[4830]: I0227 17:48:54.198659 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/e970daf4-00a2-473d-bfae-e985a7c78a94-hm-ports\") pod \"octavia-healthmanager-l9rlw\" (UID: \"e970daf4-00a2-473d-bfae-e985a7c78a94\") " pod="openstack/octavia-healthmanager-l9rlw"
Feb 27 17:48:54 crc kubenswrapper[4830]: I0227 17:48:54.198789 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e970daf4-00a2-473d-bfae-e985a7c78a94-scripts\") pod \"octavia-healthmanager-l9rlw\" (UID: \"e970daf4-00a2-473d-bfae-e985a7c78a94\") " pod="openstack/octavia-healthmanager-l9rlw"
Feb 27 17:48:54 crc kubenswrapper[4830]: I0227 17:48:54.199008 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/e970daf4-00a2-473d-bfae-e985a7c78a94-config-data-merged\") pod \"octavia-healthmanager-l9rlw\" (UID: \"e970daf4-00a2-473d-bfae-e985a7c78a94\") " pod="openstack/octavia-healthmanager-l9rlw"
Feb 27 17:48:54 crc kubenswrapper[4830]: I0227 17:48:54.199068 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e970daf4-00a2-473d-bfae-e985a7c78a94-config-data\") pod \"octavia-healthmanager-l9rlw\" (UID: \"e970daf4-00a2-473d-bfae-e985a7c78a94\") " pod="openstack/octavia-healthmanager-l9rlw"
Feb 27 17:48:54 crc kubenswrapper[4830]: I0227 17:48:54.199144 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e970daf4-00a2-473d-bfae-e985a7c78a94-combined-ca-bundle\") pod \"octavia-healthmanager-l9rlw\" (UID: \"e970daf4-00a2-473d-bfae-e985a7c78a94\") " pod="openstack/octavia-healthmanager-l9rlw"
Feb 27 17:48:54 crc kubenswrapper[4830]: I0227 17:48:54.199660 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/e970daf4-00a2-473d-bfae-e985a7c78a94-config-data-merged\") pod \"octavia-healthmanager-l9rlw\" (UID: \"e970daf4-00a2-473d-bfae-e985a7c78a94\") " pod="openstack/octavia-healthmanager-l9rlw"
Feb 27 17:48:54 crc kubenswrapper[4830]: I0227 17:48:54.201400 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/e970daf4-00a2-473d-bfae-e985a7c78a94-hm-ports\") pod \"octavia-healthmanager-l9rlw\" (UID: \"e970daf4-00a2-473d-bfae-e985a7c78a94\") " pod="openstack/octavia-healthmanager-l9rlw"
Feb 27 17:48:54 crc kubenswrapper[4830]: I0227 17:48:54.206560 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e970daf4-00a2-473d-bfae-e985a7c78a94-combined-ca-bundle\") pod \"octavia-healthmanager-l9rlw\" (UID: \"e970daf4-00a2-473d-bfae-e985a7c78a94\") " pod="openstack/octavia-healthmanager-l9rlw"
Feb 27 17:48:54 crc kubenswrapper[4830]: I0227 17:48:54.207219 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e970daf4-00a2-473d-bfae-e985a7c78a94-scripts\") pod \"octavia-healthmanager-l9rlw\" (UID: \"e970daf4-00a2-473d-bfae-e985a7c78a94\") " pod="openstack/octavia-healthmanager-l9rlw"
Feb 27 17:48:54 crc kubenswrapper[4830]: I0227 17:48:54.207285 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/e970daf4-00a2-473d-bfae-e985a7c78a94-amphora-certs\") pod \"octavia-healthmanager-l9rlw\" (UID: \"e970daf4-00a2-473d-bfae-e985a7c78a94\") " pod="openstack/octavia-healthmanager-l9rlw"
Feb 27 17:48:54 crc kubenswrapper[4830]: I0227 17:48:54.220805 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e970daf4-00a2-473d-bfae-e985a7c78a94-config-data\") pod \"octavia-healthmanager-l9rlw\" (UID: \"e970daf4-00a2-473d-bfae-e985a7c78a94\") " pod="openstack/octavia-healthmanager-l9rlw"
Feb 27 17:48:54 crc kubenswrapper[4830]: I0227 17:48:54.317208 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-healthmanager-l9rlw"
Feb 27 17:48:54 crc kubenswrapper[4830]: I0227 17:48:54.846251 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-housekeeping-jvpsh"]
Feb 27 17:48:54 crc kubenswrapper[4830]: I0227 17:48:54.883413 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-housekeeping-jvpsh"]
Feb 27 17:48:54 crc kubenswrapper[4830]: I0227 17:48:54.883577 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-housekeeping-jvpsh"
Feb 27 17:48:54 crc kubenswrapper[4830]: I0227 17:48:54.887289 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-housekeeping-config-data"
Feb 27 17:48:54 crc kubenswrapper[4830]: I0227 17:48:54.888612 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-housekeeping-scripts"
Feb 27 17:48:54 crc kubenswrapper[4830]: I0227 17:48:54.966879 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-healthmanager-l9rlw"]
Feb 27 17:48:55 crc kubenswrapper[4830]: I0227 17:48:55.029442 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b0a7833-e438-4248-a46f-bbeb413c9f1b-combined-ca-bundle\") pod \"octavia-housekeeping-jvpsh\" (UID: \"6b0a7833-e438-4248-a46f-bbeb413c9f1b\") " pod="openstack/octavia-housekeeping-jvpsh"
Feb 27 17:48:55 crc kubenswrapper[4830]: I0227 17:48:55.029489 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/6b0a7833-e438-4248-a46f-bbeb413c9f1b-amphora-certs\") pod \"octavia-housekeeping-jvpsh\" (UID: \"6b0a7833-e438-4248-a46f-bbeb413c9f1b\") " pod="openstack/octavia-housekeeping-jvpsh"
Feb 27 17:48:55 crc kubenswrapper[4830]: I0227 17:48:55.029539 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6b0a7833-e438-4248-a46f-bbeb413c9f1b-scripts\") pod \"octavia-housekeeping-jvpsh\" (UID: \"6b0a7833-e438-4248-a46f-bbeb413c9f1b\") " pod="openstack/octavia-housekeeping-jvpsh"
Feb 27 17:48:55 crc kubenswrapper[4830]: I0227 17:48:55.029566 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/6b0a7833-e438-4248-a46f-bbeb413c9f1b-hm-ports\") pod \"octavia-housekeeping-jvpsh\" (UID: \"6b0a7833-e438-4248-a46f-bbeb413c9f1b\") " pod="openstack/octavia-housekeeping-jvpsh"
Feb 27 17:48:55 crc kubenswrapper[4830]: I0227 17:48:55.029638 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/6b0a7833-e438-4248-a46f-bbeb413c9f1b-config-data-merged\") pod \"octavia-housekeeping-jvpsh\" (UID: \"6b0a7833-e438-4248-a46f-bbeb413c9f1b\") " pod="openstack/octavia-housekeeping-jvpsh"
Feb 27 17:48:55 crc kubenswrapper[4830]: I0227 17:48:55.029695 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b0a7833-e438-4248-a46f-bbeb413c9f1b-config-data\") pod \"octavia-housekeeping-jvpsh\" (UID: \"6b0a7833-e438-4248-a46f-bbeb413c9f1b\") " pod="openstack/octavia-housekeeping-jvpsh"
Feb 27 17:48:55 crc kubenswrapper[4830]: I0227 17:48:55.132020 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b0a7833-e438-4248-a46f-bbeb413c9f1b-config-data\") pod \"octavia-housekeeping-jvpsh\" (UID: \"6b0a7833-e438-4248-a46f-bbeb413c9f1b\") " pod="openstack/octavia-housekeeping-jvpsh"
Feb 27 17:48:55 crc kubenswrapper[4830]: I0227 17:48:55.133459 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b0a7833-e438-4248-a46f-bbeb413c9f1b-combined-ca-bundle\") pod \"octavia-housekeeping-jvpsh\" (UID: \"6b0a7833-e438-4248-a46f-bbeb413c9f1b\") " pod="openstack/octavia-housekeeping-jvpsh"
Feb 27 17:48:55 crc kubenswrapper[4830]: I0227 17:48:55.133507 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/6b0a7833-e438-4248-a46f-bbeb413c9f1b-amphora-certs\") pod \"octavia-housekeeping-jvpsh\" (UID: \"6b0a7833-e438-4248-a46f-bbeb413c9f1b\") " pod="openstack/octavia-housekeeping-jvpsh"
Feb 27 17:48:55 crc kubenswrapper[4830]: I0227 17:48:55.133544 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6b0a7833-e438-4248-a46f-bbeb413c9f1b-scripts\") pod \"octavia-housekeeping-jvpsh\" (UID: \"6b0a7833-e438-4248-a46f-bbeb413c9f1b\") " pod="openstack/octavia-housekeeping-jvpsh"
Feb 27 17:48:55 crc kubenswrapper[4830]: I0227 17:48:55.133593 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/6b0a7833-e438-4248-a46f-bbeb413c9f1b-hm-ports\") pod \"octavia-housekeeping-jvpsh\" (UID: \"6b0a7833-e438-4248-a46f-bbeb413c9f1b\") " pod="openstack/octavia-housekeeping-jvpsh"
Feb 27 17:48:55 crc kubenswrapper[4830]: I0227 17:48:55.133678 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/6b0a7833-e438-4248-a46f-bbeb413c9f1b-config-data-merged\") pod \"octavia-housekeeping-jvpsh\" (UID: \"6b0a7833-e438-4248-a46f-bbeb413c9f1b\") " pod="openstack/octavia-housekeeping-jvpsh"
Feb 27 17:48:55 crc kubenswrapper[4830]: I0227 17:48:55.134751 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/6b0a7833-e438-4248-a46f-bbeb413c9f1b-config-data-merged\") pod \"octavia-housekeeping-jvpsh\" (UID: \"6b0a7833-e438-4248-a46f-bbeb413c9f1b\") " pod="openstack/octavia-housekeeping-jvpsh"
Feb 27 17:48:55 crc kubenswrapper[4830]: I0227 17:48:55.135520 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/6b0a7833-e438-4248-a46f-bbeb413c9f1b-hm-ports\") pod \"octavia-housekeeping-jvpsh\" (UID: \"6b0a7833-e438-4248-a46f-bbeb413c9f1b\") " pod="openstack/octavia-housekeeping-jvpsh"
Feb 27 17:48:55 crc kubenswrapper[4830]: I0227 17:48:55.141021 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6b0a7833-e438-4248-a46f-bbeb413c9f1b-scripts\") pod \"octavia-housekeeping-jvpsh\" (UID: \"6b0a7833-e438-4248-a46f-bbeb413c9f1b\") " pod="openstack/octavia-housekeeping-jvpsh"
Feb 27 17:48:55 crc kubenswrapper[4830]: I0227 17:48:55.147125 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b0a7833-e438-4248-a46f-bbeb413c9f1b-config-data\") pod \"octavia-housekeeping-jvpsh\" (UID: \"6b0a7833-e438-4248-a46f-bbeb413c9f1b\") " pod="openstack/octavia-housekeeping-jvpsh"
Feb 27 17:48:55 crc kubenswrapper[4830]: I0227 17:48:55.147876 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b0a7833-e438-4248-a46f-bbeb413c9f1b-combined-ca-bundle\") pod \"octavia-housekeeping-jvpsh\" (UID: \"6b0a7833-e438-4248-a46f-bbeb413c9f1b\") " pod="openstack/octavia-housekeeping-jvpsh"
Feb 27 17:48:55 crc kubenswrapper[4830]: I0227 17:48:55.152620 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/6b0a7833-e438-4248-a46f-bbeb413c9f1b-amphora-certs\") pod \"octavia-housekeeping-jvpsh\" (UID: \"6b0a7833-e438-4248-a46f-bbeb413c9f1b\") " pod="openstack/octavia-housekeeping-jvpsh"
Feb 27 17:48:55 crc kubenswrapper[4830]: I0227 17:48:55.208447 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-housekeeping-jvpsh"
Feb 27 17:48:55 crc kubenswrapper[4830]: I0227 17:48:55.649094 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-worker-zbtll"]
Feb 27 17:48:55 crc kubenswrapper[4830]: I0227 17:48:55.653084 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-worker-zbtll"
Feb 27 17:48:55 crc kubenswrapper[4830]: I0227 17:48:55.656937 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-worker-scripts"
Feb 27 17:48:55 crc kubenswrapper[4830]: I0227 17:48:55.657264 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-worker-config-data"
Feb 27 17:48:55 crc kubenswrapper[4830]: I0227 17:48:55.671368 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-worker-zbtll"]
Feb 27 17:48:55 crc kubenswrapper[4830]: I0227 17:48:55.830280 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-housekeeping-jvpsh"]
Feb 27 17:48:55 crc kubenswrapper[4830]: W0227 17:48:55.834174 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6b0a7833_e438_4248_a46f_bbeb413c9f1b.slice/crio-373dea2fb093298f2df4c4a997191b9493f4fce1bdd40339200c8adb5aef7545 WatchSource:0}: Error finding container 373dea2fb093298f2df4c4a997191b9493f4fce1bdd40339200c8adb5aef7545: Status 404 returned error can't find the container with id 373dea2fb093298f2df4c4a997191b9493f4fce1bdd40339200c8adb5aef7545
Feb 27 17:48:55 crc kubenswrapper[4830]: I0227 17:48:55.849191 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/e70e9f25-ddb4-4592-acce-1cc44b59f2b8-amphora-certs\") pod \"octavia-worker-zbtll\" (UID: \"e70e9f25-ddb4-4592-acce-1cc44b59f2b8\") " pod="openstack/octavia-worker-zbtll"
Feb 27 17:48:55 crc kubenswrapper[4830]: I0227 17:48:55.849236 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e70e9f25-ddb4-4592-acce-1cc44b59f2b8-scripts\") pod \"octavia-worker-zbtll\" (UID: \"e70e9f25-ddb4-4592-acce-1cc44b59f2b8\") " pod="openstack/octavia-worker-zbtll"
Feb 27 17:48:55 crc kubenswrapper[4830]: I0227 17:48:55.849391 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/e70e9f25-ddb4-4592-acce-1cc44b59f2b8-config-data-merged\") pod \"octavia-worker-zbtll\" (UID: \"e70e9f25-ddb4-4592-acce-1cc44b59f2b8\") " pod="openstack/octavia-worker-zbtll"
Feb 27 17:48:55 crc kubenswrapper[4830]: I0227 17:48:55.849631 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e70e9f25-ddb4-4592-acce-1cc44b59f2b8-combined-ca-bundle\") pod \"octavia-worker-zbtll\" (UID: \"e70e9f25-ddb4-4592-acce-1cc44b59f2b8\") " pod="openstack/octavia-worker-zbtll"
Feb 27 17:48:55 crc kubenswrapper[4830]: I0227 17:48:55.849825 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/e70e9f25-ddb4-4592-acce-1cc44b59f2b8-hm-ports\") pod \"octavia-worker-zbtll\" (UID: \"e70e9f25-ddb4-4592-acce-1cc44b59f2b8\") " pod="openstack/octavia-worker-zbtll"
Feb 27 17:48:55 crc kubenswrapper[4830]: I0227 17:48:55.849889 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e70e9f25-ddb4-4592-acce-1cc44b59f2b8-config-data\") pod \"octavia-worker-zbtll\" (UID: \"e70e9f25-ddb4-4592-acce-1cc44b59f2b8\") " pod="openstack/octavia-worker-zbtll"
Feb 27 17:48:55 crc kubenswrapper[4830]: I0227 17:48:55.870165 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-healthmanager-l9rlw" event={"ID":"e970daf4-00a2-473d-bfae-e985a7c78a94","Type":"ContainerStarted","Data":"c317bdfad51a5665fb94d2df67e1dd7be2ac75aff4d78f614662ed7b9047a6d1"}
Feb 27 17:48:55 crc kubenswrapper[4830]: I0227 17:48:55.870279 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-healthmanager-l9rlw" event={"ID":"e970daf4-00a2-473d-bfae-e985a7c78a94","Type":"ContainerStarted","Data":"9b96835aecba6d20348094fad664cc4dc4e9ca184d7a7c8fe0fb68ee22b6854a"}
Feb 27 17:48:55 crc kubenswrapper[4830]: I0227 17:48:55.872583 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-housekeeping-jvpsh" event={"ID":"6b0a7833-e438-4248-a46f-bbeb413c9f1b","Type":"ContainerStarted","Data":"373dea2fb093298f2df4c4a997191b9493f4fce1bdd40339200c8adb5aef7545"}
Feb 27 17:48:55 crc kubenswrapper[4830]: I0227 17:48:55.951345 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/e70e9f25-ddb4-4592-acce-1cc44b59f2b8-config-data-merged\") pod \"octavia-worker-zbtll\" (UID: \"e70e9f25-ddb4-4592-acce-1cc44b59f2b8\") " pod="openstack/octavia-worker-zbtll"
Feb 27 17:48:55 crc kubenswrapper[4830]: I0227 17:48:55.951840 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e70e9f25-ddb4-4592-acce-1cc44b59f2b8-combined-ca-bundle\") pod \"octavia-worker-zbtll\" (UID: \"e70e9f25-ddb4-4592-acce-1cc44b59f2b8\") " pod="openstack/octavia-worker-zbtll"
Feb 27 17:48:55 crc kubenswrapper[4830]: I0227 17:48:55.952100 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/e70e9f25-ddb4-4592-acce-1cc44b59f2b8-hm-ports\") pod \"octavia-worker-zbtll\" (UID: \"e70e9f25-ddb4-4592-acce-1cc44b59f2b8\") " pod="openstack/octavia-worker-zbtll"
Feb 27 17:48:55 crc kubenswrapper[4830]: I0227 17:48:55.952245 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/e70e9f25-ddb4-4592-acce-1cc44b59f2b8-config-data-merged\") pod \"octavia-worker-zbtll\" (UID: \"e70e9f25-ddb4-4592-acce-1cc44b59f2b8\") " pod="openstack/octavia-worker-zbtll"
Feb 27 17:48:55 crc kubenswrapper[4830]: I0227 17:48:55.952985 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e70e9f25-ddb4-4592-acce-1cc44b59f2b8-config-data\") pod \"octavia-worker-zbtll\" (UID: \"e70e9f25-ddb4-4592-acce-1cc44b59f2b8\") " pod="openstack/octavia-worker-zbtll"
Feb 27 17:48:55 crc kubenswrapper[4830]: I0227 17:48:55.953098 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/e70e9f25-ddb4-4592-acce-1cc44b59f2b8-amphora-certs\") pod \"octavia-worker-zbtll\" (UID: \"e70e9f25-ddb4-4592-acce-1cc44b59f2b8\") " pod="openstack/octavia-worker-zbtll"
Feb 27 17:48:55 crc kubenswrapper[4830]: I0227 17:48:55.953196 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e70e9f25-ddb4-4592-acce-1cc44b59f2b8-scripts\") pod \"octavia-worker-zbtll\" (UID: \"e70e9f25-ddb4-4592-acce-1cc44b59f2b8\") " pod="openstack/octavia-worker-zbtll"
Feb 27 17:48:55 crc kubenswrapper[4830]: I0227 17:48:55.953691 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/e70e9f25-ddb4-4592-acce-1cc44b59f2b8-hm-ports\") pod \"octavia-worker-zbtll\" (UID: \"e70e9f25-ddb4-4592-acce-1cc44b59f2b8\") " pod="openstack/octavia-worker-zbtll"
Feb 27 17:48:55 crc kubenswrapper[4830]: I0227 17:48:55.960473 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e70e9f25-ddb4-4592-acce-1cc44b59f2b8-combined-ca-bundle\") pod \"octavia-worker-zbtll\" (UID: \"e70e9f25-ddb4-4592-acce-1cc44b59f2b8\") " pod="openstack/octavia-worker-zbtll"
Feb 27 17:48:55 crc kubenswrapper[4830]: I0227 17:48:55.961562 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/e70e9f25-ddb4-4592-acce-1cc44b59f2b8-amphora-certs\") pod \"octavia-worker-zbtll\" (UID: \"e70e9f25-ddb4-4592-acce-1cc44b59f2b8\") " pod="openstack/octavia-worker-zbtll"
Feb 27 17:48:55 crc kubenswrapper[4830]: I0227 17:48:55.964237 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e70e9f25-ddb4-4592-acce-1cc44b59f2b8-scripts\") pod \"octavia-worker-zbtll\" (UID: \"e70e9f25-ddb4-4592-acce-1cc44b59f2b8\") " pod="openstack/octavia-worker-zbtll"
Feb 27 17:48:55 crc kubenswrapper[4830]: I0227 17:48:55.969906 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e70e9f25-ddb4-4592-acce-1cc44b59f2b8-config-data\") pod \"octavia-worker-zbtll\" (UID: \"e70e9f25-ddb4-4592-acce-1cc44b59f2b8\") " pod="openstack/octavia-worker-zbtll"
Feb 27 17:48:55 crc kubenswrapper[4830]: I0227 17:48:55.971838 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-worker-zbtll"
Feb 27 17:48:56 crc kubenswrapper[4830]: I0227 17:48:56.701042 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-worker-zbtll"]
Feb 27 17:48:56 crc kubenswrapper[4830]: W0227 17:48:56.705977 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode70e9f25_ddb4_4592_acce_1cc44b59f2b8.slice/crio-8487d6d091493bb09e387dc5a71d3decc1668afeb125214c5a5fd0746d36a751 WatchSource:0}: Error finding container 8487d6d091493bb09e387dc5a71d3decc1668afeb125214c5a5fd0746d36a751: Status 404 returned error can't find the container with id 8487d6d091493bb09e387dc5a71d3decc1668afeb125214c5a5fd0746d36a751
Feb 27 17:48:56 crc kubenswrapper[4830]: I0227 17:48:56.889302 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-worker-zbtll" event={"ID":"e70e9f25-ddb4-4592-acce-1cc44b59f2b8","Type":"ContainerStarted","Data":"8487d6d091493bb09e387dc5a71d3decc1668afeb125214c5a5fd0746d36a751"}
Feb 27 17:48:57 crc kubenswrapper[4830]: I0227 17:48:57.500098 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-healthmanager-l9rlw"]
Feb 27 17:48:57 crc kubenswrapper[4830]: E0227 17:48:57.697642 4830 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/ubi9/httpd-24@sha256=b5d4552d14730d8477ebb55268a66c88547913036d981314e9ea24373e9e0051/signature-21: status 500 (Internal Server Error)" image="registry.redhat.io/ubi9/httpd-24:latest"
Feb 27 17:48:57 crc kubenswrapper[4830]: E0227 17:48:57.698199 4830 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:octavia-amphora-httpd,Image:registry.redhat.io/ubi9/httpd-24:latest,Command:[/bin/bash],Args:[-c cp -f /usr/local/apache2/conf/httpd.conf /etc/httpd/conf/httpd.conf && /usr/bin/run-httpd],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:amphora-image,ReadOnly:false,MountPath:/usr/local/apache2/htdocs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:httpd-config,ReadOnly:true,MountPath:/usr/local/apache2/conf/httpd.conf,SubPath:httpd.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-image-upload-59f8cff499-8m7cr_openstack(fd025d27-c829-4a6f-a7c5-7399538b0872): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/ubi9/httpd-24@sha256=b5d4552d14730d8477ebb55268a66c88547913036d981314e9ea24373e9e0051/signature-21: status 500 (Internal Server Error)" logger="UnhandledError"
Feb 27 17:48:57 crc kubenswrapper[4830]: E0227 17:48:57.699674 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"octavia-amphora-httpd\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/ubi9/httpd-24@sha256=b5d4552d14730d8477ebb55268a66c88547913036d981314e9ea24373e9e0051/signature-21: status 500 (Internal Server Error)\"" pod="openstack/octavia-image-upload-59f8cff499-8m7cr" podUID="fd025d27-c829-4a6f-a7c5-7399538b0872"
Feb 27 17:48:57 crc kubenswrapper[4830]: E0227 17:48:57.808975 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536908-42g5s" podUID="dd747a6d-ccf7-41bd-b8d8-b7480d6d950e"
Feb 27 17:48:57 crc kubenswrapper[4830]: I0227 17:48:57.905754 4830 generic.go:334] "Generic (PLEG): container finished" podID="e970daf4-00a2-473d-bfae-e985a7c78a94" containerID="c317bdfad51a5665fb94d2df67e1dd7be2ac75aff4d78f614662ed7b9047a6d1" exitCode=0
Feb 27 17:48:57 crc kubenswrapper[4830]: I0227 17:48:57.905823 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-healthmanager-l9rlw" event={"ID":"e970daf4-00a2-473d-bfae-e985a7c78a94","Type":"ContainerDied","Data":"c317bdfad51a5665fb94d2df67e1dd7be2ac75aff4d78f614662ed7b9047a6d1"}
Feb 27 17:48:58 crc kubenswrapper[4830]: I0227 17:48:58.919991 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-healthmanager-l9rlw" event={"ID":"e970daf4-00a2-473d-bfae-e985a7c78a94","Type":"ContainerStarted","Data":"b34c0087c93d2e8445da6cdb19e155de8016baf353d1777a88992d9191d00ef7"}
Feb 27 17:48:58 crc kubenswrapper[4830]: I0227 17:48:58.920315 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-healthmanager-l9rlw"
Feb 27 17:48:58 crc kubenswrapper[4830]: I0227 17:48:58.923816 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-housekeeping-jvpsh" event={"ID":"6b0a7833-e438-4248-a46f-bbeb413c9f1b","Type":"ContainerStarted","Data":"839f15221e996daf21e87d9415045b948271d8bb00feb01ea78b1fdc0b0c17c3"}
Feb 27 17:48:58 crc kubenswrapper[4830]: I0227 17:48:58.974246 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-healthmanager-l9rlw" podStartSLOduration=5.974226078 podStartE2EDuration="5.974226078s" podCreationTimestamp="2026-02-27 17:48:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 17:48:58.950356597 +0000 UTC m=+6135.039629060" watchObservedRunningTime="2026-02-27 17:48:58.974226078 +0000 UTC m=+6135.063498541"
Feb 27 17:48:58 crc kubenswrapper[4830]: E0227 17:48:58.998658 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536898-vrwjs" podUID="204eb1af-36ad-4de7-9da7-9a37fefd3026"
Feb 27 17:48:59 crc kubenswrapper[4830]: I0227 17:48:59.936857 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-worker-zbtll" event={"ID":"e70e9f25-ddb4-4592-acce-1cc44b59f2b8","Type":"ContainerStarted","Data":"44601bd8add4d0dcadb1f9981beacfa3774e28e416dacfc3e2e3d1617d5d9e05"}
Feb 27 17:48:59 crc kubenswrapper[4830]: I0227 17:48:59.939037 4830 generic.go:334] "Generic (PLEG): container finished" podID="6b0a7833-e438-4248-a46f-bbeb413c9f1b" containerID="839f15221e996daf21e87d9415045b948271d8bb00feb01ea78b1fdc0b0c17c3" exitCode=0
Feb 27 17:48:59 crc kubenswrapper[4830]: I0227 17:48:59.939171 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-housekeeping-jvpsh" event={"ID":"6b0a7833-e438-4248-a46f-bbeb413c9f1b","Type":"ContainerDied","Data":"839f15221e996daf21e87d9415045b948271d8bb00feb01ea78b1fdc0b0c17c3"}
Feb 27 17:49:00 crc kubenswrapper[4830]: I0227 17:49:00.965218 4830 generic.go:334] "Generic (PLEG): container finished" podID="e70e9f25-ddb4-4592-acce-1cc44b59f2b8" containerID="44601bd8add4d0dcadb1f9981beacfa3774e28e416dacfc3e2e3d1617d5d9e05" exitCode=0
Feb 27 17:49:00 crc kubenswrapper[4830]: I0227 17:49:00.965717 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-worker-zbtll" event={"ID":"e70e9f25-ddb4-4592-acce-1cc44b59f2b8","Type":"ContainerDied","Data":"44601bd8add4d0dcadb1f9981beacfa3774e28e416dacfc3e2e3d1617d5d9e05"}
Feb 27 17:49:00 crc kubenswrapper[4830]: I0227 17:49:00.984539 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-housekeeping-jvpsh" event={"ID":"6b0a7833-e438-4248-a46f-bbeb413c9f1b","Type":"ContainerStarted","Data":"e9c35b8c008fafa4525f80c14dbdf8005b134f35c6bddb97c41abc385ff18298"}
Feb 27 17:49:00 crc kubenswrapper[4830]: I0227 17:49:00.987287 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-housekeeping-jvpsh"
Feb 27 17:49:02 crc kubenswrapper[4830]: I0227 17:49:02.005208 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-worker-zbtll" event={"ID":"e70e9f25-ddb4-4592-acce-1cc44b59f2b8","Type":"ContainerStarted","Data":"e609c17d2acd267405af32f0a24bf5257c0a9344b83b75c13196591a62adcf4d"}
Feb 27 17:49:02 crc kubenswrapper[4830]: I0227 17:49:02.005749 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-worker-zbtll"
Feb 27 17:49:02 crc kubenswrapper[4830]: I0227 17:49:02.030842 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-worker-zbtll" podStartSLOduration=4.674059508 podStartE2EDuration="7.03082584s" podCreationTimestamp="2026-02-27 17:48:55 +0000 UTC" firstStartedPulling="2026-02-27 17:48:56.709267721 +0000 UTC m=+6132.798540224" lastFinishedPulling="2026-02-27 17:48:59.066034093 +0000 UTC m=+6135.155306556" observedRunningTime="2026-02-27 17:49:02.028976846 +0000 UTC m=+6138.118249309" watchObservedRunningTime="2026-02-27 17:49:02.03082584 +0000 UTC m=+6138.120098303"
Feb 27 17:49:02 crc kubenswrapper[4830]: I0227 17:49:02.032314 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-housekeeping-jvpsh" podStartSLOduration=6.054543294 podStartE2EDuration="8.032310236s" podCreationTimestamp="2026-02-27 17:48:54 +0000 UTC" firstStartedPulling="2026-02-27 17:48:55.836928176 +0000 UTC m=+6131.926200639" lastFinishedPulling="2026-02-27 17:48:57.814695088 +0000 UTC m=+6133.903967581" observedRunningTime="2026-02-27 17:49:01.043468656 +0000 UTC m=+6137.132741129" watchObservedRunningTime="2026-02-27 17:49:02.032310236 +0000 UTC m=+6138.121582699"
Feb 27 17:49:08 crc kubenswrapper[4830]: E0227 17:49:08.767162 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536908-42g5s" podUID="dd747a6d-ccf7-41bd-b8d8-b7480d6d950e"
Feb 27 17:49:09 crc kubenswrapper[4830]: I0227 17:49:09.370973 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-healthmanager-l9rlw"
Feb 27 17:49:09 crc kubenswrapper[4830]: E0227 17:49:09.765033 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"octavia-amphora-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/octavia-image-upload-59f8cff499-8m7cr" podUID="fd025d27-c829-4a6f-a7c5-7399538b0872"
Feb 27 17:49:10 crc kubenswrapper[4830]: I0227 17:49:10.260081 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-housekeeping-jvpsh"
Feb 27 17:49:10 crc kubenswrapper[4830]: E0227 17:49:10.765783 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536898-vrwjs" podUID="204eb1af-36ad-4de7-9da7-9a37fefd3026"
Feb 27 17:49:11 crc kubenswrapper[4830]: I0227 17:49:11.022514 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-worker-zbtll"
Feb 27 17:49:22 crc kubenswrapper[4830]: E0227 17:49:22.764866 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536908-42g5s" podUID="dd747a6d-ccf7-41bd-b8d8-b7480d6d950e"
Feb 27 17:49:23 crc kubenswrapper[4830]: E0227 17:49:23.766858 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"octavia-amphora-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/octavia-image-upload-59f8cff499-8m7cr" podUID="fd025d27-c829-4a6f-a7c5-7399538b0872"
Feb 27 17:49:25 crc kubenswrapper[4830]: I0227 17:49:25.766128 4830 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 27 17:49:27 crc kubenswrapper[4830]: I0227 17:49:27.394896 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536898-vrwjs" event={"ID":"204eb1af-36ad-4de7-9da7-9a37fefd3026","Type":"ContainerStarted","Data":"fa6d45dcce0156eb5a88ac195afd7712744613e8cabf76b9bdad3464a2496b86"}
Feb 27 17:49:27 crc kubenswrapper[4830]: I0227 17:49:27.429379 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29536898-vrwjs" podStartSLOduration=1.662892425 podStartE2EDuration="11m27.429335237s" podCreationTimestamp="2026-02-27 17:38:00 +0000 UTC" firstStartedPulling="2026-02-27 17:38:01.041473209 +0000 UTC m=+5477.130745702" lastFinishedPulling="2026-02-27 17:49:26.807916031 +0000 UTC m=+6162.897188514" observedRunningTime="2026-02-27 17:49:27.417268819 +0000 UTC m=+6163.506541292" watchObservedRunningTime="2026-02-27 17:49:27.429335237 +0000 UTC m=+6163.518607700"
Feb 27 17:49:28 crc kubenswrapper[4830]: I0227 17:49:28.408888 4830 generic.go:334] "Generic (PLEG): container finished" podID="204eb1af-36ad-4de7-9da7-9a37fefd3026" containerID="fa6d45dcce0156eb5a88ac195afd7712744613e8cabf76b9bdad3464a2496b86" exitCode=0
Feb 27 17:49:28 crc kubenswrapper[4830]: I0227 17:49:28.409047 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536898-vrwjs" event={"ID":"204eb1af-36ad-4de7-9da7-9a37fefd3026","Type":"ContainerDied","Data":"fa6d45dcce0156eb5a88ac195afd7712744613e8cabf76b9bdad3464a2496b86"}
Feb 27 17:49:29 crc kubenswrapper[4830]: I0227 17:49:29.899513 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536898-vrwjs"
Feb 27 17:49:30 crc kubenswrapper[4830]: I0227 17:49:30.070262 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mdb7h\" (UniqueName: \"kubernetes.io/projected/204eb1af-36ad-4de7-9da7-9a37fefd3026-kube-api-access-mdb7h\") pod \"204eb1af-36ad-4de7-9da7-9a37fefd3026\" (UID: \"204eb1af-36ad-4de7-9da7-9a37fefd3026\") "
Feb 27 17:49:30 crc kubenswrapper[4830]: I0227 17:49:30.078553 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/204eb1af-36ad-4de7-9da7-9a37fefd3026-kube-api-access-mdb7h" (OuterVolumeSpecName: "kube-api-access-mdb7h") pod "204eb1af-36ad-4de7-9da7-9a37fefd3026" (UID: "204eb1af-36ad-4de7-9da7-9a37fefd3026"). InnerVolumeSpecName "kube-api-access-mdb7h". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 17:49:30 crc kubenswrapper[4830]: I0227 17:49:30.174361 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mdb7h\" (UniqueName: \"kubernetes.io/projected/204eb1af-36ad-4de7-9da7-9a37fefd3026-kube-api-access-mdb7h\") on node \"crc\" DevicePath \"\""
Feb 27 17:49:30 crc kubenswrapper[4830]: I0227 17:49:30.444936 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536898-vrwjs" event={"ID":"204eb1af-36ad-4de7-9da7-9a37fefd3026","Type":"ContainerDied","Data":"3db9d6aea1c2c387a3f3cb880ea977586521a7ea06db01806d487256c1900006"}
Feb 27 17:49:30 crc kubenswrapper[4830]: I0227 17:49:30.445022 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3db9d6aea1c2c387a3f3cb880ea977586521a7ea06db01806d487256c1900006"
Feb 27 17:49:30 crc kubenswrapper[4830]: I0227 17:49:30.445027 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536898-vrwjs"
Feb 27 17:49:30 crc kubenswrapper[4830]: I0227 17:49:30.510690 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536898-vrwjs"]
Feb 27 17:49:30 crc kubenswrapper[4830]: I0227 17:49:30.520891 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536898-vrwjs"]
Feb 27 17:49:30 crc kubenswrapper[4830]: I0227 17:49:30.781512 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="204eb1af-36ad-4de7-9da7-9a37fefd3026" path="/var/lib/kubelet/pods/204eb1af-36ad-4de7-9da7-9a37fefd3026/volumes"
Feb 27 17:49:34 crc kubenswrapper[4830]: I0227 17:49:34.059358 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-9b84v"]
Feb 27 17:49:34 crc kubenswrapper[4830]: I0227 17:49:34.073705 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-9b84v"]
Feb
27 17:49:34 crc kubenswrapper[4830]: E0227 17:49:34.775072 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"octavia-amphora-httpd\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/ubi9/httpd-24:latest\\\"\"" pod="openstack/octavia-image-upload-59f8cff499-8m7cr" podUID="fd025d27-c829-4a6f-a7c5-7399538b0872" Feb 27 17:49:34 crc kubenswrapper[4830]: I0227 17:49:34.794734 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b82dc08e-a7da-4563-af1b-25e6f06b353a" path="/var/lib/kubelet/pods/b82dc08e-a7da-4563-af1b-25e6f06b353a/volumes" Feb 27 17:49:36 crc kubenswrapper[4830]: I0227 17:49:36.047750 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-5ba2-account-create-update-jmd88"] Feb 27 17:49:36 crc kubenswrapper[4830]: I0227 17:49:36.068240 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-5ba2-account-create-update-jmd88"] Feb 27 17:49:36 crc kubenswrapper[4830]: I0227 17:49:36.774799 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd776b00-a862-464a-b2f5-bd60682f924c" path="/var/lib/kubelet/pods/cd776b00-a862-464a-b2f5-bd60682f924c/volumes" Feb 27 17:49:37 crc kubenswrapper[4830]: I0227 17:49:37.540558 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536908-42g5s" event={"ID":"dd747a6d-ccf7-41bd-b8d8-b7480d6d950e","Type":"ContainerStarted","Data":"38d463762cb6f6f960f6a295c4de3ff134a0de7d8d84fc6a56cf3b0b761e49a3"} Feb 27 17:49:37 crc kubenswrapper[4830]: I0227 17:49:37.598895 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29536908-42g5s" podStartSLOduration=1.520227838 podStartE2EDuration="1m37.598862124s" podCreationTimestamp="2026-02-27 17:48:00 +0000 UTC" firstStartedPulling="2026-02-27 17:48:01.046044593 +0000 UTC m=+6077.135317066" lastFinishedPulling="2026-02-27 
17:49:37.124678839 +0000 UTC m=+6173.213951352" observedRunningTime="2026-02-27 17:49:37.561663235 +0000 UTC m=+6173.650935788" watchObservedRunningTime="2026-02-27 17:49:37.598862124 +0000 UTC m=+6173.688134617" Feb 27 17:49:38 crc kubenswrapper[4830]: I0227 17:49:38.560948 4830 generic.go:334] "Generic (PLEG): container finished" podID="dd747a6d-ccf7-41bd-b8d8-b7480d6d950e" containerID="38d463762cb6f6f960f6a295c4de3ff134a0de7d8d84fc6a56cf3b0b761e49a3" exitCode=0 Feb 27 17:49:38 crc kubenswrapper[4830]: I0227 17:49:38.561072 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536908-42g5s" event={"ID":"dd747a6d-ccf7-41bd-b8d8-b7480d6d950e","Type":"ContainerDied","Data":"38d463762cb6f6f960f6a295c4de3ff134a0de7d8d84fc6a56cf3b0b761e49a3"} Feb 27 17:49:40 crc kubenswrapper[4830]: I0227 17:49:40.025316 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536908-42g5s" Feb 27 17:49:40 crc kubenswrapper[4830]: I0227 17:49:40.055939 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-djgjz"] Feb 27 17:49:40 crc kubenswrapper[4830]: I0227 17:49:40.067423 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-djgjz"] Feb 27 17:49:40 crc kubenswrapper[4830]: I0227 17:49:40.149838 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-58p4h\" (UniqueName: \"kubernetes.io/projected/dd747a6d-ccf7-41bd-b8d8-b7480d6d950e-kube-api-access-58p4h\") pod \"dd747a6d-ccf7-41bd-b8d8-b7480d6d950e\" (UID: \"dd747a6d-ccf7-41bd-b8d8-b7480d6d950e\") " Feb 27 17:49:40 crc kubenswrapper[4830]: I0227 17:49:40.156853 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd747a6d-ccf7-41bd-b8d8-b7480d6d950e-kube-api-access-58p4h" (OuterVolumeSpecName: "kube-api-access-58p4h") pod "dd747a6d-ccf7-41bd-b8d8-b7480d6d950e" (UID: 
"dd747a6d-ccf7-41bd-b8d8-b7480d6d950e"). InnerVolumeSpecName "kube-api-access-58p4h". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:49:40 crc kubenswrapper[4830]: I0227 17:49:40.253453 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-58p4h\" (UniqueName: \"kubernetes.io/projected/dd747a6d-ccf7-41bd-b8d8-b7480d6d950e-kube-api-access-58p4h\") on node \"crc\" DevicePath \"\"" Feb 27 17:49:40 crc kubenswrapper[4830]: I0227 17:49:40.589414 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536908-42g5s" event={"ID":"dd747a6d-ccf7-41bd-b8d8-b7480d6d950e","Type":"ContainerDied","Data":"dcf92036805c8a1b15cf452adbe090039907c9893f0e7ed9226382d2f2dde978"} Feb 27 17:49:40 crc kubenswrapper[4830]: I0227 17:49:40.589460 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dcf92036805c8a1b15cf452adbe090039907c9893f0e7ed9226382d2f2dde978" Feb 27 17:49:40 crc kubenswrapper[4830]: I0227 17:49:40.589547 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536908-42g5s" Feb 27 17:49:40 crc kubenswrapper[4830]: I0227 17:49:40.662505 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536902-2942n"] Feb 27 17:49:40 crc kubenswrapper[4830]: I0227 17:49:40.672826 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536902-2942n"] Feb 27 17:49:40 crc kubenswrapper[4830]: I0227 17:49:40.779846 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ec03666-94da-435d-bfc4-5b7f8ed237b2" path="/var/lib/kubelet/pods/5ec03666-94da-435d-bfc4-5b7f8ed237b2/volumes" Feb 27 17:49:40 crc kubenswrapper[4830]: I0227 17:49:40.781436 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="840b1cf6-0ffb-47c8-9dac-779004f691b0" path="/var/lib/kubelet/pods/840b1cf6-0ffb-47c8-9dac-779004f691b0/volumes" Feb 27 17:49:42 crc kubenswrapper[4830]: E0227 17:49:42.792676 4830 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-operator-index@sha256=340dbaa786c584e5ffe05a0f79571b9c2fe7d16a1a1fb390e5d83b437d7a1ff3/signature-3: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 27 17:49:42 crc kubenswrapper[4830]: E0227 17:49:42.793427 4830 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t9tjs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-gbcl6_openshift-marketplace(90e915d6-d74a-4f5b-a8da-8f0f2acdda48): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-operator-index@sha256=340dbaa786c584e5ffe05a0f79571b9c2fe7d16a1a1fb390e5d83b437d7a1ff3/signature-3: status 500 (Internal Server Error)" logger="UnhandledError" Feb 27 17:49:42 crc kubenswrapper[4830]: E0227 17:49:42.794619 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading 
signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-operator-index@sha256=340dbaa786c584e5ffe05a0f79571b9c2fe7d16a1a1fb390e5d83b437d7a1ff3/signature-3: status 500 (Internal Server Error)\"" pod="openshift-marketplace/redhat-operators-gbcl6" podUID="90e915d6-d74a-4f5b-a8da-8f0f2acdda48" Feb 27 17:49:52 crc kubenswrapper[4830]: I0227 17:49:52.804225 4830 scope.go:117] "RemoveContainer" containerID="5a33dc38119460dac374b266d5f931d6a5fc8cd244d372f370999214ad65d58f" Feb 27 17:49:52 crc kubenswrapper[4830]: I0227 17:49:52.908167 4830 scope.go:117] "RemoveContainer" containerID="c8127dfea0e640ca387461852677ae251653369b15e612b21a844f5474210fa1" Feb 27 17:49:52 crc kubenswrapper[4830]: I0227 17:49:52.979932 4830 scope.go:117] "RemoveContainer" containerID="c25e29a5c819cf324ba7ab3dec326fbb20097cf6d51fe143e8ab2797af03800c" Feb 27 17:49:53 crc kubenswrapper[4830]: I0227 17:49:53.053716 4830 scope.go:117] "RemoveContainer" containerID="66d60c3b592c6831df559473b6d404fbaf00c6d1b56cf75eadbe991ab774a372" Feb 27 17:49:53 crc kubenswrapper[4830]: I0227 17:49:53.789565 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-59f8cff499-8m7cr" event={"ID":"fd025d27-c829-4a6f-a7c5-7399538b0872","Type":"ContainerStarted","Data":"80df712b3d130c6a1990b366a41be056a917f812a66b017a06de8c8c83eaf523"} Feb 27 17:49:53 crc kubenswrapper[4830]: I0227 17:49:53.820211 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-image-upload-59f8cff499-8m7cr" podStartSLOduration=1.5610953250000001 podStartE2EDuration="2m4.820174588s" podCreationTimestamp="2026-02-27 17:47:49 +0000 UTC" firstStartedPulling="2026-02-27 17:47:49.971571512 +0000 UTC m=+6066.060843975" lastFinishedPulling="2026-02-27 17:49:53.230650785 +0000 UTC m=+6189.319923238" observedRunningTime="2026-02-27 17:49:53.805358104 +0000 UTC m=+6189.894630577" watchObservedRunningTime="2026-02-27 17:49:53.820174588 +0000 UTC m=+6189.909447071" Feb 
27 17:49:56 crc kubenswrapper[4830]: E0227 17:49:56.767843 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gbcl6" podUID="90e915d6-d74a-4f5b-a8da-8f0f2acdda48" Feb 27 17:50:00 crc kubenswrapper[4830]: I0227 17:50:00.220790 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536910-jpww7"] Feb 27 17:50:00 crc kubenswrapper[4830]: E0227 17:50:00.222235 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="204eb1af-36ad-4de7-9da7-9a37fefd3026" containerName="oc" Feb 27 17:50:00 crc kubenswrapper[4830]: I0227 17:50:00.222254 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="204eb1af-36ad-4de7-9da7-9a37fefd3026" containerName="oc" Feb 27 17:50:00 crc kubenswrapper[4830]: E0227 17:50:00.222271 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd747a6d-ccf7-41bd-b8d8-b7480d6d950e" containerName="oc" Feb 27 17:50:00 crc kubenswrapper[4830]: I0227 17:50:00.222279 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd747a6d-ccf7-41bd-b8d8-b7480d6d950e" containerName="oc" Feb 27 17:50:00 crc kubenswrapper[4830]: I0227 17:50:00.222503 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="204eb1af-36ad-4de7-9da7-9a37fefd3026" containerName="oc" Feb 27 17:50:00 crc kubenswrapper[4830]: I0227 17:50:00.222531 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd747a6d-ccf7-41bd-b8d8-b7480d6d950e" containerName="oc" Feb 27 17:50:00 crc kubenswrapper[4830]: I0227 17:50:00.223478 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536910-jpww7" Feb 27 17:50:00 crc kubenswrapper[4830]: I0227 17:50:00.231269 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lmbtm" Feb 27 17:50:00 crc kubenswrapper[4830]: I0227 17:50:00.231407 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 17:50:00 crc kubenswrapper[4830]: I0227 17:50:00.231746 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 17:50:00 crc kubenswrapper[4830]: I0227 17:50:00.247324 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536910-jpww7"] Feb 27 17:50:00 crc kubenswrapper[4830]: I0227 17:50:00.355136 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzl7z\" (UniqueName: \"kubernetes.io/projected/ec982c69-2d78-4ebd-beb8-d2b640955d6f-kube-api-access-tzl7z\") pod \"auto-csr-approver-29536910-jpww7\" (UID: \"ec982c69-2d78-4ebd-beb8-d2b640955d6f\") " pod="openshift-infra/auto-csr-approver-29536910-jpww7" Feb 27 17:50:00 crc kubenswrapper[4830]: I0227 17:50:00.458080 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tzl7z\" (UniqueName: \"kubernetes.io/projected/ec982c69-2d78-4ebd-beb8-d2b640955d6f-kube-api-access-tzl7z\") pod \"auto-csr-approver-29536910-jpww7\" (UID: \"ec982c69-2d78-4ebd-beb8-d2b640955d6f\") " pod="openshift-infra/auto-csr-approver-29536910-jpww7" Feb 27 17:50:00 crc kubenswrapper[4830]: I0227 17:50:00.483346 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tzl7z\" (UniqueName: \"kubernetes.io/projected/ec982c69-2d78-4ebd-beb8-d2b640955d6f-kube-api-access-tzl7z\") pod \"auto-csr-approver-29536910-jpww7\" (UID: \"ec982c69-2d78-4ebd-beb8-d2b640955d6f\") " 
pod="openshift-infra/auto-csr-approver-29536910-jpww7" Feb 27 17:50:00 crc kubenswrapper[4830]: I0227 17:50:00.558242 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536910-jpww7" Feb 27 17:50:01 crc kubenswrapper[4830]: I0227 17:50:01.099806 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536910-jpww7"] Feb 27 17:50:01 crc kubenswrapper[4830]: I0227 17:50:01.899991 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536910-jpww7" event={"ID":"ec982c69-2d78-4ebd-beb8-d2b640955d6f","Type":"ContainerStarted","Data":"103db2dce4411daefa5a68473b36a892705b4f9752193fb887471f1284e9854d"} Feb 27 17:50:03 crc kubenswrapper[4830]: I0227 17:50:03.160056 4830 patch_prober.go:28] interesting pod/machine-config-daemon-2tv5v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 17:50:03 crc kubenswrapper[4830]: I0227 17:50:03.160654 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 17:50:03 crc kubenswrapper[4830]: I0227 17:50:03.932345 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536910-jpww7" event={"ID":"ec982c69-2d78-4ebd-beb8-d2b640955d6f","Type":"ContainerStarted","Data":"79563642ace9758e8b592e78f9523911e3bb953444a953e0c63d0f7bbac7d789"} Feb 27 17:50:04 crc kubenswrapper[4830]: I0227 17:50:04.026376 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-infra/auto-csr-approver-29536910-jpww7" podStartSLOduration=2.684096313 podStartE2EDuration="4.026352131s" podCreationTimestamp="2026-02-27 17:50:00 +0000 UTC" firstStartedPulling="2026-02-27 17:50:01.109181302 +0000 UTC m=+6197.198453775" lastFinishedPulling="2026-02-27 17:50:02.45143712 +0000 UTC m=+6198.540709593" observedRunningTime="2026-02-27 17:50:03.96313337 +0000 UTC m=+6200.052405833" watchObservedRunningTime="2026-02-27 17:50:04.026352131 +0000 UTC m=+6200.115624764" Feb 27 17:50:04 crc kubenswrapper[4830]: I0227 17:50:04.902318 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-4wqjr"] Feb 27 17:50:04 crc kubenswrapper[4830]: I0227 17:50:04.921906 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4wqjr"] Feb 27 17:50:04 crc kubenswrapper[4830]: I0227 17:50:04.922033 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4wqjr" Feb 27 17:50:04 crc kubenswrapper[4830]: I0227 17:50:04.970853 4830 generic.go:334] "Generic (PLEG): container finished" podID="ec982c69-2d78-4ebd-beb8-d2b640955d6f" containerID="79563642ace9758e8b592e78f9523911e3bb953444a953e0c63d0f7bbac7d789" exitCode=0 Feb 27 17:50:04 crc kubenswrapper[4830]: I0227 17:50:04.971266 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536910-jpww7" event={"ID":"ec982c69-2d78-4ebd-beb8-d2b640955d6f","Type":"ContainerDied","Data":"79563642ace9758e8b592e78f9523911e3bb953444a953e0c63d0f7bbac7d789"} Feb 27 17:50:04 crc kubenswrapper[4830]: I0227 17:50:04.987341 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c203a516-7e0f-4255-a789-c5f10a297916-catalog-content\") pod \"certified-operators-4wqjr\" (UID: \"c203a516-7e0f-4255-a789-c5f10a297916\") " 
pod="openshift-marketplace/certified-operators-4wqjr" Feb 27 17:50:04 crc kubenswrapper[4830]: I0227 17:50:04.987425 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkdfr\" (UniqueName: \"kubernetes.io/projected/c203a516-7e0f-4255-a789-c5f10a297916-kube-api-access-vkdfr\") pod \"certified-operators-4wqjr\" (UID: \"c203a516-7e0f-4255-a789-c5f10a297916\") " pod="openshift-marketplace/certified-operators-4wqjr" Feb 27 17:50:04 crc kubenswrapper[4830]: I0227 17:50:04.987478 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c203a516-7e0f-4255-a789-c5f10a297916-utilities\") pod \"certified-operators-4wqjr\" (UID: \"c203a516-7e0f-4255-a789-c5f10a297916\") " pod="openshift-marketplace/certified-operators-4wqjr" Feb 27 17:50:05 crc kubenswrapper[4830]: I0227 17:50:05.089675 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkdfr\" (UniqueName: \"kubernetes.io/projected/c203a516-7e0f-4255-a789-c5f10a297916-kube-api-access-vkdfr\") pod \"certified-operators-4wqjr\" (UID: \"c203a516-7e0f-4255-a789-c5f10a297916\") " pod="openshift-marketplace/certified-operators-4wqjr" Feb 27 17:50:05 crc kubenswrapper[4830]: I0227 17:50:05.089749 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c203a516-7e0f-4255-a789-c5f10a297916-utilities\") pod \"certified-operators-4wqjr\" (UID: \"c203a516-7e0f-4255-a789-c5f10a297916\") " pod="openshift-marketplace/certified-operators-4wqjr" Feb 27 17:50:05 crc kubenswrapper[4830]: I0227 17:50:05.089897 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c203a516-7e0f-4255-a789-c5f10a297916-catalog-content\") pod \"certified-operators-4wqjr\" (UID: 
\"c203a516-7e0f-4255-a789-c5f10a297916\") " pod="openshift-marketplace/certified-operators-4wqjr" Feb 27 17:50:05 crc kubenswrapper[4830]: I0227 17:50:05.090414 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c203a516-7e0f-4255-a789-c5f10a297916-utilities\") pod \"certified-operators-4wqjr\" (UID: \"c203a516-7e0f-4255-a789-c5f10a297916\") " pod="openshift-marketplace/certified-operators-4wqjr" Feb 27 17:50:05 crc kubenswrapper[4830]: I0227 17:50:05.094416 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c203a516-7e0f-4255-a789-c5f10a297916-catalog-content\") pod \"certified-operators-4wqjr\" (UID: \"c203a516-7e0f-4255-a789-c5f10a297916\") " pod="openshift-marketplace/certified-operators-4wqjr" Feb 27 17:50:05 crc kubenswrapper[4830]: I0227 17:50:05.113631 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vkdfr\" (UniqueName: \"kubernetes.io/projected/c203a516-7e0f-4255-a789-c5f10a297916-kube-api-access-vkdfr\") pod \"certified-operators-4wqjr\" (UID: \"c203a516-7e0f-4255-a789-c5f10a297916\") " pod="openshift-marketplace/certified-operators-4wqjr" Feb 27 17:50:05 crc kubenswrapper[4830]: I0227 17:50:05.276314 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4wqjr" Feb 27 17:50:05 crc kubenswrapper[4830]: I0227 17:50:05.850569 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4wqjr"] Feb 27 17:50:05 crc kubenswrapper[4830]: W0227 17:50:05.853148 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc203a516_7e0f_4255_a789_c5f10a297916.slice/crio-dd6c917917bb8b3849f1f0f02a80fd76d69d0ce6ef1399d0c9b55f605bc33b09 WatchSource:0}: Error finding container dd6c917917bb8b3849f1f0f02a80fd76d69d0ce6ef1399d0c9b55f605bc33b09: Status 404 returned error can't find the container with id dd6c917917bb8b3849f1f0f02a80fd76d69d0ce6ef1399d0c9b55f605bc33b09 Feb 27 17:50:05 crc kubenswrapper[4830]: I0227 17:50:05.984541 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4wqjr" event={"ID":"c203a516-7e0f-4255-a789-c5f10a297916","Type":"ContainerStarted","Data":"dd6c917917bb8b3849f1f0f02a80fd76d69d0ce6ef1399d0c9b55f605bc33b09"} Feb 27 17:50:06 crc kubenswrapper[4830]: I0227 17:50:06.402275 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536910-jpww7" Feb 27 17:50:06 crc kubenswrapper[4830]: I0227 17:50:06.525233 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tzl7z\" (UniqueName: \"kubernetes.io/projected/ec982c69-2d78-4ebd-beb8-d2b640955d6f-kube-api-access-tzl7z\") pod \"ec982c69-2d78-4ebd-beb8-d2b640955d6f\" (UID: \"ec982c69-2d78-4ebd-beb8-d2b640955d6f\") " Feb 27 17:50:06 crc kubenswrapper[4830]: I0227 17:50:06.534610 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec982c69-2d78-4ebd-beb8-d2b640955d6f-kube-api-access-tzl7z" (OuterVolumeSpecName: "kube-api-access-tzl7z") pod "ec982c69-2d78-4ebd-beb8-d2b640955d6f" (UID: "ec982c69-2d78-4ebd-beb8-d2b640955d6f"). InnerVolumeSpecName "kube-api-access-tzl7z". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:50:06 crc kubenswrapper[4830]: I0227 17:50:06.631599 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tzl7z\" (UniqueName: \"kubernetes.io/projected/ec982c69-2d78-4ebd-beb8-d2b640955d6f-kube-api-access-tzl7z\") on node \"crc\" DevicePath \"\"" Feb 27 17:50:07 crc kubenswrapper[4830]: I0227 17:50:07.000930 4830 generic.go:334] "Generic (PLEG): container finished" podID="c203a516-7e0f-4255-a789-c5f10a297916" containerID="d3638db2958b4dcbc2bcc731785645095b55346ae36d1950a7aef2b36eb0379d" exitCode=0 Feb 27 17:50:07 crc kubenswrapper[4830]: I0227 17:50:07.001043 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4wqjr" event={"ID":"c203a516-7e0f-4255-a789-c5f10a297916","Type":"ContainerDied","Data":"d3638db2958b4dcbc2bcc731785645095b55346ae36d1950a7aef2b36eb0379d"} Feb 27 17:50:07 crc kubenswrapper[4830]: I0227 17:50:07.004152 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536910-jpww7" 
event={"ID":"ec982c69-2d78-4ebd-beb8-d2b640955d6f","Type":"ContainerDied","Data":"103db2dce4411daefa5a68473b36a892705b4f9752193fb887471f1284e9854d"} Feb 27 17:50:07 crc kubenswrapper[4830]: I0227 17:50:07.004235 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="103db2dce4411daefa5a68473b36a892705b4f9752193fb887471f1284e9854d" Feb 27 17:50:07 crc kubenswrapper[4830]: I0227 17:50:07.004264 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536910-jpww7" Feb 27 17:50:07 crc kubenswrapper[4830]: I0227 17:50:07.102647 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536904-jrdqt"] Feb 27 17:50:07 crc kubenswrapper[4830]: I0227 17:50:07.116677 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536904-jrdqt"] Feb 27 17:50:08 crc kubenswrapper[4830]: I0227 17:50:08.782015 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="77856f9c-1131-4857-9fff-bddf1d27b5d3" path="/var/lib/kubelet/pods/77856f9c-1131-4857-9fff-bddf1d27b5d3/volumes" Feb 27 17:50:09 crc kubenswrapper[4830]: I0227 17:50:09.045460 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-0cac-account-create-update-j4sk4"] Feb 27 17:50:09 crc kubenswrapper[4830]: I0227 17:50:09.054211 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-tvmtc"] Feb 27 17:50:09 crc kubenswrapper[4830]: I0227 17:50:09.063110 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-0cac-account-create-update-j4sk4"] Feb 27 17:50:09 crc kubenswrapper[4830]: I0227 17:50:09.068989 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4wqjr" event={"ID":"c203a516-7e0f-4255-a789-c5f10a297916","Type":"ContainerStarted","Data":"f2c03c07b96eb6df4ac8905cbb304a9a9a7e8dab1f6e40bea85943c6a6823885"} 
Feb 27 17:50:09 crc kubenswrapper[4830]: I0227 17:50:09.071476 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-tvmtc"]
Feb 27 17:50:09 crc kubenswrapper[4830]: E0227 17:50:09.781471 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gbcl6" podUID="90e915d6-d74a-4f5b-a8da-8f0f2acdda48"
Feb 27 17:50:10 crc kubenswrapper[4830]: I0227 17:50:10.785146 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="10cd9813-51dd-4c03-a406-ef763ae8952f" path="/var/lib/kubelet/pods/10cd9813-51dd-4c03-a406-ef763ae8952f/volumes"
Feb 27 17:50:10 crc kubenswrapper[4830]: I0227 17:50:10.786851 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c330e013-ad56-4282-9e44-1b0ca4ceaf6c" path="/var/lib/kubelet/pods/c330e013-ad56-4282-9e44-1b0ca4ceaf6c/volumes"
Feb 27 17:50:12 crc kubenswrapper[4830]: I0227 17:50:12.128532 4830 generic.go:334] "Generic (PLEG): container finished" podID="c203a516-7e0f-4255-a789-c5f10a297916" containerID="f2c03c07b96eb6df4ac8905cbb304a9a9a7e8dab1f6e40bea85943c6a6823885" exitCode=0
Feb 27 17:50:12 crc kubenswrapper[4830]: I0227 17:50:12.128588 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4wqjr" event={"ID":"c203a516-7e0f-4255-a789-c5f10a297916","Type":"ContainerDied","Data":"f2c03c07b96eb6df4ac8905cbb304a9a9a7e8dab1f6e40bea85943c6a6823885"}
Feb 27 17:50:13 crc kubenswrapper[4830]: I0227 17:50:13.149349 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4wqjr" event={"ID":"c203a516-7e0f-4255-a789-c5f10a297916","Type":"ContainerStarted","Data":"2797910a9fc6e679bc91ea30dffbff3b49fdc62e00bcffeb8b11d7125968ec26"}
Feb 27 17:50:13 crc kubenswrapper[4830]: I0227 17:50:13.185303 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-4wqjr" podStartSLOduration=3.634374398 podStartE2EDuration="9.185261359s" podCreationTimestamp="2026-02-27 17:50:04 +0000 UTC" firstStartedPulling="2026-02-27 17:50:07.008285169 +0000 UTC m=+6203.097557632" lastFinishedPulling="2026-02-27 17:50:12.55917211 +0000 UTC m=+6208.648444593" observedRunningTime="2026-02-27 17:50:13.173322313 +0000 UTC m=+6209.262594776" watchObservedRunningTime="2026-02-27 17:50:13.185261359 +0000 UTC m=+6209.274533862"
Feb 27 17:50:15 crc kubenswrapper[4830]: I0227 17:50:15.276684 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-4wqjr"
Feb 27 17:50:15 crc kubenswrapper[4830]: I0227 17:50:15.277428 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-4wqjr"
Feb 27 17:50:16 crc kubenswrapper[4830]: I0227 17:50:16.355132 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-4wqjr" podUID="c203a516-7e0f-4255-a789-c5f10a297916" containerName="registry-server" probeResult="failure" output=<
Feb 27 17:50:16 crc kubenswrapper[4830]: timeout: failed to connect service ":50051" within 1s
Feb 27 17:50:16 crc kubenswrapper[4830]: >
Feb 27 17:50:18 crc kubenswrapper[4830]: I0227 17:50:18.057420 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-bqrrs"]
Feb 27 17:50:18 crc kubenswrapper[4830]: I0227 17:50:18.075464 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-bqrrs"]
Feb 27 17:50:18 crc kubenswrapper[4830]: I0227 17:50:18.784220 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb46f1e5-ccd7-49e2-9f90-fc2e504dcc27" path="/var/lib/kubelet/pods/cb46f1e5-ccd7-49e2-9f90-fc2e504dcc27/volumes"
Feb 27 17:50:23 crc kubenswrapper[4830]: E0227 17:50:23.766804 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gbcl6" podUID="90e915d6-d74a-4f5b-a8da-8f0f2acdda48"
Feb 27 17:50:25 crc kubenswrapper[4830]: I0227 17:50:25.355980 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-4wqjr"
Feb 27 17:50:25 crc kubenswrapper[4830]: I0227 17:50:25.423368 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-4wqjr"
Feb 27 17:50:25 crc kubenswrapper[4830]: I0227 17:50:25.600394 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4wqjr"]
Feb 27 17:50:27 crc kubenswrapper[4830]: I0227 17:50:27.349407 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-4wqjr" podUID="c203a516-7e0f-4255-a789-c5f10a297916" containerName="registry-server" containerID="cri-o://2797910a9fc6e679bc91ea30dffbff3b49fdc62e00bcffeb8b11d7125968ec26" gracePeriod=2
Feb 27 17:50:28 crc kubenswrapper[4830]: I0227 17:50:28.175206 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4wqjr"
Feb 27 17:50:28 crc kubenswrapper[4830]: I0227 17:50:28.197417 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vkdfr\" (UniqueName: \"kubernetes.io/projected/c203a516-7e0f-4255-a789-c5f10a297916-kube-api-access-vkdfr\") pod \"c203a516-7e0f-4255-a789-c5f10a297916\" (UID: \"c203a516-7e0f-4255-a789-c5f10a297916\") "
Feb 27 17:50:28 crc kubenswrapper[4830]: I0227 17:50:28.197685 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c203a516-7e0f-4255-a789-c5f10a297916-catalog-content\") pod \"c203a516-7e0f-4255-a789-c5f10a297916\" (UID: \"c203a516-7e0f-4255-a789-c5f10a297916\") "
Feb 27 17:50:28 crc kubenswrapper[4830]: I0227 17:50:28.197781 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c203a516-7e0f-4255-a789-c5f10a297916-utilities\") pod \"c203a516-7e0f-4255-a789-c5f10a297916\" (UID: \"c203a516-7e0f-4255-a789-c5f10a297916\") "
Feb 27 17:50:28 crc kubenswrapper[4830]: I0227 17:50:28.198959 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c203a516-7e0f-4255-a789-c5f10a297916-utilities" (OuterVolumeSpecName: "utilities") pod "c203a516-7e0f-4255-a789-c5f10a297916" (UID: "c203a516-7e0f-4255-a789-c5f10a297916"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 27 17:50:28 crc kubenswrapper[4830]: I0227 17:50:28.238666 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c203a516-7e0f-4255-a789-c5f10a297916-kube-api-access-vkdfr" (OuterVolumeSpecName: "kube-api-access-vkdfr") pod "c203a516-7e0f-4255-a789-c5f10a297916" (UID: "c203a516-7e0f-4255-a789-c5f10a297916"). InnerVolumeSpecName "kube-api-access-vkdfr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 17:50:28 crc kubenswrapper[4830]: I0227 17:50:28.282112 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c203a516-7e0f-4255-a789-c5f10a297916-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c203a516-7e0f-4255-a789-c5f10a297916" (UID: "c203a516-7e0f-4255-a789-c5f10a297916"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 27 17:50:28 crc kubenswrapper[4830]: I0227 17:50:28.299723 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vkdfr\" (UniqueName: \"kubernetes.io/projected/c203a516-7e0f-4255-a789-c5f10a297916-kube-api-access-vkdfr\") on node \"crc\" DevicePath \"\""
Feb 27 17:50:28 crc kubenswrapper[4830]: I0227 17:50:28.299749 4830 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c203a516-7e0f-4255-a789-c5f10a297916-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 27 17:50:28 crc kubenswrapper[4830]: I0227 17:50:28.299758 4830 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c203a516-7e0f-4255-a789-c5f10a297916-utilities\") on node \"crc\" DevicePath \"\""
Feb 27 17:50:28 crc kubenswrapper[4830]: I0227 17:50:28.364574 4830 generic.go:334] "Generic (PLEG): container finished" podID="c203a516-7e0f-4255-a789-c5f10a297916" containerID="2797910a9fc6e679bc91ea30dffbff3b49fdc62e00bcffeb8b11d7125968ec26" exitCode=0
Feb 27 17:50:28 crc kubenswrapper[4830]: I0227 17:50:28.364626 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4wqjr" event={"ID":"c203a516-7e0f-4255-a789-c5f10a297916","Type":"ContainerDied","Data":"2797910a9fc6e679bc91ea30dffbff3b49fdc62e00bcffeb8b11d7125968ec26"}
Feb 27 17:50:28 crc kubenswrapper[4830]: I0227 17:50:28.364656 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4wqjr" event={"ID":"c203a516-7e0f-4255-a789-c5f10a297916","Type":"ContainerDied","Data":"dd6c917917bb8b3849f1f0f02a80fd76d69d0ce6ef1399d0c9b55f605bc33b09"}
Feb 27 17:50:28 crc kubenswrapper[4830]: I0227 17:50:28.364667 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4wqjr"
Feb 27 17:50:28 crc kubenswrapper[4830]: I0227 17:50:28.364673 4830 scope.go:117] "RemoveContainer" containerID="2797910a9fc6e679bc91ea30dffbff3b49fdc62e00bcffeb8b11d7125968ec26"
Feb 27 17:50:28 crc kubenswrapper[4830]: I0227 17:50:28.413707 4830 scope.go:117] "RemoveContainer" containerID="f2c03c07b96eb6df4ac8905cbb304a9a9a7e8dab1f6e40bea85943c6a6823885"
Feb 27 17:50:28 crc kubenswrapper[4830]: I0227 17:50:28.448350 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4wqjr"]
Feb 27 17:50:28 crc kubenswrapper[4830]: I0227 17:50:28.466773 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-4wqjr"]
Feb 27 17:50:28 crc kubenswrapper[4830]: I0227 17:50:28.473099 4830 scope.go:117] "RemoveContainer" containerID="d3638db2958b4dcbc2bcc731785645095b55346ae36d1950a7aef2b36eb0379d"
Feb 27 17:50:28 crc kubenswrapper[4830]: I0227 17:50:28.516051 4830 scope.go:117] "RemoveContainer" containerID="2797910a9fc6e679bc91ea30dffbff3b49fdc62e00bcffeb8b11d7125968ec26"
Feb 27 17:50:28 crc kubenswrapper[4830]: E0227 17:50:28.516861 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2797910a9fc6e679bc91ea30dffbff3b49fdc62e00bcffeb8b11d7125968ec26\": container with ID starting with 2797910a9fc6e679bc91ea30dffbff3b49fdc62e00bcffeb8b11d7125968ec26 not found: ID does not exist" containerID="2797910a9fc6e679bc91ea30dffbff3b49fdc62e00bcffeb8b11d7125968ec26"
Feb 27 17:50:28 crc kubenswrapper[4830]: I0227 17:50:28.517023 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2797910a9fc6e679bc91ea30dffbff3b49fdc62e00bcffeb8b11d7125968ec26"} err="failed to get container status \"2797910a9fc6e679bc91ea30dffbff3b49fdc62e00bcffeb8b11d7125968ec26\": rpc error: code = NotFound desc = could not find container \"2797910a9fc6e679bc91ea30dffbff3b49fdc62e00bcffeb8b11d7125968ec26\": container with ID starting with 2797910a9fc6e679bc91ea30dffbff3b49fdc62e00bcffeb8b11d7125968ec26 not found: ID does not exist"
Feb 27 17:50:28 crc kubenswrapper[4830]: I0227 17:50:28.517107 4830 scope.go:117] "RemoveContainer" containerID="f2c03c07b96eb6df4ac8905cbb304a9a9a7e8dab1f6e40bea85943c6a6823885"
Feb 27 17:50:28 crc kubenswrapper[4830]: E0227 17:50:28.517619 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f2c03c07b96eb6df4ac8905cbb304a9a9a7e8dab1f6e40bea85943c6a6823885\": container with ID starting with f2c03c07b96eb6df4ac8905cbb304a9a9a7e8dab1f6e40bea85943c6a6823885 not found: ID does not exist" containerID="f2c03c07b96eb6df4ac8905cbb304a9a9a7e8dab1f6e40bea85943c6a6823885"
Feb 27 17:50:28 crc kubenswrapper[4830]: I0227 17:50:28.517753 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f2c03c07b96eb6df4ac8905cbb304a9a9a7e8dab1f6e40bea85943c6a6823885"} err="failed to get container status \"f2c03c07b96eb6df4ac8905cbb304a9a9a7e8dab1f6e40bea85943c6a6823885\": rpc error: code = NotFound desc = could not find container \"f2c03c07b96eb6df4ac8905cbb304a9a9a7e8dab1f6e40bea85943c6a6823885\": container with ID starting with f2c03c07b96eb6df4ac8905cbb304a9a9a7e8dab1f6e40bea85943c6a6823885 not found: ID does not exist"
Feb 27 17:50:28 crc kubenswrapper[4830]: I0227 17:50:28.517837 4830 scope.go:117] "RemoveContainer" containerID="d3638db2958b4dcbc2bcc731785645095b55346ae36d1950a7aef2b36eb0379d"
Feb 27 17:50:28 crc kubenswrapper[4830]: E0227 17:50:28.518295 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d3638db2958b4dcbc2bcc731785645095b55346ae36d1950a7aef2b36eb0379d\": container with ID starting with d3638db2958b4dcbc2bcc731785645095b55346ae36d1950a7aef2b36eb0379d not found: ID does not exist" containerID="d3638db2958b4dcbc2bcc731785645095b55346ae36d1950a7aef2b36eb0379d"
Feb 27 17:50:28 crc kubenswrapper[4830]: I0227 17:50:28.518386 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3638db2958b4dcbc2bcc731785645095b55346ae36d1950a7aef2b36eb0379d"} err="failed to get container status \"d3638db2958b4dcbc2bcc731785645095b55346ae36d1950a7aef2b36eb0379d\": rpc error: code = NotFound desc = could not find container \"d3638db2958b4dcbc2bcc731785645095b55346ae36d1950a7aef2b36eb0379d\": container with ID starting with d3638db2958b4dcbc2bcc731785645095b55346ae36d1950a7aef2b36eb0379d not found: ID does not exist"
Feb 27 17:50:28 crc kubenswrapper[4830]: I0227 17:50:28.702748 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-image-upload-59f8cff499-8m7cr"]
Feb 27 17:50:28 crc kubenswrapper[4830]: I0227 17:50:28.703069 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/octavia-image-upload-59f8cff499-8m7cr" podUID="fd025d27-c829-4a6f-a7c5-7399538b0872" containerName="octavia-amphora-httpd" containerID="cri-o://80df712b3d130c6a1990b366a41be056a917f812a66b017a06de8c8c83eaf523" gracePeriod=30
Feb 27 17:50:28 crc kubenswrapper[4830]: I0227 17:50:28.775598 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c203a516-7e0f-4255-a789-c5f10a297916" path="/var/lib/kubelet/pods/c203a516-7e0f-4255-a789-c5f10a297916/volumes"
Feb 27 17:50:29 crc kubenswrapper[4830]: I0227 17:50:29.378768 4830 generic.go:334] "Generic (PLEG): container finished" podID="fd025d27-c829-4a6f-a7c5-7399538b0872" containerID="80df712b3d130c6a1990b366a41be056a917f812a66b017a06de8c8c83eaf523" exitCode=0
Feb 27 17:50:29 crc kubenswrapper[4830]: I0227 17:50:29.378988 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-59f8cff499-8m7cr" event={"ID":"fd025d27-c829-4a6f-a7c5-7399538b0872","Type":"ContainerDied","Data":"80df712b3d130c6a1990b366a41be056a917f812a66b017a06de8c8c83eaf523"}
Feb 27 17:50:29 crc kubenswrapper[4830]: I0227 17:50:29.845892 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-image-upload-59f8cff499-8m7cr"
Feb 27 17:50:29 crc kubenswrapper[4830]: I0227 17:50:29.938800 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/fd025d27-c829-4a6f-a7c5-7399538b0872-amphora-image\") pod \"fd025d27-c829-4a6f-a7c5-7399538b0872\" (UID: \"fd025d27-c829-4a6f-a7c5-7399538b0872\") "
Feb 27 17:50:29 crc kubenswrapper[4830]: I0227 17:50:29.939065 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/fd025d27-c829-4a6f-a7c5-7399538b0872-httpd-config\") pod \"fd025d27-c829-4a6f-a7c5-7399538b0872\" (UID: \"fd025d27-c829-4a6f-a7c5-7399538b0872\") "
Feb 27 17:50:29 crc kubenswrapper[4830]: I0227 17:50:29.978122 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd025d27-c829-4a6f-a7c5-7399538b0872-amphora-image" (OuterVolumeSpecName: "amphora-image") pod "fd025d27-c829-4a6f-a7c5-7399538b0872" (UID: "fd025d27-c829-4a6f-a7c5-7399538b0872"). InnerVolumeSpecName "amphora-image". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 27 17:50:30 crc kubenswrapper[4830]: I0227 17:50:30.001742 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd025d27-c829-4a6f-a7c5-7399538b0872-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "fd025d27-c829-4a6f-a7c5-7399538b0872" (UID: "fd025d27-c829-4a6f-a7c5-7399538b0872"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 17:50:30 crc kubenswrapper[4830]: I0227 17:50:30.042839 4830 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/fd025d27-c829-4a6f-a7c5-7399538b0872-httpd-config\") on node \"crc\" DevicePath \"\""
Feb 27 17:50:30 crc kubenswrapper[4830]: I0227 17:50:30.042874 4830 reconciler_common.go:293] "Volume detached for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/fd025d27-c829-4a6f-a7c5-7399538b0872-amphora-image\") on node \"crc\" DevicePath \"\""
Feb 27 17:50:30 crc kubenswrapper[4830]: I0227 17:50:30.400610 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-59f8cff499-8m7cr" event={"ID":"fd025d27-c829-4a6f-a7c5-7399538b0872","Type":"ContainerDied","Data":"7a2aaae1e22c41707865fa1f3043606f243fb8899deec2ccb1c5f6d128b630c3"}
Feb 27 17:50:30 crc kubenswrapper[4830]: I0227 17:50:30.400701 4830 scope.go:117] "RemoveContainer" containerID="80df712b3d130c6a1990b366a41be056a917f812a66b017a06de8c8c83eaf523"
Feb 27 17:50:30 crc kubenswrapper[4830]: I0227 17:50:30.400942 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-image-upload-59f8cff499-8m7cr"
Feb 27 17:50:30 crc kubenswrapper[4830]: I0227 17:50:30.444427 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-image-upload-59f8cff499-8m7cr"]
Feb 27 17:50:30 crc kubenswrapper[4830]: I0227 17:50:30.451410 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-image-upload-59f8cff499-8m7cr"]
Feb 27 17:50:30 crc kubenswrapper[4830]: I0227 17:50:30.456057 4830 scope.go:117] "RemoveContainer" containerID="550f0ca61057194779530fb5b5ed940d96faca992015dd954881e6beb2a75632"
Feb 27 17:50:30 crc kubenswrapper[4830]: I0227 17:50:30.786598 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd025d27-c829-4a6f-a7c5-7399538b0872" path="/var/lib/kubelet/pods/fd025d27-c829-4a6f-a7c5-7399538b0872/volumes"
Feb 27 17:50:33 crc kubenswrapper[4830]: I0227 17:50:33.160498 4830 patch_prober.go:28] interesting pod/machine-config-daemon-2tv5v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 27 17:50:33 crc kubenswrapper[4830]: I0227 17:50:33.162702 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 27 17:50:33 crc kubenswrapper[4830]: I0227 17:50:33.374718 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-7fffd66c5c-klpbv"]
Feb 27 17:50:33 crc kubenswrapper[4830]: E0227 17:50:33.375323 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c203a516-7e0f-4255-a789-c5f10a297916" containerName="extract-utilities"
Feb 27 17:50:33 crc kubenswrapper[4830]: I0227 17:50:33.375345 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="c203a516-7e0f-4255-a789-c5f10a297916" containerName="extract-utilities"
Feb 27 17:50:33 crc kubenswrapper[4830]: E0227 17:50:33.375369 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd025d27-c829-4a6f-a7c5-7399538b0872" containerName="octavia-amphora-httpd"
Feb 27 17:50:33 crc kubenswrapper[4830]: I0227 17:50:33.375377 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd025d27-c829-4a6f-a7c5-7399538b0872" containerName="octavia-amphora-httpd"
Feb 27 17:50:33 crc kubenswrapper[4830]: E0227 17:50:33.375393 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd025d27-c829-4a6f-a7c5-7399538b0872" containerName="init"
Feb 27 17:50:33 crc kubenswrapper[4830]: I0227 17:50:33.375401 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd025d27-c829-4a6f-a7c5-7399538b0872" containerName="init"
Feb 27 17:50:33 crc kubenswrapper[4830]: E0227 17:50:33.375420 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec982c69-2d78-4ebd-beb8-d2b640955d6f" containerName="oc"
Feb 27 17:50:33 crc kubenswrapper[4830]: I0227 17:50:33.375430 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec982c69-2d78-4ebd-beb8-d2b640955d6f" containerName="oc"
Feb 27 17:50:33 crc kubenswrapper[4830]: E0227 17:50:33.375446 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c203a516-7e0f-4255-a789-c5f10a297916" containerName="extract-content"
Feb 27 17:50:33 crc kubenswrapper[4830]: I0227 17:50:33.375455 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="c203a516-7e0f-4255-a789-c5f10a297916" containerName="extract-content"
Feb 27 17:50:33 crc kubenswrapper[4830]: E0227 17:50:33.375470 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c203a516-7e0f-4255-a789-c5f10a297916" containerName="registry-server"
Feb 27 17:50:33 crc kubenswrapper[4830]: I0227 17:50:33.375478 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="c203a516-7e0f-4255-a789-c5f10a297916" containerName="registry-server"
Feb 27 17:50:33 crc kubenswrapper[4830]: I0227 17:50:33.375736 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd025d27-c829-4a6f-a7c5-7399538b0872" containerName="octavia-amphora-httpd"
Feb 27 17:50:33 crc kubenswrapper[4830]: I0227 17:50:33.375756 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="c203a516-7e0f-4255-a789-c5f10a297916" containerName="registry-server"
Feb 27 17:50:33 crc kubenswrapper[4830]: I0227 17:50:33.375770 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec982c69-2d78-4ebd-beb8-d2b640955d6f" containerName="oc"
Feb 27 17:50:33 crc kubenswrapper[4830]: I0227 17:50:33.377056 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7fffd66c5c-klpbv"
Feb 27 17:50:33 crc kubenswrapper[4830]: I0227 17:50:33.387691 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7fffd66c5c-klpbv"]
Feb 27 17:50:33 crc kubenswrapper[4830]: I0227 17:50:33.388656 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts"
Feb 27 17:50:33 crc kubenswrapper[4830]: I0227 17:50:33.388729 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon"
Feb 27 17:50:33 crc kubenswrapper[4830]: I0227 17:50:33.389100 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data"
Feb 27 17:50:33 crc kubenswrapper[4830]: I0227 17:50:33.389163 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-tk55z"
Feb 27 17:50:33 crc kubenswrapper[4830]: I0227 17:50:33.429725 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/668cf4df-9017-4e66-9260-f2601d78a3d7-config-data\") pod \"horizon-7fffd66c5c-klpbv\" (UID: \"668cf4df-9017-4e66-9260-f2601d78a3d7\") " pod="openstack/horizon-7fffd66c5c-klpbv"
Feb 27 17:50:33 crc kubenswrapper[4830]: I0227 17:50:33.429805 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/668cf4df-9017-4e66-9260-f2601d78a3d7-scripts\") pod \"horizon-7fffd66c5c-klpbv\" (UID: \"668cf4df-9017-4e66-9260-f2601d78a3d7\") " pod="openstack/horizon-7fffd66c5c-klpbv"
Feb 27 17:50:33 crc kubenswrapper[4830]: I0227 17:50:33.429836 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/668cf4df-9017-4e66-9260-f2601d78a3d7-horizon-secret-key\") pod \"horizon-7fffd66c5c-klpbv\" (UID: \"668cf4df-9017-4e66-9260-f2601d78a3d7\") " pod="openstack/horizon-7fffd66c5c-klpbv"
Feb 27 17:50:33 crc kubenswrapper[4830]: I0227 17:50:33.429896 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/668cf4df-9017-4e66-9260-f2601d78a3d7-logs\") pod \"horizon-7fffd66c5c-klpbv\" (UID: \"668cf4df-9017-4e66-9260-f2601d78a3d7\") " pod="openstack/horizon-7fffd66c5c-klpbv"
Feb 27 17:50:33 crc kubenswrapper[4830]: I0227 17:50:33.429932 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4jsh\" (UniqueName: \"kubernetes.io/projected/668cf4df-9017-4e66-9260-f2601d78a3d7-kube-api-access-b4jsh\") pod \"horizon-7fffd66c5c-klpbv\" (UID: \"668cf4df-9017-4e66-9260-f2601d78a3d7\") " pod="openstack/horizon-7fffd66c5c-klpbv"
Feb 27 17:50:33 crc kubenswrapper[4830]: I0227 17:50:33.443660 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 27 17:50:33 crc kubenswrapper[4830]: I0227 17:50:33.443876 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="43c19785-15fd-46d8-bea3-2c6fbc7c8bf7" containerName="glance-log" containerID="cri-o://08e5b77d43fbd61b42463151c48883dabac9bf64fa9819a06275e18cc611c769" gracePeriod=30
Feb 27 17:50:33 crc kubenswrapper[4830]: I0227 17:50:33.444134 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="43c19785-15fd-46d8-bea3-2c6fbc7c8bf7" containerName="glance-httpd" containerID="cri-o://945f9961f41cab34c1f2ad257ca5c49f7ab25490a9bcff8ea8570507d7a41270" gracePeriod=30
Feb 27 17:50:33 crc kubenswrapper[4830]: I0227 17:50:33.495802 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-5767946c5c-wgc8m"]
Feb 27 17:50:33 crc kubenswrapper[4830]: I0227 17:50:33.497321 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5767946c5c-wgc8m"
Feb 27 17:50:33 crc kubenswrapper[4830]: I0227 17:50:33.510812 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5767946c5c-wgc8m"]
Feb 27 17:50:33 crc kubenswrapper[4830]: I0227 17:50:33.532657 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/668cf4df-9017-4e66-9260-f2601d78a3d7-config-data\") pod \"horizon-7fffd66c5c-klpbv\" (UID: \"668cf4df-9017-4e66-9260-f2601d78a3d7\") " pod="openstack/horizon-7fffd66c5c-klpbv"
Feb 27 17:50:33 crc kubenswrapper[4830]: I0227 17:50:33.532747 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/668cf4df-9017-4e66-9260-f2601d78a3d7-scripts\") pod \"horizon-7fffd66c5c-klpbv\" (UID: \"668cf4df-9017-4e66-9260-f2601d78a3d7\") " pod="openstack/horizon-7fffd66c5c-klpbv"
Feb 27 17:50:33 crc kubenswrapper[4830]: I0227 17:50:33.532780 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/668cf4df-9017-4e66-9260-f2601d78a3d7-horizon-secret-key\") pod \"horizon-7fffd66c5c-klpbv\" (UID: \"668cf4df-9017-4e66-9260-f2601d78a3d7\") " pod="openstack/horizon-7fffd66c5c-klpbv"
Feb 27 17:50:33 crc kubenswrapper[4830]: I0227 17:50:33.532836 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vpsng\" (UniqueName: \"kubernetes.io/projected/de0219d5-88bc-44ef-a815-643f36288601-kube-api-access-vpsng\") pod \"horizon-5767946c5c-wgc8m\" (UID: \"de0219d5-88bc-44ef-a815-643f36288601\") " pod="openstack/horizon-5767946c5c-wgc8m"
Feb 27 17:50:33 crc kubenswrapper[4830]: I0227 17:50:33.532859 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/668cf4df-9017-4e66-9260-f2601d78a3d7-logs\") pod \"horizon-7fffd66c5c-klpbv\" (UID: \"668cf4df-9017-4e66-9260-f2601d78a3d7\") " pod="openstack/horizon-7fffd66c5c-klpbv"
Feb 27 17:50:33 crc kubenswrapper[4830]: I0227 17:50:33.532887 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/de0219d5-88bc-44ef-a815-643f36288601-config-data\") pod \"horizon-5767946c5c-wgc8m\" (UID: \"de0219d5-88bc-44ef-a815-643f36288601\") " pod="openstack/horizon-5767946c5c-wgc8m"
Feb 27 17:50:33 crc kubenswrapper[4830]: I0227 17:50:33.532905 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/de0219d5-88bc-44ef-a815-643f36288601-scripts\") pod \"horizon-5767946c5c-wgc8m\" (UID: \"de0219d5-88bc-44ef-a815-643f36288601\") " pod="openstack/horizon-5767946c5c-wgc8m"
Feb 27 17:50:33 crc kubenswrapper[4830]: I0227 17:50:33.532926 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b4jsh\" (UniqueName: \"kubernetes.io/projected/668cf4df-9017-4e66-9260-f2601d78a3d7-kube-api-access-b4jsh\") pod \"horizon-7fffd66c5c-klpbv\" (UID: \"668cf4df-9017-4e66-9260-f2601d78a3d7\") " pod="openstack/horizon-7fffd66c5c-klpbv"
Feb 27 17:50:33 crc kubenswrapper[4830]: I0227 17:50:33.532943 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/de0219d5-88bc-44ef-a815-643f36288601-horizon-secret-key\") pod \"horizon-5767946c5c-wgc8m\" (UID: \"de0219d5-88bc-44ef-a815-643f36288601\") " pod="openstack/horizon-5767946c5c-wgc8m"
Feb 27 17:50:33 crc kubenswrapper[4830]: I0227 17:50:33.532994 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/de0219d5-88bc-44ef-a815-643f36288601-logs\") pod \"horizon-5767946c5c-wgc8m\" (UID: \"de0219d5-88bc-44ef-a815-643f36288601\") " pod="openstack/horizon-5767946c5c-wgc8m"
Feb 27 17:50:33 crc kubenswrapper[4830]: I0227 17:50:33.533912 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/668cf4df-9017-4e66-9260-f2601d78a3d7-logs\") pod \"horizon-7fffd66c5c-klpbv\" (UID: \"668cf4df-9017-4e66-9260-f2601d78a3d7\") " pod="openstack/horizon-7fffd66c5c-klpbv"
Feb 27 17:50:33 crc kubenswrapper[4830]: I0227 17:50:33.534191 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/668cf4df-9017-4e66-9260-f2601d78a3d7-config-data\") pod \"horizon-7fffd66c5c-klpbv\" (UID: \"668cf4df-9017-4e66-9260-f2601d78a3d7\") " pod="openstack/horizon-7fffd66c5c-klpbv"
Feb 27 17:50:33 crc kubenswrapper[4830]: I0227 17:50:33.535131 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/668cf4df-9017-4e66-9260-f2601d78a3d7-scripts\") pod \"horizon-7fffd66c5c-klpbv\" (UID: \"668cf4df-9017-4e66-9260-f2601d78a3d7\") " pod="openstack/horizon-7fffd66c5c-klpbv"
Feb 27 17:50:33 crc kubenswrapper[4830]: I0227 17:50:33.543684 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/668cf4df-9017-4e66-9260-f2601d78a3d7-horizon-secret-key\") pod \"horizon-7fffd66c5c-klpbv\" (UID: \"668cf4df-9017-4e66-9260-f2601d78a3d7\") " pod="openstack/horizon-7fffd66c5c-klpbv"
Feb 27 17:50:33 crc kubenswrapper[4830]: I0227 17:50:33.567463 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4jsh\" (UniqueName: \"kubernetes.io/projected/668cf4df-9017-4e66-9260-f2601d78a3d7-kube-api-access-b4jsh\") pod \"horizon-7fffd66c5c-klpbv\" (UID: \"668cf4df-9017-4e66-9260-f2601d78a3d7\") " pod="openstack/horizon-7fffd66c5c-klpbv"
Feb 27 17:50:33 crc kubenswrapper[4830]: I0227 17:50:33.569671 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 27 17:50:33 crc kubenswrapper[4830]: I0227 17:50:33.569897 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="28170d63-b3d4-4887-bb9d-e17e979cec89" containerName="glance-log" containerID="cri-o://dd4e3c74774bfde30e50e4455cca036b74ea59298b988394eeaeb19a9e5cafcf" gracePeriod=30
Feb 27 17:50:33 crc kubenswrapper[4830]: I0227 17:50:33.570121 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="28170d63-b3d4-4887-bb9d-e17e979cec89" containerName="glance-httpd" containerID="cri-o://fa61205fa6454ae2809f5029a3713725201018ef2bb09a6eea256357d98e99cd" gracePeriod=30
Feb 27 17:50:33 crc kubenswrapper[4830]: I0227 17:50:33.634307 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vpsng\" (UniqueName: \"kubernetes.io/projected/de0219d5-88bc-44ef-a815-643f36288601-kube-api-access-vpsng\") pod \"horizon-5767946c5c-wgc8m\" (UID: \"de0219d5-88bc-44ef-a815-643f36288601\") " pod="openstack/horizon-5767946c5c-wgc8m"
Feb 27 17:50:33 crc kubenswrapper[4830]: I0227 17:50:33.634377 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/de0219d5-88bc-44ef-a815-643f36288601-config-data\") pod \"horizon-5767946c5c-wgc8m\" (UID: \"de0219d5-88bc-44ef-a815-643f36288601\") " pod="openstack/horizon-5767946c5c-wgc8m"
Feb 27 17:50:33 crc kubenswrapper[4830]: I0227 17:50:33.634401 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/de0219d5-88bc-44ef-a815-643f36288601-scripts\") pod \"horizon-5767946c5c-wgc8m\" (UID: \"de0219d5-88bc-44ef-a815-643f36288601\") " pod="openstack/horizon-5767946c5c-wgc8m"
Feb 27 17:50:33 crc kubenswrapper[4830]: I0227 17:50:33.634420 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/de0219d5-88bc-44ef-a815-643f36288601-horizon-secret-key\") pod \"horizon-5767946c5c-wgc8m\" (UID: \"de0219d5-88bc-44ef-a815-643f36288601\") " pod="openstack/horizon-5767946c5c-wgc8m"
Feb 27 17:50:33 crc kubenswrapper[4830]: I0227 17:50:33.634460 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/de0219d5-88bc-44ef-a815-643f36288601-logs\") pod \"horizon-5767946c5c-wgc8m\" (UID: \"de0219d5-88bc-44ef-a815-643f36288601\") " pod="openstack/horizon-5767946c5c-wgc8m"
Feb 27 17:50:33 crc kubenswrapper[4830]: I0227 17:50:33.635066 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/de0219d5-88bc-44ef-a815-643f36288601-logs\") pod \"horizon-5767946c5c-wgc8m\" (UID: \"de0219d5-88bc-44ef-a815-643f36288601\") " pod="openstack/horizon-5767946c5c-wgc8m"
Feb 27 17:50:33 crc kubenswrapper[4830]: I0227 17:50:33.635230 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/de0219d5-88bc-44ef-a815-643f36288601-scripts\") pod \"horizon-5767946c5c-wgc8m\" (UID: \"de0219d5-88bc-44ef-a815-643f36288601\") " pod="openstack/horizon-5767946c5c-wgc8m"
Feb 27 17:50:33 crc kubenswrapper[4830]: I0227 17:50:33.635648 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/de0219d5-88bc-44ef-a815-643f36288601-config-data\") pod \"horizon-5767946c5c-wgc8m\" (UID: \"de0219d5-88bc-44ef-a815-643f36288601\") " pod="openstack/horizon-5767946c5c-wgc8m"
Feb 27 17:50:33 crc kubenswrapper[4830]: I0227 17:50:33.638290 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/de0219d5-88bc-44ef-a815-643f36288601-horizon-secret-key\") pod \"horizon-5767946c5c-wgc8m\" (UID: \"de0219d5-88bc-44ef-a815-643f36288601\") " pod="openstack/horizon-5767946c5c-wgc8m"
Feb 27 17:50:33 crc kubenswrapper[4830]: I0227 17:50:33.653289 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vpsng\" (UniqueName: \"kubernetes.io/projected/de0219d5-88bc-44ef-a815-643f36288601-kube-api-access-vpsng\") pod \"horizon-5767946c5c-wgc8m\" (UID: \"de0219d5-88bc-44ef-a815-643f36288601\") " pod="openstack/horizon-5767946c5c-wgc8m"
Feb 27 17:50:33 crc kubenswrapper[4830]: I0227 17:50:33.699744 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7fffd66c5c-klpbv"
Feb 27 17:50:33 crc kubenswrapper[4830]: I0227 17:50:33.816387 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5767946c5c-wgc8m"
Feb 27 17:50:34 crc kubenswrapper[4830]: I0227 17:50:34.106606 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5767946c5c-wgc8m"]
Feb 27 17:50:34 crc kubenswrapper[4830]: I0227 17:50:34.145124 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-684cf744b5-pzh2b"]
Feb 27 17:50:34 crc kubenswrapper[4830]: I0227 17:50:34.146798 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-684cf744b5-pzh2b"
Feb 27 17:50:34 crc kubenswrapper[4830]: I0227 17:50:34.168929 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-684cf744b5-pzh2b"]
Feb 27 17:50:34 crc kubenswrapper[4830]: I0227 17:50:34.208055 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7fffd66c5c-klpbv"]
Feb 27 17:50:34 crc kubenswrapper[4830]: I0227 17:50:34.256106 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e8a4ffdf-3cc8-491c-8795-5226996342cc-horizon-secret-key\") pod \"horizon-684cf744b5-pzh2b\" (UID: \"e8a4ffdf-3cc8-491c-8795-5226996342cc\") " pod="openstack/horizon-684cf744b5-pzh2b"
Feb 27 17:50:34 crc kubenswrapper[4830]: I0227 17:50:34.256155 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e8a4ffdf-3cc8-491c-8795-5226996342cc-config-data\") pod \"horizon-684cf744b5-pzh2b\" (UID: \"e8a4ffdf-3cc8-491c-8795-5226996342cc\") " pod="openstack/horizon-684cf744b5-pzh2b"
Feb 27 17:50:34 crc kubenswrapper[4830]: I0227 17:50:34.256199 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e8a4ffdf-3cc8-491c-8795-5226996342cc-logs\") pod \"horizon-684cf744b5-pzh2b\" (UID:
\"e8a4ffdf-3cc8-491c-8795-5226996342cc\") " pod="openstack/horizon-684cf744b5-pzh2b" Feb 27 17:50:34 crc kubenswrapper[4830]: I0227 17:50:34.256402 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e8a4ffdf-3cc8-491c-8795-5226996342cc-scripts\") pod \"horizon-684cf744b5-pzh2b\" (UID: \"e8a4ffdf-3cc8-491c-8795-5226996342cc\") " pod="openstack/horizon-684cf744b5-pzh2b" Feb 27 17:50:34 crc kubenswrapper[4830]: I0227 17:50:34.256654 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zpg2\" (UniqueName: \"kubernetes.io/projected/e8a4ffdf-3cc8-491c-8795-5226996342cc-kube-api-access-2zpg2\") pod \"horizon-684cf744b5-pzh2b\" (UID: \"e8a4ffdf-3cc8-491c-8795-5226996342cc\") " pod="openstack/horizon-684cf744b5-pzh2b" Feb 27 17:50:34 crc kubenswrapper[4830]: I0227 17:50:34.277866 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-image-upload-59f8cff499-rf92d"] Feb 27 17:50:34 crc kubenswrapper[4830]: I0227 17:50:34.279526 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-image-upload-59f8cff499-rf92d" Feb 27 17:50:34 crc kubenswrapper[4830]: I0227 17:50:34.283114 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-config-data" Feb 27 17:50:34 crc kubenswrapper[4830]: I0227 17:50:34.287430 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-image-upload-59f8cff499-rf92d"] Feb 27 17:50:34 crc kubenswrapper[4830]: W0227 17:50:34.354270 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podde0219d5_88bc_44ef_a815_643f36288601.slice/crio-86081fd5e6d6cc3f5eff3d4b063badef2a915b01fa9f6e503dad92a7e9889eae WatchSource:0}: Error finding container 86081fd5e6d6cc3f5eff3d4b063badef2a915b01fa9f6e503dad92a7e9889eae: Status 404 returned error can't find the container with id 86081fd5e6d6cc3f5eff3d4b063badef2a915b01fa9f6e503dad92a7e9889eae Feb 27 17:50:34 crc kubenswrapper[4830]: I0227 17:50:34.356558 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5767946c5c-wgc8m"] Feb 27 17:50:34 crc kubenswrapper[4830]: I0227 17:50:34.358041 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2zpg2\" (UniqueName: \"kubernetes.io/projected/e8a4ffdf-3cc8-491c-8795-5226996342cc-kube-api-access-2zpg2\") pod \"horizon-684cf744b5-pzh2b\" (UID: \"e8a4ffdf-3cc8-491c-8795-5226996342cc\") " pod="openstack/horizon-684cf744b5-pzh2b" Feb 27 17:50:34 crc kubenswrapper[4830]: I0227 17:50:34.358103 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/f8b0f281-569f-4fbe-ab94-b604360aaafe-amphora-image\") pod \"octavia-image-upload-59f8cff499-rf92d\" (UID: \"f8b0f281-569f-4fbe-ab94-b604360aaafe\") " pod="openstack/octavia-image-upload-59f8cff499-rf92d" Feb 27 17:50:34 crc kubenswrapper[4830]: I0227 17:50:34.358135 
4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f8b0f281-569f-4fbe-ab94-b604360aaafe-httpd-config\") pod \"octavia-image-upload-59f8cff499-rf92d\" (UID: \"f8b0f281-569f-4fbe-ab94-b604360aaafe\") " pod="openstack/octavia-image-upload-59f8cff499-rf92d" Feb 27 17:50:34 crc kubenswrapper[4830]: I0227 17:50:34.358158 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e8a4ffdf-3cc8-491c-8795-5226996342cc-horizon-secret-key\") pod \"horizon-684cf744b5-pzh2b\" (UID: \"e8a4ffdf-3cc8-491c-8795-5226996342cc\") " pod="openstack/horizon-684cf744b5-pzh2b" Feb 27 17:50:34 crc kubenswrapper[4830]: I0227 17:50:34.358174 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e8a4ffdf-3cc8-491c-8795-5226996342cc-config-data\") pod \"horizon-684cf744b5-pzh2b\" (UID: \"e8a4ffdf-3cc8-491c-8795-5226996342cc\") " pod="openstack/horizon-684cf744b5-pzh2b" Feb 27 17:50:34 crc kubenswrapper[4830]: I0227 17:50:34.358207 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e8a4ffdf-3cc8-491c-8795-5226996342cc-logs\") pod \"horizon-684cf744b5-pzh2b\" (UID: \"e8a4ffdf-3cc8-491c-8795-5226996342cc\") " pod="openstack/horizon-684cf744b5-pzh2b" Feb 27 17:50:34 crc kubenswrapper[4830]: I0227 17:50:34.358262 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e8a4ffdf-3cc8-491c-8795-5226996342cc-scripts\") pod \"horizon-684cf744b5-pzh2b\" (UID: \"e8a4ffdf-3cc8-491c-8795-5226996342cc\") " pod="openstack/horizon-684cf744b5-pzh2b" Feb 27 17:50:34 crc kubenswrapper[4830]: I0227 17:50:34.359217 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" 
(UniqueName: \"kubernetes.io/configmap/e8a4ffdf-3cc8-491c-8795-5226996342cc-scripts\") pod \"horizon-684cf744b5-pzh2b\" (UID: \"e8a4ffdf-3cc8-491c-8795-5226996342cc\") " pod="openstack/horizon-684cf744b5-pzh2b" Feb 27 17:50:34 crc kubenswrapper[4830]: I0227 17:50:34.360334 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e8a4ffdf-3cc8-491c-8795-5226996342cc-logs\") pod \"horizon-684cf744b5-pzh2b\" (UID: \"e8a4ffdf-3cc8-491c-8795-5226996342cc\") " pod="openstack/horizon-684cf744b5-pzh2b" Feb 27 17:50:34 crc kubenswrapper[4830]: I0227 17:50:34.360899 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e8a4ffdf-3cc8-491c-8795-5226996342cc-config-data\") pod \"horizon-684cf744b5-pzh2b\" (UID: \"e8a4ffdf-3cc8-491c-8795-5226996342cc\") " pod="openstack/horizon-684cf744b5-pzh2b" Feb 27 17:50:34 crc kubenswrapper[4830]: I0227 17:50:34.365304 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e8a4ffdf-3cc8-491c-8795-5226996342cc-horizon-secret-key\") pod \"horizon-684cf744b5-pzh2b\" (UID: \"e8a4ffdf-3cc8-491c-8795-5226996342cc\") " pod="openstack/horizon-684cf744b5-pzh2b" Feb 27 17:50:34 crc kubenswrapper[4830]: I0227 17:50:34.376213 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2zpg2\" (UniqueName: \"kubernetes.io/projected/e8a4ffdf-3cc8-491c-8795-5226996342cc-kube-api-access-2zpg2\") pod \"horizon-684cf744b5-pzh2b\" (UID: \"e8a4ffdf-3cc8-491c-8795-5226996342cc\") " pod="openstack/horizon-684cf744b5-pzh2b" Feb 27 17:50:34 crc kubenswrapper[4830]: I0227 17:50:34.459883 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/f8b0f281-569f-4fbe-ab94-b604360aaafe-amphora-image\") pod \"octavia-image-upload-59f8cff499-rf92d\" 
(UID: \"f8b0f281-569f-4fbe-ab94-b604360aaafe\") " pod="openstack/octavia-image-upload-59f8cff499-rf92d" Feb 27 17:50:34 crc kubenswrapper[4830]: I0227 17:50:34.460176 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f8b0f281-569f-4fbe-ab94-b604360aaafe-httpd-config\") pod \"octavia-image-upload-59f8cff499-rf92d\" (UID: \"f8b0f281-569f-4fbe-ab94-b604360aaafe\") " pod="openstack/octavia-image-upload-59f8cff499-rf92d" Feb 27 17:50:34 crc kubenswrapper[4830]: I0227 17:50:34.460372 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/f8b0f281-569f-4fbe-ab94-b604360aaafe-amphora-image\") pod \"octavia-image-upload-59f8cff499-rf92d\" (UID: \"f8b0f281-569f-4fbe-ab94-b604360aaafe\") " pod="openstack/octavia-image-upload-59f8cff499-rf92d" Feb 27 17:50:34 crc kubenswrapper[4830]: I0227 17:50:34.464987 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f8b0f281-569f-4fbe-ab94-b604360aaafe-httpd-config\") pod \"octavia-image-upload-59f8cff499-rf92d\" (UID: \"f8b0f281-569f-4fbe-ab94-b604360aaafe\") " pod="openstack/octavia-image-upload-59f8cff499-rf92d" Feb 27 17:50:34 crc kubenswrapper[4830]: I0227 17:50:34.477597 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-684cf744b5-pzh2b" Feb 27 17:50:34 crc kubenswrapper[4830]: I0227 17:50:34.478377 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5767946c5c-wgc8m" event={"ID":"de0219d5-88bc-44ef-a815-643f36288601","Type":"ContainerStarted","Data":"86081fd5e6d6cc3f5eff3d4b063badef2a915b01fa9f6e503dad92a7e9889eae"} Feb 27 17:50:34 crc kubenswrapper[4830]: I0227 17:50:34.484366 4830 generic.go:334] "Generic (PLEG): container finished" podID="28170d63-b3d4-4887-bb9d-e17e979cec89" containerID="dd4e3c74774bfde30e50e4455cca036b74ea59298b988394eeaeb19a9e5cafcf" exitCode=143 Feb 27 17:50:34 crc kubenswrapper[4830]: I0227 17:50:34.484613 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"28170d63-b3d4-4887-bb9d-e17e979cec89","Type":"ContainerDied","Data":"dd4e3c74774bfde30e50e4455cca036b74ea59298b988394eeaeb19a9e5cafcf"} Feb 27 17:50:34 crc kubenswrapper[4830]: I0227 17:50:34.487884 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7fffd66c5c-klpbv" event={"ID":"668cf4df-9017-4e66-9260-f2601d78a3d7","Type":"ContainerStarted","Data":"c8950b70d79ede89e301f91ceed6366b0fdd1fd9171ba13c883ee78f091b12b1"} Feb 27 17:50:34 crc kubenswrapper[4830]: I0227 17:50:34.491729 4830 generic.go:334] "Generic (PLEG): container finished" podID="43c19785-15fd-46d8-bea3-2c6fbc7c8bf7" containerID="08e5b77d43fbd61b42463151c48883dabac9bf64fa9819a06275e18cc611c769" exitCode=143 Feb 27 17:50:34 crc kubenswrapper[4830]: I0227 17:50:34.491759 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"43c19785-15fd-46d8-bea3-2c6fbc7c8bf7","Type":"ContainerDied","Data":"08e5b77d43fbd61b42463151c48883dabac9bf64fa9819a06275e18cc611c769"} Feb 27 17:50:34 crc kubenswrapper[4830]: I0227 17:50:34.596650 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-image-upload-59f8cff499-rf92d" Feb 27 17:50:35 crc kubenswrapper[4830]: I0227 17:50:35.032564 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-684cf744b5-pzh2b"] Feb 27 17:50:35 crc kubenswrapper[4830]: W0227 17:50:35.033484 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode8a4ffdf_3cc8_491c_8795_5226996342cc.slice/crio-317083791c03c034064a4b1b8c072335d11e7fcb1a0611584346faf7884e6b6a WatchSource:0}: Error finding container 317083791c03c034064a4b1b8c072335d11e7fcb1a0611584346faf7884e6b6a: Status 404 returned error can't find the container with id 317083791c03c034064a4b1b8c072335d11e7fcb1a0611584346faf7884e6b6a Feb 27 17:50:35 crc kubenswrapper[4830]: I0227 17:50:35.140866 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-image-upload-59f8cff499-rf92d"] Feb 27 17:50:35 crc kubenswrapper[4830]: W0227 17:50:35.154614 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf8b0f281_569f_4fbe_ab94_b604360aaafe.slice/crio-05024217222ecf6542cd395075f83049a5910cc5ae6b0035f88658d0139c875c WatchSource:0}: Error finding container 05024217222ecf6542cd395075f83049a5910cc5ae6b0035f88658d0139c875c: Status 404 returned error can't find the container with id 05024217222ecf6542cd395075f83049a5910cc5ae6b0035f88658d0139c875c Feb 27 17:50:35 crc kubenswrapper[4830]: I0227 17:50:35.504623 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-59f8cff499-rf92d" event={"ID":"f8b0f281-569f-4fbe-ab94-b604360aaafe","Type":"ContainerStarted","Data":"05024217222ecf6542cd395075f83049a5910cc5ae6b0035f88658d0139c875c"} Feb 27 17:50:35 crc kubenswrapper[4830]: I0227 17:50:35.506705 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-684cf744b5-pzh2b" 
event={"ID":"e8a4ffdf-3cc8-491c-8795-5226996342cc","Type":"ContainerStarted","Data":"317083791c03c034064a4b1b8c072335d11e7fcb1a0611584346faf7884e6b6a"} Feb 27 17:50:35 crc kubenswrapper[4830]: E0227 17:50:35.763871 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gbcl6" podUID="90e915d6-d74a-4f5b-a8da-8f0f2acdda48" Feb 27 17:50:36 crc kubenswrapper[4830]: I0227 17:50:36.524792 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-59f8cff499-rf92d" event={"ID":"f8b0f281-569f-4fbe-ab94-b604360aaafe","Type":"ContainerStarted","Data":"bae3dc562406b864dbe3acd50feeb397c6473c856ce509ed1379a3f4dcf85f1f"} Feb 27 17:50:37 crc kubenswrapper[4830]: I0227 17:50:37.539854 4830 generic.go:334] "Generic (PLEG): container finished" podID="f8b0f281-569f-4fbe-ab94-b604360aaafe" containerID="bae3dc562406b864dbe3acd50feeb397c6473c856ce509ed1379a3f4dcf85f1f" exitCode=0 Feb 27 17:50:37 crc kubenswrapper[4830]: I0227 17:50:37.539894 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-59f8cff499-rf92d" event={"ID":"f8b0f281-569f-4fbe-ab94-b604360aaafe","Type":"ContainerDied","Data":"bae3dc562406b864dbe3acd50feeb397c6473c856ce509ed1379a3f4dcf85f1f"} Feb 27 17:50:37 crc kubenswrapper[4830]: I0227 17:50:37.548033 4830 generic.go:334] "Generic (PLEG): container finished" podID="28170d63-b3d4-4887-bb9d-e17e979cec89" containerID="fa61205fa6454ae2809f5029a3713725201018ef2bb09a6eea256357d98e99cd" exitCode=0 Feb 27 17:50:37 crc kubenswrapper[4830]: I0227 17:50:37.548140 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"28170d63-b3d4-4887-bb9d-e17e979cec89","Type":"ContainerDied","Data":"fa61205fa6454ae2809f5029a3713725201018ef2bb09a6eea256357d98e99cd"} Feb 27 17:50:37 crc kubenswrapper[4830]: I0227 17:50:37.551280 4830 generic.go:334] "Generic (PLEG): container finished" podID="43c19785-15fd-46d8-bea3-2c6fbc7c8bf7" containerID="945f9961f41cab34c1f2ad257ca5c49f7ab25490a9bcff8ea8570507d7a41270" exitCode=0 Feb 27 17:50:37 crc kubenswrapper[4830]: I0227 17:50:37.551323 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"43c19785-15fd-46d8-bea3-2c6fbc7c8bf7","Type":"ContainerDied","Data":"945f9961f41cab34c1f2ad257ca5c49f7ab25490a9bcff8ea8570507d7a41270"} Feb 27 17:50:42 crc kubenswrapper[4830]: I0227 17:50:42.397699 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 27 17:50:42 crc kubenswrapper[4830]: I0227 17:50:42.400848 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 27 17:50:42 crc kubenswrapper[4830]: I0227 17:50:42.475621 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzvht\" (UniqueName: \"kubernetes.io/projected/43c19785-15fd-46d8-bea3-2c6fbc7c8bf7-kube-api-access-nzvht\") pod \"43c19785-15fd-46d8-bea3-2c6fbc7c8bf7\" (UID: \"43c19785-15fd-46d8-bea3-2c6fbc7c8bf7\") " Feb 27 17:50:42 crc kubenswrapper[4830]: I0227 17:50:42.475704 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28170d63-b3d4-4887-bb9d-e17e979cec89-combined-ca-bundle\") pod \"28170d63-b3d4-4887-bb9d-e17e979cec89\" (UID: \"28170d63-b3d4-4887-bb9d-e17e979cec89\") " Feb 27 17:50:42 crc kubenswrapper[4830]: I0227 17:50:42.475785 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs52k\" (UniqueName: \"kubernetes.io/projected/28170d63-b3d4-4887-bb9d-e17e979cec89-kube-api-access-qs52k\") pod \"28170d63-b3d4-4887-bb9d-e17e979cec89\" (UID: \"28170d63-b3d4-4887-bb9d-e17e979cec89\") " Feb 27 17:50:42 crc kubenswrapper[4830]: I0227 17:50:42.475808 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/28170d63-b3d4-4887-bb9d-e17e979cec89-scripts\") pod \"28170d63-b3d4-4887-bb9d-e17e979cec89\" (UID: \"28170d63-b3d4-4887-bb9d-e17e979cec89\") " Feb 27 17:50:42 crc kubenswrapper[4830]: I0227 17:50:42.475859 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/28170d63-b3d4-4887-bb9d-e17e979cec89-ceph\") pod \"28170d63-b3d4-4887-bb9d-e17e979cec89\" (UID: \"28170d63-b3d4-4887-bb9d-e17e979cec89\") " Feb 27 17:50:42 crc kubenswrapper[4830]: I0227 17:50:42.475902 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" 
(UniqueName: \"kubernetes.io/empty-dir/43c19785-15fd-46d8-bea3-2c6fbc7c8bf7-httpd-run\") pod \"43c19785-15fd-46d8-bea3-2c6fbc7c8bf7\" (UID: \"43c19785-15fd-46d8-bea3-2c6fbc7c8bf7\") " Feb 27 17:50:42 crc kubenswrapper[4830]: I0227 17:50:42.475938 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28170d63-b3d4-4887-bb9d-e17e979cec89-config-data\") pod \"28170d63-b3d4-4887-bb9d-e17e979cec89\" (UID: \"28170d63-b3d4-4887-bb9d-e17e979cec89\") " Feb 27 17:50:42 crc kubenswrapper[4830]: I0227 17:50:42.475963 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/28170d63-b3d4-4887-bb9d-e17e979cec89-logs\") pod \"28170d63-b3d4-4887-bb9d-e17e979cec89\" (UID: \"28170d63-b3d4-4887-bb9d-e17e979cec89\") " Feb 27 17:50:42 crc kubenswrapper[4830]: I0227 17:50:42.476003 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/28170d63-b3d4-4887-bb9d-e17e979cec89-httpd-run\") pod \"28170d63-b3d4-4887-bb9d-e17e979cec89\" (UID: \"28170d63-b3d4-4887-bb9d-e17e979cec89\") " Feb 27 17:50:42 crc kubenswrapper[4830]: I0227 17:50:42.476027 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/43c19785-15fd-46d8-bea3-2c6fbc7c8bf7-scripts\") pod \"43c19785-15fd-46d8-bea3-2c6fbc7c8bf7\" (UID: \"43c19785-15fd-46d8-bea3-2c6fbc7c8bf7\") " Feb 27 17:50:42 crc kubenswrapper[4830]: I0227 17:50:42.476057 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43c19785-15fd-46d8-bea3-2c6fbc7c8bf7-combined-ca-bundle\") pod \"43c19785-15fd-46d8-bea3-2c6fbc7c8bf7\" (UID: \"43c19785-15fd-46d8-bea3-2c6fbc7c8bf7\") " Feb 27 17:50:42 crc kubenswrapper[4830]: I0227 17:50:42.476079 4830 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/43c19785-15fd-46d8-bea3-2c6fbc7c8bf7-ceph\") pod \"43c19785-15fd-46d8-bea3-2c6fbc7c8bf7\" (UID: \"43c19785-15fd-46d8-bea3-2c6fbc7c8bf7\") " Feb 27 17:50:42 crc kubenswrapper[4830]: I0227 17:50:42.476105 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43c19785-15fd-46d8-bea3-2c6fbc7c8bf7-config-data\") pod \"43c19785-15fd-46d8-bea3-2c6fbc7c8bf7\" (UID: \"43c19785-15fd-46d8-bea3-2c6fbc7c8bf7\") " Feb 27 17:50:42 crc kubenswrapper[4830]: I0227 17:50:42.476143 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/43c19785-15fd-46d8-bea3-2c6fbc7c8bf7-logs\") pod \"43c19785-15fd-46d8-bea3-2c6fbc7c8bf7\" (UID: \"43c19785-15fd-46d8-bea3-2c6fbc7c8bf7\") " Feb 27 17:50:42 crc kubenswrapper[4830]: I0227 17:50:42.477857 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/43c19785-15fd-46d8-bea3-2c6fbc7c8bf7-logs" (OuterVolumeSpecName: "logs") pod "43c19785-15fd-46d8-bea3-2c6fbc7c8bf7" (UID: "43c19785-15fd-46d8-bea3-2c6fbc7c8bf7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 17:50:42 crc kubenswrapper[4830]: I0227 17:50:42.478353 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/28170d63-b3d4-4887-bb9d-e17e979cec89-logs" (OuterVolumeSpecName: "logs") pod "28170d63-b3d4-4887-bb9d-e17e979cec89" (UID: "28170d63-b3d4-4887-bb9d-e17e979cec89"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 17:50:42 crc kubenswrapper[4830]: I0227 17:50:42.478688 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/43c19785-15fd-46d8-bea3-2c6fbc7c8bf7-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "43c19785-15fd-46d8-bea3-2c6fbc7c8bf7" (UID: "43c19785-15fd-46d8-bea3-2c6fbc7c8bf7"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 17:50:42 crc kubenswrapper[4830]: I0227 17:50:42.479111 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/28170d63-b3d4-4887-bb9d-e17e979cec89-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "28170d63-b3d4-4887-bb9d-e17e979cec89" (UID: "28170d63-b3d4-4887-bb9d-e17e979cec89"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 17:50:42 crc kubenswrapper[4830]: I0227 17:50:42.518672 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43c19785-15fd-46d8-bea3-2c6fbc7c8bf7-kube-api-access-nzvht" (OuterVolumeSpecName: "kube-api-access-nzvht") pod "43c19785-15fd-46d8-bea3-2c6fbc7c8bf7" (UID: "43c19785-15fd-46d8-bea3-2c6fbc7c8bf7"). InnerVolumeSpecName "kube-api-access-nzvht". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:50:42 crc kubenswrapper[4830]: I0227 17:50:42.540220 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28170d63-b3d4-4887-bb9d-e17e979cec89-ceph" (OuterVolumeSpecName: "ceph") pod "28170d63-b3d4-4887-bb9d-e17e979cec89" (UID: "28170d63-b3d4-4887-bb9d-e17e979cec89"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:50:42 crc kubenswrapper[4830]: I0227 17:50:42.552030 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28170d63-b3d4-4887-bb9d-e17e979cec89-scripts" (OuterVolumeSpecName: "scripts") pod "28170d63-b3d4-4887-bb9d-e17e979cec89" (UID: "28170d63-b3d4-4887-bb9d-e17e979cec89"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:50:42 crc kubenswrapper[4830]: I0227 17:50:42.579669 4830 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/43c19785-15fd-46d8-bea3-2c6fbc7c8bf7-logs\") on node \"crc\" DevicePath \"\"" Feb 27 17:50:42 crc kubenswrapper[4830]: I0227 17:50:42.579704 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzvht\" (UniqueName: \"kubernetes.io/projected/43c19785-15fd-46d8-bea3-2c6fbc7c8bf7-kube-api-access-nzvht\") on node \"crc\" DevicePath \"\"" Feb 27 17:50:42 crc kubenswrapper[4830]: I0227 17:50:42.579717 4830 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/28170d63-b3d4-4887-bb9d-e17e979cec89-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 17:50:42 crc kubenswrapper[4830]: I0227 17:50:42.579725 4830 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/28170d63-b3d4-4887-bb9d-e17e979cec89-ceph\") on node \"crc\" DevicePath \"\"" Feb 27 17:50:42 crc kubenswrapper[4830]: I0227 17:50:42.579733 4830 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/43c19785-15fd-46d8-bea3-2c6fbc7c8bf7-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 27 17:50:42 crc kubenswrapper[4830]: I0227 17:50:42.579741 4830 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/28170d63-b3d4-4887-bb9d-e17e979cec89-logs\") on node \"crc\" DevicePath 
\"\"" Feb 27 17:50:42 crc kubenswrapper[4830]: I0227 17:50:42.579749 4830 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/28170d63-b3d4-4887-bb9d-e17e979cec89-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 27 17:50:42 crc kubenswrapper[4830]: I0227 17:50:42.587279 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43c19785-15fd-46d8-bea3-2c6fbc7c8bf7-ceph" (OuterVolumeSpecName: "ceph") pod "43c19785-15fd-46d8-bea3-2c6fbc7c8bf7" (UID: "43c19785-15fd-46d8-bea3-2c6fbc7c8bf7"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:50:42 crc kubenswrapper[4830]: I0227 17:50:42.587385 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43c19785-15fd-46d8-bea3-2c6fbc7c8bf7-scripts" (OuterVolumeSpecName: "scripts") pod "43c19785-15fd-46d8-bea3-2c6fbc7c8bf7" (UID: "43c19785-15fd-46d8-bea3-2c6fbc7c8bf7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:50:42 crc kubenswrapper[4830]: I0227 17:50:42.587455 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28170d63-b3d4-4887-bb9d-e17e979cec89-kube-api-access-qs52k" (OuterVolumeSpecName: "kube-api-access-qs52k") pod "28170d63-b3d4-4887-bb9d-e17e979cec89" (UID: "28170d63-b3d4-4887-bb9d-e17e979cec89"). InnerVolumeSpecName "kube-api-access-qs52k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:50:42 crc kubenswrapper[4830]: I0227 17:50:42.684794 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs52k\" (UniqueName: \"kubernetes.io/projected/28170d63-b3d4-4887-bb9d-e17e979cec89-kube-api-access-qs52k\") on node \"crc\" DevicePath \"\"" Feb 27 17:50:42 crc kubenswrapper[4830]: I0227 17:50:42.685427 4830 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/43c19785-15fd-46d8-bea3-2c6fbc7c8bf7-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 17:50:42 crc kubenswrapper[4830]: I0227 17:50:42.685532 4830 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/43c19785-15fd-46d8-bea3-2c6fbc7c8bf7-ceph\") on node \"crc\" DevicePath \"\"" Feb 27 17:50:42 crc kubenswrapper[4830]: I0227 17:50:42.719159 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28170d63-b3d4-4887-bb9d-e17e979cec89-config-data" (OuterVolumeSpecName: "config-data") pod "28170d63-b3d4-4887-bb9d-e17e979cec89" (UID: "28170d63-b3d4-4887-bb9d-e17e979cec89"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:50:42 crc kubenswrapper[4830]: I0227 17:50:42.721309 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43c19785-15fd-46d8-bea3-2c6fbc7c8bf7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "43c19785-15fd-46d8-bea3-2c6fbc7c8bf7" (UID: "43c19785-15fd-46d8-bea3-2c6fbc7c8bf7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:50:42 crc kubenswrapper[4830]: I0227 17:50:42.741771 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 27 17:50:42 crc kubenswrapper[4830]: I0227 17:50:42.742210 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"28170d63-b3d4-4887-bb9d-e17e979cec89","Type":"ContainerDied","Data":"f364dc41e60262ec252156f22bf12b859401f6da1bf6a642ec2dd4dc45e9640c"} Feb 27 17:50:42 crc kubenswrapper[4830]: I0227 17:50:42.742285 4830 scope.go:117] "RemoveContainer" containerID="fa61205fa6454ae2809f5029a3713725201018ef2bb09a6eea256357d98e99cd" Feb 27 17:50:42 crc kubenswrapper[4830]: I0227 17:50:42.742134 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43c19785-15fd-46d8-bea3-2c6fbc7c8bf7-config-data" (OuterVolumeSpecName: "config-data") pod "43c19785-15fd-46d8-bea3-2c6fbc7c8bf7" (UID: "43c19785-15fd-46d8-bea3-2c6fbc7c8bf7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:50:42 crc kubenswrapper[4830]: I0227 17:50:42.774488 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 27 17:50:42 crc kubenswrapper[4830]: I0227 17:50:42.787068 4830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43c19785-15fd-46d8-bea3-2c6fbc7c8bf7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 17:50:42 crc kubenswrapper[4830]: I0227 17:50:42.787092 4830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43c19785-15fd-46d8-bea3-2c6fbc7c8bf7-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 17:50:42 crc kubenswrapper[4830]: I0227 17:50:42.787101 4830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28170d63-b3d4-4887-bb9d-e17e979cec89-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 17:50:42 crc kubenswrapper[4830]: I0227 17:50:42.801286 4830 scope.go:117] "RemoveContainer" containerID="dd4e3c74774bfde30e50e4455cca036b74ea59298b988394eeaeb19a9e5cafcf" Feb 27 17:50:42 crc kubenswrapper[4830]: I0227 17:50:42.810988 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28170d63-b3d4-4887-bb9d-e17e979cec89-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "28170d63-b3d4-4887-bb9d-e17e979cec89" (UID: "28170d63-b3d4-4887-bb9d-e17e979cec89"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:50:42 crc kubenswrapper[4830]: I0227 17:50:42.816581 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"43c19785-15fd-46d8-bea3-2c6fbc7c8bf7","Type":"ContainerDied","Data":"151a9474431af495751426435a282175f4c701117043f07bf2e36495175f058e"} Feb 27 17:50:42 crc kubenswrapper[4830]: I0227 17:50:42.842297 4830 scope.go:117] "RemoveContainer" containerID="945f9961f41cab34c1f2ad257ca5c49f7ab25490a9bcff8ea8570507d7a41270" Feb 27 17:50:42 crc kubenswrapper[4830]: I0227 17:50:42.889247 4830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28170d63-b3d4-4887-bb9d-e17e979cec89-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 17:50:42 crc kubenswrapper[4830]: I0227 17:50:42.887636 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 27 17:50:42 crc kubenswrapper[4830]: I0227 17:50:42.902360 4830 scope.go:117] "RemoveContainer" containerID="08e5b77d43fbd61b42463151c48883dabac9bf64fa9819a06275e18cc611c769" Feb 27 17:50:42 crc kubenswrapper[4830]: I0227 17:50:42.902508 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 27 17:50:42 crc kubenswrapper[4830]: I0227 17:50:42.926452 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 27 17:50:42 crc kubenswrapper[4830]: E0227 17:50:42.926872 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28170d63-b3d4-4887-bb9d-e17e979cec89" containerName="glance-log" Feb 27 17:50:42 crc kubenswrapper[4830]: I0227 17:50:42.926888 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="28170d63-b3d4-4887-bb9d-e17e979cec89" containerName="glance-log" Feb 27 17:50:42 crc kubenswrapper[4830]: E0227 17:50:42.926916 4830 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="43c19785-15fd-46d8-bea3-2c6fbc7c8bf7" containerName="glance-log" Feb 27 17:50:42 crc kubenswrapper[4830]: I0227 17:50:42.926923 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="43c19785-15fd-46d8-bea3-2c6fbc7c8bf7" containerName="glance-log" Feb 27 17:50:42 crc kubenswrapper[4830]: E0227 17:50:42.926939 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43c19785-15fd-46d8-bea3-2c6fbc7c8bf7" containerName="glance-httpd" Feb 27 17:50:42 crc kubenswrapper[4830]: I0227 17:50:42.926947 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="43c19785-15fd-46d8-bea3-2c6fbc7c8bf7" containerName="glance-httpd" Feb 27 17:50:42 crc kubenswrapper[4830]: E0227 17:50:42.926955 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28170d63-b3d4-4887-bb9d-e17e979cec89" containerName="glance-httpd" Feb 27 17:50:42 crc kubenswrapper[4830]: I0227 17:50:42.926960 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="28170d63-b3d4-4887-bb9d-e17e979cec89" containerName="glance-httpd" Feb 27 17:50:42 crc kubenswrapper[4830]: I0227 17:50:42.927203 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="43c19785-15fd-46d8-bea3-2c6fbc7c8bf7" containerName="glance-httpd" Feb 27 17:50:42 crc kubenswrapper[4830]: I0227 17:50:42.927217 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="28170d63-b3d4-4887-bb9d-e17e979cec89" containerName="glance-httpd" Feb 27 17:50:42 crc kubenswrapper[4830]: I0227 17:50:42.927230 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="28170d63-b3d4-4887-bb9d-e17e979cec89" containerName="glance-log" Feb 27 17:50:42 crc kubenswrapper[4830]: I0227 17:50:42.927264 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="43c19785-15fd-46d8-bea3-2c6fbc7c8bf7" containerName="glance-log" Feb 27 17:50:42 crc kubenswrapper[4830]: I0227 17:50:42.928275 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 27 17:50:42 crc kubenswrapper[4830]: I0227 17:50:42.934155 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 27 17:50:42 crc kubenswrapper[4830]: I0227 17:50:42.965457 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 27 17:50:43 crc kubenswrapper[4830]: I0227 17:50:43.080267 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 27 17:50:43 crc kubenswrapper[4830]: I0227 17:50:43.094871 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b3006a5-059d-4325-ab11-bb77351ab8f6-config-data\") pod \"glance-default-external-api-0\" (UID: \"4b3006a5-059d-4325-ab11-bb77351ab8f6\") " pod="openstack/glance-default-external-api-0" Feb 27 17:50:43 crc kubenswrapper[4830]: I0227 17:50:43.094986 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b3006a5-059d-4325-ab11-bb77351ab8f6-logs\") pod \"glance-default-external-api-0\" (UID: \"4b3006a5-059d-4325-ab11-bb77351ab8f6\") " pod="openstack/glance-default-external-api-0" Feb 27 17:50:43 crc kubenswrapper[4830]: I0227 17:50:43.095038 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/4b3006a5-059d-4325-ab11-bb77351ab8f6-ceph\") pod \"glance-default-external-api-0\" (UID: \"4b3006a5-059d-4325-ab11-bb77351ab8f6\") " pod="openstack/glance-default-external-api-0" Feb 27 17:50:43 crc kubenswrapper[4830]: I0227 17:50:43.095058 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/4b3006a5-059d-4325-ab11-bb77351ab8f6-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"4b3006a5-059d-4325-ab11-bb77351ab8f6\") " pod="openstack/glance-default-external-api-0" Feb 27 17:50:43 crc kubenswrapper[4830]: I0227 17:50:43.095074 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6646\" (UniqueName: \"kubernetes.io/projected/4b3006a5-059d-4325-ab11-bb77351ab8f6-kube-api-access-t6646\") pod \"glance-default-external-api-0\" (UID: \"4b3006a5-059d-4325-ab11-bb77351ab8f6\") " pod="openstack/glance-default-external-api-0" Feb 27 17:50:43 crc kubenswrapper[4830]: I0227 17:50:43.095098 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b3006a5-059d-4325-ab11-bb77351ab8f6-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"4b3006a5-059d-4325-ab11-bb77351ab8f6\") " pod="openstack/glance-default-external-api-0" Feb 27 17:50:43 crc kubenswrapper[4830]: I0227 17:50:43.095130 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4b3006a5-059d-4325-ab11-bb77351ab8f6-scripts\") pod \"glance-default-external-api-0\" (UID: \"4b3006a5-059d-4325-ab11-bb77351ab8f6\") " pod="openstack/glance-default-external-api-0" Feb 27 17:50:43 crc kubenswrapper[4830]: I0227 17:50:43.098251 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 27 17:50:43 crc kubenswrapper[4830]: I0227 17:50:43.116530 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 27 17:50:43 crc kubenswrapper[4830]: I0227 17:50:43.128220 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 27 17:50:43 crc kubenswrapper[4830]: I0227 17:50:43.155782 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 27 17:50:43 crc kubenswrapper[4830]: I0227 17:50:43.164622 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 27 17:50:43 crc kubenswrapper[4830]: I0227 17:50:43.197024 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d7edff5b-0c5e-4950-ae29-5cd0af755e35-logs\") pod \"glance-default-internal-api-0\" (UID: \"d7edff5b-0c5e-4950-ae29-5cd0af755e35\") " pod="openstack/glance-default-internal-api-0" Feb 27 17:50:43 crc kubenswrapper[4830]: I0227 17:50:43.197077 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b3006a5-059d-4325-ab11-bb77351ab8f6-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"4b3006a5-059d-4325-ab11-bb77351ab8f6\") " pod="openstack/glance-default-external-api-0" Feb 27 17:50:43 crc kubenswrapper[4830]: I0227 17:50:43.197118 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4b3006a5-059d-4325-ab11-bb77351ab8f6-scripts\") pod \"glance-default-external-api-0\" (UID: \"4b3006a5-059d-4325-ab11-bb77351ab8f6\") " pod="openstack/glance-default-external-api-0" Feb 27 17:50:43 crc kubenswrapper[4830]: I0227 17:50:43.197145 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d7edff5b-0c5e-4950-ae29-5cd0af755e35-scripts\") pod \"glance-default-internal-api-0\" (UID: \"d7edff5b-0c5e-4950-ae29-5cd0af755e35\") " pod="openstack/glance-default-internal-api-0" Feb 27 17:50:43 crc 
kubenswrapper[4830]: I0227 17:50:43.197213 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7edff5b-0c5e-4950-ae29-5cd0af755e35-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"d7edff5b-0c5e-4950-ae29-5cd0af755e35\") " pod="openstack/glance-default-internal-api-0" Feb 27 17:50:43 crc kubenswrapper[4830]: I0227 17:50:43.197244 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b3006a5-059d-4325-ab11-bb77351ab8f6-config-data\") pod \"glance-default-external-api-0\" (UID: \"4b3006a5-059d-4325-ab11-bb77351ab8f6\") " pod="openstack/glance-default-external-api-0" Feb 27 17:50:43 crc kubenswrapper[4830]: I0227 17:50:43.197303 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b3006a5-059d-4325-ab11-bb77351ab8f6-logs\") pod \"glance-default-external-api-0\" (UID: \"4b3006a5-059d-4325-ab11-bb77351ab8f6\") " pod="openstack/glance-default-external-api-0" Feb 27 17:50:43 crc kubenswrapper[4830]: I0227 17:50:43.197322 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7edff5b-0c5e-4950-ae29-5cd0af755e35-config-data\") pod \"glance-default-internal-api-0\" (UID: \"d7edff5b-0c5e-4950-ae29-5cd0af755e35\") " pod="openstack/glance-default-internal-api-0" Feb 27 17:50:43 crc kubenswrapper[4830]: I0227 17:50:43.197343 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d7edff5b-0c5e-4950-ae29-5cd0af755e35-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"d7edff5b-0c5e-4950-ae29-5cd0af755e35\") " pod="openstack/glance-default-internal-api-0" Feb 27 17:50:43 crc kubenswrapper[4830]: I0227 
17:50:43.197386 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/4b3006a5-059d-4325-ab11-bb77351ab8f6-ceph\") pod \"glance-default-external-api-0\" (UID: \"4b3006a5-059d-4325-ab11-bb77351ab8f6\") " pod="openstack/glance-default-external-api-0" Feb 27 17:50:43 crc kubenswrapper[4830]: I0227 17:50:43.197405 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/d7edff5b-0c5e-4950-ae29-5cd0af755e35-ceph\") pod \"glance-default-internal-api-0\" (UID: \"d7edff5b-0c5e-4950-ae29-5cd0af755e35\") " pod="openstack/glance-default-internal-api-0" Feb 27 17:50:43 crc kubenswrapper[4830]: I0227 17:50:43.197424 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4b3006a5-059d-4325-ab11-bb77351ab8f6-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"4b3006a5-059d-4325-ab11-bb77351ab8f6\") " pod="openstack/glance-default-external-api-0" Feb 27 17:50:43 crc kubenswrapper[4830]: I0227 17:50:43.197445 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6646\" (UniqueName: \"kubernetes.io/projected/4b3006a5-059d-4325-ab11-bb77351ab8f6-kube-api-access-t6646\") pod \"glance-default-external-api-0\" (UID: \"4b3006a5-059d-4325-ab11-bb77351ab8f6\") " pod="openstack/glance-default-external-api-0" Feb 27 17:50:43 crc kubenswrapper[4830]: I0227 17:50:43.197463 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rg2kp\" (UniqueName: \"kubernetes.io/projected/d7edff5b-0c5e-4950-ae29-5cd0af755e35-kube-api-access-rg2kp\") pod \"glance-default-internal-api-0\" (UID: \"d7edff5b-0c5e-4950-ae29-5cd0af755e35\") " pod="openstack/glance-default-internal-api-0" Feb 27 17:50:43 crc kubenswrapper[4830]: I0227 17:50:43.198615 4830 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4b3006a5-059d-4325-ab11-bb77351ab8f6-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"4b3006a5-059d-4325-ab11-bb77351ab8f6\") " pod="openstack/glance-default-external-api-0" Feb 27 17:50:43 crc kubenswrapper[4830]: I0227 17:50:43.199547 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b3006a5-059d-4325-ab11-bb77351ab8f6-logs\") pod \"glance-default-external-api-0\" (UID: \"4b3006a5-059d-4325-ab11-bb77351ab8f6\") " pod="openstack/glance-default-external-api-0" Feb 27 17:50:43 crc kubenswrapper[4830]: I0227 17:50:43.201849 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/4b3006a5-059d-4325-ab11-bb77351ab8f6-ceph\") pod \"glance-default-external-api-0\" (UID: \"4b3006a5-059d-4325-ab11-bb77351ab8f6\") " pod="openstack/glance-default-external-api-0" Feb 27 17:50:43 crc kubenswrapper[4830]: I0227 17:50:43.203586 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b3006a5-059d-4325-ab11-bb77351ab8f6-config-data\") pod \"glance-default-external-api-0\" (UID: \"4b3006a5-059d-4325-ab11-bb77351ab8f6\") " pod="openstack/glance-default-external-api-0" Feb 27 17:50:43 crc kubenswrapper[4830]: I0227 17:50:43.203657 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4b3006a5-059d-4325-ab11-bb77351ab8f6-scripts\") pod \"glance-default-external-api-0\" (UID: \"4b3006a5-059d-4325-ab11-bb77351ab8f6\") " pod="openstack/glance-default-external-api-0" Feb 27 17:50:43 crc kubenswrapper[4830]: I0227 17:50:43.204134 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/4b3006a5-059d-4325-ab11-bb77351ab8f6-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"4b3006a5-059d-4325-ab11-bb77351ab8f6\") " pod="openstack/glance-default-external-api-0" Feb 27 17:50:43 crc kubenswrapper[4830]: I0227 17:50:43.217141 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6646\" (UniqueName: \"kubernetes.io/projected/4b3006a5-059d-4325-ab11-bb77351ab8f6-kube-api-access-t6646\") pod \"glance-default-external-api-0\" (UID: \"4b3006a5-059d-4325-ab11-bb77351ab8f6\") " pod="openstack/glance-default-external-api-0" Feb 27 17:50:43 crc kubenswrapper[4830]: I0227 17:50:43.256365 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 27 17:50:43 crc kubenswrapper[4830]: I0227 17:50:43.299552 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7edff5b-0c5e-4950-ae29-5cd0af755e35-config-data\") pod \"glance-default-internal-api-0\" (UID: \"d7edff5b-0c5e-4950-ae29-5cd0af755e35\") " pod="openstack/glance-default-internal-api-0" Feb 27 17:50:43 crc kubenswrapper[4830]: I0227 17:50:43.300709 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d7edff5b-0c5e-4950-ae29-5cd0af755e35-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"d7edff5b-0c5e-4950-ae29-5cd0af755e35\") " pod="openstack/glance-default-internal-api-0" Feb 27 17:50:43 crc kubenswrapper[4830]: I0227 17:50:43.300790 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/d7edff5b-0c5e-4950-ae29-5cd0af755e35-ceph\") pod \"glance-default-internal-api-0\" (UID: \"d7edff5b-0c5e-4950-ae29-5cd0af755e35\") " pod="openstack/glance-default-internal-api-0" Feb 27 17:50:43 crc kubenswrapper[4830]: I0227 17:50:43.300813 4830 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rg2kp\" (UniqueName: \"kubernetes.io/projected/d7edff5b-0c5e-4950-ae29-5cd0af755e35-kube-api-access-rg2kp\") pod \"glance-default-internal-api-0\" (UID: \"d7edff5b-0c5e-4950-ae29-5cd0af755e35\") " pod="openstack/glance-default-internal-api-0" Feb 27 17:50:43 crc kubenswrapper[4830]: I0227 17:50:43.300836 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d7edff5b-0c5e-4950-ae29-5cd0af755e35-logs\") pod \"glance-default-internal-api-0\" (UID: \"d7edff5b-0c5e-4950-ae29-5cd0af755e35\") " pod="openstack/glance-default-internal-api-0" Feb 27 17:50:43 crc kubenswrapper[4830]: I0227 17:50:43.300879 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d7edff5b-0c5e-4950-ae29-5cd0af755e35-scripts\") pod \"glance-default-internal-api-0\" (UID: \"d7edff5b-0c5e-4950-ae29-5cd0af755e35\") " pod="openstack/glance-default-internal-api-0" Feb 27 17:50:43 crc kubenswrapper[4830]: I0227 17:50:43.300925 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7edff5b-0c5e-4950-ae29-5cd0af755e35-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"d7edff5b-0c5e-4950-ae29-5cd0af755e35\") " pod="openstack/glance-default-internal-api-0" Feb 27 17:50:43 crc kubenswrapper[4830]: I0227 17:50:43.302554 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d7edff5b-0c5e-4950-ae29-5cd0af755e35-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"d7edff5b-0c5e-4950-ae29-5cd0af755e35\") " pod="openstack/glance-default-internal-api-0" Feb 27 17:50:43 crc kubenswrapper[4830]: I0227 17:50:43.306775 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/d7edff5b-0c5e-4950-ae29-5cd0af755e35-config-data\") pod \"glance-default-internal-api-0\" (UID: \"d7edff5b-0c5e-4950-ae29-5cd0af755e35\") " pod="openstack/glance-default-internal-api-0" Feb 27 17:50:43 crc kubenswrapper[4830]: I0227 17:50:43.307467 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d7edff5b-0c5e-4950-ae29-5cd0af755e35-logs\") pod \"glance-default-internal-api-0\" (UID: \"d7edff5b-0c5e-4950-ae29-5cd0af755e35\") " pod="openstack/glance-default-internal-api-0" Feb 27 17:50:43 crc kubenswrapper[4830]: I0227 17:50:43.309172 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7edff5b-0c5e-4950-ae29-5cd0af755e35-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"d7edff5b-0c5e-4950-ae29-5cd0af755e35\") " pod="openstack/glance-default-internal-api-0" Feb 27 17:50:43 crc kubenswrapper[4830]: I0227 17:50:43.317497 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/d7edff5b-0c5e-4950-ae29-5cd0af755e35-ceph\") pod \"glance-default-internal-api-0\" (UID: \"d7edff5b-0c5e-4950-ae29-5cd0af755e35\") " pod="openstack/glance-default-internal-api-0" Feb 27 17:50:43 crc kubenswrapper[4830]: I0227 17:50:43.318270 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rg2kp\" (UniqueName: \"kubernetes.io/projected/d7edff5b-0c5e-4950-ae29-5cd0af755e35-kube-api-access-rg2kp\") pod \"glance-default-internal-api-0\" (UID: \"d7edff5b-0c5e-4950-ae29-5cd0af755e35\") " pod="openstack/glance-default-internal-api-0" Feb 27 17:50:43 crc kubenswrapper[4830]: I0227 17:50:43.325598 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d7edff5b-0c5e-4950-ae29-5cd0af755e35-scripts\") pod 
\"glance-default-internal-api-0\" (UID: \"d7edff5b-0c5e-4950-ae29-5cd0af755e35\") " pod="openstack/glance-default-internal-api-0" Feb 27 17:50:43 crc kubenswrapper[4830]: I0227 17:50:43.476187 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 27 17:50:43 crc kubenswrapper[4830]: I0227 17:50:43.794671 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-684cf744b5-pzh2b" event={"ID":"e8a4ffdf-3cc8-491c-8795-5226996342cc","Type":"ContainerStarted","Data":"778fde59b569a28b8849d4c16df30e107135a979d8f9f7724ff452e47b32a740"} Feb 27 17:50:43 crc kubenswrapper[4830]: I0227 17:50:43.794714 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-684cf744b5-pzh2b" event={"ID":"e8a4ffdf-3cc8-491c-8795-5226996342cc","Type":"ContainerStarted","Data":"d12fdec6bb2fe1d1fb3852c6738cebb28432f42eeff19e89964818907186d5a1"} Feb 27 17:50:43 crc kubenswrapper[4830]: I0227 17:50:43.798246 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7fffd66c5c-klpbv" event={"ID":"668cf4df-9017-4e66-9260-f2601d78a3d7","Type":"ContainerStarted","Data":"eb1cabbb40c471774960e5ff9ad92401514a25734f49aa36622d164c99800a45"} Feb 27 17:50:43 crc kubenswrapper[4830]: I0227 17:50:43.798290 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7fffd66c5c-klpbv" event={"ID":"668cf4df-9017-4e66-9260-f2601d78a3d7","Type":"ContainerStarted","Data":"8c188b53f11ca0837e9e9cbeee1f5bee1ea0f40d8aad5b80f61673d2819c3ee6"} Feb 27 17:50:43 crc kubenswrapper[4830]: I0227 17:50:43.802030 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5767946c5c-wgc8m" event={"ID":"de0219d5-88bc-44ef-a815-643f36288601","Type":"ContainerStarted","Data":"76dee8c8675076174e182c25237040c99bb6a31a793dd993524be0662806f266"} Feb 27 17:50:43 crc kubenswrapper[4830]: I0227 17:50:43.802067 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/horizon-5767946c5c-wgc8m" event={"ID":"de0219d5-88bc-44ef-a815-643f36288601","Type":"ContainerStarted","Data":"462c4f2e241affc624a4dd25875f81ef7688f725cea165d1b60574a237248f1b"} Feb 27 17:50:43 crc kubenswrapper[4830]: I0227 17:50:43.802153 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5767946c5c-wgc8m" podUID="de0219d5-88bc-44ef-a815-643f36288601" containerName="horizon-log" containerID="cri-o://462c4f2e241affc624a4dd25875f81ef7688f725cea165d1b60574a237248f1b" gracePeriod=30 Feb 27 17:50:43 crc kubenswrapper[4830]: I0227 17:50:43.802211 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5767946c5c-wgc8m" podUID="de0219d5-88bc-44ef-a815-643f36288601" containerName="horizon" containerID="cri-o://76dee8c8675076174e182c25237040c99bb6a31a793dd993524be0662806f266" gracePeriod=30 Feb 27 17:50:43 crc kubenswrapper[4830]: I0227 17:50:43.816633 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5767946c5c-wgc8m" Feb 27 17:50:43 crc kubenswrapper[4830]: I0227 17:50:43.825401 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-684cf744b5-pzh2b" podStartSLOduration=2.407227771 podStartE2EDuration="9.825375452s" podCreationTimestamp="2026-02-27 17:50:34 +0000 UTC" firstStartedPulling="2026-02-27 17:50:35.036045011 +0000 UTC m=+6231.125317474" lastFinishedPulling="2026-02-27 17:50:42.454192682 +0000 UTC m=+6238.543465155" observedRunningTime="2026-02-27 17:50:43.813682203 +0000 UTC m=+6239.902954666" watchObservedRunningTime="2026-02-27 17:50:43.825375452 +0000 UTC m=+6239.914647915" Feb 27 17:50:43 crc kubenswrapper[4830]: I0227 17:50:43.843821 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-7fffd66c5c-klpbv" podStartSLOduration=2.646628788 podStartE2EDuration="10.843802823s" podCreationTimestamp="2026-02-27 17:50:33 +0000 UTC" 
firstStartedPulling="2026-02-27 17:50:34.207105134 +0000 UTC m=+6230.296377597" lastFinishedPulling="2026-02-27 17:50:42.404279159 +0000 UTC m=+6238.493551632" observedRunningTime="2026-02-27 17:50:43.834757697 +0000 UTC m=+6239.924030160" watchObservedRunningTime="2026-02-27 17:50:43.843802823 +0000 UTC m=+6239.933075286" Feb 27 17:50:43 crc kubenswrapper[4830]: W0227 17:50:43.855216 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4b3006a5_059d_4325_ab11_bb77351ab8f6.slice/crio-1d03c7c0f594a57426a3ee68acc12234025f7aeea5b61b440f732505a109a519 WatchSource:0}: Error finding container 1d03c7c0f594a57426a3ee68acc12234025f7aeea5b61b440f732505a109a519: Status 404 returned error can't find the container with id 1d03c7c0f594a57426a3ee68acc12234025f7aeea5b61b440f732505a109a519 Feb 27 17:50:43 crc kubenswrapper[4830]: I0227 17:50:43.865105 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 27 17:50:43 crc kubenswrapper[4830]: I0227 17:50:43.875398 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-5767946c5c-wgc8m" podStartSLOduration=2.853220998 podStartE2EDuration="10.875380978s" podCreationTimestamp="2026-02-27 17:50:33 +0000 UTC" firstStartedPulling="2026-02-27 17:50:34.356740102 +0000 UTC m=+6230.446012565" lastFinishedPulling="2026-02-27 17:50:42.378900062 +0000 UTC m=+6238.468172545" observedRunningTime="2026-02-27 17:50:43.874402304 +0000 UTC m=+6239.963674767" watchObservedRunningTime="2026-02-27 17:50:43.875380978 +0000 UTC m=+6239.964653441" Feb 27 17:50:44 crc kubenswrapper[4830]: I0227 17:50:44.050265 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 27 17:50:44 crc kubenswrapper[4830]: I0227 17:50:44.478218 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-684cf744b5-pzh2b" Feb 27 
17:50:44 crc kubenswrapper[4830]: I0227 17:50:44.478324 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-684cf744b5-pzh2b" Feb 27 17:50:44 crc kubenswrapper[4830]: I0227 17:50:44.780657 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28170d63-b3d4-4887-bb9d-e17e979cec89" path="/var/lib/kubelet/pods/28170d63-b3d4-4887-bb9d-e17e979cec89/volumes" Feb 27 17:50:44 crc kubenswrapper[4830]: I0227 17:50:44.782084 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43c19785-15fd-46d8-bea3-2c6fbc7c8bf7" path="/var/lib/kubelet/pods/43c19785-15fd-46d8-bea3-2c6fbc7c8bf7/volumes" Feb 27 17:50:44 crc kubenswrapper[4830]: I0227 17:50:44.814367 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d7edff5b-0c5e-4950-ae29-5cd0af755e35","Type":"ContainerStarted","Data":"bf30aadcf0be5cf786608385a4a3f53f4a7b23fb27800f894dfdbdef0bd40f75"} Feb 27 17:50:44 crc kubenswrapper[4830]: I0227 17:50:44.818358 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4b3006a5-059d-4325-ab11-bb77351ab8f6","Type":"ContainerStarted","Data":"87bb7a23520ff43871e9a243b734053d26c36dc9cc8384f0f96d1203179fc205"} Feb 27 17:50:44 crc kubenswrapper[4830]: I0227 17:50:44.818467 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4b3006a5-059d-4325-ab11-bb77351ab8f6","Type":"ContainerStarted","Data":"1d03c7c0f594a57426a3ee68acc12234025f7aeea5b61b440f732505a109a519"} Feb 27 17:50:45 crc kubenswrapper[4830]: I0227 17:50:45.838229 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-59f8cff499-rf92d" event={"ID":"f8b0f281-569f-4fbe-ab94-b604360aaafe","Type":"ContainerStarted","Data":"c2a510796fe49c85b3877681815de2a2613d124182812c9429164a0991f6c439"} Feb 27 17:50:45 crc kubenswrapper[4830]: I0227 
17:50:45.844939 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d7edff5b-0c5e-4950-ae29-5cd0af755e35","Type":"ContainerStarted","Data":"4d01b55dcb7bbfca7499787ab35fd48568dda713845b6abf3d5975ee2f8fd138"} Feb 27 17:50:45 crc kubenswrapper[4830]: I0227 17:50:45.845010 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d7edff5b-0c5e-4950-ae29-5cd0af755e35","Type":"ContainerStarted","Data":"60c2546d470d02ba4b65a8710a7a743058d896eb1b29d4714eef48dd1dfe8307"} Feb 27 17:50:45 crc kubenswrapper[4830]: I0227 17:50:45.874128 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-image-upload-59f8cff499-rf92d" podStartSLOduration=2.119300527 podStartE2EDuration="11.87410735s" podCreationTimestamp="2026-02-27 17:50:34 +0000 UTC" firstStartedPulling="2026-02-27 17:50:35.1594125 +0000 UTC m=+6231.248684973" lastFinishedPulling="2026-02-27 17:50:44.914219333 +0000 UTC m=+6241.003491796" observedRunningTime="2026-02-27 17:50:45.858701782 +0000 UTC m=+6241.947974245" watchObservedRunningTime="2026-02-27 17:50:45.87410735 +0000 UTC m=+6241.963379823" Feb 27 17:50:46 crc kubenswrapper[4830]: I0227 17:50:46.861811 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4b3006a5-059d-4325-ab11-bb77351ab8f6","Type":"ContainerStarted","Data":"894feb2fbf22f947da8f9fd92935f92fbb52101bc6a010f39824dc76398201ab"} Feb 27 17:50:46 crc kubenswrapper[4830]: I0227 17:50:46.902571 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.902540206 podStartE2EDuration="3.902540206s" podCreationTimestamp="2026-02-27 17:50:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 17:50:46.891644996 +0000 UTC 
m=+6242.980917489" watchObservedRunningTime="2026-02-27 17:50:46.902540206 +0000 UTC m=+6242.991812699" Feb 27 17:50:46 crc kubenswrapper[4830]: I0227 17:50:46.927535 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.927514334 podStartE2EDuration="4.927514334s" podCreationTimestamp="2026-02-27 17:50:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 17:50:46.917244997 +0000 UTC m=+6243.006517460" watchObservedRunningTime="2026-02-27 17:50:46.927514334 +0000 UTC m=+6243.016786797" Feb 27 17:50:50 crc kubenswrapper[4830]: E0227 17:50:50.766056 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gbcl6" podUID="90e915d6-d74a-4f5b-a8da-8f0f2acdda48" Feb 27 17:50:53 crc kubenswrapper[4830]: I0227 17:50:53.208291 4830 scope.go:117] "RemoveContainer" containerID="5555cb99baa299be153d20d08b4486f006126c30bd46dbed11c76edee3a19b70" Feb 27 17:50:53 crc kubenswrapper[4830]: I0227 17:50:53.243116 4830 scope.go:117] "RemoveContainer" containerID="05360378ab057b13551d131ac1406057daf407391b34f6e4a5314119293b601e" Feb 27 17:50:53 crc kubenswrapper[4830]: I0227 17:50:53.257097 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 27 17:50:53 crc kubenswrapper[4830]: I0227 17:50:53.258349 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 27 17:50:53 crc kubenswrapper[4830]: I0227 17:50:53.293159 4830 scope.go:117] "RemoveContainer" containerID="1741f85485987bfb4d4d76628430e674b6e549e230ddfafab44f9bad653a361a" Feb 27 17:50:53 crc 
kubenswrapper[4830]: I0227 17:50:53.303873 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 27 17:50:53 crc kubenswrapper[4830]: I0227 17:50:53.305552 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 27 17:50:53 crc kubenswrapper[4830]: I0227 17:50:53.477185 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 27 17:50:53 crc kubenswrapper[4830]: I0227 17:50:53.477757 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 27 17:50:53 crc kubenswrapper[4830]: I0227 17:50:53.511517 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 27 17:50:53 crc kubenswrapper[4830]: I0227 17:50:53.536385 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 27 17:50:53 crc kubenswrapper[4830]: I0227 17:50:53.700905 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7fffd66c5c-klpbv" Feb 27 17:50:53 crc kubenswrapper[4830]: I0227 17:50:53.701165 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7fffd66c5c-klpbv" Feb 27 17:50:53 crc kubenswrapper[4830]: I0227 17:50:53.702705 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-7fffd66c5c-klpbv" podUID="668cf4df-9017-4e66-9260-f2601d78a3d7" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.154:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.154:8080: connect: connection refused" Feb 27 17:50:53 crc kubenswrapper[4830]: I0227 17:50:53.976813 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/glance-default-external-api-0" Feb 27 17:50:53 crc kubenswrapper[4830]: I0227 17:50:53.977333 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 27 17:50:53 crc kubenswrapper[4830]: I0227 17:50:53.977497 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 27 17:50:53 crc kubenswrapper[4830]: I0227 17:50:53.977627 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 27 17:50:54 crc kubenswrapper[4830]: I0227 17:50:54.482543 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-684cf744b5-pzh2b" podUID="e8a4ffdf-3cc8-491c-8795-5226996342cc" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.156:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.156:8080: connect: connection refused" Feb 27 17:50:55 crc kubenswrapper[4830]: I0227 17:50:55.879113 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 27 17:50:55 crc kubenswrapper[4830]: I0227 17:50:55.886321 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 27 17:50:55 crc kubenswrapper[4830]: I0227 17:50:55.967023 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 27 17:50:56 crc kubenswrapper[4830]: I0227 17:50:56.019858 4830 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 27 17:50:56 crc kubenswrapper[4830]: I0227 17:50:56.048432 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 27 17:51:01 crc kubenswrapper[4830]: I0227 17:51:01.102067 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/glance-db-create-7fjgs"] Feb 27 17:51:01 crc kubenswrapper[4830]: I0227 17:51:01.110171 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-8348-account-create-update-kh6fw"] Feb 27 17:51:01 crc kubenswrapper[4830]: I0227 17:51:01.118605 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-7fjgs"] Feb 27 17:51:01 crc kubenswrapper[4830]: I0227 17:51:01.130071 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-8348-account-create-update-kh6fw"] Feb 27 17:51:02 crc kubenswrapper[4830]: E0227 17:51:02.766398 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gbcl6" podUID="90e915d6-d74a-4f5b-a8da-8f0f2acdda48" Feb 27 17:51:02 crc kubenswrapper[4830]: I0227 17:51:02.780400 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="603ffaad-dc9f-4434-ab69-7d7f0b818991" path="/var/lib/kubelet/pods/603ffaad-dc9f-4434-ab69-7d7f0b818991/volumes" Feb 27 17:51:02 crc kubenswrapper[4830]: I0227 17:51:02.782382 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1507562-13d1-412c-ace5-6598ce757fdd" path="/var/lib/kubelet/pods/c1507562-13d1-412c-ace5-6598ce757fdd/volumes" Feb 27 17:51:03 crc kubenswrapper[4830]: I0227 17:51:03.160512 4830 patch_prober.go:28] interesting pod/machine-config-daemon-2tv5v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 17:51:03 crc kubenswrapper[4830]: I0227 17:51:03.160578 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" 
podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 17:51:03 crc kubenswrapper[4830]: I0227 17:51:03.160629 4830 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" Feb 27 17:51:03 crc kubenswrapper[4830]: I0227 17:51:03.161451 4830 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1edc2346b55575fd27d28000f5321fa0e167abd0b9733373b1ab9e03d2bd8d16"} pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 27 17:51:03 crc kubenswrapper[4830]: I0227 17:51:03.161510 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" containerName="machine-config-daemon" containerID="cri-o://1edc2346b55575fd27d28000f5321fa0e167abd0b9733373b1ab9e03d2bd8d16" gracePeriod=600 Feb 27 17:51:04 crc kubenswrapper[4830]: I0227 17:51:04.119566 4830 generic.go:334] "Generic (PLEG): container finished" podID="00d6b7ce-4757-4275-8345-60c1b546ce25" containerID="1edc2346b55575fd27d28000f5321fa0e167abd0b9733373b1ab9e03d2bd8d16" exitCode=0 Feb 27 17:51:04 crc kubenswrapper[4830]: I0227 17:51:04.119629 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" event={"ID":"00d6b7ce-4757-4275-8345-60c1b546ce25","Type":"ContainerDied","Data":"1edc2346b55575fd27d28000f5321fa0e167abd0b9733373b1ab9e03d2bd8d16"} Feb 27 17:51:04 crc kubenswrapper[4830]: I0227 17:51:04.120029 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" event={"ID":"00d6b7ce-4757-4275-8345-60c1b546ce25","Type":"ContainerStarted","Data":"9c7600b1d02b30467a3c9249e6962a5faed8288686d8237218305c7cc4357171"} Feb 27 17:51:04 crc kubenswrapper[4830]: I0227 17:51:04.120059 4830 scope.go:117] "RemoveContainer" containerID="0f73af46b765b3d98f3f5e9883b49887ac4f2ac3485e91a001e18c5d351646d8" Feb 27 17:51:05 crc kubenswrapper[4830]: I0227 17:51:05.348928 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-7fffd66c5c-klpbv" Feb 27 17:51:06 crc kubenswrapper[4830]: I0227 17:51:06.136113 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-684cf744b5-pzh2b" Feb 27 17:51:06 crc kubenswrapper[4830]: I0227 17:51:06.973664 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-7fffd66c5c-klpbv" Feb 27 17:51:07 crc kubenswrapper[4830]: I0227 17:51:07.702846 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-684cf744b5-pzh2b" Feb 27 17:51:07 crc kubenswrapper[4830]: I0227 17:51:07.790340 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7fffd66c5c-klpbv"] Feb 27 17:51:07 crc kubenswrapper[4830]: I0227 17:51:07.792293 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7fffd66c5c-klpbv" podUID="668cf4df-9017-4e66-9260-f2601d78a3d7" containerName="horizon-log" containerID="cri-o://8c188b53f11ca0837e9e9cbeee1f5bee1ea0f40d8aad5b80f61673d2819c3ee6" gracePeriod=30 Feb 27 17:51:07 crc kubenswrapper[4830]: I0227 17:51:07.792388 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7fffd66c5c-klpbv" podUID="668cf4df-9017-4e66-9260-f2601d78a3d7" containerName="horizon" containerID="cri-o://eb1cabbb40c471774960e5ff9ad92401514a25734f49aa36622d164c99800a45" gracePeriod=30 Feb 27 17:51:09 crc 
kubenswrapper[4830]: I0227 17:51:09.046357 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-6w48k"] Feb 27 17:51:09 crc kubenswrapper[4830]: I0227 17:51:09.057188 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-6w48k"] Feb 27 17:51:10 crc kubenswrapper[4830]: I0227 17:51:10.784970 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="277e5f01-7cfa-40fd-a52d-8af10c6090f8" path="/var/lib/kubelet/pods/277e5f01-7cfa-40fd-a52d-8af10c6090f8/volumes" Feb 27 17:51:11 crc kubenswrapper[4830]: I0227 17:51:11.205431 4830 generic.go:334] "Generic (PLEG): container finished" podID="668cf4df-9017-4e66-9260-f2601d78a3d7" containerID="eb1cabbb40c471774960e5ff9ad92401514a25734f49aa36622d164c99800a45" exitCode=0 Feb 27 17:51:11 crc kubenswrapper[4830]: I0227 17:51:11.205498 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7fffd66c5c-klpbv" event={"ID":"668cf4df-9017-4e66-9260-f2601d78a3d7","Type":"ContainerDied","Data":"eb1cabbb40c471774960e5ff9ad92401514a25734f49aa36622d164c99800a45"} Feb 27 17:51:13 crc kubenswrapper[4830]: I0227 17:51:13.700343 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-7fffd66c5c-klpbv" podUID="668cf4df-9017-4e66-9260-f2601d78a3d7" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.154:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.154:8080: connect: connection refused" Feb 27 17:51:14 crc kubenswrapper[4830]: I0227 17:51:14.244785 4830 generic.go:334] "Generic (PLEG): container finished" podID="de0219d5-88bc-44ef-a815-643f36288601" containerID="76dee8c8675076174e182c25237040c99bb6a31a793dd993524be0662806f266" exitCode=137 Feb 27 17:51:14 crc kubenswrapper[4830]: I0227 17:51:14.245205 4830 generic.go:334] "Generic (PLEG): container finished" podID="de0219d5-88bc-44ef-a815-643f36288601" containerID="462c4f2e241affc624a4dd25875f81ef7688f725cea165d1b60574a237248f1b" 
exitCode=137 Feb 27 17:51:14 crc kubenswrapper[4830]: I0227 17:51:14.244873 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5767946c5c-wgc8m" event={"ID":"de0219d5-88bc-44ef-a815-643f36288601","Type":"ContainerDied","Data":"76dee8c8675076174e182c25237040c99bb6a31a793dd993524be0662806f266"} Feb 27 17:51:14 crc kubenswrapper[4830]: I0227 17:51:14.245246 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5767946c5c-wgc8m" event={"ID":"de0219d5-88bc-44ef-a815-643f36288601","Type":"ContainerDied","Data":"462c4f2e241affc624a4dd25875f81ef7688f725cea165d1b60574a237248f1b"} Feb 27 17:51:14 crc kubenswrapper[4830]: I0227 17:51:14.245276 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5767946c5c-wgc8m" event={"ID":"de0219d5-88bc-44ef-a815-643f36288601","Type":"ContainerDied","Data":"86081fd5e6d6cc3f5eff3d4b063badef2a915b01fa9f6e503dad92a7e9889eae"} Feb 27 17:51:14 crc kubenswrapper[4830]: I0227 17:51:14.245291 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="86081fd5e6d6cc3f5eff3d4b063badef2a915b01fa9f6e503dad92a7e9889eae" Feb 27 17:51:14 crc kubenswrapper[4830]: I0227 17:51:14.327390 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5767946c5c-wgc8m" Feb 27 17:51:14 crc kubenswrapper[4830]: I0227 17:51:14.346242 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/de0219d5-88bc-44ef-a815-643f36288601-scripts\") pod \"de0219d5-88bc-44ef-a815-643f36288601\" (UID: \"de0219d5-88bc-44ef-a815-643f36288601\") " Feb 27 17:51:14 crc kubenswrapper[4830]: I0227 17:51:14.346758 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/de0219d5-88bc-44ef-a815-643f36288601-config-data\") pod \"de0219d5-88bc-44ef-a815-643f36288601\" (UID: \"de0219d5-88bc-44ef-a815-643f36288601\") " Feb 27 17:51:14 crc kubenswrapper[4830]: I0227 17:51:14.346794 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/de0219d5-88bc-44ef-a815-643f36288601-horizon-secret-key\") pod \"de0219d5-88bc-44ef-a815-643f36288601\" (UID: \"de0219d5-88bc-44ef-a815-643f36288601\") " Feb 27 17:51:14 crc kubenswrapper[4830]: I0227 17:51:14.346909 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/de0219d5-88bc-44ef-a815-643f36288601-logs\") pod \"de0219d5-88bc-44ef-a815-643f36288601\" (UID: \"de0219d5-88bc-44ef-a815-643f36288601\") " Feb 27 17:51:14 crc kubenswrapper[4830]: I0227 17:51:14.346933 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vpsng\" (UniqueName: \"kubernetes.io/projected/de0219d5-88bc-44ef-a815-643f36288601-kube-api-access-vpsng\") pod \"de0219d5-88bc-44ef-a815-643f36288601\" (UID: \"de0219d5-88bc-44ef-a815-643f36288601\") " Feb 27 17:51:14 crc kubenswrapper[4830]: I0227 17:51:14.347884 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/de0219d5-88bc-44ef-a815-643f36288601-logs" (OuterVolumeSpecName: "logs") pod "de0219d5-88bc-44ef-a815-643f36288601" (UID: "de0219d5-88bc-44ef-a815-643f36288601"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 17:51:14 crc kubenswrapper[4830]: I0227 17:51:14.359107 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de0219d5-88bc-44ef-a815-643f36288601-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "de0219d5-88bc-44ef-a815-643f36288601" (UID: "de0219d5-88bc-44ef-a815-643f36288601"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:51:14 crc kubenswrapper[4830]: I0227 17:51:14.359197 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de0219d5-88bc-44ef-a815-643f36288601-kube-api-access-vpsng" (OuterVolumeSpecName: "kube-api-access-vpsng") pod "de0219d5-88bc-44ef-a815-643f36288601" (UID: "de0219d5-88bc-44ef-a815-643f36288601"). InnerVolumeSpecName "kube-api-access-vpsng". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:51:14 crc kubenswrapper[4830]: I0227 17:51:14.382727 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de0219d5-88bc-44ef-a815-643f36288601-scripts" (OuterVolumeSpecName: "scripts") pod "de0219d5-88bc-44ef-a815-643f36288601" (UID: "de0219d5-88bc-44ef-a815-643f36288601"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:51:14 crc kubenswrapper[4830]: I0227 17:51:14.412208 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de0219d5-88bc-44ef-a815-643f36288601-config-data" (OuterVolumeSpecName: "config-data") pod "de0219d5-88bc-44ef-a815-643f36288601" (UID: "de0219d5-88bc-44ef-a815-643f36288601"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:51:14 crc kubenswrapper[4830]: I0227 17:51:14.448985 4830 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/de0219d5-88bc-44ef-a815-643f36288601-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 17:51:14 crc kubenswrapper[4830]: I0227 17:51:14.449019 4830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/de0219d5-88bc-44ef-a815-643f36288601-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 17:51:14 crc kubenswrapper[4830]: I0227 17:51:14.449030 4830 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/de0219d5-88bc-44ef-a815-643f36288601-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 27 17:51:14 crc kubenswrapper[4830]: I0227 17:51:14.449040 4830 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/de0219d5-88bc-44ef-a815-643f36288601-logs\") on node \"crc\" DevicePath \"\"" Feb 27 17:51:14 crc kubenswrapper[4830]: I0227 17:51:14.449049 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vpsng\" (UniqueName: \"kubernetes.io/projected/de0219d5-88bc-44ef-a815-643f36288601-kube-api-access-vpsng\") on node \"crc\" DevicePath \"\"" Feb 27 17:51:14 crc kubenswrapper[4830]: E0227 17:51:14.771730 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gbcl6" podUID="90e915d6-d74a-4f5b-a8da-8f0f2acdda48" Feb 27 17:51:15 crc kubenswrapper[4830]: I0227 17:51:15.254687 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5767946c5c-wgc8m" Feb 27 17:51:15 crc kubenswrapper[4830]: I0227 17:51:15.298037 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5767946c5c-wgc8m"] Feb 27 17:51:15 crc kubenswrapper[4830]: I0227 17:51:15.307151 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-5767946c5c-wgc8m"] Feb 27 17:51:16 crc kubenswrapper[4830]: I0227 17:51:16.788444 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de0219d5-88bc-44ef-a815-643f36288601" path="/var/lib/kubelet/pods/de0219d5-88bc-44ef-a815-643f36288601/volumes" Feb 27 17:51:23 crc kubenswrapper[4830]: I0227 17:51:23.701150 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-7fffd66c5c-klpbv" podUID="668cf4df-9017-4e66-9260-f2601d78a3d7" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.154:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.154:8080: connect: connection refused" Feb 27 17:51:24 crc kubenswrapper[4830]: I0227 17:51:24.186600 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-8q6c2"] Feb 27 17:51:24 crc kubenswrapper[4830]: E0227 17:51:24.187833 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de0219d5-88bc-44ef-a815-643f36288601" containerName="horizon-log" Feb 27 17:51:24 crc kubenswrapper[4830]: I0227 17:51:24.187865 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="de0219d5-88bc-44ef-a815-643f36288601" containerName="horizon-log" Feb 27 17:51:24 crc kubenswrapper[4830]: E0227 17:51:24.187886 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de0219d5-88bc-44ef-a815-643f36288601" containerName="horizon" Feb 27 17:51:24 crc kubenswrapper[4830]: I0227 17:51:24.187901 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="de0219d5-88bc-44ef-a815-643f36288601" containerName="horizon" Feb 27 17:51:24 crc 
kubenswrapper[4830]: I0227 17:51:24.188411 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="de0219d5-88bc-44ef-a815-643f36288601" containerName="horizon-log" Feb 27 17:51:24 crc kubenswrapper[4830]: I0227 17:51:24.188445 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="de0219d5-88bc-44ef-a815-643f36288601" containerName="horizon" Feb 27 17:51:24 crc kubenswrapper[4830]: I0227 17:51:24.190988 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8q6c2" Feb 27 17:51:24 crc kubenswrapper[4830]: I0227 17:51:24.211107 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8q6c2"] Feb 27 17:51:24 crc kubenswrapper[4830]: I0227 17:51:24.315151 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bab9b8c9-003b-4139-b9d5-2302e4773442-utilities\") pod \"redhat-marketplace-8q6c2\" (UID: \"bab9b8c9-003b-4139-b9d5-2302e4773442\") " pod="openshift-marketplace/redhat-marketplace-8q6c2" Feb 27 17:51:24 crc kubenswrapper[4830]: I0227 17:51:24.315334 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bab9b8c9-003b-4139-b9d5-2302e4773442-catalog-content\") pod \"redhat-marketplace-8q6c2\" (UID: \"bab9b8c9-003b-4139-b9d5-2302e4773442\") " pod="openshift-marketplace/redhat-marketplace-8q6c2" Feb 27 17:51:24 crc kubenswrapper[4830]: I0227 17:51:24.315642 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2hnz\" (UniqueName: \"kubernetes.io/projected/bab9b8c9-003b-4139-b9d5-2302e4773442-kube-api-access-z2hnz\") pod \"redhat-marketplace-8q6c2\" (UID: \"bab9b8c9-003b-4139-b9d5-2302e4773442\") " pod="openshift-marketplace/redhat-marketplace-8q6c2" Feb 27 17:51:24 crc 
kubenswrapper[4830]: I0227 17:51:24.418029 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bab9b8c9-003b-4139-b9d5-2302e4773442-catalog-content\") pod \"redhat-marketplace-8q6c2\" (UID: \"bab9b8c9-003b-4139-b9d5-2302e4773442\") " pod="openshift-marketplace/redhat-marketplace-8q6c2" Feb 27 17:51:24 crc kubenswrapper[4830]: I0227 17:51:24.418124 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z2hnz\" (UniqueName: \"kubernetes.io/projected/bab9b8c9-003b-4139-b9d5-2302e4773442-kube-api-access-z2hnz\") pod \"redhat-marketplace-8q6c2\" (UID: \"bab9b8c9-003b-4139-b9d5-2302e4773442\") " pod="openshift-marketplace/redhat-marketplace-8q6c2" Feb 27 17:51:24 crc kubenswrapper[4830]: I0227 17:51:24.418231 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bab9b8c9-003b-4139-b9d5-2302e4773442-utilities\") pod \"redhat-marketplace-8q6c2\" (UID: \"bab9b8c9-003b-4139-b9d5-2302e4773442\") " pod="openshift-marketplace/redhat-marketplace-8q6c2" Feb 27 17:51:24 crc kubenswrapper[4830]: I0227 17:51:24.418731 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bab9b8c9-003b-4139-b9d5-2302e4773442-utilities\") pod \"redhat-marketplace-8q6c2\" (UID: \"bab9b8c9-003b-4139-b9d5-2302e4773442\") " pod="openshift-marketplace/redhat-marketplace-8q6c2" Feb 27 17:51:24 crc kubenswrapper[4830]: I0227 17:51:24.419231 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bab9b8c9-003b-4139-b9d5-2302e4773442-catalog-content\") pod \"redhat-marketplace-8q6c2\" (UID: \"bab9b8c9-003b-4139-b9d5-2302e4773442\") " pod="openshift-marketplace/redhat-marketplace-8q6c2" Feb 27 17:51:24 crc kubenswrapper[4830]: I0227 17:51:24.456760 4830 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2hnz\" (UniqueName: \"kubernetes.io/projected/bab9b8c9-003b-4139-b9d5-2302e4773442-kube-api-access-z2hnz\") pod \"redhat-marketplace-8q6c2\" (UID: \"bab9b8c9-003b-4139-b9d5-2302e4773442\") " pod="openshift-marketplace/redhat-marketplace-8q6c2" Feb 27 17:51:24 crc kubenswrapper[4830]: I0227 17:51:24.535167 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8q6c2" Feb 27 17:51:25 crc kubenswrapper[4830]: I0227 17:51:25.149241 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8q6c2"] Feb 27 17:51:25 crc kubenswrapper[4830]: W0227 17:51:25.150570 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbab9b8c9_003b_4139_b9d5_2302e4773442.slice/crio-823d33d38e084915744e29545c6ae15c31aa64ed15de8f65bfdf834ee4d420d3 WatchSource:0}: Error finding container 823d33d38e084915744e29545c6ae15c31aa64ed15de8f65bfdf834ee4d420d3: Status 404 returned error can't find the container with id 823d33d38e084915744e29545c6ae15c31aa64ed15de8f65bfdf834ee4d420d3 Feb 27 17:51:25 crc kubenswrapper[4830]: I0227 17:51:25.396791 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8q6c2" event={"ID":"bab9b8c9-003b-4139-b9d5-2302e4773442","Type":"ContainerStarted","Data":"823d33d38e084915744e29545c6ae15c31aa64ed15de8f65bfdf834ee4d420d3"} Feb 27 17:51:26 crc kubenswrapper[4830]: I0227 17:51:26.410319 4830 generic.go:334] "Generic (PLEG): container finished" podID="bab9b8c9-003b-4139-b9d5-2302e4773442" containerID="ec82a86b53241a4e93b305870a39cd73c74a95ed3d5b16f627981d401859878c" exitCode=0 Feb 27 17:51:26 crc kubenswrapper[4830]: I0227 17:51:26.410441 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8q6c2" 
event={"ID":"bab9b8c9-003b-4139-b9d5-2302e4773442","Type":"ContainerDied","Data":"ec82a86b53241a4e93b305870a39cd73c74a95ed3d5b16f627981d401859878c"} Feb 27 17:51:26 crc kubenswrapper[4830]: E0227 17:51:26.765402 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gbcl6" podUID="90e915d6-d74a-4f5b-a8da-8f0f2acdda48" Feb 27 17:51:27 crc kubenswrapper[4830]: E0227 17:51:27.036881 4830 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-marketplace-index@sha256=e848a00af7690cfa41500b98e0e7a0b9738ce0af7b6b4fee3ea20e0838523c30/signature-2: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 27 17:51:27 crc kubenswrapper[4830]: E0227 17:51:27.037116 4830 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z2hnz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-8q6c2_openshift-marketplace(bab9b8c9-003b-4139-b9d5-2302e4773442): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-marketplace-index@sha256=e848a00af7690cfa41500b98e0e7a0b9738ce0af7b6b4fee3ea20e0838523c30/signature-2: status 500 (Internal Server Error)" logger="UnhandledError" Feb 27 17:51:27 crc kubenswrapper[4830]: E0227 17:51:27.038853 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: 
reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-marketplace-index@sha256=e848a00af7690cfa41500b98e0e7a0b9738ce0af7b6b4fee3ea20e0838523c30/signature-2: status 500 (Internal Server Error)\"" pod="openshift-marketplace/redhat-marketplace-8q6c2" podUID="bab9b8c9-003b-4139-b9d5-2302e4773442" Feb 27 17:51:27 crc kubenswrapper[4830]: E0227 17:51:27.428379 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-8q6c2" podUID="bab9b8c9-003b-4139-b9d5-2302e4773442" Feb 27 17:51:33 crc kubenswrapper[4830]: I0227 17:51:33.701910 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-7fffd66c5c-klpbv" podUID="668cf4df-9017-4e66-9260-f2601d78a3d7" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.154:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.154:8080: connect: connection refused" Feb 27 17:51:33 crc kubenswrapper[4830]: I0227 17:51:33.703409 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7fffd66c5c-klpbv" Feb 27 17:51:38 crc kubenswrapper[4830]: E0227 17:51:38.324033 4830 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-marketplace-index@sha256=e848a00af7690cfa41500b98e0e7a0b9738ce0af7b6b4fee3ea20e0838523c30/signature-2: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 27 17:51:38 crc kubenswrapper[4830]: E0227 17:51:38.325052 4830 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z2hnz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-8q6c2_openshift-marketplace(bab9b8c9-003b-4139-b9d5-2302e4773442): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-marketplace-index@sha256=e848a00af7690cfa41500b98e0e7a0b9738ce0af7b6b4fee3ea20e0838523c30/signature-2: status 500 (Internal Server Error)" logger="UnhandledError" Feb 27 17:51:38 crc 
kubenswrapper[4830]: E0227 17:51:38.327250 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-marketplace-index@sha256=e848a00af7690cfa41500b98e0e7a0b9738ce0af7b6b4fee3ea20e0838523c30/signature-2: status 500 (Internal Server Error)\"" pod="openshift-marketplace/redhat-marketplace-8q6c2" podUID="bab9b8c9-003b-4139-b9d5-2302e4773442" Feb 27 17:51:38 crc kubenswrapper[4830]: I0227 17:51:38.368870 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7fffd66c5c-klpbv" Feb 27 17:51:38 crc kubenswrapper[4830]: I0227 17:51:38.438064 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/668cf4df-9017-4e66-9260-f2601d78a3d7-logs\") pod \"668cf4df-9017-4e66-9260-f2601d78a3d7\" (UID: \"668cf4df-9017-4e66-9260-f2601d78a3d7\") " Feb 27 17:51:38 crc kubenswrapper[4830]: I0227 17:51:38.438328 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b4jsh\" (UniqueName: \"kubernetes.io/projected/668cf4df-9017-4e66-9260-f2601d78a3d7-kube-api-access-b4jsh\") pod \"668cf4df-9017-4e66-9260-f2601d78a3d7\" (UID: \"668cf4df-9017-4e66-9260-f2601d78a3d7\") " Feb 27 17:51:38 crc kubenswrapper[4830]: I0227 17:51:38.438439 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/668cf4df-9017-4e66-9260-f2601d78a3d7-scripts\") pod \"668cf4df-9017-4e66-9260-f2601d78a3d7\" (UID: \"668cf4df-9017-4e66-9260-f2601d78a3d7\") " Feb 27 17:51:38 crc kubenswrapper[4830]: I0227 17:51:38.438761 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: 
\"kubernetes.io/secret/668cf4df-9017-4e66-9260-f2601d78a3d7-horizon-secret-key\") pod \"668cf4df-9017-4e66-9260-f2601d78a3d7\" (UID: \"668cf4df-9017-4e66-9260-f2601d78a3d7\") " Feb 27 17:51:38 crc kubenswrapper[4830]: I0227 17:51:38.439342 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/668cf4df-9017-4e66-9260-f2601d78a3d7-logs" (OuterVolumeSpecName: "logs") pod "668cf4df-9017-4e66-9260-f2601d78a3d7" (UID: "668cf4df-9017-4e66-9260-f2601d78a3d7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 17:51:38 crc kubenswrapper[4830]: I0227 17:51:38.439981 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/668cf4df-9017-4e66-9260-f2601d78a3d7-config-data\") pod \"668cf4df-9017-4e66-9260-f2601d78a3d7\" (UID: \"668cf4df-9017-4e66-9260-f2601d78a3d7\") " Feb 27 17:51:38 crc kubenswrapper[4830]: I0227 17:51:38.440810 4830 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/668cf4df-9017-4e66-9260-f2601d78a3d7-logs\") on node \"crc\" DevicePath \"\"" Feb 27 17:51:38 crc kubenswrapper[4830]: I0227 17:51:38.447959 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/668cf4df-9017-4e66-9260-f2601d78a3d7-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "668cf4df-9017-4e66-9260-f2601d78a3d7" (UID: "668cf4df-9017-4e66-9260-f2601d78a3d7"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:51:38 crc kubenswrapper[4830]: I0227 17:51:38.451384 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/668cf4df-9017-4e66-9260-f2601d78a3d7-kube-api-access-b4jsh" (OuterVolumeSpecName: "kube-api-access-b4jsh") pod "668cf4df-9017-4e66-9260-f2601d78a3d7" (UID: "668cf4df-9017-4e66-9260-f2601d78a3d7"). 
InnerVolumeSpecName "kube-api-access-b4jsh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:51:38 crc kubenswrapper[4830]: I0227 17:51:38.489511 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/668cf4df-9017-4e66-9260-f2601d78a3d7-scripts" (OuterVolumeSpecName: "scripts") pod "668cf4df-9017-4e66-9260-f2601d78a3d7" (UID: "668cf4df-9017-4e66-9260-f2601d78a3d7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:51:38 crc kubenswrapper[4830]: I0227 17:51:38.494075 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/668cf4df-9017-4e66-9260-f2601d78a3d7-config-data" (OuterVolumeSpecName: "config-data") pod "668cf4df-9017-4e66-9260-f2601d78a3d7" (UID: "668cf4df-9017-4e66-9260-f2601d78a3d7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:51:38 crc kubenswrapper[4830]: I0227 17:51:38.542704 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b4jsh\" (UniqueName: \"kubernetes.io/projected/668cf4df-9017-4e66-9260-f2601d78a3d7-kube-api-access-b4jsh\") on node \"crc\" DevicePath \"\"" Feb 27 17:51:38 crc kubenswrapper[4830]: I0227 17:51:38.542749 4830 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/668cf4df-9017-4e66-9260-f2601d78a3d7-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 17:51:38 crc kubenswrapper[4830]: I0227 17:51:38.542769 4830 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/668cf4df-9017-4e66-9260-f2601d78a3d7-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 27 17:51:38 crc kubenswrapper[4830]: I0227 17:51:38.542786 4830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/668cf4df-9017-4e66-9260-f2601d78a3d7-config-data\") on 
node \"crc\" DevicePath \"\"" Feb 27 17:51:38 crc kubenswrapper[4830]: I0227 17:51:38.586308 4830 generic.go:334] "Generic (PLEG): container finished" podID="668cf4df-9017-4e66-9260-f2601d78a3d7" containerID="8c188b53f11ca0837e9e9cbeee1f5bee1ea0f40d8aad5b80f61673d2819c3ee6" exitCode=137 Feb 27 17:51:38 crc kubenswrapper[4830]: I0227 17:51:38.586361 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7fffd66c5c-klpbv" event={"ID":"668cf4df-9017-4e66-9260-f2601d78a3d7","Type":"ContainerDied","Data":"8c188b53f11ca0837e9e9cbeee1f5bee1ea0f40d8aad5b80f61673d2819c3ee6"} Feb 27 17:51:38 crc kubenswrapper[4830]: I0227 17:51:38.586392 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7fffd66c5c-klpbv" event={"ID":"668cf4df-9017-4e66-9260-f2601d78a3d7","Type":"ContainerDied","Data":"c8950b70d79ede89e301f91ceed6366b0fdd1fd9171ba13c883ee78f091b12b1"} Feb 27 17:51:38 crc kubenswrapper[4830]: I0227 17:51:38.586410 4830 scope.go:117] "RemoveContainer" containerID="eb1cabbb40c471774960e5ff9ad92401514a25734f49aa36622d164c99800a45" Feb 27 17:51:38 crc kubenswrapper[4830]: I0227 17:51:38.586604 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7fffd66c5c-klpbv" Feb 27 17:51:38 crc kubenswrapper[4830]: I0227 17:51:38.623043 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7fffd66c5c-klpbv"] Feb 27 17:51:38 crc kubenswrapper[4830]: I0227 17:51:38.631275 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-7fffd66c5c-klpbv"] Feb 27 17:51:38 crc kubenswrapper[4830]: I0227 17:51:38.777383 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="668cf4df-9017-4e66-9260-f2601d78a3d7" path="/var/lib/kubelet/pods/668cf4df-9017-4e66-9260-f2601d78a3d7/volumes" Feb 27 17:51:38 crc kubenswrapper[4830]: I0227 17:51:38.804479 4830 scope.go:117] "RemoveContainer" containerID="8c188b53f11ca0837e9e9cbeee1f5bee1ea0f40d8aad5b80f61673d2819c3ee6" Feb 27 17:51:38 crc kubenswrapper[4830]: E0227 17:51:38.805401 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gbcl6" podUID="90e915d6-d74a-4f5b-a8da-8f0f2acdda48" Feb 27 17:51:38 crc kubenswrapper[4830]: I0227 17:51:38.831278 4830 scope.go:117] "RemoveContainer" containerID="eb1cabbb40c471774960e5ff9ad92401514a25734f49aa36622d164c99800a45" Feb 27 17:51:38 crc kubenswrapper[4830]: E0227 17:51:38.840736 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eb1cabbb40c471774960e5ff9ad92401514a25734f49aa36622d164c99800a45\": container with ID starting with eb1cabbb40c471774960e5ff9ad92401514a25734f49aa36622d164c99800a45 not found: ID does not exist" containerID="eb1cabbb40c471774960e5ff9ad92401514a25734f49aa36622d164c99800a45" Feb 27 17:51:38 crc kubenswrapper[4830]: I0227 17:51:38.840850 4830 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"eb1cabbb40c471774960e5ff9ad92401514a25734f49aa36622d164c99800a45"} err="failed to get container status \"eb1cabbb40c471774960e5ff9ad92401514a25734f49aa36622d164c99800a45\": rpc error: code = NotFound desc = could not find container \"eb1cabbb40c471774960e5ff9ad92401514a25734f49aa36622d164c99800a45\": container with ID starting with eb1cabbb40c471774960e5ff9ad92401514a25734f49aa36622d164c99800a45 not found: ID does not exist" Feb 27 17:51:38 crc kubenswrapper[4830]: I0227 17:51:38.840888 4830 scope.go:117] "RemoveContainer" containerID="8c188b53f11ca0837e9e9cbeee1f5bee1ea0f40d8aad5b80f61673d2819c3ee6" Feb 27 17:51:38 crc kubenswrapper[4830]: E0227 17:51:38.841579 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c188b53f11ca0837e9e9cbeee1f5bee1ea0f40d8aad5b80f61673d2819c3ee6\": container with ID starting with 8c188b53f11ca0837e9e9cbeee1f5bee1ea0f40d8aad5b80f61673d2819c3ee6 not found: ID does not exist" containerID="8c188b53f11ca0837e9e9cbeee1f5bee1ea0f40d8aad5b80f61673d2819c3ee6" Feb 27 17:51:38 crc kubenswrapper[4830]: I0227 17:51:38.841657 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c188b53f11ca0837e9e9cbeee1f5bee1ea0f40d8aad5b80f61673d2819c3ee6"} err="failed to get container status \"8c188b53f11ca0837e9e9cbeee1f5bee1ea0f40d8aad5b80f61673d2819c3ee6\": rpc error: code = NotFound desc = could not find container \"8c188b53f11ca0837e9e9cbeee1f5bee1ea0f40d8aad5b80f61673d2819c3ee6\": container with ID starting with 8c188b53f11ca0837e9e9cbeee1f5bee1ea0f40d8aad5b80f61673d2819c3ee6 not found: ID does not exist" Feb 27 17:51:40 crc kubenswrapper[4830]: I0227 17:51:40.093938 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-a9e5-account-create-update-8sbl6"] Feb 27 17:51:40 crc kubenswrapper[4830]: I0227 17:51:40.143377 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/placement-db-create-9pfvn"] Feb 27 17:51:40 crc kubenswrapper[4830]: I0227 17:51:40.158200 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-a9e5-account-create-update-8sbl6"] Feb 27 17:51:40 crc kubenswrapper[4830]: I0227 17:51:40.170056 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-9pfvn"] Feb 27 17:51:40 crc kubenswrapper[4830]: I0227 17:51:40.782187 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="40ca4866-696f-4bcf-81ca-b7e20a20faa0" path="/var/lib/kubelet/pods/40ca4866-696f-4bcf-81ca-b7e20a20faa0/volumes" Feb 27 17:51:40 crc kubenswrapper[4830]: I0227 17:51:40.784410 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="999b0c09-f55e-4f61-b7dd-71580d4003bd" path="/var/lib/kubelet/pods/999b0c09-f55e-4f61-b7dd-71580d4003bd/volumes" Feb 27 17:51:47 crc kubenswrapper[4830]: I0227 17:51:47.062800 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-qbvfw"] Feb 27 17:51:47 crc kubenswrapper[4830]: I0227 17:51:47.087230 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-qbvfw"] Feb 27 17:51:48 crc kubenswrapper[4830]: I0227 17:51:48.783704 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75929ab1-64c8-4a78-822f-b3a2701dbcdd" path="/var/lib/kubelet/pods/75929ab1-64c8-4a78-822f-b3a2701dbcdd/volumes" Feb 27 17:51:51 crc kubenswrapper[4830]: I0227 17:51:51.149138 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-7796b64d89-v2b4b"] Feb 27 17:51:51 crc kubenswrapper[4830]: E0227 17:51:51.150020 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="668cf4df-9017-4e66-9260-f2601d78a3d7" containerName="horizon-log" Feb 27 17:51:51 crc kubenswrapper[4830]: I0227 17:51:51.150033 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="668cf4df-9017-4e66-9260-f2601d78a3d7" containerName="horizon-log" 
Feb 27 17:51:51 crc kubenswrapper[4830]: E0227 17:51:51.150058 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="668cf4df-9017-4e66-9260-f2601d78a3d7" containerName="horizon" Feb 27 17:51:51 crc kubenswrapper[4830]: I0227 17:51:51.150063 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="668cf4df-9017-4e66-9260-f2601d78a3d7" containerName="horizon" Feb 27 17:51:51 crc kubenswrapper[4830]: I0227 17:51:51.150249 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="668cf4df-9017-4e66-9260-f2601d78a3d7" containerName="horizon" Feb 27 17:51:51 crc kubenswrapper[4830]: I0227 17:51:51.150260 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="668cf4df-9017-4e66-9260-f2601d78a3d7" containerName="horizon-log" Feb 27 17:51:51 crc kubenswrapper[4830]: I0227 17:51:51.151208 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7796b64d89-v2b4b" Feb 27 17:51:51 crc kubenswrapper[4830]: I0227 17:51:51.166758 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7796b64d89-v2b4b"] Feb 27 17:51:51 crc kubenswrapper[4830]: I0227 17:51:51.302586 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b07ca473-049b-41a3-bb57-a16764c45d86-horizon-secret-key\") pod \"horizon-7796b64d89-v2b4b\" (UID: \"b07ca473-049b-41a3-bb57-a16764c45d86\") " pod="openstack/horizon-7796b64d89-v2b4b" Feb 27 17:51:51 crc kubenswrapper[4830]: I0227 17:51:51.302662 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fsrz5\" (UniqueName: \"kubernetes.io/projected/b07ca473-049b-41a3-bb57-a16764c45d86-kube-api-access-fsrz5\") pod \"horizon-7796b64d89-v2b4b\" (UID: \"b07ca473-049b-41a3-bb57-a16764c45d86\") " pod="openstack/horizon-7796b64d89-v2b4b" Feb 27 17:51:51 crc kubenswrapper[4830]: I0227 17:51:51.302694 
4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b07ca473-049b-41a3-bb57-a16764c45d86-config-data\") pod \"horizon-7796b64d89-v2b4b\" (UID: \"b07ca473-049b-41a3-bb57-a16764c45d86\") " pod="openstack/horizon-7796b64d89-v2b4b" Feb 27 17:51:51 crc kubenswrapper[4830]: I0227 17:51:51.302734 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b07ca473-049b-41a3-bb57-a16764c45d86-logs\") pod \"horizon-7796b64d89-v2b4b\" (UID: \"b07ca473-049b-41a3-bb57-a16764c45d86\") " pod="openstack/horizon-7796b64d89-v2b4b" Feb 27 17:51:51 crc kubenswrapper[4830]: I0227 17:51:51.302804 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b07ca473-049b-41a3-bb57-a16764c45d86-scripts\") pod \"horizon-7796b64d89-v2b4b\" (UID: \"b07ca473-049b-41a3-bb57-a16764c45d86\") " pod="openstack/horizon-7796b64d89-v2b4b" Feb 27 17:51:51 crc kubenswrapper[4830]: I0227 17:51:51.404344 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b07ca473-049b-41a3-bb57-a16764c45d86-scripts\") pod \"horizon-7796b64d89-v2b4b\" (UID: \"b07ca473-049b-41a3-bb57-a16764c45d86\") " pod="openstack/horizon-7796b64d89-v2b4b" Feb 27 17:51:51 crc kubenswrapper[4830]: I0227 17:51:51.404422 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b07ca473-049b-41a3-bb57-a16764c45d86-horizon-secret-key\") pod \"horizon-7796b64d89-v2b4b\" (UID: \"b07ca473-049b-41a3-bb57-a16764c45d86\") " pod="openstack/horizon-7796b64d89-v2b4b" Feb 27 17:51:51 crc kubenswrapper[4830]: I0227 17:51:51.404473 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-fsrz5\" (UniqueName: \"kubernetes.io/projected/b07ca473-049b-41a3-bb57-a16764c45d86-kube-api-access-fsrz5\") pod \"horizon-7796b64d89-v2b4b\" (UID: \"b07ca473-049b-41a3-bb57-a16764c45d86\") " pod="openstack/horizon-7796b64d89-v2b4b" Feb 27 17:51:51 crc kubenswrapper[4830]: I0227 17:51:51.404502 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b07ca473-049b-41a3-bb57-a16764c45d86-config-data\") pod \"horizon-7796b64d89-v2b4b\" (UID: \"b07ca473-049b-41a3-bb57-a16764c45d86\") " pod="openstack/horizon-7796b64d89-v2b4b" Feb 27 17:51:51 crc kubenswrapper[4830]: I0227 17:51:51.404537 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b07ca473-049b-41a3-bb57-a16764c45d86-logs\") pod \"horizon-7796b64d89-v2b4b\" (UID: \"b07ca473-049b-41a3-bb57-a16764c45d86\") " pod="openstack/horizon-7796b64d89-v2b4b" Feb 27 17:51:51 crc kubenswrapper[4830]: I0227 17:51:51.404904 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b07ca473-049b-41a3-bb57-a16764c45d86-logs\") pod \"horizon-7796b64d89-v2b4b\" (UID: \"b07ca473-049b-41a3-bb57-a16764c45d86\") " pod="openstack/horizon-7796b64d89-v2b4b" Feb 27 17:51:51 crc kubenswrapper[4830]: I0227 17:51:51.405375 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b07ca473-049b-41a3-bb57-a16764c45d86-scripts\") pod \"horizon-7796b64d89-v2b4b\" (UID: \"b07ca473-049b-41a3-bb57-a16764c45d86\") " pod="openstack/horizon-7796b64d89-v2b4b" Feb 27 17:51:51 crc kubenswrapper[4830]: I0227 17:51:51.407124 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b07ca473-049b-41a3-bb57-a16764c45d86-config-data\") pod \"horizon-7796b64d89-v2b4b\" (UID: 
\"b07ca473-049b-41a3-bb57-a16764c45d86\") " pod="openstack/horizon-7796b64d89-v2b4b" Feb 27 17:51:51 crc kubenswrapper[4830]: I0227 17:51:51.412282 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b07ca473-049b-41a3-bb57-a16764c45d86-horizon-secret-key\") pod \"horizon-7796b64d89-v2b4b\" (UID: \"b07ca473-049b-41a3-bb57-a16764c45d86\") " pod="openstack/horizon-7796b64d89-v2b4b" Feb 27 17:51:51 crc kubenswrapper[4830]: I0227 17:51:51.432652 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fsrz5\" (UniqueName: \"kubernetes.io/projected/b07ca473-049b-41a3-bb57-a16764c45d86-kube-api-access-fsrz5\") pod \"horizon-7796b64d89-v2b4b\" (UID: \"b07ca473-049b-41a3-bb57-a16764c45d86\") " pod="openstack/horizon-7796b64d89-v2b4b" Feb 27 17:51:51 crc kubenswrapper[4830]: I0227 17:51:51.506925 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7796b64d89-v2b4b" Feb 27 17:51:52 crc kubenswrapper[4830]: I0227 17:51:52.050691 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7796b64d89-v2b4b"] Feb 27 17:51:52 crc kubenswrapper[4830]: W0227 17:51:52.064000 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb07ca473_049b_41a3_bb57_a16764c45d86.slice/crio-4867f2d6b5c5437e1f5645ded32a87f2cfbd49d0d1bb66f4756e8a64c3ad064d WatchSource:0}: Error finding container 4867f2d6b5c5437e1f5645ded32a87f2cfbd49d0d1bb66f4756e8a64c3ad064d: Status 404 returned error can't find the container with id 4867f2d6b5c5437e1f5645ded32a87f2cfbd49d0d1bb66f4756e8a64c3ad064d Feb 27 17:51:52 crc kubenswrapper[4830]: I0227 17:51:52.382558 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-create-lnc76"] Feb 27 17:51:52 crc kubenswrapper[4830]: I0227 17:51:52.385187 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-create-lnc76" Feb 27 17:51:52 crc kubenswrapper[4830]: I0227 17:51:52.461654 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-lnc76"] Feb 27 17:51:52 crc kubenswrapper[4830]: I0227 17:51:52.503040 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-d74f-account-create-update-4bpbv"] Feb 27 17:51:52 crc kubenswrapper[4830]: I0227 17:51:52.504560 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-d74f-account-create-update-4bpbv" Feb 27 17:51:52 crc kubenswrapper[4830]: I0227 17:51:52.508365 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-db-secret" Feb 27 17:51:52 crc kubenswrapper[4830]: I0227 17:51:52.523488 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-d74f-account-create-update-4bpbv"] Feb 27 17:51:52 crc kubenswrapper[4830]: I0227 17:51:52.541376 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nm697\" (UniqueName: \"kubernetes.io/projected/69d24cdc-6ac8-49bc-aca6-81956b204c0b-kube-api-access-nm697\") pod \"heat-db-create-lnc76\" (UID: \"69d24cdc-6ac8-49bc-aca6-81956b204c0b\") " pod="openstack/heat-db-create-lnc76" Feb 27 17:51:52 crc kubenswrapper[4830]: I0227 17:51:52.541455 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/69d24cdc-6ac8-49bc-aca6-81956b204c0b-operator-scripts\") pod \"heat-db-create-lnc76\" (UID: \"69d24cdc-6ac8-49bc-aca6-81956b204c0b\") " pod="openstack/heat-db-create-lnc76" Feb 27 17:51:52 crc kubenswrapper[4830]: I0227 17:51:52.643659 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nm697\" (UniqueName: \"kubernetes.io/projected/69d24cdc-6ac8-49bc-aca6-81956b204c0b-kube-api-access-nm697\") pod 
\"heat-db-create-lnc76\" (UID: \"69d24cdc-6ac8-49bc-aca6-81956b204c0b\") " pod="openstack/heat-db-create-lnc76" Feb 27 17:51:52 crc kubenswrapper[4830]: I0227 17:51:52.644334 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c62765c-1e54-4883-bb95-ae8b9727ace2-operator-scripts\") pod \"heat-d74f-account-create-update-4bpbv\" (UID: \"4c62765c-1e54-4883-bb95-ae8b9727ace2\") " pod="openstack/heat-d74f-account-create-update-4bpbv" Feb 27 17:51:52 crc kubenswrapper[4830]: I0227 17:51:52.644429 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/69d24cdc-6ac8-49bc-aca6-81956b204c0b-operator-scripts\") pod \"heat-db-create-lnc76\" (UID: \"69d24cdc-6ac8-49bc-aca6-81956b204c0b\") " pod="openstack/heat-db-create-lnc76" Feb 27 17:51:52 crc kubenswrapper[4830]: I0227 17:51:52.644516 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wl6wf\" (UniqueName: \"kubernetes.io/projected/4c62765c-1e54-4883-bb95-ae8b9727ace2-kube-api-access-wl6wf\") pod \"heat-d74f-account-create-update-4bpbv\" (UID: \"4c62765c-1e54-4883-bb95-ae8b9727ace2\") " pod="openstack/heat-d74f-account-create-update-4bpbv" Feb 27 17:51:52 crc kubenswrapper[4830]: I0227 17:51:52.645736 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/69d24cdc-6ac8-49bc-aca6-81956b204c0b-operator-scripts\") pod \"heat-db-create-lnc76\" (UID: \"69d24cdc-6ac8-49bc-aca6-81956b204c0b\") " pod="openstack/heat-db-create-lnc76" Feb 27 17:51:52 crc kubenswrapper[4830]: I0227 17:51:52.660687 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nm697\" (UniqueName: \"kubernetes.io/projected/69d24cdc-6ac8-49bc-aca6-81956b204c0b-kube-api-access-nm697\") pod 
\"heat-db-create-lnc76\" (UID: \"69d24cdc-6ac8-49bc-aca6-81956b204c0b\") " pod="openstack/heat-db-create-lnc76" Feb 27 17:51:52 crc kubenswrapper[4830]: I0227 17:51:52.747083 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c62765c-1e54-4883-bb95-ae8b9727ace2-operator-scripts\") pod \"heat-d74f-account-create-update-4bpbv\" (UID: \"4c62765c-1e54-4883-bb95-ae8b9727ace2\") " pod="openstack/heat-d74f-account-create-update-4bpbv" Feb 27 17:51:52 crc kubenswrapper[4830]: I0227 17:51:52.747337 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wl6wf\" (UniqueName: \"kubernetes.io/projected/4c62765c-1e54-4883-bb95-ae8b9727ace2-kube-api-access-wl6wf\") pod \"heat-d74f-account-create-update-4bpbv\" (UID: \"4c62765c-1e54-4883-bb95-ae8b9727ace2\") " pod="openstack/heat-d74f-account-create-update-4bpbv" Feb 27 17:51:52 crc kubenswrapper[4830]: I0227 17:51:52.748317 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c62765c-1e54-4883-bb95-ae8b9727ace2-operator-scripts\") pod \"heat-d74f-account-create-update-4bpbv\" (UID: \"4c62765c-1e54-4883-bb95-ae8b9727ace2\") " pod="openstack/heat-d74f-account-create-update-4bpbv" Feb 27 17:51:52 crc kubenswrapper[4830]: I0227 17:51:52.765454 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wl6wf\" (UniqueName: \"kubernetes.io/projected/4c62765c-1e54-4883-bb95-ae8b9727ace2-kube-api-access-wl6wf\") pod \"heat-d74f-account-create-update-4bpbv\" (UID: \"4c62765c-1e54-4883-bb95-ae8b9727ace2\") " pod="openstack/heat-d74f-account-create-update-4bpbv" Feb 27 17:51:52 crc kubenswrapper[4830]: I0227 17:51:52.782916 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7796b64d89-v2b4b" 
event={"ID":"b07ca473-049b-41a3-bb57-a16764c45d86","Type":"ContainerStarted","Data":"7a55c464496ecf7aa15a3ac33d4de2c640ab5f28b922c572b1663ff26cdd2809"} Feb 27 17:51:52 crc kubenswrapper[4830]: I0227 17:51:52.783060 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7796b64d89-v2b4b" event={"ID":"b07ca473-049b-41a3-bb57-a16764c45d86","Type":"ContainerStarted","Data":"74a8a93d52b15cbe8bf08450a36816eda939d48fb1ac35b7bba107dda912ade0"} Feb 27 17:51:52 crc kubenswrapper[4830]: I0227 17:51:52.783124 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7796b64d89-v2b4b" event={"ID":"b07ca473-049b-41a3-bb57-a16764c45d86","Type":"ContainerStarted","Data":"4867f2d6b5c5437e1f5645ded32a87f2cfbd49d0d1bb66f4756e8a64c3ad064d"} Feb 27 17:51:52 crc kubenswrapper[4830]: I0227 17:51:52.825636 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-7796b64d89-v2b4b" podStartSLOduration=1.825610111 podStartE2EDuration="1.825610111s" podCreationTimestamp="2026-02-27 17:51:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 17:51:52.81134903 +0000 UTC m=+6308.900621493" watchObservedRunningTime="2026-02-27 17:51:52.825610111 +0000 UTC m=+6308.914882584" Feb 27 17:51:52 crc kubenswrapper[4830]: I0227 17:51:52.826290 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-lnc76" Feb 27 17:51:52 crc kubenswrapper[4830]: I0227 17:51:52.838813 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-d74f-account-create-update-4bpbv" Feb 27 17:51:53 crc kubenswrapper[4830]: W0227 17:51:53.439230 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod69d24cdc_6ac8_49bc_aca6_81956b204c0b.slice/crio-d2bcb345199fcec480b46fd4901976a77dab950ca3e112688ff6e1c1adebab3e WatchSource:0}: Error finding container d2bcb345199fcec480b46fd4901976a77dab950ca3e112688ff6e1c1adebab3e: Status 404 returned error can't find the container with id d2bcb345199fcec480b46fd4901976a77dab950ca3e112688ff6e1c1adebab3e Feb 27 17:51:53 crc kubenswrapper[4830]: I0227 17:51:53.445104 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-lnc76"] Feb 27 17:51:53 crc kubenswrapper[4830]: I0227 17:51:53.510913 4830 scope.go:117] "RemoveContainer" containerID="20a73155e16a680bcaeef5e2ae214a36a2059c9520795cefe96d510ae3d1a618" Feb 27 17:51:53 crc kubenswrapper[4830]: I0227 17:51:53.586693 4830 scope.go:117] "RemoveContainer" containerID="71370417538fd2c6bd53b284b401fca4285542e8326674eb686c6728aa0a07c3" Feb 27 17:51:53 crc kubenswrapper[4830]: I0227 17:51:53.604912 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-d74f-account-create-update-4bpbv"] Feb 27 17:51:53 crc kubenswrapper[4830]: W0227 17:51:53.613289 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4c62765c_1e54_4883_bb95_ae8b9727ace2.slice/crio-2372da4769058738564bea2ca481c3f2a9f1ef1b9297d947aec1f38dc12721d3 WatchSource:0}: Error finding container 2372da4769058738564bea2ca481c3f2a9f1ef1b9297d947aec1f38dc12721d3: Status 404 returned error can't find the container with id 2372da4769058738564bea2ca481c3f2a9f1ef1b9297d947aec1f38dc12721d3 Feb 27 17:51:53 crc kubenswrapper[4830]: I0227 17:51:53.775043 4830 scope.go:117] "RemoveContainer" 
containerID="8796327ff924c8489b6e6d9b0bd9cdf89d2a62f3ba1335b489ef3339d1c3304a" Feb 27 17:51:53 crc kubenswrapper[4830]: E0227 17:51:53.775041 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gbcl6" podUID="90e915d6-d74a-4f5b-a8da-8f0f2acdda48" Feb 27 17:51:53 crc kubenswrapper[4830]: E0227 17:51:53.776858 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-8q6c2" podUID="bab9b8c9-003b-4139-b9d5-2302e4773442" Feb 27 17:51:53 crc kubenswrapper[4830]: I0227 17:51:53.810172 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-lnc76" event={"ID":"69d24cdc-6ac8-49bc-aca6-81956b204c0b","Type":"ContainerStarted","Data":"d2bcb345199fcec480b46fd4901976a77dab950ca3e112688ff6e1c1adebab3e"} Feb 27 17:51:53 crc kubenswrapper[4830]: I0227 17:51:53.813009 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-d74f-account-create-update-4bpbv" event={"ID":"4c62765c-1e54-4883-bb95-ae8b9727ace2","Type":"ContainerStarted","Data":"2372da4769058738564bea2ca481c3f2a9f1ef1b9297d947aec1f38dc12721d3"} Feb 27 17:51:53 crc kubenswrapper[4830]: I0227 17:51:53.836070 4830 scope.go:117] "RemoveContainer" containerID="d1f9ed46fa0e79149abfd8a8fbf4baefc01d0dfbe0873f82800363c800abfb57" Feb 27 17:51:53 crc kubenswrapper[4830]: I0227 17:51:53.866381 4830 scope.go:117] "RemoveContainer" containerID="1f0d6ae756f012b90b0ca967dd7d86f0649dc830c16363cfb292fc8b7a069ad9" Feb 27 17:51:53 crc kubenswrapper[4830]: I0227 17:51:53.893882 4830 scope.go:117] "RemoveContainer" 
containerID="23e56562b97439c8ca29a75f37d58f75827c5a7bed19c12c1a1a8a6fef736d1f" Feb 27 17:51:53 crc kubenswrapper[4830]: I0227 17:51:53.918139 4830 scope.go:117] "RemoveContainer" containerID="829a58febd013e9966ceec94981836968748df8ff2a4c693b3d3e273263ae144" Feb 27 17:51:54 crc kubenswrapper[4830]: I0227 17:51:54.834699 4830 generic.go:334] "Generic (PLEG): container finished" podID="69d24cdc-6ac8-49bc-aca6-81956b204c0b" containerID="89440017ef91fbeaae35638eeed4966b5dbbd07cdbc63e99fffc1fbbc0e8ddae" exitCode=0 Feb 27 17:51:54 crc kubenswrapper[4830]: I0227 17:51:54.834811 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-lnc76" event={"ID":"69d24cdc-6ac8-49bc-aca6-81956b204c0b","Type":"ContainerDied","Data":"89440017ef91fbeaae35638eeed4966b5dbbd07cdbc63e99fffc1fbbc0e8ddae"} Feb 27 17:51:54 crc kubenswrapper[4830]: I0227 17:51:54.837028 4830 generic.go:334] "Generic (PLEG): container finished" podID="4c62765c-1e54-4883-bb95-ae8b9727ace2" containerID="aa4edb17d0fc5db0eacc87959db200bad8ecd7e4d182baaa7b5f61d043ea4413" exitCode=0 Feb 27 17:51:54 crc kubenswrapper[4830]: I0227 17:51:54.837106 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-d74f-account-create-update-4bpbv" event={"ID":"4c62765c-1e54-4883-bb95-ae8b9727ace2","Type":"ContainerDied","Data":"aa4edb17d0fc5db0eacc87959db200bad8ecd7e4d182baaa7b5f61d043ea4413"} Feb 27 17:51:56 crc kubenswrapper[4830]: I0227 17:51:56.414568 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-d74f-account-create-update-4bpbv" Feb 27 17:51:56 crc kubenswrapper[4830]: I0227 17:51:56.419449 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-create-lnc76" Feb 27 17:51:56 crc kubenswrapper[4830]: I0227 17:51:56.539290 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/69d24cdc-6ac8-49bc-aca6-81956b204c0b-operator-scripts\") pod \"69d24cdc-6ac8-49bc-aca6-81956b204c0b\" (UID: \"69d24cdc-6ac8-49bc-aca6-81956b204c0b\") " Feb 27 17:51:56 crc kubenswrapper[4830]: I0227 17:51:56.539429 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wl6wf\" (UniqueName: \"kubernetes.io/projected/4c62765c-1e54-4883-bb95-ae8b9727ace2-kube-api-access-wl6wf\") pod \"4c62765c-1e54-4883-bb95-ae8b9727ace2\" (UID: \"4c62765c-1e54-4883-bb95-ae8b9727ace2\") " Feb 27 17:51:56 crc kubenswrapper[4830]: I0227 17:51:56.539566 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nm697\" (UniqueName: \"kubernetes.io/projected/69d24cdc-6ac8-49bc-aca6-81956b204c0b-kube-api-access-nm697\") pod \"69d24cdc-6ac8-49bc-aca6-81956b204c0b\" (UID: \"69d24cdc-6ac8-49bc-aca6-81956b204c0b\") " Feb 27 17:51:56 crc kubenswrapper[4830]: I0227 17:51:56.539592 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c62765c-1e54-4883-bb95-ae8b9727ace2-operator-scripts\") pod \"4c62765c-1e54-4883-bb95-ae8b9727ace2\" (UID: \"4c62765c-1e54-4883-bb95-ae8b9727ace2\") " Feb 27 17:51:56 crc kubenswrapper[4830]: I0227 17:51:56.540619 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c62765c-1e54-4883-bb95-ae8b9727ace2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4c62765c-1e54-4883-bb95-ae8b9727ace2" (UID: "4c62765c-1e54-4883-bb95-ae8b9727ace2"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:51:56 crc kubenswrapper[4830]: I0227 17:51:56.542456 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/69d24cdc-6ac8-49bc-aca6-81956b204c0b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "69d24cdc-6ac8-49bc-aca6-81956b204c0b" (UID: "69d24cdc-6ac8-49bc-aca6-81956b204c0b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:51:56 crc kubenswrapper[4830]: I0227 17:51:56.549352 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c62765c-1e54-4883-bb95-ae8b9727ace2-kube-api-access-wl6wf" (OuterVolumeSpecName: "kube-api-access-wl6wf") pod "4c62765c-1e54-4883-bb95-ae8b9727ace2" (UID: "4c62765c-1e54-4883-bb95-ae8b9727ace2"). InnerVolumeSpecName "kube-api-access-wl6wf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:51:56 crc kubenswrapper[4830]: I0227 17:51:56.549439 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69d24cdc-6ac8-49bc-aca6-81956b204c0b-kube-api-access-nm697" (OuterVolumeSpecName: "kube-api-access-nm697") pod "69d24cdc-6ac8-49bc-aca6-81956b204c0b" (UID: "69d24cdc-6ac8-49bc-aca6-81956b204c0b"). InnerVolumeSpecName "kube-api-access-nm697". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:51:56 crc kubenswrapper[4830]: I0227 17:51:56.643186 4830 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/69d24cdc-6ac8-49bc-aca6-81956b204c0b-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 17:51:56 crc kubenswrapper[4830]: I0227 17:51:56.643278 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wl6wf\" (UniqueName: \"kubernetes.io/projected/4c62765c-1e54-4883-bb95-ae8b9727ace2-kube-api-access-wl6wf\") on node \"crc\" DevicePath \"\"" Feb 27 17:51:56 crc kubenswrapper[4830]: I0227 17:51:56.643341 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nm697\" (UniqueName: \"kubernetes.io/projected/69d24cdc-6ac8-49bc-aca6-81956b204c0b-kube-api-access-nm697\") on node \"crc\" DevicePath \"\"" Feb 27 17:51:56 crc kubenswrapper[4830]: I0227 17:51:56.643360 4830 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c62765c-1e54-4883-bb95-ae8b9727ace2-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 17:51:56 crc kubenswrapper[4830]: I0227 17:51:56.878258 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-create-lnc76" Feb 27 17:51:56 crc kubenswrapper[4830]: I0227 17:51:56.878334 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-lnc76" event={"ID":"69d24cdc-6ac8-49bc-aca6-81956b204c0b","Type":"ContainerDied","Data":"d2bcb345199fcec480b46fd4901976a77dab950ca3e112688ff6e1c1adebab3e"} Feb 27 17:51:56 crc kubenswrapper[4830]: I0227 17:51:56.878803 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d2bcb345199fcec480b46fd4901976a77dab950ca3e112688ff6e1c1adebab3e" Feb 27 17:51:56 crc kubenswrapper[4830]: I0227 17:51:56.881132 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-d74f-account-create-update-4bpbv" event={"ID":"4c62765c-1e54-4883-bb95-ae8b9727ace2","Type":"ContainerDied","Data":"2372da4769058738564bea2ca481c3f2a9f1ef1b9297d947aec1f38dc12721d3"} Feb 27 17:51:56 crc kubenswrapper[4830]: I0227 17:51:56.881204 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2372da4769058738564bea2ca481c3f2a9f1ef1b9297d947aec1f38dc12721d3" Feb 27 17:51:56 crc kubenswrapper[4830]: I0227 17:51:56.881300 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-d74f-account-create-update-4bpbv" Feb 27 17:52:00 crc kubenswrapper[4830]: I0227 17:52:00.151499 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536912-2rxlg"] Feb 27 17:52:00 crc kubenswrapper[4830]: E0227 17:52:00.152762 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c62765c-1e54-4883-bb95-ae8b9727ace2" containerName="mariadb-account-create-update" Feb 27 17:52:00 crc kubenswrapper[4830]: I0227 17:52:00.152780 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c62765c-1e54-4883-bb95-ae8b9727ace2" containerName="mariadb-account-create-update" Feb 27 17:52:00 crc kubenswrapper[4830]: E0227 17:52:00.152798 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69d24cdc-6ac8-49bc-aca6-81956b204c0b" containerName="mariadb-database-create" Feb 27 17:52:00 crc kubenswrapper[4830]: I0227 17:52:00.152806 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="69d24cdc-6ac8-49bc-aca6-81956b204c0b" containerName="mariadb-database-create" Feb 27 17:52:00 crc kubenswrapper[4830]: I0227 17:52:00.153118 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c62765c-1e54-4883-bb95-ae8b9727ace2" containerName="mariadb-account-create-update" Feb 27 17:52:00 crc kubenswrapper[4830]: I0227 17:52:00.153142 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="69d24cdc-6ac8-49bc-aca6-81956b204c0b" containerName="mariadb-database-create" Feb 27 17:52:00 crc kubenswrapper[4830]: I0227 17:52:00.154124 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536912-2rxlg" Feb 27 17:52:00 crc kubenswrapper[4830]: I0227 17:52:00.157393 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 17:52:00 crc kubenswrapper[4830]: I0227 17:52:00.157728 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lmbtm" Feb 27 17:52:00 crc kubenswrapper[4830]: I0227 17:52:00.157792 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 17:52:00 crc kubenswrapper[4830]: I0227 17:52:00.168277 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536912-2rxlg"] Feb 27 17:52:00 crc kubenswrapper[4830]: I0227 17:52:00.241347 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjjpk\" (UniqueName: \"kubernetes.io/projected/a2476383-c615-49e7-b34c-e824adab8603-kube-api-access-wjjpk\") pod \"auto-csr-approver-29536912-2rxlg\" (UID: \"a2476383-c615-49e7-b34c-e824adab8603\") " pod="openshift-infra/auto-csr-approver-29536912-2rxlg" Feb 27 17:52:00 crc kubenswrapper[4830]: I0227 17:52:00.345028 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wjjpk\" (UniqueName: \"kubernetes.io/projected/a2476383-c615-49e7-b34c-e824adab8603-kube-api-access-wjjpk\") pod \"auto-csr-approver-29536912-2rxlg\" (UID: \"a2476383-c615-49e7-b34c-e824adab8603\") " pod="openshift-infra/auto-csr-approver-29536912-2rxlg" Feb 27 17:52:00 crc kubenswrapper[4830]: I0227 17:52:00.396922 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wjjpk\" (UniqueName: \"kubernetes.io/projected/a2476383-c615-49e7-b34c-e824adab8603-kube-api-access-wjjpk\") pod \"auto-csr-approver-29536912-2rxlg\" (UID: \"a2476383-c615-49e7-b34c-e824adab8603\") " 
pod="openshift-infra/auto-csr-approver-29536912-2rxlg" Feb 27 17:52:00 crc kubenswrapper[4830]: I0227 17:52:00.505264 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536912-2rxlg" Feb 27 17:52:01 crc kubenswrapper[4830]: I0227 17:52:01.079378 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536912-2rxlg"] Feb 27 17:52:01 crc kubenswrapper[4830]: I0227 17:52:01.508771 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7796b64d89-v2b4b" Feb 27 17:52:01 crc kubenswrapper[4830]: I0227 17:52:01.513188 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7796b64d89-v2b4b" Feb 27 17:52:01 crc kubenswrapper[4830]: I0227 17:52:01.943035 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536912-2rxlg" event={"ID":"a2476383-c615-49e7-b34c-e824adab8603","Type":"ContainerStarted","Data":"996eac75d75b24304c08977546f0615a6b79e45ee6349cbc00e2ad371852e743"} Feb 27 17:52:02 crc kubenswrapper[4830]: I0227 17:52:02.470732 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-2tcsn"] Feb 27 17:52:02 crc kubenswrapper[4830]: I0227 17:52:02.479973 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-2tcsn" Feb 27 17:52:02 crc kubenswrapper[4830]: I0227 17:52:02.484399 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Feb 27 17:52:02 crc kubenswrapper[4830]: I0227 17:52:02.484911 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-mwqcq" Feb 27 17:52:02 crc kubenswrapper[4830]: I0227 17:52:02.486865 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-2tcsn"] Feb 27 17:52:02 crc kubenswrapper[4830]: I0227 17:52:02.600176 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/370be3ee-4c90-499b-a826-5b39169ac10a-config-data\") pod \"heat-db-sync-2tcsn\" (UID: \"370be3ee-4c90-499b-a826-5b39169ac10a\") " pod="openstack/heat-db-sync-2tcsn" Feb 27 17:52:02 crc kubenswrapper[4830]: I0227 17:52:02.601374 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/370be3ee-4c90-499b-a826-5b39169ac10a-combined-ca-bundle\") pod \"heat-db-sync-2tcsn\" (UID: \"370be3ee-4c90-499b-a826-5b39169ac10a\") " pod="openstack/heat-db-sync-2tcsn" Feb 27 17:52:02 crc kubenswrapper[4830]: I0227 17:52:02.601489 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2mnb\" (UniqueName: \"kubernetes.io/projected/370be3ee-4c90-499b-a826-5b39169ac10a-kube-api-access-t2mnb\") pod \"heat-db-sync-2tcsn\" (UID: \"370be3ee-4c90-499b-a826-5b39169ac10a\") " pod="openstack/heat-db-sync-2tcsn" Feb 27 17:52:02 crc kubenswrapper[4830]: I0227 17:52:02.703190 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/370be3ee-4c90-499b-a826-5b39169ac10a-combined-ca-bundle\") pod 
\"heat-db-sync-2tcsn\" (UID: \"370be3ee-4c90-499b-a826-5b39169ac10a\") " pod="openstack/heat-db-sync-2tcsn" Feb 27 17:52:02 crc kubenswrapper[4830]: I0227 17:52:02.703292 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t2mnb\" (UniqueName: \"kubernetes.io/projected/370be3ee-4c90-499b-a826-5b39169ac10a-kube-api-access-t2mnb\") pod \"heat-db-sync-2tcsn\" (UID: \"370be3ee-4c90-499b-a826-5b39169ac10a\") " pod="openstack/heat-db-sync-2tcsn" Feb 27 17:52:02 crc kubenswrapper[4830]: I0227 17:52:02.703370 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/370be3ee-4c90-499b-a826-5b39169ac10a-config-data\") pod \"heat-db-sync-2tcsn\" (UID: \"370be3ee-4c90-499b-a826-5b39169ac10a\") " pod="openstack/heat-db-sync-2tcsn" Feb 27 17:52:02 crc kubenswrapper[4830]: I0227 17:52:02.727906 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/370be3ee-4c90-499b-a826-5b39169ac10a-combined-ca-bundle\") pod \"heat-db-sync-2tcsn\" (UID: \"370be3ee-4c90-499b-a826-5b39169ac10a\") " pod="openstack/heat-db-sync-2tcsn" Feb 27 17:52:02 crc kubenswrapper[4830]: I0227 17:52:02.728440 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/370be3ee-4c90-499b-a826-5b39169ac10a-config-data\") pod \"heat-db-sync-2tcsn\" (UID: \"370be3ee-4c90-499b-a826-5b39169ac10a\") " pod="openstack/heat-db-sync-2tcsn" Feb 27 17:52:02 crc kubenswrapper[4830]: I0227 17:52:02.729546 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2mnb\" (UniqueName: \"kubernetes.io/projected/370be3ee-4c90-499b-a826-5b39169ac10a-kube-api-access-t2mnb\") pod \"heat-db-sync-2tcsn\" (UID: \"370be3ee-4c90-499b-a826-5b39169ac10a\") " pod="openstack/heat-db-sync-2tcsn" Feb 27 17:52:02 crc kubenswrapper[4830]: I0227 
17:52:02.813326 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-2tcsn" Feb 27 17:52:03 crc kubenswrapper[4830]: I0227 17:52:03.453410 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-2tcsn"] Feb 27 17:52:03 crc kubenswrapper[4830]: I0227 17:52:03.970240 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-2tcsn" event={"ID":"370be3ee-4c90-499b-a826-5b39169ac10a","Type":"ContainerStarted","Data":"5ee904d34eb88d766fd149cf2ebbe48fd07a8246baf7ff771ce63fe731594c21"} Feb 27 17:52:03 crc kubenswrapper[4830]: I0227 17:52:03.973206 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536912-2rxlg" event={"ID":"a2476383-c615-49e7-b34c-e824adab8603","Type":"ContainerStarted","Data":"be34b7a5698aae5a9c1ba9bb648c8fff1a0cbc53687a974ba33846a2c0c0cc9b"} Feb 27 17:52:04 crc kubenswrapper[4830]: I0227 17:52:04.010095 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29536912-2rxlg" podStartSLOduration=2.416213467 podStartE2EDuration="4.010061201s" podCreationTimestamp="2026-02-27 17:52:00 +0000 UTC" firstStartedPulling="2026-02-27 17:52:01.086523249 +0000 UTC m=+6317.175795712" lastFinishedPulling="2026-02-27 17:52:02.680370973 +0000 UTC m=+6318.769643446" observedRunningTime="2026-02-27 17:52:04.000710648 +0000 UTC m=+6320.089983181" watchObservedRunningTime="2026-02-27 17:52:04.010061201 +0000 UTC m=+6320.099333674" Feb 27 17:52:04 crc kubenswrapper[4830]: I0227 17:52:04.991509 4830 generic.go:334] "Generic (PLEG): container finished" podID="a2476383-c615-49e7-b34c-e824adab8603" containerID="be34b7a5698aae5a9c1ba9bb648c8fff1a0cbc53687a974ba33846a2c0c0cc9b" exitCode=0 Feb 27 17:52:04 crc kubenswrapper[4830]: I0227 17:52:04.992017 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536912-2rxlg" 
event={"ID":"a2476383-c615-49e7-b34c-e824adab8603","Type":"ContainerDied","Data":"be34b7a5698aae5a9c1ba9bb648c8fff1a0cbc53687a974ba33846a2c0c0cc9b"} Feb 27 17:52:06 crc kubenswrapper[4830]: I0227 17:52:06.484884 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536912-2rxlg" Feb 27 17:52:06 crc kubenswrapper[4830]: I0227 17:52:06.637637 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wjjpk\" (UniqueName: \"kubernetes.io/projected/a2476383-c615-49e7-b34c-e824adab8603-kube-api-access-wjjpk\") pod \"a2476383-c615-49e7-b34c-e824adab8603\" (UID: \"a2476383-c615-49e7-b34c-e824adab8603\") " Feb 27 17:52:06 crc kubenswrapper[4830]: I0227 17:52:06.646628 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2476383-c615-49e7-b34c-e824adab8603-kube-api-access-wjjpk" (OuterVolumeSpecName: "kube-api-access-wjjpk") pod "a2476383-c615-49e7-b34c-e824adab8603" (UID: "a2476383-c615-49e7-b34c-e824adab8603"). InnerVolumeSpecName "kube-api-access-wjjpk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:52:06 crc kubenswrapper[4830]: I0227 17:52:06.741226 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wjjpk\" (UniqueName: \"kubernetes.io/projected/a2476383-c615-49e7-b34c-e824adab8603-kube-api-access-wjjpk\") on node \"crc\" DevicePath \"\"" Feb 27 17:52:07 crc kubenswrapper[4830]: I0227 17:52:07.026727 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536912-2rxlg" event={"ID":"a2476383-c615-49e7-b34c-e824adab8603","Type":"ContainerDied","Data":"996eac75d75b24304c08977546f0615a6b79e45ee6349cbc00e2ad371852e743"} Feb 27 17:52:07 crc kubenswrapper[4830]: I0227 17:52:07.026786 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="996eac75d75b24304c08977546f0615a6b79e45ee6349cbc00e2ad371852e743" Feb 27 17:52:07 crc kubenswrapper[4830]: I0227 17:52:07.026827 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536912-2rxlg" Feb 27 17:52:07 crc kubenswrapper[4830]: I0227 17:52:07.056544 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536906-9ktdj"] Feb 27 17:52:07 crc kubenswrapper[4830]: I0227 17:52:07.068735 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536906-9ktdj"] Feb 27 17:52:07 crc kubenswrapper[4830]: E0227 17:52:07.766025 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gbcl6" podUID="90e915d6-d74a-4f5b-a8da-8f0f2acdda48" Feb 27 17:52:08 crc kubenswrapper[4830]: I0227 17:52:08.796847 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09e651f3-9b46-4392-9ce4-a653c4ad3415" 
path="/var/lib/kubelet/pods/09e651f3-9b46-4392-9ce4-a653c4ad3415/volumes" Feb 27 17:52:13 crc kubenswrapper[4830]: I0227 17:52:13.391590 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-7796b64d89-v2b4b" Feb 27 17:52:14 crc kubenswrapper[4830]: E0227 17:52:14.072237 4830 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-marketplace-index@sha256=e848a00af7690cfa41500b98e0e7a0b9738ce0af7b6b4fee3ea20e0838523c30/signature-2: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 27 17:52:14 crc kubenswrapper[4830]: E0227 17:52:14.072813 4830 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z2hnz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-8q6c2_openshift-marketplace(bab9b8c9-003b-4139-b9d5-2302e4773442): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-marketplace-index@sha256=e848a00af7690cfa41500b98e0e7a0b9738ce0af7b6b4fee3ea20e0838523c30/signature-2: status 500 (Internal Server Error)" logger="UnhandledError" Feb 27 17:52:14 crc kubenswrapper[4830]: E0227 17:52:14.074058 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: 
reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-marketplace-index@sha256=e848a00af7690cfa41500b98e0e7a0b9738ce0af7b6b4fee3ea20e0838523c30/signature-2: status 500 (Internal Server Error)\"" pod="openshift-marketplace/redhat-marketplace-8q6c2" podUID="bab9b8c9-003b-4139-b9d5-2302e4773442" Feb 27 17:52:15 crc kubenswrapper[4830]: I0227 17:52:15.006223 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-7796b64d89-v2b4b" Feb 27 17:52:15 crc kubenswrapper[4830]: I0227 17:52:15.100217 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-684cf744b5-pzh2b"] Feb 27 17:52:15 crc kubenswrapper[4830]: I0227 17:52:15.103193 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-684cf744b5-pzh2b" podUID="e8a4ffdf-3cc8-491c-8795-5226996342cc" containerName="horizon-log" containerID="cri-o://d12fdec6bb2fe1d1fb3852c6738cebb28432f42eeff19e89964818907186d5a1" gracePeriod=30 Feb 27 17:52:15 crc kubenswrapper[4830]: I0227 17:52:15.103568 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-684cf744b5-pzh2b" podUID="e8a4ffdf-3cc8-491c-8795-5226996342cc" containerName="horizon" containerID="cri-o://778fde59b569a28b8849d4c16df30e107135a979d8f9f7724ff452e47b32a740" gracePeriod=30 Feb 27 17:52:16 crc kubenswrapper[4830]: I0227 17:52:16.173744 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-2tcsn" event={"ID":"370be3ee-4c90-499b-a826-5b39169ac10a","Type":"ContainerStarted","Data":"2789089b2b7910526790c32e7c41f21b278ea5ba6260d31dbd2c81df1ad29faa"} Feb 27 17:52:16 crc kubenswrapper[4830]: I0227 17:52:16.215320 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-2tcsn" podStartSLOduration=2.586680024 podStartE2EDuration="14.215294864s" podCreationTimestamp="2026-02-27 17:52:02 +0000 UTC" firstStartedPulling="2026-02-27 
17:52:03.497853165 +0000 UTC m=+6319.587125658" lastFinishedPulling="2026-02-27 17:52:15.126468045 +0000 UTC m=+6331.215740498" observedRunningTime="2026-02-27 17:52:16.197668353 +0000 UTC m=+6332.286940856" watchObservedRunningTime="2026-02-27 17:52:16.215294864 +0000 UTC m=+6332.304567367" Feb 27 17:52:17 crc kubenswrapper[4830]: I0227 17:52:17.184797 4830 generic.go:334] "Generic (PLEG): container finished" podID="370be3ee-4c90-499b-a826-5b39169ac10a" containerID="2789089b2b7910526790c32e7c41f21b278ea5ba6260d31dbd2c81df1ad29faa" exitCode=0 Feb 27 17:52:17 crc kubenswrapper[4830]: I0227 17:52:17.184919 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-2tcsn" event={"ID":"370be3ee-4c90-499b-a826-5b39169ac10a","Type":"ContainerDied","Data":"2789089b2b7910526790c32e7c41f21b278ea5ba6260d31dbd2c81df1ad29faa"} Feb 27 17:52:18 crc kubenswrapper[4830]: I0227 17:52:18.635136 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-2tcsn" Feb 27 17:52:18 crc kubenswrapper[4830]: I0227 17:52:18.716046 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/370be3ee-4c90-499b-a826-5b39169ac10a-combined-ca-bundle\") pod \"370be3ee-4c90-499b-a826-5b39169ac10a\" (UID: \"370be3ee-4c90-499b-a826-5b39169ac10a\") " Feb 27 17:52:18 crc kubenswrapper[4830]: I0227 17:52:18.716180 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t2mnb\" (UniqueName: \"kubernetes.io/projected/370be3ee-4c90-499b-a826-5b39169ac10a-kube-api-access-t2mnb\") pod \"370be3ee-4c90-499b-a826-5b39169ac10a\" (UID: \"370be3ee-4c90-499b-a826-5b39169ac10a\") " Feb 27 17:52:18 crc kubenswrapper[4830]: I0227 17:52:18.716379 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/370be3ee-4c90-499b-a826-5b39169ac10a-config-data\") pod \"370be3ee-4c90-499b-a826-5b39169ac10a\" (UID: \"370be3ee-4c90-499b-a826-5b39169ac10a\") " Feb 27 17:52:18 crc kubenswrapper[4830]: I0227 17:52:18.724315 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/370be3ee-4c90-499b-a826-5b39169ac10a-kube-api-access-t2mnb" (OuterVolumeSpecName: "kube-api-access-t2mnb") pod "370be3ee-4c90-499b-a826-5b39169ac10a" (UID: "370be3ee-4c90-499b-a826-5b39169ac10a"). InnerVolumeSpecName "kube-api-access-t2mnb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:52:18 crc kubenswrapper[4830]: I0227 17:52:18.755623 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/370be3ee-4c90-499b-a826-5b39169ac10a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "370be3ee-4c90-499b-a826-5b39169ac10a" (UID: "370be3ee-4c90-499b-a826-5b39169ac10a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:52:18 crc kubenswrapper[4830]: I0227 17:52:18.820892 4830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/370be3ee-4c90-499b-a826-5b39169ac10a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 17:52:18 crc kubenswrapper[4830]: I0227 17:52:18.820931 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t2mnb\" (UniqueName: \"kubernetes.io/projected/370be3ee-4c90-499b-a826-5b39169ac10a-kube-api-access-t2mnb\") on node \"crc\" DevicePath \"\"" Feb 27 17:52:18 crc kubenswrapper[4830]: I0227 17:52:18.833443 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/370be3ee-4c90-499b-a826-5b39169ac10a-config-data" (OuterVolumeSpecName: "config-data") pod "370be3ee-4c90-499b-a826-5b39169ac10a" (UID: "370be3ee-4c90-499b-a826-5b39169ac10a"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:52:18 crc kubenswrapper[4830]: I0227 17:52:18.922672 4830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/370be3ee-4c90-499b-a826-5b39169ac10a-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 17:52:19 crc kubenswrapper[4830]: I0227 17:52:19.208382 4830 generic.go:334] "Generic (PLEG): container finished" podID="e8a4ffdf-3cc8-491c-8795-5226996342cc" containerID="778fde59b569a28b8849d4c16df30e107135a979d8f9f7724ff452e47b32a740" exitCode=0 Feb 27 17:52:19 crc kubenswrapper[4830]: I0227 17:52:19.208457 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-684cf744b5-pzh2b" event={"ID":"e8a4ffdf-3cc8-491c-8795-5226996342cc","Type":"ContainerDied","Data":"778fde59b569a28b8849d4c16df30e107135a979d8f9f7724ff452e47b32a740"} Feb 27 17:52:19 crc kubenswrapper[4830]: I0227 17:52:19.210098 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-2tcsn" event={"ID":"370be3ee-4c90-499b-a826-5b39169ac10a","Type":"ContainerDied","Data":"5ee904d34eb88d766fd149cf2ebbe48fd07a8246baf7ff771ce63fe731594c21"} Feb 27 17:52:19 crc kubenswrapper[4830]: I0227 17:52:19.210144 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-2tcsn" Feb 27 17:52:19 crc kubenswrapper[4830]: I0227 17:52:19.210145 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5ee904d34eb88d766fd149cf2ebbe48fd07a8246baf7ff771ce63fe731594c21" Feb 27 17:52:20 crc kubenswrapper[4830]: I0227 17:52:20.593215 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-649cbc8c57-j4q69"] Feb 27 17:52:20 crc kubenswrapper[4830]: E0227 17:52:20.594038 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="370be3ee-4c90-499b-a826-5b39169ac10a" containerName="heat-db-sync" Feb 27 17:52:20 crc kubenswrapper[4830]: I0227 17:52:20.594054 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="370be3ee-4c90-499b-a826-5b39169ac10a" containerName="heat-db-sync" Feb 27 17:52:20 crc kubenswrapper[4830]: E0227 17:52:20.594081 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2476383-c615-49e7-b34c-e824adab8603" containerName="oc" Feb 27 17:52:20 crc kubenswrapper[4830]: I0227 17:52:20.594090 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2476383-c615-49e7-b34c-e824adab8603" containerName="oc" Feb 27 17:52:20 crc kubenswrapper[4830]: I0227 17:52:20.594348 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="370be3ee-4c90-499b-a826-5b39169ac10a" containerName="heat-db-sync" Feb 27 17:52:20 crc kubenswrapper[4830]: I0227 17:52:20.594365 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2476383-c615-49e7-b34c-e824adab8603" containerName="oc" Feb 27 17:52:20 crc kubenswrapper[4830]: I0227 17:52:20.595212 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-649cbc8c57-j4q69" Feb 27 17:52:20 crc kubenswrapper[4830]: I0227 17:52:20.597684 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Feb 27 17:52:20 crc kubenswrapper[4830]: I0227 17:52:20.598263 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-engine-config-data" Feb 27 17:52:20 crc kubenswrapper[4830]: I0227 17:52:20.598595 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-mwqcq" Feb 27 17:52:20 crc kubenswrapper[4830]: I0227 17:52:20.622881 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-649cbc8c57-j4q69"] Feb 27 17:52:20 crc kubenswrapper[4830]: I0227 17:52:20.669026 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-7d576967dc-475nd"] Feb 27 17:52:20 crc kubenswrapper[4830]: I0227 17:52:20.670427 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-7d576967dc-475nd" Feb 27 17:52:20 crc kubenswrapper[4830]: I0227 17:52:20.673627 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-cfnapi-config-data" Feb 27 17:52:20 crc kubenswrapper[4830]: I0227 17:52:20.681232 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-7d576967dc-475nd"] Feb 27 17:52:20 crc kubenswrapper[4830]: I0227 17:52:20.772115 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwxmf\" (UniqueName: \"kubernetes.io/projected/add92f79-a9b6-4757-a50d-902c8de76fdc-kube-api-access-kwxmf\") pod \"heat-engine-649cbc8c57-j4q69\" (UID: \"add92f79-a9b6-4757-a50d-902c8de76fdc\") " pod="openstack/heat-engine-649cbc8c57-j4q69" Feb 27 17:52:20 crc kubenswrapper[4830]: I0227 17:52:20.772211 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/add92f79-a9b6-4757-a50d-902c8de76fdc-config-data\") pod \"heat-engine-649cbc8c57-j4q69\" (UID: \"add92f79-a9b6-4757-a50d-902c8de76fdc\") " pod="openstack/heat-engine-649cbc8c57-j4q69" Feb 27 17:52:20 crc kubenswrapper[4830]: I0227 17:52:20.772250 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e31db19-37ac-4e76-a650-dacf0b71c2fa-combined-ca-bundle\") pod \"heat-cfnapi-7d576967dc-475nd\" (UID: \"6e31db19-37ac-4e76-a650-dacf0b71c2fa\") " pod="openstack/heat-cfnapi-7d576967dc-475nd" Feb 27 17:52:20 crc kubenswrapper[4830]: I0227 17:52:20.772283 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/add92f79-a9b6-4757-a50d-902c8de76fdc-combined-ca-bundle\") pod \"heat-engine-649cbc8c57-j4q69\" (UID: 
\"add92f79-a9b6-4757-a50d-902c8de76fdc\") " pod="openstack/heat-engine-649cbc8c57-j4q69" Feb 27 17:52:20 crc kubenswrapper[4830]: I0227 17:52:20.772316 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6e31db19-37ac-4e76-a650-dacf0b71c2fa-config-data-custom\") pod \"heat-cfnapi-7d576967dc-475nd\" (UID: \"6e31db19-37ac-4e76-a650-dacf0b71c2fa\") " pod="openstack/heat-cfnapi-7d576967dc-475nd" Feb 27 17:52:20 crc kubenswrapper[4830]: I0227 17:52:20.772352 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vpnkg\" (UniqueName: \"kubernetes.io/projected/6e31db19-37ac-4e76-a650-dacf0b71c2fa-kube-api-access-vpnkg\") pod \"heat-cfnapi-7d576967dc-475nd\" (UID: \"6e31db19-37ac-4e76-a650-dacf0b71c2fa\") " pod="openstack/heat-cfnapi-7d576967dc-475nd" Feb 27 17:52:20 crc kubenswrapper[4830]: I0227 17:52:20.772377 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e31db19-37ac-4e76-a650-dacf0b71c2fa-config-data\") pod \"heat-cfnapi-7d576967dc-475nd\" (UID: \"6e31db19-37ac-4e76-a650-dacf0b71c2fa\") " pod="openstack/heat-cfnapi-7d576967dc-475nd" Feb 27 17:52:20 crc kubenswrapper[4830]: I0227 17:52:20.772392 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/add92f79-a9b6-4757-a50d-902c8de76fdc-config-data-custom\") pod \"heat-engine-649cbc8c57-j4q69\" (UID: \"add92f79-a9b6-4757-a50d-902c8de76fdc\") " pod="openstack/heat-engine-649cbc8c57-j4q69" Feb 27 17:52:20 crc kubenswrapper[4830]: I0227 17:52:20.823594 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-5fbb4bdc94-5c6mv"] Feb 27 17:52:20 crc kubenswrapper[4830]: I0227 17:52:20.824997 4830 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openstack/heat-api-5fbb4bdc94-5c6mv" Feb 27 17:52:20 crc kubenswrapper[4830]: I0227 17:52:20.829535 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-api-config-data" Feb 27 17:52:20 crc kubenswrapper[4830]: I0227 17:52:20.840971 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-5fbb4bdc94-5c6mv"] Feb 27 17:52:20 crc kubenswrapper[4830]: I0227 17:52:20.875084 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vpnkg\" (UniqueName: \"kubernetes.io/projected/6e31db19-37ac-4e76-a650-dacf0b71c2fa-kube-api-access-vpnkg\") pod \"heat-cfnapi-7d576967dc-475nd\" (UID: \"6e31db19-37ac-4e76-a650-dacf0b71c2fa\") " pod="openstack/heat-cfnapi-7d576967dc-475nd" Feb 27 17:52:20 crc kubenswrapper[4830]: I0227 17:52:20.875143 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e31db19-37ac-4e76-a650-dacf0b71c2fa-config-data\") pod \"heat-cfnapi-7d576967dc-475nd\" (UID: \"6e31db19-37ac-4e76-a650-dacf0b71c2fa\") " pod="openstack/heat-cfnapi-7d576967dc-475nd" Feb 27 17:52:20 crc kubenswrapper[4830]: I0227 17:52:20.875187 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/add92f79-a9b6-4757-a50d-902c8de76fdc-config-data-custom\") pod \"heat-engine-649cbc8c57-j4q69\" (UID: \"add92f79-a9b6-4757-a50d-902c8de76fdc\") " pod="openstack/heat-engine-649cbc8c57-j4q69" Feb 27 17:52:20 crc kubenswrapper[4830]: I0227 17:52:20.875243 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwxmf\" (UniqueName: \"kubernetes.io/projected/add92f79-a9b6-4757-a50d-902c8de76fdc-kube-api-access-kwxmf\") pod \"heat-engine-649cbc8c57-j4q69\" (UID: \"add92f79-a9b6-4757-a50d-902c8de76fdc\") " pod="openstack/heat-engine-649cbc8c57-j4q69" Feb 27 17:52:20 
crc kubenswrapper[4830]: I0227 17:52:20.875349 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/add92f79-a9b6-4757-a50d-902c8de76fdc-config-data\") pod \"heat-engine-649cbc8c57-j4q69\" (UID: \"add92f79-a9b6-4757-a50d-902c8de76fdc\") " pod="openstack/heat-engine-649cbc8c57-j4q69" Feb 27 17:52:20 crc kubenswrapper[4830]: I0227 17:52:20.875403 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e31db19-37ac-4e76-a650-dacf0b71c2fa-combined-ca-bundle\") pod \"heat-cfnapi-7d576967dc-475nd\" (UID: \"6e31db19-37ac-4e76-a650-dacf0b71c2fa\") " pod="openstack/heat-cfnapi-7d576967dc-475nd" Feb 27 17:52:20 crc kubenswrapper[4830]: I0227 17:52:20.875446 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/add92f79-a9b6-4757-a50d-902c8de76fdc-combined-ca-bundle\") pod \"heat-engine-649cbc8c57-j4q69\" (UID: \"add92f79-a9b6-4757-a50d-902c8de76fdc\") " pod="openstack/heat-engine-649cbc8c57-j4q69" Feb 27 17:52:20 crc kubenswrapper[4830]: I0227 17:52:20.875498 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6e31db19-37ac-4e76-a650-dacf0b71c2fa-config-data-custom\") pod \"heat-cfnapi-7d576967dc-475nd\" (UID: \"6e31db19-37ac-4e76-a650-dacf0b71c2fa\") " pod="openstack/heat-cfnapi-7d576967dc-475nd" Feb 27 17:52:20 crc kubenswrapper[4830]: I0227 17:52:20.890872 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/add92f79-a9b6-4757-a50d-902c8de76fdc-combined-ca-bundle\") pod \"heat-engine-649cbc8c57-j4q69\" (UID: \"add92f79-a9b6-4757-a50d-902c8de76fdc\") " pod="openstack/heat-engine-649cbc8c57-j4q69" Feb 27 17:52:20 crc kubenswrapper[4830]: I0227 17:52:20.893661 4830 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6e31db19-37ac-4e76-a650-dacf0b71c2fa-config-data-custom\") pod \"heat-cfnapi-7d576967dc-475nd\" (UID: \"6e31db19-37ac-4e76-a650-dacf0b71c2fa\") " pod="openstack/heat-cfnapi-7d576967dc-475nd" Feb 27 17:52:20 crc kubenswrapper[4830]: I0227 17:52:20.896537 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e31db19-37ac-4e76-a650-dacf0b71c2fa-config-data\") pod \"heat-cfnapi-7d576967dc-475nd\" (UID: \"6e31db19-37ac-4e76-a650-dacf0b71c2fa\") " pod="openstack/heat-cfnapi-7d576967dc-475nd" Feb 27 17:52:20 crc kubenswrapper[4830]: I0227 17:52:20.899721 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e31db19-37ac-4e76-a650-dacf0b71c2fa-combined-ca-bundle\") pod \"heat-cfnapi-7d576967dc-475nd\" (UID: \"6e31db19-37ac-4e76-a650-dacf0b71c2fa\") " pod="openstack/heat-cfnapi-7d576967dc-475nd" Feb 27 17:52:20 crc kubenswrapper[4830]: I0227 17:52:20.900828 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/add92f79-a9b6-4757-a50d-902c8de76fdc-config-data-custom\") pod \"heat-engine-649cbc8c57-j4q69\" (UID: \"add92f79-a9b6-4757-a50d-902c8de76fdc\") " pod="openstack/heat-engine-649cbc8c57-j4q69" Feb 27 17:52:20 crc kubenswrapper[4830]: I0227 17:52:20.901877 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/add92f79-a9b6-4757-a50d-902c8de76fdc-config-data\") pod \"heat-engine-649cbc8c57-j4q69\" (UID: \"add92f79-a9b6-4757-a50d-902c8de76fdc\") " pod="openstack/heat-engine-649cbc8c57-j4q69" Feb 27 17:52:20 crc kubenswrapper[4830]: I0227 17:52:20.908580 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwxmf\" 
(UniqueName: \"kubernetes.io/projected/add92f79-a9b6-4757-a50d-902c8de76fdc-kube-api-access-kwxmf\") pod \"heat-engine-649cbc8c57-j4q69\" (UID: \"add92f79-a9b6-4757-a50d-902c8de76fdc\") " pod="openstack/heat-engine-649cbc8c57-j4q69" Feb 27 17:52:20 crc kubenswrapper[4830]: I0227 17:52:20.917012 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vpnkg\" (UniqueName: \"kubernetes.io/projected/6e31db19-37ac-4e76-a650-dacf0b71c2fa-kube-api-access-vpnkg\") pod \"heat-cfnapi-7d576967dc-475nd\" (UID: \"6e31db19-37ac-4e76-a650-dacf0b71c2fa\") " pod="openstack/heat-cfnapi-7d576967dc-475nd" Feb 27 17:52:20 crc kubenswrapper[4830]: I0227 17:52:20.937644 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-649cbc8c57-j4q69" Feb 27 17:52:20 crc kubenswrapper[4830]: I0227 17:52:20.980265 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/522dc2a3-ea31-4a6e-a591-31b8988518e9-config-data\") pod \"heat-api-5fbb4bdc94-5c6mv\" (UID: \"522dc2a3-ea31-4a6e-a591-31b8988518e9\") " pod="openstack/heat-api-5fbb4bdc94-5c6mv" Feb 27 17:52:20 crc kubenswrapper[4830]: I0227 17:52:20.980347 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/522dc2a3-ea31-4a6e-a591-31b8988518e9-config-data-custom\") pod \"heat-api-5fbb4bdc94-5c6mv\" (UID: \"522dc2a3-ea31-4a6e-a591-31b8988518e9\") " pod="openstack/heat-api-5fbb4bdc94-5c6mv" Feb 27 17:52:20 crc kubenswrapper[4830]: I0227 17:52:20.980386 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckrt7\" (UniqueName: \"kubernetes.io/projected/522dc2a3-ea31-4a6e-a591-31b8988518e9-kube-api-access-ckrt7\") pod \"heat-api-5fbb4bdc94-5c6mv\" (UID: \"522dc2a3-ea31-4a6e-a591-31b8988518e9\") " 
pod="openstack/heat-api-5fbb4bdc94-5c6mv" Feb 27 17:52:20 crc kubenswrapper[4830]: I0227 17:52:20.980418 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/522dc2a3-ea31-4a6e-a591-31b8988518e9-combined-ca-bundle\") pod \"heat-api-5fbb4bdc94-5c6mv\" (UID: \"522dc2a3-ea31-4a6e-a591-31b8988518e9\") " pod="openstack/heat-api-5fbb4bdc94-5c6mv" Feb 27 17:52:20 crc kubenswrapper[4830]: I0227 17:52:20.997288 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-7d576967dc-475nd" Feb 27 17:52:21 crc kubenswrapper[4830]: I0227 17:52:21.081715 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ckrt7\" (UniqueName: \"kubernetes.io/projected/522dc2a3-ea31-4a6e-a591-31b8988518e9-kube-api-access-ckrt7\") pod \"heat-api-5fbb4bdc94-5c6mv\" (UID: \"522dc2a3-ea31-4a6e-a591-31b8988518e9\") " pod="openstack/heat-api-5fbb4bdc94-5c6mv" Feb 27 17:52:21 crc kubenswrapper[4830]: I0227 17:52:21.081766 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/522dc2a3-ea31-4a6e-a591-31b8988518e9-combined-ca-bundle\") pod \"heat-api-5fbb4bdc94-5c6mv\" (UID: \"522dc2a3-ea31-4a6e-a591-31b8988518e9\") " pod="openstack/heat-api-5fbb4bdc94-5c6mv" Feb 27 17:52:21 crc kubenswrapper[4830]: I0227 17:52:21.081883 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/522dc2a3-ea31-4a6e-a591-31b8988518e9-config-data\") pod \"heat-api-5fbb4bdc94-5c6mv\" (UID: \"522dc2a3-ea31-4a6e-a591-31b8988518e9\") " pod="openstack/heat-api-5fbb4bdc94-5c6mv" Feb 27 17:52:21 crc kubenswrapper[4830]: I0227 17:52:21.081936 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/522dc2a3-ea31-4a6e-a591-31b8988518e9-config-data-custom\") pod \"heat-api-5fbb4bdc94-5c6mv\" (UID: \"522dc2a3-ea31-4a6e-a591-31b8988518e9\") " pod="openstack/heat-api-5fbb4bdc94-5c6mv" Feb 27 17:52:21 crc kubenswrapper[4830]: I0227 17:52:21.100979 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/522dc2a3-ea31-4a6e-a591-31b8988518e9-config-data-custom\") pod \"heat-api-5fbb4bdc94-5c6mv\" (UID: \"522dc2a3-ea31-4a6e-a591-31b8988518e9\") " pod="openstack/heat-api-5fbb4bdc94-5c6mv" Feb 27 17:52:21 crc kubenswrapper[4830]: I0227 17:52:21.103481 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/522dc2a3-ea31-4a6e-a591-31b8988518e9-combined-ca-bundle\") pod \"heat-api-5fbb4bdc94-5c6mv\" (UID: \"522dc2a3-ea31-4a6e-a591-31b8988518e9\") " pod="openstack/heat-api-5fbb4bdc94-5c6mv" Feb 27 17:52:21 crc kubenswrapper[4830]: I0227 17:52:21.113727 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/522dc2a3-ea31-4a6e-a591-31b8988518e9-config-data\") pod \"heat-api-5fbb4bdc94-5c6mv\" (UID: \"522dc2a3-ea31-4a6e-a591-31b8988518e9\") " pod="openstack/heat-api-5fbb4bdc94-5c6mv" Feb 27 17:52:21 crc kubenswrapper[4830]: I0227 17:52:21.116107 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ckrt7\" (UniqueName: \"kubernetes.io/projected/522dc2a3-ea31-4a6e-a591-31b8988518e9-kube-api-access-ckrt7\") pod \"heat-api-5fbb4bdc94-5c6mv\" (UID: \"522dc2a3-ea31-4a6e-a591-31b8988518e9\") " pod="openstack/heat-api-5fbb4bdc94-5c6mv" Feb 27 17:52:21 crc kubenswrapper[4830]: I0227 17:52:21.158443 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-5fbb4bdc94-5c6mv" Feb 27 17:52:21 crc kubenswrapper[4830]: I0227 17:52:21.594985 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-649cbc8c57-j4q69"] Feb 27 17:52:21 crc kubenswrapper[4830]: I0227 17:52:21.754079 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-7d576967dc-475nd"] Feb 27 17:52:21 crc kubenswrapper[4830]: E0227 17:52:21.767510 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gbcl6" podUID="90e915d6-d74a-4f5b-a8da-8f0f2acdda48" Feb 27 17:52:21 crc kubenswrapper[4830]: I0227 17:52:21.945589 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-5fbb4bdc94-5c6mv"] Feb 27 17:52:22 crc kubenswrapper[4830]: I0227 17:52:22.284344 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-649cbc8c57-j4q69" event={"ID":"add92f79-a9b6-4757-a50d-902c8de76fdc","Type":"ContainerStarted","Data":"f8992b820c56e2f6b8b4b2ed3483421c51758d3f107273e005dd9163108bb27c"} Feb 27 17:52:22 crc kubenswrapper[4830]: I0227 17:52:22.284385 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-649cbc8c57-j4q69" event={"ID":"add92f79-a9b6-4757-a50d-902c8de76fdc","Type":"ContainerStarted","Data":"df90f774dc2dabb6281e3e71fbee3dda53279ed33fda0963fcde4bfc42893446"} Feb 27 17:52:22 crc kubenswrapper[4830]: I0227 17:52:22.286389 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-649cbc8c57-j4q69" Feb 27 17:52:22 crc kubenswrapper[4830]: I0227 17:52:22.291424 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7d576967dc-475nd" 
event={"ID":"6e31db19-37ac-4e76-a650-dacf0b71c2fa","Type":"ContainerStarted","Data":"2a76001bbdd1b3bf6056c195ac2a148c66dcc1aa5927984df6fdef1ff4ac6685"} Feb 27 17:52:22 crc kubenswrapper[4830]: I0227 17:52:22.292991 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5fbb4bdc94-5c6mv" event={"ID":"522dc2a3-ea31-4a6e-a591-31b8988518e9","Type":"ContainerStarted","Data":"b0cbb0520c1720b7524c3ce4440c20b2f88a79e609096cc0be631c7da2563006"} Feb 27 17:52:22 crc kubenswrapper[4830]: I0227 17:52:22.307672 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-649cbc8c57-j4q69" podStartSLOduration=2.307650601 podStartE2EDuration="2.307650601s" podCreationTimestamp="2026-02-27 17:52:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 17:52:22.300682445 +0000 UTC m=+6338.389954908" watchObservedRunningTime="2026-02-27 17:52:22.307650601 +0000 UTC m=+6338.396923064" Feb 27 17:52:24 crc kubenswrapper[4830]: I0227 17:52:24.479147 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-684cf744b5-pzh2b" podUID="e8a4ffdf-3cc8-491c-8795-5226996342cc" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.156:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.156:8080: connect: connection refused" Feb 27 17:52:25 crc kubenswrapper[4830]: I0227 17:52:25.332580 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5fbb4bdc94-5c6mv" event={"ID":"522dc2a3-ea31-4a6e-a591-31b8988518e9","Type":"ContainerStarted","Data":"70e792552767720f9dce7b4e12a0b7a83875f9d9fb702afd64b17d2a2f7ad149"} Feb 27 17:52:25 crc kubenswrapper[4830]: I0227 17:52:25.333038 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-5fbb4bdc94-5c6mv" Feb 27 17:52:25 crc kubenswrapper[4830]: I0227 17:52:25.335189 4830 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7d576967dc-475nd" event={"ID":"6e31db19-37ac-4e76-a650-dacf0b71c2fa","Type":"ContainerStarted","Data":"b5053b1569622d0f161cf484d5ee97c4e353c0bf4a53bf9ba3c72936f86a3d6c"} Feb 27 17:52:25 crc kubenswrapper[4830]: I0227 17:52:25.335477 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-7d576967dc-475nd" Feb 27 17:52:25 crc kubenswrapper[4830]: I0227 17:52:25.373421 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-5fbb4bdc94-5c6mv" podStartSLOduration=3.134408616 podStartE2EDuration="5.373389211s" podCreationTimestamp="2026-02-27 17:52:20 +0000 UTC" firstStartedPulling="2026-02-27 17:52:21.949269664 +0000 UTC m=+6338.038542127" lastFinishedPulling="2026-02-27 17:52:24.188250269 +0000 UTC m=+6340.277522722" observedRunningTime="2026-02-27 17:52:25.350631098 +0000 UTC m=+6341.439903561" watchObservedRunningTime="2026-02-27 17:52:25.373389211 +0000 UTC m=+6341.462661714" Feb 27 17:52:25 crc kubenswrapper[4830]: I0227 17:52:25.386389 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-7d576967dc-475nd" podStartSLOduration=2.957014944 podStartE2EDuration="5.386367631s" podCreationTimestamp="2026-02-27 17:52:20 +0000 UTC" firstStartedPulling="2026-02-27 17:52:21.762474497 +0000 UTC m=+6337.851746960" lastFinishedPulling="2026-02-27 17:52:24.191827184 +0000 UTC m=+6340.281099647" observedRunningTime="2026-02-27 17:52:25.381228189 +0000 UTC m=+6341.470500672" watchObservedRunningTime="2026-02-27 17:52:25.386367631 +0000 UTC m=+6341.475640094" Feb 27 17:52:25 crc kubenswrapper[4830]: E0227 17:52:25.765023 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" 
pod="openshift-marketplace/redhat-marketplace-8q6c2" podUID="bab9b8c9-003b-4139-b9d5-2302e4773442" Feb 27 17:52:32 crc kubenswrapper[4830]: I0227 17:52:32.199512 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-7d576967dc-475nd" Feb 27 17:52:32 crc kubenswrapper[4830]: I0227 17:52:32.410448 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-5fbb4bdc94-5c6mv" Feb 27 17:52:34 crc kubenswrapper[4830]: I0227 17:52:34.478876 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-684cf744b5-pzh2b" podUID="e8a4ffdf-3cc8-491c-8795-5226996342cc" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.156:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.156:8080: connect: connection refused" Feb 27 17:52:35 crc kubenswrapper[4830]: E0227 17:52:35.585885 4830 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-operator-index@sha256=340dbaa786c584e5ffe05a0f79571b9c2fe7d16a1a1fb390e5d83b437d7a1ff3/signature-3: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 27 17:52:35 crc kubenswrapper[4830]: E0227 17:52:35.586556 4830 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t9tjs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-gbcl6_openshift-marketplace(90e915d6-d74a-4f5b-a8da-8f0f2acdda48): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-operator-index@sha256=340dbaa786c584e5ffe05a0f79571b9c2fe7d16a1a1fb390e5d83b437d7a1ff3/signature-3: status 500 (Internal Server Error)" logger="UnhandledError" Feb 27 17:52:35 crc kubenswrapper[4830]: E0227 17:52:35.588108 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading 
signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-operator-index@sha256=340dbaa786c584e5ffe05a0f79571b9c2fe7d16a1a1fb390e5d83b437d7a1ff3/signature-3: status 500 (Internal Server Error)\"" pod="openshift-marketplace/redhat-operators-gbcl6" podUID="90e915d6-d74a-4f5b-a8da-8f0f2acdda48" Feb 27 17:52:38 crc kubenswrapper[4830]: E0227 17:52:38.768708 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-8q6c2" podUID="bab9b8c9-003b-4139-b9d5-2302e4773442" Feb 27 17:52:40 crc kubenswrapper[4830]: I0227 17:52:40.993590 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-649cbc8c57-j4q69" Feb 27 17:52:44 crc kubenswrapper[4830]: I0227 17:52:44.479121 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-684cf744b5-pzh2b" podUID="e8a4ffdf-3cc8-491c-8795-5226996342cc" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.156:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.156:8080: connect: connection refused" Feb 27 17:52:44 crc kubenswrapper[4830]: I0227 17:52:44.479984 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-684cf744b5-pzh2b" Feb 27 17:52:45 crc kubenswrapper[4830]: I0227 17:52:45.594497 4830 generic.go:334] "Generic (PLEG): container finished" podID="e8a4ffdf-3cc8-491c-8795-5226996342cc" containerID="d12fdec6bb2fe1d1fb3852c6738cebb28432f42eeff19e89964818907186d5a1" exitCode=137 Feb 27 17:52:45 crc kubenswrapper[4830]: I0227 17:52:45.594551 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-684cf744b5-pzh2b" 
event={"ID":"e8a4ffdf-3cc8-491c-8795-5226996342cc","Type":"ContainerDied","Data":"d12fdec6bb2fe1d1fb3852c6738cebb28432f42eeff19e89964818907186d5a1"} Feb 27 17:52:45 crc kubenswrapper[4830]: I0227 17:52:45.767657 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-684cf744b5-pzh2b" Feb 27 17:52:45 crc kubenswrapper[4830]: I0227 17:52:45.965310 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2zpg2\" (UniqueName: \"kubernetes.io/projected/e8a4ffdf-3cc8-491c-8795-5226996342cc-kube-api-access-2zpg2\") pod \"e8a4ffdf-3cc8-491c-8795-5226996342cc\" (UID: \"e8a4ffdf-3cc8-491c-8795-5226996342cc\") " Feb 27 17:52:45 crc kubenswrapper[4830]: I0227 17:52:45.965382 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e8a4ffdf-3cc8-491c-8795-5226996342cc-config-data\") pod \"e8a4ffdf-3cc8-491c-8795-5226996342cc\" (UID: \"e8a4ffdf-3cc8-491c-8795-5226996342cc\") " Feb 27 17:52:45 crc kubenswrapper[4830]: I0227 17:52:45.965487 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e8a4ffdf-3cc8-491c-8795-5226996342cc-horizon-secret-key\") pod \"e8a4ffdf-3cc8-491c-8795-5226996342cc\" (UID: \"e8a4ffdf-3cc8-491c-8795-5226996342cc\") " Feb 27 17:52:45 crc kubenswrapper[4830]: I0227 17:52:45.965590 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e8a4ffdf-3cc8-491c-8795-5226996342cc-logs\") pod \"e8a4ffdf-3cc8-491c-8795-5226996342cc\" (UID: \"e8a4ffdf-3cc8-491c-8795-5226996342cc\") " Feb 27 17:52:45 crc kubenswrapper[4830]: I0227 17:52:45.965795 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e8a4ffdf-3cc8-491c-8795-5226996342cc-scripts\") pod 
\"e8a4ffdf-3cc8-491c-8795-5226996342cc\" (UID: \"e8a4ffdf-3cc8-491c-8795-5226996342cc\") " Feb 27 17:52:45 crc kubenswrapper[4830]: I0227 17:52:45.968580 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e8a4ffdf-3cc8-491c-8795-5226996342cc-logs" (OuterVolumeSpecName: "logs") pod "e8a4ffdf-3cc8-491c-8795-5226996342cc" (UID: "e8a4ffdf-3cc8-491c-8795-5226996342cc"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 17:52:45 crc kubenswrapper[4830]: I0227 17:52:45.988706 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8a4ffdf-3cc8-491c-8795-5226996342cc-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "e8a4ffdf-3cc8-491c-8795-5226996342cc" (UID: "e8a4ffdf-3cc8-491c-8795-5226996342cc"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:52:45 crc kubenswrapper[4830]: I0227 17:52:45.989356 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8a4ffdf-3cc8-491c-8795-5226996342cc-kube-api-access-2zpg2" (OuterVolumeSpecName: "kube-api-access-2zpg2") pod "e8a4ffdf-3cc8-491c-8795-5226996342cc" (UID: "e8a4ffdf-3cc8-491c-8795-5226996342cc"). InnerVolumeSpecName "kube-api-access-2zpg2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:52:46 crc kubenswrapper[4830]: I0227 17:52:46.011769 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8a4ffdf-3cc8-491c-8795-5226996342cc-config-data" (OuterVolumeSpecName: "config-data") pod "e8a4ffdf-3cc8-491c-8795-5226996342cc" (UID: "e8a4ffdf-3cc8-491c-8795-5226996342cc"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:52:46 crc kubenswrapper[4830]: I0227 17:52:46.026632 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8a4ffdf-3cc8-491c-8795-5226996342cc-scripts" (OuterVolumeSpecName: "scripts") pod "e8a4ffdf-3cc8-491c-8795-5226996342cc" (UID: "e8a4ffdf-3cc8-491c-8795-5226996342cc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:52:46 crc kubenswrapper[4830]: I0227 17:52:46.068999 4830 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e8a4ffdf-3cc8-491c-8795-5226996342cc-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 17:52:46 crc kubenswrapper[4830]: I0227 17:52:46.069029 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2zpg2\" (UniqueName: \"kubernetes.io/projected/e8a4ffdf-3cc8-491c-8795-5226996342cc-kube-api-access-2zpg2\") on node \"crc\" DevicePath \"\"" Feb 27 17:52:46 crc kubenswrapper[4830]: I0227 17:52:46.069040 4830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e8a4ffdf-3cc8-491c-8795-5226996342cc-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 17:52:46 crc kubenswrapper[4830]: I0227 17:52:46.069051 4830 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e8a4ffdf-3cc8-491c-8795-5226996342cc-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 27 17:52:46 crc kubenswrapper[4830]: I0227 17:52:46.069062 4830 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e8a4ffdf-3cc8-491c-8795-5226996342cc-logs\") on node \"crc\" DevicePath \"\"" Feb 27 17:52:46 crc kubenswrapper[4830]: I0227 17:52:46.614687 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-684cf744b5-pzh2b" 
event={"ID":"e8a4ffdf-3cc8-491c-8795-5226996342cc","Type":"ContainerDied","Data":"317083791c03c034064a4b1b8c072335d11e7fcb1a0611584346faf7884e6b6a"} Feb 27 17:52:46 crc kubenswrapper[4830]: I0227 17:52:46.614766 4830 scope.go:117] "RemoveContainer" containerID="778fde59b569a28b8849d4c16df30e107135a979d8f9f7724ff452e47b32a740" Feb 27 17:52:46 crc kubenswrapper[4830]: I0227 17:52:46.615012 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-684cf744b5-pzh2b" Feb 27 17:52:46 crc kubenswrapper[4830]: I0227 17:52:46.677353 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-684cf744b5-pzh2b"] Feb 27 17:52:46 crc kubenswrapper[4830]: I0227 17:52:46.685563 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-684cf744b5-pzh2b"] Feb 27 17:52:46 crc kubenswrapper[4830]: I0227 17:52:46.776661 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8a4ffdf-3cc8-491c-8795-5226996342cc" path="/var/lib/kubelet/pods/e8a4ffdf-3cc8-491c-8795-5226996342cc/volumes" Feb 27 17:52:46 crc kubenswrapper[4830]: I0227 17:52:46.822756 4830 scope.go:117] "RemoveContainer" containerID="d12fdec6bb2fe1d1fb3852c6738cebb28432f42eeff19e89964818907186d5a1" Feb 27 17:52:48 crc kubenswrapper[4830]: I0227 17:52:48.050295 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-7npc7"] Feb 27 17:52:48 crc kubenswrapper[4830]: I0227 17:52:48.058480 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-dzdtc"] Feb 27 17:52:48 crc kubenswrapper[4830]: I0227 17:52:48.065458 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0173-account-create-update-vcvw8"] Feb 27 17:52:48 crc kubenswrapper[4830]: I0227 17:52:48.076079 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-dzdtc"] Feb 27 17:52:48 crc kubenswrapper[4830]: I0227 17:52:48.082813 4830 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-7npc7"] Feb 27 17:52:48 crc kubenswrapper[4830]: I0227 17:52:48.100546 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0173-account-create-update-vcvw8"] Feb 27 17:52:48 crc kubenswrapper[4830]: I0227 17:52:48.783535 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1476f120-cb3a-4ddb-8876-14c9cd912d49" path="/var/lib/kubelet/pods/1476f120-cb3a-4ddb-8876-14c9cd912d49/volumes" Feb 27 17:52:48 crc kubenswrapper[4830]: I0227 17:52:48.785405 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efe5a2c2-2f81-419f-ba45-287441964844" path="/var/lib/kubelet/pods/efe5a2c2-2f81-419f-ba45-287441964844/volumes" Feb 27 17:52:48 crc kubenswrapper[4830]: I0227 17:52:48.786452 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="faee69e3-9f85-4d66-91c8-76e6888f678c" path="/var/lib/kubelet/pods/faee69e3-9f85-4d66-91c8-76e6888f678c/volumes" Feb 27 17:52:49 crc kubenswrapper[4830]: I0227 17:52:49.059191 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-99dgz"] Feb 27 17:52:49 crc kubenswrapper[4830]: I0227 17:52:49.074020 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-43a2-account-create-update-wd25q"] Feb 27 17:52:49 crc kubenswrapper[4830]: I0227 17:52:49.084168 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-411a-account-create-update-qrfdh"] Feb 27 17:52:49 crc kubenswrapper[4830]: I0227 17:52:49.092629 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-99dgz"] Feb 27 17:52:49 crc kubenswrapper[4830]: I0227 17:52:49.101966 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-43a2-account-create-update-wd25q"] Feb 27 17:52:49 crc kubenswrapper[4830]: I0227 17:52:49.107762 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/nova-cell1-411a-account-create-update-qrfdh"] Feb 27 17:52:49 crc kubenswrapper[4830]: E0227 17:52:49.767545 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gbcl6" podUID="90e915d6-d74a-4f5b-a8da-8f0f2acdda48" Feb 27 17:52:50 crc kubenswrapper[4830]: I0227 17:52:50.775932 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d43b6d8-d47a-4b6e-8dbb-27a222cd971f" path="/var/lib/kubelet/pods/3d43b6d8-d47a-4b6e-8dbb-27a222cd971f/volumes" Feb 27 17:52:50 crc kubenswrapper[4830]: I0227 17:52:50.777952 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="458a8af3-6366-427a-8641-9b5014271de7" path="/var/lib/kubelet/pods/458a8af3-6366-427a-8641-9b5014271de7/volumes" Feb 27 17:52:50 crc kubenswrapper[4830]: I0227 17:52:50.779470 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="54fa8c61-cab3-4696-93d5-32120c184f0b" path="/var/lib/kubelet/pods/54fa8c61-cab3-4696-93d5-32120c184f0b/volumes" Feb 27 17:52:53 crc kubenswrapper[4830]: E0227 17:52:53.766139 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-8q6c2" podUID="bab9b8c9-003b-4139-b9d5-2302e4773442" Feb 27 17:52:54 crc kubenswrapper[4830]: I0227 17:52:54.136607 4830 scope.go:117] "RemoveContainer" containerID="b0bf7173c82f7f2d011318a3c91727ae7ba189f22e558e4935be7ddca99b9944" Feb 27 17:52:54 crc kubenswrapper[4830]: I0227 17:52:54.202156 4830 scope.go:117] "RemoveContainer" containerID="a80924d0975c926796267ebea18562103390ff8a948cc38ff6e01a9908d57e50" Feb 27 17:52:54 crc kubenswrapper[4830]: I0227 
17:52:54.250526 4830 scope.go:117] "RemoveContainer" containerID="00ce924c2d7d1449257642d31c8cfda7074d454ea033bd9e486dc28819419487" Feb 27 17:52:54 crc kubenswrapper[4830]: I0227 17:52:54.287725 4830 scope.go:117] "RemoveContainer" containerID="074b328a067e7c0b45867468df72d820653ebaa0fd03f032bf1952c6e9c5e5b7" Feb 27 17:52:54 crc kubenswrapper[4830]: I0227 17:52:54.324933 4830 scope.go:117] "RemoveContainer" containerID="33efaa3263cf90f03be2b5d8ffc0d24676f1e485d67916311511375e814cee21" Feb 27 17:52:54 crc kubenswrapper[4830]: I0227 17:52:54.373318 4830 scope.go:117] "RemoveContainer" containerID="e14be69ff82f403db9606cf6289c49390341a1eaafabf98bc23c6d11df3670b3" Feb 27 17:52:54 crc kubenswrapper[4830]: I0227 17:52:54.402452 4830 scope.go:117] "RemoveContainer" containerID="9320716aa73ae27cd08d34aaf0c214120b64afcd8ea2f3012ac92e711ce2a3a6" Feb 27 17:53:03 crc kubenswrapper[4830]: I0227 17:53:03.047740 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-6vf7l"] Feb 27 17:53:03 crc kubenswrapper[4830]: I0227 17:53:03.059063 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-6vf7l"] Feb 27 17:53:03 crc kubenswrapper[4830]: I0227 17:53:03.160696 4830 patch_prober.go:28] interesting pod/machine-config-daemon-2tv5v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 17:53:03 crc kubenswrapper[4830]: I0227 17:53:03.161078 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 17:53:04 crc kubenswrapper[4830]: E0227 17:53:04.779388 
4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gbcl6" podUID="90e915d6-d74a-4f5b-a8da-8f0f2acdda48" Feb 27 17:53:04 crc kubenswrapper[4830]: I0227 17:53:04.782849 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f727246-5bd6-417b-b56f-d9c8913ec2c7" path="/var/lib/kubelet/pods/7f727246-5bd6-417b-b56f-d9c8913ec2c7/volumes" Feb 27 17:53:08 crc kubenswrapper[4830]: E0227 17:53:08.413935 4830 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-marketplace-index@sha256=e848a00af7690cfa41500b98e0e7a0b9738ce0af7b6b4fee3ea20e0838523c30/signature-2: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 27 17:53:08 crc kubenswrapper[4830]: E0227 17:53:08.414824 4830 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z2hnz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-8q6c2_openshift-marketplace(bab9b8c9-003b-4139-b9d5-2302e4773442): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-marketplace-index@sha256=e848a00af7690cfa41500b98e0e7a0b9738ce0af7b6b4fee3ea20e0838523c30/signature-2: status 500 (Internal Server Error)" logger="UnhandledError" Feb 27 17:53:08 crc kubenswrapper[4830]: E0227 17:53:08.416014 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: 
reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-marketplace-index@sha256=e848a00af7690cfa41500b98e0e7a0b9738ce0af7b6b4fee3ea20e0838523c30/signature-2: status 500 (Internal Server Error)\"" pod="openshift-marketplace/redhat-marketplace-8q6c2" podUID="bab9b8c9-003b-4139-b9d5-2302e4773442" Feb 27 17:53:16 crc kubenswrapper[4830]: E0227 17:53:16.768091 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gbcl6" podUID="90e915d6-d74a-4f5b-a8da-8f0f2acdda48" Feb 27 17:53:21 crc kubenswrapper[4830]: I0227 17:53:21.068002 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-kq4fc"] Feb 27 17:53:21 crc kubenswrapper[4830]: I0227 17:53:21.086340 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-kq4fc"] Feb 27 17:53:21 crc kubenswrapper[4830]: I0227 17:53:21.552382 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0882lj7"] Feb 27 17:53:21 crc kubenswrapper[4830]: E0227 17:53:21.552821 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8a4ffdf-3cc8-491c-8795-5226996342cc" containerName="horizon" Feb 27 17:53:21 crc kubenswrapper[4830]: I0227 17:53:21.552840 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8a4ffdf-3cc8-491c-8795-5226996342cc" containerName="horizon" Feb 27 17:53:21 crc kubenswrapper[4830]: E0227 17:53:21.552856 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8a4ffdf-3cc8-491c-8795-5226996342cc" containerName="horizon-log" Feb 27 17:53:21 crc kubenswrapper[4830]: I0227 17:53:21.552864 4830 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="e8a4ffdf-3cc8-491c-8795-5226996342cc" containerName="horizon-log" Feb 27 17:53:21 crc kubenswrapper[4830]: I0227 17:53:21.553160 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8a4ffdf-3cc8-491c-8795-5226996342cc" containerName="horizon-log" Feb 27 17:53:21 crc kubenswrapper[4830]: I0227 17:53:21.553200 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8a4ffdf-3cc8-491c-8795-5226996342cc" containerName="horizon" Feb 27 17:53:21 crc kubenswrapper[4830]: I0227 17:53:21.554856 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0882lj7" Feb 27 17:53:21 crc kubenswrapper[4830]: I0227 17:53:21.557096 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 27 17:53:21 crc kubenswrapper[4830]: I0227 17:53:21.624586 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0882lj7"] Feb 27 17:53:21 crc kubenswrapper[4830]: I0227 17:53:21.671231 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e4939dcd-a003-4c9f-8883-1f8361eee450-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0882lj7\" (UID: \"e4939dcd-a003-4c9f-8883-1f8361eee450\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0882lj7" Feb 27 17:53:21 crc kubenswrapper[4830]: I0227 17:53:21.671311 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sg2fw\" (UniqueName: \"kubernetes.io/projected/e4939dcd-a003-4c9f-8883-1f8361eee450-kube-api-access-sg2fw\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0882lj7\" (UID: \"e4939dcd-a003-4c9f-8883-1f8361eee450\") " 
pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0882lj7" Feb 27 17:53:21 crc kubenswrapper[4830]: I0227 17:53:21.671387 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e4939dcd-a003-4c9f-8883-1f8361eee450-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0882lj7\" (UID: \"e4939dcd-a003-4c9f-8883-1f8361eee450\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0882lj7" Feb 27 17:53:21 crc kubenswrapper[4830]: I0227 17:53:21.772906 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e4939dcd-a003-4c9f-8883-1f8361eee450-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0882lj7\" (UID: \"e4939dcd-a003-4c9f-8883-1f8361eee450\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0882lj7" Feb 27 17:53:21 crc kubenswrapper[4830]: I0227 17:53:21.773001 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sg2fw\" (UniqueName: \"kubernetes.io/projected/e4939dcd-a003-4c9f-8883-1f8361eee450-kube-api-access-sg2fw\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0882lj7\" (UID: \"e4939dcd-a003-4c9f-8883-1f8361eee450\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0882lj7" Feb 27 17:53:21 crc kubenswrapper[4830]: I0227 17:53:21.773081 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e4939dcd-a003-4c9f-8883-1f8361eee450-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0882lj7\" (UID: \"e4939dcd-a003-4c9f-8883-1f8361eee450\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0882lj7" Feb 27 17:53:21 crc kubenswrapper[4830]: 
I0227 17:53:21.773644 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e4939dcd-a003-4c9f-8883-1f8361eee450-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0882lj7\" (UID: \"e4939dcd-a003-4c9f-8883-1f8361eee450\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0882lj7" Feb 27 17:53:21 crc kubenswrapper[4830]: I0227 17:53:21.774926 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e4939dcd-a003-4c9f-8883-1f8361eee450-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0882lj7\" (UID: \"e4939dcd-a003-4c9f-8883-1f8361eee450\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0882lj7" Feb 27 17:53:21 crc kubenswrapper[4830]: I0227 17:53:21.802994 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sg2fw\" (UniqueName: \"kubernetes.io/projected/e4939dcd-a003-4c9f-8883-1f8361eee450-kube-api-access-sg2fw\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0882lj7\" (UID: \"e4939dcd-a003-4c9f-8883-1f8361eee450\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0882lj7" Feb 27 17:53:21 crc kubenswrapper[4830]: I0227 17:53:21.914007 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0882lj7" Feb 27 17:53:22 crc kubenswrapper[4830]: I0227 17:53:22.421928 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0882lj7"] Feb 27 17:53:22 crc kubenswrapper[4830]: I0227 17:53:22.784211 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ffc038e5-1cf6-4e18-b5a8-c9546ec8a16c" path="/var/lib/kubelet/pods/ffc038e5-1cf6-4e18-b5a8-c9546ec8a16c/volumes" Feb 27 17:53:23 crc kubenswrapper[4830]: I0227 17:53:23.042847 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-ndmjp"] Feb 27 17:53:23 crc kubenswrapper[4830]: I0227 17:53:23.059229 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-ndmjp"] Feb 27 17:53:23 crc kubenswrapper[4830]: I0227 17:53:23.071012 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0882lj7" event={"ID":"e4939dcd-a003-4c9f-8883-1f8361eee450","Type":"ContainerStarted","Data":"680a85af437218b31e84d2113a70c3164b22dbd5046fbcb21a26dc6e92c9712f"} Feb 27 17:53:23 crc kubenswrapper[4830]: I0227 17:53:23.071054 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0882lj7" event={"ID":"e4939dcd-a003-4c9f-8883-1f8361eee450","Type":"ContainerStarted","Data":"9075ac124afda4ac983ab9305635d32e25eed846ac1d39aaa014c427fcfa93d1"} Feb 27 17:53:23 crc kubenswrapper[4830]: E0227 17:53:23.765719 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-8q6c2" 
podUID="bab9b8c9-003b-4139-b9d5-2302e4773442" Feb 27 17:53:24 crc kubenswrapper[4830]: I0227 17:53:24.084280 4830 generic.go:334] "Generic (PLEG): container finished" podID="e4939dcd-a003-4c9f-8883-1f8361eee450" containerID="680a85af437218b31e84d2113a70c3164b22dbd5046fbcb21a26dc6e92c9712f" exitCode=0 Feb 27 17:53:24 crc kubenswrapper[4830]: I0227 17:53:24.084370 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0882lj7" event={"ID":"e4939dcd-a003-4c9f-8883-1f8361eee450","Type":"ContainerDied","Data":"680a85af437218b31e84d2113a70c3164b22dbd5046fbcb21a26dc6e92c9712f"} Feb 27 17:53:24 crc kubenswrapper[4830]: I0227 17:53:24.808932 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61ef5006-416b-43e0-a9f1-7b69382403be" path="/var/lib/kubelet/pods/61ef5006-416b-43e0-a9f1-7b69382403be/volumes" Feb 27 17:53:31 crc kubenswrapper[4830]: E0227 17:53:31.768312 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gbcl6" podUID="90e915d6-d74a-4f5b-a8da-8f0f2acdda48" Feb 27 17:53:33 crc kubenswrapper[4830]: I0227 17:53:33.160628 4830 patch_prober.go:28] interesting pod/machine-config-daemon-2tv5v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 17:53:33 crc kubenswrapper[4830]: I0227 17:53:33.161124 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Feb 27 17:53:35 crc kubenswrapper[4830]: E0227 17:53:35.766920 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-8q6c2" podUID="bab9b8c9-003b-4139-b9d5-2302e4773442" Feb 27 17:53:36 crc kubenswrapper[4830]: I0227 17:53:36.047205 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-c4dnj"] Feb 27 17:53:36 crc kubenswrapper[4830]: I0227 17:53:36.064765 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-c4dnj"] Feb 27 17:53:36 crc kubenswrapper[4830]: I0227 17:53:36.785164 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="683c8608-155c-4dfc-89ca-0710ffbb8ea6" path="/var/lib/kubelet/pods/683c8608-155c-4dfc-89ca-0710ffbb8ea6/volumes" Feb 27 17:53:43 crc kubenswrapper[4830]: E0227 17:53:43.768415 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gbcl6" podUID="90e915d6-d74a-4f5b-a8da-8f0f2acdda48" Feb 27 17:53:48 crc kubenswrapper[4830]: E0227 17:53:48.768927 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-8q6c2" podUID="bab9b8c9-003b-4139-b9d5-2302e4773442" Feb 27 17:53:54 crc kubenswrapper[4830]: I0227 17:53:54.606363 4830 scope.go:117] "RemoveContainer" containerID="776de742b2e8d0a4a3e92855d0ca447fa7f9d8077499d1940b60d39f08e69d48" Feb 27 17:53:54 crc 
kubenswrapper[4830]: I0227 17:53:54.686089 4830 scope.go:117] "RemoveContainer" containerID="f5a2f5698acb357deb9a63bacaefa3d2174568baf799501ee12a8ec1846a0f2d" Feb 27 17:53:54 crc kubenswrapper[4830]: I0227 17:53:54.761218 4830 scope.go:117] "RemoveContainer" containerID="abbdd76657daf1d0b034f8f7bf5e22ad0804162eb5fc26addce2f418dc7fe2ec" Feb 27 17:53:54 crc kubenswrapper[4830]: I0227 17:53:54.842399 4830 scope.go:117] "RemoveContainer" containerID="a9bf9967bbceebd84b8dc260a334a1841094db77babbd526ae46b5b15bdef700" Feb 27 17:53:54 crc kubenswrapper[4830]: I0227 17:53:54.896222 4830 scope.go:117] "RemoveContainer" containerID="81c39bf03dba3f79e5092a516a3901e60d5f58387766449ed4e8712344ebc8c1" Feb 27 17:53:58 crc kubenswrapper[4830]: E0227 17:53:58.767492 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gbcl6" podUID="90e915d6-d74a-4f5b-a8da-8f0f2acdda48" Feb 27 17:53:59 crc kubenswrapper[4830]: E0227 17:53:59.766057 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-8q6c2" podUID="bab9b8c9-003b-4139-b9d5-2302e4773442" Feb 27 17:54:00 crc kubenswrapper[4830]: I0227 17:54:00.182121 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536914-v4qf9"] Feb 27 17:54:00 crc kubenswrapper[4830]: I0227 17:54:00.184490 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536914-v4qf9" Feb 27 17:54:00 crc kubenswrapper[4830]: I0227 17:54:00.187308 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 17:54:00 crc kubenswrapper[4830]: I0227 17:54:00.187665 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 17:54:00 crc kubenswrapper[4830]: I0227 17:54:00.188167 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lmbtm" Feb 27 17:54:00 crc kubenswrapper[4830]: I0227 17:54:00.193141 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536914-v4qf9"] Feb 27 17:54:00 crc kubenswrapper[4830]: I0227 17:54:00.342583 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jc9rl\" (UniqueName: \"kubernetes.io/projected/dcab97ec-480d-4b72-a183-cfebb2ceeec0-kube-api-access-jc9rl\") pod \"auto-csr-approver-29536914-v4qf9\" (UID: \"dcab97ec-480d-4b72-a183-cfebb2ceeec0\") " pod="openshift-infra/auto-csr-approver-29536914-v4qf9" Feb 27 17:54:00 crc kubenswrapper[4830]: I0227 17:54:00.444965 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jc9rl\" (UniqueName: \"kubernetes.io/projected/dcab97ec-480d-4b72-a183-cfebb2ceeec0-kube-api-access-jc9rl\") pod \"auto-csr-approver-29536914-v4qf9\" (UID: \"dcab97ec-480d-4b72-a183-cfebb2ceeec0\") " pod="openshift-infra/auto-csr-approver-29536914-v4qf9" Feb 27 17:54:00 crc kubenswrapper[4830]: I0227 17:54:00.476486 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jc9rl\" (UniqueName: \"kubernetes.io/projected/dcab97ec-480d-4b72-a183-cfebb2ceeec0-kube-api-access-jc9rl\") pod \"auto-csr-approver-29536914-v4qf9\" (UID: \"dcab97ec-480d-4b72-a183-cfebb2ceeec0\") " 
pod="openshift-infra/auto-csr-approver-29536914-v4qf9" Feb 27 17:54:00 crc kubenswrapper[4830]: I0227 17:54:00.542916 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536914-v4qf9" Feb 27 17:54:01 crc kubenswrapper[4830]: I0227 17:54:01.036135 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536914-v4qf9"] Feb 27 17:54:01 crc kubenswrapper[4830]: I0227 17:54:01.643553 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536914-v4qf9" event={"ID":"dcab97ec-480d-4b72-a183-cfebb2ceeec0","Type":"ContainerStarted","Data":"e894a47bd14796ead8e66db74eae12bdc3d00bbface508b934150a77eb1134e3"} Feb 27 17:54:02 crc kubenswrapper[4830]: I0227 17:54:02.654281 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536914-v4qf9" event={"ID":"dcab97ec-480d-4b72-a183-cfebb2ceeec0","Type":"ContainerStarted","Data":"f201355b4dfea4e7690badf25e5b965b799f90fafedbd2c49142ca103aaea93b"} Feb 27 17:54:02 crc kubenswrapper[4830]: I0227 17:54:02.682365 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29536914-v4qf9" podStartSLOduration=1.719016829 podStartE2EDuration="2.682336139s" podCreationTimestamp="2026-02-27 17:54:00 +0000 UTC" firstStartedPulling="2026-02-27 17:54:01.048342476 +0000 UTC m=+6437.137614939" lastFinishedPulling="2026-02-27 17:54:02.011661746 +0000 UTC m=+6438.100934249" observedRunningTime="2026-02-27 17:54:02.669384299 +0000 UTC m=+6438.758656762" watchObservedRunningTime="2026-02-27 17:54:02.682336139 +0000 UTC m=+6438.771608632" Feb 27 17:54:03 crc kubenswrapper[4830]: I0227 17:54:03.159906 4830 patch_prober.go:28] interesting pod/machine-config-daemon-2tv5v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial 
tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 17:54:03 crc kubenswrapper[4830]: I0227 17:54:03.160034 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 17:54:03 crc kubenswrapper[4830]: I0227 17:54:03.160100 4830 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" Feb 27 17:54:03 crc kubenswrapper[4830]: I0227 17:54:03.161033 4830 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9c7600b1d02b30467a3c9249e6962a5faed8288686d8237218305c7cc4357171"} pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 27 17:54:03 crc kubenswrapper[4830]: I0227 17:54:03.161140 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" containerName="machine-config-daemon" containerID="cri-o://9c7600b1d02b30467a3c9249e6962a5faed8288686d8237218305c7cc4357171" gracePeriod=600 Feb 27 17:54:03 crc kubenswrapper[4830]: E0227 17:54:03.291918 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 17:54:03 
crc kubenswrapper[4830]: I0227 17:54:03.669441 4830 generic.go:334] "Generic (PLEG): container finished" podID="dcab97ec-480d-4b72-a183-cfebb2ceeec0" containerID="f201355b4dfea4e7690badf25e5b965b799f90fafedbd2c49142ca103aaea93b" exitCode=0 Feb 27 17:54:03 crc kubenswrapper[4830]: I0227 17:54:03.669583 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536914-v4qf9" event={"ID":"dcab97ec-480d-4b72-a183-cfebb2ceeec0","Type":"ContainerDied","Data":"f201355b4dfea4e7690badf25e5b965b799f90fafedbd2c49142ca103aaea93b"} Feb 27 17:54:03 crc kubenswrapper[4830]: I0227 17:54:03.673887 4830 generic.go:334] "Generic (PLEG): container finished" podID="00d6b7ce-4757-4275-8345-60c1b546ce25" containerID="9c7600b1d02b30467a3c9249e6962a5faed8288686d8237218305c7cc4357171" exitCode=0 Feb 27 17:54:03 crc kubenswrapper[4830]: I0227 17:54:03.673942 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" event={"ID":"00d6b7ce-4757-4275-8345-60c1b546ce25","Type":"ContainerDied","Data":"9c7600b1d02b30467a3c9249e6962a5faed8288686d8237218305c7cc4357171"} Feb 27 17:54:03 crc kubenswrapper[4830]: I0227 17:54:03.674034 4830 scope.go:117] "RemoveContainer" containerID="1edc2346b55575fd27d28000f5321fa0e167abd0b9733373b1ab9e03d2bd8d16" Feb 27 17:54:03 crc kubenswrapper[4830]: I0227 17:54:03.674695 4830 scope.go:117] "RemoveContainer" containerID="9c7600b1d02b30467a3c9249e6962a5faed8288686d8237218305c7cc4357171" Feb 27 17:54:03 crc kubenswrapper[4830]: E0227 17:54:03.675290 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" 
podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 17:54:05 crc kubenswrapper[4830]: I0227 17:54:05.120615 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536914-v4qf9" Feb 27 17:54:05 crc kubenswrapper[4830]: I0227 17:54:05.292596 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jc9rl\" (UniqueName: \"kubernetes.io/projected/dcab97ec-480d-4b72-a183-cfebb2ceeec0-kube-api-access-jc9rl\") pod \"dcab97ec-480d-4b72-a183-cfebb2ceeec0\" (UID: \"dcab97ec-480d-4b72-a183-cfebb2ceeec0\") " Feb 27 17:54:05 crc kubenswrapper[4830]: I0227 17:54:05.314255 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcab97ec-480d-4b72-a183-cfebb2ceeec0-kube-api-access-jc9rl" (OuterVolumeSpecName: "kube-api-access-jc9rl") pod "dcab97ec-480d-4b72-a183-cfebb2ceeec0" (UID: "dcab97ec-480d-4b72-a183-cfebb2ceeec0"). InnerVolumeSpecName "kube-api-access-jc9rl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:54:05 crc kubenswrapper[4830]: I0227 17:54:05.396159 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jc9rl\" (UniqueName: \"kubernetes.io/projected/dcab97ec-480d-4b72-a183-cfebb2ceeec0-kube-api-access-jc9rl\") on node \"crc\" DevicePath \"\"" Feb 27 17:54:05 crc kubenswrapper[4830]: I0227 17:54:05.727571 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536914-v4qf9" event={"ID":"dcab97ec-480d-4b72-a183-cfebb2ceeec0","Type":"ContainerDied","Data":"e894a47bd14796ead8e66db74eae12bdc3d00bbface508b934150a77eb1134e3"} Feb 27 17:54:05 crc kubenswrapper[4830]: I0227 17:54:05.727634 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e894a47bd14796ead8e66db74eae12bdc3d00bbface508b934150a77eb1134e3" Feb 27 17:54:05 crc kubenswrapper[4830]: I0227 17:54:05.727729 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536914-v4qf9" Feb 27 17:54:05 crc kubenswrapper[4830]: I0227 17:54:05.791860 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536908-42g5s"] Feb 27 17:54:05 crc kubenswrapper[4830]: I0227 17:54:05.807639 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536908-42g5s"] Feb 27 17:54:06 crc kubenswrapper[4830]: I0227 17:54:06.782207 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd747a6d-ccf7-41bd-b8d8-b7480d6d950e" path="/var/lib/kubelet/pods/dd747a6d-ccf7-41bd-b8d8-b7480d6d950e/volumes" Feb 27 17:54:09 crc kubenswrapper[4830]: E0227 17:54:09.767272 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" 
pod="openshift-marketplace/redhat-operators-gbcl6" podUID="90e915d6-d74a-4f5b-a8da-8f0f2acdda48" Feb 27 17:54:11 crc kubenswrapper[4830]: E0227 17:54:11.766557 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-8q6c2" podUID="bab9b8c9-003b-4139-b9d5-2302e4773442" Feb 27 17:54:15 crc kubenswrapper[4830]: I0227 17:54:15.763678 4830 scope.go:117] "RemoveContainer" containerID="9c7600b1d02b30467a3c9249e6962a5faed8288686d8237218305c7cc4357171" Feb 27 17:54:15 crc kubenswrapper[4830]: E0227 17:54:15.765306 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 17:54:19 crc kubenswrapper[4830]: I0227 17:54:19.063144 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-8f2c-account-create-update-dxpp6"] Feb 27 17:54:19 crc kubenswrapper[4830]: I0227 17:54:19.084434 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-8f2c-account-create-update-dxpp6"] Feb 27 17:54:20 crc kubenswrapper[4830]: I0227 17:54:20.068820 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-44qds"] Feb 27 17:54:20 crc kubenswrapper[4830]: I0227 17:54:20.081337 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-44qds"] Feb 27 17:54:20 crc kubenswrapper[4830]: E0227 17:54:20.767701 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gbcl6" podUID="90e915d6-d74a-4f5b-a8da-8f0f2acdda48" Feb 27 17:54:20 crc kubenswrapper[4830]: I0227 17:54:20.782906 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1804cc70-6b63-4738-8dd9-19129e207c08" path="/var/lib/kubelet/pods/1804cc70-6b63-4738-8dd9-19129e207c08/volumes" Feb 27 17:54:20 crc kubenswrapper[4830]: I0227 17:54:20.784142 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f413ccbf-4c04-4ac8-9698-421fee71e5ca" path="/var/lib/kubelet/pods/f413ccbf-4c04-4ac8-9698-421fee71e5ca/volumes" Feb 27 17:54:24 crc kubenswrapper[4830]: I0227 17:54:24.994096 4830 generic.go:334] "Generic (PLEG): container finished" podID="e4939dcd-a003-4c9f-8883-1f8361eee450" containerID="bfeb43f79ec95370ab42d089ae2e411a5b0cd0d64496c261da69f824c04000ff" exitCode=0 Feb 27 17:54:24 crc kubenswrapper[4830]: I0227 17:54:24.995048 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0882lj7" event={"ID":"e4939dcd-a003-4c9f-8883-1f8361eee450","Type":"ContainerDied","Data":"bfeb43f79ec95370ab42d089ae2e411a5b0cd0d64496c261da69f824c04000ff"} Feb 27 17:54:26 crc kubenswrapper[4830]: I0227 17:54:26.011940 4830 generic.go:334] "Generic (PLEG): container finished" podID="e4939dcd-a003-4c9f-8883-1f8361eee450" containerID="740bf318f23ccdba31908e7ffa72d666f8c42a7381fa703df1c78c1d86477c2a" exitCode=0 Feb 27 17:54:26 crc kubenswrapper[4830]: I0227 17:54:26.012253 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0882lj7" event={"ID":"e4939dcd-a003-4c9f-8883-1f8361eee450","Type":"ContainerDied","Data":"740bf318f23ccdba31908e7ffa72d666f8c42a7381fa703df1c78c1d86477c2a"} Feb 27 17:54:26 crc 
kubenswrapper[4830]: I0227 17:54:26.038487 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-m9wvm"] Feb 27 17:54:26 crc kubenswrapper[4830]: I0227 17:54:26.051616 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-m9wvm"] Feb 27 17:54:26 crc kubenswrapper[4830]: E0227 17:54:26.766922 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-8q6c2" podUID="bab9b8c9-003b-4139-b9d5-2302e4773442" Feb 27 17:54:26 crc kubenswrapper[4830]: I0227 17:54:26.784849 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff265913-df5e-490b-ba35-98be9b52fdb3" path="/var/lib/kubelet/pods/ff265913-df5e-490b-ba35-98be9b52fdb3/volumes" Feb 27 17:54:27 crc kubenswrapper[4830]: I0227 17:54:27.525251 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0882lj7" Feb 27 17:54:27 crc kubenswrapper[4830]: I0227 17:54:27.611684 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sg2fw\" (UniqueName: \"kubernetes.io/projected/e4939dcd-a003-4c9f-8883-1f8361eee450-kube-api-access-sg2fw\") pod \"e4939dcd-a003-4c9f-8883-1f8361eee450\" (UID: \"e4939dcd-a003-4c9f-8883-1f8361eee450\") " Feb 27 17:54:27 crc kubenswrapper[4830]: I0227 17:54:27.611859 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e4939dcd-a003-4c9f-8883-1f8361eee450-util\") pod \"e4939dcd-a003-4c9f-8883-1f8361eee450\" (UID: \"e4939dcd-a003-4c9f-8883-1f8361eee450\") " Feb 27 17:54:27 crc kubenswrapper[4830]: I0227 17:54:27.612107 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e4939dcd-a003-4c9f-8883-1f8361eee450-bundle\") pod \"e4939dcd-a003-4c9f-8883-1f8361eee450\" (UID: \"e4939dcd-a003-4c9f-8883-1f8361eee450\") " Feb 27 17:54:27 crc kubenswrapper[4830]: I0227 17:54:27.616484 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e4939dcd-a003-4c9f-8883-1f8361eee450-bundle" (OuterVolumeSpecName: "bundle") pod "e4939dcd-a003-4c9f-8883-1f8361eee450" (UID: "e4939dcd-a003-4c9f-8883-1f8361eee450"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 17:54:27 crc kubenswrapper[4830]: I0227 17:54:27.622388 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4939dcd-a003-4c9f-8883-1f8361eee450-kube-api-access-sg2fw" (OuterVolumeSpecName: "kube-api-access-sg2fw") pod "e4939dcd-a003-4c9f-8883-1f8361eee450" (UID: "e4939dcd-a003-4c9f-8883-1f8361eee450"). InnerVolumeSpecName "kube-api-access-sg2fw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:54:27 crc kubenswrapper[4830]: I0227 17:54:27.625599 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e4939dcd-a003-4c9f-8883-1f8361eee450-util" (OuterVolumeSpecName: "util") pod "e4939dcd-a003-4c9f-8883-1f8361eee450" (UID: "e4939dcd-a003-4c9f-8883-1f8361eee450"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 17:54:27 crc kubenswrapper[4830]: I0227 17:54:27.715728 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sg2fw\" (UniqueName: \"kubernetes.io/projected/e4939dcd-a003-4c9f-8883-1f8361eee450-kube-api-access-sg2fw\") on node \"crc\" DevicePath \"\"" Feb 27 17:54:27 crc kubenswrapper[4830]: I0227 17:54:27.715888 4830 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e4939dcd-a003-4c9f-8883-1f8361eee450-util\") on node \"crc\" DevicePath \"\"" Feb 27 17:54:27 crc kubenswrapper[4830]: I0227 17:54:27.715942 4830 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e4939dcd-a003-4c9f-8883-1f8361eee450-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 17:54:28 crc kubenswrapper[4830]: I0227 17:54:28.047388 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0882lj7" event={"ID":"e4939dcd-a003-4c9f-8883-1f8361eee450","Type":"ContainerDied","Data":"9075ac124afda4ac983ab9305635d32e25eed846ac1d39aaa014c427fcfa93d1"} Feb 27 17:54:28 crc kubenswrapper[4830]: I0227 17:54:28.047444 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0882lj7" Feb 27 17:54:28 crc kubenswrapper[4830]: I0227 17:54:28.047467 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9075ac124afda4ac983ab9305635d32e25eed846ac1d39aaa014c427fcfa93d1" Feb 27 17:54:30 crc kubenswrapper[4830]: I0227 17:54:30.762334 4830 scope.go:117] "RemoveContainer" containerID="9c7600b1d02b30467a3c9249e6962a5faed8288686d8237218305c7cc4357171" Feb 27 17:54:30 crc kubenswrapper[4830]: E0227 17:54:30.763163 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 17:54:33 crc kubenswrapper[4830]: E0227 17:54:33.765096 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gbcl6" podUID="90e915d6-d74a-4f5b-a8da-8f0f2acdda48" Feb 27 17:54:34 crc kubenswrapper[4830]: I0227 17:54:34.753209 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-x6smj"] Feb 27 17:54:34 crc kubenswrapper[4830]: E0227 17:54:34.753985 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4939dcd-a003-4c9f-8883-1f8361eee450" containerName="pull" Feb 27 17:54:34 crc kubenswrapper[4830]: I0227 17:54:34.754002 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4939dcd-a003-4c9f-8883-1f8361eee450" containerName="pull" Feb 27 17:54:34 crc kubenswrapper[4830]: E0227 
17:54:34.754041 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4939dcd-a003-4c9f-8883-1f8361eee450" containerName="util" Feb 27 17:54:34 crc kubenswrapper[4830]: I0227 17:54:34.754048 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4939dcd-a003-4c9f-8883-1f8361eee450" containerName="util" Feb 27 17:54:34 crc kubenswrapper[4830]: E0227 17:54:34.754060 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4939dcd-a003-4c9f-8883-1f8361eee450" containerName="extract" Feb 27 17:54:34 crc kubenswrapper[4830]: I0227 17:54:34.754066 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4939dcd-a003-4c9f-8883-1f8361eee450" containerName="extract" Feb 27 17:54:34 crc kubenswrapper[4830]: E0227 17:54:34.754087 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcab97ec-480d-4b72-a183-cfebb2ceeec0" containerName="oc" Feb 27 17:54:34 crc kubenswrapper[4830]: I0227 17:54:34.754093 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcab97ec-480d-4b72-a183-cfebb2ceeec0" containerName="oc" Feb 27 17:54:34 crc kubenswrapper[4830]: I0227 17:54:34.754264 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="dcab97ec-480d-4b72-a183-cfebb2ceeec0" containerName="oc" Feb 27 17:54:34 crc kubenswrapper[4830]: I0227 17:54:34.754301 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4939dcd-a003-4c9f-8883-1f8361eee450" containerName="extract" Feb 27 17:54:34 crc kubenswrapper[4830]: I0227 17:54:34.755051 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-x6smj" Feb 27 17:54:34 crc kubenswrapper[4830]: I0227 17:54:34.756489 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Feb 27 17:54:34 crc kubenswrapper[4830]: I0227 17:54:34.756728 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Feb 27 17:54:34 crc kubenswrapper[4830]: I0227 17:54:34.756863 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-f79tf" Feb 27 17:54:34 crc kubenswrapper[4830]: I0227 17:54:34.801144 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-x6smj"] Feb 27 17:54:34 crc kubenswrapper[4830]: I0227 17:54:34.872940 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-66fc8b6fcc-5nr6x"] Feb 27 17:54:34 crc kubenswrapper[4830]: I0227 17:54:34.874486 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-66fc8b6fcc-5nr6x" Feb 27 17:54:34 crc kubenswrapper[4830]: I0227 17:54:34.876712 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Feb 27 17:54:34 crc kubenswrapper[4830]: I0227 17:54:34.880303 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mq658\" (UniqueName: \"kubernetes.io/projected/fb796dd0-1d3a-4037-a42a-7427293ea799-kube-api-access-mq658\") pod \"obo-prometheus-operator-68bc856cb9-x6smj\" (UID: \"fb796dd0-1d3a-4037-a42a-7427293ea799\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-x6smj" Feb 27 17:54:34 crc kubenswrapper[4830]: I0227 17:54:34.880599 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-g89hx" Feb 27 17:54:34 crc kubenswrapper[4830]: I0227 17:54:34.885834 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-66fc8b6fcc-lvddk"] Feb 27 17:54:34 crc kubenswrapper[4830]: I0227 17:54:34.887387 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-66fc8b6fcc-lvddk" Feb 27 17:54:34 crc kubenswrapper[4830]: I0227 17:54:34.901886 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-66fc8b6fcc-5nr6x"] Feb 27 17:54:34 crc kubenswrapper[4830]: I0227 17:54:34.931637 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-66fc8b6fcc-lvddk"] Feb 27 17:54:34 crc kubenswrapper[4830]: I0227 17:54:34.985209 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/cfe8c971-6fe4-44ae-bea8-d3b6a17821d0-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-66fc8b6fcc-5nr6x\" (UID: \"cfe8c971-6fe4-44ae-bea8-d3b6a17821d0\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-66fc8b6fcc-5nr6x" Feb 27 17:54:34 crc kubenswrapper[4830]: I0227 17:54:34.985289 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/cfe8c971-6fe4-44ae-bea8-d3b6a17821d0-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-66fc8b6fcc-5nr6x\" (UID: \"cfe8c971-6fe4-44ae-bea8-d3b6a17821d0\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-66fc8b6fcc-5nr6x" Feb 27 17:54:34 crc kubenswrapper[4830]: I0227 17:54:34.985360 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2e9c720f-41bf-4770-a857-835cd3bf0cbb-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-66fc8b6fcc-lvddk\" (UID: \"2e9c720f-41bf-4770-a857-835cd3bf0cbb\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-66fc8b6fcc-lvddk" Feb 27 17:54:34 crc kubenswrapper[4830]: I0227 17:54:34.985393 
4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mq658\" (UniqueName: \"kubernetes.io/projected/fb796dd0-1d3a-4037-a42a-7427293ea799-kube-api-access-mq658\") pod \"obo-prometheus-operator-68bc856cb9-x6smj\" (UID: \"fb796dd0-1d3a-4037-a42a-7427293ea799\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-x6smj" Feb 27 17:54:34 crc kubenswrapper[4830]: I0227 17:54:34.985469 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2e9c720f-41bf-4770-a857-835cd3bf0cbb-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-66fc8b6fcc-lvddk\" (UID: \"2e9c720f-41bf-4770-a857-835cd3bf0cbb\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-66fc8b6fcc-lvddk" Feb 27 17:54:35 crc kubenswrapper[4830]: I0227 17:54:35.007839 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mq658\" (UniqueName: \"kubernetes.io/projected/fb796dd0-1d3a-4037-a42a-7427293ea799-kube-api-access-mq658\") pod \"obo-prometheus-operator-68bc856cb9-x6smj\" (UID: \"fb796dd0-1d3a-4037-a42a-7427293ea799\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-x6smj" Feb 27 17:54:35 crc kubenswrapper[4830]: I0227 17:54:35.070507 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-czxql"] Feb 27 17:54:35 crc kubenswrapper[4830]: I0227 17:54:35.072317 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-czxql" Feb 27 17:54:35 crc kubenswrapper[4830]: I0227 17:54:35.076114 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-x6smj" Feb 27 17:54:35 crc kubenswrapper[4830]: I0227 17:54:35.076542 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-bmtp6" Feb 27 17:54:35 crc kubenswrapper[4830]: I0227 17:54:35.079762 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Feb 27 17:54:35 crc kubenswrapper[4830]: I0227 17:54:35.088282 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2e9c720f-41bf-4770-a857-835cd3bf0cbb-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-66fc8b6fcc-lvddk\" (UID: \"2e9c720f-41bf-4770-a857-835cd3bf0cbb\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-66fc8b6fcc-lvddk" Feb 27 17:54:35 crc kubenswrapper[4830]: I0227 17:54:35.088401 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2e9c720f-41bf-4770-a857-835cd3bf0cbb-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-66fc8b6fcc-lvddk\" (UID: \"2e9c720f-41bf-4770-a857-835cd3bf0cbb\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-66fc8b6fcc-lvddk" Feb 27 17:54:35 crc kubenswrapper[4830]: I0227 17:54:35.088473 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/cfe8c971-6fe4-44ae-bea8-d3b6a17821d0-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-66fc8b6fcc-5nr6x\" (UID: \"cfe8c971-6fe4-44ae-bea8-d3b6a17821d0\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-66fc8b6fcc-5nr6x" Feb 27 17:54:35 crc kubenswrapper[4830]: I0227 17:54:35.088534 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" 
(UniqueName: \"kubernetes.io/secret/cfe8c971-6fe4-44ae-bea8-d3b6a17821d0-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-66fc8b6fcc-5nr6x\" (UID: \"cfe8c971-6fe4-44ae-bea8-d3b6a17821d0\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-66fc8b6fcc-5nr6x" Feb 27 17:54:35 crc kubenswrapper[4830]: I0227 17:54:35.108051 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/cfe8c971-6fe4-44ae-bea8-d3b6a17821d0-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-66fc8b6fcc-5nr6x\" (UID: \"cfe8c971-6fe4-44ae-bea8-d3b6a17821d0\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-66fc8b6fcc-5nr6x" Feb 27 17:54:35 crc kubenswrapper[4830]: I0227 17:54:35.110587 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/cfe8c971-6fe4-44ae-bea8-d3b6a17821d0-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-66fc8b6fcc-5nr6x\" (UID: \"cfe8c971-6fe4-44ae-bea8-d3b6a17821d0\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-66fc8b6fcc-5nr6x" Feb 27 17:54:35 crc kubenswrapper[4830]: I0227 17:54:35.111453 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2e9c720f-41bf-4770-a857-835cd3bf0cbb-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-66fc8b6fcc-lvddk\" (UID: \"2e9c720f-41bf-4770-a857-835cd3bf0cbb\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-66fc8b6fcc-lvddk" Feb 27 17:54:35 crc kubenswrapper[4830]: I0227 17:54:35.115296 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2e9c720f-41bf-4770-a857-835cd3bf0cbb-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-66fc8b6fcc-lvddk\" (UID: \"2e9c720f-41bf-4770-a857-835cd3bf0cbb\") " 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-66fc8b6fcc-lvddk" Feb 27 17:54:35 crc kubenswrapper[4830]: I0227 17:54:35.123680 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-czxql"] Feb 27 17:54:35 crc kubenswrapper[4830]: I0227 17:54:35.199713 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-66fc8b6fcc-5nr6x" Feb 27 17:54:35 crc kubenswrapper[4830]: I0227 17:54:35.200735 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/428d2446-f933-4f1d-b757-501fb5695db2-observability-operator-tls\") pod \"observability-operator-59bdc8b94-czxql\" (UID: \"428d2446-f933-4f1d-b757-501fb5695db2\") " pod="openshift-operators/observability-operator-59bdc8b94-czxql" Feb 27 17:54:35 crc kubenswrapper[4830]: I0227 17:54:35.200871 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bb5r6\" (UniqueName: \"kubernetes.io/projected/428d2446-f933-4f1d-b757-501fb5695db2-kube-api-access-bb5r6\") pod \"observability-operator-59bdc8b94-czxql\" (UID: \"428d2446-f933-4f1d-b757-501fb5695db2\") " pod="openshift-operators/observability-operator-59bdc8b94-czxql" Feb 27 17:54:35 crc kubenswrapper[4830]: I0227 17:54:35.220168 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-66fc8b6fcc-lvddk" Feb 27 17:54:35 crc kubenswrapper[4830]: I0227 17:54:35.305640 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/428d2446-f933-4f1d-b757-501fb5695db2-observability-operator-tls\") pod \"observability-operator-59bdc8b94-czxql\" (UID: \"428d2446-f933-4f1d-b757-501fb5695db2\") " pod="openshift-operators/observability-operator-59bdc8b94-czxql" Feb 27 17:54:35 crc kubenswrapper[4830]: I0227 17:54:35.305773 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bb5r6\" (UniqueName: \"kubernetes.io/projected/428d2446-f933-4f1d-b757-501fb5695db2-kube-api-access-bb5r6\") pod \"observability-operator-59bdc8b94-czxql\" (UID: \"428d2446-f933-4f1d-b757-501fb5695db2\") " pod="openshift-operators/observability-operator-59bdc8b94-czxql" Feb 27 17:54:35 crc kubenswrapper[4830]: I0227 17:54:35.322110 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/428d2446-f933-4f1d-b757-501fb5695db2-observability-operator-tls\") pod \"observability-operator-59bdc8b94-czxql\" (UID: \"428d2446-f933-4f1d-b757-501fb5695db2\") " pod="openshift-operators/observability-operator-59bdc8b94-czxql" Feb 27 17:54:35 crc kubenswrapper[4830]: I0227 17:54:35.334784 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-5tqdq"] Feb 27 17:54:35 crc kubenswrapper[4830]: I0227 17:54:35.338919 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-5tqdq" Feb 27 17:54:35 crc kubenswrapper[4830]: I0227 17:54:35.358671 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-sr6xx" Feb 27 17:54:35 crc kubenswrapper[4830]: I0227 17:54:35.367154 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bb5r6\" (UniqueName: \"kubernetes.io/projected/428d2446-f933-4f1d-b757-501fb5695db2-kube-api-access-bb5r6\") pod \"observability-operator-59bdc8b94-czxql\" (UID: \"428d2446-f933-4f1d-b757-501fb5695db2\") " pod="openshift-operators/observability-operator-59bdc8b94-czxql" Feb 27 17:54:35 crc kubenswrapper[4830]: I0227 17:54:35.371738 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-5tqdq"] Feb 27 17:54:35 crc kubenswrapper[4830]: I0227 17:54:35.410394 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/df7ff018-e3f5-4243-bb66-c04cfa3ff9f9-openshift-service-ca\") pod \"perses-operator-5bf474d74f-5tqdq\" (UID: \"df7ff018-e3f5-4243-bb66-c04cfa3ff9f9\") " pod="openshift-operators/perses-operator-5bf474d74f-5tqdq" Feb 27 17:54:35 crc kubenswrapper[4830]: I0227 17:54:35.410891 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glp5d\" (UniqueName: \"kubernetes.io/projected/df7ff018-e3f5-4243-bb66-c04cfa3ff9f9-kube-api-access-glp5d\") pod \"perses-operator-5bf474d74f-5tqdq\" (UID: \"df7ff018-e3f5-4243-bb66-c04cfa3ff9f9\") " pod="openshift-operators/perses-operator-5bf474d74f-5tqdq" Feb 27 17:54:35 crc kubenswrapper[4830]: I0227 17:54:35.518421 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-glp5d\" (UniqueName: 
\"kubernetes.io/projected/df7ff018-e3f5-4243-bb66-c04cfa3ff9f9-kube-api-access-glp5d\") pod \"perses-operator-5bf474d74f-5tqdq\" (UID: \"df7ff018-e3f5-4243-bb66-c04cfa3ff9f9\") " pod="openshift-operators/perses-operator-5bf474d74f-5tqdq" Feb 27 17:54:35 crc kubenswrapper[4830]: I0227 17:54:35.518590 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/df7ff018-e3f5-4243-bb66-c04cfa3ff9f9-openshift-service-ca\") pod \"perses-operator-5bf474d74f-5tqdq\" (UID: \"df7ff018-e3f5-4243-bb66-c04cfa3ff9f9\") " pod="openshift-operators/perses-operator-5bf474d74f-5tqdq" Feb 27 17:54:35 crc kubenswrapper[4830]: I0227 17:54:35.519463 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/df7ff018-e3f5-4243-bb66-c04cfa3ff9f9-openshift-service-ca\") pod \"perses-operator-5bf474d74f-5tqdq\" (UID: \"df7ff018-e3f5-4243-bb66-c04cfa3ff9f9\") " pod="openshift-operators/perses-operator-5bf474d74f-5tqdq" Feb 27 17:54:35 crc kubenswrapper[4830]: I0227 17:54:35.549594 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-glp5d\" (UniqueName: \"kubernetes.io/projected/df7ff018-e3f5-4243-bb66-c04cfa3ff9f9-kube-api-access-glp5d\") pod \"perses-operator-5bf474d74f-5tqdq\" (UID: \"df7ff018-e3f5-4243-bb66-c04cfa3ff9f9\") " pod="openshift-operators/perses-operator-5bf474d74f-5tqdq" Feb 27 17:54:35 crc kubenswrapper[4830]: I0227 17:54:35.578714 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-czxql" Feb 27 17:54:35 crc kubenswrapper[4830]: I0227 17:54:35.687721 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-5tqdq" Feb 27 17:54:35 crc kubenswrapper[4830]: I0227 17:54:35.748852 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-x6smj"] Feb 27 17:54:35 crc kubenswrapper[4830]: I0227 17:54:35.779923 4830 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 27 17:54:35 crc kubenswrapper[4830]: I0227 17:54:35.790381 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-66fc8b6fcc-5nr6x"] Feb 27 17:54:35 crc kubenswrapper[4830]: W0227 17:54:35.802928 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcfe8c971_6fe4_44ae_bea8_d3b6a17821d0.slice/crio-fdb9f65cb82bff4bef3d7e5005ac18c046202459dc45db8884a23bdcf5251a5d WatchSource:0}: Error finding container fdb9f65cb82bff4bef3d7e5005ac18c046202459dc45db8884a23bdcf5251a5d: Status 404 returned error can't find the container with id fdb9f65cb82bff4bef3d7e5005ac18c046202459dc45db8884a23bdcf5251a5d Feb 27 17:54:35 crc kubenswrapper[4830]: I0227 17:54:35.909878 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-66fc8b6fcc-lvddk"] Feb 27 17:54:36 crc kubenswrapper[4830]: I0227 17:54:36.150919 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-66fc8b6fcc-5nr6x" event={"ID":"cfe8c971-6fe4-44ae-bea8-d3b6a17821d0","Type":"ContainerStarted","Data":"fdb9f65cb82bff4bef3d7e5005ac18c046202459dc45db8884a23bdcf5251a5d"} Feb 27 17:54:36 crc kubenswrapper[4830]: I0227 17:54:36.152102 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-66fc8b6fcc-lvddk" 
event={"ID":"2e9c720f-41bf-4770-a857-835cd3bf0cbb","Type":"ContainerStarted","Data":"95f5380ed8509186001b728fd6df8a66e15a209f6da01dd38f61ede52407018a"} Feb 27 17:54:36 crc kubenswrapper[4830]: I0227 17:54:36.153078 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-x6smj" event={"ID":"fb796dd0-1d3a-4037-a42a-7427293ea799","Type":"ContainerStarted","Data":"b7b94090e1c753b31600c50edd80721650f4010167196a06ed0eb1c2be17f22e"} Feb 27 17:54:36 crc kubenswrapper[4830]: I0227 17:54:36.182600 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-czxql"] Feb 27 17:54:36 crc kubenswrapper[4830]: W0227 17:54:36.378127 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddf7ff018_e3f5_4243_bb66_c04cfa3ff9f9.slice/crio-363ab1df406f27623c1906fa56fa9a55fed367644eb153e31f5f6773d76965b3 WatchSource:0}: Error finding container 363ab1df406f27623c1906fa56fa9a55fed367644eb153e31f5f6773d76965b3: Status 404 returned error can't find the container with id 363ab1df406f27623c1906fa56fa9a55fed367644eb153e31f5f6773d76965b3 Feb 27 17:54:36 crc kubenswrapper[4830]: I0227 17:54:36.382483 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-5tqdq"] Feb 27 17:54:37 crc kubenswrapper[4830]: I0227 17:54:37.177400 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-czxql" event={"ID":"428d2446-f933-4f1d-b757-501fb5695db2","Type":"ContainerStarted","Data":"a0d0b746c58e71b5eabf17631d4c2835675b763c2161f1d72f78a3566fd2a844"} Feb 27 17:54:37 crc kubenswrapper[4830]: I0227 17:54:37.198157 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-5tqdq" 
event={"ID":"df7ff018-e3f5-4243-bb66-c04cfa3ff9f9","Type":"ContainerStarted","Data":"363ab1df406f27623c1906fa56fa9a55fed367644eb153e31f5f6773d76965b3"} Feb 27 17:54:41 crc kubenswrapper[4830]: I0227 17:54:41.762713 4830 scope.go:117] "RemoveContainer" containerID="9c7600b1d02b30467a3c9249e6962a5faed8288686d8237218305c7cc4357171" Feb 27 17:54:41 crc kubenswrapper[4830]: E0227 17:54:41.764332 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 17:54:46 crc kubenswrapper[4830]: E0227 17:54:46.824603 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gbcl6" podUID="90e915d6-d74a-4f5b-a8da-8f0f2acdda48" Feb 27 17:54:47 crc kubenswrapper[4830]: I0227 17:54:47.415508 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-5tqdq" event={"ID":"df7ff018-e3f5-4243-bb66-c04cfa3ff9f9","Type":"ContainerStarted","Data":"c71c5a41f84ae5ac3c93e5d64926e64bc95789fd21422b62c6a3845f0447e8b2"} Feb 27 17:54:47 crc kubenswrapper[4830]: I0227 17:54:47.416022 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-5tqdq" Feb 27 17:54:47 crc kubenswrapper[4830]: I0227 17:54:47.417764 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8q6c2" 
event={"ID":"bab9b8c9-003b-4139-b9d5-2302e4773442","Type":"ContainerStarted","Data":"aea3a89c3bf0634211953c9ff348caa6b50b938e0d2b975a48889bae42cac06c"} Feb 27 17:54:47 crc kubenswrapper[4830]: I0227 17:54:47.420926 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-czxql" event={"ID":"428d2446-f933-4f1d-b757-501fb5695db2","Type":"ContainerStarted","Data":"a7e4d088664dc95b06d46e04cc27a453ca89fd6d545e84e97466a659559a06a9"} Feb 27 17:54:47 crc kubenswrapper[4830]: I0227 17:54:47.421182 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-czxql" Feb 27 17:54:47 crc kubenswrapper[4830]: I0227 17:54:47.422593 4830 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-czxql container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.1.174:8081/healthz\": dial tcp 10.217.1.174:8081: connect: connection refused" start-of-body= Feb 27 17:54:47 crc kubenswrapper[4830]: I0227 17:54:47.422628 4830 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-59bdc8b94-czxql" podUID="428d2446-f933-4f1d-b757-501fb5695db2" containerName="operator" probeResult="failure" output="Get \"http://10.217.1.174:8081/healthz\": dial tcp 10.217.1.174:8081: connect: connection refused" Feb 27 17:54:47 crc kubenswrapper[4830]: I0227 17:54:47.440324 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-5tqdq" podStartSLOduration=1.953317995 podStartE2EDuration="12.440302831s" podCreationTimestamp="2026-02-27 17:54:35 +0000 UTC" firstStartedPulling="2026-02-27 17:54:36.381404153 +0000 UTC m=+6472.470676616" lastFinishedPulling="2026-02-27 17:54:46.868388989 +0000 UTC m=+6482.957661452" observedRunningTime="2026-02-27 17:54:47.432844793 +0000 UTC m=+6483.522117256" 
watchObservedRunningTime="2026-02-27 17:54:47.440302831 +0000 UTC m=+6483.529575304" Feb 27 17:54:47 crc kubenswrapper[4830]: I0227 17:54:47.486094 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-czxql" podStartSLOduration=1.7990469249999999 podStartE2EDuration="12.486070875s" podCreationTimestamp="2026-02-27 17:54:35 +0000 UTC" firstStartedPulling="2026-02-27 17:54:36.192902165 +0000 UTC m=+6472.282174628" lastFinishedPulling="2026-02-27 17:54:46.879926115 +0000 UTC m=+6482.969198578" observedRunningTime="2026-02-27 17:54:47.461967889 +0000 UTC m=+6483.551240352" watchObservedRunningTime="2026-02-27 17:54:47.486070875 +0000 UTC m=+6483.575343328" Feb 27 17:54:48 crc kubenswrapper[4830]: I0227 17:54:48.435274 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-czxql" Feb 27 17:54:49 crc kubenswrapper[4830]: I0227 17:54:49.453133 4830 generic.go:334] "Generic (PLEG): container finished" podID="bab9b8c9-003b-4139-b9d5-2302e4773442" containerID="aea3a89c3bf0634211953c9ff348caa6b50b938e0d2b975a48889bae42cac06c" exitCode=0 Feb 27 17:54:49 crc kubenswrapper[4830]: I0227 17:54:49.453220 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8q6c2" event={"ID":"bab9b8c9-003b-4139-b9d5-2302e4773442","Type":"ContainerDied","Data":"aea3a89c3bf0634211953c9ff348caa6b50b938e0d2b975a48889bae42cac06c"} Feb 27 17:54:50 crc kubenswrapper[4830]: I0227 17:54:50.466340 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8q6c2" event={"ID":"bab9b8c9-003b-4139-b9d5-2302e4773442","Type":"ContainerStarted","Data":"bc393f4f2d1ae17c80a7b45d09f041da42bb315f921062f75176566e71ec1dc8"} Feb 27 17:54:50 crc kubenswrapper[4830]: I0227 17:54:50.493680 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/redhat-marketplace-8q6c2" podStartSLOduration=3.05384396 podStartE2EDuration="3m26.493657086s" podCreationTimestamp="2026-02-27 17:51:24 +0000 UTC" firstStartedPulling="2026-02-27 17:51:26.413256656 +0000 UTC m=+6282.502529139" lastFinishedPulling="2026-02-27 17:54:49.853069802 +0000 UTC m=+6485.942342265" observedRunningTime="2026-02-27 17:54:50.486103706 +0000 UTC m=+6486.575376169" watchObservedRunningTime="2026-02-27 17:54:50.493657086 +0000 UTC m=+6486.582929549" Feb 27 17:54:52 crc kubenswrapper[4830]: I0227 17:54:52.491156 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-66fc8b6fcc-5nr6x" event={"ID":"cfe8c971-6fe4-44ae-bea8-d3b6a17821d0","Type":"ContainerStarted","Data":"15dac2fca2addc2a34267a0aa7fb091cdc5fe0e13b81975312393c15878642e3"} Feb 27 17:54:52 crc kubenswrapper[4830]: I0227 17:54:52.493872 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-66fc8b6fcc-lvddk" event={"ID":"2e9c720f-41bf-4770-a857-835cd3bf0cbb","Type":"ContainerStarted","Data":"99e57a7fb6cb558cd3f0808dba575d68542f7f656740c01f97ecc54edf3c5ead"} Feb 27 17:54:52 crc kubenswrapper[4830]: I0227 17:54:52.565909 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-66fc8b6fcc-5nr6x" podStartSLOduration=2.800718106 podStartE2EDuration="18.565876186s" podCreationTimestamp="2026-02-27 17:54:34 +0000 UTC" firstStartedPulling="2026-02-27 17:54:35.806313813 +0000 UTC m=+6471.895586276" lastFinishedPulling="2026-02-27 17:54:51.571471903 +0000 UTC m=+6487.660744356" observedRunningTime="2026-02-27 17:54:52.51546687 +0000 UTC m=+6488.604739363" watchObservedRunningTime="2026-02-27 17:54:52.565876186 +0000 UTC m=+6488.655148669" Feb 27 17:54:52 crc kubenswrapper[4830]: I0227 17:54:52.583701 4830 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-66fc8b6fcc-lvddk" podStartSLOduration=2.9422390099999998 podStartE2EDuration="18.583675681s" podCreationTimestamp="2026-02-27 17:54:34 +0000 UTC" firstStartedPulling="2026-02-27 17:54:35.923573397 +0000 UTC m=+6472.012845860" lastFinishedPulling="2026-02-27 17:54:51.565010068 +0000 UTC m=+6487.654282531" observedRunningTime="2026-02-27 17:54:52.566246594 +0000 UTC m=+6488.655519077" watchObservedRunningTime="2026-02-27 17:54:52.583675681 +0000 UTC m=+6488.672948144" Feb 27 17:54:52 crc kubenswrapper[4830]: I0227 17:54:52.762776 4830 scope.go:117] "RemoveContainer" containerID="9c7600b1d02b30467a3c9249e6962a5faed8288686d8237218305c7cc4357171" Feb 27 17:54:52 crc kubenswrapper[4830]: E0227 17:54:52.763173 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 17:54:54 crc kubenswrapper[4830]: I0227 17:54:54.535430 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-8q6c2" Feb 27 17:54:54 crc kubenswrapper[4830]: I0227 17:54:54.535995 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-8q6c2" Feb 27 17:54:54 crc kubenswrapper[4830]: I0227 17:54:54.660204 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-8q6c2" Feb 27 17:54:55 crc kubenswrapper[4830]: I0227 17:54:55.081735 4830 scope.go:117] "RemoveContainer" containerID="2556677abbb65c17d6c1ad2d531cfd136b59424a4216ec299d5e55f0e5e9209a" Feb 27 17:54:55 crc 
kubenswrapper[4830]: I0227 17:54:55.156074 4830 scope.go:117] "RemoveContainer" containerID="2e2f575e03dcedacee0a87532b1a795db59aa5461672477ec9edebb6c4178cba" Feb 27 17:54:55 crc kubenswrapper[4830]: I0227 17:54:55.212773 4830 scope.go:117] "RemoveContainer" containerID="e6025491c2a8d25ef3d0a88646ac4094e038d55e7bc95aa1f2f13e968309f97d" Feb 27 17:54:55 crc kubenswrapper[4830]: I0227 17:54:55.634189 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-8q6c2" Feb 27 17:54:55 crc kubenswrapper[4830]: I0227 17:54:55.691772 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-5tqdq" Feb 27 17:54:56 crc kubenswrapper[4830]: I0227 17:54:56.915153 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8q6c2"] Feb 27 17:54:57 crc kubenswrapper[4830]: I0227 17:54:57.571726 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-8q6c2" podUID="bab9b8c9-003b-4139-b9d5-2302e4773442" containerName="registry-server" containerID="cri-o://bc393f4f2d1ae17c80a7b45d09f041da42bb315f921062f75176566e71ec1dc8" gracePeriod=2 Feb 27 17:54:57 crc kubenswrapper[4830]: E0227 17:54:57.764532 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gbcl6" podUID="90e915d6-d74a-4f5b-a8da-8f0f2acdda48" Feb 27 17:54:58 crc kubenswrapper[4830]: I0227 17:54:58.203672 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8q6c2" Feb 27 17:54:58 crc kubenswrapper[4830]: I0227 17:54:58.310277 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bab9b8c9-003b-4139-b9d5-2302e4773442-utilities\") pod \"bab9b8c9-003b-4139-b9d5-2302e4773442\" (UID: \"bab9b8c9-003b-4139-b9d5-2302e4773442\") " Feb 27 17:54:58 crc kubenswrapper[4830]: I0227 17:54:58.311122 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bab9b8c9-003b-4139-b9d5-2302e4773442-utilities" (OuterVolumeSpecName: "utilities") pod "bab9b8c9-003b-4139-b9d5-2302e4773442" (UID: "bab9b8c9-003b-4139-b9d5-2302e4773442"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 17:54:58 crc kubenswrapper[4830]: I0227 17:54:58.311189 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bab9b8c9-003b-4139-b9d5-2302e4773442-catalog-content\") pod \"bab9b8c9-003b-4139-b9d5-2302e4773442\" (UID: \"bab9b8c9-003b-4139-b9d5-2302e4773442\") " Feb 27 17:54:58 crc kubenswrapper[4830]: I0227 17:54:58.318350 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z2hnz\" (UniqueName: \"kubernetes.io/projected/bab9b8c9-003b-4139-b9d5-2302e4773442-kube-api-access-z2hnz\") pod \"bab9b8c9-003b-4139-b9d5-2302e4773442\" (UID: \"bab9b8c9-003b-4139-b9d5-2302e4773442\") " Feb 27 17:54:58 crc kubenswrapper[4830]: I0227 17:54:58.319618 4830 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bab9b8c9-003b-4139-b9d5-2302e4773442-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 17:54:58 crc kubenswrapper[4830]: I0227 17:54:58.333161 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/bab9b8c9-003b-4139-b9d5-2302e4773442-kube-api-access-z2hnz" (OuterVolumeSpecName: "kube-api-access-z2hnz") pod "bab9b8c9-003b-4139-b9d5-2302e4773442" (UID: "bab9b8c9-003b-4139-b9d5-2302e4773442"). InnerVolumeSpecName "kube-api-access-z2hnz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:54:58 crc kubenswrapper[4830]: I0227 17:54:58.337754 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bab9b8c9-003b-4139-b9d5-2302e4773442-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bab9b8c9-003b-4139-b9d5-2302e4773442" (UID: "bab9b8c9-003b-4139-b9d5-2302e4773442"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 17:54:58 crc kubenswrapper[4830]: I0227 17:54:58.420992 4830 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bab9b8c9-003b-4139-b9d5-2302e4773442-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 17:54:58 crc kubenswrapper[4830]: I0227 17:54:58.421398 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z2hnz\" (UniqueName: \"kubernetes.io/projected/bab9b8c9-003b-4139-b9d5-2302e4773442-kube-api-access-z2hnz\") on node \"crc\" DevicePath \"\"" Feb 27 17:54:58 crc kubenswrapper[4830]: I0227 17:54:58.595236 4830 generic.go:334] "Generic (PLEG): container finished" podID="bab9b8c9-003b-4139-b9d5-2302e4773442" containerID="bc393f4f2d1ae17c80a7b45d09f041da42bb315f921062f75176566e71ec1dc8" exitCode=0 Feb 27 17:54:58 crc kubenswrapper[4830]: I0227 17:54:58.595294 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8q6c2" event={"ID":"bab9b8c9-003b-4139-b9d5-2302e4773442","Type":"ContainerDied","Data":"bc393f4f2d1ae17c80a7b45d09f041da42bb315f921062f75176566e71ec1dc8"} Feb 27 17:54:58 crc kubenswrapper[4830]: I0227 17:54:58.595306 4830 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8q6c2" Feb 27 17:54:58 crc kubenswrapper[4830]: I0227 17:54:58.595339 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8q6c2" event={"ID":"bab9b8c9-003b-4139-b9d5-2302e4773442","Type":"ContainerDied","Data":"823d33d38e084915744e29545c6ae15c31aa64ed15de8f65bfdf834ee4d420d3"} Feb 27 17:54:58 crc kubenswrapper[4830]: I0227 17:54:58.595366 4830 scope.go:117] "RemoveContainer" containerID="bc393f4f2d1ae17c80a7b45d09f041da42bb315f921062f75176566e71ec1dc8" Feb 27 17:54:58 crc kubenswrapper[4830]: I0227 17:54:58.628193 4830 scope.go:117] "RemoveContainer" containerID="aea3a89c3bf0634211953c9ff348caa6b50b938e0d2b975a48889bae42cac06c" Feb 27 17:54:58 crc kubenswrapper[4830]: I0227 17:54:58.636529 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8q6c2"] Feb 27 17:54:58 crc kubenswrapper[4830]: I0227 17:54:58.650303 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-8q6c2"] Feb 27 17:54:58 crc kubenswrapper[4830]: I0227 17:54:58.670891 4830 scope.go:117] "RemoveContainer" containerID="ec82a86b53241a4e93b305870a39cd73c74a95ed3d5b16f627981d401859878c" Feb 27 17:54:58 crc kubenswrapper[4830]: I0227 17:54:58.713125 4830 scope.go:117] "RemoveContainer" containerID="bc393f4f2d1ae17c80a7b45d09f041da42bb315f921062f75176566e71ec1dc8" Feb 27 17:54:58 crc kubenswrapper[4830]: E0227 17:54:58.716340 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bc393f4f2d1ae17c80a7b45d09f041da42bb315f921062f75176566e71ec1dc8\": container with ID starting with bc393f4f2d1ae17c80a7b45d09f041da42bb315f921062f75176566e71ec1dc8 not found: ID does not exist" containerID="bc393f4f2d1ae17c80a7b45d09f041da42bb315f921062f75176566e71ec1dc8" Feb 27 17:54:58 crc kubenswrapper[4830]: I0227 
17:54:58.716423 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc393f4f2d1ae17c80a7b45d09f041da42bb315f921062f75176566e71ec1dc8"} err="failed to get container status \"bc393f4f2d1ae17c80a7b45d09f041da42bb315f921062f75176566e71ec1dc8\": rpc error: code = NotFound desc = could not find container \"bc393f4f2d1ae17c80a7b45d09f041da42bb315f921062f75176566e71ec1dc8\": container with ID starting with bc393f4f2d1ae17c80a7b45d09f041da42bb315f921062f75176566e71ec1dc8 not found: ID does not exist" Feb 27 17:54:58 crc kubenswrapper[4830]: I0227 17:54:58.716472 4830 scope.go:117] "RemoveContainer" containerID="aea3a89c3bf0634211953c9ff348caa6b50b938e0d2b975a48889bae42cac06c" Feb 27 17:54:58 crc kubenswrapper[4830]: E0227 17:54:58.716877 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aea3a89c3bf0634211953c9ff348caa6b50b938e0d2b975a48889bae42cac06c\": container with ID starting with aea3a89c3bf0634211953c9ff348caa6b50b938e0d2b975a48889bae42cac06c not found: ID does not exist" containerID="aea3a89c3bf0634211953c9ff348caa6b50b938e0d2b975a48889bae42cac06c" Feb 27 17:54:58 crc kubenswrapper[4830]: I0227 17:54:58.716920 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aea3a89c3bf0634211953c9ff348caa6b50b938e0d2b975a48889bae42cac06c"} err="failed to get container status \"aea3a89c3bf0634211953c9ff348caa6b50b938e0d2b975a48889bae42cac06c\": rpc error: code = NotFound desc = could not find container \"aea3a89c3bf0634211953c9ff348caa6b50b938e0d2b975a48889bae42cac06c\": container with ID starting with aea3a89c3bf0634211953c9ff348caa6b50b938e0d2b975a48889bae42cac06c not found: ID does not exist" Feb 27 17:54:58 crc kubenswrapper[4830]: I0227 17:54:58.716970 4830 scope.go:117] "RemoveContainer" containerID="ec82a86b53241a4e93b305870a39cd73c74a95ed3d5b16f627981d401859878c" Feb 27 17:54:58 crc 
kubenswrapper[4830]: E0227 17:54:58.717288 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec82a86b53241a4e93b305870a39cd73c74a95ed3d5b16f627981d401859878c\": container with ID starting with ec82a86b53241a4e93b305870a39cd73c74a95ed3d5b16f627981d401859878c not found: ID does not exist" containerID="ec82a86b53241a4e93b305870a39cd73c74a95ed3d5b16f627981d401859878c" Feb 27 17:54:58 crc kubenswrapper[4830]: I0227 17:54:58.717328 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec82a86b53241a4e93b305870a39cd73c74a95ed3d5b16f627981d401859878c"} err="failed to get container status \"ec82a86b53241a4e93b305870a39cd73c74a95ed3d5b16f627981d401859878c\": rpc error: code = NotFound desc = could not find container \"ec82a86b53241a4e93b305870a39cd73c74a95ed3d5b16f627981d401859878c\": container with ID starting with ec82a86b53241a4e93b305870a39cd73c74a95ed3d5b16f627981d401859878c not found: ID does not exist" Feb 27 17:54:58 crc kubenswrapper[4830]: I0227 17:54:58.777599 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bab9b8c9-003b-4139-b9d5-2302e4773442" path="/var/lib/kubelet/pods/bab9b8c9-003b-4139-b9d5-2302e4773442/volumes" Feb 27 17:55:04 crc kubenswrapper[4830]: I0227 17:55:04.776203 4830 scope.go:117] "RemoveContainer" containerID="9c7600b1d02b30467a3c9249e6962a5faed8288686d8237218305c7cc4357171" Feb 27 17:55:04 crc kubenswrapper[4830]: E0227 17:55:04.777664 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 17:55:08 crc 
kubenswrapper[4830]: E0227 17:55:08.774216 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gbcl6" podUID="90e915d6-d74a-4f5b-a8da-8f0f2acdda48" Feb 27 17:55:15 crc kubenswrapper[4830]: I0227 17:55:15.763118 4830 scope.go:117] "RemoveContainer" containerID="9c7600b1d02b30467a3c9249e6962a5faed8288686d8237218305c7cc4357171" Feb 27 17:55:15 crc kubenswrapper[4830]: E0227 17:55:15.764484 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 17:55:23 crc kubenswrapper[4830]: E0227 17:55:23.766564 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gbcl6" podUID="90e915d6-d74a-4f5b-a8da-8f0f2acdda48" Feb 27 17:55:27 crc kubenswrapper[4830]: I0227 17:55:27.763326 4830 scope.go:117] "RemoveContainer" containerID="9c7600b1d02b30467a3c9249e6962a5faed8288686d8237218305c7cc4357171" Feb 27 17:55:27 crc kubenswrapper[4830]: E0227 17:55:27.764733 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 17:55:34 crc kubenswrapper[4830]: E0227 17:55:34.765494 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gbcl6" podUID="90e915d6-d74a-4f5b-a8da-8f0f2acdda48" Feb 27 17:55:37 crc kubenswrapper[4830]: E0227 17:55:37.193825 4830 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/cluster-observability-operator/obo-prometheus-rhel9-operator@sha256=9c27a3901c5c632e259da7ac87eb6eadf10c51bf967b9fbec5253ade4cce6a9f/signature-4: status 500 (Internal Server Error)" image="registry.redhat.io/cluster-observability-operator/obo-prometheus-rhel9-operator@sha256:e7e5f4c5e8ab0ba298ef0295a7137d438a42eb177d9322212cde6ba8f367912a" Feb 27 17:55:37 crc kubenswrapper[4830]: E0227 17:55:37.194587 4830 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:prometheus-operator,Image:registry.redhat.io/cluster-observability-operator/obo-prometheus-rhel9-operator@sha256:e7e5f4c5e8ab0ba298ef0295a7137d438a42eb177d9322212cde6ba8f367912a,Command:[],Args:[--prometheus-config-reloader=$(RELATED_IMAGE_PROMETHEUS_CONFIG_RELOADER) --prometheus-instance-selector=app.kubernetes.io/managed-by=observability-operator --alertmanager-instance-selector=app.kubernetes.io/managed-by=observability-operator --thanos-ruler-instance-selector=app.kubernetes.io/managed-by=observability-operator --watch-referenced-objects-in-all-namespaces=true 
--disable-unmanaged-prometheus-configuration=true],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:GOGC,Value:30,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_PROMETHEUS_CONFIG_RELOADER,Value:registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-prometheus-config-reloader-rhel9@sha256:9a2097bc5b2e02bc1703f64c452ce8fe4bc6775b732db930ff4770b76ae4653a,ValueFrom:nil,},EnvVar{Name:OPERATOR_CONDITION_NAME,Value:cluster-observability-operator.v1.3.1,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{100 -3} {} 100m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{157286400 0} {} 150Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mq658,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod obo-prometheus-operator-68bc856cb9-x6smj_openshift-operators(fb796dd0-1d3a-4037-a42a-7427293ea799): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from 
https://registry.redhat.io/containers/sigstore/cluster-observability-operator/obo-prometheus-rhel9-operator@sha256=9c27a3901c5c632e259da7ac87eb6eadf10c51bf967b9fbec5253ade4cce6a9f/signature-4: status 500 (Internal Server Error)" logger="UnhandledError" Feb 27 17:55:37 crc kubenswrapper[4830]: E0227 17:55:37.195832 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus-operator\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/cluster-observability-operator/obo-prometheus-rhel9-operator@sha256=9c27a3901c5c632e259da7ac87eb6eadf10c51bf967b9fbec5253ade4cce6a9f/signature-4: status 500 (Internal Server Error)\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-x6smj" podUID="fb796dd0-1d3a-4037-a42a-7427293ea799" Feb 27 17:55:38 crc kubenswrapper[4830]: E0227 17:55:38.142065 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus-operator\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/obo-prometheus-rhel9-operator@sha256:e7e5f4c5e8ab0ba298ef0295a7137d438a42eb177d9322212cde6ba8f367912a\\\"\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-x6smj" podUID="fb796dd0-1d3a-4037-a42a-7427293ea799" Feb 27 17:55:39 crc kubenswrapper[4830]: I0227 17:55:39.762611 4830 scope.go:117] "RemoveContainer" containerID="9c7600b1d02b30467a3c9249e6962a5faed8288686d8237218305c7cc4357171" Feb 27 17:55:39 crc kubenswrapper[4830]: E0227 17:55:39.763525 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 17:55:46 crc kubenswrapper[4830]: E0227 17:55:46.766631 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gbcl6" podUID="90e915d6-d74a-4f5b-a8da-8f0f2acdda48" Feb 27 17:55:53 crc kubenswrapper[4830]: I0227 17:55:53.763009 4830 scope.go:117] "RemoveContainer" containerID="9c7600b1d02b30467a3c9249e6962a5faed8288686d8237218305c7cc4357171" Feb 27 17:55:53 crc kubenswrapper[4830]: E0227 17:55:53.766035 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 17:55:54 crc kubenswrapper[4830]: E0227 17:55:54.597363 4830 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/cluster-observability-operator/obo-prometheus-rhel9-operator@sha256=9c27a3901c5c632e259da7ac87eb6eadf10c51bf967b9fbec5253ade4cce6a9f/signature-4: status 500 (Internal Server Error)" image="registry.redhat.io/cluster-observability-operator/obo-prometheus-rhel9-operator@sha256:e7e5f4c5e8ab0ba298ef0295a7137d438a42eb177d9322212cde6ba8f367912a" Feb 27 17:55:54 crc kubenswrapper[4830]: E0227 17:55:54.597975 4830 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:prometheus-operator,Image:registry.redhat.io/cluster-observability-operator/obo-prometheus-rhel9-operator@sha256:e7e5f4c5e8ab0ba298ef0295a7137d438a42eb177d9322212cde6ba8f367912a,Command:[],Args:[--prometheus-config-reloader=$(RELATED_IMAGE_PROMETHEUS_CONFIG_RELOADER) --prometheus-instance-selector=app.kubernetes.io/managed-by=observability-operator --alertmanager-instance-selector=app.kubernetes.io/managed-by=observability-operator --thanos-ruler-instance-selector=app.kubernetes.io/managed-by=observability-operator --watch-referenced-objects-in-all-namespaces=true --disable-unmanaged-prometheus-configuration=true],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:GOGC,Value:30,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_PROMETHEUS_CONFIG_RELOADER,Value:registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-prometheus-config-reloader-rhel9@sha256:9a2097bc5b2e02bc1703f64c452ce8fe4bc6775b732db930ff4770b76ae4653a,ValueFrom:nil,},EnvVar{Name:OPERATOR_CONDITION_NAME,Value:cluster-observability-operator.v1.3.1,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{100 -3} {} 100m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{157286400 0} {} 150Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mq658,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod obo-prometheus-operator-68bc856cb9-x6smj_openshift-operators(fb796dd0-1d3a-4037-a42a-7427293ea799): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/cluster-observability-operator/obo-prometheus-rhel9-operator@sha256=9c27a3901c5c632e259da7ac87eb6eadf10c51bf967b9fbec5253ade4cce6a9f/signature-4: status 500 (Internal Server Error)" logger="UnhandledError" Feb 27 17:55:54 crc kubenswrapper[4830]: E0227 17:55:54.599846 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus-operator\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/cluster-observability-operator/obo-prometheus-rhel9-operator@sha256=9c27a3901c5c632e259da7ac87eb6eadf10c51bf967b9fbec5253ade4cce6a9f/signature-4: status 500 (Internal Server Error)\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-x6smj" podUID="fb796dd0-1d3a-4037-a42a-7427293ea799" Feb 27 17:55:55 
crc kubenswrapper[4830]: I0227 17:55:55.634912 4830 scope.go:117] "RemoveContainer" containerID="fa6d45dcce0156eb5a88ac195afd7712744613e8cabf76b9bdad3464a2496b86" Feb 27 17:55:55 crc kubenswrapper[4830]: I0227 17:55:55.718039 4830 scope.go:117] "RemoveContainer" containerID="38d463762cb6f6f960f6a295c4de3ff134a0de7d8d84fc6a56cf3b0b761e49a3" Feb 27 17:55:58 crc kubenswrapper[4830]: E0227 17:55:58.765799 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gbcl6" podUID="90e915d6-d74a-4f5b-a8da-8f0f2acdda48" Feb 27 17:56:00 crc kubenswrapper[4830]: I0227 17:56:00.158226 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536916-blsrw"] Feb 27 17:56:00 crc kubenswrapper[4830]: E0227 17:56:00.158972 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bab9b8c9-003b-4139-b9d5-2302e4773442" containerName="registry-server" Feb 27 17:56:00 crc kubenswrapper[4830]: I0227 17:56:00.158997 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="bab9b8c9-003b-4139-b9d5-2302e4773442" containerName="registry-server" Feb 27 17:56:00 crc kubenswrapper[4830]: E0227 17:56:00.159037 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bab9b8c9-003b-4139-b9d5-2302e4773442" containerName="extract-utilities" Feb 27 17:56:00 crc kubenswrapper[4830]: I0227 17:56:00.159050 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="bab9b8c9-003b-4139-b9d5-2302e4773442" containerName="extract-utilities" Feb 27 17:56:00 crc kubenswrapper[4830]: E0227 17:56:00.159098 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bab9b8c9-003b-4139-b9d5-2302e4773442" containerName="extract-content" Feb 27 17:56:00 crc kubenswrapper[4830]: I0227 17:56:00.159110 4830 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="bab9b8c9-003b-4139-b9d5-2302e4773442" containerName="extract-content" Feb 27 17:56:00 crc kubenswrapper[4830]: I0227 17:56:00.159452 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="bab9b8c9-003b-4139-b9d5-2302e4773442" containerName="registry-server" Feb 27 17:56:00 crc kubenswrapper[4830]: I0227 17:56:00.160614 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536916-blsrw" Feb 27 17:56:00 crc kubenswrapper[4830]: I0227 17:56:00.162528 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 17:56:00 crc kubenswrapper[4830]: I0227 17:56:00.162904 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 17:56:00 crc kubenswrapper[4830]: I0227 17:56:00.163380 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lmbtm" Feb 27 17:56:00 crc kubenswrapper[4830]: I0227 17:56:00.170354 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536916-blsrw"] Feb 27 17:56:00 crc kubenswrapper[4830]: I0227 17:56:00.277223 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tk4fz\" (UniqueName: \"kubernetes.io/projected/9260de39-76c8-432d-9455-4e787911d8c7-kube-api-access-tk4fz\") pod \"auto-csr-approver-29536916-blsrw\" (UID: \"9260de39-76c8-432d-9455-4e787911d8c7\") " pod="openshift-infra/auto-csr-approver-29536916-blsrw" Feb 27 17:56:00 crc kubenswrapper[4830]: I0227 17:56:00.379603 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tk4fz\" (UniqueName: \"kubernetes.io/projected/9260de39-76c8-432d-9455-4e787911d8c7-kube-api-access-tk4fz\") pod \"auto-csr-approver-29536916-blsrw\" (UID: \"9260de39-76c8-432d-9455-4e787911d8c7\") " 
pod="openshift-infra/auto-csr-approver-29536916-blsrw" Feb 27 17:56:00 crc kubenswrapper[4830]: I0227 17:56:00.405319 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tk4fz\" (UniqueName: \"kubernetes.io/projected/9260de39-76c8-432d-9455-4e787911d8c7-kube-api-access-tk4fz\") pod \"auto-csr-approver-29536916-blsrw\" (UID: \"9260de39-76c8-432d-9455-4e787911d8c7\") " pod="openshift-infra/auto-csr-approver-29536916-blsrw" Feb 27 17:56:00 crc kubenswrapper[4830]: I0227 17:56:00.526527 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536916-blsrw" Feb 27 17:56:01 crc kubenswrapper[4830]: I0227 17:56:01.002784 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536916-blsrw"] Feb 27 17:56:01 crc kubenswrapper[4830]: W0227 17:56:01.007139 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9260de39_76c8_432d_9455_4e787911d8c7.slice/crio-391a47d1ac5086832a41757639a8fb6bcb7f9d1cb208b47c786197da5c2db5ea WatchSource:0}: Error finding container 391a47d1ac5086832a41757639a8fb6bcb7f9d1cb208b47c786197da5c2db5ea: Status 404 returned error can't find the container with id 391a47d1ac5086832a41757639a8fb6bcb7f9d1cb208b47c786197da5c2db5ea Feb 27 17:56:01 crc kubenswrapper[4830]: I0227 17:56:01.417885 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536916-blsrw" event={"ID":"9260de39-76c8-432d-9455-4e787911d8c7","Type":"ContainerStarted","Data":"391a47d1ac5086832a41757639a8fb6bcb7f9d1cb208b47c786197da5c2db5ea"} Feb 27 17:56:02 crc kubenswrapper[4830]: E0227 17:56:02.789128 4830 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9260de39_76c8_432d_9455_4e787911d8c7.slice/crio-conmon-da93b37cf886804e6a0661b93d7072a6f631fd25a4f85cdf6068f5a8a083c68b.scope\": RecentStats: unable to find data in memory cache]" Feb 27 17:56:03 crc kubenswrapper[4830]: I0227 17:56:03.445507 4830 generic.go:334] "Generic (PLEG): container finished" podID="9260de39-76c8-432d-9455-4e787911d8c7" containerID="da93b37cf886804e6a0661b93d7072a6f631fd25a4f85cdf6068f5a8a083c68b" exitCode=0 Feb 27 17:56:03 crc kubenswrapper[4830]: I0227 17:56:03.445620 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536916-blsrw" event={"ID":"9260de39-76c8-432d-9455-4e787911d8c7","Type":"ContainerDied","Data":"da93b37cf886804e6a0661b93d7072a6f631fd25a4f85cdf6068f5a8a083c68b"} Feb 27 17:56:04 crc kubenswrapper[4830]: I0227 17:56:04.794583 4830 scope.go:117] "RemoveContainer" containerID="9c7600b1d02b30467a3c9249e6962a5faed8288686d8237218305c7cc4357171" Feb 27 17:56:04 crc kubenswrapper[4830]: E0227 17:56:04.797163 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 17:56:04 crc kubenswrapper[4830]: I0227 17:56:04.982386 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536916-blsrw" Feb 27 17:56:05 crc kubenswrapper[4830]: I0227 17:56:05.095630 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk4fz\" (UniqueName: \"kubernetes.io/projected/9260de39-76c8-432d-9455-4e787911d8c7-kube-api-access-tk4fz\") pod \"9260de39-76c8-432d-9455-4e787911d8c7\" (UID: \"9260de39-76c8-432d-9455-4e787911d8c7\") " Feb 27 17:56:05 crc kubenswrapper[4830]: I0227 17:56:05.101687 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9260de39-76c8-432d-9455-4e787911d8c7-kube-api-access-tk4fz" (OuterVolumeSpecName: "kube-api-access-tk4fz") pod "9260de39-76c8-432d-9455-4e787911d8c7" (UID: "9260de39-76c8-432d-9455-4e787911d8c7"). InnerVolumeSpecName "kube-api-access-tk4fz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:56:05 crc kubenswrapper[4830]: I0227 17:56:05.198484 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk4fz\" (UniqueName: \"kubernetes.io/projected/9260de39-76c8-432d-9455-4e787911d8c7-kube-api-access-tk4fz\") on node \"crc\" DevicePath \"\"" Feb 27 17:56:05 crc kubenswrapper[4830]: I0227 17:56:05.471131 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536916-blsrw" event={"ID":"9260de39-76c8-432d-9455-4e787911d8c7","Type":"ContainerDied","Data":"391a47d1ac5086832a41757639a8fb6bcb7f9d1cb208b47c786197da5c2db5ea"} Feb 27 17:56:05 crc kubenswrapper[4830]: I0227 17:56:05.471174 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="391a47d1ac5086832a41757639a8fb6bcb7f9d1cb208b47c786197da5c2db5ea" Feb 27 17:56:05 crc kubenswrapper[4830]: I0227 17:56:05.471244 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536916-blsrw" Feb 27 17:56:06 crc kubenswrapper[4830]: I0227 17:56:06.082433 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536910-jpww7"] Feb 27 17:56:06 crc kubenswrapper[4830]: I0227 17:56:06.098570 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536910-jpww7"] Feb 27 17:56:06 crc kubenswrapper[4830]: I0227 17:56:06.786018 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec982c69-2d78-4ebd-beb8-d2b640955d6f" path="/var/lib/kubelet/pods/ec982c69-2d78-4ebd-beb8-d2b640955d6f/volumes" Feb 27 17:56:09 crc kubenswrapper[4830]: E0227 17:56:09.765673 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus-operator\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/obo-prometheus-rhel9-operator@sha256:e7e5f4c5e8ab0ba298ef0295a7137d438a42eb177d9322212cde6ba8f367912a\\\"\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-x6smj" podUID="fb796dd0-1d3a-4037-a42a-7427293ea799" Feb 27 17:56:09 crc kubenswrapper[4830]: E0227 17:56:09.766119 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gbcl6" podUID="90e915d6-d74a-4f5b-a8da-8f0f2acdda48" Feb 27 17:56:19 crc kubenswrapper[4830]: I0227 17:56:19.762823 4830 scope.go:117] "RemoveContainer" containerID="9c7600b1d02b30467a3c9249e6962a5faed8288686d8237218305c7cc4357171" Feb 27 17:56:19 crc kubenswrapper[4830]: E0227 17:56:19.763823 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 17:56:20 crc kubenswrapper[4830]: E0227 17:56:20.766110 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gbcl6" podUID="90e915d6-d74a-4f5b-a8da-8f0f2acdda48" Feb 27 17:56:25 crc kubenswrapper[4830]: E0227 17:56:25.642061 4830 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/cluster-observability-operator/obo-prometheus-rhel9-operator@sha256=9c27a3901c5c632e259da7ac87eb6eadf10c51bf967b9fbec5253ade4cce6a9f/signature-4: status 500 (Internal Server Error)" image="registry.redhat.io/cluster-observability-operator/obo-prometheus-rhel9-operator@sha256:e7e5f4c5e8ab0ba298ef0295a7137d438a42eb177d9322212cde6ba8f367912a" Feb 27 17:56:25 crc kubenswrapper[4830]: E0227 17:56:25.643395 4830 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:prometheus-operator,Image:registry.redhat.io/cluster-observability-operator/obo-prometheus-rhel9-operator@sha256:e7e5f4c5e8ab0ba298ef0295a7137d438a42eb177d9322212cde6ba8f367912a,Command:[],Args:[--prometheus-config-reloader=$(RELATED_IMAGE_PROMETHEUS_CONFIG_RELOADER) --prometheus-instance-selector=app.kubernetes.io/managed-by=observability-operator --alertmanager-instance-selector=app.kubernetes.io/managed-by=observability-operator --thanos-ruler-instance-selector=app.kubernetes.io/managed-by=observability-operator --watch-referenced-objects-in-all-namespaces=true 
--disable-unmanaged-prometheus-configuration=true],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:GOGC,Value:30,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_PROMETHEUS_CONFIG_RELOADER,Value:registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-prometheus-config-reloader-rhel9@sha256:9a2097bc5b2e02bc1703f64c452ce8fe4bc6775b732db930ff4770b76ae4653a,ValueFrom:nil,},EnvVar{Name:OPERATOR_CONDITION_NAME,Value:cluster-observability-operator.v1.3.1,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{100 -3} {} 100m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{157286400 0} {} 150Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mq658,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod obo-prometheus-operator-68bc856cb9-x6smj_openshift-operators(fb796dd0-1d3a-4037-a42a-7427293ea799): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from 
https://registry.redhat.io/containers/sigstore/cluster-observability-operator/obo-prometheus-rhel9-operator@sha256=9c27a3901c5c632e259da7ac87eb6eadf10c51bf967b9fbec5253ade4cce6a9f/signature-4: status 500 (Internal Server Error)" logger="UnhandledError" Feb 27 17:56:25 crc kubenswrapper[4830]: E0227 17:56:25.644658 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus-operator\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/cluster-observability-operator/obo-prometheus-rhel9-operator@sha256=9c27a3901c5c632e259da7ac87eb6eadf10c51bf967b9fbec5253ade4cce6a9f/signature-4: status 500 (Internal Server Error)\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-x6smj" podUID="fb796dd0-1d3a-4037-a42a-7427293ea799" Feb 27 17:56:34 crc kubenswrapper[4830]: I0227 17:56:34.770677 4830 scope.go:117] "RemoveContainer" containerID="9c7600b1d02b30467a3c9249e6962a5faed8288686d8237218305c7cc4357171" Feb 27 17:56:34 crc kubenswrapper[4830]: E0227 17:56:34.774516 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 17:56:35 crc kubenswrapper[4830]: E0227 17:56:35.766513 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gbcl6" podUID="90e915d6-d74a-4f5b-a8da-8f0f2acdda48" Feb 27 17:56:37 crc kubenswrapper[4830]: E0227 
17:56:37.765595 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus-operator\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/obo-prometheus-rhel9-operator@sha256:e7e5f4c5e8ab0ba298ef0295a7137d438a42eb177d9322212cde6ba8f367912a\\\"\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-x6smj" podUID="fb796dd0-1d3a-4037-a42a-7427293ea799" Feb 27 17:56:46 crc kubenswrapper[4830]: I0227 17:56:46.763127 4830 scope.go:117] "RemoveContainer" containerID="9c7600b1d02b30467a3c9249e6962a5faed8288686d8237218305c7cc4357171" Feb 27 17:56:46 crc kubenswrapper[4830]: E0227 17:56:46.764587 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 17:56:48 crc kubenswrapper[4830]: E0227 17:56:48.765936 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gbcl6" podUID="90e915d6-d74a-4f5b-a8da-8f0f2acdda48" Feb 27 17:56:49 crc kubenswrapper[4830]: E0227 17:56:49.765532 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus-operator\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/obo-prometheus-rhel9-operator@sha256:e7e5f4c5e8ab0ba298ef0295a7137d438a42eb177d9322212cde6ba8f367912a\\\"\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-x6smj" 
podUID="fb796dd0-1d3a-4037-a42a-7427293ea799" Feb 27 17:56:55 crc kubenswrapper[4830]: I0227 17:56:55.874357 4830 scope.go:117] "RemoveContainer" containerID="79563642ace9758e8b592e78f9523911e3bb953444a953e0c63d0f7bbac7d789" Feb 27 17:56:55 crc kubenswrapper[4830]: I0227 17:56:55.951993 4830 scope.go:117] "RemoveContainer" containerID="76dee8c8675076174e182c25237040c99bb6a31a793dd993524be0662806f266" Feb 27 17:56:56 crc kubenswrapper[4830]: I0227 17:56:56.205252 4830 scope.go:117] "RemoveContainer" containerID="462c4f2e241affc624a4dd25875f81ef7688f725cea165d1b60574a237248f1b" Feb 27 17:56:58 crc kubenswrapper[4830]: I0227 17:56:58.764125 4830 scope.go:117] "RemoveContainer" containerID="9c7600b1d02b30467a3c9249e6962a5faed8288686d8237218305c7cc4357171" Feb 27 17:56:58 crc kubenswrapper[4830]: E0227 17:56:58.765449 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 17:57:01 crc kubenswrapper[4830]: E0227 17:57:01.767279 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus-operator\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/obo-prometheus-rhel9-operator@sha256:e7e5f4c5e8ab0ba298ef0295a7137d438a42eb177d9322212cde6ba8f367912a\\\"\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-x6smj" podUID="fb796dd0-1d3a-4037-a42a-7427293ea799" Feb 27 17:57:02 crc kubenswrapper[4830]: E0227 17:57:02.767233 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gbcl6" podUID="90e915d6-d74a-4f5b-a8da-8f0f2acdda48" Feb 27 17:57:11 crc kubenswrapper[4830]: I0227 17:57:11.762649 4830 scope.go:117] "RemoveContainer" containerID="9c7600b1d02b30467a3c9249e6962a5faed8288686d8237218305c7cc4357171" Feb 27 17:57:11 crc kubenswrapper[4830]: E0227 17:57:11.763650 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 17:57:13 crc kubenswrapper[4830]: I0227 17:57:13.066289 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-db-create-z4frp"] Feb 27 17:57:13 crc kubenswrapper[4830]: I0227 17:57:13.076605 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-db-create-z4frp"] Feb 27 17:57:14 crc kubenswrapper[4830]: I0227 17:57:14.091250 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-ebd1-account-create-update-s45vz"] Feb 27 17:57:14 crc kubenswrapper[4830]: I0227 17:57:14.098793 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-ebd1-account-create-update-s45vz"] Feb 27 17:57:14 crc kubenswrapper[4830]: I0227 17:57:14.785938 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53729d31-2de8-4d6f-b2de-7b9eacb758a0" path="/var/lib/kubelet/pods/53729d31-2de8-4d6f-b2de-7b9eacb758a0/volumes" Feb 27 17:57:14 crc kubenswrapper[4830]: I0227 17:57:14.787827 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5de2bd98-8ad8-4952-9956-225bec3013e1" 
path="/var/lib/kubelet/pods/5de2bd98-8ad8-4952-9956-225bec3013e1/volumes" Feb 27 17:57:17 crc kubenswrapper[4830]: E0227 17:57:17.764718 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gbcl6" podUID="90e915d6-d74a-4f5b-a8da-8f0f2acdda48" Feb 27 17:57:20 crc kubenswrapper[4830]: I0227 17:57:20.061507 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-persistence-db-create-gtbf9"] Feb 27 17:57:20 crc kubenswrapper[4830]: I0227 17:57:20.072874 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-persistence-db-create-gtbf9"] Feb 27 17:57:20 crc kubenswrapper[4830]: I0227 17:57:20.780800 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c4f7b16-9303-44d5-a45b-a9365add4438" path="/var/lib/kubelet/pods/0c4f7b16-9303-44d5-a45b-a9365add4438/volumes" Feb 27 17:57:21 crc kubenswrapper[4830]: I0227 17:57:21.059520 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-69a2-account-create-update-r8l2r"] Feb 27 17:57:21 crc kubenswrapper[4830]: I0227 17:57:21.081912 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-69a2-account-create-update-r8l2r"] Feb 27 17:57:22 crc kubenswrapper[4830]: I0227 17:57:22.781545 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bae2b61b-3081-415b-a231-4994052a20c4" path="/var/lib/kubelet/pods/bae2b61b-3081-415b-a231-4994052a20c4/volumes" Feb 27 17:57:25 crc kubenswrapper[4830]: I0227 17:57:25.762363 4830 scope.go:117] "RemoveContainer" containerID="9c7600b1d02b30467a3c9249e6962a5faed8288686d8237218305c7cc4357171" Feb 27 17:57:25 crc kubenswrapper[4830]: E0227 17:57:25.763492 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 17:57:31 crc kubenswrapper[4830]: E0227 17:57:31.767231 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gbcl6" podUID="90e915d6-d74a-4f5b-a8da-8f0f2acdda48" Feb 27 17:57:36 crc kubenswrapper[4830]: I0227 17:57:36.763180 4830 scope.go:117] "RemoveContainer" containerID="9c7600b1d02b30467a3c9249e6962a5faed8288686d8237218305c7cc4357171" Feb 27 17:57:36 crc kubenswrapper[4830]: E0227 17:57:36.764552 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 17:57:45 crc kubenswrapper[4830]: I0227 17:57:45.106199 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gbcl6" event={"ID":"90e915d6-d74a-4f5b-a8da-8f0f2acdda48","Type":"ContainerStarted","Data":"637cf9d9fc02f7eb660a27b4b49eb9cf5cb5c20e08db8a505bbd10714ab68030"} Feb 27 17:57:50 crc kubenswrapper[4830]: I0227 17:57:50.762609 4830 scope.go:117] "RemoveContainer" containerID="9c7600b1d02b30467a3c9249e6962a5faed8288686d8237218305c7cc4357171" Feb 27 17:57:50 crc kubenswrapper[4830]: E0227 17:57:50.763538 4830 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 17:57:51 crc kubenswrapper[4830]: I0227 17:57:51.177969 4830 generic.go:334] "Generic (PLEG): container finished" podID="90e915d6-d74a-4f5b-a8da-8f0f2acdda48" containerID="637cf9d9fc02f7eb660a27b4b49eb9cf5cb5c20e08db8a505bbd10714ab68030" exitCode=0 Feb 27 17:57:51 crc kubenswrapper[4830]: I0227 17:57:51.178020 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gbcl6" event={"ID":"90e915d6-d74a-4f5b-a8da-8f0f2acdda48","Type":"ContainerDied","Data":"637cf9d9fc02f7eb660a27b4b49eb9cf5cb5c20e08db8a505bbd10714ab68030"} Feb 27 17:57:52 crc kubenswrapper[4830]: I0227 17:57:52.209187 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gbcl6" event={"ID":"90e915d6-d74a-4f5b-a8da-8f0f2acdda48","Type":"ContainerStarted","Data":"070b0124a57f47135aa482ed3c1880a6094f842a97cec17dc811df54a28f55d4"} Feb 27 17:57:52 crc kubenswrapper[4830]: I0227 17:57:52.245326 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-gbcl6" podStartSLOduration=2.623749754 podStartE2EDuration="12m9.245288488s" podCreationTimestamp="2026-02-27 17:45:43 +0000 UTC" firstStartedPulling="2026-02-27 17:45:45.020609035 +0000 UTC m=+5941.109881498" lastFinishedPulling="2026-02-27 17:57:51.642147759 +0000 UTC m=+6667.731420232" observedRunningTime="2026-02-27 17:57:52.237286176 +0000 UTC m=+6668.326558669" watchObservedRunningTime="2026-02-27 17:57:52.245288488 +0000 UTC m=+6668.334560991" Feb 27 17:57:53 crc kubenswrapper[4830]: I0227 17:57:53.644287 4830 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-gbcl6" Feb 27 17:57:53 crc kubenswrapper[4830]: I0227 17:57:53.646614 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-gbcl6" Feb 27 17:57:54 crc kubenswrapper[4830]: I0227 17:57:54.729764 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-gbcl6" podUID="90e915d6-d74a-4f5b-a8da-8f0f2acdda48" containerName="registry-server" probeResult="failure" output=< Feb 27 17:57:54 crc kubenswrapper[4830]: timeout: failed to connect service ":50051" within 1s Feb 27 17:57:54 crc kubenswrapper[4830]: > Feb 27 17:57:56 crc kubenswrapper[4830]: I0227 17:57:56.320229 4830 scope.go:117] "RemoveContainer" containerID="5c818927d08a5f9938aacf14fdb10ff10c857b47865ab63e07d4f19933ea6710" Feb 27 17:57:56 crc kubenswrapper[4830]: I0227 17:57:56.347235 4830 scope.go:117] "RemoveContainer" containerID="9fd246f254a91e8fb8ab65f60c00c04f30548b8e4e10017fa997454e6c1dfe57" Feb 27 17:57:56 crc kubenswrapper[4830]: I0227 17:57:56.411264 4830 scope.go:117] "RemoveContainer" containerID="518ad1501f616efbe6ad57be8f6606539be903e87d0cdca192e8588c7fa593e1" Feb 27 17:57:56 crc kubenswrapper[4830]: I0227 17:57:56.442959 4830 scope.go:117] "RemoveContainer" containerID="89f1f067e27d0ff1f16fc5c3814328a79897ecb9eb81ad881ffa0e0536577f9e" Feb 27 17:57:58 crc kubenswrapper[4830]: I0227 17:57:58.092219 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-db-sync-zkk47"] Feb 27 17:57:58 crc kubenswrapper[4830]: I0227 17:57:58.110577 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-db-sync-zkk47"] Feb 27 17:57:58 crc kubenswrapper[4830]: I0227 17:57:58.807099 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0aaaab7f-77d8-4a19-acef-c47cb951f5b0" 
path="/var/lib/kubelet/pods/0aaaab7f-77d8-4a19-acef-c47cb951f5b0/volumes" Feb 27 17:57:59 crc kubenswrapper[4830]: I0227 17:57:59.802781 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-m7xk8"] Feb 27 17:57:59 crc kubenswrapper[4830]: E0227 17:57:59.804171 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9260de39-76c8-432d-9455-4e787911d8c7" containerName="oc" Feb 27 17:57:59 crc kubenswrapper[4830]: I0227 17:57:59.804219 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="9260de39-76c8-432d-9455-4e787911d8c7" containerName="oc" Feb 27 17:57:59 crc kubenswrapper[4830]: I0227 17:57:59.804488 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="9260de39-76c8-432d-9455-4e787911d8c7" containerName="oc" Feb 27 17:57:59 crc kubenswrapper[4830]: I0227 17:57:59.806379 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-m7xk8" Feb 27 17:57:59 crc kubenswrapper[4830]: I0227 17:57:59.815312 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-m7xk8"] Feb 27 17:57:59 crc kubenswrapper[4830]: I0227 17:57:59.965029 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8393f040-6d7a-48e5-be41-891334614f73-utilities\") pod \"community-operators-m7xk8\" (UID: \"8393f040-6d7a-48e5-be41-891334614f73\") " pod="openshift-marketplace/community-operators-m7xk8" Feb 27 17:57:59 crc kubenswrapper[4830]: I0227 17:57:59.965102 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8393f040-6d7a-48e5-be41-891334614f73-catalog-content\") pod \"community-operators-m7xk8\" (UID: \"8393f040-6d7a-48e5-be41-891334614f73\") " pod="openshift-marketplace/community-operators-m7xk8" Feb 27 
17:57:59 crc kubenswrapper[4830]: I0227 17:57:59.965414 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75vxf\" (UniqueName: \"kubernetes.io/projected/8393f040-6d7a-48e5-be41-891334614f73-kube-api-access-75vxf\") pod \"community-operators-m7xk8\" (UID: \"8393f040-6d7a-48e5-be41-891334614f73\") " pod="openshift-marketplace/community-operators-m7xk8" Feb 27 17:58:00 crc kubenswrapper[4830]: I0227 17:58:00.067777 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8393f040-6d7a-48e5-be41-891334614f73-utilities\") pod \"community-operators-m7xk8\" (UID: \"8393f040-6d7a-48e5-be41-891334614f73\") " pod="openshift-marketplace/community-operators-m7xk8" Feb 27 17:58:00 crc kubenswrapper[4830]: I0227 17:58:00.067837 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8393f040-6d7a-48e5-be41-891334614f73-catalog-content\") pod \"community-operators-m7xk8\" (UID: \"8393f040-6d7a-48e5-be41-891334614f73\") " pod="openshift-marketplace/community-operators-m7xk8" Feb 27 17:58:00 crc kubenswrapper[4830]: I0227 17:58:00.067913 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-75vxf\" (UniqueName: \"kubernetes.io/projected/8393f040-6d7a-48e5-be41-891334614f73-kube-api-access-75vxf\") pod \"community-operators-m7xk8\" (UID: \"8393f040-6d7a-48e5-be41-891334614f73\") " pod="openshift-marketplace/community-operators-m7xk8" Feb 27 17:58:00 crc kubenswrapper[4830]: I0227 17:58:00.068362 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8393f040-6d7a-48e5-be41-891334614f73-utilities\") pod \"community-operators-m7xk8\" (UID: \"8393f040-6d7a-48e5-be41-891334614f73\") " pod="openshift-marketplace/community-operators-m7xk8" Feb 27 
17:58:00 crc kubenswrapper[4830]: I0227 17:58:00.068507 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8393f040-6d7a-48e5-be41-891334614f73-catalog-content\") pod \"community-operators-m7xk8\" (UID: \"8393f040-6d7a-48e5-be41-891334614f73\") " pod="openshift-marketplace/community-operators-m7xk8" Feb 27 17:58:00 crc kubenswrapper[4830]: I0227 17:58:00.089180 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-75vxf\" (UniqueName: \"kubernetes.io/projected/8393f040-6d7a-48e5-be41-891334614f73-kube-api-access-75vxf\") pod \"community-operators-m7xk8\" (UID: \"8393f040-6d7a-48e5-be41-891334614f73\") " pod="openshift-marketplace/community-operators-m7xk8" Feb 27 17:58:00 crc kubenswrapper[4830]: I0227 17:58:00.150669 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536918-j6j9d"] Feb 27 17:58:00 crc kubenswrapper[4830]: I0227 17:58:00.151888 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536918-j6j9d" Feb 27 17:58:00 crc kubenswrapper[4830]: I0227 17:58:00.154458 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 17:58:00 crc kubenswrapper[4830]: I0227 17:58:00.154656 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 17:58:00 crc kubenswrapper[4830]: I0227 17:58:00.159934 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lmbtm" Feb 27 17:58:00 crc kubenswrapper[4830]: I0227 17:58:00.162479 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536918-j6j9d"] Feb 27 17:58:00 crc kubenswrapper[4830]: I0227 17:58:00.169079 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-m7xk8" Feb 27 17:58:00 crc kubenswrapper[4830]: I0227 17:58:00.270350 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdf9q\" (UniqueName: \"kubernetes.io/projected/8018a2b4-d99d-40c0-bd20-b38c65447309-kube-api-access-wdf9q\") pod \"auto-csr-approver-29536918-j6j9d\" (UID: \"8018a2b4-d99d-40c0-bd20-b38c65447309\") " pod="openshift-infra/auto-csr-approver-29536918-j6j9d" Feb 27 17:58:00 crc kubenswrapper[4830]: I0227 17:58:00.375007 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wdf9q\" (UniqueName: \"kubernetes.io/projected/8018a2b4-d99d-40c0-bd20-b38c65447309-kube-api-access-wdf9q\") pod \"auto-csr-approver-29536918-j6j9d\" (UID: \"8018a2b4-d99d-40c0-bd20-b38c65447309\") " pod="openshift-infra/auto-csr-approver-29536918-j6j9d" Feb 27 17:58:00 crc kubenswrapper[4830]: I0227 17:58:00.412239 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wdf9q\" (UniqueName: \"kubernetes.io/projected/8018a2b4-d99d-40c0-bd20-b38c65447309-kube-api-access-wdf9q\") pod \"auto-csr-approver-29536918-j6j9d\" (UID: \"8018a2b4-d99d-40c0-bd20-b38c65447309\") " pod="openshift-infra/auto-csr-approver-29536918-j6j9d" Feb 27 17:58:00 crc kubenswrapper[4830]: I0227 17:58:00.468878 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536918-j6j9d" Feb 27 17:58:00 crc kubenswrapper[4830]: I0227 17:58:00.756908 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-m7xk8"] Feb 27 17:58:00 crc kubenswrapper[4830]: I0227 17:58:00.977658 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536918-j6j9d"] Feb 27 17:58:00 crc kubenswrapper[4830]: W0227 17:58:00.981483 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8018a2b4_d99d_40c0_bd20_b38c65447309.slice/crio-db984ed54cac80e5a104effbb42e1ba5c79aeed7671a8aa197c2a3a535e713c9 WatchSource:0}: Error finding container db984ed54cac80e5a104effbb42e1ba5c79aeed7671a8aa197c2a3a535e713c9: Status 404 returned error can't find the container with id db984ed54cac80e5a104effbb42e1ba5c79aeed7671a8aa197c2a3a535e713c9 Feb 27 17:58:01 crc kubenswrapper[4830]: I0227 17:58:01.310926 4830 generic.go:334] "Generic (PLEG): container finished" podID="8393f040-6d7a-48e5-be41-891334614f73" containerID="c7dcffc4315759e0b499e339e767513fa8d3f45f258dc47b869d67ce9da14cab" exitCode=0 Feb 27 17:58:01 crc kubenswrapper[4830]: I0227 17:58:01.311035 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m7xk8" event={"ID":"8393f040-6d7a-48e5-be41-891334614f73","Type":"ContainerDied","Data":"c7dcffc4315759e0b499e339e767513fa8d3f45f258dc47b869d67ce9da14cab"} Feb 27 17:58:01 crc kubenswrapper[4830]: I0227 17:58:01.311071 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m7xk8" event={"ID":"8393f040-6d7a-48e5-be41-891334614f73","Type":"ContainerStarted","Data":"32b1248274d13778ceb498ffa2f019f4977cc2212905232ab598caca94f308d2"} Feb 27 17:58:01 crc kubenswrapper[4830]: I0227 17:58:01.313033 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-infra/auto-csr-approver-29536918-j6j9d" event={"ID":"8018a2b4-d99d-40c0-bd20-b38c65447309","Type":"ContainerStarted","Data":"db984ed54cac80e5a104effbb42e1ba5c79aeed7671a8aa197c2a3a535e713c9"} Feb 27 17:58:03 crc kubenswrapper[4830]: I0227 17:58:03.348629 4830 generic.go:334] "Generic (PLEG): container finished" podID="8018a2b4-d99d-40c0-bd20-b38c65447309" containerID="b48fb3abfdc43fbf3a7970fd90270b2081901067001f39b5d405b653414eb321" exitCode=0 Feb 27 17:58:03 crc kubenswrapper[4830]: I0227 17:58:03.348877 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536918-j6j9d" event={"ID":"8018a2b4-d99d-40c0-bd20-b38c65447309","Type":"ContainerDied","Data":"b48fb3abfdc43fbf3a7970fd90270b2081901067001f39b5d405b653414eb321"} Feb 27 17:58:03 crc kubenswrapper[4830]: I0227 17:58:03.355283 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m7xk8" event={"ID":"8393f040-6d7a-48e5-be41-891334614f73","Type":"ContainerStarted","Data":"6e457ac51b69fa92a7450e4347f20d47202ad11593584dc2a100db3918cd2b66"} Feb 27 17:58:03 crc kubenswrapper[4830]: I0227 17:58:03.721535 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-gbcl6" Feb 27 17:58:03 crc kubenswrapper[4830]: I0227 17:58:03.817239 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-gbcl6" Feb 27 17:58:04 crc kubenswrapper[4830]: I0227 17:58:04.371695 4830 generic.go:334] "Generic (PLEG): container finished" podID="8393f040-6d7a-48e5-be41-891334614f73" containerID="6e457ac51b69fa92a7450e4347f20d47202ad11593584dc2a100db3918cd2b66" exitCode=0 Feb 27 17:58:04 crc kubenswrapper[4830]: I0227 17:58:04.371785 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m7xk8" 
event={"ID":"8393f040-6d7a-48e5-be41-891334614f73","Type":"ContainerDied","Data":"6e457ac51b69fa92a7450e4347f20d47202ad11593584dc2a100db3918cd2b66"} Feb 27 17:58:04 crc kubenswrapper[4830]: I0227 17:58:04.777723 4830 scope.go:117] "RemoveContainer" containerID="9c7600b1d02b30467a3c9249e6962a5faed8288686d8237218305c7cc4357171" Feb 27 17:58:04 crc kubenswrapper[4830]: E0227 17:58:04.779684 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 17:58:04 crc kubenswrapper[4830]: I0227 17:58:04.908101 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536918-j6j9d" Feb 27 17:58:05 crc kubenswrapper[4830]: I0227 17:58:05.082395 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wdf9q\" (UniqueName: \"kubernetes.io/projected/8018a2b4-d99d-40c0-bd20-b38c65447309-kube-api-access-wdf9q\") pod \"8018a2b4-d99d-40c0-bd20-b38c65447309\" (UID: \"8018a2b4-d99d-40c0-bd20-b38c65447309\") " Feb 27 17:58:05 crc kubenswrapper[4830]: I0227 17:58:05.098785 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8018a2b4-d99d-40c0-bd20-b38c65447309-kube-api-access-wdf9q" (OuterVolumeSpecName: "kube-api-access-wdf9q") pod "8018a2b4-d99d-40c0-bd20-b38c65447309" (UID: "8018a2b4-d99d-40c0-bd20-b38c65447309"). InnerVolumeSpecName "kube-api-access-wdf9q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:58:05 crc kubenswrapper[4830]: I0227 17:58:05.185187 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wdf9q\" (UniqueName: \"kubernetes.io/projected/8018a2b4-d99d-40c0-bd20-b38c65447309-kube-api-access-wdf9q\") on node \"crc\" DevicePath \"\"" Feb 27 17:58:05 crc kubenswrapper[4830]: I0227 17:58:05.390220 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m7xk8" event={"ID":"8393f040-6d7a-48e5-be41-891334614f73","Type":"ContainerStarted","Data":"a7df8b94c77932c3be07eef4d3d07abe27eb9ea900be660347b186cafa00f6e1"} Feb 27 17:58:05 crc kubenswrapper[4830]: I0227 17:58:05.398802 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536918-j6j9d" event={"ID":"8018a2b4-d99d-40c0-bd20-b38c65447309","Type":"ContainerDied","Data":"db984ed54cac80e5a104effbb42e1ba5c79aeed7671a8aa197c2a3a535e713c9"} Feb 27 17:58:05 crc kubenswrapper[4830]: I0227 17:58:05.399323 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="db984ed54cac80e5a104effbb42e1ba5c79aeed7671a8aa197c2a3a535e713c9" Feb 27 17:58:05 crc kubenswrapper[4830]: I0227 17:58:05.398969 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536918-j6j9d" Feb 27 17:58:05 crc kubenswrapper[4830]: I0227 17:58:05.420931 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-m7xk8" podStartSLOduration=2.848821434 podStartE2EDuration="6.4209138s" podCreationTimestamp="2026-02-27 17:57:59 +0000 UTC" firstStartedPulling="2026-02-27 17:58:01.313687991 +0000 UTC m=+6677.402960464" lastFinishedPulling="2026-02-27 17:58:04.885780367 +0000 UTC m=+6680.975052830" observedRunningTime="2026-02-27 17:58:05.418445001 +0000 UTC m=+6681.507717474" watchObservedRunningTime="2026-02-27 17:58:05.4209138 +0000 UTC m=+6681.510186263" Feb 27 17:58:05 crc kubenswrapper[4830]: I0227 17:58:05.787179 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gbcl6"] Feb 27 17:58:05 crc kubenswrapper[4830]: I0227 17:58:05.788009 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-gbcl6" podUID="90e915d6-d74a-4f5b-a8da-8f0f2acdda48" containerName="registry-server" containerID="cri-o://070b0124a57f47135aa482ed3c1880a6094f842a97cec17dc811df54a28f55d4" gracePeriod=2 Feb 27 17:58:06 crc kubenswrapper[4830]: I0227 17:58:06.007068 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536912-2rxlg"] Feb 27 17:58:06 crc kubenswrapper[4830]: I0227 17:58:06.017824 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536912-2rxlg"] Feb 27 17:58:06 crc kubenswrapper[4830]: I0227 17:58:06.416328 4830 generic.go:334] "Generic (PLEG): container finished" podID="90e915d6-d74a-4f5b-a8da-8f0f2acdda48" containerID="070b0124a57f47135aa482ed3c1880a6094f842a97cec17dc811df54a28f55d4" exitCode=0 Feb 27 17:58:06 crc kubenswrapper[4830]: I0227 17:58:06.416462 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-gbcl6" event={"ID":"90e915d6-d74a-4f5b-a8da-8f0f2acdda48","Type":"ContainerDied","Data":"070b0124a57f47135aa482ed3c1880a6094f842a97cec17dc811df54a28f55d4"} Feb 27 17:58:06 crc kubenswrapper[4830]: I0227 17:58:06.783881 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a2476383-c615-49e7-b34c-e824adab8603" path="/var/lib/kubelet/pods/a2476383-c615-49e7-b34c-e824adab8603/volumes" Feb 27 17:58:06 crc kubenswrapper[4830]: I0227 17:58:06.878516 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gbcl6" Feb 27 17:58:07 crc kubenswrapper[4830]: I0227 17:58:07.041276 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t9tjs\" (UniqueName: \"kubernetes.io/projected/90e915d6-d74a-4f5b-a8da-8f0f2acdda48-kube-api-access-t9tjs\") pod \"90e915d6-d74a-4f5b-a8da-8f0f2acdda48\" (UID: \"90e915d6-d74a-4f5b-a8da-8f0f2acdda48\") " Feb 27 17:58:07 crc kubenswrapper[4830]: I0227 17:58:07.041397 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90e915d6-d74a-4f5b-a8da-8f0f2acdda48-catalog-content\") pod \"90e915d6-d74a-4f5b-a8da-8f0f2acdda48\" (UID: \"90e915d6-d74a-4f5b-a8da-8f0f2acdda48\") " Feb 27 17:58:07 crc kubenswrapper[4830]: I0227 17:58:07.041585 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90e915d6-d74a-4f5b-a8da-8f0f2acdda48-utilities\") pod \"90e915d6-d74a-4f5b-a8da-8f0f2acdda48\" (UID: \"90e915d6-d74a-4f5b-a8da-8f0f2acdda48\") " Feb 27 17:58:07 crc kubenswrapper[4830]: I0227 17:58:07.042356 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/90e915d6-d74a-4f5b-a8da-8f0f2acdda48-utilities" (OuterVolumeSpecName: "utilities") pod 
"90e915d6-d74a-4f5b-a8da-8f0f2acdda48" (UID: "90e915d6-d74a-4f5b-a8da-8f0f2acdda48"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 17:58:07 crc kubenswrapper[4830]: I0227 17:58:07.051661 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90e915d6-d74a-4f5b-a8da-8f0f2acdda48-kube-api-access-t9tjs" (OuterVolumeSpecName: "kube-api-access-t9tjs") pod "90e915d6-d74a-4f5b-a8da-8f0f2acdda48" (UID: "90e915d6-d74a-4f5b-a8da-8f0f2acdda48"). InnerVolumeSpecName "kube-api-access-t9tjs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:58:07 crc kubenswrapper[4830]: I0227 17:58:07.144116 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t9tjs\" (UniqueName: \"kubernetes.io/projected/90e915d6-d74a-4f5b-a8da-8f0f2acdda48-kube-api-access-t9tjs\") on node \"crc\" DevicePath \"\"" Feb 27 17:58:07 crc kubenswrapper[4830]: I0227 17:58:07.144437 4830 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90e915d6-d74a-4f5b-a8da-8f0f2acdda48-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 17:58:07 crc kubenswrapper[4830]: I0227 17:58:07.159369 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/90e915d6-d74a-4f5b-a8da-8f0f2acdda48-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "90e915d6-d74a-4f5b-a8da-8f0f2acdda48" (UID: "90e915d6-d74a-4f5b-a8da-8f0f2acdda48"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 17:58:07 crc kubenswrapper[4830]: I0227 17:58:07.247011 4830 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90e915d6-d74a-4f5b-a8da-8f0f2acdda48-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 17:58:07 crc kubenswrapper[4830]: I0227 17:58:07.428166 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gbcl6" event={"ID":"90e915d6-d74a-4f5b-a8da-8f0f2acdda48","Type":"ContainerDied","Data":"de817b138505468257c54fddd61a56fd9130b77ac87aec4c8bad76dfad4482c6"} Feb 27 17:58:07 crc kubenswrapper[4830]: I0227 17:58:07.428215 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gbcl6" Feb 27 17:58:07 crc kubenswrapper[4830]: I0227 17:58:07.428234 4830 scope.go:117] "RemoveContainer" containerID="070b0124a57f47135aa482ed3c1880a6094f842a97cec17dc811df54a28f55d4" Feb 27 17:58:07 crc kubenswrapper[4830]: I0227 17:58:07.464726 4830 scope.go:117] "RemoveContainer" containerID="637cf9d9fc02f7eb660a27b4b49eb9cf5cb5c20e08db8a505bbd10714ab68030" Feb 27 17:58:07 crc kubenswrapper[4830]: I0227 17:58:07.474197 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gbcl6"] Feb 27 17:58:07 crc kubenswrapper[4830]: I0227 17:58:07.484549 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-gbcl6"] Feb 27 17:58:07 crc kubenswrapper[4830]: I0227 17:58:07.500443 4830 scope.go:117] "RemoveContainer" containerID="a899de0dbf17614a003a78f6f79914ce7785cca43694b713e7ea7e744695e6d0" Feb 27 17:58:08 crc kubenswrapper[4830]: I0227 17:58:08.195184 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-gdh6f"] Feb 27 17:58:08 crc kubenswrapper[4830]: E0227 17:58:08.195627 4830 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="90e915d6-d74a-4f5b-a8da-8f0f2acdda48" containerName="extract-utilities" Feb 27 17:58:08 crc kubenswrapper[4830]: I0227 17:58:08.195644 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="90e915d6-d74a-4f5b-a8da-8f0f2acdda48" containerName="extract-utilities" Feb 27 17:58:08 crc kubenswrapper[4830]: E0227 17:58:08.195676 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90e915d6-d74a-4f5b-a8da-8f0f2acdda48" containerName="registry-server" Feb 27 17:58:08 crc kubenswrapper[4830]: I0227 17:58:08.195683 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="90e915d6-d74a-4f5b-a8da-8f0f2acdda48" containerName="registry-server" Feb 27 17:58:08 crc kubenswrapper[4830]: E0227 17:58:08.195695 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8018a2b4-d99d-40c0-bd20-b38c65447309" containerName="oc" Feb 27 17:58:08 crc kubenswrapper[4830]: I0227 17:58:08.195701 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="8018a2b4-d99d-40c0-bd20-b38c65447309" containerName="oc" Feb 27 17:58:08 crc kubenswrapper[4830]: E0227 17:58:08.195718 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90e915d6-d74a-4f5b-a8da-8f0f2acdda48" containerName="extract-content" Feb 27 17:58:08 crc kubenswrapper[4830]: I0227 17:58:08.195724 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="90e915d6-d74a-4f5b-a8da-8f0f2acdda48" containerName="extract-content" Feb 27 17:58:08 crc kubenswrapper[4830]: I0227 17:58:08.195907 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="90e915d6-d74a-4f5b-a8da-8f0f2acdda48" containerName="registry-server" Feb 27 17:58:08 crc kubenswrapper[4830]: I0227 17:58:08.195918 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="8018a2b4-d99d-40c0-bd20-b38c65447309" containerName="oc" Feb 27 17:58:08 crc kubenswrapper[4830]: I0227 17:58:08.197361 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gdh6f" Feb 27 17:58:08 crc kubenswrapper[4830]: I0227 17:58:08.214181 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gdh6f"] Feb 27 17:58:08 crc kubenswrapper[4830]: I0227 17:58:08.282674 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9c2a5cd-996c-4354-af4a-dc030af8ab5e-utilities\") pod \"redhat-operators-gdh6f\" (UID: \"f9c2a5cd-996c-4354-af4a-dc030af8ab5e\") " pod="openshift-marketplace/redhat-operators-gdh6f" Feb 27 17:58:08 crc kubenswrapper[4830]: I0227 17:58:08.283259 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9mk9\" (UniqueName: \"kubernetes.io/projected/f9c2a5cd-996c-4354-af4a-dc030af8ab5e-kube-api-access-x9mk9\") pod \"redhat-operators-gdh6f\" (UID: \"f9c2a5cd-996c-4354-af4a-dc030af8ab5e\") " pod="openshift-marketplace/redhat-operators-gdh6f" Feb 27 17:58:08 crc kubenswrapper[4830]: I0227 17:58:08.283475 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9c2a5cd-996c-4354-af4a-dc030af8ab5e-catalog-content\") pod \"redhat-operators-gdh6f\" (UID: \"f9c2a5cd-996c-4354-af4a-dc030af8ab5e\") " pod="openshift-marketplace/redhat-operators-gdh6f" Feb 27 17:58:08 crc kubenswrapper[4830]: I0227 17:58:08.385848 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9c2a5cd-996c-4354-af4a-dc030af8ab5e-utilities\") pod \"redhat-operators-gdh6f\" (UID: \"f9c2a5cd-996c-4354-af4a-dc030af8ab5e\") " pod="openshift-marketplace/redhat-operators-gdh6f" Feb 27 17:58:08 crc kubenswrapper[4830]: I0227 17:58:08.385926 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-x9mk9\" (UniqueName: \"kubernetes.io/projected/f9c2a5cd-996c-4354-af4a-dc030af8ab5e-kube-api-access-x9mk9\") pod \"redhat-operators-gdh6f\" (UID: \"f9c2a5cd-996c-4354-af4a-dc030af8ab5e\") " pod="openshift-marketplace/redhat-operators-gdh6f" Feb 27 17:58:08 crc kubenswrapper[4830]: I0227 17:58:08.386060 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9c2a5cd-996c-4354-af4a-dc030af8ab5e-catalog-content\") pod \"redhat-operators-gdh6f\" (UID: \"f9c2a5cd-996c-4354-af4a-dc030af8ab5e\") " pod="openshift-marketplace/redhat-operators-gdh6f" Feb 27 17:58:08 crc kubenswrapper[4830]: I0227 17:58:08.386422 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9c2a5cd-996c-4354-af4a-dc030af8ab5e-utilities\") pod \"redhat-operators-gdh6f\" (UID: \"f9c2a5cd-996c-4354-af4a-dc030af8ab5e\") " pod="openshift-marketplace/redhat-operators-gdh6f" Feb 27 17:58:08 crc kubenswrapper[4830]: I0227 17:58:08.386847 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9c2a5cd-996c-4354-af4a-dc030af8ab5e-catalog-content\") pod \"redhat-operators-gdh6f\" (UID: \"f9c2a5cd-996c-4354-af4a-dc030af8ab5e\") " pod="openshift-marketplace/redhat-operators-gdh6f" Feb 27 17:58:08 crc kubenswrapper[4830]: I0227 17:58:08.418930 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x9mk9\" (UniqueName: \"kubernetes.io/projected/f9c2a5cd-996c-4354-af4a-dc030af8ab5e-kube-api-access-x9mk9\") pod \"redhat-operators-gdh6f\" (UID: \"f9c2a5cd-996c-4354-af4a-dc030af8ab5e\") " pod="openshift-marketplace/redhat-operators-gdh6f" Feb 27 17:58:08 crc kubenswrapper[4830]: I0227 17:58:08.567705 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gdh6f" Feb 27 17:58:08 crc kubenswrapper[4830]: I0227 17:58:08.781442 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90e915d6-d74a-4f5b-a8da-8f0f2acdda48" path="/var/lib/kubelet/pods/90e915d6-d74a-4f5b-a8da-8f0f2acdda48/volumes" Feb 27 17:58:09 crc kubenswrapper[4830]: I0227 17:58:09.058513 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gdh6f"] Feb 27 17:58:09 crc kubenswrapper[4830]: I0227 17:58:09.454787 4830 generic.go:334] "Generic (PLEG): container finished" podID="f9c2a5cd-996c-4354-af4a-dc030af8ab5e" containerID="0215ba30b495645b95c8a5e0606942a82650cd5da27ec11b8133ab71a974c700" exitCode=0 Feb 27 17:58:09 crc kubenswrapper[4830]: I0227 17:58:09.455163 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gdh6f" event={"ID":"f9c2a5cd-996c-4354-af4a-dc030af8ab5e","Type":"ContainerDied","Data":"0215ba30b495645b95c8a5e0606942a82650cd5da27ec11b8133ab71a974c700"} Feb 27 17:58:09 crc kubenswrapper[4830]: I0227 17:58:09.455190 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gdh6f" event={"ID":"f9c2a5cd-996c-4354-af4a-dc030af8ab5e","Type":"ContainerStarted","Data":"d44c3ba0bcc8cf1c17e8683cd2b2fa79c8a1e16be8dc22eb1348a0146494bdc6"} Feb 27 17:58:10 crc kubenswrapper[4830]: I0227 17:58:10.171861 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-m7xk8" Feb 27 17:58:10 crc kubenswrapper[4830]: I0227 17:58:10.171920 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-m7xk8" Feb 27 17:58:10 crc kubenswrapper[4830]: I0227 17:58:10.232367 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-m7xk8" Feb 27 17:58:10 crc 
kubenswrapper[4830]: I0227 17:58:10.479851 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gdh6f" event={"ID":"f9c2a5cd-996c-4354-af4a-dc030af8ab5e","Type":"ContainerStarted","Data":"aea2c6cd8e3bc7d221fb2ff2981b5209a60558fe322058339f4800b4a3e8eee7"} Feb 27 17:58:10 crc kubenswrapper[4830]: I0227 17:58:10.531743 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-m7xk8" Feb 27 17:58:12 crc kubenswrapper[4830]: I0227 17:58:12.588415 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-m7xk8"] Feb 27 17:58:12 crc kubenswrapper[4830]: I0227 17:58:12.589573 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-m7xk8" podUID="8393f040-6d7a-48e5-be41-891334614f73" containerName="registry-server" containerID="cri-o://a7df8b94c77932c3be07eef4d3d07abe27eb9ea900be660347b186cafa00f6e1" gracePeriod=2 Feb 27 17:58:13 crc kubenswrapper[4830]: I0227 17:58:13.260816 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-m7xk8" Feb 27 17:58:13 crc kubenswrapper[4830]: I0227 17:58:13.401982 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8393f040-6d7a-48e5-be41-891334614f73-catalog-content\") pod \"8393f040-6d7a-48e5-be41-891334614f73\" (UID: \"8393f040-6d7a-48e5-be41-891334614f73\") " Feb 27 17:58:13 crc kubenswrapper[4830]: I0227 17:58:13.402136 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8393f040-6d7a-48e5-be41-891334614f73-utilities\") pod \"8393f040-6d7a-48e5-be41-891334614f73\" (UID: \"8393f040-6d7a-48e5-be41-891334614f73\") " Feb 27 17:58:13 crc kubenswrapper[4830]: I0227 17:58:13.402292 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-75vxf\" (UniqueName: \"kubernetes.io/projected/8393f040-6d7a-48e5-be41-891334614f73-kube-api-access-75vxf\") pod \"8393f040-6d7a-48e5-be41-891334614f73\" (UID: \"8393f040-6d7a-48e5-be41-891334614f73\") " Feb 27 17:58:13 crc kubenswrapper[4830]: I0227 17:58:13.404859 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8393f040-6d7a-48e5-be41-891334614f73-utilities" (OuterVolumeSpecName: "utilities") pod "8393f040-6d7a-48e5-be41-891334614f73" (UID: "8393f040-6d7a-48e5-be41-891334614f73"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 17:58:13 crc kubenswrapper[4830]: I0227 17:58:13.414568 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8393f040-6d7a-48e5-be41-891334614f73-kube-api-access-75vxf" (OuterVolumeSpecName: "kube-api-access-75vxf") pod "8393f040-6d7a-48e5-be41-891334614f73" (UID: "8393f040-6d7a-48e5-be41-891334614f73"). InnerVolumeSpecName "kube-api-access-75vxf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:58:13 crc kubenswrapper[4830]: I0227 17:58:13.470803 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8393f040-6d7a-48e5-be41-891334614f73-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8393f040-6d7a-48e5-be41-891334614f73" (UID: "8393f040-6d7a-48e5-be41-891334614f73"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 17:58:13 crc kubenswrapper[4830]: I0227 17:58:13.505614 4830 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8393f040-6d7a-48e5-be41-891334614f73-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 17:58:13 crc kubenswrapper[4830]: I0227 17:58:13.505646 4830 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8393f040-6d7a-48e5-be41-891334614f73-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 17:58:13 crc kubenswrapper[4830]: I0227 17:58:13.505661 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-75vxf\" (UniqueName: \"kubernetes.io/projected/8393f040-6d7a-48e5-be41-891334614f73-kube-api-access-75vxf\") on node \"crc\" DevicePath \"\"" Feb 27 17:58:13 crc kubenswrapper[4830]: I0227 17:58:13.523781 4830 generic.go:334] "Generic (PLEG): container finished" podID="8393f040-6d7a-48e5-be41-891334614f73" containerID="a7df8b94c77932c3be07eef4d3d07abe27eb9ea900be660347b186cafa00f6e1" exitCode=0 Feb 27 17:58:13 crc kubenswrapper[4830]: I0227 17:58:13.523828 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m7xk8" event={"ID":"8393f040-6d7a-48e5-be41-891334614f73","Type":"ContainerDied","Data":"a7df8b94c77932c3be07eef4d3d07abe27eb9ea900be660347b186cafa00f6e1"} Feb 27 17:58:13 crc kubenswrapper[4830]: I0227 17:58:13.523860 4830 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-m7xk8" event={"ID":"8393f040-6d7a-48e5-be41-891334614f73","Type":"ContainerDied","Data":"32b1248274d13778ceb498ffa2f019f4977cc2212905232ab598caca94f308d2"} Feb 27 17:58:13 crc kubenswrapper[4830]: I0227 17:58:13.523888 4830 scope.go:117] "RemoveContainer" containerID="a7df8b94c77932c3be07eef4d3d07abe27eb9ea900be660347b186cafa00f6e1" Feb 27 17:58:13 crc kubenswrapper[4830]: I0227 17:58:13.523908 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-m7xk8" Feb 27 17:58:13 crc kubenswrapper[4830]: I0227 17:58:13.548932 4830 scope.go:117] "RemoveContainer" containerID="6e457ac51b69fa92a7450e4347f20d47202ad11593584dc2a100db3918cd2b66" Feb 27 17:58:13 crc kubenswrapper[4830]: I0227 17:58:13.590713 4830 scope.go:117] "RemoveContainer" containerID="c7dcffc4315759e0b499e339e767513fa8d3f45f258dc47b869d67ce9da14cab" Feb 27 17:58:13 crc kubenswrapper[4830]: I0227 17:58:13.601045 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-m7xk8"] Feb 27 17:58:13 crc kubenswrapper[4830]: I0227 17:58:13.622167 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-m7xk8"] Feb 27 17:58:13 crc kubenswrapper[4830]: I0227 17:58:13.632914 4830 scope.go:117] "RemoveContainer" containerID="a7df8b94c77932c3be07eef4d3d07abe27eb9ea900be660347b186cafa00f6e1" Feb 27 17:58:13 crc kubenswrapper[4830]: E0227 17:58:13.633437 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a7df8b94c77932c3be07eef4d3d07abe27eb9ea900be660347b186cafa00f6e1\": container with ID starting with a7df8b94c77932c3be07eef4d3d07abe27eb9ea900be660347b186cafa00f6e1 not found: ID does not exist" containerID="a7df8b94c77932c3be07eef4d3d07abe27eb9ea900be660347b186cafa00f6e1" Feb 27 17:58:13 crc kubenswrapper[4830]: I0227 
17:58:13.633481 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7df8b94c77932c3be07eef4d3d07abe27eb9ea900be660347b186cafa00f6e1"} err="failed to get container status \"a7df8b94c77932c3be07eef4d3d07abe27eb9ea900be660347b186cafa00f6e1\": rpc error: code = NotFound desc = could not find container \"a7df8b94c77932c3be07eef4d3d07abe27eb9ea900be660347b186cafa00f6e1\": container with ID starting with a7df8b94c77932c3be07eef4d3d07abe27eb9ea900be660347b186cafa00f6e1 not found: ID does not exist" Feb 27 17:58:13 crc kubenswrapper[4830]: I0227 17:58:13.633513 4830 scope.go:117] "RemoveContainer" containerID="6e457ac51b69fa92a7450e4347f20d47202ad11593584dc2a100db3918cd2b66" Feb 27 17:58:13 crc kubenswrapper[4830]: E0227 17:58:13.633890 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6e457ac51b69fa92a7450e4347f20d47202ad11593584dc2a100db3918cd2b66\": container with ID starting with 6e457ac51b69fa92a7450e4347f20d47202ad11593584dc2a100db3918cd2b66 not found: ID does not exist" containerID="6e457ac51b69fa92a7450e4347f20d47202ad11593584dc2a100db3918cd2b66" Feb 27 17:58:13 crc kubenswrapper[4830]: I0227 17:58:13.633923 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e457ac51b69fa92a7450e4347f20d47202ad11593584dc2a100db3918cd2b66"} err="failed to get container status \"6e457ac51b69fa92a7450e4347f20d47202ad11593584dc2a100db3918cd2b66\": rpc error: code = NotFound desc = could not find container \"6e457ac51b69fa92a7450e4347f20d47202ad11593584dc2a100db3918cd2b66\": container with ID starting with 6e457ac51b69fa92a7450e4347f20d47202ad11593584dc2a100db3918cd2b66 not found: ID does not exist" Feb 27 17:58:13 crc kubenswrapper[4830]: I0227 17:58:13.633961 4830 scope.go:117] "RemoveContainer" containerID="c7dcffc4315759e0b499e339e767513fa8d3f45f258dc47b869d67ce9da14cab" Feb 27 17:58:13 crc 
kubenswrapper[4830]: E0227 17:58:13.634414 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c7dcffc4315759e0b499e339e767513fa8d3f45f258dc47b869d67ce9da14cab\": container with ID starting with c7dcffc4315759e0b499e339e767513fa8d3f45f258dc47b869d67ce9da14cab not found: ID does not exist" containerID="c7dcffc4315759e0b499e339e767513fa8d3f45f258dc47b869d67ce9da14cab" Feb 27 17:58:13 crc kubenswrapper[4830]: I0227 17:58:13.634447 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7dcffc4315759e0b499e339e767513fa8d3f45f258dc47b869d67ce9da14cab"} err="failed to get container status \"c7dcffc4315759e0b499e339e767513fa8d3f45f258dc47b869d67ce9da14cab\": rpc error: code = NotFound desc = could not find container \"c7dcffc4315759e0b499e339e767513fa8d3f45f258dc47b869d67ce9da14cab\": container with ID starting with c7dcffc4315759e0b499e339e767513fa8d3f45f258dc47b869d67ce9da14cab not found: ID does not exist" Feb 27 17:58:14 crc kubenswrapper[4830]: I0227 17:58:14.785582 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8393f040-6d7a-48e5-be41-891334614f73" path="/var/lib/kubelet/pods/8393f040-6d7a-48e5-be41-891334614f73/volumes" Feb 27 17:58:16 crc kubenswrapper[4830]: I0227 17:58:16.570370 4830 generic.go:334] "Generic (PLEG): container finished" podID="f9c2a5cd-996c-4354-af4a-dc030af8ab5e" containerID="aea2c6cd8e3bc7d221fb2ff2981b5209a60558fe322058339f4800b4a3e8eee7" exitCode=0 Feb 27 17:58:16 crc kubenswrapper[4830]: I0227 17:58:16.570500 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gdh6f" event={"ID":"f9c2a5cd-996c-4354-af4a-dc030af8ab5e","Type":"ContainerDied","Data":"aea2c6cd8e3bc7d221fb2ff2981b5209a60558fe322058339f4800b4a3e8eee7"} Feb 27 17:58:17 crc kubenswrapper[4830]: I0227 17:58:17.590638 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-gdh6f" event={"ID":"f9c2a5cd-996c-4354-af4a-dc030af8ab5e","Type":"ContainerStarted","Data":"7c654c8468482ee78f4a50fdc863f54e5deffdfb08f44cd690830dc64859c97f"} Feb 27 17:58:17 crc kubenswrapper[4830]: I0227 17:58:17.632762 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-gdh6f" podStartSLOduration=2.084290805 podStartE2EDuration="9.63273313s" podCreationTimestamp="2026-02-27 17:58:08 +0000 UTC" firstStartedPulling="2026-02-27 17:58:09.457387968 +0000 UTC m=+6685.546660431" lastFinishedPulling="2026-02-27 17:58:17.005830253 +0000 UTC m=+6693.095102756" observedRunningTime="2026-02-27 17:58:17.614211128 +0000 UTC m=+6693.703483621" watchObservedRunningTime="2026-02-27 17:58:17.63273313 +0000 UTC m=+6693.722005623" Feb 27 17:58:17 crc kubenswrapper[4830]: I0227 17:58:17.762433 4830 scope.go:117] "RemoveContainer" containerID="9c7600b1d02b30467a3c9249e6962a5faed8288686d8237218305c7cc4357171" Feb 27 17:58:17 crc kubenswrapper[4830]: E0227 17:58:17.762997 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 17:58:18 crc kubenswrapper[4830]: E0227 17:58:18.048937 4830 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/cluster-observability-operator/obo-prometheus-rhel9-operator@sha256=9c27a3901c5c632e259da7ac87eb6eadf10c51bf967b9fbec5253ade4cce6a9f/signature-4: status 500 (Internal Server Error)" 
image="registry.redhat.io/cluster-observability-operator/obo-prometheus-rhel9-operator@sha256:e7e5f4c5e8ab0ba298ef0295a7137d438a42eb177d9322212cde6ba8f367912a" Feb 27 17:58:18 crc kubenswrapper[4830]: E0227 17:58:18.049288 4830 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:prometheus-operator,Image:registry.redhat.io/cluster-observability-operator/obo-prometheus-rhel9-operator@sha256:e7e5f4c5e8ab0ba298ef0295a7137d438a42eb177d9322212cde6ba8f367912a,Command:[],Args:[--prometheus-config-reloader=$(RELATED_IMAGE_PROMETHEUS_CONFIG_RELOADER) --prometheus-instance-selector=app.kubernetes.io/managed-by=observability-operator --alertmanager-instance-selector=app.kubernetes.io/managed-by=observability-operator --thanos-ruler-instance-selector=app.kubernetes.io/managed-by=observability-operator --watch-referenced-objects-in-all-namespaces=true --disable-unmanaged-prometheus-configuration=true],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:GOGC,Value:30,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_PROMETHEUS_CONFIG_RELOADER,Value:registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-prometheus-config-reloader-rhel9@sha256:9a2097bc5b2e02bc1703f64c452ce8fe4bc6775b732db930ff4770b76ae4653a,ValueFrom:nil,},EnvVar{Name:OPERATOR_CONDITION_NAME,Value:cluster-observability-operator.v1.3.1,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{100 -3} {} 100m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{157286400 0} {} 150Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mq658,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod obo-prometheus-operator-68bc856cb9-x6smj_openshift-operators(fb796dd0-1d3a-4037-a42a-7427293ea799): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/cluster-observability-operator/obo-prometheus-rhel9-operator@sha256=9c27a3901c5c632e259da7ac87eb6eadf10c51bf967b9fbec5253ade4cce6a9f/signature-4: status 500 (Internal Server Error)" logger="UnhandledError" Feb 27 17:58:18 crc kubenswrapper[4830]: E0227 17:58:18.050628 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus-operator\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/cluster-observability-operator/obo-prometheus-rhel9-operator@sha256=9c27a3901c5c632e259da7ac87eb6eadf10c51bf967b9fbec5253ade4cce6a9f/signature-4: status 500 (Internal Server Error)\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-x6smj" podUID="fb796dd0-1d3a-4037-a42a-7427293ea799" Feb 27 17:58:18 
crc kubenswrapper[4830]: I0227 17:58:18.569051 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-gdh6f" Feb 27 17:58:18 crc kubenswrapper[4830]: I0227 17:58:18.569219 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-gdh6f" Feb 27 17:58:19 crc kubenswrapper[4830]: I0227 17:58:19.624482 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-gdh6f" podUID="f9c2a5cd-996c-4354-af4a-dc030af8ab5e" containerName="registry-server" probeResult="failure" output=< Feb 27 17:58:19 crc kubenswrapper[4830]: timeout: failed to connect service ":50051" within 1s Feb 27 17:58:19 crc kubenswrapper[4830]: > Feb 27 17:58:28 crc kubenswrapper[4830]: I0227 17:58:28.655675 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-gdh6f" Feb 27 17:58:28 crc kubenswrapper[4830]: I0227 17:58:28.730359 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-gdh6f" Feb 27 17:58:28 crc kubenswrapper[4830]: I0227 17:58:28.902923 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gdh6f"] Feb 27 17:58:29 crc kubenswrapper[4830]: I0227 17:58:29.731066 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-gdh6f" podUID="f9c2a5cd-996c-4354-af4a-dc030af8ab5e" containerName="registry-server" containerID="cri-o://7c654c8468482ee78f4a50fdc863f54e5deffdfb08f44cd690830dc64859c97f" gracePeriod=2 Feb 27 17:58:30 crc kubenswrapper[4830]: I0227 17:58:30.344157 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gdh6f" Feb 27 17:58:30 crc kubenswrapper[4830]: I0227 17:58:30.516159 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9c2a5cd-996c-4354-af4a-dc030af8ab5e-utilities\") pod \"f9c2a5cd-996c-4354-af4a-dc030af8ab5e\" (UID: \"f9c2a5cd-996c-4354-af4a-dc030af8ab5e\") " Feb 27 17:58:30 crc kubenswrapper[4830]: I0227 17:58:30.516399 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9c2a5cd-996c-4354-af4a-dc030af8ab5e-catalog-content\") pod \"f9c2a5cd-996c-4354-af4a-dc030af8ab5e\" (UID: \"f9c2a5cd-996c-4354-af4a-dc030af8ab5e\") " Feb 27 17:58:30 crc kubenswrapper[4830]: I0227 17:58:30.516706 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x9mk9\" (UniqueName: \"kubernetes.io/projected/f9c2a5cd-996c-4354-af4a-dc030af8ab5e-kube-api-access-x9mk9\") pod \"f9c2a5cd-996c-4354-af4a-dc030af8ab5e\" (UID: \"f9c2a5cd-996c-4354-af4a-dc030af8ab5e\") " Feb 27 17:58:30 crc kubenswrapper[4830]: I0227 17:58:30.518170 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9c2a5cd-996c-4354-af4a-dc030af8ab5e-utilities" (OuterVolumeSpecName: "utilities") pod "f9c2a5cd-996c-4354-af4a-dc030af8ab5e" (UID: "f9c2a5cd-996c-4354-af4a-dc030af8ab5e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 17:58:30 crc kubenswrapper[4830]: I0227 17:58:30.523751 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9c2a5cd-996c-4354-af4a-dc030af8ab5e-kube-api-access-x9mk9" (OuterVolumeSpecName: "kube-api-access-x9mk9") pod "f9c2a5cd-996c-4354-af4a-dc030af8ab5e" (UID: "f9c2a5cd-996c-4354-af4a-dc030af8ab5e"). InnerVolumeSpecName "kube-api-access-x9mk9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:58:30 crc kubenswrapper[4830]: I0227 17:58:30.620693 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x9mk9\" (UniqueName: \"kubernetes.io/projected/f9c2a5cd-996c-4354-af4a-dc030af8ab5e-kube-api-access-x9mk9\") on node \"crc\" DevicePath \"\"" Feb 27 17:58:30 crc kubenswrapper[4830]: I0227 17:58:30.620735 4830 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9c2a5cd-996c-4354-af4a-dc030af8ab5e-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 17:58:30 crc kubenswrapper[4830]: I0227 17:58:30.654429 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9c2a5cd-996c-4354-af4a-dc030af8ab5e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f9c2a5cd-996c-4354-af4a-dc030af8ab5e" (UID: "f9c2a5cd-996c-4354-af4a-dc030af8ab5e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 17:58:30 crc kubenswrapper[4830]: I0227 17:58:30.723365 4830 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9c2a5cd-996c-4354-af4a-dc030af8ab5e-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 17:58:30 crc kubenswrapper[4830]: I0227 17:58:30.746258 4830 generic.go:334] "Generic (PLEG): container finished" podID="f9c2a5cd-996c-4354-af4a-dc030af8ab5e" containerID="7c654c8468482ee78f4a50fdc863f54e5deffdfb08f44cd690830dc64859c97f" exitCode=0 Feb 27 17:58:30 crc kubenswrapper[4830]: I0227 17:58:30.746361 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gdh6f" Feb 27 17:58:30 crc kubenswrapper[4830]: I0227 17:58:30.746368 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gdh6f" event={"ID":"f9c2a5cd-996c-4354-af4a-dc030af8ab5e","Type":"ContainerDied","Data":"7c654c8468482ee78f4a50fdc863f54e5deffdfb08f44cd690830dc64859c97f"} Feb 27 17:58:30 crc kubenswrapper[4830]: I0227 17:58:30.748359 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gdh6f" event={"ID":"f9c2a5cd-996c-4354-af4a-dc030af8ab5e","Type":"ContainerDied","Data":"d44c3ba0bcc8cf1c17e8683cd2b2fa79c8a1e16be8dc22eb1348a0146494bdc6"} Feb 27 17:58:30 crc kubenswrapper[4830]: I0227 17:58:30.748454 4830 scope.go:117] "RemoveContainer" containerID="7c654c8468482ee78f4a50fdc863f54e5deffdfb08f44cd690830dc64859c97f" Feb 27 17:58:30 crc kubenswrapper[4830]: I0227 17:58:30.790104 4830 scope.go:117] "RemoveContainer" containerID="aea2c6cd8e3bc7d221fb2ff2981b5209a60558fe322058339f4800b4a3e8eee7" Feb 27 17:58:30 crc kubenswrapper[4830]: I0227 17:58:30.810482 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gdh6f"] Feb 27 17:58:30 crc kubenswrapper[4830]: I0227 17:58:30.821387 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-gdh6f"] Feb 27 17:58:30 crc kubenswrapper[4830]: I0227 17:58:30.835588 4830 scope.go:117] "RemoveContainer" containerID="0215ba30b495645b95c8a5e0606942a82650cd5da27ec11b8133ab71a974c700" Feb 27 17:58:30 crc kubenswrapper[4830]: I0227 17:58:30.891285 4830 scope.go:117] "RemoveContainer" containerID="7c654c8468482ee78f4a50fdc863f54e5deffdfb08f44cd690830dc64859c97f" Feb 27 17:58:30 crc kubenswrapper[4830]: E0227 17:58:30.893985 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"7c654c8468482ee78f4a50fdc863f54e5deffdfb08f44cd690830dc64859c97f\": container with ID starting with 7c654c8468482ee78f4a50fdc863f54e5deffdfb08f44cd690830dc64859c97f not found: ID does not exist" containerID="7c654c8468482ee78f4a50fdc863f54e5deffdfb08f44cd690830dc64859c97f" Feb 27 17:58:30 crc kubenswrapper[4830]: I0227 17:58:30.894108 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c654c8468482ee78f4a50fdc863f54e5deffdfb08f44cd690830dc64859c97f"} err="failed to get container status \"7c654c8468482ee78f4a50fdc863f54e5deffdfb08f44cd690830dc64859c97f\": rpc error: code = NotFound desc = could not find container \"7c654c8468482ee78f4a50fdc863f54e5deffdfb08f44cd690830dc64859c97f\": container with ID starting with 7c654c8468482ee78f4a50fdc863f54e5deffdfb08f44cd690830dc64859c97f not found: ID does not exist" Feb 27 17:58:30 crc kubenswrapper[4830]: I0227 17:58:30.894170 4830 scope.go:117] "RemoveContainer" containerID="aea2c6cd8e3bc7d221fb2ff2981b5209a60558fe322058339f4800b4a3e8eee7" Feb 27 17:58:30 crc kubenswrapper[4830]: E0227 17:58:30.894935 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aea2c6cd8e3bc7d221fb2ff2981b5209a60558fe322058339f4800b4a3e8eee7\": container with ID starting with aea2c6cd8e3bc7d221fb2ff2981b5209a60558fe322058339f4800b4a3e8eee7 not found: ID does not exist" containerID="aea2c6cd8e3bc7d221fb2ff2981b5209a60558fe322058339f4800b4a3e8eee7" Feb 27 17:58:30 crc kubenswrapper[4830]: I0227 17:58:30.894989 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aea2c6cd8e3bc7d221fb2ff2981b5209a60558fe322058339f4800b4a3e8eee7"} err="failed to get container status \"aea2c6cd8e3bc7d221fb2ff2981b5209a60558fe322058339f4800b4a3e8eee7\": rpc error: code = NotFound desc = could not find container \"aea2c6cd8e3bc7d221fb2ff2981b5209a60558fe322058339f4800b4a3e8eee7\": container with ID 
starting with aea2c6cd8e3bc7d221fb2ff2981b5209a60558fe322058339f4800b4a3e8eee7 not found: ID does not exist" Feb 27 17:58:30 crc kubenswrapper[4830]: I0227 17:58:30.895017 4830 scope.go:117] "RemoveContainer" containerID="0215ba30b495645b95c8a5e0606942a82650cd5da27ec11b8133ab71a974c700" Feb 27 17:58:30 crc kubenswrapper[4830]: E0227 17:58:30.895634 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0215ba30b495645b95c8a5e0606942a82650cd5da27ec11b8133ab71a974c700\": container with ID starting with 0215ba30b495645b95c8a5e0606942a82650cd5da27ec11b8133ab71a974c700 not found: ID does not exist" containerID="0215ba30b495645b95c8a5e0606942a82650cd5da27ec11b8133ab71a974c700" Feb 27 17:58:30 crc kubenswrapper[4830]: I0227 17:58:30.895709 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0215ba30b495645b95c8a5e0606942a82650cd5da27ec11b8133ab71a974c700"} err="failed to get container status \"0215ba30b495645b95c8a5e0606942a82650cd5da27ec11b8133ab71a974c700\": rpc error: code = NotFound desc = could not find container \"0215ba30b495645b95c8a5e0606942a82650cd5da27ec11b8133ab71a974c700\": container with ID starting with 0215ba30b495645b95c8a5e0606942a82650cd5da27ec11b8133ab71a974c700 not found: ID does not exist" Feb 27 17:58:31 crc kubenswrapper[4830]: I0227 17:58:31.763694 4830 scope.go:117] "RemoveContainer" containerID="9c7600b1d02b30467a3c9249e6962a5faed8288686d8237218305c7cc4357171" Feb 27 17:58:31 crc kubenswrapper[4830]: E0227 17:58:31.764986 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" 
podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 17:58:32 crc kubenswrapper[4830]: E0227 17:58:32.769560 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus-operator\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/obo-prometheus-rhel9-operator@sha256:e7e5f4c5e8ab0ba298ef0295a7137d438a42eb177d9322212cde6ba8f367912a\\\"\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-x6smj" podUID="fb796dd0-1d3a-4037-a42a-7427293ea799" Feb 27 17:58:32 crc kubenswrapper[4830]: I0227 17:58:32.820731 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9c2a5cd-996c-4354-af4a-dc030af8ab5e" path="/var/lib/kubelet/pods/f9c2a5cd-996c-4354-af4a-dc030af8ab5e/volumes" Feb 27 17:58:44 crc kubenswrapper[4830]: I0227 17:58:44.779077 4830 scope.go:117] "RemoveContainer" containerID="9c7600b1d02b30467a3c9249e6962a5faed8288686d8237218305c7cc4357171" Feb 27 17:58:44 crc kubenswrapper[4830]: E0227 17:58:44.780471 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 17:58:46 crc kubenswrapper[4830]: E0227 17:58:46.767792 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus-operator\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/obo-prometheus-rhel9-operator@sha256:e7e5f4c5e8ab0ba298ef0295a7137d438a42eb177d9322212cde6ba8f367912a\\\"\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-x6smj" podUID="fb796dd0-1d3a-4037-a42a-7427293ea799" Feb 27 17:58:56 crc 
kubenswrapper[4830]: I0227 17:58:56.597662 4830 scope.go:117] "RemoveContainer" containerID="be34b7a5698aae5a9c1ba9bb648c8fff1a0cbc53687a974ba33846a2c0c0cc9b" Feb 27 17:58:56 crc kubenswrapper[4830]: I0227 17:58:56.674919 4830 scope.go:117] "RemoveContainer" containerID="cfdb726e0b3196e3a7d80143d9c41a14ab38fa91c4f91cbbb6fca41c9f303b57" Feb 27 17:58:56 crc kubenswrapper[4830]: I0227 17:58:56.742618 4830 scope.go:117] "RemoveContainer" containerID="b9164331e1f9e092beb4f47a40672a192a69ad41763af0dbc5f231e7646d3c69" Feb 27 17:58:56 crc kubenswrapper[4830]: I0227 17:58:56.763342 4830 scope.go:117] "RemoveContainer" containerID="9c7600b1d02b30467a3c9249e6962a5faed8288686d8237218305c7cc4357171" Feb 27 17:58:56 crc kubenswrapper[4830]: E0227 17:58:56.763994 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 17:59:00 crc kubenswrapper[4830]: E0227 17:59:00.766102 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus-operator\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/obo-prometheus-rhel9-operator@sha256:e7e5f4c5e8ab0ba298ef0295a7137d438a42eb177d9322212cde6ba8f367912a\\\"\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-x6smj" podUID="fb796dd0-1d3a-4037-a42a-7427293ea799" Feb 27 17:59:10 crc kubenswrapper[4830]: I0227 17:59:10.762511 4830 scope.go:117] "RemoveContainer" containerID="9c7600b1d02b30467a3c9249e6962a5faed8288686d8237218305c7cc4357171" Feb 27 17:59:11 crc kubenswrapper[4830]: I0227 17:59:11.317245 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" event={"ID":"00d6b7ce-4757-4275-8345-60c1b546ce25","Type":"ContainerStarted","Data":"364bac5e44d6ecef577235338aa01e0eab35896300d6d5c2d81ef312d7b04024"} Feb 27 17:59:13 crc kubenswrapper[4830]: E0227 17:59:13.766403 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus-operator\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/obo-prometheus-rhel9-operator@sha256:e7e5f4c5e8ab0ba298ef0295a7137d438a42eb177d9322212cde6ba8f367912a\\\"\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-x6smj" podUID="fb796dd0-1d3a-4037-a42a-7427293ea799" Feb 27 17:59:26 crc kubenswrapper[4830]: E0227 17:59:26.766889 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus-operator\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/obo-prometheus-rhel9-operator@sha256:e7e5f4c5e8ab0ba298ef0295a7137d438a42eb177d9322212cde6ba8f367912a\\\"\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-x6smj" podUID="fb796dd0-1d3a-4037-a42a-7427293ea799" Feb 27 17:59:40 crc kubenswrapper[4830]: I0227 17:59:40.773457 4830 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 27 17:59:42 crc kubenswrapper[4830]: E0227 17:59:42.524979 4830 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/cluster-observability-operator/obo-prometheus-rhel9-operator@sha256=9c27a3901c5c632e259da7ac87eb6eadf10c51bf967b9fbec5253ade4cce6a9f/signature-4: status 500 (Internal Server Error)" 
image="registry.redhat.io/cluster-observability-operator/obo-prometheus-rhel9-operator@sha256:e7e5f4c5e8ab0ba298ef0295a7137d438a42eb177d9322212cde6ba8f367912a" Feb 27 17:59:42 crc kubenswrapper[4830]: E0227 17:59:42.526310 4830 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:prometheus-operator,Image:registry.redhat.io/cluster-observability-operator/obo-prometheus-rhel9-operator@sha256:e7e5f4c5e8ab0ba298ef0295a7137d438a42eb177d9322212cde6ba8f367912a,Command:[],Args:[--prometheus-config-reloader=$(RELATED_IMAGE_PROMETHEUS_CONFIG_RELOADER) --prometheus-instance-selector=app.kubernetes.io/managed-by=observability-operator --alertmanager-instance-selector=app.kubernetes.io/managed-by=observability-operator --thanos-ruler-instance-selector=app.kubernetes.io/managed-by=observability-operator --watch-referenced-objects-in-all-namespaces=true --disable-unmanaged-prometheus-configuration=true],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:GOGC,Value:30,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_PROMETHEUS_CONFIG_RELOADER,Value:registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-prometheus-config-reloader-rhel9@sha256:9a2097bc5b2e02bc1703f64c452ce8fe4bc6775b732db930ff4770b76ae4653a,ValueFrom:nil,},EnvVar{Name:OPERATOR_CONDITION_NAME,Value:cluster-observability-operator.v1.3.1,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{100 -3} {} 100m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{157286400 0} {} 150Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mq658,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod obo-prometheus-operator-68bc856cb9-x6smj_openshift-operators(fb796dd0-1d3a-4037-a42a-7427293ea799): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/cluster-observability-operator/obo-prometheus-rhel9-operator@sha256=9c27a3901c5c632e259da7ac87eb6eadf10c51bf967b9fbec5253ade4cce6a9f/signature-4: status 500 (Internal Server Error)" logger="UnhandledError" Feb 27 17:59:42 crc kubenswrapper[4830]: E0227 17:59:42.527564 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus-operator\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/cluster-observability-operator/obo-prometheus-rhel9-operator@sha256=9c27a3901c5c632e259da7ac87eb6eadf10c51bf967b9fbec5253ade4cce6a9f/signature-4: status 500 (Internal Server Error)\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-x6smj" podUID="fb796dd0-1d3a-4037-a42a-7427293ea799" Feb 27 17:59:56 
crc kubenswrapper[4830]: E0227 17:59:56.765634 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus-operator\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/obo-prometheus-rhel9-operator@sha256:e7e5f4c5e8ab0ba298ef0295a7137d438a42eb177d9322212cde6ba8f367912a\\\"\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-x6smj" podUID="fb796dd0-1d3a-4037-a42a-7427293ea799" Feb 27 18:00:00 crc kubenswrapper[4830]: I0227 18:00:00.158377 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536920-rgmsv"] Feb 27 18:00:00 crc kubenswrapper[4830]: E0227 18:00:00.159312 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9c2a5cd-996c-4354-af4a-dc030af8ab5e" containerName="extract-content" Feb 27 18:00:00 crc kubenswrapper[4830]: I0227 18:00:00.159323 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9c2a5cd-996c-4354-af4a-dc030af8ab5e" containerName="extract-content" Feb 27 18:00:00 crc kubenswrapper[4830]: E0227 18:00:00.159341 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9c2a5cd-996c-4354-af4a-dc030af8ab5e" containerName="registry-server" Feb 27 18:00:00 crc kubenswrapper[4830]: I0227 18:00:00.159347 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9c2a5cd-996c-4354-af4a-dc030af8ab5e" containerName="registry-server" Feb 27 18:00:00 crc kubenswrapper[4830]: E0227 18:00:00.159355 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8393f040-6d7a-48e5-be41-891334614f73" containerName="extract-content" Feb 27 18:00:00 crc kubenswrapper[4830]: I0227 18:00:00.159362 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="8393f040-6d7a-48e5-be41-891334614f73" containerName="extract-content" Feb 27 18:00:00 crc kubenswrapper[4830]: E0227 18:00:00.159377 4830 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f9c2a5cd-996c-4354-af4a-dc030af8ab5e" containerName="extract-utilities" Feb 27 18:00:00 crc kubenswrapper[4830]: I0227 18:00:00.159384 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9c2a5cd-996c-4354-af4a-dc030af8ab5e" containerName="extract-utilities" Feb 27 18:00:00 crc kubenswrapper[4830]: E0227 18:00:00.159402 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8393f040-6d7a-48e5-be41-891334614f73" containerName="extract-utilities" Feb 27 18:00:00 crc kubenswrapper[4830]: I0227 18:00:00.159408 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="8393f040-6d7a-48e5-be41-891334614f73" containerName="extract-utilities" Feb 27 18:00:00 crc kubenswrapper[4830]: E0227 18:00:00.159419 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8393f040-6d7a-48e5-be41-891334614f73" containerName="registry-server" Feb 27 18:00:00 crc kubenswrapper[4830]: I0227 18:00:00.159425 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="8393f040-6d7a-48e5-be41-891334614f73" containerName="registry-server" Feb 27 18:00:00 crc kubenswrapper[4830]: I0227 18:00:00.159600 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="8393f040-6d7a-48e5-be41-891334614f73" containerName="registry-server" Feb 27 18:00:00 crc kubenswrapper[4830]: I0227 18:00:00.159614 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9c2a5cd-996c-4354-af4a-dc030af8ab5e" containerName="registry-server" Feb 27 18:00:00 crc kubenswrapper[4830]: I0227 18:00:00.160403 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536920-rgmsv" Feb 27 18:00:00 crc kubenswrapper[4830]: I0227 18:00:00.164202 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 18:00:00 crc kubenswrapper[4830]: I0227 18:00:00.164335 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lmbtm" Feb 27 18:00:00 crc kubenswrapper[4830]: I0227 18:00:00.164430 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 18:00:00 crc kubenswrapper[4830]: I0227 18:00:00.170180 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536920-rgmsv"] Feb 27 18:00:00 crc kubenswrapper[4830]: I0227 18:00:00.227244 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536920-bxljl"] Feb 27 18:00:00 crc kubenswrapper[4830]: I0227 18:00:00.229085 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29536920-bxljl" Feb 27 18:00:00 crc kubenswrapper[4830]: I0227 18:00:00.232912 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 27 18:00:00 crc kubenswrapper[4830]: I0227 18:00:00.233469 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 27 18:00:00 crc kubenswrapper[4830]: I0227 18:00:00.246805 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536920-bxljl"] Feb 27 18:00:00 crc kubenswrapper[4830]: I0227 18:00:00.266563 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjlnf\" (UniqueName: \"kubernetes.io/projected/c5e81087-8783-41f2-bc8b-bd104ade9e69-kube-api-access-zjlnf\") pod \"auto-csr-approver-29536920-rgmsv\" (UID: \"c5e81087-8783-41f2-bc8b-bd104ade9e69\") " pod="openshift-infra/auto-csr-approver-29536920-rgmsv" Feb 27 18:00:00 crc kubenswrapper[4830]: I0227 18:00:00.368545 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9e24baf4-bc40-4511-a5c0-c5797981f2b5-secret-volume\") pod \"collect-profiles-29536920-bxljl\" (UID: \"9e24baf4-bc40-4511-a5c0-c5797981f2b5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536920-bxljl" Feb 27 18:00:00 crc kubenswrapper[4830]: I0227 18:00:00.368813 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9cp7\" (UniqueName: \"kubernetes.io/projected/9e24baf4-bc40-4511-a5c0-c5797981f2b5-kube-api-access-g9cp7\") pod \"collect-profiles-29536920-bxljl\" (UID: \"9e24baf4-bc40-4511-a5c0-c5797981f2b5\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29536920-bxljl" Feb 27 18:00:00 crc kubenswrapper[4830]: I0227 18:00:00.368978 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zjlnf\" (UniqueName: \"kubernetes.io/projected/c5e81087-8783-41f2-bc8b-bd104ade9e69-kube-api-access-zjlnf\") pod \"auto-csr-approver-29536920-rgmsv\" (UID: \"c5e81087-8783-41f2-bc8b-bd104ade9e69\") " pod="openshift-infra/auto-csr-approver-29536920-rgmsv" Feb 27 18:00:00 crc kubenswrapper[4830]: I0227 18:00:00.369104 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9e24baf4-bc40-4511-a5c0-c5797981f2b5-config-volume\") pod \"collect-profiles-29536920-bxljl\" (UID: \"9e24baf4-bc40-4511-a5c0-c5797981f2b5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536920-bxljl" Feb 27 18:00:00 crc kubenswrapper[4830]: I0227 18:00:00.389530 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zjlnf\" (UniqueName: \"kubernetes.io/projected/c5e81087-8783-41f2-bc8b-bd104ade9e69-kube-api-access-zjlnf\") pod \"auto-csr-approver-29536920-rgmsv\" (UID: \"c5e81087-8783-41f2-bc8b-bd104ade9e69\") " pod="openshift-infra/auto-csr-approver-29536920-rgmsv" Feb 27 18:00:00 crc kubenswrapper[4830]: I0227 18:00:00.471829 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9e24baf4-bc40-4511-a5c0-c5797981f2b5-secret-volume\") pod \"collect-profiles-29536920-bxljl\" (UID: \"9e24baf4-bc40-4511-a5c0-c5797981f2b5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536920-bxljl" Feb 27 18:00:00 crc kubenswrapper[4830]: I0227 18:00:00.471962 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g9cp7\" (UniqueName: 
\"kubernetes.io/projected/9e24baf4-bc40-4511-a5c0-c5797981f2b5-kube-api-access-g9cp7\") pod \"collect-profiles-29536920-bxljl\" (UID: \"9e24baf4-bc40-4511-a5c0-c5797981f2b5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536920-bxljl" Feb 27 18:00:00 crc kubenswrapper[4830]: I0227 18:00:00.472023 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9e24baf4-bc40-4511-a5c0-c5797981f2b5-config-volume\") pod \"collect-profiles-29536920-bxljl\" (UID: \"9e24baf4-bc40-4511-a5c0-c5797981f2b5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536920-bxljl" Feb 27 18:00:00 crc kubenswrapper[4830]: I0227 18:00:00.472853 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9e24baf4-bc40-4511-a5c0-c5797981f2b5-config-volume\") pod \"collect-profiles-29536920-bxljl\" (UID: \"9e24baf4-bc40-4511-a5c0-c5797981f2b5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536920-bxljl" Feb 27 18:00:00 crc kubenswrapper[4830]: I0227 18:00:00.478057 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9e24baf4-bc40-4511-a5c0-c5797981f2b5-secret-volume\") pod \"collect-profiles-29536920-bxljl\" (UID: \"9e24baf4-bc40-4511-a5c0-c5797981f2b5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536920-bxljl" Feb 27 18:00:00 crc kubenswrapper[4830]: I0227 18:00:00.489791 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g9cp7\" (UniqueName: \"kubernetes.io/projected/9e24baf4-bc40-4511-a5c0-c5797981f2b5-kube-api-access-g9cp7\") pod \"collect-profiles-29536920-bxljl\" (UID: \"9e24baf4-bc40-4511-a5c0-c5797981f2b5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536920-bxljl" Feb 27 18:00:00 crc kubenswrapper[4830]: I0227 18:00:00.495774 4830 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536920-rgmsv" Feb 27 18:00:00 crc kubenswrapper[4830]: I0227 18:00:00.545387 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29536920-bxljl" Feb 27 18:00:00 crc kubenswrapper[4830]: I0227 18:00:00.981630 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536920-rgmsv"] Feb 27 18:00:00 crc kubenswrapper[4830]: W0227 18:00:00.990235 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc5e81087_8783_41f2_bc8b_bd104ade9e69.slice/crio-338b606ed58b9e7c5bb2e38697f80c3370d0665fecaf6c3864af34d00c09ac90 WatchSource:0}: Error finding container 338b606ed58b9e7c5bb2e38697f80c3370d0665fecaf6c3864af34d00c09ac90: Status 404 returned error can't find the container with id 338b606ed58b9e7c5bb2e38697f80c3370d0665fecaf6c3864af34d00c09ac90 Feb 27 18:00:01 crc kubenswrapper[4830]: I0227 18:00:01.106118 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536920-bxljl"] Feb 27 18:00:01 crc kubenswrapper[4830]: W0227 18:00:01.111768 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9e24baf4_bc40_4511_a5c0_c5797981f2b5.slice/crio-b4358b23e9743b4496312bfb347623c586a5aa8a702c4068515d61c8c9ff29fb WatchSource:0}: Error finding container b4358b23e9743b4496312bfb347623c586a5aa8a702c4068515d61c8c9ff29fb: Status 404 returned error can't find the container with id b4358b23e9743b4496312bfb347623c586a5aa8a702c4068515d61c8c9ff29fb Feb 27 18:00:01 crc kubenswrapper[4830]: I0227 18:00:01.954726 4830 generic.go:334] "Generic (PLEG): container finished" podID="9e24baf4-bc40-4511-a5c0-c5797981f2b5" 
containerID="cf7c5376c46112b76a0f7a3d3f0288cfc1eb855329d2b1d4630101e45ef74f22" exitCode=0 Feb 27 18:00:01 crc kubenswrapper[4830]: I0227 18:00:01.954818 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29536920-bxljl" event={"ID":"9e24baf4-bc40-4511-a5c0-c5797981f2b5","Type":"ContainerDied","Data":"cf7c5376c46112b76a0f7a3d3f0288cfc1eb855329d2b1d4630101e45ef74f22"} Feb 27 18:00:01 crc kubenswrapper[4830]: I0227 18:00:01.955083 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29536920-bxljl" event={"ID":"9e24baf4-bc40-4511-a5c0-c5797981f2b5","Type":"ContainerStarted","Data":"b4358b23e9743b4496312bfb347623c586a5aa8a702c4068515d61c8c9ff29fb"} Feb 27 18:00:01 crc kubenswrapper[4830]: I0227 18:00:01.956208 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536920-rgmsv" event={"ID":"c5e81087-8783-41f2-bc8b-bd104ade9e69","Type":"ContainerStarted","Data":"338b606ed58b9e7c5bb2e38697f80c3370d0665fecaf6c3864af34d00c09ac90"} Feb 27 18:00:03 crc kubenswrapper[4830]: I0227 18:00:03.400654 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29536920-bxljl" Feb 27 18:00:03 crc kubenswrapper[4830]: I0227 18:00:03.462827 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g9cp7\" (UniqueName: \"kubernetes.io/projected/9e24baf4-bc40-4511-a5c0-c5797981f2b5-kube-api-access-g9cp7\") pod \"9e24baf4-bc40-4511-a5c0-c5797981f2b5\" (UID: \"9e24baf4-bc40-4511-a5c0-c5797981f2b5\") " Feb 27 18:00:03 crc kubenswrapper[4830]: I0227 18:00:03.462911 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9e24baf4-bc40-4511-a5c0-c5797981f2b5-config-volume\") pod \"9e24baf4-bc40-4511-a5c0-c5797981f2b5\" (UID: \"9e24baf4-bc40-4511-a5c0-c5797981f2b5\") " Feb 27 18:00:03 crc kubenswrapper[4830]: I0227 18:00:03.463055 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9e24baf4-bc40-4511-a5c0-c5797981f2b5-secret-volume\") pod \"9e24baf4-bc40-4511-a5c0-c5797981f2b5\" (UID: \"9e24baf4-bc40-4511-a5c0-c5797981f2b5\") " Feb 27 18:00:03 crc kubenswrapper[4830]: I0227 18:00:03.464152 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e24baf4-bc40-4511-a5c0-c5797981f2b5-config-volume" (OuterVolumeSpecName: "config-volume") pod "9e24baf4-bc40-4511-a5c0-c5797981f2b5" (UID: "9e24baf4-bc40-4511-a5c0-c5797981f2b5"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 18:00:03 crc kubenswrapper[4830]: I0227 18:00:03.469273 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e24baf4-bc40-4511-a5c0-c5797981f2b5-kube-api-access-g9cp7" (OuterVolumeSpecName: "kube-api-access-g9cp7") pod "9e24baf4-bc40-4511-a5c0-c5797981f2b5" (UID: "9e24baf4-bc40-4511-a5c0-c5797981f2b5"). 
InnerVolumeSpecName "kube-api-access-g9cp7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:00:03 crc kubenswrapper[4830]: I0227 18:00:03.470458 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e24baf4-bc40-4511-a5c0-c5797981f2b5-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "9e24baf4-bc40-4511-a5c0-c5797981f2b5" (UID: "9e24baf4-bc40-4511-a5c0-c5797981f2b5"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 18:00:03 crc kubenswrapper[4830]: I0227 18:00:03.566288 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g9cp7\" (UniqueName: \"kubernetes.io/projected/9e24baf4-bc40-4511-a5c0-c5797981f2b5-kube-api-access-g9cp7\") on node \"crc\" DevicePath \"\"" Feb 27 18:00:03 crc kubenswrapper[4830]: I0227 18:00:03.566332 4830 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9e24baf4-bc40-4511-a5c0-c5797981f2b5-config-volume\") on node \"crc\" DevicePath \"\"" Feb 27 18:00:03 crc kubenswrapper[4830]: I0227 18:00:03.566344 4830 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9e24baf4-bc40-4511-a5c0-c5797981f2b5-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 27 18:00:03 crc kubenswrapper[4830]: I0227 18:00:03.986635 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29536920-bxljl" event={"ID":"9e24baf4-bc40-4511-a5c0-c5797981f2b5","Type":"ContainerDied","Data":"b4358b23e9743b4496312bfb347623c586a5aa8a702c4068515d61c8c9ff29fb"} Feb 27 18:00:03 crc kubenswrapper[4830]: I0227 18:00:03.986756 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b4358b23e9743b4496312bfb347623c586a5aa8a702c4068515d61c8c9ff29fb" Feb 27 18:00:03 crc kubenswrapper[4830]: I0227 18:00:03.986695 4830 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29536920-bxljl" Feb 27 18:00:04 crc kubenswrapper[4830]: I0227 18:00:04.489854 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536875-kwbgr"] Feb 27 18:00:04 crc kubenswrapper[4830]: I0227 18:00:04.501317 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536875-kwbgr"] Feb 27 18:00:04 crc kubenswrapper[4830]: I0227 18:00:04.783384 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c48742fe-3684-4692-b85f-6bd72411af0e" path="/var/lib/kubelet/pods/c48742fe-3684-4692-b85f-6bd72411af0e/volumes" Feb 27 18:00:05 crc kubenswrapper[4830]: I0227 18:00:05.001735 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536920-rgmsv" event={"ID":"c5e81087-8783-41f2-bc8b-bd104ade9e69","Type":"ContainerStarted","Data":"4f34f0a42ab364d9aef4c06263423046276a80a717301ebf71092ef11f7f2d17"} Feb 27 18:00:05 crc kubenswrapper[4830]: I0227 18:00:05.030659 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29536920-rgmsv" podStartSLOduration=1.557587651 podStartE2EDuration="5.03064053s" podCreationTimestamp="2026-02-27 18:00:00 +0000 UTC" firstStartedPulling="2026-02-27 18:00:00.993223569 +0000 UTC m=+6797.082496042" lastFinishedPulling="2026-02-27 18:00:04.466276458 +0000 UTC m=+6800.555548921" observedRunningTime="2026-02-27 18:00:05.023475059 +0000 UTC m=+6801.112747512" watchObservedRunningTime="2026-02-27 18:00:05.03064053 +0000 UTC m=+6801.119912993" Feb 27 18:00:06 crc kubenswrapper[4830]: I0227 18:00:06.018097 4830 generic.go:334] "Generic (PLEG): container finished" podID="c5e81087-8783-41f2-bc8b-bd104ade9e69" containerID="4f34f0a42ab364d9aef4c06263423046276a80a717301ebf71092ef11f7f2d17" exitCode=0 Feb 27 18:00:06 
crc kubenswrapper[4830]: I0227 18:00:06.018174 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536920-rgmsv" event={"ID":"c5e81087-8783-41f2-bc8b-bd104ade9e69","Type":"ContainerDied","Data":"4f34f0a42ab364d9aef4c06263423046276a80a717301ebf71092ef11f7f2d17"} Feb 27 18:00:07 crc kubenswrapper[4830]: I0227 18:00:07.562330 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536920-rgmsv" Feb 27 18:00:07 crc kubenswrapper[4830]: I0227 18:00:07.690396 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zjlnf\" (UniqueName: \"kubernetes.io/projected/c5e81087-8783-41f2-bc8b-bd104ade9e69-kube-api-access-zjlnf\") pod \"c5e81087-8783-41f2-bc8b-bd104ade9e69\" (UID: \"c5e81087-8783-41f2-bc8b-bd104ade9e69\") " Feb 27 18:00:07 crc kubenswrapper[4830]: I0227 18:00:07.698222 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5e81087-8783-41f2-bc8b-bd104ade9e69-kube-api-access-zjlnf" (OuterVolumeSpecName: "kube-api-access-zjlnf") pod "c5e81087-8783-41f2-bc8b-bd104ade9e69" (UID: "c5e81087-8783-41f2-bc8b-bd104ade9e69"). InnerVolumeSpecName "kube-api-access-zjlnf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:00:07 crc kubenswrapper[4830]: E0227 18:00:07.765088 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus-operator\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/obo-prometheus-rhel9-operator@sha256:e7e5f4c5e8ab0ba298ef0295a7137d438a42eb177d9322212cde6ba8f367912a\\\"\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-x6smj" podUID="fb796dd0-1d3a-4037-a42a-7427293ea799" Feb 27 18:00:07 crc kubenswrapper[4830]: I0227 18:00:07.793985 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zjlnf\" (UniqueName: \"kubernetes.io/projected/c5e81087-8783-41f2-bc8b-bd104ade9e69-kube-api-access-zjlnf\") on node \"crc\" DevicePath \"\"" Feb 27 18:00:07 crc kubenswrapper[4830]: I0227 18:00:07.925473 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536914-v4qf9"] Feb 27 18:00:07 crc kubenswrapper[4830]: I0227 18:00:07.938348 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536914-v4qf9"] Feb 27 18:00:08 crc kubenswrapper[4830]: I0227 18:00:08.048758 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536920-rgmsv" event={"ID":"c5e81087-8783-41f2-bc8b-bd104ade9e69","Type":"ContainerDied","Data":"338b606ed58b9e7c5bb2e38697f80c3370d0665fecaf6c3864af34d00c09ac90"} Feb 27 18:00:08 crc kubenswrapper[4830]: I0227 18:00:08.048819 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="338b606ed58b9e7c5bb2e38697f80c3370d0665fecaf6c3864af34d00c09ac90" Feb 27 18:00:08 crc kubenswrapper[4830]: I0227 18:00:08.048841 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536920-rgmsv" Feb 27 18:00:08 crc kubenswrapper[4830]: I0227 18:00:08.778187 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dcab97ec-480d-4b72-a183-cfebb2ceeec0" path="/var/lib/kubelet/pods/dcab97ec-480d-4b72-a183-cfebb2ceeec0/volumes" Feb 27 18:00:19 crc kubenswrapper[4830]: I0227 18:00:19.225626 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-wxtjs"] Feb 27 18:00:19 crc kubenswrapper[4830]: E0227 18:00:19.226629 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5e81087-8783-41f2-bc8b-bd104ade9e69" containerName="oc" Feb 27 18:00:19 crc kubenswrapper[4830]: I0227 18:00:19.226643 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5e81087-8783-41f2-bc8b-bd104ade9e69" containerName="oc" Feb 27 18:00:19 crc kubenswrapper[4830]: E0227 18:00:19.226682 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e24baf4-bc40-4511-a5c0-c5797981f2b5" containerName="collect-profiles" Feb 27 18:00:19 crc kubenswrapper[4830]: I0227 18:00:19.226690 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e24baf4-bc40-4511-a5c0-c5797981f2b5" containerName="collect-profiles" Feb 27 18:00:19 crc kubenswrapper[4830]: I0227 18:00:19.226964 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e24baf4-bc40-4511-a5c0-c5797981f2b5" containerName="collect-profiles" Feb 27 18:00:19 crc kubenswrapper[4830]: I0227 18:00:19.226987 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5e81087-8783-41f2-bc8b-bd104ade9e69" containerName="oc" Feb 27 18:00:19 crc kubenswrapper[4830]: I0227 18:00:19.228916 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wxtjs" Feb 27 18:00:19 crc kubenswrapper[4830]: I0227 18:00:19.247367 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wxtjs"] Feb 27 18:00:19 crc kubenswrapper[4830]: I0227 18:00:19.406465 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dn9pp\" (UniqueName: \"kubernetes.io/projected/14c02ecf-6b25-4162-9286-acabcecbd435-kube-api-access-dn9pp\") pod \"certified-operators-wxtjs\" (UID: \"14c02ecf-6b25-4162-9286-acabcecbd435\") " pod="openshift-marketplace/certified-operators-wxtjs" Feb 27 18:00:19 crc kubenswrapper[4830]: I0227 18:00:19.406739 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14c02ecf-6b25-4162-9286-acabcecbd435-catalog-content\") pod \"certified-operators-wxtjs\" (UID: \"14c02ecf-6b25-4162-9286-acabcecbd435\") " pod="openshift-marketplace/certified-operators-wxtjs" Feb 27 18:00:19 crc kubenswrapper[4830]: I0227 18:00:19.406814 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14c02ecf-6b25-4162-9286-acabcecbd435-utilities\") pod \"certified-operators-wxtjs\" (UID: \"14c02ecf-6b25-4162-9286-acabcecbd435\") " pod="openshift-marketplace/certified-operators-wxtjs" Feb 27 18:00:19 crc kubenswrapper[4830]: I0227 18:00:19.508789 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dn9pp\" (UniqueName: \"kubernetes.io/projected/14c02ecf-6b25-4162-9286-acabcecbd435-kube-api-access-dn9pp\") pod \"certified-operators-wxtjs\" (UID: \"14c02ecf-6b25-4162-9286-acabcecbd435\") " pod="openshift-marketplace/certified-operators-wxtjs" Feb 27 18:00:19 crc kubenswrapper[4830]: I0227 18:00:19.508909 4830 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14c02ecf-6b25-4162-9286-acabcecbd435-catalog-content\") pod \"certified-operators-wxtjs\" (UID: \"14c02ecf-6b25-4162-9286-acabcecbd435\") " pod="openshift-marketplace/certified-operators-wxtjs" Feb 27 18:00:19 crc kubenswrapper[4830]: I0227 18:00:19.508931 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14c02ecf-6b25-4162-9286-acabcecbd435-utilities\") pod \"certified-operators-wxtjs\" (UID: \"14c02ecf-6b25-4162-9286-acabcecbd435\") " pod="openshift-marketplace/certified-operators-wxtjs" Feb 27 18:00:19 crc kubenswrapper[4830]: I0227 18:00:19.509486 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14c02ecf-6b25-4162-9286-acabcecbd435-catalog-content\") pod \"certified-operators-wxtjs\" (UID: \"14c02ecf-6b25-4162-9286-acabcecbd435\") " pod="openshift-marketplace/certified-operators-wxtjs" Feb 27 18:00:19 crc kubenswrapper[4830]: I0227 18:00:19.509508 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14c02ecf-6b25-4162-9286-acabcecbd435-utilities\") pod \"certified-operators-wxtjs\" (UID: \"14c02ecf-6b25-4162-9286-acabcecbd435\") " pod="openshift-marketplace/certified-operators-wxtjs" Feb 27 18:00:19 crc kubenswrapper[4830]: I0227 18:00:19.548681 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dn9pp\" (UniqueName: \"kubernetes.io/projected/14c02ecf-6b25-4162-9286-acabcecbd435-kube-api-access-dn9pp\") pod \"certified-operators-wxtjs\" (UID: \"14c02ecf-6b25-4162-9286-acabcecbd435\") " pod="openshift-marketplace/certified-operators-wxtjs" Feb 27 18:00:19 crc kubenswrapper[4830]: I0227 18:00:19.556206 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wxtjs" Feb 27 18:00:20 crc kubenswrapper[4830]: I0227 18:00:20.128226 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wxtjs"] Feb 27 18:00:20 crc kubenswrapper[4830]: I0227 18:00:20.220973 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wxtjs" event={"ID":"14c02ecf-6b25-4162-9286-acabcecbd435","Type":"ContainerStarted","Data":"348d8da92dc75981a74b4e173a37bb843668a089509fe0ea2ac2b16eaaf14b48"} Feb 27 18:00:21 crc kubenswrapper[4830]: I0227 18:00:21.241405 4830 generic.go:334] "Generic (PLEG): container finished" podID="14c02ecf-6b25-4162-9286-acabcecbd435" containerID="01e649fd31179e6e3b490c3a6ccd5c199a02f9e7a5d376676bee3c5a1ae6fee5" exitCode=0 Feb 27 18:00:21 crc kubenswrapper[4830]: I0227 18:00:21.241476 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wxtjs" event={"ID":"14c02ecf-6b25-4162-9286-acabcecbd435","Type":"ContainerDied","Data":"01e649fd31179e6e3b490c3a6ccd5c199a02f9e7a5d376676bee3c5a1ae6fee5"} Feb 27 18:00:21 crc kubenswrapper[4830]: E0227 18:00:21.924275 4830 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/certified-operator-index@sha256=625372062485d8ed1e4e84c388a7d036cb39c1b93d8c56dd3418fce0c028b62b/signature-2: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 27 18:00:21 crc kubenswrapper[4830]: E0227 18:00:21.924802 4830 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog 
--cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dn9pp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-wxtjs_openshift-marketplace(14c02ecf-6b25-4162-9286-acabcecbd435): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/certified-operator-index@sha256=625372062485d8ed1e4e84c388a7d036cb39c1b93d8c56dd3418fce0c028b62b/signature-2: status 500 (Internal Server Error)" logger="UnhandledError" Feb 27 18:00:21 crc kubenswrapper[4830]: E0227 18:00:21.925971 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest 
list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/certified-operator-index@sha256=625372062485d8ed1e4e84c388a7d036cb39c1b93d8c56dd3418fce0c028b62b/signature-2: status 500 (Internal Server Error)\"" pod="openshift-marketplace/certified-operators-wxtjs" podUID="14c02ecf-6b25-4162-9286-acabcecbd435" Feb 27 18:00:22 crc kubenswrapper[4830]: E0227 18:00:22.254659 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-wxtjs" podUID="14c02ecf-6b25-4162-9286-acabcecbd435" Feb 27 18:00:22 crc kubenswrapper[4830]: E0227 18:00:22.764723 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus-operator\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/obo-prometheus-rhel9-operator@sha256:e7e5f4c5e8ab0ba298ef0295a7137d438a42eb177d9322212cde6ba8f367912a\\\"\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-x6smj" podUID="fb796dd0-1d3a-4037-a42a-7427293ea799" Feb 27 18:00:34 crc kubenswrapper[4830]: E0227 18:00:34.520691 4830 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/certified-operator-index@sha256=625372062485d8ed1e4e84c388a7d036cb39c1b93d8c56dd3418fce0c028b62b/signature-2: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 27 18:00:34 crc kubenswrapper[4830]: E0227 18:00:34.521399 4830 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dn9pp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-wxtjs_openshift-marketplace(14c02ecf-6b25-4162-9286-acabcecbd435): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/certified-operator-index@sha256=625372062485d8ed1e4e84c388a7d036cb39c1b93d8c56dd3418fce0c028b62b/signature-2: status 500 (Internal Server Error)" logger="UnhandledError" Feb 27 18:00:34 crc 
kubenswrapper[4830]: E0227 18:00:34.522658 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/certified-operator-index@sha256=625372062485d8ed1e4e84c388a7d036cb39c1b93d8c56dd3418fce0c028b62b/signature-2: status 500 (Internal Server Error)\"" pod="openshift-marketplace/certified-operators-wxtjs" podUID="14c02ecf-6b25-4162-9286-acabcecbd435" Feb 27 18:00:36 crc kubenswrapper[4830]: E0227 18:00:36.766219 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus-operator\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/obo-prometheus-rhel9-operator@sha256:e7e5f4c5e8ab0ba298ef0295a7137d438a42eb177d9322212cde6ba8f367912a\\\"\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-x6smj" podUID="fb796dd0-1d3a-4037-a42a-7427293ea799" Feb 27 18:00:46 crc kubenswrapper[4830]: E0227 18:00:46.765448 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-wxtjs" podUID="14c02ecf-6b25-4162-9286-acabcecbd435" Feb 27 18:00:51 crc kubenswrapper[4830]: E0227 18:00:51.766541 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus-operator\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/obo-prometheus-rhel9-operator@sha256:e7e5f4c5e8ab0ba298ef0295a7137d438a42eb177d9322212cde6ba8f367912a\\\"\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-x6smj" podUID="fb796dd0-1d3a-4037-a42a-7427293ea799" Feb 27 18:00:56 crc 
kubenswrapper[4830]: I0227 18:00:56.945609 4830 scope.go:117] "RemoveContainer" containerID="f201355b4dfea4e7690badf25e5b965b799f90fafedbd2c49142ca103aaea93b" Feb 27 18:00:57 crc kubenswrapper[4830]: I0227 18:00:57.017006 4830 scope.go:117] "RemoveContainer" containerID="81b41cd29fe515db7fd3a3ba216aacd034da40bbe22aec1cae04c77ed0f6fbba" Feb 27 18:01:00 crc kubenswrapper[4830]: I0227 18:01:00.178905 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29536921-bbwcz"] Feb 27 18:01:00 crc kubenswrapper[4830]: I0227 18:01:00.182309 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29536921-bbwcz" Feb 27 18:01:00 crc kubenswrapper[4830]: I0227 18:01:00.191714 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29536921-bbwcz"] Feb 27 18:01:00 crc kubenswrapper[4830]: I0227 18:01:00.370966 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ea9f937-1d9d-4e38-87dd-98017339ecc1-config-data\") pod \"keystone-cron-29536921-bbwcz\" (UID: \"1ea9f937-1d9d-4e38-87dd-98017339ecc1\") " pod="openstack/keystone-cron-29536921-bbwcz" Feb 27 18:01:00 crc kubenswrapper[4830]: I0227 18:01:00.371019 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6lrl2\" (UniqueName: \"kubernetes.io/projected/1ea9f937-1d9d-4e38-87dd-98017339ecc1-kube-api-access-6lrl2\") pod \"keystone-cron-29536921-bbwcz\" (UID: \"1ea9f937-1d9d-4e38-87dd-98017339ecc1\") " pod="openstack/keystone-cron-29536921-bbwcz" Feb 27 18:01:00 crc kubenswrapper[4830]: I0227 18:01:00.371184 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ea9f937-1d9d-4e38-87dd-98017339ecc1-combined-ca-bundle\") pod \"keystone-cron-29536921-bbwcz\" (UID: 
\"1ea9f937-1d9d-4e38-87dd-98017339ecc1\") " pod="openstack/keystone-cron-29536921-bbwcz" Feb 27 18:01:00 crc kubenswrapper[4830]: I0227 18:01:00.371442 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1ea9f937-1d9d-4e38-87dd-98017339ecc1-fernet-keys\") pod \"keystone-cron-29536921-bbwcz\" (UID: \"1ea9f937-1d9d-4e38-87dd-98017339ecc1\") " pod="openstack/keystone-cron-29536921-bbwcz" Feb 27 18:01:00 crc kubenswrapper[4830]: I0227 18:01:00.473668 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ea9f937-1d9d-4e38-87dd-98017339ecc1-combined-ca-bundle\") pod \"keystone-cron-29536921-bbwcz\" (UID: \"1ea9f937-1d9d-4e38-87dd-98017339ecc1\") " pod="openstack/keystone-cron-29536921-bbwcz" Feb 27 18:01:00 crc kubenswrapper[4830]: I0227 18:01:00.473839 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1ea9f937-1d9d-4e38-87dd-98017339ecc1-fernet-keys\") pod \"keystone-cron-29536921-bbwcz\" (UID: \"1ea9f937-1d9d-4e38-87dd-98017339ecc1\") " pod="openstack/keystone-cron-29536921-bbwcz" Feb 27 18:01:00 crc kubenswrapper[4830]: I0227 18:01:00.475258 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ea9f937-1d9d-4e38-87dd-98017339ecc1-config-data\") pod \"keystone-cron-29536921-bbwcz\" (UID: \"1ea9f937-1d9d-4e38-87dd-98017339ecc1\") " pod="openstack/keystone-cron-29536921-bbwcz" Feb 27 18:01:00 crc kubenswrapper[4830]: I0227 18:01:00.475298 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6lrl2\" (UniqueName: \"kubernetes.io/projected/1ea9f937-1d9d-4e38-87dd-98017339ecc1-kube-api-access-6lrl2\") pod \"keystone-cron-29536921-bbwcz\" (UID: \"1ea9f937-1d9d-4e38-87dd-98017339ecc1\") " 
pod="openstack/keystone-cron-29536921-bbwcz" Feb 27 18:01:00 crc kubenswrapper[4830]: I0227 18:01:00.482602 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ea9f937-1d9d-4e38-87dd-98017339ecc1-combined-ca-bundle\") pod \"keystone-cron-29536921-bbwcz\" (UID: \"1ea9f937-1d9d-4e38-87dd-98017339ecc1\") " pod="openstack/keystone-cron-29536921-bbwcz" Feb 27 18:01:00 crc kubenswrapper[4830]: I0227 18:01:00.486332 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1ea9f937-1d9d-4e38-87dd-98017339ecc1-fernet-keys\") pod \"keystone-cron-29536921-bbwcz\" (UID: \"1ea9f937-1d9d-4e38-87dd-98017339ecc1\") " pod="openstack/keystone-cron-29536921-bbwcz" Feb 27 18:01:00 crc kubenswrapper[4830]: I0227 18:01:00.487009 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ea9f937-1d9d-4e38-87dd-98017339ecc1-config-data\") pod \"keystone-cron-29536921-bbwcz\" (UID: \"1ea9f937-1d9d-4e38-87dd-98017339ecc1\") " pod="openstack/keystone-cron-29536921-bbwcz" Feb 27 18:01:00 crc kubenswrapper[4830]: I0227 18:01:00.493033 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6lrl2\" (UniqueName: \"kubernetes.io/projected/1ea9f937-1d9d-4e38-87dd-98017339ecc1-kube-api-access-6lrl2\") pod \"keystone-cron-29536921-bbwcz\" (UID: \"1ea9f937-1d9d-4e38-87dd-98017339ecc1\") " pod="openstack/keystone-cron-29536921-bbwcz" Feb 27 18:01:00 crc kubenswrapper[4830]: I0227 18:01:00.534751 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29536921-bbwcz" Feb 27 18:01:01 crc kubenswrapper[4830]: I0227 18:01:01.057703 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29536921-bbwcz"] Feb 27 18:01:01 crc kubenswrapper[4830]: E0227 18:01:01.440068 4830 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/certified-operator-index@sha256=625372062485d8ed1e4e84c388a7d036cb39c1b93d8c56dd3418fce0c028b62b/signature-2: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 27 18:01:01 crc kubenswrapper[4830]: E0227 18:01:01.440758 4830 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dn9pp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-wxtjs_openshift-marketplace(14c02ecf-6b25-4162-9286-acabcecbd435): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/certified-operator-index@sha256=625372062485d8ed1e4e84c388a7d036cb39c1b93d8c56dd3418fce0c028b62b/signature-2: status 500 (Internal Server Error)" logger="UnhandledError" Feb 27 18:01:01 crc kubenswrapper[4830]: E0227 18:01:01.442317 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: 
reading signature from https://registry.redhat.io/containers/sigstore/redhat/certified-operator-index@sha256=625372062485d8ed1e4e84c388a7d036cb39c1b93d8c56dd3418fce0c028b62b/signature-2: status 500 (Internal Server Error)\"" pod="openshift-marketplace/certified-operators-wxtjs" podUID="14c02ecf-6b25-4162-9286-acabcecbd435" Feb 27 18:01:01 crc kubenswrapper[4830]: I0227 18:01:01.772280 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29536921-bbwcz" event={"ID":"1ea9f937-1d9d-4e38-87dd-98017339ecc1","Type":"ContainerStarted","Data":"63b25b451a20d13f9cbd91a0a1fe268691d6b17885f4119a44649db6b4ef8872"} Feb 27 18:01:01 crc kubenswrapper[4830]: I0227 18:01:01.772357 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29536921-bbwcz" event={"ID":"1ea9f937-1d9d-4e38-87dd-98017339ecc1","Type":"ContainerStarted","Data":"110725e3e020e2d1dcccb715f9490a3f738da9b24af05253b5150361f4f3386f"} Feb 27 18:01:02 crc kubenswrapper[4830]: E0227 18:01:02.767316 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus-operator\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/obo-prometheus-rhel9-operator@sha256:e7e5f4c5e8ab0ba298ef0295a7137d438a42eb177d9322212cde6ba8f367912a\\\"\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-x6smj" podUID="fb796dd0-1d3a-4037-a42a-7427293ea799" Feb 27 18:01:04 crc kubenswrapper[4830]: I0227 18:01:04.817670 4830 generic.go:334] "Generic (PLEG): container finished" podID="1ea9f937-1d9d-4e38-87dd-98017339ecc1" containerID="63b25b451a20d13f9cbd91a0a1fe268691d6b17885f4119a44649db6b4ef8872" exitCode=0 Feb 27 18:01:04 crc kubenswrapper[4830]: I0227 18:01:04.817746 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29536921-bbwcz" 
event={"ID":"1ea9f937-1d9d-4e38-87dd-98017339ecc1","Type":"ContainerDied","Data":"63b25b451a20d13f9cbd91a0a1fe268691d6b17885f4119a44649db6b4ef8872"} Feb 27 18:01:06 crc kubenswrapper[4830]: I0227 18:01:06.315096 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29536921-bbwcz" Feb 27 18:01:06 crc kubenswrapper[4830]: I0227 18:01:06.465724 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ea9f937-1d9d-4e38-87dd-98017339ecc1-config-data\") pod \"1ea9f937-1d9d-4e38-87dd-98017339ecc1\" (UID: \"1ea9f937-1d9d-4e38-87dd-98017339ecc1\") " Feb 27 18:01:06 crc kubenswrapper[4830]: I0227 18:01:06.466306 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ea9f937-1d9d-4e38-87dd-98017339ecc1-combined-ca-bundle\") pod \"1ea9f937-1d9d-4e38-87dd-98017339ecc1\" (UID: \"1ea9f937-1d9d-4e38-87dd-98017339ecc1\") " Feb 27 18:01:06 crc kubenswrapper[4830]: I0227 18:01:06.466452 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1ea9f937-1d9d-4e38-87dd-98017339ecc1-fernet-keys\") pod \"1ea9f937-1d9d-4e38-87dd-98017339ecc1\" (UID: \"1ea9f937-1d9d-4e38-87dd-98017339ecc1\") " Feb 27 18:01:06 crc kubenswrapper[4830]: I0227 18:01:06.466484 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6lrl2\" (UniqueName: \"kubernetes.io/projected/1ea9f937-1d9d-4e38-87dd-98017339ecc1-kube-api-access-6lrl2\") pod \"1ea9f937-1d9d-4e38-87dd-98017339ecc1\" (UID: \"1ea9f937-1d9d-4e38-87dd-98017339ecc1\") " Feb 27 18:01:06 crc kubenswrapper[4830]: I0227 18:01:06.474599 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ea9f937-1d9d-4e38-87dd-98017339ecc1-kube-api-access-6lrl2" 
(OuterVolumeSpecName: "kube-api-access-6lrl2") pod "1ea9f937-1d9d-4e38-87dd-98017339ecc1" (UID: "1ea9f937-1d9d-4e38-87dd-98017339ecc1"). InnerVolumeSpecName "kube-api-access-6lrl2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:01:06 crc kubenswrapper[4830]: I0227 18:01:06.480319 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ea9f937-1d9d-4e38-87dd-98017339ecc1-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "1ea9f937-1d9d-4e38-87dd-98017339ecc1" (UID: "1ea9f937-1d9d-4e38-87dd-98017339ecc1"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 18:01:06 crc kubenswrapper[4830]: I0227 18:01:06.513426 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ea9f937-1d9d-4e38-87dd-98017339ecc1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1ea9f937-1d9d-4e38-87dd-98017339ecc1" (UID: "1ea9f937-1d9d-4e38-87dd-98017339ecc1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 18:01:06 crc kubenswrapper[4830]: I0227 18:01:06.537815 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ea9f937-1d9d-4e38-87dd-98017339ecc1-config-data" (OuterVolumeSpecName: "config-data") pod "1ea9f937-1d9d-4e38-87dd-98017339ecc1" (UID: "1ea9f937-1d9d-4e38-87dd-98017339ecc1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 18:01:06 crc kubenswrapper[4830]: I0227 18:01:06.570317 4830 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ea9f937-1d9d-4e38-87dd-98017339ecc1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 18:01:06 crc kubenswrapper[4830]: I0227 18:01:06.570383 4830 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1ea9f937-1d9d-4e38-87dd-98017339ecc1-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 27 18:01:06 crc kubenswrapper[4830]: I0227 18:01:06.570407 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6lrl2\" (UniqueName: \"kubernetes.io/projected/1ea9f937-1d9d-4e38-87dd-98017339ecc1-kube-api-access-6lrl2\") on node \"crc\" DevicePath \"\"" Feb 27 18:01:06 crc kubenswrapper[4830]: I0227 18:01:06.570432 4830 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ea9f937-1d9d-4e38-87dd-98017339ecc1-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 18:01:06 crc kubenswrapper[4830]: I0227 18:01:06.847230 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29536921-bbwcz" event={"ID":"1ea9f937-1d9d-4e38-87dd-98017339ecc1","Type":"ContainerDied","Data":"110725e3e020e2d1dcccb715f9490a3f738da9b24af05253b5150361f4f3386f"} Feb 27 18:01:06 crc kubenswrapper[4830]: I0227 18:01:06.847277 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="110725e3e020e2d1dcccb715f9490a3f738da9b24af05253b5150361f4f3386f" Feb 27 18:01:06 crc kubenswrapper[4830]: I0227 18:01:06.847312 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29536921-bbwcz" Feb 27 18:01:16 crc kubenswrapper[4830]: E0227 18:01:16.767398 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus-operator\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/obo-prometheus-rhel9-operator@sha256:e7e5f4c5e8ab0ba298ef0295a7137d438a42eb177d9322212cde6ba8f367912a\\\"\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-x6smj" podUID="fb796dd0-1d3a-4037-a42a-7427293ea799" Feb 27 18:01:16 crc kubenswrapper[4830]: E0227 18:01:16.769839 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-wxtjs" podUID="14c02ecf-6b25-4162-9286-acabcecbd435" Feb 27 18:01:27 crc kubenswrapper[4830]: E0227 18:01:27.765995 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus-operator\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/obo-prometheus-rhel9-operator@sha256:e7e5f4c5e8ab0ba298ef0295a7137d438a42eb177d9322212cde6ba8f367912a\\\"\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-x6smj" podUID="fb796dd0-1d3a-4037-a42a-7427293ea799" Feb 27 18:01:31 crc kubenswrapper[4830]: E0227 18:01:31.765349 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-wxtjs" podUID="14c02ecf-6b25-4162-9286-acabcecbd435" Feb 27 18:01:33 crc kubenswrapper[4830]: I0227 18:01:33.160425 4830 patch_prober.go:28] interesting pod/machine-config-daemon-2tv5v 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 18:01:33 crc kubenswrapper[4830]: I0227 18:01:33.160835 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 18:01:41 crc kubenswrapper[4830]: E0227 18:01:41.766243 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus-operator\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/obo-prometheus-rhel9-operator@sha256:e7e5f4c5e8ab0ba298ef0295a7137d438a42eb177d9322212cde6ba8f367912a\\\"\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-x6smj" podUID="fb796dd0-1d3a-4037-a42a-7427293ea799" Feb 27 18:01:44 crc kubenswrapper[4830]: I0227 18:01:44.365881 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wxtjs" event={"ID":"14c02ecf-6b25-4162-9286-acabcecbd435","Type":"ContainerStarted","Data":"64883b9df98a8022ced07a17053933fb22ea519dc31d88cef1b9be1fd45d30e2"} Feb 27 18:01:45 crc kubenswrapper[4830]: I0227 18:01:45.387996 4830 generic.go:334] "Generic (PLEG): container finished" podID="14c02ecf-6b25-4162-9286-acabcecbd435" containerID="64883b9df98a8022ced07a17053933fb22ea519dc31d88cef1b9be1fd45d30e2" exitCode=0 Feb 27 18:01:45 crc kubenswrapper[4830]: I0227 18:01:45.388002 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wxtjs" 
event={"ID":"14c02ecf-6b25-4162-9286-acabcecbd435","Type":"ContainerDied","Data":"64883b9df98a8022ced07a17053933fb22ea519dc31d88cef1b9be1fd45d30e2"} Feb 27 18:01:46 crc kubenswrapper[4830]: I0227 18:01:46.403729 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wxtjs" event={"ID":"14c02ecf-6b25-4162-9286-acabcecbd435","Type":"ContainerStarted","Data":"af99243a01b6d26181bd0468d68e1d8d61eaa7e2c71f7a4fccc61890b0ceb126"} Feb 27 18:01:46 crc kubenswrapper[4830]: I0227 18:01:46.438389 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-wxtjs" podStartSLOduration=2.699281199 podStartE2EDuration="1m27.438364816s" podCreationTimestamp="2026-02-27 18:00:19 +0000 UTC" firstStartedPulling="2026-02-27 18:00:21.248089472 +0000 UTC m=+6817.337361975" lastFinishedPulling="2026-02-27 18:01:45.987173089 +0000 UTC m=+6902.076445592" observedRunningTime="2026-02-27 18:01:46.43018754 +0000 UTC m=+6902.519460033" watchObservedRunningTime="2026-02-27 18:01:46.438364816 +0000 UTC m=+6902.527637319" Feb 27 18:01:49 crc kubenswrapper[4830]: I0227 18:01:49.557700 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-wxtjs" Feb 27 18:01:49 crc kubenswrapper[4830]: I0227 18:01:49.558350 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-wxtjs" Feb 27 18:01:49 crc kubenswrapper[4830]: I0227 18:01:49.638476 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-wxtjs" Feb 27 18:01:56 crc kubenswrapper[4830]: E0227 18:01:56.765844 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus-operator\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/cluster-observability-operator/obo-prometheus-rhel9-operator@sha256:e7e5f4c5e8ab0ba298ef0295a7137d438a42eb177d9322212cde6ba8f367912a\\\"\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-x6smj" podUID="fb796dd0-1d3a-4037-a42a-7427293ea799" Feb 27 18:01:57 crc kubenswrapper[4830]: I0227 18:01:57.099667 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-create-lnc76"] Feb 27 18:01:57 crc kubenswrapper[4830]: I0227 18:01:57.111256 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-d74f-account-create-update-4bpbv"] Feb 27 18:01:57 crc kubenswrapper[4830]: I0227 18:01:57.125806 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-create-lnc76"] Feb 27 18:01:57 crc kubenswrapper[4830]: I0227 18:01:57.134379 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-d74f-account-create-update-4bpbv"] Feb 27 18:01:58 crc kubenswrapper[4830]: I0227 18:01:58.780775 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c62765c-1e54-4883-bb95-ae8b9727ace2" path="/var/lib/kubelet/pods/4c62765c-1e54-4883-bb95-ae8b9727ace2/volumes" Feb 27 18:01:58 crc kubenswrapper[4830]: I0227 18:01:58.782043 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69d24cdc-6ac8-49bc-aca6-81956b204c0b" path="/var/lib/kubelet/pods/69d24cdc-6ac8-49bc-aca6-81956b204c0b/volumes" Feb 27 18:01:59 crc kubenswrapper[4830]: I0227 18:01:59.619103 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-wxtjs" Feb 27 18:01:59 crc kubenswrapper[4830]: I0227 18:01:59.701519 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wxtjs"] Feb 27 18:02:00 crc kubenswrapper[4830]: I0227 18:02:00.166660 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536922-84cqt"] Feb 27 18:02:00 crc 
kubenswrapper[4830]: E0227 18:02:00.167762 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ea9f937-1d9d-4e38-87dd-98017339ecc1" containerName="keystone-cron" Feb 27 18:02:00 crc kubenswrapper[4830]: I0227 18:02:00.167785 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ea9f937-1d9d-4e38-87dd-98017339ecc1" containerName="keystone-cron" Feb 27 18:02:00 crc kubenswrapper[4830]: I0227 18:02:00.168203 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ea9f937-1d9d-4e38-87dd-98017339ecc1" containerName="keystone-cron" Feb 27 18:02:00 crc kubenswrapper[4830]: I0227 18:02:00.169289 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536922-84cqt" Feb 27 18:02:00 crc kubenswrapper[4830]: I0227 18:02:00.174591 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 18:02:00 crc kubenswrapper[4830]: I0227 18:02:00.174831 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 18:02:00 crc kubenswrapper[4830]: I0227 18:02:00.175036 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lmbtm" Feb 27 18:02:00 crc kubenswrapper[4830]: I0227 18:02:00.180594 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536922-84cqt"] Feb 27 18:02:00 crc kubenswrapper[4830]: I0227 18:02:00.181535 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vq62f\" (UniqueName: \"kubernetes.io/projected/d90c5d5a-0f24-48b8-b8c6-4652a1922a9e-kube-api-access-vq62f\") pod \"auto-csr-approver-29536922-84cqt\" (UID: \"d90c5d5a-0f24-48b8-b8c6-4652a1922a9e\") " pod="openshift-infra/auto-csr-approver-29536922-84cqt" Feb 27 18:02:00 crc kubenswrapper[4830]: I0227 18:02:00.284159 4830 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vq62f\" (UniqueName: \"kubernetes.io/projected/d90c5d5a-0f24-48b8-b8c6-4652a1922a9e-kube-api-access-vq62f\") pod \"auto-csr-approver-29536922-84cqt\" (UID: \"d90c5d5a-0f24-48b8-b8c6-4652a1922a9e\") " pod="openshift-infra/auto-csr-approver-29536922-84cqt" Feb 27 18:02:00 crc kubenswrapper[4830]: I0227 18:02:00.312046 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vq62f\" (UniqueName: \"kubernetes.io/projected/d90c5d5a-0f24-48b8-b8c6-4652a1922a9e-kube-api-access-vq62f\") pod \"auto-csr-approver-29536922-84cqt\" (UID: \"d90c5d5a-0f24-48b8-b8c6-4652a1922a9e\") " pod="openshift-infra/auto-csr-approver-29536922-84cqt" Feb 27 18:02:00 crc kubenswrapper[4830]: I0227 18:02:00.524309 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536922-84cqt" Feb 27 18:02:00 crc kubenswrapper[4830]: I0227 18:02:00.564007 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-wxtjs" podUID="14c02ecf-6b25-4162-9286-acabcecbd435" containerName="registry-server" containerID="cri-o://af99243a01b6d26181bd0468d68e1d8d61eaa7e2c71f7a4fccc61890b0ceb126" gracePeriod=2 Feb 27 18:02:00 crc kubenswrapper[4830]: I0227 18:02:00.959386 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536922-84cqt"] Feb 27 18:02:01 crc kubenswrapper[4830]: I0227 18:02:01.046604 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wxtjs" Feb 27 18:02:01 crc kubenswrapper[4830]: I0227 18:02:01.213716 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dn9pp\" (UniqueName: \"kubernetes.io/projected/14c02ecf-6b25-4162-9286-acabcecbd435-kube-api-access-dn9pp\") pod \"14c02ecf-6b25-4162-9286-acabcecbd435\" (UID: \"14c02ecf-6b25-4162-9286-acabcecbd435\") " Feb 27 18:02:01 crc kubenswrapper[4830]: I0227 18:02:01.213888 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14c02ecf-6b25-4162-9286-acabcecbd435-utilities\") pod \"14c02ecf-6b25-4162-9286-acabcecbd435\" (UID: \"14c02ecf-6b25-4162-9286-acabcecbd435\") " Feb 27 18:02:01 crc kubenswrapper[4830]: I0227 18:02:01.214121 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14c02ecf-6b25-4162-9286-acabcecbd435-catalog-content\") pod \"14c02ecf-6b25-4162-9286-acabcecbd435\" (UID: \"14c02ecf-6b25-4162-9286-acabcecbd435\") " Feb 27 18:02:01 crc kubenswrapper[4830]: I0227 18:02:01.214752 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14c02ecf-6b25-4162-9286-acabcecbd435-utilities" (OuterVolumeSpecName: "utilities") pod "14c02ecf-6b25-4162-9286-acabcecbd435" (UID: "14c02ecf-6b25-4162-9286-acabcecbd435"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 18:02:01 crc kubenswrapper[4830]: I0227 18:02:01.215709 4830 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14c02ecf-6b25-4162-9286-acabcecbd435-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 18:02:01 crc kubenswrapper[4830]: I0227 18:02:01.219270 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14c02ecf-6b25-4162-9286-acabcecbd435-kube-api-access-dn9pp" (OuterVolumeSpecName: "kube-api-access-dn9pp") pod "14c02ecf-6b25-4162-9286-acabcecbd435" (UID: "14c02ecf-6b25-4162-9286-acabcecbd435"). InnerVolumeSpecName "kube-api-access-dn9pp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:02:01 crc kubenswrapper[4830]: I0227 18:02:01.274913 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14c02ecf-6b25-4162-9286-acabcecbd435-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "14c02ecf-6b25-4162-9286-acabcecbd435" (UID: "14c02ecf-6b25-4162-9286-acabcecbd435"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 18:02:01 crc kubenswrapper[4830]: I0227 18:02:01.319504 4830 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14c02ecf-6b25-4162-9286-acabcecbd435-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 18:02:01 crc kubenswrapper[4830]: I0227 18:02:01.319596 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dn9pp\" (UniqueName: \"kubernetes.io/projected/14c02ecf-6b25-4162-9286-acabcecbd435-kube-api-access-dn9pp\") on node \"crc\" DevicePath \"\"" Feb 27 18:02:01 crc kubenswrapper[4830]: I0227 18:02:01.579414 4830 generic.go:334] "Generic (PLEG): container finished" podID="14c02ecf-6b25-4162-9286-acabcecbd435" containerID="af99243a01b6d26181bd0468d68e1d8d61eaa7e2c71f7a4fccc61890b0ceb126" exitCode=0 Feb 27 18:02:01 crc kubenswrapper[4830]: I0227 18:02:01.579544 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wxtjs" event={"ID":"14c02ecf-6b25-4162-9286-acabcecbd435","Type":"ContainerDied","Data":"af99243a01b6d26181bd0468d68e1d8d61eaa7e2c71f7a4fccc61890b0ceb126"} Feb 27 18:02:01 crc kubenswrapper[4830]: I0227 18:02:01.579586 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wxtjs" event={"ID":"14c02ecf-6b25-4162-9286-acabcecbd435","Type":"ContainerDied","Data":"348d8da92dc75981a74b4e173a37bb843668a089509fe0ea2ac2b16eaaf14b48"} Feb 27 18:02:01 crc kubenswrapper[4830]: I0227 18:02:01.579614 4830 scope.go:117] "RemoveContainer" containerID="af99243a01b6d26181bd0468d68e1d8d61eaa7e2c71f7a4fccc61890b0ceb126" Feb 27 18:02:01 crc kubenswrapper[4830]: I0227 18:02:01.579817 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wxtjs" Feb 27 18:02:01 crc kubenswrapper[4830]: I0227 18:02:01.584411 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536922-84cqt" event={"ID":"d90c5d5a-0f24-48b8-b8c6-4652a1922a9e","Type":"ContainerStarted","Data":"d5e66dc87d19811dc35128db8ac05a826c715f7979d532f631af87fed8466d0d"} Feb 27 18:02:01 crc kubenswrapper[4830]: I0227 18:02:01.628839 4830 scope.go:117] "RemoveContainer" containerID="64883b9df98a8022ced07a17053933fb22ea519dc31d88cef1b9be1fd45d30e2" Feb 27 18:02:01 crc kubenswrapper[4830]: I0227 18:02:01.638512 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wxtjs"] Feb 27 18:02:01 crc kubenswrapper[4830]: I0227 18:02:01.663571 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-wxtjs"] Feb 27 18:02:01 crc kubenswrapper[4830]: I0227 18:02:01.672484 4830 scope.go:117] "RemoveContainer" containerID="01e649fd31179e6e3b490c3a6ccd5c199a02f9e7a5d376676bee3c5a1ae6fee5" Feb 27 18:02:01 crc kubenswrapper[4830]: I0227 18:02:01.716851 4830 scope.go:117] "RemoveContainer" containerID="af99243a01b6d26181bd0468d68e1d8d61eaa7e2c71f7a4fccc61890b0ceb126" Feb 27 18:02:01 crc kubenswrapper[4830]: E0227 18:02:01.717673 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af99243a01b6d26181bd0468d68e1d8d61eaa7e2c71f7a4fccc61890b0ceb126\": container with ID starting with af99243a01b6d26181bd0468d68e1d8d61eaa7e2c71f7a4fccc61890b0ceb126 not found: ID does not exist" containerID="af99243a01b6d26181bd0468d68e1d8d61eaa7e2c71f7a4fccc61890b0ceb126" Feb 27 18:02:01 crc kubenswrapper[4830]: I0227 18:02:01.717729 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af99243a01b6d26181bd0468d68e1d8d61eaa7e2c71f7a4fccc61890b0ceb126"} err="failed to 
get container status \"af99243a01b6d26181bd0468d68e1d8d61eaa7e2c71f7a4fccc61890b0ceb126\": rpc error: code = NotFound desc = could not find container \"af99243a01b6d26181bd0468d68e1d8d61eaa7e2c71f7a4fccc61890b0ceb126\": container with ID starting with af99243a01b6d26181bd0468d68e1d8d61eaa7e2c71f7a4fccc61890b0ceb126 not found: ID does not exist" Feb 27 18:02:01 crc kubenswrapper[4830]: I0227 18:02:01.717764 4830 scope.go:117] "RemoveContainer" containerID="64883b9df98a8022ced07a17053933fb22ea519dc31d88cef1b9be1fd45d30e2" Feb 27 18:02:01 crc kubenswrapper[4830]: E0227 18:02:01.718242 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"64883b9df98a8022ced07a17053933fb22ea519dc31d88cef1b9be1fd45d30e2\": container with ID starting with 64883b9df98a8022ced07a17053933fb22ea519dc31d88cef1b9be1fd45d30e2 not found: ID does not exist" containerID="64883b9df98a8022ced07a17053933fb22ea519dc31d88cef1b9be1fd45d30e2" Feb 27 18:02:01 crc kubenswrapper[4830]: I0227 18:02:01.718288 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64883b9df98a8022ced07a17053933fb22ea519dc31d88cef1b9be1fd45d30e2"} err="failed to get container status \"64883b9df98a8022ced07a17053933fb22ea519dc31d88cef1b9be1fd45d30e2\": rpc error: code = NotFound desc = could not find container \"64883b9df98a8022ced07a17053933fb22ea519dc31d88cef1b9be1fd45d30e2\": container with ID starting with 64883b9df98a8022ced07a17053933fb22ea519dc31d88cef1b9be1fd45d30e2 not found: ID does not exist" Feb 27 18:02:01 crc kubenswrapper[4830]: I0227 18:02:01.718317 4830 scope.go:117] "RemoveContainer" containerID="01e649fd31179e6e3b490c3a6ccd5c199a02f9e7a5d376676bee3c5a1ae6fee5" Feb 27 18:02:01 crc kubenswrapper[4830]: E0227 18:02:01.718800 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"01e649fd31179e6e3b490c3a6ccd5c199a02f9e7a5d376676bee3c5a1ae6fee5\": container with ID starting with 01e649fd31179e6e3b490c3a6ccd5c199a02f9e7a5d376676bee3c5a1ae6fee5 not found: ID does not exist" containerID="01e649fd31179e6e3b490c3a6ccd5c199a02f9e7a5d376676bee3c5a1ae6fee5" Feb 27 18:02:01 crc kubenswrapper[4830]: I0227 18:02:01.718833 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01e649fd31179e6e3b490c3a6ccd5c199a02f9e7a5d376676bee3c5a1ae6fee5"} err="failed to get container status \"01e649fd31179e6e3b490c3a6ccd5c199a02f9e7a5d376676bee3c5a1ae6fee5\": rpc error: code = NotFound desc = could not find container \"01e649fd31179e6e3b490c3a6ccd5c199a02f9e7a5d376676bee3c5a1ae6fee5\": container with ID starting with 01e649fd31179e6e3b490c3a6ccd5c199a02f9e7a5d376676bee3c5a1ae6fee5 not found: ID does not exist" Feb 27 18:02:02 crc kubenswrapper[4830]: E0227 18:02:02.044696 4830 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 27 18:02:02 crc kubenswrapper[4830]: E0227 18:02:02.044885 4830 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 18:02:02 crc kubenswrapper[4830]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 27 18:02:02 crc kubenswrapper[4830]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vq62f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29536922-84cqt_openshift-infra(d90c5d5a-0f24-48b8-b8c6-4652a1922a9e): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 27 18:02:02 crc kubenswrapper[4830]: > logger="UnhandledError" Feb 27 18:02:02 crc kubenswrapper[4830]: E0227 18:02:02.046128 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29536922-84cqt" podUID="d90c5d5a-0f24-48b8-b8c6-4652a1922a9e" Feb 27 18:02:02 crc kubenswrapper[4830]: E0227 18:02:02.593625 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" 
pod="openshift-infra/auto-csr-approver-29536922-84cqt" podUID="d90c5d5a-0f24-48b8-b8c6-4652a1922a9e" Feb 27 18:02:02 crc kubenswrapper[4830]: I0227 18:02:02.772873 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14c02ecf-6b25-4162-9286-acabcecbd435" path="/var/lib/kubelet/pods/14c02ecf-6b25-4162-9286-acabcecbd435/volumes" Feb 27 18:02:03 crc kubenswrapper[4830]: I0227 18:02:03.160378 4830 patch_prober.go:28] interesting pod/machine-config-daemon-2tv5v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 18:02:03 crc kubenswrapper[4830]: I0227 18:02:03.160470 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 18:02:11 crc kubenswrapper[4830]: E0227 18:02:11.766190 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus-operator\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/obo-prometheus-rhel9-operator@sha256:e7e5f4c5e8ab0ba298ef0295a7137d438a42eb177d9322212cde6ba8f367912a\\\"\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-x6smj" podUID="fb796dd0-1d3a-4037-a42a-7427293ea799" Feb 27 18:02:15 crc kubenswrapper[4830]: E0227 18:02:15.567831 4830 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 
500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 27 18:02:15 crc kubenswrapper[4830]: E0227 18:02:15.568401 4830 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 18:02:15 crc kubenswrapper[4830]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 27 18:02:15 crc kubenswrapper[4830]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vq62f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29536922-84cqt_openshift-infra(d90c5d5a-0f24-48b8-b8c6-4652a1922a9e): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 27 18:02:15 crc kubenswrapper[4830]: > logger="UnhandledError" Feb 27 18:02:15 crc kubenswrapper[4830]: E0227 18:02:15.570314 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from 
https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29536922-84cqt" podUID="d90c5d5a-0f24-48b8-b8c6-4652a1922a9e" Feb 27 18:02:19 crc kubenswrapper[4830]: I0227 18:02:19.058727 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-2tcsn"] Feb 27 18:02:19 crc kubenswrapper[4830]: I0227 18:02:19.080985 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-2tcsn"] Feb 27 18:02:20 crc kubenswrapper[4830]: I0227 18:02:20.780375 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="370be3ee-4c90-499b-a826-5b39169ac10a" path="/var/lib/kubelet/pods/370be3ee-4c90-499b-a826-5b39169ac10a/volumes" Feb 27 18:02:28 crc kubenswrapper[4830]: E0227 18:02:28.764365 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536922-84cqt" podUID="d90c5d5a-0f24-48b8-b8c6-4652a1922a9e" Feb 27 18:02:33 crc kubenswrapper[4830]: I0227 18:02:33.160762 4830 patch_prober.go:28] interesting pod/machine-config-daemon-2tv5v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 18:02:33 crc kubenswrapper[4830]: I0227 18:02:33.162139 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 18:02:33 crc 
kubenswrapper[4830]: I0227 18:02:33.162249 4830 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" Feb 27 18:02:33 crc kubenswrapper[4830]: I0227 18:02:33.163473 4830 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"364bac5e44d6ecef577235338aa01e0eab35896300d6d5c2d81ef312d7b04024"} pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 27 18:02:33 crc kubenswrapper[4830]: I0227 18:02:33.163592 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" containerName="machine-config-daemon" containerID="cri-o://364bac5e44d6ecef577235338aa01e0eab35896300d6d5c2d81ef312d7b04024" gracePeriod=600 Feb 27 18:02:33 crc kubenswrapper[4830]: I0227 18:02:33.986629 4830 generic.go:334] "Generic (PLEG): container finished" podID="00d6b7ce-4757-4275-8345-60c1b546ce25" containerID="364bac5e44d6ecef577235338aa01e0eab35896300d6d5c2d81ef312d7b04024" exitCode=0 Feb 27 18:02:33 crc kubenswrapper[4830]: I0227 18:02:33.989330 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" event={"ID":"00d6b7ce-4757-4275-8345-60c1b546ce25","Type":"ContainerDied","Data":"364bac5e44d6ecef577235338aa01e0eab35896300d6d5c2d81ef312d7b04024"} Feb 27 18:02:33 crc kubenswrapper[4830]: I0227 18:02:33.989518 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" event={"ID":"00d6b7ce-4757-4275-8345-60c1b546ce25","Type":"ContainerStarted","Data":"81ef18a8ceffa5c1cfa26cc002b47333287098e5e6cdb33647e44f342d553fc2"} Feb 27 18:02:33 crc kubenswrapper[4830]: I0227 
18:02:33.989719 4830 scope.go:117] "RemoveContainer" containerID="9c7600b1d02b30467a3c9249e6962a5faed8288686d8237218305c7cc4357171" Feb 27 18:02:43 crc kubenswrapper[4830]: E0227 18:02:43.604270 4830 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 27 18:02:43 crc kubenswrapper[4830]: E0227 18:02:43.604858 4830 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 18:02:43 crc kubenswrapper[4830]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 27 18:02:43 crc kubenswrapper[4830]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vq62f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29536922-84cqt_openshift-infra(d90c5d5a-0f24-48b8-b8c6-4652a1922a9e): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from 
https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 27 18:02:43 crc kubenswrapper[4830]: > logger="UnhandledError" Feb 27 18:02:43 crc kubenswrapper[4830]: E0227 18:02:43.606092 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29536922-84cqt" podUID="d90c5d5a-0f24-48b8-b8c6-4652a1922a9e" Feb 27 18:02:54 crc kubenswrapper[4830]: E0227 18:02:54.779476 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536922-84cqt" podUID="d90c5d5a-0f24-48b8-b8c6-4652a1922a9e" Feb 27 18:02:57 crc kubenswrapper[4830]: I0227 18:02:57.166584 4830 scope.go:117] "RemoveContainer" containerID="aa4edb17d0fc5db0eacc87959db200bad8ecd7e4d182baaa7b5f61d043ea4413" Feb 27 18:02:58 crc kubenswrapper[4830]: I0227 18:02:58.053101 4830 scope.go:117] "RemoveContainer" containerID="89440017ef91fbeaae35638eeed4966b5dbbd07cdbc63e99fffc1fbbc0e8ddae" Feb 27 18:02:58 crc kubenswrapper[4830]: I0227 18:02:58.113222 4830 scope.go:117] "RemoveContainer" containerID="2789089b2b7910526790c32e7c41f21b278ea5ba6260d31dbd2c81df1ad29faa" Feb 27 18:02:59 crc kubenswrapper[4830]: I0227 18:02:59.323294 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-x6smj" 
event={"ID":"fb796dd0-1d3a-4037-a42a-7427293ea799","Type":"ContainerStarted","Data":"c9e6241bdf6a4840826c3ba9fef67ea44de43386d6eec051cbb70423d4d2c61b"} Feb 27 18:03:01 crc kubenswrapper[4830]: I0227 18:03:01.703527 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-x6smj" podStartSLOduration=5.363499302 podStartE2EDuration="8m27.703511183s" podCreationTimestamp="2026-02-27 17:54:34 +0000 UTC" firstStartedPulling="2026-02-27 17:54:35.779682487 +0000 UTC m=+6471.868954950" lastFinishedPulling="2026-02-27 18:02:58.119694338 +0000 UTC m=+6974.208966831" observedRunningTime="2026-02-27 18:02:59.353278378 +0000 UTC m=+6975.442550881" watchObservedRunningTime="2026-02-27 18:03:01.703511183 +0000 UTC m=+6977.792783646" Feb 27 18:03:01 crc kubenswrapper[4830]: I0227 18:03:01.708391 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Feb 27 18:03:01 crc kubenswrapper[4830]: I0227 18:03:01.708576 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstackclient" podUID="5f45a253-e0e6-49aa-9c48-8c57b3639130" containerName="openstackclient" containerID="cri-o://453bd6cfd132d0dc0f8b96af15c32bd0a0d75846bad555fcb8e37140409ee65a" gracePeriod=2 Feb 27 18:03:01 crc kubenswrapper[4830]: I0227 18:03:01.724344 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Feb 27 18:03:01 crc kubenswrapper[4830]: I0227 18:03:01.870622 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Feb 27 18:03:01 crc kubenswrapper[4830]: E0227 18:03:01.871069 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f45a253-e0e6-49aa-9c48-8c57b3639130" containerName="openstackclient" Feb 27 18:03:01 crc kubenswrapper[4830]: I0227 18:03:01.871086 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f45a253-e0e6-49aa-9c48-8c57b3639130" containerName="openstackclient" 
Feb 27 18:03:01 crc kubenswrapper[4830]: E0227 18:03:01.871105 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14c02ecf-6b25-4162-9286-acabcecbd435" containerName="registry-server" Feb 27 18:03:01 crc kubenswrapper[4830]: I0227 18:03:01.871112 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="14c02ecf-6b25-4162-9286-acabcecbd435" containerName="registry-server" Feb 27 18:03:01 crc kubenswrapper[4830]: E0227 18:03:01.871129 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14c02ecf-6b25-4162-9286-acabcecbd435" containerName="extract-content" Feb 27 18:03:01 crc kubenswrapper[4830]: I0227 18:03:01.871135 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="14c02ecf-6b25-4162-9286-acabcecbd435" containerName="extract-content" Feb 27 18:03:01 crc kubenswrapper[4830]: E0227 18:03:01.871155 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14c02ecf-6b25-4162-9286-acabcecbd435" containerName="extract-utilities" Feb 27 18:03:01 crc kubenswrapper[4830]: I0227 18:03:01.871161 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="14c02ecf-6b25-4162-9286-acabcecbd435" containerName="extract-utilities" Feb 27 18:03:01 crc kubenswrapper[4830]: I0227 18:03:01.871355 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="14c02ecf-6b25-4162-9286-acabcecbd435" containerName="registry-server" Feb 27 18:03:01 crc kubenswrapper[4830]: I0227 18:03:01.871366 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f45a253-e0e6-49aa-9c48-8c57b3639130" containerName="openstackclient" Feb 27 18:03:01 crc kubenswrapper[4830]: I0227 18:03:01.872053 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 27 18:03:01 crc kubenswrapper[4830]: I0227 18:03:01.904207 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 27 18:03:01 crc kubenswrapper[4830]: I0227 18:03:01.944122 4830 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="5f45a253-e0e6-49aa-9c48-8c57b3639130" podUID="42745d06-1e64-4f81-a075-db86b6665a3e" Feb 27 18:03:01 crc kubenswrapper[4830]: I0227 18:03:01.962543 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/42745d06-1e64-4f81-a075-db86b6665a3e-openstack-config-secret\") pod \"openstackclient\" (UID: \"42745d06-1e64-4f81-a075-db86b6665a3e\") " pod="openstack/openstackclient" Feb 27 18:03:01 crc kubenswrapper[4830]: I0227 18:03:01.963289 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5c6sc\" (UniqueName: \"kubernetes.io/projected/42745d06-1e64-4f81-a075-db86b6665a3e-kube-api-access-5c6sc\") pod \"openstackclient\" (UID: \"42745d06-1e64-4f81-a075-db86b6665a3e\") " pod="openstack/openstackclient" Feb 27 18:03:01 crc kubenswrapper[4830]: I0227 18:03:01.963329 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/42745d06-1e64-4f81-a075-db86b6665a3e-openstack-config\") pod \"openstackclient\" (UID: \"42745d06-1e64-4f81-a075-db86b6665a3e\") " pod="openstack/openstackclient" Feb 27 18:03:02 crc kubenswrapper[4830]: I0227 18:03:02.065160 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5c6sc\" (UniqueName: \"kubernetes.io/projected/42745d06-1e64-4f81-a075-db86b6665a3e-kube-api-access-5c6sc\") pod \"openstackclient\" (UID: 
\"42745d06-1e64-4f81-a075-db86b6665a3e\") " pod="openstack/openstackclient" Feb 27 18:03:02 crc kubenswrapper[4830]: I0227 18:03:02.065535 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/42745d06-1e64-4f81-a075-db86b6665a3e-openstack-config\") pod \"openstackclient\" (UID: \"42745d06-1e64-4f81-a075-db86b6665a3e\") " pod="openstack/openstackclient" Feb 27 18:03:02 crc kubenswrapper[4830]: I0227 18:03:02.066373 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/42745d06-1e64-4f81-a075-db86b6665a3e-openstack-config\") pod \"openstackclient\" (UID: \"42745d06-1e64-4f81-a075-db86b6665a3e\") " pod="openstack/openstackclient" Feb 27 18:03:02 crc kubenswrapper[4830]: I0227 18:03:02.066476 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/42745d06-1e64-4f81-a075-db86b6665a3e-openstack-config-secret\") pod \"openstackclient\" (UID: \"42745d06-1e64-4f81-a075-db86b6665a3e\") " pod="openstack/openstackclient" Feb 27 18:03:02 crc kubenswrapper[4830]: I0227 18:03:02.082328 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/42745d06-1e64-4f81-a075-db86b6665a3e-openstack-config-secret\") pod \"openstackclient\" (UID: \"42745d06-1e64-4f81-a075-db86b6665a3e\") " pod="openstack/openstackclient" Feb 27 18:03:02 crc kubenswrapper[4830]: I0227 18:03:02.103018 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 27 18:03:02 crc kubenswrapper[4830]: I0227 18:03:02.104430 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 27 18:03:02 crc kubenswrapper[4830]: I0227 18:03:02.110786 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-9vgrv" Feb 27 18:03:02 crc kubenswrapper[4830]: I0227 18:03:02.111751 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5c6sc\" (UniqueName: \"kubernetes.io/projected/42745d06-1e64-4f81-a075-db86b6665a3e-kube-api-access-5c6sc\") pod \"openstackclient\" (UID: \"42745d06-1e64-4f81-a075-db86b6665a3e\") " pod="openstack/openstackclient" Feb 27 18:03:02 crc kubenswrapper[4830]: I0227 18:03:02.133491 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 27 18:03:02 crc kubenswrapper[4830]: I0227 18:03:02.169770 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d46lr\" (UniqueName: \"kubernetes.io/projected/8caaa2c3-eb20-4f5c-8a28-09d2f8c64fc4-kube-api-access-d46lr\") pod \"kube-state-metrics-0\" (UID: \"8caaa2c3-eb20-4f5c-8a28-09d2f8c64fc4\") " pod="openstack/kube-state-metrics-0" Feb 27 18:03:02 crc kubenswrapper[4830]: I0227 18:03:02.226615 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 27 18:03:02 crc kubenswrapper[4830]: I0227 18:03:02.276497 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d46lr\" (UniqueName: \"kubernetes.io/projected/8caaa2c3-eb20-4f5c-8a28-09d2f8c64fc4-kube-api-access-d46lr\") pod \"kube-state-metrics-0\" (UID: \"8caaa2c3-eb20-4f5c-8a28-09d2f8c64fc4\") " pod="openstack/kube-state-metrics-0" Feb 27 18:03:02 crc kubenswrapper[4830]: I0227 18:03:02.325401 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d46lr\" (UniqueName: \"kubernetes.io/projected/8caaa2c3-eb20-4f5c-8a28-09d2f8c64fc4-kube-api-access-d46lr\") pod \"kube-state-metrics-0\" (UID: \"8caaa2c3-eb20-4f5c-8a28-09d2f8c64fc4\") " pod="openstack/kube-state-metrics-0" Feb 27 18:03:02 crc kubenswrapper[4830]: I0227 18:03:02.506015 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 27 18:03:02 crc kubenswrapper[4830]: I0227 18:03:02.996989 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/alertmanager-metric-storage-0"] Feb 27 18:03:02 crc kubenswrapper[4830]: I0227 18:03:02.999361 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/alertmanager-metric-storage-0" Feb 27 18:03:03 crc kubenswrapper[4830]: I0227 18:03:03.003168 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-alertmanager-dockercfg-sw6zw" Feb 27 18:03:03 crc kubenswrapper[4830]: I0227 18:03:03.003361 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-web-config" Feb 27 18:03:03 crc kubenswrapper[4830]: I0227 18:03:03.003483 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-cluster-tls-config" Feb 27 18:03:03 crc kubenswrapper[4830]: I0227 18:03:03.003630 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-tls-assets-0" Feb 27 18:03:03 crc kubenswrapper[4830]: I0227 18:03:03.010080 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-generated" Feb 27 18:03:03 crc kubenswrapper[4830]: I0227 18:03:03.016303 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/alertmanager-metric-storage-0"] Feb 27 18:03:03 crc kubenswrapper[4830]: I0227 18:03:03.096054 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zv45l\" (UniqueName: \"kubernetes.io/projected/8608d556-6b34-4ab2-b676-007c65e0d359-kube-api-access-zv45l\") pod \"alertmanager-metric-storage-0\" (UID: \"8608d556-6b34-4ab2-b676-007c65e0d359\") " pod="openstack/alertmanager-metric-storage-0" Feb 27 18:03:03 crc kubenswrapper[4830]: I0227 18:03:03.096095 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/8608d556-6b34-4ab2-b676-007c65e0d359-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"8608d556-6b34-4ab2-b676-007c65e0d359\") " pod="openstack/alertmanager-metric-storage-0" Feb 27 
18:03:03 crc kubenswrapper[4830]: I0227 18:03:03.096143 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/8608d556-6b34-4ab2-b676-007c65e0d359-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"8608d556-6b34-4ab2-b676-007c65e0d359\") " pod="openstack/alertmanager-metric-storage-0" Feb 27 18:03:03 crc kubenswrapper[4830]: I0227 18:03:03.096182 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/8608d556-6b34-4ab2-b676-007c65e0d359-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"8608d556-6b34-4ab2-b676-007c65e0d359\") " pod="openstack/alertmanager-metric-storage-0" Feb 27 18:03:03 crc kubenswrapper[4830]: I0227 18:03:03.096246 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/8608d556-6b34-4ab2-b676-007c65e0d359-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"8608d556-6b34-4ab2-b676-007c65e0d359\") " pod="openstack/alertmanager-metric-storage-0" Feb 27 18:03:03 crc kubenswrapper[4830]: I0227 18:03:03.096308 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/8608d556-6b34-4ab2-b676-007c65e0d359-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"8608d556-6b34-4ab2-b676-007c65e0d359\") " pod="openstack/alertmanager-metric-storage-0" Feb 27 18:03:03 crc kubenswrapper[4830]: I0227 18:03:03.096361 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/8608d556-6b34-4ab2-b676-007c65e0d359-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: 
\"8608d556-6b34-4ab2-b676-007c65e0d359\") " pod="openstack/alertmanager-metric-storage-0" Feb 27 18:03:03 crc kubenswrapper[4830]: I0227 18:03:03.147242 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 27 18:03:03 crc kubenswrapper[4830]: I0227 18:03:03.165113 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 27 18:03:03 crc kubenswrapper[4830]: W0227 18:03:03.166311 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8caaa2c3_eb20_4f5c_8a28_09d2f8c64fc4.slice/crio-6cad98fafa2be820aac07835321ea1b3fb90909ab3584c1a1566c231a6fd1480 WatchSource:0}: Error finding container 6cad98fafa2be820aac07835321ea1b3fb90909ab3584c1a1566c231a6fd1480: Status 404 returned error can't find the container with id 6cad98fafa2be820aac07835321ea1b3fb90909ab3584c1a1566c231a6fd1480 Feb 27 18:03:03 crc kubenswrapper[4830]: I0227 18:03:03.198611 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/8608d556-6b34-4ab2-b676-007c65e0d359-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"8608d556-6b34-4ab2-b676-007c65e0d359\") " pod="openstack/alertmanager-metric-storage-0" Feb 27 18:03:03 crc kubenswrapper[4830]: I0227 18:03:03.199059 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/8608d556-6b34-4ab2-b676-007c65e0d359-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"8608d556-6b34-4ab2-b676-007c65e0d359\") " pod="openstack/alertmanager-metric-storage-0" Feb 27 18:03:03 crc kubenswrapper[4830]: I0227 18:03:03.199106 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/8608d556-6b34-4ab2-b676-007c65e0d359-cluster-tls-config\") pod 
\"alertmanager-metric-storage-0\" (UID: \"8608d556-6b34-4ab2-b676-007c65e0d359\") " pod="openstack/alertmanager-metric-storage-0" Feb 27 18:03:03 crc kubenswrapper[4830]: I0227 18:03:03.199137 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zv45l\" (UniqueName: \"kubernetes.io/projected/8608d556-6b34-4ab2-b676-007c65e0d359-kube-api-access-zv45l\") pod \"alertmanager-metric-storage-0\" (UID: \"8608d556-6b34-4ab2-b676-007c65e0d359\") " pod="openstack/alertmanager-metric-storage-0" Feb 27 18:03:03 crc kubenswrapper[4830]: I0227 18:03:03.199156 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/8608d556-6b34-4ab2-b676-007c65e0d359-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"8608d556-6b34-4ab2-b676-007c65e0d359\") " pod="openstack/alertmanager-metric-storage-0" Feb 27 18:03:03 crc kubenswrapper[4830]: I0227 18:03:03.199189 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/8608d556-6b34-4ab2-b676-007c65e0d359-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"8608d556-6b34-4ab2-b676-007c65e0d359\") " pod="openstack/alertmanager-metric-storage-0" Feb 27 18:03:03 crc kubenswrapper[4830]: I0227 18:03:03.199221 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/8608d556-6b34-4ab2-b676-007c65e0d359-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"8608d556-6b34-4ab2-b676-007c65e0d359\") " pod="openstack/alertmanager-metric-storage-0" Feb 27 18:03:03 crc kubenswrapper[4830]: I0227 18:03:03.202472 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-metric-storage-db\" (UniqueName: 
\"kubernetes.io/empty-dir/8608d556-6b34-4ab2-b676-007c65e0d359-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"8608d556-6b34-4ab2-b676-007c65e0d359\") " pod="openstack/alertmanager-metric-storage-0" Feb 27 18:03:03 crc kubenswrapper[4830]: I0227 18:03:03.204686 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/8608d556-6b34-4ab2-b676-007c65e0d359-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"8608d556-6b34-4ab2-b676-007c65e0d359\") " pod="openstack/alertmanager-metric-storage-0" Feb 27 18:03:03 crc kubenswrapper[4830]: I0227 18:03:03.206869 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/8608d556-6b34-4ab2-b676-007c65e0d359-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"8608d556-6b34-4ab2-b676-007c65e0d359\") " pod="openstack/alertmanager-metric-storage-0" Feb 27 18:03:03 crc kubenswrapper[4830]: I0227 18:03:03.209695 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/8608d556-6b34-4ab2-b676-007c65e0d359-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"8608d556-6b34-4ab2-b676-007c65e0d359\") " pod="openstack/alertmanager-metric-storage-0" Feb 27 18:03:03 crc kubenswrapper[4830]: I0227 18:03:03.211602 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/8608d556-6b34-4ab2-b676-007c65e0d359-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"8608d556-6b34-4ab2-b676-007c65e0d359\") " pod="openstack/alertmanager-metric-storage-0" Feb 27 18:03:03 crc kubenswrapper[4830]: I0227 18:03:03.211685 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/8608d556-6b34-4ab2-b676-007c65e0d359-web-config\") pod 
\"alertmanager-metric-storage-0\" (UID: \"8608d556-6b34-4ab2-b676-007c65e0d359\") " pod="openstack/alertmanager-metric-storage-0" Feb 27 18:03:03 crc kubenswrapper[4830]: I0227 18:03:03.233672 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zv45l\" (UniqueName: \"kubernetes.io/projected/8608d556-6b34-4ab2-b676-007c65e0d359-kube-api-access-zv45l\") pod \"alertmanager-metric-storage-0\" (UID: \"8608d556-6b34-4ab2-b676-007c65e0d359\") " pod="openstack/alertmanager-metric-storage-0" Feb 27 18:03:03 crc kubenswrapper[4830]: I0227 18:03:03.350852 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/alertmanager-metric-storage-0" Feb 27 18:03:03 crc kubenswrapper[4830]: I0227 18:03:03.412375 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"42745d06-1e64-4f81-a075-db86b6665a3e","Type":"ContainerStarted","Data":"b73a77f8d9491b32bdfdc7e08d2b352cd91f82bf491a54bfc509371a43a9fc59"} Feb 27 18:03:03 crc kubenswrapper[4830]: I0227 18:03:03.413416 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"8caaa2c3-eb20-4f5c-8a28-09d2f8c64fc4","Type":"ContainerStarted","Data":"6cad98fafa2be820aac07835321ea1b3fb90909ab3584c1a1566c231a6fd1480"} Feb 27 18:03:03 crc kubenswrapper[4830]: I0227 18:03:03.433939 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 27 18:03:03 crc kubenswrapper[4830]: I0227 18:03:03.436678 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 27 18:03:03 crc kubenswrapper[4830]: I0227 18:03:03.439105 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Feb 27 18:03:03 crc kubenswrapper[4830]: I0227 18:03:03.439248 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Feb 27 18:03:03 crc kubenswrapper[4830]: I0227 18:03:03.439385 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Feb 27 18:03:03 crc kubenswrapper[4830]: I0227 18:03:03.439503 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Feb 27 18:03:03 crc kubenswrapper[4830]: I0227 18:03:03.440197 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Feb 27 18:03:03 crc kubenswrapper[4830]: I0227 18:03:03.440315 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-xrb7d" Feb 27 18:03:03 crc kubenswrapper[4830]: I0227 18:03:03.440439 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Feb 27 18:03:03 crc kubenswrapper[4830]: I0227 18:03:03.440561 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Feb 27 18:03:03 crc kubenswrapper[4830]: I0227 18:03:03.469731 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 27 18:03:03 crc kubenswrapper[4830]: I0227 18:03:03.505091 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/75bcbe49-556d-4af7-9506-514c14ec8d9e-tls-assets\") pod 
\"prometheus-metric-storage-0\" (UID: \"75bcbe49-556d-4af7-9506-514c14ec8d9e\") " pod="openstack/prometheus-metric-storage-0" Feb 27 18:03:03 crc kubenswrapper[4830]: I0227 18:03:03.505142 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/75bcbe49-556d-4af7-9506-514c14ec8d9e-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"75bcbe49-556d-4af7-9506-514c14ec8d9e\") " pod="openstack/prometheus-metric-storage-0" Feb 27 18:03:03 crc kubenswrapper[4830]: I0227 18:03:03.505171 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/75bcbe49-556d-4af7-9506-514c14ec8d9e-config\") pod \"prometheus-metric-storage-0\" (UID: \"75bcbe49-556d-4af7-9506-514c14ec8d9e\") " pod="openstack/prometheus-metric-storage-0" Feb 27 18:03:03 crc kubenswrapper[4830]: I0227 18:03:03.505196 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/75bcbe49-556d-4af7-9506-514c14ec8d9e-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"75bcbe49-556d-4af7-9506-514c14ec8d9e\") " pod="openstack/prometheus-metric-storage-0" Feb 27 18:03:03 crc kubenswrapper[4830]: I0227 18:03:03.505519 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6l8v\" (UniqueName: \"kubernetes.io/projected/75bcbe49-556d-4af7-9506-514c14ec8d9e-kube-api-access-w6l8v\") pod \"prometheus-metric-storage-0\" (UID: \"75bcbe49-556d-4af7-9506-514c14ec8d9e\") " pod="openstack/prometheus-metric-storage-0" Feb 27 18:03:03 crc kubenswrapper[4830]: I0227 18:03:03.505777 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-e969e7c4-d5ee-4397-833c-eda11e237a73\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e969e7c4-d5ee-4397-833c-eda11e237a73\") pod \"prometheus-metric-storage-0\" (UID: \"75bcbe49-556d-4af7-9506-514c14ec8d9e\") " pod="openstack/prometheus-metric-storage-0" Feb 27 18:03:03 crc kubenswrapper[4830]: I0227 18:03:03.505819 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/75bcbe49-556d-4af7-9506-514c14ec8d9e-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"75bcbe49-556d-4af7-9506-514c14ec8d9e\") " pod="openstack/prometheus-metric-storage-0" Feb 27 18:03:03 crc kubenswrapper[4830]: I0227 18:03:03.505844 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/75bcbe49-556d-4af7-9506-514c14ec8d9e-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"75bcbe49-556d-4af7-9506-514c14ec8d9e\") " pod="openstack/prometheus-metric-storage-0" Feb 27 18:03:03 crc kubenswrapper[4830]: I0227 18:03:03.505861 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/75bcbe49-556d-4af7-9506-514c14ec8d9e-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"75bcbe49-556d-4af7-9506-514c14ec8d9e\") " pod="openstack/prometheus-metric-storage-0" Feb 27 18:03:03 crc kubenswrapper[4830]: I0227 18:03:03.506019 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/75bcbe49-556d-4af7-9506-514c14ec8d9e-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"75bcbe49-556d-4af7-9506-514c14ec8d9e\") " pod="openstack/prometheus-metric-storage-0" Feb 27 18:03:03 crc 
kubenswrapper[4830]: I0227 18:03:03.608200 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/75bcbe49-556d-4af7-9506-514c14ec8d9e-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"75bcbe49-556d-4af7-9506-514c14ec8d9e\") " pod="openstack/prometheus-metric-storage-0" Feb 27 18:03:03 crc kubenswrapper[4830]: I0227 18:03:03.608612 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/75bcbe49-556d-4af7-9506-514c14ec8d9e-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"75bcbe49-556d-4af7-9506-514c14ec8d9e\") " pod="openstack/prometheus-metric-storage-0" Feb 27 18:03:03 crc kubenswrapper[4830]: I0227 18:03:03.608634 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/75bcbe49-556d-4af7-9506-514c14ec8d9e-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"75bcbe49-556d-4af7-9506-514c14ec8d9e\") " pod="openstack/prometheus-metric-storage-0" Feb 27 18:03:03 crc kubenswrapper[4830]: I0227 18:03:03.608703 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/75bcbe49-556d-4af7-9506-514c14ec8d9e-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"75bcbe49-556d-4af7-9506-514c14ec8d9e\") " pod="openstack/prometheus-metric-storage-0" Feb 27 18:03:03 crc kubenswrapper[4830]: I0227 18:03:03.608770 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/75bcbe49-556d-4af7-9506-514c14ec8d9e-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"75bcbe49-556d-4af7-9506-514c14ec8d9e\") " 
pod="openstack/prometheus-metric-storage-0" Feb 27 18:03:03 crc kubenswrapper[4830]: I0227 18:03:03.608790 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/75bcbe49-556d-4af7-9506-514c14ec8d9e-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"75bcbe49-556d-4af7-9506-514c14ec8d9e\") " pod="openstack/prometheus-metric-storage-0" Feb 27 18:03:03 crc kubenswrapper[4830]: I0227 18:03:03.608834 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/75bcbe49-556d-4af7-9506-514c14ec8d9e-config\") pod \"prometheus-metric-storage-0\" (UID: \"75bcbe49-556d-4af7-9506-514c14ec8d9e\") " pod="openstack/prometheus-metric-storage-0" Feb 27 18:03:03 crc kubenswrapper[4830]: I0227 18:03:03.608856 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/75bcbe49-556d-4af7-9506-514c14ec8d9e-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"75bcbe49-556d-4af7-9506-514c14ec8d9e\") " pod="openstack/prometheus-metric-storage-0" Feb 27 18:03:03 crc kubenswrapper[4830]: I0227 18:03:03.608935 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w6l8v\" (UniqueName: \"kubernetes.io/projected/75bcbe49-556d-4af7-9506-514c14ec8d9e-kube-api-access-w6l8v\") pod \"prometheus-metric-storage-0\" (UID: \"75bcbe49-556d-4af7-9506-514c14ec8d9e\") " pod="openstack/prometheus-metric-storage-0" Feb 27 18:03:03 crc kubenswrapper[4830]: I0227 18:03:03.609057 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-e969e7c4-d5ee-4397-833c-eda11e237a73\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e969e7c4-d5ee-4397-833c-eda11e237a73\") pod \"prometheus-metric-storage-0\" (UID: \"75bcbe49-556d-4af7-9506-514c14ec8d9e\") 
" pod="openstack/prometheus-metric-storage-0" Feb 27 18:03:03 crc kubenswrapper[4830]: I0227 18:03:03.609760 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/75bcbe49-556d-4af7-9506-514c14ec8d9e-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"75bcbe49-556d-4af7-9506-514c14ec8d9e\") " pod="openstack/prometheus-metric-storage-0" Feb 27 18:03:03 crc kubenswrapper[4830]: I0227 18:03:03.609977 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/75bcbe49-556d-4af7-9506-514c14ec8d9e-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"75bcbe49-556d-4af7-9506-514c14ec8d9e\") " pod="openstack/prometheus-metric-storage-0" Feb 27 18:03:03 crc kubenswrapper[4830]: I0227 18:03:03.610202 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/75bcbe49-556d-4af7-9506-514c14ec8d9e-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"75bcbe49-556d-4af7-9506-514c14ec8d9e\") " pod="openstack/prometheus-metric-storage-0" Feb 27 18:03:03 crc kubenswrapper[4830]: I0227 18:03:03.618909 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/75bcbe49-556d-4af7-9506-514c14ec8d9e-config\") pod \"prometheus-metric-storage-0\" (UID: \"75bcbe49-556d-4af7-9506-514c14ec8d9e\") " pod="openstack/prometheus-metric-storage-0" Feb 27 18:03:03 crc kubenswrapper[4830]: I0227 18:03:03.619261 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/75bcbe49-556d-4af7-9506-514c14ec8d9e-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: 
\"75bcbe49-556d-4af7-9506-514c14ec8d9e\") " pod="openstack/prometheus-metric-storage-0" Feb 27 18:03:03 crc kubenswrapper[4830]: I0227 18:03:03.620038 4830 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 27 18:03:03 crc kubenswrapper[4830]: I0227 18:03:03.620063 4830 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-e969e7c4-d5ee-4397-833c-eda11e237a73\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e969e7c4-d5ee-4397-833c-eda11e237a73\") pod \"prometheus-metric-storage-0\" (UID: \"75bcbe49-556d-4af7-9506-514c14ec8d9e\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b8a8068338ef4fb8e5f2673033cb849d637a5c36a340c2c78ea5ef0de54e248c/globalmount\"" pod="openstack/prometheus-metric-storage-0" Feb 27 18:03:03 crc kubenswrapper[4830]: I0227 18:03:03.621696 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/75bcbe49-556d-4af7-9506-514c14ec8d9e-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"75bcbe49-556d-4af7-9506-514c14ec8d9e\") " pod="openstack/prometheus-metric-storage-0" Feb 27 18:03:03 crc kubenswrapper[4830]: I0227 18:03:03.624664 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/75bcbe49-556d-4af7-9506-514c14ec8d9e-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"75bcbe49-556d-4af7-9506-514c14ec8d9e\") " pod="openstack/prometheus-metric-storage-0" Feb 27 18:03:03 crc kubenswrapper[4830]: I0227 18:03:03.626532 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/75bcbe49-556d-4af7-9506-514c14ec8d9e-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"75bcbe49-556d-4af7-9506-514c14ec8d9e\") " 
pod="openstack/prometheus-metric-storage-0" Feb 27 18:03:03 crc kubenswrapper[4830]: I0227 18:03:03.657477 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6l8v\" (UniqueName: \"kubernetes.io/projected/75bcbe49-556d-4af7-9506-514c14ec8d9e-kube-api-access-w6l8v\") pod \"prometheus-metric-storage-0\" (UID: \"75bcbe49-556d-4af7-9506-514c14ec8d9e\") " pod="openstack/prometheus-metric-storage-0" Feb 27 18:03:03 crc kubenswrapper[4830]: I0227 18:03:03.880630 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-e969e7c4-d5ee-4397-833c-eda11e237a73\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e969e7c4-d5ee-4397-833c-eda11e237a73\") pod \"prometheus-metric-storage-0\" (UID: \"75bcbe49-556d-4af7-9506-514c14ec8d9e\") " pod="openstack/prometheus-metric-storage-0" Feb 27 18:03:04 crc kubenswrapper[4830]: I0227 18:03:04.058893 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 27 18:03:04 crc kubenswrapper[4830]: W0227 18:03:04.168154 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8608d556_6b34_4ab2_b676_007c65e0d359.slice/crio-1f7e3aa8c75c042bb047256d7ad93baa2cdb0892a27d5c1211dcdc4dc9cd49e1 WatchSource:0}: Error finding container 1f7e3aa8c75c042bb047256d7ad93baa2cdb0892a27d5c1211dcdc4dc9cd49e1: Status 404 returned error can't find the container with id 1f7e3aa8c75c042bb047256d7ad93baa2cdb0892a27d5c1211dcdc4dc9cd49e1 Feb 27 18:03:04 crc kubenswrapper[4830]: I0227 18:03:04.212840 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/alertmanager-metric-storage-0"] Feb 27 18:03:04 crc kubenswrapper[4830]: I0227 18:03:04.353466 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 27 18:03:04 crc kubenswrapper[4830]: I0227 18:03:04.447327 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"8608d556-6b34-4ab2-b676-007c65e0d359","Type":"ContainerStarted","Data":"1f7e3aa8c75c042bb047256d7ad93baa2cdb0892a27d5c1211dcdc4dc9cd49e1"} Feb 27 18:03:04 crc kubenswrapper[4830]: I0227 18:03:04.460170 4830 generic.go:334] "Generic (PLEG): container finished" podID="5f45a253-e0e6-49aa-9c48-8c57b3639130" containerID="453bd6cfd132d0dc0f8b96af15c32bd0a0d75846bad555fcb8e37140409ee65a" exitCode=137 Feb 27 18:03:04 crc kubenswrapper[4830]: I0227 18:03:04.460278 4830 scope.go:117] "RemoveContainer" containerID="453bd6cfd132d0dc0f8b96af15c32bd0a0d75846bad555fcb8e37140409ee65a" Feb 27 18:03:04 crc kubenswrapper[4830]: I0227 18:03:04.460532 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 27 18:03:04 crc kubenswrapper[4830]: I0227 18:03:04.464722 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"42745d06-1e64-4f81-a075-db86b6665a3e","Type":"ContainerStarted","Data":"8ff2cccb36b03ccf7d1111b9cf259609bd52aa673f3b0ccee28857b9a101457c"} Feb 27 18:03:04 crc kubenswrapper[4830]: I0227 18:03:04.476773 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"8caaa2c3-eb20-4f5c-8a28-09d2f8c64fc4","Type":"ContainerStarted","Data":"74cabd0f71156245197e0bfa5d224262d0e258183254db5016734c7550df2268"} Feb 27 18:03:04 crc kubenswrapper[4830]: I0227 18:03:04.476932 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 27 18:03:04 crc kubenswrapper[4830]: I0227 18:03:04.505489 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=3.505468578 podStartE2EDuration="3.505468578s" 
podCreationTimestamp="2026-02-27 18:03:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 18:03:04.486426743 +0000 UTC m=+6980.575699206" watchObservedRunningTime="2026-02-27 18:03:04.505468578 +0000 UTC m=+6980.594741041" Feb 27 18:03:04 crc kubenswrapper[4830]: I0227 18:03:04.514573 4830 scope.go:117] "RemoveContainer" containerID="453bd6cfd132d0dc0f8b96af15c32bd0a0d75846bad555fcb8e37140409ee65a" Feb 27 18:03:04 crc kubenswrapper[4830]: E0227 18:03:04.515161 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"453bd6cfd132d0dc0f8b96af15c32bd0a0d75846bad555fcb8e37140409ee65a\": container with ID starting with 453bd6cfd132d0dc0f8b96af15c32bd0a0d75846bad555fcb8e37140409ee65a not found: ID does not exist" containerID="453bd6cfd132d0dc0f8b96af15c32bd0a0d75846bad555fcb8e37140409ee65a" Feb 27 18:03:04 crc kubenswrapper[4830]: I0227 18:03:04.515283 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"453bd6cfd132d0dc0f8b96af15c32bd0a0d75846bad555fcb8e37140409ee65a"} err="failed to get container status \"453bd6cfd132d0dc0f8b96af15c32bd0a0d75846bad555fcb8e37140409ee65a\": rpc error: code = NotFound desc = could not find container \"453bd6cfd132d0dc0f8b96af15c32bd0a0d75846bad555fcb8e37140409ee65a\": container with ID starting with 453bd6cfd132d0dc0f8b96af15c32bd0a0d75846bad555fcb8e37140409ee65a not found: ID does not exist" Feb 27 18:03:04 crc kubenswrapper[4830]: I0227 18:03:04.524438 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.132568363 podStartE2EDuration="2.524415301s" podCreationTimestamp="2026-02-27 18:03:02 +0000 UTC" firstStartedPulling="2026-02-27 18:03:03.170171496 +0000 UTC m=+6979.259443959" lastFinishedPulling="2026-02-27 18:03:03.562018434 +0000 UTC 
m=+6979.651290897" observedRunningTime="2026-02-27 18:03:04.509505834 +0000 UTC m=+6980.598778297" watchObservedRunningTime="2026-02-27 18:03:04.524415301 +0000 UTC m=+6980.613687764" Feb 27 18:03:04 crc kubenswrapper[4830]: I0227 18:03:04.557682 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/5f45a253-e0e6-49aa-9c48-8c57b3639130-openstack-config-secret\") pod \"5f45a253-e0e6-49aa-9c48-8c57b3639130\" (UID: \"5f45a253-e0e6-49aa-9c48-8c57b3639130\") " Feb 27 18:03:04 crc kubenswrapper[4830]: I0227 18:03:04.558367 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m4zz8\" (UniqueName: \"kubernetes.io/projected/5f45a253-e0e6-49aa-9c48-8c57b3639130-kube-api-access-m4zz8\") pod \"5f45a253-e0e6-49aa-9c48-8c57b3639130\" (UID: \"5f45a253-e0e6-49aa-9c48-8c57b3639130\") " Feb 27 18:03:04 crc kubenswrapper[4830]: I0227 18:03:04.558713 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/5f45a253-e0e6-49aa-9c48-8c57b3639130-openstack-config\") pod \"5f45a253-e0e6-49aa-9c48-8c57b3639130\" (UID: \"5f45a253-e0e6-49aa-9c48-8c57b3639130\") " Feb 27 18:03:04 crc kubenswrapper[4830]: I0227 18:03:04.564851 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f45a253-e0e6-49aa-9c48-8c57b3639130-kube-api-access-m4zz8" (OuterVolumeSpecName: "kube-api-access-m4zz8") pod "5f45a253-e0e6-49aa-9c48-8c57b3639130" (UID: "5f45a253-e0e6-49aa-9c48-8c57b3639130"). InnerVolumeSpecName "kube-api-access-m4zz8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:03:04 crc kubenswrapper[4830]: I0227 18:03:04.582931 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f45a253-e0e6-49aa-9c48-8c57b3639130-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "5f45a253-e0e6-49aa-9c48-8c57b3639130" (UID: "5f45a253-e0e6-49aa-9c48-8c57b3639130"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 18:03:04 crc kubenswrapper[4830]: I0227 18:03:04.623375 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f45a253-e0e6-49aa-9c48-8c57b3639130-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "5f45a253-e0e6-49aa-9c48-8c57b3639130" (UID: "5f45a253-e0e6-49aa-9c48-8c57b3639130"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 18:03:04 crc kubenswrapper[4830]: I0227 18:03:04.662582 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m4zz8\" (UniqueName: \"kubernetes.io/projected/5f45a253-e0e6-49aa-9c48-8c57b3639130-kube-api-access-m4zz8\") on node \"crc\" DevicePath \"\"" Feb 27 18:03:04 crc kubenswrapper[4830]: I0227 18:03:04.662637 4830 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/5f45a253-e0e6-49aa-9c48-8c57b3639130-openstack-config\") on node \"crc\" DevicePath \"\"" Feb 27 18:03:04 crc kubenswrapper[4830]: I0227 18:03:04.662651 4830 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/5f45a253-e0e6-49aa-9c48-8c57b3639130-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Feb 27 18:03:04 crc kubenswrapper[4830]: I0227 18:03:04.777066 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f45a253-e0e6-49aa-9c48-8c57b3639130" 
path="/var/lib/kubelet/pods/5f45a253-e0e6-49aa-9c48-8c57b3639130/volumes" Feb 27 18:03:04 crc kubenswrapper[4830]: I0227 18:03:04.789825 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 27 18:03:05 crc kubenswrapper[4830]: I0227 18:03:05.488347 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"75bcbe49-556d-4af7-9506-514c14ec8d9e","Type":"ContainerStarted","Data":"c1f22cfa6d14945b274b21c86ee9215f2d4429ea8f246668402e311ca22b02e6"} Feb 27 18:03:07 crc kubenswrapper[4830]: E0227 18:03:07.766939 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536922-84cqt" podUID="d90c5d5a-0f24-48b8-b8c6-4652a1922a9e" Feb 27 18:03:12 crc kubenswrapper[4830]: I0227 18:03:12.558676 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 27 18:03:18 crc kubenswrapper[4830]: E0227 18:03:18.764234 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536922-84cqt" podUID="d90c5d5a-0f24-48b8-b8c6-4652a1922a9e" Feb 27 18:03:31 crc kubenswrapper[4830]: I0227 18:03:31.845655 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536922-84cqt" event={"ID":"d90c5d5a-0f24-48b8-b8c6-4652a1922a9e","Type":"ContainerStarted","Data":"1fc3e6825266a3a414c721da57ca610cb415101243b2b48b18494b8a2d76c81d"} Feb 27 18:03:31 crc kubenswrapper[4830]: I0227 18:03:31.872614 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29536922-84cqt" 
podStartSLOduration=1.535476091 podStartE2EDuration="1m31.872581897s" podCreationTimestamp="2026-02-27 18:02:00 +0000 UTC" firstStartedPulling="2026-02-27 18:02:00.963119419 +0000 UTC m=+6917.052391882" lastFinishedPulling="2026-02-27 18:03:31.300225195 +0000 UTC m=+7007.389497688" observedRunningTime="2026-02-27 18:03:31.861918403 +0000 UTC m=+7007.951190876" watchObservedRunningTime="2026-02-27 18:03:31.872581897 +0000 UTC m=+7007.961854370" Feb 27 18:03:32 crc kubenswrapper[4830]: I0227 18:03:32.866241 4830 generic.go:334] "Generic (PLEG): container finished" podID="d90c5d5a-0f24-48b8-b8c6-4652a1922a9e" containerID="1fc3e6825266a3a414c721da57ca610cb415101243b2b48b18494b8a2d76c81d" exitCode=0 Feb 27 18:03:32 crc kubenswrapper[4830]: I0227 18:03:32.866391 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536922-84cqt" event={"ID":"d90c5d5a-0f24-48b8-b8c6-4652a1922a9e","Type":"ContainerDied","Data":"1fc3e6825266a3a414c721da57ca610cb415101243b2b48b18494b8a2d76c81d"} Feb 27 18:03:34 crc kubenswrapper[4830]: I0227 18:03:34.357631 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536922-84cqt" Feb 27 18:03:34 crc kubenswrapper[4830]: I0227 18:03:34.493902 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vq62f\" (UniqueName: \"kubernetes.io/projected/d90c5d5a-0f24-48b8-b8c6-4652a1922a9e-kube-api-access-vq62f\") pod \"d90c5d5a-0f24-48b8-b8c6-4652a1922a9e\" (UID: \"d90c5d5a-0f24-48b8-b8c6-4652a1922a9e\") " Feb 27 18:03:34 crc kubenswrapper[4830]: I0227 18:03:34.696280 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d90c5d5a-0f24-48b8-b8c6-4652a1922a9e-kube-api-access-vq62f" (OuterVolumeSpecName: "kube-api-access-vq62f") pod "d90c5d5a-0f24-48b8-b8c6-4652a1922a9e" (UID: "d90c5d5a-0f24-48b8-b8c6-4652a1922a9e"). 
InnerVolumeSpecName "kube-api-access-vq62f". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:03:34 crc kubenswrapper[4830]: I0227 18:03:34.699441 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vq62f\" (UniqueName: \"kubernetes.io/projected/d90c5d5a-0f24-48b8-b8c6-4652a1922a9e-kube-api-access-vq62f\") on node \"crc\" DevicePath \"\"" Feb 27 18:03:34 crc kubenswrapper[4830]: I0227 18:03:34.901937 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536922-84cqt" event={"ID":"d90c5d5a-0f24-48b8-b8c6-4652a1922a9e","Type":"ContainerDied","Data":"d5e66dc87d19811dc35128db8ac05a826c715f7979d532f631af87fed8466d0d"} Feb 27 18:03:34 crc kubenswrapper[4830]: I0227 18:03:34.902010 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d5e66dc87d19811dc35128db8ac05a826c715f7979d532f631af87fed8466d0d" Feb 27 18:03:34 crc kubenswrapper[4830]: I0227 18:03:34.902078 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536922-84cqt" Feb 27 18:03:34 crc kubenswrapper[4830]: I0227 18:03:34.966179 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536916-blsrw"] Feb 27 18:03:34 crc kubenswrapper[4830]: I0227 18:03:34.976635 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536916-blsrw"] Feb 27 18:03:36 crc kubenswrapper[4830]: I0227 18:03:36.785201 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9260de39-76c8-432d-9455-4e787911d8c7" path="/var/lib/kubelet/pods/9260de39-76c8-432d-9455-4e787911d8c7/volumes" Feb 27 18:03:37 crc kubenswrapper[4830]: I0227 18:03:37.960011 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"8608d556-6b34-4ab2-b676-007c65e0d359","Type":"ContainerStarted","Data":"c96cea3e27c69c89ed90fabc376fecce19752d44198573981a206a29d78e70c0"} Feb 27 18:03:38 crc kubenswrapper[4830]: I0227 18:03:38.973237 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"75bcbe49-556d-4af7-9506-514c14ec8d9e","Type":"ContainerStarted","Data":"63f4c97650f101718257cee1a13b22827d1ef3ced5db14ec45319dfbd01d9e60"} Feb 27 18:03:47 crc kubenswrapper[4830]: I0227 18:03:47.088991 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"8608d556-6b34-4ab2-b676-007c65e0d359","Type":"ContainerDied","Data":"c96cea3e27c69c89ed90fabc376fecce19752d44198573981a206a29d78e70c0"} Feb 27 18:03:47 crc kubenswrapper[4830]: I0227 18:03:47.088925 4830 generic.go:334] "Generic (PLEG): container finished" podID="8608d556-6b34-4ab2-b676-007c65e0d359" containerID="c96cea3e27c69c89ed90fabc376fecce19752d44198573981a206a29d78e70c0" exitCode=0 Feb 27 18:03:50 crc kubenswrapper[4830]: I0227 18:03:50.148859 4830 generic.go:334] "Generic (PLEG): container finished" 
podID="75bcbe49-556d-4af7-9506-514c14ec8d9e" containerID="63f4c97650f101718257cee1a13b22827d1ef3ced5db14ec45319dfbd01d9e60" exitCode=0 Feb 27 18:03:50 crc kubenswrapper[4830]: I0227 18:03:50.149003 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"75bcbe49-556d-4af7-9506-514c14ec8d9e","Type":"ContainerDied","Data":"63f4c97650f101718257cee1a13b22827d1ef3ced5db14ec45319dfbd01d9e60"} Feb 27 18:03:58 crc kubenswrapper[4830]: I0227 18:03:58.264460 4830 scope.go:117] "RemoveContainer" containerID="da93b37cf886804e6a0661b93d7072a6f631fd25a4f85cdf6068f5a8a083c68b" Feb 27 18:04:00 crc kubenswrapper[4830]: I0227 18:04:00.171354 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536924-p7752"] Feb 27 18:04:00 crc kubenswrapper[4830]: E0227 18:04:00.173164 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d90c5d5a-0f24-48b8-b8c6-4652a1922a9e" containerName="oc" Feb 27 18:04:00 crc kubenswrapper[4830]: I0227 18:04:00.173188 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="d90c5d5a-0f24-48b8-b8c6-4652a1922a9e" containerName="oc" Feb 27 18:04:00 crc kubenswrapper[4830]: I0227 18:04:00.173596 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="d90c5d5a-0f24-48b8-b8c6-4652a1922a9e" containerName="oc" Feb 27 18:04:00 crc kubenswrapper[4830]: I0227 18:04:00.177059 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536924-p7752" Feb 27 18:04:00 crc kubenswrapper[4830]: I0227 18:04:00.182421 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 18:04:00 crc kubenswrapper[4830]: I0227 18:04:00.182680 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 18:04:00 crc kubenswrapper[4830]: I0227 18:04:00.182906 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lmbtm" Feb 27 18:04:00 crc kubenswrapper[4830]: I0227 18:04:00.182914 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536924-p7752"] Feb 27 18:04:00 crc kubenswrapper[4830]: I0227 18:04:00.325579 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v57nw\" (UniqueName: \"kubernetes.io/projected/0ead8534-2b58-480e-9367-3aa26d44a876-kube-api-access-v57nw\") pod \"auto-csr-approver-29536924-p7752\" (UID: \"0ead8534-2b58-480e-9367-3aa26d44a876\") " pod="openshift-infra/auto-csr-approver-29536924-p7752" Feb 27 18:04:00 crc kubenswrapper[4830]: I0227 18:04:00.431298 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v57nw\" (UniqueName: \"kubernetes.io/projected/0ead8534-2b58-480e-9367-3aa26d44a876-kube-api-access-v57nw\") pod \"auto-csr-approver-29536924-p7752\" (UID: \"0ead8534-2b58-480e-9367-3aa26d44a876\") " pod="openshift-infra/auto-csr-approver-29536924-p7752" Feb 27 18:04:00 crc kubenswrapper[4830]: I0227 18:04:00.454673 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v57nw\" (UniqueName: \"kubernetes.io/projected/0ead8534-2b58-480e-9367-3aa26d44a876-kube-api-access-v57nw\") pod \"auto-csr-approver-29536924-p7752\" (UID: \"0ead8534-2b58-480e-9367-3aa26d44a876\") " 
pod="openshift-infra/auto-csr-approver-29536924-p7752" Feb 27 18:04:00 crc kubenswrapper[4830]: I0227 18:04:00.514872 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536924-p7752" Feb 27 18:04:01 crc kubenswrapper[4830]: I0227 18:04:01.049370 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536924-p7752"] Feb 27 18:04:01 crc kubenswrapper[4830]: I0227 18:04:01.269176 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536924-p7752" event={"ID":"0ead8534-2b58-480e-9367-3aa26d44a876","Type":"ContainerStarted","Data":"4ea3bc4bc42188a06c59f8b695358183586a947e4d630c11b44a69d4bacf8cff"} Feb 27 18:04:02 crc kubenswrapper[4830]: I0227 18:04:02.263302 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-lqb2w"] Feb 27 18:04:02 crc kubenswrapper[4830]: I0227 18:04:02.267068 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lqb2w" Feb 27 18:04:02 crc kubenswrapper[4830]: I0227 18:04:02.276192 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lqb2w"] Feb 27 18:04:02 crc kubenswrapper[4830]: I0227 18:04:02.385099 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0596772a-54ae-4d9e-9db4-5d7138bae51e-utilities\") pod \"redhat-marketplace-lqb2w\" (UID: \"0596772a-54ae-4d9e-9db4-5d7138bae51e\") " pod="openshift-marketplace/redhat-marketplace-lqb2w" Feb 27 18:04:02 crc kubenswrapper[4830]: I0227 18:04:02.385210 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0596772a-54ae-4d9e-9db4-5d7138bae51e-catalog-content\") pod \"redhat-marketplace-lqb2w\" (UID: \"0596772a-54ae-4d9e-9db4-5d7138bae51e\") " pod="openshift-marketplace/redhat-marketplace-lqb2w" Feb 27 18:04:02 crc kubenswrapper[4830]: I0227 18:04:02.385303 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jztz\" (UniqueName: \"kubernetes.io/projected/0596772a-54ae-4d9e-9db4-5d7138bae51e-kube-api-access-8jztz\") pod \"redhat-marketplace-lqb2w\" (UID: \"0596772a-54ae-4d9e-9db4-5d7138bae51e\") " pod="openshift-marketplace/redhat-marketplace-lqb2w" Feb 27 18:04:02 crc kubenswrapper[4830]: I0227 18:04:02.487230 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0596772a-54ae-4d9e-9db4-5d7138bae51e-utilities\") pod \"redhat-marketplace-lqb2w\" (UID: \"0596772a-54ae-4d9e-9db4-5d7138bae51e\") " pod="openshift-marketplace/redhat-marketplace-lqb2w" Feb 27 18:04:02 crc kubenswrapper[4830]: I0227 18:04:02.487333 4830 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0596772a-54ae-4d9e-9db4-5d7138bae51e-catalog-content\") pod \"redhat-marketplace-lqb2w\" (UID: \"0596772a-54ae-4d9e-9db4-5d7138bae51e\") " pod="openshift-marketplace/redhat-marketplace-lqb2w" Feb 27 18:04:02 crc kubenswrapper[4830]: I0227 18:04:02.487429 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8jztz\" (UniqueName: \"kubernetes.io/projected/0596772a-54ae-4d9e-9db4-5d7138bae51e-kube-api-access-8jztz\") pod \"redhat-marketplace-lqb2w\" (UID: \"0596772a-54ae-4d9e-9db4-5d7138bae51e\") " pod="openshift-marketplace/redhat-marketplace-lqb2w" Feb 27 18:04:02 crc kubenswrapper[4830]: I0227 18:04:02.488078 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0596772a-54ae-4d9e-9db4-5d7138bae51e-utilities\") pod \"redhat-marketplace-lqb2w\" (UID: \"0596772a-54ae-4d9e-9db4-5d7138bae51e\") " pod="openshift-marketplace/redhat-marketplace-lqb2w" Feb 27 18:04:02 crc kubenswrapper[4830]: I0227 18:04:02.488323 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0596772a-54ae-4d9e-9db4-5d7138bae51e-catalog-content\") pod \"redhat-marketplace-lqb2w\" (UID: \"0596772a-54ae-4d9e-9db4-5d7138bae51e\") " pod="openshift-marketplace/redhat-marketplace-lqb2w" Feb 27 18:04:02 crc kubenswrapper[4830]: I0227 18:04:02.514791 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8jztz\" (UniqueName: \"kubernetes.io/projected/0596772a-54ae-4d9e-9db4-5d7138bae51e-kube-api-access-8jztz\") pod \"redhat-marketplace-lqb2w\" (UID: \"0596772a-54ae-4d9e-9db4-5d7138bae51e\") " pod="openshift-marketplace/redhat-marketplace-lqb2w" Feb 27 18:04:02 crc kubenswrapper[4830]: I0227 18:04:02.601918 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lqb2w" Feb 27 18:04:03 crc kubenswrapper[4830]: I0227 18:04:03.120024 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lqb2w"] Feb 27 18:04:03 crc kubenswrapper[4830]: I0227 18:04:03.296012 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536924-p7752" event={"ID":"0ead8534-2b58-480e-9367-3aa26d44a876","Type":"ContainerStarted","Data":"73d497eb649e304ed03d6fbd993e8d97dd6c23c18aa5eb0096b8c72f39c60a21"} Feb 27 18:04:03 crc kubenswrapper[4830]: I0227 18:04:03.297380 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lqb2w" event={"ID":"0596772a-54ae-4d9e-9db4-5d7138bae51e","Type":"ContainerStarted","Data":"11364ea2192a8d2e3e0a631727f6e74af25a3212f51c9affb94c0235519e1bf3"} Feb 27 18:04:03 crc kubenswrapper[4830]: I0227 18:04:03.321808 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29536924-p7752" podStartSLOduration=1.97091448 podStartE2EDuration="3.321788435s" podCreationTimestamp="2026-02-27 18:04:00 +0000 UTC" firstStartedPulling="2026-02-27 18:04:01.058453676 +0000 UTC m=+7037.147726149" lastFinishedPulling="2026-02-27 18:04:02.409327641 +0000 UTC m=+7038.498600104" observedRunningTime="2026-02-27 18:04:03.312212846 +0000 UTC m=+7039.401485309" watchObservedRunningTime="2026-02-27 18:04:03.321788435 +0000 UTC m=+7039.411060898" Feb 27 18:04:04 crc kubenswrapper[4830]: I0227 18:04:04.308547 4830 generic.go:334] "Generic (PLEG): container finished" podID="0ead8534-2b58-480e-9367-3aa26d44a876" containerID="73d497eb649e304ed03d6fbd993e8d97dd6c23c18aa5eb0096b8c72f39c60a21" exitCode=0 Feb 27 18:04:04 crc kubenswrapper[4830]: I0227 18:04:04.308600 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536924-p7752" 
event={"ID":"0ead8534-2b58-480e-9367-3aa26d44a876","Type":"ContainerDied","Data":"73d497eb649e304ed03d6fbd993e8d97dd6c23c18aa5eb0096b8c72f39c60a21"} Feb 27 18:04:04 crc kubenswrapper[4830]: I0227 18:04:04.311928 4830 generic.go:334] "Generic (PLEG): container finished" podID="0596772a-54ae-4d9e-9db4-5d7138bae51e" containerID="0ef930de79a9a1b547dc8bd4599716487427c019dc19d5930b991e71bda0fae4" exitCode=0 Feb 27 18:04:04 crc kubenswrapper[4830]: I0227 18:04:04.311972 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lqb2w" event={"ID":"0596772a-54ae-4d9e-9db4-5d7138bae51e","Type":"ContainerDied","Data":"0ef930de79a9a1b547dc8bd4599716487427c019dc19d5930b991e71bda0fae4"} Feb 27 18:04:05 crc kubenswrapper[4830]: E0227 18:04:05.029788 4830 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-marketplace-index@sha256=e848a00af7690cfa41500b98e0e7a0b9738ce0af7b6b4fee3ea20e0838523c30/signature-2: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 27 18:04:05 crc kubenswrapper[4830]: E0227 18:04:05.029939 4830 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8jztz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-lqb2w_openshift-marketplace(0596772a-54ae-4d9e-9db4-5d7138bae51e): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-marketplace-index@sha256=e848a00af7690cfa41500b98e0e7a0b9738ce0af7b6b4fee3ea20e0838523c30/signature-2: status 500 (Internal Server Error)" logger="UnhandledError" Feb 27 18:04:05 crc kubenswrapper[4830]: E0227 18:04:05.031178 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: 
reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-marketplace-index@sha256=e848a00af7690cfa41500b98e0e7a0b9738ce0af7b6b4fee3ea20e0838523c30/signature-2: status 500 (Internal Server Error)\"" pod="openshift-marketplace/redhat-marketplace-lqb2w" podUID="0596772a-54ae-4d9e-9db4-5d7138bae51e" Feb 27 18:04:05 crc kubenswrapper[4830]: E0227 18:04:05.325577 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-lqb2w" podUID="0596772a-54ae-4d9e-9db4-5d7138bae51e" Feb 27 18:04:08 crc kubenswrapper[4830]: I0227 18:04:08.800427 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536924-p7752" Feb 27 18:04:08 crc kubenswrapper[4830]: I0227 18:04:08.936338 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v57nw\" (UniqueName: \"kubernetes.io/projected/0ead8534-2b58-480e-9367-3aa26d44a876-kube-api-access-v57nw\") pod \"0ead8534-2b58-480e-9367-3aa26d44a876\" (UID: \"0ead8534-2b58-480e-9367-3aa26d44a876\") " Feb 27 18:04:08 crc kubenswrapper[4830]: I0227 18:04:08.941022 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ead8534-2b58-480e-9367-3aa26d44a876-kube-api-access-v57nw" (OuterVolumeSpecName: "kube-api-access-v57nw") pod "0ead8534-2b58-480e-9367-3aa26d44a876" (UID: "0ead8534-2b58-480e-9367-3aa26d44a876"). InnerVolumeSpecName "kube-api-access-v57nw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:04:09 crc kubenswrapper[4830]: I0227 18:04:09.040570 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v57nw\" (UniqueName: \"kubernetes.io/projected/0ead8534-2b58-480e-9367-3aa26d44a876-kube-api-access-v57nw\") on node \"crc\" DevicePath \"\"" Feb 27 18:04:09 crc kubenswrapper[4830]: I0227 18:04:09.394877 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"75bcbe49-556d-4af7-9506-514c14ec8d9e","Type":"ContainerStarted","Data":"5d5205c544922a01f50c4b74379c0416a22e983c2a10c4ceea95d942bdeefad2"} Feb 27 18:04:09 crc kubenswrapper[4830]: I0227 18:04:09.397272 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536924-p7752" event={"ID":"0ead8534-2b58-480e-9367-3aa26d44a876","Type":"ContainerDied","Data":"4ea3bc4bc42188a06c59f8b695358183586a947e4d630c11b44a69d4bacf8cff"} Feb 27 18:04:09 crc kubenswrapper[4830]: I0227 18:04:09.397316 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ea3bc4bc42188a06c59f8b695358183586a947e4d630c11b44a69d4bacf8cff" Feb 27 18:04:09 crc kubenswrapper[4830]: I0227 18:04:09.397373 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536924-p7752" Feb 27 18:04:09 crc kubenswrapper[4830]: I0227 18:04:09.908050 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536918-j6j9d"] Feb 27 18:04:09 crc kubenswrapper[4830]: I0227 18:04:09.915859 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536918-j6j9d"] Feb 27 18:04:10 crc kubenswrapper[4830]: I0227 18:04:10.783139 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8018a2b4-d99d-40c0-bd20-b38c65447309" path="/var/lib/kubelet/pods/8018a2b4-d99d-40c0-bd20-b38c65447309/volumes" Feb 27 18:04:14 crc kubenswrapper[4830]: I0227 18:04:14.491906 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"75bcbe49-556d-4af7-9506-514c14ec8d9e","Type":"ContainerStarted","Data":"260ad7227896946e4c41c2026b6fb78a4686dbc9350a54df630187d5a03c63a0"} Feb 27 18:04:14 crc kubenswrapper[4830]: E0227 18:04:14.839434 4830 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/cluster-observability-operator/alertmanager-rhel9@sha256=b23d4d4796437a2d93a9ad40b24c1130bcf4315029983aa275426d01d2955388/signature-4: status 500 (Internal Server Error)" image="registry.redhat.io/cluster-observability-operator/alertmanager-rhel9@sha256:dc62889b883f597de91b5389cc52c84c607247d49a807693be2f688e4703dfc3" Feb 27 18:04:14 crc kubenswrapper[4830]: E0227 18:04:14.839753 4830 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:alertmanager,Image:registry.redhat.io/cluster-observability-operator/alertmanager-rhel9@sha256:dc62889b883f597de91b5389cc52c84c607247d49a807693be2f688e4703dfc3,Command:[],Args:[--config.file=/etc/alertmanager/config_out/alertmanager.env.yaml 
--storage.path=/alertmanager --data.retention=120h --cluster.listen-address=[$(POD_IP)]:9094 --web.listen-address=:9093 --web.route-prefix=/ --cluster.label=openstack/metric-storage --cluster.peer=alertmanager-metric-storage-0.alertmanager-operated:9094 --cluster.peer=alertmanager-metric-storage-1.alertmanager-operated:9094 --cluster.reconnect-timeout=5m --web.config.file=/etc/alertmanager/web_config/web-config.yaml],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:web,HostPort:0,ContainerPort:9093,Protocol:TCP,HostIP:,},ContainerPort{Name:mesh-tcp,HostPort:0,ContainerPort:9094,Protocol:TCP,HostIP:,},ContainerPort{Name:mesh-udp,HostPort:0,ContainerPort:9094,Protocol:UDP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{memory: {{209715200 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-volume,ReadOnly:false,MountPath:/etc/alertmanager/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-out,ReadOnly:true,MountPath:/etc/alertmanager/config_out,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tls-assets,ReadOnly:true,MountPath:/etc/alertmanager/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:alertmanager-metric-storage-db,ReadOnly:false,MountPath:/alertmanager,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:web-config,ReadOnly:true,MountPath:/etc/alertmanager/web_config/web-config.yaml,SubPath:web-config.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cluster-tls-config,ReadOnly:true,MountPath:/etc/alertmanager/cluster_tls_config/cluster-tls-config.yaml,SubPath:cluster-tls-config.yaml,MountPropagation:
nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zv45l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/-/healthy,Port:{1 0 web},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:3,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/-/ready,Port:{1 0 web},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:3,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod alertmanager-metric-storage-0_openstack(8608d556-6b34-4ab2-b676-007c65e0d359): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/cluster-observability-operator/alertmanager-rhel9@sha256=b23d4d4796437a2d93a9ad40b24c1130bcf4315029983aa275426d01d2955388/signature-4: status 500 (Internal Server Error)" logger="UnhandledError" Feb 27 18:04:16 crc kubenswrapper[4830]: E0227 18:04:16.577701 4830 log.go:32] "PullImage from image service failed" 
err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-marketplace-index@sha256=e848a00af7690cfa41500b98e0e7a0b9738ce0af7b6b4fee3ea20e0838523c30/signature-2: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 27 18:04:16 crc kubenswrapper[4830]: E0227 18:04:16.578897 4830 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8jztz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,Resi
zePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-lqb2w_openshift-marketplace(0596772a-54ae-4d9e-9db4-5d7138bae51e): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-marketplace-index@sha256=e848a00af7690cfa41500b98e0e7a0b9738ce0af7b6b4fee3ea20e0838523c30/signature-2: status 500 (Internal Server Error)" logger="UnhandledError" Feb 27 18:04:16 crc kubenswrapper[4830]: E0227 18:04:16.580312 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-marketplace-index@sha256=e848a00af7690cfa41500b98e0e7a0b9738ce0af7b6b4fee3ea20e0838523c30/signature-2: status 500 (Internal Server Error)\"" pod="openshift-marketplace/redhat-marketplace-lqb2w" podUID="0596772a-54ae-4d9e-9db4-5d7138bae51e" Feb 27 18:04:21 crc kubenswrapper[4830]: E0227 18:04:21.227530 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"alertmanager\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/cluster-observability-operator/alertmanager-rhel9@sha256=b23d4d4796437a2d93a9ad40b24c1130bcf4315029983aa275426d01d2955388/signature-4: status 500 (Internal Server Error)\"" pod="openstack/alertmanager-metric-storage-0" podUID="8608d556-6b34-4ab2-b676-007c65e0d359" Feb 27 18:04:21 crc kubenswrapper[4830]: I0227 18:04:21.588829 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"8608d556-6b34-4ab2-b676-007c65e0d359","Type":"ContainerStarted","Data":"30de8cf8a263a3011eaaf6c241babf053d03fc71e44735b4888b83193b370344"} Feb 27 18:04:21 crc 
kubenswrapper[4830]: E0227 18:04:21.593696 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"alertmanager\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/alertmanager-rhel9@sha256:dc62889b883f597de91b5389cc52c84c607247d49a807693be2f688e4703dfc3\\\"\"" pod="openstack/alertmanager-metric-storage-0" podUID="8608d556-6b34-4ab2-b676-007c65e0d359" Feb 27 18:04:29 crc kubenswrapper[4830]: E0227 18:04:29.765683 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-lqb2w" podUID="0596772a-54ae-4d9e-9db4-5d7138bae51e" Feb 27 18:04:33 crc kubenswrapper[4830]: I0227 18:04:33.160751 4830 patch_prober.go:28] interesting pod/machine-config-daemon-2tv5v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 18:04:33 crc kubenswrapper[4830]: I0227 18:04:33.161906 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 18:04:35 crc kubenswrapper[4830]: E0227 18:04:35.575843 4830 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from 
https://registry.redhat.io/containers/sigstore/cluster-observability-operator/alertmanager-rhel9@sha256=b23d4d4796437a2d93a9ad40b24c1130bcf4315029983aa275426d01d2955388/signature-4: status 500 (Internal Server Error)" image="registry.redhat.io/cluster-observability-operator/alertmanager-rhel9@sha256:dc62889b883f597de91b5389cc52c84c607247d49a807693be2f688e4703dfc3" Feb 27 18:04:35 crc kubenswrapper[4830]: E0227 18:04:35.576568 4830 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:alertmanager,Image:registry.redhat.io/cluster-observability-operator/alertmanager-rhel9@sha256:dc62889b883f597de91b5389cc52c84c607247d49a807693be2f688e4703dfc3,Command:[],Args:[--config.file=/etc/alertmanager/config_out/alertmanager.env.yaml --storage.path=/alertmanager --data.retention=120h --cluster.listen-address=[$(POD_IP)]:9094 --web.listen-address=:9093 --web.route-prefix=/ --cluster.label=openstack/metric-storage --cluster.peer=alertmanager-metric-storage-0.alertmanager-operated:9094 --cluster.peer=alertmanager-metric-storage-1.alertmanager-operated:9094 --cluster.reconnect-timeout=5m --web.config.file=/etc/alertmanager/web_config/web-config.yaml],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:web,HostPort:0,ContainerPort:9093,Protocol:TCP,HostIP:,},ContainerPort{Name:mesh-tcp,HostPort:0,ContainerPort:9094,Protocol:TCP,HostIP:,},ContainerPort{Name:mesh-udp,HostPort:0,ContainerPort:9094,Protocol:UDP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{memory: {{209715200 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-volume,ReadOnly:false,MountPath:/etc/alertmanager/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-out,ReadOnly:true,MountPath:/etc/alertmanager/config_out,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tls-assets,ReadOnly:true,MountPath:/etc/alertmanager/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:alertmanager-metric-storage-db,ReadOnly:false,MountPath:/alertmanager,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:web-config,ReadOnly:true,MountPath:/etc/alertmanager/web_config/web-config.yaml,SubPath:web-config.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cluster-tls-config,ReadOnly:true,MountPath:/etc/alertmanager/cluster_tls_config/cluster-tls-config.yaml,SubPath:cluster-tls-config.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zv45l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/-/healthy,Port:{1 0 web},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:3,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/-/ready,Port:{1 0 
web},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:3,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod alertmanager-metric-storage-0_openstack(8608d556-6b34-4ab2-b676-007c65e0d359): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/cluster-observability-operator/alertmanager-rhel9@sha256=b23d4d4796437a2d93a9ad40b24c1130bcf4315029983aa275426d01d2955388/signature-4: status 500 (Internal Server Error)" logger="UnhandledError" Feb 27 18:04:35 crc kubenswrapper[4830]: E0227 18:04:35.577900 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"alertmanager\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/cluster-observability-operator/alertmanager-rhel9@sha256=b23d4d4796437a2d93a9ad40b24c1130bcf4315029983aa275426d01d2955388/signature-4: status 500 (Internal Server Error)\"" pod="openstack/alertmanager-metric-storage-0" podUID="8608d556-6b34-4ab2-b676-007c65e0d359" Feb 27 18:04:42 crc kubenswrapper[4830]: I0227 18:04:42.767901 4830 provider.go:102] Refreshing cache for provider: 
*credentialprovider.defaultDockerConfigProvider Feb 27 18:04:43 crc kubenswrapper[4830]: E0227 18:04:43.418603 4830 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-marketplace-index@sha256=e848a00af7690cfa41500b98e0e7a0b9738ce0af7b6b4fee3ea20e0838523c30/signature-2: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 27 18:04:43 crc kubenswrapper[4830]: E0227 18:04:43.419254 4830 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8jztz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,
},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-lqb2w_openshift-marketplace(0596772a-54ae-4d9e-9db4-5d7138bae51e): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-marketplace-index@sha256=e848a00af7690cfa41500b98e0e7a0b9738ce0af7b6b4fee3ea20e0838523c30/signature-2: status 500 (Internal Server Error)" logger="UnhandledError" Feb 27 18:04:43 crc kubenswrapper[4830]: E0227 18:04:43.420872 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-marketplace-index@sha256=e848a00af7690cfa41500b98e0e7a0b9738ce0af7b6b4fee3ea20e0838523c30/signature-2: status 500 (Internal Server Error)\"" pod="openshift-marketplace/redhat-marketplace-lqb2w" podUID="0596772a-54ae-4d9e-9db4-5d7138bae51e" Feb 27 18:04:48 crc kubenswrapper[4830]: E0227 18:04:48.768464 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"alertmanager\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/alertmanager-rhel9@sha256:dc62889b883f597de91b5389cc52c84c607247d49a807693be2f688e4703dfc3\\\"\"" pod="openstack/alertmanager-metric-storage-0" podUID="8608d556-6b34-4ab2-b676-007c65e0d359" Feb 27 18:04:55 crc kubenswrapper[4830]: E0227 18:04:55.766323 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" 
pod="openshift-marketplace/redhat-marketplace-lqb2w" podUID="0596772a-54ae-4d9e-9db4-5d7138bae51e" Feb 27 18:04:58 crc kubenswrapper[4830]: I0227 18:04:58.418081 4830 scope.go:117] "RemoveContainer" containerID="b48fb3abfdc43fbf3a7970fd90270b2081901067001f39b5d405b653414eb321" Feb 27 18:05:01 crc kubenswrapper[4830]: E0227 18:05:01.900661 4830 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/cluster-observability-operator/alertmanager-rhel9@sha256=b23d4d4796437a2d93a9ad40b24c1130bcf4315029983aa275426d01d2955388/signature-4: status 500 (Internal Server Error)" image="registry.redhat.io/cluster-observability-operator/alertmanager-rhel9@sha256:dc62889b883f597de91b5389cc52c84c607247d49a807693be2f688e4703dfc3" Feb 27 18:05:01 crc kubenswrapper[4830]: E0227 18:05:01.901476 4830 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:alertmanager,Image:registry.redhat.io/cluster-observability-operator/alertmanager-rhel9@sha256:dc62889b883f597de91b5389cc52c84c607247d49a807693be2f688e4703dfc3,Command:[],Args:[--config.file=/etc/alertmanager/config_out/alertmanager.env.yaml --storage.path=/alertmanager --data.retention=120h --cluster.listen-address=[$(POD_IP)]:9094 --web.listen-address=:9093 --web.route-prefix=/ --cluster.label=openstack/metric-storage --cluster.peer=alertmanager-metric-storage-0.alertmanager-operated:9094 --cluster.peer=alertmanager-metric-storage-1.alertmanager-operated:9094 --cluster.reconnect-timeout=5m 
--web.config.file=/etc/alertmanager/web_config/web-config.yaml],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:web,HostPort:0,ContainerPort:9093,Protocol:TCP,HostIP:,},ContainerPort{Name:mesh-tcp,HostPort:0,ContainerPort:9094,Protocol:TCP,HostIP:,},ContainerPort{Name:mesh-udp,HostPort:0,ContainerPort:9094,Protocol:UDP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{memory: {{209715200 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-volume,ReadOnly:false,MountPath:/etc/alertmanager/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-out,ReadOnly:true,MountPath:/etc/alertmanager/config_out,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tls-assets,ReadOnly:true,MountPath:/etc/alertmanager/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:alertmanager-metric-storage-db,ReadOnly:false,MountPath:/alertmanager,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:web-config,ReadOnly:true,MountPath:/etc/alertmanager/web_config/web-config.yaml,SubPath:web-config.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cluster-tls-config,ReadOnly:true,MountPath:/etc/alertmanager/cluster_tls_config/cluster-tls-config.yaml,SubPath:cluster-tls-config.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zv45l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/-/healthy,Port:{1 0 
web},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:3,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/-/ready,Port:{1 0 web},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:3,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod alertmanager-metric-storage-0_openstack(8608d556-6b34-4ab2-b676-007c65e0d359): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/cluster-observability-operator/alertmanager-rhel9@sha256=b23d4d4796437a2d93a9ad40b24c1130bcf4315029983aa275426d01d2955388/signature-4: status 500 (Internal Server Error)" logger="UnhandledError" Feb 27 18:05:01 crc kubenswrapper[4830]: E0227 18:05:01.902773 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"alertmanager\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from 
https://registry.redhat.io/containers/sigstore/cluster-observability-operator/alertmanager-rhel9@sha256=b23d4d4796437a2d93a9ad40b24c1130bcf4315029983aa275426d01d2955388/signature-4: status 500 (Internal Server Error)\"" pod="openstack/alertmanager-metric-storage-0" podUID="8608d556-6b34-4ab2-b676-007c65e0d359" Feb 27 18:05:03 crc kubenswrapper[4830]: I0227 18:05:03.160137 4830 patch_prober.go:28] interesting pod/machine-config-daemon-2tv5v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 18:05:03 crc kubenswrapper[4830]: I0227 18:05:03.160550 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 18:05:07 crc kubenswrapper[4830]: E0227 18:05:07.765820 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-lqb2w" podUID="0596772a-54ae-4d9e-9db4-5d7138bae51e" Feb 27 18:05:09 crc kubenswrapper[4830]: E0227 18:05:09.750239 4830 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/cluster-observability-operator/thanos-rhel9@sha256=b77218de6528d52542abddf9fb3faececdf1a3e47987ac740ab00468d461a60b/signature-4: status 500 (Internal Server Error)" 
image="registry.redhat.io/cluster-observability-operator/thanos-rhel9@sha256:a223bab813b82d698992490bbb60927f6288a83ba52d539836c250e1471f6d34" Feb 27 18:05:09 crc kubenswrapper[4830]: E0227 18:05:09.751067 4830 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:thanos-sidecar,Image:registry.redhat.io/cluster-observability-operator/thanos-rhel9@sha256:a223bab813b82d698992490bbb60927f6288a83ba52d539836c250e1471f6d34,Command:[],Args:[sidecar --prometheus.url=http://localhost:9090/ --grpc-address=:10901 --http-address=:10902 --log.level=info --prometheus.http-client-file=/etc/thanos/config/prometheus.http-client-file.yaml],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http,HostPort:0,ContainerPort:10902,Protocol:TCP,HostIP:,},ContainerPort{Name:grpc,HostPort:0,ContainerPort:10901,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:thanos-prometheus-http-client-file,ReadOnly:false,MountPath:/etc/thanos/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w6l8v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
prometheus-metric-storage-0_openstack(75bcbe49-556d-4af7-9506-514c14ec8d9e): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/cluster-observability-operator/thanos-rhel9@sha256=b77218de6528d52542abddf9fb3faececdf1a3e47987ac740ab00468d461a60b/signature-4: status 500 (Internal Server Error)" logger="UnhandledError" Feb 27 18:05:09 crc kubenswrapper[4830]: E0227 18:05:09.752989 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"thanos-sidecar\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/cluster-observability-operator/thanos-rhel9@sha256=b77218de6528d52542abddf9fb3faececdf1a3e47987ac740ab00468d461a60b/signature-4: status 500 (Internal Server Error)\"" pod="openstack/prometheus-metric-storage-0" podUID="75bcbe49-556d-4af7-9506-514c14ec8d9e" Feb 27 18:05:10 crc kubenswrapper[4830]: E0227 18:05:10.168484 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"thanos-sidecar\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/thanos-rhel9@sha256:a223bab813b82d698992490bbb60927f6288a83ba52d539836c250e1471f6d34\\\"\"" pod="openstack/prometheus-metric-storage-0" podUID="75bcbe49-556d-4af7-9506-514c14ec8d9e" Feb 27 18:05:14 crc kubenswrapper[4830]: I0227 18:05:14.061040 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Feb 27 18:05:14 crc kubenswrapper[4830]: E0227 18:05:14.064839 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"thanos-sidecar\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/cluster-observability-operator/thanos-rhel9@sha256:a223bab813b82d698992490bbb60927f6288a83ba52d539836c250e1471f6d34\\\"\"" pod="openstack/prometheus-metric-storage-0" podUID="75bcbe49-556d-4af7-9506-514c14ec8d9e" Feb 27 18:05:15 crc kubenswrapper[4830]: E0227 18:05:15.765775 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"alertmanager\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/alertmanager-rhel9@sha256:dc62889b883f597de91b5389cc52c84c607247d49a807693be2f688e4703dfc3\\\"\"" pod="openstack/alertmanager-metric-storage-0" podUID="8608d556-6b34-4ab2-b676-007c65e0d359" Feb 27 18:05:19 crc kubenswrapper[4830]: I0227 18:05:19.060859 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Feb 27 18:05:19 crc kubenswrapper[4830]: E0227 18:05:19.065747 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"thanos-sidecar\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/thanos-rhel9@sha256:a223bab813b82d698992490bbb60927f6288a83ba52d539836c250e1471f6d34\\\"\"" pod="openstack/prometheus-metric-storage-0" podUID="75bcbe49-556d-4af7-9506-514c14ec8d9e" Feb 27 18:05:19 crc kubenswrapper[4830]: I0227 18:05:19.065853 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Feb 27 18:05:19 crc kubenswrapper[4830]: I0227 18:05:19.294357 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Feb 27 18:05:19 crc kubenswrapper[4830]: E0227 18:05:19.295390 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"thanos-sidecar\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/cluster-observability-operator/thanos-rhel9@sha256:a223bab813b82d698992490bbb60927f6288a83ba52d539836c250e1471f6d34\\\"\"" pod="openstack/prometheus-metric-storage-0" podUID="75bcbe49-556d-4af7-9506-514c14ec8d9e" Feb 27 18:05:21 crc kubenswrapper[4830]: E0227 18:05:21.119230 4830 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/cluster-observability-operator/thanos-rhel9@sha256=b77218de6528d52542abddf9fb3faececdf1a3e47987ac740ab00468d461a60b/signature-4: status 500 (Internal Server Error)" image="registry.redhat.io/cluster-observability-operator/thanos-rhel9@sha256:a223bab813b82d698992490bbb60927f6288a83ba52d539836c250e1471f6d34" Feb 27 18:05:21 crc kubenswrapper[4830]: E0227 18:05:21.120035 4830 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:thanos-sidecar,Image:registry.redhat.io/cluster-observability-operator/thanos-rhel9@sha256:a223bab813b82d698992490bbb60927f6288a83ba52d539836c250e1471f6d34,Command:[],Args:[sidecar --prometheus.url=http://localhost:9090/ --grpc-address=:10901 --http-address=:10902 --log.level=info 
--prometheus.http-client-file=/etc/thanos/config/prometheus.http-client-file.yaml],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http,HostPort:0,ContainerPort:10902,Protocol:TCP,HostIP:,},ContainerPort{Name:grpc,HostPort:0,ContainerPort:10901,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:thanos-prometheus-http-client-file,ReadOnly:false,MountPath:/etc/thanos/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w6l8v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod prometheus-metric-storage-0_openstack(75bcbe49-556d-4af7-9506-514c14ec8d9e): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/cluster-observability-operator/thanos-rhel9@sha256=b77218de6528d52542abddf9fb3faececdf1a3e47987ac740ab00468d461a60b/signature-4: status 500 (Internal Server Error)" logger="UnhandledError" Feb 27 18:05:21 crc kubenswrapper[4830]: E0227 18:05:21.121284 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"thanos-sidecar\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/cluster-observability-operator/thanos-rhel9@sha256=b77218de6528d52542abddf9fb3faececdf1a3e47987ac740ab00468d461a60b/signature-4: status 500 (Internal Server Error)\"" pod="openstack/prometheus-metric-storage-0" podUID="75bcbe49-556d-4af7-9506-514c14ec8d9e" Feb 27 18:05:21 crc kubenswrapper[4830]: E0227 18:05:21.766085 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-lqb2w" podUID="0596772a-54ae-4d9e-9db4-5d7138bae51e" Feb 27 18:05:27 crc kubenswrapper[4830]: E0227 18:05:27.767221 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"alertmanager\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/alertmanager-rhel9@sha256:dc62889b883f597de91b5389cc52c84c607247d49a807693be2f688e4703dfc3\\\"\"" pod="openstack/alertmanager-metric-storage-0" podUID="8608d556-6b34-4ab2-b676-007c65e0d359" Feb 27 18:05:31 crc kubenswrapper[4830]: E0227 18:05:31.765712 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"thanos-sidecar\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/thanos-rhel9@sha256:a223bab813b82d698992490bbb60927f6288a83ba52d539836c250e1471f6d34\\\"\"" pod="openstack/prometheus-metric-storage-0" podUID="75bcbe49-556d-4af7-9506-514c14ec8d9e" Feb 27 18:05:33 crc kubenswrapper[4830]: I0227 18:05:33.160831 4830 patch_prober.go:28] interesting pod/machine-config-daemon-2tv5v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 18:05:33 crc kubenswrapper[4830]: I0227 18:05:33.161320 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 18:05:33 crc kubenswrapper[4830]: I0227 18:05:33.161378 4830 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" Feb 27 18:05:33 crc kubenswrapper[4830]: I0227 18:05:33.162130 4830 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"81ef18a8ceffa5c1cfa26cc002b47333287098e5e6cdb33647e44f342d553fc2"} pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 27 18:05:33 crc kubenswrapper[4830]: I0227 18:05:33.162188 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" containerName="machine-config-daemon" containerID="cri-o://81ef18a8ceffa5c1cfa26cc002b47333287098e5e6cdb33647e44f342d553fc2" gracePeriod=600 Feb 27 18:05:33 crc kubenswrapper[4830]: E0227 18:05:33.307856 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" 
podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 18:05:33 crc kubenswrapper[4830]: I0227 18:05:33.462136 4830 generic.go:334] "Generic (PLEG): container finished" podID="00d6b7ce-4757-4275-8345-60c1b546ce25" containerID="81ef18a8ceffa5c1cfa26cc002b47333287098e5e6cdb33647e44f342d553fc2" exitCode=0 Feb 27 18:05:33 crc kubenswrapper[4830]: I0227 18:05:33.462248 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" event={"ID":"00d6b7ce-4757-4275-8345-60c1b546ce25","Type":"ContainerDied","Data":"81ef18a8ceffa5c1cfa26cc002b47333287098e5e6cdb33647e44f342d553fc2"} Feb 27 18:05:33 crc kubenswrapper[4830]: I0227 18:05:33.462543 4830 scope.go:117] "RemoveContainer" containerID="364bac5e44d6ecef577235338aa01e0eab35896300d6d5c2d81ef312d7b04024" Feb 27 18:05:33 crc kubenswrapper[4830]: I0227 18:05:33.464061 4830 scope.go:117] "RemoveContainer" containerID="81ef18a8ceffa5c1cfa26cc002b47333287098e5e6cdb33647e44f342d553fc2" Feb 27 18:05:33 crc kubenswrapper[4830]: E0227 18:05:33.464824 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 18:05:34 crc kubenswrapper[4830]: E0227 18:05:34.433107 4830 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-marketplace-index@sha256=e848a00af7690cfa41500b98e0e7a0b9738ce0af7b6b4fee3ea20e0838523c30/signature-2: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 27 
18:05:34 crc kubenswrapper[4830]: E0227 18:05:34.433633 4830 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8jztz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-lqb2w_openshift-marketplace(0596772a-54ae-4d9e-9db4-5d7138bae51e): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from 
https://registry.redhat.io/containers/sigstore/redhat/redhat-marketplace-index@sha256=e848a00af7690cfa41500b98e0e7a0b9738ce0af7b6b4fee3ea20e0838523c30/signature-2: status 500 (Internal Server Error)" logger="UnhandledError" Feb 27 18:05:34 crc kubenswrapper[4830]: E0227 18:05:34.434896 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-marketplace-index@sha256=e848a00af7690cfa41500b98e0e7a0b9738ce0af7b6b4fee3ea20e0838523c30/signature-2: status 500 (Internal Server Error)\"" pod="openshift-marketplace/redhat-marketplace-lqb2w" podUID="0596772a-54ae-4d9e-9db4-5d7138bae51e" Feb 27 18:05:43 crc kubenswrapper[4830]: E0227 18:05:43.864547 4830 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/cluster-observability-operator/alertmanager-rhel9@sha256=b23d4d4796437a2d93a9ad40b24c1130bcf4315029983aa275426d01d2955388/signature-4: status 500 (Internal Server Error)" image="registry.redhat.io/cluster-observability-operator/alertmanager-rhel9@sha256:dc62889b883f597de91b5389cc52c84c607247d49a807693be2f688e4703dfc3" Feb 27 18:05:43 crc kubenswrapper[4830]: E0227 18:05:43.866864 4830 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:alertmanager,Image:registry.redhat.io/cluster-observability-operator/alertmanager-rhel9@sha256:dc62889b883f597de91b5389cc52c84c607247d49a807693be2f688e4703dfc3,Command:[],Args:[--config.file=/etc/alertmanager/config_out/alertmanager.env.yaml --storage.path=/alertmanager --data.retention=120h --cluster.listen-address=[$(POD_IP)]:9094 --web.listen-address=:9093 --web.route-prefix=/ --cluster.label=openstack/metric-storage 
--cluster.peer=alertmanager-metric-storage-0.alertmanager-operated:9094 --cluster.peer=alertmanager-metric-storage-1.alertmanager-operated:9094 --cluster.reconnect-timeout=5m --web.config.file=/etc/alertmanager/web_config/web-config.yaml],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:web,HostPort:0,ContainerPort:9093,Protocol:TCP,HostIP:,},ContainerPort{Name:mesh-tcp,HostPort:0,ContainerPort:9094,Protocol:TCP,HostIP:,},ContainerPort{Name:mesh-udp,HostPort:0,ContainerPort:9094,Protocol:UDP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{memory: {{209715200 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-volume,ReadOnly:false,MountPath:/etc/alertmanager/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-out,ReadOnly:true,MountPath:/etc/alertmanager/config_out,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tls-assets,ReadOnly:true,MountPath:/etc/alertmanager/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:alertmanager-metric-storage-db,ReadOnly:false,MountPath:/alertmanager,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:web-config,ReadOnly:true,MountPath:/etc/alertmanager/web_config/web-config.yaml,SubPath:web-config.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cluster-tls-config,ReadOnly:true,MountPath:/etc/alertmanager/cluster_tls_config/cluster-tls-config.yaml,SubPath:cluster-tls-config.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zv45l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,Su
bPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/-/healthy,Port:{1 0 web},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:3,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/-/ready,Port:{1 0 web},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:3,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod alertmanager-metric-storage-0_openstack(8608d556-6b34-4ab2-b676-007c65e0d359): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/cluster-observability-operator/alertmanager-rhel9@sha256=b23d4d4796437a2d93a9ad40b24c1130bcf4315029983aa275426d01d2955388/signature-4: status 500 (Internal Server Error)" logger="UnhandledError" Feb 27 18:05:43 crc kubenswrapper[4830]: E0227 18:05:43.869008 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"alertmanager\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from 
https://registry.redhat.io/containers/sigstore/cluster-observability-operator/alertmanager-rhel9@sha256=b23d4d4796437a2d93a9ad40b24c1130bcf4315029983aa275426d01d2955388/signature-4: status 500 (Internal Server Error)\"" pod="openstack/alertmanager-metric-storage-0" podUID="8608d556-6b34-4ab2-b676-007c65e0d359" Feb 27 18:05:45 crc kubenswrapper[4830]: E0227 18:05:45.764362 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-lqb2w" podUID="0596772a-54ae-4d9e-9db4-5d7138bae51e" Feb 27 18:05:45 crc kubenswrapper[4830]: E0227 18:05:45.795823 4830 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/cluster-observability-operator/thanos-rhel9@sha256=b77218de6528d52542abddf9fb3faececdf1a3e47987ac740ab00468d461a60b/signature-4: status 500 (Internal Server Error)" image="registry.redhat.io/cluster-observability-operator/thanos-rhel9@sha256:a223bab813b82d698992490bbb60927f6288a83ba52d539836c250e1471f6d34" Feb 27 18:05:45 crc kubenswrapper[4830]: E0227 18:05:45.796111 4830 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:thanos-sidecar,Image:registry.redhat.io/cluster-observability-operator/thanos-rhel9@sha256:a223bab813b82d698992490bbb60927f6288a83ba52d539836c250e1471f6d34,Command:[],Args:[sidecar --prometheus.url=http://localhost:9090/ --grpc-address=:10901 --http-address=:10902 --log.level=info 
--prometheus.http-client-file=/etc/thanos/config/prometheus.http-client-file.yaml],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http,HostPort:0,ContainerPort:10902,Protocol:TCP,HostIP:,},ContainerPort{Name:grpc,HostPort:0,ContainerPort:10901,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:thanos-prometheus-http-client-file,ReadOnly:false,MountPath:/etc/thanos/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w6l8v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod prometheus-metric-storage-0_openstack(75bcbe49-556d-4af7-9506-514c14ec8d9e): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/cluster-observability-operator/thanos-rhel9@sha256=b77218de6528d52542abddf9fb3faececdf1a3e47987ac740ab00468d461a60b/signature-4: status 500 (Internal Server Error)" logger="UnhandledError" Feb 27 18:05:45 crc kubenswrapper[4830]: E0227 18:05:45.798348 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"thanos-sidecar\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/cluster-observability-operator/thanos-rhel9@sha256=b77218de6528d52542abddf9fb3faececdf1a3e47987ac740ab00468d461a60b/signature-4: status 500 (Internal Server Error)\"" pod="openstack/prometheus-metric-storage-0" podUID="75bcbe49-556d-4af7-9506-514c14ec8d9e" Feb 27 18:05:46 crc kubenswrapper[4830]: I0227 18:05:46.763165 4830 scope.go:117] "RemoveContainer" containerID="81ef18a8ceffa5c1cfa26cc002b47333287098e5e6cdb33647e44f342d553fc2" Feb 27 18:05:46 crc kubenswrapper[4830]: E0227 18:05:46.763687 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 18:05:58 crc kubenswrapper[4830]: I0227 18:05:58.764606 4830 scope.go:117] "RemoveContainer" containerID="81ef18a8ceffa5c1cfa26cc002b47333287098e5e6cdb33647e44f342d553fc2" Feb 27 18:05:58 crc kubenswrapper[4830]: E0227 18:05:58.765987 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-lqb2w" podUID="0596772a-54ae-4d9e-9db4-5d7138bae51e" Feb 27 18:05:58 crc kubenswrapper[4830]: E0227 18:05:58.766242 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"alertmanager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/cluster-observability-operator/alertmanager-rhel9@sha256:dc62889b883f597de91b5389cc52c84c607247d49a807693be2f688e4703dfc3\\\"\"" pod="openstack/alertmanager-metric-storage-0" podUID="8608d556-6b34-4ab2-b676-007c65e0d359" Feb 27 18:05:58 crc kubenswrapper[4830]: E0227 18:05:58.766861 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 18:06:00 crc kubenswrapper[4830]: I0227 18:06:00.159112 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536926-4ghnd"] Feb 27 18:06:00 crc kubenswrapper[4830]: E0227 18:06:00.159769 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ead8534-2b58-480e-9367-3aa26d44a876" containerName="oc" Feb 27 18:06:00 crc kubenswrapper[4830]: I0227 18:06:00.159780 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ead8534-2b58-480e-9367-3aa26d44a876" containerName="oc" Feb 27 18:06:00 crc kubenswrapper[4830]: I0227 18:06:00.160007 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ead8534-2b58-480e-9367-3aa26d44a876" containerName="oc" Feb 27 18:06:00 crc kubenswrapper[4830]: I0227 18:06:00.160804 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536926-4ghnd" Feb 27 18:06:00 crc kubenswrapper[4830]: I0227 18:06:00.163565 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-lmbtm" Feb 27 18:06:00 crc kubenswrapper[4830]: I0227 18:06:00.164725 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 18:06:00 crc kubenswrapper[4830]: I0227 18:06:00.166176 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 18:06:00 crc kubenswrapper[4830]: I0227 18:06:00.174761 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536926-4ghnd"] Feb 27 18:06:00 crc kubenswrapper[4830]: I0227 18:06:00.252165 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qj7ht\" (UniqueName: \"kubernetes.io/projected/43ed5a43-8e62-46bf-8151-7179e13730dd-kube-api-access-qj7ht\") pod \"auto-csr-approver-29536926-4ghnd\" (UID: \"43ed5a43-8e62-46bf-8151-7179e13730dd\") " pod="openshift-infra/auto-csr-approver-29536926-4ghnd" Feb 27 18:06:00 crc kubenswrapper[4830]: I0227 18:06:00.354273 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qj7ht\" (UniqueName: \"kubernetes.io/projected/43ed5a43-8e62-46bf-8151-7179e13730dd-kube-api-access-qj7ht\") pod \"auto-csr-approver-29536926-4ghnd\" (UID: \"43ed5a43-8e62-46bf-8151-7179e13730dd\") " pod="openshift-infra/auto-csr-approver-29536926-4ghnd" Feb 27 18:06:00 crc kubenswrapper[4830]: I0227 18:06:00.376437 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qj7ht\" (UniqueName: \"kubernetes.io/projected/43ed5a43-8e62-46bf-8151-7179e13730dd-kube-api-access-qj7ht\") pod \"auto-csr-approver-29536926-4ghnd\" (UID: \"43ed5a43-8e62-46bf-8151-7179e13730dd\") " 
pod="openshift-infra/auto-csr-approver-29536926-4ghnd" Feb 27 18:06:00 crc kubenswrapper[4830]: I0227 18:06:00.510885 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536926-4ghnd" Feb 27 18:06:00 crc kubenswrapper[4830]: E0227 18:06:00.766846 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"thanos-sidecar\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/thanos-rhel9@sha256:a223bab813b82d698992490bbb60927f6288a83ba52d539836c250e1471f6d34\\\"\"" pod="openstack/prometheus-metric-storage-0" podUID="75bcbe49-556d-4af7-9506-514c14ec8d9e" Feb 27 18:06:01 crc kubenswrapper[4830]: I0227 18:06:01.078915 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536926-4ghnd"] Feb 27 18:06:01 crc kubenswrapper[4830]: W0227 18:06:01.088308 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod43ed5a43_8e62_46bf_8151_7179e13730dd.slice/crio-8960ba672c0fdcd355a7dfa197751d5bcabeaac0f1dd0b287a784130caa286ea WatchSource:0}: Error finding container 8960ba672c0fdcd355a7dfa197751d5bcabeaac0f1dd0b287a784130caa286ea: Status 404 returned error can't find the container with id 8960ba672c0fdcd355a7dfa197751d5bcabeaac0f1dd0b287a784130caa286ea Feb 27 18:06:01 crc kubenswrapper[4830]: I0227 18:06:01.914162 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536926-4ghnd" event={"ID":"43ed5a43-8e62-46bf-8151-7179e13730dd","Type":"ContainerStarted","Data":"8960ba672c0fdcd355a7dfa197751d5bcabeaac0f1dd0b287a784130caa286ea"} Feb 27 18:06:02 crc kubenswrapper[4830]: E0227 18:06:02.080995 4830 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from 
https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 27 18:06:02 crc kubenswrapper[4830]: E0227 18:06:02.081156 4830 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 18:06:02 crc kubenswrapper[4830]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 27 18:06:02 crc kubenswrapper[4830]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qj7ht,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29536926-4ghnd_openshift-infra(43ed5a43-8e62-46bf-8151-7179e13730dd): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 27 18:06:02 crc kubenswrapper[4830]: > logger="UnhandledError" Feb 27 18:06:02 crc kubenswrapper[4830]: E0227 18:06:02.082383 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed 
to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29536926-4ghnd" podUID="43ed5a43-8e62-46bf-8151-7179e13730dd" Feb 27 18:06:02 crc kubenswrapper[4830]: E0227 18:06:02.923596 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536926-4ghnd" podUID="43ed5a43-8e62-46bf-8151-7179e13730dd" Feb 27 18:06:11 crc kubenswrapper[4830]: I0227 18:06:11.762693 4830 scope.go:117] "RemoveContainer" containerID="81ef18a8ceffa5c1cfa26cc002b47333287098e5e6cdb33647e44f342d553fc2" Feb 27 18:06:11 crc kubenswrapper[4830]: E0227 18:06:11.763788 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 18:06:12 crc kubenswrapper[4830]: E0227 18:06:12.769411 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-lqb2w" podUID="0596772a-54ae-4d9e-9db4-5d7138bae51e" Feb 27 18:06:13 crc kubenswrapper[4830]: E0227 18:06:13.769780 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"alertmanager\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/alertmanager-rhel9@sha256:dc62889b883f597de91b5389cc52c84c607247d49a807693be2f688e4703dfc3\\\"\"" pod="openstack/alertmanager-metric-storage-0" podUID="8608d556-6b34-4ab2-b676-007c65e0d359" Feb 27 18:06:15 crc kubenswrapper[4830]: E0227 18:06:15.719745 4830 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 27 18:06:15 crc kubenswrapper[4830]: E0227 18:06:15.720298 4830 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 18:06:15 crc kubenswrapper[4830]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 27 18:06:15 crc kubenswrapper[4830]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qj7ht,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in 
pod auto-csr-approver-29536926-4ghnd_openshift-infra(43ed5a43-8e62-46bf-8151-7179e13730dd): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 27 18:06:15 crc kubenswrapper[4830]: > logger="UnhandledError" Feb 27 18:06:15 crc kubenswrapper[4830]: E0227 18:06:15.721572 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29536926-4ghnd" podUID="43ed5a43-8e62-46bf-8151-7179e13730dd" Feb 27 18:06:15 crc kubenswrapper[4830]: E0227 18:06:15.765177 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"thanos-sidecar\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/thanos-rhel9@sha256:a223bab813b82d698992490bbb60927f6288a83ba52d539836c250e1471f6d34\\\"\"" pod="openstack/prometheus-metric-storage-0" podUID="75bcbe49-556d-4af7-9506-514c14ec8d9e" Feb 27 18:06:24 crc kubenswrapper[4830]: E0227 18:06:24.779174 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"alertmanager\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/alertmanager-rhel9@sha256:dc62889b883f597de91b5389cc52c84c607247d49a807693be2f688e4703dfc3\\\"\"" pod="openstack/alertmanager-metric-storage-0" podUID="8608d556-6b34-4ab2-b676-007c65e0d359" Feb 27 18:06:25 crc kubenswrapper[4830]: I0227 18:06:25.764519 4830 
scope.go:117] "RemoveContainer" containerID="81ef18a8ceffa5c1cfa26cc002b47333287098e5e6cdb33647e44f342d553fc2" Feb 27 18:06:25 crc kubenswrapper[4830]: E0227 18:06:25.765059 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 18:06:25 crc kubenswrapper[4830]: E0227 18:06:25.766412 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536926-4ghnd" podUID="43ed5a43-8e62-46bf-8151-7179e13730dd" Feb 27 18:06:27 crc kubenswrapper[4830]: E0227 18:06:27.767177 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-lqb2w" podUID="0596772a-54ae-4d9e-9db4-5d7138bae51e" Feb 27 18:06:29 crc kubenswrapper[4830]: E0227 18:06:29.127454 4830 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/cluster-observability-operator/thanos-rhel9@sha256=b77218de6528d52542abddf9fb3faececdf1a3e47987ac740ab00468d461a60b/signature-4: status 500 (Internal Server Error)" image="registry.redhat.io/cluster-observability-operator/thanos-rhel9@sha256:a223bab813b82d698992490bbb60927f6288a83ba52d539836c250e1471f6d34" Feb 27 18:06:29 crc kubenswrapper[4830]: 
E0227 18:06:29.128063 4830 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:thanos-sidecar,Image:registry.redhat.io/cluster-observability-operator/thanos-rhel9@sha256:a223bab813b82d698992490bbb60927f6288a83ba52d539836c250e1471f6d34,Command:[],Args:[sidecar --prometheus.url=http://localhost:9090/ --grpc-address=:10901 --http-address=:10902 --log.level=info --prometheus.http-client-file=/etc/thanos/config/prometheus.http-client-file.yaml],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http,HostPort:0,ContainerPort:10902,Protocol:TCP,HostIP:,},ContainerPort{Name:grpc,HostPort:0,ContainerPort:10901,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:thanos-prometheus-http-client-file,ReadOnly:false,MountPath:/etc/thanos/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w6l8v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod prometheus-metric-storage-0_openstack(75bcbe49-556d-4af7-9506-514c14ec8d9e): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from 
https://registry.redhat.io/containers/sigstore/cluster-observability-operator/thanos-rhel9@sha256=b77218de6528d52542abddf9fb3faececdf1a3e47987ac740ab00468d461a60b/signature-4: status 500 (Internal Server Error)" logger="UnhandledError" Feb 27 18:06:29 crc kubenswrapper[4830]: E0227 18:06:29.129255 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"thanos-sidecar\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/cluster-observability-operator/thanos-rhel9@sha256=b77218de6528d52542abddf9fb3faececdf1a3e47987ac740ab00468d461a60b/signature-4: status 500 (Internal Server Error)\"" pod="openstack/prometheus-metric-storage-0" podUID="75bcbe49-556d-4af7-9506-514c14ec8d9e" Feb 27 18:06:36 crc kubenswrapper[4830]: I0227 18:06:36.762907 4830 scope.go:117] "RemoveContainer" containerID="81ef18a8ceffa5c1cfa26cc002b47333287098e5e6cdb33647e44f342d553fc2" Feb 27 18:06:36 crc kubenswrapper[4830]: E0227 18:06:36.764291 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 18:06:38 crc kubenswrapper[4830]: E0227 18:06:38.766382 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"alertmanager\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/alertmanager-rhel9@sha256:dc62889b883f597de91b5389cc52c84c607247d49a807693be2f688e4703dfc3\\\"\"" pod="openstack/alertmanager-metric-storage-0" podUID="8608d556-6b34-4ab2-b676-007c65e0d359" Feb 27 18:06:39 crc kubenswrapper[4830]: 
E0227 18:06:39.766084 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-lqb2w" podUID="0596772a-54ae-4d9e-9db4-5d7138bae51e" Feb 27 18:06:41 crc kubenswrapper[4830]: E0227 18:06:41.763934 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"thanos-sidecar\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/thanos-rhel9@sha256:a223bab813b82d698992490bbb60927f6288a83ba52d539836c250e1471f6d34\\\"\"" pod="openstack/prometheus-metric-storage-0" podUID="75bcbe49-556d-4af7-9506-514c14ec8d9e" Feb 27 18:06:42 crc kubenswrapper[4830]: E0227 18:06:42.066650 4830 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 27 18:06:42 crc kubenswrapper[4830]: E0227 18:06:42.067147 4830 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 18:06:42 crc kubenswrapper[4830]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 27 18:06:42 crc kubenswrapper[4830]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qj7ht,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29536926-4ghnd_openshift-infra(43ed5a43-8e62-46bf-8151-7179e13730dd): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 27 18:06:42 crc kubenswrapper[4830]: > logger="UnhandledError" Feb 27 18:06:42 crc kubenswrapper[4830]: E0227 18:06:42.068506 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29536926-4ghnd" podUID="43ed5a43-8e62-46bf-8151-7179e13730dd" Feb 27 18:06:51 crc kubenswrapper[4830]: I0227 18:06:51.763518 4830 scope.go:117] "RemoveContainer" containerID="81ef18a8ceffa5c1cfa26cc002b47333287098e5e6cdb33647e44f342d553fc2" Feb 27 18:06:51 crc kubenswrapper[4830]: E0227 18:06:51.765555 4830 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 18:06:51 crc kubenswrapper[4830]: E0227 18:06:51.766877 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-lqb2w" podUID="0596772a-54ae-4d9e-9db4-5d7138bae51e" Feb 27 18:06:52 crc kubenswrapper[4830]: E0227 18:06:52.766882 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"alertmanager\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/alertmanager-rhel9@sha256:dc62889b883f597de91b5389cc52c84c607247d49a807693be2f688e4703dfc3\\\"\"" pod="openstack/alertmanager-metric-storage-0" podUID="8608d556-6b34-4ab2-b676-007c65e0d359" Feb 27 18:06:53 crc kubenswrapper[4830]: E0227 18:06:53.766568 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"thanos-sidecar\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/thanos-rhel9@sha256:a223bab813b82d698992490bbb60927f6288a83ba52d539836c250e1471f6d34\\\"\"" pod="openstack/prometheus-metric-storage-0" podUID="75bcbe49-556d-4af7-9506-514c14ec8d9e" Feb 27 18:06:54 crc kubenswrapper[4830]: E0227 18:06:54.782858 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" 
pod="openshift-infra/auto-csr-approver-29536926-4ghnd" podUID="43ed5a43-8e62-46bf-8151-7179e13730dd" Feb 27 18:07:03 crc kubenswrapper[4830]: I0227 18:07:03.763867 4830 scope.go:117] "RemoveContainer" containerID="81ef18a8ceffa5c1cfa26cc002b47333287098e5e6cdb33647e44f342d553fc2" Feb 27 18:07:03 crc kubenswrapper[4830]: E0227 18:07:03.765360 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 18:07:06 crc kubenswrapper[4830]: E0227 18:07:06.525613 4830 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-marketplace-index@sha256=e848a00af7690cfa41500b98e0e7a0b9738ce0af7b6b4fee3ea20e0838523c30/signature-2: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 27 18:07:06 crc kubenswrapper[4830]: E0227 18:07:06.526850 4830 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8jztz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-lqb2w_openshift-marketplace(0596772a-54ae-4d9e-9db4-5d7138bae51e): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-marketplace-index@sha256=e848a00af7690cfa41500b98e0e7a0b9738ce0af7b6b4fee3ea20e0838523c30/signature-2: status 500 (Internal Server Error)" logger="UnhandledError" Feb 27 18:07:06 crc kubenswrapper[4830]: E0227 18:07:06.528223 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: 
reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-marketplace-index@sha256=e848a00af7690cfa41500b98e0e7a0b9738ce0af7b6b4fee3ea20e0838523c30/signature-2: status 500 (Internal Server Error)\"" pod="openshift-marketplace/redhat-marketplace-lqb2w" podUID="0596772a-54ae-4d9e-9db4-5d7138bae51e" Feb 27 18:07:07 crc kubenswrapper[4830]: E0227 18:07:07.158614 4830 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/cluster-observability-operator/alertmanager-rhel9@sha256=b23d4d4796437a2d93a9ad40b24c1130bcf4315029983aa275426d01d2955388/signature-4: status 500 (Internal Server Error)" image="registry.redhat.io/cluster-observability-operator/alertmanager-rhel9@sha256:dc62889b883f597de91b5389cc52c84c607247d49a807693be2f688e4703dfc3" Feb 27 18:07:07 crc kubenswrapper[4830]: E0227 18:07:07.159018 4830 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:alertmanager,Image:registry.redhat.io/cluster-observability-operator/alertmanager-rhel9@sha256:dc62889b883f597de91b5389cc52c84c607247d49a807693be2f688e4703dfc3,Command:[],Args:[--config.file=/etc/alertmanager/config_out/alertmanager.env.yaml --storage.path=/alertmanager --data.retention=120h --cluster.listen-address=[$(POD_IP)]:9094 --web.listen-address=:9093 --web.route-prefix=/ --cluster.label=openstack/metric-storage --cluster.peer=alertmanager-metric-storage-0.alertmanager-operated:9094 --cluster.peer=alertmanager-metric-storage-1.alertmanager-operated:9094 --cluster.reconnect-timeout=5m 
--web.config.file=/etc/alertmanager/web_config/web-config.yaml],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:web,HostPort:0,ContainerPort:9093,Protocol:TCP,HostIP:,},ContainerPort{Name:mesh-tcp,HostPort:0,ContainerPort:9094,Protocol:TCP,HostIP:,},ContainerPort{Name:mesh-udp,HostPort:0,ContainerPort:9094,Protocol:UDP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{memory: {{209715200 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-volume,ReadOnly:false,MountPath:/etc/alertmanager/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-out,ReadOnly:true,MountPath:/etc/alertmanager/config_out,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tls-assets,ReadOnly:true,MountPath:/etc/alertmanager/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:alertmanager-metric-storage-db,ReadOnly:false,MountPath:/alertmanager,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:web-config,ReadOnly:true,MountPath:/etc/alertmanager/web_config/web-config.yaml,SubPath:web-config.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cluster-tls-config,ReadOnly:true,MountPath:/etc/alertmanager/cluster_tls_config/cluster-tls-config.yaml,SubPath:cluster-tls-config.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zv45l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/-/healthy,Port:{1 0 
web},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:3,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/-/ready,Port:{1 0 web},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:3,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod alertmanager-metric-storage-0_openstack(8608d556-6b34-4ab2-b676-007c65e0d359): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/cluster-observability-operator/alertmanager-rhel9@sha256=b23d4d4796437a2d93a9ad40b24c1130bcf4315029983aa275426d01d2955388/signature-4: status 500 (Internal Server Error)" logger="UnhandledError" Feb 27 18:07:07 crc kubenswrapper[4830]: E0227 18:07:07.160321 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"alertmanager\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from 
https://registry.redhat.io/containers/sigstore/cluster-observability-operator/alertmanager-rhel9@sha256=b23d4d4796437a2d93a9ad40b24c1130bcf4315029983aa275426d01d2955388/signature-4: status 500 (Internal Server Error)\"" pod="openstack/alertmanager-metric-storage-0" podUID="8608d556-6b34-4ab2-b676-007c65e0d359" Feb 27 18:07:08 crc kubenswrapper[4830]: E0227 18:07:08.769414 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536926-4ghnd" podUID="43ed5a43-8e62-46bf-8151-7179e13730dd" Feb 27 18:07:08 crc kubenswrapper[4830]: E0227 18:07:08.769438 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"thanos-sidecar\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/thanos-rhel9@sha256:a223bab813b82d698992490bbb60927f6288a83ba52d539836c250e1471f6d34\\\"\"" pod="openstack/prometheus-metric-storage-0" podUID="75bcbe49-556d-4af7-9506-514c14ec8d9e" Feb 27 18:07:17 crc kubenswrapper[4830]: I0227 18:07:17.764543 4830 scope.go:117] "RemoveContainer" containerID="81ef18a8ceffa5c1cfa26cc002b47333287098e5e6cdb33647e44f342d553fc2" Feb 27 18:07:17 crc kubenswrapper[4830]: E0227 18:07:17.766207 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 18:07:18 crc kubenswrapper[4830]: E0227 18:07:18.766797 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with 
ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-lqb2w" podUID="0596772a-54ae-4d9e-9db4-5d7138bae51e" Feb 27 18:07:19 crc kubenswrapper[4830]: E0227 18:07:19.786362 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536926-4ghnd" podUID="43ed5a43-8e62-46bf-8151-7179e13730dd" Feb 27 18:07:22 crc kubenswrapper[4830]: E0227 18:07:22.767090 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"alertmanager\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/alertmanager-rhel9@sha256:dc62889b883f597de91b5389cc52c84c607247d49a807693be2f688e4703dfc3\\\"\"" pod="openstack/alertmanager-metric-storage-0" podUID="8608d556-6b34-4ab2-b676-007c65e0d359" Feb 27 18:07:23 crc kubenswrapper[4830]: E0227 18:07:23.766159 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"thanos-sidecar\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/thanos-rhel9@sha256:a223bab813b82d698992490bbb60927f6288a83ba52d539836c250e1471f6d34\\\"\"" pod="openstack/prometheus-metric-storage-0" podUID="75bcbe49-556d-4af7-9506-514c14ec8d9e" Feb 27 18:07:31 crc kubenswrapper[4830]: I0227 18:07:31.762581 4830 scope.go:117] "RemoveContainer" containerID="81ef18a8ceffa5c1cfa26cc002b47333287098e5e6cdb33647e44f342d553fc2" Feb 27 18:07:31 crc kubenswrapper[4830]: E0227 18:07:31.764363 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 18:07:32 crc kubenswrapper[4830]: E0227 18:07:32.768174 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-lqb2w" podUID="0596772a-54ae-4d9e-9db4-5d7138bae51e" Feb 27 18:07:34 crc kubenswrapper[4830]: E0227 18:07:34.634641 4830 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 27 18:07:34 crc kubenswrapper[4830]: E0227 18:07:34.635251 4830 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 18:07:34 crc kubenswrapper[4830]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 27 18:07:34 crc kubenswrapper[4830]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qj7ht,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29536926-4ghnd_openshift-infra(43ed5a43-8e62-46bf-8151-7179e13730dd): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 27 18:07:34 crc kubenswrapper[4830]: > logger="UnhandledError" Feb 27 18:07:34 crc kubenswrapper[4830]: E0227 18:07:34.636455 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29536926-4ghnd" podUID="43ed5a43-8e62-46bf-8151-7179e13730dd" Feb 27 18:07:36 crc kubenswrapper[4830]: E0227 18:07:36.766558 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"alertmanager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/cluster-observability-operator/alertmanager-rhel9@sha256:dc62889b883f597de91b5389cc52c84c607247d49a807693be2f688e4703dfc3\\\"\"" pod="openstack/alertmanager-metric-storage-0" podUID="8608d556-6b34-4ab2-b676-007c65e0d359" Feb 27 18:07:37 crc kubenswrapper[4830]: E0227 18:07:37.766302 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"thanos-sidecar\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/thanos-rhel9@sha256:a223bab813b82d698992490bbb60927f6288a83ba52d539836c250e1471f6d34\\\"\"" pod="openstack/prometheus-metric-storage-0" podUID="75bcbe49-556d-4af7-9506-514c14ec8d9e" Feb 27 18:07:44 crc kubenswrapper[4830]: I0227 18:07:44.772525 4830 scope.go:117] "RemoveContainer" containerID="81ef18a8ceffa5c1cfa26cc002b47333287098e5e6cdb33647e44f342d553fc2" Feb 27 18:07:44 crc kubenswrapper[4830]: E0227 18:07:44.775015 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 18:07:44 crc kubenswrapper[4830]: E0227 18:07:44.776286 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-lqb2w" podUID="0596772a-54ae-4d9e-9db4-5d7138bae51e" Feb 27 18:07:49 crc kubenswrapper[4830]: E0227 18:07:49.769556 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"alertmanager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/cluster-observability-operator/alertmanager-rhel9@sha256:dc62889b883f597de91b5389cc52c84c607247d49a807693be2f688e4703dfc3\\\"\"" pod="openstack/alertmanager-metric-storage-0" podUID="8608d556-6b34-4ab2-b676-007c65e0d359" Feb 27 18:07:49 crc kubenswrapper[4830]: E0227 18:07:49.769554 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536926-4ghnd" podUID="43ed5a43-8e62-46bf-8151-7179e13730dd" Feb 27 18:07:50 crc kubenswrapper[4830]: E0227 18:07:50.863109 4830 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/cluster-observability-operator/thanos-rhel9@sha256=b77218de6528d52542abddf9fb3faececdf1a3e47987ac740ab00468d461a60b/signature-4: status 500 (Internal Server Error)" image="registry.redhat.io/cluster-observability-operator/thanos-rhel9@sha256:a223bab813b82d698992490bbb60927f6288a83ba52d539836c250e1471f6d34" Feb 27 18:07:50 crc kubenswrapper[4830]: E0227 18:07:50.864054 4830 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:thanos-sidecar,Image:registry.redhat.io/cluster-observability-operator/thanos-rhel9@sha256:a223bab813b82d698992490bbb60927f6288a83ba52d539836c250e1471f6d34,Command:[],Args:[sidecar --prometheus.url=http://localhost:9090/ --grpc-address=:10901 --http-address=:10902 --log.level=info 
--prometheus.http-client-file=/etc/thanos/config/prometheus.http-client-file.yaml],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http,HostPort:0,ContainerPort:10902,Protocol:TCP,HostIP:,},ContainerPort{Name:grpc,HostPort:0,ContainerPort:10901,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:thanos-prometheus-http-client-file,ReadOnly:false,MountPath:/etc/thanos/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w6l8v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod prometheus-metric-storage-0_openstack(75bcbe49-556d-4af7-9506-514c14ec8d9e): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/cluster-observability-operator/thanos-rhel9@sha256=b77218de6528d52542abddf9fb3faececdf1a3e47987ac740ab00468d461a60b/signature-4: status 500 (Internal Server Error)" logger="UnhandledError" Feb 27 18:07:50 crc kubenswrapper[4830]: E0227 18:07:50.865242 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"thanos-sidecar\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/cluster-observability-operator/thanos-rhel9@sha256=b77218de6528d52542abddf9fb3faececdf1a3e47987ac740ab00468d461a60b/signature-4: status 500 (Internal Server Error)\"" pod="openstack/prometheus-metric-storage-0" podUID="75bcbe49-556d-4af7-9506-514c14ec8d9e" Feb 27 18:07:55 crc kubenswrapper[4830]: I0227 18:07:55.763656 4830 scope.go:117] "RemoveContainer" containerID="81ef18a8ceffa5c1cfa26cc002b47333287098e5e6cdb33647e44f342d553fc2" Feb 27 18:07:55 crc kubenswrapper[4830]: E0227 18:07:55.764931 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 18:07:58 crc kubenswrapper[4830]: E0227 18:07:58.768448 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-lqb2w" podUID="0596772a-54ae-4d9e-9db4-5d7138bae51e" Feb 27 18:08:00 crc kubenswrapper[4830]: I0227 18:08:00.169457 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536928-wckhp"] Feb 27 18:08:00 crc kubenswrapper[4830]: I0227 18:08:00.172532 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536928-wckhp" Feb 27 18:08:00 crc kubenswrapper[4830]: I0227 18:08:00.185927 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536928-wckhp"] Feb 27 18:08:00 crc kubenswrapper[4830]: I0227 18:08:00.312518 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9p5f\" (UniqueName: \"kubernetes.io/projected/35a44dce-1ea1-4005-84c9-f14986ee706b-kube-api-access-t9p5f\") pod \"auto-csr-approver-29536928-wckhp\" (UID: \"35a44dce-1ea1-4005-84c9-f14986ee706b\") " pod="openshift-infra/auto-csr-approver-29536928-wckhp" Feb 27 18:08:00 crc kubenswrapper[4830]: I0227 18:08:00.415585 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t9p5f\" (UniqueName: \"kubernetes.io/projected/35a44dce-1ea1-4005-84c9-f14986ee706b-kube-api-access-t9p5f\") pod \"auto-csr-approver-29536928-wckhp\" (UID: \"35a44dce-1ea1-4005-84c9-f14986ee706b\") " pod="openshift-infra/auto-csr-approver-29536928-wckhp" Feb 27 18:08:00 crc kubenswrapper[4830]: I0227 18:08:00.451344 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t9p5f\" (UniqueName: \"kubernetes.io/projected/35a44dce-1ea1-4005-84c9-f14986ee706b-kube-api-access-t9p5f\") pod \"auto-csr-approver-29536928-wckhp\" (UID: \"35a44dce-1ea1-4005-84c9-f14986ee706b\") " pod="openshift-infra/auto-csr-approver-29536928-wckhp" Feb 27 18:08:00 crc kubenswrapper[4830]: I0227 18:08:00.512026 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536928-wckhp" Feb 27 18:08:00 crc kubenswrapper[4830]: E0227 18:08:00.766206 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536926-4ghnd" podUID="43ed5a43-8e62-46bf-8151-7179e13730dd" Feb 27 18:08:01 crc kubenswrapper[4830]: I0227 18:08:01.113306 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536928-wckhp"] Feb 27 18:08:01 crc kubenswrapper[4830]: I0227 18:08:01.474907 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536928-wckhp" event={"ID":"35a44dce-1ea1-4005-84c9-f14986ee706b","Type":"ContainerStarted","Data":"91903f9fb81cea7f32e0483021c5f4d946c7eaa6e134079812b97e3a9ea27c39"} Feb 27 18:08:01 crc kubenswrapper[4830]: E0227 18:08:01.766257 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"thanos-sidecar\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/thanos-rhel9@sha256:a223bab813b82d698992490bbb60927f6288a83ba52d539836c250e1471f6d34\\\"\"" pod="openstack/prometheus-metric-storage-0" podUID="75bcbe49-556d-4af7-9506-514c14ec8d9e" Feb 27 18:08:02 crc kubenswrapper[4830]: E0227 18:08:02.883438 4830 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 27 18:08:02 crc kubenswrapper[4830]: E0227 18:08:02.884138 4830 kuberuntime_manager.go:1274] "Unhandled 
Error" err=< Feb 27 18:08:02 crc kubenswrapper[4830]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 27 18:08:02 crc kubenswrapper[4830]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-t9p5f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29536928-wckhp_openshift-infra(35a44dce-1ea1-4005-84c9-f14986ee706b): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 27 18:08:02 crc kubenswrapper[4830]: > logger="UnhandledError" Feb 27 18:08:02 crc kubenswrapper[4830]: E0227 18:08:02.885608 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" 
pod="openshift-infra/auto-csr-approver-29536928-wckhp" podUID="35a44dce-1ea1-4005-84c9-f14986ee706b" Feb 27 18:08:03 crc kubenswrapper[4830]: E0227 18:08:03.504726 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536928-wckhp" podUID="35a44dce-1ea1-4005-84c9-f14986ee706b" Feb 27 18:08:04 crc kubenswrapper[4830]: E0227 18:08:04.783496 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"alertmanager\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/alertmanager-rhel9@sha256:dc62889b883f597de91b5389cc52c84c607247d49a807693be2f688e4703dfc3\\\"\"" pod="openstack/alertmanager-metric-storage-0" podUID="8608d556-6b34-4ab2-b676-007c65e0d359" Feb 27 18:08:08 crc kubenswrapper[4830]: I0227 18:08:08.763832 4830 scope.go:117] "RemoveContainer" containerID="81ef18a8ceffa5c1cfa26cc002b47333287098e5e6cdb33647e44f342d553fc2" Feb 27 18:08:08 crc kubenswrapper[4830]: E0227 18:08:08.766856 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 18:08:12 crc kubenswrapper[4830]: E0227 18:08:12.766904 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-lqb2w" podUID="0596772a-54ae-4d9e-9db4-5d7138bae51e" 
Feb 27 18:08:13 crc kubenswrapper[4830]: E0227 18:08:13.768247 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"thanos-sidecar\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/thanos-rhel9@sha256:a223bab813b82d698992490bbb60927f6288a83ba52d539836c250e1471f6d34\\\"\"" pod="openstack/prometheus-metric-storage-0" podUID="75bcbe49-556d-4af7-9506-514c14ec8d9e" Feb 27 18:08:14 crc kubenswrapper[4830]: E0227 18:08:14.780955 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536926-4ghnd" podUID="43ed5a43-8e62-46bf-8151-7179e13730dd" Feb 27 18:08:16 crc kubenswrapper[4830]: I0227 18:08:16.682107 4830 generic.go:334] "Generic (PLEG): container finished" podID="35a44dce-1ea1-4005-84c9-f14986ee706b" containerID="a4db4e5f1764770929cc4adb2cec729b768e86e6d1828156f2bb0782d66b1912" exitCode=0 Feb 27 18:08:16 crc kubenswrapper[4830]: I0227 18:08:16.682271 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536928-wckhp" event={"ID":"35a44dce-1ea1-4005-84c9-f14986ee706b","Type":"ContainerDied","Data":"a4db4e5f1764770929cc4adb2cec729b768e86e6d1828156f2bb0782d66b1912"} Feb 27 18:08:18 crc kubenswrapper[4830]: I0227 18:08:18.177806 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536928-wckhp" Feb 27 18:08:18 crc kubenswrapper[4830]: I0227 18:08:18.244151 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t9p5f\" (UniqueName: \"kubernetes.io/projected/35a44dce-1ea1-4005-84c9-f14986ee706b-kube-api-access-t9p5f\") pod \"35a44dce-1ea1-4005-84c9-f14986ee706b\" (UID: \"35a44dce-1ea1-4005-84c9-f14986ee706b\") " Feb 27 18:08:18 crc kubenswrapper[4830]: I0227 18:08:18.253855 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35a44dce-1ea1-4005-84c9-f14986ee706b-kube-api-access-t9p5f" (OuterVolumeSpecName: "kube-api-access-t9p5f") pod "35a44dce-1ea1-4005-84c9-f14986ee706b" (UID: "35a44dce-1ea1-4005-84c9-f14986ee706b"). InnerVolumeSpecName "kube-api-access-t9p5f". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:08:18 crc kubenswrapper[4830]: I0227 18:08:18.347491 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t9p5f\" (UniqueName: \"kubernetes.io/projected/35a44dce-1ea1-4005-84c9-f14986ee706b-kube-api-access-t9p5f\") on node \"crc\" DevicePath \"\"" Feb 27 18:08:18 crc kubenswrapper[4830]: I0227 18:08:18.711358 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536928-wckhp" event={"ID":"35a44dce-1ea1-4005-84c9-f14986ee706b","Type":"ContainerDied","Data":"91903f9fb81cea7f32e0483021c5f4d946c7eaa6e134079812b97e3a9ea27c39"} Feb 27 18:08:18 crc kubenswrapper[4830]: I0227 18:08:18.711419 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="91903f9fb81cea7f32e0483021c5f4d946c7eaa6e134079812b97e3a9ea27c39" Feb 27 18:08:18 crc kubenswrapper[4830]: I0227 18:08:18.711456 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536928-wckhp" Feb 27 18:08:19 crc kubenswrapper[4830]: I0227 18:08:19.260396 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536920-rgmsv"] Feb 27 18:08:19 crc kubenswrapper[4830]: I0227 18:08:19.262563 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536920-rgmsv"] Feb 27 18:08:19 crc kubenswrapper[4830]: I0227 18:08:19.769789 4830 scope.go:117] "RemoveContainer" containerID="81ef18a8ceffa5c1cfa26cc002b47333287098e5e6cdb33647e44f342d553fc2" Feb 27 18:08:19 crc kubenswrapper[4830]: E0227 18:08:19.770747 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 18:08:19 crc kubenswrapper[4830]: E0227 18:08:19.775291 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"alertmanager\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/alertmanager-rhel9@sha256:dc62889b883f597de91b5389cc52c84c607247d49a807693be2f688e4703dfc3\\\"\"" pod="openstack/alertmanager-metric-storage-0" podUID="8608d556-6b34-4ab2-b676-007c65e0d359" Feb 27 18:08:20 crc kubenswrapper[4830]: I0227 18:08:20.782418 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5e81087-8783-41f2-bc8b-bd104ade9e69" path="/var/lib/kubelet/pods/c5e81087-8783-41f2-bc8b-bd104ade9e69/volumes" Feb 27 18:08:23 crc kubenswrapper[4830]: E0227 18:08:23.765750 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with 
ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-lqb2w" podUID="0596772a-54ae-4d9e-9db4-5d7138bae51e" Feb 27 18:08:25 crc kubenswrapper[4830]: E0227 18:08:25.766201 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"thanos-sidecar\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/thanos-rhel9@sha256:a223bab813b82d698992490bbb60927f6288a83ba52d539836c250e1471f6d34\\\"\"" pod="openstack/prometheus-metric-storage-0" podUID="75bcbe49-556d-4af7-9506-514c14ec8d9e" Feb 27 18:08:26 crc kubenswrapper[4830]: E0227 18:08:26.765923 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536926-4ghnd" podUID="43ed5a43-8e62-46bf-8151-7179e13730dd" Feb 27 18:08:31 crc kubenswrapper[4830]: I0227 18:08:31.762909 4830 scope.go:117] "RemoveContainer" containerID="81ef18a8ceffa5c1cfa26cc002b47333287098e5e6cdb33647e44f342d553fc2" Feb 27 18:08:31 crc kubenswrapper[4830]: E0227 18:08:31.763850 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 18:08:34 crc kubenswrapper[4830]: E0227 18:08:34.800925 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-lqb2w" podUID="0596772a-54ae-4d9e-9db4-5d7138bae51e" Feb 27 18:08:34 crc kubenswrapper[4830]: E0227 18:08:34.801829 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"alertmanager\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/alertmanager-rhel9@sha256:dc62889b883f597de91b5389cc52c84c607247d49a807693be2f688e4703dfc3\\\"\"" pod="openstack/alertmanager-metric-storage-0" podUID="8608d556-6b34-4ab2-b676-007c65e0d359" Feb 27 18:08:38 crc kubenswrapper[4830]: I0227 18:08:38.496027 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-xwqr6"] Feb 27 18:08:38 crc kubenswrapper[4830]: E0227 18:08:38.502514 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35a44dce-1ea1-4005-84c9-f14986ee706b" containerName="oc" Feb 27 18:08:38 crc kubenswrapper[4830]: I0227 18:08:38.502552 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="35a44dce-1ea1-4005-84c9-f14986ee706b" containerName="oc" Feb 27 18:08:38 crc kubenswrapper[4830]: I0227 18:08:38.503054 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="35a44dce-1ea1-4005-84c9-f14986ee706b" containerName="oc" Feb 27 18:08:38 crc kubenswrapper[4830]: I0227 18:08:38.506290 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xwqr6" Feb 27 18:08:38 crc kubenswrapper[4830]: I0227 18:08:38.540995 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xwqr6"] Feb 27 18:08:38 crc kubenswrapper[4830]: I0227 18:08:38.605285 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a670c5d-bc3f-4fef-b1b1-f62883562b09-utilities\") pod \"redhat-operators-xwqr6\" (UID: \"8a670c5d-bc3f-4fef-b1b1-f62883562b09\") " pod="openshift-marketplace/redhat-operators-xwqr6" Feb 27 18:08:38 crc kubenswrapper[4830]: I0227 18:08:38.606051 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a670c5d-bc3f-4fef-b1b1-f62883562b09-catalog-content\") pod \"redhat-operators-xwqr6\" (UID: \"8a670c5d-bc3f-4fef-b1b1-f62883562b09\") " pod="openshift-marketplace/redhat-operators-xwqr6" Feb 27 18:08:38 crc kubenswrapper[4830]: I0227 18:08:38.606299 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l28mt\" (UniqueName: \"kubernetes.io/projected/8a670c5d-bc3f-4fef-b1b1-f62883562b09-kube-api-access-l28mt\") pod \"redhat-operators-xwqr6\" (UID: \"8a670c5d-bc3f-4fef-b1b1-f62883562b09\") " pod="openshift-marketplace/redhat-operators-xwqr6" Feb 27 18:08:38 crc kubenswrapper[4830]: I0227 18:08:38.709847 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l28mt\" (UniqueName: \"kubernetes.io/projected/8a670c5d-bc3f-4fef-b1b1-f62883562b09-kube-api-access-l28mt\") pod \"redhat-operators-xwqr6\" (UID: \"8a670c5d-bc3f-4fef-b1b1-f62883562b09\") " pod="openshift-marketplace/redhat-operators-xwqr6" Feb 27 18:08:38 crc kubenswrapper[4830]: I0227 18:08:38.710140 4830 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a670c5d-bc3f-4fef-b1b1-f62883562b09-utilities\") pod \"redhat-operators-xwqr6\" (UID: \"8a670c5d-bc3f-4fef-b1b1-f62883562b09\") " pod="openshift-marketplace/redhat-operators-xwqr6" Feb 27 18:08:38 crc kubenswrapper[4830]: I0227 18:08:38.710478 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a670c5d-bc3f-4fef-b1b1-f62883562b09-catalog-content\") pod \"redhat-operators-xwqr6\" (UID: \"8a670c5d-bc3f-4fef-b1b1-f62883562b09\") " pod="openshift-marketplace/redhat-operators-xwqr6" Feb 27 18:08:38 crc kubenswrapper[4830]: I0227 18:08:38.710698 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a670c5d-bc3f-4fef-b1b1-f62883562b09-utilities\") pod \"redhat-operators-xwqr6\" (UID: \"8a670c5d-bc3f-4fef-b1b1-f62883562b09\") " pod="openshift-marketplace/redhat-operators-xwqr6" Feb 27 18:08:38 crc kubenswrapper[4830]: I0227 18:08:38.710998 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a670c5d-bc3f-4fef-b1b1-f62883562b09-catalog-content\") pod \"redhat-operators-xwqr6\" (UID: \"8a670c5d-bc3f-4fef-b1b1-f62883562b09\") " pod="openshift-marketplace/redhat-operators-xwqr6" Feb 27 18:08:38 crc kubenswrapper[4830]: I0227 18:08:38.734837 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l28mt\" (UniqueName: \"kubernetes.io/projected/8a670c5d-bc3f-4fef-b1b1-f62883562b09-kube-api-access-l28mt\") pod \"redhat-operators-xwqr6\" (UID: \"8a670c5d-bc3f-4fef-b1b1-f62883562b09\") " pod="openshift-marketplace/redhat-operators-xwqr6" Feb 27 18:08:38 crc kubenswrapper[4830]: I0227 18:08:38.842905 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xwqr6" Feb 27 18:08:39 crc kubenswrapper[4830]: I0227 18:08:39.404318 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xwqr6"] Feb 27 18:08:39 crc kubenswrapper[4830]: E0227 18:08:39.765239 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"thanos-sidecar\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/thanos-rhel9@sha256:a223bab813b82d698992490bbb60927f6288a83ba52d539836c250e1471f6d34\\\"\"" pod="openstack/prometheus-metric-storage-0" podUID="75bcbe49-556d-4af7-9506-514c14ec8d9e" Feb 27 18:08:39 crc kubenswrapper[4830]: I0227 18:08:39.996258 4830 generic.go:334] "Generic (PLEG): container finished" podID="8a670c5d-bc3f-4fef-b1b1-f62883562b09" containerID="a0292901e3b44f229c7c978c7c0c7bd9c469b8c5cf6e3b1a7f1f052b733c1583" exitCode=0 Feb 27 18:08:39 crc kubenswrapper[4830]: I0227 18:08:39.996353 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xwqr6" event={"ID":"8a670c5d-bc3f-4fef-b1b1-f62883562b09","Type":"ContainerDied","Data":"a0292901e3b44f229c7c978c7c0c7bd9c469b8c5cf6e3b1a7f1f052b733c1583"} Feb 27 18:08:39 crc kubenswrapper[4830]: I0227 18:08:39.996411 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xwqr6" event={"ID":"8a670c5d-bc3f-4fef-b1b1-f62883562b09","Type":"ContainerStarted","Data":"095ed1e9ed8f94bde64a72d3532e63d2fef2cd458432cd92d06aaec3ff5b92cd"} Feb 27 18:08:40 crc kubenswrapper[4830]: E0227 18:08:40.696211 4830 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from 
https://registry.redhat.io/containers/sigstore/redhat/redhat-operator-index@sha256=340dbaa786c584e5ffe05a0f79571b9c2fe7d16a1a1fb390e5d83b437d7a1ff3/signature-3: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 27 18:08:40 crc kubenswrapper[4830]: E0227 18:08:40.696741 4830 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l28mt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
redhat-operators-xwqr6_openshift-marketplace(8a670c5d-bc3f-4fef-b1b1-f62883562b09): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-operator-index@sha256=340dbaa786c584e5ffe05a0f79571b9c2fe7d16a1a1fb390e5d83b437d7a1ff3/signature-3: status 500 (Internal Server Error)" logger="UnhandledError" Feb 27 18:08:40 crc kubenswrapper[4830]: E0227 18:08:40.697919 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-operator-index@sha256=340dbaa786c584e5ffe05a0f79571b9c2fe7d16a1a1fb390e5d83b437d7a1ff3/signature-3: status 500 (Internal Server Error)\"" pod="openshift-marketplace/redhat-operators-xwqr6" podUID="8a670c5d-bc3f-4fef-b1b1-f62883562b09" Feb 27 18:08:41 crc kubenswrapper[4830]: E0227 18:08:41.016216 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-xwqr6" podUID="8a670c5d-bc3f-4fef-b1b1-f62883562b09" Feb 27 18:08:41 crc kubenswrapper[4830]: E0227 18:08:41.764853 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536926-4ghnd" podUID="43ed5a43-8e62-46bf-8151-7179e13730dd" Feb 27 18:08:45 crc kubenswrapper[4830]: I0227 18:08:45.762819 4830 scope.go:117] "RemoveContainer" containerID="81ef18a8ceffa5c1cfa26cc002b47333287098e5e6cdb33647e44f342d553fc2" Feb 27 18:08:45 crc kubenswrapper[4830]: E0227 18:08:45.764017 4830 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 18:08:47 crc kubenswrapper[4830]: E0227 18:08:47.767628 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-lqb2w" podUID="0596772a-54ae-4d9e-9db4-5d7138bae51e" Feb 27 18:08:49 crc kubenswrapper[4830]: E0227 18:08:49.769326 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"alertmanager\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/alertmanager-rhel9@sha256:dc62889b883f597de91b5389cc52c84c607247d49a807693be2f688e4703dfc3\\\"\"" pod="openstack/alertmanager-metric-storage-0" podUID="8608d556-6b34-4ab2-b676-007c65e0d359" Feb 27 18:08:50 crc kubenswrapper[4830]: E0227 18:08:50.764206 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"thanos-sidecar\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/thanos-rhel9@sha256:a223bab813b82d698992490bbb60927f6288a83ba52d539836c250e1471f6d34\\\"\"" pod="openstack/prometheus-metric-storage-0" podUID="75bcbe49-556d-4af7-9506-514c14ec8d9e" Feb 27 18:08:53 crc kubenswrapper[4830]: E0227 18:08:53.765814 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536926-4ghnd" podUID="43ed5a43-8e62-46bf-8151-7179e13730dd" Feb 27 18:08:54 crc kubenswrapper[4830]: I0227 18:08:54.249155 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xwqr6" event={"ID":"8a670c5d-bc3f-4fef-b1b1-f62883562b09","Type":"ContainerStarted","Data":"d5ea3407f58e31b24d730a5c497992364c3aee4d77844fc20c57ce5489f39012"} Feb 27 18:08:58 crc kubenswrapper[4830]: I0227 18:08:58.610578 4830 scope.go:117] "RemoveContainer" containerID="4f34f0a42ab364d9aef4c06263423046276a80a717301ebf71092ef11f7f2d17" Feb 27 18:08:58 crc kubenswrapper[4830]: I0227 18:08:58.675974 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-5pkvb"] Feb 27 18:08:58 crc kubenswrapper[4830]: I0227 18:08:58.678466 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5pkvb" Feb 27 18:08:58 crc kubenswrapper[4830]: I0227 18:08:58.700608 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5pkvb"] Feb 27 18:08:58 crc kubenswrapper[4830]: I0227 18:08:58.763448 4830 scope.go:117] "RemoveContainer" containerID="81ef18a8ceffa5c1cfa26cc002b47333287098e5e6cdb33647e44f342d553fc2" Feb 27 18:08:58 crc kubenswrapper[4830]: E0227 18:08:58.763744 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 18:08:58 crc kubenswrapper[4830]: I0227 18:08:58.870002 4830 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d0e4d8e-d4ab-47f9-8015-5ace0337272f-catalog-content\") pod \"community-operators-5pkvb\" (UID: \"4d0e4d8e-d4ab-47f9-8015-5ace0337272f\") " pod="openshift-marketplace/community-operators-5pkvb" Feb 27 18:08:58 crc kubenswrapper[4830]: I0227 18:08:58.870122 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d0e4d8e-d4ab-47f9-8015-5ace0337272f-utilities\") pod \"community-operators-5pkvb\" (UID: \"4d0e4d8e-d4ab-47f9-8015-5ace0337272f\") " pod="openshift-marketplace/community-operators-5pkvb" Feb 27 18:08:58 crc kubenswrapper[4830]: I0227 18:08:58.870306 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74bq7\" (UniqueName: \"kubernetes.io/projected/4d0e4d8e-d4ab-47f9-8015-5ace0337272f-kube-api-access-74bq7\") pod \"community-operators-5pkvb\" (UID: \"4d0e4d8e-d4ab-47f9-8015-5ace0337272f\") " pod="openshift-marketplace/community-operators-5pkvb" Feb 27 18:08:58 crc kubenswrapper[4830]: I0227 18:08:58.973091 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d0e4d8e-d4ab-47f9-8015-5ace0337272f-catalog-content\") pod \"community-operators-5pkvb\" (UID: \"4d0e4d8e-d4ab-47f9-8015-5ace0337272f\") " pod="openshift-marketplace/community-operators-5pkvb" Feb 27 18:08:58 crc kubenswrapper[4830]: I0227 18:08:58.973158 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d0e4d8e-d4ab-47f9-8015-5ace0337272f-utilities\") pod \"community-operators-5pkvb\" (UID: \"4d0e4d8e-d4ab-47f9-8015-5ace0337272f\") " pod="openshift-marketplace/community-operators-5pkvb" Feb 27 18:08:58 crc kubenswrapper[4830]: I0227 18:08:58.973272 
4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-74bq7\" (UniqueName: \"kubernetes.io/projected/4d0e4d8e-d4ab-47f9-8015-5ace0337272f-kube-api-access-74bq7\") pod \"community-operators-5pkvb\" (UID: \"4d0e4d8e-d4ab-47f9-8015-5ace0337272f\") " pod="openshift-marketplace/community-operators-5pkvb" Feb 27 18:08:58 crc kubenswrapper[4830]: I0227 18:08:58.973913 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d0e4d8e-d4ab-47f9-8015-5ace0337272f-catalog-content\") pod \"community-operators-5pkvb\" (UID: \"4d0e4d8e-d4ab-47f9-8015-5ace0337272f\") " pod="openshift-marketplace/community-operators-5pkvb" Feb 27 18:08:58 crc kubenswrapper[4830]: I0227 18:08:58.973987 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d0e4d8e-d4ab-47f9-8015-5ace0337272f-utilities\") pod \"community-operators-5pkvb\" (UID: \"4d0e4d8e-d4ab-47f9-8015-5ace0337272f\") " pod="openshift-marketplace/community-operators-5pkvb" Feb 27 18:08:58 crc kubenswrapper[4830]: I0227 18:08:58.992974 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-74bq7\" (UniqueName: \"kubernetes.io/projected/4d0e4d8e-d4ab-47f9-8015-5ace0337272f-kube-api-access-74bq7\") pod \"community-operators-5pkvb\" (UID: \"4d0e4d8e-d4ab-47f9-8015-5ace0337272f\") " pod="openshift-marketplace/community-operators-5pkvb" Feb 27 18:08:58 crc kubenswrapper[4830]: I0227 18:08:58.993718 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5pkvb" Feb 27 18:08:59 crc kubenswrapper[4830]: I0227 18:08:59.746586 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5pkvb"] Feb 27 18:08:59 crc kubenswrapper[4830]: W0227 18:08:59.750921 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4d0e4d8e_d4ab_47f9_8015_5ace0337272f.slice/crio-c65ad6fa35510c5a24ce4aed27353f07dcea8d74495d616e1c76a31a20c1302a WatchSource:0}: Error finding container c65ad6fa35510c5a24ce4aed27353f07dcea8d74495d616e1c76a31a20c1302a: Status 404 returned error can't find the container with id c65ad6fa35510c5a24ce4aed27353f07dcea8d74495d616e1c76a31a20c1302a Feb 27 18:08:59 crc kubenswrapper[4830]: E0227 18:08:59.778895 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-lqb2w" podUID="0596772a-54ae-4d9e-9db4-5d7138bae51e" Feb 27 18:09:00 crc kubenswrapper[4830]: I0227 18:09:00.320718 4830 generic.go:334] "Generic (PLEG): container finished" podID="4d0e4d8e-d4ab-47f9-8015-5ace0337272f" containerID="bd57a32cf2a820452b2260b8cb9c0b4c363d672e19c5e6bed4d0aec4853553e0" exitCode=0 Feb 27 18:09:00 crc kubenswrapper[4830]: I0227 18:09:00.320765 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5pkvb" event={"ID":"4d0e4d8e-d4ab-47f9-8015-5ace0337272f","Type":"ContainerDied","Data":"bd57a32cf2a820452b2260b8cb9c0b4c363d672e19c5e6bed4d0aec4853553e0"} Feb 27 18:09:00 crc kubenswrapper[4830]: I0227 18:09:00.320795 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5pkvb" 
event={"ID":"4d0e4d8e-d4ab-47f9-8015-5ace0337272f","Type":"ContainerStarted","Data":"c65ad6fa35510c5a24ce4aed27353f07dcea8d74495d616e1c76a31a20c1302a"} Feb 27 18:09:01 crc kubenswrapper[4830]: E0227 18:09:01.043148 4830 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=886ecdbcb5b8f90338063f6476072fab73c2a9a65b9f2b3835b7bd01c69794c1/signature-2: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 27 18:09:01 crc kubenswrapper[4830]: E0227 18:09:01.044831 4830 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-74bq7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-5pkvb_openshift-marketplace(4d0e4d8e-d4ab-47f9-8015-5ace0337272f): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=886ecdbcb5b8f90338063f6476072fab73c2a9a65b9f2b3835b7bd01c69794c1/signature-2: status 500 (Internal Server Error)" logger="UnhandledError" Feb 27 18:09:01 crc kubenswrapper[4830]: E0227 18:09:01.046264 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: 
reading signature from https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=886ecdbcb5b8f90338063f6476072fab73c2a9a65b9f2b3835b7bd01c69794c1/signature-2: status 500 (Internal Server Error)\"" pod="openshift-marketplace/community-operators-5pkvb" podUID="4d0e4d8e-d4ab-47f9-8015-5ace0337272f" Feb 27 18:09:01 crc kubenswrapper[4830]: I0227 18:09:01.338043 4830 generic.go:334] "Generic (PLEG): container finished" podID="8a670c5d-bc3f-4fef-b1b1-f62883562b09" containerID="d5ea3407f58e31b24d730a5c497992364c3aee4d77844fc20c57ce5489f39012" exitCode=0 Feb 27 18:09:01 crc kubenswrapper[4830]: I0227 18:09:01.338737 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xwqr6" event={"ID":"8a670c5d-bc3f-4fef-b1b1-f62883562b09","Type":"ContainerDied","Data":"d5ea3407f58e31b24d730a5c497992364c3aee4d77844fc20c57ce5489f39012"} Feb 27 18:09:01 crc kubenswrapper[4830]: E0227 18:09:01.345510 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-5pkvb" podUID="4d0e4d8e-d4ab-47f9-8015-5ace0337272f" Feb 27 18:09:02 crc kubenswrapper[4830]: I0227 18:09:02.356022 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xwqr6" event={"ID":"8a670c5d-bc3f-4fef-b1b1-f62883562b09","Type":"ContainerStarted","Data":"ed96a07f90b7e1a3c412c67a0ddf6bf049c5c66653542b4cf935a862b5af51dc"} Feb 27 18:09:04 crc kubenswrapper[4830]: E0227 18:09:04.779691 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"thanos-sidecar\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/thanos-rhel9@sha256:a223bab813b82d698992490bbb60927f6288a83ba52d539836c250e1471f6d34\\\"\"" 
pod="openstack/prometheus-metric-storage-0" podUID="75bcbe49-556d-4af7-9506-514c14ec8d9e" Feb 27 18:09:04 crc kubenswrapper[4830]: E0227 18:09:04.779820 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"alertmanager\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/alertmanager-rhel9@sha256:dc62889b883f597de91b5389cc52c84c607247d49a807693be2f688e4703dfc3\\\"\"" pod="openstack/alertmanager-metric-storage-0" podUID="8608d556-6b34-4ab2-b676-007c65e0d359" Feb 27 18:09:08 crc kubenswrapper[4830]: I0227 18:09:08.842640 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-xwqr6" Feb 27 18:09:08 crc kubenswrapper[4830]: I0227 18:09:08.843336 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-xwqr6" Feb 27 18:09:08 crc kubenswrapper[4830]: E0227 18:09:08.992537 4830 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 27 18:09:08 crc kubenswrapper[4830]: E0227 18:09:08.992682 4830 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 18:09:08 crc kubenswrapper[4830]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 27 18:09:08 crc kubenswrapper[4830]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qj7ht,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29536926-4ghnd_openshift-infra(43ed5a43-8e62-46bf-8151-7179e13730dd): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 27 18:09:08 crc kubenswrapper[4830]: > logger="UnhandledError" Feb 27 18:09:08 crc kubenswrapper[4830]: E0227 18:09:08.993792 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29536926-4ghnd" podUID="43ed5a43-8e62-46bf-8151-7179e13730dd" Feb 27 18:09:09 crc kubenswrapper[4830]: I0227 18:09:09.912723 4830 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-xwqr6" podUID="8a670c5d-bc3f-4fef-b1b1-f62883562b09" containerName="registry-server" probeResult="failure" 
output=< Feb 27 18:09:09 crc kubenswrapper[4830]: timeout: failed to connect service ":50051" within 1s Feb 27 18:09:09 crc kubenswrapper[4830]: > Feb 27 18:09:12 crc kubenswrapper[4830]: I0227 18:09:12.763268 4830 scope.go:117] "RemoveContainer" containerID="81ef18a8ceffa5c1cfa26cc002b47333287098e5e6cdb33647e44f342d553fc2" Feb 27 18:09:12 crc kubenswrapper[4830]: E0227 18:09:12.764132 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 18:09:13 crc kubenswrapper[4830]: E0227 18:09:13.766198 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-lqb2w" podUID="0596772a-54ae-4d9e-9db4-5d7138bae51e" Feb 27 18:09:13 crc kubenswrapper[4830]: I0227 18:09:13.821054 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-xwqr6" podStartSLOduration=14.040164125 podStartE2EDuration="35.821027155s" podCreationTimestamp="2026-02-27 18:08:38 +0000 UTC" firstStartedPulling="2026-02-27 18:08:39.9995053 +0000 UTC m=+7316.088777783" lastFinishedPulling="2026-02-27 18:09:01.78036832 +0000 UTC m=+7337.869640813" observedRunningTime="2026-02-27 18:09:02.397493194 +0000 UTC m=+7338.486765657" watchObservedRunningTime="2026-02-27 18:09:13.821027155 +0000 UTC m=+7349.910299628" Feb 27 18:09:14 crc kubenswrapper[4830]: E0227 18:09:14.725028 4830 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = 
copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=886ecdbcb5b8f90338063f6476072fab73c2a9a65b9f2b3835b7bd01c69794c1/signature-2: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 27 18:09:14 crc kubenswrapper[4830]: E0227 18:09:14.725646 4830 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-74bq7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},Res
tartPolicy:nil,} start failed in pod community-operators-5pkvb_openshift-marketplace(4d0e4d8e-d4ab-47f9-8015-5ace0337272f): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=886ecdbcb5b8f90338063f6476072fab73c2a9a65b9f2b3835b7bd01c69794c1/signature-2: status 500 (Internal Server Error)" logger="UnhandledError" Feb 27 18:09:14 crc kubenswrapper[4830]: E0227 18:09:14.726922 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=886ecdbcb5b8f90338063f6476072fab73c2a9a65b9f2b3835b7bd01c69794c1/signature-2: status 500 (Internal Server Error)\"" pod="openshift-marketplace/community-operators-5pkvb" podUID="4d0e4d8e-d4ab-47f9-8015-5ace0337272f" Feb 27 18:09:16 crc kubenswrapper[4830]: E0227 18:09:16.767772 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"thanos-sidecar\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/thanos-rhel9@sha256:a223bab813b82d698992490bbb60927f6288a83ba52d539836c250e1471f6d34\\\"\"" pod="openstack/prometheus-metric-storage-0" podUID="75bcbe49-556d-4af7-9506-514c14ec8d9e" Feb 27 18:09:18 crc kubenswrapper[4830]: E0227 18:09:18.773407 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"alertmanager\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/alertmanager-rhel9@sha256:dc62889b883f597de91b5389cc52c84c607247d49a807693be2f688e4703dfc3\\\"\"" pod="openstack/alertmanager-metric-storage-0" podUID="8608d556-6b34-4ab2-b676-007c65e0d359" Feb 27 18:09:18 crc kubenswrapper[4830]: 
I0227 18:09:18.928846 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-xwqr6" Feb 27 18:09:19 crc kubenswrapper[4830]: I0227 18:09:19.023880 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-xwqr6" Feb 27 18:09:19 crc kubenswrapper[4830]: I0227 18:09:19.178809 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xwqr6"] Feb 27 18:09:20 crc kubenswrapper[4830]: I0227 18:09:20.579488 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-xwqr6" podUID="8a670c5d-bc3f-4fef-b1b1-f62883562b09" containerName="registry-server" containerID="cri-o://ed96a07f90b7e1a3c412c67a0ddf6bf049c5c66653542b4cf935a862b5af51dc" gracePeriod=2 Feb 27 18:09:21 crc kubenswrapper[4830]: I0227 18:09:21.188770 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xwqr6" Feb 27 18:09:21 crc kubenswrapper[4830]: I0227 18:09:21.304687 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a670c5d-bc3f-4fef-b1b1-f62883562b09-catalog-content\") pod \"8a670c5d-bc3f-4fef-b1b1-f62883562b09\" (UID: \"8a670c5d-bc3f-4fef-b1b1-f62883562b09\") " Feb 27 18:09:21 crc kubenswrapper[4830]: I0227 18:09:21.304852 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l28mt\" (UniqueName: \"kubernetes.io/projected/8a670c5d-bc3f-4fef-b1b1-f62883562b09-kube-api-access-l28mt\") pod \"8a670c5d-bc3f-4fef-b1b1-f62883562b09\" (UID: \"8a670c5d-bc3f-4fef-b1b1-f62883562b09\") " Feb 27 18:09:21 crc kubenswrapper[4830]: I0227 18:09:21.305068 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/8a670c5d-bc3f-4fef-b1b1-f62883562b09-utilities\") pod \"8a670c5d-bc3f-4fef-b1b1-f62883562b09\" (UID: \"8a670c5d-bc3f-4fef-b1b1-f62883562b09\") " Feb 27 18:09:21 crc kubenswrapper[4830]: I0227 18:09:21.306093 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8a670c5d-bc3f-4fef-b1b1-f62883562b09-utilities" (OuterVolumeSpecName: "utilities") pod "8a670c5d-bc3f-4fef-b1b1-f62883562b09" (UID: "8a670c5d-bc3f-4fef-b1b1-f62883562b09"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 18:09:21 crc kubenswrapper[4830]: I0227 18:09:21.308266 4830 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a670c5d-bc3f-4fef-b1b1-f62883562b09-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 18:09:21 crc kubenswrapper[4830]: I0227 18:09:21.316532 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a670c5d-bc3f-4fef-b1b1-f62883562b09-kube-api-access-l28mt" (OuterVolumeSpecName: "kube-api-access-l28mt") pod "8a670c5d-bc3f-4fef-b1b1-f62883562b09" (UID: "8a670c5d-bc3f-4fef-b1b1-f62883562b09"). InnerVolumeSpecName "kube-api-access-l28mt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:09:21 crc kubenswrapper[4830]: I0227 18:09:21.411985 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l28mt\" (UniqueName: \"kubernetes.io/projected/8a670c5d-bc3f-4fef-b1b1-f62883562b09-kube-api-access-l28mt\") on node \"crc\" DevicePath \"\"" Feb 27 18:09:21 crc kubenswrapper[4830]: I0227 18:09:21.497499 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8a670c5d-bc3f-4fef-b1b1-f62883562b09-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8a670c5d-bc3f-4fef-b1b1-f62883562b09" (UID: "8a670c5d-bc3f-4fef-b1b1-f62883562b09"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 18:09:21 crc kubenswrapper[4830]: I0227 18:09:21.514164 4830 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a670c5d-bc3f-4fef-b1b1-f62883562b09-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 18:09:21 crc kubenswrapper[4830]: I0227 18:09:21.529229 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-pfkj9/must-gather-4nm2j"] Feb 27 18:09:21 crc kubenswrapper[4830]: E0227 18:09:21.530063 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a670c5d-bc3f-4fef-b1b1-f62883562b09" containerName="extract-utilities" Feb 27 18:09:21 crc kubenswrapper[4830]: I0227 18:09:21.530156 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a670c5d-bc3f-4fef-b1b1-f62883562b09" containerName="extract-utilities" Feb 27 18:09:21 crc kubenswrapper[4830]: E0227 18:09:21.530257 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a670c5d-bc3f-4fef-b1b1-f62883562b09" containerName="registry-server" Feb 27 18:09:21 crc kubenswrapper[4830]: I0227 18:09:21.530308 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a670c5d-bc3f-4fef-b1b1-f62883562b09" containerName="registry-server" Feb 27 18:09:21 crc kubenswrapper[4830]: E0227 18:09:21.530373 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a670c5d-bc3f-4fef-b1b1-f62883562b09" containerName="extract-content" Feb 27 18:09:21 crc kubenswrapper[4830]: I0227 18:09:21.530434 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a670c5d-bc3f-4fef-b1b1-f62883562b09" containerName="extract-content" Feb 27 18:09:21 crc kubenswrapper[4830]: I0227 18:09:21.530669 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a670c5d-bc3f-4fef-b1b1-f62883562b09" containerName="registry-server" Feb 27 18:09:21 crc kubenswrapper[4830]: I0227 18:09:21.531929 4830 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-must-gather-pfkj9/must-gather-4nm2j" Feb 27 18:09:21 crc kubenswrapper[4830]: I0227 18:09:21.534425 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-pfkj9"/"default-dockercfg-2xm8v" Feb 27 18:09:21 crc kubenswrapper[4830]: I0227 18:09:21.534544 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-pfkj9"/"openshift-service-ca.crt" Feb 27 18:09:21 crc kubenswrapper[4830]: I0227 18:09:21.534712 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-pfkj9"/"kube-root-ca.crt" Feb 27 18:09:21 crc kubenswrapper[4830]: I0227 18:09:21.537656 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-pfkj9/must-gather-4nm2j"] Feb 27 18:09:21 crc kubenswrapper[4830]: I0227 18:09:21.607139 4830 generic.go:334] "Generic (PLEG): container finished" podID="8a670c5d-bc3f-4fef-b1b1-f62883562b09" containerID="ed96a07f90b7e1a3c412c67a0ddf6bf049c5c66653542b4cf935a862b5af51dc" exitCode=0 Feb 27 18:09:21 crc kubenswrapper[4830]: I0227 18:09:21.607182 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xwqr6" event={"ID":"8a670c5d-bc3f-4fef-b1b1-f62883562b09","Type":"ContainerDied","Data":"ed96a07f90b7e1a3c412c67a0ddf6bf049c5c66653542b4cf935a862b5af51dc"} Feb 27 18:09:21 crc kubenswrapper[4830]: I0227 18:09:21.607209 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xwqr6" event={"ID":"8a670c5d-bc3f-4fef-b1b1-f62883562b09","Type":"ContainerDied","Data":"095ed1e9ed8f94bde64a72d3532e63d2fef2cd458432cd92d06aaec3ff5b92cd"} Feb 27 18:09:21 crc kubenswrapper[4830]: I0227 18:09:21.607228 4830 scope.go:117] "RemoveContainer" containerID="ed96a07f90b7e1a3c412c67a0ddf6bf049c5c66653542b4cf935a862b5af51dc" Feb 27 18:09:21 crc kubenswrapper[4830]: I0227 18:09:21.607307 4830 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xwqr6" Feb 27 18:09:21 crc kubenswrapper[4830]: I0227 18:09:21.616759 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xwgd\" (UniqueName: \"kubernetes.io/projected/bae5dd66-7f10-45ef-b8ae-82e2d6dfa0aa-kube-api-access-5xwgd\") pod \"must-gather-4nm2j\" (UID: \"bae5dd66-7f10-45ef-b8ae-82e2d6dfa0aa\") " pod="openshift-must-gather-pfkj9/must-gather-4nm2j" Feb 27 18:09:21 crc kubenswrapper[4830]: I0227 18:09:21.616853 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/bae5dd66-7f10-45ef-b8ae-82e2d6dfa0aa-must-gather-output\") pod \"must-gather-4nm2j\" (UID: \"bae5dd66-7f10-45ef-b8ae-82e2d6dfa0aa\") " pod="openshift-must-gather-pfkj9/must-gather-4nm2j" Feb 27 18:09:21 crc kubenswrapper[4830]: I0227 18:09:21.633672 4830 scope.go:117] "RemoveContainer" containerID="d5ea3407f58e31b24d730a5c497992364c3aee4d77844fc20c57ce5489f39012" Feb 27 18:09:21 crc kubenswrapper[4830]: I0227 18:09:21.651544 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xwqr6"] Feb 27 18:09:21 crc kubenswrapper[4830]: I0227 18:09:21.654805 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-xwqr6"] Feb 27 18:09:21 crc kubenswrapper[4830]: I0227 18:09:21.663413 4830 scope.go:117] "RemoveContainer" containerID="a0292901e3b44f229c7c978c7c0c7bd9c469b8c5cf6e3b1a7f1f052b733c1583" Feb 27 18:09:21 crc kubenswrapper[4830]: I0227 18:09:21.697401 4830 scope.go:117] "RemoveContainer" containerID="ed96a07f90b7e1a3c412c67a0ddf6bf049c5c66653542b4cf935a862b5af51dc" Feb 27 18:09:21 crc kubenswrapper[4830]: E0227 18:09:21.698001 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"ed96a07f90b7e1a3c412c67a0ddf6bf049c5c66653542b4cf935a862b5af51dc\": container with ID starting with ed96a07f90b7e1a3c412c67a0ddf6bf049c5c66653542b4cf935a862b5af51dc not found: ID does not exist" containerID="ed96a07f90b7e1a3c412c67a0ddf6bf049c5c66653542b4cf935a862b5af51dc" Feb 27 18:09:21 crc kubenswrapper[4830]: I0227 18:09:21.698077 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed96a07f90b7e1a3c412c67a0ddf6bf049c5c66653542b4cf935a862b5af51dc"} err="failed to get container status \"ed96a07f90b7e1a3c412c67a0ddf6bf049c5c66653542b4cf935a862b5af51dc\": rpc error: code = NotFound desc = could not find container \"ed96a07f90b7e1a3c412c67a0ddf6bf049c5c66653542b4cf935a862b5af51dc\": container with ID starting with ed96a07f90b7e1a3c412c67a0ddf6bf049c5c66653542b4cf935a862b5af51dc not found: ID does not exist" Feb 27 18:09:21 crc kubenswrapper[4830]: I0227 18:09:21.698122 4830 scope.go:117] "RemoveContainer" containerID="d5ea3407f58e31b24d730a5c497992364c3aee4d77844fc20c57ce5489f39012" Feb 27 18:09:21 crc kubenswrapper[4830]: E0227 18:09:21.698647 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d5ea3407f58e31b24d730a5c497992364c3aee4d77844fc20c57ce5489f39012\": container with ID starting with d5ea3407f58e31b24d730a5c497992364c3aee4d77844fc20c57ce5489f39012 not found: ID does not exist" containerID="d5ea3407f58e31b24d730a5c497992364c3aee4d77844fc20c57ce5489f39012" Feb 27 18:09:21 crc kubenswrapper[4830]: I0227 18:09:21.698701 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d5ea3407f58e31b24d730a5c497992364c3aee4d77844fc20c57ce5489f39012"} err="failed to get container status \"d5ea3407f58e31b24d730a5c497992364c3aee4d77844fc20c57ce5489f39012\": rpc error: code = NotFound desc = could not find container \"d5ea3407f58e31b24d730a5c497992364c3aee4d77844fc20c57ce5489f39012\": container with ID 
starting with d5ea3407f58e31b24d730a5c497992364c3aee4d77844fc20c57ce5489f39012 not found: ID does not exist" Feb 27 18:09:21 crc kubenswrapper[4830]: I0227 18:09:21.698737 4830 scope.go:117] "RemoveContainer" containerID="a0292901e3b44f229c7c978c7c0c7bd9c469b8c5cf6e3b1a7f1f052b733c1583" Feb 27 18:09:21 crc kubenswrapper[4830]: E0227 18:09:21.699133 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a0292901e3b44f229c7c978c7c0c7bd9c469b8c5cf6e3b1a7f1f052b733c1583\": container with ID starting with a0292901e3b44f229c7c978c7c0c7bd9c469b8c5cf6e3b1a7f1f052b733c1583 not found: ID does not exist" containerID="a0292901e3b44f229c7c978c7c0c7bd9c469b8c5cf6e3b1a7f1f052b733c1583" Feb 27 18:09:21 crc kubenswrapper[4830]: I0227 18:09:21.699169 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0292901e3b44f229c7c978c7c0c7bd9c469b8c5cf6e3b1a7f1f052b733c1583"} err="failed to get container status \"a0292901e3b44f229c7c978c7c0c7bd9c469b8c5cf6e3b1a7f1f052b733c1583\": rpc error: code = NotFound desc = could not find container \"a0292901e3b44f229c7c978c7c0c7bd9c469b8c5cf6e3b1a7f1f052b733c1583\": container with ID starting with a0292901e3b44f229c7c978c7c0c7bd9c469b8c5cf6e3b1a7f1f052b733c1583 not found: ID does not exist" Feb 27 18:09:21 crc kubenswrapper[4830]: I0227 18:09:21.718892 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/bae5dd66-7f10-45ef-b8ae-82e2d6dfa0aa-must-gather-output\") pod \"must-gather-4nm2j\" (UID: \"bae5dd66-7f10-45ef-b8ae-82e2d6dfa0aa\") " pod="openshift-must-gather-pfkj9/must-gather-4nm2j" Feb 27 18:09:21 crc kubenswrapper[4830]: I0227 18:09:21.719103 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xwgd\" (UniqueName: 
\"kubernetes.io/projected/bae5dd66-7f10-45ef-b8ae-82e2d6dfa0aa-kube-api-access-5xwgd\") pod \"must-gather-4nm2j\" (UID: \"bae5dd66-7f10-45ef-b8ae-82e2d6dfa0aa\") " pod="openshift-must-gather-pfkj9/must-gather-4nm2j" Feb 27 18:09:21 crc kubenswrapper[4830]: I0227 18:09:21.719501 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/bae5dd66-7f10-45ef-b8ae-82e2d6dfa0aa-must-gather-output\") pod \"must-gather-4nm2j\" (UID: \"bae5dd66-7f10-45ef-b8ae-82e2d6dfa0aa\") " pod="openshift-must-gather-pfkj9/must-gather-4nm2j" Feb 27 18:09:21 crc kubenswrapper[4830]: I0227 18:09:21.744369 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xwgd\" (UniqueName: \"kubernetes.io/projected/bae5dd66-7f10-45ef-b8ae-82e2d6dfa0aa-kube-api-access-5xwgd\") pod \"must-gather-4nm2j\" (UID: \"bae5dd66-7f10-45ef-b8ae-82e2d6dfa0aa\") " pod="openshift-must-gather-pfkj9/must-gather-4nm2j" Feb 27 18:09:21 crc kubenswrapper[4830]: I0227 18:09:21.846773 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-pfkj9/must-gather-4nm2j" Feb 27 18:09:22 crc kubenswrapper[4830]: I0227 18:09:22.397835 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-pfkj9/must-gather-4nm2j"] Feb 27 18:09:22 crc kubenswrapper[4830]: I0227 18:09:22.639843 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pfkj9/must-gather-4nm2j" event={"ID":"bae5dd66-7f10-45ef-b8ae-82e2d6dfa0aa","Type":"ContainerStarted","Data":"042d0fc7e51fa34561c38a8b4e6dc2c5aec604c8f87e90d63efbd066a5bfdb73"} Feb 27 18:09:22 crc kubenswrapper[4830]: E0227 18:09:22.772562 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536926-4ghnd" podUID="43ed5a43-8e62-46bf-8151-7179e13730dd" Feb 27 18:09:22 crc kubenswrapper[4830]: I0227 18:09:22.778781 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a670c5d-bc3f-4fef-b1b1-f62883562b09" path="/var/lib/kubelet/pods/8a670c5d-bc3f-4fef-b1b1-f62883562b09/volumes" Feb 27 18:09:27 crc kubenswrapper[4830]: E0227 18:09:27.253412 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-5pkvb" podUID="4d0e4d8e-d4ab-47f9-8015-5ace0337272f" Feb 27 18:09:27 crc kubenswrapper[4830]: I0227 18:09:27.771470 4830 scope.go:117] "RemoveContainer" containerID="81ef18a8ceffa5c1cfa26cc002b47333287098e5e6cdb33647e44f342d553fc2" Feb 27 18:09:27 crc kubenswrapper[4830]: E0227 18:09:27.772613 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting 
failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 18:09:29 crc kubenswrapper[4830]: E0227 18:09:29.153996 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-lqb2w" podUID="0596772a-54ae-4d9e-9db4-5d7138bae51e" Feb 27 18:09:29 crc kubenswrapper[4830]: I0227 18:09:29.704028 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pfkj9/must-gather-4nm2j" event={"ID":"bae5dd66-7f10-45ef-b8ae-82e2d6dfa0aa","Type":"ContainerStarted","Data":"a2ee0c3bb13fa1093c47e6e4687d01afaa5fab4c4ee68a25f941ec2eb8b66a89"} Feb 27 18:09:29 crc kubenswrapper[4830]: E0227 18:09:29.764070 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"alertmanager\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/alertmanager-rhel9@sha256:dc62889b883f597de91b5389cc52c84c607247d49a807693be2f688e4703dfc3\\\"\"" pod="openstack/alertmanager-metric-storage-0" podUID="8608d556-6b34-4ab2-b676-007c65e0d359" Feb 27 18:09:30 crc kubenswrapper[4830]: I0227 18:09:30.716477 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pfkj9/must-gather-4nm2j" event={"ID":"bae5dd66-7f10-45ef-b8ae-82e2d6dfa0aa","Type":"ContainerStarted","Data":"0117ca02adf9dc2b8df32010488e9f0524a4dfbb77de9f70a72690e3a80ee023"} Feb 27 18:09:30 crc kubenswrapper[4830]: I0227 18:09:30.743933 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-pfkj9/must-gather-4nm2j" podStartSLOduration=2.9011391399999997 
podStartE2EDuration="9.743911708s" podCreationTimestamp="2026-02-27 18:09:21 +0000 UTC" firstStartedPulling="2026-02-27 18:09:22.399291941 +0000 UTC m=+7358.488564434" lastFinishedPulling="2026-02-27 18:09:29.242064539 +0000 UTC m=+7365.331337002" observedRunningTime="2026-02-27 18:09:30.738090928 +0000 UTC m=+7366.827363421" watchObservedRunningTime="2026-02-27 18:09:30.743911708 +0000 UTC m=+7366.833184181" Feb 27 18:09:31 crc kubenswrapper[4830]: E0227 18:09:31.765968 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"thanos-sidecar\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/thanos-rhel9@sha256:a223bab813b82d698992490bbb60927f6288a83ba52d539836c250e1471f6d34\\\"\"" pod="openstack/prometheus-metric-storage-0" podUID="75bcbe49-556d-4af7-9506-514c14ec8d9e" Feb 27 18:09:34 crc kubenswrapper[4830]: E0227 18:09:34.567392 4830 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.129.56.36:57564->38.129.56.36:42557: write tcp 38.129.56.36:57564->38.129.56.36:42557: write: broken pipe Feb 27 18:09:35 crc kubenswrapper[4830]: I0227 18:09:35.381159 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-pfkj9/crc-debug-8pr66"] Feb 27 18:09:35 crc kubenswrapper[4830]: I0227 18:09:35.382846 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-pfkj9/crc-debug-8pr66" Feb 27 18:09:35 crc kubenswrapper[4830]: I0227 18:09:35.474597 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xh7zn\" (UniqueName: \"kubernetes.io/projected/7d18e63b-ad55-49db-bb23-708cbe96b15b-kube-api-access-xh7zn\") pod \"crc-debug-8pr66\" (UID: \"7d18e63b-ad55-49db-bb23-708cbe96b15b\") " pod="openshift-must-gather-pfkj9/crc-debug-8pr66" Feb 27 18:09:35 crc kubenswrapper[4830]: I0227 18:09:35.474669 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7d18e63b-ad55-49db-bb23-708cbe96b15b-host\") pod \"crc-debug-8pr66\" (UID: \"7d18e63b-ad55-49db-bb23-708cbe96b15b\") " pod="openshift-must-gather-pfkj9/crc-debug-8pr66" Feb 27 18:09:35 crc kubenswrapper[4830]: I0227 18:09:35.576447 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xh7zn\" (UniqueName: \"kubernetes.io/projected/7d18e63b-ad55-49db-bb23-708cbe96b15b-kube-api-access-xh7zn\") pod \"crc-debug-8pr66\" (UID: \"7d18e63b-ad55-49db-bb23-708cbe96b15b\") " pod="openshift-must-gather-pfkj9/crc-debug-8pr66" Feb 27 18:09:35 crc kubenswrapper[4830]: I0227 18:09:35.576515 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7d18e63b-ad55-49db-bb23-708cbe96b15b-host\") pod \"crc-debug-8pr66\" (UID: \"7d18e63b-ad55-49db-bb23-708cbe96b15b\") " pod="openshift-must-gather-pfkj9/crc-debug-8pr66" Feb 27 18:09:35 crc kubenswrapper[4830]: I0227 18:09:35.576697 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7d18e63b-ad55-49db-bb23-708cbe96b15b-host\") pod \"crc-debug-8pr66\" (UID: \"7d18e63b-ad55-49db-bb23-708cbe96b15b\") " pod="openshift-must-gather-pfkj9/crc-debug-8pr66" Feb 27 18:09:35 crc 
kubenswrapper[4830]: I0227 18:09:35.608510 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xh7zn\" (UniqueName: \"kubernetes.io/projected/7d18e63b-ad55-49db-bb23-708cbe96b15b-kube-api-access-xh7zn\") pod \"crc-debug-8pr66\" (UID: \"7d18e63b-ad55-49db-bb23-708cbe96b15b\") " pod="openshift-must-gather-pfkj9/crc-debug-8pr66" Feb 27 18:09:35 crc kubenswrapper[4830]: I0227 18:09:35.701763 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-pfkj9/crc-debug-8pr66" Feb 27 18:09:35 crc kubenswrapper[4830]: W0227 18:09:35.766344 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7d18e63b_ad55_49db_bb23_708cbe96b15b.slice/crio-b57b57418d5c831a36b0d6c90b735c73ba4ea3ecc50ce6006dd5d35378ed0989 WatchSource:0}: Error finding container b57b57418d5c831a36b0d6c90b735c73ba4ea3ecc50ce6006dd5d35378ed0989: Status 404 returned error can't find the container with id b57b57418d5c831a36b0d6c90b735c73ba4ea3ecc50ce6006dd5d35378ed0989 Feb 27 18:09:35 crc kubenswrapper[4830]: I0227 18:09:35.810179 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pfkj9/crc-debug-8pr66" event={"ID":"7d18e63b-ad55-49db-bb23-708cbe96b15b","Type":"ContainerStarted","Data":"b57b57418d5c831a36b0d6c90b735c73ba4ea3ecc50ce6006dd5d35378ed0989"} Feb 27 18:09:36 crc kubenswrapper[4830]: E0227 18:09:36.765164 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536926-4ghnd" podUID="43ed5a43-8e62-46bf-8151-7179e13730dd" Feb 27 18:09:39 crc kubenswrapper[4830]: E0227 18:09:39.351581 4830 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading 
signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=886ecdbcb5b8f90338063f6476072fab73c2a9a65b9f2b3835b7bd01c69794c1/signature-2: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 27 18:09:39 crc kubenswrapper[4830]: E0227 18:09:39.352125 4830 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-74bq7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
community-operators-5pkvb_openshift-marketplace(4d0e4d8e-d4ab-47f9-8015-5ace0337272f): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=886ecdbcb5b8f90338063f6476072fab73c2a9a65b9f2b3835b7bd01c69794c1/signature-2: status 500 (Internal Server Error)" logger="UnhandledError" Feb 27 18:09:39 crc kubenswrapper[4830]: E0227 18:09:39.353573 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=886ecdbcb5b8f90338063f6476072fab73c2a9a65b9f2b3835b7bd01c69794c1/signature-2: status 500 (Internal Server Error)\"" pod="openshift-marketplace/community-operators-5pkvb" podUID="4d0e4d8e-d4ab-47f9-8015-5ace0337272f" Feb 27 18:09:42 crc kubenswrapper[4830]: I0227 18:09:42.767003 4830 scope.go:117] "RemoveContainer" containerID="81ef18a8ceffa5c1cfa26cc002b47333287098e5e6cdb33647e44f342d553fc2" Feb 27 18:09:42 crc kubenswrapper[4830]: E0227 18:09:42.768771 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 18:09:44 crc kubenswrapper[4830]: E0227 18:09:44.785208 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"thanos-sidecar\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/cluster-observability-operator/thanos-rhel9@sha256:a223bab813b82d698992490bbb60927f6288a83ba52d539836c250e1471f6d34\\\"\"" pod="openstack/prometheus-metric-storage-0" podUID="75bcbe49-556d-4af7-9506-514c14ec8d9e" Feb 27 18:09:44 crc kubenswrapper[4830]: E0227 18:09:44.790625 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"alertmanager\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/alertmanager-rhel9@sha256:dc62889b883f597de91b5389cc52c84c607247d49a807693be2f688e4703dfc3\\\"\"" pod="openstack/alertmanager-metric-storage-0" podUID="8608d556-6b34-4ab2-b676-007c65e0d359" Feb 27 18:09:46 crc kubenswrapper[4830]: I0227 18:09:46.982268 4830 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 27 18:09:47 crc kubenswrapper[4830]: I0227 18:09:47.964638 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pfkj9/crc-debug-8pr66" event={"ID":"7d18e63b-ad55-49db-bb23-708cbe96b15b","Type":"ContainerStarted","Data":"3daacc021bd16d7fbbf140a8e9591e43de306c4c9f70b304c42213e6040e61f8"} Feb 27 18:09:48 crc kubenswrapper[4830]: I0227 18:09:47.993299 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-pfkj9/crc-debug-8pr66" podStartSLOduration=1.671662153 podStartE2EDuration="12.993278759s" podCreationTimestamp="2026-02-27 18:09:35 +0000 UTC" firstStartedPulling="2026-02-27 18:09:35.768930834 +0000 UTC m=+7371.858203317" lastFinishedPulling="2026-02-27 18:09:47.09054744 +0000 UTC m=+7383.179819923" observedRunningTime="2026-02-27 18:09:47.988423103 +0000 UTC m=+7384.077695576" watchObservedRunningTime="2026-02-27 18:09:47.993278759 +0000 UTC m=+7384.082551242" Feb 27 18:09:48 crc kubenswrapper[4830]: E0227 18:09:48.376934 4830 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from 
manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-marketplace-index@sha256=e848a00af7690cfa41500b98e0e7a0b9738ce0af7b6b4fee3ea20e0838523c30/signature-2: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 27 18:09:48 crc kubenswrapper[4830]: E0227 18:09:48.377207 4830 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8jztz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start 
failed in pod redhat-marketplace-lqb2w_openshift-marketplace(0596772a-54ae-4d9e-9db4-5d7138bae51e): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-marketplace-index@sha256=e848a00af7690cfa41500b98e0e7a0b9738ce0af7b6b4fee3ea20e0838523c30/signature-2: status 500 (Internal Server Error)" logger="UnhandledError" Feb 27 18:09:48 crc kubenswrapper[4830]: E0227 18:09:48.378387 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-marketplace-index@sha256=e848a00af7690cfa41500b98e0e7a0b9738ce0af7b6b4fee3ea20e0838523c30/signature-2: status 500 (Internal Server Error)\"" pod="openshift-marketplace/redhat-marketplace-lqb2w" podUID="0596772a-54ae-4d9e-9db4-5d7138bae51e" Feb 27 18:09:48 crc kubenswrapper[4830]: E0227 18:09:48.769690 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536926-4ghnd" podUID="43ed5a43-8e62-46bf-8151-7179e13730dd" Feb 27 18:09:52 crc kubenswrapper[4830]: I0227 18:09:52.003838 4830 generic.go:334] "Generic (PLEG): container finished" podID="7d18e63b-ad55-49db-bb23-708cbe96b15b" containerID="3daacc021bd16d7fbbf140a8e9591e43de306c4c9f70b304c42213e6040e61f8" exitCode=125 Feb 27 18:09:52 crc kubenswrapper[4830]: I0227 18:09:52.004146 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pfkj9/crc-debug-8pr66" event={"ID":"7d18e63b-ad55-49db-bb23-708cbe96b15b","Type":"ContainerDied","Data":"3daacc021bd16d7fbbf140a8e9591e43de306c4c9f70b304c42213e6040e61f8"} Feb 27 18:09:52 crc kubenswrapper[4830]: E0227 
18:09:52.765500 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-5pkvb" podUID="4d0e4d8e-d4ab-47f9-8015-5ace0337272f" Feb 27 18:09:53 crc kubenswrapper[4830]: I0227 18:09:53.135383 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-pfkj9/crc-debug-8pr66" Feb 27 18:09:53 crc kubenswrapper[4830]: I0227 18:09:53.174058 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-pfkj9/crc-debug-8pr66"] Feb 27 18:09:53 crc kubenswrapper[4830]: I0227 18:09:53.181422 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-pfkj9/crc-debug-8pr66"] Feb 27 18:09:53 crc kubenswrapper[4830]: I0227 18:09:53.295790 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7d18e63b-ad55-49db-bb23-708cbe96b15b-host\") pod \"7d18e63b-ad55-49db-bb23-708cbe96b15b\" (UID: \"7d18e63b-ad55-49db-bb23-708cbe96b15b\") " Feb 27 18:09:53 crc kubenswrapper[4830]: I0227 18:09:53.295871 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xh7zn\" (UniqueName: \"kubernetes.io/projected/7d18e63b-ad55-49db-bb23-708cbe96b15b-kube-api-access-xh7zn\") pod \"7d18e63b-ad55-49db-bb23-708cbe96b15b\" (UID: \"7d18e63b-ad55-49db-bb23-708cbe96b15b\") " Feb 27 18:09:53 crc kubenswrapper[4830]: I0227 18:09:53.297779 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7d18e63b-ad55-49db-bb23-708cbe96b15b-host" (OuterVolumeSpecName: "host") pod "7d18e63b-ad55-49db-bb23-708cbe96b15b" (UID: "7d18e63b-ad55-49db-bb23-708cbe96b15b"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 18:09:53 crc kubenswrapper[4830]: I0227 18:09:53.320152 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d18e63b-ad55-49db-bb23-708cbe96b15b-kube-api-access-xh7zn" (OuterVolumeSpecName: "kube-api-access-xh7zn") pod "7d18e63b-ad55-49db-bb23-708cbe96b15b" (UID: "7d18e63b-ad55-49db-bb23-708cbe96b15b"). InnerVolumeSpecName "kube-api-access-xh7zn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:09:53 crc kubenswrapper[4830]: I0227 18:09:53.407773 4830 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7d18e63b-ad55-49db-bb23-708cbe96b15b-host\") on node \"crc\" DevicePath \"\"" Feb 27 18:09:53 crc kubenswrapper[4830]: I0227 18:09:53.407839 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xh7zn\" (UniqueName: \"kubernetes.io/projected/7d18e63b-ad55-49db-bb23-708cbe96b15b-kube-api-access-xh7zn\") on node \"crc\" DevicePath \"\"" Feb 27 18:09:54 crc kubenswrapper[4830]: I0227 18:09:54.032712 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b57b57418d5c831a36b0d6c90b735c73ba4ea3ecc50ce6006dd5d35378ed0989" Feb 27 18:09:54 crc kubenswrapper[4830]: I0227 18:09:54.033162 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-pfkj9/crc-debug-8pr66" Feb 27 18:09:54 crc kubenswrapper[4830]: I0227 18:09:54.776239 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d18e63b-ad55-49db-bb23-708cbe96b15b" path="/var/lib/kubelet/pods/7d18e63b-ad55-49db-bb23-708cbe96b15b/volumes" Feb 27 18:09:55 crc kubenswrapper[4830]: I0227 18:09:55.764046 4830 scope.go:117] "RemoveContainer" containerID="81ef18a8ceffa5c1cfa26cc002b47333287098e5e6cdb33647e44f342d553fc2" Feb 27 18:09:55 crc kubenswrapper[4830]: E0227 18:09:55.764642 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 18:09:56 crc kubenswrapper[4830]: E0227 18:09:56.765233 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"thanos-sidecar\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/thanos-rhel9@sha256:a223bab813b82d698992490bbb60927f6288a83ba52d539836c250e1471f6d34\\\"\"" pod="openstack/prometheus-metric-storage-0" podUID="75bcbe49-556d-4af7-9506-514c14ec8d9e" Feb 27 18:09:57 crc kubenswrapper[4830]: E0227 18:09:57.812406 4830 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/cluster-observability-operator/alertmanager-rhel9@sha256=b23d4d4796437a2d93a9ad40b24c1130bcf4315029983aa275426d01d2955388/signature-4: status 500 (Internal Server Error)" 
image="registry.redhat.io/cluster-observability-operator/alertmanager-rhel9@sha256:dc62889b883f597de91b5389cc52c84c607247d49a807693be2f688e4703dfc3" Feb 27 18:09:57 crc kubenswrapper[4830]: E0227 18:09:57.812856 4830 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:alertmanager,Image:registry.redhat.io/cluster-observability-operator/alertmanager-rhel9@sha256:dc62889b883f597de91b5389cc52c84c607247d49a807693be2f688e4703dfc3,Command:[],Args:[--config.file=/etc/alertmanager/config_out/alertmanager.env.yaml --storage.path=/alertmanager --data.retention=120h --cluster.listen-address=[$(POD_IP)]:9094 --web.listen-address=:9093 --web.route-prefix=/ --cluster.label=openstack/metric-storage --cluster.peer=alertmanager-metric-storage-0.alertmanager-operated:9094 --cluster.peer=alertmanager-metric-storage-1.alertmanager-operated:9094 --cluster.reconnect-timeout=5m --web.config.file=/etc/alertmanager/web_config/web-config.yaml],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:web,HostPort:0,ContainerPort:9093,Protocol:TCP,HostIP:,},ContainerPort{Name:mesh-tcp,HostPort:0,ContainerPort:9094,Protocol:TCP,HostIP:,},ContainerPort{Name:mesh-udp,HostPort:0,ContainerPort:9094,Protocol:UDP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{memory: {{209715200 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-volume,ReadOnly:false,MountPath:/etc/alertmanager/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-out,ReadOnly:true,MountPath:/etc/alertmanager/config_out,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tls-assets,ReadOnly:true,MountPath:/etc/alertmanager/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:alertmanager-metric-storage-db,ReadOnly:false,MountPath:/alertmanager,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:web-config,ReadOnly:true,MountPath:/etc/alertmanager/web_config/web-config.yaml,SubPath:web-config.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cluster-tls-config,ReadOnly:true,MountPath:/etc/alertmanager/cluster_tls_config/cluster-tls-config.yaml,SubPath:cluster-tls-config.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zv45l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/-/healthy,Port:{1 0 web},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:3,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/-/ready,Port:{1 0 
web},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:3,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod alertmanager-metric-storage-0_openstack(8608d556-6b34-4ab2-b676-007c65e0d359): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/cluster-observability-operator/alertmanager-rhel9@sha256=b23d4d4796437a2d93a9ad40b24c1130bcf4315029983aa275426d01d2955388/signature-4: status 500 (Internal Server Error)" logger="UnhandledError" Feb 27 18:09:57 crc kubenswrapper[4830]: E0227 18:09:57.814049 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"alertmanager\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/cluster-observability-operator/alertmanager-rhel9@sha256=b23d4d4796437a2d93a9ad40b24c1130bcf4315029983aa275426d01d2955388/signature-4: status 500 (Internal Server Error)\"" pod="openstack/alertmanager-metric-storage-0" podUID="8608d556-6b34-4ab2-b676-007c65e0d359" Feb 27 18:10:00 crc kubenswrapper[4830]: I0227 18:10:00.161577 4830 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-infra/auto-csr-approver-29536930-8nt27"] Feb 27 18:10:00 crc kubenswrapper[4830]: E0227 18:10:00.162464 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d18e63b-ad55-49db-bb23-708cbe96b15b" containerName="container-00" Feb 27 18:10:00 crc kubenswrapper[4830]: I0227 18:10:00.162482 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d18e63b-ad55-49db-bb23-708cbe96b15b" containerName="container-00" Feb 27 18:10:00 crc kubenswrapper[4830]: I0227 18:10:00.162756 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d18e63b-ad55-49db-bb23-708cbe96b15b" containerName="container-00" Feb 27 18:10:00 crc kubenswrapper[4830]: I0227 18:10:00.163757 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536930-8nt27" Feb 27 18:10:00 crc kubenswrapper[4830]: I0227 18:10:00.182111 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536930-8nt27"] Feb 27 18:10:00 crc kubenswrapper[4830]: I0227 18:10:00.262666 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2dt6\" (UniqueName: \"kubernetes.io/projected/451836eb-a90a-4644-ba0f-d03cd3cac130-kube-api-access-l2dt6\") pod \"auto-csr-approver-29536930-8nt27\" (UID: \"451836eb-a90a-4644-ba0f-d03cd3cac130\") " pod="openshift-infra/auto-csr-approver-29536930-8nt27" Feb 27 18:10:00 crc kubenswrapper[4830]: I0227 18:10:00.365633 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l2dt6\" (UniqueName: \"kubernetes.io/projected/451836eb-a90a-4644-ba0f-d03cd3cac130-kube-api-access-l2dt6\") pod \"auto-csr-approver-29536930-8nt27\" (UID: \"451836eb-a90a-4644-ba0f-d03cd3cac130\") " pod="openshift-infra/auto-csr-approver-29536930-8nt27" Feb 27 18:10:00 crc kubenswrapper[4830]: I0227 18:10:00.387167 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-l2dt6\" (UniqueName: \"kubernetes.io/projected/451836eb-a90a-4644-ba0f-d03cd3cac130-kube-api-access-l2dt6\") pod \"auto-csr-approver-29536930-8nt27\" (UID: \"451836eb-a90a-4644-ba0f-d03cd3cac130\") " pod="openshift-infra/auto-csr-approver-29536930-8nt27" Feb 27 18:10:00 crc kubenswrapper[4830]: I0227 18:10:00.494260 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536930-8nt27" Feb 27 18:10:00 crc kubenswrapper[4830]: E0227 18:10:00.765105 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536926-4ghnd" podUID="43ed5a43-8e62-46bf-8151-7179e13730dd" Feb 27 18:10:00 crc kubenswrapper[4830]: W0227 18:10:00.988675 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod451836eb_a90a_4644_ba0f_d03cd3cac130.slice/crio-37960e0e7a8e905272c89cfef0eeb990435ee656123d8616b44528294b220c4e WatchSource:0}: Error finding container 37960e0e7a8e905272c89cfef0eeb990435ee656123d8616b44528294b220c4e: Status 404 returned error can't find the container with id 37960e0e7a8e905272c89cfef0eeb990435ee656123d8616b44528294b220c4e Feb 27 18:10:01 crc kubenswrapper[4830]: I0227 18:10:01.002340 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536930-8nt27"] Feb 27 18:10:01 crc kubenswrapper[4830]: I0227 18:10:01.116608 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536930-8nt27" event={"ID":"451836eb-a90a-4644-ba0f-d03cd3cac130","Type":"ContainerStarted","Data":"37960e0e7a8e905272c89cfef0eeb990435ee656123d8616b44528294b220c4e"} Feb 27 18:10:01 crc kubenswrapper[4830]: E0227 18:10:01.971170 4830 log.go:32] "PullImage from image service failed" 
err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 27 18:10:01 crc kubenswrapper[4830]: E0227 18:10:01.971668 4830 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 18:10:01 crc kubenswrapper[4830]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 27 18:10:01 crc kubenswrapper[4830]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-l2dt6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29536930-8nt27_openshift-infra(451836eb-a90a-4644-ba0f-d03cd3cac130): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 27 18:10:01 crc kubenswrapper[4830]: > logger="UnhandledError" Feb 27 
18:10:01 crc kubenswrapper[4830]: E0227 18:10:01.972839 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29536930-8nt27" podUID="451836eb-a90a-4644-ba0f-d03cd3cac130" Feb 27 18:10:02 crc kubenswrapper[4830]: E0227 18:10:02.126982 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536930-8nt27" podUID="451836eb-a90a-4644-ba0f-d03cd3cac130" Feb 27 18:10:03 crc kubenswrapper[4830]: E0227 18:10:03.764622 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-lqb2w" podUID="0596772a-54ae-4d9e-9db4-5d7138bae51e" Feb 27 18:10:07 crc kubenswrapper[4830]: I0227 18:10:07.762710 4830 scope.go:117] "RemoveContainer" containerID="81ef18a8ceffa5c1cfa26cc002b47333287098e5e6cdb33647e44f342d553fc2" Feb 27 18:10:07 crc kubenswrapper[4830]: E0227 18:10:07.763684 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 
18:10:07 crc kubenswrapper[4830]: E0227 18:10:07.764630 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-5pkvb" podUID="4d0e4d8e-d4ab-47f9-8015-5ace0337272f" Feb 27 18:10:09 crc kubenswrapper[4830]: E0227 18:10:09.764561 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"thanos-sidecar\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/thanos-rhel9@sha256:a223bab813b82d698992490bbb60927f6288a83ba52d539836c250e1471f6d34\\\"\"" pod="openstack/prometheus-metric-storage-0" podUID="75bcbe49-556d-4af7-9506-514c14ec8d9e" Feb 27 18:10:09 crc kubenswrapper[4830]: E0227 18:10:09.765889 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"alertmanager\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/alertmanager-rhel9@sha256:dc62889b883f597de91b5389cc52c84c607247d49a807693be2f688e4703dfc3\\\"\"" pod="openstack/alertmanager-metric-storage-0" podUID="8608d556-6b34-4ab2-b676-007c65e0d359" Feb 27 18:10:14 crc kubenswrapper[4830]: E0227 18:10:14.773231 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-lqb2w" podUID="0596772a-54ae-4d9e-9db4-5d7138bae51e" Feb 27 18:10:14 crc kubenswrapper[4830]: E0227 18:10:14.776337 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" 
pod="openshift-infra/auto-csr-approver-29536926-4ghnd" podUID="43ed5a43-8e62-46bf-8151-7179e13730dd" Feb 27 18:10:15 crc kubenswrapper[4830]: E0227 18:10:15.852592 4830 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 27 18:10:15 crc kubenswrapper[4830]: E0227 18:10:15.852742 4830 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 18:10:15 crc kubenswrapper[4830]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 27 18:10:15 crc kubenswrapper[4830]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-l2dt6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29536930-8nt27_openshift-infra(451836eb-a90a-4644-ba0f-d03cd3cac130): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from 
https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 27 18:10:15 crc kubenswrapper[4830]: > logger="UnhandledError" Feb 27 18:10:15 crc kubenswrapper[4830]: E0227 18:10:15.854024 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29536930-8nt27" podUID="451836eb-a90a-4644-ba0f-d03cd3cac130" Feb 27 18:10:20 crc kubenswrapper[4830]: E0227 18:10:20.549343 4830 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=886ecdbcb5b8f90338063f6476072fab73c2a9a65b9f2b3835b7bd01c69794c1/signature-2: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 27 18:10:20 crc kubenswrapper[4830]: E0227 18:10:20.549715 4830 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-74bq7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-5pkvb_openshift-marketplace(4d0e4d8e-d4ab-47f9-8015-5ace0337272f): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=886ecdbcb5b8f90338063f6476072fab73c2a9a65b9f2b3835b7bd01c69794c1/signature-2: status 500 (Internal Server Error)" logger="UnhandledError" Feb 27 18:10:20 crc kubenswrapper[4830]: E0227 18:10:20.550891 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: 
reading signature from https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=886ecdbcb5b8f90338063f6476072fab73c2a9a65b9f2b3835b7bd01c69794c1/signature-2: status 500 (Internal Server Error)\"" pod="openshift-marketplace/community-operators-5pkvb" podUID="4d0e4d8e-d4ab-47f9-8015-5ace0337272f" Feb 27 18:10:21 crc kubenswrapper[4830]: I0227 18:10:21.764505 4830 scope.go:117] "RemoveContainer" containerID="81ef18a8ceffa5c1cfa26cc002b47333287098e5e6cdb33647e44f342d553fc2" Feb 27 18:10:21 crc kubenswrapper[4830]: E0227 18:10:21.765341 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 18:10:21 crc kubenswrapper[4830]: E0227 18:10:21.766375 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"alertmanager\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/alertmanager-rhel9@sha256:dc62889b883f597de91b5389cc52c84c607247d49a807693be2f688e4703dfc3\\\"\"" pod="openstack/alertmanager-metric-storage-0" podUID="8608d556-6b34-4ab2-b676-007c65e0d359" Feb 27 18:10:24 crc kubenswrapper[4830]: E0227 18:10:24.772020 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"thanos-sidecar\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/thanos-rhel9@sha256:a223bab813b82d698992490bbb60927f6288a83ba52d539836c250e1471f6d34\\\"\"" pod="openstack/prometheus-metric-storage-0" podUID="75bcbe49-556d-4af7-9506-514c14ec8d9e" Feb 27 18:10:26 crc kubenswrapper[4830]: E0227 18:10:26.765800 4830 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-lqb2w" podUID="0596772a-54ae-4d9e-9db4-5d7138bae51e" Feb 27 18:10:28 crc kubenswrapper[4830]: E0227 18:10:28.777208 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536930-8nt27" podUID="451836eb-a90a-4644-ba0f-d03cd3cac130" Feb 27 18:10:28 crc kubenswrapper[4830]: E0227 18:10:28.777282 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536926-4ghnd" podUID="43ed5a43-8e62-46bf-8151-7179e13730dd" Feb 27 18:10:33 crc kubenswrapper[4830]: I0227 18:10:33.762839 4830 scope.go:117] "RemoveContainer" containerID="81ef18a8ceffa5c1cfa26cc002b47333287098e5e6cdb33647e44f342d553fc2" Feb 27 18:10:34 crc kubenswrapper[4830]: I0227 18:10:34.462878 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" event={"ID":"00d6b7ce-4757-4275-8345-60c1b546ce25","Type":"ContainerStarted","Data":"469f495eca7f2f6702dc34d5195646e1c220a84d4e0dd0fdedb43c726d6afe28"} Feb 27 18:10:35 crc kubenswrapper[4830]: E0227 18:10:35.766910 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-5pkvb" podUID="4d0e4d8e-d4ab-47f9-8015-5ace0337272f" Feb 27 18:10:36 crc 
kubenswrapper[4830]: E0227 18:10:36.766235 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"alertmanager\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/alertmanager-rhel9@sha256:dc62889b883f597de91b5389cc52c84c607247d49a807693be2f688e4703dfc3\\\"\"" pod="openstack/alertmanager-metric-storage-0" podUID="8608d556-6b34-4ab2-b676-007c65e0d359" Feb 27 18:10:37 crc kubenswrapper[4830]: E0227 18:10:37.765135 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-lqb2w" podUID="0596772a-54ae-4d9e-9db4-5d7138bae51e" Feb 27 18:10:38 crc kubenswrapper[4830]: E0227 18:10:38.014782 4830 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/cluster-observability-operator/thanos-rhel9@sha256=b77218de6528d52542abddf9fb3faececdf1a3e47987ac740ab00468d461a60b/signature-4: status 500 (Internal Server Error)" image="registry.redhat.io/cluster-observability-operator/thanos-rhel9@sha256:a223bab813b82d698992490bbb60927f6288a83ba52d539836c250e1471f6d34" Feb 27 18:10:38 crc kubenswrapper[4830]: E0227 18:10:38.015171 4830 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:thanos-sidecar,Image:registry.redhat.io/cluster-observability-operator/thanos-rhel9@sha256:a223bab813b82d698992490bbb60927f6288a83ba52d539836c250e1471f6d34,Command:[],Args:[sidecar --prometheus.url=http://localhost:9090/ --grpc-address=:10901 --http-address=:10902 --log.level=info 
--prometheus.http-client-file=/etc/thanos/config/prometheus.http-client-file.yaml],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http,HostPort:0,ContainerPort:10902,Protocol:TCP,HostIP:,},ContainerPort{Name:grpc,HostPort:0,ContainerPort:10901,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:thanos-prometheus-http-client-file,ReadOnly:false,MountPath:/etc/thanos/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w6l8v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod prometheus-metric-storage-0_openstack(75bcbe49-556d-4af7-9506-514c14ec8d9e): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/cluster-observability-operator/thanos-rhel9@sha256=b77218de6528d52542abddf9fb3faececdf1a3e47987ac740ab00468d461a60b/signature-4: status 500 (Internal Server Error)" logger="UnhandledError" Feb 27 18:10:38 crc kubenswrapper[4830]: E0227 18:10:38.017752 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"thanos-sidecar\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/cluster-observability-operator/thanos-rhel9@sha256=b77218de6528d52542abddf9fb3faececdf1a3e47987ac740ab00468d461a60b/signature-4: status 500 (Internal Server Error)\"" pod="openstack/prometheus-metric-storage-0" podUID="75bcbe49-556d-4af7-9506-514c14ec8d9e" Feb 27 18:10:40 crc kubenswrapper[4830]: E0227 18:10:40.764551 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536926-4ghnd" podUID="43ed5a43-8e62-46bf-8151-7179e13730dd" Feb 27 18:10:41 crc kubenswrapper[4830]: E0227 18:10:41.942452 4830 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 27 18:10:41 crc kubenswrapper[4830]: E0227 18:10:41.943001 4830 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 18:10:41 crc kubenswrapper[4830]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 27 18:10:41 crc kubenswrapper[4830]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-l2dt6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29536930-8nt27_openshift-infra(451836eb-a90a-4644-ba0f-d03cd3cac130): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 27 18:10:41 crc kubenswrapper[4830]: > logger="UnhandledError" Feb 27 18:10:41 crc kubenswrapper[4830]: E0227 18:10:41.944179 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29536930-8nt27" podUID="451836eb-a90a-4644-ba0f-d03cd3cac130" Feb 27 18:10:47 crc kubenswrapper[4830]: E0227 18:10:47.764356 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-5pkvb" podUID="4d0e4d8e-d4ab-47f9-8015-5ace0337272f" Feb 27 18:10:47 crc kubenswrapper[4830]: E0227 18:10:47.764622 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"alertmanager\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/alertmanager-rhel9@sha256:dc62889b883f597de91b5389cc52c84c607247d49a807693be2f688e4703dfc3\\\"\"" pod="openstack/alertmanager-metric-storage-0" podUID="8608d556-6b34-4ab2-b676-007c65e0d359" Feb 27 18:10:50 crc kubenswrapper[4830]: E0227 18:10:50.767333 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-lqb2w" podUID="0596772a-54ae-4d9e-9db4-5d7138bae51e" Feb 27 18:10:51 crc kubenswrapper[4830]: E0227 18:10:51.766876 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"thanos-sidecar\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/thanos-rhel9@sha256:a223bab813b82d698992490bbb60927f6288a83ba52d539836c250e1471f6d34\\\"\"" pod="openstack/prometheus-metric-storage-0" podUID="75bcbe49-556d-4af7-9506-514c14ec8d9e" Feb 27 18:10:53 crc kubenswrapper[4830]: E0227 18:10:53.766809 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536926-4ghnd" podUID="43ed5a43-8e62-46bf-8151-7179e13730dd" Feb 27 18:10:53 crc kubenswrapper[4830]: E0227 18:10:53.766811 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536930-8nt27" podUID="451836eb-a90a-4644-ba0f-d03cd3cac130" Feb 27 18:10:59 crc kubenswrapper[4830]: E0227 18:10:59.771044 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-5pkvb" podUID="4d0e4d8e-d4ab-47f9-8015-5ace0337272f" Feb 27 18:11:02 crc kubenswrapper[4830]: I0227 18:11:02.090268 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_8608d556-6b34-4ab2-b676-007c65e0d359/init-config-reloader/0.log" Feb 27 18:11:02 crc kubenswrapper[4830]: I0227 18:11:02.366972 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_8608d556-6b34-4ab2-b676-007c65e0d359/init-config-reloader/0.log" Feb 27 18:11:02 crc kubenswrapper[4830]: I0227 18:11:02.438430 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_8608d556-6b34-4ab2-b676-007c65e0d359/config-reloader/0.log" Feb 27 18:11:02 crc kubenswrapper[4830]: I0227 18:11:02.596284 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-f4df6446b-z2csf_4ba8e997-3bde-4a23-9748-bd39acb5bcf1/barbican-api-log/0.log" Feb 27 18:11:02 crc kubenswrapper[4830]: I0227 18:11:02.600875 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-f4df6446b-z2csf_4ba8e997-3bde-4a23-9748-bd39acb5bcf1/barbican-api/0.log" Feb 27 18:11:02 crc kubenswrapper[4830]: E0227 18:11:02.765543 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"alertmanager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/cluster-observability-operator/alertmanager-rhel9@sha256:dc62889b883f597de91b5389cc52c84c607247d49a807693be2f688e4703dfc3\\\"\"" pod="openstack/alertmanager-metric-storage-0" podUID="8608d556-6b34-4ab2-b676-007c65e0d359" Feb 27 18:11:02 crc kubenswrapper[4830]: E0227 18:11:02.768788 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-lqb2w" podUID="0596772a-54ae-4d9e-9db4-5d7138bae51e" Feb 27 18:11:02 crc kubenswrapper[4830]: E0227 18:11:02.769508 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"thanos-sidecar\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/thanos-rhel9@sha256:a223bab813b82d698992490bbb60927f6288a83ba52d539836c250e1471f6d34\\\"\"" pod="openstack/prometheus-metric-storage-0" podUID="75bcbe49-556d-4af7-9506-514c14ec8d9e" Feb 27 18:11:02 crc kubenswrapper[4830]: I0227 18:11:02.803591 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-5658d7bb68-tdlwd_13e050dc-75b5-42df-bd0f-04e850d34786/barbican-keystone-listener/0.log" Feb 27 18:11:02 crc kubenswrapper[4830]: I0227 18:11:02.820806 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-5658d7bb68-tdlwd_13e050dc-75b5-42df-bd0f-04e850d34786/barbican-keystone-listener-log/0.log" Feb 27 18:11:02 crc kubenswrapper[4830]: I0227 18:11:02.965305 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-55f77b7c67-tb7rb_69c33f33-e26d-48e1-91c6-2bcf08372648/barbican-worker/0.log" Feb 27 18:11:03 crc kubenswrapper[4830]: I0227 18:11:03.049554 4830 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_barbican-worker-55f77b7c67-tb7rb_69c33f33-e26d-48e1-91c6-2bcf08372648/barbican-worker-log/0.log" Feb 27 18:11:03 crc kubenswrapper[4830]: I0227 18:11:03.145144 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_0e37d0f8-38cf-4583-811f-1907fd385a6c/cinder-api-log/0.log" Feb 27 18:11:03 crc kubenswrapper[4830]: I0227 18:11:03.165506 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_0e37d0f8-38cf-4583-811f-1907fd385a6c/cinder-api/0.log" Feb 27 18:11:03 crc kubenswrapper[4830]: I0227 18:11:03.371613 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_fea93d69-d865-4c2a-b245-eda3ff54abac/probe/0.log" Feb 27 18:11:03 crc kubenswrapper[4830]: I0227 18:11:03.403270 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_fea93d69-d865-4c2a-b245-eda3ff54abac/cinder-backup/0.log" Feb 27 18:11:03 crc kubenswrapper[4830]: I0227 18:11:03.488099 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_ff43406b-1751-47e9-84a7-38f1e2aa419e/cinder-scheduler/0.log" Feb 27 18:11:03 crc kubenswrapper[4830]: I0227 18:11:03.558519 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_ff43406b-1751-47e9-84a7-38f1e2aa419e/probe/0.log" Feb 27 18:11:03 crc kubenswrapper[4830]: I0227 18:11:03.634990 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_ffe65459-6ffe-44f7-8cee-acf6b6ec2fa8/cinder-volume/0.log" Feb 27 18:11:03 crc kubenswrapper[4830]: I0227 18:11:03.741695 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_ffe65459-6ffe-44f7-8cee-acf6b6ec2fa8/probe/0.log" Feb 27 18:11:03 crc kubenswrapper[4830]: I0227 18:11:03.821463 4830 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_dnsmasq-dns-6b8675bf5c-vgk78_a049e072-04be-4b81-8815-c5ee22647712/init/0.log" Feb 27 18:11:04 crc kubenswrapper[4830]: I0227 18:11:04.293987 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6b8675bf5c-vgk78_a049e072-04be-4b81-8815-c5ee22647712/init/0.log" Feb 27 18:11:04 crc kubenswrapper[4830]: I0227 18:11:04.329516 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6b8675bf5c-vgk78_a049e072-04be-4b81-8815-c5ee22647712/dnsmasq-dns/0.log" Feb 27 18:11:04 crc kubenswrapper[4830]: I0227 18:11:04.365097 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_4b3006a5-059d-4325-ab11-bb77351ab8f6/glance-httpd/0.log" Feb 27 18:11:04 crc kubenswrapper[4830]: I0227 18:11:04.497234 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_4b3006a5-059d-4325-ab11-bb77351ab8f6/glance-log/0.log" Feb 27 18:11:04 crc kubenswrapper[4830]: I0227 18:11:04.577505 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_d7edff5b-0c5e-4950-ae29-5cd0af755e35/glance-httpd/0.log" Feb 27 18:11:04 crc kubenswrapper[4830]: I0227 18:11:04.613890 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_d7edff5b-0c5e-4950-ae29-5cd0af755e35/glance-log/0.log" Feb 27 18:11:04 crc kubenswrapper[4830]: E0227 18:11:04.772433 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536926-4ghnd" podUID="43ed5a43-8e62-46bf-8151-7179e13730dd" Feb 27 18:11:04 crc kubenswrapper[4830]: I0227 18:11:04.805256 4830 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_heat-api-5fbb4bdc94-5c6mv_522dc2a3-ea31-4a6e-a591-31b8988518e9/heat-api/0.log" Feb 27 18:11:04 crc kubenswrapper[4830]: I0227 18:11:04.834577 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-cfnapi-7d576967dc-475nd_6e31db19-37ac-4e76-a650-dacf0b71c2fa/heat-cfnapi/0.log" Feb 27 18:11:05 crc kubenswrapper[4830]: I0227 18:11:05.035921 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-engine-649cbc8c57-j4q69_add92f79-a9b6-4757-a50d-902c8de76fdc/heat-engine/0.log" Feb 27 18:11:05 crc kubenswrapper[4830]: I0227 18:11:05.150391 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-7796b64d89-v2b4b_b07ca473-049b-41a3-bb57-a16764c45d86/horizon/0.log" Feb 27 18:11:05 crc kubenswrapper[4830]: I0227 18:11:05.242537 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-7796b64d89-v2b4b_b07ca473-049b-41a3-bb57-a16764c45d86/horizon-log/0.log" Feb 27 18:11:05 crc kubenswrapper[4830]: I0227 18:11:05.446923 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-676ffb979c-dk4rh_8f66c590-b19e-4188-bf5c-125cc3b78c4f/keystone-api/0.log" Feb 27 18:11:05 crc kubenswrapper[4830]: I0227 18:11:05.491566 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29536921-bbwcz_1ea9f937-1d9d-4e38-87dd-98017339ecc1/keystone-cron/0.log" Feb 27 18:11:05 crc kubenswrapper[4830]: I0227 18:11:05.658549 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_8caaa2c3-eb20-4f5c-8a28-09d2f8c64fc4/kube-state-metrics/0.log" Feb 27 18:11:05 crc kubenswrapper[4830]: I0227 18:11:05.698166 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-copy-data_a7b1cd16-932c-44e3-b8fa-bed298c7d045/adoption/0.log" Feb 27 18:11:06 crc kubenswrapper[4830]: I0227 18:11:06.164658 4830 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_neutron-cbb7cdb9f-mhl2g_2f64f8e1-a586-468f-a64d-18ea603f34c2/neutron-api/0.log" Feb 27 18:11:06 crc kubenswrapper[4830]: I0227 18:11:06.242488 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-cbb7cdb9f-mhl2g_2f64f8e1-a586-468f-a64d-18ea603f34c2/neutron-httpd/0.log" Feb 27 18:11:06 crc kubenswrapper[4830]: I0227 18:11:06.475692 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_d64fd96a-b098-4112-8019-6577ba87df85/nova-api-api/0.log" Feb 27 18:11:06 crc kubenswrapper[4830]: I0227 18:11:06.565148 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_d64fd96a-b098-4112-8019-6577ba87df85/nova-api-log/0.log" Feb 27 18:11:06 crc kubenswrapper[4830]: I0227 18:11:06.675166 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_9854db0d-60b3-462b-818b-9fa262f89cb4/nova-cell0-conductor-conductor/0.log" Feb 27 18:11:06 crc kubenswrapper[4830]: I0227 18:11:06.819128 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_1d4f86df-5d6b-4fd2-8c50-e414adfda318/nova-cell1-conductor-conductor/0.log" Feb 27 18:11:07 crc kubenswrapper[4830]: I0227 18:11:07.078109 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_85168f4c-a1d8-408f-a88c-269e899d29d9/nova-cell1-novncproxy-novncproxy/0.log" Feb 27 18:11:07 crc kubenswrapper[4830]: I0227 18:11:07.146045 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_cd94c1a3-2090-4382-b181-7b121e05a5d7/nova-metadata-log/0.log" Feb 27 18:11:07 crc kubenswrapper[4830]: I0227 18:11:07.248907 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_cd94c1a3-2090-4382-b181-7b121e05a5d7/nova-metadata-metadata/0.log" Feb 27 18:11:07 crc kubenswrapper[4830]: I0227 18:11:07.410787 4830 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-scheduler-0_c482099c-834e-41c1-92f6-7a4699524e31/nova-scheduler-scheduler/0.log" Feb 27 18:11:07 crc kubenswrapper[4830]: I0227 18:11:07.503050 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-api-dc594bd7f-7cnbx_27beea35-cf86-4a88-ae9a-1620fd0bc390/init/0.log" Feb 27 18:11:07 crc kubenswrapper[4830]: I0227 18:11:07.696884 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-api-dc594bd7f-7cnbx_27beea35-cf86-4a88-ae9a-1620fd0bc390/init/0.log" Feb 27 18:11:07 crc kubenswrapper[4830]: I0227 18:11:07.848261 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-api-dc594bd7f-7cnbx_27beea35-cf86-4a88-ae9a-1620fd0bc390/octavia-api-provider-agent/0.log" Feb 27 18:11:07 crc kubenswrapper[4830]: I0227 18:11:07.928813 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-healthmanager-l9rlw_e970daf4-00a2-473d-bfae-e985a7c78a94/init/0.log" Feb 27 18:11:07 crc kubenswrapper[4830]: I0227 18:11:07.948866 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-api-dc594bd7f-7cnbx_27beea35-cf86-4a88-ae9a-1620fd0bc390/octavia-api/0.log" Feb 27 18:11:08 crc kubenswrapper[4830]: I0227 18:11:08.522959 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-healthmanager-l9rlw_e970daf4-00a2-473d-bfae-e985a7c78a94/init/0.log" Feb 27 18:11:08 crc kubenswrapper[4830]: I0227 18:11:08.555902 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-housekeeping-jvpsh_6b0a7833-e438-4248-a46f-bbeb413c9f1b/init/0.log" Feb 27 18:11:08 crc kubenswrapper[4830]: I0227 18:11:08.579114 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-healthmanager-l9rlw_e970daf4-00a2-473d-bfae-e985a7c78a94/octavia-healthmanager/0.log" Feb 27 18:11:08 crc kubenswrapper[4830]: E0227 18:11:08.765840 4830 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536930-8nt27" podUID="451836eb-a90a-4644-ba0f-d03cd3cac130" Feb 27 18:11:08 crc kubenswrapper[4830]: I0227 18:11:08.915866 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-image-upload-59f8cff499-rf92d_f8b0f281-569f-4fbe-ab94-b604360aaafe/init/0.log" Feb 27 18:11:08 crc kubenswrapper[4830]: I0227 18:11:08.917700 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-housekeeping-jvpsh_6b0a7833-e438-4248-a46f-bbeb413c9f1b/init/0.log" Feb 27 18:11:08 crc kubenswrapper[4830]: I0227 18:11:08.920723 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-housekeeping-jvpsh_6b0a7833-e438-4248-a46f-bbeb413c9f1b/octavia-housekeeping/0.log" Feb 27 18:11:09 crc kubenswrapper[4830]: I0227 18:11:09.142698 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-image-upload-59f8cff499-rf92d_f8b0f281-569f-4fbe-ab94-b604360aaafe/octavia-amphora-httpd/0.log" Feb 27 18:11:09 crc kubenswrapper[4830]: I0227 18:11:09.227600 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-image-upload-59f8cff499-rf92d_f8b0f281-569f-4fbe-ab94-b604360aaafe/init/0.log" Feb 27 18:11:09 crc kubenswrapper[4830]: I0227 18:11:09.245215 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-rsyslog-xhmtb_a5ea5263-f3d8-40bf-9d4f-66afaad4eeec/init/0.log" Feb 27 18:11:09 crc kubenswrapper[4830]: I0227 18:11:09.466271 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-worker-zbtll_e70e9f25-ddb4-4592-acce-1cc44b59f2b8/init/0.log" Feb 27 18:11:09 crc kubenswrapper[4830]: I0227 18:11:09.511991 4830 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_octavia-rsyslog-xhmtb_a5ea5263-f3d8-40bf-9d4f-66afaad4eeec/octavia-rsyslog/0.log" Feb 27 18:11:09 crc kubenswrapper[4830]: I0227 18:11:09.548141 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-rsyslog-xhmtb_a5ea5263-f3d8-40bf-9d4f-66afaad4eeec/init/0.log" Feb 27 18:11:09 crc kubenswrapper[4830]: I0227 18:11:09.743694 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-worker-zbtll_e70e9f25-ddb4-4592-acce-1cc44b59f2b8/init/0.log" Feb 27 18:11:09 crc kubenswrapper[4830]: I0227 18:11:09.937682 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-worker-zbtll_e70e9f25-ddb4-4592-acce-1cc44b59f2b8/octavia-worker/0.log" Feb 27 18:11:09 crc kubenswrapper[4830]: I0227 18:11:09.941967 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_e2e2ff35-b569-4ab4-b1f3-47ec2327caeb/mysql-bootstrap/0.log" Feb 27 18:11:10 crc kubenswrapper[4830]: I0227 18:11:10.072954 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_e2e2ff35-b569-4ab4-b1f3-47ec2327caeb/mysql-bootstrap/0.log" Feb 27 18:11:10 crc kubenswrapper[4830]: I0227 18:11:10.180033 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_e2e2ff35-b569-4ab4-b1f3-47ec2327caeb/galera/0.log" Feb 27 18:11:10 crc kubenswrapper[4830]: I0227 18:11:10.264922 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_74c21e05-7e2b-4653-b6fa-a9a814716cc1/mysql-bootstrap/0.log" Feb 27 18:11:10 crc kubenswrapper[4830]: I0227 18:11:10.479222 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_74c21e05-7e2b-4653-b6fa-a9a814716cc1/mysql-bootstrap/0.log" Feb 27 18:11:10 crc kubenswrapper[4830]: I0227 18:11:10.481406 4830 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_openstack-galera-0_74c21e05-7e2b-4653-b6fa-a9a814716cc1/galera/0.log" Feb 27 18:11:10 crc kubenswrapper[4830]: I0227 18:11:10.486708 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_42745d06-1e64-4f81-a075-db86b6665a3e/openstackclient/0.log" Feb 27 18:11:10 crc kubenswrapper[4830]: I0227 18:11:10.803546 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-b8hbd_d4881336-2572-4aa9-a0c2-9c46b73b7898/ovn-controller/0.log" Feb 27 18:11:10 crc kubenswrapper[4830]: I0227 18:11:10.934747 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-xzrhp_f2c4bba8-df9d-411c-9990-7e98513001aa/openstack-network-exporter/0.log" Feb 27 18:11:11 crc kubenswrapper[4830]: I0227 18:11:11.044128 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-rfd9s_c2f87ce8-a38b-467d-a4bf-17eefbfbc958/ovsdb-server-init/0.log" Feb 27 18:11:11 crc kubenswrapper[4830]: I0227 18:11:11.273775 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-rfd9s_c2f87ce8-a38b-467d-a4bf-17eefbfbc958/ovsdb-server-init/0.log" Feb 27 18:11:11 crc kubenswrapper[4830]: I0227 18:11:11.333057 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-rfd9s_c2f87ce8-a38b-467d-a4bf-17eefbfbc958/ovs-vswitchd/0.log" Feb 27 18:11:11 crc kubenswrapper[4830]: I0227 18:11:11.339105 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-rfd9s_c2f87ce8-a38b-467d-a4bf-17eefbfbc958/ovsdb-server/0.log" Feb 27 18:11:11 crc kubenswrapper[4830]: I0227 18:11:11.515212 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-copy-data_af8bf2f1-b870-4b65-ac65-fcf8a2a11c1f/adoption/0.log" Feb 27 18:11:11 crc kubenswrapper[4830]: I0227 18:11:11.619687 4830 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovn-northd-0_8cea844f-8422-43f9-8056-0fa419120d61/ovn-northd/0.log" Feb 27 18:11:11 crc kubenswrapper[4830]: I0227 18:11:11.641427 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_8cea844f-8422-43f9-8056-0fa419120d61/openstack-network-exporter/0.log" Feb 27 18:11:11 crc kubenswrapper[4830]: I0227 18:11:11.836248 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_64bfb115-0d42-406c-8cf7-eee1da063fdf/ovsdbserver-nb/0.log" Feb 27 18:11:11 crc kubenswrapper[4830]: I0227 18:11:11.876683 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_64bfb115-0d42-406c-8cf7-eee1da063fdf/openstack-network-exporter/0.log" Feb 27 18:11:12 crc kubenswrapper[4830]: I0227 18:11:12.218866 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-1_3aaa1ca6-2199-4a24-8ac2-af90f0f4b4e0/openstack-network-exporter/0.log" Feb 27 18:11:12 crc kubenswrapper[4830]: I0227 18:11:12.337284 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-1_3aaa1ca6-2199-4a24-8ac2-af90f0f4b4e0/ovsdbserver-nb/0.log" Feb 27 18:11:12 crc kubenswrapper[4830]: I0227 18:11:12.497067 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-2_3b5aca21-88b5-41b5-a8fa-58df03c2dc7b/openstack-network-exporter/0.log" Feb 27 18:11:12 crc kubenswrapper[4830]: I0227 18:11:12.589916 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-2_3b5aca21-88b5-41b5-a8fa-58df03c2dc7b/ovsdbserver-nb/0.log" Feb 27 18:11:12 crc kubenswrapper[4830]: I0227 18:11:12.678839 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_b9629cfc-62f4-4e7b-abc6-c5310b859385/openstack-network-exporter/0.log" Feb 27 18:11:12 crc kubenswrapper[4830]: I0227 18:11:12.770347 4830 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovsdbserver-sb-0_b9629cfc-62f4-4e7b-abc6-c5310b859385/ovsdbserver-sb/0.log" Feb 27 18:11:12 crc kubenswrapper[4830]: I0227 18:11:12.798535 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_f185a288-a581-46e4-8ed5-d0ce81a59f00/memcached/0.log" Feb 27 18:11:12 crc kubenswrapper[4830]: I0227 18:11:12.849405 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-1_63a0cf52-f7cb-41ac-80a8-e83fcaff23d2/openstack-network-exporter/0.log" Feb 27 18:11:12 crc kubenswrapper[4830]: I0227 18:11:12.957758 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-1_63a0cf52-f7cb-41ac-80a8-e83fcaff23d2/ovsdbserver-sb/0.log" Feb 27 18:11:13 crc kubenswrapper[4830]: I0227 18:11:13.013544 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-2_22c480d9-5633-47b2-935e-c8db62ccb85f/openstack-network-exporter/0.log" Feb 27 18:11:13 crc kubenswrapper[4830]: I0227 18:11:13.104654 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-2_22c480d9-5633-47b2-935e-c8db62ccb85f/ovsdbserver-sb/0.log" Feb 27 18:11:13 crc kubenswrapper[4830]: I0227 18:11:13.185547 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-597449fbf6-zh885_7957ffb0-fa18-4c4b-b17e-7160a1c5f41f/placement-api/0.log" Feb 27 18:11:13 crc kubenswrapper[4830]: I0227 18:11:13.236935 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-597449fbf6-zh885_7957ffb0-fa18-4c4b-b17e-7160a1c5f41f/placement-log/0.log" Feb 27 18:11:13 crc kubenswrapper[4830]: I0227 18:11:13.344670 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_75bcbe49-556d-4af7-9506-514c14ec8d9e/init-config-reloader/0.log" Feb 27 18:11:13 crc kubenswrapper[4830]: I0227 18:11:13.550892 4830 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_prometheus-metric-storage-0_75bcbe49-556d-4af7-9506-514c14ec8d9e/init-config-reloader/0.log" Feb 27 18:11:13 crc kubenswrapper[4830]: I0227 18:11:13.576383 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_75bcbe49-556d-4af7-9506-514c14ec8d9e/config-reloader/0.log" Feb 27 18:11:13 crc kubenswrapper[4830]: I0227 18:11:13.591851 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_75bcbe49-556d-4af7-9506-514c14ec8d9e/prometheus/0.log" Feb 27 18:11:13 crc kubenswrapper[4830]: I0227 18:11:13.756294 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_57696a20-06e7-4dd6-9a1e-e4b0cb8013bf/setup-container/0.log" Feb 27 18:11:13 crc kubenswrapper[4830]: E0227 18:11:13.765383 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"thanos-sidecar\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/thanos-rhel9@sha256:a223bab813b82d698992490bbb60927f6288a83ba52d539836c250e1471f6d34\\\"\"" pod="openstack/prometheus-metric-storage-0" podUID="75bcbe49-556d-4af7-9506-514c14ec8d9e" Feb 27 18:11:13 crc kubenswrapper[4830]: I0227 18:11:13.987933 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_57696a20-06e7-4dd6-9a1e-e4b0cb8013bf/setup-container/0.log" Feb 27 18:11:14 crc kubenswrapper[4830]: I0227 18:11:14.005792 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_57696a20-06e7-4dd6-9a1e-e4b0cb8013bf/rabbitmq/0.log" Feb 27 18:11:14 crc kubenswrapper[4830]: I0227 18:11:14.060524 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_ff3f1819-c196-4202-a77b-6272462a9671/setup-container/0.log" Feb 27 18:11:14 crc kubenswrapper[4830]: I0227 18:11:14.197099 4830 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack_rabbitmq-server-0_ff3f1819-c196-4202-a77b-6272462a9671/setup-container/0.log" Feb 27 18:11:14 crc kubenswrapper[4830]: E0227 18:11:14.773803 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-lqb2w" podUID="0596772a-54ae-4d9e-9db4-5d7138bae51e" Feb 27 18:11:14 crc kubenswrapper[4830]: E0227 18:11:14.774082 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"alertmanager\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/alertmanager-rhel9@sha256:dc62889b883f597de91b5389cc52c84c607247d49a807693be2f688e4703dfc3\\\"\"" pod="openstack/alertmanager-metric-storage-0" podUID="8608d556-6b34-4ab2-b676-007c65e0d359" Feb 27 18:11:14 crc kubenswrapper[4830]: E0227 18:11:14.776012 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-5pkvb" podUID="4d0e4d8e-d4ab-47f9-8015-5ace0337272f" Feb 27 18:11:14 crc kubenswrapper[4830]: I0227 18:11:14.905913 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_ff3f1819-c196-4202-a77b-6272462a9671/rabbitmq/0.log" Feb 27 18:11:18 crc kubenswrapper[4830]: E0227 18:11:18.765568 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536926-4ghnd" podUID="43ed5a43-8e62-46bf-8151-7179e13730dd" Feb 27 18:11:19 crc kubenswrapper[4830]: E0227 
18:11:19.764482 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536930-8nt27" podUID="451836eb-a90a-4644-ba0f-d03cd3cac130" Feb 27 18:11:24 crc kubenswrapper[4830]: E0227 18:11:24.791674 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"thanos-sidecar\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/thanos-rhel9@sha256:a223bab813b82d698992490bbb60927f6288a83ba52d539836c250e1471f6d34\\\"\"" pod="openstack/prometheus-metric-storage-0" podUID="75bcbe49-556d-4af7-9506-514c14ec8d9e" Feb 27 18:11:25 crc kubenswrapper[4830]: E0227 18:11:25.765429 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-5pkvb" podUID="4d0e4d8e-d4ab-47f9-8015-5ace0337272f" Feb 27 18:11:26 crc kubenswrapper[4830]: E0227 18:11:26.769902 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"alertmanager\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/alertmanager-rhel9@sha256:dc62889b883f597de91b5389cc52c84c607247d49a807693be2f688e4703dfc3\\\"\"" pod="openstack/alertmanager-metric-storage-0" podUID="8608d556-6b34-4ab2-b676-007c65e0d359" Feb 27 18:11:27 crc kubenswrapper[4830]: E0227 18:11:27.765194 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-lqb2w" 
podUID="0596772a-54ae-4d9e-9db4-5d7138bae51e" Feb 27 18:11:30 crc kubenswrapper[4830]: E0227 18:11:30.764409 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536926-4ghnd" podUID="43ed5a43-8e62-46bf-8151-7179e13730dd" Feb 27 18:11:35 crc kubenswrapper[4830]: E0227 18:11:35.797686 4830 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 27 18:11:35 crc kubenswrapper[4830]: E0227 18:11:35.798616 4830 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 18:11:35 crc kubenswrapper[4830]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 27 18:11:35 crc kubenswrapper[4830]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-l2dt6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29536930-8nt27_openshift-infra(451836eb-a90a-4644-ba0f-d03cd3cac130): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 27 18:11:35 crc kubenswrapper[4830]: > logger="UnhandledError" Feb 27 18:11:35 crc kubenswrapper[4830]: E0227 18:11:35.800381 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29536930-8nt27" podUID="451836eb-a90a-4644-ba0f-d03cd3cac130" Feb 27 18:11:36 crc kubenswrapper[4830]: E0227 18:11:36.764818 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-5pkvb" podUID="4d0e4d8e-d4ab-47f9-8015-5ace0337272f" Feb 27 18:11:36 crc kubenswrapper[4830]: E0227 18:11:36.765047 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"thanos-sidecar\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/thanos-rhel9@sha256:a223bab813b82d698992490bbb60927f6288a83ba52d539836c250e1471f6d34\\\"\"" pod="openstack/prometheus-metric-storage-0" podUID="75bcbe49-556d-4af7-9506-514c14ec8d9e" Feb 27 18:11:37 crc kubenswrapper[4830]: I0227 18:11:37.043234 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_56ecd13db59bdd13e8f9c434db3243d93729dc89d3d863bd79a8b38d1ddq5l4_56801599-f8f5-494d-88bf-2c4786ed93d3/util/0.log" Feb 27 18:11:37 crc kubenswrapper[4830]: I0227 18:11:37.208975 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_56ecd13db59bdd13e8f9c434db3243d93729dc89d3d863bd79a8b38d1ddq5l4_56801599-f8f5-494d-88bf-2c4786ed93d3/pull/0.log" Feb 27 18:11:37 crc kubenswrapper[4830]: I0227 18:11:37.215258 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_56ecd13db59bdd13e8f9c434db3243d93729dc89d3d863bd79a8b38d1ddq5l4_56801599-f8f5-494d-88bf-2c4786ed93d3/pull/0.log" Feb 27 18:11:37 crc kubenswrapper[4830]: I0227 18:11:37.229905 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_56ecd13db59bdd13e8f9c434db3243d93729dc89d3d863bd79a8b38d1ddq5l4_56801599-f8f5-494d-88bf-2c4786ed93d3/util/0.log" Feb 27 18:11:37 crc kubenswrapper[4830]: I0227 18:11:37.429824 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_56ecd13db59bdd13e8f9c434db3243d93729dc89d3d863bd79a8b38d1ddq5l4_56801599-f8f5-494d-88bf-2c4786ed93d3/util/0.log" Feb 27 18:11:37 crc kubenswrapper[4830]: I0227 18:11:37.430044 4830 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_56ecd13db59bdd13e8f9c434db3243d93729dc89d3d863bd79a8b38d1ddq5l4_56801599-f8f5-494d-88bf-2c4786ed93d3/extract/0.log" Feb 27 18:11:37 crc kubenswrapper[4830]: I0227 18:11:37.436072 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_56ecd13db59bdd13e8f9c434db3243d93729dc89d3d863bd79a8b38d1ddq5l4_56801599-f8f5-494d-88bf-2c4786ed93d3/pull/0.log" Feb 27 18:11:37 crc kubenswrapper[4830]: I0227 18:11:37.854724 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-55cc45767f-m892c_33e3f2f7-6a6a-4e59-84d6-a7bb2a7b14e2/manager/0.log" Feb 27 18:11:38 crc kubenswrapper[4830]: I0227 18:11:38.298821 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-7f748f8b74-f9pxf_ddc86b78-f250-426e-80a2-1e0da35ea2a5/manager/0.log" Feb 27 18:11:38 crc kubenswrapper[4830]: I0227 18:11:38.370665 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-585b788787-slc8g_190a4a9c-ee4a-4c6d-a45c-1febc5a67e9d/manager/0.log" Feb 27 18:11:38 crc kubenswrapper[4830]: I0227 18:11:38.564838 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-7db95d7ffb-59k4p_23c25dea-fae4-4381-9b97-98fd17aee9d8/manager/0.log" Feb 27 18:11:38 crc kubenswrapper[4830]: E0227 18:11:38.764903 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-lqb2w" podUID="0596772a-54ae-4d9e-9db4-5d7138bae51e" Feb 27 18:11:39 crc kubenswrapper[4830]: I0227 18:11:39.127639 4830 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-8784b4656-29x7g_e68ac45c-7b30-4cd5-932a-9a0e8a3824f3/manager/0.log" Feb 27 18:11:39 crc kubenswrapper[4830]: I0227 18:11:39.600372 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-c77466965-24fz2_5b73c28e-36b3-4845-9336-299fc3dd2551/manager/0.log" Feb 27 18:11:39 crc kubenswrapper[4830]: I0227 18:11:39.681894 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-78b64779b9-fhwn5_53b4e8e1-00b7-4744-8fcf-a723ae104e53/manager/0.log" Feb 27 18:11:39 crc kubenswrapper[4830]: I0227 18:11:39.894607 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-76fd76856-vtdk8_b9dbfa18-3a80-408c-9a7d-34a96b2c411e/manager/0.log" Feb 27 18:11:40 crc kubenswrapper[4830]: I0227 18:11:40.175283 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-745fc45789-w8lqb_e42044d1-1153-4216-8d8f-b8333d2bcb00/manager/0.log" Feb 27 18:11:40 crc kubenswrapper[4830]: I0227 18:11:40.490922 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-768f998cf4-qvwzn_bbd18a52-1057-4183-bb46-f1c270691eac/manager/0.log" Feb 27 18:11:40 crc kubenswrapper[4830]: I0227 18:11:40.670582 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-768c8b45bb-7pp52_9526e5f2-4fd2-42bb-b96a-f9cd615313b9/manager/0.log" Feb 27 18:11:40 crc kubenswrapper[4830]: I0227 18:11:40.859157 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-cc79fdffd-2wlpz_7237e49f-cb23-40bd-b5ab-f1460c620f13/manager/0.log" Feb 27 18:11:40 crc kubenswrapper[4830]: I0227 18:11:40.933128 4830 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-6c67ff7674-ftbbj_531e48d4-bbe4-4527-944e-4b27dc957ff4/manager/0.log" Feb 27 18:11:40 crc kubenswrapper[4830]: I0227 18:11:40.998756 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-c5677dc5d-68j87_b719a387-109a-49fe-b4df-98038c202a0f/manager/0.log" Feb 27 18:11:41 crc kubenswrapper[4830]: I0227 18:11:41.422224 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-966449766-gf8mn_d2358885-c27e-4483-9e57-fdd68a711164/operator/0.log" Feb 27 18:11:41 crc kubenswrapper[4830]: I0227 18:11:41.480505 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-4p2qq_b2c0ed51-a6e9-40cd-8ce9-fa9f810528a1/registry-server/0.log" Feb 27 18:11:41 crc kubenswrapper[4830]: I0227 18:11:41.720758 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-684c7d77b-2n88g_f179e5c8-193f-47fc-841e-2dc3feff31cd/manager/0.log" Feb 27 18:11:41 crc kubenswrapper[4830]: E0227 18:11:41.764833 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"alertmanager\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/alertmanager-rhel9@sha256:dc62889b883f597de91b5389cc52c84c607247d49a807693be2f688e4703dfc3\\\"\"" pod="openstack/alertmanager-metric-storage-0" podUID="8608d556-6b34-4ab2-b676-007c65e0d359" Feb 27 18:11:41 crc kubenswrapper[4830]: I0227 18:11:41.854634 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-bff955cc4-fhgdd_8cf505f8-023a-4cfe-be27-2b920c8875cc/manager/0.log" Feb 27 18:11:41 crc kubenswrapper[4830]: I0227 18:11:41.970112 4830 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-4pzb7_af786cf1-6705-4c96-9c45-882daad96637/operator/0.log" Feb 27 18:11:42 crc kubenswrapper[4830]: I0227 18:11:42.271465 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-55f4bf89cb-lqgtj_c0bb3f6f-67ec-4669-be22-2122ae624cdd/manager/0.log" Feb 27 18:11:42 crc kubenswrapper[4830]: I0227 18:11:42.408766 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-56dc67d744-44hlt_a358af53-9ef3-4686-8e96-528d08c2e7a2/manager/0.log" Feb 27 18:11:42 crc kubenswrapper[4830]: I0227 18:11:42.492626 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-8467ccb4c8-mh9d6_33a4c588-56bf-40d2-892c-9fbe458de600/manager/0.log" Feb 27 18:11:42 crc kubenswrapper[4830]: I0227 18:11:42.609928 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-65c9f4f6b-w6kw7_37c91ba3-1b2b-4717-b591-d4a4c2ec9d62/manager/0.log" Feb 27 18:11:43 crc kubenswrapper[4830]: I0227 18:11:43.552113 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-7987977d84-9b7m9_7dcda287-c580-4c6d-881d-d2500541cfba/manager/0.log" Feb 27 18:11:45 crc kubenswrapper[4830]: E0227 18:11:45.765634 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536926-4ghnd" podUID="43ed5a43-8e62-46bf-8151-7179e13730dd" Feb 27 18:11:46 crc kubenswrapper[4830]: E0227 18:11:46.763386 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536930-8nt27" podUID="451836eb-a90a-4644-ba0f-d03cd3cac130" Feb 27 18:11:49 crc kubenswrapper[4830]: E0227 18:11:49.659274 4830 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=886ecdbcb5b8f90338063f6476072fab73c2a9a65b9f2b3835b7bd01c69794c1/signature-2: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 27 18:11:49 crc kubenswrapper[4830]: E0227 18:11:49.660166 4830 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-74bq7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-5pkvb_openshift-marketplace(4d0e4d8e-d4ab-47f9-8015-5ace0337272f): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=886ecdbcb5b8f90338063f6476072fab73c2a9a65b9f2b3835b7bd01c69794c1/signature-2: status 500 (Internal Server Error)" logger="UnhandledError" Feb 27 18:11:49 crc kubenswrapper[4830]: E0227 18:11:49.661227 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: 
reading signature from https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=886ecdbcb5b8f90338063f6476072fab73c2a9a65b9f2b3835b7bd01c69794c1/signature-2: status 500 (Internal Server Error)\"" pod="openshift-marketplace/community-operators-5pkvb" podUID="4d0e4d8e-d4ab-47f9-8015-5ace0337272f" Feb 27 18:11:49 crc kubenswrapper[4830]: E0227 18:11:49.763772 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-lqb2w" podUID="0596772a-54ae-4d9e-9db4-5d7138bae51e" Feb 27 18:11:50 crc kubenswrapper[4830]: E0227 18:11:50.763315 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"thanos-sidecar\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/thanos-rhel9@sha256:a223bab813b82d698992490bbb60927f6288a83ba52d539836c250e1471f6d34\\\"\"" pod="openstack/prometheus-metric-storage-0" podUID="75bcbe49-556d-4af7-9506-514c14ec8d9e" Feb 27 18:11:50 crc kubenswrapper[4830]: I0227 18:11:50.770969 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-6fb74c6d59-zw5q9_04f72aa7-3bab-4ac9-9fb6-106c7e40b9fb/manager/0.log" Feb 27 18:11:55 crc kubenswrapper[4830]: E0227 18:11:55.765675 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"alertmanager\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/alertmanager-rhel9@sha256:dc62889b883f597de91b5389cc52c84c607247d49a807693be2f688e4703dfc3\\\"\"" pod="openstack/alertmanager-metric-storage-0" podUID="8608d556-6b34-4ab2-b676-007c65e0d359" Feb 27 18:12:00 crc kubenswrapper[4830]: I0227 18:12:00.158894 4830 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-infra/auto-csr-approver-29536932-2wrrx"] Feb 27 18:12:00 crc kubenswrapper[4830]: I0227 18:12:00.161427 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536932-2wrrx" Feb 27 18:12:00 crc kubenswrapper[4830]: I0227 18:12:00.182675 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536932-2wrrx"] Feb 27 18:12:00 crc kubenswrapper[4830]: I0227 18:12:00.238094 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnzmj\" (UniqueName: \"kubernetes.io/projected/38ec6425-c4e6-445c-bada-3ad3758ca61f-kube-api-access-bnzmj\") pod \"auto-csr-approver-29536932-2wrrx\" (UID: \"38ec6425-c4e6-445c-bada-3ad3758ca61f\") " pod="openshift-infra/auto-csr-approver-29536932-2wrrx" Feb 27 18:12:00 crc kubenswrapper[4830]: I0227 18:12:00.340968 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bnzmj\" (UniqueName: \"kubernetes.io/projected/38ec6425-c4e6-445c-bada-3ad3758ca61f-kube-api-access-bnzmj\") pod \"auto-csr-approver-29536932-2wrrx\" (UID: \"38ec6425-c4e6-445c-bada-3ad3758ca61f\") " pod="openshift-infra/auto-csr-approver-29536932-2wrrx" Feb 27 18:12:00 crc kubenswrapper[4830]: I0227 18:12:00.364982 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bnzmj\" (UniqueName: \"kubernetes.io/projected/38ec6425-c4e6-445c-bada-3ad3758ca61f-kube-api-access-bnzmj\") pod \"auto-csr-approver-29536932-2wrrx\" (UID: \"38ec6425-c4e6-445c-bada-3ad3758ca61f\") " pod="openshift-infra/auto-csr-approver-29536932-2wrrx" Feb 27 18:12:00 crc kubenswrapper[4830]: I0227 18:12:00.530561 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536932-2wrrx" Feb 27 18:12:00 crc kubenswrapper[4830]: E0227 18:12:00.768240 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536930-8nt27" podUID="451836eb-a90a-4644-ba0f-d03cd3cac130" Feb 27 18:12:01 crc kubenswrapper[4830]: I0227 18:12:01.127242 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536932-2wrrx"] Feb 27 18:12:01 crc kubenswrapper[4830]: I0227 18:12:01.493910 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536932-2wrrx" event={"ID":"38ec6425-c4e6-445c-bada-3ad3758ca61f","Type":"ContainerStarted","Data":"030f9d5bc48df53fb96e9470033ecd08aa9f6db7f32cd9134e40f96ccd65db84"} Feb 27 18:12:01 crc kubenswrapper[4830]: E0227 18:12:01.746550 4830 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 27 18:12:01 crc kubenswrapper[4830]: E0227 18:12:01.747243 4830 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 18:12:01 crc kubenswrapper[4830]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 27 18:12:01 crc kubenswrapper[4830]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qj7ht,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29536926-4ghnd_openshift-infra(43ed5a43-8e62-46bf-8151-7179e13730dd): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 27 18:12:01 crc kubenswrapper[4830]: > logger="UnhandledError" Feb 27 18:12:01 crc kubenswrapper[4830]: E0227 18:12:01.749038 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29536926-4ghnd" podUID="43ed5a43-8e62-46bf-8151-7179e13730dd" Feb 27 18:12:02 crc kubenswrapper[4830]: E0227 18:12:02.365931 4830 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from 
https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 27 18:12:02 crc kubenswrapper[4830]: E0227 18:12:02.366497 4830 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 18:12:02 crc kubenswrapper[4830]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 27 18:12:02 crc kubenswrapper[4830]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bnzmj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29536932-2wrrx_openshift-infra(38ec6425-c4e6-445c-bada-3ad3758ca61f): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 27 18:12:02 crc kubenswrapper[4830]: > logger="UnhandledError" Feb 27 18:12:02 crc kubenswrapper[4830]: E0227 18:12:02.368387 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed 
to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29536932-2wrrx" podUID="38ec6425-c4e6-445c-bada-3ad3758ca61f" Feb 27 18:12:02 crc kubenswrapper[4830]: E0227 18:12:02.504559 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536932-2wrrx" podUID="38ec6425-c4e6-445c-bada-3ad3758ca61f" Feb 27 18:12:03 crc kubenswrapper[4830]: E0227 18:12:03.765552 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-5pkvb" podUID="4d0e4d8e-d4ab-47f9-8015-5ace0337272f" Feb 27 18:12:03 crc kubenswrapper[4830]: E0227 18:12:03.765712 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-lqb2w" podUID="0596772a-54ae-4d9e-9db4-5d7138bae51e" Feb 27 18:12:04 crc kubenswrapper[4830]: E0227 18:12:04.778738 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"thanos-sidecar\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/thanos-rhel9@sha256:a223bab813b82d698992490bbb60927f6288a83ba52d539836c250e1471f6d34\\\"\"" pod="openstack/prometheus-metric-storage-0" 
podUID="75bcbe49-556d-4af7-9506-514c14ec8d9e" Feb 27 18:12:06 crc kubenswrapper[4830]: I0227 18:12:06.687714 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-8lclt_61f22a16-1565-425a-914d-ec0d5a5c1902/control-plane-machine-set-operator/0.log" Feb 27 18:12:06 crc kubenswrapper[4830]: E0227 18:12:06.770479 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"alertmanager\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/alertmanager-rhel9@sha256:dc62889b883f597de91b5389cc52c84c607247d49a807693be2f688e4703dfc3\\\"\"" pod="openstack/alertmanager-metric-storage-0" podUID="8608d556-6b34-4ab2-b676-007c65e0d359" Feb 27 18:12:06 crc kubenswrapper[4830]: I0227 18:12:06.850768 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-khmn9_1843207f-14a3-4f21-a253-dbd843d2d8bf/machine-api-operator/0.log" Feb 27 18:12:06 crc kubenswrapper[4830]: I0227 18:12:06.874678 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-khmn9_1843207f-14a3-4f21-a253-dbd843d2d8bf/kube-rbac-proxy/0.log" Feb 27 18:12:12 crc kubenswrapper[4830]: E0227 18:12:12.766635 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536930-8nt27" podUID="451836eb-a90a-4644-ba0f-d03cd3cac130" Feb 27 18:12:15 crc kubenswrapper[4830]: E0227 18:12:15.767185 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"thanos-sidecar\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/cluster-observability-operator/thanos-rhel9@sha256:a223bab813b82d698992490bbb60927f6288a83ba52d539836c250e1471f6d34\\\"\"" pod="openstack/prometheus-metric-storage-0" podUID="75bcbe49-556d-4af7-9506-514c14ec8d9e" Feb 27 18:12:15 crc kubenswrapper[4830]: E0227 18:12:15.767494 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536926-4ghnd" podUID="43ed5a43-8e62-46bf-8151-7179e13730dd" Feb 27 18:12:16 crc kubenswrapper[4830]: E0227 18:12:16.765600 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-5pkvb" podUID="4d0e4d8e-d4ab-47f9-8015-5ace0337272f" Feb 27 18:12:16 crc kubenswrapper[4830]: E0227 18:12:16.766992 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-lqb2w" podUID="0596772a-54ae-4d9e-9db4-5d7138bae51e" Feb 27 18:12:19 crc kubenswrapper[4830]: E0227 18:12:19.022416 4830 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 27 18:12:19 crc kubenswrapper[4830]: E0227 18:12:19.023421 4830 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 
18:12:19 crc kubenswrapper[4830]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 27 18:12:19 crc kubenswrapper[4830]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bnzmj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29536932-2wrrx_openshift-infra(38ec6425-c4e6-445c-bada-3ad3758ca61f): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 27 18:12:19 crc kubenswrapper[4830]: > logger="UnhandledError" Feb 27 18:12:19 crc kubenswrapper[4830]: E0227 18:12:19.024833 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" 
pod="openshift-infra/auto-csr-approver-29536932-2wrrx" podUID="38ec6425-c4e6-445c-bada-3ad3758ca61f" Feb 27 18:12:20 crc kubenswrapper[4830]: E0227 18:12:20.766112 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"alertmanager\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/alertmanager-rhel9@sha256:dc62889b883f597de91b5389cc52c84c607247d49a807693be2f688e4703dfc3\\\"\"" pod="openstack/alertmanager-metric-storage-0" podUID="8608d556-6b34-4ab2-b676-007c65e0d359" Feb 27 18:12:23 crc kubenswrapper[4830]: I0227 18:12:23.080785 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-545d4d4674-xc4z5_15de8621-6ef1-450c-8af3-e039897a9a14/cert-manager-controller/0.log" Feb 27 18:12:23 crc kubenswrapper[4830]: I0227 18:12:23.224654 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-5545bd876-s29zc_bcd67c59-ad0e-4ca7-b11a-91a4f441ddb4/cert-manager-cainjector/0.log" Feb 27 18:12:23 crc kubenswrapper[4830]: I0227 18:12:23.333128 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-6888856db4-fr5hs_b04573a0-1535-4606-8551-ba1c3a53f933/cert-manager-webhook/0.log" Feb 27 18:12:27 crc kubenswrapper[4830]: E0227 18:12:27.767314 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-lqb2w" podUID="0596772a-54ae-4d9e-9db4-5d7138bae51e" Feb 27 18:12:27 crc kubenswrapper[4830]: E0227 18:12:27.767333 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" 
pod="openshift-infra/auto-csr-approver-29536926-4ghnd" podUID="43ed5a43-8e62-46bf-8151-7179e13730dd" Feb 27 18:12:27 crc kubenswrapper[4830]: E0227 18:12:27.767377 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536930-8nt27" podUID="451836eb-a90a-4644-ba0f-d03cd3cac130" Feb 27 18:12:30 crc kubenswrapper[4830]: E0227 18:12:30.766238 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"thanos-sidecar\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/thanos-rhel9@sha256:a223bab813b82d698992490bbb60927f6288a83ba52d539836c250e1471f6d34\\\"\"" pod="openstack/prometheus-metric-storage-0" podUID="75bcbe49-556d-4af7-9506-514c14ec8d9e" Feb 27 18:12:30 crc kubenswrapper[4830]: E0227 18:12:30.766251 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-5pkvb" podUID="4d0e4d8e-d4ab-47f9-8015-5ace0337272f" Feb 27 18:12:34 crc kubenswrapper[4830]: E0227 18:12:34.785581 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536932-2wrrx" podUID="38ec6425-c4e6-445c-bada-3ad3758ca61f" Feb 27 18:12:35 crc kubenswrapper[4830]: E0227 18:12:35.767106 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"alertmanager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/cluster-observability-operator/alertmanager-rhel9@sha256:dc62889b883f597de91b5389cc52c84c607247d49a807693be2f688e4703dfc3\\\"\"" pod="openstack/alertmanager-metric-storage-0" podUID="8608d556-6b34-4ab2-b676-007c65e0d359" Feb 27 18:12:38 crc kubenswrapper[4830]: E0227 18:12:38.766576 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536930-8nt27" podUID="451836eb-a90a-4644-ba0f-d03cd3cac130" Feb 27 18:12:39 crc kubenswrapper[4830]: I0227 18:12:39.577579 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5dcbbd79cf-fl5kr_cb2e0063-5469-4239-836b-131854f77207/nmstate-console-plugin/0.log" Feb 27 18:12:39 crc kubenswrapper[4830]: E0227 18:12:39.765163 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536926-4ghnd" podUID="43ed5a43-8e62-46bf-8151-7179e13730dd" Feb 27 18:12:39 crc kubenswrapper[4830]: I0227 18:12:39.919203 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-jpmgn_35d07d48-0cd6-4813-9737-497857d9e40b/nmstate-handler/0.log" Feb 27 18:12:40 crc kubenswrapper[4830]: I0227 18:12:40.014668 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-69594cc75-5mjm7_6865cd8a-83de-4744-8631-7b95fd599910/kube-rbac-proxy/0.log" Feb 27 18:12:40 crc kubenswrapper[4830]: I0227 18:12:40.078663 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-69594cc75-5mjm7_6865cd8a-83de-4744-8631-7b95fd599910/nmstate-metrics/0.log" Feb 27 18:12:40 crc kubenswrapper[4830]: I0227 18:12:40.308839 
4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-75c5dccd6c-5tq6p_760fa7ab-d23d-4c12-afd2-fe11766fd7d1/nmstate-operator/0.log" Feb 27 18:12:40 crc kubenswrapper[4830]: I0227 18:12:40.319104 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-786f45cff4-6kgbb_0e5821a0-b1d4-49d4-becb-f08af1b6a92f/nmstate-webhook/0.log" Feb 27 18:12:41 crc kubenswrapper[4830]: E0227 18:12:41.766897 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-5pkvb" podUID="4d0e4d8e-d4ab-47f9-8015-5ace0337272f" Feb 27 18:12:41 crc kubenswrapper[4830]: E0227 18:12:41.767909 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-lqb2w" podUID="0596772a-54ae-4d9e-9db4-5d7138bae51e" Feb 27 18:12:44 crc kubenswrapper[4830]: E0227 18:12:44.772488 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"thanos-sidecar\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/thanos-rhel9@sha256:a223bab813b82d698992490bbb60927f6288a83ba52d539836c250e1471f6d34\\\"\"" pod="openstack/prometheus-metric-storage-0" podUID="75bcbe49-556d-4af7-9506-514c14ec8d9e" Feb 27 18:12:48 crc kubenswrapper[4830]: E0227 18:12:48.764593 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"alertmanager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/cluster-observability-operator/alertmanager-rhel9@sha256:dc62889b883f597de91b5389cc52c84c607247d49a807693be2f688e4703dfc3\\\"\"" pod="openstack/alertmanager-metric-storage-0" podUID="8608d556-6b34-4ab2-b676-007c65e0d359" Feb 27 18:12:49 crc kubenswrapper[4830]: E0227 18:12:49.764256 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536930-8nt27" podUID="451836eb-a90a-4644-ba0f-d03cd3cac130" Feb 27 18:12:50 crc kubenswrapper[4830]: I0227 18:12:50.051526 4830 generic.go:334] "Generic (PLEG): container finished" podID="38ec6425-c4e6-445c-bada-3ad3758ca61f" containerID="72e0b8a8200471ae5c4f656100f07ad8fda35faae3f0c43adab5f472222f4460" exitCode=0 Feb 27 18:12:50 crc kubenswrapper[4830]: I0227 18:12:50.051598 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536932-2wrrx" event={"ID":"38ec6425-c4e6-445c-bada-3ad3758ca61f","Type":"ContainerDied","Data":"72e0b8a8200471ae5c4f656100f07ad8fda35faae3f0c43adab5f472222f4460"} Feb 27 18:12:51 crc kubenswrapper[4830]: I0227 18:12:51.449455 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536932-2wrrx" Feb 27 18:12:51 crc kubenswrapper[4830]: I0227 18:12:51.607481 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bnzmj\" (UniqueName: \"kubernetes.io/projected/38ec6425-c4e6-445c-bada-3ad3758ca61f-kube-api-access-bnzmj\") pod \"38ec6425-c4e6-445c-bada-3ad3758ca61f\" (UID: \"38ec6425-c4e6-445c-bada-3ad3758ca61f\") " Feb 27 18:12:51 crc kubenswrapper[4830]: I0227 18:12:51.615145 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38ec6425-c4e6-445c-bada-3ad3758ca61f-kube-api-access-bnzmj" (OuterVolumeSpecName: "kube-api-access-bnzmj") pod "38ec6425-c4e6-445c-bada-3ad3758ca61f" (UID: "38ec6425-c4e6-445c-bada-3ad3758ca61f"). InnerVolumeSpecName "kube-api-access-bnzmj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:12:51 crc kubenswrapper[4830]: I0227 18:12:51.710553 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bnzmj\" (UniqueName: \"kubernetes.io/projected/38ec6425-c4e6-445c-bada-3ad3758ca61f-kube-api-access-bnzmj\") on node \"crc\" DevicePath \"\"" Feb 27 18:12:52 crc kubenswrapper[4830]: I0227 18:12:52.107632 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536932-2wrrx" event={"ID":"38ec6425-c4e6-445c-bada-3ad3758ca61f","Type":"ContainerDied","Data":"030f9d5bc48df53fb96e9470033ecd08aa9f6db7f32cd9134e40f96ccd65db84"} Feb 27 18:12:52 crc kubenswrapper[4830]: I0227 18:12:52.107670 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="030f9d5bc48df53fb96e9470033ecd08aa9f6db7f32cd9134e40f96ccd65db84" Feb 27 18:12:52 crc kubenswrapper[4830]: I0227 18:12:52.107722 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536932-2wrrx" Feb 27 18:12:52 crc kubenswrapper[4830]: I0227 18:12:52.548047 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536922-84cqt"] Feb 27 18:12:52 crc kubenswrapper[4830]: I0227 18:12:52.557529 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536922-84cqt"] Feb 27 18:12:52 crc kubenswrapper[4830]: E0227 18:12:52.765361 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-5pkvb" podUID="4d0e4d8e-d4ab-47f9-8015-5ace0337272f" Feb 27 18:12:52 crc kubenswrapper[4830]: I0227 18:12:52.779652 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d90c5d5a-0f24-48b8-b8c6-4652a1922a9e" path="/var/lib/kubelet/pods/d90c5d5a-0f24-48b8-b8c6-4652a1922a9e/volumes" Feb 27 18:12:54 crc kubenswrapper[4830]: E0227 18:12:54.775851 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-lqb2w" podUID="0596772a-54ae-4d9e-9db4-5d7138bae51e" Feb 27 18:12:54 crc kubenswrapper[4830]: E0227 18:12:54.776089 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536926-4ghnd" podUID="43ed5a43-8e62-46bf-8151-7179e13730dd" Feb 27 18:12:58 crc kubenswrapper[4830]: I0227 18:12:58.353506 4830 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-x6smj_fb796dd0-1d3a-4037-a42a-7427293ea799/prometheus-operator/0.log" Feb 27 18:12:58 crc kubenswrapper[4830]: I0227 18:12:58.551634 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-66fc8b6fcc-5nr6x_cfe8c971-6fe4-44ae-bea8-d3b6a17821d0/prometheus-operator-admission-webhook/0.log" Feb 27 18:12:58 crc kubenswrapper[4830]: I0227 18:12:58.614487 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-66fc8b6fcc-lvddk_2e9c720f-41bf-4770-a857-835cd3bf0cbb/prometheus-operator-admission-webhook/0.log" Feb 27 18:12:58 crc kubenswrapper[4830]: I0227 18:12:58.737349 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-czxql_428d2446-f933-4f1d-b757-501fb5695db2/operator/0.log" Feb 27 18:12:58 crc kubenswrapper[4830]: E0227 18:12:58.765726 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"thanos-sidecar\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/thanos-rhel9@sha256:a223bab813b82d698992490bbb60927f6288a83ba52d539836c250e1471f6d34\\\"\"" pod="openstack/prometheus-metric-storage-0" podUID="75bcbe49-556d-4af7-9506-514c14ec8d9e" Feb 27 18:12:58 crc kubenswrapper[4830]: I0227 18:12:58.808427 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-5tqdq_df7ff018-e3f5-4243-bb66-c04cfa3ff9f9/perses-operator/0.log" Feb 27 18:12:58 crc kubenswrapper[4830]: I0227 18:12:58.867596 4830 scope.go:117] "RemoveContainer" containerID="1fc3e6825266a3a414c721da57ca610cb415101243b2b48b18494b8a2d76c81d" Feb 27 18:13:02 crc kubenswrapper[4830]: E0227 18:13:02.765767 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"alertmanager\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/alertmanager-rhel9@sha256:dc62889b883f597de91b5389cc52c84c607247d49a807693be2f688e4703dfc3\\\"\"" pod="openstack/alertmanager-metric-storage-0" podUID="8608d556-6b34-4ab2-b676-007c65e0d359" Feb 27 18:13:03 crc kubenswrapper[4830]: I0227 18:13:03.160671 4830 patch_prober.go:28] interesting pod/machine-config-daemon-2tv5v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 18:13:03 crc kubenswrapper[4830]: I0227 18:13:03.160764 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 18:13:04 crc kubenswrapper[4830]: I0227 18:13:04.817436 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-jrr7w"] Feb 27 18:13:04 crc kubenswrapper[4830]: E0227 18:13:04.818819 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38ec6425-c4e6-445c-bada-3ad3758ca61f" containerName="oc" Feb 27 18:13:04 crc kubenswrapper[4830]: I0227 18:13:04.818837 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="38ec6425-c4e6-445c-bada-3ad3758ca61f" containerName="oc" Feb 27 18:13:04 crc kubenswrapper[4830]: I0227 18:13:04.819215 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="38ec6425-c4e6-445c-bada-3ad3758ca61f" containerName="oc" Feb 27 18:13:04 crc kubenswrapper[4830]: I0227 18:13:04.821568 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jrr7w" Feb 27 18:13:04 crc kubenswrapper[4830]: I0227 18:13:04.829571 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jrr7w"] Feb 27 18:13:04 crc kubenswrapper[4830]: I0227 18:13:04.929415 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6q8f\" (UniqueName: \"kubernetes.io/projected/f45a4ecf-deb8-40a8-ae42-17dbc1353484-kube-api-access-x6q8f\") pod \"certified-operators-jrr7w\" (UID: \"f45a4ecf-deb8-40a8-ae42-17dbc1353484\") " pod="openshift-marketplace/certified-operators-jrr7w" Feb 27 18:13:04 crc kubenswrapper[4830]: I0227 18:13:04.929470 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f45a4ecf-deb8-40a8-ae42-17dbc1353484-catalog-content\") pod \"certified-operators-jrr7w\" (UID: \"f45a4ecf-deb8-40a8-ae42-17dbc1353484\") " pod="openshift-marketplace/certified-operators-jrr7w" Feb 27 18:13:04 crc kubenswrapper[4830]: I0227 18:13:04.929817 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f45a4ecf-deb8-40a8-ae42-17dbc1353484-utilities\") pod \"certified-operators-jrr7w\" (UID: \"f45a4ecf-deb8-40a8-ae42-17dbc1353484\") " pod="openshift-marketplace/certified-operators-jrr7w" Feb 27 18:13:05 crc kubenswrapper[4830]: I0227 18:13:05.033572 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x6q8f\" (UniqueName: \"kubernetes.io/projected/f45a4ecf-deb8-40a8-ae42-17dbc1353484-kube-api-access-x6q8f\") pod \"certified-operators-jrr7w\" (UID: \"f45a4ecf-deb8-40a8-ae42-17dbc1353484\") " pod="openshift-marketplace/certified-operators-jrr7w" Feb 27 18:13:05 crc kubenswrapper[4830]: I0227 18:13:05.033670 4830 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f45a4ecf-deb8-40a8-ae42-17dbc1353484-catalog-content\") pod \"certified-operators-jrr7w\" (UID: \"f45a4ecf-deb8-40a8-ae42-17dbc1353484\") " pod="openshift-marketplace/certified-operators-jrr7w" Feb 27 18:13:05 crc kubenswrapper[4830]: I0227 18:13:05.033825 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f45a4ecf-deb8-40a8-ae42-17dbc1353484-utilities\") pod \"certified-operators-jrr7w\" (UID: \"f45a4ecf-deb8-40a8-ae42-17dbc1353484\") " pod="openshift-marketplace/certified-operators-jrr7w" Feb 27 18:13:05 crc kubenswrapper[4830]: I0227 18:13:05.034194 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f45a4ecf-deb8-40a8-ae42-17dbc1353484-catalog-content\") pod \"certified-operators-jrr7w\" (UID: \"f45a4ecf-deb8-40a8-ae42-17dbc1353484\") " pod="openshift-marketplace/certified-operators-jrr7w" Feb 27 18:13:05 crc kubenswrapper[4830]: I0227 18:13:05.034312 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f45a4ecf-deb8-40a8-ae42-17dbc1353484-utilities\") pod \"certified-operators-jrr7w\" (UID: \"f45a4ecf-deb8-40a8-ae42-17dbc1353484\") " pod="openshift-marketplace/certified-operators-jrr7w" Feb 27 18:13:05 crc kubenswrapper[4830]: I0227 18:13:05.063084 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x6q8f\" (UniqueName: \"kubernetes.io/projected/f45a4ecf-deb8-40a8-ae42-17dbc1353484-kube-api-access-x6q8f\") pod \"certified-operators-jrr7w\" (UID: \"f45a4ecf-deb8-40a8-ae42-17dbc1353484\") " pod="openshift-marketplace/certified-operators-jrr7w" Feb 27 18:13:05 crc kubenswrapper[4830]: I0227 18:13:05.141413 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jrr7w" Feb 27 18:13:05 crc kubenswrapper[4830]: E0227 18:13:05.367842 4830 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 27 18:13:05 crc kubenswrapper[4830]: E0227 18:13:05.368332 4830 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 18:13:05 crc kubenswrapper[4830]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 27 18:13:05 crc kubenswrapper[4830]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-l2dt6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29536930-8nt27_openshift-infra(451836eb-a90a-4644-ba0f-d03cd3cac130): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from 
https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 27 18:13:05 crc kubenswrapper[4830]: > logger="UnhandledError" Feb 27 18:13:05 crc kubenswrapper[4830]: E0227 18:13:05.370135 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29536930-8nt27" podUID="451836eb-a90a-4644-ba0f-d03cd3cac130" Feb 27 18:13:05 crc kubenswrapper[4830]: W0227 18:13:05.649138 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf45a4ecf_deb8_40a8_ae42_17dbc1353484.slice/crio-35efa048b3d28038d56aa42798ff8861bbc5d7e0dc3dec173758186670d50204 WatchSource:0}: Error finding container 35efa048b3d28038d56aa42798ff8861bbc5d7e0dc3dec173758186670d50204: Status 404 returned error can't find the container with id 35efa048b3d28038d56aa42798ff8861bbc5d7e0dc3dec173758186670d50204 Feb 27 18:13:05 crc kubenswrapper[4830]: I0227 18:13:05.652567 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jrr7w"] Feb 27 18:13:05 crc kubenswrapper[4830]: E0227 18:13:05.763647 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-5pkvb" podUID="4d0e4d8e-d4ab-47f9-8015-5ace0337272f" Feb 27 18:13:06 crc kubenswrapper[4830]: I0227 18:13:06.300497 4830 
generic.go:334] "Generic (PLEG): container finished" podID="f45a4ecf-deb8-40a8-ae42-17dbc1353484" containerID="c4ad94a6909a667cc9399304fc2f8a3061eec059d86bc33a10f79f9466fb70ab" exitCode=0 Feb 27 18:13:06 crc kubenswrapper[4830]: I0227 18:13:06.300584 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jrr7w" event={"ID":"f45a4ecf-deb8-40a8-ae42-17dbc1353484","Type":"ContainerDied","Data":"c4ad94a6909a667cc9399304fc2f8a3061eec059d86bc33a10f79f9466fb70ab"} Feb 27 18:13:06 crc kubenswrapper[4830]: I0227 18:13:06.300637 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jrr7w" event={"ID":"f45a4ecf-deb8-40a8-ae42-17dbc1353484","Type":"ContainerStarted","Data":"35efa048b3d28038d56aa42798ff8861bbc5d7e0dc3dec173758186670d50204"} Feb 27 18:13:07 crc kubenswrapper[4830]: E0227 18:13:07.001515 4830 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/certified-operator-index@sha256=625372062485d8ed1e4e84c388a7d036cb39c1b93d8c56dd3418fce0c028b62b/signature-2: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 27 18:13:07 crc kubenswrapper[4830]: E0227 18:13:07.001909 4830 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x6q8f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-jrr7w_openshift-marketplace(f45a4ecf-deb8-40a8-ae42-17dbc1353484): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/certified-operator-index@sha256=625372062485d8ed1e4e84c388a7d036cb39c1b93d8c56dd3418fce0c028b62b/signature-2: status 500 (Internal Server Error)" logger="UnhandledError" Feb 27 18:13:07 crc kubenswrapper[4830]: E0227 18:13:07.003212 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: 
reading signature from https://registry.redhat.io/containers/sigstore/redhat/certified-operator-index@sha256=625372062485d8ed1e4e84c388a7d036cb39c1b93d8c56dd3418fce0c028b62b/signature-2: status 500 (Internal Server Error)\"" pod="openshift-marketplace/certified-operators-jrr7w" podUID="f45a4ecf-deb8-40a8-ae42-17dbc1353484" Feb 27 18:13:07 crc kubenswrapper[4830]: E0227 18:13:07.321057 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-jrr7w" podUID="f45a4ecf-deb8-40a8-ae42-17dbc1353484" Feb 27 18:13:07 crc kubenswrapper[4830]: E0227 18:13:07.767653 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536926-4ghnd" podUID="43ed5a43-8e62-46bf-8151-7179e13730dd" Feb 27 18:13:08 crc kubenswrapper[4830]: E0227 18:13:08.767326 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-lqb2w" podUID="0596772a-54ae-4d9e-9db4-5d7138bae51e" Feb 27 18:13:12 crc kubenswrapper[4830]: E0227 18:13:12.767249 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"thanos-sidecar\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/thanos-rhel9@sha256:a223bab813b82d698992490bbb60927f6288a83ba52d539836c250e1471f6d34\\\"\"" pod="openstack/prometheus-metric-storage-0" podUID="75bcbe49-556d-4af7-9506-514c14ec8d9e" Feb 27 18:13:15 crc kubenswrapper[4830]: E0227 
18:13:15.765864 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"alertmanager\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/alertmanager-rhel9@sha256:dc62889b883f597de91b5389cc52c84c607247d49a807693be2f688e4703dfc3\\\"\"" pod="openstack/alertmanager-metric-storage-0" podUID="8608d556-6b34-4ab2-b676-007c65e0d359" Feb 27 18:13:15 crc kubenswrapper[4830]: I0227 18:13:15.867876 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-86ddb6bd46-tlhc9_a7982aec-1d5b-4ab1-a8ae-a027dab24864/kube-rbac-proxy/0.log" Feb 27 18:13:16 crc kubenswrapper[4830]: I0227 18:13:16.231554 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t7kgx_053107e9-9202-4a31-8c74-a54d8a3cf63b/cp-frr-files/0.log" Feb 27 18:13:16 crc kubenswrapper[4830]: I0227 18:13:16.383964 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-86ddb6bd46-tlhc9_a7982aec-1d5b-4ab1-a8ae-a027dab24864/controller/0.log" Feb 27 18:13:16 crc kubenswrapper[4830]: I0227 18:13:16.456176 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t7kgx_053107e9-9202-4a31-8c74-a54d8a3cf63b/cp-reloader/0.log" Feb 27 18:13:16 crc kubenswrapper[4830]: I0227 18:13:16.513651 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t7kgx_053107e9-9202-4a31-8c74-a54d8a3cf63b/cp-frr-files/0.log" Feb 27 18:13:16 crc kubenswrapper[4830]: I0227 18:13:16.567125 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t7kgx_053107e9-9202-4a31-8c74-a54d8a3cf63b/cp-reloader/0.log" Feb 27 18:13:16 crc kubenswrapper[4830]: I0227 18:13:16.567167 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t7kgx_053107e9-9202-4a31-8c74-a54d8a3cf63b/cp-metrics/0.log" Feb 27 18:13:16 crc kubenswrapper[4830]: I0227 18:13:16.797608 
4830 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t7kgx_053107e9-9202-4a31-8c74-a54d8a3cf63b/cp-metrics/0.log" Feb 27 18:13:16 crc kubenswrapper[4830]: I0227 18:13:16.808229 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t7kgx_053107e9-9202-4a31-8c74-a54d8a3cf63b/cp-reloader/0.log" Feb 27 18:13:16 crc kubenswrapper[4830]: I0227 18:13:16.833602 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t7kgx_053107e9-9202-4a31-8c74-a54d8a3cf63b/cp-metrics/0.log" Feb 27 18:13:16 crc kubenswrapper[4830]: I0227 18:13:16.834929 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t7kgx_053107e9-9202-4a31-8c74-a54d8a3cf63b/cp-frr-files/0.log" Feb 27 18:13:17 crc kubenswrapper[4830]: I0227 18:13:17.016256 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t7kgx_053107e9-9202-4a31-8c74-a54d8a3cf63b/cp-frr-files/0.log" Feb 27 18:13:17 crc kubenswrapper[4830]: I0227 18:13:17.033262 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t7kgx_053107e9-9202-4a31-8c74-a54d8a3cf63b/cp-metrics/0.log" Feb 27 18:13:17 crc kubenswrapper[4830]: I0227 18:13:17.062713 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t7kgx_053107e9-9202-4a31-8c74-a54d8a3cf63b/controller/0.log" Feb 27 18:13:17 crc kubenswrapper[4830]: I0227 18:13:17.108550 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t7kgx_053107e9-9202-4a31-8c74-a54d8a3cf63b/cp-reloader/0.log" Feb 27 18:13:17 crc kubenswrapper[4830]: I0227 18:13:17.234352 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t7kgx_053107e9-9202-4a31-8c74-a54d8a3cf63b/frr-metrics/0.log" Feb 27 18:13:17 crc kubenswrapper[4830]: I0227 18:13:17.311328 4830 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-t7kgx_053107e9-9202-4a31-8c74-a54d8a3cf63b/kube-rbac-proxy/0.log" Feb 27 18:13:17 crc kubenswrapper[4830]: I0227 18:13:17.370701 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t7kgx_053107e9-9202-4a31-8c74-a54d8a3cf63b/kube-rbac-proxy-frr/0.log" Feb 27 18:13:17 crc kubenswrapper[4830]: I0227 18:13:17.519886 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t7kgx_053107e9-9202-4a31-8c74-a54d8a3cf63b/reloader/0.log" Feb 27 18:13:17 crc kubenswrapper[4830]: I0227 18:13:17.645296 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7f989f654f-mcd8b_68745f95-bd81-4609-bc51-f6222d4b2f27/frr-k8s-webhook-server/0.log" Feb 27 18:13:17 crc kubenswrapper[4830]: E0227 18:13:17.764459 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-5pkvb" podUID="4d0e4d8e-d4ab-47f9-8015-5ace0337272f" Feb 27 18:13:17 crc kubenswrapper[4830]: I0227 18:13:17.883356 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-64c4cc7899-7w4m7_5352a317-0150-4796-91dc-e91251c1bc20/manager/0.log" Feb 27 18:13:17 crc kubenswrapper[4830]: I0227 18:13:17.996403 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-7955dd9b7b-tb4vn_76c3c72e-3bfb-4b1c-9ab1-fdb798994872/webhook-server/0.log" Feb 27 18:13:18 crc kubenswrapper[4830]: I0227 18:13:18.158578 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-skvmw_e9ed2887-fafc-4283-baf2-1ecd1da2da58/kube-rbac-proxy/0.log" Feb 27 18:13:18 crc kubenswrapper[4830]: I0227 18:13:18.929862 4830 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_speaker-skvmw_e9ed2887-fafc-4283-baf2-1ecd1da2da58/speaker/0.log" Feb 27 18:13:20 crc kubenswrapper[4830]: I0227 18:13:20.033172 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t7kgx_053107e9-9202-4a31-8c74-a54d8a3cf63b/frr/0.log" Feb 27 18:13:20 crc kubenswrapper[4830]: E0227 18:13:20.765026 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536930-8nt27" podUID="451836eb-a90a-4644-ba0f-d03cd3cac130" Feb 27 18:13:21 crc kubenswrapper[4830]: E0227 18:13:21.426088 4830 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/certified-operator-index@sha256=625372062485d8ed1e4e84c388a7d036cb39c1b93d8c56dd3418fce0c028b62b/signature-2: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 27 18:13:21 crc kubenswrapper[4830]: E0227 18:13:21.426564 4830 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x6q8f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-jrr7w_openshift-marketplace(f45a4ecf-deb8-40a8-ae42-17dbc1353484): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/certified-operator-index@sha256=625372062485d8ed1e4e84c388a7d036cb39c1b93d8c56dd3418fce0c028b62b/signature-2: status 500 (Internal Server Error)" logger="UnhandledError" Feb 27 18:13:21 crc kubenswrapper[4830]: E0227 18:13:21.427802 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: 
reading signature from https://registry.redhat.io/containers/sigstore/redhat/certified-operator-index@sha256=625372062485d8ed1e4e84c388a7d036cb39c1b93d8c56dd3418fce0c028b62b/signature-2: status 500 (Internal Server Error)\"" pod="openshift-marketplace/certified-operators-jrr7w" podUID="f45a4ecf-deb8-40a8-ae42-17dbc1353484" Feb 27 18:13:21 crc kubenswrapper[4830]: E0227 18:13:21.766271 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536926-4ghnd" podUID="43ed5a43-8e62-46bf-8151-7179e13730dd" Feb 27 18:13:22 crc kubenswrapper[4830]: E0227 18:13:22.765023 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-lqb2w" podUID="0596772a-54ae-4d9e-9db4-5d7138bae51e" Feb 27 18:13:25 crc kubenswrapper[4830]: E0227 18:13:25.766902 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"thanos-sidecar\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/thanos-rhel9@sha256:a223bab813b82d698992490bbb60927f6288a83ba52d539836c250e1471f6d34\\\"\"" pod="openstack/prometheus-metric-storage-0" podUID="75bcbe49-556d-4af7-9506-514c14ec8d9e" Feb 27 18:13:26 crc kubenswrapper[4830]: E0227 18:13:26.765927 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"alertmanager\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/alertmanager-rhel9@sha256:dc62889b883f597de91b5389cc52c84c607247d49a807693be2f688e4703dfc3\\\"\"" pod="openstack/alertmanager-metric-storage-0" 
podUID="8608d556-6b34-4ab2-b676-007c65e0d359" Feb 27 18:13:28 crc kubenswrapper[4830]: E0227 18:13:28.768857 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-5pkvb" podUID="4d0e4d8e-d4ab-47f9-8015-5ace0337272f" Feb 27 18:13:32 crc kubenswrapper[4830]: E0227 18:13:32.765881 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-jrr7w" podUID="f45a4ecf-deb8-40a8-ae42-17dbc1353484" Feb 27 18:13:32 crc kubenswrapper[4830]: E0227 18:13:32.765933 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536930-8nt27" podUID="451836eb-a90a-4644-ba0f-d03cd3cac130" Feb 27 18:13:33 crc kubenswrapper[4830]: I0227 18:13:33.160792 4830 patch_prober.go:28] interesting pod/machine-config-daemon-2tv5v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 18:13:33 crc kubenswrapper[4830]: I0227 18:13:33.160860 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 18:13:33 crc kubenswrapper[4830]: E0227 
18:13:33.763916 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536926-4ghnd" podUID="43ed5a43-8e62-46bf-8151-7179e13730dd" Feb 27 18:13:33 crc kubenswrapper[4830]: I0227 18:13:33.931897 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a828swbr_c2c326b5-3888-4022-8171-e06f87caf906/util/0.log" Feb 27 18:13:34 crc kubenswrapper[4830]: I0227 18:13:34.120927 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a828swbr_c2c326b5-3888-4022-8171-e06f87caf906/util/0.log" Feb 27 18:13:34 crc kubenswrapper[4830]: I0227 18:13:34.161509 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a828swbr_c2c326b5-3888-4022-8171-e06f87caf906/pull/0.log" Feb 27 18:13:34 crc kubenswrapper[4830]: I0227 18:13:34.177290 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a828swbr_c2c326b5-3888-4022-8171-e06f87caf906/pull/0.log" Feb 27 18:13:34 crc kubenswrapper[4830]: I0227 18:13:34.347298 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a828swbr_c2c326b5-3888-4022-8171-e06f87caf906/util/0.log" Feb 27 18:13:34 crc kubenswrapper[4830]: I0227 18:13:34.350227 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a828swbr_c2c326b5-3888-4022-8171-e06f87caf906/pull/0.log" Feb 27 18:13:34 crc kubenswrapper[4830]: I0227 18:13:34.369273 4830 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a828swbr_c2c326b5-3888-4022-8171-e06f87caf906/extract/0.log" Feb 27 18:13:34 crc kubenswrapper[4830]: I0227 18:13:34.517383 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5qd9b8_a8fb3e00-3a8c-4ffd-9638-e1d738fc1651/util/0.log" Feb 27 18:13:34 crc kubenswrapper[4830]: I0227 18:13:34.709978 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5qd9b8_a8fb3e00-3a8c-4ffd-9638-e1d738fc1651/util/0.log" Feb 27 18:13:34 crc kubenswrapper[4830]: I0227 18:13:34.774178 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5qd9b8_a8fb3e00-3a8c-4ffd-9638-e1d738fc1651/pull/0.log" Feb 27 18:13:34 crc kubenswrapper[4830]: I0227 18:13:34.775785 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5qd9b8_a8fb3e00-3a8c-4ffd-9638-e1d738fc1651/pull/0.log" Feb 27 18:13:34 crc kubenswrapper[4830]: I0227 18:13:34.874496 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5qd9b8_a8fb3e00-3a8c-4ffd-9638-e1d738fc1651/util/0.log" Feb 27 18:13:34 crc kubenswrapper[4830]: I0227 18:13:34.899603 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5qd9b8_a8fb3e00-3a8c-4ffd-9638-e1d738fc1651/pull/0.log" Feb 27 18:13:34 crc kubenswrapper[4830]: I0227 18:13:34.970751 4830 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5qd9b8_a8fb3e00-3a8c-4ffd-9638-e1d738fc1651/extract/0.log" Feb 27 18:13:35 crc kubenswrapper[4830]: I0227 18:13:35.046869 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0882lj7_e4939dcd-a003-4c9f-8883-1f8361eee450/util/0.log" Feb 27 18:13:35 crc kubenswrapper[4830]: I0227 18:13:35.213475 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0882lj7_e4939dcd-a003-4c9f-8883-1f8361eee450/util/0.log" Feb 27 18:13:35 crc kubenswrapper[4830]: I0227 18:13:35.223161 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0882lj7_e4939dcd-a003-4c9f-8883-1f8361eee450/pull/0.log" Feb 27 18:13:35 crc kubenswrapper[4830]: I0227 18:13:35.260221 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0882lj7_e4939dcd-a003-4c9f-8883-1f8361eee450/pull/0.log" Feb 27 18:13:35 crc kubenswrapper[4830]: I0227 18:13:35.433854 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0882lj7_e4939dcd-a003-4c9f-8883-1f8361eee450/util/0.log" Feb 27 18:13:35 crc kubenswrapper[4830]: I0227 18:13:35.436496 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0882lj7_e4939dcd-a003-4c9f-8883-1f8361eee450/pull/0.log" Feb 27 18:13:35 crc kubenswrapper[4830]: I0227 18:13:35.437469 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0882lj7_e4939dcd-a003-4c9f-8883-1f8361eee450/extract/0.log" Feb 
27 18:13:35 crc kubenswrapper[4830]: I0227 18:13:35.601723 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-jrr7w_f45a4ecf-deb8-40a8-ae42-17dbc1353484/extract-utilities/0.log" Feb 27 18:13:35 crc kubenswrapper[4830]: I0227 18:13:35.795154 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-jrr7w_f45a4ecf-deb8-40a8-ae42-17dbc1353484/extract-utilities/0.log" Feb 27 18:13:36 crc kubenswrapper[4830]: I0227 18:13:36.023577 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-jrr7w_f45a4ecf-deb8-40a8-ae42-17dbc1353484/extract-utilities/0.log" Feb 27 18:13:36 crc kubenswrapper[4830]: I0227 18:13:36.218299 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xcv5r_2c3aa295-07ae-4594-935f-b9a902a83770/extract-utilities/0.log" Feb 27 18:13:36 crc kubenswrapper[4830]: I0227 18:13:36.408225 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xcv5r_2c3aa295-07ae-4594-935f-b9a902a83770/extract-utilities/0.log" Feb 27 18:13:36 crc kubenswrapper[4830]: I0227 18:13:36.413925 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xcv5r_2c3aa295-07ae-4594-935f-b9a902a83770/extract-content/0.log" Feb 27 18:13:36 crc kubenswrapper[4830]: I0227 18:13:36.469357 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xcv5r_2c3aa295-07ae-4594-935f-b9a902a83770/extract-content/0.log" Feb 27 18:13:36 crc kubenswrapper[4830]: I0227 18:13:36.628331 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xcv5r_2c3aa295-07ae-4594-935f-b9a902a83770/extract-utilities/0.log" Feb 27 18:13:36 crc kubenswrapper[4830]: I0227 18:13:36.630476 4830 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_certified-operators-xcv5r_2c3aa295-07ae-4594-935f-b9a902a83770/extract-content/0.log" Feb 27 18:13:36 crc kubenswrapper[4830]: E0227 18:13:36.764563 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-lqb2w" podUID="0596772a-54ae-4d9e-9db4-5d7138bae51e" Feb 27 18:13:36 crc kubenswrapper[4830]: I0227 18:13:36.865165 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-5pkvb_4d0e4d8e-d4ab-47f9-8015-5ace0337272f/extract-utilities/0.log" Feb 27 18:13:37 crc kubenswrapper[4830]: I0227 18:13:37.045847 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-xcv5r_2c3aa295-07ae-4594-935f-b9a902a83770/registry-server/0.log" Feb 27 18:13:37 crc kubenswrapper[4830]: I0227 18:13:37.082036 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-5pkvb_4d0e4d8e-d4ab-47f9-8015-5ace0337272f/extract-utilities/0.log" Feb 27 18:13:37 crc kubenswrapper[4830]: I0227 18:13:37.243872 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-5pkvb_4d0e4d8e-d4ab-47f9-8015-5ace0337272f/extract-utilities/0.log" Feb 27 18:13:37 crc kubenswrapper[4830]: I0227 18:13:37.413658 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-nckd4_f0dbf914-3579-4535-94f5-ea7382816919/extract-utilities/0.log" Feb 27 18:13:37 crc kubenswrapper[4830]: I0227 18:13:37.549874 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-nckd4_f0dbf914-3579-4535-94f5-ea7382816919/extract-utilities/0.log" Feb 27 18:13:37 crc kubenswrapper[4830]: I0227 
18:13:37.565848 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-nckd4_f0dbf914-3579-4535-94f5-ea7382816919/extract-content/0.log" Feb 27 18:13:37 crc kubenswrapper[4830]: I0227 18:13:37.576592 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-nckd4_f0dbf914-3579-4535-94f5-ea7382816919/extract-content/0.log" Feb 27 18:13:37 crc kubenswrapper[4830]: I0227 18:13:37.743872 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-nckd4_f0dbf914-3579-4535-94f5-ea7382816919/extract-utilities/0.log" Feb 27 18:13:37 crc kubenswrapper[4830]: I0227 18:13:37.766025 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-nckd4_f0dbf914-3579-4535-94f5-ea7382816919/extract-content/0.log" Feb 27 18:13:37 crc kubenswrapper[4830]: I0227 18:13:37.767738 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4bq74j_9ea172fb-feaf-4174-9aaf-e50231dcdf04/util/0.log" Feb 27 18:13:38 crc kubenswrapper[4830]: I0227 18:13:38.223167 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4bq74j_9ea172fb-feaf-4174-9aaf-e50231dcdf04/pull/0.log" Feb 27 18:13:38 crc kubenswrapper[4830]: I0227 18:13:38.244996 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4bq74j_9ea172fb-feaf-4174-9aaf-e50231dcdf04/util/0.log" Feb 27 18:13:38 crc kubenswrapper[4830]: I0227 18:13:38.265709 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4bq74j_9ea172fb-feaf-4174-9aaf-e50231dcdf04/pull/0.log" Feb 27 18:13:38 crc kubenswrapper[4830]: I0227 
18:13:38.489466 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4bq74j_9ea172fb-feaf-4174-9aaf-e50231dcdf04/util/0.log" Feb 27 18:13:38 crc kubenswrapper[4830]: I0227 18:13:38.507933 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4bq74j_9ea172fb-feaf-4174-9aaf-e50231dcdf04/extract/0.log" Feb 27 18:13:38 crc kubenswrapper[4830]: I0227 18:13:38.514555 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4bq74j_9ea172fb-feaf-4174-9aaf-e50231dcdf04/pull/0.log" Feb 27 18:13:38 crc kubenswrapper[4830]: I0227 18:13:38.685024 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-w2snv_79d764bd-68e2-4846-a2c3-3f6bdc2db5e7/marketplace-operator/0.log" Feb 27 18:13:38 crc kubenswrapper[4830]: I0227 18:13:38.739427 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-62jtg_85b6b000-62ad-4dfa-b384-c603bec84bbd/extract-utilities/0.log" Feb 27 18:13:38 crc kubenswrapper[4830]: I0227 18:13:38.740117 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-nckd4_f0dbf914-3579-4535-94f5-ea7382816919/registry-server/0.log" Feb 27 18:13:38 crc kubenswrapper[4830]: E0227 18:13:38.764782 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"thanos-sidecar\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/thanos-rhel9@sha256:a223bab813b82d698992490bbb60927f6288a83ba52d539836c250e1471f6d34\\\"\"" pod="openstack/prometheus-metric-storage-0" podUID="75bcbe49-556d-4af7-9506-514c14ec8d9e" Feb 27 18:13:38 crc kubenswrapper[4830]: I0227 18:13:38.921931 4830 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-62jtg_85b6b000-62ad-4dfa-b384-c603bec84bbd/extract-utilities/0.log" Feb 27 18:13:38 crc kubenswrapper[4830]: I0227 18:13:38.945309 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-62jtg_85b6b000-62ad-4dfa-b384-c603bec84bbd/extract-content/0.log" Feb 27 18:13:38 crc kubenswrapper[4830]: I0227 18:13:38.964868 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-62jtg_85b6b000-62ad-4dfa-b384-c603bec84bbd/extract-content/0.log" Feb 27 18:13:39 crc kubenswrapper[4830]: I0227 18:13:39.152358 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-62jtg_85b6b000-62ad-4dfa-b384-c603bec84bbd/extract-content/0.log" Feb 27 18:13:39 crc kubenswrapper[4830]: I0227 18:13:39.157224 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-62jtg_85b6b000-62ad-4dfa-b384-c603bec84bbd/extract-utilities/0.log" Feb 27 18:13:39 crc kubenswrapper[4830]: I0227 18:13:39.238351 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-lqb2w_0596772a-54ae-4d9e-9db4-5d7138bae51e/extract-utilities/0.log" Feb 27 18:13:39 crc kubenswrapper[4830]: I0227 18:13:39.466382 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-lqb2w_0596772a-54ae-4d9e-9db4-5d7138bae51e/extract-utilities/0.log" Feb 27 18:13:39 crc kubenswrapper[4830]: I0227 18:13:39.484718 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-62jtg_85b6b000-62ad-4dfa-b384-c603bec84bbd/registry-server/0.log" Feb 27 18:13:39 crc kubenswrapper[4830]: I0227 18:13:39.620348 4830 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-lqb2w_0596772a-54ae-4d9e-9db4-5d7138bae51e/extract-utilities/0.log" Feb 27 18:13:39 crc kubenswrapper[4830]: I0227 18:13:39.778940 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-jq4fp_6ce624ae-e85d-456f-9da1-fb880e9640ca/extract-utilities/0.log" Feb 27 18:13:39 crc kubenswrapper[4830]: I0227 18:13:39.920338 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-jq4fp_6ce624ae-e85d-456f-9da1-fb880e9640ca/extract-utilities/0.log" Feb 27 18:13:39 crc kubenswrapper[4830]: I0227 18:13:39.946594 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-jq4fp_6ce624ae-e85d-456f-9da1-fb880e9640ca/extract-content/0.log" Feb 27 18:13:39 crc kubenswrapper[4830]: I0227 18:13:39.957501 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-jq4fp_6ce624ae-e85d-456f-9da1-fb880e9640ca/extract-content/0.log" Feb 27 18:13:40 crc kubenswrapper[4830]: I0227 18:13:40.109780 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-jq4fp_6ce624ae-e85d-456f-9da1-fb880e9640ca/extract-content/0.log" Feb 27 18:13:40 crc kubenswrapper[4830]: I0227 18:13:40.110381 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-jq4fp_6ce624ae-e85d-456f-9da1-fb880e9640ca/extract-utilities/0.log" Feb 27 18:13:41 crc kubenswrapper[4830]: I0227 18:13:41.046706 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-jq4fp_6ce624ae-e85d-456f-9da1-fb880e9640ca/registry-server/0.log" Feb 27 18:13:41 crc kubenswrapper[4830]: E0227 18:13:41.765651 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"alertmanager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/cluster-observability-operator/alertmanager-rhel9@sha256:dc62889b883f597de91b5389cc52c84c607247d49a807693be2f688e4703dfc3\\\"\"" pod="openstack/alertmanager-metric-storage-0" podUID="8608d556-6b34-4ab2-b676-007c65e0d359" Feb 27 18:13:43 crc kubenswrapper[4830]: E0227 18:13:43.767160 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-5pkvb" podUID="4d0e4d8e-d4ab-47f9-8015-5ace0337272f" Feb 27 18:13:44 crc kubenswrapper[4830]: E0227 18:13:44.777004 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536926-4ghnd" podUID="43ed5a43-8e62-46bf-8151-7179e13730dd" Feb 27 18:13:44 crc kubenswrapper[4830]: I0227 18:13:44.780425 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jrr7w" event={"ID":"f45a4ecf-deb8-40a8-ae42-17dbc1353484","Type":"ContainerStarted","Data":"9038ca681e9b58e934275ee0fcf534c87d9257d6b5e55a83c98a4762de22154f"} Feb 27 18:13:44 crc kubenswrapper[4830]: E0227 18:13:44.784795 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536930-8nt27" podUID="451836eb-a90a-4644-ba0f-d03cd3cac130" Feb 27 18:13:46 crc kubenswrapper[4830]: I0227 18:13:46.794744 4830 generic.go:334] "Generic (PLEG): container finished" podID="f45a4ecf-deb8-40a8-ae42-17dbc1353484" containerID="9038ca681e9b58e934275ee0fcf534c87d9257d6b5e55a83c98a4762de22154f" exitCode=0 Feb 27 18:13:46 crc kubenswrapper[4830]: 
I0227 18:13:46.794845 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jrr7w" event={"ID":"f45a4ecf-deb8-40a8-ae42-17dbc1353484","Type":"ContainerDied","Data":"9038ca681e9b58e934275ee0fcf534c87d9257d6b5e55a83c98a4762de22154f"} Feb 27 18:13:47 crc kubenswrapper[4830]: E0227 18:13:47.765619 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-lqb2w" podUID="0596772a-54ae-4d9e-9db4-5d7138bae51e" Feb 27 18:13:47 crc kubenswrapper[4830]: I0227 18:13:47.808673 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jrr7w" event={"ID":"f45a4ecf-deb8-40a8-ae42-17dbc1353484","Type":"ContainerStarted","Data":"552e27fa15b20acdac1965657a99aaa06248529136c43affa07fb7f4673a4523"} Feb 27 18:13:47 crc kubenswrapper[4830]: I0227 18:13:47.842044 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-jrr7w" podStartSLOduration=2.897654595 podStartE2EDuration="43.84201842s" podCreationTimestamp="2026-02-27 18:13:04 +0000 UTC" firstStartedPulling="2026-02-27 18:13:06.30461423 +0000 UTC m=+7582.393886723" lastFinishedPulling="2026-02-27 18:13:47.248978055 +0000 UTC m=+7623.338250548" observedRunningTime="2026-02-27 18:13:47.834114311 +0000 UTC m=+7623.923386804" watchObservedRunningTime="2026-02-27 18:13:47.84201842 +0000 UTC m=+7623.931290913" Feb 27 18:13:50 crc kubenswrapper[4830]: E0227 18:13:50.767259 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"thanos-sidecar\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/thanos-rhel9@sha256:a223bab813b82d698992490bbb60927f6288a83ba52d539836c250e1471f6d34\\\"\"" 
pod="openstack/prometheus-metric-storage-0" podUID="75bcbe49-556d-4af7-9506-514c14ec8d9e" Feb 27 18:13:54 crc kubenswrapper[4830]: E0227 18:13:54.780396 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-5pkvb" podUID="4d0e4d8e-d4ab-47f9-8015-5ace0337272f" Feb 27 18:13:55 crc kubenswrapper[4830]: I0227 18:13:55.141956 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-jrr7w" Feb 27 18:13:55 crc kubenswrapper[4830]: I0227 18:13:55.142301 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-jrr7w" Feb 27 18:13:55 crc kubenswrapper[4830]: I0227 18:13:55.208986 4830 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-jrr7w" Feb 27 18:13:55 crc kubenswrapper[4830]: E0227 18:13:55.764234 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536930-8nt27" podUID="451836eb-a90a-4644-ba0f-d03cd3cac130" Feb 27 18:13:55 crc kubenswrapper[4830]: I0227 18:13:55.944562 4830 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-jrr7w" Feb 27 18:13:55 crc kubenswrapper[4830]: I0227 18:13:55.992488 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jrr7w"] Feb 27 18:13:56 crc kubenswrapper[4830]: I0227 18:13:56.426717 4830 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-x6smj_fb796dd0-1d3a-4037-a42a-7427293ea799/prometheus-operator/0.log" Feb 27 18:13:56 crc kubenswrapper[4830]: I0227 18:13:56.466531 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-66fc8b6fcc-lvddk_2e9c720f-41bf-4770-a857-835cd3bf0cbb/prometheus-operator-admission-webhook/0.log" Feb 27 18:13:56 crc kubenswrapper[4830]: I0227 18:13:56.471320 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-66fc8b6fcc-5nr6x_cfe8c971-6fe4-44ae-bea8-d3b6a17821d0/prometheus-operator-admission-webhook/0.log" Feb 27 18:13:56 crc kubenswrapper[4830]: I0227 18:13:56.636809 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-czxql_428d2446-f933-4f1d-b757-501fb5695db2/operator/0.log" Feb 27 18:13:56 crc kubenswrapper[4830]: I0227 18:13:56.736886 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-5tqdq_df7ff018-e3f5-4243-bb66-c04cfa3ff9f9/perses-operator/0.log" Feb 27 18:13:56 crc kubenswrapper[4830]: E0227 18:13:56.766461 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"alertmanager\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/alertmanager-rhel9@sha256:dc62889b883f597de91b5389cc52c84c607247d49a807693be2f688e4703dfc3\\\"\"" pod="openstack/alertmanager-metric-storage-0" podUID="8608d556-6b34-4ab2-b676-007c65e0d359" Feb 27 18:13:57 crc kubenswrapper[4830]: I0227 18:13:57.914377 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-jrr7w" podUID="f45a4ecf-deb8-40a8-ae42-17dbc1353484" containerName="registry-server" 
containerID="cri-o://552e27fa15b20acdac1965657a99aaa06248529136c43affa07fb7f4673a4523" gracePeriod=2 Feb 27 18:13:58 crc kubenswrapper[4830]: I0227 18:13:58.521853 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jrr7w" Feb 27 18:13:58 crc kubenswrapper[4830]: I0227 18:13:58.558074 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x6q8f\" (UniqueName: \"kubernetes.io/projected/f45a4ecf-deb8-40a8-ae42-17dbc1353484-kube-api-access-x6q8f\") pod \"f45a4ecf-deb8-40a8-ae42-17dbc1353484\" (UID: \"f45a4ecf-deb8-40a8-ae42-17dbc1353484\") " Feb 27 18:13:58 crc kubenswrapper[4830]: I0227 18:13:58.558165 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f45a4ecf-deb8-40a8-ae42-17dbc1353484-catalog-content\") pod \"f45a4ecf-deb8-40a8-ae42-17dbc1353484\" (UID: \"f45a4ecf-deb8-40a8-ae42-17dbc1353484\") " Feb 27 18:13:58 crc kubenswrapper[4830]: I0227 18:13:58.558372 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f45a4ecf-deb8-40a8-ae42-17dbc1353484-utilities\") pod \"f45a4ecf-deb8-40a8-ae42-17dbc1353484\" (UID: \"f45a4ecf-deb8-40a8-ae42-17dbc1353484\") " Feb 27 18:13:58 crc kubenswrapper[4830]: I0227 18:13:58.559466 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f45a4ecf-deb8-40a8-ae42-17dbc1353484-utilities" (OuterVolumeSpecName: "utilities") pod "f45a4ecf-deb8-40a8-ae42-17dbc1353484" (UID: "f45a4ecf-deb8-40a8-ae42-17dbc1353484"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 18:13:58 crc kubenswrapper[4830]: I0227 18:13:58.568097 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f45a4ecf-deb8-40a8-ae42-17dbc1353484-kube-api-access-x6q8f" (OuterVolumeSpecName: "kube-api-access-x6q8f") pod "f45a4ecf-deb8-40a8-ae42-17dbc1353484" (UID: "f45a4ecf-deb8-40a8-ae42-17dbc1353484"). InnerVolumeSpecName "kube-api-access-x6q8f". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:13:58 crc kubenswrapper[4830]: I0227 18:13:58.613323 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f45a4ecf-deb8-40a8-ae42-17dbc1353484-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f45a4ecf-deb8-40a8-ae42-17dbc1353484" (UID: "f45a4ecf-deb8-40a8-ae42-17dbc1353484"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 18:13:58 crc kubenswrapper[4830]: I0227 18:13:58.661438 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x6q8f\" (UniqueName: \"kubernetes.io/projected/f45a4ecf-deb8-40a8-ae42-17dbc1353484-kube-api-access-x6q8f\") on node \"crc\" DevicePath \"\"" Feb 27 18:13:58 crc kubenswrapper[4830]: I0227 18:13:58.661492 4830 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f45a4ecf-deb8-40a8-ae42-17dbc1353484-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 18:13:58 crc kubenswrapper[4830]: I0227 18:13:58.661516 4830 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f45a4ecf-deb8-40a8-ae42-17dbc1353484-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 18:13:58 crc kubenswrapper[4830]: E0227 18:13:58.764354 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536926-4ghnd" podUID="43ed5a43-8e62-46bf-8151-7179e13730dd" Feb 27 18:13:58 crc kubenswrapper[4830]: I0227 18:13:58.939669 4830 generic.go:334] "Generic (PLEG): container finished" podID="f45a4ecf-deb8-40a8-ae42-17dbc1353484" containerID="552e27fa15b20acdac1965657a99aaa06248529136c43affa07fb7f4673a4523" exitCode=0 Feb 27 18:13:58 crc kubenswrapper[4830]: I0227 18:13:58.939710 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jrr7w" event={"ID":"f45a4ecf-deb8-40a8-ae42-17dbc1353484","Type":"ContainerDied","Data":"552e27fa15b20acdac1965657a99aaa06248529136c43affa07fb7f4673a4523"} Feb 27 18:13:58 crc kubenswrapper[4830]: I0227 18:13:58.939737 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jrr7w" event={"ID":"f45a4ecf-deb8-40a8-ae42-17dbc1353484","Type":"ContainerDied","Data":"35efa048b3d28038d56aa42798ff8861bbc5d7e0dc3dec173758186670d50204"} Feb 27 18:13:58 crc kubenswrapper[4830]: I0227 18:13:58.939739 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jrr7w" Feb 27 18:13:58 crc kubenswrapper[4830]: I0227 18:13:58.939755 4830 scope.go:117] "RemoveContainer" containerID="552e27fa15b20acdac1965657a99aaa06248529136c43affa07fb7f4673a4523" Feb 27 18:13:58 crc kubenswrapper[4830]: I0227 18:13:58.962969 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jrr7w"] Feb 27 18:13:58 crc kubenswrapper[4830]: I0227 18:13:58.970870 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-jrr7w"] Feb 27 18:13:58 crc kubenswrapper[4830]: I0227 18:13:58.978039 4830 scope.go:117] "RemoveContainer" containerID="9038ca681e9b58e934275ee0fcf534c87d9257d6b5e55a83c98a4762de22154f" Feb 27 18:13:59 crc kubenswrapper[4830]: I0227 18:13:59.003474 4830 scope.go:117] "RemoveContainer" containerID="c4ad94a6909a667cc9399304fc2f8a3061eec059d86bc33a10f79f9466fb70ab" Feb 27 18:13:59 crc kubenswrapper[4830]: I0227 18:13:59.087210 4830 scope.go:117] "RemoveContainer" containerID="552e27fa15b20acdac1965657a99aaa06248529136c43affa07fb7f4673a4523" Feb 27 18:13:59 crc kubenswrapper[4830]: E0227 18:13:59.087736 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"552e27fa15b20acdac1965657a99aaa06248529136c43affa07fb7f4673a4523\": container with ID starting with 552e27fa15b20acdac1965657a99aaa06248529136c43affa07fb7f4673a4523 not found: ID does not exist" containerID="552e27fa15b20acdac1965657a99aaa06248529136c43affa07fb7f4673a4523" Feb 27 18:13:59 crc kubenswrapper[4830]: I0227 18:13:59.087777 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"552e27fa15b20acdac1965657a99aaa06248529136c43affa07fb7f4673a4523"} err="failed to get container status \"552e27fa15b20acdac1965657a99aaa06248529136c43affa07fb7f4673a4523\": rpc error: code = NotFound desc = could not find 
container \"552e27fa15b20acdac1965657a99aaa06248529136c43affa07fb7f4673a4523\": container with ID starting with 552e27fa15b20acdac1965657a99aaa06248529136c43affa07fb7f4673a4523 not found: ID does not exist" Feb 27 18:13:59 crc kubenswrapper[4830]: I0227 18:13:59.087802 4830 scope.go:117] "RemoveContainer" containerID="9038ca681e9b58e934275ee0fcf534c87d9257d6b5e55a83c98a4762de22154f" Feb 27 18:13:59 crc kubenswrapper[4830]: E0227 18:13:59.088210 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9038ca681e9b58e934275ee0fcf534c87d9257d6b5e55a83c98a4762de22154f\": container with ID starting with 9038ca681e9b58e934275ee0fcf534c87d9257d6b5e55a83c98a4762de22154f not found: ID does not exist" containerID="9038ca681e9b58e934275ee0fcf534c87d9257d6b5e55a83c98a4762de22154f" Feb 27 18:13:59 crc kubenswrapper[4830]: I0227 18:13:59.088242 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9038ca681e9b58e934275ee0fcf534c87d9257d6b5e55a83c98a4762de22154f"} err="failed to get container status \"9038ca681e9b58e934275ee0fcf534c87d9257d6b5e55a83c98a4762de22154f\": rpc error: code = NotFound desc = could not find container \"9038ca681e9b58e934275ee0fcf534c87d9257d6b5e55a83c98a4762de22154f\": container with ID starting with 9038ca681e9b58e934275ee0fcf534c87d9257d6b5e55a83c98a4762de22154f not found: ID does not exist" Feb 27 18:13:59 crc kubenswrapper[4830]: I0227 18:13:59.088259 4830 scope.go:117] "RemoveContainer" containerID="c4ad94a6909a667cc9399304fc2f8a3061eec059d86bc33a10f79f9466fb70ab" Feb 27 18:13:59 crc kubenswrapper[4830]: E0227 18:13:59.088545 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c4ad94a6909a667cc9399304fc2f8a3061eec059d86bc33a10f79f9466fb70ab\": container with ID starting with c4ad94a6909a667cc9399304fc2f8a3061eec059d86bc33a10f79f9466fb70ab not found: ID does 
not exist" containerID="c4ad94a6909a667cc9399304fc2f8a3061eec059d86bc33a10f79f9466fb70ab" Feb 27 18:13:59 crc kubenswrapper[4830]: I0227 18:13:59.088578 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c4ad94a6909a667cc9399304fc2f8a3061eec059d86bc33a10f79f9466fb70ab"} err="failed to get container status \"c4ad94a6909a667cc9399304fc2f8a3061eec059d86bc33a10f79f9466fb70ab\": rpc error: code = NotFound desc = could not find container \"c4ad94a6909a667cc9399304fc2f8a3061eec059d86bc33a10f79f9466fb70ab\": container with ID starting with c4ad94a6909a667cc9399304fc2f8a3061eec059d86bc33a10f79f9466fb70ab not found: ID does not exist" Feb 27 18:14:00 crc kubenswrapper[4830]: I0227 18:14:00.159109 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536934-f4df8"] Feb 27 18:14:00 crc kubenswrapper[4830]: E0227 18:14:00.160233 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f45a4ecf-deb8-40a8-ae42-17dbc1353484" containerName="registry-server" Feb 27 18:14:00 crc kubenswrapper[4830]: I0227 18:14:00.160255 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="f45a4ecf-deb8-40a8-ae42-17dbc1353484" containerName="registry-server" Feb 27 18:14:00 crc kubenswrapper[4830]: E0227 18:14:00.160283 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f45a4ecf-deb8-40a8-ae42-17dbc1353484" containerName="extract-utilities" Feb 27 18:14:00 crc kubenswrapper[4830]: I0227 18:14:00.160295 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="f45a4ecf-deb8-40a8-ae42-17dbc1353484" containerName="extract-utilities" Feb 27 18:14:00 crc kubenswrapper[4830]: E0227 18:14:00.160323 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f45a4ecf-deb8-40a8-ae42-17dbc1353484" containerName="extract-content" Feb 27 18:14:00 crc kubenswrapper[4830]: I0227 18:14:00.160336 4830 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="f45a4ecf-deb8-40a8-ae42-17dbc1353484" containerName="extract-content" Feb 27 18:14:00 crc kubenswrapper[4830]: I0227 18:14:00.160738 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="f45a4ecf-deb8-40a8-ae42-17dbc1353484" containerName="registry-server" Feb 27 18:14:00 crc kubenswrapper[4830]: I0227 18:14:00.162046 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536934-f4df8" Feb 27 18:14:00 crc kubenswrapper[4830]: I0227 18:14:00.177839 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536934-f4df8"] Feb 27 18:14:00 crc kubenswrapper[4830]: I0227 18:14:00.298516 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58df7\" (UniqueName: \"kubernetes.io/projected/3312ebad-9fb6-4efb-92a3-92c49763672e-kube-api-access-58df7\") pod \"auto-csr-approver-29536934-f4df8\" (UID: \"3312ebad-9fb6-4efb-92a3-92c49763672e\") " pod="openshift-infra/auto-csr-approver-29536934-f4df8" Feb 27 18:14:00 crc kubenswrapper[4830]: I0227 18:14:00.400706 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-58df7\" (UniqueName: \"kubernetes.io/projected/3312ebad-9fb6-4efb-92a3-92c49763672e-kube-api-access-58df7\") pod \"auto-csr-approver-29536934-f4df8\" (UID: \"3312ebad-9fb6-4efb-92a3-92c49763672e\") " pod="openshift-infra/auto-csr-approver-29536934-f4df8" Feb 27 18:14:00 crc kubenswrapper[4830]: I0227 18:14:00.428798 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-58df7\" (UniqueName: \"kubernetes.io/projected/3312ebad-9fb6-4efb-92a3-92c49763672e-kube-api-access-58df7\") pod \"auto-csr-approver-29536934-f4df8\" (UID: \"3312ebad-9fb6-4efb-92a3-92c49763672e\") " pod="openshift-infra/auto-csr-approver-29536934-f4df8" Feb 27 18:14:00 crc kubenswrapper[4830]: I0227 18:14:00.497326 4830 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536934-f4df8" Feb 27 18:14:00 crc kubenswrapper[4830]: E0227 18:14:00.777320 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-lqb2w" podUID="0596772a-54ae-4d9e-9db4-5d7138bae51e" Feb 27 18:14:00 crc kubenswrapper[4830]: I0227 18:14:00.778211 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f45a4ecf-deb8-40a8-ae42-17dbc1353484" path="/var/lib/kubelet/pods/f45a4ecf-deb8-40a8-ae42-17dbc1353484/volumes" Feb 27 18:14:01 crc kubenswrapper[4830]: I0227 18:14:01.024219 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536934-f4df8"] Feb 27 18:14:01 crc kubenswrapper[4830]: E0227 18:14:01.932260 4830 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 27 18:14:01 crc kubenswrapper[4830]: E0227 18:14:01.933039 4830 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 18:14:01 crc kubenswrapper[4830]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 27 18:14:01 crc kubenswrapper[4830]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-58df7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29536934-f4df8_openshift-infra(3312ebad-9fb6-4efb-92a3-92c49763672e): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 27 18:14:01 crc kubenswrapper[4830]: > logger="UnhandledError" Feb 27 18:14:01 crc kubenswrapper[4830]: E0227 18:14:01.934208 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29536934-f4df8" podUID="3312ebad-9fb6-4efb-92a3-92c49763672e" Feb 27 18:14:01 crc kubenswrapper[4830]: I0227 18:14:01.993831 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536934-f4df8" 
event={"ID":"3312ebad-9fb6-4efb-92a3-92c49763672e","Type":"ContainerStarted","Data":"f6e7eb8b2bcb0755c95b6a7510a3311eac4b78a87c4a9a855bf520020734784b"} Feb 27 18:14:02 crc kubenswrapper[4830]: E0227 18:14:01.998107 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536934-f4df8" podUID="3312ebad-9fb6-4efb-92a3-92c49763672e" Feb 27 18:14:03 crc kubenswrapper[4830]: E0227 18:14:03.003601 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536934-f4df8" podUID="3312ebad-9fb6-4efb-92a3-92c49763672e" Feb 27 18:14:03 crc kubenswrapper[4830]: I0227 18:14:03.160192 4830 patch_prober.go:28] interesting pod/machine-config-daemon-2tv5v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 18:14:03 crc kubenswrapper[4830]: I0227 18:14:03.160257 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 18:14:03 crc kubenswrapper[4830]: I0227 18:14:03.160307 4830 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" Feb 27 18:14:03 crc kubenswrapper[4830]: I0227 18:14:03.161308 4830 kuberuntime_manager.go:1027] "Message for Container of pod" 
containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"469f495eca7f2f6702dc34d5195646e1c220a84d4e0dd0fdedb43c726d6afe28"} pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 27 18:14:03 crc kubenswrapper[4830]: I0227 18:14:03.161366 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" containerName="machine-config-daemon" containerID="cri-o://469f495eca7f2f6702dc34d5195646e1c220a84d4e0dd0fdedb43c726d6afe28" gracePeriod=600 Feb 27 18:14:04 crc kubenswrapper[4830]: I0227 18:14:04.070214 4830 generic.go:334] "Generic (PLEG): container finished" podID="00d6b7ce-4757-4275-8345-60c1b546ce25" containerID="469f495eca7f2f6702dc34d5195646e1c220a84d4e0dd0fdedb43c726d6afe28" exitCode=0 Feb 27 18:14:04 crc kubenswrapper[4830]: I0227 18:14:04.070884 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" event={"ID":"00d6b7ce-4757-4275-8345-60c1b546ce25","Type":"ContainerDied","Data":"469f495eca7f2f6702dc34d5195646e1c220a84d4e0dd0fdedb43c726d6afe28"} Feb 27 18:14:04 crc kubenswrapper[4830]: I0227 18:14:04.070915 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" event={"ID":"00d6b7ce-4757-4275-8345-60c1b546ce25","Type":"ContainerStarted","Data":"a8b39d391d0b62f855059e989b55160b60188dcf89720fb39518165931511a21"} Feb 27 18:14:04 crc kubenswrapper[4830]: I0227 18:14:04.070931 4830 scope.go:117] "RemoveContainer" containerID="81ef18a8ceffa5c1cfa26cc002b47333287098e5e6cdb33647e44f342d553fc2" Feb 27 18:14:04 crc kubenswrapper[4830]: E0227 18:14:04.779442 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"thanos-sidecar\" with 
ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/thanos-rhel9@sha256:a223bab813b82d698992490bbb60927f6288a83ba52d539836c250e1471f6d34\\\"\"" pod="openstack/prometheus-metric-storage-0" podUID="75bcbe49-556d-4af7-9506-514c14ec8d9e" Feb 27 18:14:09 crc kubenswrapper[4830]: E0227 18:14:09.764751 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-5pkvb" podUID="4d0e4d8e-d4ab-47f9-8015-5ace0337272f" Feb 27 18:14:09 crc kubenswrapper[4830]: E0227 18:14:09.764988 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536930-8nt27" podUID="451836eb-a90a-4644-ba0f-d03cd3cac130" Feb 27 18:14:10 crc kubenswrapper[4830]: E0227 18:14:10.764238 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536926-4ghnd" podUID="43ed5a43-8e62-46bf-8151-7179e13730dd" Feb 27 18:14:10 crc kubenswrapper[4830]: E0227 18:14:10.764877 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"alertmanager\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/alertmanager-rhel9@sha256:dc62889b883f597de91b5389cc52c84c607247d49a807693be2f688e4703dfc3\\\"\"" pod="openstack/alertmanager-metric-storage-0" podUID="8608d556-6b34-4ab2-b676-007c65e0d359" Feb 27 18:14:13 crc kubenswrapper[4830]: E0227 18:14:13.767042 4830 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-lqb2w" podUID="0596772a-54ae-4d9e-9db4-5d7138bae51e" Feb 27 18:14:17 crc kubenswrapper[4830]: E0227 18:14:17.765663 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"thanos-sidecar\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/thanos-rhel9@sha256:a223bab813b82d698992490bbb60927f6288a83ba52d539836c250e1471f6d34\\\"\"" pod="openstack/prometheus-metric-storage-0" podUID="75bcbe49-556d-4af7-9506-514c14ec8d9e" Feb 27 18:14:18 crc kubenswrapper[4830]: E0227 18:14:18.678333 4830 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 27 18:14:18 crc kubenswrapper[4830]: E0227 18:14:18.678921 4830 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 18:14:18 crc kubenswrapper[4830]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 27 18:14:18 crc kubenswrapper[4830]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-58df7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29536934-f4df8_openshift-infra(3312ebad-9fb6-4efb-92a3-92c49763672e): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 27 18:14:18 crc kubenswrapper[4830]: > logger="UnhandledError" Feb 27 18:14:18 crc kubenswrapper[4830]: E0227 18:14:18.680580 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29536934-f4df8" podUID="3312ebad-9fb6-4efb-92a3-92c49763672e" Feb 27 18:14:21 crc kubenswrapper[4830]: E0227 18:14:21.764904 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-5pkvb" podUID="4d0e4d8e-d4ab-47f9-8015-5ace0337272f" Feb 27 18:14:22 crc kubenswrapper[4830]: E0227 18:14:22.763793 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"alertmanager\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/alertmanager-rhel9@sha256:dc62889b883f597de91b5389cc52c84c607247d49a807693be2f688e4703dfc3\\\"\"" pod="openstack/alertmanager-metric-storage-0" podUID="8608d556-6b34-4ab2-b676-007c65e0d359" Feb 27 18:14:23 crc kubenswrapper[4830]: E0227 18:14:23.764935 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536930-8nt27" podUID="451836eb-a90a-4644-ba0f-d03cd3cac130" Feb 27 18:14:25 crc kubenswrapper[4830]: E0227 18:14:25.766672 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536926-4ghnd" podUID="43ed5a43-8e62-46bf-8151-7179e13730dd" Feb 27 18:14:27 crc kubenswrapper[4830]: E0227 18:14:27.765285 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-lqb2w" podUID="0596772a-54ae-4d9e-9db4-5d7138bae51e" Feb 27 18:14:28 crc kubenswrapper[4830]: E0227 18:14:28.768692 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"thanos-sidecar\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/cluster-observability-operator/thanos-rhel9@sha256:a223bab813b82d698992490bbb60927f6288a83ba52d539836c250e1471f6d34\\\"\"" pod="openstack/prometheus-metric-storage-0" podUID="75bcbe49-556d-4af7-9506-514c14ec8d9e" Feb 27 18:14:29 crc kubenswrapper[4830]: E0227 18:14:29.766214 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536934-f4df8" podUID="3312ebad-9fb6-4efb-92a3-92c49763672e" Feb 27 18:14:34 crc kubenswrapper[4830]: E0227 18:14:34.778510 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"alertmanager\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/alertmanager-rhel9@sha256:dc62889b883f597de91b5389cc52c84c607247d49a807693be2f688e4703dfc3\\\"\"" pod="openstack/alertmanager-metric-storage-0" podUID="8608d556-6b34-4ab2-b676-007c65e0d359" Feb 27 18:14:35 crc kubenswrapper[4830]: E0227 18:14:35.570655 4830 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=886ecdbcb5b8f90338063f6476072fab73c2a9a65b9f2b3835b7bd01c69794c1/signature-2: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 27 18:14:35 crc kubenswrapper[4830]: E0227 18:14:35.571423 4830 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-74bq7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-5pkvb_openshift-marketplace(4d0e4d8e-d4ab-47f9-8015-5ace0337272f): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=886ecdbcb5b8f90338063f6476072fab73c2a9a65b9f2b3835b7bd01c69794c1/signature-2: status 500 (Internal Server Error)" logger="UnhandledError" Feb 27 18:14:35 crc kubenswrapper[4830]: E0227 18:14:35.572715 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: 
reading signature from https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=886ecdbcb5b8f90338063f6476072fab73c2a9a65b9f2b3835b7bd01c69794c1/signature-2: status 500 (Internal Server Error)\"" pod="openshift-marketplace/community-operators-5pkvb" podUID="4d0e4d8e-d4ab-47f9-8015-5ace0337272f" Feb 27 18:14:35 crc kubenswrapper[4830]: E0227 18:14:35.766774 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536930-8nt27" podUID="451836eb-a90a-4644-ba0f-d03cd3cac130" Feb 27 18:14:36 crc kubenswrapper[4830]: E0227 18:14:36.767072 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536926-4ghnd" podUID="43ed5a43-8e62-46bf-8151-7179e13730dd" Feb 27 18:14:40 crc kubenswrapper[4830]: E0227 18:14:40.766540 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-lqb2w" podUID="0596772a-54ae-4d9e-9db4-5d7138bae51e" Feb 27 18:14:41 crc kubenswrapper[4830]: E0227 18:14:41.765679 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"thanos-sidecar\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/thanos-rhel9@sha256:a223bab813b82d698992490bbb60927f6288a83ba52d539836c250e1471f6d34\\\"\"" pod="openstack/prometheus-metric-storage-0" podUID="75bcbe49-556d-4af7-9506-514c14ec8d9e" Feb 27 18:14:45 crc kubenswrapper[4830]: E0227 18:14:45.873532 4830 log.go:32] 
"PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 27 18:14:45 crc kubenswrapper[4830]: E0227 18:14:45.874149 4830 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 18:14:45 crc kubenswrapper[4830]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 27 18:14:45 crc kubenswrapper[4830]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-58df7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29536934-f4df8_openshift-infra(3312ebad-9fb6-4efb-92a3-92c49763672e): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 27 18:14:45 crc 
kubenswrapper[4830]: > logger="UnhandledError" Feb 27 18:14:45 crc kubenswrapper[4830]: E0227 18:14:45.875317 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29536934-f4df8" podUID="3312ebad-9fb6-4efb-92a3-92c49763672e" Feb 27 18:14:49 crc kubenswrapper[4830]: E0227 18:14:49.768410 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-5pkvb" podUID="4d0e4d8e-d4ab-47f9-8015-5ace0337272f" Feb 27 18:14:49 crc kubenswrapper[4830]: E0227 18:14:49.769401 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"alertmanager\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/alertmanager-rhel9@sha256:dc62889b883f597de91b5389cc52c84c607247d49a807693be2f688e4703dfc3\\\"\"" pod="openstack/alertmanager-metric-storage-0" podUID="8608d556-6b34-4ab2-b676-007c65e0d359" Feb 27 18:14:49 crc kubenswrapper[4830]: E0227 18:14:49.769798 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536926-4ghnd" podUID="43ed5a43-8e62-46bf-8151-7179e13730dd" Feb 27 18:14:50 crc kubenswrapper[4830]: E0227 18:14:50.765522 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" 
with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536930-8nt27" podUID="451836eb-a90a-4644-ba0f-d03cd3cac130" Feb 27 18:14:54 crc kubenswrapper[4830]: E0227 18:14:54.785788 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"thanos-sidecar\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/thanos-rhel9@sha256:a223bab813b82d698992490bbb60927f6288a83ba52d539836c250e1471f6d34\\\"\"" pod="openstack/prometheus-metric-storage-0" podUID="75bcbe49-556d-4af7-9506-514c14ec8d9e" Feb 27 18:14:54 crc kubenswrapper[4830]: I0227 18:14:54.786166 4830 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 27 18:14:55 crc kubenswrapper[4830]: E0227 18:14:55.559376 4830 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-marketplace-index@sha256=e848a00af7690cfa41500b98e0e7a0b9738ce0af7b6b4fee3ea20e0838523c30/signature-2: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 27 18:14:55 crc kubenswrapper[4830]: E0227 18:14:55.559587 4830 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8jztz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-lqb2w_openshift-marketplace(0596772a-54ae-4d9e-9db4-5d7138bae51e): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-marketplace-index@sha256=e848a00af7690cfa41500b98e0e7a0b9738ce0af7b6b4fee3ea20e0838523c30/signature-2: status 500 (Internal Server Error)" logger="UnhandledError" Feb 27 18:14:55 crc kubenswrapper[4830]: E0227 18:14:55.561076 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: 
reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-marketplace-index@sha256=e848a00af7690cfa41500b98e0e7a0b9738ce0af7b6b4fee3ea20e0838523c30/signature-2: status 500 (Internal Server Error)\"" pod="openshift-marketplace/redhat-marketplace-lqb2w" podUID="0596772a-54ae-4d9e-9db4-5d7138bae51e" Feb 27 18:14:57 crc kubenswrapper[4830]: E0227 18:14:57.766644 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536934-f4df8" podUID="3312ebad-9fb6-4efb-92a3-92c49763672e" Feb 27 18:15:00 crc kubenswrapper[4830]: I0227 18:15:00.176468 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536935-rbfh4"] Feb 27 18:15:00 crc kubenswrapper[4830]: I0227 18:15:00.179461 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29536935-rbfh4" Feb 27 18:15:00 crc kubenswrapper[4830]: I0227 18:15:00.182029 4830 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 27 18:15:00 crc kubenswrapper[4830]: I0227 18:15:00.186365 4830 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 27 18:15:00 crc kubenswrapper[4830]: I0227 18:15:00.197989 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536935-rbfh4"] Feb 27 18:15:00 crc kubenswrapper[4830]: I0227 18:15:00.216107 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d77a6342-1dfe-4c36-ad60-e00893ecc3a6-secret-volume\") pod 
\"collect-profiles-29536935-rbfh4\" (UID: \"d77a6342-1dfe-4c36-ad60-e00893ecc3a6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536935-rbfh4" Feb 27 18:15:00 crc kubenswrapper[4830]: I0227 18:15:00.216504 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d77a6342-1dfe-4c36-ad60-e00893ecc3a6-config-volume\") pod \"collect-profiles-29536935-rbfh4\" (UID: \"d77a6342-1dfe-4c36-ad60-e00893ecc3a6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536935-rbfh4" Feb 27 18:15:00 crc kubenswrapper[4830]: I0227 18:15:00.216539 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fl7k4\" (UniqueName: \"kubernetes.io/projected/d77a6342-1dfe-4c36-ad60-e00893ecc3a6-kube-api-access-fl7k4\") pod \"collect-profiles-29536935-rbfh4\" (UID: \"d77a6342-1dfe-4c36-ad60-e00893ecc3a6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536935-rbfh4" Feb 27 18:15:00 crc kubenswrapper[4830]: I0227 18:15:00.318321 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d77a6342-1dfe-4c36-ad60-e00893ecc3a6-config-volume\") pod \"collect-profiles-29536935-rbfh4\" (UID: \"d77a6342-1dfe-4c36-ad60-e00893ecc3a6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536935-rbfh4" Feb 27 18:15:00 crc kubenswrapper[4830]: I0227 18:15:00.318378 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fl7k4\" (UniqueName: \"kubernetes.io/projected/d77a6342-1dfe-4c36-ad60-e00893ecc3a6-kube-api-access-fl7k4\") pod \"collect-profiles-29536935-rbfh4\" (UID: \"d77a6342-1dfe-4c36-ad60-e00893ecc3a6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536935-rbfh4" Feb 27 18:15:00 crc kubenswrapper[4830]: I0227 18:15:00.318559 4830 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d77a6342-1dfe-4c36-ad60-e00893ecc3a6-secret-volume\") pod \"collect-profiles-29536935-rbfh4\" (UID: \"d77a6342-1dfe-4c36-ad60-e00893ecc3a6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536935-rbfh4" Feb 27 18:15:00 crc kubenswrapper[4830]: I0227 18:15:00.320301 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d77a6342-1dfe-4c36-ad60-e00893ecc3a6-config-volume\") pod \"collect-profiles-29536935-rbfh4\" (UID: \"d77a6342-1dfe-4c36-ad60-e00893ecc3a6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536935-rbfh4" Feb 27 18:15:00 crc kubenswrapper[4830]: I0227 18:15:00.327027 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d77a6342-1dfe-4c36-ad60-e00893ecc3a6-secret-volume\") pod \"collect-profiles-29536935-rbfh4\" (UID: \"d77a6342-1dfe-4c36-ad60-e00893ecc3a6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536935-rbfh4" Feb 27 18:15:00 crc kubenswrapper[4830]: I0227 18:15:00.340162 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fl7k4\" (UniqueName: \"kubernetes.io/projected/d77a6342-1dfe-4c36-ad60-e00893ecc3a6-kube-api-access-fl7k4\") pod \"collect-profiles-29536935-rbfh4\" (UID: \"d77a6342-1dfe-4c36-ad60-e00893ecc3a6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536935-rbfh4" Feb 27 18:15:00 crc kubenswrapper[4830]: I0227 18:15:00.517363 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29536935-rbfh4" Feb 27 18:15:00 crc kubenswrapper[4830]: E0227 18:15:00.768198 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536926-4ghnd" podUID="43ed5a43-8e62-46bf-8151-7179e13730dd" Feb 27 18:15:01 crc kubenswrapper[4830]: I0227 18:15:01.085187 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536935-rbfh4"] Feb 27 18:15:01 crc kubenswrapper[4830]: W0227 18:15:01.095497 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd77a6342_1dfe_4c36_ad60_e00893ecc3a6.slice/crio-0212a81256f8346d2b386651ba0479eb59035e3a17b1f756fa89e06153bdc3d1 WatchSource:0}: Error finding container 0212a81256f8346d2b386651ba0479eb59035e3a17b1f756fa89e06153bdc3d1: Status 404 returned error can't find the container with id 0212a81256f8346d2b386651ba0479eb59035e3a17b1f756fa89e06153bdc3d1 Feb 27 18:15:01 crc kubenswrapper[4830]: E0227 18:15:01.765057 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536930-8nt27" podUID="451836eb-a90a-4644-ba0f-d03cd3cac130" Feb 27 18:15:01 crc kubenswrapper[4830]: E0227 18:15:01.767741 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-5pkvb" podUID="4d0e4d8e-d4ab-47f9-8015-5ace0337272f" Feb 27 18:15:01 crc 
kubenswrapper[4830]: I0227 18:15:01.825936 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29536935-rbfh4" event={"ID":"d77a6342-1dfe-4c36-ad60-e00893ecc3a6","Type":"ContainerStarted","Data":"9f262da5527540e6b17251690af8e5c93ce67cf24c3623fb22e34f960fb2ca30"} Feb 27 18:15:01 crc kubenswrapper[4830]: I0227 18:15:01.826021 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29536935-rbfh4" event={"ID":"d77a6342-1dfe-4c36-ad60-e00893ecc3a6","Type":"ContainerStarted","Data":"0212a81256f8346d2b386651ba0479eb59035e3a17b1f756fa89e06153bdc3d1"} Feb 27 18:15:01 crc kubenswrapper[4830]: I0227 18:15:01.854436 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29536935-rbfh4" podStartSLOduration=1.8544150259999999 podStartE2EDuration="1.854415026s" podCreationTimestamp="2026-02-27 18:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 18:15:01.852629333 +0000 UTC m=+7697.941901796" watchObservedRunningTime="2026-02-27 18:15:01.854415026 +0000 UTC m=+7697.943687499" Feb 27 18:15:02 crc kubenswrapper[4830]: I0227 18:15:02.843037 4830 generic.go:334] "Generic (PLEG): container finished" podID="d77a6342-1dfe-4c36-ad60-e00893ecc3a6" containerID="9f262da5527540e6b17251690af8e5c93ce67cf24c3623fb22e34f960fb2ca30" exitCode=0 Feb 27 18:15:02 crc kubenswrapper[4830]: I0227 18:15:02.843414 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29536935-rbfh4" event={"ID":"d77a6342-1dfe-4c36-ad60-e00893ecc3a6","Type":"ContainerDied","Data":"9f262da5527540e6b17251690af8e5c93ce67cf24c3623fb22e34f960fb2ca30"} Feb 27 18:15:04 crc kubenswrapper[4830]: I0227 18:15:04.341237 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29536935-rbfh4" Feb 27 18:15:04 crc kubenswrapper[4830]: I0227 18:15:04.515784 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d77a6342-1dfe-4c36-ad60-e00893ecc3a6-config-volume\") pod \"d77a6342-1dfe-4c36-ad60-e00893ecc3a6\" (UID: \"d77a6342-1dfe-4c36-ad60-e00893ecc3a6\") " Feb 27 18:15:04 crc kubenswrapper[4830]: I0227 18:15:04.515999 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d77a6342-1dfe-4c36-ad60-e00893ecc3a6-secret-volume\") pod \"d77a6342-1dfe-4c36-ad60-e00893ecc3a6\" (UID: \"d77a6342-1dfe-4c36-ad60-e00893ecc3a6\") " Feb 27 18:15:04 crc kubenswrapper[4830]: I0227 18:15:04.516177 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fl7k4\" (UniqueName: \"kubernetes.io/projected/d77a6342-1dfe-4c36-ad60-e00893ecc3a6-kube-api-access-fl7k4\") pod \"d77a6342-1dfe-4c36-ad60-e00893ecc3a6\" (UID: \"d77a6342-1dfe-4c36-ad60-e00893ecc3a6\") " Feb 27 18:15:04 crc kubenswrapper[4830]: I0227 18:15:04.517344 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d77a6342-1dfe-4c36-ad60-e00893ecc3a6-config-volume" (OuterVolumeSpecName: "config-volume") pod "d77a6342-1dfe-4c36-ad60-e00893ecc3a6" (UID: "d77a6342-1dfe-4c36-ad60-e00893ecc3a6"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 18:15:04 crc kubenswrapper[4830]: I0227 18:15:04.526341 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d77a6342-1dfe-4c36-ad60-e00893ecc3a6-kube-api-access-fl7k4" (OuterVolumeSpecName: "kube-api-access-fl7k4") pod "d77a6342-1dfe-4c36-ad60-e00893ecc3a6" (UID: "d77a6342-1dfe-4c36-ad60-e00893ecc3a6"). 
InnerVolumeSpecName "kube-api-access-fl7k4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:15:04 crc kubenswrapper[4830]: I0227 18:15:04.539684 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d77a6342-1dfe-4c36-ad60-e00893ecc3a6-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "d77a6342-1dfe-4c36-ad60-e00893ecc3a6" (UID: "d77a6342-1dfe-4c36-ad60-e00893ecc3a6"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 18:15:04 crc kubenswrapper[4830]: I0227 18:15:04.620198 4830 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d77a6342-1dfe-4c36-ad60-e00893ecc3a6-config-volume\") on node \"crc\" DevicePath \"\"" Feb 27 18:15:04 crc kubenswrapper[4830]: I0227 18:15:04.620257 4830 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d77a6342-1dfe-4c36-ad60-e00893ecc3a6-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 27 18:15:04 crc kubenswrapper[4830]: I0227 18:15:04.620279 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fl7k4\" (UniqueName: \"kubernetes.io/projected/d77a6342-1dfe-4c36-ad60-e00893ecc3a6-kube-api-access-fl7k4\") on node \"crc\" DevicePath \"\"" Feb 27 18:15:04 crc kubenswrapper[4830]: I0227 18:15:04.874840 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29536935-rbfh4" event={"ID":"d77a6342-1dfe-4c36-ad60-e00893ecc3a6","Type":"ContainerDied","Data":"0212a81256f8346d2b386651ba0479eb59035e3a17b1f756fa89e06153bdc3d1"} Feb 27 18:15:04 crc kubenswrapper[4830]: I0227 18:15:04.874888 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0212a81256f8346d2b386651ba0479eb59035e3a17b1f756fa89e06153bdc3d1" Feb 27 18:15:04 crc kubenswrapper[4830]: I0227 18:15:04.874927 4830 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29536935-rbfh4" Feb 27 18:15:04 crc kubenswrapper[4830]: I0227 18:15:04.973630 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536890-t6hdx"] Feb 27 18:15:04 crc kubenswrapper[4830]: I0227 18:15:04.984283 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536890-t6hdx"] Feb 27 18:15:06 crc kubenswrapper[4830]: E0227 18:15:06.768173 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-lqb2w" podUID="0596772a-54ae-4d9e-9db4-5d7138bae51e" Feb 27 18:15:06 crc kubenswrapper[4830]: I0227 18:15:06.800409 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28ac5da8-f4b7-4e4e-ad46-7ff9ef376eaa" path="/var/lib/kubelet/pods/28ac5da8-f4b7-4e4e-ad46-7ff9ef376eaa/volumes" Feb 27 18:15:08 crc kubenswrapper[4830]: E0227 18:15:08.766202 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536934-f4df8" podUID="3312ebad-9fb6-4efb-92a3-92c49763672e" Feb 27 18:15:09 crc kubenswrapper[4830]: E0227 18:15:09.767203 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"thanos-sidecar\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/thanos-rhel9@sha256:a223bab813b82d698992490bbb60927f6288a83ba52d539836c250e1471f6d34\\\"\"" pod="openstack/prometheus-metric-storage-0" podUID="75bcbe49-556d-4af7-9506-514c14ec8d9e" Feb 27 
18:15:12 crc kubenswrapper[4830]: E0227 18:15:12.766146 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-5pkvb" podUID="4d0e4d8e-d4ab-47f9-8015-5ace0337272f" Feb 27 18:15:13 crc kubenswrapper[4830]: E0227 18:15:13.764680 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536926-4ghnd" podUID="43ed5a43-8e62-46bf-8151-7179e13730dd" Feb 27 18:15:14 crc kubenswrapper[4830]: E0227 18:15:14.783345 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536930-8nt27" podUID="451836eb-a90a-4644-ba0f-d03cd3cac130" Feb 27 18:15:19 crc kubenswrapper[4830]: E0227 18:15:19.767383 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536934-f4df8" podUID="3312ebad-9fb6-4efb-92a3-92c49763672e" Feb 27 18:15:19 crc kubenswrapper[4830]: E0227 18:15:19.767587 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-lqb2w" podUID="0596772a-54ae-4d9e-9db4-5d7138bae51e" Feb 27 18:15:22 crc kubenswrapper[4830]: E0227 18:15:22.766887 4830 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"thanos-sidecar\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/thanos-rhel9@sha256:a223bab813b82d698992490bbb60927f6288a83ba52d539836c250e1471f6d34\\\"\"" pod="openstack/prometheus-metric-storage-0" podUID="75bcbe49-556d-4af7-9506-514c14ec8d9e" Feb 27 18:15:25 crc kubenswrapper[4830]: I0227 18:15:25.171734 4830 generic.go:334] "Generic (PLEG): container finished" podID="bae5dd66-7f10-45ef-b8ae-82e2d6dfa0aa" containerID="a2ee0c3bb13fa1093c47e6e4687d01afaa5fab4c4ee68a25f941ec2eb8b66a89" exitCode=0 Feb 27 18:15:25 crc kubenswrapper[4830]: I0227 18:15:25.172348 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pfkj9/must-gather-4nm2j" event={"ID":"bae5dd66-7f10-45ef-b8ae-82e2d6dfa0aa","Type":"ContainerDied","Data":"a2ee0c3bb13fa1093c47e6e4687d01afaa5fab4c4ee68a25f941ec2eb8b66a89"} Feb 27 18:15:25 crc kubenswrapper[4830]: I0227 18:15:25.173428 4830 scope.go:117] "RemoveContainer" containerID="a2ee0c3bb13fa1093c47e6e4687d01afaa5fab4c4ee68a25f941ec2eb8b66a89" Feb 27 18:15:25 crc kubenswrapper[4830]: I0227 18:15:25.337837 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-pfkj9_must-gather-4nm2j_bae5dd66-7f10-45ef-b8ae-82e2d6dfa0aa/gather/0.log" Feb 27 18:15:25 crc kubenswrapper[4830]: E0227 18:15:25.764087 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536926-4ghnd" podUID="43ed5a43-8e62-46bf-8151-7179e13730dd" Feb 27 18:15:27 crc kubenswrapper[4830]: E0227 18:15:27.765045 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" 
pod="openshift-marketplace/community-operators-5pkvb" podUID="4d0e4d8e-d4ab-47f9-8015-5ace0337272f" Feb 27 18:15:29 crc kubenswrapper[4830]: E0227 18:15:29.765086 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536930-8nt27" podUID="451836eb-a90a-4644-ba0f-d03cd3cac130" Feb 27 18:15:30 crc kubenswrapper[4830]: E0227 18:15:30.766436 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-lqb2w" podUID="0596772a-54ae-4d9e-9db4-5d7138bae51e" Feb 27 18:15:31 crc kubenswrapper[4830]: E0227 18:15:31.691662 4830 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 27 18:15:31 crc kubenswrapper[4830]: E0227 18:15:31.691905 4830 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 18:15:31 crc kubenswrapper[4830]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 27 18:15:31 crc kubenswrapper[4830]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-58df7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29536934-f4df8_openshift-infra(3312ebad-9fb6-4efb-92a3-92c49763672e): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 27 18:15:31 crc kubenswrapper[4830]: > logger="UnhandledError" Feb 27 18:15:31 crc kubenswrapper[4830]: E0227 18:15:31.693535 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29536934-f4df8" podUID="3312ebad-9fb6-4efb-92a3-92c49763672e" Feb 27 18:15:33 crc kubenswrapper[4830]: I0227 18:15:33.531012 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-pfkj9/must-gather-4nm2j"] Feb 27 18:15:33 crc kubenswrapper[4830]: I0227 18:15:33.531987 4830 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openshift-must-gather-pfkj9/must-gather-4nm2j" podUID="bae5dd66-7f10-45ef-b8ae-82e2d6dfa0aa" containerName="copy" containerID="cri-o://0117ca02adf9dc2b8df32010488e9f0524a4dfbb77de9f70a72690e3a80ee023" gracePeriod=2 Feb 27 18:15:33 crc kubenswrapper[4830]: I0227 18:15:33.545694 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-pfkj9/must-gather-4nm2j"] Feb 27 18:15:33 crc kubenswrapper[4830]: E0227 18:15:33.765393 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"thanos-sidecar\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/thanos-rhel9@sha256:a223bab813b82d698992490bbb60927f6288a83ba52d539836c250e1471f6d34\\\"\"" pod="openstack/prometheus-metric-storage-0" podUID="75bcbe49-556d-4af7-9506-514c14ec8d9e" Feb 27 18:15:34 crc kubenswrapper[4830]: I0227 18:15:34.066251 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-pfkj9_must-gather-4nm2j_bae5dd66-7f10-45ef-b8ae-82e2d6dfa0aa/copy/0.log" Feb 27 18:15:34 crc kubenswrapper[4830]: I0227 18:15:34.067622 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-pfkj9/must-gather-4nm2j" Feb 27 18:15:34 crc kubenswrapper[4830]: I0227 18:15:34.248502 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/bae5dd66-7f10-45ef-b8ae-82e2d6dfa0aa-must-gather-output\") pod \"bae5dd66-7f10-45ef-b8ae-82e2d6dfa0aa\" (UID: \"bae5dd66-7f10-45ef-b8ae-82e2d6dfa0aa\") " Feb 27 18:15:34 crc kubenswrapper[4830]: I0227 18:15:34.248694 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5xwgd\" (UniqueName: \"kubernetes.io/projected/bae5dd66-7f10-45ef-b8ae-82e2d6dfa0aa-kube-api-access-5xwgd\") pod \"bae5dd66-7f10-45ef-b8ae-82e2d6dfa0aa\" (UID: \"bae5dd66-7f10-45ef-b8ae-82e2d6dfa0aa\") " Feb 27 18:15:34 crc kubenswrapper[4830]: I0227 18:15:34.255042 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bae5dd66-7f10-45ef-b8ae-82e2d6dfa0aa-kube-api-access-5xwgd" (OuterVolumeSpecName: "kube-api-access-5xwgd") pod "bae5dd66-7f10-45ef-b8ae-82e2d6dfa0aa" (UID: "bae5dd66-7f10-45ef-b8ae-82e2d6dfa0aa"). InnerVolumeSpecName "kube-api-access-5xwgd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:15:34 crc kubenswrapper[4830]: I0227 18:15:34.264296 4830 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-pfkj9_must-gather-4nm2j_bae5dd66-7f10-45ef-b8ae-82e2d6dfa0aa/copy/0.log" Feb 27 18:15:34 crc kubenswrapper[4830]: I0227 18:15:34.265287 4830 generic.go:334] "Generic (PLEG): container finished" podID="bae5dd66-7f10-45ef-b8ae-82e2d6dfa0aa" containerID="0117ca02adf9dc2b8df32010488e9f0524a4dfbb77de9f70a72690e3a80ee023" exitCode=143 Feb 27 18:15:34 crc kubenswrapper[4830]: I0227 18:15:34.265355 4830 scope.go:117] "RemoveContainer" containerID="0117ca02adf9dc2b8df32010488e9f0524a4dfbb77de9f70a72690e3a80ee023" Feb 27 18:15:34 crc kubenswrapper[4830]: I0227 18:15:34.265405 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-pfkj9/must-gather-4nm2j" Feb 27 18:15:34 crc kubenswrapper[4830]: I0227 18:15:34.332061 4830 scope.go:117] "RemoveContainer" containerID="a2ee0c3bb13fa1093c47e6e4687d01afaa5fab4c4ee68a25f941ec2eb8b66a89" Feb 27 18:15:34 crc kubenswrapper[4830]: I0227 18:15:34.351933 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5xwgd\" (UniqueName: \"kubernetes.io/projected/bae5dd66-7f10-45ef-b8ae-82e2d6dfa0aa-kube-api-access-5xwgd\") on node \"crc\" DevicePath \"\"" Feb 27 18:15:34 crc kubenswrapper[4830]: I0227 18:15:34.411773 4830 scope.go:117] "RemoveContainer" containerID="0117ca02adf9dc2b8df32010488e9f0524a4dfbb77de9f70a72690e3a80ee023" Feb 27 18:15:34 crc kubenswrapper[4830]: E0227 18:15:34.416170 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0117ca02adf9dc2b8df32010488e9f0524a4dfbb77de9f70a72690e3a80ee023\": container with ID starting with 0117ca02adf9dc2b8df32010488e9f0524a4dfbb77de9f70a72690e3a80ee023 not found: ID does not exist" 
containerID="0117ca02adf9dc2b8df32010488e9f0524a4dfbb77de9f70a72690e3a80ee023" Feb 27 18:15:34 crc kubenswrapper[4830]: I0227 18:15:34.416227 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0117ca02adf9dc2b8df32010488e9f0524a4dfbb77de9f70a72690e3a80ee023"} err="failed to get container status \"0117ca02adf9dc2b8df32010488e9f0524a4dfbb77de9f70a72690e3a80ee023\": rpc error: code = NotFound desc = could not find container \"0117ca02adf9dc2b8df32010488e9f0524a4dfbb77de9f70a72690e3a80ee023\": container with ID starting with 0117ca02adf9dc2b8df32010488e9f0524a4dfbb77de9f70a72690e3a80ee023 not found: ID does not exist" Feb 27 18:15:34 crc kubenswrapper[4830]: I0227 18:15:34.416256 4830 scope.go:117] "RemoveContainer" containerID="a2ee0c3bb13fa1093c47e6e4687d01afaa5fab4c4ee68a25f941ec2eb8b66a89" Feb 27 18:15:34 crc kubenswrapper[4830]: E0227 18:15:34.416667 4830 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a2ee0c3bb13fa1093c47e6e4687d01afaa5fab4c4ee68a25f941ec2eb8b66a89\": container with ID starting with a2ee0c3bb13fa1093c47e6e4687d01afaa5fab4c4ee68a25f941ec2eb8b66a89 not found: ID does not exist" containerID="a2ee0c3bb13fa1093c47e6e4687d01afaa5fab4c4ee68a25f941ec2eb8b66a89" Feb 27 18:15:34 crc kubenswrapper[4830]: I0227 18:15:34.416697 4830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2ee0c3bb13fa1093c47e6e4687d01afaa5fab4c4ee68a25f941ec2eb8b66a89"} err="failed to get container status \"a2ee0c3bb13fa1093c47e6e4687d01afaa5fab4c4ee68a25f941ec2eb8b66a89\": rpc error: code = NotFound desc = could not find container \"a2ee0c3bb13fa1093c47e6e4687d01afaa5fab4c4ee68a25f941ec2eb8b66a89\": container with ID starting with a2ee0c3bb13fa1093c47e6e4687d01afaa5fab4c4ee68a25f941ec2eb8b66a89 not found: ID does not exist" Feb 27 18:15:34 crc kubenswrapper[4830]: I0227 18:15:34.454583 4830 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bae5dd66-7f10-45ef-b8ae-82e2d6dfa0aa-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "bae5dd66-7f10-45ef-b8ae-82e2d6dfa0aa" (UID: "bae5dd66-7f10-45ef-b8ae-82e2d6dfa0aa"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 18:15:34 crc kubenswrapper[4830]: I0227 18:15:34.555940 4830 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/bae5dd66-7f10-45ef-b8ae-82e2d6dfa0aa-must-gather-output\") on node \"crc\" DevicePath \"\"" Feb 27 18:15:34 crc kubenswrapper[4830]: I0227 18:15:34.790796 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bae5dd66-7f10-45ef-b8ae-82e2d6dfa0aa" path="/var/lib/kubelet/pods/bae5dd66-7f10-45ef-b8ae-82e2d6dfa0aa/volumes" Feb 27 18:15:38 crc kubenswrapper[4830]: E0227 18:15:38.767178 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536926-4ghnd" podUID="43ed5a43-8e62-46bf-8151-7179e13730dd" Feb 27 18:15:42 crc kubenswrapper[4830]: E0227 18:15:42.766815 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-5pkvb" podUID="4d0e4d8e-d4ab-47f9-8015-5ace0337272f" Feb 27 18:15:44 crc kubenswrapper[4830]: E0227 18:15:44.783281 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536930-8nt27" 
podUID="451836eb-a90a-4644-ba0f-d03cd3cac130" Feb 27 18:15:44 crc kubenswrapper[4830]: E0227 18:15:44.783369 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-lqb2w" podUID="0596772a-54ae-4d9e-9db4-5d7138bae51e" Feb 27 18:15:46 crc kubenswrapper[4830]: E0227 18:15:46.786199 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536934-f4df8" podUID="3312ebad-9fb6-4efb-92a3-92c49763672e" Feb 27 18:15:50 crc kubenswrapper[4830]: E0227 18:15:50.767741 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536926-4ghnd" podUID="43ed5a43-8e62-46bf-8151-7179e13730dd" Feb 27 18:15:55 crc kubenswrapper[4830]: E0227 18:15:55.766782 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-5pkvb" podUID="4d0e4d8e-d4ab-47f9-8015-5ace0337272f" Feb 27 18:15:58 crc kubenswrapper[4830]: E0227 18:15:58.218758 4830 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/cluster-observability-operator/alertmanager-rhel9@sha256=b23d4d4796437a2d93a9ad40b24c1130bcf4315029983aa275426d01d2955388/signature-4: status 
500 (Internal Server Error)" image="registry.redhat.io/cluster-observability-operator/alertmanager-rhel9@sha256:dc62889b883f597de91b5389cc52c84c607247d49a807693be2f688e4703dfc3" Feb 27 18:15:58 crc kubenswrapper[4830]: E0227 18:15:58.219430 4830 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:alertmanager,Image:registry.redhat.io/cluster-observability-operator/alertmanager-rhel9@sha256:dc62889b883f597de91b5389cc52c84c607247d49a807693be2f688e4703dfc3,Command:[],Args:[--config.file=/etc/alertmanager/config_out/alertmanager.env.yaml --storage.path=/alertmanager --data.retention=120h --cluster.listen-address=[$(POD_IP)]:9094 --web.listen-address=:9093 --web.route-prefix=/ --cluster.label=openstack/metric-storage --cluster.peer=alertmanager-metric-storage-0.alertmanager-operated:9094 --cluster.peer=alertmanager-metric-storage-1.alertmanager-operated:9094 --cluster.reconnect-timeout=5m --web.config.file=/etc/alertmanager/web_config/web-config.yaml],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:web,HostPort:0,ContainerPort:9093,Protocol:TCP,HostIP:,},ContainerPort{Name:mesh-tcp,HostPort:0,ContainerPort:9094,Protocol:TCP,HostIP:,},ContainerPort{Name:mesh-udp,HostPort:0,ContainerPort:9094,Protocol:UDP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{memory: {{209715200 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-volume,ReadOnly:false,MountPath:/etc/alertmanager/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-out,ReadOnly:true,MountPath:/etc/alertmanager/config_out,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tls-assets,ReadOnly:true,MountPath:/etc/alertmanager/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:alertmanager-metric-storage-db,ReadOnly:false,MountPath:/alertmanager,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:web-config,ReadOnly:true,MountPath:/etc/alertmanager/web_config/web-config.yaml,SubPath:web-config.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cluster-tls-config,ReadOnly:true,MountPath:/etc/alertmanager/cluster_tls_config/cluster-tls-config.yaml,SubPath:cluster-tls-config.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zv45l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/-/healthy,Port:{1 0 web},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:3,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/-/ready,Port:{1 0 
web},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:3,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod alertmanager-metric-storage-0_openstack(8608d556-6b34-4ab2-b676-007c65e0d359): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/cluster-observability-operator/alertmanager-rhel9@sha256=b23d4d4796437a2d93a9ad40b24c1130bcf4315029983aa275426d01d2955388/signature-4: status 500 (Internal Server Error)" logger="UnhandledError" Feb 27 18:15:58 crc kubenswrapper[4830]: E0227 18:15:58.220717 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"alertmanager\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/cluster-observability-operator/alertmanager-rhel9@sha256=b23d4d4796437a2d93a9ad40b24c1130bcf4315029983aa275426d01d2955388/signature-4: status 500 (Internal Server Error)\"" pod="openstack/alertmanager-metric-storage-0" podUID="8608d556-6b34-4ab2-b676-007c65e0d359" Feb 27 18:15:58 crc kubenswrapper[4830]: E0227 18:15:58.765552 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-lqb2w" podUID="0596772a-54ae-4d9e-9db4-5d7138bae51e" Feb 27 18:15:58 crc kubenswrapper[4830]: E0227 18:15:58.771155 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536934-f4df8" podUID="3312ebad-9fb6-4efb-92a3-92c49763672e" Feb 27 18:15:59 crc kubenswrapper[4830]: I0227 18:15:59.048989 4830 scope.go:117] "RemoveContainer" containerID="3daacc021bd16d7fbbf140a8e9591e43de306c4c9f70b304c42213e6040e61f8" Feb 27 18:15:59 crc kubenswrapper[4830]: I0227 18:15:59.074827 4830 scope.go:117] "RemoveContainer" containerID="97e7bd3f804dde277a2e36e53ab3a6dab5844013cd0137d6d65aec6747014104" Feb 27 18:15:59 crc kubenswrapper[4830]: I0227 18:15:59.571202 4830 generic.go:334] "Generic (PLEG): container finished" podID="451836eb-a90a-4644-ba0f-d03cd3cac130" containerID="c7d7c4a7b2e330dde34e92924229cc6203a75b1a7587dc83d5d5e1de1c6aea4a" exitCode=0 Feb 27 18:15:59 crc kubenswrapper[4830]: I0227 18:15:59.571261 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536930-8nt27" event={"ID":"451836eb-a90a-4644-ba0f-d03cd3cac130","Type":"ContainerDied","Data":"c7d7c4a7b2e330dde34e92924229cc6203a75b1a7587dc83d5d5e1de1c6aea4a"} Feb 27 18:16:00 crc kubenswrapper[4830]: I0227 18:16:00.171475 4830 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536936-qpf8b"] Feb 27 18:16:00 crc kubenswrapper[4830]: E0227 18:16:00.172658 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d77a6342-1dfe-4c36-ad60-e00893ecc3a6" containerName="collect-profiles" Feb 27 18:16:00 crc kubenswrapper[4830]: I0227 18:16:00.172688 4830 
state_mem.go:107] "Deleted CPUSet assignment" podUID="d77a6342-1dfe-4c36-ad60-e00893ecc3a6" containerName="collect-profiles" Feb 27 18:16:00 crc kubenswrapper[4830]: E0227 18:16:00.172729 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bae5dd66-7f10-45ef-b8ae-82e2d6dfa0aa" containerName="gather" Feb 27 18:16:00 crc kubenswrapper[4830]: I0227 18:16:00.172744 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="bae5dd66-7f10-45ef-b8ae-82e2d6dfa0aa" containerName="gather" Feb 27 18:16:00 crc kubenswrapper[4830]: E0227 18:16:00.172767 4830 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bae5dd66-7f10-45ef-b8ae-82e2d6dfa0aa" containerName="copy" Feb 27 18:16:00 crc kubenswrapper[4830]: I0227 18:16:00.172779 4830 state_mem.go:107] "Deleted CPUSet assignment" podUID="bae5dd66-7f10-45ef-b8ae-82e2d6dfa0aa" containerName="copy" Feb 27 18:16:00 crc kubenswrapper[4830]: I0227 18:16:00.173203 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="bae5dd66-7f10-45ef-b8ae-82e2d6dfa0aa" containerName="copy" Feb 27 18:16:00 crc kubenswrapper[4830]: I0227 18:16:00.173253 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="d77a6342-1dfe-4c36-ad60-e00893ecc3a6" containerName="collect-profiles" Feb 27 18:16:00 crc kubenswrapper[4830]: I0227 18:16:00.173273 4830 memory_manager.go:354] "RemoveStaleState removing state" podUID="bae5dd66-7f10-45ef-b8ae-82e2d6dfa0aa" containerName="gather" Feb 27 18:16:00 crc kubenswrapper[4830]: I0227 18:16:00.174562 4830 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536936-qpf8b" Feb 27 18:16:00 crc kubenswrapper[4830]: I0227 18:16:00.186656 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536936-qpf8b"] Feb 27 18:16:00 crc kubenswrapper[4830]: I0227 18:16:00.296084 4830 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcj9d\" (UniqueName: \"kubernetes.io/projected/db772b75-68da-488b-b9e5-daf61e0e4319-kube-api-access-jcj9d\") pod \"auto-csr-approver-29536936-qpf8b\" (UID: \"db772b75-68da-488b-b9e5-daf61e0e4319\") " pod="openshift-infra/auto-csr-approver-29536936-qpf8b" Feb 27 18:16:00 crc kubenswrapper[4830]: I0227 18:16:00.399065 4830 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jcj9d\" (UniqueName: \"kubernetes.io/projected/db772b75-68da-488b-b9e5-daf61e0e4319-kube-api-access-jcj9d\") pod \"auto-csr-approver-29536936-qpf8b\" (UID: \"db772b75-68da-488b-b9e5-daf61e0e4319\") " pod="openshift-infra/auto-csr-approver-29536936-qpf8b" Feb 27 18:16:00 crc kubenswrapper[4830]: I0227 18:16:00.424349 4830 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jcj9d\" (UniqueName: \"kubernetes.io/projected/db772b75-68da-488b-b9e5-daf61e0e4319-kube-api-access-jcj9d\") pod \"auto-csr-approver-29536936-qpf8b\" (UID: \"db772b75-68da-488b-b9e5-daf61e0e4319\") " pod="openshift-infra/auto-csr-approver-29536936-qpf8b" Feb 27 18:16:00 crc kubenswrapper[4830]: I0227 18:16:00.532886 4830 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536936-qpf8b" Feb 27 18:16:01 crc kubenswrapper[4830]: I0227 18:16:01.067540 4830 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536930-8nt27" Feb 27 18:16:01 crc kubenswrapper[4830]: I0227 18:16:01.216474 4830 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536936-qpf8b"] Feb 27 18:16:01 crc kubenswrapper[4830]: W0227 18:16:01.218078 4830 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddb772b75_68da_488b_b9e5_daf61e0e4319.slice/crio-7437b2d7fa63e7bdaf523acc600eb7c581530083902eb3809255a409b2c58b7d WatchSource:0}: Error finding container 7437b2d7fa63e7bdaf523acc600eb7c581530083902eb3809255a409b2c58b7d: Status 404 returned error can't find the container with id 7437b2d7fa63e7bdaf523acc600eb7c581530083902eb3809255a409b2c58b7d Feb 27 18:16:01 crc kubenswrapper[4830]: I0227 18:16:01.220453 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l2dt6\" (UniqueName: \"kubernetes.io/projected/451836eb-a90a-4644-ba0f-d03cd3cac130-kube-api-access-l2dt6\") pod \"451836eb-a90a-4644-ba0f-d03cd3cac130\" (UID: \"451836eb-a90a-4644-ba0f-d03cd3cac130\") " Feb 27 18:16:01 crc kubenswrapper[4830]: I0227 18:16:01.228406 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/451836eb-a90a-4644-ba0f-d03cd3cac130-kube-api-access-l2dt6" (OuterVolumeSpecName: "kube-api-access-l2dt6") pod "451836eb-a90a-4644-ba0f-d03cd3cac130" (UID: "451836eb-a90a-4644-ba0f-d03cd3cac130"). InnerVolumeSpecName "kube-api-access-l2dt6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:16:01 crc kubenswrapper[4830]: I0227 18:16:01.324092 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l2dt6\" (UniqueName: \"kubernetes.io/projected/451836eb-a90a-4644-ba0f-d03cd3cac130-kube-api-access-l2dt6\") on node \"crc\" DevicePath \"\"" Feb 27 18:16:01 crc kubenswrapper[4830]: I0227 18:16:01.603392 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536930-8nt27" event={"ID":"451836eb-a90a-4644-ba0f-d03cd3cac130","Type":"ContainerDied","Data":"37960e0e7a8e905272c89cfef0eeb990435ee656123d8616b44528294b220c4e"} Feb 27 18:16:01 crc kubenswrapper[4830]: I0227 18:16:01.603460 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="37960e0e7a8e905272c89cfef0eeb990435ee656123d8616b44528294b220c4e" Feb 27 18:16:01 crc kubenswrapper[4830]: I0227 18:16:01.603505 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536930-8nt27" Feb 27 18:16:01 crc kubenswrapper[4830]: I0227 18:16:01.605229 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536936-qpf8b" event={"ID":"db772b75-68da-488b-b9e5-daf61e0e4319","Type":"ContainerStarted","Data":"7437b2d7fa63e7bdaf523acc600eb7c581530083902eb3809255a409b2c58b7d"} Feb 27 18:16:02 crc kubenswrapper[4830]: I0227 18:16:02.164491 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536924-p7752"] Feb 27 18:16:02 crc kubenswrapper[4830]: I0227 18:16:02.172402 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536924-p7752"] Feb 27 18:16:02 crc kubenswrapper[4830]: I0227 18:16:02.619074 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536936-qpf8b" 
event={"ID":"db772b75-68da-488b-b9e5-daf61e0e4319","Type":"ContainerStarted","Data":"12e0c5f54c1df5edacb7285912af7ae8837f5e27dd2f52734c2b6c2e8324c2db"} Feb 27 18:16:02 crc kubenswrapper[4830]: I0227 18:16:02.641269 4830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29536936-qpf8b" podStartSLOduration=1.6273388770000001 podStartE2EDuration="2.641241155s" podCreationTimestamp="2026-02-27 18:16:00 +0000 UTC" firstStartedPulling="2026-02-27 18:16:01.22206345 +0000 UTC m=+7757.311335953" lastFinishedPulling="2026-02-27 18:16:02.235965768 +0000 UTC m=+7758.325238231" observedRunningTime="2026-02-27 18:16:02.637183528 +0000 UTC m=+7758.726455991" watchObservedRunningTime="2026-02-27 18:16:02.641241155 +0000 UTC m=+7758.730513658" Feb 27 18:16:02 crc kubenswrapper[4830]: E0227 18:16:02.765883 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536926-4ghnd" podUID="43ed5a43-8e62-46bf-8151-7179e13730dd" Feb 27 18:16:02 crc kubenswrapper[4830]: I0227 18:16:02.786185 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ead8534-2b58-480e-9367-3aa26d44a876" path="/var/lib/kubelet/pods/0ead8534-2b58-480e-9367-3aa26d44a876/volumes" Feb 27 18:16:03 crc kubenswrapper[4830]: I0227 18:16:03.159810 4830 patch_prober.go:28] interesting pod/machine-config-daemon-2tv5v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 18:16:03 crc kubenswrapper[4830]: I0227 18:16:03.160241 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 18:16:03 crc kubenswrapper[4830]: I0227 18:16:03.637122 4830 generic.go:334] "Generic (PLEG): container finished" podID="db772b75-68da-488b-b9e5-daf61e0e4319" containerID="12e0c5f54c1df5edacb7285912af7ae8837f5e27dd2f52734c2b6c2e8324c2db" exitCode=0 Feb 27 18:16:03 crc kubenswrapper[4830]: I0227 18:16:03.637196 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536936-qpf8b" event={"ID":"db772b75-68da-488b-b9e5-daf61e0e4319","Type":"ContainerDied","Data":"12e0c5f54c1df5edacb7285912af7ae8837f5e27dd2f52734c2b6c2e8324c2db"} Feb 27 18:16:05 crc kubenswrapper[4830]: I0227 18:16:05.140064 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536936-qpf8b" Feb 27 18:16:05 crc kubenswrapper[4830]: I0227 18:16:05.230868 4830 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jcj9d\" (UniqueName: \"kubernetes.io/projected/db772b75-68da-488b-b9e5-daf61e0e4319-kube-api-access-jcj9d\") pod \"db772b75-68da-488b-b9e5-daf61e0e4319\" (UID: \"db772b75-68da-488b-b9e5-daf61e0e4319\") " Feb 27 18:16:05 crc kubenswrapper[4830]: I0227 18:16:05.243271 4830 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db772b75-68da-488b-b9e5-daf61e0e4319-kube-api-access-jcj9d" (OuterVolumeSpecName: "kube-api-access-jcj9d") pod "db772b75-68da-488b-b9e5-daf61e0e4319" (UID: "db772b75-68da-488b-b9e5-daf61e0e4319"). InnerVolumeSpecName "kube-api-access-jcj9d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:16:05 crc kubenswrapper[4830]: I0227 18:16:05.334391 4830 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jcj9d\" (UniqueName: \"kubernetes.io/projected/db772b75-68da-488b-b9e5-daf61e0e4319-kube-api-access-jcj9d\") on node \"crc\" DevicePath \"\"" Feb 27 18:16:05 crc kubenswrapper[4830]: I0227 18:16:05.667478 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536936-qpf8b" event={"ID":"db772b75-68da-488b-b9e5-daf61e0e4319","Type":"ContainerDied","Data":"7437b2d7fa63e7bdaf523acc600eb7c581530083902eb3809255a409b2c58b7d"} Feb 27 18:16:05 crc kubenswrapper[4830]: I0227 18:16:05.667538 4830 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7437b2d7fa63e7bdaf523acc600eb7c581530083902eb3809255a409b2c58b7d" Feb 27 18:16:05 crc kubenswrapper[4830]: I0227 18:16:05.667567 4830 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536936-qpf8b" Feb 27 18:16:05 crc kubenswrapper[4830]: I0227 18:16:05.744091 4830 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536928-wckhp"] Feb 27 18:16:05 crc kubenswrapper[4830]: I0227 18:16:05.755429 4830 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536928-wckhp"] Feb 27 18:16:06 crc kubenswrapper[4830]: I0227 18:16:06.780543 4830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="35a44dce-1ea1-4005-84c9-f14986ee706b" path="/var/lib/kubelet/pods/35a44dce-1ea1-4005-84c9-f14986ee706b/volumes" Feb 27 18:16:07 crc kubenswrapper[4830]: E0227 18:16:07.766723 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" 
pod="openshift-marketplace/community-operators-5pkvb" podUID="4d0e4d8e-d4ab-47f9-8015-5ace0337272f" Feb 27 18:16:08 crc kubenswrapper[4830]: E0227 18:16:08.766454 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"alertmanager\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/alertmanager-rhel9@sha256:dc62889b883f597de91b5389cc52c84c607247d49a807693be2f688e4703dfc3\\\"\"" pod="openstack/alertmanager-metric-storage-0" podUID="8608d556-6b34-4ab2-b676-007c65e0d359" Feb 27 18:16:09 crc kubenswrapper[4830]: E0227 18:16:09.764445 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536934-f4df8" podUID="3312ebad-9fb6-4efb-92a3-92c49763672e" Feb 27 18:16:13 crc kubenswrapper[4830]: E0227 18:16:13.767010 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-lqb2w" podUID="0596772a-54ae-4d9e-9db4-5d7138bae51e" Feb 27 18:16:14 crc kubenswrapper[4830]: E0227 18:16:14.777837 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536926-4ghnd" podUID="43ed5a43-8e62-46bf-8151-7179e13730dd" Feb 27 18:16:20 crc kubenswrapper[4830]: E0227 18:16:20.765764 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" 
pod="openshift-marketplace/community-operators-5pkvb" podUID="4d0e4d8e-d4ab-47f9-8015-5ace0337272f" Feb 27 18:16:20 crc kubenswrapper[4830]: E0227 18:16:20.765813 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"alertmanager\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/alertmanager-rhel9@sha256:dc62889b883f597de91b5389cc52c84c607247d49a807693be2f688e4703dfc3\\\"\"" pod="openstack/alertmanager-metric-storage-0" podUID="8608d556-6b34-4ab2-b676-007c65e0d359" Feb 27 18:16:21 crc kubenswrapper[4830]: E0227 18:16:21.765424 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536934-f4df8" podUID="3312ebad-9fb6-4efb-92a3-92c49763672e" Feb 27 18:16:24 crc kubenswrapper[4830]: E0227 18:16:24.521252 4830 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/cluster-observability-operator/thanos-rhel9@sha256=b77218de6528d52542abddf9fb3faececdf1a3e47987ac740ab00468d461a60b/signature-4: status 500 (Internal Server Error)" image="registry.redhat.io/cluster-observability-operator/thanos-rhel9@sha256:a223bab813b82d698992490bbb60927f6288a83ba52d539836c250e1471f6d34" Feb 27 18:16:24 crc kubenswrapper[4830]: E0227 18:16:24.521753 4830 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:thanos-sidecar,Image:registry.redhat.io/cluster-observability-operator/thanos-rhel9@sha256:a223bab813b82d698992490bbb60927f6288a83ba52d539836c250e1471f6d34,Command:[],Args:[sidecar --prometheus.url=http://localhost:9090/ --grpc-address=:10901 --http-address=:10902 --log.level=info 
--prometheus.http-client-file=/etc/thanos/config/prometheus.http-client-file.yaml],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http,HostPort:0,ContainerPort:10902,Protocol:TCP,HostIP:,},ContainerPort{Name:grpc,HostPort:0,ContainerPort:10901,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:thanos-prometheus-http-client-file,ReadOnly:false,MountPath:/etc/thanos/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w6l8v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod prometheus-metric-storage-0_openstack(75bcbe49-556d-4af7-9506-514c14ec8d9e): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/cluster-observability-operator/thanos-rhel9@sha256=b77218de6528d52542abddf9fb3faececdf1a3e47987ac740ab00468d461a60b/signature-4: status 500 (Internal Server Error)" logger="UnhandledError" Feb 27 18:16:24 crc kubenswrapper[4830]: E0227 18:16:24.523045 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"thanos-sidecar\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/cluster-observability-operator/thanos-rhel9@sha256=b77218de6528d52542abddf9fb3faececdf1a3e47987ac740ab00468d461a60b/signature-4: status 500 (Internal Server Error)\"" pod="openstack/prometheus-metric-storage-0" podUID="75bcbe49-556d-4af7-9506-514c14ec8d9e" Feb 27 18:16:26 crc kubenswrapper[4830]: E0227 18:16:26.766546 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-lqb2w" podUID="0596772a-54ae-4d9e-9db4-5d7138bae51e" Feb 27 18:16:29 crc kubenswrapper[4830]: E0227 18:16:29.772437 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536926-4ghnd" podUID="43ed5a43-8e62-46bf-8151-7179e13730dd" Feb 27 18:16:31 crc kubenswrapper[4830]: E0227 18:16:31.764849 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"alertmanager\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/alertmanager-rhel9@sha256:dc62889b883f597de91b5389cc52c84c607247d49a807693be2f688e4703dfc3\\\"\"" pod="openstack/alertmanager-metric-storage-0" podUID="8608d556-6b34-4ab2-b676-007c65e0d359" Feb 27 18:16:33 crc kubenswrapper[4830]: I0227 18:16:33.160016 4830 patch_prober.go:28] interesting pod/machine-config-daemon-2tv5v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
start-of-body= Feb 27 18:16:33 crc kubenswrapper[4830]: I0227 18:16:33.160453 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 18:16:33 crc kubenswrapper[4830]: E0227 18:16:33.764416 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536934-f4df8" podUID="3312ebad-9fb6-4efb-92a3-92c49763672e" Feb 27 18:16:34 crc kubenswrapper[4830]: E0227 18:16:34.772593 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-5pkvb" podUID="4d0e4d8e-d4ab-47f9-8015-5ace0337272f" Feb 27 18:16:38 crc kubenswrapper[4830]: E0227 18:16:38.768373 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"thanos-sidecar\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/thanos-rhel9@sha256:a223bab813b82d698992490bbb60927f6288a83ba52d539836c250e1471f6d34\\\"\"" pod="openstack/prometheus-metric-storage-0" podUID="75bcbe49-556d-4af7-9506-514c14ec8d9e" Feb 27 18:16:39 crc kubenswrapper[4830]: E0227 18:16:39.765752 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-lqb2w" 
podUID="0596772a-54ae-4d9e-9db4-5d7138bae51e" Feb 27 18:16:40 crc kubenswrapper[4830]: E0227 18:16:40.766135 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536926-4ghnd" podUID="43ed5a43-8e62-46bf-8151-7179e13730dd" Feb 27 18:16:44 crc kubenswrapper[4830]: E0227 18:16:44.774198 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"alertmanager\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/alertmanager-rhel9@sha256:dc62889b883f597de91b5389cc52c84c607247d49a807693be2f688e4703dfc3\\\"\"" pod="openstack/alertmanager-metric-storage-0" podUID="8608d556-6b34-4ab2-b676-007c65e0d359" Feb 27 18:16:46 crc kubenswrapper[4830]: E0227 18:16:46.766120 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-5pkvb" podUID="4d0e4d8e-d4ab-47f9-8015-5ace0337272f" Feb 27 18:16:47 crc kubenswrapper[4830]: E0227 18:16:47.765152 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536934-f4df8" podUID="3312ebad-9fb6-4efb-92a3-92c49763672e" Feb 27 18:16:52 crc kubenswrapper[4830]: E0227 18:16:52.769039 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"thanos-sidecar\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/cluster-observability-operator/thanos-rhel9@sha256:a223bab813b82d698992490bbb60927f6288a83ba52d539836c250e1471f6d34\\\"\"" pod="openstack/prometheus-metric-storage-0" podUID="75bcbe49-556d-4af7-9506-514c14ec8d9e" Feb 27 18:16:54 crc kubenswrapper[4830]: E0227 18:16:54.778438 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-lqb2w" podUID="0596772a-54ae-4d9e-9db4-5d7138bae51e" Feb 27 18:16:55 crc kubenswrapper[4830]: E0227 18:16:55.765711 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536926-4ghnd" podUID="43ed5a43-8e62-46bf-8151-7179e13730dd" Feb 27 18:16:56 crc kubenswrapper[4830]: E0227 18:16:56.767153 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"alertmanager\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/alertmanager-rhel9@sha256:dc62889b883f597de91b5389cc52c84c607247d49a807693be2f688e4703dfc3\\\"\"" pod="openstack/alertmanager-metric-storage-0" podUID="8608d556-6b34-4ab2-b676-007c65e0d359" Feb 27 18:16:57 crc kubenswrapper[4830]: E0227 18:16:57.766242 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-5pkvb" podUID="4d0e4d8e-d4ab-47f9-8015-5ace0337272f" Feb 27 18:16:59 crc kubenswrapper[4830]: I0227 18:16:59.232861 4830 scope.go:117] "RemoveContainer" 
containerID="a4db4e5f1764770929cc4adb2cec729b768e86e6d1828156f2bb0782d66b1912" Feb 27 18:16:59 crc kubenswrapper[4830]: I0227 18:16:59.317435 4830 scope.go:117] "RemoveContainer" containerID="73d497eb649e304ed03d6fbd993e8d97dd6c23c18aa5eb0096b8c72f39c60a21" Feb 27 18:17:02 crc kubenswrapper[4830]: E0227 18:17:02.675454 4830 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 27 18:17:02 crc kubenswrapper[4830]: E0227 18:17:02.676373 4830 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 18:17:02 crc kubenswrapper[4830]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 27 18:17:02 crc kubenswrapper[4830]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-58df7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
auto-csr-approver-29536934-f4df8_openshift-infra(3312ebad-9fb6-4efb-92a3-92c49763672e): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 27 18:17:02 crc kubenswrapper[4830]: > logger="UnhandledError" Feb 27 18:17:02 crc kubenswrapper[4830]: E0227 18:17:02.678132 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29536934-f4df8" podUID="3312ebad-9fb6-4efb-92a3-92c49763672e" Feb 27 18:17:03 crc kubenswrapper[4830]: I0227 18:17:03.160258 4830 patch_prober.go:28] interesting pod/machine-config-daemon-2tv5v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 18:17:03 crc kubenswrapper[4830]: I0227 18:17:03.160342 4830 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 18:17:03 crc kubenswrapper[4830]: I0227 18:17:03.160404 4830 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" Feb 27 18:17:03 crc kubenswrapper[4830]: I0227 
18:17:03.161373 4830 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a8b39d391d0b62f855059e989b55160b60188dcf89720fb39518165931511a21"} pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 27 18:17:03 crc kubenswrapper[4830]: I0227 18:17:03.161473 4830 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" containerName="machine-config-daemon" containerID="cri-o://a8b39d391d0b62f855059e989b55160b60188dcf89720fb39518165931511a21" gracePeriod=600 Feb 27 18:17:03 crc kubenswrapper[4830]: E0227 18:17:03.292361 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 18:17:03 crc kubenswrapper[4830]: I0227 18:17:03.433828 4830 generic.go:334] "Generic (PLEG): container finished" podID="00d6b7ce-4757-4275-8345-60c1b546ce25" containerID="a8b39d391d0b62f855059e989b55160b60188dcf89720fb39518165931511a21" exitCode=0 Feb 27 18:17:03 crc kubenswrapper[4830]: I0227 18:17:03.433873 4830 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" event={"ID":"00d6b7ce-4757-4275-8345-60c1b546ce25","Type":"ContainerDied","Data":"a8b39d391d0b62f855059e989b55160b60188dcf89720fb39518165931511a21"} Feb 27 18:17:03 crc kubenswrapper[4830]: I0227 18:17:03.433912 4830 scope.go:117] "RemoveContainer" 
containerID="469f495eca7f2f6702dc34d5195646e1c220a84d4e0dd0fdedb43c726d6afe28" Feb 27 18:17:03 crc kubenswrapper[4830]: I0227 18:17:03.436199 4830 scope.go:117] "RemoveContainer" containerID="a8b39d391d0b62f855059e989b55160b60188dcf89720fb39518165931511a21" Feb 27 18:17:03 crc kubenswrapper[4830]: E0227 18:17:03.437357 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 18:17:07 crc kubenswrapper[4830]: E0227 18:17:07.766623 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"thanos-sidecar\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/thanos-rhel9@sha256:a223bab813b82d698992490bbb60927f6288a83ba52d539836c250e1471f6d34\\\"\"" pod="openstack/prometheus-metric-storage-0" podUID="75bcbe49-556d-4af7-9506-514c14ec8d9e" Feb 27 18:17:07 crc kubenswrapper[4830]: E0227 18:17:07.766652 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-lqb2w" podUID="0596772a-54ae-4d9e-9db4-5d7138bae51e" Feb 27 18:17:08 crc kubenswrapper[4830]: E0227 18:17:08.766351 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-5pkvb" podUID="4d0e4d8e-d4ab-47f9-8015-5ace0337272f" Feb 
27 18:17:08 crc kubenswrapper[4830]: E0227 18:17:08.768187 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"alertmanager\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/alertmanager-rhel9@sha256:dc62889b883f597de91b5389cc52c84c607247d49a807693be2f688e4703dfc3\\\"\"" pod="openstack/alertmanager-metric-storage-0" podUID="8608d556-6b34-4ab2-b676-007c65e0d359" Feb 27 18:17:08 crc kubenswrapper[4830]: E0227 18:17:08.985625 4830 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 27 18:17:08 crc kubenswrapper[4830]: E0227 18:17:08.985895 4830 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 18:17:08 crc kubenswrapper[4830]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 27 18:17:08 crc kubenswrapper[4830]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qj7ht,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29536926-4ghnd_openshift-infra(43ed5a43-8e62-46bf-8151-7179e13730dd): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 27 18:17:08 crc kubenswrapper[4830]: > logger="UnhandledError" Feb 27 18:17:08 crc kubenswrapper[4830]: E0227 18:17:08.987220 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29536926-4ghnd" podUID="43ed5a43-8e62-46bf-8151-7179e13730dd" Feb 27 18:17:15 crc kubenswrapper[4830]: E0227 18:17:15.766823 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" 
pod="openshift-infra/auto-csr-approver-29536934-f4df8" podUID="3312ebad-9fb6-4efb-92a3-92c49763672e" Feb 27 18:17:17 crc kubenswrapper[4830]: I0227 18:17:17.763116 4830 scope.go:117] "RemoveContainer" containerID="a8b39d391d0b62f855059e989b55160b60188dcf89720fb39518165931511a21" Feb 27 18:17:17 crc kubenswrapper[4830]: E0227 18:17:17.764014 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 18:17:18 crc kubenswrapper[4830]: E0227 18:17:18.764893 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"thanos-sidecar\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/thanos-rhel9@sha256:a223bab813b82d698992490bbb60927f6288a83ba52d539836c250e1471f6d34\\\"\"" pod="openstack/prometheus-metric-storage-0" podUID="75bcbe49-556d-4af7-9506-514c14ec8d9e" Feb 27 18:17:19 crc kubenswrapper[4830]: E0227 18:17:19.765542 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"alertmanager\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/alertmanager-rhel9@sha256:dc62889b883f597de91b5389cc52c84c607247d49a807693be2f688e4703dfc3\\\"\"" pod="openstack/alertmanager-metric-storage-0" podUID="8608d556-6b34-4ab2-b676-007c65e0d359" Feb 27 18:17:20 crc kubenswrapper[4830]: E0227 18:17:20.764648 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" 
pod="openshift-marketplace/community-operators-5pkvb" podUID="4d0e4d8e-d4ab-47f9-8015-5ace0337272f" Feb 27 18:17:21 crc kubenswrapper[4830]: E0227 18:17:21.766221 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536926-4ghnd" podUID="43ed5a43-8e62-46bf-8151-7179e13730dd" Feb 27 18:17:21 crc kubenswrapper[4830]: E0227 18:17:21.766247 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-lqb2w" podUID="0596772a-54ae-4d9e-9db4-5d7138bae51e" Feb 27 18:17:29 crc kubenswrapper[4830]: I0227 18:17:29.762404 4830 scope.go:117] "RemoveContainer" containerID="a8b39d391d0b62f855059e989b55160b60188dcf89720fb39518165931511a21" Feb 27 18:17:29 crc kubenswrapper[4830]: E0227 18:17:29.763343 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2tv5v_openshift-machine-config-operator(00d6b7ce-4757-4275-8345-60c1b546ce25)\"" pod="openshift-machine-config-operator/machine-config-daemon-2tv5v" podUID="00d6b7ce-4757-4275-8345-60c1b546ce25" Feb 27 18:17:29 crc kubenswrapper[4830]: E0227 18:17:29.765615 4830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536934-f4df8" podUID="3312ebad-9fb6-4efb-92a3-92c49763672e"